Create prototypes of the initial experience early and often. Conduct walkthroughs in which users compare several prototype alternatives, providing early feedback on the designs. Apply this feedback during design iterations to narrow the alternatives.
Conduct formal and informal user tests with both prototypes and development-level builds of the initial experience. Fix all problems identified by users. An iterative design-and-test process makes it unlikely that severe problems will be found in later iterations. If a problem is found too late in the schedule to be fixed, make sure it is well documented and fix it in the next version.
Avoid making design changes late in the cycle; such changes almost always cause problems. Perform benchmark comparisons against competitors, identifying differences in approach and the advantages and disadvantages of each. Use this data in the design of the next version of the product.
Evaluate the initial experience in realistic customer environments using tools and special equipment the customer may use. For example, if the customer will use a pallet jack to move a piece of equipment, make sure the equipment and its packaging work with the jack.
Examine competitive products to determine how they address the same initial experience tasks and environments. Establish baseline measurements comparing your product to competitors' products, and repeat the measurements periodically to assess improvements and changes in approach.
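To make the baseline comparison concrete, the sketch below computes mean task time and success rate for your product against a competitor. All product names, tasks, and numbers are invented for illustration; in practice the data would come from observed sessions.

```python
# Hypothetical baseline measurements for the "unbox and set up" task.
# Times are in minutes; success is recorded as 1 (completed) or 0 (gave up).

def mean(values):
    """Arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

def success_rate(outcomes):
    """Fraction of task attempts that succeeded."""
    return sum(outcomes) / len(outcomes)

baseline = {
    "our_product":  {"times": [14, 11, 16, 12], "success": [1, 0, 1, 1]},
    "competitor_a": {"times": [9, 10, 8, 11],   "success": [1, 1, 1, 1]},
}

for product, data in baseline.items():
    print(f"{product}: mean time {mean(data['times']):.1f} min, "
          f"success rate {success_rate(data['success']):.0%}")
```

Rerunning the same script on each round of measurement gives a simple, repeatable way to track whether the gap to competitors is narrowing over successive versions.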
The following methods are recommended:
- Design Walkthroughs. Introduce new design concepts to users to elicit preferences and suggestions for improvement.
- Heuristic Evaluations. Review low-fidelity and high-fidelity initial experience prototypes. Provide feedback to, and make requests of, the development team to correct observed problems. Walk through the sequence and elements of competitors' initial experiences; videotape user behavior, record observations, count steps, and create comparative checklists.
- Usability Testing. Recruit potential users to try the initial experience and provide feedback on high-fidelity prototypes. Videotape user behavior and collect performance data, such as task times, success rates, error counts, and step counts. Also collect subjective data such as satisfaction ratings, rankings, and general comments.
- Benchmark Tests. Compare the initial experiences of two or more top competitors. Recruit users to try both your initial experience and the competitors'. Videotape user behavior and collect performance data, such as task times, success rates, error counts, and step counts. Also collect subjective data such as satisfaction ratings, preference rankings, and general comments.
- Checklist Evaluation. Assess the initial experience, and have the target audience assess it, against checklists used by reviewers, consultants, retailers, and customers such as major enterprise purchasers. Note unique positive and negative variations.
- External Heuristic Evaluations. Use independent consultants to objectively evaluate the initial experience design and implementation, and evaluate products in actual customer locations as well as in lab settings.
- Retailer/Seller Feedback. Collect feedback from retailers and other sales channels regarding reasons for customer returns and questions asked at the point of sale and during customer installations. Go to retail sales locations (as a customer) and ask questions about buying and setting up the product of interest. Users' expectations about the initial experience are influenced during their purchasing experience.
- Support Feedback. Talk with product support personnel to find out what kinds of problems users typically have. Arrange sessions in which developers and support personnel exchange ideas for solutions. Have support personnel perform the initial experience tasks themselves, and have developers sit with and assist product support periodically.
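The performance and subjective measures named in the methods above (task times, success rates, error counts, step counts, satisfaction ratings) can be aggregated per test session with a short script. The participant records below are invented for illustration:

```python
# Sketch of summarizing one usability or benchmark session.
# Each record is one participant's run of the initial experience task.
from statistics import mean, median

participants = [
    {"task_time_s": 420, "errors": 2, "steps": 14, "completed": True,  "satisfaction": 4},
    {"task_time_s": 610, "errors": 5, "steps": 19, "completed": False, "satisfaction": 2},
    {"task_time_s": 380, "errors": 1, "steps": 12, "completed": True,  "satisfaction": 5},
]

summary = {
    # Median is less sensitive than the mean to one very slow participant.
    "median_task_time_s": median(p["task_time_s"] for p in participants),
    "mean_errors": mean(p["errors"] for p in participants),
    "success_rate": sum(p["completed"] for p in participants) / len(participants),
    "mean_satisfaction": mean(p["satisfaction"] for p in participants),
}
print(summary)
```

Keeping the same record shape across formal tests, benchmark tests, and periodic re-measurements makes results directly comparable from one iteration, and one competitor, to the next.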