But what... is it good for? : An investigation into the process of evaluating potentially creative and innovative products

Access & Terms of Use
open access
Copyright: Seah, Yuan
Abstract
Research and famous near-miss innovations suggest that individuals are generally poor self and peer evaluators of new products. Industry and academia acknowledge the importance of accurate judgements of creative and innovative products, yet within the creativity, brainstorming, new product development, and innovation literatures, the issue of accurate product evaluation remains under-examined. Rather than focusing on the evaluation process, research in these areas has largely focused on the generative aspects of new ideas and products, leading Mueller, Melwani, and Goncalo (2012, p. 17) to argue that the field of creativity may need to shift its current focus from identifying how to generate more creative ideas to identifying how to help innovative institutions recognize and accept creativity. This neglect is further compounded by the reliance of much existing creativity research on divergent thinking tests rather than real-world creative products, which limits the extent to which such findings and conclusions can inform our understanding of real-world new product evaluation.

As part of this drive towards the evaluative aspects of creativity and innovation, our research examined the new product evaluation process. We do so by first introducing a process model of creative and innovative product evaluation that integrates our extensive review of the literature on creativity, innovation, evaluation, and judgement accuracy. Using this model as a guiding framework, we conducted a series of four empirical studies involving new real-world products. We examined the effects of 1) different evaluator perspectives (e.g., whether a creator or a non-task-involved evaluator is rating the product), 2) differential use of judging standards (and whether it may account for differences in evaluation outcomes), 3) different evaluation instruments and methods (and whether they lead to differences in rating accuracy), 4) creator and evaluator personal characteristics, and 5) environmental influences. Our experiments utilised a mixture of laboratory and quasi-field designs. They involved a variety of participants (e.g., target consumers, bank managers, and domain experts) who were tasked with creating new products (or evaluation targets) from different domains. We examined product evaluation from both an external, consumer perspective and an internal, organisational point of view, and utilised a variety of evaluation instruments and methods. The creator and evaluator personal characteristics studied here included the Big Five personality traits, divergent thinking abilities, adaptive flexibility, goal orientation, novelty seeking, thinking styles, psychological empowerment, and domain knowledge and expertise.

Our findings reveal that product creators and peer evaluators (who were involved in the same task but had not created the specific product being evaluated, i.e., task-involved evaluators) consistently overestimated the significance (e.g., innovativeness and effectiveness) of new products relative to target consumers and the consensus judgements of experts. At the same time, we did not find consistent evidence for creator self bias (or overestimation) relative to peer evaluator ratings.
Exploring the effectiveness and accuracy of teams, and consistent with the mixed findings in the literature, we found no significant advantage in evaluative accuracy for team self evaluation over averaged, nominal team self ratings. Increased domain knowledge was observed to lead to more accurate new product evaluations. Going beyond summary judgements, we found that for task-involved evaluators (i.e., creators and peer evaluators), differences in the use of judging standards and criteria can account, at least partially, for differences in evaluation outcomes. Finally, our investigations into the effects of creator and evaluator personal characteristics on judgement accuracy, and our study of the implications of using different evaluation instruments and procedures, yielded inconclusive results. We end by noting the paucity of existing empirical investigations into the new product evaluation process and highlight avenues for future research using our model.
Author(s)
Seah, Yuan
Supervisor(s)
Birney, Damian
Beckmann, Jens
Publication Year
2012
Resource Type
Thesis
Degree Type
PhD Doctorate