
Instructional Design Research

EDUA 6377 - Instructional Design


In "How Do Instructional Designers Evaluate? A Qualitative Study of Evaluation in Practice," Williams et al. (2011) employed a qualitative research design to investigate how instructional designers use evaluation in everyday design practice. The study was based on intensive interviews with seven practitioners who had varied educational backgrounds and were employed in different professions, and its findings included ten themes regarding how designers use evaluation to improve their products. The methodology consisted in part of questionnaires and interviews conducted with each practitioner on three different occasions. The research substantiated the practitioners' claim that clients will not pay for formal evaluations, but it further suggested that practitioners use evaluation in important, though less formal, ways. The study's review of the empirical literature also determined that successful or expert design is often associated with factors such as careful analysis, appropriate learning objectives, and openness to alternative ideas, but rarely with formalized evaluation activities per se (Cox & Osguthorpe, 2003; Kerr, 1983; Klimczak & Wedman, 1997; Liu, Gibby, Quiros, & Demps, 2002; Pieters & Bergman, 1995; Rowland, 1992).


The study further observed that, in practice, evaluation seems to be handled somewhat differently. Although virtually no studies in the literature directly and substantively address actual evaluation activities by designers, there is some evidence that formalized evaluation is not always central to the practice of instructional design (Williams et al., 2011). Summarizing the results, although participants reported not emphasizing formal evaluation in their work, the study found evidence that these designers regularly used and talked about informal evaluation as part of their design efforts (Theme 1), following patterns similar to those recommended for formal evaluation by evaluation theorists (e.g., Alkin, 1991; Stufflebeam, 2000; Patton, 2011) as they integrated evaluation into the ADDIE design process.


According to the conclusions reached by this study, the designers and their clients did conduct several formative evaluations that could also be considered summative in a sense, through the process of making strategic design decisions, as reported elsewhere in the article. Likewise, during the design steps, while formatively considering alternative ways to meet needs, designers decided summatively which of several instructional options to include in their designs. During the development steps, designers summatively chose certain technologies while rejecting others after formatively considering the options. Finally, while designers hoped for formative feedback during the implementation of their designs, target audiences summatively decided whether or not to implement the instruction as intended. As a concluding point, the study observed that these instructional designers did not fit easily into the traditional separate roles of formative and summative evaluation. They focused on designing and developing learning experiences in ongoing dialogue with clients and team members, all while seeking constant feedback and evidence to adjust their use of theory, technology, testing, and other important components of the learning environments they were building.

Ultimately, developmental evaluation is a critical part of the instructional design process, as it helps designers create more effective and engaging learning experiences for their target audience. The caveat is that this study found practitioner instructional designers carry it out as informal evaluation, woven into the development of the overall learning experience as defined by the client's goals and objectives and shaped by the client's feedback throughout the development process.


In conclusion, my takeaway from these experienced practitioners is that instructional design is about focusing on results and ensuring that the finished product reflects the needs of the audience and/or client. The client ultimately drives how developmental evaluation will be done, and too often budget constraints discourage a formal evaluation process. In the real world of instructional design, practitioners must be intentional but flexible, and good listeners who stay engaged and focused on the deliverables or outcomes while creating an eLearning product that satisfies the needs of the client.


References

Alkin, M. C. (1991). Evaluation theory development: II. In M. W. McLaughlin & D. C. Phillips (Eds.), Evaluation and education: At quarter century (Ninetieth Yearbook of the National Society for the Study of Education, Part II). Chicago: University of Chicago Press.


Cox, S., & Osguthorpe, R. T. (2003). How do instructional design professionals spend their time? TechTrends, 47(3), 45-47.


Kerr, S. T. (1983). Inside the black box: Making design decisions for instruction. British Journal of Educational Technology, 14, 45–58.


Klimczak, A. K., & Wedman, J. F. (1997). Instructional design project success factors: An empirical basis. Educational Technology Research and Development, 45(2), 75-83.


Liu, M., Gibby, S., Quiros, O., & Demps, E. (2002). Challenges of being an instructional designer for new media development: A view from practitioners. Journal of Educational Multimedia and Hypermedia, 11(3), 195-219.


Patton, M. Q. (2011). Developmental evaluation: Applying complexity concepts to enhance innovation and use. New York: Guilford Press.


Pieters, J. M., & Bergman, R. (1995). The empirical basis of designing instruction. Performance Improvement Quarterly, 8(3), 118.


Rowland, G. (1992). What do designers actually do? An initial investigation of expert practice. Performance Improvement Quarterly, 5(2), 65–86.


Stufflebeam, D. L. (2000). The CIPP model for evaluation. In D. L. Stufflebeam, G. F. Madaus, & T. Kellaghan (Eds.), Evaluation models: Viewpoints on educational and human services evaluation (2nd ed., pp. 274-317).


Williams, D. D., South, J. B., Yanchar, S. C., Wilson, B. G., & Allen, S. (2011). How do instructional designers evaluate? A qualitative study of evaluation in practice. Educational Technology Research and Development, 59(6), 885-907.


