Monthly Archives: July 2014

Upgrading the Dog-House…NOT your typical tests


In this day and age, the terms testing and education seem to go hand in hand, in a not-so-nice way. Students in today’s public schools have never gone a year without some form of standardized testing, whether practice or the real thing. Testing has a different connotation, however, when it comes to instructional design. In the ADDIE model, the development phase requires testing…pilot-testing.

Pilot-testing is often associated with experiments in the world of science and engineering; however, it holds great significance in instructional design. Why is it so important, you ask? Well, “Pilot testing is a chance to evaluate a project before it goes into full implementation and is a key component of the development stage” (Hodell, 2011). Pilot-testing comes with some hiccups, because you may not always have access to the ideal audience, or the target audience. Even so, it lets you address a number of questions that help improve the success of the training in the long run. Here are some questions that can be addressed by pilot-testing:

    • Does the lesson plan work?
    • Are the directions to the facilitator clear and concise?
    • Are the facilitator’s materials appropriate and thorough enough?
    • Are the learner’s materials appropriate and thorough enough?
    • Are the support materials (slides, overheads, handouts, and the like) what you expected?
    • Does the timing of each of the segments match your estimates?
    • Are the technology components (audio, video, computers, and so forth) appropriate?
    • Do the instructional methods work as planned?
    • What does not work the way you thought it should?
    • What needs to be changed?

(Hodell, 2011).

What other types of questions might you ask and/or address through the use of pilot-testing?

Feedback, both positive and negative, can help the instructional designer evaluate all aspects of the training. This post points out the importance of pilot-testing, the problems encountered along the way, and why the data collected is still considered informal. Think back to the last blog post about the difference between formal and informal evaluation. The same principle applies here: the informal data gathered from a pilot-test tells the instructional designer what works and what doesn’t. Much like a recipe, you wouldn’t publish it in a mass-produced cookbook without trying it out a few times.

Happy cooking!

 

References:

Hodell, C. (2011). ISD From the Ground Up (3rd ed.). Alexandria, VA: ASTD Press.


ADDIA…a new and improved dog house…

In recent news, the ISD model formerly known as ADDIE has filed to change its name to ADDIA. Through numerous discussions, it seems that the E, evaluation, is really found throughout the whole design process. In this blog, we are going to take a closer look at what this change means.

[Image: my ISD model]

To recap, ADDIE is an acronym for an instructional design model made up of five phases: Analysis, Design, Development, Implementation, and Evaluation. It was discovered that the evaluation phase really happens throughout the whole process. “Evaluation doesn’t deserve to be listed last in the ADDIE model because it takes place in every element and surrounds the instructional design process. Evaluation is a constant guard at the gate of failure” (Hodell, 2011, p. 25). Evaluation is an ongoing process that happens both formally and informally.

The instructional designer starts with the analysis phase, where they collect data of all sorts, ask all kinds of questions, and determine whether or not a program is the solution to the presented problem. During this phase, the instructional designer informally evaluates the information coming in. As they progress through the model, the designer informally evaluates decisions regarding the design plan, lesson structure, goals, and objectives. During the development stage, informal evaluation is essential because the designer puts the potential program through rigorous testing. They need to informally evaluate a variety of things to make sure all the parts come together, work well, and will help the participants achieve the desired goals and objectives. As the designer rounds the final corner of the model, implementation is when the program goes live. The instructor and designer must work together in this stage to determine if the planned course is working, and therefore must informally evaluate how the program is running. Now on to the new stage of the model: Assessment. This phase is the formal evaluation of the whole program. The instructional designer must collect information to determine what worked and what didn’t, what needs to change, what needs further development, and so on. They are assessing how the program worked.

So two terms have been thrown around in this blog: informal and formal. What do they mean when it comes to the ISD model? Well, informal connotes a sense of casualness. The instructional designer casually collects information throughout the various phases to determine the successes and failures. When the ISD model arrives at the last phase, a formal investigation begins. Formal implies there is a specific procedure to follow that must meet certain standards. The information collected in this phase helps determine the successes and failures, but it also produces definitive quantitative and qualitative results that the designer can use in the future.

As you venture off into your instructional design endeavors, make sure you think about how each phase of the new ADDIA model is reflected in your design. The website below holds some great tips on how to be successful and what qualities you must possess.

http://www.learndash.com/9-essential-instructional-designer-skills/

References:

Hodell, C. (2011). ISD From the Ground Up (3rd ed.). Alexandria, VA: ASTD Press.