Overall, performance on Exam #2 was a bit lower than on the first exam. In this debrief I will review the items that were most commonly missed and discuss how final scores will be adjusted.
Here are the test-level statistics:
Although these numbers are not terrible (an average score of 77% is not as high as I would like, but not a failure), I believe there were some issues with a few items, as well as with the instruction, that contributed to the lower scores. Here are the items that more than half the class answered incorrectly:
These data reflect responses from only half the class, because I noticed this item had the lowest correct-response rate of any item on the exam. When an item scores that low, has a negative point biserial (people who did better on the exam overall did worse on this item), and has a discrimination index of 0 (those in the upper-scoring half did not perform better than those in the lower-scoring half)...well, something is wrong. The keyed answer is correct: this is a rule. However, I used the word "classify," which, according to the information I provided, is a common verb for outcomes that are defined concepts. I didn't want you to be confused by this, so I did not include defined concepts as a choice...but when I looked back at the information on the webpage, I noticed it was hard to see that "classify" belonged to concrete concepts, not defined concepts. In any case, using verbs alone to determine the type of learning is not ideal; you need to consider the intention of the skill. This is why I don't like Bloom's taxonomy: the categories of intentions become somewhat nebulous once you move past comprehend (understand). Anyway, I decided to change this item by replacing the word "classify" with something else for the second half of the class, and here is what happened:
The results were WORSE! Yes, "decide" can be (sort of) synonymous with "differentiate," but again, you have to consider the intention of the skill. I concede, though, that there simply was not enough instruction (especially examples) for you to perform the skill of classifying outcomes well.
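For anyone curious how the item statistics mentioned above are computed, here is a minimal sketch. It assumes the common classical-test-theory definitions: the point biserial as the correlation between a 0/1 item response and the total exam score, and the discrimination index as the difference in pass rates between the upper- and lower-scoring halves of the class (the half-split described above; other references use the top and bottom 27%). The function names and sample data are my own, not from any grading software.

```python
from math import sqrt
from statistics import mean, pstdev

def point_biserial(item_correct, total_scores):
    """Point-biserial correlation between a 0/1 item and total exam scores.

    r_pb = (M1 - M0) / s * sqrt(p * q), where M1/M0 are the mean totals of
    correct/incorrect responders, s is the population SD of all totals,
    and p/q are the proportions answering correctly/incorrectly.
    """
    correct = [t for i, t in zip(item_correct, total_scores) if i == 1]
    wrong = [t for i, t in zip(item_correct, total_scores) if i == 0]
    p = len(correct) / len(item_correct)
    return (mean(correct) - mean(wrong)) / pstdev(total_scores) * sqrt(p * (1 - p))

def discrimination_index(item_correct, total_scores):
    """Item pass rate in the upper-scoring half minus the lower-scoring half."""
    ranked = sorted(zip(total_scores, item_correct), key=lambda pair: pair[0])
    half = len(ranked) // 2
    lower = [i for _, i in ranked[:half]]
    upper = [i for _, i in ranked[-half:]]
    return mean(upper) - mean(lower)

# Hypothetical example: stronger students miss the item, weaker students get it.
item = [1, 1, 0, 0]
totals = [50, 55, 90, 95]
print(point_biserial(item, totals))       # negative: the "wrong" people got it right
print(discrimination_index(item, totals)) # -1: upper half all missed it
```

A negative point biserial paired with a zero (or negative) discrimination index is exactly the red-flag pattern described above: success on the item is unrelated, or inversely related, to overall mastery.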
Another problematic Gagne item. The numbers on this item are actually not bad, apart from the low percentage of people responding correctly. In this case, the only skill type that includes visualization strategies for helping people learn is motor skills. The research is very clear about the efficacy of visualization for motor skill acquisition and refinement. I think this is cool.
The example provided in this item was taken from the book, which reflects the results of studies done to measure spreading activation.
This item seemed clear to me when I first wrote it. I thought it provided a good example of a retention modeling process, adhering to the description: "Retention is increased by rehearsing information to be learned, coding in visual and symbolic form, and relating new material to information previously stored in memory." But I can see how the content of the example (math) may have contributed to the abstract nature of the item.
Based on the fact that two items (classifying outcomes and identifying modeling strategies) measured skills for which you did not have enough elaboration, practice, and examples to learn well, AND that the highest exam score was ~91%, I am going to make it easy on both of us and add 5 points to everybody's score. If you happened to get either problematic question correct in the first place, consider it a small bonus for your efforts.
The exam is now open for you to review individual items if you wish. If you have any questions about any item, please contact me!