
What Does It Mean When the Estimated Impact of Disengagement on RIT Is Positive?

Blog Post created by Joi Converse on Apr 17, 2018

NWEA recently introduced some new metrics into our MAP Growth reports that identify the extent to which students rapidly guessed when they took their test, and the effect that rapid guessing had on their RIT scores. I described these metrics in some detail in this blog post, which also includes a link to broader guidance we wrote on how to use these metrics when interpreting your students’ test scores.


While we tried to cover most of the questions around these metrics, there is one question we are getting from our partners that is causing a lot of confusion – “What does it mean when the estimated impact of disengagement on a student’s RIT score is…POSITIVE?”


Let me explain why you may occasionally see a positive impact by talking about a response to a single item. On that item, there are generally four response options – one correct answer, and three incorrect answers (or “distractors”). If a student provides a rapid guess to the item, what is the probability that the student is going to answer the item incorrectly? Given that three of the four response options are incorrect, if a student guesses on the item, there is about a 75% chance (3/4) that the student will get the item wrong, and conversely, about a 25% chance (1/4) that the student will answer the item correctly. So, when students rapidly guess, they have a higher likelihood of getting the item wrong, and when that occurs, there can be a subsequent negative impact on their score.
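
If you like to see the arithmetic spelled out, here is a minimal Python sketch of that single-item logic. The four-option format is the assumption from the paragraph above; this is an illustration, not NWEA’s scoring model.

# Chance that a single rapid guess lands on the correct answer,
# assuming a four-option item: one correct answer, three distractors.
n_options = 4
p_correct = 1 / n_options      # 0.25 -- about a 1-in-4 chance
p_incorrect = 1 - p_correct    # 0.75 -- about a 3-in-4 chance

print(f"P(correct guess) = {p_correct:.2f}")
print(f"P(incorrect guess) = {p_incorrect:.2f}")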


But what if the student randomly gets the item…correct? If a student guesses and gets the answer right, the test score can still be biased – but in this case, positively. That is, a lucky guess can push the student’s RIT score higher than it would have been if the student had actually tried on the item, and the estimated impact of disengagement will show up as a positive value.


Let’s expand this even further. Say a student rapidly guessed on 10% of the items on a reading test – so 4 out of 40 reading items were rapidly guessed. If we apply those same probabilities from before, we would expect the student to get 3 of those 4 rapidly guessed items wrong and 1 of the 4 correct. In this situation, there would likely be a negative impact on the student’s RIT score (the score would be negatively biased, and the impact would be a negative value, such as -1).
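
As a back-of-the-envelope check, here is that expected split in a short Python sketch (the 4-of-40 example and the 1-in-4 guessing probability come from above; the actual impact estimate NWEA reports is computed differently):

# Expected outcomes for 4 rapid guesses, each with a 1-in-4
# chance of landing on the correct answer.
n_guessed = 4
p_correct = 0.25

expected_right = n_guessed * p_correct        # 1 item correct by luck
expected_wrong = n_guessed * (1 - p_correct)  # 3 items wrong

print(f"Expected: {expected_right:.0f} correct, {expected_wrong:.0f} wrong of {n_guessed} guesses")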


But what if this student were incredibly lucky and managed to guess correctly on 3 of those 4 items? In that case, the student’s score would be improved by the guessing, not hurt by it (the score would be positively biased, and the impact would be a positive value, such as +1). And while this isn’t common (given the low probability of guessing correctly), it does happen. Across millions of test events, some students simply get luckier than others – they guess correctly at a rate higher than we would expect.
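
How unlikely is that lucky streak? Treating the 4 rapid guesses as independent tries on four-option items (my simplification, for illustration only), a quick binomial calculation puts the chance of getting 3 or more of them correct at about 5% – uncommon, but far from impossible across millions of test events:

from math import comb

# P(exactly k of n rapid guesses correct), binomial with p = 0.25
def p_exactly(k, n=4, p=0.25):
    return comb(n, k) * p**k * (1 - p)**(n - k)

p_lucky = p_exactly(3) + p_exactly(4)  # 3 or all 4 of the guesses correct
print(f"P(3+ of 4 guesses correct) = {p_lucky:.3f}")  # ~0.051, about 1 in 20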


So, what should you do when you see positive values? Our guidance still applies – consider what percentage of items were rapidly guessed and, in turn, what the subsequent impact was on the student’s RIT score. If less than 10% of items were rapidly guessed, there likely won’t be a large impact – positive or negative – on the student’s score. And if the percentage exceeds 30%, that is a clear indicator that the student’s test score isn’t valid and the student should be considered for retesting – even if the impact on the student’s RIT score is positive!
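
As a rule of thumb, the guidance reads roughly like the sketch below. The function and its messages are my paraphrase; only the 10% and 30% thresholds come from the guidance itself.

def interpret_rapid_guessing(pct_rapidly_guessed):
    # Rough paraphrase of the guidance on rapid-guessing rates.
    if pct_rapidly_guessed < 10:
        return "Likely little impact on the RIT score, positive or negative."
    elif pct_rapidly_guessed <= 30:
        return "Review the estimated impact before relying on the score."
    else:
        return "Score likely invalid; consider retesting the student."

print(interpret_rapid_guessing(35))  # applies even when the estimated impact is positive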


The goal of this metric is to tell you when you should or should not have confidence that a student’s RIT score is a true reflection of his or her achievement level. So whether you see a -3 or a +3 on the impact of disengagement metric, both are telling you the same thing – the student’s rapid guessing had an impact on his or her score, and you should consider whether that score should be used (or the student retested), and how to keep the student engaged during future testing sessions.


About the Author

Dr. Nate Jensen

Nate Jensen is a Research Scientist at NWEA, where he specializes in the use of student testing data for accountability purposes. He has provided consultation and support to teachers, administrators, and policymakers across the country to help establish best practices around using student achievement and growth data in accountability systems. Nate holds a Ph.D. in Counselor Education from the University of Arkansas, an M.A. in Counseling Psychology from Framingham State University, and a B.S. in Psychology from South Dakota State University.
