
Each spring, many district administrators out there are tasked with reporting metrics to others in their communities – from parents to school boards – that answer the question, “How are the schools in our community doing?”

 

Analyzing and interpreting student data can be a time-consuming process, and it's hard to know what conclusions to draw. Researchers at NWEA recognized the need to make this annual data gathering and reporting easier. About two years ago, they created the Insights Report, a 15-page analysis of district data to offer insights into students’ academic achievement and growth. Dr. Andy Hegedus was one of the innovative NWEA researchers who created the report; we talked to him about what kind of value it provides to MAP Growth partners. Below, you can learn more about the Insights Report in our video overview, or read more of our interview with Dr. Andy Hegedus on Teach Learn Grow.

 

What 2-3 areas of the report do you typically emphasize when introducing it to a superintendent or an administrator?

 

1) The simple, straightforward language and graphics that anyone can understand.

2) The one-page executive summary, which hits three key points – How are our kids doing? How are we doing on proficiency and college-readiness? How are we doing over time?

3) The report progresses from a high-level summary all the way to an in-depth analysis of growth and achievement by gender and race. Superintendents told us they wanted help with the analysis of their data, since we have the data expertise to do it and they typically don’t. Now we have a way to do that at scale.

 

How are current MAP Growth partners using the Insights Report?

 

They are providing the Insights Report to the school board – and to their teams – using the PowerPoint we provide as a starting point. (The report comes with a PowerPoint presentation, a PDF, and an hour of consulting from NWEA.) By providing a written report in plain language and a PowerPoint presentation, we intentionally made reporting district performance easier on administrators: we do the analysis for them. It saves them tons of time, and they can have faith in the numbers we provide. The Insights Report also provides third-party independence around the metrics. That’s important to some administrators and boards.

 

What was the most interesting question you got from an administrator about the data?

The most interesting question I had was from a high-performing international school that questioned our analysis of college-readiness benchmarks. Our data said that around 70% of their students were meeting college-readiness benchmarks – but 100% of their graduating students attend college.

 

In that case, I showed the administrators their achievement percentiles by grade, and they had many early learners in the 20th–30th percentiles for achievement. Every year their achievement got higher, and by 10th grade they were at the 80th percentile. So their students grew more than average, their achievement percentile increased every year, and they left the school ready for college. But if you identified a student who would best represent the entire school, he or she would be a student somewhere in between – a student on the journey to becoming college ready; they weren’t all at the top, and they weren’t all at the bottom. We discussed that growth is what you keep your eye on. Growth is your leading indicator. If you keep growing students a lot, your achievement will improve over time. It’s through growth that all of the kids will become college and career ready. If growth drops for some, they might not make it.

 

Take a look: 

 

NWEA recently introduced some new metrics into our MAP Growth Reports that identify to what extent students rapidly guessed when they took their test, and the effect that rapid guessing had on their RIT scores. I described these metrics in some detail in this blog post, which also includes a link to some broader guidance we wrote about how to make use of these metrics when interpreting your students’ test scores.

 

While we tried to cover most of the questions around these metrics, there is one question we are getting from our partners that is causing a lot of confusion – “What does it mean when the estimated impact of disengagement on a student’s RIT score is…POSITIVE?”

 

Let me explain why you may occasionally see a positive impact by talking about a response to a single item. On that item, there are generally four response options – one correct answer, and three incorrect answers (or “distractors”). If a student provides a rapid guess to the item, what is the probability that the student is going to answer the item incorrectly? Given that three of the four response options are incorrect, if a student guesses on the item, there is about a 75% chance (3/4) that the student will get the item wrong, and conversely, about a 25% chance (1/4) that the student will answer the item correctly. So, when students rapidly guess, they have a higher likelihood of getting the item wrong, and when that occurs, there can be a subsequent negative impact on their score.
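To put rough numbers on that single-item case, here is a minimal Python sketch of the guessing probabilities. The four-option format comes from the paragraph above; everything else is purely illustrative and not part of any NWEA tool.

```python
# Probability that a rapid guess on a single four-option item lands on the
# one correct answer versus one of the three distractors.
N_OPTIONS = 4

p_correct_guess = 1 / N_OPTIONS          # 1/4 = 25% chance of guessing right
p_incorrect_guess = 1 - p_correct_guess  # 3/4 = 75% chance of guessing wrong

print(f"Chance a rapid guess is correct:   {p_correct_guess:.0%}")
print(f"Chance a rapid guess is incorrect: {p_incorrect_guess:.0%}")
```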

 

But what if the student randomly gets the item…correct? If a student guesses and gets the answer right, the test score can still be biased – but in this case, the bias is positive. That is, the student’s RIT score can end up higher than it would have been if the student had actually tried on the item, and the estimated impact of disengagement will show up as a positive value.

 

Let’s expand this even further. Let’s say that a student rapidly guessed on 10% of items on a reading test – so 4 out of the 40 reading items were rapidly guessed. If we apply those same probabilities from before, we might expect the student to get 3 of those 4 rapidly guessed items wrong, and 1 of the 4 correct. In this situation, there would likely be a negative impact on the student’s RIT score (the score would be negatively biased, and the impact would be a negative value, such as -1).
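Under the same 25%/75% assumption, the expected split for those four rapidly guessed items works out as follows (a sketch only; the 40-item reading test and the 4 rapid guesses are taken from the example above):

```python
# Expected number of correct and incorrect responses among the 4 rapidly
# guessed items on a 40-item reading test, assuming four-option items.
n_rapid_guesses = 4
p_correct = 1 / 4

expected_correct = n_rapid_guesses * p_correct          # 1.0 item
expected_incorrect = n_rapid_guesses * (1 - p_correct)  # 3.0 items

print(f"Expected correct guesses:   {expected_correct:.0f} of {n_rapid_guesses}")
print(f"Expected incorrect guesses: {expected_incorrect:.0f} of {n_rapid_guesses}")
```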

 

But what if this student was incredibly lucky, and managed to guess correctly on 3 of those 4 items? In that case, the student’s score would be improved because of guessing, not negatively affected by it (the score would be positively biased, and the impact would be a positive value, such as +1). While this isn’t common (given the low probability of guessing correctly), it does happen. Across millions of test events, some students get luckier than others – they guess correctly at a rate higher than we would expect.
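How unlikely is that lucky student? Treating each rapid guess as an independent 25% chance of being correct, the binomial distribution gives the odds. This is just an illustrative sketch, not how NWEA computes the impact estimate.

```python
from math import comb

# Probability of guessing correctly on exactly k of 4 rapidly guessed
# four-option items, assuming each guess has a 25% chance of being right.
n, p = 4, 0.25

for k in range(n + 1):
    prob = comb(n, k) * p**k * (1 - p)**(n - k)
    print(f"P(exactly {k} correct) = {prob:.3f}")

# Chance of the "lucky" outcome: 3 or more correct out of 4.
p_lucky = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(3, n + 1))
print(f"P(3 or more correct) = {p_lucky:.3f}")  # roughly 0.05, about 1 in 20
```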

 

So, what should you do when you see positive values? In this case, our guidance still applies – you should consider what percentage of items were rapidly guessed, and in turn, what the subsequent impact was on the student’s RIT score. If less than 10% of items were rapidly guessed, there likely won’t be a huge impact – positive or negative – on a student’s score. If the percentage exceeds 30%, that is a clear indicator that a student’s test score isn’t valid and the student should be considered for retesting – even if the impact on the student’s RIT score is positive!
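If it helps to see those thresholds laid out, here is a rough sketch of that decision rule in Python. The 10% and 30% cut points come from the guidance above; the function itself, its name, and the wording of its messages are purely illustrative and not an NWEA API.

```python
def interpret_rapid_guessing(pct_items_rapid_guessed: float) -> str:
    """Illustrative reading of the 10% / 30% guidance described above."""
    if pct_items_rapid_guessed < 10:
        return "Little impact expected on the RIT score, positive or negative."
    if pct_items_rapid_guessed > 30:
        return "Score likely not valid; consider retesting the student."
    # Between the thresholds the guidance gives no hard rule: review the
    # estimated impact on the RIT score and interpret it with caution.
    return "Review the estimated impact and interpret the score with caution."

# Example: 4 rapid guesses on a 40-item test = 10% of items.
print(interpret_rapid_guessing(100 * 4 / 40))
```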

 

The goal of this metric is to inform you when you should or should not have confidence that student RIT scores are true reflections of their achievement level. So, whether you see a -3 or a +3 on the impact of disengagement metric, those are both telling you the same thing – the student’s rapid guessing had an impact on his or her score, and consideration should be given to whether that score should be used (or the student should be retested), and to how to make sure the student stays engaged during future testing sessions.

 

About the Author

Dr. Nate Jensen

Nate Jensen is a Research Scientist at NWEA, where he specializes in the use of student testing data for accountability purposes. He has provided consultation and support to teachers, administrators, and policymakers across the country to help establish best practices around using student achievement and growth data in accountability systems. Nate holds a Ph.D. in Counselor Education from the University of Arkansas, an M.A. in Counseling Psychology from Framingham State University, and a B.S. in Psychology from South Dakota State University.