
Understanding Negative Growth

By mike_locascio on January 20, 2012 at 10:53 am


Welcome guest blogger Michael Lo Cascio. Michael is the Data and Assessment Specialist at Valley View Community School District 365U in Illinois.

As we begin to analyze the results of our latest NWEA testing, many will notice a phenomenon known as negative growth: an occurrence, most common in winter testing, in which some students' RIT scores appear to have fallen from earlier in the year. This unexpected drop in a student's score can lead to confusion, misinterpretation, and even frustration when it comes to understanding student data.

TYPICAL RESULTS

If we were to plot the fall, winter, and spring RIT scores for all of our students, most results would create a line graph similar to the one presented in Figure 1. In this example, positive RIT growth is shown between each testing administration.

In these instances, any movement upward on the RIT scale is considered evidence of student growth and an indicator of effective instruction.

(NWEA does not consider winter assessments to be an official growth point due to the limited number of instructional weeks, but winter assessments are a useful barometer of student progress and should be viewed in this light.)

NEGATIVE GROWTH

In examining data throughout the year we may come across occasions when students produce seemingly random scores or potentially inaccurate test data.  This is most commonly seen in winter due to the limited number of instructional weeks but can occur in spring as well.  If we were to plot the fall, winter, and spring RIT scores for these students, we may see graphs that show significant drops at some point in the year. 

Why these drops occur and how to deal with them are important discussions we need to undertake. Not only do we need to know that our assessment data is reliable, but even more importantly, we need to know how to use this data to inform and adjust our instruction.

GATHERING INFORMATION

While we cannot always ascertain why an anomaly occurred, we need to gather information around the event in order to inform an appropriate response. How much time did the student take to complete the test? How much negative growth was actually observed? Have living conditions changed at home? Is the student suffering from poor attitude, anxiety, or motivation issues? Have there been changes in the student's attendance or in classroom instruction? Is the data consistent with other classroom data? These questions are provided as a resource at the end of this article; use them when examining students' scores. Gathering information around these questions can help inform our response, which generally involves a combination of the following two actions (sketched in code after the list):

1. Determining the need to retest the student (critical if the drop is more than 10 RIT)

2. Determining the need to adjust instruction (utilizing DesCartes)
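
To make this two-part decision concrete, here is a minimal sketch in Python. The 10-RIT retest threshold comes from this article; the function name and data shapes are my own assumptions for illustration, not part of any MAP or DesCartes tooling.

```python
# A minimal sketch of the two-part response decision described above.
# The 10-RIT retest threshold comes from the article; names and data
# shapes are hypothetical.

RETEST_THRESHOLD = 10  # retesting is critical if the drop exceeds 10 RIT

def recommend_response(previous_rit: int, current_rit: int) -> list[str]:
    """Suggest actions for a student whose score may have dropped."""
    actions = []
    drop = previous_rit - current_rit
    if drop > RETEST_THRESHOLD:
        actions.append("retest")  # a drop this large warrants retesting
    if drop > 0:
        # any drop is a prompt to revisit instruction via DesCartes
        actions.append("review instruction against the DesCartes RIT Band")
    return actions

print(recommend_response(210, 203))  # review instruction only
print(recommend_response(215, 203))  # retest, then review instruction
```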

PUTTING STUDENT GROWTH IN CONTEXT

There can be many reasons for a drop in a student's score, but one important factor that must be considered is the standard error of measurement. This statistical term indicates the uncertainty in a reported RIT score, providing a range of plausible scores for the student, generally known as a RIT Range.

Typically, this is about plus or minus three RIT on any given test, so a RIT score of 200 would likely yield a RIT Range of 197-203. Let's consider the implications a bit further.

Spotlight:  Standard Error of Measurement and RIT Ranges

Consider a student with the following test data:

Fall RIT Score: 210 (+/- 3 RIT), so Fall RIT Range: 207-213
Winter RIT Score: 203 (+/- 3 RIT), so Winter RIT Range: 200-206

How much negative growth is seen for this student between fall and winter testing?

Is it 1, 7, or 13 RIT? The truth is that it could be any of these amounts. The RIT Ranges imply that the student's true score could have fallen from an extreme high of 213 in the fall to an extreme low of 200 in the winter (a 13-point drop), or merely from 207 in the fall to 206 in the winter (a 1-point drop), or by any amount within the parameters of the RIT Ranges. All that can truly be ascertained is that the score has fallen, which leads to the larger question: what is the significance of this change in RIT?
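
For readers who want to check the arithmetic, here is a small sketch of the RIT Range logic, assuming the plus-or-minus 3 RIT standard error used in the spotlight; none of this comes from NWEA software.

```python
# A small sketch of the RIT Range arithmetic from the spotlight above,
# assuming a +/- 3 RIT standard error of measurement throughout.

SEM = 3  # standard error of measurement, in RIT points

def rit_range(score: int) -> tuple[int, int]:
    """RIT Range implied by a reported RIT score."""
    return (score - SEM, score + SEM)

def possible_change(fall: int, winter: int) -> tuple[int, int]:
    """Largest and smallest change consistent with both RIT Ranges."""
    fall_lo, fall_hi = rit_range(fall)
    winter_lo, winter_hi = rit_range(winter)
    return (winter_lo - fall_hi, winter_hi - fall_lo)  # (worst, best)

print(rit_range(210))             # (207, 213)
print(rit_range(203))             # (200, 206)
print(possible_change(210, 203))  # (-13, -1): a drop of 1 to 13 RIT
```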

UTILIZING DESCARTES

As the spotlight above shows, differences in RIT may not always be what they appear. Of course, this same reasoning holds true for any perceived gains, which is why it is best to look at the impact of any change in RIT from an instructional viewpoint. When we insert either of the RIT scores from the above example into DesCartes (210 or 203), we find that the student is still performing in the same RIT Band of 201-210. Fundamental to the use of DesCartes is the notion that the learning level of the student remains the same throughout the ten-point RIT Band. This holds true regardless of any perceived gains or losses between test sessions. Perhaps this student needs more practice, or alternative methods of instruction, but the content of the RIT Band is still the appropriate learning level for the student.

When first referencing DesCartes it is common to focus solely on the RIT score. While this is appropriate, the adjacent RIT Bands in DesCartes may contain useful instructional skills which relate to the student’s RIT Range. You may remember that the three adjacent RIT Bands in DesCartes represent skills which a student may answer correctly 25%, 50%, and 75% of the time respectively, with the student’s RIT score falling into the center RIT Band. Though it is important to utilize the student’s primary RIT Band during instruction, the skills in the adjacent RIT Bands should not be ignored.
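
To illustrate the band arithmetic, here is a sketch that assumes ten-point bands of the form 191-200, 201-210, 211-220, and so on; the helper names are hypothetical, and DesCartes itself remains the source for the actual skills in each band.

```python
# A sketch of the ten-point RIT Band arithmetic described above,
# assuming bands of the form 191-200, 201-210, 211-220, and so on.

def rit_band(score: int) -> tuple[int, int]:
    """Ten-point RIT Band containing a score, e.g. 203 -> (201, 210)."""
    lower = ((score - 1) // 10) * 10 + 1
    return (lower, lower + 9)

def bands_to_review(score: int) -> dict[str, tuple[int, int]]:
    """Primary RIT Band plus the adjacent bands worth reviewing."""
    lo, hi = rit_band(score)
    return {
        "below (easier skills, ~75% correct)": (lo - 10, hi - 10),
        "primary (~50% correct)": (lo, hi),
        "above (harder skills, ~25% correct)": (lo + 10, hi + 10),
    }

print(rit_band(210))  # (201, 210), the same band as a score of 203
print(bands_to_review(203))
```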

DEALING WITH NEGATIVE GROWTH

It is well known that instructional decisions should not be based entirely on one data point, but in situations where negative growth pulls a student’s RIT score into a lower RIT Band this is especially true. In these instances it is essential that we rely on other sources of data to verify the child’s instructional level. By reviewing classroom assignments and assessments we may be able to determine whether the student had an off day during the test or whether there is actually a need to adjust the student’s instruction. Either way, teacher judgment is crucial in this process.

Ultimately we need to realize the importance of viewing student learning in a broader context. Though new data is useful in driving instruction and guiding discussions, utilizing additional points of data will provide a more complete picture of student performance. Consider again the student with a fall RIT score of 210 and a winter RIT score of 203.  What if the student’s previous spring score was 196? Given this additional information, we would clearly see the student’s overall trend towards positive RIT growth over time.
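
As a rough illustration of reading the trend across all three data points, here is a sketch that fits a least-squares line through the example scores; the choice of trend measure is mine, not a MAP reporting feature.

```python
# A sketch of looking at the overall trend rather than a single drop,
# using the article's example scores. The least-squares slope is my
# choice of trend measure, not something MAP reports prescribe.

from statistics import linear_regression  # Python 3.10+

terms = [0, 1, 2]         # spring, fall, winter, in testing order
scores = [196, 210, 203]  # the student's RIT scores at those points

slope, intercept = linear_regression(terms, scores)
print(round(slope, 1))  # 3.5 RIT per term: positive growth overall
```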

CHANGING OUR DISCUSSIONS

We must remember that each assessment is only a snapshot of a single point in time. Negative growth does not necessarily mean a student is not learning, or that classroom instruction has not been effective, or that NWEA data is not reliable. Rather, negative growth allows for additional opportunities to change the way we discuss our students' learning. Instead of ignoring these instances as anomalies or assigning blame for their results, we should recognize negative growth as an important element in the culture of data-driven instruction. Simply put, for unexplained reasons student scores sometimes fall. How this affects instruction should be the focus of our conversations.

MOVING FORWARD:  LIMITING NEGATIVE GROWTH

Negative growth is a reality of MAP assessment, but it can be limited. The following steps will create conditions for the best growth possible at your school:

  • Proctors should circulate the room during testing, monitoring student progress and behavior; when students are not taking the test seriously, have them restart it or reschedule them for another day.
  • Poor student behavior and discipline issues can result in poor performance on assessments.  Have teachers make judgment calls on the day of testing.  Rescheduling testing for these students is often a sound practice.
  • Give consideration to all students who complete the test in less than 15 minutes and determine whether they should retest on another day (see the sketch after this list).
  • Create a positive data-driven culture with your staff and students by celebrating the MAP assessments school-wide before, during, and after testing, including recognizing all students who demonstrate growth.
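
As an illustration of the 15-minute guideline above, here is a hypothetical sketch that flags fast finishers for a possible retest; the session records are invented, and real MAP reports will look different.

```python
# A hypothetical sketch of flagging fast finishers for a possible
# retest, using the 15-minute guideline from the list above.

FAST_FINISH_MINUTES = 15

sessions = [
    {"student": "A", "minutes": 42},
    {"student": "B", "minutes": 11},  # finished suspiciously fast
    {"student": "C", "minutes": 27},
]

retest_candidates = [s["student"] for s in sessions
                     if s["minutes"] < FAST_FINISH_MINUTES]
print(retest_candidates)  # ['B']: review and consider retesting
```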

RESOURCES

I have developed a set of guiding questions as a teacher resource to help you determine when to adjust instruction and when it is necessary to retest a student. You can download this resource, "Negative Growth Questionnaire and Guide," from the resource section of SPARK as well as from the link provided.


COMMENTS

Bryan Devine:
Mike, Great article. In Peoria, we have a lot of conversations around this topic. My talking points to staff are similar to yours. However, you are much more articulate and easier to follow.

Laura Troha:
This article really hits home for me as a middle school reading specialist. When you mentioned that the negative growth is frustrating, you are correct. However, what I feel is more frustrating is when the students make growth, then have negative growth, then grow again. Their scores are up and down, up and down. Their scores indicate when they enter and exit my program. I know my district does use DesCartes, but probably not enough for individual instruction. We have huge issues with motivation: students during testing who just keep clicking to get the test over with quickly. I like the idea of celebrating the MAP test and scores as a school. I've talked about doing that with my principal. So far, the students have stickers in their assignment notebooks to remind them of their goals. However, when they do meet their goals, nothing is celebrated. I think we need to revisit the process.

Michael LoCascio:
Thanks for the comment, Laura. I will be updating this article in the near future, but to address your issue, your school may need to reconsider the entrance and exit criteria for your program. It's not in the best interests of students to move in and out of programs. I would suggest taking the standard error of measurement into account if your system does not already do so, essentially adding 3 RIT to your existing exit criteria. This would help ensure that students are actually scoring high enough to exit your program. A better alternative would be to base the exit criteria on spring growth expectations rather than winter growth expectations. This, more than anything, should help ensure that students have demonstrated appropriate growth and are no longer in need of intensive services.

Janice Matthews:
Thank you for the information on NWEA growth. We use this tool in our district as well. It is good for me to remember that people are complicated and even the best tool does not provide perfect information all of the time.
