
Understanding Instructional Weeks: Why Are They Important?

Blog Post created by Joi Converse on Aug 7, 2017

As the school year approaches, we’re starting to get more questions about interpreting and adjusting instructional weeks. Specifically, these questions revolve around how instructional weeks impact interpretations of student test performance relative to NWEA norms (i.e., achievement and growth percentiles) when the default instructional weeks settings don’t correspond to when students actually test. Let’s start off by talking about why we adjust our norms by instructional weeks and how instructional weeks are established, and then I’ll provide a practical example to show why paying attention to instructional weeks is so important.

 

First, why do we adjust our norms – and the growth projections you see in your reports that are based on those norms – by instructional weeks? The simple answer is that achievement is related to instruction. More instruction tends to produce higher achievement. Growth projections work the same way. Students typically show greater growth over an interval of 34 weeks of instruction than over an interval of 24 weeks of instruction, all other things being equal.

 

The default values for instructional weeks in the reporting system come from collecting school calendar information from NWEA partner districts over multiple school years and comparing those calendars to the dates on which students tested. Overall, most students receive 4 weeks of instruction prior to testing at the start of the year, 20 weeks of instruction prior to mid-year testing, and 32 weeks of instruction prior to end-of-year testing. These observations were used to establish 4, 20, and 32 weeks as our “default” instructional weeks for fall, winter, and spring testing. However, NWEA recognizes that these default values don’t fit the testing schedule of every school system, so we allow you to modify the instructional weeks in our reporting system to match your testing schedule. It’s important to do so if your school tests at different times than the defaults assume. For example, if your schools deliver 24 weeks of instruction between fall and spring testing, you don’t want to use a norm that assumes you gave students 28 weeks.

 

How does NWEA define an “instructional week”? An instructional week is a set of five days on which students receive instruction. The count excludes holidays, spring break, and other days when students are out of school, but it does include half days, late start days, and testing days. Ultimately, the determination of what counts as an instructional day – and, by extension, an instructional week – is up to the district. So, when you are deciding how many instructional weeks students have received in your school or district, we’d recommend you count every five days of instruction, skipping days when students aren’t in school, as one instructional week.
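If you’d like to see that counting rule in code, here’s a minimal sketch in Python. Everything in it (the function name, the calendar dates, and the days off) is hypothetical; the only rule it encodes is the one above: count the weekdays on which students are actually in school, and treat every five of them as one instructional week.

```python
from datetime import date, timedelta

def instructional_weeks(start: date, test_day: date, days_off: set) -> int:
    """Count instructional weeks from the first day of school up to (but not
    including) the test day: every five instructional days is one week.
    Weekends and any date in days_off (holidays, breaks, snow days) are
    skipped; half days, late starts, and testing days still count."""
    instructional_days = 0
    current = start
    while current < test_day:
        if current.weekday() < 5 and current not in days_off:
            instructional_days += 1
        current += timedelta(days=1)
    return instructional_days // 5

# Hypothetical calendar: school starts Monday, Aug 28, 2017, the fall test is
# given on Monday, Oct 2, 2017, and Labor Day (Sep 4) is a day off.
# That is 24 instructional days, so the student has 4 instructional weeks.
print(instructional_weeks(date(2017, 8, 28), date(2017, 10, 2), {date(2017, 9, 4)}))
```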

 

How much do instructional weeks impact interpretations of student test performance? Let me show you how both achievement and growth can be affected, using the performance of a 4th grade student in mathematics. Let’s say the student received a score of 202 in the fall; how does that score compare to those of other students in the same grade and subject area? Based on our norms, we know that this student’s score would translate to achievement at the 50th percentile if we use the default 4 instructional weeks as our frame of reference (indicating that the student has received 4 weeks of instruction). But what if this student actually received two weeks of instruction prior to testing, and we used the corresponding 2-week norms? This student’s score would now translate to achievement at the 53rd percentile. Why the change? If the student produced a score of 202 with two fewer weeks of instruction, his or her standing relative to peers (i.e., normative percentile rank) would be higher than that of students who earned that same RIT score two weeks later, after receiving those 10 additional days of instruction in math. Conversely, if the student received that score of 202 after 6 weeks of instruction, then the score would translate to achievement at the 48th percentile (relative to 6-week norms).
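To make that achievement example concrete, here’s a toy Python snippet. The dictionary simply restates the three percentile values quoted above for a grade 4 fall mathematics score of 202; the real figures come from the NWEA norms tables, not from this code, and the variable names are just for illustration.

```python
# Instructional weeks before the fall test -> achievement percentile for a
# grade 4 math RIT score of 202 (the three values cited in the example above).
percentile_for_rit_202 = {2: 53, 4: 50, 6: 48}

for weeks, percentile in sorted(percentile_for_rit_202.items()):
    print(f"RIT 202 after {weeks} weeks of instruction -> percentile {percentile}")
```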

 

The same pattern holds for growth projections. Projected growth over an interval of 24 weeks of instruction should be a bit less than it would be for that same student over 28 weeks of instruction. Using the 4th grader who starts the year with a score of 202 again as our example, the normative growth projection for this student with 28 weeks of instruction between the fall and spring test events (testing after 4 and 32 weeks of instruction) is 11.55 points (rounded to 12 points for reporting purposes). If we shorten the interval to 24 weeks of instruction – for example, testing after 6 weeks of instruction in the fall and after 30 weeks of instruction in the spring – the growth projection drops to 9.91 points (10 points rounded). This again likely makes intuitive sense: we would expect less improvement over time for students who have fewer days of actual instruction. The opposite is true, too – the normative growth projection extends to 13.18 points (13 points rounded) with 32 weeks of instruction (testing after 2 and 34 weeks of instruction).
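Here’s the same kind of toy lookup for the growth projections. Again, the three values are the ones quoted above for a 4th grader starting the year at 202 in mathematics; the actual projections come from the NWEA norms and depend on the instructional weeks configured in reporting.

```python
# Weeks of instruction between the fall and spring tests -> projected RIT
# growth for a 4th grader starting at 202 in math (values cited above).
growth_projection = {24: 9.91, 28: 11.55, 32: 13.18}

for weeks, projected in sorted(growth_projection.items()):
    print(f"{weeks} weeks of instruction -> projected growth of {projected} "
          f"RIT points (reported as {round(projected)})")
```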

 

For both achievement and growth, you can see that our interpretation of student performance relative to the norms can really shift if there is a large difference between how much instruction has actually occurred and the instructional week values specified in the reporting system. If, for example, the same 4th grade student we’ve been using in our examples grew 12 points over the course of 32 weeks of instruction, our interpretation of his performance would be that he did not meet his growth projection, since the 32-week growth projection for this student is 13.18 points.

 

However, if the instructional weeks were not adjusted in reporting and the default values were used (28 weeks of instruction, with a projected growth value of 11.55 points), then our interpretation of this student’s performance would be that he had exceeded his growth projection. Naturally, the reverse situation also holds: when students receive fewer weeks of instruction between test events than the default values assume, the growth projections against which they are being evaluated are artificially high. In such cases, one might erroneously conclude that a student had failed to meet his growth projection when, in fact, he had met it.
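A small sketch of that comparison, using only the numbers already quoted for our example student (12 points of observed growth, a 13.18-point projection for the 32 weeks of instruction actually delivered, and an 11.55-point projection under the default 28-week assumption):

```python
observed_growth = 12              # the example 4th grader's fall-to-spring RIT gain

projection_actual_weeks = 13.18   # projection for the 32 weeks of instruction actually delivered
projection_default_weeks = 11.55  # projection under the default 28-week assumption

# With instructional weeks adjusted correctly, the student falls short of his projection...
print(observed_growth >= projection_actual_weeks)   # False
# ...but against the unadjusted default, he appears to have exceeded it.
print(observed_growth >= projection_default_weeks)  # True
```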

 

These examples should demonstrate how important it is to pay attention to instructional weeks when interpreting the test performance of your students. This is especially true when student test results are being used for high-stakes purposes for your students, teachers, or schools. Ultimately, we want to make sure that the data we have about our students give us an accurate picture of their achievement and growth. To do that, it’s really important that you pay attention to the number of instructional weeks your students have received and, just as important, that those weeks match the instructional weeks you’ve set up in the reporting system.


About the Author

Dr. Nate Jensen

Nate Jensen is a Research Scientist at NWEA, where he specializes in the use of student testing data for accountability purposes. He has provided consultation and support to teachers, administrators, and policymakers across the country to help establish best practices around using student achievement and growth data in accountability systems. Nate holds a Ph.D. in Counselor Education from the University of Arkansas, an M.A. in Counseling Psychology from Framingham State University, and a B.S. in Psychology from South Dakota State University.
