
MAP Spanish offers equitable assessment tools to better understand the next steps in learning for your Spanish-speaking students. Here are answers to the top 10 most frequently asked questions:

 

1. How much does it cost?

There is no additional charge for MAP Spanish assessments. Starting this fall, Spanish MAP Growth reading and math tests will automatically be included with MAP Growth K-2 and MAP Growth licenses, and MAP Reading Fluency licenses will include Spanish assessment options, as well. You simply select a language preference for the assessments you already have–and that’s it.

 

2. If a student takes both English and Spanish tests, does that require two licenses?

No. A student can take any tests included with the license for the regular, single license price.

 

3. What subjects and grades will have Spanish test options?

Spanish assessments will be available for the following subjects and grades starting this fall:

  • MAP Growth Reading K-8
  • MAP Growth Math K-12
  • MAP Reading Fluency K-3

 

4. What do we need to do to get the Spanish assessments for next fall?

You don’t need to do anything. The Spanish assessments will automatically be included with your MAP Growth and MAP Reading Fluency assessments for the 2019-20 school year at no additional cost.

 

5. Which students are the Spanish assessments appropriate for?

The Spanish assessments can be used by native Spanish speakers receiving Spanish-only instruction, native Spanish speakers receiving English-only instruction, native Spanish speakers receiving English and Spanish instruction, and any students learning Spanish as part of a dual language immersion or foreign language program. Note that educators should expect that students receiving English-only instruction will likely show lower growth on the Spanish assessments than students also receiving instruction in Spanish.

 

6. Are these just translated tests, or how do you build the item pool?

The item pool for the Spanish MAP Growth assessments is made up of both items that are trans-adapted from our English item pool, meaning translated and checked for cultural bias, and newly created, authentic Spanish items. All of the Spanish passages and items for MAP Reading Fluency are newly created, authentic Spanish content.

 

7. What Spanish dialect are the test items written in?

We used a generic, standard variety of Spanish that is not specific to any one dialect. We avoid words or phrases that are dialect specific.

 

8. Our district has licenses for the Spanish reading screener in MAP Growth—what’s happening to that assessment come fall?

Because MAP Growth for K-8 and MAP Reading Fluency for K-3 offer comprehensive, adaptive reading assessments in Spanish, the K-8 Spanish Reading Screeners will be retired at the end of the current school year (2018-19) and will no longer be available in the fall.

 

9. Our district has licenses for Spanish math in MAP Growth—what’s happening to that assessment for fall?

The Spanish MAP Growth math assessments will continue as is and will be available to all MAP Growth partners. You will no longer need any additional licenses for the Spanish math tests; they will simply be included with your MAP Growth license at no additional charge.

 

10. Can schools get started with the Spanish assessments this school year?

Yes, we are actively looking for additional partners to join our pilot programs this school year. There is no cost to join the pilot, and you can get started right away. If you are interested in joining the Spanish MAP Growth Reading or Spanish MAP Reading Fluency pilots for spring 2019, please contact your account manager today.

Ever wonder how someone gets started with MAP Growth – and then becomes an expert? Check out a recent post over at our Teach. Learn. Grow. blog to get a perspective from a MAP-novice-turned-professional-learning-facilitator.

Former teacher Lindsay Stoelting shares how he became the MAP Coordinator at his international school (hint: pretty much by accident!) and what he’s learned about how much schools have in common now that he is a Professional Learning Facilitator for NWEA.

 

And stay tuned – this is the first in a series where we will be talking to former teachers and asking them to share “what they wish they had known” about implementing MAP Growth back when they were just getting started.

NWEA recently introduced some new metrics into our MAP Growth Reports that identify to what extent students rapidly guessed when they took their test, and the effect that rapid guessing had on their RIT scores. I described these metrics in some detail in this blog post, which also includes a link to some broader guidance we wrote about how to make use of these metrics when interpreting your students’ test scores.

 

While we tried to cover most of the questions around these metrics, there is one question we are getting from our partners that is causing a lot of confusion – “What does it mean when the estimated impact of disengagement on a student’s RIT score is….POSITIVE?”

 

Let me explain why you may occasionally see a positive impact by talking about a response to a single item. On that item, there are generally four response options – one correct answer, and three incorrect answers (or “distractors”). If a student provides a rapid guess to the item, what is the probability that the student is going to answer the item incorrectly? Given that three of the four response options are incorrect, if a student guesses on the item, there is about a 75% chance (3/4) that the student will get the item wrong, and conversely, about a 25% chance (1/4) that the student will answer the item correctly. So, when students rapidly guess, they have a higher likelihood of getting the item wrong, and when that occurs, there can be a subsequent negative impact on their score.

 

But what if the student randomly gets the item…correct? If a student guesses and gets the answer right, the test score can still be biased – however, in this case, the student’s score can be positively biased. That is, the estimated impact of disengagement on the student’s RIT score can result in the student’s score being higher than if the student had actually tried on the item.

 

Let’s expand this even further. Let’s say that a student rapidly guessed on 10% of items on a reading test – so 4 out of the 40 reading items were rapidly guessed. If we apply those same probabilities from before, we might expect the student to get 3 of those 4 rapidly guessed items wrong, and 1 of the 4 rapidly guessed items correct. In this situation, there would likely be a negative impact on the student’s RIT score (the score would be negatively biased, and the impact would be a negative value, such as -1).

 

But what if this student was incredibly lucky, and managed to guess correctly on 3 of those 4 items? In that case, the student’s score would be improved because of guessing, not negatively affected by it (the score would be positively biased, and the impact would be a positive value, such as +1). And while this isn’t common (given the low probability of guessing correctly), that doesn’t mean it doesn’t happen. And across millions of test events, there are some students who get luckier than others – they guess correctly at a rate higher than we would expect.
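To make the “lucky guesser” point concrete, here is a small illustration of the basic probability at work (a sketch only, not NWEA’s scoring model): how often a student who rapidly guesses on 4 four-option items would get various numbers of them correct, assuming each guess has a 1-in-4 chance of landing on the correct answer.

    from math import comb

    P_CORRECT = 0.25   # a rapid guess on a four-option item is right about 1 time in 4
    N_GUESSES = 4      # the four rapidly guessed items from the example above

    for k in range(N_GUESSES + 1):
        p = comb(N_GUESSES, k) * P_CORRECT**k * (1 - P_CORRECT)**(N_GUESSES - k)
        print(f"{k} of {N_GUESSES} guesses correct: {p:.1%}")

Running this shows roughly a 42% chance of exactly one correct guess (the expected case from the example) and only about a 5% chance of three or more correct guesses: uncommon, but across millions of test events some students will be that lucky.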

 

So, what should you do when you see positive values? In this case, our guidance still applies – you should consider what percentage of items were rapidly guessed, and in turn, what the subsequent impact was on the student’s RIT score. If less than 10% of items were rapidly guessed, there likely won’t be a huge impact – positive or negative – on a student’s score. And, if the percentage exceeds 30%, that is a clear indicator that a student’s test score isn’t valid and the student should be considered for retesting – even if the impact on the student’s RIT score is positive!
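As a rough summary of that guidance (the 10% and 30% thresholds below come from this post; the wording of the messages is just illustrative, not an official rule), the decision could be sketched like this:

    def engagement_flag(pct_rapid_guessed):
        """Return guidance text based on the percentage of items rapidly guessed."""
        if pct_rapid_guessed < 10:
            return "Little expected impact on the RIT score, positive or negative."
        if pct_rapid_guessed > 30:
            return "Score is likely not valid; consider retesting the student."
        return "Review the estimated impact on the RIT score before using the score."

    print(engagement_flag(7))    # below 10 percent
    print(engagement_flag(35))   # above 30 percent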

 

The goal of this metric is to inform you when you should or should not have confidence that student RIT scores are true reflections of their achievement level. So, whether you see a -3 or a +3 on the impact of disengagement metric, those values are telling you the same thing – the student’s rapid guessing had an impact on his or her score, and you should consider whether that score should be used (or whether the student should be retested), and how to make sure the student stays engaged during future testing sessions.

 

About the Author

Dr. Nate Jensen

Nate Jensen is a Research Scientist at NWEA, where he specializes in the use of student testing data for accountability purposes. He has provided consultation and support to teachers, administrators, and policymakers across the country to help establish best practices around using student achievement and growth data in accountability systems. Nate holds a Ph.D. in Counselor Education from the University of Arkansas, an M.A. in Counseling Psychology from Framingham State University, and a B.S. in Psychology from South Dakota State University.

Thanks to all of you who attended our webinar, Get to know MAP Reading Fluency on February 7. We could not get to all your questions during the live webinar and wanted to share answers to the most frequently asked questions here.

 

Does MAP Reading Fluency work on iPads?

 

There is a dedicated app to deliver the MAP Reading Fluency student test on an iPad; however, it is currently in use for research purposes only. This app will be available for general use in the 2018-19 school year.

 

 

What type of headsets are required? Can you provide a recommended brand and approximate cost?

 

Over-ear headsets with a boom-style microphone and passive noise canceling are required to use MAP Reading Fluency. NWEA successfully used Avid brand models AE-36 and AE-39 in pilot testing. The two models vary in the input connection. AE-36 is an analog (3.5mm or aux) connection, and AE-39 is USB.

 

Preferred pricing is available to NWEA partners through Supply Master. The AE-36 model is $10 per unit and the AE-39 is $21. The USB connection provides a higher-quality audio recording, which may improve ease of use of MAP Reading Fluency tests. USB headsets are recommended for desktop and laptop test delivery. Analog is required for iPad and recommended for Chromebooks.

 

How often can the test be given? Can MAP Reading Fluency be used as a universal screener and/or as a progress monitoring tool?

 

MAP Reading Fluency can be used in fall, winter, and spring as a universal screener or benchmark assessment. NWEA plans to support more frequent usage for progress monitoring in a future version of the test, pending the calibration of sufficient content.

 

Is there an extra cost to add MAP Reading Fluency?

 

Yes, MAP Reading Fluency is an additional piece of the MAP Suite that is available at an additional cost. However, special pricing is available for existing MAP Growth partners. Contact your account manager for more details.

 

Can the test be used with ELL students?

 

Absolutely! MAP Reading Fluency is well-suited to assess English learners because it isolates specific skills and dimensions of oral reading for which English learners may experience uneven progress. For example, some ELL students may struggle more with comprehension than their fluency level would suggest, and this is clearly identified in MAP Reading Fluency reports.

 

How does the test account for speech delays, accents, or other speech issues that might be found in K-3 students?

 

The settings or “strictness” of the speech-scoring engine have been tuned to a general population of K-3 students across the U.S., including those with regional accents, second language acquisition accents, and speech impairments. Developmental speech patterns and moderate accents are well-tolerated because the tuning has been set leniently. For students with pronounced articulation difficulties or strong accents, a higher rate of un-scorable audio may be observed. This can be addressed by using the audio review functionality to provide a score manually.

 

What norms are used for determining the grade-level expectations for Words Correct Per Minute (WCPM)?

 

Hasbrouck and Tindal norms for oral reading fluency are used to set the expectation levels for passage reading. Performance on other measures is classified as meeting, above, approaching, or below expectation based on judgment from curriculum experts and empirical data from field testing.

 

Can MAP Reading Fluency be used beyond grades K-3 for struggling readers?

 

At this time, MAP Reading Fluency is only appropriate for K-3 students. In a future version of the assessment, NWEA plans to introduce content, test logic, and an interface that are suited to older, struggling readers.

 

Is the instructional reading level aligned with leveled readers, such as Fountas and Pinnell, DRA, or others?

 

The instructional reading level is reported as a range, using a grade-equivalency scale. Using an instructional reading level chart, this grade-based value can be correlated with leveled readers, such as Fountas and Pinnell, DRA, and more.

 

Can schools get started yet this school year? How can we learn more, get a demo, or get started?

 

Yes! The Early Adopter program is accepting enrollments from schools and districts in the US throughout the remainder of the 2017-18 school year for those who want to start now. The winter test window is currently open, and the spring window will begin March 26th.

 

Contact your account manager to enroll, and to learn more or get a demo. If you are not sure who your account manager is, please visit http://nwea.us/followup, or call 1-866-654-3246.

As educators, we constantly hear how important data collection is, but are often not given the tools for what to do with data. We need to change that! In this post, I’m tackling how data can be used to design small reading groups (guided reading) in K-2 classrooms. The steps below outline a repeatable framework that can be applied each time you collect data and regroup students according to their reading level.

 

Assess all students’ reading over the course of 1-5 days. Ideally, assessment occurs 3-5 times per year to provide actionable data. The rationale for testing your entire class over the course of 1-5 days is simply to ensure ALL data is collected within a manageable time frame. Time is a scarce resource for educators, so setting a concrete timeline helps to ensure all students’ reading is assessed. When I taught, I tested in September, December, February, April, and June, and created “Inquiry Week” mini-units (students voted on the unit topic). This provided new, exciting content for students to learn and allowed me to pause my guided reading instruction, so I could test everyone.

 

Assess multiple reading skills to build a full reader profile of each student. The testing process will look different depending on the grade level, but your overall assessment should include a consistent set of leveled texts that all students read (some read one, some read multiple, but the key is that the texts stay consistent regardless of the student). When reading a text, assess students on the following: concepts of print (Kindergarten only), accuracy, comprehension, rate, and fluency. Most assessments already contain these subtests, but if they don’t, create a quick template for your class so you have data in all the categories listed above.
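If your assessment doesn’t already organize these subtests for you, a simple template can be as basic as one row per student with a column per category. Here is one hypothetical way to lay that out (the field names and values are just examples, not from any particular assessment):

    # One row per student, one column per subtest -- think of it as the header
    # row of a class spreadsheet. All values shown are made up.
    columns = ["student", "text_level", "concepts_of_print", "accuracy_pct",
               "comprehension", "rate_wcpm", "fluency"]

    roster = [
        {"student": "Student A", "text_level": "D", "concepts_of_print": None,
         "accuracy_pct": 96.0, "comprehension": "at", "rate_wcpm": 42, "fluency": "phrased"},
        {"student": "Student B", "text_level": "B", "concepts_of_print": None,
         "accuracy_pct": 88.5, "comprehension": "below", "rate_wcpm": 25, "fluency": "word-by-word"},
    ]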

 

Analyze the reading data on a class, group, and individual student level. This is the most crucial step in creating your small groups because this is where data becomes action.

 

  • Class-wide lens: Using your class list, enter each student’s score on all sub-tests to view the data from a class-wide lens.
  • Group-wide lens: Using the above, at, and below benchmarks, create small reading groups of about six students each (educators with large class sizes can increase group size, but not beyond eight per group). Students grouped together should be within 1-2 levels of each other to be most effective (see the example sketch after this list). As you create these small groups, make note of the group’s most common instructional needs in concepts of print (Kindergarten only), accuracy, comprehension, rate, and fluency.
  • Individual-student lens: Once you have each student in a small group, scan the data for the instructional area that is the highest leverage for the student’s reading growth. A helpful question to ask yourself is, “What held this student back from reaching the next level?”
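Here is a minimal sketch of the grouping rule described above: fill groups of about six students, keeping everyone in a group within 1-2 levels of each other. The helper function and the sample class are hypothetical, and it assumes your reading levels can be placed on a simple numeric scale.

    MAX_GROUP_SIZE = 6    # can stretch to 8 for large classes, per the guidance above
    MAX_LEVEL_SPREAD = 2  # students in a group stay within 1-2 levels of each other

    def make_reading_groups(students):
        """students: list of (name, numeric_reading_level) pairs."""
        ordered = sorted(students, key=lambda s: s[1])
        groups, current = [], []
        for name, level in ordered:
            # start a new group when the current one is full or the level spread is too wide
            if current and (len(current) == MAX_GROUP_SIZE
                            or level - current[0][1] > MAX_LEVEL_SPREAD):
                groups.append(current)
                current = []
            current.append((name, level))
        if current:
            groups.append(current)
        return groups

    sample_class = [("Ana", 3), ("Ben", 4), ("Cy", 3), ("Dee", 6), ("Eli", 5), ("Fay", 4), ("Gus", 7)]
    for group in make_reading_groups(sample_class):
        print(group)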

 

Create mini-instructional units for each small group. Mini-instructional units will guide your small group instruction over the next assessment period. Typically, mini-instructional units cover 4-6 weeks of learning. The timeframe gives students time to learn new skills, apply them in real time with your feedback, and make solid progress. This is where that common goal you noted during the “group-wide lens” analysis is a big help! Take that goal and backwards-plan 4-6 weekly objectives to guide students in meeting that goal. Now that your mini-unit has an instructional focus, drop in the relevant content standards and your daily objectives. To be even MORE precise, add in weekly phonics goals for each group – sometimes referred to as “word work.”

 

Share individual goals with students! By sharing individual learning goals with students, they begin to take ownership over their learning. You can type out student goals on small strips of paper, print them on labels to create “stickers” for students, or share them verbally. This process begins to shift the continuum of voice from teacher centered to learner centered.

 

You can use this framework each time you assess your class reading growth to create focused instruction for all your students.

 

How do you create reading groups? Share your thoughts in the discussion below!

 

View original blog post on Teach. Learn. Grow here.


About the Author

Amy Schmidt is a content designer on the Professional Learning Design team at NWEA. As a former K-2 classroom teacher, instructional coach, and curriculum designer, she passionately believes all children can and will learn. She loves creating professional learning that creates meaningful growth experiences for teachers and students.

As the school year approaches, we’re starting to get more questions about interpreting and adjusting instructional weeks. Specifically, these questions revolve around how instructional weeks impact interpretations of student test performance relative to NWEA norms (i.e., achievement and growth percentiles) when the default instructional weeks settings don’t correspond to when students actually test. Let’s start off by talking about why we adjust our norms by instructional weeks and how instructional weeks are established, and then I’ll provide a practical example to show why paying attention to instructional weeks is so important.

 

First, why do we adjust our norms – and the growth projections you see in your reports that are based on those norms – by instructional weeks? The simple answer is that achievement is related to instruction. More instruction tends to produce higher achievement. Growth projections work the same way. Students typically show greater growth over an interval of 34 weeks of instruction than over an interval of 24 weeks of instruction, all other things being equal.

 

The default values for instructional weeks in the reporting system are the result of collecting school calendar information from NWEA partner districts over multiple school years and comparing those to the dates on which students tested. Overall, most students receive 4 weeks of instruction prior to testing at the start of the year, 20 weeks of instruction prior to mid-year testing, and 32 weeks of instruction prior to end-of-year testing. These observations were used to establish the 4th, 20th, and 32nd weeks as our “default” number of instructional weeks for fall, winter, and spring testing. However, NWEA recognizes that these default instructional week values don’t fit the testing schedule of all school systems, so we allow you to modify the instructional weeks in our reporting system to match your testing schedule. It’s important to modify the instructional weeks in reporting if your school tests at different times than the default values. For example, if your schools deliver 24 weeks of instruction between fall and spring testing, you don’t want to use a norm that assumes you gave students 28 weeks. 

 

How does NWEA define what is considered an “instructional week”? An instructional week is a set of five days in which students receive instruction, which doesn’t include holidays, time off for spring break, or other days when students are out of school – but it would include half days, late start days, and testing days. The determination of what is considered an instructional day – and by extension, an instructional week – is really up to the district, and should be considered when determining how much instruction has occurred. So, when you are deciding how many instructional weeks students have received in your school or district, we’d recommend you count every five days of instruction, not counting days when students aren’t in school, as one instructional week.
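Put another way, the counting rule above can be sketched in a few lines: count the weekdays between the first day of school and the test date, skip the days your district marks as non-instructional, and treat every five remaining days as one instructional week. The dates and holidays below are made up for illustration; your district calendar is what actually matters.

    from datetime import date, timedelta

    def instructional_weeks(first_day, test_day, non_instructional_days):
        days = 0
        current = first_day
        while current < test_day:
            # weekdays only; half days, late starts, and testing days still count
            if current.weekday() < 5 and current not in non_instructional_days:
                days += 1
            current += timedelta(days=1)
        return days // 5

    holidays = {date(2019, 9, 2), date(2019, 11, 28)}  # hypothetical district calendar
    print(instructional_weeks(date(2019, 8, 26), date(2019, 9, 23), holidays))  # prints 3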

 

How much do instructional weeks impact interpretations of student test performance? Let me show you how both achievement and growth can be affected, using the performance of a 4th grade student in mathematics. Let’s say the student received a score of 202 in the fall; how does that score compare to that of other students in the same grade and subject area? Based on our norms, we know that this student’s score would translate to achievement at the 50th percentile if we are using the default 4 instructional weeks as our frame of reference (indicating that the student has received 4 weeks of instruction). But what if this student actually received two weeks of instruction prior to testing, and we used the corresponding 2 week norms? This student’s score would now translate to achievement at the 53rd percentile. Why the change? If the student produced a score of 202 with two fewer weeks of instruction, his or her standing relative to peers (i.e., normative percentile rank) would be higher than students who received that same RIT score two weeks later after receiving those 10 additional days of instruction in math. Conversely, if the student received that score of 202 after 6 weeks of instruction, then this student’s score would translate to achievement at the 48th percentile (relative to 6 week norms). 

 

 

The same pattern holds for growth projections. Projected growth over an interval of 24 weeks of instruction should be a bit less than it would be for that same student over 28 weeks of instruction. Using the 4th grader who starts the year with a score of 202 again as our example, the normative growth projection for this student with 28 weeks of instruction (4 to 32 weeks) between the fall and spring test event is 11.55 points (rounded to 12 points for reporting purposes). If we shortened the number of instructional weeks to 24 weeks of instruction – such as testing after 6 weeks of instruction in the fall and after 30 weeks of instruction in the spring – the growth projection drops to 9.91 points (10 points rounded). This again likely makes intuitive sense – we would expect less improvement over time for students if they have fewer days of actual instruction. The opposite is true, too – the normative growth projection extends to 13.18 points (13 points rounded) with 32 weeks of instruction (testing after 2 and 34 weeks of instruction).
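For reference, here are the example values from the two paragraphs above gathered in one place. These are only the illustrative numbers from this post for a 4th grade mathematics RIT score of 202; the actual values for your students come from the NWEA norms tables for their grade, subject, and instructional weeks.

    # Fall achievement percentile for a grade 4 math RIT of 202,
    # by weeks of instruction received before the fall test
    fall_percentile = {2: 53, 4: 50, 6: 48}   # 4 weeks is the fall default

    # Fall-to-spring growth projection (RIT points) for the same student,
    # by weeks of instruction between the two test events
    growth_projection = {24: 9.91, 28: 11.55, 32: 13.18}   # 28 weeks is the default interval

    weeks_before_fall_test = 4
    print(f"Fall percentile for RIT 202 after {weeks_before_fall_test} weeks: {fall_percentile[weeks_before_fall_test]}th")

    weeks_between_tests = 28
    print(f"Projected growth over {weeks_between_tests} instructional weeks: {growth_projection[weeks_between_tests]} RIT points")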

 

 

For both achievement and growth, you can see that our interpretation of student performance relative to the norms can really shift if there is a large difference between how much instruction has actually occurred and the instructional week values specified in the reporting system. If, for example, the same 4th grade student we’ve been using in our examples grew 12 points over the course of 32 weeks of instruction, our interpretation of his performance would be that he did not meet his growth projection, since the 32-week growth projection for this student is 13.18 points.

 

However, if the instructional weeks were not adjusted in reporting, and the default values were used (28 weeks of instruction, with a projected growth value of 11.55 points), then our interpretation of this student’s performance would be that he had exceeded his growth projection. Naturally, the reverse situation also holds: when students receive fewer weeks of instruction between test events than the default values assume, the growth projections against which students are being evaluated are artificially high. In such cases, one might erroneously conclude that a student had failed to meet his growth projection, when in fact he had met it.

 

This should demonstrate how important it is to pay attention to instructional weeks when interpreting the test performance of your students. This is especially true when student test results are being used for high stakes purposes for your students, teachers, or schools. Ultimately, we want to make sure that the data we have about our students give us an accurate picture of their achievement and growth. And in order to do that, it’s really important that you pay attention to the number of instructional weeks your students have received, and just as important, that those weeks match the instructional weeks you’ve set up in the reporting system.


About the Author

Dr. Nate Jensen

Nate Jensen is a Research Scientist at NWEA, where he specializes in the use of student testing data for accountability purposes. He has provided consultation and support to teachers, administrators, and policymakers across the country to help establish best practices around using student achievement and growth data in accountability systems. Nate holds a Ph.D. in Counselor Education from the University of Arkansas, an M.A. in Counseling Psychology from Framingham State University, and a B.S. in Psychology from South Dakota State University.

As educators, we’re not strangers to working on teams. We work with our colleagues to enrich the curriculum; we work with our peers to monitor the lunchroom and playground; we work with our students to determine their path to success. And of course, we tag-team with parents to cheer students on, hold them accountable, and look out for their best interests.

 

With increasingly personalized MAP results that pinpoint exact areas for students’ focus, communicating with parents becomes at once easier and more important. Thankfully, as parents familiarize themselves with the results, teamwork becomes more seamless as a clear, custom plan for their child emerges.

 

So, how do we get parents on-board? The first step is understanding. It’s up to us to demystify MAP for parents, just as we did for ourselves and our students. Below are some ways to help parents feel confident in their understanding and courageous about tag-teaming a plan for their student’s success.

 

  • First, direct parents to a dedicated NWEA parents’ page, designed to demystify MAP and answer questions. It includes a very helpful video to explain the intent behind MAP testing.
  • Advocate for your school to have a MAP fluency night. A presentation on common vocabulary and an explanation of the test alleviate confusion and explain the ultimate purpose of the test: to get student-level data to tailor instruction to students’ specific needs. Community Consolidated School District 181 even filmed theirs for parents who could not attend!
  • For a more personal option, consider sharing an orientation video that you make yourself and post to YouTube for parents who wish to watch at home. Consider including automatic subtitles (in the original language or a translated version!) for increased accessibility.
  • You can always post a link to or embed the Parent’s Guide to MAP!

 

The more parents are aware of the game-plan, the more confident they’ll feel on the court! Have any other ideas of how to gear up for the season? Let us know in the comments below!

As you read the blog, consider the question: Are there any games and programs you are using in your classroom to help personalize learning for your students?

 

After you have read the post, continue the conversation in the comment section below.


Originally posted on Teach.Learn.Grow on September 22, 2015 by Joi Converse

 

Personalized learning, the practice of tailoring instruction to meet each student’s strengths, needs, and interests, helps create a classroom environment that engages and accelerates learning for all students. Studies have shown that teachers who use assessment data to customize their instruction improve students’ reading and math outcomes, while also closing persistent achievement gaps.

 

Thanks to technology like smartphones, tablets, and laptops, there are now apps that teachers and students can use to facilitate personalized learning and instruction. Graphite.org has a nice list of some games, apps, and sites that can help teachers put students first as they deliver personalized learning. Here are seven to consider:

 

  1. Smarty Ants – Designed for grades Pre-K – 3, this adaptive, game-based learning program helps kids build their literacy skills. The adaptive approach helps tailor the program to meet a diverse set of learners. Free to try and then paid.
  2. Scratch – An MIT project from their Lifelong Kindergarten Group helps teach kids, K through 12, math and programming skills through creative expression. Coding is fast becoming a versatile skill that can help kids in developing their math skillset. Scratch is free.
  3. Goalbook Toolkit – Designed for grades Pre-K – 12, Goalbook Toolkit is a site designed to help teachers with Common Core State Standards (CCSS) learning goals and interventions in English and math subject areas. Free to try and then paid.
  4. MinecraftEdu – Designed for grades 1 – 12, MinecraftEdu puts the power of Minecraft into a teacher-directed virtual learning environment. If you have kids at home, you know Minecraft empowers them to build collaboration and creativity and MinecraftEdu builds on that with teacher empowerment. This is a paid program.
  5. DIY – For grades 3 – 10, DIY helps kids develop critical thinking and creativity skills by using everyday materials to complete challenges in various skill areas. A great program for engaging kids in problem-solving using their creativity. DIY is free.
  6. Duolingo – For grades 6 – 12, Duolingo is a game-based language-learning tool that covers a number of languages. It individualizes the pace of learning for each student and is interactive to allow them to personalize their experience. Duolingo is free.
  7. DreamBox Learning Math – For grades K – 6, this game is an interactive, adaptive and self-paced program that helps build essential mathematics skills. This is a paid program.

 

Whether you or your school takes advantage of programs and games like those above or uses MAP assessment data, personalized learning and instruction is something that all educators can utilize as part of their daily plan. We partner with many companies, organizations and educational leaders to help support teachers, students, parents and administrators with content providers who offer tools to enhance the depth of kids’ educations. Here’s a complete list of our Instructional Content Providers: https://www.nwea.org/business-alliances/.

 

Reach out and share your ideas on our Facebook page or via Twitter @NWEA.

 


About the Author

 

Joi Converse brings passion, creativity and a desire to communicate effectively to her role as the Interactive Marketing Coordinator at NWEA. In what seems like another lifetime, she “herded cats” on various college campuses while also proactively growing her technological skills in SQL, HTML and web content management systems. Joi received her Masters in Higher Education Leadership from the University of Nevada, Las Vegas and a Bachelor of Arts in Psychology from Whitworth University. When not at work, she enjoys exploring the beauty of the Northwest with her family. However, Joi still has not found mountains that can compare to those in her home state of Alaska.

As you read, consider the following question

  • How are you using assessment data to customize learning for students daily?

After you read through the blog, continue the conversation in the comment section below.


Originally posted on Teach.Learn.Grow on April 1, 2015 by Joi Converse

 

NWEA’s Jean Fleming recently had a guest blog post at Getting Smart – The Future of Personalized Learning is Now – which highlighted how personalized, differentiated instruction, informed by a variety of meaningful assessment data, is making a real impact on student learning.

 

As Jean notes in her post:

 

Getting Smart and the Next Generation Learning Challenges (NGLC) recently released a report profiling 14 schools across the country breaking through the traditional model of teaching and learning by providing personalized learning experiences that are proven to enhance student learning. The schools profiled are experiencing success in part by setting high expectations for college readiness and tailoring instruction to each student’s individual needs and measuring growth through the use of the Measures of Academic Progress Assessment, or MAP test.

 

When Jean visited a Teach to One school in Brooklyn a while back, she saw firsthand how a personalized approach to math broke down literal and figurative walls, resulting in customized assignments tailored to students on a daily basis. What if you opened up the classroom experience, created a team of teachers and a block of time for different instructional modalities? And what if the data triage needed to personalize instruction were done behind the scenes, so that students could show up, look at their placement for the day, and get to work? That’s the value that can come from meaningful assessment data.

 

Jean closes her post:

 

According to a RAND study released in November, personalized learning is advancing academic gains in classrooms. It is the wave of the future when it comes to how teachers will teach and students will learn. And, despite ongoing concerns about testing in schools, it will continue to grow in classrooms throughout the country as educators, administrators, students and parents begin to see the value of targeted learning through the use of meaningful assessment data.

 

If your district uses MAP like the schools in the study, you already have access to powerful personalized instructional resources linked to student scores from the assessment.

 

MAP data helps to define individual student learning paths and is directly actionable in a few important ways, and at no added cost:

 

  • Identify what a student needs help with, or is ready to be challenged on, using the recently enhanced interactive Learning Continuum.
  • Use an individual student’s RIT score in math from MAP to identify standards-aligned instructional resources from Khan Academy.
  • Access the RIT to Resource portal, which is powered by Gooru and enables teachers and parents to find a wealth of standards-aligned Open Educational Resources (OERs).

 

No matter where your students are performing, assessment information can be a critical tool in pinpointing students’ unique needs, tailoring instruction – and thereby expanding the achievement possibilities for all your students.

 


About the Author

Joi Converse brings passion, creativity and a desire to communicate effectively to her role as the Interactive Marketing Coordinator at NWEA. In what seems like another lifetime, she “herded cats” on various college campuses while also proactively growing her technological skills in SQL, HTML and web content management systems. Joi received her Masters in Higher Education Leadership from the University of Nevada, Las Vegas and a Bachelor of Arts in Psychology from Whitworth University. When not at work, she enjoys exploring the beauty of the Northwest with her family. However, Joi still has not found mountains that can compare to those in her home state of Alaska.

As you read, consider the following question

  • Do you agree that transparent learning goals and standards and student ownership are essential in the effectiveness of personalizing learning? Are there other aspects you find essential?

After you read through the blog, continue the conversation in the comment section below.


Originally posted on Teach.Learn.Grow by Jean Fleming on November 24, 2014

 

Findings from an ongoing study released by the Bill & Melinda Gates Foundation provide compelling evidence that when teachers personalize learning experiences based on students’ unique needs, great things can happen. The study, conducted by the RAND Corporation, found that students whose teachers used assessment data to customize their learning improved significantly in reading and math compared with students at similar schools not employing personalized instructional approaches.

 

Personalized instruction, the well-studied and sometimes conflated practice of tailoring learning to meet each student’s strengths, needs, and interests, helps to create a classroom environment that engages and accelerates learning for all students. The study also suggests that this approach can help educators close persistent achievement gaps.

 

Two aspects of personalized approaches were shown to be essential to their effectiveness: transparent learning goals and standards and student ownership. The knowledge and skills students must learn as they move through school must be clear, and students should participate in their learning by partnering with teachers to set goals and track progress.

 

Although results varied across the 23 schools included in the study, which served 5,000 mostly low-income K-12 students in urban charter schools, two-thirds of the schools found that personalized learning had significant positive effects on students’ math and reading scores as measured on the Measures of Academic Progress® (MAP®) assessment. Perhaps the most exciting finding is the impact on struggling students. Personalized instructional approaches helped to lift students who started the school year performing below the national average to finish the year close to or above it.

 

How can you use this evidence to ramp up your own practice, you ask?

 

If your district uses MAP like the schools in the study, you already have access to powerful personalized instructional resources linked to student scores from the assessment.

 

MAP data helps to define individual student learning paths and is directly actionable in a few important ways, and at no added cost:

 

  • Identify what a student needs help with, or is ready to be challenged on, using the recently enhanced interactive Learning Continuum.
  • Use an individual student’s RIT score in math from MAP to identify standards-aligned instructional resources from Khan Academy.
  • Access the RIT to Resource portal, which is powered by Gooru and enables teachers and parents to find a wealth of standards-aligned Open Educational Resources (OERs).

 

No matter where your students are performing, assessment information can be a critical tool in pinpointing students’ unique needs, tailoring instruction – and thereby expanding the achievement possibilities for all your students.

 

Learn more about how other districts are using personalized learning and using data to inform instructional decisions. Stay tuned for more on personalized learning as we follow the study.

 


About the Author

Jean Fleming brings over 25 years of experience in education to her role at NWEA. She began as a middle school reading teacher in the Berkeley, California public schools. There, she developed a curriculum focused on engaging students in career explorations to foster a love of reading. She served as lead instructional designer for an online reading curriculum, held senior editorial positions with Technology & Learning magazine and Scholastic.com, and managed global communications for the Intel Foundation’s professional development program.

As you read, consider the question that Christina poses: Is it learning if it's not personal?

After you read through the blog, continue the conversation in the comments section below.


Blog originally posted on Teach. Learn. Grow. on October 20, 2016

By Christina Hunter

 

Personalized Learning. What images or thoughts surface when you read those words? When I heard “personalized learning,” my thoughts went to Derek, a long-ago student. A few years after having the honor of teaching and learning with Derek, I received a postcard from him in Hawaii. I had forgotten that he had chosen to study rock during one of our units until I read his postcard. The action of sending the postcard and his recollection of his study suggested to me that the learning was personal for him. He wrote,

 

“Hey Mrs. Hunter! I saw igneous rock! I saw magma! Did you know there are different kinds of volcanic rock?”

 

Of course, as a teacher, my thoughts go to what else I might have done to support Derek. Was he ready to learn about the different kinds of volcanic rock when he was with me? Naturally, the NWEA voice in my head says, “If only I had the MAP assessment at that time…”

 

So what does personalized learning mean? According to The Glossary of Education Reform by Great Schools Partnership, “The term personalized learning, or personalization, refers to a diverse variety of educational programs, learning experiences, instructional approaches, and academic-support strategies that are intended to address the distinct learning needs, interests, aspirations, or cultural backgrounds of individual students.” It has come to the forefront of education with the backing of foundations such as the Bill & Melinda Gates Foundation, the Eli and Edythe Broad Foundation, Charter School Growth Fund, EDUCAUSE, and the Next Generation Learning Challenges (NGLC). In Personalized Learning: What It Really Is and Why It Really Matters, the authors suggest, “The semantics of the title set us up for yet another ‘war on definitions.’”

 

Linking thoughts about what I know and read about effective teaching, the goal of personalized learning, and Derek, I continued on my quest for clarity. I did a bit more research and quite a few more Google searches. According to “Personalized Learning: A Working Definition,” in EdWeek (published 10/22/14), there is a four-part working definition of the attributes of personalized learning:

 

  • Competency Based Progression (Continuous assessment against clearly defined goals)
  • Flexible Learning Environments (A learning environment driven by student needs)
  • Personal Learning Paths (learning path based on progress, motivations and goals)
  • Learning Profile (individual strengths/needs, motivations and goals)

 

Michael Feldstein and Phil Hill suggest that we think about personalized learning as a practice rather than a product. In addition, they state, “Technology then becomes an enabler for increasing meaningful personal contact.” They call out three main technology-enabled strategies for lowering classroom barriers to one-on-one teacher/student (and student/student) interactions.

 

  1. Moving content broadcast out of the classroom (flipping the classroom; sharing lectures through recordings assigned as homework).
  2. Turning homework time into contact time (utilizing digital products to make visible student thinking/work and trends in student work).
  3. Providing tutoring (using adaptive learning software to support students in areas of need that don’t require a human instructor).

 

In the Glossary of Education Reform, the Great Schools Partnership reminds us that “…personalized learning, as it is typically designed and implemented in K-12 public schools, can differ significantly from the forms of ‘personalized learning’ being offered and promoted by virtual schools and online learning programs.” I admit to taking a deep cleansing breath after reading that, and smiling when I found “Through the Student’s Eyes.” It stated that “although this more comprehensive approach to personalized learning may be facilitated by technology, its tenets may be applied without technology or, more likely, in a blended context.” I was quite pleased with the clear delineation of personalized learning and products that may help to facilitate it! I recalled the suggestion of Feldstein and Hill as fitting: “Think about personalized learning as a practice rather than a product.” Teaching is indeed a Practice! The practice of effective teaching encompasses curriculum, assessment, and technology. Knowing where a student is in his or her learning before setting off on a learning journey is essential.

 

I went back to thoughts of Derek. We did indeed have continuous assessment. We were practicing formative assessment minute-to-minute and checking progress daily. Derek had clearly defined goals based on his individual learning needs. We used pre-assessments to determine where Derek was in his learning, and then Derek, my teaching partner and I developed goals and worked together to develop a path to support him in reaching his goals. Our learning environment was student centric. It extended beyond our two classrooms into the halls, media center, and school yard. Although we didn’t have one-to-one computers, we did have a few that students used for research and the development of products.  Did we practice personalized learning? We practiced the art of teaching, which requires personalization.

 

While I started with the question, “What is personalized learning?” I end with the question, “Is it learning, if it’s not personal?”

 

To learn more about formative assessment, check out our formative assessment PD offering or our previous post on four key formative assessment practices that form the foundation of successful implementation.

 

 

http://edglossary.org/personalized-learning/

 

http://www.edweek.org/ew/collections/personalized-learning-special-report-2014/a-working-definition.html

 

http://www.centeril.org/publications/2013_09_Through_the_Eyes.pdf

 

http://er.educause.edu/articles/2016/3/personalized-learning-what-it-really-is-and-why-it-really-matters

 


About the Author

In nearly 20 years of education, Christina Hunter has kept one thing front and center: a passion for student success. She has worked across all levels from primary grades to college, with a continued focus on doing what's best for children and their families. She is a dedicated professional who receives the ultimate joy in watching students and educators not only discover their individual strengths and areas for growth, but also take action on their discoveries. With a background in assessment literacy, differentiated instruction, IB, project based learning and data-driven decision making, her experiences have provided the opportunity to consult, coach, present, and facilitate in schools, districts, and conferences across the country and internationally. Currently, as the Senior Manager for Professional Development at NWEA, she is honored to support the work of more than 50 consultants.

As you read, consider the following question:

  • How are you using assessment data to personalize and differentiate learning for your students?

After you read through the blog, continue the conversation in the comments section below.


Blog originally posted on Teach. Learn. Grow. on December 15, 2015

By Jean Fleming

 

Personalized instruction, the practice of tailoring learning to meet each student’s strengths, needs, and interests, helps to create an environment that engages and accelerates learning for all students. I saw this first hand when I visited a Teach to One school in Brooklyn some time ago; a personalized approach to math broke down literal and figurative walls, resulting in a truly responsive educational experience.

 

Earlier this year, I authored a guest post at Getting Smart titled – The Future of Personalized Learning is Now – in which I highlighted how meaningful assessment data from the computer adaptive MAP test can be used to personalize and differentiate learning. So I was delighted to see a recent blog at Education Elements – On Storytelling with Data and the Power of Personalized Learning – where Nikki Mitchell dove deep into MAP assessment data to support my claim.

 

In fact, Nikki highlighted some findings from research Education Elements conducted over the 2014-2015 school year and created a compelling report – The Positive Power of Personalized Learning. Using NWEA Norms from the MAP assessment results, they provided some powerful insights:

 

Personalized learning impacts student achievement on nationally normed tests (MAP): students in personalized learning classrooms showed 135 percent growth in their reading exam and 119 percent growth in math.

 

As Nikki put it in her post:

 

Students’ growth over the course of the school year, compared to national norms, was the same level of progress you would expect if they received an extra third of a year of instruction in reading, and an extra fifth of a year of instruction in math.

 

These are some powerful arguments for introducing personalized learning, and the meaningful assessment in MAP helps make that case. MAP data helps to define individual student learning paths and is directly actionable in a few important ways, and at no added cost:

 

  • Identify what a student needs help with, or is ready to be challenged on, using the recently enhanced interactive Learning Continuum.
  • Use an individual student’s RIT score in math from MAP to identify standards-aligned instructional resources from Khan Academy.
  • Access the RIT to Resource portal, which is powered by Gooru and enables teachers and parents to find a wealth of standards-aligned Open Educational Resources (OERs).

 

No matter where your students are performing, assessment information can be a critical tool in pinpointing students’ unique needs, tailoring instruction – and thereby expanding the achievement possibilities for all your students.

 


About the Author

 

Jean Fleming brings over 25 years of experience in education to her role at NWEA. She began as a middle school reading teacher in the Berkeley, California public schools. There, she developed a curriculum focused on engaging students in career explorations to foster a love of reading. She served as lead instructional designer for an online reading curriculum, held senior editorial positions with Technology & Learning magazine and Scholastic.com, and managed global communications for the Intel Foundation’s professional development program.

As you read, consider the following question:

  • What are you doing to ensure equity and accessibility for all students within assessment?

 

After you read through the blog, continue the conversation in the comments section below.


Blog originally posted on Teach. Learn. Grow. on May 10, 2016

By Elizabeth Barker

 

So what do a pair of exercise balls, an old lectern, and tiny kindergarten-size chairs have to do with second grade education? For me they turned out to be the keys I needed to create an accessible environment for my students. When it came to learning mathematics, one student needed Unifix cubes to support patterning, another needed to use the TouchMath method, while the other three students needed time for repetition of the concept being taught. It was important to create a flexible instructional space for my students that met their needs, made learning comfortable, and nurtured equity and fairness for all students.

 

When I started my journey with NWEA, that passion and drive to create an accessible classroom was intensely focused on a new goal. I was now asking what providing equity and accessibility looks like from an assessment perspective. Determined to learn more and support students to the best of our ability, we asked the Center for Applied Special Technology (CAST), the experts on Universal Design for Learning (UDL), and the National Center for Accessible Media (NCAM) at WGBH to train our test and item writers on the framework of UDL and accessibility. We learned a tremendous amount about how to emphasize the importance of diversity while removing barriers and addressing student differences right from the start. One way we are making MAP more accessible is by applying alternative text, or alt-tags, to the images within our test items. Alt-tags are language descriptions for pictures, graphs, charts and other images. This provides access to information that might otherwise be unavailable to students who are blind or have a visual disability.

 

Removing barriers is certainly one way to address equity. Another is incorporating more culturally rich passages into our assessments. As a classroom teacher, I found that providing materials that spoke to the students, in which they could see themselves, brought comfort and helped to remove anxiety. Removing anxiety is vital in an assessment situation. However, culturally rich materials can also be very sensitive, and there is a difference between discussing rich, sensitive materials in the classroom and including them on an assessment. Therefore, we have our reading passages reviewed by an external panel of experts and teachers with backgrounds in multicultural education and disabilities. This gives us the ability to include culturally deep passages while avoiding overly sensitive topics for assessment purposes.

 

I came to NWEA because my values and goals for my students were the same as NWEA’s mission: partnering to help all kids learn. Creating tests and items with UDL in mind from the beginning, removing barriers by adding alt-tags, and incorporating more culturally rich materials are all steps NWEA is taking to improve equity for all students. The journey toward equity and accessibility will not stop there; more steps need to be taken.

 

To learn more about these ongoing efforts please visit our Accommodation & Accessibility webpage.


About the Author

Elizabeth brings years of personal and academic experience to her position here at NWEA. She began her career in education as a middle school and elementary special education teacher in Michigan, working specifically with students with emotional and behavioral needs. She continued her teaching career while she earned a master’s degree in special education from the University of Colorado, Denver. Shortly after, Elizabeth went on to pursue her doctoral degree at the University of Oregon with an emphasis on growth trajectories for students with learning disabilities in mathematics and reading comprehension. She has served as a lead in her school districts by teaching courses on how to collect and use data to inform instruction.

As you read the blog, consider:

  • What questions do you have about Standard Error of Measurement?

After you read the blog, continue the conversation in the comments section below.


Blog originally posted on Teach. Learn. Grow. on December 3, 2015

By Dr. Nate Jensen

 

If you want to track student progress over time, it’s critical to use an assessment that provides you with accurate estimates of student achievement— assessments with a high level of precision. When we refer to measures of precision, we are referencing something known as the Standard Error of Measurement (SEM).

Before we define SEM, it’s important to remember that all test scores are estimates of a student’s true score. That is, irrespective of the test being used, all observed scores include some measurement error, so we can never really know a student’s actual achievement level (his or her true score). But we can estimate the range in which we think a student’s true score likely falls; in general the smaller the range, the greater the precision of the assessment.

SEM, put in simple terms, is a measure of precision of the assessment—the smaller the SEM, the more precise the measurement capacity of the instrument. Consequently, smaller standard errors translate to more sensitive measurements of student progress.

On MAP assessments, student RIT scores are always reported with an associated SEM, with the SEM often presented as a range of scores around a student’s observed RIT score. On some reports, it looks something like this:

Student Score Range: 185-188-191

So what information does this range of scores provide? First, the middle number tells us that a RIT score of 188 is the best estimate of this student’s current achievement level. It also tells us that the SEM associated with this student’s score is approximately 3 RIT—this is why the range around the student’s RIT score extends from 185 (188 – 3) to 191 (188 + 3). A SEM of 3 RIT points is consistent with typical SEMs on the MAP tests (which tend to be approximately 3 RIT for all students).

The observed score and its associated SEM can be used to construct a “confidence interval” to any desired degree of certainty. For example, a range of ± 1 SEM around the observed score (which, in the case above, was a range from 185 to 191) is the range within which there is a 68% chance that a student’s true score lies, with 188 representing the most likely estimate of this student’s score. Intuitively, if we specified a larger range around the observed score—for example, ± 2 SEM, or approximately ± 6 RIT—we would be much more confident that the range encompassed the student’s true score, as this range corresponds to a 95% confidence interval.
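To make the arithmetic concrete, here is a minimal Python sketch (not part of the original post, and not NWEA’s implementation) that builds these ranges from an observed score and its SEM. A multiplier of 1 corresponds to the roughly 68% interval described above, and 1.96 to the roughly 95% interval.

```python
def score_range(observed_score, sem, z=1.0):
    """Return the (low, high) range around an observed score.

    z = 1.0 gives roughly a 68% confidence interval (+/- 1 SEM);
    z = 1.96 gives roughly a 95% confidence interval (about +/- 2 SEM).
    """
    return observed_score - z * sem, observed_score + z * sem

# The example from the post: an observed RIT of 188 with an SEM of 3.
print(score_range(188, 3))        # (185.0, 191.0) -> the 185-188-191 range
print(score_range(188, 3, 1.96))  # about (182, 194) -> roughly +/- 6 RIT
```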

So, to this point we’ve learned that smaller SEMs mean greater precision in the estimation of student achievement and, conversely, that the larger the SEM, the less sensitive our ability to detect changes in student achievement.

Why is this fact important to educators?

If we want to measure the improvement of students over time, it’s important that the assessment used be designed with this intent in mind. And to do this, the assessment must measure all kids with similar precision, whether they are on, above, or below grade level. Recall that a larger SEM means less precision and less capacity to accurately measure change over time, so if SEMs are larger for high- and low-performing students, their scores will be far less informative than those of students on grade level. Educators should consider the magnitude of SEMs for students across the achievement distribution to ensure that the information they are using to make educational decisions is highly accurate for all students, regardless of their achievement level.

Figure: Grade 5 Reading SEM (SEM plotted against student scale scores)

An example of how SEMs increase in magnitude for students above or below grade level is shown in the figure, with the size of the SEMs on an older version of the Florida 5th grade reading test plotted on the vertical axis against student scale scores on the horizontal axis. What is apparent from this figure is that test scores for low- and high-achieving students show a tremendous amount of imprecision. In this example, the SEMs for students on or near grade level (scale scores of approximately 300) are between 10 and 15 points, but they increase significantly the further students are from grade level. This pattern is fairly common on fixed-form assessments, and the end result is that it is very difficult to measure changes in performance for students at the low and high ends of the achievement distribution. Put simply, this degree of imprecision limits the ability of educators to say with any certainty what these students’ achievement levels actually are and how their performance has changed over time.
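To see why a fixed form loses precision at the extremes, consider a simple item response theory sketch. The Python example below is purely illustrative (a Rasch-model calculation with made-up item difficulties, not the model behind the Florida test or MAP): the SEM at a given ability level is one over the square root of the test information, and information is highest where the form’s item difficulties are concentrated.

```python
import math

def fixed_form_sem(theta, item_difficulties):
    """SEM at ability level theta for a fixed form under the Rasch model.

    Test information is the sum of p * (1 - p) over items, where p is the
    probability of a correct response; SEM = 1 / sqrt(information).
    """
    info = 0.0
    for b in item_difficulties:
        p = 1.0 / (1.0 + math.exp(-(theta - b)))
        info += p * (1.0 - p)
    return 1.0 / math.sqrt(info)

# A hypothetical 30-item form with difficulties clustered near grade level (0).
form = [-0.5, -0.25, 0.0, 0.0, 0.25, 0.5] * 5

for theta in (-3, -1, 0, 1, 3):
    print(f"ability {theta:+d}: SEM = {fixed_form_sem(theta, form):.2f}")
# SEM is smallest near the middle of the form and grows sharply for students
# whose ability is far above or below the items they were given.
```

An adaptive test, by contrast, selects items near each student’s current performance, which keeps test information high and the SEM roughly constant across the achievement distribution.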

Of course, the standard error of measurement isn’t the only factor that impacts the accuracy of the test. Accuracy is also impacted by the quality of testing conditions and the energy and motivation that students bring to a test. In fact, an unexpectedly low test score is more likely to be caused by poor conditions or low student motivation than to be explained by a problem with the testing instrument. To ensure an accurate estimate of student achievement, it’s important to use a sound assessment, administer assessments under conditions conducive to high test performance, and have students ready and motivated to perform.


About the Author

Dr. Nate Jensen

Nate Jensen is a Research Scientist at NWEA, where he specializes in the use of student testing data for accountability purposes. He has provided consultation and support to teachers, administrators, and policymakers across the country, to help establish best practices around using student achievement and growth data in accountability systems. Nate holds a Ph.D. in Counselor Education from the University of Arkansas, an M.A. in Counseling Psychology from Framingham State University and a B.S. in Psychology from South Dakota State University.

As you read the blog, consider the question posed by the author:

  • Consensus suggests that timeliness of assessment data matters, but what do you think?

 

After you read the blog, continue the conversation in the comments section below.


Blog originally posted on Teach. Learn. Grow. on February 18, 2014

By Kathy Dyer

 


Assessment is most powerful when it is used to help students learn and teachers improve their practice. For me, that’s what the “action” in actionable is. For teachers and students to be able to identify and take actions, there are a few features of assessment (and its accompanying data) to keep in mind:

+ Timeliness of the data
+ Understandability of the data
+ Ability to apply the data

The timeliness of data helps determine just how actionable it is in informing teaching choices and student learning. Data can be timely in two senses: a) available in the moment, and b) returned quickly. Sometimes in the assessment world, the terms short-, medium-, and long-cycle are used to describe examples of timely data.

Data in Action

Let’s take two scenarios – formative (short-cycle) assessment and interim (long-cycle) assessment.

The interim assessment, given at three intervals through the year – fall, winter and spring – provides multiple data points which can be used to look at student growth. Using data from the fall assessment, teachers can work together in a PLC to understand where their students are starting. What do they understand? What do they need additional support in? Then they can develop learning plans and set goals with their students.

The winter test provides a check on how students are progressing toward those goals, as this interview with Alex McPherson, a teacher at KIPP Charlotte in North Carolina, makes clear.

“In some ways, winter is almost more important than spring. This is the one chance that we have to assess and adjust. Spring is the end game, but we can use winter as a temperature gauge to see how we’re doing. We found that even with the switch to Common Core state assessments, MAP really did predict passing results. If you value spring testing, you have to value winter testing. If you’re going to use it instructionally, then the winter test is super important.”

Interim assessment data is useful to support more than just goal setting and monitoring progress. This data also informs plans for learning paths, flexible grouping and differentiating instruction.

With formative assessment, used within instruction, both teachers and students have the opportunity to make immediate adjustments to instruction and learning tactics. If a teacher uses individual whiteboards (a form of all-student response system) to see how students would graph information from a science problem, within 30 seconds he can determine which of three courses of action (and there may be more) to take: 1) everyone got it, so the class can move on; 2) some students got it and some did not, so small groups are formed; or 3) he can ask clarifying questions of students who got it and students who did not, to understand the thinking behind their answers, and then decide what to do next.

Does timeliness matter?

According to a research report NWEA commissioned from Grunwald Associates, it does. For 67 percent of parents, assessment results begin losing their relevance within one month after assessments are administered. Among teachers and administrators, 67 percent “completely” or “somewhat” agree that formative and interim assessment results deliver timely data, compared to 32 percent for summative assessments.

Consensus suggests that timeliness of assessment data matters, but what do you think?


About the Author

Kathy Dyer

Kathy Dyer is a Sr. Professional Development Content Specialist for NWEA, designing and developing learning opportunities for partners and internal staff. Formerly a Professional Development Consultant for NWEA, she coached teachers and school leadership and provided professional development focused on assessment, data, and leadership. In a career that includes 20 years in the education field, she has also served as a district achievement coordinator, principal, and classroom teacher. She received her master’s in Educational Leadership from the University of Colorado Denver. Follow her on Twitter at @kdyer13.