
Welcome New Members

September 2016

As you read the blog, consider:

  • What questions do you have about Standard Error of Measurement?

After you read the blog, continue the conversation in the comments section below.


Blog originally posted on Teach. Learn. Grow. on December 3, 2015

By Dr. Nate Jensen

 

If you want to track student progress over time, it’s critical to use assessments that provide you with accurate estimates of student achievement—assessments with a high level of precision. When we refer to measures of precision, we are referring to something known as the Standard Error of Measurement (SEM).

Before we define SEM, it’s important to remember that all test scores are estimates of a student’s true score. That is, irrespective of the test being used, all observed scores include some measurement error, so we can never really know a student’s actual achievement level (his or her true score). But we can estimate the range in which we think a student’s true score likely falls; in general the smaller the range, the greater the precision of the assessment.

SEM, put in simple terms, is a measure of precision of the assessment—the smaller the SEM, the more precise the measurement capacity of the instrument. Consequently, smaller standard errors translate to more sensitive measurements of student progress.

On MAP assessments, student RIT scores are always reported with an associated SEM, with the SEM often presented as a range of scores around a student’s observed RIT score. On some reports, it looks something like this:

Student Score Range: 185-188-191

So what information does this range of scores provide? First, the middle number tells us that a RIT score of 188 is the best estimate of this student’s current achievement level. It also tells us that the SEM associated with this student’s score is approximately 3 RIT—this is why the range around the student’s RIT score extends from 185 (188 – 3) to 191 (188 + 3). A SEM of 3 RIT points is consistent with typical SEMs on the MAP tests (which tend to be approximately 3 RIT for all students).

The observed score and its associated SEM can be used to construct a “confidence interval” to any desired degree of certainty. For example, a range of ± 1 SEM around the observed score (which, in the case above, was a range from 185 to 191) is the range within which there is a 68% chance that a student’s true score lies, with 188 representing the most likely estimate of this student’s score. Intuitively, if we specified a larger range around the observed score—for example, ± 2 SEM, or approximately ± 6 RIT—we would be much more confident that the range encompassed the student’s true score, as this range corresponds to a 95% confidence interval.
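To make the arithmetic concrete, here is a minimal sketch of the calculation described above. The function name and the printed ranges are purely illustrative (they simply mirror the 188 ± 3 example) and are not NWEA’s reporting code.

```python
# A minimal sketch of the confidence-interval arithmetic described above.
# The function name and values are illustrative, not NWEA's reporting code.

def score_range(observed_rit, sem, z=1.0):
    """Return (low, high) bounds spanning z standard errors around the observed score.

    z = 1.0 corresponds to roughly a 68% confidence interval,
    z = 2.0 to roughly a 95% confidence interval.
    """
    margin = z * sem
    return observed_rit - margin, observed_rit + margin

# The example from the blog: an observed RIT score of 188 with an SEM of about 3.
low, high = score_range(188, 3)            # (185.0, 191.0) -- roughly a 68% interval
low95, high95 = score_range(188, 3, z=2)   # (182.0, 194.0) -- roughly a 95% interval
print(f"68% range: {low:.0f}-188-{high:.0f}")      # 68% range: 185-188-191
print(f"95% range: {low95:.0f}-188-{high95:.0f}")  # 95% range: 182-188-194
```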

So, to this point we’ve learned that smaller SEMs are related to greater precision in the estimation of student achievement, and, conversely, that the larger the SEM, the less sensitive is our ability to detect changes in student achievement.

Why is this fact important to educators?

If we want to measure the improvement of students over time, it’s important that the assessment used be designed with this intent in mind. To do this, the assessment must measure all kids with similar precision, whether they are on, above, or below grade level. Recall that a larger SEM means less precision and less capacity to accurately measure change over time, so if SEMs are larger for high- and low-performing students, those students’ scores are going to be far less informative than the scores of students who are on grade level. Educators should consider the magnitude of SEMs for students across the achievement distribution to ensure that the information they are using to make educational decisions is highly accurate for all students, regardless of their achievement level.

[Figure: Grade 5 Reading SEM – SEM size plotted against student scale scores]

An example of how SEMs increase in magnitude for students above or below grade level is shown in the figure above: the size of the SEMs on an older version of the Florida 5th grade reading test is plotted on the vertical axis against student scale scores on the horizontal axis. What is apparent from this figure is that test scores for low- and high-achieving students show a tremendous amount of imprecision. In this example, the SEMs for students on or near grade level (scale scores of approximately 300) are between 10 and 15 points, but increase significantly the further students are from grade level. This pattern is fairly common on fixed-form assessments, with the end result being that it is very difficult to measure changes in performance for students at the low and high ends of the achievement distribution. Put simply, this degree of imprecision limits the ability of educators to say with any certainty what the achievement level of these students actually is and how their performance has changed over time.

Of course, the standard error of measurement isn’t the only factor that impacts the accuracy of the test. Accuracy is also impacted by the quality of testing conditions and the energy and motivation that students bring to a test. In fact, an unexpectedly low test score is more likely to be caused by poor conditions or low student motivation than to be explained by a problem with the testing instrument. To ensure an accurate estimate of student achievement, it’s important to use a sound assessment, administer assessments under conditions conducive to high test performance, and have students ready and motivated to perform.


About the Author

Dr. Nate Jensen

Nate Jensen is a Research Scientist at NWEA, where he specializes in the use of student testing data for accountability purposes. He has provided consultation and support to teachers, administrators, and policymakers across the country, to help establish best practices around using student achievement and growth data in accountability systems. Nate holds a Ph.D. in Counselor Education from the University of Arkansas, an M.A. in Counseling Psychology from Framingham State University and a B.S. in Psychology from South Dakota State University.

As you read the blog, consider the question posed by the author:

  • Consensus suggests that timeliness of assessment data matters, but what do you think?

 

After you read the blog, continue the conversation in the comments section below.


Blog originally posted on Teach. Learn. Grow. on February 18, 2014

By Kathy Dyer

 


Assessment is most powerful when it is used to help students learn and teachers improve their practice. For me, that’s what the “action” in actionable is. For teachers and students to be able to identify and take actions, there are a few features of assessment (and its accompanying data) to keep in mind:

  • Timeliness of the data
  • Understandability of the data
  • Ability to apply the data

The timeliness of data helps determine just how actionable it is in informing teaching choices and student learning. It can be both a) in the moment, and b) returned quickly. Sometimes in the assessment world, the terms short-, medium- and long-cycle are used to describe examples of timely data.

Data in Action

Let’s take two scenarios – formative (short-cycle) assessment and interim (long-cycle) assessment.

The interim assessment, given at three intervals through the year – fall, winter and spring – provides multiple data points which can be used to look at student growth. Using data from the fall assessment, teachers can work together in a PLC to understand where their students are starting. What do they understand? What do they need additional support in? Then they can develop learning plans and set goals with their students.

The winter test provides a check on how students are progressing toward those goals, as this interview with Alex McPherson, a teacher at KIPP Charlotte, North Carolina, makes clear.

“In some ways, winter is almost more important than spring. This is the one chance that we have to assess and adjust. Spring is the end game, but we can use winter as a temperature gauge to see how we’re doing. We found that even with the switch to Common Core state assessments, MAP really did predict passing results. If you value spring testing, you have to value winter testing. If you’re going to use it instructionally, then the winter test is super important.”

Interim assessment data is useful to support more than just goal setting and monitoring progress. This data also informs plans for learning paths, flexible grouping and differentiating instruction.

With formative assessment, used within instruction, both teachers and students have the opportunity to make immediate adjustments to instruction and learning tactics. If a teacher uses individual whiteboards (a form of all-student response system) to see how students would graph information from a science problem, within 30 seconds he can determine which of three courses of action (and there may be more) to take: 1) everyone got it, so the class can move on; 2) some students got it and some did not, so perhaps small groups are formed; or 3) he can ask clarifying questions of students who got it and students who did not, to understand the thinking behind their answers, and then determine what actions to take.

Does timeliness matter?

According to a research report NWEA commissioned from Grunwald Associates, it does. For parents, assessment results begin losing their relevance within one month after assessments are administered (67%). Among teachers and administrators, 67 percent “completely” or “somewhat” agree that formative and interim assessment results deliver timely data, compared to 32 percent for summative assessments.

Consensus suggests that timeliness of assessment data matters, but what do you think?


About the Author

Kathy Dyer

Kathy Dyer is a Sr. Professional Development Content Specialist for NWEA, designing and developing learning opportunities for partners and internal staff. Formerly a Professional Development Consultant for NWEA, she coached teachers and school leadership and provided professional development focused on assessment, data, and leadership. In a career that includes 20 years in the education field, she has also served as a district achievement coordinator, principal, and classroom teacher. She received her Masters in Educational Leadership from the University of Colorado Denver. Follow her on Twitter at @kdyer13.

As you read, consider the following questions:

  • What do you think of including students in developing rubrics?
  • What are some other ways you can improve classroom collaboration and the assessment process?

 

After you read through the blog, continue the conversation in the comments section below.


Blog originally posted on Teach. Learn. Grow. on May 17, 2016

By Kathy Dyer

 

Involving students in their own learning is a core component of effective formative assessment practice. Empowering student collaboration can help them understand their learning targets, while moving the entire classroom forward. In my role at NWEA, I quite often come across teachers who have some unique stories to share on accomplishing successful student collaboration and formative assessment in general. One such story comes from a science teacher.

 

He really wanted to work on using comments more effectively and put more responsibility for learning back on the students, but he couldn’t figure out how he could comment on 120 upcoming lab reports, return them, and do it all over again. Are there enough hours in a day to do this? So he thought and pondered and schemed and came up with this plan: the students would write a rough draft of the lab report for an experiment they had just finished, the teacher would give them time in the computer lab to finalize and format the reports, and then, together, they would “comment score” them using a comment rubric they had all decided upon.

 

He said that coming up with the rubric was really fun. He began by modifying an ELA rubric that had format, grammar, and content components, and each class came up with its own “Well Done” section and “Needs Improvement” section. The comments were all numbered for ease. Most importantly, they all agreed that allowing others to score their work was really hard to do. Letting someone evaluate their report was nerve wracking, and they promised to respect the feelings of those around them.

 

So, they rough-drafted and typed (the kids who didn’t rough draft their paper had to do so, in class, before typing), printed off reports, and got ready to score. On the Big Scoring Day, each student read and scored three other students’ work. To the teacher’s surprise, the students worked hard and thoughtfully during the process. They also signed the bottom of the paper. When the scored papers were handed back, TOTAL SILENCE reigned while each student read the comments. Very few went back to their seats; most stopped in place, intently reading the rubric and comments. The teacher waited for pandemonium, but it never came. Most were satisfied that the comments were accurate. No one was given a grade, just the opportunity to fix anything that needed fixing.

 

The kids loved it. They got lots and lots of comments, good ideas from seeing other people’s work, and a better idea of how they stacked up compared to others, AND two days to redo it before the teacher scored it.

 

The teacher loved it too. It was the ONLY piece of work that he scored for two and a half weeks. It was worth a lot of points, but the kids rose to this high-stakes challenge. AND, he offered to provide them only feedback comments when they turned it in again, no score. Oooooh . . . GOOD idea, they said.

 

What is the bottom line here? They put more time and effort into their work. Those who didn’t do as well on the report because they fell behind or opted not to participate stuck out like sore thumbs. Some of those kids have vowed that they will be ready with the rough draft next time so they don’t waste time and miss out on Scoring Day. The teacher’s students are now “Focused on Flawless.” He has replaced the “F” of failure with that of FLAWLESS. A student may turn in his or her corrected report as many times as he or she would like during the one week following the scoring session. The first “Flawless” was awarded to a special ed student who did his report six times before declaring it to be flawless. The teacher agreed that it was so, but for one thing: he had forgotten to spell his own name correctly!

 

Was it time consuming? Oh, yes. Did he fall behind on the pacing guide? Yes. Is he going to “do this” again? Yes . . . this week on their graphs. Do they love it? Surprisingly, yes. Are they learning more? Yes. Grades and morale have improved.

 

Does the teacher love it? Yes, YES, though he can’t rid himself of the suspicion that he has done this all wrong. He really wanted to work more on it, modify the rubric and the environment some. It has been the best thing for his teaching since . . . well . . . since he started teaching eight years ago.

 

I love coming across stories from teachers like this one, where core formative assessment components – in this case, student collaboration and the use of learning targets – are applied to classroom instruction to improve student learning. We’ll share more as they become available.

 


About the Author

 


Kathy Dyer is a Sr. Professional Development Content Specialist for NWEA, designing and developing learning opportunities for partners and internal staff. Formerly a Professional Development Consultant for NWEA, she coached teachers and school leadership and provided professional development focused on assessment, data, and leadership. In a career that includes 20 years in the education field, she has also served as a district achievement coordinator, principal, and classroom teacher. She received her Masters in Educational Leadership from the University of Colorado Denver. Follow her on Twitter at @kdyer13.

As you read, consider the following questions posed by the author:

  • What do you think are some keys to making data actionable?
  • If you’re a teacher, how do you make the most out of assessment data in your classroom?

 

After you read through the blog, continue the conversation in the comments by thinking about the questions above.


Blog originally posted on Teach. Learn. Grow. on April 29, 2014

By Kathy Dyer

 

"Over the years, I have seen the phrase “data-driven instruction” become such a driving force in our schools that some principals became data collectors in order to survive the new accountability pressure."

 

These are the words of Lillie Jessie in her blog Data, Data Everywhere but Not a Drop to Drink at ALLTHINGSPLC. And they are likely the sentiments of many teachers who just want to teach and not concern themselves with the assessment data collected every time a student takes a test. But there is value in the assessment data, whether it comes from interim or formative assessment, or even from summative results. It just has to be used correctly.

 

As Lillie mentions in her piece, many principals and administrators use the data for presentations, and many teachers collect the data but perhaps don’t use it correctly. In fact, she points out four data collector ‘types’ in her blog that use the data, but not necessarily to its utmost potential.

 

We’ve talked about three keys to making data actionable – timeliness, understandability, and the ability to apply. Lillie ends her post by talking about the time to apply. Scheduling that time as close as possible to the administration of any interim, benchmark, or summative assessment supports all three keys.

 

  • Provide teachers access to the data as soon as possible after the administration of the assessment. Whether through the assessment system, a data warehouse, SIS or a spreadsheet, get the data to the teachers immediately so the action can begin; time is of the essence!
  • Schedule time for teachers to meet – grade level, content or vertical teams, staff meetings, data teams, PLCs, TLCs, whatever system you use for teacher collaboration. This time should be regular and even habit forming, in fact. In the beginning, teachers will need some basic aspects of data literacy to fully understand what they see in the data and to be able to talk about it in quantifiable language.
  • Provide time for teachers to plan to apply the data. Teachers will have the opportunity to deepen their data literacy as they become more adept at knowing which kinds of data to use for which decision. These habits of regularly looking at data and then acting upon it to advance student learning can be reinforced by dialoguing in a systematic way about the data. A variety of protocols exist to foster these habits, from the data conversation tools used in NWEA’s Coaching Services to Critical Friends protocols for looking at student work.

 

Lillie mentioned, and we all hear, ‘there’s too much testing… just let them teach!’ Testing is critical and of tremendous value if the data are used properly. High-quality assessments can and should empower teachers and improve the teaching and learning process, and beyond the three keys above, there are many resources to help educators get the most from assessment data.

 

Not too long ago, our resident expert Dr. Anne Udall wrote a piece that shared some resources. Head over to her post – 13 Resources for Making the Most of Assessment Data – and check it out.

 

What do you think are some keys to making data actionable? If you’re a teacher, how do you make the most out of assessment data in your classroom?


About the Author

Kathy Dyer is a Sr. Professional Development Content Specialist for NWEA, designing and developing learning opportunities for partners and internal staff. Formerly a Professional Development Consultant for NWEA, she coached teachers and school leadership and provided professional development focused on assessment, data, and leadership. In a career that includes 20 years in the education field, she has also served as a district achievement coordinator, principal, and classroom teacher. She received her Masters in Educational Leadership from the University of Colorado Denver. Follow her on Twitter at @kdyer13.

As you read the blog, consider the question posed by the author:

  • How are you using interim assessment and its data in your school or district?

 

After you read the blog, continue the conversation in the comments section below.


Blog originally posted on Assessment Literacy on October 7, 2015

 

While formative assessment can provide the day-to-day, minute-by-minute data teachers need to make in-the-moment adjustments to teaching, there is still a need for interim assessments. These assessments are taken at intervals throughout the year – every five to nine weeks or so – to evaluate student knowledge and skills relative to a specific set of academic goals. The results are used by principals, school leaders and teachers to inform instruction and decision-making in the classroom and at the school and district level, as well as to measure student growth over time.

 

As Kim Marshall stated in his article – Interim Assessments: Keys to Successful Implementation – in 2006:

The basic argument for interim assessments is actually quite compelling: let’s fix our students’ learning problems during the year, rather than waiting for high-stakes state tests to make summative judgments on us all at the end of the year, because interim assessments can be aggregated and have external referents (projection to standards, norms, scales).

 

A good, balanced model of assessments includes interim assessments, which have three primary purposes and responsibilities:

  1. Provide information to help educators guide instruction for all students in a manner that supports growth and achievement.
  2. Project performance on the state assessment in order to help educators identify students who may need intervention to meet standards.
  3. Provide educators and parents with an accurate measure of the student’s growth over time.

 

When they are properly implemented, interim assessments serve as a time-efficient means of measuring student progress within a general subject area. Typically, interim assessments include 30 to 50 items, take the average student about an hour to complete, and produce a relatively accurate estimate of student performance in a discipline, as well as an estimate of performance in the primary standards within that discipline.

Of course, with any assessment, it’s all about how the data is used. Michael LoCascio, a district administrator in Illinois, shared some common mistakes that educators make when interpreting assessment data. He pointed out several correctable mistakes, including:

 

1. Confusing Correlation with Causation. Educators learn to think fast in the classroom, making split-second decisions on how to adjust instruction appropriately during a lesson. Yet this same ingrained ability to form quick decisions can serve as a distraction when analyzing student performance data. When two seemingly related events occur at the same time, it can be easy to assume that one event caused the other. A rise in student grades may coincide with the start of an after-school homework club. Yet what appears to be a logical relationship may actually be misleading information, and occasionally, when poorly interpreted, can convince schools to perpetuate ineffective practices and programs.

 

2. Failing to Understand the Intricacies of Averaged Data. The mean is calculated by adding together all the scores in a given set and dividing by the number of entries. In short, it is the sum divided by the count. Though easy to calculate and compare, the mean has a few drawbacks: it can be easily skewed by abnormally high or low scores, and it can be strongly influenced by the size of the count itself. Averages based on very small groups have less validity than averages based on large groups. Due to these issues, the resulting average score can be misleading and can create inaccurate descriptions of student performance, in either a positive or negative direction (a brief illustrative sketch follows this list).

 

3. Creating False Connections between Averaged Data and Real Students. Despite its connotation, there are actually very few students who are the living embodiment of averaged data.  Most of our students have instructional needs—both strengths and weaknesses—which vary greatly from the average score. Yet both our informal opinions and our large scale school analysis tend to place a large emphasis on the averaged data. As a result, we tend to create broad conclusions about student learning which do not necessarily reflect the strengths and needs of our real students.
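As a quick illustration of the drawbacks described in point 2 above, here is a minimal sketch using invented scores (not real student data): a single abnormally low score drags the mean of a small group well below what most students earned, while the same outlier barely moves the mean of a larger group and leaves the median nearly untouched.

```python
# Invented scores, for illustration only: one abnormally low score skews the
# mean of a small group far more than the mean of a larger one.
from statistics import mean, median

small_group = [78, 82, 85, 88, 12]                      # five students, one outlier
large_group = small_group + [80, 83, 86, 79, 84,
                             81, 87, 90, 77, 85]        # the same outlier among fifteen

print(mean(small_group))             # 69  -- well below what most students earned
print(median(small_group))           # 82  -- the median is far less affected
print(round(mean(large_group), 1))   # 78.5 -- the same outlier moves a larger group's mean much less
```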

 

In general, if used and executed correctly, interim assessments can play a strong role in moving student learning forward. And together with embedded formative assessment, the two are strong measurement tools for all educators. How are you using interim assessment and interim assessment data in your school or district?

As you read the blog, consider the question posed by the author: What was the quality of the assessment tools that were being utilized for the students receiving RTI support in your study?

 

After you read the blog, continue the conversation in the comments section below.


Blog originally posted on Teach. Learn. Grow. on January 21, 2016

By Virginia "Jenny" Williams

 

Recently I was reading the latest Education Week when I ran across an article discussing Tier 2 interventions for reading in elementary classrooms. The article discussed a study suggesting that first-grade students who were receiving Tier 2 intervention were not only not making significant progress but were actually losing ground. The results of the study were somewhat frustrating, but not surprising to me. For a while now, I have suspected that our tiered system of intervention was wounded and in need of support. So I wanted to unpack where I see the disconnects and suggest some ways to rectify the situation.

 

Content alignment is a necessity for filling instructional gaps.

 

When students are not responding to core instruction that is differentiated, it is often because they have a “gap” in their content knowledge. Educators may identify this as a “lack of background knowledge.” The Center for RTI suggests that quality core instruction that is differentiated ought to meet the needs of most (approximately 80 percent of) students. They recognize that some 10 to 20 percent will need an intervention targeted to fill the “gaps” or misconceptions that have occurred in learning. For these students, something additional is needed to supplement their core instruction. We need to focus on the idea that Tier 2 instruction ought to “fill a gap” that has occurred in instruction. In other words, Tier 2 instruction should supplement and support Tier 1 core instruction instead of replacing it, at least for most students. This means there is a need for alignment between the content addressed in both tiers.

 

Using a strength-based focus builds foundations for new skills that can close learning gaps.

 

According to the RTI center, the purpose of Tier 2 instruction is to focus on a small portion of content that is missing or unclear, not to replace the initial instruction. When we focus on “struggling students,” it is easy to overlook their strengths; as a result, we consistently ask students to work harder and longer on things that are difficult for them, rather than finding an area of strength onto which new skills can be scaffolded. If we look at students’ abilities rather than their deficiencies, we are likely to design instruction that provides successes instead of failures. We are likely to design instruction that fills the “gap” instead of adding to it. But how can we achieve this?

 

Assessment can help identify gaps.

 

A quality assessment “toolbox,” one that contains a variety of tools that provide a comprehensive view of the student, can help improve the efficacy of an RTI program. This toolbox should include assessments that provide information about students’ ability to master grade-level standards, as well as those that give teachers insight into whether students are performing below or above grade-level expectations. These tools need to be able to identify where students are strong and where they need support.

 

Data triangulation is a necessary component of the RTI process and should include summative assessment information.

 

Summative assessments provide the RTI team with knowledge of how a student is performing on state outcome measures that are correlated to grade-level standards. Summative assessments are difficult to use for guiding instruction because they are generally given late in the year and focus only on whether or not the student performed at grade level. These assessments can be used effectively during the triangulation phase of the data-based decision-making process, because they provide clues to the consistency of the student’s performance across environments and assessments.

 

Interim assessments that align to state standards and assess a variety of grade-level content can accurately identify gaps and provide a strong foundation for scaffolding new content.

 

Another type of assessment that is necessary for a quality toolbox is an interim assessment. These assessments identify a student’s current achievement level and their growth over time. Interim assessments help the RTI team determine if there are gaps in learning that need to be addressed. MAP is an interim assessment that provides a comprehensive look at what a student knows and what they are ready to learn, based on state standards but not limited to the student’s grade level. Not limiting questions to the student’s grade level is the key to identifying what the student knows and what they need to learn. When interim assessment goes beyond the student’s current grade level, it has the potential to identify learning gaps that may have occurred – and also areas of strength to build on. Interim assessments that isolate content to a particular grade level cannot identify gaps in learning unless the gap happens to occur within that grade level’s content, so they do not alert the teacher to content that could bridge the gap in the student’s knowledge. Interim assessments that do not provide information beyond current grade-level content cannot fully support quality differentiated instruction, because they do not identify the zone of proximal development that reveals the foundational knowledge needed to scaffold new content.

 

Formative assessment provides information for differentiating instruction on a daily basis.

 

Formative assessment guides day-to-day instruction and is a necessity for a quality assessment toolbox. Formative assessments are questions and tasks that teachers have students do for the purpose of identifying misconceptions and the need for additional instruction on the specific content being taught. Formative assessment can directly support and guide differentiated instruction within the classroom, regardless of the RTI tier, because it is a direct reflection of whether or not the student understands the content being taught. Because formative assessment is directly tied to whether the student understands the content being presented, it can also serve as progress monitoring for interventions at the various tiered levels.

 

Progress monitoring measures need to align to standards and to the intervention.

 

Progress monitoring is mandated for intervention activities devised by RTI teams supporting struggling students, and it is also an important element of a quality assessment toolbox. Progress monitoring is the data collected about a student’s progress toward a specific goal set by the RTI team. It helps direct future goals and interventions that will support the student’s progress. Because progress monitoring is focused on closing gaps in learning, it must begin with a direct correlation to state standards that are grade appropriate for the student. Assessment tools such as Skills Navigator directly link discrete skills to grade-level content strands. Progress monitoring tools that do not directly link to grade-level content standards are much more difficult to interpret and to identify as having a direct impact on closing gaps for students in the RTI process. Progress monitoring tools that are only loosely correlated to grade-level content also have the potential to focus on students’ deficits rather than their strengths when building new skills.

 

So, my response to the Education Week article would be a follow up question: What was the quality of the assessment tools that were being utilized for the students receiving RTI support in your study? To Teach, Learn, Grow readers, I would challenge you to assess your own RTI assessment toolbox to determine the quality and fluidity of your assessment tools.

 


About the Author


Virginia “Jenny” Williams attended Armstrong Atlantic State University where she obtained a Bachelor and Master of Science in speech-language pathology/special education. She also attended Georgia Southern University and received a Doctorate in curriculum studies and educational leadership. Jenny has held a variety of positions within education including speech-language pathologist, lead teacher, literacy coach, assistant special education director, program specialist for a regional education service agency and college professor. Jenny has been responsible for providing professional development to teachers for the past eight years and has done considerable work in guiding teachers through the data analysis process focusing on instructional decision-making. Jenny joined the NWEA family 3 years ago and has qualified to facilitate NWEA Professional Development content.

As you read, consider the following discussion topics:

  • What conversations are you having with parents about assessment?
  • Do you send out surveys to parents for feedback on assessments?
  • How do you involve parents in the process of assessing students?

 

After you read through the blog, continue the conversation in the comments by thinking about the questions above.


Blog originally posted on Teach. Learn. Grow. on July 14, 2016

By Kara Bobowski

 

Over the past five years, NWEA has sponsored three national studies focused on what various stakeholders – including superintendents, principals, teachers, students, and parents – think about assessment. As a parent myself, it’s the parents’ views of assessment that I find the most interesting – and they are more nuanced than you might think.

 

Back in 2012 when we blogged on what parents thought, 68 percent of parents “completely” or “somewhat” agreed that formative and interim assessments provide data about individual student growth and achievement. (*For quick definitions of the different assessment types that were provided to the parents surveyed, see below.) Sixty-six percent agreed that formative and interim assessments help teachers better focus on the content that students need to learn, and 60 percent agreed that these assessments provide teachers with the information needed to pace instruction for each student. Generally speaking, 84 percent of parents found formative assessments “extremely” or “very” useful, 67 percent said the same of interim assessments, and only 44 percent found summative assessments that useful. The data gathered in the 2012 survey suggested that parents prefer a more embedded formative assessment classroom strategy using timely and informative results.

 

Fast forward to 2016 and our latest survey, Make Assessment Work for All Students, and now, 76 percent of parents surveyed value interim assessments and 74 percent value formative assessments. Generally, parents considered multiple assessment types helpful to their child’s learning. Majorities of parents said that classroom tests and quizzes were helpful to themselves (65%), their children (76%) and their children’s teachers (83%). However, only 46% of parents considered state accountability tests to be useful to the audience for whom they are designed – school administrators. The findings highlight the need for more communication and understanding targeted at parents around the purposes of different assessments.

 

While the perceived value of certain types of assessment is growing among parents, there is still a need for better communication of assessment results. Our latest survey showed that more than six in 10 parents say that their child’s teachers rarely (39%) or never (22%) discuss assessment results with them. Interestingly, parents whose children attend large schools and suburban schools are more likely than those with children at small- or medium-sized schools or urban schools to say that teachers never discuss results with them.

 

While there is a need to better communicate assessment results with parents, surprisingly, parents in both surveys felt that students spend an appropriate amount of time on assessment. Controversy over state accountability tests is likely an important influence on the widespread perception that U.S. students are tested too much. Common criticisms of accountability assessments are that they take time that could be better used to meet the specific needs and interests of students and that they detract from teachers’ ability to differentiate instruction. Yet in the latest survey, more than half of parents (52%) say students spend the right amount of time or too little time taking assessments.

 

For more assessment perceptions from parents – along with teachers, students, and school administrators — download the latest survey – Make Assessment Work for All Students: Multiple Measures Matter. And if you are a parent looking to understand more about NWEA’s MAP test specifically, check out our online resources for parents.

 

*By formative assessment, we mean classroom observations, class quizzes and tests, and other practices used by teachers and students during instruction to provide in-the-moment feedback so teachers can adjust accordingly. Interim assessments were defined for parents as assessments administered at different intervals throughout the year to evaluate student knowledge relative to specific goals. Summative assessments were defined as assessments such as state- or district-wide standardized tests that measure grade-level proficiency, and end-of-year subject or course exams. To learn more about assessment types and their purposes, check out AssessmentLiteracy.org.


About the Author

 

Kara brings 15+ years of marketing communications experience to her role as the Senior Manager of Digital Content at NWEA. She is passionate about learning and creating opportunities to share NWEA partner stories on a variety of platforms. For the past year, Kara has been creating content for Assessment Literacy.org, working on the NWEA-Gallup assessment perceptions study, and collaborating with the Task Force on Assessment Education for Teachers. Prior to that, Kara held communications and consulting roles in a variety of industries. She completed her undergraduate work at the University of Notre Dame and earned a master's degree from Northwestern University.

Blog originally posted on Teach. Learn. Grow. on August 9, 2016

 

By Kathy Dyer

 

Using existing educational research, we can begin to piece together how people learn, which can ultimately inform how teachers should teach. Makes sense, right? While there are certainly numerous ways that people take in and process information, research clearly shows that the following three findings are consistent and have strong implications for how we teach and engage students. Before school starts again, teachers should take a few minutes to reacquaint themselves with these key points.

 

1. Students come to the classroom with preconceptions about how the world works. If their initial understanding is not engaged, they may fail to grasp the new concepts that are taught, or they may learn them for purposes of a test, but revert to their preconceptions outside the classroom.

 

Research on early learning suggests that the process of making sense of the world begins at a very young age. Children begin in the preschool years to develop sophisticated understandings – accurate or otherwise – of the world around them (Wellman, 1990). Those initial understandings can have a powerful effect on the integration of new concepts and information. Sometimes those understandings are accurate, providing a foundation for building new knowledge, but sometimes they are inaccurate (Carey and Gelman, 1991). Drawing out and working with existing understandings is important for early learners, as well as learners of all ages.

 

2. To develop competence in an area of inquiry, students must (a) have a deep foundation of factual knowledge, (b) understand facts and ideas in the context of a conceptual framework, and (c) organize knowledge in ways that facilitate retrieval and application.

 

This principle emerges from research that compares the performance of experts and novices and from research on learning and transfer. Experts, regardless of the field, always draw on a richly structured information base; they are not just ‘good thinkers’ or ‘smart people.’ The ability to plan a task, to notice patterns, to generate reasonable arguments and explanations, and to draw analogies to other problems are all more closely intertwined with factual knowledge than was once believed. But knowledge of a large set of disconnected facts is not sufficient. To develop competence in an area of inquiry, students must have opportunities to learn with understanding. Deep understanding of subject matter transforms factual information into usable knowledge. One of the pronounced differences between experts and novices is that experts’ command of concepts shapes their understanding of new information: it allows them to see patterns, relationships, or discrepancies that are not apparent to novices.

 

In most areas in K-12 education, students begin as novices; they will have informal ideas about the subject of study and will vary in the amount of information they have acquired. The enterprise of education can be viewed as moving students in the direction of more formal understanding (to become more expert) of the subject area at hand. This requires a deepening of the information base and the development of a conceptual framework for that subject area, which ultimately helps students organize their expertise around principles that support their understanding.

 

3. A ‘metacognitive’ approach to instruction can help students learn to take control of their own learning by defining learning goals and monitoring their progress in achieving them.

 

Because metacognition – the awareness of one’s own learning – often takes the form of an internal conversation, it can easily be assumed that individuals will develop the internal dialogue on their own. Yet many of the strategies that we use for thinking reflect cultural norms and methods of inquiry (Hutchins, 1995; Brice-Heath, 1981, 1983; Suina and Smolkin, 1994). Research has demonstrated that children can be taught these strategies, including the ability to predict outcomes, explain to oneself in order to improve understanding, note failures to comprehend, activate background knowledge, plan ahead, and apportion time and memory. The model for using the metacognitive strategies is provided initially by the teacher, and students practice and discuss the strategies as they learn to use them. Ultimately, students are able to prompt themselves and monitor their own comprehension without teacher support. The teaching of metacognitive activities must be incorporated into the subject matter that students are learning (White and Frederickson, 1998).

 

Metacognition is critical for activating learners. It connects to the formative assessment process where we ask three questions from a student perspective:

 

  1. Where am I going? What is my learning target or goal? Some of this may be internal and some may be guided by teacher conversation.
  2. Where am I now? Students need to be able to self-assess and monitor their own progress. When they become engaged (#1) and can organize what they are learning (#2), it is easier for them to figure out where they are in relation to the target or goal.
  3. How will I get there? As students develop their expertise, we are talking about both knowledge and expertise in using strategies to support their learning. Reflecting on what they need to do and how they can support themselves (which may include asking the teacher or peers for support) is one way that students use metacognitive strategies.

 

Understanding these three aspects of how people learn can be the difference between learning a procedure and learning with understanding.

 


 


About the Author

 


Kathy Dyer is a Sr. Professional Development Content Specialist for NWEA, designing and developing learning opportunities for partners and internal staff. Formerly a Professional Development Consultant for NWEA, she coached teachers and school leadership and provided professional development focused on assessment, data, and leadership. In a career that includes 20 years in the education field, she has also served as a district achievement coordinator, principal, and classroom teacher. She received her Masters in Educational Leadership from the University of Colorado Denver. Follow her on Twitter at @kdyer13.