Don’t Go on a Wild Goose Chase

There are two typical responses to the release of the 3-8 state test scores. For some districts that are disappointed in their scores, Chicken Little comes to mind. Other districts strut their stuff because their scores are high. A few districts don't pay much attention. Which is the right approach?

It's a political reality that these scores get some public attention (although interest in them seems to be waning, possibly because of all the drama and possibly from fatigue). The thing about these scores, though, is that it is practically impossible to do anything about them.

The biggest reason it is exceedingly difficult to move the scores is that they measure the rate of economic disadvantage in a district more than they measure anything else. The department, in the press release that accompanied the 2013-2014 3-8 scores, stated: “Although there is some correlation between 2014 ELA performance and poverty, there are many examples of higher poverty/higher performance schools.” While there may be some outliers, a close look at the situation suggests that economic circumstances explain almost all of the variation in 3-8 scores. The state did not release its correlation calculation between economic disadvantage and test scores, but, as we usually do, we ran the calculation for our region. As has been the case in the past, the correlations were very strong, ranging from -.90 to -.61, with an average of -.76. The figure illustrates one such relationship in our region. As we all know from our statistics courses, this is a very strong negative relationship; in fact, few variables correlate this strongly. To cut to the chase: most of the differences in 3-8 scores can be explained by the economic circumstances of the community (a.k.a. zip code). Because of this very strong relationship, focusing your data analysis on the state assessments is ill-advised.
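
For readers who want to see what this calculation looks like, it is simply a Pearson correlation computed across districts between the economic-disadvantage rate and the proficiency rate. Here is a minimal sketch in Python; the district names and numbers below are hypothetical, since the actual regional data are not reproduced in this post.

```python
# A minimal sketch of the calculation described above: the Pearson correlation
# between each district's economic-disadvantage rate and its 3-8 proficiency
# rate. The rows below are made-up values for illustration only.

from statistics import correlation  # Pearson correlation, Python 3.10+

# Hypothetical (district, % economically disadvantaged, % proficient) rows
districts = [
    ("District A", 0.62, 0.21),
    ("District B", 0.35, 0.44),
    ("District C", 0.18, 0.58),
    ("District D", 0.71, 0.17),
    ("District E", 0.49, 0.33),
]

disadvantage = [row[1] for row in districts]
proficiency = [row[2] for row in districts]

r = correlation(disadvantage, proficiency)
print(f"Pearson r between disadvantage and proficiency: {r:.2f}")
# With data like the region's, r comes out strongly negative (e.g. around -.76):
# higher poverty rates go with lower proficiency rates.
```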

The second reason that trying to move state test scores by focusing on the state tests is not helpful is a fundamental misunderstanding of the role these assessments play in a balanced assessment system. The state tests are designed for the purpose of accountability; they can provide information about programs at some level, but they are simply not constructed to inform instruction. This means that chasing after the test questions is a bit of a wild goose chase. Some familiarity is good, but focusing on the questions has never proved fruitful and certainly isn't now. In fact, we've specifically been told that questions will be deliberately constructed in different ways in different years of the test.

What else doesn’t work? Here’s a list of frequent practices that aren’t supported by research or experience:

  • Chasing assessment items from state tests
  • AIS that amounts to small groups of students in small rooms
  • RtI that is focused on labeling and sorting rather than Tier 1 interventions
  • Moving from one initiative to another
  • Defense of the status quo

Unfortunately, these ineffective practices do occur in a number of districts. This is bad news. The good news, however, is that we actually do know what works, based on research and experience:

  • Guaranteed and viable curriculum
  • Student focus
  • Common formative interim assessment
  • Teacher collaboration on the right work (this means time)
  • Coaching
  • “Whatever it Takes” culture
  • Long term planning for instruction (and monitoring)

Much of this list of effective practices is organized under the Professional Learning Community umbrella. This means implementing a PLC with fidelity (this post explains).

The answer, then, to the question of what to do about the state scores is known. Hattie, Marzano, and DuFour agree. The question should not be about state test scores. In order to impact state test scores, there are four questions that should be asked:

  1. What do we expect our students to learn? (This means a guaranteed and viable curriculum)
  2. How will we know they are learning? (This is about common formative interim assessments, teacher collaboration on the right work, and a student focus)
  3. How will we respond when they don’t learn? (This connects to teacher collaboration on the right work (this means time), coaching, and a “Whatever it Takes” culture)
  4. How will we respond if they can already do it? (This, too, connects to teacher collaboration on the right work (this means time), coaching, and a “Whatever it Takes” culture)

When districts have a long-term plan for doing these things, student achievement (and test scores) will improve.

Jeff Craig
Assistant Superintendent for Instructional Support Services
JCraig@ocmboces.org
