Now we are getting to the meat of the sandwich!
Why Precise Accountability Is Important
Just because a student is learning does not mean a teacher is teaching.
Can we take credit for success? Or failure? We need to know the effectiveness of our program if we hope to increase that effectiveness. What, for example, if the school year gains only make up for a huge summer regression? What if students, after a year of work, gain only slightly?
Knowing precisely where gains and losses occur is essential for change, or for standing by a program that works! Well-meaning initiatives are the norm, but they are often based on assumptions. Two things then happen: 1. The reality of how learning occurs does not match the assumptions, so the results show no change, or 2. The forces against change raise concerns that are as valid as (or more valid than) the evidence supporting the original initiative. In the end, the initiative fails, fades, or simply disappears, and everyone feels just a tad more initiative fatigue. Precise knowledge of where programs succeed and fail prevents such failures and creates real change.
For example, our SU had a push for something called “Calendar 2.0”, a re-imagining of the school year calendar that would break the year into six-week, unit-sized chunks with a week or two of break between. It added no more days to the year, but spread the school days out. The intent was good. During the breaks, teachers could plan their next unit in reaction to the previous unit’s results, and students could get help mastering skills they had yet to gain. Schools might even be able to offer enrichment during that time! The shorter summer break was designed to prevent summer regression.
For families, it shrunk the summer break and created more week-long breaks. It cut into summer camps and made child care difficult and somewhat random. There was a vocal group, too, that argued for the unstructured time that summer afforded. In reality, many families were going to plan vacations during that time; being told that their child was reading poorly and needed to stay would not trump the money already spent to go to Disney World.
Because Calendar 2.0 was not based on the clear need of students in our schools, it failed. It sounded good–summer regression! Planning time! Time for shoring up skills!–but there was no local data supporting it. Of all the areas teachers saw in need of shoring up, working in the summer did not rank highly.
Where in the Year is the Gain?
In my last post, we looked at year-to-year gains and regression. Of the 12 kids who regressed, half did so over the summer, but half did so over the school year. So, over 180 days of instruction, reading actually regressed for 6 students. Our school made them go backwards. Both summer and school year regression are results we should be concerned about, but the latter points to something we can control and are not controlling.
Below, I identify students who had a dramatic difference between their summer and school year gains. Some students showed consistent gains, while others consistently stagnated. My two periods of examination, summer and school year, are based on discussions people at my school were having about programs we could institute (Calendar 2.0 being the most prominent). You should examine the time periods that feel important to you. It could be the November-January holiday period (where family strife makes learning difficult), May-June (end-of-year-itis), or days of the week (Monday is about mentally coming back, Wednesday is the drag of “hump day,” and Friday is checking out, so when DO kids focus?). The important part is to use data to examine it. All of my parenthetical asides are assumptions, many of which I have found untrue. You will be surprised how often assumptions and old saws are wrong.
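For readers who want to try this with their own data, here is a minimal sketch of the arithmetic. The dates, scores, and the `period_gain` helper are all hypothetical; the idea is simply to difference the scores from the two administrations that bracket whatever window you care about:

```python
from datetime import date

def period_gain(dated_scores, start, end):
    """Gain between the administrations given on the start and end dates."""
    scores = dict(dated_scores)
    return scores[end] - scores[start]

# Made-up DRP unit scores for one student across three administrations.
tests = [
    (date(2014, 6, 10), 44),  # spring administration
    (date(2014, 9, 5), 41),   # fall administration
    (date(2015, 6, 9), 50),   # next spring
]

summer = period_gain(tests, date(2014, 6, 10), date(2014, 9, 5))
school_year = period_gain(tests, date(2014, 9, 5), date(2015, 6, 9))
print(summer, school_year)  # -3 9
```

The same differencing works for a November-January window or a Monday-versus-Friday comparison; you just need assessments (or logged scores) at those boundaries.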
Students Who Gain Over School Year
Instead of always looking at the negative, let’s try to determine where kids succeed. These students (scores highlighted in yellow) showed significantly more gains in reading over the school year than over the summer. In fact, relatively speaking, this group lost ground over the summer.
Note how, because I gave the DRP in the spring, the fall, and again in the spring, I was able to measure a) year-to-year growth, b) spring-to-fall growth (in short, gains or losses during the summer), and c) school year growth.
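That bookkeeping can be sketched in a few lines. The student names and scores below are invented for illustration; the comparison of summer versus school year gain is what sorts students into the groups discussed here:

```python
# Hypothetical spring/fall/spring DRP scores; names and numbers invented.
students = {
    "A": {"spring1": 42, "fall": 40, "spring2": 49},
    "B": {"spring1": 55, "fall": 57, "spring2": 56},
}

for name, s in students.items():
    summer = s["fall"] - s["spring1"]        # b) spring-to-fall growth
    school_year = s["spring2"] - s["fall"]   # c) school year growth
    year = s["spring2"] - s["spring1"]       # a) year-to-year growth
    tag = "school-year gainer" if school_year > summer else "summer gainer"
    print(f"{name}: summer {summer:+d}, school year {school_year:+d}, "
          f"year {year:+d} -> {tag}")
```

Student A here would be highlighted in yellow (school year gains outpacing summer), while Student B belongs with the summer gainers described below.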
The difference between the school year and summer demonstrates the importance of being in a literate environment for reading growth to occur. Being forced to spend time with text leads to reading success.
Note that growth occurred both in students who struggle with reading (expected) and in those in the top group. Even good readers benefit from the literate school environment. If these students get more time in a literate environment, more reading time, these gains should continue and increase.
Summer Gains, School Year Loss
There is a population who sees a loss in reading progress over the school year (noted in that dark khaki color), yet sees gains over the summer. They are a diverse group, and this phenomenon could have many causes.
Still, one factor carries through many of those identified: Time. Many of these students lead busy lives, with responsibilities including sports, family, work and school. Reading for fun, and leisure time in general, is at a premium. Without practicing their reading, they show no growth–or a loss.
In the summer, these students enjoy a lighter schedule. They fill it with reading. In order for them to see year-round growth, they need time. These same results have been observed in other studies and are especially prevalent in middle-class students saddled with activities.
Will Successes Scale Up?
Okay, so we saw some success. But often, when we do nothing, someone gains anyway. Was it me? The conclusion I reached, that more time spent reading will improve reading skills, makes instinctual sense. And the research backs that up. Good, right?
That said, the data I have at hand is thin. I am relying on a certain knowledge of my students, and that invites bias. My sample sizes are small. As a data point, the DRP is more like a machete than a scalpel. (Read The DRP as an Indicator Of….) Will more Sustained Silent Reading (SSR) result in progress? It will take some time, more data, and more precise data, for the results to prove me out or cause me to change course. But I am aware of all of this as I move forward. My program will react not to theory, but to what students experience in the classroom.
For example, of those 12 students who regressed, 3 are in the top stanines; they have nowhere to go. Similarly, the glut of students with no to minimal gains are also in the top stanines nationally; they had nowhere to go. But I am wary of making excuses, so I return to the data.
Restatement: Introduction to These Next Few Blog Posts (Backstory for those coming to this post first)
We get a lot of data. It may come in the form of test scores or grades or assessments, but it is a lot. And we are asked to use it. Make sense of it. Plan using it.
Two quotes I stick to are:
- Data Drives Instruction
- No Data, No Meeting
They are great cards to play when a meeting gets out of hand. Either can stop an initiative in its tracks!
But all of that data can be overwhelming. There are those who dismiss data because they “feel” they know the kids. Some are afraid of it. Many use it, but stop short of doing anything beyond confirming what they already know: current state or progress. And they can dismiss it when it does not confirm their beliefs. (“It’s an off year.”) Understanding data takes a certain amount of creativity. At the same time, it must remain valid. Good data analysis is like a photograph, capturing a picture of something you might not have otherwise seen.
This series of blog posts will take readers through a series of steps I took in evaluating the effectiveness of my reading program. I used the DRP (Degree of Reading Power), a basic reading comprehension assessment, as my measure because it was available. I’m also a literacy teacher, so my discussion will be through that lens–but this all works for anything from math to behavior data.
* A stanine (STAndard NINE) is a nine-point scale with a mean of 5. Imagine a bell curve with the x-axis divided into nine bands. The head and tail cover a very small area (4% each) while the belly is huge (20%). Some good information can be found in this Wikipedia entry.
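For anyone who wants to convert a percentile rank into a stanine, here is a minimal sketch using the conventional 4/7/12/17/20/17/12/7/4 percent bands (the function name and cut-point handling are my own choices):

```python
# Upper cumulative-percentile bounds of stanines 1 through 8;
# anything at or above the last cut is stanine 9.
CUTS = [4, 11, 23, 40, 60, 77, 89, 96]

def stanine(percentile):
    """Map a percentile rank (0-100) to its stanine (1-9)."""
    for band, cut in enumerate(CUTS, start=1):
        if percentile < cut:
            return band
    return 9

print(stanine(50))  # mid-distribution -> stanine 5
print(stanine(2))   # bottom tail -> stanine 1
print(stanine(98))  # top tail -> stanine 9
```

Note how a student already at percentile 98 sits in stanine 9 with no higher band available, which is the ceiling effect mentioned above for top-stanine students.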