Activity to Get Faculty to Engage with Data

Getting people to care about data, engage with it, and use it to drive instruction is difficult, but this exercise can serve the dual goals of data engagement and team building.

Too many teachers are stuck on current practices.  We teach as we were taught.  We hold biases.  When data confirms what we know, it feels like a waste of time.  When it contradicts our assumptions, we make excuses as to why that might be so.  The hustle and bustle of the profession provides an easy excuse to pass over important data about our students instead of engaging with it meaningfully.

Here’s an exercise–a kind of data jigsaw–to cut through that.

  1. Data.  Break your data down into smaller pieces: say, Math scores for the 4th grade, Reading scores for the 7th, and so on.  Take the data you already have and slice and dice it until you have as many pieces as there are faculty and staff present.  You can use graphs, too.
  2. Each person in the room has one part of a table or a graph, which you prepared in the last step.
  3. Looking at their data, people should think about what piece of data they need next to create context (so, if I had 4th grade Math data, I might want to know how the supervisory union did, or how these same students did in 3rd).  You could have them write the question on a sticky note (this makes it concrete) or not (which allows mental flexibility).  With the former, they can track their own thought pathway, too.
  4. People then find whoever is holding the table or graph with the information they seek.  They chat.
  5. Together, they come to conclusions (and write them on their sticky notes).  They then decide together on the next piece of data they need, or their next question.  Or, they might decide each needs different data or has different questions, and separate.
  6. Repeat.  The exercise ends when each individual has “next steps” for what students need based on a clear picture of where they are.

A lot of graphs and broken up data tables are needed for that!  I recommend any facilitator play this game with other admins, or even alone, to get an idea of a) whether it would actually work and b) what pieces are wanted and needed–with data, what makes sense in YOUR head is not always clear to others.
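If your data lives in a spreadsheet, the prep work can be scripted.  Here is a minimal sketch in Python (pandas), assuming a flat file of scores with student, grade and subject columns.  The file and column names are hypothetical, so adjust them to whatever your own export looks like.

```python
# Minimal sketch of the prep step, assuming a flat CSV of scores.
# File and column names are hypothetical -- adjust to your own export.
import pandas as pd

scores = pd.read_csv("assessment_scores.csv")  # one row per student result

# One "piece" per (grade, subject) pair, e.g. "4th grade Math".
for (grade, subject), piece in scores.groupby(["grade", "subject"]):
    out = f"handout_grade{grade}_{subject}.csv"
    piece.to_csv(out, index=False)
    print(f"{out}: {len(piece)} students, mean {piece['score'].mean():.1f}")
```

Each output file becomes one handout for the jigsaw; the printed summary helps you sanity-check that every piece is big enough to discuss.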

Big Picture, Little Picture Data

My admins have begun to ask good questions that require data to answer, and as a result have given me access to vast amounts of data to explore.  After answering their initial questions, one admin began to talk about using data in a Response to Intervention (RtI) type model.  Though a big believer in RtI, I find I have little interest in playing with data around it because it seems pretty straightforward and not particularly interesting from an analysis standpoint–no real toys to play with; you just see something, respond and remeasure.

It did make me wonder about micro vs. macro uses of data.  In this EdX course I am taking (Data, Analytics and Learning) they touch on Learning Analytics vs. Academic Analytics; that is, looking at a set of students and the teacher in a classroom, as opposed to the administrative and larger organization level.  Often, because conversations about data are so rudimentary, we blur the two.  At our school, we slap the NCLB numbers on the whole school in one bubble and then look at individuals in another bubble (if we look at data at all).  If we could, many would use a single number and call it analysis.

I find there are three types of people who look at data:

  • Micro data users: These are RtI folks, but also anyone who looks at individual students and assignments and uses that data to help that student, the class, or their own instruction generally.  Many elementary teachers fall into this group.  In fact, they are very good at it–pioneers!
  • Macro data users: We are interested in systems, starting with the classroom, but then scaling up results to schools and beyond.  If you can change the system for the better, there exists a higher, more solid foundation for the micro folks to build on.  Few teachers are here (unfortunately, few admins are here).
  • Non data users: These are the folks who know what they know (or think they do).  From their point of view, when the data does not confirm what they know, it’s flawed and useless.  When it does confirm it, it’s a redundant waste of time.  I’m throwing half of all teachers and admins, and many secondary teachers, into this lot.

Of course, I have no actual data to support these conclusions (ha, ha).  It did, though, make me wonder which focused efforts–macro or micro–show best results.

In looking at the data that was given to me, I was able to track how cohorts did compared to other schools in our district.  I find it helpful to think in terms of sports: If a cohort beats the district average we “won” and if we did not, we “lost”.  Looking at six cohorts over three years, two cohorts clearly had something going on–they had “winning” records.
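For anyone who wants to replicate the win/loss framing, here is a rough sketch of the bookkeeping, assuming cohort and district means sit in two CSV files.  All file and column names are invented for illustration.

```python
# Rough sketch of the "win/loss" framing: compare each cohort's yearly
# mean to the district average for that year. All names are hypothetical.
import pandas as pd

cohorts = pd.read_csv("cohort_means.csv")     # cohort, year, school_mean
district = pd.read_csv("district_means.csv")  # year, district_mean

merged = cohorts.merge(district, on="year")
merged["win"] = merged["school_mean"] > merged["district_mean"]

# Each cohort's "record" across the years, e.g. 3-0 or 1-2.
records = merged.groupby("cohort")["win"].agg(wins="sum", games="count")
losses = records["games"] - records["wins"]
records["record"] = records["wins"].astype(str) + "-" + losses.astype(str)
print(records["record"])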

When I looked at teachers, only two seemed to have “winning” records.  One was me.  The other had those winning cohorts two of their three years, so it is hard to discern if they are a good teacher or just got good kids–the results of that third year indicate the former.

Beyond tooting my own horn, it has made me think about my program.  A big picture person, I tend to avoid the trivialities of my subject–spelling, grammar, historical dates and the like–and focus on large trends, like being coherent, doing analysis and having the reader understand the content.  It has been a struggle.  Between a raft of coordinators demanding spelling programs and my own desire to do right by my students, the pressure is real.  Cultural currency is important–our ideas will not be judged on their merits if the reader dismisses our work due to spelling errors.  I find, though, that the resources allocated to such things are better spent elsewhere–the kids who can’t spell struggle with plenty else, while the others don’t need the spelling program.

Instead, I look at long timelines.  Until last year, I taught multi-age.  I had a two year timeline; students often came back from the summer seemingly smarter, and tested great before leaving me for good.  Success!  Success?

At this point, I am struggling with the balance.  Before you respond “both” to the question of micro or macro, know that choices have consequences–what you choose is also what you do not choose.  Imagine a spectrum where, on one end, we did 100% spelling.  Now, slide away from that end so you can include grammar, then writing, and then perhaps a little reading.  As you add, you approach zero spelling.

Question: Is there a point where you do so little spelling that it really isn’t worth it to do any?  That’s where I am (I don’t do any).

Question: What, then, is the point of diminishing returns–the point past which each added bit of input yields less and less output?

Question: In plotting out the returns of all areas needed to be covered, which has the “most bang for the buck”?

In economics, this is called the “opportunity cost” of choices–what do you lose when you make a choice, and is that loss smaller than what you gain from the choice you went with?
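To make the spectrum concrete, here is a toy sketch of that allocation problem.  Every number is invented; the point is only that when each block of time goes to the area with the highest marginal return, a low-return area like spelling can rationally end up with zero minutes.

```python
# Toy sketch of diminishing returns. Numbers are invented for
# illustration: the marginal gain of the 1st, 2nd, 3rd... 10-minute
# block per day spent on each area.
marginal_gain = {
    "spelling": [2, 1, 0.5, 0.2],
    "grammar":  [3, 2, 1, 0.5],
    "writing":  [5, 4, 3, 2],
    "reading":  [5, 4, 3, 2],
}

blocks = 6  # one hour of instruction, in 10-minute blocks
spent = {area: 0 for area in marginal_gain}
for _ in range(blocks):
    # Opportunity cost in action: each block given to one area is a
    # block denied to every other area.
    best = max(marginal_gain, key=lambda a: marginal_gain[a][spent[a]])
    spent[best] += 1
print(spent)  # writing and reading absorb everything; spelling gets 0
```

Under these made-up numbers, spelling never wins a block–which is exactly the “is any spelling worth it?” question above, stated as arithmetic.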

All a good struggle to have.

The Follow Up: Interventions Often Need Interventions

There is no silver bullet.  Change requires tweaking and digging until you get to the bone.  Be ready; it’s hard work.

Last week I wrote about a way of looking at data that relies, in part, on counterfactual questions.  In a counterfactual assessment you prove what is wrong, because it is easier than proving what is right.  For example, I can’t prove that coming to school daily works, but I can easily show that students who are tardy and miss a lot do poorly.

So, what do we know does not work?  What stops learning?  These questions came while reading Nassim Nicholas Taleb’s The Black Swan: The Impact of the Highly Improbable.  Check out that post (Counterfactual Questioning of Data and Needs) here.

The working method behind counterfactual questioning is to continue to shed what does not work.  As a teacher or leader, I am sure you already know the kids the system is not working for.  And I am sure you know what areas are not working for them.

Why, then, do you continue with your system?

I will bet that the finger points to the kid, the family, or some other factor (poverty, tragedy, punkishness….)–but NOT the system in general.

When we adopt a pedagogy, we do so because we believe it will work.  It’s proven.  There’s data behind it–perhaps, even, from your school!  But, looking at your data now, you know it is not working for all students.

And you adapt.  And that does not work.  Or it doesn’t work for another group.  Or, the system is blamed and abandoned TOO SOON.  What you are doing, or some version of it, probably has many components that do work, but you have to adapt it to the students sitting in front of you.

I used the example of reading in that post, so let me focus on it again.  By putting SSR (Sustained Silent Reading) in the school we can focus more specifically on what does not work until it does (we control the environment, thus making it a nice little laboratory).  Is it seating (move them), book selection (look at level and keep trying new titles), fluency or eye tracking (Google Text to Speech)….  And as we take away what does not work we are left with nothing but reading.

Sherlock Holmes said, “Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth.”  This is kind of like that–the second half of the process.  Once you stop trying to make your square peg fit the round hole, you’ll find a whole bunch of things that don’t work.  Great.  And as you try more, seemingly improbable solutions, you will find more that don’t work.  Until you hit on one that does work.  Pay dirt.

Start with your solid pedagogy.  You have to.  But, then, measure.  If it isn’t working, figure out why.  Then try something.  You might start with the old toolbox–why reinvent the wheel–but quickly throw the net wider.  As you do, you’ll get a sense of the student and why they aren’t succeeding.  Be Sherlock Holmes and find that improbable solution.

All of that said, let me recommend Nancie Atwell’s In the Middle: A Lifetime of Learning About Writing, Reading and Adolescents.  In short, she promotes reading and writing workshops: Students work, and she offers mini-lessons and conferences.  Unlike current workshop queen Lucy Calkins, Atwell’s program adapts, constantly.  The program fits the student.

Don’t worry if reading or writing or middle school is not your focus–the methods she employs work for any subject.  And she is honest about her journey finding these methods–she’s been doing this for thirty years and is still refining.  Try to introduce a bit of it into your instruction.

Counterfactual Questioning of Data and Needs

I have been reading Nassim Nicholas Taleb’s The Black Swan: The Impact of the Highly Improbable after reading a Wikipedia article about Stephen Bannon and his influences.  This book was one of them, and it made the news clearer in many ways.

It also showed me a different way of using the data we are now getting almost too much of.  Here are a few interesting take-aways about how I might look at data anew–and use it moving forward.

1. Look at the outliers.  When we focus on what works, or decide that it does not work, we often don’t really look at who falls short.  We look at the bulk–the middle kids–and evaluate the program based on them.  Then, we look at how we can “fix” the others to fit into the system which (we think) works.  Or blame them for their shortcomings.

It becomes about fitting a square peg into a round hole.

For some, we intervene with Tier II or even go to EST (Educational Support Team), if they don’t already have IEPs or a 504.  For others, we explain it away (often with history–they’ve always done that–or other factors–family issues, anxiety, a bad day….), which is another version of blame-the-kid.  But if 20% of the group is not succeeding, how can we call a program a success?  (FYI, I’ve seen programs with 40% success declared working, because you have to understand…)  And if the response is that those 20% get something different, a) does that work, b) is that the best use of resources, and c) is that best for all kids?

One concern about any program outside of the normal program is measuring the effectiveness of that intervention.  Often, if the intervention works, great–but if it fails the child, we find excuses in the child.  Honestly, once a kid falls to Tier II or Tier III they stay there.  (There is little incentive to change that, but that’s addressed later.)  So the first step needs to be coupled with….

2. Tailor the group to the outliers.  While a program should not revolve around a single student, are there changes that would help other students as well?  For example, in the talks about anxiety our school hosted, the presenters sold the idea that such interventions help other students.  True.  Compared to our childhood, we no longer time tests, while we do allow movement breaks, reteach, and incorporate a dozen other techniques that began with outlier kids and are now mainstream.  But what is required is….

3. Thinking of needs outside the standard toolbox.  My students, for example, started the year talking about needing “hands on” learning (whatever that means).  Then, it was movement breaks.  Then, social time.  All of these were valid, but they did not really have an idea of what they needed.  Instead, our team read between the lines and gave them what they actually needed.  But even then, our responses were pretty standard (more projects, less seat work).  And we are having a difficult time with a different outlier group (we are working on independence, peer collaboration, time management and other new issues).

Recently, we have looked at other needs.  We have a few kids questioning their identity (sexual orientation, interests, even their given name).  We have several kids with family issues–really worried about their family, illnesses, moving, relationships and the like.  We can see from after school groups that a desire for identity and connection exists.  A social survey we gave students showed a desire for help with peers and impulse control.  What to do with that?  We are currently looking beyond the existing, carefully constructed groupings to regroup by need, and to soft-sell the focus that brings them together.

But we aren’t counselors and this is not a therapy session.  Our goal is academics.  Plus, there are other needs and other solutions.  For example, in 2014 I found that what moved scores was participation in a competitive activity (AAU, Mini Metro, Far Post, ballet…).  There are a number of needs, many of which you don’t see because our students appear to us as faces in a specific location and a data point (go out to recess or the common spaces and see them differently).  So, we find that need by….

4. Ask counterfactual questions.  In a counterfactual assessment you prove what is wrong, because it is easier than proving what is right.  For example, I can’t prove that coming to school daily works, but I can easily show that students who are tardy and miss a lot do poorly.

So, what do we know does not work?  What stops learning?  Time is a general issue.  Interruptions are more pressing.  Homework does not really help those we want it to help.  But what the homework issue shows us is that time is needed–to read, for example.  When we do get kids to read at home, they get better at reading.  Now, we don’t know if that is due to attitude or practice or what.  But by putting SSR (Sustained Silent Reading) in the school we can focus on that.  For those for whom it does not work, why?  Seating (move them), book selection (look at level and keep trying new titles), fluency (Google Text to Speech)….  And as we take away what does not work we are left with reading.  But you need to ask….

5. Instead of asking what they need, ask what they don’t need and shed it.  Does a student who reads all of the time need SSR?  They might like it, or it may serve a calming function, but it does not add much to the academic purpose of SSR for them.  Found time.  They might, though, need that calming function–can it be given more effectively (offer meditation or counseling)?  Or, when their non-needs are whittled away, what’s left?  You can ask this of the student who gets instrument lessons outside of school (why do they still get school lessons?), the kid who plays Mini-Metro during the basketball unit (is relearning how to dribble worth their time?), and so on.  And we know who is lacking–we can tick them off the top of our heads.  We have a list ready of what is not working.

A simple example is leadership.  We say we need it.  In our school, we have a group of intelligent athletes who think leadership is getting their way, instead of making their peers look better and rising up together.  They pick the team, have the ball, take the shot, and are ready with the excuse of why kid X didn’t get the ball (“They aren’t very good.”).  They are teachers waiting to be shown how.  If they teach kid X, kid X will get the ball.  A simple pivot around the data sets that up–Kid A is measured on basketball skills and Kid B is measured on how well he teaches others.

6. And before you talk about time, think about how much time we spend accommodating, chasing down a few kids, and adapting work.  Differentiation is hard and time consuming, and still we have failure–because we differentiate the wrong thing.  Use data to find the right thing by cutting back everything that does not work.

Of course, this is still turning around in my head.  But it might cause you to think about things differently as you approach configuration, schedules and PBL in the next month in anticipation of the next school year.

Why Data Matters

My primary blog is Middle School Poetry 180, which you can find here.  I live in the world of the Humanities even while I love all things data.  That’s why this post bends towards English majors.

Unfortunately, too many English and liberal arts majors throw their hands up and say, “I can’t do math.”  Idiots.  First, you can do math.  Second, that just gives license to all of the STEM kids to dismiss your teaching of poetry in a “why do we need to know this” way.  Don’t go there.

Let me be brief.

First, data should drive instruction.  If we believe in a growth mindset, data should be used to measure a baseline and then measure progress (if you don’t believe in a growth mindset, please get out of education).  What you do in the classroom should have an impact on student learning.  That’s what we are paid for.

When our school board planned to change how literacy intervention was delivered, I was asked to help craft a slide presentation defending how it had been done.  When I asked for data–some measure that the program worked–I was told there was none.  Instead, I was asked “How can you measure a child’s love of reading?” by one of the instructors.  Well, you can measure how many books they read independently, how much time they spend reading each week, or even how they rate the books they read.  None of those are perfect, but they begin to ask the question.
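None of those proxies is hard to compute, either.  A minimal sketch, assuming a simple reading log with one row per student per week; the file and column names are hypothetical.

```python
# Sketch: turning "love of reading" into measurable proxies, assuming a
# reading log with hypothetical columns. Not a perfect measure -- just a
# baseline you can track over time.
import pandas as pd

log = pd.read_csv("reading_log.csv")  # student, week, minutes, books_finished, rating

per_student = log.groupby("student").agg(
    books=("books_finished", "sum"),       # how many books read independently
    minutes_per_week=("minutes", "mean"),  # time spent reading each week
    avg_rating=("rating", "mean"),         # how they rate what they read
)
print(per_student.sort_values("minutes_per_week", ascending=False))
```

None of these numbers is “love of reading”, but tracked before and after a change in the program, they at least let you ask whether anything moved.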

If you look, you will find numerous programs around your school that have little to no data showing their efficacy.  On a committee to create an alternative program (I began my teaching career in behavioral programs, so I get tapped for such initiatives), I was shown a model program at a local 5-8 school.  When I asked how many students “graduated” back to the mainstream, they had no answer.  In fact, none had.  The program was a warehouse for problem kids.  When I researched data or papers on such programs, I found (at the time) little existed.  I did find one interesting study that explained the phenomenon of no data: When a kid succeeds in an alternative program the program is praised, and when they don’t the kids are blamed (“What can you do with those kids?  You tried.”).  Data tends to reveal just how wasteful such programs are–in dollars, time and in wasted youth.

In your own classroom, you might be teaching skills that kids already know.  You might also think they “got it” when they don’t.  Our school (and state) has embraced Proficiency Based Learning (PBL).  If you are not using data, look it up.  Formatives and summatives.  No zeros.  Redo work until it’s right.  It is transformative.

Second, data drives everything you do regardless of your awareness.  Take the schedule.  One of the issues in our middle school is that students pick up new subjects (Band, World Language) and don’t lose old ones.  Where does the time come from?  When they diced that out, the kids had a lot of transitions–fourteen over a seven hour day.  Adding it up, kids spent more time in the hallways over the course of a single day than they did learning Art in the week.  Data–time–revealed the issue.
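The arithmetic is worth doing yourself.  A back-of-the-envelope version, where the per-transition minutes and the weekly Art block are my assumptions rather than figures from our actual schedule:

```python
# Back-of-the-envelope check of the hallway claim. The 4-minute
# transition and 45-minute weekly Art block are assumed values,
# not figures from the real schedule.
transitions_per_day = 14
minutes_per_transition = 4   # assumed average
art_minutes_per_week = 45    # assumed single weekly block

hallway_minutes_per_day = transitions_per_day * minutes_per_transition
print(hallway_minutes_per_day)                         # 56
print(hallway_minutes_per_day > art_minutes_per_week)  # True
```

Fifty-six minutes a day in hallways against forty-five minutes a week of Art: the schedule itself is a data set.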

Look at your own class.  With the push towards Project Based Learning (PBL, not to be confused with the other PBL) teachers really need to question the value of time spent.  For example, we had students create maps of Europe.  Some spent a full hour coloring them with colored pencils.  Did that add to their understanding of geography?  No, but it took an hour away from their reading.  A basic question to ask is whether the time put into something will return the value.

Time is a resource.  It is valuable.  Forget dollars and computers and supplies, time is the resource that helps students most.  Just having time to read can do wonders.  And we measure that in minutes.  In days.  Twenty minutes of Sustained Silent Reading (SSR) over one hundred and eighty days equals sixty hours of reading.  For a kid who refused to read, that’s gold.

Don’t put on a sad math face.  Just look out at your class: can you tell who “gets it” or who is progressing?  Is the project you are doing worth the time, or having any effect at all?

Hattie, Homework and an Insightful Analytical Blog Post (and Cautionary Tale)

Our school is struggling with the issue of homework.

Recently, we have moved away from it.  Some parents equate more homework with academic rigor and quality.  Of course, the data says otherwise–but it also says that community perception of a school goes up when the homework load goes up.  Now, the rallying call is for a research based policy on homework.

I often start with John Hattie and his research reviews and meta-analyses because the guy covers everything.  From his reviews you can also find great original research worth pursuing.

Rather than rehash, let me guide folks to an insightful analysis of Hattie’s conclusions in “Homework: What does the Hattie research actually say?” on Tom Sherrington’s headguruteacher blog.  Sherrington does a great job looking beyond the overall effect rating of d = 0.29, which implies that homework is not effective (Hattie argues that a practice is “effective” when above d = 0.40, and even that statement is a simplification of his work).  Note: Sherrington’s analysis begins a bit technical, a reflection of Hattie’s technical meta-analysis that he’s analyzing.  Wade through it–it’s worth it.

In short, Sherrington notes that Hattie reports that homework is ineffective at the primary grade level (d = 0.15), but quite effective at the secondary level (d = 0.64).  Of course, these results have caveats about type of work and the like.  Read the article, as Sherrington goes in-depth with the details and their implications.
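If effect sizes are new to you, d is just the difference between two group means divided by their pooled standard deviation.  A minimal worked example with invented scores, to give figures like 0.15 and 0.64 a concrete scale:

```python
# Minimal Cohen's d calculation. All scores are invented, purely to
# illustrate what an effect size of a given magnitude looks like.
import statistics

def cohens_d(group_a, group_b):
    """Difference in means divided by the pooled standard deviation."""
    mean_diff = statistics.mean(group_a) - statistics.mean(group_b)
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * statistics.variance(group_a)
                  + (nb - 1) * statistics.variance(group_b)) / (na + nb - 2)
    return mean_diff / pooled_var ** 0.5

homework = [78, 82, 85, 90, 74, 88]     # invented scores
no_homework = [75, 80, 83, 86, 72, 84]  # invented scores
print(round(cohens_d(homework, no_homework), 2))  # ~0.49 for these numbers
```

A d of 0.15 means the groups overlap almost entirely; 0.64 is a visible separation.  That is the gap between primary and secondary homework.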

But Sherrington’s analysis is also instructive when we look at bias in analysis and reporting.  Up front, he makes his pro-homework views clear and shares a link to another piece of his, ‘Homework Matters: Great Teachers set Great Homework’.  The analysis that follows is, as I’ve said, pretty solid.  Until the end.

All of this makes sense to me and none of it challenges my predisposition to be a massive advocate for homework.  The key is to think about the micro- level issues, not to lose all of that in a ridiculous averaging process.  Even at primary level, students are not all the same.  Older, more able students in Year 5/6 may well benefit from homework where kids in Year 2 may not.  Let’s not lose the trees for the wood!  Also, what Hattie shows is that educational inputs, processes and outcomes are all highly subjective human interactions.  Expecting these things to be reduced sensibly into scientifically absolute measured truths is absurd.  Ultimately, education is about values and attitudes and we need to see all research in that context.

Let’s ignore the statement “All of this makes sense to me and none of it challenges my predisposition…”, which is the definition of denial (or perhaps cognitive dissonance).  You would think, at least, Sherrington would concede that at the primary level homework offers little benefit (remember: d = 0.15 vs. the d = 0.40 needed to enter the effective range).  No.  Instead, he qualifies, “Older, more able students in Year 5/6 may well benefit…”  That word: May.  Ugh.

You could drive a bus through such phrases.  After presenting, breaking down and analyzing the data, he a) ignores the data that refutes his viewpoint, and b) presents an alternative based on no data.  You cannot do that!  It’s not good science!  If Sherrington wants to use his theory as a basis for more research, great.  Instead, he simply argues that the data says one thing, he believes another, and so he’s going with his gut.  At least he’s transparent about it.

But I have been in countless meetings and conversations like this: Someone concedes that a practice is not effective and then justifies its continuation.  Parents and educators will often come with opinion pieces that seem like common sense, but carry little data.  With a little research these views are often refuted, and sometimes revealed as quite harmful practices.  Yet the idea persists.  As professionals and organizations we refuse to trust data–or even consider it.

The comments on Sherrington’s piece are a typical reaction to any presentation of data that challenges orthodoxy.  Some accept it, but many qualify their views and reinterpret the data Hattie presents.  Again, a good study in identifying bias.

My suggestion in such situations is to turn it around.  First of all, if some primary kids “may” benefit, might it also be said that some secondary kids “may not” benefit?  Do they get penalized?  What’s the plan for them?  Remember–when there is a majority, there is also a minority.  Why do we make policy when the majority agrees with us, but ignore the majority when they disagree?

Second, all choices have consequences: When you choose one thing, you are giving up another.  To have homework, students and teachers are giving up something.  That might be something simple like time–a student with thirty minutes of homework loses thirty minutes of play time, for example.  Is the loss worth the gain?  Hattie’s analysis seems to indicate that little is gained at the primary level, so any loss might not be worth it.  Educators should focus on the trade-off.  Is it worth it?

Sherrington seems most interested in getting in content and practice in the face of limited class time.  He is trading the student’s time for academic gain.  That might be a fair trade-off, especially if the students are part of the decision making process.  But there might be other inefficiencies in Sherrington’s lesson planning (I have no idea) that a student might want addressed before they give up their after-school time.

Is it worth it?  Really, that’s the essence of all of this data analysis.  Sherrington needs to respect that data a bit more instead of dismissing it.

Evaluating a Program With Minimal Data Points: Step 3: Where in the Year is the Gain?

Now we are getting to the meat of the sandwich!

Why Precise Accountability Is Important

Just because a student is learning does not mean a teacher is teaching.

Can we take credit for success?  Or failure?  We need to know the effectiveness of our program if we hope to increase that effectiveness.  What, for example, if the school year gains only make up for a huge summer regression?  What if students, after a year of work, only gain slightly?

Knowing precisely where gains and losses occur is essential for change–or for standing by a program that works!  Well-meaning initiatives are the norm, but they are often based on assumptions.  Two things then happen: 1. The reality of how learning occurs does not match the assumptions, so the results show no change, or 2. Those forces against change raise concerns that are as valid as (or more valid than) the evidence supporting the original initiative.  In the end, the initiative fails, fades or simply disappears, and everyone feels just a tad more initiative fatigue.  Precise knowledge of where programs succeed and fail stops such failures and creates real change.

For example, our SU had a push for something called “Calendar 2.0”, a re-imagining of the school year calendar that would break the year into six week, unit sized chunks with a week or two break between.  It added no more days to the year, but spread the school days out.  The intent was good.  During the breaks teachers could plan their next unit in reaction to the previous unit’s results, and students could get help mastering skills they had yet to gain.  Schools might even be able to offer enrichment during that time!  The shorter summer break was designed to prevent summer regression.

For families, it shrunk the summer break and created more week-long breaks.  It cut into summer camps and made child care difficult and somewhat random.  There was a vocal group, too, that argued for the unstructured time that summer afforded.  In reality, many families were going to plan vacations during those breaks–being told that their child was reading poorly and needed to stay would not trump the money already spent to go to Disney World.

Because Calendar 2.0 was not based on the clear need of students in our schools, it failed.  It sounded good–summer regression!  Planning time!  Time for shoring up skills!–but there was no local data supporting it.  Of all the areas teachers saw in need of shoring up, working in the summer did not rank highly.

Where in the Year is the Gain?

In my last post, we looked at year-to-year gains and regression.  When we look at the 12 kids who regressed, half did so over the summer.  But half did so over the school year.  So, over 180 days of instruction, reading actually regressed for 6 students.  Our school made them go backwards.  Both summer and school year regression are results we should be concerned about, but the latter points to something we can control but are not controlling.

Below, I identify students who had a dramatic difference between their summer and school year gains.  Some students showed consistent gains, while others consistently stagnated.  My two periods of examination–summer and school year–are based on discussions people at my school were having about programs we could institute (Calendar 2.0 being the most prominent).  You should examine whatever time periods feel important to you.  It could be the November-January holiday period (where family strife makes learning difficult), or May-June (end-of-year-itis), or days of the week (Mondays are about mentally coming back, Wednesday is the drag of “hump day” and Friday is checking out–so when DO kids focus?).  The important part is to use data to examine it.  All of my parenthetical asides are assumptions, many of which I have found untrue.  You will be surprised how often assumptions and old saws are wrong.

Students Who Gain Over School Year

Instead of always looking at the negative, let’s try and determine where kids succeed.  These students (scores highlighted in yellow) showed significantly more gains in reading over the school year than the summer.  In fact, relatively, this group lost ground over the summer.

Note how, because I gave the DRP in the spring, fall and again in the spring I was able to measure a) year-to-year growth, b) spring-to-fall growth (in short, gains or losses during the summer) and c) school year growth.
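If you have those three administrations in a table, the three measures fall out in a few lines.  A sketch with hypothetical file and column names:

```python
# Sketch of the three growth measures from a spring/fall/spring DRP
# schedule. File and column names are hypothetical.
import pandas as pd

drp = pd.read_csv("drp_scores.csv")  # student, spring_prev, fall, spring

drp["summer_gain"] = drp["fall"] - drp["spring_prev"]  # b) summer
drp["school_year_gain"] = drp["spring"] - drp["fall"]  # c) school year
drp["year_gain"] = drp["spring"] - drp["spring_prev"]  # a) year-to-year

# Flag where each student's regression, if any, actually happened.
drp["regressed_summer"] = drp["summer_gain"] < 0
drp["regressed_school_year"] = drp["school_year_gain"] < 0
print(drp[["student", "summer_gain", "school_year_gain", "year_gain"]])
```

The same three columns, sorted and highlighted, produce the tables discussed below.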

[Image: Reading Data Gains]

The difference between the school year and summer demonstrates the importance of being in a literate environment for reading growth to occur.  Being forced to spend time with text leads to reading success.

Note that growth occurred both in students who struggle with reading (expected) and those who are in the top group.  Even good readers benefit from the literate school environment.  If these students get more time in a literate environment–more reading time–these gains should continue and increase.

Summer Gains, School Year Loss

There is a population who sees a loss in reading progress over the school year (noted in that dark khaki color), yet sees gains over the summer.  They are a diverse group, and this phenomenon could have many causes.

[Image: Reading Data Gains 2]

Still, one factor carries through many of those identified: Time.  Many of these students lead busy lives, with responsibilities including sports, family, work and school.  Reading for fun, and leisure time in general, is at a premium.  Without practicing their reading, they show no growth–or a loss.

In the summer, these students enjoy a lighter schedule.  They fill it with reading.  In order for them to see year round growth they need time.  These same results have been observed in other studies and are especially prevalent in middle class students saddled with activities.

Will Successes Scale Up?

Okay, so we saw some success.  Often, when we do nothing, someone gains.  Was it me?  The conclusion I reached–more time spent reading will improve reading skills–makes instinctual sense.  And the research backs that up.  Good, right?

That said, the data I have at hand is thin.  I am relying on a certain knowledge of my students, and that invites bias.  My sample sizes are small.  As a data point, the DRP is more like a machete than a scalpel.  (Read The DRP as an Indicator Of….)  Will more Sustained Silent Reading (SSR) result in progress?  It will take some time for the data to prove me out or cause me to change course–and it will take more data, and more precise data.  But I am aware of all of this as I move forward.  My program will react not to theory, but to what students experience in the classroom.

For example, of those 12 students who regressed, 3 are in the top stanines*–they have nowhere to go.  Similarly, the glut of students with little to no gains are also in the top stanines nationally–they had nowhere to go.  But I am wary of making excuses, so: data.

*

Restatement: Introduction to These Next Few Blog Posts (Backstory for those coming to this post first).

We get a lot of data. It may come in the form of test scores or grades or assessments, but it is a lot.  And we are asked to use it. Make sense of it. Plan using it.

Two quotes I stick to are:

  • Data Drives Instruction
  • No Data, No Meeting

They are great cards to play when a meeting gets out of hand.  Either can stop an initiative in its tracks!

But all of the data can be overwhelming.  There are those who dismiss data because they “feel” they know the kids.  Some are afraid of it.  Many use it, but stop short of doing anything beyond confirming what they know–current state or progress.  And they can dismiss it when it does not confirm their beliefs. (“It’s an off year”)  Understanding data takes a certain amount of creativity.  At the same time, it must remain valid.  Good data analysis is like a photograph, capturing a picture of something you might not have otherwise seen.

This series of blog posts will take readers through a series of steps I took in evaluating the effectiveness of my reading program.  I used the DRP (Degree of Reading Power), a basic reading comprehension assessment, as my measure because it was available.  I’m also a literacy teacher, so my discussion will be through that lens–but this all works for anything from math to behavior data.

* A stanine (STAndard NINE) is a nine point scale with a mean of 5.  Imagine a bell curve; along the x-axis you divide it into nine bands, each half a standard deviation wide (the first and ninth are open-ended).  The head and tail hold very little area (4% each) while the belly is huge (20%).  Some good information can be found in this Wikipedia entry.
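For the curious, converting a percentile rank into a stanine is a simple banding exercise using the standard 4-7-12-17-20-17-12-7-4 percent bands; a quick sketch:

```python
# Sketch: mapping a percentile rank to a stanine using the standard
# cumulative band boundaries (4, 11, 23, 40, 60, 77, 89, 96).
from bisect import bisect

cutoffs = [4, 11, 23, 40, 60, 77, 89, 96]  # stanine 9 is everything above

def stanine(percentile: float) -> int:
    """Map a percentile rank (0-100) to a stanine (1-9)."""
    return bisect(cutoffs, percentile) + 1

print(stanine(50))  # 5 -- the mean stanine
print(stanine(2))   # 1 -- the tail
print(stanine(98))  # 9 -- the head
```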