The Follow Up: Interventions Often Need Interventions


There is no silver bullet.  Change requires tweaking and digging until you get to the bone.  Be ready; it’s hard work.

Last week I wrote about some ideas for looking at data, in part by using counterfactual questions.  In a counterfactual assessment you prove what is wrong, because it is easier than proving what is right.  For example, I can’t prove that coming to school daily works, but I can easily show that students who are tardy and miss a lot do poorly.

So, what do we know does not work?  What stops learning?  These questions came to me while reading Nassim Nicholas Taleb’s The Black Swan: The Impact of the Highly Improbable.  Check out that post (Counterfactual Questioning of Data and Needs) here.

The working method behind counterfactual questioning is to continue to shed what does not work.  As a teacher or leader, I am sure you already know the kids the system is not working for.  And I am sure you know what areas are not working for them.

Why, then, do you continue with your system?

I will bet that the finger points to the kid, the family, or some other factor (poverty, tragedy, punkishness….)–but NOT the system in general.

When we adopt a pedagogy, we do so because we believe it will work.  It’s proven.  There’s data behind it–perhaps, even, from your school!  But, looking at your data now, you know it is not working for all students.

And you adapt.  And that does not work.  Or it doesn’t work for another group.    Or, the system is blamed and abandoned TOO SOON.  What you are doing, or some version of it, probably has many components that do work, but you have to adapt it to the students sitting in front of you.

I used the example of reading in that post, so let me focus on it again.  By putting SSR (Sustained Silent Reading) in the school we can focus more specifically on what does not work until it does (we control the environment, thus making it a nice little laboratory).  Is it seating (move them), book selection (look at level and keep trying new titles), fluency or eye tracking (Google Text to Speech)….  And as we take away what does not work we are left with nothing but reading.

Sherlock Holmes said, “Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth.”  This is kind of like that–the second half of the process.  Once you stop trying to make your square peg fit the round hole, you’ll find a whole bunch of things that don’t work.  Great.  And as you try more, seemingly improbable solutions, you will find more that don’t work.  Until you hit on one that does work.  Pay dirt.

Start with your solid pedagogy.  You have to.  But, then, measure.  If it isn’t working, figure out why.  Then try something.  You might start with the old toolbox–why reinvent the wheel–but quickly throw the net wider.  As you do, you’ll get a sense of the student and why they aren’t succeeding.  Be Sherlock Holmes and find that improbable solution.


All of that said, let me recommend Nancie Atwell’s In the Middle: A Lifetime of Learning About Writing, Reading, and Adolescents.  In short, she promotes reading and writing workshops: Students work, and she offers mini-lessons and conferences.  Unlike current workshop queen Lucy Calkins, Atwell’s program adapts, constantly.  The program fits the student.

Don’t worry if reading or writing or middle school is not your focus–the methods she employs work for any subject.  And she is honest about her journey finding these methods–she’s been doing this for thirty years and is still refining.  Try to introduce a bit of it into your instruction.


Counterfactual Questioning of Data and Needs

I have been reading Nassim Nicholas Taleb’s The Black Swan: The Impact of the Highly Improbable after reading a Wikipedia article about Stephen Bannon and his influences.  This book was one of them, and it made the news clearer in many ways.

It also showed me a different way of using the data we are now getting almost too much of.  Here are a few interesting take-aways on how I might look at data anew–and use it moving forward.

1. Look at the outliers.  When we focus on what works, or decide that it does not work, we often don’t really look at who falls short.  We look at the bulk–the middle kids–and evaluate the program based on them.  Then, we look at how we can “fix” the others to fit into the system which (we think) works.  Or blame them for their shortcomings.

It becomes about fitting a square peg into a round hole.

For some, we intervene with Tier II or even go EST (Educational Support Team), if they don’t already have IEPs or a 504.  For others, we explain it away (often with history–they’ve always done that–or other–family issues, anxiety, a bad day….) which is another version of blame-the-kid.  But if 20% of the group is not succeeding, how can we call a program a success?  (FYI, I’ve seen programs at 40% success declared working, because you have to understand…)  And if the response is that those 20% get something different, a) does that work, b) is that the best use of resources, and c) is that best for all kids?

One concern about any program outside of the normal program is measuring the effectiveness of that intervention.  Often, if the intervention works–great–but if it fails the child, we find excuses in the child.  Honestly, once a kid falls to Tier II or Tier III they stay there.  (There is little incentive to change that, but that’s addressed later.)  So the first step needs to be coupled with….

2. Tailor the group to the outliers.  While a program should not revolve around a single student, are there changes that would help other students as well?  For example, in the talks about anxiety our school hosted, the presenters sold the idea that such interventions help other students.  True.  Compared to our childhood, we no longer time tests, while we do allow movement breaks, reteach, and incorporate a dozen other techniques that began with outlier kids and are now mainstream.  But what is required is….

3. Thinking of needs outside the standard toolbox.  My students, for example, started the year talking about needing “hands on” learning (whatever that means).  Then, it was movement breaks.  Then, social time.  All of these were valid, but they did not really have an idea of what they needed.  Instead, our team read between the lines and gave them what they actually needed.  But even then, our responses were pretty standard (more projects, less seat work).  And, we are having a difficult time with a different outlier group (we are working on working independently, working with peers, time management and other, new issues).

Recently, we have looked at other needs.  We have a few kids questioning their identity (sexual orientation, interests, even their given name).  We have several kids with family issues–really worried about their family, illnesses, moving, relationships and the like.  We can see with after-school groups that a desire for identity and connection exists.  A social survey we gave students showed a desire for help with peers and impulse control.  What to do with that?  We are currently looking beyond the carefully constructed groupings to regroup by need, and to soft-sell the focus that brings them together.

But we aren’t counselors and this is not a therapy session.  Our goal is academics.  Plus, there are other needs and other solutions.  For example, in 2014 I found that what moved scores was participation in a competitive activity (AAU, Mini Metro, Far Post, ballet…).  There are a number of needs, many you don’t see because our students are faces in a specific location and a data point (go out to recess or the common spaces and see them differently).  So, we find that need by….

4. Ask counterfactual questions.  In a counterfactual assessment you prove what is wrong, because it is easier than proving what is right.  For example, I can’t prove that coming to school daily works, but I can easily show that students who are tardy and miss a lot do poorly.

So, what do we know does not work?  What stops learning?  Time is a general issue.  Interruptions are more pressing.  Homework does not really help those we want it to help.  But what the homework issue shows us is that time is needed–to read, for example.  When we do get kids to read at home, they get better at reading.  Now, we don’t know if that is due to attitude or practice or what.  But by putting SSR (Sustained Silent Reading) in the school we can focus on that.  For those it does not work for, why?  Seating (move them), book selection (look at level and keep trying new titles), fluency (Google Text to Speech)….  And as we take away what does not work we are left with reading.  But you need to ask….

5. Instead of asking what they need, ask what they don’t need and shed it.  Does a student who reads all of the time need SSR?  They might like it, or it may serve a calming function, but it does not add much to the academic purpose of SSR for them.  Found time.  They might, though, need that calming function–can it be given more effectively (offer meditation or counseling)?  Or, when their non-needs are whittled away, what’s left?  You can ask this of the student who gets instrument lessons outside of school (why do they still get school lessons?), the kid who plays Mini-Metro during the basketball unit (is relearning how to dribble worth their time?), and so on.  And we know who is lacking–we can tick it off the top of our head.  We have a list ready of what is not working.

A simple example is leadership.  We say we need it.  In our school, we have a group of intelligent athletes who think leadership is getting their way, instead of making their peers look better and rising up together.  They pick the team, have the ball, take the shot, and are ready with the excuse of why X didn’t get the ball (“They aren’t very good.”)  They are teachers waiting to be shown how.  If they teach X, then X will get the ball.  A simple pivot around the data sets that up–Kid A is measured on basketball skills and Kid B is measured on teaching others.

6. And before you talk about time, think about how much time we spend accommodating, chasing down a few kids, and adapting work.  Differentiation is hard and time-consuming, and still we have failure–because we differentiate the wrong thing.  Use data to find the right thing by cutting back everything that does not work.

Of course, this is still turning around in my head.  But it might cause you to think about things differently as you approach configuration, schedule and PBL in the next month in anticipation of the next school year.

Why Data Matters

My primary blog is Middle School Poetry 180, which you can find here.  I live in the world of the Humanities even while I love all things data.  That’s why this post bends towards English majors.

Unfortunately, too many English and liberal arts majors throw their hands up and say, “I can’t do math.”  Idiots.  First, you can do math.  Second, that just gives license to all of the STEM kids to dismiss your teaching of poetry in a “why do we need to know this” way.  Don’t go there.

Let me be brief.

First, data should drive instruction.  If we believe in a growth mindset, data should be used to measure a baseline and then measure progress (if you don’t believe in a growth mindset, please get out of education).  What you do in the classroom should have an impact on student learning.  That’s what we are paid for.

When our school board planned to change how literacy intervention was delivered, I was asked to help craft a slide presentation defending how it had been done.  When I asked for data–some measure that the program worked–I was told there was none.  Instead, I was asked “How can you measure a child’s love of reading?” by one of the instructors.  Well, you can measure how many books they read independently, how much time they spend reading each week, or even how they rate the books they read.  None of those are perfect, but they begin to ask the question.

If you look, you will find numerous programs around your school that have little to no data showing their efficacy.  On a committee to create an alternative program (I began my teaching career in behavioral programs, so I get tapped for such initiatives), I was shown a model program at a local 5-8 school.  When I asked how many students “graduated” back to the mainstream, they had no answer.  In fact, none had.  The program was a warehouse for problem kids.  When I researched data or papers on such programs, I found (at the time) little existed.  I did find one interesting study that explained the phenomenon of no data: When a kid succeeds in an alternative program the program is praised, and when they don’t the kids are blamed (“What can you do with those kids?  You tried.”).  Data tends to reveal just how wasteful such programs are–in dollars, time and in wasted youth.

In your own classroom, you might be teaching skills that kids already know.  You might also think they “got it” when they don’t.  Our school (and state) has embraced Proficiency Based Learning (PBL).  If you are not using data, look it up.  Formatives and summatives.  No zeros.  Redo work until it’s right.  It is transformative.

Second, data drives everything you do regardless of your awareness.  Take the schedule.  One of the issues in our middle school is that students pick up new subjects (Band, World Language) and don’t lose old ones.  Where does the time come from?  When we diced that out, the kids had a lot of transitions–fourteen over a seven-hour day.  Adding it up, kids spent more time in the hallways over the course of a single day than they did learning Art over the week.  Data–time–revealed the issue.
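If you want to check that hallway math yourself, the sketch below does it.  The minutes per transition and the weekly Art block are my own assumptions for illustration, not figures from our actual schedule:

```python
# Rough sketch of the hallway-time arithmetic.  The minutes per
# transition and the weekly Art block are assumed values, not
# figures pulled from a real schedule.
transitions_per_day = 14
minutes_per_transition = 4       # assumed average walk-and-settle time
art_minutes_per_week = 45        # assumed single weekly Art block

hallway_minutes_per_day = transitions_per_day * minutes_per_transition
print(hallway_minutes_per_day)   # 56 minutes in hallways in one day
print(hallway_minutes_per_day > art_minutes_per_week)  # True
```

Swap in your own school’s numbers and see what the hallways cost you.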

Look at your own class.  With the push towards Project Based Learning (PBL, not to be confused with the other PBL) teachers really need to question the value of time spent.  For example, we had students create maps of Europe.  Some spent a full hour coloring them with colored pencils.  Did that add to their understanding of geography?  No, but it took an hour away from their working on their reading.  A basic question to ask is if the time put into something will get the value.

Time is a resource.  It is valuable.  Forget dollars and computers and supplies, time is the resource that helps students most.  Just having time to read can do wonders.  And we measure that in minutes.  In days.  Twenty minutes of Sustained Silent Reading (SSR) over one hundred and eighty days equals sixty hours of reading.  For a kid who refused to read, that’s gold.
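The arithmetic is simple enough to check in a couple of lines:

```python
# A year of daily SSR, measured in hours.
ssr_minutes_per_day = 20
school_days = 180

total_hours = ssr_minutes_per_day * school_days / 60
print(total_hours)  # 60.0 hours of reading over the year
```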

Don’t put on a sad math face.  Just look out at your class; can you tell who “gets it” or who is progressing?  Is the project you are doing worth the time, or having any effect at all?

Hattie, Homework and an Insightful Analytical Blog Post (and Cautionary Tale)

Our school is struggling with the issue of homework.

Recently, we have moved away from it.  Some parents equate more homework with academic rigor and quality.  Of course, the data says otherwise–but also supports that community perception of the school goes up when the homework load goes up.  Now, the rallying call is for a research based policy on homework.

I often start with John Hattie and his file reviews and meta-analyses because the guy covers everything.  From his file reviews you can also find great original research worth pursuing.

Rather than rehash, let me guide folks to an insightful analysis of Hattie’s conclusions in “Homework: What does the Hattie research actually say?” on Tom Sherrington’s headguruteacher blog.  Sherrington does a great job looking beyond the overall effect rating of d = 0.29, which implies that homework is not effective (Hattie argues that a practice is “effective” when above d = 0.40, and even that statement is a simplification of his work).  Note: Sherrington’s analysis begins a bit technical, a reflection of Hattie’s technical meta-analysis.  Wade through it–it’s worth it.

In short, Sherrington notes that Hattie reports that homework is ineffective at the primary grade level (d = 0.15), but quite effective at the secondary level (d = 0.64).  Of course, these results have caveats about type of work and the like.  Read the article, as Sherrington goes in-depth with the details and their implications.
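If effect sizes like d = 0.15 and d = 0.64 feel abstract, it helps to see how one gets computed.  The sketch below calculates Cohen’s d, the standardized mean difference behind numbers like Hattie’s, from two groups of scores.  The scores are entirely made up for illustration–they are not Hattie’s data:

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Standardized mean difference using a pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled

# Hypothetical reading scores for a homework and a no-homework group.
homework = [52, 55, 58, 60, 54, 57]
no_homework = [50, 53, 55, 56, 52, 54]
d = cohens_d(homework, no_homework)
print(round(d, 2))  # 1.04
```

A d of 0.64 means the average secondary student doing homework sits about two-thirds of a standard deviation above the average student not doing it–a meaningful shift.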

But Sherrington’s analysis is also instructive when we look at bias in analysis and reporting.  Up front, he makes his pro-homework views clear and shares a link to another piece of his, ‘Homework Matters: Great Teachers set Great Homework’.  The analysis that follows is, as I’ve said, pretty solid.  Until the end.

All of this makes sense to me and none of it challenges my predisposition to be a massive advocate for homework.  The key is to think about the micro- level issues, not to lose all of that in a ridiculous averaging process.  Even at primary level, students are not all the same.  Older, more able students in Year 5/6 may well benefit from homework where kids in Year 2 may not.  Let’s not lose the trees for the wood!  Also, what Hattie shows is that educational inputs, processes and outcomes are all highly subjective human interactions.  Expecting these things to be reduced sensibly into scientifically absolute measured truths is absurd.  Ultimately, education is about values and attitudes and we need to see all research in that context.

Let’s ignore the statement “All of this makes sense to me and none of it challenges my predisposition…” which is the definition of denial (or perhaps cognitive dissonance).  You would think, at least, Sherrington would concede that at the primary level homework offers little benefit (remember: d = 0.15, well below the d = 0.40 threshold for the effective range).  No.  Instead, he qualifies, “Older, more able students in Year 5/6 may well benefit…”  That word: May.  Ugh.

You could drive a bus through such phrases.  After presenting, breaking down and analyzing the data, he a) ignores the data that refutes his viewpoint, and b) presents an alternative based on no data.  You cannot do that!  It’s not good science!  If Sherrington wants to use his theory as a basis for more research, great.  Instead, he simply argues that the data says one thing, he believes another, and so he’s going with his gut.  At least he’s transparent about it.

But I have been in countless meetings and conversations like this: Someone concedes that a practice is not effective and then justifies its continuation.  Parents and educators will often come with opinion pieces that seem like common sense, but with little data.  With a little research these views are often refuted–and sometimes revealed to be quite harmful practices.  Yet, the idea will persist.  As professionals and organizations we refuse to trust data–or even consider it.

The comments on Sherrington’s piece are a typical reaction to any presentation of data that challenges orthodoxy.  Some accept it, but many qualify their views and use the data Hattie presents in an interpretive way.  Again, a good study in identifying bias.

My suggestion in such situations is to turn it around.  First of all, if some primary kids “may” benefit, might it also be said that some secondary kids “may not” benefit?  Do they get penalized?  What’s the plan for them?  Remember–when there is a majority, there is also a minority.  Why do we make policy when the majority agrees with us, but ignore the majority when they disagree?

Second, all choices have consequences: When you choose one thing, you are giving up another.  To have homework, students and teachers are giving up something.  That might be something simple like time–a student with thirty minutes of homework loses thirty minutes of play time, for example.  Is the loss worth the gain?  Hattie’s analysis seems to indicate that little is gained at the primary level, so any loss might not be worth it.  Educators should focus on the trade-off.  Is it worth it?

Sherrington seems most interested in getting in content and practice in the face of limited class time.  He is trading off the student’s time for academic gain.  That might be a fair trade-off, especially if the students are part of the decision-making process.  But there might be other inefficiencies in Sherrington’s lesson planning (I have no idea) that a student might want addressed before they give up their after-school time.

Is it worth it?  Really, that’s the essence of all of this data analysis.  Sherrington needs to respect that data a bit more instead of dismissing it.

Evaluating a Program With Minimal Data Points: Step 3: Where in the Year is the Gain?

Now we are getting to the meat of the sandwich!

Why Precise Accountability Is Important

Just because a student is learning does not mean a teacher is teaching.

Can we take credit for success?  Or failure?  We need to know the effectiveness of our program if we hope to increase that effectiveness.  What, for example, if the school year gains only make up for a huge summer regression?  What if students, after a year of work, only gain slightly?

Knowing precisely where gains and losses occur is essential for change–or for standing by a program that works!  Well-meaning initiatives are the norm, but they are often based on assumptions.  Two things then happen: 1. The reality of how learning occurs does not match the assumptions, so the results show no change, or 2. Those forces against change raise concerns that are as valid as (or more valid than) the evidence supporting the original initiative.  In the end, the initiative fails, fades or simply disappears and everyone feels just a tad more initiative fatigue.  Precise knowledge of where programs succeed and fail stops such failures and creates real change.

For example, our SU had a push for something called “Calendar 2.0”, a re-imagining of the school year calendar that would break the year into six-week, unit-sized chunks with a week or two break between.  It added no more days to the year, but spread out school days.  The intent was good.  During the breaks teachers could plan their next unit in reaction to the previous unit’s results, and students could get help mastering skills they had yet to gain.  Schools might even be able to offer enrichment during that time!  The shorter summer break was designed to prevent summer regression.

For families, it shrunk the summer break and created more week-long breaks.  It cut into summer camps and made child care difficult and somewhat random.  There was a vocal group, too, that argued for the unstructured time that summer afforded.  In reality, many families were going to plan vacations during those breaks–being told that their child was reading poorly and needed to stay would not trump the money already spent to go to Disney World.

Because Calendar 2.0 was not based on the clear need of students in our schools, it failed.  It sounded good–summer regression!  Planning time!  Time for shoring up skills!–but there was no local data supporting it.  Of all the areas teachers saw in need of shoring up, working in the summer did not rank highly.

Where in the Year is the Gain?

In my last post, we looked at year-to-year gains and regression.  When we look at the 12 kids who regressed, half did so over the summer.  But half did so over the school year.  So, over 180 days of instruction, reading actually regressed for 6 students.  Our school made them go backwards.  Both summer and school year regression are results we should be concerned about, but the latter points to something we can control but are not controlling.

Below, I identify students who had a dramatic difference between their summer and school year gains.  Some students showed consistent gains, while others consistently stagnated.  My two periods of examination–summer and school year–are based on discussions people at my school were having about programs we could institute (Calendar 2.0 being the most prominent).  You should examine the time periods that matter to you.  It could be the November-January holiday period (where family strife makes learning difficult), or May-June (end-of-year-itis), or days of the week (Mondays are about mentally coming back, Wednesday is the drag of “hump day” and Friday is checking out–so when DO kids focus?).  The important part is to use data to examine it.  All of my parenthetical asides are assumptions, many of which I have found untrue.  You will be surprised how often assumptions and old saws are wrong.

Students Who Gain Over School Year

Instead of always looking at the negative, let’s try and determine where kids succeed.  These students (scores highlighted in yellow) showed significantly more gains in reading over the school year than the summer.  In fact, relatively, this group lost ground over the summer.

Note how, because I gave the DRP in the spring, fall and again in the spring I was able to measure a) year-to-year growth, b) spring-to-fall growth (in short, gains or losses during the summer) and c) school year growth.
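The bookkeeping behind those three measures can be sketched like this.  The I90 scores are invented for illustration, not my students’ actual data:

```python
# Three growth measures from three DRP administrations:
# spring of grade 6, fall of grade 7, spring of grade 7.
# The I90 scores below are made up for illustration.
students = {
    "A": {"spring_6": 48, "fall_7": 46, "spring_7": 55},
    "B": {"spring_6": 52, "fall_7": 54, "spring_7": 53},
}

growth = {}
for name, s in students.items():
    growth[name] = {
        "summer": s["fall_7"] - s["spring_6"],       # spring-to-fall
        "school_year": s["spring_7"] - s["fall_7"],  # fall-to-spring
        "year_to_year": s["spring_7"] - s["spring_6"],
    }
    print(name, growth[name])
```

Student A here would be a summer-loss, school-year-gain kid; Student B gains over the summer but slips during the year.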

[Chart: Reading Data Gains]

The difference between the school year and summer demonstrates the importance of being in a literate environment for reading growth to occur.  Being forced to spend time with text leads to reading success.

Note that growth occurred both in students who struggle with reading (expected) and those who are in the top group.  Even good readers benefit from the literate school environment.  If these students get more time in a literate environment–more reading time–these gains should continue and increase.

Summer Gains, School Year Loss

There is a population who sees a loss in reading progress over the school year (noted in that dark khaki color), yet sees gains over the summer.  They are a diverse group, and this phenomenon could have many causes.

[Chart: Reading Data Gains 2]

Still, one factor carries through many of those identified: Time.  Many of these students lead busy lives, with responsibilities including sports, family, work and school.  Reading for fun, and leisure time in general, is at a premium.  Without practicing their reading, they show no growth–or a loss.

In the summer, these students enjoy a lighter schedule.  They fill it with reading.  In order for them to see year round growth they need time.  These same results have been observed in other studies and are especially prevalent in middle class students saddled with activities.

Will Successes Scale Up?

Okay, so we saw some success.  Often, when we do nothing, someone gains.  Was it me?  The conclusion I reached–more time spent reading will improve reading skills–makes instinctual sense.  And the research backs that up.  Good, right?

That said, the data I have at hand is thin.  I am relying on a certain knowledge of my students and that invites bias.  My sample sizes are small.  As a data point, the DRP is more like a machete than a scalpel. (Read The DRP as an Indicator Of….)  Will more Sustained Silent Reading (SSR) result in progress?  It will take some time for the data to prove me out or cause me to change course.  And, of course, more data, and more precise data.  But, I am aware of all of this as I move forward.  My program will react not to theory, but to what students experience in the classroom.

For example, of those 12 students who regressed, 3 are in the top stanines–they have nowhere to go.  Similarly, the glut of students with none to minimal gains are also in the top stanines nationally–they had nowhere to go.  But, I am wary of making excuses, so data.

*

Restatement: Introduction to These Next Few Blog Posts (Backstory for those coming to this post first).

We get a lot of data. It may come in the form of test scores or grades or assessments, but it is a lot.  And we are asked to use it. Make sense of it. Plan using it.

Two quotes I stick to are:

  • Data Drives Instruction
  • No Data, No Meeting

They are great cards to play when a meeting gets out of hand.  Either can stop an initiative in its tracks!

But all of the data can be overwhelming.  There are those who dismiss data because they “feel” they know the kids.  Some are afraid of it.  Many use it, but stop short of doing anything beyond confirming what they know–current state or progress.  And they can dismiss it when it does not confirm their beliefs. (“It’s an off year”)  Understanding data takes a certain amount of creativity.  At the same time, it must remain valid.  Good data analysis is like a photograph, capturing a picture of something you might not have otherwise seen.

This series of blog posts will take readers through a series of steps I took in evaluating the effectiveness of my reading program.  I used the DRP (Degree of Reading Power), a basic reading comprehension assessment, as my measure because it was available.  I’m also a literacy teacher, so my discussion will be through that lens–but this all works for anything from math to behavior data.

* A stanine (STAtistic NINE) is a nine-point scale with a mean of 5.  Imagine a bell curve; along the x-axis you divide it into nine equal-width parts.  The head and tail are very small areas (4% each) while the belly is huge (20%).  Some good information can be found in this Wikipedia entry.
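If you want to compute stanines yourself, the conventional cut points on the percentile scale are 4, 11, 23, 40, 60, 77, 89 and 96.  A minimal sketch:

```python
from bisect import bisect_right

# Conventional stanine cut points on the percentile scale:
# stanines 1 and 9 each cover about 4%, stanine 5 about 20%.
CUTS = [4, 11, 23, 40, 60, 77, 89, 96]

def stanine(percentile):
    """Map a national percentile (0-100) to a stanine (1-9)."""
    return bisect_right(CUTS, percentile) + 1

print(stanine(50))  # 5 -- the fat middle of the curve
print(stanine(97))  # 9 -- the tiny head
print(stanine(3))   # 1 -- the tiny tail
```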

DRP as an Indicator Of….

When I arrived at my current school a decade ago, there was no definitive measure of a student’s ability to read.  That may be hard to conceive in this data-drives-instruction landscape (although in education you can find plenty of instances where a lot of data will not tell you the most basic things about students), but the feeling was that teachers knew the students and the word was spread as they moved from grade to grade.

In November of that first year, I found that I had a student who did not know alphabetical order.  I had asked her to look something up in the dictionary; she faked it for a few minutes and then threw a fit that got us distracted from the whole enterprise.  Fortunately, my aide noticed the faking and followed up.  This student was a known non-reader, but no one knew she was reading at a second grade level.  How, after all, could someone be reading at so low a level?  And she carried around grade level appropriate books and sat quietly during SSR.  When pressed, she created a scene of distraction.  In trying to fit in, she slipped through the cracks.  She is exactly why we have and use data today.

What is the DRP? (Skip Down for Discussion on Validity)

At a previous job, we had used the Degree of Reading Power, or DRP, on 10th graders.  Created by Questar, the DRP measures reading comprehension.  In the assessment a passage is provided with certain words removed.  Students are asked to fill in the blank from a selection of five words.  The 7th grade test is 63 questions, while the 10th grade was 110 when I gave it years ago (I doubt it’s changed).  The questions start out easy, and get progressively harder as the student goes on.

We use bubble sheets–it’s that dull.  But, in adopting the DRP, we have a screening tool that lets us question who has mastered the basics of reading.  From that baseline, we ask follow-up questions, plus have students write reading journals and answer prompts to measure understanding and deeper meaning throughout the year.  For the hour we put into it, we get what we need out of it.

The DRP also provides some good data.  From the raw score, Questar gives you an Independent Reading Score, or I90.  The I90 indicates the level of a book a student can independently read without problems or assistance.  So, Harry Potter and the Sorcerer’s Stone is ranked a 56, meaning a student who scores an I90: 56 should be able to read it unassisted (this does not take into account cultural literacy or maturity, which is why caution should be used on Of Mice and Men’s “easier” score of 53).  It also offers I80 and I70 scores, which indicate increasing support needed for understanding, plus a “frustration level”–the point where a student might throw the textbook across the room.  Questar also ranks a student’s score against the nation, providing national percentiles and stanines.  I’ve never asked what database they get this information from (is it against other users of the DRP, or larger pools?), or if it updates every year, but it’s a larger sample size nonetheless.
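To make the I90/I80/I70 idea concrete, here is a sketch of matching one student against a shelf of books.  The student’s band scores and the third book are invented–Questar derives the real bands from the raw score–and only the Harry Potter (56) and Of Mice and Men (53) DRP values come from above:

```python
# Sketch of matching a student to books with the I90/I80/I70 bands.
# The student's scores are invented; Questar derives the real bands
# from the raw score.
student = {"i90": 56, "i80": 60, "i70": 63}

books = {
    "Harry Potter and the Sorcerer's Stone": 56,
    "Of Mice and Men": 53,
    "A dense history textbook": 66,  # invented title and score
}

def support_level(drp, s):
    """Classify a book's DRP difficulty against one student's bands."""
    if drp <= s["i90"]:
        return "independent"
    if drp <= s["i80"]:
        return "some support"
    if drp <= s["i70"]:
        return "heavy support"
    return "frustration"

for title, drp in books.items():
    print(title, "->", support_level(drp, student))
```

For this hypothetical student, both novels land in the independent band while the textbook sits past the frustration point.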

When I first did this, there was a booklet filled with tables that converted all of this for you; they later came out with a computer database.  The company also provided a directory of popular classroom texts and their DRP values, so you could match students with books.  None of these CD-ROMs ever really worked on our computers.  Questar seemed locked in the 1970s.  The online information today smells like a dying company or division being run out of habit, where each year someone has an idea to update stuff but never to revamp the entire test for the NCLB age.  Even the name Questar sounds like one of the lesser computers of the early ’80s competing with Tandy and Commodore.  I think they know what they have and keep plugging.

The neatest thing about the DRP, though, is that the I90 score measures across grades.  You can compare an I90 score from the 2nd grade test with an I90 from the 7th grade test.  So, if a student scores a 43 as a 2nd grader and a 45 as a 7th grader, you know they have barely progressed over five years of schooling.  You can also give a struggling 7th grader the 5th grade test; they will not hit their frustration point until much deeper in, providing a more accurate result.  In the end, I like to measure growth.  The DRP is great for that.

What Does the DRP Really Measure?

Every September we give the DRP to our 7th grade, and every May we give it again.  Because of the design, we can measure I90 growth over the year.  We can also measure it against their 6th grade result.  If we compare the 6th grade spring results against the 7th grade fall, we are able to measure gain (or loss) over the summer.  We can do the same thing when we measure the same kids in 8th grade.

But the DRP is dull.  And, remember, the questions start out easy, and get progressively harder as the student goes on.  For some students, the first hard question throws them.  Then, they just color in dots.  One way Questar makes money is that they sell their bubble sheets and then correct them for you (and put the results on a disk, ready for manipulation).  Instead of paying for that, we took our answer key and made an overlay (overhead transparency sent through the photocopier).  In correcting it ourselves we can see where kids give up from a series of wrong answers.

Which leads us to a question we’ve wondered about for a while: Does the DRP measure reading comprehension or stamina?

To answer that question, last September I broke the 63-question DRP I usually give my 7th graders into three parts of 21 questions each.  Then, I measured growth (or not) against their 6th grade scores from the previous May.  In the end, nothing significant showed up except that one group got better at reading over the summer: over-scheduled kids.  I had read that high-achieving middle class kids who participate in a lot of activities–soccer, music, the school play–cannot find time to read during the school year.  My data showed that, but nothing about stamina.  In fact, the ups and downs over the summer made little sense.

But discoveries often happen by accident.  This May, I went back to the old administration of the test–63 questions in one sitting.  Our 8th graders were taking an NCLB-mandated Science assessment, so I used that time to give the 7th graders the DRP.  Because the 8th grade was monopolizing our aides and classrooms, I set the 7th graders up in the cafeteria while the kitchen staff whipped up lunch.  My hope was that the blowers and bacon smell would be white noise and calming as the students and DRP assessments spread out across the antiseptic tables in the grey room.  Some finished quickly, while others lingered over an hour.

The results were not inspiring.  I had been unhappy with my reading program–I’m unhappy every year with both my reading and writing programs, but this year I had weak data to prove it.  I uploaded my scores into a spreadsheet, looked at growth, ranked, and sorted.  The high kids stayed high, and the middle kids stayed in the middle, with a few growing or dropping a bit.  Even that assessment is a bit inflated, if I’m honest.  It was not a good year.

Then there were the kids at the bottom.  About ten students had dropped between ten and thirty points over the school year (on a scale topping out at 80).  This was significant.  Our entire Tier II intervention placement was based on these scores.  Several students who had moved out of Tier II were looking at returning in 8th grade.  Those receiving Tier II were seeing regression.  What, I wondered, was I doing wrong?  (I had ideas, and it started with sacrificing SSR time for any distraction that came down the pipe).

In looking at the names of the students, I realized that those who either had a diagnosis of ADD or ADHD, or whom we suspected of having ADD or ADHD, had tanked.  Our literacy group had often wondered to what extent the DRP was a test of stamina as much as it tested reading.

In looking at their answer sheets, I noticed that around the 20th question these students began to get questions wrong.  Not just a few as the questions got harder, but a string wrong and then another string wrong.  The breaks between those strings, I suspected, came down to chance: even when guessing, a student will get some correct, simple probability.  They had given up and were just filling in bubbles.  Bad data.
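
That hunch is simple arithmetic, and it can be checked in a few lines (a sketch; the run lengths are illustrative, the one-in-five odds come from the test’s five answer choices):

```python
# On a five-choice item, a blind guess misses 4 times out of 5, so the odds
# of a pure guesser missing an entire run of questions shrink quickly.
def p_all_wrong(run_length, choices=5):
    """Probability a guesser gets every question in a run wrong."""
    return ((choices - 1) / choices) ** run_length

print(round(p_all_wrong(5), 3))   # 0.328
print(round(p_all_wrong(10), 3))  # 0.107
print(round(p_all_wrong(15), 3))  # 0.035
```

In other words, even blind bubbling hits about one answer in five, which is what breaks up the long strings of wrong answers on the sheets.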

The next week, I had a few of these students redo questions 22 through 42.  They were placed in a quiet room in two groups of four.  I explained my belief in them, appealed to their sense of pride, and controlled the environment.  In short, I was trying to get them to focus on the task while setting up an environment that fostered focus.  Six of the eight did significantly better, from 5 to 12 questions better.  When I had them redo the last twenty questions, I saw the same results.  Five of the students went from the 4th or 5th stanine for reading nationally to the 8th.

Of the two students who showed little improvement, one is not ADD or ADHD.  The other is suspected of ADHD and was even more hyperactive than during the first administration, and openly hostile to the retake.  Either those six had learned to read in a week, or I had been measuring stamina before.

Why does it matter beyond the one assessment?  Our school uses the DRP data to decide who gets Tier II help and who has “graduated” to Tier I.  Tier II instruction is scheduled against World Language, so it can be a reward or a punishment depending on the family.  There is some pressure on students to take a World Language (often from their parents), or a desire by students to “flunk” into Tier II so they can a) avoid the hard work of learning French and b) be with their Tier II friends.  These numbers weigh heavily in the court of “what’s best for the child”.

It also matters for how we take other, higher-stakes assessments.  For their NCLB assessment, Vermont uses the SBAC.  Entirely online, it gives students a lot of control–if they choose to take it.  Those who click through quickly and then take a long break find those answers locked when they return.  They can, though, work slowly through a small number, break, and then return for a few more.  This is different from past assessments, which means we need to retrain and empower students.  These results tell us that we need to instruct some students in how to take a test–instruction that is tailored to individual students and different from just attacking the questions themselves.  The results also tell us we need to create a different environment–one in which students can move about without disturbing others and are less tied to a clock.

All of this leads me to a more outlandish proposition that I am still thinking about: Our school uses the DRP to measure where students are, but I’d like assessment to be more predictive about potential.  Why?  Because when an assessment just measures, I find the school’s reaction is to address what they think it measures.  So, those who tank the DRP get put into the standard Tier II reading program.  But if we can measure elements that go into that measure–like stamina–it gives us a better idea of what to address.  The potential is there.  The fix, then, might be more around Habits of Mind than more phonics.  At present, we are not sure.

Of course, our support services respond by offering more assessments.  But that is often guesswork and time consuming.  If the cafeteria with bacon wafting through the room is not conducive to results, I cannot imagine the forty minutes a special educator can give me to do a “quick” BES is much better.  And the coordinator who battered kids with an AIMSweb in a noisy hallway (the only space available) produced little that was useful.  And, if anything is found, the student is often dumped into a program with a promise that “we’ll work on that” when they have time after the reading instruction is done.  No one has time.  In identifying causes, we might find the solution can be had with greater efficiency.

My hope is for assessments that are more predictive, and that work by empowering students.  When students value the assessment and understand the consequences of their choices, they own it.  When we give them the tools to do their best work, they use them.  In the end, the measure becomes about reading.

Then we’ll have to find another test for stamina.

Schedule Choices: Class Size vs. Number of Classes

For years educators have been advocating for smaller class sizes.  It’s a noble long-term goal, but it is also an example of how a myopic focus on one issue can blind people to the big picture, especially in the short term.  And it reinforces the notion that every choice means losing something in order to gain.

When they sat down in May to create a schedule for the following year, Grace Haven School, a K-8, had smaller class sizes at the top of their list.  Unfortunately, the primary class sizes were locked.  There was a set number of students (s), and, when the budget was crafted, the number of classroom teachers was determined (t).  The formula was simple:

s/t = c

Because each primary class gets a teacher assigned to it, that class is one section.  In Grace Haven’s case, they had 500 students and 25 full-time faculty for core instruction.  That resulted in classes of 20, on average.

Of course, averages are a funny thing; in the real world of education, they often do not give an accurate picture of the situation.  The administration had decided to have smaller classes in K-2 and much larger classes in the middle school.  But the formula holds: if you know two of the variables, the third is a given.  So, if you want a different outcome (i.e., even smaller class sizes), you need to change one of the other two variables by shrinking your student population or hiring more teachers.
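
To make the arithmetic concrete, here is the formula as a few lines of Python (a sketch, using Grace Haven’s numbers from above):

```python
# s/t = c: students divided by teachers gives the average class size.
students = 500  # s: total enrollment
teachers = 25   # t: full-time core faculty
class_size = students / teachers
print(class_size)  # 20.0
```

Fix any two of the variables and the third is determined; there is no knob left to turn without changing enrollment or hiring.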

The Specials, though, had more flexibility.  Because PE, Art, Music, Health, and such could be untethered from the core classroom, those teachers could control their class sizes by changing the number of sections they offered.  By changing the number of sections, they were, essentially, adding teachers.

To understand this, let’s juggle the formula a bit:

s/c = t

Students (s) remains the same: 500.  That leaves the World Language teacher with a range: she could teach one section holding all 500 students (c = 500, t = 1), or 500 sections with a single student in each (c = 1, t = 500).  Both are ridiculous, but you can see the sliding range between them.

At present, each Specials teacher had 25 sections with 20 kids in each section.  Over five days, that meant 5 sections a day.

But the desire was to have smaller class sizes, for a number of reasons; one being that fewer students meant more hands-on chances for each one.  It was solid pedagogy.  So, they aimed for 15 in a class.  Punch that into the formula:

500/15 = 33.3.

Call that 33 sections, with a couple of classes holding 16 students.  Still, in order to get the desired class size, each Special would have to offer 33 sections.  Instead of 5 sections a day, they were now looking at 6.6 sections a day.  Yikes!

That, anyway, was the reaction of the faculty.  The gain of 1.6 sections a day did not just mean more teaching, with more sections to prep for.  In a day of fixed length, the gain of 1.6 classes also meant the LOSS of 1.6 prep periods.  More sections to prep for, and less time to do it.
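
The trade-off works out in a few lines (a rough model; it uses the committee’s own rounding down to 33 sections, with a couple of sections absorbing a 16th student):

```python
students, days = 500, 5

def daily_sections(class_size):
    # Floor division mirrors the committee's rounding: 500 students at 15
    # per class yields 33 sections, a few of them holding 16.
    return (students // class_size) / days

old = daily_sections(20)    # 25 sections -> 5.0 a day
new = daily_sections(15)    # 33 sections -> 6.6 a day
print(round(new - old, 1))  # 1.6 extra classes a day, 1.6 fewer preps
```

The number that matters is the last one: every extra daily section comes straight out of prep time.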

The Specials were caught unaware.  Unfortunately, by the time people realized the trade-off, the schedule had gone further down the road.  Attempts to undo the damage were met with annoyance by others on the committee, and pointing out that they were losing preps garnered little sympathy from primary teachers who had few to begin with.  In the end, the Specials wound up with a difficult schedule.

Lesson: With limited resources (time, bodies), choices are trade-offs.  Know what you are giving up for what is gained.  Is your gain greater than your loss?  Go in eyes-open.