Evaluating a Program With Minimal Data Points: Step 1: Sorting Proficiency

Introduction to These Next Few Blog Posts (Skip down to the meat)

We get a lot of data. It may come in the form of test scores or grades or assessments, but it is a lot.  And we are asked to use it. Make sense of it. Plan using it.

Two quotes I stick to are:

  • Data Drives Instruction
  • No Data, No Meeting

They are great cards to play when a meeting gets out of hand.  Either can stop an initiative in its tracks!

But all of the data can be overwhelming.  There are those who dismiss data because they “feel” they know the kids.  Some are afraid of it.  Many use it, but stop short of doing anything beyond confirming what they know–current state or progress.  And they can dismiss it when it does not confirm their beliefs. (“It’s an off year”)  Understanding data takes a certain amount of creativity.  At the same time, it must remain valid.  Good data analysis is like a photograph, capturing a picture of something you might not have otherwise seen.

This series of blog posts will take readers through a series of steps I took in evaluating the effectiveness of my reading program.  I used the DRP (Degree of Reading Power), a basic reading comprehension assessment, as my measure because it was available.  I’m also a literacy teacher, so my discussion will be through that lens–but this all works for anything from math to behavior data.

Step 1: Sorting Proficiency

The first, most basic step in analyzing a program is to find out how many of your students can do the skill you are interested in. It seems basic, but so many teachers assume they know the answer. Never assume. A number of our students read a lot, but don’t really think about their reading–their ability is on the surface and their memory of what they read is weak. Because we see them with their nose in a book, though, we tag them as readers. Others can read, but don’t–and yet they test well. In a previous post I discussed whether the DRP was a measure of reading or of stamina (or the ability to focus). That may be an issue for some. It is certainly an excuse–they don’t test well, or they’re unable to focus. You can do that analysis later, but you first have to see where your class stands before you begin asking why and proposing solutions.

Choose an assessment and give it. You want one you would consider valid–that is, one with as few confounding variables as possible. You can measure books and/or pages read, stamina in SSR, depth of reading using reading logs, or a good old standardized test. What counts as valid is up for debate, but I used the basic off-the-shelf DRP to measure reading comprehension. You also want an assessment you can give multiple times. We administer it in the fall and spring to allow for tracking progress (more on that later).
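If you like to check the count with something other than a spreadsheet, the same first pass in Python might look something like this–a minimal sketch, where the names, scores, and cutoff are all invented; substitute your own assessment’s scale and your local standard.

```python
# A minimal sketch: count how many students clear a chosen proficiency line.
# Names, scores, and the cutoff below are invented for illustration.

fall_scores = {
    "Student A": 72,
    "Student B": 55,
    "Student C": 61,
    "Student D": 48,
}

LOCAL_STANDARD = 60  # hypothetical cutoff; use whatever your supervisory union sets

proficient = [name for name, score in fall_scores.items() if score >= LOCAL_STANDARD]
below = [name for name in fall_scores if name not in proficient]

print(f"{len(proficient)} of {len(fall_scores)} students meet the standard")
print(f"{len(below)} students fall below the line: {', '.join(below)}")
```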

Here is a sample of a class from Grace Union Elementary:

[Image: Reading Data 1]

Students highlighted in lavender are in the top three stanines* of achievement nationally, or the top 23% of readers in the same grade.  In this cohort there are 24 out of 47 in that group.  So, half of our students are top readers.  In addition, 3 other students met our local standard, but fell short of the top national stanines (highlighted in purple).  Twenty-seven out of 47 students scoring well is great news, right?

It depends. Looking at your data, you do have to decide where the line between proficient and below falls. Our supervisory union does that by pegging the “local standard” to a certain national average point. You might disagree with your local designation–I used the stanine to raise my bar above ours–but since the results change depending on where that line sits, your choice is important. What line will reveal the most about your program?

For example, an additional 15 students were in the 5th or 6th stanine, putting them at or above the mean nationally.  Not bad, when added to the 24 who were in the 7th, 8th and 9th stanines.  I could comfortably go to our admins and the school board and talk about 39 out of 47 being average or above.  If I wanted to, I could point out that a number of our struggling readers have IEPs or other plans.  Everyone would agree that my program is solid.

Except, locally, that’s not good enough.  When they get to high school they will struggle if they are merely average.  Six of my students were in the lowest stanines, or about 1 out of 8.  Not great numbers.  And 20 students don’t meet our local standard of proficiency.  They are leaving my classroom unprepared for what awaits them.
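To see how much those headline numbers depend on where you draw the line, here is a small sketch that counts one invented set of stanines against three different cutoffs–at or above the mean, the top three stanines, and the lowest stanines:

```python
# A minimal sketch of how the headline count shifts with the cutoff you choose.
# The stanine values are invented; one value per student.

stanines = [9, 8, 8, 7, 7, 6, 6, 5, 5, 4, 3, 2]

at_or_above_mean = sum(1 for s in stanines if s >= 5)  # generous line: stanine 5 or better
top_readers = sum(1 for s in stanines if s >= 7)       # stricter line: top three stanines
struggling = sum(1 for s in stanines if s <= 2)        # lowest stanines

print(f"Stanine 5+:  {at_or_above_mean} of {len(stanines)}")
print(f"Stanine 7+:  {top_readers} of {len(stanines)}")
print(f"Stanine 1-2: {struggling} of {len(stanines)}")
```

Same class, three very different stories.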

Rubrics are a start. What is “proficiency” for you? The NCLB data is a nice yardstick–what measures do you have that correlate with the data you are seeing there? Think about whether they do, in fact, correlate. Our old NCLB writing data (NECAP) seemed to inflate our ability, so we created a local assessment that gives us a little guidance on what to work on. I correlate that with what I see in my classroom assignments. If we still gave the old NECAP, I’d take the data with a grain of salt.
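If you want to check that correlation with more than a gut feeling, pair each student’s local score with their standardized score and compute a simple Pearson’s r. A minimal sketch, with made-up numbers:

```python
# A minimal sketch: do the local assessment and the standardized test agree?
# Both score lists are invented; in practice, pull each student's pair of scores.
from statistics import correlation  # Pearson's r; Python 3.10+

local_rubric = [3.0, 2.5, 3.5, 4.0, 2.0, 3.0, 3.5]  # hypothetical local writing scores
standardized = [58, 52, 64, 71, 45, 60, 62]          # hypothetical matching test scores

r = correlation(local_rubric, standardized)
print(f"Pearson r = {r:.2f}")  # closer to 1.0 means the two measures largely agree
```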

For half of our students, reading is a natural activity and they do it well. Twenty-seven students can claim to be proficient. But, even for them, I have no idea whether I can claim their success as a result of my program.

That is the next question.

* A stanine (STAndard NINE) is a nine-point scale with a mean of 5. Imagine a bell curve whose x-axis is divided into nine bands of equal width. The head and tail cover very little area (about 4% each) while the belly is huge (20%). Some good information can be found in this Wikipedia entry.
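For the curious, the conventional stanine cut points map a national percentile rank onto the 1–9 scale, which is also where the “top 23%” figure above comes from (stanines 7–9 begin at the 77th percentile). A minimal sketch:

```python
# A minimal sketch: convert a national percentile rank (0-100) into a 1-9
# stanine using the conventional cumulative cut points.

CUT_POINTS = [4, 11, 23, 40, 60, 77, 89, 96]  # cumulative percentages

def stanine(percentile: float) -> int:
    """Return the stanine (1-9) for a national percentile rank."""
    return 1 + sum(percentile >= cut for cut in CUT_POINTS)

for p in (3, 50, 90, 97):
    print(f"percentile {p:>2} -> stanine {stanine(p)}")
```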
