Unknown Known

There are the known knowns; there are things we know we know.  We also know there are known unknowns; that is to say we know there are some things we do not know.  But there are also unknown unknowns–there are things we do not know we don’t know. –Donald Rumsfeld, Secretary of Defense under George W. Bush

Donald Rumsfeld made these comments in response to the possibility of weapons of mass destruction in Iraq.  He got a lot of flak and mockery for it (the flip-floppery of the words and their utter ridiculousness on the surface seemed to encapsulate the Bush White House response to everything, even if the core idea was valid) from an American public that was afraid after 9/11, sitting on the cusp of a new, violent world.  As Iraq proved, there was plenty we did not know we did not know.


Nate Silver, the founder of the great sports, politics, and economics data site FiveThirtyEight, also wrote a pretty good (if a bit long) book, The Signal and the Noise.  It’s worth the read, as it will make you think about data in new ways and really question the assessments we do (and the assumptions we can and cannot make).  In the chapter “What You Don’t Know Can Hurt You” he looks at intelligence failures, including 9/11 and Iraq, and delves into the idea of the “knowns”.  The breakdown is interesting and important:

Known Knowns: You know the problem and have the answer.  You know you need enough chairs for your class to sit.  Thanks to a class list, you know how many kids will be in your class.

Known Unknown: You know the problem but do not have the answer.  This is the first day of class.  You know the material you have to cover.  Unknown are the skills students bring into the classroom: Can they even read the text you are counting on?  Unknown are the personalities: Do they like to learn, or are there “issues”?  These questions x 1,000.  Schools combat this unknown with assessments and data; a good administration will give teachers access to databases or just include basic data in your class list (my first job included DRP scores and IEP designations with the list).  Elementary schools spend a lot of time crafting classroom balance when moving kids between grades, and reporting on each child before the new teacher takes over.  High schools have more informal information exchanges, in the teachers’ room over coffee or, later, in a bar over drinks.  Schools recognize this problem and use data to solve it as much as they can (caffeine and alcohol are mere balms).

Unknown Unknown: “A contingency that we have not even considered,” writes Silver.  “We have some kind of mental block against it, or our experience is inadequate to imagine it; it’s as though it doesn’t even exist.” (421)  There is a reason we pay experienced teachers more.  If you’ve had a student teacher recently, or mentored a new one, you can see they have no idea what lies ahead.  Not only do new teachers not realize it can take fifteen minutes for a student to find a pencil, they have NO idea what the home lives of many students are like, and how unimportant Poe’s “The Murders in the Rue Morgue” is at that moment in time.  Who knew?  There is a reason so many teachers quit in the first three years.

* If we could get parents, school board members, and others to teach for a month, the entire tone around teacher negotiations would change.  There is nothing more frustrating (I would argue it is a microaggression) than a non-teacher suggesting a lesson.  “If you would just….”  Thanks, no.

Unknown Known: As veteran and studied educators, we know the results but not the problems (we do make judgments, though–I used to blame middle school teachers for what my 9th graders couldn’t do, until I became a middle school teacher (I try not to blame the elementary teachers)).  I remember, a few years ago, getting data showing that nearly 25% of my incoming students were illiterate.  Easily, one-third were below grade level.  That state is known.  The problem is not.

Silver does not talk about this–and I have no doubt I’m getting the binary wrong and this state of mental organization doesn’t even exist–but for me the “unknown known” seems to be the blind spot in education.  Instead of looking at a known problem (kids can’t read) and the unknown solution (Why?  How can we fix it?) we should be looking at how we got here.

The Difference and How It Helps

In our district, we are pushed to look at where the student is and solve the problem.  A student can’t read (known), so we teach them how (unknown how, but solve it over 180 days).  That’s the school year in a nutshell.  Next year, repeat.

The problem is that it is reactive.  Ten (more?) years ago our district went full RtI (Response to Intervention).  For at least two school years (a long time for many initiatives to last) we talked about using data (then, a new idea) to drive instruction (an even newer idea).  If Johnny did not know his alphabet, someone would take him aside and drill it; then he’d be back with the class and ready to push on as an equal.  It is a great idea.  It reminds me of herding stray sheep to keep the whole alive.  We even got these great laminated folders that detailed much of the philosophy and protocol (I kept mine–it is so clear–while I’ve dumped my share of other such initiatives and supporting materials).  Unfortunately, RtI got watered down by the differentiation push that followed it.  Plus, because PBL (Proficiency Based Learning) had not yet happened, those teachers in the upper grades complained that their content was too complexly woven together to allow a simple intervention (PBL and targets take some of that argument away–just teach a focused Evidence group, for example, if that’s their weak spot).

The unknown known is not about the student in front of you.  It is about the path that brought them to this moment.  You know the result: one-third of my class struggled with literacy.  I don’t know the problem: a large number of students arrived at middle school without being able to read, and I don’t know why.  We have good teachers in the younger grades.  We have resources.  It is unknown how we got here.

In putting the unknown first (unknown known) we focus on the system, not the individual.  In this instance, I am not looking at my students but those who are coming up.  In theory, if I can know that unknown those coming into my class in future years will not have this issue–they will be able to read, and I can focus on bringing them up even higher.

The Power (and Blind Spots) of Linear Thinking

Semantics?  Perhaps.  But there is a lot to be said about linear thinking.  Our school is dogged by linear thinkers who cannot see the complex interconnectedness that is life (and teaching).  They often hold back discussions and real change because they cannot see how fixing C before B helps get to M–and we all get bogged down.  But linear also clarifies.  In thinking about what led up to this moment, our solutions look to the future.

The question to ask is simple: How did we get here?  It is one we rarely have time to address because, teaching.  Those one-third in my classroom right now need me.  Those two-thirds need me, too.  Someone, though, needs to be thinking about how we got here.

But we can know the unknown known (confused or just meta?).  Ten years ago, I sat in a Literacy Committee meeting and heard from the kindergarten about this group.  The next year, the first grade told us about them.  By the time the third grade teacher reported out about “this group” I asked what we were doing about it.  Nothing.  Blank stares.  Crickets.  Then, we threw extra resources at it.  They got better.  To the fourth grade teacher, this was a simple problem and solution.  For me, though, that group was known but I had no idea why.  When they got to me, I knew plenty.

Root out those linear thinkers who bog down every other discussion and put them to work.  On a wall in a conference room, put two lists: Cause and Effect.  The latter is what we know (literacy).  Charge them with solving the cause.


Maker Space, Lending Libraries and 21st Century Learning

Maker space has gone from mainstream nerd to mainstream.  Along with Project Based Learning (the other PBL), educators have embraced the maker space movement as a 21st century education must.  As with any movement, the push often comes before the need is evident.  Some good, earnest work at our school led to naught as our librarian and tech educator failed to start a fire.  Below is my suggestion for expanding our library’s function and supporting PBL and the maker movement.

As we are contemplating and experimenting with Project Based Learning, we are running into a few issues.  My thoughts turned to the Maker Space.  From another conversation I had with our librarian, it sounds like it is not being utilized.  Based on our classroom needs, I wanted to offer a few thoughts that might help teachers in their efforts to do projects and encourage “hands on” and exploratory learning–the ideal of the maker space movement.

Lending Library of Tools: As libraries move to expand beyond books, I have read about ones that lend tools.  It makes sense–I use my post hole digger about once every five years; if the library had it, I’d still have access and so would the entire community.  I have a box of 10 saws and a few hammers in my classroom from our long-gone tech program, but no one in the school knows about them or thinks about them, and if someone did borrow one, I’d worry it would never come back.  Still, I only use them a few times a year–they are an underutilized resource.  Put a bar code on them and now they are public, in the database and tracked.

There are so many tools classroom teachers search for: glue guns, longer tape measures, and Phillips screwdrivers are always in demand.  These are not tools most classrooms would need on a regular basis, but when you need them, you need them.  Put them together and bar code them for inventory control.

Depot of Free Consumables: When a teacher needs cardboard, shoe boxes, yogurt containers, and such, the all-staff email goes out.  In that one-week window, a whole classroom project hinges on our ability to consume and remember to bring in a certain number of 2-liter bottles.  While teachers could do a better job of planning, when we respond to student needs we often need to turn on a dime–suddenly, shoebox dioramas make sense.  There are constant (cardboard) and perennial (yogurt cups) needs that could be slowly collected–I toss a single good shoebox about every month.  A single shelving unit of dedicated tubs for such things would go far.

Store of Emergency Cost Consumables: I am often hit up for duct tape.  I have a lot, because when you need it you need it, plus the borrowing.  There is nothing more frustrating than needing supplies.  A project grinds to a halt for want of a glue stick.  Or blue paint.  As a community, we are good about sharing.  And our Art teacher is generous, although her budget is limited and each request costs her class time and interruptions.  Still, a basic stockroom of project-based supplies would help support the maker ideal.  This would not be a “borrow”, but a store, and users would be charged in their supplies account for the replacement.  It would be a lifeline, though.

I know the first stumbling block for this re-imagining is space.  Then money.  Management, too.  If the idea is worth pursuing, though, we can work those details out.  If we are serious about project based learning and letting the students lead their own learning, our school is going to have to be ready to provide the resources necessary–and we are going to be hard pressed in our individual classrooms to provide for every scenario.

Elo Balanced Grading

How do you fairly measure two students of different academic ability?

One way is to use a rubric.  It’s cold and fair, giving a score for where each student falls on the assignment.  Aligned with the Common Core, it objectively shows if a student is on grade level.  But, if you have heterogeneous classes, how can you push that scoring to better reflect progress, disincentivize coasting, and add more options to your differentiated offerings?

Elo ratings.

Think sports.  On a traditional scoreboard, one team earns a win and the other earns a loss–it does not matter if the teams are evenly matched, mismatched, or how close the score is.  After the match, one team is 1-0 and the other 0-1.  Those of us who have watched underdogs come close–or triumph–know that the score does not always reflect what happened on the field.  Elo weighs those factors and results.

The Elo rating system is a method for calculating the relative skill levels of players in competitor-versus-competitor games.  Created by Arpad Elo, a Hungarian-born American physics professor, it was originally used for chess.  It has been adapted for other competitive sports, including football and soccer.

In short, Elo ratings give a score to two competitors prior to a match based on their ability, and update scores based on new results.  Let’s say a poor team goes against a very good one.  Intuitively, we expect the poor team to lose.  If the poor team loses, their Elo score goes down, but not by much–no surprise in the results.  The very good team rises in Elo, but, also, not by much–again, no surprise.  We expect that outcome.

If, though, the poor team wins, they get a bunch of points in the Elo, and the good team loses a bunch.  The poor team earns more in a victory than the very good team because their victory required playing above their norm.

If you use a more advanced algorithm, Elo can also account for the closeness in a score.  If the poor team should have lost by a huge margin, but the game is close, their points lost are minimal–it recognizes they played above their norm.  Similarly, the points gained by the very good team are also minimal, even with a win, because it should have been a blow-out.
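The basic update rule behind all of this is compact.  Here is a minimal Python sketch of the classic Elo formulas (the 400 scale and K=32 are the traditional chess defaults, not anything specific to this post):

```python
def expected_score(r_a, r_b):
    """Probability that competitor A beats competitor B under Elo."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update(r_a, r_b, score_a, k=32):
    """Return both new ratings after a match.

    score_a is 1 for an A win, 0.5 for a draw, 0 for a loss.
    """
    e_a = expected_score(r_a, r_b)
    # Winner gains and loser drops by the same amount (zero-sum).
    return r_a + k * (score_a - e_a), r_b + k * ((1 - score_a) - (1 - e_a))

# An upset: a weak team (1200) beats a strong one (1600).
# Elo expected the weak team to win only about 9% of the time,
# so it gains a large chunk of points and the strong team loses as many.
weak, strong = update(1200, 1600, 1)
```

Run that upset and the weak team climbs roughly 29 points to about 1229, while the strong team falls by the same 29.  Had the favorite won instead, the ratings would barely have moved.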

Student Scores, Assignments and Differentiation

In using a rubric with a scale of 1 to 4, the struggling student will inevitably earn a 2.  That score is accurate, but does not take into account risk, progress or the difficulty of the task.  Similarly, a high achieving student typically collects 3s and 4s.  No surprise.  It is the sports equivalent of being 0-1 or 1-0.

With Elo the struggling student can score against the assignment, earning more for tackling a more difficult one–even if they fall short.  So, if a student who typically earns a 2 does well on a particularly difficult assignment, they might earn a personal score of 2.5.  In addition, if the high flyer decides to coast, their scores will not be as stellar as if they had chosen a harder assignment.  Perhaps a 3.5 personal score instead of an automatic 4.

Not that the rubric should go out the window–that is the objective measure.  It would make sense to have two scores–a cold rubric to have a norm, plus an Elo personal score.  The former would be the equivalent of the scoreboard, the latter the classic Elo.  This personal score would show the struggle.

Many teachers already offer differentiated assignments.  Now they can make clear what each one’s score is worth.  Students could choose based not only on interest, but challenge.

Even more advanced, teachers could offer their normal assignments and use student results to rate each assignment’s difficulty–the scores of various students would weight it in relation to the others.  There is a bit of bell curve to this, but that’s why the Elo is for the personal score.
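None of this is standard practice, so treat the following as a hypothetical sketch of the adaptation: the assignment gets a difficulty rating and plays the role of the “opponent,” and the 1–4 rubric score, mapped onto 0–1, plays the role of the match result.  The K value and the mapping are my assumptions, not an established method:

```python
def expected(student_rating, assignment_rating):
    """Standard Elo expectation, with the assignment as the 'opponent'."""
    return 1 / (1 + 10 ** ((assignment_rating - student_rating) / 400))

def personal_update(student_rating, assignment_rating, rubric_score, k=24):
    """Update a student's personal rating from a 1-4 rubric score."""
    result = (rubric_score - 1) / 3  # map the 1-4 rubric onto 0-1
    return student_rating + k * (result - expected(student_rating, assignment_rating))

# The same rubric 3 moves a 1200-rated student's personal rating up on a
# hard assignment (rated 1400) but down on an easy one (rated 1000).
hard_gain = personal_update(1200, 1400, 3) - 1200
easy_gain = personal_update(1200, 1000, 3) - 1200
```

That captures the disincentive described above: a 3 earned against a hard assignment raises the personal rating about ten points, while the same 3 earned by coasting on an easy assignment actually costs a couple.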

Elo Teacher Scoring

Another application is in balancing grades across varying teachers.

When a local high school implemented Proficiency Based Learning (PBL) and Proficiency Based Grading (PBG, aka SBG), it did not go smoothly.  Those teachers who embraced the formative/summative model tended to produce higher grades, on average, than teachers who clung to the older rating system.  At issue were the lead-up to assessments, the supports, and the allowance of retakes in the PBL classes–with the emphasis on mastery, all that mattered was getting it in the end.  Those old-school teachers tended to use one-and-done assessments, average everything into the grade, dock points for late assignments, and use a 100-point scale.

Whatever your position on PBL and PBG, the debate exposed the age-old divide between “easy” teachers and the old-school hard-*ss.  As GPA becomes more and more important for college acceptance (even if that is only a teenager’s perception), many students avoid hard graders and hard classes because of real and perceived grading discrepancies.  That’s a crime.

But what if teachers were graded using the Elo?  Each class could be averaged for GPA and an Elo assigned to that teacher based on that average.  Then, when a student earns a grade in that course, a second Elo score is used to indicate relative difficulty of the teacher.  A more advanced algorithm might take into account individual student GPA in relation to each class’ earned grade.  And even adjust teacher Elo based on that.  If not on the transcript, it might at least give administrators and teachers an idea of grade inflation or deflation.  Correlated with other measures–SAT and the like–it might show which teachers make students earn their learning, and which are just difficult.
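To be clear, nothing like this exists in any gradebook; the sketch below is only a back-of-the-envelope illustration of the idea, and every constant in it (the 1500 anchor, the 400 scale, the 0.25 weight, the 3.0 baseline) is an assumption of mine:

```python
def teacher_rating(class_gpas, baseline=3.0, scale=400, anchor=1500):
    """Lower class averages imply a harder grader and thus a higher rating."""
    avg = sum(class_gpas) / len(class_gpas)
    return anchor + (baseline - avg) * scale

def adjusted_grade(grade, rating, anchor=1500, weight=0.25):
    """Nudge a grade up under a hard grader, down under an easy one."""
    return grade + weight * (rating - anchor) / 400

# A class averaging 2.5 marks a hard grader; one averaging 3.5 an easy one.
hard = teacher_rating([2.4, 2.6, 2.5])
easy = teacher_rating([3.4, 3.6, 3.5])
# The same raw 3.0 is then worth slightly more from the hard grader.
```

Even if the adjusted number never reached a transcript, comparing a teacher’s rating year over year would give administrators the inflation/deflation signal described above.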



Evaluating a Program With Minimal Data Points: Step 1: Sorting Proficiency

Introduction to These Next Few Blog Posts (Skip down to the meat)

We get a lot of data. It may come in the form of test scores or grades or assessments, but it is a lot.  And we are asked to use it. Make sense of it. Plan using it.

Two quotes I stick to are:

  • Data Drives Instruction
  • No Data, No Meeting

They are great cards to play when a meeting gets out of hand.  Either can stop an initiative in its tracks!

But all of the data can be overwhelming.  There are those who dismiss data because they “feel” they know the kids.  Some are afraid of it.  Many use it, but stop short of doing anything beyond confirming what they know–current state or progress.  And they can dismiss it when it does not confirm their beliefs.  (“It’s an off year.”)  Understanding data takes a certain amount of creativity.  At the same time, it must remain valid.  Good data analysis is like a photograph, capturing a picture of something you might not have otherwise seen.

This series of blog posts will take readers through a series of steps I took in evaluating the effectiveness of my reading program.  I used the DRP (Degree of Reading Power), a basic reading comprehension assessment, as my measure because it was available.  I’m also a literacy teacher, so my discussion will be through that lens–but this all works for anything from math to behavior data.

Step 1: Sorting Proficiency

The first, most basic step in analyzing a program is to find out how many of your students can do the skill you are interested in.  It seems basic, but so many teachers assume they know the answer.  Never assume.  A number of our students read a lot, but don’t really think about their reading–their ability is on the surface and their memory of what they read is weak.  Because we see them with their nose in a book, though, we tag them as a reader.  Others can, but don’t, read.  They do, though, test well.  In a previous post I discussed whether the DRP was a measure of reading or stamina (or ability to focus).  That may be an issue for some.  It is certainly an excuse–they don’t test well, or they’re unable to focus.  You can do analysis later, but you first have to see where your class stands before you begin asking why and proposing solutions.

Choose an assessment and give it.  You want one that you would consider valid–that is, one with few variables.  You can measure books and/or pages read, stamina in SSR, depth of reading using reading logs, or a good old standardized test.  What counts as valid is up for debate, but I used the basic off-the-shelf DRP to measure reading comprehension.  You also want to administer an assessment that you can give multiple times.  We administer it in the fall and spring to allow for tracking progress (more on that later).

Here is a sample of a class from Grace Union Elementary:

Reading Data 1

Students highlighted in lavender are in the top three stanines* of achievement nationally, or the top 23% of readers in the same grade.  In this cohort there are 24 out of 47 in that group.  So, half of our students are top readers.  In addition, 3 other students met our local standard, but fell short of the top national stanines (highlighted in purple).  Twenty-seven out of 47 students scoring well is great news, right?

It depends.  Looking at your data, you do have to decide where the line between proficient and below is.  Our supervisory union does that by pegging the “local standard” to a certain national average point.  You might disagree with your local designation–I used the stanine to raise my bar above ours–but since the results change depending on that line, your choice is important.  What line will reveal the most about your program?

For example, an additional 15 students were in the 5th or 6th stanine, putting them at or above the mean nationally.  Not bad, when added to the 24 who were in the 7th, 8th and 9th stanines.  I could comfortably go to our admins and the school board and talk about 39 out of 47 being average or above.  If I wanted to, I could point out that a number of our struggling readers have IEPs or other plans.  Everyone would agree that my program is solid.

Except, locally, that’s not good enough.  When they get to high school they will struggle if they are merely average.  Six of my students were in the lowest stanines, or about 1 out of 8.  Not great numbers.  And 20 students don’t meet our local standard of proficiency.  They are leaving my classroom unprepared for what awaits them.

Rubrics are a start.  What is “proficiency” for you?  The NCLB data is a nice yardstick–what measures do you have that correlate with the data you are seeing there?  Think about whether they do, in fact, correlate.  Our old NCLB writing data (NECAP) seemed to inflate our ability, so we have created a local assessment that gives us a little guidance on what to work on.  I correlate that with what I see in my classroom assignments.  If we still gave the old NECAP, I’d take the data with a grain of salt.

For half of our students, reading is a natural activity and they do it well.  Twenty-seven students can claim to be proficient.  But, even for them, I have no idea if I can claim their success a result of my program.

That is the next question.

* A stanine (STAtistic NINE) is a nine-point scale with a mean of 5.  Imagine a bell curve; along the x-axis, divide it into nine bands.  The head and tail are very small areas (4% each) while the belly is huge (20%).  Some good information can be found in this Wikipedia entry.
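The conversion from a national percentile to a stanine is mechanical.  A small sketch using the standard 4/7/12/17/20/17/12/7/4 percent bands (note that stanines 7–9 begin above the 77th percentile, which is where the “top 23%” figure above comes from):

```python
def stanine(percentile):
    """Map a national percentile (0-100) to a stanine (1-9)."""
    # Upper percentile bounds for stanines 1 through 8; anything
    # above the 96th percentile lands in stanine 9.
    cuts = [4, 11, 23, 40, 60, 77, 89, 96]
    return 1 + sum(percentile > c for c in cuts)
```

So a student at the 50th percentile sits in stanine 5, one at the 78th in stanine 7, and one at the 97th in stanine 9.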

Proficiency Based Learning (PBL) or Standards Based Learning (SBL)?

If you thought all of the great education debates were finished, our supervisory union has debated all year if we are going to adopt the term Proficiency Based Learning (PBL) or Standards Based Learning (SBL).

Although the former is more accurate to our intent–we wish to measure students demonstrating proficiency, not simply that standards exist–I cannot warm up to it.  The term is too soft.  So, I did a bit of research into its origins on dictionary.com.

According to dictionary.com:

1580s, back-formation from proficiency or else from Old French proficient (15c.), from Latin proficientem (nominative proficiens), present participle of proficere “to make progress, go forward, effect, accomplish, be useful” (see proficiency).  Related: Proficiently.

Those soft French words–Romance languages in general–smack of someone obfuscating what’s really going on.  The entire catalog of edu-speak causes a general distrust among the public.

It is important that the public understands initiatives.  They don’t have time for change, yet yearn for the system to work better than it currently does.  The issue is trust–they want to trust that the change, after all of the time and money and resources thrown at it, is going to work.

The term standards is more direct. Again, from dictionary.com:

1125-75; Middle English < Old French, probably < Frankish *standord (compare German Standort “standing-point”), conformed to -ard.

So guttural.  It feels real.  As if we are holding people accountable.  And that is what the citizenry craves.  I believe that most people don’t mind spending money on a good education, but we suspect that money is not being spent wisely.

Leave it to a word of German origin to cut to the core of the debate.