Reflecting on #macul14

So many great things went on these past few days in Grand Rapids, MI. When I consider that the last time I was along the Lake Michigan coast, I was in Ludington to see Dan Meyer, I’d say West Michigan has treated me and my students pretty well.

For those of you who aren’t familiar, MACUL is the Michigan Association for Computer Users in Learning, an organization dedicated to supporting Michigan teachers in the pursuit of making education better through the effective use of instructional technology. This was their annual conference.

The best thing about this particular experience was that there was a little of everything: academic talks like Erica Hamilton’s (@ericarhamilton) on Teacher Integrated Knowledge, incredibly practical, I-could-totally-do-this-tomorrow talks like Bree Davey’s (@studiobree) on student blogging, the inspirational talks of Rushton Hurley (@rushtonh), and the intensely technical, energetic Leslie Fisher (@lesliefisher) teaching us the finer points of how to use iStuff to take pictures that don’t suck. It’s a lot to take in. Here are some summaries of my favorite sessions:

Erica Hamilton – Teacher Integrated Knowledge – Erica (soon-to-be Dr. Hamilton) did a fantastic job of detailing how teaching becomes more complex in the 1:1 format. She spoke about the different types of knowledge that a teacher naturally has to draw from in the process of doing his/her job (content knowledge, curriculum knowledge, pedagogical knowledge, environmental knowledge, etc.). Switching to a 1:1 format adds a few others that aren’t there otherwise (at least not nearly with the same intensity), which can make reinventing teaching to maximize 1:1 seem really, really intimidating. If you are, for example, going to ask your students to go out and make short videos, you might need to teach them what makes a video effective (lighting, stabilizing the camera, speaking clearly, editing tools, etc.). That isn’t part of most curricula. 1:1 puts teachers face-to-face with having to acquire those kinds of knowledge. My favorite idea that Erica kept coming back to: “It all comes back to what do you want students to learn? What do you need to teach to do it? What tools are available?” Excellent, excellent talk.

Ben Rimes and friends – #MichEd PLN – #MichEd is a PLN (professional learning network) dedicated to connecting educators in Michigan with the goal of spreading ideas. This is a committed group of educators with a podcast and a weekly Twitter chat, dedicated to the idea that we all need to grow and that all we should have to do is ask the experts around us for help. We all have something to ask and something to offer. The panel discussion was perhaps best summed up by George Couros the next morning when he said, “Isolation is a choice teachers make. If you’re isolated, you’re choosing not to connect.” Speaking of connecting, it was fantastic to meet these folks face to face for the first time.

Tara Becker-Utess – Flip Class Model – This was an important talk for me to attend because I have always been a little uncomfortable with the flipped class model. Tara (whom I am proud to have known personally these past 10 years) didn’t quite draw me into the realm of the full believers (for reasons I can explain more if you’re curious), but she did a very nice job of explaining the philosophy behind flipping, and I was relieved to find that I could identify with much of its spirit. Tara made some fantastic points, especially about the absolute need for teachers who flip to plan very, very well for their time in class: “If you were used to using 20-30 minutes to lecture, you just got 20-30 minutes to plan rich activities for your students. That is usually a shock to people who are flipping for the first time.” (Those are probably more paraphrased thoughts than actual quotes, to be fair.)

I also had the privilege of presenting a one-hour session. If you are curious what it was about (or if you attended and want to revisit) I invite you to check out the “MACUL 2014 Presentation” link at the top right of my blog to get the details. Thank you for your kindness, warmth and enthusiasm during my session. It definitely did not go unnoticed.


For your students’ sake: Don’t stop being a learner

Yesterday, we designed an Algebra II lesson using 3D modeling to derive the factored form of the difference of cubes. As we began to finish up, Sheila (@mrssheilaorr), the math teacher sitting beside me, made a passing reference to being frustrated trying to prove the sum of cubes formula. Being a geometry teacher by trade, I decided to give it a try, perhaps hoping to offer a fresh perspective. I mean, I was curious. It looked like this:

[Photo: our whiteboard work on the sum of cubes]

On the surface, it didn’t seem unapproachable, but I quickly became frustrated as well. Most frustrating was the mutual feeling that we were so stinkin’ close to cracking the missing piece. Finally, Luann, a math-teaching veteran, sat down beside us and commented that she consistently got stuck in the same spot we were stuck. Then, as the three of us talked it through, the final piece fell into place and it all made sense. (It’s always how you group the terms, isn’t it?)
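For the record, here is the grouping trick written out, reconstructed after the fact (our whiteboard version wandered a bit more than this). Add and subtract $a^2b$, then factor each piece:

$$
\begin{aligned}
a^3 + b^3 &= a^3 + a^2b - a^2b + b^3 \\
&= a^2(a + b) - b(a^2 - b^2) \\
&= a^2(a + b) - b(a + b)(a - b) \\
&= (a + b)\bigl(a^2 - b(a - b)\bigr) \\
&= (a + b)(a^2 - ab + b^2).
\end{aligned}
$$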

Then this morning, it happened again. Writing a quiz for Calculus, I needed a related rates problem. Getting irritated with the lousy selection of choices online, I decided that I needed to create my own. And I wanted to go #3Act, so after some preliminary brainstorming with John Golden (@mathhombre) (Dan Meyer’s Taco Cart? Nah… the rates of the walkers aren’t really related…), we found some potential in Ferris Wheel (also by Dan Meyer)! Between my curiosity and John’s, my morning got mathy in a hurry.

First I tried to design and solve the problem relating the rotational speed in Act 1 to the height of the red car. That process looked like this:

[Photo: my work relating the wheel’s rotational speed to the height of the red car]
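The skeleton of the model, for anyone who wants to try it (the actual radius, center height, and speed have to be pulled from Act 1, so the symbols here are placeholders): if the wheel has radius $r$, its center sits at height $c$, and it turns at a constant $\omega$ radians per second, then

$$
h(t) = c + r\sin(\omega t) \quad\text{and}\quad \frac{dh}{dt} = r\omega\cos(\omega t),
$$

which says the red car’s height changes fastest as it passes the height of the center and not at all at the very top and bottom. A nice sanity check for Act 3.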

Meanwhile, Dr. Golden found a video of a double Ferris wheel, which was pretty awesome. It seemed a little out of my league, so I kept plucking away at my original goal.

It clearly wasn’t out of Dr. Golden’s league, as he took to GeoGebra and did things I didn’t know GeoGebra was capable of. (You’re going to want to check that out.)

So, what is the product of all of this curiosity and random math problem-solving? As I see it, these past 48 hours have done two things: reminded me of what makes me curious, and reminded me of what it’s like to be a learner.

I have a feeling my students will be the beneficiaries of both. There’s a certain refreshment that comes from never being too far removed from the stuff that drew us to math in the first place: the problem you want to solve just because you want to see what the answer looks like.

And this curiosity, the pursuit, it feeds itself. In the process of exploring what you set out to explore, you get a taste of something else that you didn’t know you would be curious about until it fell down in front of you. (For example, GeoGebra… I have no idea what that program is capable of, which is a shame because it is loaded on all of my students’ school-issued laptops…)

And this process breeds enthusiasm, enthusiasm that comes with us into our classrooms and spreads. I’m not trying to be cheesy, but much has been said about math’s role in the modern economy and how essential it is to college-readiness, with few tangible results. Let’s remember that there are kids who are moved by enthusiasm, who will respond to joy, who will pay attention better simply because the teacher is excited about what they are teaching. It won’t get them all, but neither will trying to convince them of any of the stuff on this poster.

Now, who’s going to teach me how to use GeoGebra?

Feeding The Elephant in the Room

I am going to ramble a bit in this piece, but as you read, keep a specific thought in your mind:

When our students have graduated high school, we will know we educators have done our job because _____________________.

Now, onto the ramble:

So, a lot gets said about the struggles of American secondary education. Recently, Dr. Laurence Steinberg took his turn in Slate, coming right out in the title and calling high schools “disasters”, which, as you can imagine, got some responses from the educational community.

Go ahead and give the article a read. I’ll admit that education is not known as the most provocative topic in the American mainstream, but Dr. Steinberg has written a piece that has been shared on Facebook a few thousand times and on Twitter a few hundred more, and it has instigated some thoughtful blog responses. You have to respect his formula.

He starts with a nice mini Obama dig.

Makes a nice bold statement early (“American high schools, in particular, are a disaster.”)

Offers a “little-known” study early to establish a little authority.

Then hits the boring note and hits it hard. High school is boring. Lower-level students feel like they don’t belong. Advanced students feel unchallenged. American schools are more boring than most other countries’ schools.

Then he goes on to discredit a variety of things education has tried over the last 50 (or so) years: NCLB, vouchers, charters, increased funding, lowering student-to-teacher ratios, lengthening the school day, lengthening the school year, and pushing for college-readiness. I mean, with that list, there’s something for everyone.

Like it or hate it, that is an article that is going to get read.

However, there isn’t a lot in the way of tangible solutions. The closest Dr. Steinberg comes is in this passage: “Research on the determinants of success in adolescence and beyond has come to a similar conclusion: If we want our teenagers to thrive, we need to help them develop the non-cognitive traits it takes to complete a college degree—traits like determination, self-control, and grit. This means classes that really challenge students to work hard…”

Nothin’ to it, right? It’s as easy as making our students “grittier”.

Now, I will repeat the introductory thought: When our students have graduated high school, we will know we educators have done our job because _____________________.

That blank gets filled in a variety of ways: employability, social responsibility, liberation and freedom, social justice, and a variety of other thises and thats that we use our high schools for. We are using our high schools as the training ground for the elimination of a wide variety of social ills. We’ve used our schools to fight obesity, teen pregnancy and STIs, and discrimination based on race, gender, or sexual orientation. We have allowed colleges to push college-readiness to make their job easier. We’ve allowed employers to push employability to make their jobs easier. The tech industry feels like we need more STEM. There’s push-back from folks like Sir Ken Robinson who feel it’s dangerous to disregard the arts.

And they all have valid points. I’m certainly not mocking or belittling any of those ideas.

However, very little is getting said on behalf of the school itself. We treat the school as a transparent entity with no roles and responsibilities of its own; it is simply the clay that gets molded into whatever society decides it should be. Well, since the ’60s, society has had a darned hard time making up its mind about what it wants, and so the school has become battered and bruised by all the different initiatives, plans, data sets, and reform operations. Reform is an interesting idea when the school hasn’t ever formally been formed in the first place.

So, we have this social institution that we send 100% of our teenagers to in some form or another, and we don’t know what the heck it’s for. No wonder, as Dr. Steinberg puts it, “In America, high school is for socializing. It’s a convenient gathering place, where the really important activities are interrupted by all those annoying classes. For all but the very best American students—the ones in AP classes bound for the nation’s most selective colleges and universities—high school is tedious and unchallenging.”

Public enemy #1 needs to be the utter and complete lack of purpose in the high school system. We are running our young people through exercises… why? For what? What do we hope to have happen at the end? When we decide the answer to that question, then we can eliminate the rest. It isn’t lazy to say, “I’m not doing that, because that isn’t my job.” It’s efficient. If you start doing the work of others, you stop doing your own work as well.

We’ve never agreed on the work of the American high school, but I suspect some of what we are asking it to do belongs on the shoulders of something or someone else. I suspect that as soon as we establish a purpose and simplify the operations around that purpose, we can start to see some progress on the goals we have for our schools, which will spell success for our students and start to clean up the disaster that so many feel our high schools currently are.

Perplexing the students… by accident.

It seems like in undergrad, the line sounds kind of like this: “Just pick a nice open-ended question and have the students discuss it.” It sounds really good, too.

Except sometimes the students aren’t in the mood to talk. Or they would rather talk about a different part of the problem than you intended. Or their skill set isn’t strong enough in the right areas to engage in the discussion. Or the loudest voice in the room shuts the conversation down. Or… something else happens. It’s a fact of teaching. Open-ended questions don’t always lead to discussions. And even when they do, class discussions aren’t always the yellow-brick road leading to the magical land of learning.

But sometimes they are. Today, it was. And today, it certainly wasn’t from the open-ended question that I was expecting. We are in the early stages of our unit on similarity. I had given a handout that included this picture

[Image: two similar quadrilaterals]

… and I had asked them to pair up the corresponding parts.

The first thing that happened was that half-ish of the students determined that in figures connected by a scale factor (we hadn’t defined “similar” yet), each angle in one image has a congruent match in the other image. This is a nice observation. They didn’t flat-out say that they had made that assumption, but they behaved as though they had, and I suspect it is because angles are easier to measure on a larger image. So, being high school students and wanting to save a bit of time, they measured the angles in the bigger quadrilateral and then simply filled in the matching angles in the other.

Here’s where the fun begins. You see, each quadrilateral has two acute angles, and those angles have measures that are NOT that different (depending on the person wielding the protractor, maybe only 10 degrees apart). It worked out perfectly: about half of the class paired up one set of angles and the other half disagreed. And both halves cared that they were right. It was the perfect storm!

Not wanting to remeasure, they ran through all sorts of explanations for how they were right, which eventually led us to try labeling side lengths and using them to identify included angles for the sake of matching up corresponding parts. But that line of thought wasn’t clear to everyone, which offered growth potential there as well.

[Photo: the side lengths labeled during our discussion]

By the time we settled where the angle measures went, the class pretty much agreed that:

1. Matching corresponding parts in similar polygons is not nearly as easy as it is in congruent polygons.

2. Similar polygons have congruent corresponding angles.

3. The longest side in the big shape will correspond to the longest side in the small shape. The second longest side in the big, will correspond to the second longest side in the small. The third… and so on.

4. Corresponding angles will be “located” in the same spot relative to the side lengths. For example, the angle included by the longest side and the second longest side in the big polygon will correspond to the angle included by the longest side and the second longest side in the small polygon. (This was a tricky idea for a few of them, but they were trying to get it.)

5. Not knowing which polygon is a “pre-image” (so to speak) means that we have to be prepared to discuss two different scale factors, which are reciprocals of each other. (To be fair, this is a point that has come up prior to this class discussion, but it settled in for a few of them today.)

I’d say that’s a pretty good set of statements for a class discussion I never saw coming.
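If you’d like to poke at statements 3 and 5 yourself, here is a quick Python sketch. The side lengths are made up for illustration; they are not the measurements from the handout.

```python
def match_sides(big, small):
    """Pair sides by rank: longest with longest, second longest
    with second longest, and so on (statement 3)."""
    return list(zip(sorted(big, reverse=True), sorted(small, reverse=True)))

# Hypothetical side lengths for two similar quadrilaterals.
big = [10.0, 8.0, 6.0, 4.0]
small = [5.0, 4.0, 3.0, 2.0]

for b, s in match_sides(big, small):
    # The two scale factors are reciprocals of each other (statement 5).
    print(f"{b} <-> {s}: small-to-big {b / s}, big-to-small {s / b}")
```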

The Reteaching Tightrope

So, the 70-70 trial has reached its first needed reteach session. (I explain the 70-70 trial here.)

Only, here’s the thing: not every class that needs to explore a topic for a second time is in the same situation. As part of my data collection for this trial, I am comparing the mean of the top 10 scores with the mean of the bottom 10 scores on each individual assessment. I am doing this with two classes. One had a Top10-Bottom10 gap of 46.1 percentage points. The other had a gap of 33.6 percentage points.

My reason for exploring this gap is that if a group is struggling to meet the 70-at-70 line, I want to know how the mastery of the students who understand the material compares to the mastery of the students who are struggling.
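The gap statistic itself is nothing fancy. Here is the computation in Python (the function name is mine; feed it a class’s scores on one assessment and it returns the gap in percentage points, e.g., 46.1 for my first class and 33.6 for the other):

```python
def top_bottom_gap(scores, n=10):
    """Mean of the top n scores minus the mean of the bottom n
    scores, in percentage points."""
    ranked = sorted(scores, reverse=True)
    return sum(ranked[:n]) / n - sum(ranked[-n:]) / n
```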

If there’s a lot of mastery among the top performers and very little among the lowest performers, then the reteaching session becomes tricky, because a large chunk of the class fits into two categories: those who get it really well and those who don’t get it very well. Both groups are naturally resistant to reteaching, one because it feels completely unnecessary and the other because it feels completely uncomfortable.

All of which calls for a very delicate classroom management strategy for that hour, which I didn’t have today. I should have seen it coming. The successful students were not inspired to support the struggling students, and in fact a few of them blamed the struggling students for what they considered to be a meaningless class period. The struggling students seemed uncomfortable; I kept forcing them to do work they didn’t know how to do.

The class where the high achievers weren’t quite as high and the low achievers weren’t as low took to the reteaching much, much better. On the second try, the gap closed to 28.8 percentage points, with the average of the top 10 scores over 90%. It seems that class had a stronger sense that they all had something to gain from the extra learning time…

… as opposed to the other group where the majority felt like they had nothing to gain.

The 70-70 Trial

Education is a world with a whole lot of theories. Intuitive theories at that. I’m sure it’s like this in most professions. We see an issue. We reason out what the problem seems to be. We determine what the solution to our supposed problem seems to be. And we implement.

The problem with that approach is that problems often have multiple causes, solutions are often biased, and results have a tendency to be counterintuitive. For example, a recently published paper suggested that increasing homework might actually cause a decrease in independent-thinking skills. It probably isn’t a conclusive study, but consider the idea: if students aren’t demonstrating independent-thinking skills, prescribing a problem-for-problem course of study for them to do on their own might not be the best solution.

This leads me to a trial I am running in my classroom for a semester. I have four sections of geometry. I am going to leave two as a “control group” (very imprecise usage, I’ll admit) that will run exactly the same as they did first semester. The other two will run “The 70-70 Trial.” This is one of those theories that has gotten tossed about in our district many times. It seems intuitive. It seems like it addresses a persistent problem.

The theory goes like this: if you go into a test knowing that 70% of the students scored 70% or better on all the formative assessments leading up to the summative assessment, then you know the students are reasonably prepared to do well on the test. If you give a formative assessment and you hit the 70-70 line or better, you move on with your unit, business as usual. If you miss the 70-70 line, you pause the unit until enough of the class is ready to go.
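In code, the decision rule is simple, which is part of its appeal. A minimal sketch (Python; the function name and the example scores are mine, not anything official from our district):

```python
def meets_70_70(scores, cut=70.0, share=0.70):
    """True if at least 70% of the class scored 70% or better on the
    formative assessment; False means pause the unit and reteach."""
    passing = sum(1 for s in scores if s >= cut)
    return passing / len(scores) >= share

# Example: 6 of these 10 scores are 70% or better, so only 60% of the
# class clears the cut and this class would pause for reteaching.
print(meets_70_70([88, 94, 72, 70, 75, 81, 64, 55, 68, 61]))  # False
```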

This seems reasonable to some and ridiculous to others. Our staff meetings have seen some pretty intense discussion over it. Proponents lean on the logic: how can a group of students with high scores on formative assessments struggle on summative assessments? Opponents speak to the time crunch: when do you decide to move on? You can’t just keep stopping and stopping forever; you’ll never get through the material. Both seem like logical points…

But, as far as I can tell, no one has tried it to see what would happen. I had two classes that really struggled their way through first semester; it became very hard to energize and motivate those students because of how difficult they found the material. Perhaps shaking up the classroom management and unit design will add a bit of a spark. These two classes will be the focus of the 70-70 trial. I will use this blog to record my observations and to entertain suggestions from anyone looking to help this idea work.

This starts one week from today. I don’t know if it will work. I have my guesses as to what will happen, but I am going to keep those to myself. I absolutely want to see this work, because if it does, that means my students were successful. My chief area of concern is what to do when, say, 61% of the students score 70% or better. By the rules of the trial, I can’t go on. I need a reteach day, but over half the class is ready to move on. What do I do to extend the learning for those students while supporting the learning of those who need some reteaching and another crack at the formative assessment?

These are the kinds of things I will be looking for help with. Thank you for being patient and willing to walk this path with me. I will look forward to hearing whatever ideas you have.

Oh, Data’s driving the decision-making, all right…

Data… oh data… 

Schools, teachers, administrators, and school boards are being asked left and right to “use data” to drive the school improvement process. I put “use data” in quotes because that term… “use data”… is becoming about as commonplace and vague as the term “school improvement” itself.

Perhaps we need a better understanding of what it means to “use data”. Personally, I’d like to add a few words in there. How about instead of “use data”, we focus on “using GOOD data WELL”?

Here’s what I’m saying: teachers all over Michigan are being told that they are being evaluated, in part, on how well they “use student achievement data to make instructional decisions” (or something like that…). So, that means… what, exactly?

Are we talking about class averages on summative assessments driving comparisons between two instructors teaching the same course?

Are we talking about teachers deciding to slow down and reteach because of low formative assessment data?

Are we talking about teachers making decisions about what to do with the last two weeks of the semester because their students’ grades are lower than they’d like and they need to do something to inflate them?

All of those examples are decisions made using student achievement data. Are they all effective uses? Are they all using good data well?

I’ll tell you when this hit me. I was preparing a written report summarizing the summative assessments at the end of a geometry unit (a requirement in our district), and I was describing what I thought was contributing to my students’ low unit test scores. In general, this particular test is usually a tough one for the students. It is the first time they’ve seen a math test completely devoid of number-crunching (gotta love the proving part of geometry!), and that leads to some fairly predictable avoidance behaviors. That is, students avoid practice AND ignore the feedback on their formative assessments, two things that are going to compound the frustration of what is already a frustrating unit. But, alas… I had yet to provide any data to support this conclusion. (Remember, in Michigan the evaluations are becoming more and more focused on how teachers use data to make decisions.)

So, I thought of something. I embrace the practice of allowing students to retake assessments, and the nice thing about this potential data set is that the retake process is VOLUNTARY! So, the frequency of retaken assessments can give us some indication of how engaged the students are in one non-mandatory achievement support mechanism.

So, how many formative assessments were retaken during the unit leading into the test that demonstrated the low results? 1.1% (4 retakes out of 360 student-assessments).

Is that a meaningful data set? Well, the students performed a ton better (on average) on the Unit 1 test, when the retake rate was over 12%; not as well on Unit 2, with a retake rate around 6%; and quite poorly on the Unit 3 test, with a retake rate of 1.1%.
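For transparency, those rates are just retakes divided by student-assessments. A sketch (Python; only the Unit 3 counts are exact, and the Unit 1 and Unit 2 counts are placeholders consistent with the rough percentages above):

```python
# (retakes, student-assessments); only Unit 3's counts are exact.
units = {"Unit 1": (45, 360), "Unit 2": (22, 360), "Unit 3": (4, 360)}

for unit, (retakes, total) in units.items():
    print(f"{unit}: {100 * retakes / total:.1f}% retake rate")
```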

There appears to be a correlation (although with only three data pairs to consider, it isn’t really something worth talking about), but which came first? Is the material more difficult, so it drove down engagement in the retakes? Or did the reduced engagement in the retakes drive down achievement?

Quite frankly, I don’t know. But, I have data.

And I know this: my conclusion and subsequent instructional changes are going to depend quite heavily on how I answer the questions I just asked. If I feel like low engagement in the retakes is a cause of the low achievement, then my changes are going to be motivational and structural, with the goal of getting more students to use the feedback on the first-try formative assessments to prepare for a second try.

If I feel like low engagement in the retakes is a symptom of my instruction being poor during the unit, then I will need to create/steal new activities to drive my instruction.

Oh, and none of that answers (perhaps) the first important question: Is formative assessment retake rate even a useful data set? (I don’t know the answer to this question either, by the way.)

There, I’ve used data to drive my decision-making… sort of.

Question: is this really the work we want our teachers doing? I can see a variety of valid arguments springing from that question. If it is, what guidance can we give teachers for deciding which data sets are effective? And how to use them?