Everything is assessment

If there is one bias that I have developed when it comes to assessment for learning it is this: As much as possible, we should not have to stop teaching in order to conduct our formative assessments. In other words, if I were to walk into a classroom and observe, the lines between the moments of assessment, instruction, and feedback would be blurred; the chosen strategies would seamlessly lead students and teachers through a continuous assessment-instruction-feedback loop. While there are always exceptions to any rule, we should strive to infuse our assessment for learning practices into our instructional strategies.

That said, formative assessment is actually easier to infuse than some might think. So many of the strategies that teachers have been using for years can – quite effortlessly – be used for formative assessment purposes. In fact, when I’m asked to provide/discuss some effective formative assessment strategies with teachers, I’m often met with the fairly typical response of, “Oh, I already do that.”

Now, I’m not doubting their responses. The truth is that many teachers are already doing or using the strategies I describe, at least at first glance. Upon further review, however, I’ve come to realize that while many are using the strategies I outline, their use often falls short of serving as assessment for learning.

Everything teachers do – every strategy, activity, or process – is an assessment in waiting. Every activity students participate in – every project, assignment, or task – yields information that can be used for formative purposes if we follow two simple guidelines.

First, every activity must be linked to the intended learning. Activities are just activities unless there is a direct link between the activity and the intended learning; that’s what turns a task into a target. Even better is expressing this link in student-friendly language so that students have intimate access to what they are expected to learn from the activity. This link is what’s often missing in far too many classrooms. Think about how often you begin a lesson by describing to students what they are going to do as opposed to what they are going to learn. The link to learning will establish far greater relevance for students and assist in their understanding of why – especially with knowledge targets – what they’re doing today is important and relevant for tomorrow (and beyond).

Second, the results of every activity must have the potential to elicit an instructional response from the teacher. One of the core principles behind formative assessment is that the collective results are used to decide what comes next in the learning. Now, I use the word potential because the results of your activities (assessments) may indicate that what you had previously planned to do tomorrow is, in fact, the most appropriate decision. You’re not always going to change course, but for an activity to serve a formative assessment purpose it must have the potential to influence what you plan to do next. As long as you are willing to consider some instructional adjustments based on the results of the activity, it becomes an assessment for learning. As well, the more we can involve students in the process of self-assessment and personalized adjustments, the more they become meaningful decision-makers in their own learning.

Whether it’s a class discussion, an A/B partner talk exercise, an Exit Slip, a 4 Corners Activity, a Jigsaw, or the use of exemplars, we can infuse our assessment/feedback practices into our instructional routines. When we link an activity to the intended learning and allow the results of the activity to potentially influence our instructional decisions, it moves from being just an activity to an assessment. Everything is an assessment in waiting if we use these two guidelines to enhance what we’re already doing.

Function over Format

In all of the discussion and debate regarding summative and formative assessments, there is one misunderstanding that seems to be revealing itself more and more.

Too often, the discussion regarding summative v. formative assessment seems to gravitate toward a critique of certain assessment formats and their place in either the summative or formative camp. For me, this is an irrelevant discussion, one that can distract us from developing balanced assessment systems that match assessment methods most appropriately with the intended learning. In short, an assessment’s format has little, if anything, to do with whether an assessment is formative or summative; what matters is its function.

To determine whether an assessment is formative or summative ask yourself this one simple question: Who is going to use the assessment results?

See, if you take those assessment results and use them to provide useful advice to students on how the quality of their work or learning can improve AND you don’t “count” them toward any type of report card or reporting process, then they’re formative; even more so if the students self-assessed and set their own learning goals. If, however, you are determining how much progress a student has made as of a certain point in the learning AND are going to include the result in a cumulative report card or other reporting process, then they’re summative. Whether you convert and/or combine the results into another format (letter grade, etc.) is really not relevant.

In essence, if the assessment results from your classroom leave the classroom and inform others about how students are doing then you’ve got a summative assessment.  If the results stay within the classroom and are used for feedback, that’s formative.

Here’s the rub: every assessment format has the potential to be formative or summative, because the distinction has everything to do with the function (or purpose) of the assessment and nothing to do with its format.

Now, I’m not here to suggest that a short/selected answer assessment is the deepest, most meaningful assessment format; however, in some cases a short/selected answer assessment can be the most efficient means by which a teacher might know whether his/her students have mastery over the key terminology in a science unit. This demonstrated mastery will allow the teacher to feel more confident about moving on to more meaningful learning opportunities. What the teacher does with the results will be the determining factor as to whether it was a formative or summative event. If the results “count”, then it was summative; if they don’t “count”, then it was formative. Summative and formative assessments begin with two very different purposes; knowing our purpose for assessing is the first key to developing high-quality, accurate, and clear assessment information.

Anyway, the discussion/debate regarding what quality assessments look like is one for a future post. For now, know that every assessment format can be a viable option and it’s what happens with the results that matters the most.

Incidentally, for another recent and interesting take (one which I happen to agree with) on summative assessments, please check out Darcy Mullin’s post here.

Practice without Penalty

Somewhere along the way we created an educational mindset around practice and homework that determined that if we don’t count it, the students won’t do it. This idea that everything counts is fraught with pitfalls that make accurate grades a near impossibility. In so many other aspects of life – fine arts, athletics – we value the impact and importance of practice. It seems odd that in school we’ve decided that every moment should be measured.

Here is my position:

Anytime a student makes a first attempt at practicing new learning it should not be included in the grade book until the teacher provides descriptive feedback on the student’s work.

First, let me clarify my view on the difference between practice and homework.

  • Practice refers to those times when students are making a first attempt at using or working with new learning. For most of us, this represents some of the traditional homework we used to do and, in some cases, still assign.
  • Homework refers more to work completed at home that is either an extension or deepening of the key learning outcomes, work completed after descriptive feedback has been provided, and/or work done in preparation for a summative assessment.

From my perspective, I don’t have any issues with this type of homework counting toward a final grade; my issue is when practice counts.  Here’s why:

1) Whose work is it? When students take work home there is always the possibility of outside influence. Older siblings, parents, and friends can (and one might argue should) be involved in supporting the student as they increase their understanding of the key learning. The problem arises when practice results go into the grade book: the outside influences could affect assessment accuracy and distort achievement results.

2) Flawless Instruction? The idea that I can teach something once and 30 diverse learners can now go home and proficiently complete an assignment is absurd.  We can’t assume that our instructional practices are so flawless that 30 different students (or even more if you teach multiple sections) will all get it at the end of the block…every day; even the most exceptional teachers can’t do that.

3) Clear directions? Even with the best intentions, we are not always clear with the directions we provide to students for completing the work independently. That’s the key – independently. It is also possible that we were clear but some students misunderstood; one might argue that’s their responsibility, but it wouldn’t be the first time a student, especially a vulnerable learner, misunderstood what they were supposed to do.

4) With or without me? This, of course, will shift as students become more mature, but in general, I’d rather students do the vast majority of their learning with me rather than without me.  By doing so, I can more accurately assess (not test) where they are along their learning continuum.

5) Score the GAMES, not the practice. There is a lot wrong with the professional sports world, but they do understand the importance of practice. There is training camp, where they wear all of the equipment but it’s not a real game. Then they have exhibition games which look, sound, and smell like real games – they even charge the public real prices – but they don’t count. Yes, they even keep score, but the games are zero weighted…they don’t matter. Then they play the regular season, which counts, except nobody really cares who finished in first place because all that matters is who won the championship. Somehow we need to have more “training camps”, “exhibition games”, and even “regular games” before our academic play-offs!

Two additional thoughts:

  1. If everything counts, when are students supposed to take the academic risks we encourage them to take? Most kids will stay in their safe zone. Why risk an ‘F’ by going for a ‘B’ when I’m happy with a ‘C’?
  2. If the prospect of the grade is the only potential motivator, then it is possible the assignment isn’t really important and maybe the students shouldn’t be asked to do it in the first place.

My bias on practice was/is this:

  • I assigned practice and checked to see if it was completed.
  • We went through the practice assignments and provided descriptive feedback to students.
  • I kept track of their practice scores (zero weight) but they never counted toward a report card grade! (See the sketch after this list for what zero weighting looks like in the arithmetic.)
  • Most students did their practice assignments and I never experienced the flood of assignments at the end of the year!
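
For anyone curious what “zero weighted” looks like in actual grade-book arithmetic, here is a minimal sketch in Python. Everything in it is invented for illustration – the category names, the weights, the scores, and the function name – so treat it as the idea, not as any particular grade-book program: practice scores get recorded and can show a trend, but a weight of zero keeps them out of the final grade.

    # A sketch of a weighted grade book where practice is recorded but zero weighted.
    # Categories, weights, and scores are hypothetical, for illustration only.
    CATEGORY_WEIGHTS = {
        "practice": 0.0,   # tracked for feedback, never counted
        "homework": 0.2,   # extension/deepening work, done after feedback
        "summative": 0.8,  # the "games" that actually count
    }

    def report_card_grade(scores_by_category: dict[str, list[float]]) -> float:
        """Weighted average of category means; zero-weight categories drop out."""
        total = 0.0
        weight_sum = 0.0
        for category, scores in scores_by_category.items():
            weight = CATEGORY_WEIGHTS.get(category, 0.0)
            if weight == 0.0 or not scores:
                continue  # practice stays visible in the records, but never counts here
            total += weight * (sum(scores) / len(scores))
            weight_sum += weight
        return total / weight_sum if weight_sum else 0.0

    scores = {
        "practice": [40.0, 55.0, 70.0],  # rough first attempts, improving
        "homework": [85.0, 90.0],
        "summative": [88.0],
    }
    print(round(report_card_grade(scores), 1))  # 87.9: the early stumbles cost nothing

The point of keeping the zero-weight entries at all is the trend line: the student and I can both see the practice scores climbing, which is exactly the descriptive-feedback conversation we want, without a single early stumble contaminating the report card grade.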

I think our students need room to breathe at school. If every moment is graded, students will play it safe, become passive learners, and never stretch themselves to their potential.

“How did he get so Good?”

My son, Adrian, is an exceptional skier. Now, before you think I’m about to take credit for it, you need to know that it had nothing to do with me. In my first blog post on confidence (Jan. 27) I mentioned how afraid of skiing he was when he first learned; he’s not afraid anymore. The picture on the left is of Adrian last year (age 9), so clearly fear is no longer a factor.

Every Saturday morning Adrian and I make the 35-minute drive up to Apex Mountain, where he skis with the Apex Freestyle Program. Now, I’m not saying he’s going to the Olympics or anything, but when your ten-year-old tells you that the double-black diamond runs are easy and boring, you know he has some skills. Each Saturday I drop him off with his coach and then head off for a day of skiing. If my wife and daughter choose to come, I ski with them; otherwise I’ll ski with some of my friends who are at Apex that day. It’s a small mountain, so it’s easy to find someone.

This past Saturday, however, was a little different. There was no one for me to ski with so I decided to ski alone.  One thing I learned about skiing alone is that it really tests your own commitment to improvement.  I am a good skier, not great; moguls are my enemy!  I found out that when you ski alone you don’t really have to ski any mogul runs. After all, no one is watching.  However, I did head over to a few of the more challenging runs because I do want to improve.  After bouncing my way down one of the black diamond runs – I didn’t fall, but make no mistake, I didn’t ski it – I stopped to catch my breath.  As I stood – alone – thinking about what I had just done, I kept mentally referring back to the images of Adrian skiing the very same run. I wondered aloud, “How did he get so good?”

I traced it back to two things: First, he has spent a tremendous amount of time on his skis over the last four years; last year he skied 25 times. However, time on skis is not enough if you are consistently practicing the wrong technique. Second, and more importantly, he’s had excellent coaching. As I thought about the coaching he has received and reflected upon it within the context of what we know about assessment for learning and sound instruction, I realized his coaches did all the right things.

Here’s what his coaches did:

  • They figured out his strengths and weaknesses as a skier.
  • They made Adrian spend most of his time strengthening his strengths in order to maintain his confidence and motivation.
  • They put Adrian on the edge of improvement by introducing challenges that gradually pushed him.
  • They gave him specific, descriptive feedback on how to improve.
  • They taught him how to recognize his own mistakes and how to correct them.

Here’s what his coaches DIDN’T do:

  • Evaluate him every week and send a report home.
  • Set unattainable improvement targets.
  • Keep the standards for excellent skiing a secret.
  • Only tell him what he was doing wrong.
  • Set a time limit for improvement.
  • Scare him or stress him out by expecting too much too soon.

As I thought about Adrian’s experience, and how proficient at skiing he has become, I realized that his ski instructors got it. Everything they did with him fell in line with the current thinking about formative assessment, descriptive feedback, and the role both play in allowing kids (students) to maximize their success. The self-assessment/correction piece was the icing on the cake. By teaching him the standards of excellent skiing and breaking down the techniques into manageable chunks, his coaches made it possible for Adrian to correct himself when something (or he) goes sideways! After all, the person who does the assessing does the learning.

How do we get more of this in our classrooms?  I know it’s there, but how do we get more? How can we give our students more opportunities to strengthen their strengths, self-assess their progress, and continually sit on the edge of improvement?