Accurate Grading with a Standards-Based Mindset (WEBINAR)

On Monday, December 16, 2013 I conducted a webinar entitled Accurate Grading with a Standards-Based Mindset. The archived copy of the webinar can be viewed here. (It’s just under 80 min, including questions at the end.)

Please note: This webinar was presented by Pearson Education. When you click on the link above you will first be asked to provide contact information before the webinar will play. While it says “the author has requested the information” please know that it is Pearson (not me) who requires this…just wanted you to know.

Below is a brief blog post that encapsulates (but by no means covers) the essence of the webinar:

While the standards-based grading movement is in full swing, not every school, district, or state is in exactly the same place. The difference is attributable to a variety of factors including the level of the school within which a standards-based approach to grading is being implemented. Elementary school standards-based report cards often look very different from middle or even high school standards-based report cards; that’s not a bad thing as the application of standards-based reporting at each level needs to be suitable for that level. The point is that schools and districts across the country are at various places along the standards-based grading continuum. While some have implemented fully, others are still exploring.

Despite this variation, the common link for all along the standards-based grading continuum is how we think about grading; what I call the standards-based mindset. This mindset represents the heavy lifting of the grading conversation. Once we shift how we think about grading, the implementation of standards-based reporting is easy, or at least easier, since the way we think about grades, how we organize evidence, and what is most heavily emphasized is different. We become more thoughtful about ensuring that a student’s grade represents their full level of proficiency and not just the average of where they were and where they are now.

Adults are rarely mean-averaged, and it is certainly irrelevant to an adult that they once didn’t know how to do something. Yet for a student, these two factors dominate their school experience. A student’s grade (at least traditionally) is almost always a function of the mean average, and a failed quiz or assignment early in the learning almost always counts against them; remember, every 40 needs an 80 just to get a 60.
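That arithmetic is easy to verify. Here is a minimal sketch in Python, assuming a hypothetical student with one early failed score and one later proficient score (illustrative numbers only):

```python
# Hypothetical two-score gradebook for one student on the same standard.
scores = [40, 80]  # an early failed attempt, then demonstrated proficiency

mean_grade = sum(scores) / len(scores)  # traditional mean averaging
most_recent = scores[-1]                # standards-based emphasis on latest evidence

print(mean_grade)   # 60.0: the early 40 permanently drags the 80 down
print(most_recent)  # 80: full credit for current proficiency
```

Under averaging, no amount of later growth erases an early failure; emphasizing the most recent evidence reports where the student actually is.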

The standards-based mindset shifts this conversation to a more accurate way of reporting a student’s level of proficiency. Like any idea, there are always detractors who’ll try to hijack the conversation by suggesting that standards-based grading is about lowering standards, making it easier for students, and letting them off the hook; none of that is true. Standards-based grading is about accurately reporting a student’s true level of proficiency. As students learn and grow through the curriculum, they should be given full credit for their achievement. When they don’t receive full credit, we are sending the not-so-subtle message that they should have learned faster!

A standards-based mindset is separate from how we report grades. With a standards-based mindset you can still report traditional grades; it’s just that how you determine grades is significantly different. Teachers with a standards-based mindset eliminate the influence of non-learning factors from their gradebooks. Whether it’s extra credit or late penalties, a standards-based mindset is about accurately reporting a student’s level of proficiency. Another misconception about standards-based grading is that meeting deadlines, completing work, and the like are not important; they are important, they’re just different. A student isn’t less proficient in math because the work was handed in two days late. Attributes, habits, and college/career skills are important, and with a standards-based approach they remain important, but separate.

The standards-based mindset is about emphasizing the most recent evidence of learning by allowing students to receive full credit for their accomplishments. Reassessment is not about hitting the reset button or developing a do-over generation; it’s about recognizing that students have surpassed their previous levels of proficiency and giving students the opportunity to demonstrate those higher levels…and then giving them full credit for their learning. Students are still held accountable – for the learning – and not punished when they fall short of expectations. We don’t need to use the gradebook to teach students to be responsible; when we do, it leads to inaccurate grades.

Standards-based grading and reporting is about levels of proficiency, accurate information, and a reorganization of evidence. Before we can fully implement standards-based reporting we need to develop a new way of thinking about grading; we need a standards-based mindset. Whether you still report traditional letter grades derived from predetermined percentage scales or are somewhere in the midst of developing a standards-based way of reporting, the standards-based mindset is the necessary first step toward more thoughtful and meaningful ways of reporting. With a standards-based mindset, if a student used to be a 40, but now is an 80, she should get an 80, not a 60!

Points over Practice?

This post is written as a precursor to my session on homework at next week’s Pearson-ATI Summer Conference.

You’d think by now we’d have the whole homework thing figured out. Should it be assigned? What is the purpose of homework? How much is too much? How much is too little? Should it be graded? Is it formative? What if my students don’t do it? What if only half of my students do it? Why do we continue to act surprised by the fact that some students don’t master the intended learning the first time they practice it? These (and so many other questions) fuel a continual debate over where the actual sweet spot of our homework routines is.

Is homework the means or the end? In other words, does homework present students with an opportunity to further advance their proficiency with regard to specific curricular standards, or is it an event all unto itself? While some might be tempted to answer both, it is challenging to come down the middle on the means-versus-end discussion.

As a means, homework tends to be about practice. Inherent in this practice paradigm is the elimination of points and their contribution to an overall grade. In other words, as practice, homework is formative. As an end, homework is just the opposite; it tends to be an event that independently contributes (even in a small way) to a report grade. While subsequent new evidence of learning may emerge, homework as an end remains a contributor to what could eventually be an inaccurate grade. And that is the bigger point. Whatever we report about student learning – and however we determine the substance of what we report – must be as accurate as possible. Previous evidence (homework) that no longer reflects a student’s current level of proficiency has the potential to misinform parents and others. When homework counts, we are emphasizing points over practice.

“…but it only counts for a small percentage of a student’s final grade,” some might argue, “so it doesn’t really matter.” I suppose on one level that might be true; however, consider a scenario where someone steals five dollars from you and then asks you to dismiss it since they didn’t steal a lot of money. Now, I do understand that making the connection between stealing and counting homework is a stretch, but my point is that if learning (and the accurate reporting of a student’s achievement) is our priority, then emphasizing points clearly misses the mark. It’s not how much the inclusion of homework impacts the student’s final grade; it’s that it does in the first place.

Still, others may proclaim (and wholeheartedly believe) that, “…if I don’t grade it, they won’t do it.” Again, while that might be the paradigm in a classroom, we have to ask ourselves who is responsible for creating that paradigm. We must recognize that students don’t enter school in Kindergarten with a point accumulation mindset; the K student never asks her teacher if the painting is for points! So where do they learn that? Somewhere in their experience points (and grades) become a priority for the adults…so they become a priority for students. Parents and students also contribute to this mindset, but we have to acknowledge our role as well. Also, if the only thing motivating students to complete any assignment is the promise of points then we really have to consider whether the assignment is truly worth completing in the first place. Again, is homework a means or an end?

I am looking forward to sharing more on the topic of homework, practice, and assessment at Pearson-ATI’s 20th Annual Summer Conference next week (July 8-10, 2013) in Portland, OR. The session on homework entitled Practice without Points will explore the biggest hurdles that prevent some teachers from eliminating the points attached to practice work, the reasons we assign homework and how those reasons fit within a balanced assessment system, and how teachers can thoughtfully respond to the trends they see between initial homework results and subsequent assessment data. You can read more on why I believe homework should be for practice and used formatively (here) rather than being used as part of a summative reporting process.

I will also be leading a session on Effective Leadership in Assessment specifically suited for those responsible for taking assessment literacy to scale and a session entitled Infused Assessment that takes participants back to the core fundamentals of formative assessment by infusing it into already existing instructional practices rather than creating  summative-events-that-don’t-count. 

If you’re unable to attend the conference, please take some time to follow the hashtag #ATIcon on Twitter.

Everything is Assessment

If there is one bias that I have developed when it comes to assessment for learning it is this: As much as possible, we should not have to stop teaching in order to conduct our formative assessments.  In other words, if I were to walk into a classroom and observe, the lines between the moments of assessment, instruction, and feedback would be blurred; the chosen strategies would seamlessly lead students and teachers through a continuous assessment-instruction-feedback loop. While there are always exceptions to any rule, we should, as much as possible, strive to infuse our assessment for learning practices into our instructional strategies.

With that, formative assessment is actually easier to infuse than some might think. So many of the strategies that teachers have been using for years can – quite effortlessly – be used for formative assessment purposes. In fact, when I’m asked to provide/discuss some effective formative assessment strategies with teachers I’m often met with the fairly typical response of, “Oh, I already do that.” 

Now, I’m not doubting their responses.  The truth is that many teachers are already doing or using the strategy I describe, at least at first glance. Upon further review, however, I’ve come to realize that while many are using the strategies I outline, the strategies fall short of serving as an assessment for learning.

Everything teachers do – every strategy, activity, or process – is an assessment in waiting. Every activity students participate in – every project, assignment, or task – has information that can be used for formative purposes if we follow two simple guidelines.

First, every activity must be linked to the intended learning. Activities are just activities unless there is a direct link between the activity and the intended learning; that’s what turns a task into a target. Even better is expressing this link in student-friendly language so that students may have intimate access to what they are expected to learn from the activity. This link is what’s often missing in far too many classrooms. Think about how often you begin a lesson by describing to students what they are going to do as opposed to what they are going to learn. The link to learning will establish far greater relevance for students and assist in their understanding of why – especially with knowledge targets – what they’re doing today is important and relevant for tomorrow (and beyond).

Second, the results of every activity must have the potential to elicit an instructional response from the teacher. One of the core fundamentals behind formative assessment is that the collective results are used to decide what comes next in the learning. Now I use the word potential because the results of your activities (assessments) may indicate that what you had previously planned to do tomorrow is, in fact, the most appropriate decision. You’re not always going to change course, but for an activity to serve a formative assessment purpose it must have the potential to influence what you plan to do next. As long as you are willing to consider some instructional adjustments based on the results of the activity, it becomes an assessment for learning. As well, the more we can involve students in the process of self-assessment and personalized adjustments, the more they become meaningful decision-makers in their own learning.

Whether it’s a class discussion, an A/B partner talk exercise, an Exit Slip, a 4 Corners Activity, a Jigsaw, or the use of exemplars, we can infuse our assessment/feedback practices into our instructional routines. When we link an activity to the intended learning and allow the results of the activity to potentially influence our instructional decisions, it moves from being just an activity to an assessment. Everything is an assessment in waiting if we use these two guidelines to enhance what we’re already doing.

Over-Prepare ‘Em

Although many schools/districts have had students in session for a while now, this week, for many, marks the second week of school. As such, it is likely that many of you are preparing your students for their first summative assessment/moment in your class (maybe it’s already happened).  Back in January – in my first blog post no less – I wrote that “It’s all about Confidence.”  While a new school year can provide many students with the opportunity to re-invent themselves and fix what (in their minds) needs fixing, there is an unparalleled opportunity to build student confidence through success on the first summative assessment.

This is not a debate about the merits of summative assessments; this is about the realization that many of you will be using some form of summative assessment to determine whether or not your students have reached the intended learning goals. Therefore, if you want students to have a positive emotional response (feeling confident) to the prospect of being assessed, over-prepare your students to the point where success is almost guaranteed.

Two things that over-preparing doesn’t mean: It doesn’t mean you give it away, nor does it mean you dumb it down. In either situation students will quickly recognize that the summative moment is atypical and does not represent their usual experience in school/your class, thereby rendering the assessment results meaningless. Over-preparing means we provide the maximum amount of learning and support to ensure that they are ready for that first authentic summative moment. This will maximize their success and likely result in many students “out-performing” themselves – which leads to increased confidence that this year might be different and that success (or even greater success) is possible! As a reminder, here is one of my favorite quotes from the book Confidence by Rosabeth Moss Kanter:

The expectation about the likelihood of eventual success determines the amount of effort people are willing to put in. Those who are convinced they can be successful – who have ‘self-efficacy’ – are likely to try harder and to persist longer when they face obstacles. (pg. 39)

Now…imagine what might happen if we over-prepare ’em for every assessment?

Function over Format

In all of the discussion and debate regarding summative and formative assessments, there is one misunderstanding that seems to be revealing itself more and more.

Too often, the discussion regarding summative v. formative assessment seems to gravitate toward a critique of certain assessment formats and their place in either the summative or formative camp. For me, this is an irrelevant discussion and can distract us from developing balanced assessment systems that seek to match assessment methods most appropriately with the intended learning. In short, an assessment’s format has little, if anything, to do with whether an assessment is formative or summative; what matters is its function.

To determine whether an assessment is formative or summative ask yourself this one simple question: Who is going to use the assessment results?

See, if you take those assessment results and use them to provide useful advice to students on how the quality of their work or learning can improve AND you don’t “count” them toward any type of report card or reporting process then they’re formative; even more formative if the students self-assessed and set their own learning goals.  If, however, you are determining how much progress a student has made as of a certain point in the learning AND are going to include the result in a cumulative report card or other reporting process then they’re summative.  Whether you convert and/or combine the results into another format (letter grade, etc.) is really not relevant.

In essence, if the assessment results from your classroom leave the classroom and inform others about how students are doing then you’ve got a summative assessment.  If the results stay within the classroom and are used for feedback, that’s formative.

Here’s the rub – every assessment format has the potential to be formative or summative since it has everything to do with the function (or purpose) of the assessment and nothing to do with format.

Now, I’m not here to suggest that a short/selected answer assessment is the deepest, most meaningful assessment format; however, in some cases a short/selected answer assessment can be the most efficient means by which a teacher might know whether his/her students have mastery over the key terminology in a science unit. This demonstrated mastery will allow the teacher to feel more confident about moving on to more meaningful learning opportunities. What the teacher does with the results will be the determining factor as to whether it was a formative or summative event. If the results “count” then it was summative; if they don’t “count,” then formative. Summative and formative assessments begin with two very different purposes; knowing our purpose for assessing is the first key to developing high-quality, accurate, and clear assessment information.
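The function-over-format rule reduces to a single yes/no question, which can be sketched as a toy helper (the function name and signature are hypothetical, purely for illustration):

```python
def assessment_purpose(counts_toward_reporting: bool) -> str:
    """Toy sketch of the function-over-format rule: the same assessment
    format is summative when its results 'count' toward a report, and
    formative when they stay in the classroom as feedback."""
    return "summative" if counts_toward_reporting else "formative"

# The same science-vocabulary quiz can serve either purpose:
print(assessment_purpose(True))   # summative: results go on the report card
print(assessment_purpose(False))  # formative: results only shape tomorrow's lesson
```

Notice that the quiz itself never changes; only what the teacher does with the results does.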

Anyway, the discussion/debate regarding what quality assessments look like is one for a future post. For now, know that every assessment format can be a viable option and it’s what happens with the results that matters the most.

Incidentally, for another recent and interesting take (one which I happen to agree with) on summative assessments, please check out Darcy Mullin’s post here.

Practice without Penalty

Somewhere along the way we created an educational mindset around practice and homework that determined that if we don’t count it, the students won’t do it. This idea that everything counts is fraught with misconceptions and situations that make accurate grades a near impossibility. In so many other aspects of life – fine arts, athletics – we value the impact and importance of practice. It seems odd that in school we’ve decided that every moment should be measured.

Here is my position:

Anytime a student makes a first attempt at practicing new learning it should not be included in the grade book until the teacher provides descriptive feedback on the student’s work.

First, let me clarify my view on the difference between practice and homework.

  • Practice refers to those times when students are making a first attempt at using or working with new learning. For most of us, this represents some of the traditional homework we used to do and, in some cases, still assign.
  • Homework refers more to work completed at home that is either an extension or deepening of the key learning outcomes, or work completed after descriptive feedback has been provided and/or in preparation for a summative assessment.

From my perspective, I don’t have any issues with this type of homework counting toward a final grade; my issue is when practice counts.  Here’s why:

1) Whose work is it? When students take work home there is always the possibility of outside influence. Older siblings, parents, and friends can (and one might argue should) be involved in supporting the student as he/she increases his/her understanding of the key learning. The problem arises when practice results go into the grade book. The outside influences could affect assessment accuracy and distort achievement results.

2) Flawless Instruction? The idea that I can teach something once and 30 diverse learners can now go home and proficiently complete an assignment is absurd.  We can’t assume that our instructional practices are so flawless that 30 different students (or even more if you teach multiple sections) will all get it at the end of the block…every day; even the most exceptional teachers can’t do that.

3) Clear directions? Even with the best intentions, we are not always clear with the directions we provide to students for completing the work independently. That’s the key – independently. It is also possible that we were clear but some students misunderstood; that may be their responsibility, but it wouldn’t be the first time a student, especially a vulnerable learner, misunderstood what they were supposed to do.

4) With or without me? This, of course, will shift as students become more mature, but in general, I’d rather students do the vast majority of their learning with me rather than without me.  By doing so, I can more accurately assess (not test) where they are along their learning continuum.

5) Score the GAMES, not the practice. There is a lot wrong within the professional sports world, but they do understand the importance of practice.  There is training camp, where they wear all of the equipment but it’s not a real game.  Then they have exhibition games which look, sound, and smell like real games – even charge the public real prices – but they don’t count.  Yes, they even keep score, but the games are zero weighted…they don’t matter.  Then they play the regular season, which counts, except nobody really cares who’s in first place after that because all that matters is who won the championship.  Somehow we need to have more “training camps”, “exhibition games”, and even “regular games” before our academic play-offs!

Two additional thoughts:

  1. If everything counts, when are students supposed to take the academic risks we encourage them to take? Most kids will stay in their safe zone. Why risk an ‘F’ by going for a ‘B’ when I’m happy with a ‘C’?
  2. If the prospect of the grade is the only potential motivator, then it is possible the assignment isn’t really important and maybe the students shouldn’t be asked to do it in the first place.

My bias on practice was/is this:

  • I assigned practice and checked to see if it was completed.
  • We went through the practice assignments and provided descriptive feedback to students.
  • I kept track of their practice scores (zero weight) but they never counted toward a report card grade!
  • Most students did their practice assignments and I never experienced the flood of assignments at the end of the year!
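The zero-weight routine above can be sketched as a simple weighted-average gradebook (the entries and weights are hypothetical, not taken from any real gradebook software):

```python
# Practice scores are recorded (weight 0.0) but only summative evidence counts.
entries = [
    {"score": 55, "weight": 0.0, "kind": "practice"},   # tracked, never counted
    {"score": 70, "weight": 0.0, "kind": "practice"},
    {"score": 85, "weight": 1.0, "kind": "summative"},
    {"score": 90, "weight": 1.0, "kind": "summative"},
]

total_weight = sum(e["weight"] for e in entries)
grade = sum(e["score"] * e["weight"] for e in entries) / total_weight
print(grade)  # 87.5: practice is visible in the records but never moves the grade
```

The teacher (and the student) can still see the practice trend from 55 to 70, but the reported grade reflects only the summative evidence.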

I think our students need room to breathe at school.  If every moment is graded students will play it safe, become passive learners, and never stretch themselves to their potential.

“How did he get so Good?”

My son, Adrian, is an exceptional skier.  Now before you think I’m about to take credit for it you need to know that it had nothing to do with me.  In my first blog post on confidence (Jan. 27) I mentioned how afraid of skiing he was when he first learned; he’s not afraid anymore. The picture on the left is of Adrian last year (age 9) so clearly fear is no longer a factor.

Every Saturday morning Adrian and I make the 35 min. drive up to Apex Mountain. Adrian is part of the Apex Freestyle Program. Now I’m not saying he’s going to the Olympics or anything, but when your ten-year-old tells you that the double-black diamond runs are easy and boring you know he has some skills. Every Saturday he skis with the freestyle program. So we drive up to Apex, I drop him off with his coach, and then I head off for a day of skiing. If my wife and daughter choose to come then I ski with them; otherwise I’ll ski with some of my friends who are at Apex that day. It’s a small mountain so it’s easy to find someone.

This past Saturday, however, was a little different. There was no one for me to ski with so I decided to ski alone.  One thing I learned about skiing alone is that it really tests your own commitment to improvement.  I am a good skier, not great; moguls are my enemy!  I found out that when you ski alone you don’t really have to ski any mogul runs. After all, no one is watching.  However, I did head over to a few of the more challenging runs because I do want to improve.  After bouncing my way down one of the black diamond runs – I didn’t fall, but make no mistake, I didn’t ski it – I stopped to catch my breath.  As I stood – alone – thinking about what I had just done, I kept mentally referring back to the images of Adrian skiing the very same run. I wondered aloud, “How did he get so good?”

I traced it back to two things: First, he has spent a tremendous amount of time on his skis over the last 4 years; last year he skied 25 times.  However, time on skis is not enough if you are consistently practicing the wrong technique.  Second, and more importantly, he’s had excellent coaching.  As I thought about the coaching he has received and reflected upon it within the context of what we know about assessment for learning and sound instruction I realized his coaches did all the right things.

Here’s what his coaches did:

  • They figured out his strengths and weaknesses as a skier.
  • They made Adrian spend most of his time strengthening his strengths in order to maintain his confidence & motivation.
  • They put Adrian on the edge of improvement by introducing challenges that gradually pushed him.
  • They gave him specific, descriptive feedback on how to improve.
  • They taught him how to recognize his own mistakes and how to correct them.

Here’s what his coaches DIDN’T do.

  • Evaluate him every week and send a report home.
  • Set unattainable improvement targets.
  • Keep the standards for excellent skiing a secret.
  • Only tell him what he was doing wrong.
  • Set a time limit for improvement.
  • Scare him or stress him out by expecting too much too soon.

As I thought about Adrian’s experience, and how proficient at skiing he has become, I realized that his ski instructors got it. Everything they did with him fell in line with the current thinking about formative assessment, descriptive feedback, and the role both play in allowing kids (students) to maximize their success. The self-assessment/correction piece was the icing on the cake. By teaching him the standards of excellent skiing and breaking down the techniques into manageable chunks, Adrian is now able to correct himself when something (or he) goes sideways! After all, the person who does the assessing does the learning.

How do we get more of this in our classrooms?  I know it’s there, but how do we get more? How can we give our students more opportunities to strengthen their strengths, self-assess their progress, and continually sit on the edge of improvement?