Most Recent? Most Frequent? Most Accurate?

One of the fundamental tenets of standards-based grading is that greater (if not exclusive) emphasis is placed on the most recent evidence of learning. As students move through their natural learning trajectory, it is important that they be credited with their actual levels of achievement. That is, when students reach a certain level of proficiency, what is reported should accurately reflect that level. To average the new evidence with the old, for example, is to distort the accuracy of the grade; the grade then reflects where the student used to be, since the student was, at some point, likely at the level the average represents.

What we have collectively realized is that the speed at which a student achieves has inadvertently become a significant factor in determining a student’s grade, especially within a traditional grading paradigm. When averaging is the main (or sole) method of grade determination, success is contingent upon early success; otherwise the average of what was and what is will continue to distort the accuracy of the student’s grade. Never forget that every 40 needs an 80, just to get a 60. That’s pure mathematics: the lower the initial level, the more a student has to outperform himself or herself just to achieve even a minimal level of proficiency.
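To make that arithmetic concrete, here is a minimal sketch (the scores are hypothetical, chosen only to mirror the "every 40 needs an 80" point) comparing a simple average with reporting the most recent evidence:

```python
# Illustrative sketch: how averaging an early low score with later evidence
# pulls a grade down, compared with reporting the most recent evidence.
# The numbers are hypothetical examples, not data from the post.

def mean_grade(scores):
    """Traditional approach: average all of the evidence."""
    return sum(scores) / len(scores)

def most_recent_grade(scores):
    """Standards-based emphasis: report the latest evidence of learning."""
    return scores[-1]

early, later = 40, 80
print(mean_grade([early, later]))         # 60.0 -> a 40 needs an 80 just to reach 60
print(most_recent_grade([early, later]))  # 80   -> reflects where the student is now
```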

More often than not, the most recent evidence of learning is the most accurate. This is especially true when our standards, targets, or performance underpinnings represent foundational knowledge and skills. Foundational knowledge and skills are typically elements of the curriculum that follow a fairly linear progression, and slipping back is highly unlikely; once students truly know or can do something, it is unlikely that they will, even after an extended period of time, suddenly not know it or not know how. That doesn’t mean mistakes won’t occur. Even the most proficient students make mistakes, but that doesn’t mean they’ve suddenly lost proficiency; errors are inevitable. Does that mean the most recent evidence is always the most accurate? Not always.

Sometimes the most frequent evidence is the most accurate. Generally, the more complex the standard or the demonstration of proficiency, the more likely it is that a teacher will need to consider the most frequent evidence as the most accurate. Take, for example, writing. Students are often asked to write in a variety of styles and/or genres. As such, taking the most recent writing sample may be misguided, since the expected style/genre could be the student’s weakest. For example, if a teacher asked students to write an argumentative paper as their final paper, and that is a student’s weakest form of writing, then the potentially poorer result may give the appearance that the student’s writing skills have declined. However, if the teacher had simply reordered the assignments and made argumentative writing the first paper, then the optics would reveal a very different trajectory. With complex standards/outcomes like writing, accuracy is more effectively achieved when the teacher examines all of the writing samples and looks for the most frequent results as they relate to the intended standards.

Staying with the writing thread, the most recent writing samples may be the most accurate within a particular style; if a student writes multiple argumentative papers, then it’s likely the most recent is the most accurate. However, as the styles change, most frequent may be more accurate. The point is that we need to be more thoughtful about how we apply the concepts of most recent versus most frequent. This is more art than science, and teachers must become comfortable with using their professional judgment. Remember, the goal is accurate grading and reporting. The art of grading is about the teacher using his or her professional judgment to determine a student’s level of proficiency. Teachers are more than data-entry clerks who enter numbers into an electronic gradebook; they are professionals who understand what quality work looks like, who know what is needed for students to continue to improve, and who know when the numbers don’t tell the full story.
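As a rough illustration of that judgment (not the author’s formula, just one possible sketch with hypothetical scores on a 4-point scale), a teacher might take the most recent evidence within a single genre but look at the most frequent level across genres:

```python
# Hypothetical sketch: summarizing writing evidence recorded on a 4-point scale.
# "Most recent" is applied within one genre; "most frequent" (the mode) across genres.
from statistics import mode

argumentative = [2, 3, 3, 4]      # repeated attempts at one genre: take the latest
across_genres = [3, 4, 2, 3, 3]   # one sample per genre: take the most frequent

print(argumentative[-1])   # 4 -> most recent evidence within the genre
print(mode(across_genres)) # 3 -> most frequent level across genres
```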

I am looking forward to sharing more on this topic at the Pearson-ATI Sound Grading Practices Conference (Dec. 5-6, 2013) in my session titled Most Recent? Most Frequent? Other sessions I will be leading include Zero Influence-Zero Gained, which examines the misguided logic behind punitive grading, and Effective Leadership for Sound Grading and Reporting, for administrators and teacher-leaders looking to implement more sound, fair, reasonable, and accurate grading practices in their department, school, and/or district.

As well, I will be presenting a keynote session entitled Accurate Grading with a Standards-Based Mindset where I will outline the mindset necessary to begin the shift away from traditional grading toward a more accurate, standards-based approach that maintains student confidence and focuses on learning rather than the simple accumulation of the requisite number of points.

If you’re unable to attend the conference, please take some time to follow the hashtag #ATIcon on Twitter.

6 thoughts on “Most Recent? Most Frequent? Most Accurate?”

  1. Very thoughtful post, Tom. I completely agree with the sentiment and the discussion is an important one, but I’d like to offer some pushback. I don’t see what you are discussing as being relevant TO standards-based grading, but as an argument FOR it. In SBG, we wouldn’t report on “writing” but individually on each of the styles/genres, or even better, on the development of meaning, form, style, conventions, etc. within and across styles.

    When reporting on specific standards/outcomes/objectives, there’s no way to average. It’s simply a matter of reporting if the student is meeting the expectations for that standard, and possibly to what degree. Now, obviously traditional report cards are completely useless for communicating this with parents, but that’s a whole other conversation.

    • Thanks for commenting, Jeremy, but I’m not sure I’m with you. What SBG looks like is often contextual (i.e. jurisdictional) and varies depending on levels. Yes, there are core fundamentals, but there is much to consider when implementing SBG. One of those core fundamentals is using the most recent/frequent evidence, so I’m not sure I see (or agree with) the TO/FOR point you make. Admittedly, the post is relatively generic, but that’s intentional to capture the widest range of readers. It isn’t always the case that separate levels would be given for form, style, etc. With younger students, sure, but one of the concerns with separating standards too far is that we lose sight of what really matters, which is pulling together multiple standards and outcomes into more authentic/larger demonstrations. Sometimes the whole is greater than the sum of its parts (e.g. PBL). The idea of a high school biology teacher, for example, reporting on each standard (and the subsequent underpinnings) would be overwhelming and not likely consumable by parents. SBG is not necessarily about creating a never-ending checklist of finite skills that we check off. While that may happen in some places and/or with younger students, it won’t happen everywhere. There is no SBG formula, so there really is no “in SBG you do X,” because curricular standards and/or interpretations vary. Yes, there is the “common” core in the US, but the way school districts/states decide to report on standards is not uniform.

      Also, while I’m not a defender of averaging, it is simply not true that there is “no way” to average when averaging is done across 4 levels (vs. 101 percentage points). SBG will (and does) look different at the high school level; often some form of summarization is necessary. I understand your comment from an elementary perspective, but that approach is not directly replicable at the high school level.

      • My apologies; I often speak of theoretical ideals rather than practicalities. Ideally, I do think we should be separating out, and giving feedback on, individual standards. Practically, the specificity with which these are prescribed by the governing body will likely make this a challenge in many cases, and I am admittedly unfamiliar with how standards are written in the Common Core, but I do like the direction the new curriculum in BC is taking on this. As a current elementary teacher, I feel fortunate to have the flexibility in the prescribed curriculum to focus on the big-picture ideas. I didn’t have that flexibility when I taught senior math and chemistry classes, though I also didn’t know any different at the time.

        Coming from a math and science background, I can confidently say that averages are statistically irrelevant unless that which is being measured is the same each time, and in most cases it isn’t. Regardless, I think we both agree that averaging is not a powerful practice.

        I don’t think the idea of a checklist is too far-fetched when reporting to parents. For example, when I give a math assessment (formerly known as a test), the questions are always short answer, each question addresses a specific learning outcome, and each outcome usually has three or four questions associated with it. Students and parents never see a number or a percentage associated with the assessment; they just get a checklist that identifies (on a 4-point scale) how well the student demonstrated the expected learning. This way I communicate to parents very specifically what the students know and can demonstrate, and where they need to focus more. It’s a lot of work to set up, but it fits with my vision of SBG, and I think it can be done in most courses at most levels with a reasonable list of standards to implement.

        I also think that most teachers probably do a decent job of explaining what the expectations are and how students meet them (or not). I think the bigger problem here is report cards and the expectation that they give a complete and accurate picture of how students are doing. I wonder if we’d be having this discussion if report cards weren’t mandated 2-4 times a year.

  2. Mr. Schimmer, I am a student at the University of South Alabama enrolled in an EDM310 course. The post you have written has given me some excellent tips for becoming a better teacher. Once I have my own classroom, I will definitely remember some of the points you have brought up. I agree most with the point that accuracy is the goal. I can think back to my 11th grade English course, where in the first half of the semester I did terribly because I was not that great at reading and regurgitating facts on tests. The second half of the semester was all poetry and creative writing, and it was like I was a totally different student. I do not blame the teacher at all; she was in fact one of my favorite teachers. But, as stated above, what are some tips on how I could avoid running into that problem with my own students and still get through the required curriculum?

  3. Mr. Schimmer, I am a student at the University of South Alabama enrolled in EDM310. I really enjoyed reading your post. It gave me some great thoughts on becoming a future teacher. I agree that the pace at which someone works should not be how they are graded. In my freshman year of college, I had a class where we had to come in every day, spend the first 10 minutes on a given topic, and write a 7-10 sentence paragraph. Although it may seem easy to others, I felt rushed and couldn’t think off the “top” of my head. How can I manage this for the writing part of my future classrooms?
