Eleven months ago I wrote a letter to my AP Biology students about stumbling in my efforts to include more learners in my AP Biology program. I was deeply conflicted in deciding how to proceed from our scores: student morale was as good as it had ever been and enrollment was up, but their scores were the lowest of my career.
This was my last year teaching AP Biology, and the changes in my methods continued. Enrollment this year tripled last year’s, and early numbers showed it on the rise again next year (had I remained to teach again). Since my commitment to inclusiveness over scores two and a half years ago, I have lost ZERO students to the scythe of early-year panic drops. I had groups of students approaching me, a remarkable number of whom were future enrollees I had yet to even teach, looking for lab placements and enrichment experiences to get more involved in science. Students believed they could biology. I am happy to say I built the environment I sought for my AP program.
Scores are not out yet, but I did another overhaul of my assessment system which I think is worth sharing. When I was brute-forcing my students’ success, I used textbook question banks and regular weekend quizzes to ensure my handful of students did A LOT of testing, and their AP scores were very strong. Last year I transitioned to assessing only by asking students to write what they know, and we focused their analysis on what they could add over multiple attempts. Every student knew something about photosynthesis, and every student could know something more than they did each time. Nobody felt useless or stumped, because even if they knew they “weren’t there yet,” they could work from what they did know and focus on filling their gaps and fixing their misconceptions.
The shortcoming in this system was the lack of an anchor for students in evaluating what they know against what the College Board asks them to know. My students were surprised in May because I had told them they had mastered a topic, but my judgment was imperfect, and being able to talk about what they know is meaningfully different from solving problems set within a schema… or, more often, at an intersection between multiple large networks of ideas. I needed to give my students practice working from what they know while they experienced problem solving in ways I had failed to maintain during my transition last year.
So this year I changed my assessment perspective. I still needed to hook students on our culture of knowing things, and you must know things to solve problems. For those reasons my first semester changed very little. I made a re-commitment to inquiry and lab experience, but my knowledge assessment suite was only sharpened and refined.
Second semester, however, we were ready to be dangerous. We knew about the world of molecular biology, so from day one we worked to address problems. In January I said, “Muscular dystrophy… what’s the deal with that?” We actually had about half an hour of productive discussion regarding what we did know (I got yet another reminder that students are not blank slates!), but then I handed them our first formal assessment. It had a full page of background information pulled from expert sources and a deceptively simple prompt. They said we don’t know this…
Great! What do you need to know to be able to solve this problem? We made a list. Our work was filling the gaps they needed to address the problem. Once the list was all crossed off, we attempted the problem. After several attempts, they were ready for more. “AIDS resistance… how’s that possible?” Away we went again.
The top level of work changed each time they attempted the assessment, but usually only in small ways that ensured they focused on the ideas rather than the test itself. Every student could still know something, but now there was a much more concrete framework for their trajectory of development. It was challenging to write milestone assessments that appropriately built student understanding in ways that actually conferred success on the summative assessments… but I think I hit the mark more often than not. In January students needed 3+ attempts per assessment to finally bank all levels, but by April I did have students banking things on the first try.
Logistically, I provided students with the concepts level and the background reading at the start. The synthesis was brand new each of the first several attempts, but as a class they found it more comfortable to attempt only the concepts the first time around so they could have more time to write and revise without the time crunch of getting both things done while both phases were unfamiliar. If students banked all levels of the knowledge assessments/driving problems, they were automatically awarded credit for all formative milestones (but not vice versa). This took a lot of pressure off students who wanted to focus on growth during the unit to be ready to crush it at the end, and eliminated the redundancy of going back to do in part something they’d already demonstrated they could do in full. I was surprised to see an even divide in the class: some preferred to bank every chapter first, and some wanted to just work the big problems.
I don’t think these assessments are perfect, and I’m posting them warts and all. What matters is that old concepts continue to be explicitly assessed in later topics (see biochem stuff in like… every single assessment). Here they are. Many need copy editing and revision from how they were delivered this year, because they were new. Take, modify, use, and share in good health.
Students need opportunities to show what they know. Deficit grading will disenfranchise too many students, especially in AP classes. To remain successful we have to find ways to provide layered assessments that are accessible to students at many points on the learning trajectory while still building toward the robust understanding expected by AP exams. Philosophically, I am really liking the path I’ve been on. Perhaps some of you will walk this same direction and push even further down the road.
I’ll update this post in July when the scores are released.