PBL: Water Quantity and Water Quality

A *New* Biology Adventure for Your Kansas Students: PBL – Water Quantity and Water Quality
The NSF Kansas EPSCoR project titled Microbiomes of Aquatic, Plant, and Soil Systems across Kansas (MAPS), a collaboration of researchers from KU, K-State, WSU, Fort Hays State, and Haskell Indian Nations University, hosted 12 Kansas biology teachers in a Summer Institute, June 4-8, 2018. Broken into three teams (Aquatics, Terrestrial, and ArcGIS), we worked with researchers to investigate how the microbiomes of Kansas are critical to understanding several key issues for our state, including agricultural sustainability, water quality, greenhouse gases, plant productivity, and soil fertility. In addition to using ArcGIS to map native and restoration prairie species distributions under the direction of Drs. Helen Alexander, Peggy Schultz, and Jim Bever, we all did some aquatics field work led by the Deputy Director of the Kansas Biological Survey, Dr. Jerry deNoyelles, and Assistant Research Professor Dr. Ted Harris, who specializes in Harmful Algal Blooms (HABs). We learned how to use lake surveying equipment to test water quality parameters and sampled macroinvertebrates in thermally stratified Cross Reservoir. We also seined Mud Creek, where Drew Ising apparently stumbled into a parallel universe when I botched this pano:


The end result was this *NEW* PBL on Water Quantity and Quality, which I hope benefits your Biology students as much as I know it will benefit mine:

——————————————————————————————————————–


Data Analysis in a Natural Selection Simulation

(Figure: ±1 SEM error bars added.)

I really like the HHMI BioInteractive activity “Battling Beetles”. I have used it, in some iteration (see below), for the last six years to model certain aspects of natural selection. There is an extension where you can explore genetic drift and Hardy-Weinberg equilibrium calculations, though I have never done that with my 9th graders. If you stop there, the lab is a bit lacking in quantitative analysis: students calculate phenotypic frequencies, but there is so much more you can do. I used the lab to introduce the idea of a null hypothesis and standard error to my students this year, and I may never go back!

 

We set up our lab notebooks with a title, purpose/objective statements, and a data table. I provided students with an initial hypothesis (the null hypothesis) and asked them to generate an alternative to mine (the alternative hypothesis). I didn’t initially use the terms ‘null’ and ‘alternative’ for the hypotheses because, honestly, it wouldn’t have an impact on their success, and those are vocabulary words we can visit after demonstrating the main focus of the lesson. When you’re 14 and trying to remember information from six other classes, even simple jargon can bog things down. I had students take a random sample of 10 “male beetles” of each shell color, we smashed them together according to the HHMI procedure, and students reported the surviving frequencies to me.

Once I had the sample frequencies, I used a Google Sheet to find averages and standard error, and reported those to my students. Having earlier emphasized “good” science as falsifiable, tentative, and fallible, we began to talk about “confidence” and “significance” in research. What really seemed to work was this analogy: if your parents give you a curfew of 10:30 and you get home at 10:31, were you home on time? It isn’t a perfect comparison, and it is definitely something I’ll regret when my daughter is a few years older, but it seemed to click for most students. 10:31 isn’t 10:30, but if we’re being honest with each other, there isn’t a real difference between the two; after all, most people would unconsciously round 10:31 down to 10:30 without thinking. We calculated that the average frequency of blue M&M’s changed from 0.5 to 0.53, while orange conversely moved from 0.5 to 0.47. So I asked them again: Does blue have an advantage? Is our result significant?
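The spreadsheet work is simple enough to sketch in code. Here is a minimal Python version of the same averaging, using made-up surviving-frequency samples for illustration (not our actual class data):

```python
import math

# Hypothetical surviving blue-M&M frequencies, one per lab group
frequencies = [0.6, 0.5, 0.4, 0.55, 0.6, 0.5, 0.45, 0.65, 0.5, 0.55]

n = len(frequencies)
mean = sum(frequencies) / n

# Sample standard deviation, then standard error of the mean (SEM)
variance = sum((f - mean) ** 2 for f in frequencies) / (n - 1)
sem = math.sqrt(variance) / math.sqrt(n)

print(f"mean = {mean:.2f}, SEM = {sem:.3f}")  # mean = 0.53, SEM = 0.024
```

Each class period’s numbers just get appended to the list, and the mean and SEM update, which is exactly what the shared Google Sheet does.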

(Figure: error bars represent the 95% C.I. (±0.044) for our data.)

Short story: no; we failed to reject the null hypothesis. Unless you are using a 70% confidence interval, our result is not significantly different based on 36 samples. But it was neat to see the interval shrink during the day. After each class period, we added a few more samples, and the standard error moved from 0.05 to 0.03 to 0.02. It was a really powerful way to emphasize the importance of sample size in scientific endeavors.
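That shrinking error bar is exactly what the SEM formula predicts, since SEM = s/√n. A quick sketch shows the 1/√n falloff as class periods add samples; the sample standard deviation here is an assumed value chosen for illustration, not a measured one:

```python
import math

s = 0.18  # hypothetical sample standard deviation, for illustration only

# SEM = s / sqrt(n): the error bar shrinks as the sample count grows
for n in (12, 24, 36, 56):
    print(n, round(s / math.sqrt(n), 3))
```

Quadrupling the sample size only halves the interval, which is why each additional class period helps less than the one before it.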

Should the pattern (cross-cutting concept!) hold across 20 more samples, the intervals would no longer overlap, and we could start to see something interesting. So if anyone has a giant bag of M&M’s lying around and you want to contribute to our data set, copy this sheet, add your results, and share it back my way. Hope we can collaborate!

Email results, comments, questions to Drew Ising at aising@usd348.com or drewising@gmail.com

–Versions of Battling Beetles Lab I’ve Tried–

HHMI Original

My “Student Worksheet” Edit

Lab Instructions Google Doc

Lab Notebook Intro. from 2017-18

Lab Notebook Data from 2017-18

Assessing the Science and Engineering Practices

I have been thinking a lot about the message I want to send to students about science, and reflecting on my own understanding of what science is. In my short two years as a teacher, a lot of kids have come into my room conditioned to memorize words and concepts until a test. They see science classes as more challenging versions of the memorization-regurgitation cycle and often have insecurities about science. As a student, it took me a really long time to realize that science isn’t about memorizing processes or vocabulary, but about the feeling I get in my head when I don’t know something yet but know that there is something to be learned. It’s about the confusion that happens when your data doesn’t come out the way you expected and you don’t understand why, or the excitement when you connect two ideas you didn’t realize were related. I only realized these things when I had mentors in college who asked me questions that I couldn’t answer by regurgitating vocabulary words. They taught me how to learn rather than how to be taught, and I gained so much confidence. No matter how difficult the concept, I had gained some kind of magic comfort in my ability to work through problems and struggle through sense-making, because I had re-focused my education on the act of learning versus the things I learned.

But how do I get 15-year-olds who have been trained from a young age to read their books, do their vocabulary words, and memorize what the teacher tells them to change their ways and actually do science? How do I give them the science magic that I found during my college years? Thankfully, I am not the only educator who has asked these questions, and the creators of NGSS built the science and engineering practices into the standards. I’ve always planned my lessons with the science and engineering practices in mind, but I’ve never really told my students what the practices are or how exactly you do those things. So this year I’ve promised myself that I’m going to be more deliberate about this. I made colorful posters with the practices on them and hung them in my room, and I have told my students and their parents multiple times that I value the practices. I don’t think these practices are THE ANSWER to helping students understand real science, but I think they are a good place to build from.

I’m going to value these skills in my classroom, so I added a grade book category just for them. My goal is to assess my students on one of the practices at least once a week and to be very explicit and clear with them about what these skills look like. In an attempt to briefly outline mastery, proficient, and developing skills, I put together a rubric that includes all 8 standards. I plan on using the rubric as a general guideline to grade various projects or tasks, ranging from exit slips or bell ringers to longer in-class activities. If I want to assess a certain practice more in-depth, I will break it down into its own more detailed rubric, but for now this is what I’ve got. I’ve attached my first and second drafts of these rubrics in an attempt to show how my thought process changed. I love Google Docs and have given all viewers of these documents the ability to add comments…please do so! I am happier with iteration 2, but am not sure that everything is student-friendly or actually what those skills look like. Big thanks to Camden Hanzlick-Burton, Michael Ralph, and others on the KABT Facebook page who encouraged and pushed my thinking before I was quite ready to make a blog post.

TLDR: Science is awesome! How do I get students to stop memorizing and do science? I made some rubrics to assess science and engineering skills but think they could use some improvement: HELP!

DRAFT 1

DRAFT 2

Trying Something New With Grades

I have wanted to change the way I assess students for a while. I have made changes to how and when I grade assignments, the format of tests, and how understanding is communicated during and after lab activities. But in the end, I was still grading students the same way I always had, the same way I was graded in school, and the same way students have been graded for quite a while. Kids accumulated points, some assignments were weighted more than others, and students who turned in most of their work on time (regardless of quality) tended to do well. This school year, I am not doing that. I will probably fail spectacularly. Luckily, I have administrators who are supporting me, knowing I am trying to do what is best for our students. I am going to try this first with my AP Biology students, since I share the Biology 1 classes with two other teachers, and hope this leads to a wider transition.

I will share what I am doing, but I need your help. After reading through my plan, send me a message or leave a comment with your feedback. What looks good? What should I change? What have you tried and can share to improve my students’ experience? 


I am basing my course assessment on a document shared by AP Biology/Calculus teacher Chi Klein. The College Board shares, as part of the curriculum framework, “Essential Knowledge” statements and recommended “Learning Objectives” drawn from them. Ms. Klein compiled and organized those learning objectives into a document that could be shared with her students. I will be sharing a GoogleDoc with my students in the first days of class, which they will use over the course of the school year.

As is the case in most standards-based and “gradeless” classes I have seen, students will be responsible for justifying their level of mastery over the content. The “Learning Objectives” document I will share with them covers 149 content standards. Students will be able to earn up to four points for each standard based on their mastery of the content, meaning we’d have 596 possible points by the end of the school year. Here is what I’m thinking for my mastery levels (category title suggestions welcome):

Level of Mastery: Example Activities

Knowledge: Notes, Guided Readings, Discussions
Comprehension: Class activities, Worksheets, POGILs, Article Annotations, Quizzes
Application: Experiments, Virtual Labs, Demonstrations, etc.
Synthesis: Summative Exams, Projects, etc.

I envision the initial knowledge level as being pretty straightforward to demonstrate. For the successive levels, I have been torn as to what threshold to use for mastery. If a student wants to use an assignment, lab, test question, etc., do I require them to have earned all possible points? I have been considering requiring at least 90% on a given assignment/test item before a student can use it to justify mastery. As an example, if I have a free-response item on our evolution test with 10 possible points, a student would need at least 9 points before they could use that in a grade conference. If a student only earned 6 points, they would have to revise their response and get new feedback on the item before trying to use it again during their next conference.
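That decision rule is simple enough to write down. Here is a sketch; the 90% cutoff and the 10-point example come from the paragraph above, while the function name is my own invention:

```python
def qualifies_for_conference(points_earned, points_possible, threshold=0.90):
    """True if an item can be offered as evidence of mastery in a grade conference."""
    return points_earned >= threshold * points_possible

# The 10-point free-response example: 9/10 qualifies, 6/10 must be revised first
print(qualifies_for_conference(9, 10))  # True
print(qualifies_for_conference(6, 10))  # False
```

Keeping the threshold a parameter leaves room to tighten or loosen the cutoff per level of mastery if the 90% bar turns out to be wrong.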

So students are still earning points, and the points they earn as a percentage of the overall points possible still determine their final grade. Not very earth-shattering there. But how they are being assessed, and what is being assessed, are different from how I have ever done this before. There is a much greater burden of responsibility (and independence) placed on the student. My feedback is going to need to be both more flexible and more timely to allow students to complete any needed revisions. If not, I will be setting my students up for a very difficult experience.

The one final change is that, at least for my AP Biology class, I am moving away from the traditional 90/80/70/60 scale for grades. The purpose of the AP class, to me, is to prepare students for post-secondary success and to show well on the AP Biology exam. So I want the rigor of the class to match the rigor of the expectations and examination. As anyone who has taken or taught AP Biology can attest, this won’t be difficult. I also want my scoring to reflect that of an AP test. If a student has an A in my class, I want them to have an expectation to earn a 5 on the exam. If they have a C in my class, they might expect to earn a 3 (which in Kansas would now get them college credit; good change, KSBOE/Regents!). Going back through all the data I could find on the correlation of raw exam scores to 5-point AP scores, here is what I am going to roll with this year.

I am going into this completely aware that revisions will happen when I get AP scores back in the summer. If I have a student who earned 499 points in class but only got a 3 on the exam, I will need to reconsider either the point range for that grade or how I let students demonstrate mastery. Again, I am very lucky to have administrators who are willing to let me take this chance, fully aware that I will likely make mistakes.

As for pacing, I am planning on emphasizing one Big Idea each quarter. We’ll start with Big Idea 1 (evolution), which will be more teacher-centered as my students (and I) learn how to function in this new system. As the school year progresses, I hope to transition to a more student-centered model with Big Idea 4 being largely personalized by each individual. Shouts to David Knuffke and Camden Burton for the inspiration here.

This will be my 11th year in the classroom, and 5th teaching AP Biology, and I am finally to a point where I am comfortable enough with my knowledge and abilities to make some changes. I hope this will be a better and more accurate way of assessing student knowledge and mastery, providing more meaning to the grade students earn in my class. But what do you think? What feedback can you give me? I’d love to hear from you in the comments, social media (@ItsIsing), or you can email me (drewising@gmail).

Here goes nothing…


–Documents of Note and Muses–
Syllabus: Ising APBio2017
Student Learning Objectives: GoogleDoc
Camden’s BioBlog Post: http://www.kabt.org/2015/02/23/my-biology-objectives/
Kelly’s Gradeless Classroom: http://www.kabt.org/2015/06/26/the-great-gradeless-experiment-1/
Bob Kuhn’s 52-Week Gradeless Blog: https://medium.com/@mszczepanik/52-weeks-of-grade-less-week-1-the-journey-begins-da3e03739a7e
David Knuffke’s Published Thoughts on SBG: http://www.knuffke.com/search?q=standards

 

NOW ACCEPTING 2017 Fall Conference Session Proposals

Friends, Members, and Colleagues,

The Kansas Association of Biology Teachers would like to encourage you to submit a session proposal for our upcoming fall conference. We are being hosted by the Sternberg Museum (Fort Hays State University, Hays, KS) Saturday, September 9th (more information to follow soon). Whether you are a seasoned presenter or a first-timer, an individual or a group, we’d love to have everyone share something with us. Our strength is in the innovation and openness of our classrooms, and we can’t wait to see what amazing stuff is going on across our state.

Proposals will be accepted from 21 July through 8 August. Presenters will be notified of proposal status no later than 11 August.

2017 FALL CONFERENCE PROPOSALS

If you have any questions regarding the conference, proposals, Shark Week, etc., contact Drew (andrewising@gmail) or Sara (sarahettenbach@gmail).