Several universities across this great state have partnered together on a massive interdisciplinary project (MAPS – Microbiomes of Aquatic, Plant and Soil Systems). As part of this project, they are holding annual Summer Institutes for teachers interested in these fields. This summer is the second edition, being held 17-21 June 2019 at the Konza Prairie Biological Station. Participants are given travel allowances and a stipend, and anyone with a commute >1 hour driving time from Konza will be provided with lodging.
If you have any questions, contact one of the project leaders, Dr. Peggy Schultz (firstname.lastname@example.org). KABT Members Drew Ising, Michael Ralph, Marylee Ramsay, Andrew Davis and Bill Welch were participants or organizers for the first summer institute and can also help.
Imagine for me, if you would, this scenario: you are trying to make a diagram for a lab report (or assessment or poster or whatever) but you can’t find the right figure. So you draw something that resembles what you want, or you use an image you found online that is similar to what you want, but then you spend almost as much time identifying and discussing the weaknesses of the model as you do working with the model itself.
[ESPN Documentary Narrator Voice] What if I told you there was a free way to make high-quality, detailed models with your students?
My wife’s uncle shared BioRender with me this week, and I knew I needed to share this ASAP. Watch this intro video you’ll see when you sign up for a free account, and try to act cool… I’ll wait.
DID YOU FREAK OUT A LITTLE BIT?! I did. (OK, maybe more than a little bit.) There is a lot to explore with this, but here are some highlights for me. Not only are there thousands of icons you can add to your figure, but you can control the color scheme for many of them and add labels to make your models even more robust. It has built-in support to pull models from the Protein Data Bank. When you have the EXACT protein you want to use, you can control how your protein is visualized and rotate it so you show the exact part of interest. After Andrew Taylor’s Fall Conference presentation on 3D-printed models, I went looking for the proteins associated with the pharmaceutical product Gleevec.
I encourage you to go check this out. Visit https://biorender.io/ and create an account. Once you start creating, share your best figures with us here or on social media. I may be speaking for myself here, but I can’t wait to start using and making these models with my students!
Our annual board meeting is set to take place this Saturday, February 17th. Due to a forensics tournament, it has been moved to Baldwin Elementary School-Primary Center’s community room. The address is 500 Lawrence St., Baldwin City, KS 66006. The door to the Community Room is on the southern side of the building (furthest away from US-56 HWY) next to the gym entrances.
Who: All KABT Board Members, current KABT members, and invited stakeholders and guests.
What: Board Meeting
When: 10AM-3PM Saturday 2/17
Where: Baldwin Elementary School-Primary Center. 500 Lawrence St. Baldwin City, KS 66006
Why: To discuss old business, upcoming professional development, spring field trips, possible by-law changes (to be voted on later), and other new business from KABT members.
I will post minutes from the previous meeting here along with the agenda for this meeting when it is available.
Please direct any questions to andrewising(at)gmail(dot)com or 913-795-1247.
I really like the HHMI Biointeractive activity “Battling Beetles”. I have used it, in some iteration (see below), for the last 6 years to model certain aspects of natural selection. There is an extension where you can explore genetic drift and Hardy-Weinberg equilibrium calculations, though I have never done that with my 9th graders. If you stop at that point, the lab is lacking a bit in quantitative analysis. Students calculate phenotypic frequencies, but there is so much more you can do. I used the lab to introduce the idea of a null hypothesis and standard error to my students this year, and I may never go back!
We set up our lab notebooks with a title, purpose/objective statements, and a data table. I provided students with an initial hypothesis (the null hypothesis) and asked them to generate an alternate hypothesis to mine (the alternative hypothesis). I didn’t initially use the terms ‘null’ and ‘alternative’ for the hypotheses because, honestly, it wouldn’t have an impact on their success, and those are vocabulary words we can visit after demonstrating the main focus of the lesson. When you’re 14, and you’re trying to remember information from 6 other classes, even simple jargon can bog things down. I had students take a random sample of 10 “male beetles” of each shell color, we smashed them together according to the HHMI procedure, and students reported the surviving frequencies to me.
Once I had the sample frequencies, I used a Google Sheet to find averages and standard error, and reported those to my students. Having earlier emphasized “good” science as falsifiable, tentative, and fallible, we began to talk about “confidence” and “significance” in research. What really seemed to work was this analogy: if your parents give you a curfew of 10:30 and you get home at 10:31, were you home on time? It isn’t a perfect comparison, and it is definitely something I’ll regret when my daughter is a few years older, but that seemed to click for most students. 10:31 isn’t 10:30, but if we’re being honest with each other, there isn’t a real difference between the two. After all, most people would unconsciously round 10:31 down to 10:30 without thinking. We calculated that the average blue M&M frequency changed from 0.5 to 0.53, while orange conversely moved from 0.5 to 0.47. So I asked them again: Does blue have an advantage? Is our result significant?
Short story, no; we failed to reject the null hypothesis. Unless you are using a 70% confidence interval, our result is not significantly different based on 36 samples. But it was neat to see the interval shrink during the day. After each class period, we added a few more samples, and the standard error measurement moved from 0.05 to 0.03 to 0.02. It was a really powerful way to emphasize the importance of sample size in scientific endeavors.
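If you want to replicate the spreadsheet math in a few lines, here is a minimal sketch of the mean and standard error calculation. The frequencies below are made-up stand-ins, not our actual class data (that lives in the shared Google Sheet):

```python
import math

# Hypothetical class-sample frequencies of surviving blue "beetles";
# replace with your own class results.
samples = [0.6, 0.5, 0.4, 0.55, 0.5, 0.65, 0.45, 0.5, 0.6, 0.45, 0.55, 0.5]

def mean_and_sem(freqs):
    """Return the sample mean and the standard error of the mean (SEM)."""
    n = len(freqs)
    mean = sum(freqs) / n
    # Sample variance uses n - 1 in the denominator
    var = sum((f - mean) ** 2 for f in freqs) / (n - 1)
    return mean, math.sqrt(var) / math.sqrt(n)

mean, sem = mean_and_sem(samples)
print(f"mean = {mean:.3f}, SEM = {sem:.3f}")
```

Because SEM shrinks roughly as 1/√n, each class period’s added samples narrow the interval, which is exactly the 0.05 → 0.03 → 0.02 pattern we watched over the day.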
Should the pattern (cross-cutting concept!) hold across 20 more samples, the intervals would no longer overlap, and we could start to see something interesting. So if anyone has a giant bag of M&M’s lying around and you want to contribute to our data set, copy this sheet, add your results, and share it back my way. Hope we can collaborate!
Email results, comments, questions to Drew Ising at email@example.com or firstname.lastname@example.org
I have wanted to change the way I assess students for a while. I have made changes to how and when I grade assignments, the format of tests, and how understanding is communicated during and after lab activities. But in the end, I was still grading students the same way I always had, the same way I was graded in school, and the same way students have been for quite a while. Kids accumulated points, some assignments were weighted more than others, and students who turned in most of their work on time (regardless of quality) tended to do well. This school year, I am not doing that. I will probably fail spectacularly. Luckily I have administrators who are supporting me, knowing I am trying to do what is best for our students. I am going to try this first with my AP Biology students, since I share the Biology 1 classes with two other teachers, and hope this leads to a wider transition.
I will share what I am doing, but I need your help. After reading through my plan, send me a message or leave a comment with your feedback. What looks good? What should I change? What have you tried and can share to improve my students’ experience?
I am basing my course assessment on a document shared by AP Biology/Calculus teacher Chi Klein. The College Board shares, as part of the curriculum framework, “Essential Knowledge” statements and recommends “Learning Objectives” derived from them. Ms. Klein compiled and organized those learning objectives into a document that could be shared with her students. I will be sharing a GoogleDoc with my students in the first days of class which they will use over the course of the school year.
As is the case in most standards-based and “gradeless” classes I have seen, students will be responsible for justifying their level of mastery over the content. The “Learning Objectives” document I will share with them covers 149 content standards. Students will be able to earn up to four points for each standard based on their mastery of the content, meaning we’d have 596 possible points by the end of the school year. Here is what I’m thinking for my mastery levels (category title suggestions welcomed):
Level of Mastery:
1. Notes, Guided Readings, Discussions
2. Class activities, Worksheets, POGILs, Article Annotations, Quizzes
3. Experiments, Virtual Labs, Demonstrations, etc.
4. Summative Exams, Projects, etc.
I envision the initial knowledge mastery as being pretty straightforward to demonstrate. For the successive levels, I have been torn as to what threshold to use for mastery. If a student wants to use an assignment, lab, test question, etc., do I require them to have earned all possible points? I have been considering at least 90% on a given assignment/test item before a student can try to use it to justify mastery. As an example, if I have a free response item on our evolution test with 10 possible points, a student would need at least 9 points before they could use that in a grade conference. If a student only earned 6 points, they would have to revise their response and get new feedback on the item before trying to use it again during their next conference.
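The threshold rule above can be sketched in a few lines; the function name and the 90% cutoff are just my framing of the idea, not a finished gradebook tool:

```python
# Minimal sketch of the mastery-threshold check described above.
MASTERY_THRESHOLD = 0.90  # at least 90% of an item's points

def eligible_for_conference(points_earned, points_possible):
    """An item can justify mastery only at or above the threshold."""
    return points_earned / points_possible >= MASTERY_THRESHOLD

# A 10-point free-response item: 9 points qualifies, 6 does not.
print(eligible_for_conference(9, 10))   # True
print(eligible_for_conference(6, 10))   # False
```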
So students are still earning points, and the points they earn as a percentage of the overall points possible still determine their final grade. Not very earth-shattering there. But how they are being assessed, and what is being assessed, is different from anything I have done before. There is a much greater burden of responsibility (and independence) placed on the student. My feedback is going to need to be both more flexible and more timely to allow students to complete any needed revisions. If not, I will be setting my students up for a very difficult experience.
The one final change is, at least for my AP Biology class, I am moving away from the traditional 90/80/70/60 scale for grades. The purpose of the AP class, to me, is to prepare students for post-secondary success and to show well on the AP Biology test. So I want the rigor of the class to match the rigor of the expectations and examination. As anyone who has taken or taught AP Biology can attest, this won’t be difficult. I also want my scoring to reflect that of an AP test. If a student has an A in my class, I want them to have an expectation to earn a 5 on the test. If they have a C in my class, they might expect to earn a 3 (which in Kansas would now get them college credit; good change KSBOE/Regents!). Going back through all the data I could find on the correlation of raw exam scores to 5-point AP Scores, here is what I am going to roll with this year. I am going into this completely aware that revisions will happen when I get AP scores back in the summer. If I have a student who earned 499 points in class, but only got a 3 on the exam, I will need to reconsider either the point range for that grade, or how I let students demonstrate mastery. Again, I am very lucky to have administrators who are willing to let me take this chance, fully aware that I will likely make mistakes.
As for pacing, I am planning on emphasizing one Big Idea each quarter. We’ll start with Big Idea 1 (evolution), which will be more teacher-centered as my students (and I) learn how to function in this new system. As the school year progresses, I hope to transition to a more student-centered model with Big Idea 4 being largely personalized by each individual. Shouts to David Knuffke and Camden Burton for the inspiration here.
This will be my 11th year in the classroom, and 5th teaching AP Biology, and I am finally at a point where I am comfortable enough with my knowledge and abilities to make some changes. I hope this will be a better and more accurate way of assessing student knowledge and mastery, providing more meaning to the grade students earn in my class. But what do you think? What feedback can you give me? I’d love to hear from you in the comments, on social media (@ItsIsing), or by email (drewising@gmail).