Thursday, February 26, 2015

movin' and shakin'

The blog is moving.

Given Google's new censorship rules for Blogger, I'll no longer be hosting here.  I have no intention of posting adult images ;-) but it's the principle of the thing.  I'll leave this page up for the foreseeable future, and crosslink both blogs, but here's all you really need: http://codykirkpatrick.com/blog/.  The RSS feed is built-in; just add that link to your favorite reader.  See you there!

Multiple choice instructions

Yeah, this is probably not the best set of instructions I've ever seen...



http://cheezburger.com/8449095680/everything-about-this-is-wrong

Wednesday, February 25, 2015

Why I don't give extra credit

Since the title explains itself, I can dive right into the reasons.  There's not always a need for a long, drawn-out introduction!

1. I'm a meteorologist, and we don't do that.

Put very simply, we don't get a chance to "make up" for a bad forecast.  We can only learn from our mistakes, move on, and do better the next time.  It's illogical to think that a forecaster's mistakes -- whether major or minor -- can somehow be washed off his record by doing bonus work.

An example of a minor mistake: the forecaster who consistently runs a warm bias in his temperature forecasts.  Once you realize what you've done wrong, fix it!  Explaining to people why you made the mistake is important, yes -- but does not absolve the mistake.  (See #2 below for more on this.)

A major mistake: missed forecasts can have fatal results.  In a recent 5-year period, 17 tornado fatalities occurred without a tornado warning being issued (Brotzge and Erickson 2009).  There is no apology great enough to overcome this one.  I know this all sounds painfully harsh and trite to those who aren't as familiar with the forecast process.  But put simply, in our field these are the lessons we face every day.  I'm a firm believer that students should experience science as science is practiced, and a no-extra-credit policy is a clear application of that principle.

2. My ideas of sound pedagogy don't support that model.

Four reasons instructors may offer extra credit are given in this Faculty Focus blog post from 2011.  None of them convince me.

  • "It reduces student anxiety and builds confidence."
This is the only one that holds some water for me.  But this is exactly what practice assignments and homework should be designed to do, right?  Build confidence so that students can perform well on major assessments?  Aren't we already supposed to be designing our courses so that students are well-prepared for exams?  I don't understand how something like attending an evening seminar about a peripheral topic (a classic extra credit idea) builds confidence.
  • "If learning is the goal and students haven't learned important content, extra credit offers a second chance to master the material."
  • "Not all students 'get it' the first time."
Teachers of college writing know that revision is a key to students improving their writing skills.  In the hard sciences we might use the word practice, especially in meteorology, where forecast opportunities are fleeting and revisions to previous work aren't possible.  I don't remember them well, but I'm pretty sure my first few forecasts as an undergraduate were awful, and that they improved with repeated practice (to the somewhat less-awful state they're in now!).  Our assignments and courses should be arranged to give students multiple opportunities to master difficult content before a major assessment takes place.  When we don't provide this structure, we are less effective teachers.
  • "Students are motivated to do it, so why not capitalize on this motivation by creating a robust learning opportunity."
It's a bit cynical, but to me the implication here is that students aren't motivated for ordinary classwork.  I certainly hope that's not the case!  Every learning opportunity should be robust and motivational.  If it's not, it doesn't belong in our classroom.  Why should we relegate our most creative assignments to extra credit opportunities that may get done by only a handful of students?
One thing to point out is that I differentiate between large, formal "extra credit" assignments and the rare "bonus" questions that occur on a quiz or an exam; Michael Leddy offers a nice example and his take here.  Most often, I use those to help me scale exam or course grades to better align with student expectations (I'll rant about the insistence on a 90-80-70 letter-grade cutoff some other time).  But my students can attest that I do this about once per course, and that it is part of an assessment that already exists.  My bonus questions are always opt-out (right there on the page for you to try), not opt-in (available only if you ask, or by doing something external to class).  I'll avoid saying much about the ethical issues of opt-in extra credit, too, beyond saying that they terrify me.  Is the extra work only available to students who ask?  Are they allowed to tell their peers?  What if someone can't attend that special guest speaker's talk because of their job or family?

So there you have it.  Let's make our coursework compelling the first time 'round, and let's create assignments that are not busy work but help students learn what we truly want them to do.  That way, they get it right when the grades are on the line.

Wednesday, October 8, 2014

Sirens, Tornado Warnings, and Messaging

TL;DR version: Sirens go off if any part of the county is put under a warning, even if the risk is nowhere near your part of the county. YOU may not even be at risk. 

Last night in Bloomington was a textbook case of how complicated the "weather warning business" really is. Here's a rundown of the most important issues.

Warnings. Since 2007, the National Weather Service has issued tornado warnings not by county but by risk area--it's called a "polygon" because, well, it looks like one:



The area in that pink box is the area the experts at NWS in Indianapolis placed under a tornado warning, for the storm that's also in the box (this is a pretty standard weather radar image that you'd see on TV, with red indicating heavy rain and hail, and the small green triangle another indicator of hail). This image is of the first tornado warning from last night. Notice how this box does not include any part of downtown Bloomington, or the heart of the IU campus (red dot), or even my house (yellow plus).  This polygon is the box I use to make my own safety decisions.  Any weather app that's worth its salt will plot these polygons. Look at that image again. For the entire time the warning was in effect, NWS predicted that the storm would remain in that box (and it did). There is no reason to panic or to take shelter if you're not in the path of the storm--which is what the box shows for this warning.
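If you're curious how an app decides whether you're inside the polygon, here's a minimal sketch of the standard even-odd (ray-casting) test.  The vertices and test points below are hypothetical (lon, lat) pairs for illustration, not the actual coordinates of this warning.

```python
# Hedged sketch of a point-in-polygon test (even-odd ray-casting rule).
# Coordinates are hypothetical (lon, lat) pairs for illustration only.

def point_in_polygon(lon, lat, polygon):
    """Return True if (lon, lat) falls inside polygon (a list of (lon, lat) vertices)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray extending to the right (+x) of the point.
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

warning = [(-86.70, 39.05), (-86.45, 39.05), (-86.45, 39.25), (-86.70, 39.25)]
print(point_in_polygon(-86.53, 39.17, warning))  # True: inside the box, take action
print(point_in_polygon(-86.95, 39.16, warning))  # False: outside the box, no action needed
```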

As the NWS office in Birmingham, Alabama says: "It is our goal that only those inside the polygon should take action."

Sirens.  Many siren systems in the US are still sounded by county. That means that no matter how small the sliver of your county, if any part of the county is placed under a tornado warning, the sirens will go off everywhere. This is true in Monroe County--it happened twice last night. So the takeaway messages are:
  1. Sirens do NOT always imply that your location is in danger. They imply that some PART of your county is in danger. The storm may stay 10, 20, or even 30 miles away from you. (A sketch of this county-versus-polygon logic follows this list.)
  2. Sirens are sounded by a county employee (at least here). No one on the IU campus, to my knowledge, has any control over the sirens. None.
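To make the distinction concrete, here's a minimal sketch of the two rules using the shapely geometry library.  The county and warning shapes are simplified, hypothetical rectangles, not the real Monroe County outline or last night's polygon.

```python
# Hedged sketch: county siren rule vs. polygon-based personal risk.
# All coordinates are hypothetical (lon, lat) pairs.
from shapely.geometry import Point, Polygon

county = Polygon([(-86.8, 38.9), (-86.3, 38.9), (-86.3, 39.4), (-86.8, 39.4)])
# A warning polygon that only clips the county's northeastern corner:
warning = Polygon([(-86.45, 39.30), (-86.30, 39.30), (-86.30, 39.45), (-86.45, 39.45)])
my_location = Point(-86.53, 39.17)  # a stand-in for a spot in town

# County rule: ANY overlap sounds EVERY siren in the county.
sirens_sound = warning.intersects(county)      # True

# Polygon rule: only people inside the warning need to take action.
i_am_at_risk = warning.contains(my_location)   # False

print(sirens_sound, i_am_at_risk)  # True False: sirens blare, but you are not at risk
```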
By the way, the sirens went off twice in Bloomington last night. The second time was for a storm that was forecast to clip the northeastern part of Monroe County.  Here's the radar and tornado warning polygon for the second one:


Again, no risk for Bloomington.  Zero, zilch, nada.

Confusion. Last night got a little squirrely because IU sent messages telling everyone to seek shelter for the first warning, but for the second warning, some messages told people that campus was not being impacted. For once, the polygon seemed to matter! This should be the standard in every event, not the exception. (For the record, it's the first time in my 3 years of living here that I've seen this happen.)

Here's what we absolutely cannot do. Send this email:

And then send this tweet:

This is a messaging and safety nightmare. Why would I "take cover" for something that "does not impact" me? Which one of these messages should people listen to, if either? Just as mixed messages from faculty to students lead to protests and grade changes, mixed weather information leads to fatalities. This storm posed absolutely no risk to Bloomington, but the message implied it was a threat. Until it wasn't.

My personal view is that we all have to make our own safety decisions. I realize that if you live in a residence hall, or work at a big-box store, you may be required to follow someone else's instructions. Based on the above, I'm honestly not sure what those instructions would have been. With that in mind, I've always believed and said that you and you alone are responsible for your safety. Make the decisions you need to make and do what you have to do, whatever that may be. That goes for both seeking shelter and coming out from shelter so you can get on with your life.

Friday, August 29, 2014

Why meteorologists shouldn't "teach to the middle"

Once every decade, we take the temperatures of the last 30 years, average them together, and refer to this as the "normal" temperatures for a location.  For example, when you see on the nightly weather report that the "normal high for today is 84 degrees," that's simply the average of all the highs for that day from 1981 to 2010.

The number 84 is an average.  Very few, if any, days in the record will actually have had a high temperature of exactly 84!
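For the curious, the calculation is nothing fancy (the official normals involve more quality control than this, but the core idea is just an average).  Here's a minimal sketch with made-up highs for one calendar date:

```python
# Hedged sketch: a daily "normal" high is just the 30-year average of the
# observed highs for that calendar date. These numbers are made up.

highs_for_july_15 = [86, 91, 79, 84, 88, 83, 90, 77, 85, 82,
                     89, 84, 81, 87, 77, 80, 83, 86, 78, 85,
                     88, 82, 90, 84, 79, 87, 83, 81, 86, 85]  # 30 values, 1981-2010

normal_high = sum(highs_for_july_15) / len(highs_for_july_15)
print(normal_high)  # 84.0 -- the "normal" high for this date

# And how many of the 30 years actually hit 84 exactly?
print(highs_for_july_15.count(84))  # 3 -- the "normal" day is rare
```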

The same goes for our students.  In any given class, the number of "average" students, perfectly in the middle of the distribution, will be quite small. [Footnote 1]  My argument is this: if we teach to the middle, we alienate and bore our upper tier of students (who are our future colleagues) and at the same time work over the heads of weaker ones who may need the most help.  We likely reach those few students who are truly in the middle of the distribution, but overall this is a lose-win-lose situation.  Losing two battles every day is not how I want to spend my career.  Furthermore, the standard we set by teaching to the middle "is a standard of mediocrity."  It's okay to be average, kids.  Everyone gets a ribbon.

What, then, is the answer?  Is there one?  How can we possibly differentiate learning when faced with 100 students, or even 40 or 50?  Facilitating a classroom that promotes learning already requires lots of work, and most academics I know don't believe they have any additional time to devote to it.  Here are some rough ideas, certainly a non-exhaustive list but maybe a starting point at least.

1. Variety in course assignments.  Some of our students will be math stars, while others are incredible artists who struggle mightily with college algebra.  Offering different types of work -- calculations, concept mapping, figure interpretation, opinion essays, etc. -- allows all students to take part.  I like to believe everyone is good at something.

2. Variety in in-class activities.  I pray that the days of lecturing for an hour a day, three days a week, are dying (an admittedly slow and gruesome death, but dying nonetheless).  And reading text on slides as they appear on the screen doesn't teach to anyone, let alone the middle.  In-class activities and discussions can be like #1 above and also varied in level: a mixture of easy concepts, medium concepts, and the occasional mind-bender sets up a class that everyone can get something out of.  Structured group and team-based activities, discussions, or even quizzes (yes, group quizzes!) help also.

3. Structure in assignments and activities.  "You need structure. And discipline!"  In a room of professionals, we could get away with the activity 'hey, let's pull up today's 500-mb map and just talk about it for awhile.'  However, this will likely fall flat in a room of mixed majors or gen-ed students.  At least when I've tried it, it has.  Even off-the-cuff activities need structure and scaffolding (take small steps: first find the ridges and troughs, then the vorticity, then the temperature advection, and then ask where the likely surface features are, etc.).


The bottom line here is that we have to find ways to involve everyone in the room (or, realistically, as many people as possible) in the learning process.  If "teach to the ____" is just code for "at what level do I pitch my lectures?" the problem goes much deeper.  To me, the room is more about what learning will take place than what teaching will take place.

We'd be hard-pressed to find a string of perfectly "average" weather days, instead finding runs of hot and cold which both have their own fun and their own beauty.  And each of our classes is made up of much more than a blob of "average" students who are the only ones to deserve our attention.  A classroom includes a spectrum of abilities, and everyone learns something when courses are thoughtfully organized for more than just what we believe the "average" student is capable of doing.


Footnote 1:  Some readers will want to start talking about normal distributions at this point.  I ask: are the students at +1σ and -1σ at the same skill level?  What's really the "average" group, then?  +0.5σ to -0.5σ?  That's only about 38% of your class (see the sketch below).  The bounds get smaller and smaller...
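A quick way to check that arithmetic, using only the standard library (the 38% figure is the usual normal-distribution result, P(|Z| < 0.5) ≈ 0.383):

```python
# Hedged sketch: fraction of a normal distribution within +/- z standard deviations.
from math import erf, sqrt

def fraction_within(z):
    """P(|Z| < z) for a standard normal Z."""
    return erf(z / sqrt(2))

print(round(fraction_within(0.5), 3))  # 0.383 -- the "+/- 0.5 sigma" band is ~38% of the class
print(round(fraction_within(1.0), 3))  # 0.683 -- the familiar 68% within one sigma
```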

Friday, August 8, 2014

"The Points Don't Matter"

[TL;DR:  there is not much difference in the average grade for a course if you redistribute the weights for exams, homework, and the like after the fact.]

When students see a new course syllabus for the first time, the first thing many look for is the breakdown of grading for the course.  "What do I have to do to get the grade I want?"  At least I always did.  Every semester, every class.  Not ashamed to admit it, either.  That university curricula are so grade-centric instead of outcome-centric (and have been for decades) is a rant for another page, and has been addressed thoroughly, here, here, and here among probably a dozen other places.

But does the course grade breakdown really matter that much?  That is, do the weights we assign to each category of work truly have a large impact on final course grades?  To find out, I pulled up the grades for an introductory course I taught a couple years ago and recomputed their final grades using five different weight combinations.  There were about 30 students in the course, and in terms of structure it was rather mundane: lecture, homework, quiz, exam.  It was earlier in my teaching career; forgive me!

Here are the breakdowns I tested, using all the assignments we did that semester:


           Homework   Quiz   Exam 1   Exam 2   Final
Option 1      25%      15%     20%      20%     20%
Option 2      40%      10%     10%      10%     30%
Option 3      20%      10%     20%      20%     30%
Option 4      20%      10%     15%      15%     40%
Option 5      30%      20%     15%      15%     20%

Depending on the instructor, I think any one of these breakdowns would be pretty standard for a lower-division science course that doesn't have much of a team-based or lab component.  But standard as they might be, each of these five would potentially have huge impacts on student perception of the course and the instructor (especially option 4. Brutal!).  And I'd say it's highly likely that study and work habits would be different too, depending on what the actual scale was.  I know of no way to test how different those habits would be if students had been presented a different distribution up front -- we can only look at how grades would be different after the fact.  If you know a better way, please hit the comment box below.
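As for the recomputation itself, the weighted averages are a single matrix multiply.  Here's a sketch: the three students' category averages below are invented, not my actual gradebook, and the weights are the five options from the table above.

```python
# Hedged sketch: recompute final grades under all five weight options.
import numpy as np

# Columns: Homework, Quiz, Exam 1, Exam 2, Final (category averages, 0-100).
# These three students are invented for illustration.
scores = np.array([
    [95.0, 88.0, 82.0, 79.0, 85.0],   # A: steady across the board
    [78.0, 60.0, 71.0, 68.0, 74.0],   # B: middling everywhere
    [88.0, 45.0, 90.0, 84.0, 92.0],   # C: strong tests, skipped quizzes
])

weights = np.array([
    [0.25, 0.15, 0.20, 0.20, 0.20],   # option 1
    [0.40, 0.10, 0.10, 0.10, 0.30],   # option 2
    [0.20, 0.10, 0.20, 0.20, 0.30],   # option 3
    [0.20, 0.10, 0.15, 0.15, 0.40],   # option 4
    [0.30, 0.20, 0.15, 0.15, 0.20],   # option 5
])

finals = scores @ weights.T           # shape: (students, options)
for name, row in zip("ABC", finals):
    print(name, np.round(row, 1))
# A and B barely move between options; C, with one deficient category,
# swings the most -- the same pattern as the distributions below.
```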

So yes, I'm making a key assumption here:  to make this comparison I have to assume that perceptions and study habits and such would not be different as students complete any given activity, regardless of which of the five breakdowns would be used.  Again, I know this is a stretch.  For each option, here is the distribution of the students' final grades:



           Highest   75th %-ile   Median   25th %-ile   Lowest
Option 1      99          87         80          70        53
Option 2      99          87         81          67        54
Option 3      98          88         81          70        52
Option 4      99          88         81          68        51
Option 5      99          86         80          70        53



From a class-average point of view, every option gives a nearly identical distribution!  The greatest variability occurs, expectedly, at the bottom of the distributions, which include students who were badly deficient in one of the categories (rarely attended class and so had quiz grades < 50%; missed or didn't turn in key homework or team assignments; poor test takers; etc.).  I also checked the number of students who achieved 90%, 80%, etc., as those would be my rough cutoffs for letter grades.  No surprise: for this course the number in each category changed by no more than one student (out of ~30) regardless of which weighting option was used.
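For completeness, the five-number summaries in the table came from something like this (a sketch; the grades array here is a random stand-in, not my students' data):

```python
# Hedged sketch: the five-number summary for one weighting option.
import numpy as np

rng = np.random.default_rng(0)
finals_for_option = rng.normal(78, 12, size=30).clip(0, 100)  # stand-in grades

summary = np.percentile(finals_for_option, [100, 75, 50, 25, 0])
print(np.round(summary))  # Highest, 75th %-ile, Median, 25th %-ile, Lowest
```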

Because it's much more recent, I won't show the results from another course, although they are very similar.  To me, it's clear that as long as the distribution chosen is a reasonable one, the actual percentages simply don't matter that much to final grades.  We'll almost always curve a point or two, here or there, to accommodate bad exam questions and grading mistakes and uncertainty and whatnot, and so even the variability in the lower half of these distributions is just in the noise to me.


Have I tried to use this information to the advantage of my students?  Yes.  Given that test anxiety is real and observable, I've lowered the stakes on my in-class exams (toward something like option 5 above) so that those assessments count a little less, and the untimed and out-of-class work counts a little more.  Because of the tendency to think of out-of-class work as "grades I earn" and exams as "grades you give me," students hopefully will take more ownership of their learning when the percentages shift in their favor.

Even though, ultimately, the points don't matter.  Much.  :-)

Monday, July 21, 2014

What makes a good learning outcome?

What makes a good learning outcome?  One word: demonstrable.  It should be easy to show whether or not the outcome has been mastered.  Here's an example for the severe convective storms crowd.
"At the end of this section, students will know the difference between LP and HP supercells."

Well that's nice.  How on earth am I going to be able to prove that students have met this outcome and give them, you know, a grade for it?  How many different ways could someone's knowledge be interpreted, rightly or wrongly?  How can I know you know something?  Is there any way to be more precise in expressing what you think is important here?  Let's try.
"At the end of this section, students will be able to:

- sketch and label archetypal models of LP, classic, and HP supercells, including cloud and precipitation extent, updraft location relative to precipitation, surface outflows, and the most likely location of a tornado if any;

- describe the environmental conditions that favor HP supercells over LP, and vice-versa; and

- differentiate between likely HP and LP storms in photographs and/or videos."
You can probably think of others that fit here (please do, and add them below).  I would argue that we should make the effort to be this clear in our desired outcomes for all courses, and all class periods.  Why?  These outcomes are more detailed, they are observable, and they are measurable.  Heck, they are almost ready to be questions on an exam/quiz/in-class exercise as they are written.  Writing specific outcomes removes all doubt about what's important to us as instructors and makes it clear what students should be getting out of the course (and what they "need to know for the test").  There are no surprises, for anyone in the classroom.