Wednesday, October 8, 2014

Sirens, Tornado Warnings, and Messaging

TL;DR version: Sirens go off if any part of the county is put under a warning, even if the risk is nowhere near your part of the county. YOU may not even be at risk. 

Last night in Bloomington was a textbook case of how complicated the "weather warning business" really is. Here's a rundown of the most important issues.

Warnings. Since 2007, the National Weather Service has issued tornado warnings not by county but by risk area--it's called a "polygon" because, well, it looks like one:



The area in that pink box is the area the experts at NWS in Indianapolis placed under a tornado warning, for the storm that's also in the box (this is a pretty standard weather radar image like you'd see on TV, with red indicating heavy rain and hail and the small green triangle another indicator of hail).  This image is of the first tornado warning from last night.  Notice how this box does not include any part of downtown Bloomington, or the heart of the IU campus (red dot), or even my house (yellow plus).  This polygon is the box I use to make my own safety decisions.  Any weather app that's worth its salt will plot these polygons.  Look at that image again.  For the entire time the warning was in effect, NWS predicted that the storm would remain in that box (and it did).  There is no reason to panic or to take shelter if you're not in the path of the storm--which is what the box shows for this warning.
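
For the technically inclined, here is a minimal sketch (in Python) of how an app might make that inside-or-outside call, using the standard ray-casting test.  The polygon vertices and coordinates below are made-up placeholders for illustration, not the actual warning polygon from last night.

    # Minimal ray-casting point-in-polygon test (sketch; vertices are invented).
    # A real app would parse the lat/lon pairs distributed with each NWS warning
    # product instead of hard-coding them.

    def point_in_polygon(lat, lon, polygon):
        """Return True if (lat, lon) falls inside polygon, a list of (lat, lon) vertices."""
        inside = False
        n = len(polygon)
        for i in range(n):
            lat1, lon1 = polygon[i]
            lat2, lon2 = polygon[(i + 1) % n]
            # Count crossings of a ray cast from the point toward increasing longitude.
            if (lat1 > lat) != (lat2 > lat):
                lon_cross = lon1 + (lat - lat1) * (lon2 - lon1) / (lat2 - lat1)
                if lon < lon_cross:
                    inside = not inside
        return inside

    # Hypothetical warning polygon over the far southwest corner of the county:
    warning = [(38.95, -86.75), (39.10, -86.75), (39.10, -86.55), (38.95, -86.55)]
    print(point_in_polygon(39.17, -86.53, warning))   # roughly downtown Bloomington -> False

If your point comes back False, you are not in the warned area--which is exactly the call the polygon is meant to let you make.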

As the NWS office in Birmingham, Alabama, says: "It is our goal that only those inside the polygon should take action."

Sirens.  Many siren systems in the US are still sounded by county. That means that no matter how small the sliver of your county, if any part of the county is placed under a tornado warning, the sirens will go off everywhere. This is true in Monroe County--it happened twice last night. So the takeaway messages are:
  1. Sirens do NOT always imply that your location is in danger. They imply that some PART of your county is in danger. The storm may stay 10, 20, or even 30 miles away from you.
  2. Sirens are sounded by a county employee (at least here). No one on the IU campus, to my knowledge, has any control over the sirens. None.
As I mentioned, the sirens went off twice last night.  The second time was for a storm that was forecast to clip the northeastern part of Monroe County.  Here's the radar and tornado warning polygon for that second warning:


Again, no risk for Bloomington.  Zero, zilch, nada.

Confusion. Last night got a little squirrely because IU sent messages telling everyone to seek shelter for the first warning, but for the second warning, some messages told people that campus was not being impacted.  For once, the polygon seemed to matter!  This should happen in every event; it should be the standard, not the exception.  (For the record, it's the first time in my 3 years of living here that I've seen it happen.)

Here's what we absolutely cannot do. Send this email:

And then send this tweet:

This is a messaging and safety nightmare.  Why would I "take cover" for something that "does not impact" me?  Which of these messages should people listen to, if either?  Just as mixed messages from faculty to students lead to protests and grade changes, mixed weather information leads to fatalities.  This storm posed absolutely no risk to Bloomington, but the message implied it did.  Until it didn't.

My personal view is that we all have to make our own safety decisions.  I realize that if you live in a residence hall, or work at a big-box store, you may be required to follow someone else's instructions.  Based on the above, I'm honestly not sure what those instructions would have been.  With that in mind, I've always believed and said that you and you alone are responsible for your safety.  Make the decisions you need to make and do what you have to do, whatever that may be.  That goes for both seeking shelter and coming out from shelter so you can get on with your life.

Friday, August 29, 2014

Why meteorologists shouldn't "teach to the middle"

Once every decade, we take the temperatures of the last 30 years, average them together, and refer to this as the "normal" temperatures for a location.  For example, when you see on the nightly weather report that the "normal high for today is 84 degrees," that's simply the average of all the highs for that day from 1981 to 2010.

The number 84 is an average.  Very few, if any, days in the record will actually have had a high temperature of exactly 84!
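
If you want to see the arithmetic, here is a toy version of that calculation in Python.  The highs are randomly generated stand-ins, not actual station data.

    # Toy "climate normal": average one date's high temperature over a 30-year
    # base period (1981-2010).  The highs below are randomly generated stand-ins.
    import random

    random.seed(1)
    highs = [round(random.gauss(84, 6)) for _ in range(30)]   # one high per year, 1981-2010

    normal_high = sum(highs) / len(highs)
    years_exactly_84 = sum(1 for h in highs if h == 84)

    print(f"'Normal' high: {normal_high:.1f} F")
    print(f"Years with a high of exactly 84 F: {years_exactly_84} of 30")

Run that a few times with different seeds and you'll see the point: the average sits near 84, but very few individual years actually land on it.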

The same goes for our students.  In any given class, the number of "average" students, perfectly in the middle of the distribution, will be quite small. [Footnote 1]  My argument is this: if we teach to the middle, we alienate and bore our upper tier of students (who are our future colleagues) and at the same time work over the heads of weaker ones who may need the most help.  We likely reach those few students who are truly in the middle of the distribution, but overall to me this is a lose-win-lose situation.  Losing two battles every day is not how I want to spend my career.  Furthermore, the standard we set by teaching to the middle is a standard of mediocrity.  It's okay to be average, kids.  Everyone gets a ribbon.

What, then, is the answer?  Is there one?  How can we possibly differentiate learning when faced with 100 students, or even 40 or 50?  Facilitating a classroom that promotes learning already requires lots of work, and most academics I know don't believe they have any additional time to devote to it.  Here are some rough ideas, certainly a non-exhaustive list but maybe a starting point at least.

1. Variety in course assignments.  Some of our students will be math stars, while others are incredible artists who struggle mightily with college algebra.  Offering different types of work -- calculations, concept mapping, figure interpretation, opinion essays, etc. -- allows all students to take part.  I like to believe everyone is good at something.

2. Variety in in-class activities.  I pray that the days of lecturing for an hour a day, three days a week, are dying (albeit a gruesomely slow death, but dying nonetheless).  And reading text off slides as it appears on the screen doesn't teach to anyone, let alone the middle.  In-class activities and discussions can be varied in type, like #1 above, and also varied in level: a mixture of easy concepts, medium concepts, and the occasional mind-bender sets up a class that everyone can get something out of.  Structured group and team-based activities, discussions, or even quizzes (yes, group quizzes!) help too.

3. Structure in assignments and activities.  "You need structure. And discipline!"  In a room of professionals, we could get away with an activity like "hey, let's pull up today's 500-mb map and just talk about it for a while."  However, this will likely fall flat in a room of mixed majors or gen-ed students.  At least when I've tried it, it has.  Even off-the-cuff activities need structure and scaffolding (take small steps: first find the ridges and troughs, then the vorticity, then the temperature advection, and then ask where the likely surface features are, etc.).


The bottom line here is that we have to find ways to involve everyone in the room (or, realistically, as many people as possible) in the learning process.  If "teach to the ____" is just code for "at what level do I pitch my lectures?", the problem goes much deeper.  To me, the room is about what learning will be taking place, more than what teaching will be taking place.

We'd be hard-pressed to find a string of perfectly "average" weather days; instead we find runs of hot and cold, each with its own fun and its own beauty.  And each of our classes is made up of much more than a blob of "average" students who alone deserve our attention.  A classroom includes a spectrum of abilities, and everyone learns something when courses are thoughtfully organized for more than just what we believe the "average" student is capable of doing.


Footnote 1:  Some readers will want to start talking about normal distributions at this point.  I ask: are the students at +1σ and -1σ at the same skill level?  What's really the "average" group, then?  +0.5σ to -0.5σ?  That's now only about 38% of your class.  The bounds get smaller and smaller...
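
A quick check of that 38% figure, using only the Python standard library:

    # Fraction of a normal distribution lying within +/- 0.5 standard deviations.
    import math

    frac = math.erf(0.5 / math.sqrt(2))   # P(-0.5 sigma < X < +0.5 sigma)
    print(f"{frac:.1%}")                  # about 38.3%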

Friday, August 8, 2014

"The Points Don't Matter"

[TL;DR:  There is not much difference in the average grade for a course if you redistribute the weights for exams, homework, and the like after the fact.]

When students see a new course syllabus for the first time, the first thing many look for is the breakdown of grading for the course.  "What do I have to do to get the grade I want?"  At least I always did.  Every semester, every class.  Not ashamed to admit it, either.  That university curricula are so grade-centric instead of outcome-centric (and have been for decades) is a rant for another page, and has been addressed thoroughly, here, here, and here among probably a dozen other places.

But does the course grade breakdown really matter that much?  That is, do the weights we assign to each category of work truly have a large impact on final course grades?  To find out, I pulled up the grades for an introductory course I taught a couple years ago and recomputed their final grades using five different weight combinations.  There were about 30 students in the course, and in terms of structure it was rather mundane: lecture, homework, quiz, exam.  It was earlier in my teaching career; forgive me!

Here are the breakdowns I tested, using all the assignments we did that semester:


            Homework   Quiz   Exam 1   Exam 2   Final
Option 1      25%      15%     20%      20%     20%
Option 2      40%      10%     10%      10%     30%
Option 3      20%      10%     20%      20%     30%
Option 4      20%      10%     15%      15%     40%
Option 5      30%      20%     15%      15%     20%

Depending on the instructor, I think any one of these breakdowns would be pretty standard for a lower-division science course that doesn't have much of a team-based or lab component.  But standard as they might be, each of these five would potentially have huge impacts on student perception of the course and the instructor (especially option 4. Brutal!).  And I'd say it's highly likely that study and work habits would be different too, depending on what the actual scale was.  I know of no way to test how different those habits would be if students had been presented a different distribution up front -- we can only look at how grades would be different after the fact.  If you know a better way, please hit the comment box below.
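
For anyone who wants to replicate the exercise, here is a minimal Python sketch of the recomputation.  The weight options mirror the table above; the single student's category averages are invented for illustration.

    # Recompute a final percentage under the five weighting options above.
    # The category scores for this one hypothetical student are invented.

    options = {
        "Option 1": {"homework": 0.25, "quiz": 0.15, "exam1": 0.20, "exam2": 0.20, "final": 0.20},
        "Option 2": {"homework": 0.40, "quiz": 0.10, "exam1": 0.10, "exam2": 0.10, "final": 0.30},
        "Option 3": {"homework": 0.20, "quiz": 0.10, "exam1": 0.20, "exam2": 0.20, "final": 0.30},
        "Option 4": {"homework": 0.20, "quiz": 0.10, "exam1": 0.15, "exam2": 0.15, "final": 0.40},
        "Option 5": {"homework": 0.30, "quiz": 0.20, "exam1": 0.15, "exam2": 0.15, "final": 0.20},
    }

    student = {"homework": 92, "quiz": 78, "exam1": 81, "exam2": 74, "final": 85}

    for name, weights in options.items():
        grade = sum(weights[cat] * student[cat] for cat in weights)
        print(f"{name}: {grade:.1f}%")

For this one made-up student, the five options land within about three points of each other; the class-wide version of that pattern is what the tables below show.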

So yes, I'm making a key assumption here: to make this comparison, I have to assume that perceptions and study habits and such would not differ as students complete any given activity, regardless of which of the five breakdowns was used.  Again, I know this is a stretch.  For each option, here is the distribution of the students' final grades:



            Highest   75th %-ile   Median   25th %-ile   Lowest
Option 1       99         87         80         70         53
Option 2       99         87         81         67         54
Option 3       98         88         81         70         52
Option 4       99         88         81         68         51
Option 5       99         86         80         70         53



From a class-average point of view, every option gives a nearly identical distribution!  The greatest variability occurs, as expected, at the bottom of the distributions, which include students who were badly deficient in one of the categories (rarely attended class and so had quiz grades below 50%; missed or didn't turn in key homework or team assignments; poor test takers; etc.).  I also checked the number of students who achieved 90%, 80%, etc., as those would be my rough cutoffs for letter grades.  No surprise: for this course the number in each category changed by no more than one student (out of ~30) regardless of which weighting was used.

Because it's much more recent, I won't show the results from another course, although they are very similar.  To me, it's clear that as long as the distribution chosen is a reasonable one, the actual percentages simply don't matter that much to final grades.  We'll almost always curve a point or two, here or there, to accommodate bad exam questions and grading mistakes and uncertainty and whatnot, and so even the variability in the lower half of these distributions is just in the noise to me.


Have I tried to use this information to the advantage of my students?  Yes.  Given that test anxiety is real and observable, I've lowered the stakes on my in-class exams (toward something like option 5 above) so that those assessments count a little less, and the untimed and out-of-class work counts a little more.  Because of the tendency to think of out-of-class work as "grades I earn" and exams as "grades you give me," students hopefully will take more ownership of their learning when the percentages shift in their favor.

Even though, ultimately, the points don't matter.  Much.  :-)

Monday, July 21, 2014

What makes a good learning outcome?

What makes a good learning outcome?  One word: demonstrable.  It should be easy to demonstrate whether the outcome has been mastered or not.  Here's an example for the severe convective storms crowd.
"At the end of this section, students will know the difference between LP and HP supercells."

Well that's nice.  How on earth am I going to be able to prove that students have met this outcome and give them, you know, a grade for it?  How many different ways could someone's knowledge be interpreted, rightly or wrongly?  How can I know you know something?  Is there any way to be more precise in expressing what you think is important here?  Let's try.
"At the end of this section, students will be able to:

- sketch and label archetypal models of LP, classic, and HP supercells, including cloud and precipitation extent, updraft location relative to precipitation, surface outflows, and the most likely location of a tornado if any;

- describe the environmental conditions that favor HP supercells over LP, and vice-versa; and

- differentiate between likely HP and LP storms in photographs and/or videos."
You can probably think of others that fit here (please do, and add them below).  I would argue that we should make the effort to be this clear in our desired outcomes for all courses, and all class periods.  Why?  These outcomes are more detailed, they are observable, and they are measurable.  Heck, they are almost ready to be questions on an exam/quiz/in-class exercise as they are written.  Writing specific outcomes removes all doubt about what's important to us as instructors and makes it clear what students should be getting out of the course (and what they "need to know for the test").  There are no surprises, for anyone in the classroom.

Friday, June 20, 2014

Cloud shadows on the North Atlantic

Flying home a couple of weeks ago, I snagged these photos of some boundary layer clouds and their shadows somewhere south of Iceland.  We were at 32,000 feet if I remember correctly.  The shadows of even the really small clouds were visible.  Love it.




Profiteering, Pulitzers, and Photojournalism


A storm chaser and freelance photographer has come under a metric ton of criticism for taking a photograph of a 5-year-old girl on a stretcher after her town was hit by a violent tornado.  (The Memphis Commercial Appeal has the photo available here.)

My specific interest in all this is in trying to understand why so many in the meteorology community seem irritated...nay, downright angry...about the photo.  My observations and opinions follow.

Storm chasers who distribute ANY photos fill a dual role.  No matter the medium -- Twitter, Facebook, or a sale to a television or newspaper outlet -- if you distribute a storm-related photo, you are performing a service for the community and are wearing an additional hat: that of a photojournalist.  I believe this is true whether or not you profit, or even seek to profit, from your work.
 
Don't think that high-impact weather is a news story?  Tell that to AccuWeather, The Weather Channel, WeatherNation, and so on.  If you don't want to wear this additional hat, leave your camera and your social media apps at home when you chase.  Yeah, I know...not gonna happen.

Weather phenomena and the destruction they produce are inseparable.  Those who want to have one without the other must wake up with Tobey Maguire and Reese Witherspoon living in the next house.  Once we detach the storms from their damage, destruction, death, and aftermath, what's left?  All the people and places they harm are demoted to snippets on the nightly news.  "Oh that's horrible.  We should pray for them.  Pass the potato salad."  The same fate as Iraq, Afghanistan, Syria, Libya, Ukraine, and a half-dozen other crises in the world.  This isn't Pleasantville, y'all.

"But that doesn't mean we should photograph it."  Shouldn't photograph what?
- The storms?  Yeah right.  Not gonna happen.

- The tornado carving up that house and barn and silo?  I would guess that in any given year, hundreds of thousands of dollars change hands for professional-quality photos and video of high-impact weather doing its thing.  I have no empirical data to back up that number, and I'd love to see some.

- The damage afterward?  Again, check the nightly news.  How our society has evolved to expect these scenes of destruction and damage is a separate matter.  Can someone tell me a better way to communicate to the world how horrific some of these scenes are?  How much these places need our aid?  What better way to tell people that a town and state need our help than photos like this one, this one (linked to by Discovery News), or this one (linked to by the Lubbock Avalanche-Journal)?

- The dead and dying?  I defer to people who have been in the business far longer than I have to decide what should be published and what shouldn't.  The FCC sets the rules for television I'm sure, but even the American Red Cross has guidelines when it comes to photos of human suffering:
- Are we caught up in the fact that the subject of the photo died afterward?  Via the Fort Collins Coloradoan, USA Today reminds us that in the aftermath of the 1995 Murrah Building bombing in Oklahoma City, a firefighter carrying a severely injured child who later died was photographed, and the photo won the 1996 Pulitzer Prize.  (The photo is here.)  That's right: nearly 20 years ago, a photo just like this one received one of journalism's highest honors.

Individually we may dislike the photo or find it distasteful or objectionable, but society seems to want it (money and ratings and clicks are speech, sadly), and the profession apparently supports it, and it doesn't appear to violate standards espoused by our national humanitarian agency, either.

Which leaves what?  Are we offended at the photo, or at the person taking it?  I'll be the first to admit that wanting "highly photogenic and destructive tornadoes" for financial benefit is repulsive...but only the second part of that.  Profiteering from anyone else's loss is shameful.  I don't know how to separate that from the demand for high-impact weather footage, though.  I really don't.  Insensitive as this comment may be, I challenge everyone to dig back through their Facebook feed (which is secured and private for a reason, right, mmm hmmm?) or their Twitter feed for an offensive comment.  Badgering someone on the basis of a single public (yet insensitive, yes yes yes) comment reeks of some other issue.

I don't see this photo as a case of a storm chaser behaving recklessly by getting too close to a storm, or by blocking roads, or by otherwise interfering with emergency responders on the scene.  One photo gives no proof as to whether or not this person later put down his camera to aid others who needed help.  Of course, I must ask: what is the second role of a "storm chaser" -- that is, what is the responsibility once the storms have moved on?  Are we supposed to act as "volunteer emergency responders" (my vocabulary) or as photojournalists?  Don't the professional emergency responders tell us to stay off the roads, to stay out of their way, to avoid getting ourselves hurt and compounding the need for aid?  Wouldn't conscientious photojournalism be the wiser course of action?  I don't know, but it's worth asking.  (A third option: leave the scene entirely.)

In addition to a critically injured little girl, there are two other human beings in the frame, performing a Herculean task and risking their own lives digging through rubble.  Let's zoom out a little and acknowledge that this photo isn't just about the girl and now her memory, but also about what those responders do after every storm, every car accident, and every 9-1-1 call.

Personal opinion: the photo is poignant, not offensive.  It is weighty, not disrespectful.  It shows, in an incredibly direct and heartfelt and painful way, how ravaged this town became in a matter of seconds.  This photo is the face of Pilger, Nebraska, on June 16, 2014.  I think we're all better for seeing it.

Friday, June 6, 2014

The Monument

This column was erected to commemorate the Great Fire of 1666.  It's certainly not a world-famous structure, but it is very important to the history of London.  The fire destroyed the homes of perhaps 70,000 of the 80,000 people living inside the city walls over those four days.

That scale of loss reminds me in some ways of the numerous small towns that have been hit just as heavily by tornadoes over the years here in the States (Manchester, SD in 2003; Greensburg, KS in 2007; Picher, OK in 2008; Hackleburg, AL in 2011).  Some have chosen to rebuild (Greensburg, Hackleburg), some have not (Manchester, Picher).


Thursday, May 22, 2014

The Rosetta Stone

I'm posting this slightly different view of the discovery that cracked the code of Egyptian hieroglyphics.  Unsurprisingly, it's one of the Museum's most popular displays...and you have to wait your turn to get up close.  And you'll get nudged if you take too long.  :-)

Hyperinflation in Zimbabwe in 2008

As the display in the British Museum explains, money became pretty much worthless in Zimbabwe in 2008. The country couldn't even keep up with how fast the exchange rates were changing.

Sunday, May 18, 2014

The Broad Street pump

John Snow was a physician by trade, but he made a huge impact on the field of geography too with his mapping of a major cholera outbreak in London in the mid-1800s.  This is a replica of the pump whose handle he asked to have removed...preventing countless further deaths.

http://en.m.wikipedia.org/wiki/1854_Broad_Street_cholera_outbreak


Saturday, May 17, 2014

Small train stations

I just love these places. This one is in the middle of town, and as usage declined over the years, the second building was converted into a clinic.

At this station there's not even a ticket office or kiosk: if you board here, you buy your ticket on the train. You can't do that at most places.

(Hale Station, Altrincham, on the outskirts of Manchester)

"Ricicles"

The early name of Rice Krispies. From about 1960.

(At the Manchester Museum of Science and Industry)

Tuesday, May 13, 2014

Vacation time

Hopefully I'll avoid the Griswolds on this European Vacation.  Many pics to come over the next 3 weeks I hope!

Tuesday, April 15, 2014

Relationships, relationships, relationships

In my introductory weather and climate course, scores on Exam 2 typically slide considerably.  Sometimes the average drops 10-15% from Exam 1.  I have no hard evidence, but my suspicion is that scores go down because the primary topic is humidity and atmospheric moisture, a topic many students find quite difficult to grasp.  However, although some students do worse on the second exam, there are always several who do much better.  How??  They figure out that my classes are not about rote memorization -- that I'm serious when I say I care about critical thinking skills and about problem-solving ability.  And they make a conscious effort to change their study habits.  To continue just "studying the vocab" is a quick route to a failing grade.

I asked one such student -- who went from a C+ on Exam 1 to a perfect score on Exam 2 -- what she did differently.  Her response:


To the people who say "I'm just not good at science," I say stop it!  This student is a political science major and goes on to present a universal truth: analytical ability is a key to success in college regardless of your discipline, not just in the "sciences."



P.S.:  That's all for this post, but I figured it might be helpful to show an example exam question.  In large courses (more than 50 students), exams are strictly multiple choice, and there are typically about one question per two minutes of class time (N = class length in minutes / 2).



See also "What 'the tests count for too much' really means"

Sunday, April 6, 2014

For my G109 students: air pressure experiment

I had a water bottle open on my flight today, and closed it while we were at cruising altitude (so the pressure inside the plane was about 850 millibars).

Once we were almost on the ground in Atlanta, I pulled it back out.  With the air inside the bottle sealed at 850 millibars, but the cabin now back to around 1000 millibars, I opened the bottle and...

https://www.dropbox.com/s/yqivni0ces4so63/20140406_125554.mp4
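
To put rough numbers on what the video shows: treating the trapped air as staying near cabin temperature, Boyle's law (p1·V1 = p2·V2) says the sealed bottle gets squeezed to roughly 85% of its cruise-altitude volume before you open it.  A back-of-the-envelope version in Python:

    # Back-of-the-envelope Boyle's law estimate for the squeezed water bottle.
    # Assumes the trapped air stays near cabin temperature (isothermal).

    p_sealed = 850.0    # mb, approximate cabin pressure when the bottle was capped
    p_ground = 1000.0   # mb, approximate cabin pressure near the ground

    volume_fraction = p_sealed / p_ground   # V2/V1 from p1*V1 = p2*V2
    print(f"Bottle squeezed to about {volume_fraction:.0%} of its sealed volume")

Open the cap, air rushes in, and the bottle pops back to shape.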

Thursday, March 20, 2014

Spring tornado drills

Here's the timeline of how I received the evening's Test Tornado Warning from various sources: the National Weather Service, my cable television provider, and other channels related to the largest employer in my hometown.


7:36 p.m. All three of my NOAA weather radios went off.

7:37 The TV is on pretty loud...  "Did you just hear sirens?"  "I'm not sure."  (Yes, they were going off.)

7:38 Tweet from @IUEMC.

7:39 EAS activation on all cable TV channels.

7:40 Email from "IU Notify" system arrives.  It was time-stamped at 7:37, but took 3 minutes to get through the getrave.com servers to my inbox.  (I verified this by looking at the email headers.  I'll be happy to provide a screen capture of them.)

7:42 Tweet from @IUBloomington.

7:49 The average "lead time" for a tornado warning is 13 minutes -- meaning a tornado could have just hit your home.  How much lead time would each system have given you?  Backtrack from here.
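
If you want to do that backtracking explicitly, here is a quick Python tally using the timestamps above and the nominal 13-minute average lead time.  The "arrival" time is hypothetical; this is just the arithmetic.

    # How much of a nominal 13-minute lead time would each channel have left?
    # Receipt times are from the timeline above (warning issued about 7:36 p.m.).
    from datetime import datetime, timedelta

    issued = datetime(2014, 3, 20, 19, 36)
    arrival = issued + timedelta(minutes=13)    # hypothetical "average" tornado arrival

    receipts = {
        "NOAA weather radio": datetime(2014, 3, 20, 19, 36),
        "Outdoor sirens":     datetime(2014, 3, 20, 19, 37),
        "@IUEMC tweet":       datetime(2014, 3, 20, 19, 38),
        "Cable EAS":          datetime(2014, 3, 20, 19, 39),
        "IU Notify email":    datetime(2014, 3, 20, 19, 40),
        "@IUBloomington":     datetime(2014, 3, 20, 19, 42),
    }

    for source, received in receipts.items():
        remaining = (arrival - received).total_seconds() / 60
        print(f"{source}: {remaining:.0f} minutes of lead time remaining")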

Friday, March 14, 2014

The sling psychrometer

Want to learn a few German words? Just check out the psychrometer from our cabinet at work... a.k.a. das Schleuderpsychrometer.

(If it's der or die instead of das, let me know.)

Sunday, February 16, 2014

Mine, all mine

One of the best, straight-talk summaries of the role of Twitter profile disclaimers in this day and age:
http://www.cyberbuzz.com/2011/08/22/the-views-are-mine-and-not-my-employer/
"You see Twitter disclaimers more and more these days. It’s a variation on the “the views are mine and not my employer”, or “opinions expressed here are mine and mine alone” theme.
Guess what?

Those opinions may be yours and they may not reflect the opinions of your employer, but they reflect you."
For the record, I do endorse this view.  :-)

Your boss isn't going to go look for a disclaimer before he fires you for publicly being an idiot and dragging down the company's brand.  If you put your affiliation directly on your profile, then you are acting as a very public, full-time ambassador for that group -- whether it be your government employer, your university, or your burger joint.  And everything you post could get associated with that group, no matter whether you "disclaim" it or not.  That's all the motivation some employers will need to give you the axe, or maybe even not hire you if they trawl your past posts.

A while back, after someone I know got in a little row with some folks over a couple of insensitive posts, they retorted, "I don't know what the big deal is, it's just Twitter."  Tell that to its billionaire owners, countless news media outlets, or the Library of Congress.

Sunday, January 19, 2014

Spring 2013 and Fall 2013 semesters of teaching

(As I said in the intro to my first semester summary, I want to keep some notes here on the successes and failures from my teaching exploits.  It's mainly for my own reference, but there's no reason not to share with everyone else.)  Here's a wrap-up of...

Semester #3 - Spring 2013 - 3 classes
* Lower division intro to weather & climate, 100 students
* Lower division earth system science, 50 students
* Upper division synoptic meteorology, 15 students

I learned a lot about myself this semester.  The weather and climate course began to click, with the just-in-time methods and classroom activities and other assessments.  Now I just need to make some changes to the lab manual, which I inherited from a previous instructor and which doesn't match my style (or content) as well as it should.  Since I didn't draw this course again in the fall, I didn't work on it over the summer.  Maybe I should have.

Our weather analysis-synoptic sequence is taught without physics and calculus prerequisites, and it's incredibly difficult--more than I thought--to teach synoptic concepts without them.  How many of you have ever tried to teach QG theory in words?  :-)  It can be done, but it's a slow process.  I really like the Lackmann book, but it's not the best fit for a conceptual synoptic course, in my opinion.  I resorted to a lot of Vasquez's material, as well as Chaston's old book.  His is probably the best descriptive QG section I've seen.

My first foray into my new departmental home was ESS, which has a few majors but functions mostly as a service course, as many 100-level courses do.  I struggled with what kind of theme I wanted the course to take--again, given that I inherited a lab manual that in no way reflected my pedagogy--and eventually decided on a tour of the earth system: a few weeks on each of the four (five?) spheres.  And my goodness, if you want to know where your knowledge of a topic is weak, don't do your homework and then go to class in an activity-based course design!  I did learn that students are okay with you saying "I don't know," to a point.  It shows that we're human, that we don't have all the answers either.  I think that resonates with many of them.
 
Semester #4 - Fall 2013 - 3 classes
* Lower division intro to atmospheric science, 10 students
* Upper division climatology, 25 students
* Upper division mesoscale meteorology, 20 students

I moved the "weather and climate" course across campus to a new department and gave it a new name, with lab exercises of my own, and it felt really nice.  One problem: just as small-classroom exercises definitely do not scale up to lecture halls, large-classroom activities don't scale down to a room of 10 students, either.  More than once, an activity that would take 2-3 minutes in a room of 100+ students would be over in 30 seconds.  "Well, that usually takes a little longer."  It didn't hurt that I had an incredibly bright group.  So we improvised with more data, or went to the web for more photos, etc.

This may disappoint a few folks but I'm not going to say much about the mesoscale course, because most of the previous struggles (mixed grad/undergrad, no math and physics prerequisites, etc.) and successes (use of case studies and real events, individual vs. group presentations, etc.) came up again.  It was fun to take the course as I'd taken it--all lecture by powerpoint--and tweak the topics and change the presentation.  Much fun.

Climatology was a new prep for me but also a fun one; I'd taken a couple of graduate courses in it and it's a hot topic in mainstream science, so I felt ready.  Key takeaways:

* Flexibility in the entire course plan.  I hadn't wanted to talk about the Milankovitch cycles until we introduced past climates & Earth's climate history, but students wanted it during the radiation physics section.  So I had to shift everything.

* Don't hesitate to abandon ship.  On "what causes the jet stream" day, I tried to leap too far too quickly (it's temperature gradients ultimately, btw).  The handouts I distributed weren't matched up with what I wanted to say, the way I said it didn't match up with what they had read, and about 30 minutes into the (75-minute) period I realized this boat was a goner.  "Okay, class is over.  That's it.  Come back Thursday."  We did, I revised and resubmitted, and it went much better the second time.

* Students love real life.  As we got to a section on ice ages and glacier formation, a winter precipitation event presented itself.  So we spent a day on precipitation types and vertical temperature profiles, looking at observations and making forecasts, and so on.  They seemed to really enjoy that.
 
What will Spring 2014 hold?  Enrollments of 100 and 30, I know that much.......