One of the most challenging tasks I ask students to do in my course is perform a task analysis of a scientific effects-to-causes problem: identify (1) the data that a scientist uses to pose the problem, (2) the steps in moving from the problem to a solution, and (3) what qualifies as a valid solution. This semester I worked much harder than in the past to model this process for the students and to give them opportunities to practice it. The main challenge students seem to have is narrowing a problem down to a manageable size -- their initial attempts tackled extremely complex, interdisciplinary problems and brought in complex multidimensional datasets that would take an entire semester to unpack. I tried to scaffold the process by showing that each set of data can be thought of as presenting its own range of subproblems, any one of which might be tractable enough to visualize and specify.
I also tried to make the point that asking students to learn models carries the concomitant responsibility to help them understand the limitations of models: models are false. I once saw it described that the simplest complete model of a cat...is a cat. Anything else is a simplification. Bill Wimsatt wrote a great paper about how false models are scientifically useful as the means to truer theories. Models are a great filter for identifying what can be explained about something and, therefore, what still needs explaining. A phylogenetic tree shows relationships among taxa, but it also illustrates which characters (i.e., differences among taxa) are explained by "something that happened once a long time ago" and which suggest that something else is going on. It's when you find something that doesn't fit the model that you know you may have an interesting puzzle to solve.
In the Prometheus course, our instructor took some of the questions that had been raised during the chat session and tried to get answers. The one I liked best was her question about some students' concerns over the 'whisper' feature, which allows one student to talk directly to another student without the instructor being able to monitor the content. The answer was (in part): "[...] you can’t turn the whisper feature off. They usually tell faculty to refrain from telling the students they can whisper." That's not a productive answer. If the feature is there, someone will figure out how to use it (I discovered it in less than 5 minutes). Not telling students means you can't prepare them for what they might experience. And, I believe, concealing functionality is fundamentally dishonest. But, again, the environment is being built more around the instructor's desire for control than around the students' needs, or better yet, the community's.
Tom, Kirsten, and Billy came to visit this afternoon. Plato was ecstatic to have guests and leapt around in paroxysms of wild exuberance. We had fixed a pot of soup, so we invited them to lunch. They played doggy catcher with the boys, we had a nice lunch, and, after another session of doggy catcher, we took the dogs on a quiet walk out to the manure pile. I also got to see Tom's new AlBook running Panther. I hadn't been keeping up with the new features list, so I wasn't aware of some of them. I can see several things that will improve my productivity a lot. When we got back from the walk, we had cookies and coffee and chatted quietly until they really had to leave. It was great to see them again.