At this point in my life, I have taken the Intro to AI course three different times at two different universities. I suspect that there is a standard way of teaching Intro to AI, as the courses were taught similarly each time. They all used the classic Russell and Norvig book and worked through roughly the same topics. The lectures followed the same general structure: given this problem formulation, apply this algorithm and voilà, here is the solution.
For some reason, Intro to AI never clicked with me. I could apply the algorithms and provide the right homework answers, but something about the course felt intellectually unsatisfying.
Then something changed. Once I started regularly working on research problems in my field, I would slowly come to realize that some technique Y from the AI course could solve problem X in front of me. This happened a few times, and each time it was a complete surprise, because it had never occurred to me that those techniques applied to those kinds of problems. I realized that despite having taken the AI course multiple times, I had no instinct for recognizing the shape of real-world problems that the techniques I had learned could solve. And when I finally did apply those techniques to messy real-world domains, the experience was totally different from solving homework problems in class.
The problem is that Intro to AI ignores modeling. By modeling, I mean deciding what part of the problem is important to represent and how to represent it: which features to include, or which logical propositions to use. Modeling choices are critical because, in a given domain, one representation of the problem can make the algorithm produce totally useless results while a different representation of the same problem produces useful ones.
The frustration I had with Intro to AI is that the correct behavior of an AI system is not determined purely by the correct functioning of the algorithm, but rather by the interaction between the model and the algorithm. This is the source of the disconnect and intellectual dissatisfaction that I felt while taking the course.
I understand that modeling and algorithms are two separate skill sets, the former more of an art than the latter. But just teaching algorithms without the modeling part is a waste of time. The reason that the algorithms were developed in the first place was to solve problems. They can't solve problems if practitioners have no idea how to represent those problems in a way the algorithms can digest.
Using AI in the real world is a totally different experience from hand-calculating algorithmic behavior in Intro to AI, and that's not good. It creates a disconnect between the students and the material, and it makes students less likely to look at a real-world problem and recognize that it can be represented and solved with the techniques they have learned.
If you buy this criticism of the standard Intro to AI curriculum, the question becomes: how can we teach Intro to AI differently to incorporate modeling while not shortchanging the algorithms?
So far I can think of two exercises:
Perhaps students could do biweekly mini-projects that involve designing and writing the AI "brain" for some kind of system. The project could provide the base code needed to operate in a particular domain, like sensing/actuation methods, on top of which the students would 1) choose what features to synthesize from the inputs and 2) write the algorithm. The final system could then be placed in a number of situations and evaluated based on its decision-making.
Another potential project could be to represent a single problem using different sets of features and show that the algorithm gives useful results with one representation and useless results with another. This might help students appreciate how much design choices matter when modeling, and get them thinking about what processes we could use to reliably decide which features a representation needs in order to produce correct system behavior.
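That second exercise can be shrunk to a classroom-sized sketch. The example below is my own illustration, not something from the post or any particular course: the same perceptron, trained the same way, fails on XOR when given only the raw inputs (XOR is not linearly separable) but learns it perfectly once the representation includes a product feature.

```python
# Toy demo: a fixed algorithm (the classic perceptron) succeeds or fails
# depending entirely on how the problem is represented.

def train_perceptron(examples, epochs=20, lr=1.0):
    """Train a perceptron with the standard error-driven update rule."""
    n = len(examples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # 0 when correct; +/-1 when wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def accuracy(examples, w, b):
    hits = sum(
        (1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0) == y
        for x, y in examples
    )
    return hits / len(examples)

# XOR, represented two ways: raw inputs, and raw inputs plus a product feature.
raw = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
augmented = [((x1, x2, x1 * x2), y) for (x1, x2), y in raw]

w1, b1 = train_perceptron(raw)
w2, b2 = train_perceptron(augmented)
print(accuracy(raw, w1, b1))        # never reaches 1.0: XOR isn't linearly separable
print(accuracy(augmented, w2, b2))  # 1.0: same algorithm, better representation
```

The algorithm is identical in both runs; only the modeling decision (include the product feature or not) changed the outcome, which is exactly the lesson the exercise is meant to teach.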
Any other suggestions?