Sunday, March 18, 2007

Robots are hard, too

Robots are hard, too: Friday's Wall Street Journal included a book review of Almost Human: Making Robots Think, a new book by Lee Gutkind that's a portrait of the work at Carnegie Mellon's Robotics Institute. That work, it seems, has its frustrations, and — as the reviewer, George Anders, tells it — the difficulties sound eerily like those recounted in Dreaming in Code's description of the things that make software hard [...] One final irony to me, coming out of Dreaming in Code, is that Carnegie Mellon is not only home to Gutkind's roboticists; it also harbors the Software Engineering Institute, which is ground zero for the CMM, CMMI, TSP, and other acronymic attempts to add a framework of engineering rigor around the maddeningly difficult enterprise of producing new software. I might be jumping the gun (not having read Gutkind's book yet), but it sounds like those roboticists and the SEI people should have lunch some time.

I haven't read either book, but from my experience with both large-scale software development and AI research, I wonder whether the SEI process people have as much to teach the roboticists as Rosenberg seems to assume. The critical difference between software as a research tool and software as a deliverable is that research software is, or should be, developed as a means of testing some hypothesis. Good development practices still apply, but the kinds of precise requirements and rigorous reviews that are essential when working to meet customer needs are neither applicable nor feasible, because what is to be done is driven by what makes theoretical sense and is testable, not by independently derived requirements. There's a parallel with Lakatos's view of the development of mathematics in Proofs and Refutations. Production software development is like creating a new proof for a true statement: the challenge is to create intermediate concepts and lemmas that make the proof easier to build. Research software development is like trying to come up with a proof for a hypothesis that may not be true as stated. Failures to come up with a proof (buggy programs) lead to refinements of concepts and revisions of the hypothesis, and success means continuing this process until one reaches a hypothesis that can be proven (a program that behaves as expected) and that the research community still recognizes as worthwhile.
