Tuesday, April 14, 2009

Strings are not Meanings Part 2.1

Strings are not Meanings Part 2.1: Fernando is right – these observations are powerful traces of how writers and readers organize and relate pieces of information, just as a film of Kasparov is a trace of his playing chess.

I think I didn't make my point as strongly or precisely as I should have. The bubble chamber analogy is neat, but limited. In contrast to the traces in the chamber, the stuff out there on the internet, stored or in transit, is not just a record but also a huge external memory that is as causally central to our behavior as anything in our neural matter. The question, then, is what the actual division of labor is between external and mental representation. I tend to believe that material and communicative culture carry much more of the burden than individual minds, much as the informational burden of current computing is carried far more by stored programs than by CPUs.

I think that Fernando approaches this space from a more behaviourist mindset – accepting the input, output and context but with no requirements for stuff happening ‘inside’.

No, my stance is definitely not behavioristic. There's lots of complexity ‘inside.’ But the patterns of representation and inference favored by symbolic AI have little to do with ‘inside’ as far as I can see. Instead, they are formalizations of language, as formal logic is, which explain little and oversimplify a lot. Given that, we might as well go right to the language stuff out there and drop the crippled intermediaries.

In addition to their taxonomic meaning, ‘ontologies’ have come to name a requirement for communication – that the stuff I refer to maps to the same stuff for you.

But that's where it all falls apart. No formal system can ensure that kind of agreement. There is no rigid designation in nature. Our agreements about terms are practical, contextual, contingent. Language structure relates to common patterns of inference that seem cognitively “easy” (whether they have innate “hardware” support I don't know); monotonicity is one example: since small dogs are dogs, “every dog barks” licenses “every small dog barks,” whatever ‘dog’ and ‘barks’ turn out to pick out. But asserting that is much less than postulating, out of thin air, a whole fine-grained cognitive architecture of representations and inference algorithms. The alternative of computing directly with the external representations and usage events of language is available to us, and it is richer than even the fanciest semantic net system.
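To make that contrast concrete, here is a minimal sketch of what “computing directly with usage events” can mean in practice: reading word relatedness off raw co-occurrence counts, with no ontology or semantic net mediating. The corpus, stopword list, and similarity measure are all illustrative assumptions, not a description of any actual system.

# A minimal sketch, not anyone's actual system: estimate word relatedness
# directly from usage events (co-occurrence within sentences), with no
# ontology or semantic net in between. Corpus and stopwords are toy
# assumptions for illustration.
import math
from collections import Counter

corpus = [
    "the cat chased a mouse",
    "the dog chased a ball",
    "the cat ate fish",
    "the dog ate meat",
    "she read a book",
    "he read a newspaper",
]
stopwords = {"the", "a"}

# For each word, count the other words it co-occurs with in a sentence.
contexts = {}
for sentence in corpus:
    words = [w for w in sentence.split() if w not in stopwords]
    for w in words:
        ctx = contexts.setdefault(w, Counter())
        ctx.update(v for v in words if v != w)

def cosine(u, v):
    """Cosine similarity of two sparse count vectors."""
    dot = sum(u[w] * v[w] for w in u.keys() & v.keys())
    norm = math.sqrt(sum(c * c for c in u.values())) \
         * math.sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

# 'cat' and 'dog' come out alike while 'cat' and 'book' do not; the
# similarity is read off usage patterns alone, with nothing anywhere
# asserting that cats and dogs are both Animals.
print(cosine(contexts["cat"], contexts["dog"]))   # relatively high
print(cosine(contexts["cat"], contexts["book"]))  # 0.0 on this toy corpus

Real systems scale this idea up with association measures such as pointwise mutual information and far larger corpora, but the point stands either way: the regularities live in the usage data itself, not in a hand-built intermediary.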

(Via Data Mining.)
