Search This Blog

Sunday, January 23, 2005

The fallacy of bigger brains

[Audio Version]

I recently read a great article in the February 2005 issue of Scientific American titled "The Littlest Human", by Kate Wong. Scientists have been studying a newly found member of the Homo evolutionary family, of which Homo sapiens is the last surviving species, which they have named Homo floresiensis after the Indonesian island of Flores, where the first known remains were discovered. As you can see in the artist's rendition of H. floresiensis, they were very small creatures. In fact, they were about the size of the Australopithecines (remember Lucy?), the line from which the Homo tree is thought to have emerged, making them the smallest members of Homo yet found. They appear to have existed as recently as 18,000 years ago, long after the demise of the Neanderthals, previously believed to have been the last of the Homo line to die out, leaving only us.

While I have a deep interest in the origin of the human species, what made this story particularly interesting in the context of AI is the question of intelligence it has raised in the scientific community. Wong describes the creatures, which some scientists have affectionately dubbed "hobbits", as having brains the size of a grapefruit, yet points out that there is evidence these hobbits were making sophisticated stone tools, even though some species of Homo with larger brains did not. The obvious question, then, is: is intelligence simply a matter of brain size?

Wong carefully points out that scientists of various persuasions are weighing in on this question and that there is as yet no clear answer. I think the answer is obvious, though: intelligence is a reflection of structure, not mass. Wong notes, for example, that people credited with being among the brightest of humanity span the full range of human brain sizes. In one case, two well-known intellectuals are cited, one of whom had half the cranial volume of the other. He might as well have been missing an entire brain hemisphere.

So why should I care as an AI researcher? Because for years people have been telling us that the reason we don't have intelligent machines yet is that today's computers are too slow. It's just a matter of time, they tell us, until computers have enough transistors, memory, or whatever other basic physical measure of computing power we care to use. When we reach that threshold, computers will somehow magically wake up and start cracking jokes and deciding whether to enslave humans or just kill them altogether.

This equation of greater numbers of computing units with greater intelligence is misguided. If brain size is key in "wet" life, then why don't creatures with much larger brains than ours (e.g., certain whales) exhibit at least our own levels of wit and creativity? I am fully convinced that we could have had intelligent machines decades ago. I am further convinced that multiplying today's computing power by ten or a hundred times will not automatically bring them about, either. Google is a monster of computing power and it's still not "smart". Don't plan on having a computer of your own with as much computing power as Google any time soon, by the way.

The actual question is one of structure and complexity. This concept is illustrated over and over again throughout the history of computer science. Computer games illustrate it well. When games like Doom and Tomb Raider came out in the nineties, people were astonished at what a whole new generation of computer graphics could do with the average home computer. What people may not remember now is that 3D graphics engines capable of rendering graphics just as compelling had been around for decades. Few could use them because few had the expensive hardware needed to run them fast enough. Did these games come with hardware upgrades? Of course not. What they had was a set of ingenious new algorithms for generating compelling 3D graphics. The same thing happened when people started streaming audio and video over the Internet. The first systems were power-hungry, requiring massive bandwidth and expensive hardware. Now the average user can get the same results with lower bandwidth and a cheap PC, thanks to some incredible compression algorithms and other ingenious techniques invented by companies like RealNetworks.
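The algorithms-over-hardware point can be made with a toy sketch (my own, purely illustrative, and not tied to any system mentioned here): two ways of computing the same Fibonacci number, one of which no amount of faster hardware can rescue at scale.

```python
def fib_naive(n):
    # Exponential time: doubling hardware speed barely moves the
    # largest n you can reach before the heat death of the universe.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_fast(n):
    # Linear time: the same answer from a better algorithm,
    # not a better machine.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Identical results; wildly different demands on the hardware.
assert fib_naive(20) == fib_fast(20) == 6765
```

The structural insight (reuse previously computed values) does all the work here; the processor is the same in both cases.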

AI researchers like to blame our failure to achieve the goals we've boasted about for decades on many things, but insufficient hardware is our favorite whipping boy. Let's be honest, though, and tell the world that we just haven't found the right algorithms yet.

Funny that rocket science would be the standard against which we measure engineering complexity. AI research sometimes seems to make rocket science look like a weekend crafts project. Everyone who has contributed and continues to do so deserves credit for doing the difficult and pursuing what seems the impossible. To anyone who has thought of giving up -- especially those who wonder whether they should even bother getting started in our largely dead field -- I want to encourage you to keep the faith. I am convinced you have more than enough computing power in your own PC to give life to an intelligent mind. It's just a question of your creativity and persistence, and it will happen. Don't give up.

Wednesday, January 12, 2005

Follow-up on Pile

[Audio Version]

My head is spinning. I've done about as much due diligence on the Pile computing system as is reasonably possible. I thought I should write a brief follow-up entry in light of that.

After all the bombastic claims about how modern computers, relational databases, and AI suck, Peter Krieg, CEO of Pile Systems, Inc., goes on to explain in fancy but annoyingly vague terms what Pile is and how it is the perfect solution. In fact, as far as I can tell, all Pile amounts to is a data structure that represents everything as linked points in a non-hierarchical graph space. One might as well call it a big flow chart with only one kind of block, one that can't contain any discrete information. If that's true, then I can hardly see how the trivial concept mathematicians call a "graph" is novel, let alone patentable.

To be sure, I haven't seen any of the source code or any applications written with Pile. One has to get in touch with Pile Systems for a demo. And I couldn't, with a few quick Google searches, find anyone who admits to using the thing. I'm sure they're out there, but I didn't find them.

Actually, I didn't find anything significantly related to Pile through Google searches beyond what is on Pile's own web site or repackaged in rave reviews by converts who probably haven't used it. Then again, nobody cares about my AI research either, so I'll grant them the benefit of the doubt: perhaps they're unknown simply because people haven't caught the Pile bug yet.

I have to caution people that I'm not an expert in Pile, and there may well be some value there. But the literature does nothing more than knock everything that came before Pile and make bold claims about how Pile is like the human brain and can be used to solve any problem. I'm left to conclude from what little the public literature reveals that Pile is really just a data structure, and that to make use of it, one has to write all the software to assign meaning to and process the data in it. At best, then, Pile is a tool that can be used to solve any computing problem -- just as a computer memory or relational database can.

I suppose I'm not being entirely fair. I wish I could give more attention to Pile to better cement my initial thoughts on it, but after reading several documents that amount to puffy product literature on the subject, I can't take any more. Maybe I'll find useful literature or Pile will have publicly downloadable demonstrations some day. For now, the subject is pretty nauseating.

A review of the premises behind Pile

[Audio Version]

Meandering through the trickle of AI-related news out on the Web, I recently came across information about a purportedly novel kind of computing paradigm named "Pile". The company formed to capitalize on it, Pile Systems, Inc., makes the following bold claim on its "about" page under the heading "Why Pile can change computing":

The Pile system is a revolutionary new approach to data and computing which eliminates the most fundamental current restrictions in regard to complexity, scalability and computability.

Pile represents and computes arbitrary electronic input exclusively as relations (virtual data) in a fully connected and scalable combinatory space. It dynamically generates data like a computer game instead of storing and retrieving it in a traditionally slow and clumsy process.

This sounds benign and interesting enough at first blush. Having read an outside review of Pile, I can genuinely say I'm interested in learning more about a way of representing information as relationships, because it sounds a bit like the ideas I've been pursuing in my own AI research.

Still, the little red flag in my head goes up whenever I read things like "revolutionary new approach", because that doesn't really happen very often; most innovations are modest extensions of existing conceptions. The red flag warns of iconoclasm and excessively bold claims.

I am continuing to study the site and the concept of Pile. There may be genuine value to it. Or it may be a fraud. Until I give it a fair airing, I can't make that final judgment.

That said, though, I wanted to give a preliminary review of the premises given in what seems the seminal introductory text on the subject: "Pile System White Paper: Computing Relations", by Peter Krieg. I normally would wait until I'd gotten further along in my understanding of the subject, but I am so incensed by the stated premises thus far about the limitations of current computers and of AI that I thought they merited their own separate review.

The paper begins, "In the 60+ years of modern computing history we have taken the fundamental architecture of computing, i.e. the logic governing the way we represent, structure and operate as well as the method of representation we use to register events, for granted. We rarely become aware that these are mere design decision from the early days of computer technology, neither naturally given nor possibly the best of choices." OK, this is a fair statement; anyone familiar with neural networks would agree that there are already demonstrated alternatives. A little later, though, Krieg raises the tempo a bit as he writes, "A time of crisis has always been a time where we are willing to take a closer look at foundations in order to find long term cures that go beyond patches and band-aids. The current crisis of computing -- an economic as well as a technical crisis -- is also an opportunity to reconsider the very basic assumptions that this industry has been built upon and reflect on possible alternatives that hold the promise of curing the systemic ills of computing." Let's be honest: there is no crisis of computing. Most organizations that need computing resources are doing just fine with the current breed of computers. In all my years as a software developer, I've never heard a businessman lament a crisis in computing. They complain about the cost of computer hardware and software licensing, not about basic capabilities. The little red flag starts waving around a bit.

"Attempts in the 1960ies and 1970ies to address these issues have been silenced by the onslaught of Artificial Intelligence, for over 40 years the 'Great White Hope' of computing. Only now that the failure of AI has become evident -- as was predicted early by its critics -- and even the mention of it becomes a liability to anyone seeking publication or funding, can we revisit some of the arguments and take a fresh look at the foundations." Few would argue that AI researchers haven't made bold claims they have yet to substantiate, and yes, AI has a black eye now because of it. Yet while even I would argue that AI is nearly dead as a field, I wouldn't say AI has been a complete failure, nor that its time has passed. Such are the claims, I think, of people who don't really understand much about machines or intelligence. So Krieg has now laid out the smoldering ashes of the dark ages out of which we are prepared to emerge into a bright new future. The little red flag waves a little more enthusiastically now.

"In fact, computers today are just that: extremely complicated, highly integrated yet fundamentally stupid clocks." This, of course, is nonsense. A clock is not a general-purpose computer; a Turing machine, by contrast, can be used to solve any information processing problem that can be solved. "They are neither adaptive nor even scalable: in spite of ever speedier and more complicated chips, in spite of even faster growing memories and storage devices, their operations keep drowning in data and complexity." Given that a universal Turing machine can emulate any other kind of information processing machine, saying that a Von Neumann machine (VNM) -- almost all computers today are of this type -- is not adaptive is just plain nonsense. One might quip that the software running on a VNM is not adaptive enough to deal with a certain class of problems, but one should not equate the limits of a program with the limits of the VNM it runs on. Saying that a VNM is not scalable is also nonsense. The famous Connection Machine (up to 10K processors in one system) and now Google (over 100K computers) should easily lay that question to rest. The little red flag begins hopping around frantically.
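To underline how thin the "stupid clocks" claim is, here is a toy sketch (my own illustration, not anything from Krieg's paper): a deterministic machine in the spirit of a Turing machine, whose fixed mechanism does nothing but follow a swappable rule table. What the machine computes is a property of the program, not of the clockwork.

```python
def run_tm(rules, tape, state="start", blank="_", max_steps=1000):
    """Simulate a simple one-tape Turing machine.

    The mechanism below never changes; only the `rules` table
    (the program) determines what gets computed."""
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        if head >= len(tape):
            tape.append(blank)
        symbol = tape[head]
        # Each rule maps (state, symbol) -> (write, move, next_state).
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).rstrip(blank)

# Rule table for unary increment: skip over the 1s, append one more.
inc = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
assert run_tm(inc, "111") == "1111"
```

Swap in a different rule table and the same "clock" computes something entirely different, which is the whole point of a general-purpose machine.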

I would be remiss if I overlooked that last part about "drowning in data and complexity." What could this mean? The next statement is even more puzzling: "The reasons lie in the very foundations of their architecture: logic and representation." The little red flag stops its waving and hopping and scratches its head.

Krieg goes on to reveal the nature of the problem by reference to a collection of jargon pulled from AI, philosophy, and even quantum mechanics. He goes so far as to claim that VNMs -- and yes, he's clearly mixing VNMs and the AI programs that run on them at this point -- rely on deterministic rules and that quantum mechanics suggests there are no such things. Well, he's right, and one can go even further and say that almost all technology we have ever created relies on basic determinism: the view of causality that says we can predict likely outcomes from certain classes of starting states. Before declaring this just quaint, back-country superstition, let's acknowledge that nature has done the same. Almost all of the machinery that we and all other known life forms possess has evolved in concert with the basic premise of determinism. What good is a muscle if one can't assume it will work in a predictable manner, for example? How about an eye?

"All machines including today's computers are exactly such closed deterministic mechanisms." This claim worries me a little, as I'm assuming that Pile is going to be presented as an alternative to this paradigm. The only problem is that Pile Systems sells software that runs on these deterministic machines.

"Deterministic systems by definition are incapable of learning, as learning would change them in unpredicted ways - turning it into non-deterministic systems." I guess this is supposed to be the killing blow to VNMs and/or AI research to date. This premise is just plain false, though: determinism does not preclude learning. I could point to the simple neural network demonstrator program I made recently, but I'll grant that Krieg places neural networks somehow above VNMs and other AI. So take FLARE, which I recently reviewed. Now there's a system that is about as classical as one gets in the realm of AI. It relies wholly on deterministic processes for reasoning and learning, yet it's clearly able to adapt itself to new knowledge. How about Cyc? It may not yet have achieved the goals Doug Lenat had for it decades ago, but it clearly adapts to new knowledge. The little red flag is pretty confident the rest of my brain can take it from here and retires for the day, cheerful about another job well done.
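To make the point concrete that a fully deterministic procedure can learn, here is a minimal sketch (my own illustration, not the demonstrator program mentioned above): a perceptron trained by a fixed update rule with no randomness anywhere, which nonetheless adapts its weights to fit the training data.

```python
def train_perceptron(samples, epochs=20):
    """Deterministic perceptron learning: the same data in the same
    order always produces the same weights, yet the system adapts."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred  # fixed rule, no randomness anywhere
            w[0] += err * x[0]
            w[1] += err * x[1]
            b += err
    return w, b

# Learn the logical AND function from examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
assert all((1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == t
           for x, t in data)
```

Every step is predictable in advance given the data, yet the trained system behaves differently from the untrained one, which is all "learning" needs to mean here.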

After a bit of mumbo jumbo that plays on our annoyance at having to adapt to computers instead of having them adapt to us, and on the claim that adaptive systems cannot have "pre-knowledge about the signals they detect", Krieg goes on to introduce a new term, "poly-logic systems", and declares that it "is essential to understand living systems and phenomena like cognition, learning, adapting or complexity." The little red flag pokes its head out again, ears perked. To his credit, Krieg decides to rescind his abrogation of logic and declares that "polylogic" is not "another logic", but is instead another "architecture of logic".

It becomes clear at this point that the rest of the white paper will focus on what polylogic is and hence on what Pile's novel conception of computing is. I'm going to read on and find out more. Still, I can't help but have a bad taste in my mouth at this point. Given the gross misunderstandings and the continual conflation of Von Neumann machines, relational databases, and traditional AI research so far, it's hard to imagine a clean concept will follow. It may yet be a valid one. I'm eager to find out.