First entry

[Audio Version]

This is my first entry in this blog. Its subject matter, generally speaking, is Artificial Intelligence.

I have been engaged in AI research in one way or another since around 1990, when I first read the Time-Life book "Alternative Computers", part of their "Understanding Computers" series: a colorful if brief look, from a layman's perspective, at a variety of technologies that even today count as cutting edge or bold speculation. As I recall, it touched on neural networks, nanocomputers, optical computing, and so on. Given my intense interest at the time in robotics and digital processors, what most caught my eye was a section on artificial intelligence. By then, I had followed an odd path that led me from studying simple electronics to digital logic and all the way up to microprocessor architecture.

What I was finally realizing around this time was that in order to understand how digital computers worked, I was going to have to learn how to program. I didn't have any particular problem to solve by programming; I just wanted to understand what all the complex architecture of a digital computer was really for. The idea of a machine endowed with intelligence was not new to me, but I guess the timing of this book and my interest in learning to program led me to conclude that I should cut my teeth as a programmer on the problems of AI.

Thanks to my gracious and encouraging parents, I was lucky enough at this time to have a Tandy 1400LT laptop computer like the one shown in the illustration at right. It was great for word processing, spreadsheets, playing cheesy video games, and a few other things. It had a squashed, 4-shades-of-blue LCD screen, two floppy drives, no hard drive, a 7 MHz Intel 8088-compatible CPU, and 640 KB of RAM. When I decided to learn to program, my father insisted I do some research into programming languages first. After a while, I settled on Turbo Prolog (TP), because PROLOG had earned a good reputation in the AI community, especially for the area that became my first focus in AI: natural language processing (NLP). Once I had read my first book on the language, my father was finally convinced I was serious and gladly bought me a copy of TP.

In some ways, this nearsighted choice to learn to program in a language that few people in the business world have heard of even today may have delayed the start of my career as a professional programmer by a few years. Still, the way of thinking about automation that PROLOG engenders has helped my understanding of search algorithms, regular expressions, and other practical technologies and problems. And while I felt pretty out of place when I started learning C++ a year or two later, PROLOG crystallized my understanding of what the procedural languages that have dominated my professional life since are really all about, giving me a broader context that perhaps many programmers lack.

The books I read, including the manuals that came with Turbo Prolog, emphasized the strength of PROLOG in natural language processing, so I began my life as a programmer there. I would not say that in those early days I made any novel discoveries; I was simply following in the footsteps of many bright researchers who came before me. But I quickly came to understand just how hard the task was. My impression is that even today, NLP is a black art that has more to do with smoke and mirrors than with cognition. Still, the timing was fortuitous, since I was also learning the esoteric skill of sentence diagramming in high school around then. Nobody but linguists cares about this archaic skill any more, but it couldn't have come at a better time for me.
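To give a flavor of what those early experiments looked like, here is a minimal sketch of the kind of toy grammar the PROLOG books of the day taught. It's written in standard Prolog (with definite clause grammars) rather than the Turbo Prolog dialect I actually used, and the vocabulary is purely illustrative, not code I wrote back then:

    % A toy definite clause grammar (DCG) for parsing trivial sentences.
    % Load into a standard Prolog system such as SWI-Prolog.
    sentence    --> noun_phrase, verb_phrase.
    noun_phrase --> determiner, noun.
    verb_phrase --> verb, noun_phrase.

    determiner --> [the].
    noun       --> [doctor].
    noun       --> [patient].
    verb       --> [examines].

    % Example query:
    % ?- phrase(sentence, [the, doctor, examines, the, patient]).
    % true.

Even a toy like this shows why the books were so enthusiastic: the grammar rules read almost like the sentence diagrams I was learning in school. It also hints at how quickly things fall apart once the vocabulary and grammar grow beyond a toy.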

PROLOG was also touted as an excellent language for developing expert systems, and I made my first primitive examples around this period as well. Expert systems also provided a natural application for NLP, so I experimented with simple question-and-answer systems of the sort one might imagine in a medical setting, where doctors and patients exchange information and questions. Again, I hasten to add that I made no noteworthy progress here that others hadn't already achieved years before. It was really just a learning experience for me.
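For readers unfamiliar with the idea, a rule-based expert system can be sketched in a few lines of Prolog. The symptoms and diagnosis below are hypothetical placeholders chosen only to illustrate the shape of such a program, not medical advice and not my original code:

    % A toy diagnostic rule base of the kind beginner tutorials used.
    % Facts record observed symptoms; rules draw conclusions from them.
    symptom(alice, fever).
    symptom(alice, cough).

    diagnosis(Patient, flu) :-
        symptom(Patient, fever),
        symptom(Patient, cough).

    % Example query:
    % ?- diagnosis(alice, Condition).
    % Condition = flu.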

Sometime not long before I went off to college in 1992, I got interested in chaos theory (James Gleick's "Chaos: Making a New Science") and, as a sort of extension of it, Artificial Life ("a-life"). My programs started to be geared more toward generating the Mandelbrot set, L-systems, and other fractals. Once I got to the Stevens Institute of Technology, I was digging into a-life (Steven Levy's "Artificial Life"), especially Conway's Game of Life, genetic algorithms, and the like.
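An L-system, for instance, is just a string-rewriting scheme, simple enough to sketch in Prolog. The rules below are Lindenmayer's original "algae" system, chosen purely for illustration rather than taken from anything I wrote at the time:

    % Lindenmayer's "algae" L-system: a -> ab, b -> a.
    rule(a, [a, b]).
    rule(b, [a]).

    % Rewrite every symbol in a string according to the rules.
    rewrite([], []).
    rewrite([H|T], Out) :-
        rule(H, Expansion),
        rewrite(T, Rest),
        append(Expansion, Rest, Out).

    % Apply the rewrite N times.
    iterate(0, S, S).
    iterate(N, S, Out) :-
        N > 0,
        rewrite(S, S1),
        N1 is N - 1,
        iterate(N1, S1, Out).

    % Example query:
    % ?- iterate(4, [a], X).
    % X = [a, b, a, a, b, a, b, a].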

In all modesty, I have to say that I did little that was new in all this time, but I was constantly going off in my own directions. Admittedly, this is probably because I have rarely had the patience to learn enough about a subject before diving into it, so I end up filling in the gaps with whatever makes sense to me. This process is inherently creative, and it sometimes leads to unexpected places. Along the way, though, I did start doing my own research. I wanted so much to succeed where others appeared to have failed: in creating things in a digital soup that the average person could genuinely recognize as alive.

Sadly, by the time I left school, my AI and a-life research had ground to a nearly complete halt. I had jumped on the World Wide Web bandwagon almost at its beginning and still haven't gotten off. I was at the "serious" start of my career, and my focus was almost wholly on making a success of it. I have been quite successful, largely because I work so hard at it, but that work has always been driven by a belief that success will eventually free me to get back to my AI research.

Ten years later, I've had to come to the realization that it was a mistake not to continue my research on the side while simply accepting that I have to keep working full time on something else to pay the bills. In the past few years, though, this has been sinking in, and I'm starting to do AI research again. I've been focusing my attention on the boldest claim AI has ever made: the promise of sentient machines. Other great minds have done such good work in other areas of AI that we can at least claim to have machines with roughly insect-level intelligence. But thanks to post-modern philosophical skepticism about the very existence of reason, and other misguided debunking of AI, most researchers seem to have given up on the most important piece of the puzzle: conceptualization.

In all those years when I wasn't doing actual AI research, I was still thinking about its problems. With each new thing I learned in other areas, about philosophy, programming, economics, and so forth, I gained new insights into AI. Always with me has been the question: how would I get a computer to do that? I continue forward now with the strong conviction that conceptualization is not only possible for computers, but also a necessary part of the solution to many outstanding problems in computing. So I've devoted most of my renewed efforts in AI to the problem of engendering conceptualization in computer programs.

So that's where I am today and how I got to this point.
