Paranormal Phenomenon Omnibus

Artificial Intelligence

Essay by Matt Allair

Basic Definition / Smarter Computers / Turing Test and ELIZA /
EMYCIN Program and Deep Blue / A.I. and Virtual Reality

Speculation over the possibility of Artificial Intelligence has been a popular theme in the science fiction genre for decades. As far back as the 1940s, the notion of 'super computers' has fascinated many. Now that we have entered the twenty-first century, the question of whether thinking computers -- computers able to process and learn in the same manner as the human brain -- could be possible is being raised with increasing frequency. The X-Files explored various aspects of what current science understands about Artificial Intelligence in such episodes as Ghost in the Machine, Kill Switch and First Person Shooter. One point that needs to be stressed is that Artificial Intelligence research often overlaps with robotics research and development. In many cases the focus is not so much to create a replica of HAL 9000 from 2001: A Space Odyssey as to integrate Artificial Intelligence with robotics.

The first question to ponder is: what do we mean by Artificial Intelligence? It is the science and engineering of making intelligent machines -- especially intelligent computer programs, meaning computers with cognitive abilities. While personal computers have varying degrees of intelligence, they have no true independent cognition. Most current AI research is focused on understanding the mechanisms of intelligence. Part of its goal is to simulate human intelligence -- for example, to solve problems and to process information the way a human mind does -- yet that is not the case for all research. AI researchers may also use methods that are not observed in humans, or that involve far more computation than human beings can perform.

When comparing human and computer intelligence, Arthur Jensen, a leading researcher in human intelligence, suggests "as a heuristic hypothesis" that all normal humans have the same intellectual mechanisms, and that differences in intelligence are related to "quantitative biochemical and physiological conditions". John McCarthy, by contrast, characterizes the human mind's processes by comparing speed, short-term memory and the ability to form accurate and retrievable long-term memories. While computer programs have both speed and memory, it is hard to define cognition within a computer program, and cognition in the human mind is itself not yet fully understood.


There is no doubt that computers are getting smarter; the silicon chip has driven the technology into overdrive. Every year computer chips get smaller, faster, cheaper and more powerful. According to Moore's Law, the number of transistors on a chip -- and with it computing power -- doubles roughly every two years, and this exponential increase has so far shown few signs of abating. Which brings us to what 'heuristic' means. Heuristic can be defined as the ability to learn, discover or solve problems independently, through experimentation, evaluation of possible answers, or trial and error. Contrasting this pace with the slow evolution of the human brain, many experts have concluded that machine intelligence will eventually surpass human intelligence -- which raises the question of what will happen when it does.
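The trial-and-error sense of 'heuristic' described above can be sketched in a few lines of code. The following is a toy hill-climbing search, not drawn from any particular system: it keeps a candidate answer, tries a random variation, and keeps the variation only if it scores better.

```python
import random

def hill_climb(score, start, neighbors, steps=1000):
    """Heuristic trial-and-error search: keep a candidate answer, try a
    random neighboring candidate, and move to it only if it scores better."""
    best = start
    for _ in range(steps):
        candidate = random.choice(neighbors(best))
        if score(candidate) > score(best):
            best = candidate
    return best

# Toy problem (illustrative): find the integer x that maximizes -(x - 7)**2.
result = hill_climb(
    score=lambda x: -(x - 7) ** 2,
    start=0,
    neighbors=lambda x: [x - 1, x + 1],
)
print(result)  # settles on 7, found by trial and error rather than algebra
```

The point of the sketch is that nothing in the search "knows" the answer in advance; the program discovers it by evaluating possible answers, which is the heuristic style of problem solving discussed above.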

The intellectual origins of AI can be found in ancient Greek mythology; many myths involved human-like artifacts, and the stories of Hephaestus and Pygmalion incorporate the concept of intelligent, crafted beings. In the 4th Century B.C., Aristotle invented syllogistic logic, the first formal deductive reasoning system. In the 17th Century, Descartes proposed that the bodies of animals are nothing more than complex machines. In the 19th Century, George Boole developed a binary algebra representing some "laws of thought". Principia Mathematica, published by Bertrand Russell and Alfred North Whitehead in the first half of the 20th Century, revolutionized formal logic. The term 'robot' was introduced by the Czech playwright Karel Capek in his play R.U.R. (Rossum's Universal Robots), first staged in 1921. The foundations of cybernetics were laid in a 1943 paper by Arturo Rosenblueth, Norbert Wiener and Julian Bigelow; Wiener coined the term itself in 1948.


Scientific research into Artificial Intelligence began after the Second World War. The English mathematician Alan Turing first lectured on the subject in 1947, and was likely the first to argue that it was best pursued by programming computers rather than by building machines. The term 'Turing Test' derives from Turing's 1950 article Computing Machinery and Intelligence, which discussed conditions for considering a machine intelligent. He argued that if a machine could successfully pretend to be human to a knowledgeable observer, then it could be considered intelligent. The problem with the Turing Test is that it is one-sided: a machine could be intelligent without understanding humans well enough to imitate one. In 1956, John McCarthy coined the term "Artificial Intelligence" at the first Dartmouth Conference on the subject. That same year saw the first demonstration of an AI program, the Logic Theorist, written by Allen Newell, J. C. Shaw and Herbert Simon of the Carnegie Institute of Technology.

In 1963, Ivan Sutherland's MIT dissertation on Sketchpad introduced interactive computer graphics. In 1965, Joseph Weizenbaum of MIT built ELIZA, an interactive program that could carry on a dialogue in English on any topic. A year later, Ross Quillian demonstrated semantic nets in his PhD dissertation. In 1967, the first knowledge-based chess-playing program, MacHack, was built by Richard Greenblatt of MIT; it was good enough to earn a Class C rating in tournament play. In 1969, the first International Joint Conference on Artificial Intelligence was held. In 1974, Earl Sacerdoti developed one of the first planning programs, ABSTRIPS, and with it techniques of hierarchical planning. In 1978, Herbert Simon won the Nobel Prize in Economics for his theory of bounded rationality, whose notion of "satisficing" became one of the cornerstones of AI.
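ELIZA worked by matching the user's sentence against a script of patterns and echoing fragments of it back. The following sketch shows the idea with a few invented rules; the patterns and replies are illustrative, not Weizenbaum's actual script.

```python
import re

# A few ELIZA-style rules (hypothetical, for illustration): a regex
# pattern paired with a response template that reuses the captured text.
RULES = [
    (r"I need (.*)", "Why do you need {0}?"),
    (r"I am (.*)", "How long have you been {0}?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
]

def respond(sentence):
    """Return a canned reflection for the first matching rule."""
    for pattern, template in RULES:
        m = re.match(pattern, sentence, re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    return "Please go on."  # default reply when nothing matches

print(respond("I am feeling anxious"))  # How long have you been feeling anxious?
print(respond("Nice weather today"))    # Please go on.
```

The illusion of understanding comes entirely from pattern substitution -- there is no cognition here at all, which is precisely why ELIZA became a touchstone in debates about machine intelligence.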


In 1979, Bill VanMelle's PhD dissertation at Stanford demonstrated the generality of MYCIN's knowledge representation and style of reasoning in his EMYCIN program, the model for many commercial expert-system "shells". That same year, the Stanford Cart, built by Hans Moravec, became the first computer-controlled autonomous vehicle. As the 80s progressed, the American Association for Artificial Intelligence formed and held its first conference at Stanford. Danny Hillis, future founder of Thinking Machines, designed the Connection Machine, a massively parallel architecture that brought new power to AI and to computation generally. By the mid eighties, neural networks had become very popular. By the start of the 90s there were major advances in all areas of AI development, including machine learning, intelligent tutoring, case-based reasoning, multi-agent planning, scheduling, uncertain reasoning, data mining, natural language understanding and translation, vision, virtual reality, and games.

In 1997, the Deep Blue chess program beat the reigning world chess champion, Garry Kasparov, in a widely followed match. By the late 90s, MIT's AI Lab had demonstrated an intelligent room and emotional agents, and work had begun on the Oxygen architecture, which helps connect mobile and stationary computers in an adaptive network. As of this writing, current applications of AI include game playing, speech recognition, natural language understanding, computer vision (which deals with three-dimensional objects), expert systems and heuristic classification.

There are a number of related branches of AI research. Logical AI deals with programs that decide what to do by concluding that certain actions are appropriate to achieving their goals, while search programs examine large numbers of possibilities, such as the moves in a chess game, and rely on ongoing discoveries to do so more efficiently. Pattern recognition and common-sense knowledge and reasoning are other branches. Inference research deals with the notion that, given some facts, other facts can be inferred. For example, if we hear of a bird we can assume it can fly, but the conclusion is reversed if we then hear it is a penguin; this is referred to as non-monotonic reasoning. Other research programs involve learning from experience and planning. Lastly, there is research into heuristic functions and heuristic predicates, which involve comparing two approaches to see whether one is better.
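The bird-and-penguin example above can be made concrete. In non-monotonic reasoning a conclusion drawn by default is retracted when new facts arrive -- the set of conclusions does not grow monotonically with the set of facts. A minimal sketch, with the rule names invented for illustration:

```python
def conclusions(facts):
    """Apply one default rule non-monotonically: a bird is assumed to fly
    unless the facts also say it is a penguin."""
    concluded = set(facts)
    if "bird" in concluded and "penguin" not in concluded:
        concluded.add("flies")
    return concluded

print(conclusions({"bird"}))             # includes 'flies' (default conclusion)
print(conclusions({"bird", "penguin"}))  # does NOT include 'flies'
```

Note the contrast with classical logic, where adding a fact can never remove a conclusion; here, learning "penguin" withdraws the earlier inference "flies".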

Artificial Intelligence and Virtual Reality


The X-Files episodes Kill Switch and First Person Shooter touch on the crossover of Artificial Intelligence with Virtual Reality. The term refers to a simulated environment in which you can immerse yourself; a virtual reality environment provides a convincing replacement for the visual and auditory senses. If a computer neural network could ever be fused with the biological neural network of our brains, then the scenario of Kill Switch might become plausible.

One area of focus that promises to integrate both mediums can be found in virtual communities. Projects are underway with companies based in Japan, such as Fujitsu's Habitat and Cyber City. Mitsubishi Electric Research Laboratories has research projects that combine human-computer interaction, social virtual reality, computer networks, artificial intelligence, multimedia, and human learning and development; its Diamond Park project is a social virtual reality environment. Sony's Computer Science Laboratories are focused on similar areas, and Sony has also established the Virtual Society Project. Commercial research in related areas is being conducted by 3DO in the United States, BT Laboratories in the UK, CompuServe, which allows members to build their own online virtual communities, and the Escot Corporation in Japan.

The twenty-first century promises some interesting developments, especially when you consider that the rapid growth of the World Wide Web could bring about the evolution of a new life form: living machines.

"Robo Sapiens: Evolution of a New Species" by Peter Menzel and Faith D'Aluisio, © 2000 Material World Books / MIT Press -- a fantastic and in-depth resource on this subject, recommended.

Page Editor: XScribe


