1.0 AN INTRODUCTION TO ARTIFICIAL INTELLIGENCE
Artificial Intelligence (AI) is the area of computer science focused on creating machines that can engage in behaviors that humans consider intelligent. The ability to create intelligent machines has intrigued humans since ancient times, and today, with the advent of the computer and fifty years of research into AI programming techniques, the dream of smart machines is becoming a reality. Researchers are creating systems that can mimic human thought, understand speech, beat the best human chess player, and perform countless other feats never before possible. Find out how the military is applying AI logic to its high-tech systems, and how, in the near future, Artificial Intelligence may impact our lives.
Artificial
Intelligence, or AI for short, is a combination of computer science,
physiology, and philosophy. AI is a broad topic, consisting of different
fields, from machine vision to expert systems. The element that the fields of
AI have in common is the creation of machines that can "think".
In order to
classify machines as "thinking", it is necessary to define
intelligence. To what degree does intelligence consist of, for example, solving
complex problems, or making generalizations and relationships? And what about
perception and comprehension? Research into the areas of learning, of language, and of sensory perception has aided scientists in building intelligent
machines. One of the most challenging approaches facing experts is building systems
that mimic the behavior of the human brain, made up of billions of neurons, and
arguably the most complex matter in the universe. Perhaps the best way to gauge
the intelligence of a machine is British computer scientist Alan
Turing's test. He stated that a computer would deserve to be called
intelligent if it could deceive a human into believing that it was human.
Artificial
Intelligence has come a long way from its early roots, driven by dedicated
researchers. The beginnings of AI reach back before electronics, to
philosophers and mathematicians such as Boole and others theorizing on principles
that were used as the foundation of AI logic. AI really began to intrigue researchers with the invention of the computer in 1941. The technology was
finally available, or so it seemed, to simulate intelligent behavior. Over the
next four decades, despite many stumbling blocks, AI grew from a dozen researchers to thousands of engineers and specialists, and from programs capable of playing checkers to systems designed to diagnose disease.
AI has always
been on the pioneering end of computer science. Advanced-level computer languages,
as well as computer interfaces and word processors, owe their existence to the
research into artificial intelligence. The theory and insights brought about by
AI research will set the trend in the future of computing. The products
available today are only bits and pieces of what is soon to follow, but they represent a movement towards the future of artificial intelligence. The advancements in the quest for artificial intelligence have affected, and will continue to affect, our jobs, our education, and our lives.
2.0 THE HISTORY OF ARTIFICIAL INTELLIGENCE
[Figure: Timeline of major AI events]
Folklore about artificial intelligence can be traced back to ancient Egypt, but with the development of the electronic computer in 1941, the technology finally became available to create machine intelligence. The term artificial intelligence was first coined in 1956, at the Dartmouth conference, and since then Artificial Intelligence has expanded because of the theories and principles developed by its dedicated researchers. Although through its short modern history advancement in the field of AI has been slower than first estimated, progress continues to be made. Since its birth four decades ago, there have been a variety of AI programs, and they have impacted other technological advancements.
The Era of the Computer:
In 1941 an invention revolutionized every aspect of
the storage and processing of information. That invention, developed in both
the US and Germany, was the electronic computer. The first computers required large, separate, air-conditioned rooms and were a programmer's nightmare, involving the separate configuration of thousands of wires just to get a
program running.
The 1949 innovation, the stored-program computer, made the job of entering a program easier, and advancements in computer theory led to computer science, and eventually Artificial Intelligence. With the invention
of an electronic means of processing data, came a medium that made AI possible.
Although the computer provided the technology
necessary for AI, it was not until the early 1950's that the link between human
intelligence and machines was really observed. Norbert Wiener was one of the first Americans to make observations on the principle of feedback theory. The most familiar example of feedback theory is the thermostat: it controls the temperature of an environment by gathering the actual temperature of the house, comparing it to the desired temperature, and responding by turning the heat up or down. What was so important about his research into feedback loops was that Wiener theorized that all intelligent behavior was the result of feedback mechanisms, mechanisms that could possibly be simulated by machines. This discovery influenced much of the early development of AI.
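Wiener's feedback loop is easy to sketch in code. The following Python fragment is only an illustration; the temperatures and the adjustment step are invented values, not anything from Wiener's work.

```python
# A minimal sketch of a feedback loop in the style of the thermostat
# example. The temperatures and adjustment step are illustrative values.

def thermostat_step(actual_temp: float, desired_temp: float) -> float:
    """Compare the measured temperature to the goal and respond."""
    error = desired_temp - actual_temp   # the feedback signal
    if error > 0.5:
        return +1.0   # too cold: turn the heat up
    elif error < -0.5:
        return -1.0   # too warm: turn the heat down
    return 0.0        # close enough: do nothing

temp = 15.0
for _ in range(10):
    temp += thermostat_step(temp, desired_temp=20.0)  # environment reacts
print(round(temp, 1))  # settles near the desired 20.0
```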
In late 1955,
Newell and Simon developed The Logic Theorist, considered by many to be
the first AI program. The program, representing each problem as a tree model,
would attempt to solve it by selecting the branch that would most likely result
in the correct conclusion. The impact that the Logic Theorist made on both the public and the field of AI has made it a crucial stepping stone in the development of the field.
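The tree-model strategy can be pictured as a best-first search: at every node, follow the branch judged most likely to lead to the conclusion. The toy Python sketch below invents both the tree and the likelihood scores; it illustrates the selection idea, not the Logic Theorist itself.

```python
# Toy best-first search over a problem tree, in the spirit of always
# pursuing the most promising branch. Tree and scores are invented.

tree = {
    "start":   [("lemma_a", 0.9), ("lemma_b", 0.4)],
    "lemma_a": [("goal", 0.8), ("dead_end", 0.1)],
    "lemma_b": [("dead_end", 0.5)],
}

def best_first(node: str) -> list[str]:
    path = [node]
    while node in tree:
        # Select the child branch most likely to reach the conclusion.
        node, _score = max(tree[node], key=lambda child: child[1])
        path.append(node)
    return path

print(best_first("start"))  # ['start', 'lemma_a', 'goal']
```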
In 1956 John McCarthy, regarded as the father of AI, organized a conference to draw the talent and expertise of others interested in machine intelligence for a month of brainstorming. He invited them to Hanover, New Hampshire, for "The Dartmouth summer research project on artificial intelligence." From that point on, because of McCarthy, the field would be known as Artificial Intelligence. Although not a huge success (it produced few concrete results beyond the name), the Dartmouth conference did bring together the founders of AI, and served to lay the groundwork for the future of AI research.
Knowledge Expansion
In the seven years after the conference, AI began to
pick up momentum. Although the field was still undefined, ideas formed at the
conference were re-examined, and built upon. Centers for AI research began
forming at Carnegie Mellon and MIT, and new challenges were faced: first, creating systems that could efficiently solve problems by limiting the search, as the Logic Theorist had done; and second, making systems that could learn by themselves.
In 1957, the first version of a new program, the General Problem Solver (GPS), was tested. The program was developed by the same pair that had developed the Logic Theorist. The GPS was an extension of Wiener's feedback principle, and was capable of solving a wider range of common-sense problems. A couple of years after the GPS, IBM contracted a team to research artificial intelligence; Herbert Gelernter spent three years working on a program for solving geometry theorems.
While more programs were being
produced, McCarthy was busy developing a major breakthrough in AI history. In 1958 McCarthy announced his new development: the LISP language, which is still used today. LISP stands for LISt Processing, and was soon adopted as the
language of choice among most AI developers.
In 1963 MIT received a 2.2-million-dollar grant from the United States government to be used in researching Machine-Aided Cognition (artificial intelligence). The grant, made by the Department of Defense's Advanced Research Projects Agency (ARPA), was intended to ensure that the US would stay ahead of the Soviet Union in technological advancements. The project served to increase the pace of development in AI research by drawing computer scientists from around the world and by providing continued funding.
The next few years showed a multitude of programs, one notable example being SHRDLU. SHRDLU was part of the micro-worlds project, which consisted of research and programming in small worlds (such as with a limited number of geometric shapes). The MIT researchers, headed by Marvin Minsky, demonstrated
that when confined to a small subject matter, computer programs could solve
spatial problems and logic problems. Other programs which appeared during the
late 1960's were STUDENT, which could solve algebra story problems, and SIR
which could understand simple English sentences. The result of these programs
was a refinement in language comprehension and logic.
Another advancement in the 1970's was the advent of
the expert system. Expert systems predict the probability of a solution under
set conditions. For example:
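A minimal illustrative sketch in Python: the cases, the symptom names, and the confidence threshold below are all invented, but they show how a system with stored statistics might formulate a conditional rule.

```python
# Illustrative only: turning simple statistics into a conditional rule,
# the way early expert systems encoded knowledge. The cases and the
# threshold are invented for this sketch.

cases = [
    {"fever": True,  "flu": True},
    {"fever": True,  "flu": True},
    {"fever": True,  "flu": False},
    {"fever": False, "flu": False},
]

with_fever = [c for c in cases if c["fever"]]
p_flu_given_fever = sum(c["flu"] for c in with_fever) / len(with_fever)

# If the conditional probability is high enough, store it as a rule.
if p_flu_given_fever >= 0.5:
    print(f"RULE: IF fever THEN flu (confidence {p_flu_given_fever:.2f})")
```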
Because of the large storage capacity of computers at the time, expert systems had the potential to interpret statistics and formulate rules. The applications in the marketplace were extensive, and over the course of ten years, expert systems were introduced to forecast the stock market, aid doctors in diagnosing disease, and direct miners to promising mineral locations. This was made possible by the systems' ability to store conditional rules and a large store of information.
During the 1970's many new methods in the development of AI were tested, notably Minsky's frames theory. David Marr also proposed new theories about machine vision: for example, how it would be possible to distinguish an image based on the shading of the image, basic information on shapes, color, edges, and texture. With analysis of this information, frames of what an image might be could then be referenced. Another development during this time was the PROLOG language, proposed in 1972.
During the 1980's AI moved at a faster pace, and further into the corporate sector. In 1986, US sales of AI-related hardware and software surged to $425 million. Expert systems were in particular demand because of their efficiency. Companies such as Digital Equipment Corporation were using XCON, an expert system designed to configure the large VAX computers. DuPont, General Motors, and Boeing relied heavily on expert systems. Indeed, to keep up with the demand for computer experts, companies such as Teknowledge and Intellicorp, specializing in creating software to aid in producing expert systems, were formed. Other expert systems were designed to find and correct flaws in existing expert systems.
The impact of computer technology, AI included, was being felt. No longer was computer technology confined to a select few researchers in laboratories. The personal computer made its debut, along with many technological magazines. Organizations such as the American Association for Artificial Intelligence were also founded. There was also, with the demand for AI development, a push for researchers to join private companies. Some 150 companies, among them DEC with its 700-person AI research group, spent a combined $1 billion on internal AI groups.
Other fields of AI also made their way into the marketplace during the 1980's. One in particular was the machine vision field. The work by Minsky and Marr was now the foundation for the cameras and computers on assembly lines performing quality control. Although crude, these systems could distinguish differences in shapes using black-and-white contrast. By 1985 over a hundred companies offered machine vision systems in the US, and sales totaled $80 million.
The 1980's were not totally good for the AI industry. In 1986-87 the demand for AI systems decreased, and the industry lost almost half a billion dollars. Companies such as Teknowledge and Intellicorp together lost more than $6 million, about a third of their total earnings. The large losses convinced many research leaders to cut back funding. Another disappointment was the so-called "smart truck" financed by the Defense Advanced Research Projects Agency. The project's goal was to develop a robot that could perform many battlefield tasks. In 1989, due to project setbacks and unlikely success, the Pentagon cut funding for the project.
Despite these discouraging events, AI slowly recovered. New technology was being developed in Japan. Fuzzy logic, first pioneered in the US, has the unique ability to make decisions under uncertain conditions. Neural networks were also being reconsidered as a possible way of achieving Artificial Intelligence. The 1980's introduced AI to its place in the corporate marketplace and showed that the technology had real-life uses, ensuring it would be a key in the 21st century.
The military put AI-based hardware to the test of war during Desert Storm. AI-based technologies were used in missile systems, heads-up displays, and other advancements. AI has also made the transition to the home. With the popularity of the AI computer growing, the interest of the public has also grown. Applications for the Apple Macintosh and IBM-compatible computers, such as voice and character recognition, have become available. AI technology has also made steadying camcorders simple, using fuzzy logic. With a greater demand for AI-related technology, new advancements are becoming available. Inevitably, Artificial Intelligence has affected, and will continue to affect, our lives.
3.0 METHODS USED TO CREATE INTELLIGENCE
Introduction
In the quest to
create intelligent machines, the field of Artificial Intelligence has split
into several different approaches based on opinions about the most promising methods and theories. These rival theories have led researchers down one of two basic approaches: bottom-up and top-down. Bottom-up theorists
believe the best way to achieve artificial intelligence is to build electronic
replicas of the human brain's complex network of neurons, while the top-down
approach attempts to mimic the brain's behavior with computer programs.
Neural Networks and Parallel Computation
The neuron "firing",
passing a signal to the next in the chain.
Research has shown that a signal received by a neuron travels through the dendrite region, and down the axon. Separating nerve cells is a gap called the synapse. In order for the signal to be transferred
to the next neuron, the signal must
be converted from electrical to chemical energy. The signal can then be
received by the next neuron and processed.
Warren McCulloch, after completing medical school at Yale, along with Walter Pitts, a mathematician, proposed a hypothesis to explain the fundamentals of how neural networks made the brain work. Based on experiments with neurons, McCulloch and Pitts showed that neurons might be considered devices for processing binary numbers. An important element of mathematical logic, binary numbers (represented as 1's and 0's, or true and false) were also the basis of the electronic computer. This link is the basis of computer-simulated neural networks, also known as parallel computing.
The true/false nature of binary numbers had been theorized a century earlier, in 1854, by George Boole in his postulates concerning the Laws of Thought. Boole's principles make up what is known as Boolean algebra, the collection of logic concerning the AND, OR, and NOT operands. For example, according to the Laws of Thought (for this example, consider all apples red):
- Apples are red -- is True
- Apples are red AND oranges are purple -- is False
- Apples are red OR oranges are purple -- is True
- Apples are red AND oranges are NOT purple -- is also True
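These four statements translate directly into Boolean expressions. A small Python sketch (the variable names are ours) evaluates them exactly as the Laws of Thought would:

```python
# The four statements above, evaluated as Boolean expressions.
apples_are_red = True      # by the example's assumption
oranges_are_purple = False

print(apples_are_red)                               # True
print(apples_are_red and oranges_are_purple)        # False
print(apples_are_red or oranges_are_purple)         # True
print(apples_are_red and not oranges_are_purple)    # True
```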
Boole also assumed that the human mind works according to these laws: it performs logical operations that can be reasoned about. Ninety years later, Claude Shannon applied Boole's principles to circuits, the blueprint for electronic computers. Boole's contribution to the future of computing
and Artificial Intelligence was immeasurable, and his logic is the basis of
neural networks.
Using this theory, McCulloch and Pitts then designed
electronic replicas of neural networks, to show how electronic networks could
generate logical processes. They also stated that neural networks might, in the future, be able to learn and recognize patterns. The results of their research and two of Wiener's books served to increase enthusiasm, and laboratories of computer-simulated neurons were set up across the country.
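The McCulloch-Pitts unit itself is simple enough to sketch: binary inputs, a weighted sum, and a threshold. The weights and threshold below are illustrative; wired this way, the unit behaves as an AND gate.

```python
# A McCulloch-Pitts style neuron: binary inputs, a threshold, and a
# binary output. Weights and threshold are illustrative; with these
# values the unit computes logical AND of its two inputs.

def mcculloch_pitts(inputs: list[int], weights: list[int], threshold: int) -> int:
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mcculloch_pitts([a, b], weights=[1, 1], threshold=2))
# Only (1, 1) fires: the neuron behaves as an AND gate.
```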
Two major factors have inhibited the development of full-scale neural networks. The first is cost: building a machine to simulate neurons was so expensive that even a neural network with the number of neurons found in an ant was out of reach. Although the cost of components has decreased, the computer would have to grow thousands of times larger to be on the scale of the human brain. The second factor is current computer architecture. The standard von Neumann computer, the
architecture of nearly all computers, lacks an adequate number of pathways
between components. Researchers are now developing alternate architectures for
use with neural networks.
Even with these inhibiting factors, artificial neural
networks have presented some impressive results. Frank Rosenblatt,
experimenting with computer-simulated networks, was able to create a machine that could mimic the human thinking process and recognize letters. But, with
new top-down methods becoming popular, parallel computing was put on hold. Now
neural networks are making a return, and some researchers believe that with new
computer architectures, parallel computing and the bottom-up theory will be a
driving factor in creating artificial intelligence.
Top-Down Approaches: Expert Systems
Because of the large storage capacity of computers,
expert systems had the potential to interpret statistics, in order to formulate
rules. An expert system works much like a detective solving a mystery: using the information, and logic or rules, an expert system can solve the problem. For example, if the expert system was designed to distinguish birds, it might have the following:
[Figure: a decision chart of IF-THEN rules for distinguishing birds]
Charts like these represent the logic of expert
systems. Using a similar set of rules, expert systems can have a variety of applications. With improved interfacing, computers may begin to find a larger
place in society.
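A sketch of such a rule chain in Python; the specific bird rules here are invented for illustration, but the IF-THEN structure is the one such charts express.

```python
# A hedged reconstruction of a bird-identification rule chain; the
# rules are invented, but the chained IF-THEN structure is the point.

def identify(facts: set[str]) -> str:
    if "has_feathers" in facts and "lays_eggs" in facts:
        if "cannot_fly" in facts and "swims" in facts:
            return "penguin"
        if "sings" in facts and "small" in facts:
            return "songbird"
        return "bird (unspecified)"
    return "not a bird"

print(identify({"has_feathers", "lays_eggs", "cannot_fly", "swims"}))  # penguin
```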
Chess
AI-based game-playing programs combine intelligence with entertainment. One game with strong AI ties is chess. World-champion chess-playing programs can see more than twenty moves ahead for each move they make. In addition, the programs get progressively better over time because of their ability to learn. Chess programs do not play chess as humans do. In three minutes, Deep Thought (a master program) considers 126 million moves, while a human chess master on average considers fewer than two. Herbert Simon suggested that human chess masters are familiar with favorable board positions, and the relationships among thousands of pieces in small areas. Computers, on the other hand, do not take hunches into account. The next move comes from exhaustive searches into all moves, and the consequences of the moves, based on prior learning. Chess programs running on Cray supercomputers have attained a rating of 2600 (senior master), in the range of Garry Kasparov, the Russian world champion.
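The exhaustive look-ahead described above is, at its core, a minimax search. The toy Python version below uses a hand-made two-move game tree with invented leaf scores; real chess programs search millions of positions instead.

```python
# A toy minimax search: the exhaustive look-ahead shrunk to a
# hand-made game tree. Leaf scores are invented for illustration.

def minimax(node, maximizing: bool):
    if isinstance(node, (int, float)):   # a leaf: a scored position
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two of our moves, each answered by two opponent replies:
game_tree = [[3, 5], [2, 9]]
print(minimax(game_tree, maximizing=True))  # 3: best guaranteed outcome
```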
Frames
One method that many programs use to represent knowledge is frames. Pioneered by Marvin Minsky, frame theory
revolves around packets of information. For example, say the situation was a
birthday party. A computer could call on its birthday frame, and use the
information contained in the frame, to apply to the situation. The computer
knows that there is usually cake and presents because of the information
contained in the knowledge frame. Frames can also overlap, or contain
sub-frames. The use of frames also allows the computer to add knowledge.
Although not embraced by all AI developers, frames have been used in
comprehension programs such as SAM.
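A frame can be sketched as a packet of default knowledge, for example as nested Python dictionaries. The contents of this birthday frame are illustrative.

```python
# A frame as nested dictionaries: a packet of default knowledge the
# program can call on, overlap, and extend. Contents are illustrative.

birthday_frame = {
    "is_a": "party",
    "has": ["cake", "presents", "guests"],
    "sub_frames": {
        "cake": {"has": ["candles"], "action": "blow out candles"},
    },
}

# The computer "knows" there is usually cake because the frame says so:
print("cake" in birthday_frame["has"])          # True
birthday_frame["has"].append("balloons")        # frames can add knowledge
print(birthday_frame["has"])
```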
4.0 WHAT WE CAN DO WITH AI
We have been studying this issue of AI application for quite some time now and know all the terms and facts. But what we all really need to know is what we can do to get our hands on some AI today. How can we as individuals use our own technology? We hope to discuss this in depth (but as briefly as possible) so that you, the consumer, can use AI as it is intended.

First, we should be prepared for a change. Our conservative ways stand in the way of progress. AI is a new step that is very helpful to society. Machines can do jobs that require detailed instructions followed and mental alertness. AI, with its learning capabilities, can accomplish those tasks, but only if the world's conservatives are ready to change and allow this to be a possibility. It makes us think about how early man finally accepted the wheel as a good invention, not something taking away from his heritage or tradition.
Secondly, we must be prepared to learn about the capabilities of AI. The more use we get out of the machines, the less work is required of us; in turn, fewer injuries and less stress for human beings. Human beings are a species that learns by trying, and we must be prepared to give AI a chance, seeing it as a blessing, not an inhibition.
Finally, we need to be prepared for the worst of AI. Something as revolutionary as AI is sure to have many kinks to work out. There is always that fear: if AI is learning-based, will machines learn that being rich and successful is a good thing, and then wage war against economic powers and famous people? There are so many things that can go wrong with a new system, so we must be as prepared as we can be for this new technology.
However, even though the fear of the machines is there, their capabilities are infinite. Whatever we teach AI, it will suggest in the future if a positive outcome arises from it. AI is like a child that needs to be taught to be kind, well mannered, and intelligent. If it is to make important decisions, it should be wise. We as citizens need to make sure AI programmers are keeping things on the level. We should be sure they are doing the job correctly, so that no future accidents occur.
AIAI: Teaching Computers
AUSDA is a program which will examine software to see if it is capable of handling the tasks you need performed. If it isn't able or isn't reliable, AUSDA will instruct you on finding alternative software which would better suit your needs. According to AIAI, the software will try to provide solutions to problems like "identifying the root causes of incidents in which the use of computer software is involved, studying different software development approaches, and identifying aspects of these which are relevant to those root causes producing guidelines for using and improving the development approaches studied, and providing support in the integration of these approaches, so that they can be better used for the development and maintenance of safety critical software."
Sure, for the computer buffs this program is definitely good news. But what about the average person who thinks the mouse is just the computer's foot pedal? Where do they fit into computer technology? Well, don't worry guys, because us nerds are looking out for you too! Just ask AIAI what they have for you, and it turns out that EGRESS is right up your alley. This is a program which is studying human reactions to accidents. It is trying to make a model of how people's reactions in panic moments save lives. Although it seems like in tough situations humans would fall apart and have no idea what to do, it is in fact the opposite. Quick decisions are usually made and are effective, but not flawless. These computer models will help rescuers make smart decisions in times of need. AI can't be positive all the time, but it can suggest actions which we can act out and which therefore lead to safe rescues.
So AIAI is teaching computers to be better computers and better people. AI technology will never replace man but can be an extension of our body which allows us to make more rational decisions faster. And with institutes like AIAI, we continue each day to step forward into progress.
No worms in these Apples by Adam Dyess
All Power Macintoshes come with Speech Recognition. That's right: you tell the computer to do what you want without it having to learn your voice. This application of AI in personal computers is still very crude, but it does work given the correct conditions to work in and a clear voice, not to mention the requirement of at least 16 MB of RAM for quick use. Also, Apple's Newton and other hand-held note pads have script recognition. Cursive or print can be recognized by these notepad-sized devices. With the pen that accompanies your silicon note pad, you can write a little note to yourself which magically changes into computer text if desired. No more complaining about sloppily written reports if your computer can read your handwriting. If it can't read it, though, perhaps in the future you can correct it by dictating your letters instead.
Macros provide a huge stress relief, as your computer does quickly what you could do more tediously. Macros are old, but they are, to an extent, intelligent: you have taught the computer to do something by doing it only once. In businesses, applications are often upgraded, but the files must be converted: all of the business's records must be changed into the new software's format. Macros save a human the work of converting hundreds of files by teaching the computer to mimic the actions of the programmer, thus teaching the computer a task that it can repeat whenever ordered to do so.
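The record-once, replay-many idea behind macros can be sketched in a few lines of Python. The recorded "actions" here are print placeholders, not a real conversion tool.

```python
# The macro idea in miniature: record a sequence of actions once,
# then replay it over many files. Actions are placeholders.

recorded_macro = []

def record(action, *args):
    recorded_macro.append((action, args))

def replay(target):
    for action, args in recorded_macro:
        action(target, *args)

def open_file(name): print(f"open {name}")
def convert(name, fmt): print(f"convert {name} to {fmt}")

record(lambda t: open_file(t))
record(lambda t, fmt: convert(t, fmt), "new-format")

for filename in ["records1.dat", "records2.dat"]:
    replay(filename)   # the taught task repeats on demand
```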
AI is all around us, so get ready for a change. But don't think the change will be harder on us, because AI has been developed to make our lives easier.
The Scope of Expert Systems
As stated in the 'approaches'
section, an expert system is able to do the work of a professional. Moreover, a
computer system can be trained quickly, has virtually no operating cost, never
forgets what it learns, never calls in sick, retires, or goes on vacation. Beyond
those, intelligent computers can consider a large amount of information that
may not be considered by humans.
But to what
extent should these systems replace human experts? Or, should they at all? For
example, some people once considered an intelligent computer as a possible
substitute for human control over nuclear weapons, citing that a computer could
respond more quickly to a threat. And many AI developers were afraid of the
possibility of programs like Eliza, the computer psychiatrist, and the bond that humans were making with the computer. We cannot, however, overlook the benefits of having a computer expert. Forecasting the weather, for example, relies on many variables, and a computer expert can more accurately pool all of its knowledge. Still, a computer cannot rely on the hunches of a human expert, which are sometimes necessary in predicting an outcome.
In conclusion, in some fields, such as forecasting weather or finding bugs in computer software, expert systems are sometimes more accurate than humans. But in other fields, such as medicine, computers aiding doctors will be beneficial, but the human doctor should not be replaced. Expert systems have the power and range to aid, and in some cases replace, humans; if used with discretion, computer experts will benefit humankind.
5.0 BRANCHES OF AI
logical AI
What a program knows about the world in general, the facts of the specific situation in which it must act, and its goals are all represented by sentences of some mathematical logical language. The program decides what to do by inferring that certain actions are appropriate for achieving its goals. The first article proposing this was [McC59]. [McC89] is a more recent summary. [McC96b] lists some of the concepts involved in logical AI. [Sha97] is an important text.
search
AI programs often examine large numbers of
possibilities, e.g. moves in a chess game or inferences by a theorem proving
program. Discoveries are continually made about how to do this more efficiently
in various domains.
pattern recognition
When a program makes observations of some kind, it is
often programmed to compare what it sees with a pattern. For example, a vision
program may try to match a pattern of eyes and a nose in a scene in order to
find a face. More complex patterns, e.g. in a natural language text, in a chess
position, or in the history of some event are also studied. These more complex
patterns require quite different methods than do the simple patterns that have
been studied the most.
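The simplest case, matching a fixed pattern, can be sketched as sliding a small binary template over a binary image. Both grids below are invented for illustration; real vision systems work with far richer patterns.

```python
# The simplest kind of pattern matching: slide a binary template over
# a binary "image" and report where it fits exactly. Grids invented.

image = [
    [0, 0, 0, 0],
    [0, 1, 0, 1],   # two "eyes"
    [0, 0, 1, 0],   # a "nose"
    [0, 0, 0, 0],
]
template = [
    [1, 0, 1],
    [0, 1, 0],
]

def find_pattern(image, template):
    th, tw = len(template), len(template[0])
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            if all(image[r + i][c + j] == template[i][j]
                   for i in range(th) for j in range(tw)):
                yield (r, c)

print(list(find_pattern(image, template)))  # [(1, 1)]
```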
representation
Facts about the world have to be represented in some
way. Usually languages of mathematical logic are used.
inference
From some facts, others can be inferred. Mathematical
logical deduction is adequate for some purposes, but new methods of non-monotonic
inference have been added to logic since the 1970s. The simplest kind of
non-monotonic reasoning is default reasoning in which a conclusion is to be
inferred by default, but the conclusion can be withdrawn if there is evidence
to the contrary. For example, when we hear of a bird, we may infer that it can fly, but this conclusion can be reversed when we hear that it is a penguin. It
is the possibility that a conclusion may have to be withdrawn that constitutes
the non-monotonic character of the reasoning. Ordinary logical reasoning is
monotonic in that the set of conclusions that can be drawn from a set of premises is a monotonically increasing function of the premises. Circumscription is
another form of non-monotonic reasoning.
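The bird example can be sketched as default reasoning in a few lines of Python; the encoding of the facts is ours.

```python
# Default reasoning in miniature: conclude "can fly" by default, but
# withdraw the conclusion when contrary evidence (penguin) arrives.
# Adding a fact removes a conclusion: that is the non-monotonicity.

def can_fly(facts: set[str]) -> bool:
    if "is_penguin" in facts:      # evidence to the contrary
        return False
    return "is_bird" in facts      # the default conclusion

print(can_fly({"is_bird"}))                 # True: birds fly by default
print(can_fly({"is_bird", "is_penguin"}))   # False: conclusion withdrawn
```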
common sense knowledge and reasoning
This is the area in which AI is farthest from
human-level, in spite of the fact that it has been an active research area
since the 1950s. While there has been considerable progress, e.g. in developing
systems of non-monotonic reasoning and theories of action, yet more new
ideas are needed. The Cyc system contains a large but spotty collection of
common sense facts.
learning from experience
Programs do that. The approaches to AI based on connectionism
and neural nets specialize in that. There is also learning of laws
expressed in logic. [Mit97] is a comprehensive
undergraduate text on machine learning. Programs can only learn what facts or
behaviors their formalisms can represent, and unfortunately learning systems
are almost all based on very limited abilities to represent information.
planning
Planning programs start with general facts about the
world (especially facts about the effects of actions), facts about the
particular situation and a statement of a goal. From these, they generate a
strategy for achieving the goal. In the most common cases, the strategy is just
a sequence of actions.
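A minimal sketch of such a planner in Python: breadth-first search over world states, with invented actions described by preconditions and effects. It returns the sequence of actions that achieves the goal.

```python
# A minimal planner: search over world states using invented actions
# (precondition set, effect set), returning a sequence of actions.

from collections import deque

actions = {
    "pick_up_key": ({"at_door"}, {"has_key"}),
    "unlock_door": ({"has_key"}, {"door_open"}),
    "enter_room":  ({"door_open"}, {"in_room"}),
}

def plan(start: frozenset, goal: str):
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, steps = queue.popleft()
        if goal in state:
            return steps
        for name, (pre, effect) in actions.items():
            if pre <= state:                      # preconditions hold
                new_state = frozenset(state | effect)
                if new_state not in seen:
                    seen.add(new_state)
                    queue.append((new_state, steps + [name]))
    return None

print(plan(frozenset({"at_door"}), "in_room"))
# ['pick_up_key', 'unlock_door', 'enter_room']
```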
epistemology
This is a study of the kinds of knowledge that are
required for solving problems in the world.
ontology
Ontology is the study of the kinds of things that
exist. In AI, the programs and sentences deal with various kinds of objects,
and we study what these kinds are and what their basic properties are. Emphasis
on ontology begins in the 1990s.
heuristics
A heuristic is a way of trying to discover something, or an idea embedded in a program. The term is used variously in AI. Heuristic
functions are used in some approaches to search to measure how far a node
in a search tree seems to be from a goal. Heuristic predicates that
compare two nodes in a search tree to see if one is better than the other, i.e.
constitutes an advance toward the goal, may be more useful.
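A heuristic function in the search sense can be as simple as a cheap distance estimate used to rank nodes. The grid coordinates and goal below are invented for illustration.

```python
# A heuristic function: a cheap estimate of how far a node is from
# the goal, used to rank which node to expand next. Values invented.

def manhattan(node: tuple, goal: tuple) -> int:
    """Estimated remaining distance on a grid."""
    return abs(node[0] - goal[0]) + abs(node[1] - goal[1])

frontier = [(0, 0), (2, 3), (4, 1)]
goal = (4, 2)
# Expand the node the heuristic judges closest to the goal first:
print(min(frontier, key=lambda n: manhattan(n, goal)))  # (4, 1)
```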
genetic programming
Genetic programming is a technique for getting programs to solve a task by mating random Lisp programs and selecting the fittest over millions of generations.
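A heavily simplified sketch of the idea in Python: instead of mating full Lisp program trees, this toy version only mutates the two coefficients of a linear expression, but the score-select-mutate loop is the same. All numbers are illustrative.

```python
# Genetic programming, heavily simplified: evolve coefficients of a
# linear expression toward a target by mutation and selection.

import random
random.seed(0)

def target(x):
    return 3 * x + 7            # the behavior we want to evolve

def fitness(a, b):              # lower is better
    return sum(abs((a * x + b) - target(x)) for x in range(10))

population = [(random.randint(-9, 9), random.randint(-9, 9)) for _ in range(20)]
for _ in range(50):
    population.sort(key=lambda p: fitness(*p))
    survivors = population[:5]          # select the fittest
    offspring = [(a + random.choice((-1, 0, 1)), b + random.choice((-1, 0, 1)))
                 for a, b in random.choices(survivors, k=15)]
    population = survivors + offspring  # the next generation

best = min(population, key=lambda p: fitness(*p))
print(best)  # drifts toward (3, 7)
```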