

Project Outline

Scope: The term AIS may be used to cover a broad range of topics, from simple expert systems to very complex ones, and from mechanical applications in industrial production to advanced problem-solving capabilities in weather prediction or the prediction of economic behavior or the stock market. The focus of this project should be on the most advanced developments in the field for the most sophisticated purposes, particularly those relating to human behavior and social problem solving, areas in which AIS come closest to mimicking or substituting for the conscious intervention of human beings, or to exceeding their intellectual capabilities. Mechanical applications to drive cars, in industrial or domestic robotics, etc., may be mentioned in passing but are not the focus. The focus is on the capacity for decision making as it relates to predicting or improving the functioning of human social systems. Other aspects may be mentioned in passing and simply listed in the history to show their place in the overall scheme.

Key Themes: Consciousness and decision-making by AIS

  • Typical questions: To what extent can AIS be used effectively to:
  • Make money?
  • Solve political problems between nations?
  • Identify economic opportunities?
  • Pick compatible marriage partners?
  • Specific Topics:
  1. Theories of artificial intelligence (more detail welcome here) – particularly theories relating to consciousness in computers and their capacity to understand complex human behavior and to make complex decisions that are better than those made by human beings – different schools of thought, their proponents, and their main arguments and the limitations of or challenges to those arguments
  2. Outline a brief history of artificial intelligence with significant dates, events and persons
  3. Main proponents of different theories
  4. Largest and most important AIS conferences and associations in the world, including the most important AIS events that took place around 2000 marking the transition to a new millennium
  5. Notable opponents of AIS who believe it can never approximate human intelligence
  6. Notable accomplishments and successes of artificial intelligence
  7. Use of AIS to make money – their application in computerized trading, economic modeling, investment banking or any other such field
  8. Notable limitations of AIS in replacing the consciousness and decision-making of human beings, with examples of small, simple tasks of discrimination that are difficult for computers. E.g. one project says it has developed computers with the intelligence of a 7-year-old by programming thousands of simple rules, such as that a physical object cannot be in two places at once. Illustration of obvious limitations or difficulties for a computer in doing what human beings easily do.
  9. Types of expertise in AIS – what different types of professionals are employed in designing an AIS system, e.g. mathematicians, systems analysts, psychologists, etc.?
  10. Leading countries, leading institutes and leading companies in the field of AIS worldwide – five each is OK, but if USA, Israel, Russia, India are not in the list, please cover these countries as well.
  11. Most advanced institutes in Russia and the type of work they have focused on. Also significant developments and applications of AIS by the Russians in the 1970s and 1980s.
  12. Mathematics of AIS – how do computers convert life situations into mathematical formulas? What are the common concepts and technical terms employed in this field? Are there any famous mathematical theorems or formulas responsible for breakthroughs in AIS? Names of famous mathematicians past and present who have influenced developments in this field? Is there any mention of Srinivasa Ramanujam in this connection?
  13. Any major books proclaiming the dawn of a new age based on AIS
  14. Future of AIS – what are the most advanced conceptions of the role of AIS in the future of humanity
  15. Religious protests or groups opposed to AIS on philosophic grounds – even the most extremist or irrational expressions by fringe groups are relevant here. Have there ever been violent protests or efforts to sabotage the development of AIS?
  16. Components of AIS – what are the main components of an AIS system, their technical terms and their functions?
  17. Application of AIS in peace making or peace building, war avoidance, defense and military strategy
  18. Applications of AIS to deal with qualitative rather than quantitative issues. Calculating prices or growth rates is quantitative. Any examples where it has been applied in relation to qualitative experiences such as computerized dating, customer satisfaction, enjoyment or happiness, selection or recruitment of suitable people based on qualities and compatibility, social harmony, beauty, etc.
  19. Role of grids, internet and supercomputing in the development of AIS. Grid networking increasing processing power at lower cost. Does it also enhance the intelligence capabilities of AIS? Is there any example of a distributed AIS system that gains knowledge and experience by sharing the results over a network?
  20. AIS and mysticism – use of AIS in astrology, numerology or any other arcane field associated with mystical or occult traditions or secret societies.
  21. Computer security for AIS systems – worst examples of efforts to misuse, corrupt or infect AIS systems maliciously or for monetary gain, if any.
  22. AIS in science fiction – brief reference to the most advanced examples of AIS in fantasy, e.g. the film 2001: A Space Odyssey
  23. Application of AIS in business decision-making
  24. Application of AIS in computer games, video games, war games and other simulations
  25. Application of AIS in psychological profiling, e.g. computer matching of people
  26. Application of AIS in politics and election forecasting

Research Information

Artificial Intelligence Systems (AIS)

Consciousness and decision-making by AIS

Consciousness

The fundamental assumption of Artificial Intelligence (AI) as a research program is that human minds operate on computational principles, and its grand goal is to build material artifacts that genuinely possess the very same mental capacities that human beings have. Many contemporary philosophers believe that while the Computational/Representational Theory of Mind (CRTM) can in principle give a full account of thinking, believing, planning, intending, judging, and the like, the explanation of qualitative aspects of the mind — such as colour, sensations, feelings of cold and warmth, and tickles and pains (and perhaps feelings of sadness, anger, and joy, as well) — lies beyond the reaches of any such theory. Whether or not AI researchers agree with philosophers on the discrepancy between the prospects of explaining consciousness versus explaining intentionality and rationality, it is a fact that most of the work in AI research so far has heavily focused on the latter issues, and hardly ever on the former.

Decision Making

A new sequential decision-making model could be key to artificial intelligence. "Decision making is everywhere, and not just with humans. Animals use it, and robots do. But the traditional approach to decision making is too simple." The idea behind sequential decision making is fairly simple: if an intelligence has to decide between two items, something will follow, based on the decision made. "In the traditional, simplistic model, the decision maker has to answer a simple question — left or right, for instance — choosing between two attractors." This results in a simple "if-then" equation. However, when real decision making is in question, there is more than a simple "if-then" at work. "In reality," the researcher says, "it's much more complex and interesting. If we are going to create an intelligent brain for a robot, we have to think of these independent elements." Basically, a process is needed for modeling a robot brain that could work like a human or animal brain. This model is a step toward getting there: "We have to be able to answer these questions in a qualitative way."

“If you are talking about an animal trying to flee from a predator, you have to look at the complex landscape. The prey has more than one decision to make. It has to decide which way to go not only at this moment, but at each moment for the length of its life. And there are other factors that influence the decision as well.” “The same is with a robot that we put on another planet. It has to make a decision at many critical moments, not only about direction, but also about speed, whether or not to go, and other decisions at each decision making moment.”

Source: http://www.physorg.com/news82190531.html
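The contrast the article draws can be made concrete. The sketch below is not from the cited work; the line-world, its rewards, and the three-step horizon are invented for illustration. It compares a one-shot "if-then" choice against a sequential decision maker that evaluates whole action sequences:

```python
# Hypothetical world: states are positions on a line; entering a position
# yields the reward listed below. All numbers are invented for illustration.
rewards = {-3: 0, -2: 0, -1: 2, 0: 0, 1: 1, 2: 1, 3: 10}

def one_shot(state):
    """Traditional 'if-then': pick the move with the best immediate reward."""
    return max(("left", "right"),
               key=lambda a: rewards.get(state + (-1 if a == "left" else 1), 0))

def lookahead(state, horizon):
    """Sequential: evaluate entire action sequences, not just the next step."""
    if horizon == 0:
        return 0, []
    best = (float("-inf"), [])
    for action, step in (("left", -1), ("right", 1)):
        nxt = state + step
        if nxt not in rewards:          # fell off the known world: skip
            continue
        future, plan = lookahead(nxt, horizon - 1)
        total = rewards[nxt] + future
        if total > best[0]:
            best = (total, [action] + plan)
    return best

print(one_shot(0))      # 'left': grabs the small adjacent reward
print(lookahead(0, 3))  # (12, ['right', 'right', 'right']): plans to the big payoff
```

With these invented rewards the greedy rule turns left toward the nearby payoff, while the three-step lookahead discovers the much larger reward several decisions away, which is the point the quoted passage is making.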

Specific Topics

Theories of Artificial Intelligence

The Computational/Representational Theory of Mind (CRTM)

It is common practice in everyday life to attribute a variety of mental states — beliefs, desires, hopes, fears, regrets, expectations, etc. — to people (and sometimes to non-human animals and even certain artifacts) to make sense of their behaviour. Philosophers standardly call such states propositional attitudes, because they seem to be mental attitudes towards propositions. Most importantly, practical reasoning and the production of behaviour are typically responsive to the content of the beliefs and desires involved.

(A) Representationalism:

(1) Representational Theory of Thought:
For each propositional attitude A, there is a unique and distinct (i.e. dedicated) psychological relation R, and for all propositions P and subjects S: S A's that P if and only if there is a mental representation #P# such that
(a) S bears R to #P#, and
(b) #P# means that P.
(2) Representational Theory of Thinking:
Mental processes, thinking in particular, consist of causal sequences of tokenings of mental representations.

(B) Computationalism:

Mental representations, which, as per (A1), constitute the direct “objects” of propositional attitudes, belong to a representational or symbolic system which is such that:
(1) representations of the system have a combinatorial syntax and semantics: structurally complex (molecular) representations are systematically built up out of structurally simple (atomic) constituents, and the semantic content of a molecular representation is a function of the semantic content of its atomic constituents together with its syntactic/formal structure, and
(2) the operations on representations (constituting, as per (A2), the domain of mental processes) are causally sensitive to the syntactic/formal structure of representations defined by this combinatorial syntax.

(C) Physicalist Functionalism:

Mental representations so characterized are functionally characterizable entities which are realized by physical properties of the subject of the attitudes (if the subject is an organism, then the realizing properties are presumably the neurophysiological properties in the brain or the central nervous system).
The relation R in (A), when (A) is combined with (B), should be understood as a computational/functional relation. The idea is that each attitude is identified with a characteristic computational/functional role played by the mental sentence that is the direct object of that kind of attitude.
The two most important achievements of the 20th century that are at the foundations of CRTM, as well as of most modern Artificial Intelligence (AI) research and the so-called information-processing approaches to cognition (practically all of contemporary cognitive psychology), are (i) the developments in modern symbolic (formal) logic, and (ii) Alan Turing's idea of a Turing Machine and Turing computability. It is putting these two ideas together that gives CRTM its enormous explanatory power within a naturalistic framework. Modern logic showed that most of deductive reasoning can be formalized, i.e. most semantic relations among symbols can be entirely captured by the symbols' formal/syntactic properties and the relations among them. And Turing showed, roughly, that if a process has a formally specifiable character then it can be mechanized. So we can appreciate the implications of (i) and (ii) for the philosophy of psychology in this way: if thinking consists in processing representations physically realized in the brain (in the way internal data structures are realized in a computer), and these representations form a formal system, i.e. a language with its proper combinatorial syntax (and semantics) and a set of derivation rules formally defined over the syntactic features of those representations (allowing for specific but extremely powerful programs to be written in terms of them), then the problem of thinking (and rational action), as we described it above, can in principle be solved in completely naturalistic terms; thus the mystery surrounding how a physical device can ever have semantically coherent state transitions (processes) can be removed.
Thus, given the commitment to naturalism, the hypothesis that the brain is a kind of computer trafficking in representations in virtue of their syntactic properties is the basic idea of CRTM and the AI vision of cognition. Computers are environments in which symbols are manipulated in virtue of their formal features, but what is thus preserved are their semantic properties, hence the semantic coherence of symbolic processes. This is in virtue of the mimicry or mirroring relation between the semantic and formal properties of symbols. We can view the thinking brain as a syntactically driven engine preserving semantic properties of its processes, i.e. driving a semantic engine.
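As a toy illustration of Turing's point that any formally specifiable process can be mechanized, here is a minimal Turing machine simulator. It is an illustrative sketch, not drawn from the source text: the rule table below is a hypothetical machine that increments a binary number. The machine manipulates symbols purely by their form, yet the outcome is semantically coherent (n becomes n + 1), which is exactly the syntax-preserves-semantics idea described above.

```python
# Minimal Turing machine simulator (illustrative sketch).
def run_tm(tape, rules, state="go", blank="_"):
    tape = list(tape)
    pos = len(tape) - 1                    # start at the rightmost symbol
    while state != "halt":
        if pos < 0:                        # grow the tape on the left if needed
            tape.insert(0, blank)
            pos = 0
        sym = tape[pos]
        state, write, move = rules[(state, sym)]
        tape[pos] = write                  # purely formal symbol manipulation
        pos += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# (state, read symbol) -> (next state, symbol to write, head move)
rules = {
    ("go", "1"): ("go", "0", "L"),    # carry: 1 -> 0, keep moving left
    ("go", "0"): ("halt", "1", "R"),  # absorb the carry and stop
    ("go", "_"): ("halt", "1", "R"),  # ran off the left edge: new high bit
}

print(run_tm("1011", rules))  # '1100'  (binary 11 + 1 = 12)
```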

To sum up: CRTM, as sketched above, provides a way of understanding how phenomena such as thoughts and beliefs, as well as thinking, decision making, practical reasoning and rational action, can be understood in a materialist framework that not only can explain human mentality in terms of bodily processes but also points to how they might be implemented in other physical systems, including artifactual ones (e.g. robots). This is how CRTM is theoretically equipped to tackle Fodor's second and third questions. It does remain silent, however, when it comes to the first question, the question of consciousness. How is it that a physical system can come to have qualitative states — experience flashes of colours, feel pangs of jealousy, or enjoy the warmth of the afternoon sun? This is the problem to which we now turn.

3. The Problem of Phenomenal Consciousness: Experience

The problem of experience concerns the ontological status of the qualitative character of our experiences — their qualitative feel, or 'qualia' — of which we seem to be directly aware in introspection. It is characterized here as a problem because, on the face of it, it is not clear how qualia could be entirely physical (e.g. some sort of entirely physical phenomena in the brain). But the puzzling character of qualia is a more general problem about our understanding of them, because if it is puzzling to think of qualia in physical terms, it is no less puzzling to think of them in nonphysical terms. The mystery remains even if physicalism is rejected. This aspect of the problem is brought out nicely by Jackson's so-called "Knowledge Argument". Here is the thought-experimental set-up for the argument:

Mary is confined to a black-and-white room, is educated through black-and-white books, and through lectures relayed on black-and-white television. In this way she learns everything there is to know about the physical nature of the world. She knows all the physical facts about us and our environment, in a wide sense of 'physical' which includes everything in completed physics, chemistry, and neurophysiology, and all there is to know about the causal and relational facts consequent upon all this, including of course functional roles. Mary is released and sees for the first time a ripe tomato in good light, and comes to know what it is like to see red, something she allegedly did not know before, despite her omniscience with respect to physical facts. Jackson runs his argument thus:

(1) Mary (before her release) knows everything physical there is to know about other people.
(2) Mary (before her release) does not know everything there is to know about other people (because she learns something about them on her release). Therefore,
(3) There are truths about other people (and herself) which escape the physical story.

According to Jackson, physicalism is the doctrine that the world consists entirely of physical facts. If that doctrine is correct, someone who knows all the physical facts knows all there is to know. According to Jackson, Mary comes to know a new fact upon seeing red for the first time, a fact which she did not know before; but since, by hypothesis, she already knew all the physical facts, the fact she comes to know cannot be physical. Hence, there are non-physical facts, and physicalism is false. Jackson seems to think that in experience we encounter, are acquainted with, (instantiations of) non-physical properties. But if qualia are non-physical, it is hard to see how they could participate in the causal working of the physical world, which includes our bodies. According to (early) Jackson (1982), and many other antiphysicalists, qualia are epiphenomenal: they are caused (by physical events) but they don't cause anything; they are altogether causally inefficacious. So it seems that there is a heavy price to pay if physicalism is false. For the falsity of physicalism makes the mystery bigger, not smaller.

4. How to Approach the Problem

Although to some extent we share the sense of awe and mystery surrounding the philosophical problem of phenomenal consciousness described above, we are no mysterians about consciousness. In fact, we are optimistic about the prospects for a naturalistic solution. In what follows, we will indicate the grounds for this optimism, and describe in broad outline the theoretical tenets of a naturalistic research program within which consciousness, and not just intentional cognitive states, can be explained. If we are right about how to pursue this research, the ultimate solution will be an interdisciplinary one, involving not only the relevant branches of neuroscience and psychology but also AI in a crucial way. As we characterized the problem of consciousness above, a particular form of state consciousness becomes the focus of mystery. It is important to note at this juncture that there are two kinds of mental states that can be conscious: phenomenal states (sensory and emotional experiences, like pains, itches, seeing red, smelling coffee, and feeling depressed), and cognitive states with conceptual content (propositional attitudes like thoughts, beliefs, and desires). Although it is problematic how any such states could be conscious, the degree of mystery that attaches to both kinds is not the same.

There is a sense that explaining what makes a thought conscious is easier than explaining what makes an experience conscious. Indeed, the sense of philosophical mystery always accompanies the latter and almost never the former. For instance, McCarthy (1999) argues that making robots conscious is in principle within our grasp. However, it turns out that what McCarthy has in mind is robots' capacity to have conscious thoughts (propositional attitudes), not experiences. He seems to join the group of people who declare conscious experience a mystery, and as a result, he questions not only the possibility of robots' having conscious experiences, but also the desirability of producing robots with this capacity, assuming it were possible to do so. He seems to think that having conscious experiences is an option that robots with fully conscious thoughts could do without.

Source: Consciousness, Intentionality, and Intelligence: Some Foundational Issues for Artificial Intelligence

Different schools of thought

In terms of consequences, AI applications can be divided into two types: classifiers ("if shiny then diamond") and controllers ("if shiny then pick up"). Controllers do however also classify conditions before inferring actions, and therefore classification forms a central part of most AI systems. Classifiers make use of pattern recognition for condition matching. In many cases this does not imply absolute, but rather the closest match. Techniques to achieve this divide roughly into two schools of thought: Conventional AI and Computational intelligence (CI). Conventional AI research focuses on attempts to mimic human intelligence through symbol manipulation and symbolically structured knowledge bases. This approach limits the situations to which conventional AI can be applied. Lotfi Zadeh stated that "we are also in possession of computational tools which are far more effective in the conception and design of intelligent systems than the predicate-logic-based methods which form the core of traditional AI." These techniques, which include fuzzy logic, have become known as soft computing. These often biologically inspired methods stand in contrast to conventional AI and compensate for the shortcomings of symbolicism.
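To make the soft-computing contrast concrete, here is a minimal fuzzy-logic sketch in the spirit of the "if shiny then pick up" controller above. The membership function, thresholds, and rule are hypothetical, invented for illustration rather than taken from any cited system:

```python
# Fuzzy logic sketch: instead of a hard predicate "shiny or not",
# membership is a degree in [0, 1]. All numbers below are hypothetical.

def shininess(reflectance):
    """Fuzzy membership: 0 below 0.2, 1 above 0.8, linear in between."""
    return min(1.0, max(0.0, (reflectance - 0.2) / 0.6))

def fuzzy_and(a, b):
    return min(a, b)   # a common t-norm for fuzzy conjunction

def pick_up_strength(reflectance, smallness):
    """Rule: IF shiny AND small THEN pick up, to the degree both hold."""
    return fuzzy_and(shininess(reflectance), smallness)

print(shininess(0.5))                # 0.5 -> partially shiny
print(pick_up_strength(0.9, 0.7))    # 0.7 -> rule fires with strength 0.7
```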

These two methodologies have also been labeled as neats vs. scruffies, with neats emphasizing the use of logic and formal representation of knowledge while scruffies take an application-oriented heuristic bottom-up approach.

Classifiers: Classifiers are functions that can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience. A classifier can be trained in various ways; there are mainly statistical and machine learning approaches. A wide range of classifiers are available, each with its strengths and weaknesses.

Classifier performance depends greatly on the characteristics of the data to be classified. There is no single classifier that works best on all given problems; this is also referred to as the "no free lunch" theorem. Various empirical tests have been performed to compare classifier performance and to find the characteristics of data that determine classifier performance. Determining a suitable classifier for a given problem is however still more an art than science. The most widely used classifiers are the neural network, support vector machine, k-nearest neighbor algorithm, Gaussian mixture model, naive Bayes classifier, and decision tree.
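As a concrete illustration of one of the classifiers named above, here is a from-scratch k-nearest-neighbour sketch. The observations, class labels, and choice of k are invented for illustration; a production system would use an established library:

```python
# k-nearest-neighbour classification: a new observation is classified by
# majority vote among the k closest observations in the training data.
from collections import Counter

def knn_classify(train, query, k=3):
    """train: list of (feature_vector, class_label); query: feature_vector."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda obs: dist(obs[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Observations (patterns) with predefined classes, as in supervised learning.
data = [((1.0, 1.1), "cat"), ((0.9, 1.0), "cat"), ((1.2, 0.8), "cat"),
        ((3.0, 3.2), "dog"), ((3.1, 2.9), "dog"), ((2.8, 3.0), "dog")]

print(knn_classify(data, (1.05, 1.0)))  # 'cat'
print(knn_classify(data, (3.0, 3.0)))   # 'dog'
```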

Conventional AI: Conventional AI mostly involves methods now classified as machine learning, characterized by formalism and statistical analysis. This is also known as symbolic AI, logical AI, neat AI and Good Old Fashioned Artificial Intelligence (GOFAI).

The nature of consciousness (Source: Wikipedia)

According to naïve and direct realism, humans perceive directly while brains perform processing. According to indirect realism and dualism, brains contain data obtained by processing but what people perceive is a mental model or state appearing to overlay physical things as a result of projective geometry (such as the point observation in René Descartes' dualism). Which of these approaches to consciousness is correct is fiercely debated. Direct perception problematically requires a new physical theory allowing conscious experience to supervene directly on the world outside the brain. But if people perceive indirectly through a world model in the brain, then a new physical phenomenon, other than the endless further flow of data, would be needed to explain how the model becomes experience.

Brief History of Artificial Intelligence with Dates, Events & Persons

THE HISTORY OF ARTIFICIAL INTELLIGENCE

TIMELINE OF MAJOR AI EVENTS

Introduction:

Evidence of artificial intelligence in folklore can be traced back to ancient Egypt, but with the development of the electronic computer in 1941, the technology finally became available to create machine intelligence. The term "artificial intelligence" was first coined in 1956, at the Dartmouth conference, and since then artificial intelligence has expanded because of the theories and principles developed by its dedicated researchers. Although through its short modern history advancement in the field of AI has been slower than first estimated, progress continues to be made. Since its birth four decades ago, there have been a variety of AI programs, and they have impacted other technological advancements.

THE ERA OF THE COMPUTER:

In 1941 an invention revolutionized every aspect of the storage and processing of information. That invention, developed in both the US and Germany, was the electronic computer. The 1949 innovation, the stored-program computer, made the job of entering a program easier, and advancements in computer theory led to computer science, and eventually artificial intelligence. With the invention of an electronic means of processing data came a medium that made AI possible.

The Beginnings of AI:

Although the computer provided the technology necessary for AI, it was not until the early 1950s that the link between human intelligence and machines was really observed. In late 1955, Newell and Simon developed the Logic Theorist, considered by many to be the first AI program. The program, representing each problem as a tree model, would attempt to solve it by selecting the branch that would most likely result in the correct conclusion. The impact that the Logic Theorist made on both the public and the field of AI has made it a crucial stepping stone in the development of the field.

In 1956 John McCarthy, regarded as the father of AI, organized a conference to draw on the talent and expertise of others interested in machine intelligence for a month of brainstorming. He invited them to Hanover, New Hampshire, for "The Dartmouth summer research project on artificial intelligence." From that point on, because of McCarthy, the field would be known as artificial intelligence. Although not a huge success, the Dartmouth conference did bring together the founders of AI and served to lay the groundwork for the future of AI research.

Knowledge Expansion

In the seven years after the conference, AI began to pick up momentum. Although the field was still undefined, ideas formed at the conference were re-examined and built upon. Centers for AI research began forming at Carnegie Mellon and MIT, and new challenges were faced: first, creating systems that could efficiently solve problems by limiting the search, as the Logic Theorist did; and second, making systems that could learn by themselves.

In 1963 MIT received a 2.2-million-dollar grant from the United States government to be used in researching Machine-Aided Cognition (artificial intelligence). The grant, made by the Department of Defense's Advanced Research Projects Agency (ARPA) to ensure that the US would stay ahead of the Soviet Union in technological advancements, served to increase the pace of development in AI research by drawing computer scientists from around the world, and funding continued for years afterward. Another advancement, in the 1970s, was the advent of the expert system: expert systems predict the probability of a solution under set conditions (a minimal sketch of the idea follows).
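The sketch below illustrates the core of a rule-based expert system: rules fire when their conditions hold, adding conclusions to working memory until nothing new can be inferred. The rules here are invented for illustration; historical systems of the period, such as MYCIN, additionally attached certainty factors to their conclusions:

```python
# Forward-chaining inference, the engine at the heart of classic expert
# systems. Rules and facts below are hypothetical examples.

rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                       # keep sweeping until nothing fires
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)    # rule fires: conclusion becomes a fact
                changed = True
    return facts

print(forward_chain({"fever", "cough", "short_of_breath"}, rules))
# {'fever', 'cough', 'short_of_breath', 'flu_suspected', 'refer_to_doctor'}
```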

The Transition from Lab to Life

The impact of computer technology, AI included, was widely felt. Other fields of AI also made their way into the marketplace during the 1980s; one in particular was machine vision. The 1980s were not entirely good for the AI industry, however: in 1986-87 demand for AI systems decreased, and the industry lost almost half a billion dollars.

Source: http://library.thinkquest.org/2705/history.html

Modern computer science and AI are rooted in the pre-war work of Goedel, Turing, and Zuse. Until 2000 or so, most AI systems were limited and based on heuristics. In the new millennium a new type of universal AI has gained momentum. It is mathematically sound, combining theoretical computer science and probability theory to derive optimal behavior for robots and other systems embedded in a physical environment.
In 1931, Goedel laid the foundation of theoretical computer science: he published the first universal formal language and showed that mathematics itself is either flawed or allows for unprovable but true statements. Some mistakenly thought this proves that AIs will always be inferior to humans. (Around the same time, Lilienfeld and Heil patented the first transistors.)
In 1936, Turing reformulated Goedel's result and Church's extension thereof. To do this, he introduced the Turing machine, which became the main tool of CS theory. In 1950 he invented a subjective test to decide whether something is intelligent.
From 1935 to 1941, Zuse built the first working program-controlled computers. In the 1940s he devised the first high-level programming language and wrote the first chess program (back then chess-playing was considered an intelligent activity). Soon afterwards, Shannon published information theory, and Shockley et al. re-invented Lilienfeld's transistor (1928).
McCarthy coined the term "AI" in the 1950s. In the 60s, general AI theory started with Solomonoff's universal predictors, but failed predictions of human-level AI with just a tiny fraction of the brain's computing power discredited the field. Practical AI of the 60s and 70s was dominated by rule-based expert systems and logic programming.
In the 1980s and 90s, mainstream AI married probability theory (Bayes nets etc)
"Subsymbolic" AI became popular, including neural nets (McCulloch & Pitts, 40s; Kohonen, Minsky & Papert, Amari, 60s; Werbos, 70s; many others), fuzzy logic (Zadeh, 60s), artificial evolution (Rechenberg, 60s, Holland, 70s), "representation-free" AI (Brooks), artificial ants (Dorigo, Gambardella, 90s), statistical learning theory & support vector machines (Vapnik & others)
In the 1990s and 2000s, much of the progress in practical AI was due to better hardware, getting roughly 1000 times faster per dollar per decade

http://www.idsia.ch/~juergen/robotcars.html


http://world.honda.com/ASIMO/history/p1_p2_p3.html

In 1995, a fast vision-based robot car by Dickmanns autonomously drove 1000 miles in traffic at up to 120 mph. Japanese labs (Honda, Sony) and TUM built famous humanoid robots. There were few if any fundamental software breakthroughs; improvements / extensions of already existing algorithms seemed less impressive and less crucial than hardware advances. For example, chess world champion Kasparov was beaten by a fast IBM computer running a fairly standard algorithm. Rather simple but computationally expensive probabilistic methods for speech recognition, statistical machine translation, computer vision, optimization etc. started to become feasible on fast PCs.
In the new millennium the first mathematical theory of universal AI emerged, combining "old" theoretical computer science and "ancient" probability theory to derive optimal behavior for embedded rational agents. A sign that AI is maturing and becoming a real formal science! Will this mathematically sound type of New AI and its associated optimality theorems be considered a milestone 50 years from now? Some IDSIA links on this topic: Universal AI, Goedel machines, Universal search. Less universal methods (but still more general than most traditional AI) achieve program learning and sequence learning (as opposed to conventional input/output mappings) with feedback networks. To make such algorithms really practical, however, we will still need substantially faster computers. By 2020 affordable computers will match brains in terms of raw computing power. We think the necessary self-improving AI software will not lag far behind. Is history about to converge?



Source: http://www.idsia.ch/~juergen/ai.html

Artificial Intelligence in the Military

The military put AI-based hardware to the test of war during Desert Storm. AI-based technologies were used in missile systems, heads-up displays, and other advancements. AI has also made the transition to the home. With the popularity of AI on personal computers growing, the interest of the public has also grown. Applications for the Apple Macintosh and IBM-compatible computers, such as voice and character recognition, have become available. AI technology has also made steadying camcorders simple, using fuzzy logic. With greater demand for AI-related technology, new advancements are becoming available. Inevitably, artificial intelligence has affected, and will continue to affect, our lives.

Opposition to AIS

The technology of the future will do things that seem "mad" to most of us today. Our ability to create artificial intelligence is increasing exponentially. In the labs of prestigious institutions across the country, scientists try to create a computer that will replace the brain. This futuristic technology may not be far off; however, it faces harsh opposition from people afraid of what they don't understand.

Hans Moravec, senior research scientist at Carnegie Mellon's Mobile Robot Laboratory, wrote, "We are on a threshold of a change in the universe comparable to the transition from nonlife to life." Moravec seeks to create a "robotic immortality" for everyone; implementing artificial intelligence would allow us to do the many things we don't have time to do in our short mortal lives. Opponents, by contrast, feel that God gave us life and that he takes it away when the time is right.

I am intrigued by the idea of living long past the death of my mortal body. I know that there are too many things I would like to try, places I would like to see, and people I would like to meet to fit into one mortal life, and artificial intelligence would make it possible for me to pursue those things. Nevertheless, people are afraid of this new technology and of what it may lead to. Critics insist that robots will never experience emotion, no matter how complex the artificial intelligence, because they only follow commands and comply with the algorithm given to them, and that robots cannot and will not have the capacity to think like a human using common sense, no matter how many neuronal processes are implemented in the system.

Accomplishments:

Among other accomplishments, there are now AI systems that diagnose various types of diseases, drive cars, recognize handwriting and speech, evaluate credit card applications, translate text, control factories, schedule tasks, guide image-based robotic surgery, and troubleshoot complicated machinery. As this list shows, AI is already pervasive, even though many people may not have realized it yet. There is a common theme to these impressive accomplishments: AI involves producing programs that behave effectively in complex environments, and this typically requires reasoning, possibly in ways that differ from how people think. Doing this has required significant advances in understanding intelligence, both in the abstract and in the course of solving a wide variety of specific, important tasks. AI continues to be one of the most exciting, and useful, fields today, with an impressive history of major results and an "application pull" motivating yet other new advances, which will continue to improve the quality of our lives.

Successes:

In the 90s AI achieved its greatest successes, albeit somewhat behind the scenes. Artificial intelligence was adopted throughout the technology industry, providing the heavy lifting for logistics, data mining, medical diagnosis and many other areas. The success was due to several factors: the incredible power of computers today (see Moore's law), a greater emphasis on solving specific subproblems, the creation of new ties between AI and other fields working on similar problems, and above all a new commitment by researchers to solid mathematical methods and rigorous scientific standards.

Components of Artificial Intelligence

Source: http://www.civil.iitb.ac.in/ailab/components.htm

Neural Networks, Genetic Algorithms / Programming, Fuzzy Logic, Knowledge-Based Systems, Hybrid Systems, Pattern Recognition, Machine Learning, Chaos Theory, Case-Based Reasoning, Cellular Automata, Risk Analysis, Decision Support Systems, Locally Weighted Projection Regression, Model Trees, AI Programming Languages and Systems Tools (OOP)

Artificial Intelligence For Business

Artificial Intelligence (AI) has been used in business applications since the early eighties. As with all technologies, AI initially generated much interest but failed to live up to the hype. However, with the advent of web-enabled infrastructure and rapid strides made by the AI development community, the application of AI techniques in real-time business applications has picked up substantially in the recent past. AI is a broad discipline that promises to simulate numerous innate human skills through techniques such as automatic programming, case-based reasoning, neural networks, decision-making, expert systems, fuzzy logic, natural language processing, pattern recognition and speech recognition. AI technologies bring more complex data-analysis features to existing applications. Business applications utilise the specific technologies mentioned earlier to try to make better sense of potentially enormous variability (for example, unknown patterns/relationships in sales data, customer buying habits, and so on).

However, within the corporate world, AI is widely used for complex problem-solving and decision-support techniques (neural networks and expert systems) in real-time business applications. The business applicability of AI techniques is spread across functions ranging from finance management to forecasting and production. The proven success of Artificial Neural Networks (ANN) and expert systems has helped AI gain widespread adoption in enterprise business applications. In some instances, such as fraud detection, the use of AI has already become the most preferred method. In addition, neural networks have become a well-established technique for pattern recognition, particularly of images, data streams and complex data sources and, in turn, have emerged as a modeling backbone for a majority of data-mining tools available in the market. Some of the key business applications of AI/ANN include fraud detection, cross-selling, customer relationship management analytics, demand prediction, failure prediction, and non-linear control.

Source: The Financial Express, Posted online: Tuesday , July 15, 2003 at 0000 hrs IST
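As an illustration of the neural-network techniques described above, here is a single artificial neuron (a perceptron), the building block of an ANN, trained on an invented toy "approve/decline" data set. It is a minimal sketch only; real business systems use multi-layer networks and far richer features:

```python
# A perceptron learns a weighted decision boundary from labeled examples.
# Features, labels, learning rate, and epochs below are all hypothetical.

def train_perceptron(data, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred                      # 0 when already correct
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Features: (income in $10k, debt ratio); label 1 = approve, 0 = decline.
data = [((6.0, 0.1), 1), ((5.5, 0.2), 1), ((1.0, 0.9), 0), ((1.5, 0.8), 0)]
w, b = train_perceptron(data)

approve = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
print(approve((5.0, 0.15)))  # 1: approve
print(approve((1.2, 0.85)))  # 0: decline
```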

Use of AIS [Source: http://cra.org/research.impact] Authorizing Financial Transactions: Credit card providers, telephone companies, mortgage lenders, banks, and the U.S. Government employ AI systems to detect fraud and expedite financial transactions, with daily transaction volumes in the billions. These systems first use learning algorithms to construct profiles of customer usage patterns, and then use the resulting profiles to detect unusual patterns and take the appropriate action (e.g., disable the credit card). Such automated oversight of financial transactions is an important component in achieving a viable basis for electronic commerce.
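A minimal sketch of the two-stage scheme just described: first build a profile of a customer's usage from past transactions, then flag new transactions that deviate sharply from it. The transaction amounts, statistics, and threshold are invented for illustration; deployed systems learn far richer profiles than a mean and standard deviation:

```python
# Stage 1: learn a usage profile; Stage 2: flag transactions far from it.
from statistics import mean, stdev

history = [42.0, 38.5, 55.0, 47.2, 40.0, 51.3, 44.8]   # past amounts ($)

def build_profile(amounts):
    return {"mean": mean(amounts), "std": stdev(amounts)}

def is_unusual(profile, amount, threshold=3.0):
    """Flag amounts more than `threshold` standard deviations from the mean."""
    z = abs(amount - profile["mean"]) / profile["std"]
    return z > threshold

profile = build_profile(history)
print(is_unusual(profile, 49.0))    # False: consistent with the profile
print(is_unusual(profile, 900.0))   # True: candidate for disabling the card
```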

The Future of AI

AI began as an attempt to answer some of the most fundamental questions about human existence by understanding the nature of intelligence, but it has grown into a scientific and technological field affecting many aspects of commerce and society. Even as AI technology becomes integrated into the fabric of everyday life, AI researchers remain focused on the grand challenges of automating intelligence. Work is progressing on developing systems that converse in natural language, that perceive and respond to their surroundings, and that encode and provide useful access to all of human knowledge and expertise. The pursuit of the ultimate goals of AI -- the design of intelligent artifacts; understanding of human intelligence; abstract understanding of intelligence (possibly superhuman) -- continues to have practical consequences in the form of new industries, enhanced functionality for existing systems, increased productivity in general, and improvements in the quality of life. But the ultimate promises of AI are still decades away, and the necessary advances in knowledge and technology will require a sustained fundamental research effort.

Limitations of AIS

In 1965, H. A. Simon predicted that "machines will be capable, within twenty years, of doing any work a man can do." In 1967, Marvin Minsky wrote: "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved." These predictions, and many like them, would not come true. Researchers had failed to anticipate the difficulty of some of the problems they faced: the lack of raw computer power, the intractable combinatorial explosion of their algorithms, the difficulty of representing commonsense knowledge and doing commonsense reasoning, the incredible difficulty of perception and motion, and the failings of logic.
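The "combinatorial explosion" mentioned above is easy to quantify. The sketch below (the branching factor and depths are illustrative, not from any particular system) shows how quickly the number of action sequences a brute-force planner must examine outruns any amount of raw computing power:

```python
# The search space of a brute-force planner grows exponentially with depth.
branching = 10        # hypothetical number of legal actions per step
for depth in (5, 10, 20):
    print(f"depth {depth:2d}: {branching ** depth:,} sequences to examine")
# depth  5: 100,000
# depth 10: 10,000,000,000
# depth 20: 100,000,000,000,000,000,000
```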

Conferences & Associations of AI

[Source: http://www.cs.cmu.edu/~aips98/]

  1. The Fourth International Conference on Artificial Intelligence Planning Systems 1998 (AIPS '98): The International Conference on Artificial Intelligence Planning Systems (AIPS) will bring together researchers working in all aspects of problems in planning, scheduling, planning and learning, and plan execution, for dealing with complex problems. The conference is aimed at researchers ranging from those interested in the latest techniques in planning and scheduling to those interested in finding solutions to problems in industry and engineering.
  2. UAI-2000: The Sixteenth Conference on Uncertainty in Artificial Intelligence, Stanford University, Stanford, CA, June 30 - July 3, 2000. As we approach the new millennium, advances in the theory and practice of artificial intelligence have pushed intelligent systems to the forefront of the information technology sector. At the same time, uncertainty management has come to play a central role in the development of these systems. The Conference on Uncertainty in Artificial Intelligence, organized annually under the auspices of the Association for Uncertainty in AI (AUAI), is the premier international forum for exchanging results on the use of principled uncertain-reasoning methods in intelligent systems.
  3. European Conference on Artificial Intelligence (ECAI) [Source: http://www.informatik.uni-trier.de/~ley/db/conf/ecai/]: ECAI-08 will give researchers from all over the world the possibility to identify important new trends and challenges in all subfields of Artificial Intelligence, and it will provide a major forum for potential users of innovative AI techniques. The 18th biennial European Conference on Artificial Intelligence, ECAI-2008, is planned for Patras, Greece.
  4. The Eighth International Conference on Artificial Intelligence and Law (ICAIL 2001), May 21-25, 2001: ICAIL-2001 will be held under the auspices of the International Association for Artificial Intelligence and Law (IAAIL), an organization devoted to promoting research and development in the field of AI and Law with members throughout the world. ICAIL provides a forum for the presentation and discussion of the latest research results and practical applications and stimulates interdisciplinary and international collaboration. Previous ICAIL conferences have taken place in Boston (1987), Vancouver (1989), Oxford (1991), Amsterdam (1993), College Park, Maryland (1995), Melbourne (1997), and Oslo (1999). As with these past conferences, the accepted papers will be published in conference proceedings.
  5. Seventh Scandinavian Conference on Artificial Intelligence, Odense, Denmark, February 19-21, 2001 [Source: http://www.mip.sdu.dk/scai01]: After going through a difficult period, there is a feeling that AI is headed towards better times again; certainly the very challenging problems it addresses remain unsolved. Thus, it has been decided not to single out a key theme for the conference, but instead to treat all branches of AI equally, including some of the exciting recent developments in bioinformatics, machine learning, multi-agent systems, electronic commerce, and behavioural robotics. The conference will be held at the Maersk Mc-Kinney Moller Institute for Production Technology at the University of Southern Denmark's main campus at Odense University.

Association:

  1. International Association for Artificial Intelligence & Law : IAAIL is a nonprofit association devoted to promoting research and development in the field of AI and Law, with members throughout the world. IAAIL organizes a biennial conference (ICAIL), which provides a forum for the presentation and discussion of the latest research results and practical applications and stimulates interdisciplinary and international collaboration.
  2. Association for the Advancement of Artificial Intelligence (AAAI): Founded in 1979, the Association for the Advancement of Artificial Intelligence (AAAI) (formerly the American Association for Artificial Intelligence) is a nonprofit scientific society devoted to advancing the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines. AAAI also aims to increase public understanding of artificial intelligence, improve the teaching and training of AI practitioners, and provide guidance for research planners and funders concerning the importance and potential of current AI developments and future directions.

Current Conferences: Source: http://www.aaai.org/Conferences/conferences.php

  • AAAI 2007 (The Twenty-Second AAAI Conference on Artificial Intelligence) will be held July 22–26, 2007 in Vancouver, British Columbia, Canada.
  • IAAI 2007 (The Nineteenth Innovative Applications of Artificial Intelligence Conference) will be held July 22–26, 2007 in Vancouver, British Columbia, Canada.

Future Conferences

  • AAAI 2008 (The Twenty-Third AAAI Conference on Artificial Intelligence) will be held July 13–17, 2008 in Chicago, Illinois, USA.
  • IAAI 2008 (The Twentieth Innovative Applications of Artificial Intelligence Conference) will be held July 13–17, 2008 in Chicago, Illinois, USA.

Mathematics of Artificial Intelligence System

Many representations involve some kind of language. We have seen, for example, propositional calculus and predicate calculus in which languages are used to represent and reason with logical statements; the language of mathematics enables us to represent complex numeric relationships; programming languages such as Java and C++ use objects, arrays, and other data structures to represent ideas, things, and numbers. Human beings use languages such as English to represent objects and more complex notions. Human language is rather different from the languages usually used in Artificial Intelligence. In particular, although human languages are able to express an extremely wide range of concepts, they tend to be ambiguous—a sentence can have more than one meaning, depending on the time and place it is spoken, who said it, and what was said before it.

Human languages are also very efficient: it is possible to express in a few words ideas that took thousands of years for humans to develop (for example, the words existentialism, solipsism, and mathematics). When considering any representational language, it is vital to consider the semantics of the language (i.e., what expressions in the language mean or what they represent). In some ways, despite its tendency for ambiguity, human language is very explicit — each sentence has a meaning that can be determined without any external information. The sentence "the cat sat on the mat," for example, has a fairly specific meaning (although it does not specify which cat or which mat). In contrast, sentences in a language such as predicate calculus need to have an interpretation provided. For example, we might write ∀x P(x) → Q(x). This sentence might have a number of interpretations, depending on our choice of meaning for P and Q. For example, we could interpret it as meaning "all men are mortal." An inference engine that manipulates such sentences does not need to know the meanings of the sentences, but if the sentences are being used to reason about the real world and to form plans, then of course the interpretations must be carefully chosen.
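The role of interpretation can be shown concretely. The sketch below (the domain and the meanings chosen for P and Q are invented for illustration) evaluates the sentence ∀x P(x) → Q(x) over a finite domain under the "all men are mortal" reading:

```python
# One interpretation among many for the formal sentence "forall x: P(x) -> Q(x)".
domain = ["socrates", "plato", "fido"]
P = lambda x: x in ("socrates", "plato")   # interpretation of P: "x is a man"
Q = lambda x: True                          # interpretation of Q: "x is mortal"

def forall_implies(domain, P, Q):
    """Evaluate forall x (P(x) -> Q(x)) over a finite domain."""
    return all((not P(x)) or Q(x) for x in domain)

print(forall_implies(domain, P, Q))  # True under this interpretation
```

The evaluator itself never "knows" what P and Q mean; only the chosen interpretation connects the formula to the world, which is exactly the point made above about inference engines.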

Functions: In much the same way that functions can be used in mathematics, we can express an object that relates to another object in a specific way using functions. For example, to represent the statement "my mother likes cheese," we might use L(m(me), cheese), where the function m(x) means the mother of x. Functions can take more than one argument, and in general a function with n arguments is represented as f(x1, x2, x3, ..., xn).
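A minimal sketch of how such function terms might be represented as data inside a program. The (functor, arguments) tuple encoding below is one arbitrary choice made for illustration, not a standard:

```python
# Encode nested function terms like L(m(me), cheese) as (functor, args) tuples;
# constants are plain strings.

def term(functor, *args):
    return (functor, args)

me = "me"
statement = term("L", term("m", me), "cheese")   # L(m(me), cheese)

print(statement)   # ('L', (('m', ('me',)), 'cheese'))
```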

Leading Countries in AIS:

Japan and America, two leading countries in Artificial intelligence, are creating robots for military purposes to reduce the number of casualties in times of war.
