Artificial intelligence

In computer science, artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans. Colloquially, the term "artificial intelligence" is often used to describe machines (or computers) that mimic "cognitive" functions that humans associate with the human mind, such as "learning" and "problem solving".[1]

As machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition of AI, a phenomenon known as the AI effect.[2] A quip known as Tesler's Theorem says "AI is whatever hasn't been done yet."[3] For instance, optical character recognition is frequently excluded from things considered to be AI, having become a routine technology.[4] Modern machine capabilities generally classified as AI include successfully understanding human speech,[5] competing at the highest level in strategic game systems (such as chess and Go),[6] autonomously operating cars, intelligent routing in content delivery networks, and military simulations.

Artificial intelligence can be classified into three different types of systems: analytical, human-inspired, and humanized artificial intelligence.[7] Analytical AI has only characteristics consistent with cognitive intelligence: it generates a cognitive representation of the world and uses learning based on past experience to inform future decisions. Human-inspired AI has elements of cognitive and emotional intelligence: it understands human emotions, in addition to cognitive elements, and considers them in its decision making. Humanized AI shows characteristics of all types of competencies (i.e., cognitive, emotional, and social intelligence) and is able to be self-conscious and self-aware in interactions.

Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism,[8][9] followed by disappointment and the loss of funding (known as an "AI winter"),[10][11] followed by new approaches, success and renewed funding.[9][12] For most of its history, AI research has been divided into subfields that often fail to communicate with each other.[13] These sub-fields are based on technical considerations, such as particular goals (e.g. "robotics" or "machine learning"),[14] the use of particular tools ("logic" or artificial neural networks), or deep philosophical differences.[15][16][17] Subfields have also been based on social factors (particular institutions or the work of particular researchers).[13]

The traditional problems (or goals) of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception and the ability to move and manipulate objects.[14] General intelligence is among the field's long-term goals.[18] Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, artificial neural networks, and methods based on statistics, probability and economics. The AI field draws upon computer science, information engineering, mathematics, psychology, linguistics, philosophy, and many other fields.

The field was founded on the claim that human intelligence "can be so precisely described that a machine can be made to simulate it".[19] This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues that have been explored by myth, fiction and philosophy since antiquity.[20] Some people also consider AI to be a danger to humanity if it progresses unabated.[21] Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment.[22]

In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding, and they have become an essential part of the technology industry, helping to solve many challenging problems in computer science, software engineering and operations research.[23][12]

History

Silver didrachma from Crete depicting Talos, an ancient mythical automaton with artificial intelligence

Thought-capable artificial beings appeared as storytelling devices in antiquity,[24] and have been common in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots).[25] These characters and their fates raised many of the same issues now discussed in the ethics of artificial intelligence.[20]

The study of mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity. The study of mathematical logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction. This insight, that digital computers can simulate any process of formal reasoning, is known as the Church–Turing thesis.[26] Along with concurrent discoveries in neurobiology, information theory and cybernetics, this led researchers to consider the possibility of building an electronic brain. Turing proposed changing the question from whether a machine was intelligent to "whether or not it is possible for machinery to show intelligent behaviour".[27] The first work that is now generally recognized as AI was McCulloch and Pitts' 1943 formal design for Turing-complete "artificial neurons".[28]
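To illustrate the idea behind such artificial neurons as simple threshold units over binary inputs, the following is a minimal Python sketch (an illustration in modern notation, not McCulloch and Pitts' original 1943 formalism; the function name mp_neuron and the specific thresholds are assumptions made for this example):

    def mp_neuron(inputs, threshold):
        # Fire (output 1) if the number of active binary inputs meets the threshold.
        return 1 if sum(inputs) >= threshold else 0

    # Threshold units of this kind can realize elementary logic functions:
    AND = lambda a, b: mp_neuron([a, b], threshold=2)   # fires only when both inputs are 1
    OR  = lambda a, b: mp_neuron([a, b], threshold=1)   # fires when at least one input is 1

    assert AND(1, 1) == 1 and AND(1, 0) == 0
    assert OR(0, 1) == 1 and OR(0, 0) == 0

Networks of such threshold units, combined with inhibitory connections, can implement arbitrary Boolean functions, which is the sense in which the 1943 paper linked neural activity to formal logic.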

The field of AI research was born at a workshop at Dartmouth College in 1956.[29] Attendees Allen Newell (CMU), Herbert Simon (CMU), John McCarthy (MIT), Marvin Minsky (MIT) and Arthur Samuel (IBM) became the founders and leaders of AI research.[30] They and their students produced programs that the press described as "astonishing":[31] computers were learning checkers strategies (c. 1954)[32] (and by 1959 were reportedly playing better than the average human),[33] solving word problems in algebra, proving logical theorems (Logic Theorist, first run c. 1956) and speaking English.[34] By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense[35] and laboratories had been established around the world.[36] AI's founders were optimistic about the future: Herbert Simon predicted, "machines will be capable, within twenty years, of doing any work a man can do". Marvin Minsky agreed, writing, "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved".[8]

They failed to recognize the difficulty of some of the remaining tasks. Progress slowed and in 1974, in response to the criticism of Sir James Lighthill[37] and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off funding for exploratory research in AI. The next few years would later be called an "AI winter",[10] a period when obtaining funding for AI projects was difficult.

In the early 1980s, AI research was revived by the commercial success of expert systems,[38] a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research.[9] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting hiatus began.[11]

In the late 1990s and early 21st century, AI began to be used for logistics, data mining, medical diagnosis and other areas.[23] The success was due to increasing computational power (see Moore's law), greater emphasis on solving specific problems, new ties between AI and other fields (such as statistics, economics and mathematics), and a commitment by researchers to mathematical methods and scientific standards.[39] Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997.[40]

In 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.[41] Faster computers, algorithmic improvements, and access to large amounts of data enabled advances in machine learning and perception; data-hungry deep learning methods started to dominate accuracy benchmarks around 2012.[42] The Kinect, which provides a 3D body–motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research,[43] as do intelligent personal assistants in smartphones.[44] In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps.[6][45] In the 2017 Future of Go Summit, AlphaGo won a three-game match with Ke Jie,[46] who at the time had held the world No. 1 ranking for two years.[47][48] This marked a significant milestone in the development of artificial intelligence, as Go is a relatively complex game, more so than chess.

According to Bloomberg's Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from "sporadic usage" in 2012 to more than 2,700 projects. Clark also presents data indicating that AI has improved since 2012, as shown by lower error rates in image-processing tasks.[49] He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets.[12] Other cited examples include Microsoft's development of a Skype system that can automatically translate from one language to another and Facebook's system that can describe images to blind people.[49] In a 2017 survey, one in five companies reported they had "incorporated AI in some offerings or processes".[50][51] Around 2016, China greatly accelerated its government funding; given its large supply of data and its rapidly increasing research output, some observers believe it may be on track to becoming an "AI superpower".[52][53] However, it has been acknowledged that reports regarding artificial intelligence have tended to be exaggerated.[54][55][56]
