by Andrew Cochran
January 3, 2019
updated October 2019
There’s never been an accepted definition of human intelligence. This makes its artificial form hard to define — as intelligent as what? It comes down to how the results are perceived. A computer appears to be ‘intelligent’ when it produces a result that is comparable to a human doing the same task.
AI computers seem to get ‘smarter’ as their ability to calculate improves. The advances come in four areas:

- Speed – more possibilities can be processed more quickly with faster computers. This is known as high performance computing, or HPC.
- Data – greater accuracy results from larger and larger pools of data to analyze, typically called big data. Big data is enhanced by the ability to store data remotely and move it over networks, known as cloud computing.
- Algorithms – more efficiency comes from more elegant ways of processing the data, which enables more complex operations (a short sketch below illustrates the idea).
- Money – more research comes from more funding and commercialization of the results. Positive outcomes increase the appeal of additional investment, and the cycle continues.
An advance in one area affects the others. For example, the algorithms for identifying images have been known for years. Recent breakthroughs only came after processing speeds became faster and greater amounts of data were available.
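To make the ‘algorithms’ point concrete, here is a minimal sketch of how a more elegant algorithm answers the same question as a naive one while doing far less work. The membership-test task, data sizes, and timings are illustrative assumptions, not drawn from the article:

```python
# Toy illustration: the same question ("is this value in our data?")
# answered two ways. The naive scan touches every record; sorting once
# and then binary-searching does far less work per query as data grows.
import bisect
import random
import time

data = [random.randrange(10**9) for _ in range(200_000)]
queries = [random.randrange(10**9) for _ in range(500)]

# Naive: O(n) work per query -- fine for small data, painful for big data.
start = time.perf_counter()
naive_hits = sum(q in data for q in queries)
print(f"linear scan:   {time.perf_counter() - start:.3f}s, {naive_hits} hits")

# More elegant: sort once (O(n log n)), then O(log n) work per query.
sorted_data = sorted(data)

def contains(values, q):
    """Binary search for q in an already-sorted list."""
    i = bisect.bisect_left(values, q)
    return i < len(values) and values[i] == q

start = time.perf_counter()
fast_hits = sum(contains(sorted_data, q) for q in queries)
print(f"binary search: {time.perf_counter() - start:.3f}s, {fast_hits} hits")
```

The same kind of efficiency gap is why better algorithms let systems take on more complex operations without waiting for faster hardware.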
AI research started more than 60 years ago
The term ‘artificial intelligence’ originated in 1955 and is often attributed to John McCarthy, a math professor at Dartmouth College. McCarthy denied credit for the term, but he is consistently associated with its origins because he convened the 1956 summer workshop that launched organized AI research.
As a field of study, artificial intelligence is a subset of computer science. Some researchers have backgrounds in computing, engineering, physics, or math; others come with neuroscience training; and some combine several of these fields and more.
Research has grown and ebbed along different paths over the years. Some periods produced more promising results than others, leading to expansions and contractions in funding. Periods of little funding and slow growth are sometimes called AI winter.
Early periods of AI research are sometimes referred to as GOFAI, or good old-fashioned AI, recalling a simpler time when the field revolved around logic and reasoning. The earliest work on AI followed this path, on the premise that computers could be instructed to emulate logical thinking. This requires precise instructions written as computer code and is commonly known as symbolic computing or symbolic AI.
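As a rough illustration of the symbolic style, here is a minimal sketch of a rule-based system: all of its behaviour comes from explicit, human-written rules. The facts and rules are invented for this example:

```python
# Symbolic AI in miniature: hand-written if-then rules, applied by
# forward chaining until no new conclusions appear. Nothing is learned;
# every behaviour was put there by a programmer.
facts = {"has_feathers", "lays_eggs"}

rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird"}, "has_beak"),
    ({"is_bird", "can_fly"}, "can_migrate"),
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
# -> ['has_beak', 'has_feathers', 'is_bird', 'lays_eggs']
```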
Today, many AI machines are not given step-by-step instructions; instead they are presented with large volumes of representative data from which they ‘learn’ meaning. These learning-based systems have many variants, such as deep learning, generative adversarial networks, convolutional neural networks, and reinforcement learning, each differing in how it goes about its work. Stages beyond present-day AI are frequently imagined.
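To contrast with the symbolic sketch above, here is a minimal example of the learning approach: a single artificial neuron (a perceptron) is shown labelled examples of the logical OR function and adjusts its own weights until its answers match, with no rule ever written down. The learning rate and epoch count are arbitrary choices; real deep learning stacks millions of such units:

```python
# Learning-based AI in miniature: a perceptron infers OR from examples.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]  # weights, one per input
b = 0.0         # bias
LR = 0.1        # learning rate

for epoch in range(20):
    for (x1, x2), target in examples:
        output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        error = target - output
        # Perceptron rule: nudge the weights toward reducing the error.
        w[0] += LR * error * x1
        w[1] += LR * error * x2
        b += LR * error

for (x1, x2), target in examples:
    prediction = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    print((x1, x2), "->", prediction, "expected", target)
```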
Capabilities today are impressive but narrow
Jeff Bezos described the current period in 2016 as ‘only the first inning’ of AI, joking that ‘the first guy may be just coming to bat.’ Prominent AI scientist Fei-Fei Li says the present capabilities of AI systems are ‘closer to a washing machine than Terminator.’
Current AI excels at a particular task, but only that task. A system that is capable of defeating the world’s best players in chess or Go cannot play checkers or say ‘thank you’ after a match. A system that can autonomously operate a moving vehicle cannot boil water. Prominent AI researcher Andrew Ng puts this in a workplace context: ‘AI is good at tasks but not jobs.’ Given their singular abilities, present systems are called narrow AI. Variant terms are artificial narrow intelligence (ANI) and, sometimes, weak AI.
Even elaborate AI examples are narrow, such as autonomous cars or ‘dark factories’ (manufacturing without humans, so no lighting is required). These result from multiple narrow systems being grouped together, each performing a single function.

The stage beyond artificial narrow intelligence is called artificial general intelligence or AGI.
AGI envisages a time when the same system can perform many tasks, equalling or surpassing human performance in each. Metaphorically, after learning how to play the piano, an AGI system might extrapolate the nuances of music to writing, painting, and other creative pursuits. Creativity is among the hardest things for machines to master, and much conjecture surrounds whether machine-generated writing, drawing, or music could ever be considered ‘created’ rather than merely synthesized. The larger point is that AGI systems would learn in one area and be able to transfer that acquired ability to others, adapting along the way.
There are many opinions about when, or even if, this next stage of AI will happen. AGI is a persistent research focus and business goal for a handful of companies, notably OpenAI in California and DeepMind, a London research group acquired by Alphabet (Google). A DeepMind system named AlphaZero prevailed against world-leading opponents in Go, as well as in several other games. An OpenAI system named GPT-2 can generate several lines of coherent text, and a variant of GPT-2 can generate original music after being given a few bars as a prompt.
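GPT-2 itself is a large neural network, but its core move is to predict the next token given everything so far, append the prediction, and repeat. That autoregressive loop can be shown with a far simpler stand-in, a word-level Markov chain; the toy corpus and prompt below are invented for illustration and have nothing to do with GPT-2’s actual training data:

```python
import random

# A tiny 'training corpus' (invented).
corpus = (
    "the cat sat on the mat and the cat slept . "
    "the dog sat on the rug and the dog slept ."
).split()

# 'Training': count which word follows which.
following = {}
for current, nxt in zip(corpus, corpus[1:]):
    following.setdefault(current, []).append(nxt)

# 'Prompting': start from a seed, then repeatedly predict and append.
random.seed(0)
words = ["the", "cat"]
for _ in range(8):
    words.append(random.choice(following.get(words[-1], corpus)))

print(" ".join(words))
```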
These systems have turned heads, partly for the abilities they display and partly for how those abilities were achieved. For example, AlphaZero became an expert Go player by repeatedly playing the game against itself in rapid rounds, accumulating experience each time. In the end, AlphaZero not only surpassed the world’s best players but also surprised them with the sophistication of its play. It marked a breakthrough because Go is said to have more possible board configurations than there are atoms in the observable universe, far too many to evaluate by brute force or hold in computer memory. Instead, AlphaZero devised its own understanding of the game.
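The self-play idea can be sketched in miniature. The toy below, a one-heap version of Nim where players remove one or two counters and taking the last one wins, learns a simple value table purely from games against itself. This is an illustrative assumption of the general idea only; AlphaZero’s actual method combines deep neural networks with Monte Carlo tree search:

```python
import random

N = 10                  # starting pile size
values = {}             # state -> estimated win chance for the player to move
EPSILON, LR = 0.2, 0.1  # exploration rate and learning rate

def value(state):
    return values.get(state, 0.5)  # unknown states start at 50/50

def choose(state):
    moves = [m for m in (1, 2) if m <= state]
    if random.random() < EPSILON:
        return random.choice(moves)                     # explore
    return min(moves, key=lambda m: value(state - m))   # leave opponent worst off

def play_one_game():
    """One self-play game, then update the value table from the outcome."""
    history, state = [], N
    while state > 0:
        history.append(state)
        state -= choose(state)
    result = 1.0  # the player who took the last counter won
    for s in reversed(history):  # states alternate between winner and loser
        values[s] = value(s) + LR * (result - value(s))
        result = 1.0 - result

random.seed(0)
for _ in range(20_000):  # accumulate experience over rapid rounds
    play_one_game()

# The learned values rediscover the game's known theory: piles that are
# multiples of 3 are losing positions for the player to move.
for s in range(1, N + 1):
    print(s, round(value(s), 2))
```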
No one knows what happens after AGI
The possibility that computers might someday direct themselves beyond AGI capabilities, known as artificial superintelligence (ASI), is hotly debated within the AI community. This imagined future state sees machines transcending human ability, training themselves to reach levels well beyond our comprehension.
Some AI experts see this stage as inevitable. They foresee machines evolving over time to possess sufficient ability to determine their own goals, perhaps even able to design and build other machines to their specifications.
Other experts say this kind of conjecture is a waste of time — the possibilities are too remote. It’s counter-productive, they contend, to raise public anxiety as there is no evidence such a state is possible. Highly qualified and respected authorities line up on both sides.
Finally, you might well ask: if narrow AI is also known as weak AI, what constitutes strong AI? For some, artificial general intelligence (AGI) can be called strong AI, because AGI is about human-level abilities in multiple domains. For others, strong AI is about consciousness: a machine that genuinely has a mind of its own, not one that merely simulates having one.
Related
- Explainer | Is deep learning the same as AI & machine learning?
- Explainer | How do algorithms work?
- Explainer | How is artificial intelligence used in journalism?
- Panels/talks | Andrew Ng on How Artificial Intelligence is the New Electricity
- Academic papers | When Will AI Exceed Human Performance? Evidence from AI Experts