What Exactly Is Artificial Intelligence?

Once the stuff of sci-fi fantasies, artificial intelligence is taking over the world. It completes half-written words in your texts. It answers questions through your smartphone or smart speaker. It plots the quickest route from here to there. It recognises familiar faces at your door, curates your social feed, fills your playlist. It slaps ads for those cool boots you considered buying on every webpage you look at. And it’s only going to become more sophisticated. Automakers are conceiving self-driving vehicles to ferry people to their destinations, then park or carry other passengers until pickup time. Lights and thermostats can connect to your smartphone’s location sensors, so your home can go to sleep when you depart and wake up when you return. Factory robots are sorting goods into customer orders that might be carried to your doorstep by autonomous quadcopters. AI isn’t the next big thing—it’s here now, and it shows no sign of letting up. Here’s an introduction to the technology you soon may come to regard as your new helper, teacher, colleague, neighbour, or overlord—I mean best friend.

What is artificial intelligence?

Let’s start with what it’s not. AI is not a high-tech replacement for a human brain. Generally, AI is anything a computer can do that formerly was considered a job for a human. But the pace of change in computing makes that a slippery formulation. Two decades ago, Deep Blue, the International Business Machines Corp. system that beat world chess champion Garry Kasparov, was the epitome of AI. Now a freebie smartphone game can accomplish much the same thing.

How does AI work?

Early efforts were based on code that mimicked human problem-solving ability by applying logic to predefined objects and actions or emulated human thinking by following if-then rules. Those techniques often failed because programmers couldn’t map out entities or instructions that covered the profusion of actual possibilities. Meanwhile, a set of techniques known as machine learning sidestepped the need to manage endless possibilities by using statistical analysis of real-world phenomena to make predictions about how the world works. But such programs couldn’t learn much without huge amounts of data and processing power to crunch them—rare commodities in decades past.
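To make the contrast concrete, here is a minimal sketch in Python of the two approaches applied to a toy spam-filtering task. Everything in it, from the function names to the word lists and the threshold, is illustrative rather than drawn from any real system.

```python
# Two toy approaches to the same task: flagging spam messages.
# All phrases, scores and thresholds here are illustrative.
from collections import Counter

# 1. Early, rule-based AI: a programmer hand-writes if-then rules.
def rule_based_is_spam(message: str) -> bool:
    # Every suspicious phrase must be anticipated in advance;
    # anything the rules don't cover slips through.
    suspicious = ["free money", "act now", "winner"]
    return any(phrase in message.lower() for phrase in suspicious)

# 2. Machine learning: estimate from labelled examples how strongly
#    each word is associated with spam, then score new messages.
def train_word_scores(examples):
    """examples: list of (message, is_spam) pairs."""
    spam_counts, ham_counts = Counter(), Counter()
    for message, is_spam in examples:
        (spam_counts if is_spam else ham_counts).update(message.lower().split())
    words = set(spam_counts) | set(ham_counts)
    # A word's score: how much more often it appears in spam than in non-spam.
    return {w: (spam_counts[w] + 1) / (ham_counts[w] + 1) for w in words}

def learned_is_spam(message: str, scores, threshold=1.5) -> bool:
    words = message.lower().split()
    avg = sum(scores.get(w, 1.0) for w in words) / max(len(words), 1)
    return avg > threshold

training_data = [
    ("free money if you act now", True),
    ("winner winner claim your prize", True),
    ("lunch at noon tomorrow", False),
    ("meeting notes attached", False),
]
scores = train_word_scores(training_data)
print(rule_based_is_spam("claim your free money"))       # True: matches a rule
print(learned_is_spam("claim your free money", scores))  # True: spam-like words
```

The hand-written rules catch only what their author anticipated; the learned filter’s behaviour changes as the labelled examples change, which is the essence of the statistical approach.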

Then came the internet. Net-based applications from search to e-commerce started to produce motherlodes of data, and processing power mushroomed as companies like Google and Amazon.com Inc. pooled their servers into global mega-computers. Supercharged with floods of data and endless horsepower, machine-learning algorithms took off—especially a type known as neural networks. These self-learning programs are built of simple software routines modelled on early notions of how biological neurons work. Digital neurons are arranged in interconnected layers, each sending its output to the next. A network with many layers is said to perform deep learning. Neural nets learn without being explicitly programmed with logical relationships or rules. Programmers start by curating and labelling a large body of data, say images of cats. Then they feed the images into a neural net, a step known as training. Once the network has digested the pictures, they feed it a new set of unlabelled shots, some depicting cats, some not. If all goes well, the neural net will pick out the felines—an operation often called inference.
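Here is a minimal sketch, in plain Python and NumPy, of that training-then-inference cycle. The “cat photos” are stand-in numeric points and the network is far smaller than anything used in practice; the layer sizes, learning rate and data are all illustrative assumptions.

```python
# A minimal neural network in plain NumPy, mirroring the article's
# training-then-inference flow. The "images" here are toy two-number
# feature vectors standing in for real cat photos.
import numpy as np

rng = np.random.default_rng(0)

# --- Curate and label training data (the stand-in "cat photos") ---
# Class 1 ("cat") points cluster near (1, 1); class 0 points near (-1, -1).
X = np.vstack([rng.normal( 1.0, 0.5, (50, 2)),
               rng.normal(-1.0, 0.5, (50, 2))])
y = np.concatenate([np.ones(50), np.zeros(50)]).reshape(-1, 1)

# --- A network of two layers of "digital neurons" ---
W1 = rng.normal(0, 0.5, (2, 8))   # input -> hidden layer
b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1))   # hidden -> output layer
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# --- Training: repeatedly nudge the weights to reduce prediction error ---
lr = 0.5
for step in range(2000):
    h = np.tanh(X @ W1 + b1)       # hidden layer passes its output forward
    p = sigmoid(h @ W2 + b2)       # output layer: probability of "cat"
    # Backpropagate the error (gradient of the cross-entropy loss).
    d_out = (p - y) / len(X)
    d_hidden = (d_out @ W2.T) * (1 - h**2)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hidden
    b1 -= lr * d_hidden.sum(axis=0)

# --- Inference: show the trained net new, unlabelled points ---
new_points = np.array([[0.9, 1.2], [-1.1, -0.8]])
h = np.tanh(new_points @ W1 + b1)
print(sigmoid(h @ W2 + b2))  # near 1 for the first point, near 0 for the second
```

The same loop of forward pass, error measurement and weight adjustment is what runs, at vastly larger scale and with images instead of toy points, when real vision networks are trained.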

Why is everyone talking about it?

AI has overpromised and underdelivered ever since Dartmouth College mathematician John McCarthy coined the term in the mid-1950s. A few years ago, though, it started delivering in spades. In 2012 Geoffrey Hinton and his colleagues at the University of Toronto used a neural network to win a competition that required identifying the contents of photos drawn from ImageNet, a database of more than 15 million images in some 22,000 categories. Mr Hinton’s software achieved an error rate of 15.4%, compared with the next-best entry’s 26.2%. Mr Hinton’s demonstration that neural nets can mimic human perception set in motion a tsunami of AI research and development that promises to leave no industry unchanged. Suddenly computers can learn to see, hear and otherwise sense the world around them, and begin to reason based on what they learn. How long before they wink into sentience, or blunder unconsciously into dystopian mayhem? Maybe soon, maybe never.

What is AI good for?

In many cases, AI is used to mimic human senses. When you’re searching Facebook photos for pictures of purple berets, the social network’s software assesses shapes and colours and factors in tags entered by users to deliver appropriate snapshots. Likewise, AI is good at recognising words it hears. It’s excellent at language translation and useful for extracting meaning from written or spoken statements. The ability to recognise imagery from digital cameras has led to an explosion of machines that see their surroundings and respond in useful ways. Skydio Inc. makes a camera-equipped drone, for instance, that can follow a person, avoiding obstacles as it goes. Warehouse robots, too, watch their surroundings and adapt their behaviour to changing conditions. AI also gives devices a semblance of reasoning ability. The Waze navigation app, for instance, assesses traffic speed to spot jams and predict drive times. The ability to find the most efficient path comes in handy in tasks like optimising electrical grids and other large networks. Many companies are racing to use machine learning to match medical treatments with the records of patients seeking help, and the technology has myriad applications in health care, from helping manage supplies to recommending healthy lifestyle choices.
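As a rough illustration of the route-finding idea, here is a generic shortest-path sketch in Python using Dijkstra’s algorithm. The road map and travel times are invented for the example; real navigation services do not publish their routing code, so this is only a sketch of the general technique.

```python
# A generic shortest-path sketch (Dijkstra's algorithm) of the kind of
# routing calculation described above. The road network and travel times
# below are made up for illustration; they are not any real service's data.
import heapq

def fastest_route(graph, start, goal):
    """graph: {node: [(neighbour, minutes), ...]} -> (total_minutes, path)."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, cost in graph.get(node, []):
            if neighbour not in seen:
                heapq.heappush(queue, (minutes + cost, neighbour, path + [neighbour]))
    return float("inf"), []

# Edge weights could be refreshed as live traffic-speed estimates change,
# which would change the recommended route.
roads = {
    "home":      [("highway", 5), ("back_road", 9)],
    "highway":   [("downtown", 20)],   # jammed: 20 minutes
    "back_road": [("downtown", 12)],
    "downtown":  [],
}
print(fastest_route(roads, "home", "downtown"))  # (21, ['home', 'back_road', 'downtown'])
```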

How intelligent is it?

AI lately has become far smarter than it once was, but it’s still not smart. Current technology can be fairly sharp-witted when it’s confined to narrow domains, but it still doesn’t cope well with the wide world. Take Alexa, the AI that speaks through the Echo smart speaker from Amazon. It has a firm grasp on everyday spoken language, and out of the box, it can answer simple questions about the time of day, weather forecast, and latest headlines. You can expand its knowledge to other slices of the world by loading any of 40,000 so-called skills, enabling it to serve up recipes or retrieve your bank balance. But try to engage Alexa (or Siri from Apple Inc., Cortana from Microsoft Corp., or Google Assistant) in a free-form conversation, and you’re likely to lose it. Examples of so-called weak AI, these assistants have no common sense, and their understanding of context develops little, if any, from one moment to the next. Strong AI, on the other hand, would possess human beings’ native abilities to sense, learn, reason, imagine, express and build up context around subjects both abstract and practical. Such capabilities, for now, remain the domain of sci-fi robots like Star Trek’s Data and the Terminator. Some computer scientists believe a big enough computer running a big enough AI program might, at some point, awaken into sentience. But it’s generally agreed that breakthroughs—unforeseen and unpredictable—will be required before the dream of a human-level AI can approach reality.

How is AI limited?

Neural nets can take ages crunching through reams of data, depending on the net’s size and the hardware it’s running on, before their designers know how well they’re working. That limits how quickly data scientists can hone their models. Intel Corp., Nvidia Corp., Qualcomm Inc. and a host of others are designing AI-specific processors that speed up training and inference. The algorithms themselves are difficult to design properly, and there are many ways to go wrong. What’s more, when machine-learning software renders a decision, it can be difficult to know what factors determined the result. So if, say, a self-driving car doesn’t yield appropriately to traffic, it can be hard to address the problem. The need to amass sufficient training data is another limiting factor. If there is too little data, the algorithm won’t have enough examples of a given feature to learn anything useful about it. If the data is too narrow in scope, the algorithm may gain a lopsided view of the world. If it’s too old, it can’t recognise conditions that have emerged in the meantime. Ultimately, today’s AI can’t devise new ways to solve problems. It knows what it has been told or shown, and learns about things it has been told to consider. Some algorithms are designed to decide what’s noteworthy in a particular data set. But they can’t generate new ideas. That’s still a job for humans.
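One common way designers check whether they have enough data, and data that is representative enough, is to hold some of it back and measure the trained model against examples it never saw. A minimal sketch, assuming scikit-learn and a synthetic dataset rather than any real-world records, might look like this:

```python
# A sketch of how the training-data problem shows up in practice: the same
# kind of model is trained on a small sample and a larger one, then checked
# against held-out data it never saw. Dataset and model choices are illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

for n in (20, 2000):  # too little data vs. a healthier amount
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    # Held-out accuracy indicates how well the model generalises; with only
    # a handful of examples it tends to be noticeably worse.
    print(n, round(model.score(X_test, y_test), 3))
```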

Is it dangerous?

Some very smart, well-informed people have warned of the potential dangers of AI. The late astrophysicist Stephen Hawking said AI “could spell the end of the human race.” Elon Musk, the founder of Tesla Inc., has called AI research “summoning the demon.” The crux of such concerns often is a version of the notion, popularised by science-fiction author Vernor Vinge in the 1990s, called the technological singularity: the moment when machine intelligence equals human intelligence. At that point, computers would be able to give themselves ever greater intelligence and pursue goals ever more distant from human needs or desires. Singularity sceptics point out that scientists don’t know how to make a sentient robot, much less one that can guide its actions. Nonetheless, AI presents clear—though likely not existential—risks today. The nonprofit think tank OpenAI recently published a report detailing ways malicious actors could subvert current technology in the service of fraud, cyberattacks, even physical assaults by bomb-carrying delivery robots. A more pervasive danger may be an overreliance on underdeveloped technology. Two fatal crashes of Tesla Inc. cars are known to have occurred while the drivers were using the vehicle’s semiautonomous Autopilot system. In one case, the car’s sensors didn’t recognise a white trailer as it crossed the road against a bright sky. Thoroughly testing such complex systems is a tall order. Users of any AI will do well to exercise caution until the specific product or service is proven to perform well in extensive use.

How will it change our lives?

“AI is the new electricity,” declared Andrew Ng, chief executive of factory-automation specialist Landing.ai, former chief scientist at Baidu Inc. and a founder of the Google Brain deep-learning project. In a speech last year, he said machine learning is poised to transform one industry after another, spurring radically new capabilities and new businesses the same way electricity did after Thomas Edison and others commercialised it in the late 19th century. Indeed, ubiquitous AI appears to be a foregone conclusion, as major internet companies restructure their data centres to train deep-learning software and as consumer devices from smartphones to doorbells are outfitted for inference. Current trends in AI could lead to greater efficiency in consumption of all kinds of resources. AI in the home can automatically turn on lighting, air conditioning and other equipment only when it’s needed, or run washers and dryers when energy costs are lowest. Businesses can use it to price goods and services and to stock inventory to create less waste. Self-driving cars promise to save lives and reduce traffic congestion. Municipalities aim to route traffic to minimise congestion and accommodate pedestrians. Farms may allocate water and nutrients to individual plants as needed rather than en masse to whole fields. The advent of machine vision and hearing also portends surveillance systems that track people’s activities on behalf of not only vendors but also employers, building security and law enforcement. The prospect of integrating and centralising such networks poses severe challenges to civil liberties and individual privacy. Some computer scientists believe AI will become embedded in who we are, as neural implants and prostheses increasingly shore up human physiology. AI’s present is filled with potential, its future unpredictable, with many surprises in store.

Credit: Ted Greenwald for The Wall Street Journal, 30 April 2018.