by Andrew Cochran
updated October 2019
Algorithms are sets of instructions — some say like recipes — for computers to reach an outcome. All forms of AI use algorithms, but not all algorithms are forms of AI. They’re used every day in every kind of digital device.
An algorithm formatted this page. Another will translate it into any language. Another still will assemble stories by how you choose to display them. Another determines responses when you click. And on and on. Algorithms do the same task every time, precisely. Even if the task is to be random, it will be precisely random.
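The "precisely random" point can be shown in a few lines of code: a pseudo-random number generator follows a fixed algorithm, so the same starting seed produces exactly the same "random" numbers every run. A minimal Python sketch:

```python
import random

def precisely_random(seed, count=5):
    """Return `count` 'random' numbers that come out identical on every run.

    A pseudo-random generator follows a fixed algorithm, so the same
    seed always yields exactly the same sequence: precisely random.
    """
    rng = random.Random(seed)
    return [rng.randint(0, 99) for _ in range(count)]

# Two separate runs with the same seed produce the same "random" numbers.
assert precisely_random(42) == precisely_random(42)
```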
Imagine yourself arriving home
Now think of unlocking your front door. You find your keys. From the bunch you pick one, matching it to your memory of the key for this particular door. You put the smallest end of the key into the lock. You apply some forward pressure until it meets resistance. You turn it to the right. You performed what could be thought of as the ‘door unlocking algorithm’ — a methodical series of actions for a certain outcome.
You would have used other algorithm-like procedures, too: one to differentiate your keys from all the things in your grasp, another to figure out the small end, etc. The ‘door unlocking algorithm’ probably links with another, the ‘opening door algorithm’ and so on, all joining together to form the routinized sequence of steps you use for ‘arriving home’.
Consider the number of intelligent functions required for unlocking the door:
- tactile sensitivity (e.g., finding the keys by touch),
- visual identification and reasoning (e.g., what’s ‘a key’, discriminating the correct key from the others, locating the keyhole),
- manual dexterity (e.g., positioning the key in the lock),
- motor skills (e.g., turning the key to the right).
This choreography is do-it-in-the-dark easy for humans but highly complex for computers. For machines to perform all these functions, each step needs either to be written out, step-by-step, or learned by example. Both ways require algorithms.
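As a rough illustration of what "written out, step-by-step" means, here is a hypothetical Python sketch of the door-unlocking sequence. The item fields, shapes, and step wording are all invented for the example; the point is that every judgment a person makes by feel must be spelled out:

```python
def unlock_door(pocket_contents, door_shape):
    """A hypothetical, fully written-out 'door unlocking algorithm'."""
    # Step 1: differentiate the keys from everything else in your grasp.
    keys = [item for item in pocket_contents if item["kind"] == "key"]
    # Step 2: pick the key matching your memory of this door's key.
    key = next(k for k in keys if k["shape"] == door_shape)
    # Steps 3-5: insert, push until resistance, turn right.
    return [
        f"insert small end of {key['name']} into lock",
        "push forward until resistance is met",
        "turn key to the right",
    ]

pocket = [
    {"kind": "phone", "name": "phone"},
    {"kind": "key", "name": "car key", "shape": "car"},
    {"kind": "key", "name": "house key", "shape": "front door"},
]
steps = unlock_door(pocket, "front door")
```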
Yes, they are like Lego bricks
Algorithms come in many types. They can perform a discrete function, like ‘sort’, or be put together in any number of ways, like building blocks.
Consider another analogy. You are packing for a flight to a warmer climate and want to fit everything into a carry-on suitcase. You would match your belongings to what you’ll use at your destination, thinking perhaps of your various activities and any formalities you expect. From all the possible choices you would prioritize the items so everything fits in the suitcase. You might further optimize space by including or excluding certain items. When packing you may arrange items a certain way, for example putting pieces used every day in the same area.
Your closet is like a dataset and the steps of packing similar to a series of algorithms. Many are second nature because you already know all the items and their uses. Computers only know what they are told or can learn. In traditional computing, writing all the instructions for ‘packing’ would be a massive task: describing every item, providing its dimensions, anticipating all possible uses so priorities could be followed, etc.
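The packing steps above can be sketched as a simple greedy algorithm. Everything here is made up — the items, their sizes in litres, their priorities — but it shows how much a computer must be told explicitly before it can "pack":

```python
def pack(closet, capacity_litres):
    """Greedy 'packing algorithm': take items in priority order while they fit.

    A person does this implicitly; a computer needs every item's size
    and priority spelled out in the dataset.
    """
    chosen, used = [], 0.0
    for item in sorted(closet, key=lambda i: i["priority"]):
        if used + item["litres"] <= capacity_litres:
            chosen.append(item["name"])
            used += item["litres"]
    return chosen

closet = [
    {"name": "t-shirts", "litres": 6, "priority": 1},
    {"name": "swimsuit", "litres": 1, "priority": 2},
    {"name": "winter coat", "litres": 12, "priority": 9},
    {"name": "sandals", "litres": 3, "priority": 3},
]
packed = pack(closet, capacity_litres=10)  # the coat doesn't make the cut
```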
There are special algorithms for AI
LEARNING: The newest forms of AI algorithms, known as machine learning, let a system acquire data without every item being described in advance or its future uses anticipated. The system ‘learns’ from many kinds of inputs, for example sights, sounds, or words, roughly the way a toddler figures out the world. Show a machine learning system a t-shirt, call it ‘t-shirt’, and an algorithm associates the image with ‘t-shirt’. The more images it sees labelled ‘t-shirt’, the better the algorithm becomes at differentiating a t-shirt from other images, such as a dress shirt or blouse. This is known as ‘training’ the system. Variants of machine learning refine the process. One such variant, deep learning, sometimes referred to as neural networks, uses a structure inspired by the human brain, refining a prediction through multiple layers of processing.
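A toy sketch of this kind of training in Python: each labelled example nudges a very simple model (here, just the average features per label), and a new image is classified by whichever label's examples it most resembles. The two-number "features" are invented stand-ins for real image data, and real systems are vastly more sophisticated:

```python
def train(examples):
    """'Train' a toy model: average the features of all examples per label."""
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            s[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in s] for label, s in sums.items()}

def classify(model, features):
    """Predict the label whose averaged examples are closest to the input."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], features))

# Made-up features, e.g. (sleeve length, collar stiffness), scaled 0-1.
examples = [
    ([0.2, 0.1], "t-shirt"), ([0.3, 0.2], "t-shirt"),
    ([0.9, 0.8], "dress shirt"), ([0.8, 0.9], "dress shirt"),
]
model = train(examples)
```

The more labelled examples `train` sees, the better the averages represent each category — the same reason more cat pictures make a real system better at spotting cats.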
PREDICTION: As well as using algorithms that produce one of two definite states (e.g., ‘true’ or ‘false’), AI algorithms calculate probabilities (e.g., the degree to which something is true or false). Multiple iterations keep raising or lowering the probabilities until there is sufficient confidence in the prediction the AI algorithm is calculating (e.g., it’s your face looking at your mobile, so it’s OK to unlock). The illusion of intelligence comes partly from algorithms refining predictions billions of times per second, so that high probabilities are reached quickly. Faster computing has steadily expanded these prediction abilities.
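The refine-until-confident loop can be sketched like this. The match scores, the update rule, and the 0.99 threshold are all illustrative, not how any real face-unlock system works:

```python
def decide_unlock(match_scores, threshold=0.99, prior=0.5):
    """Refine a yes/no probability from repeated measurements (toy sketch).

    Each score is treated as a fresh piece of evidence; the running
    probability rises or falls with every iteration until it is
    confident enough to unlock (or all the evidence is used up).
    """
    p = prior
    for score in match_scores:
        # Simple odds update: scores above 0.5 push p up, below 0.5 push it down.
        odds = (p / (1 - p)) * (score / (1 - score))
        p = odds / (1 + odds)
        if p >= threshold:
            return True, p   # confident enough: unlock
    return False, p          # never reached sufficient confidence

unlocked, confidence = decide_unlock([0.9, 0.9, 0.9])
```

Three strong matches push the probability from 0.5 past the 0.99 threshold; a run of weak scores would drive it down instead and leave the phone locked.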
AI algorithms learn like we do, except with more examples. Many more.
Training an AI computer is like giving it experience. The essential ingredient of experience for an algorithm is a piece of data. The more pieces the better. The more pictures of a cat an AI algorithm sees, the faster and more accurately it can discern a cat, even if much of the cat is obscured in the image.
Imagine a training set of 10,000 images, all labelled ‘cat.’ If some of these weren’t of a cat but, say, a fish, the learning would be impaired and the system would be less accurate when trying to identify a cat. Mislabelled data weakens future analysis. This makes ‘clean’ data sets valuable.
Training sets for human faces typically involve 1 million to 15 million faces. One facial recognition system at Baidu, a major Chinese AI company, used a training set of 200 million faces.
AI algorithms acquire experience in three ways
SUPERVISED LEARNING: Most AI systems currently use what’s known as supervised learning. The training sets are structured, comprised of data that has been pre-labelled by humans (e.g., ‘this is a cat’). It is designed to achieve known results (e.g., ‘which of these new images show a cat?’). The algorithm keeps working to match a pre-determined outcome. The objective is classification.
UNSUPERVISED LEARNING: In unsupervised learning, no labels are used to describe the data, which are called unstructured data. The algorithms work to identify common features (e.g., how are all these papers similar?). The objective is clustering.
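A minimal clustering sketch, using a bare-bones version of the well-known k-means algorithm with made-up points: the algorithm is never told what the groups are, only how many to look for, and it discovers them by similarity alone:

```python
def kmeans(points, k, rounds=10):
    """A tiny k-means clusterer: group unlabelled points by similarity.

    Repeatedly assign each point to its nearest cluster centre, then
    move each centre to the average of the points assigned to it.
    """
    centres = [list(p) for p in points[:k]]  # naive initial guess
    groups = []
    for _ in range(rounds):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centres[i])),
            )
            groups[nearest].append(p)
        centres = [
            [sum(dim) / len(g) for dim in zip(*g)] if g else centres[i]
            for i, g in enumerate(groups)
        ]
    return groups

# Six unlabelled points that happen to form two tight clumps.
points = [(1, 1), (1.2, 0.9), (0.8, 1.1), (8, 8), (8.2, 7.9), (7.9, 8.1)]
clusters = kmeans(points, k=2)
```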
REINFORCEMENT LEARNING: A variant of these is known as reinforcement learning. Like supervised learning, the outcome is known. Like unsupervised learning, the source data is unstructured. An example is an algorithm learning how to play a game. The desired end is known (winning the game). The training data is generated internally: each repeated round of play shows how much closer to, or farther from, the desired end state that round has brought the system. Put another way, reinforcement learning involves vast amounts of trial-and-error, learning from each round. One recent experiment used 500 million rounds of hide-and-seek. The objective is optimization.
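Reinforcement learning's trial-and-error can be sketched as a toy two-move game. The win probabilities, the number of rounds, and the explore/exploit split are invented for illustration:

```python
import random

def learn_best_move(win_prob, rounds=2000, seed=0):
    """Toy reinforcement learning: pick a move, see if it wins, repeat.

    The desired end (winning) is known, but there is no labelled data;
    the algorithm generates its own experience round by round.
    `win_prob[m]` is the (made-up) chance that move m wins a round.
    """
    rng = random.Random(seed)
    wins = [0] * len(win_prob)
    plays = [0] * len(win_prob)
    for _ in range(rounds):
        if rng.random() < 0.1:  # occasionally explore a random move
            move = rng.randrange(len(win_prob))
        else:                   # otherwise exploit the best move so far
            move = max(range(len(win_prob)),
                       key=lambda m: wins[m] / plays[m] if plays[m] else 0.5)
        plays[move] += 1
        if rng.random() < win_prob[move]:
            wins[move] += 1     # this round moved closer to the goal
    return max(range(len(win_prob)),
               key=lambda m: wins[m] / plays[m] if plays[m] else 0)

# Over many rounds the learner should settle on move 1 (70% win rate).
best = learn_best_move([0.3, 0.7])
```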
Here is a summary of the three ways AI algorithms learn:
| |Supervised learning|Unsupervised learning|Reinforcement learning|
|---|---|---|---|
|Purpose|classifying the data|clustering the data|optimizing how to reach the goal|
There are potential issues with bias
Algorithms are created by humans to achieve a human purpose. Human bias can become embedded, either knowingly or unknowingly. Human bias can be conscious (e.g. I like paintings more than photography) or unconscious (e.g. wedding pictures always feature brides wearing white). Human bias can also appear in:
- Choosing which data to analyze
- What labelling terms to use
- The criteria for sorting, and more
Keep in mind, an algorithm executes its task the same way every time. Embedded bias may have perpetual effects.
Once an algorithm has absorbed a bias, it can be hard to detect. Algorithms endure for long periods when they work well. They also work in combination with other algorithms, masking bias even more deeply. Bias in an algorithm may be compounded by bias in a dataset, in how the data were selected, annotated, etc. AI research scientist Margaret Mitchell warns of what she calls ‘bias laundering’.
There are potential issues with transparency, too
Transparency is often proposed as the answer. This, too, is complicated. Sometimes companies or individuals gain competitive advantage from their algorithms. Making them transparent could be commercially damaging. There may be ethical problems. Revealing details could cause hardship to individuals identified, either directly or indirectly. Insights into how an algorithm works could provide the means for hacking or abuses by others.
It comes down to a matter of degree. Disclosure of the kinds of training sets used, for example, provides context for the results without revealing individual records or processes of evaluation. So can disclosure of the goals applied to the algorithm. As with other areas in AI, there is more thinking to be done. It has become a separate area of research known as explainable AI, or XAI.
We have more on this topic
- Explainer | An ethical checklist for robot journalism
- Reports/white papers | Statement on Algorithmic Transparency and Accountability
- Panels/talks | How we teach computers to understand pictures
- Academic papers | Algorithmic Accountability
- Academic papers | Mapping the field of algorithmic journalism