by Andrew Cochran
December 2018
updated October 2019
Algorithms are sets of instructions — like recipes — for computers to reach an outcome. All forms of AI use algorithms, but not all algorithms are forms of AI. They’re used every day in every kind of digital device.
An algorithm formatted this page. Another will translate it into any language. Another still will assemble stories according to how you choose to display them. Another determines responses when you click. And on and on. Algorithms do the same task every time, precisely. Even if the task is to be random, it will be precisely random.
Imagine yourself arriving home
Think about unlocking your front door. You find your keys. From the bunch, you pick one, matching it to your memory of the key for this particular door. You put the smallest end of the key into the lock. You apply some forward pressure until it meets resistance. You turn it to the right. All together, you performed a methodical series of actions for a certain outcome. This is like how an algorithm works.
You would have used other algorithm-like procedures, too. One would differentiate your keys from all the things in your grasp. Another might figure out the small end of the key. The ‘door unlocking algorithm’ can link with others, too, such as the ‘opening door algorithm’ and so on. They join together to form a routinized sequence of steps you use for ‘arriving home’. With algorithms, you figure out the outcome you want, and then the sequence of steps to achieve it.
Consider the number of intelligent functions required for unlocking the door:
- tactile sensitivity (e.g., finding the keys by touch),
- visual identification and reasoning (e.g., what’s ‘a key’, discriminating the correct key from the others, locating the keyhole),
- manual dexterity (e.g., positioning the key in the lock),
- motor skills (e.g., turning the key to the right).
This choreography is do-it-in-the-dark easy for humans but complex for computers. For machines to perform all these functions, each one must be written out, step by step, or learned by example. Both ways require algorithms.
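To make the idea concrete, here is a minimal sketch in Python. Every name in it (find_keys, pick_front_door_key, and so on) is invented for illustration; the point is only that small procedures link into a larger routine.

```python
# A hypothetical 'arriving home' routine, built from smaller algorithms.

def find_keys(in_hand):
    """Differentiate the keys from everything else in your grasp."""
    return [item for item in in_hand if item.endswith("key")]

def pick_front_door_key(keys):
    """Match one key to your memory of the front-door key."""
    for key in keys:
        if key == "front door key":
            return key
    return None  # no match: the routine below has to cope

def unlock_door(key):
    """Insert the small end, push until resistance, turn right."""
    return "locked out" if key is None else "door unlocked"

def arrive_home(in_hand):
    """Chain the smaller algorithms into one routine."""
    return unlock_door(pick_front_door_key(find_keys(in_hand)))

print(arrive_home(["phone", "wallet", "bike key", "front door key"]))
# -> door unlocked
```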
They are like Lego bricks
Algorithms come in many types. They can perform a discrete function, like ‘sort’, or be assembled, like building blocks.

Consider another analogy. You are packing for a flight to a warmer climate and want to fit everything into a carry-on suitcase. You would match your belongings to what you’ll use at your destination, anticipating your various activities on the trip. From all the possible choices, you might rank the items so the most important will fit in the suitcase. You might include or exclude certain articles. You may arrange items in certain ways, for example, by putting pieces used daily in the same area.
Your closet is like a dataset, and the packing steps work like a series of algorithms selecting from it. Many steps are second nature because you already know the items and their uses. But computers only know what they are told or can learn. In traditional computing, writing all the instructions for ‘packing’ would be a massive task: describing every item, providing its dimensions, imagining various uses so priorities could be followed, etc.
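Here is a rough sketch of those packing steps as code. The items, importance scores, volumes, and suitcase capacity are all invented; the point is that ‘rank’ and ‘select’ are each algorithms in their own right, assembled into a bigger ‘pack’ algorithm.

```python
# The 'dataset': (item, importance for this trip, volume in litres).
# All values are made up for illustration.
closet = [
    ("swimsuit", 10, 1), ("sandals", 9, 3), ("sunscreen", 8, 1),
    ("winter coat", 1, 12), ("t-shirts", 7, 4), ("novel", 5, 2),
]

CARRY_ON_LITRES = 10

def pack(items, capacity):
    # Step 1: rank by importance (a 'sort' algorithm).
    ranked = sorted(items, key=lambda item: item[1], reverse=True)
    # Step 2: include items while they still fit (a greedy 'select' algorithm).
    suitcase, used = [], 0
    for name, importance, volume in ranked:
        if used + volume <= capacity:
            suitcase.append(name)
            used += volume
    return suitcase

print(pack(closet, CARRY_ON_LITRES))
# -> ['swimsuit', 'sandals', 'sunscreen', 't-shirts']
# (the novel and the winter coat no longer fit)
```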
There are special algorithms for AI
LEARNING: The newest forms of AI algorithms, known as machine learning, enable a system to acquire data without having every item described in advance or every future use anticipated. The system ‘learns’ from many kinds of inputs, for example, from sights, sounds, or words. It is roughly similar to how a toddler figures out the world. Show the machine learning system a t-shirt, call it ‘t-shirt’, and an algorithm associates the image with ‘t-shirt’. The more images it sees labelled ‘t-shirt’, the better the algorithm can differentiate a t-shirt from other images, such as a dress shirt or blouse. This is known as ‘training’ the system. Variants of machine learning refine this process. Some use an approach known as ‘deep learning,’ built on so-called ‘neural networks.’ These use a process inspired by the human brain, refining a prediction through multiple layers of analysis.
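A toy sketch of that training loop, with invented numbers standing in for images: each ‘image’ is reduced to two made-up measurements, and the learner simply averages the examples it has seen under each label, then matches a new image to the nearest average. Real machine learning systems are vastly more elaborate, but the label-association idea is the same in spirit.

```python
# Training by labelled example: average what each label 'looks like'.
from collections import defaultdict

def train(examples):
    """examples: ((feature1, feature2), label) pairs, features invented."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for (f1, f2), label in examples:
        s = sums[label]
        s[0] += f1; s[1] += f2; s[2] += 1
    # One 'prototype' per label: the average of everything seen with it.
    return {label: (s[0] / s[2], s[1] / s[2]) for label, s in sums.items()}

def predict(prototypes, image):
    """Label a new image by its closest prototype."""
    f1, f2 = image
    return min(prototypes,
               key=lambda lb: (prototypes[lb][0] - f1) ** 2 +
                              (prototypes[lb][1] - f2) ** 2)

labelled = [((0.2, 0.1), "t-shirt"), ((0.3, 0.2), "t-shirt"),
            ((0.9, 0.8), "dress shirt"), ((0.8, 0.9), "dress shirt")]
prototypes = train(labelled)
print(predict(prototypes, (0.25, 0.15)))   # -> t-shirt
```

The more labelled examples the averages absorb, the more reliable the match becomes, which is the sense in which more training refines the system.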
PREDICTION: As well as using algorithms that produce definite states (e.g., ‘true’ or ‘false’), AI algorithms calculate probabilities (e.g., the degree to which something is ‘true’ or ‘false’). Multiple iterations keep raising or lowering the probabilities until there is sufficient confidence in the prediction (e.g., it’s your face looking at your mobile – so it’s ok to unlock). The illusion of intelligence partly comes from algorithms refining predictions billions of times per second so that high probabilities are reached quickly. Faster computing has steadily expanded these prediction abilities.
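Here is a simple sketch of that refinement loop, with invented numbers and thresholds. Each ‘scan’ is a noisy observation of whether the face at the camera is the owner’s; a Bayes-style update nudges the running probability up or down until it crosses a confidence threshold and the phone ‘decides’.

```python
import random
random.seed(7)

UNLOCK_AT = 0.99     # act once the probability is high enough
LOCK_OUT_AT = 0.01   # or once it is low enough

def scan_matches(true_owner):
    """One noisy observation: gives the right answer 90% of the time."""
    return true_owner if random.random() < 0.9 else not true_owner

def decide(true_owner, prior=0.5):
    p = prior
    for step in range(1, 100):
        # A matching scan raises p; a mismatch lowers it (Bayes' rule).
        likelihood = 0.9 if scan_matches(true_owner) else 0.1
        p = p * likelihood / (p * likelihood + (1 - p) * (1 - likelihood))
        if p >= UNLOCK_AT:
            return f"unlock (p={p:.3f} after {step} scans)"
        if p <= LOCK_OUT_AT:
            return f"stay locked (p={p:.3f} after {step} scans)"
    return "undecided"

print(decide(true_owner=True))   # typically unlocks after a few scans
```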
AI algorithms learn like we do, except with more examples. Many more.
Training an AI computer is like giving it experience. The essential ingredient of experience for an algorithm is a piece of data. The more pieces the better. The more pictures of a cat an AI algorithm sees, the faster and more accurately it can discern a cat, even if much of the cat is obscured in the image.
Imagine a training set of 10,000 images, all labelled ‘cat.’ If some of these weren’t of a cat but, say, a fish, the learning would be impaired, and the system would be less accurate when trying to identify a cat. Mislabelled data weakens future analysis. This makes ‘clean’ data sets valuable.
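The effect of mislabelling can be demonstrated with a small invented experiment. Here each ‘image’ is a single made-up number (cats cluster near 0, fish near 1), and the learner is a bare nearest-neighbour rule: label a new image the same way as the single most similar training example.

```python
import random
random.seed(2)

def nearest_label(training, value):
    """Copy the label of the most similar training example."""
    return min(training, key=lambda ex: abs(ex[0] - value))[1]

cats = [(random.gauss(0.0, 0.2), "cat") for _ in range(1000)]
fish = [(random.gauss(1.0, 0.2), "fish") for _ in range(1000)]

# The same cat images, but 20% accidentally labelled 'fish'.
noisy_cats = [(v, "fish" if random.random() < 0.2 else "cat")
              for v, _ in cats]

tests = ([(random.gauss(0.0, 0.2), "cat") for _ in range(300)] +
         [(random.gauss(1.0, 0.2), "fish") for _ in range(300)])

for name, training in [("clean labels", cats + fish),
                       ("20% mislabelled", noisy_cats + fish)]:
    hits = sum(nearest_label(training, v) == truth for v, truth in tests)
    print(f"{name}: {hits / len(tests):.1%} of test images identified correctly")
```

Trained on clean labels, the rule is nearly perfect; with one in five cats mislabelled, roughly that share of cat lookups now copies a wrong label.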

Training sets for human faces typically involve 1 million to 15 million faces. A facial recognition system at Baidu, a major Chinese AI company, used a training set with 200 million faces.
AI algorithms acquire experience in three ways
SUPERVISED LEARNING: Most AI systems currently use what’s known as supervised learning. The training sets are structured, composed of data that have been pre-labelled by humans (e.g., ‘this is a cat’). It is designed to achieve known results (e.g., ‘Which of these new images shows a cat?’). The algorithm keeps working to match a pre-determined outcome. The objective is classification.
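A minimal supervised-learning sketch, with invented data: the examples arrive pre-labelled, and the algorithm searches for the rule that best reproduces those labels. Here the rule is as simple as possible, a single threshold on one measurement.

```python
def fit_stump(examples):
    """examples: (value, label) pairs; labels are 'cat' or 'fish'."""
    best_threshold, best_correct = None, -1
    for threshold, _ in examples:   # try each observed value as a cut point
        correct = sum(("cat" if v <= threshold else "fish") == label
                      for v, label in examples)
        if correct > best_correct:
            best_threshold, best_correct = threshold, correct
    return best_threshold

labelled = [(0.1, "cat"), (0.2, "cat"), (0.35, "cat"),
            (0.8, "fish"), (0.9, "fish"), (1.1, "fish")]
cut = fit_stump(labelled)
print(f"learned rule: 'cat' if value <= {cut}, else 'fish'")

# Classifying a new, unlabelled measurement with the learned rule:
print("cat" if 0.3 <= cut else "fish")   # -> cat
```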
UNSUPERVISED LEARNING: In unsupervised learning, no labels are used to describe the data, which are called unstructured data. The algorithms identify common features (e.g., how are all these papers similar?). The objective is clustering.
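By contrast, here is a minimal unsupervised sketch: the classic k-means algorithm receives numbers with no labels at all and finds the natural groupings on its own. The data are invented.

```python
import random
random.seed(3)

# 100 unlabelled numbers with two hidden groupings, near 0.0 and near 1.0.
points = ([random.gauss(0.0, 0.2) for _ in range(50)] +
          [random.gauss(1.0, 0.2) for _ in range(50)])

def kmeans(points, k=2, rounds=10):
    centres = random.sample(points, k)
    for _ in range(rounds):
        # Assign each point to its nearest centre...
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centres[i]))
            groups[nearest].append(p)
        # ...then move each centre to the middle of its group.
        centres = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centres)]
    return centres

print([round(c, 2) for c in kmeans(points)])
# two cluster centres emerge close to 0.0 and 1.0, with no labels provided
```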
REINFORCEMENT LEARNING: A variant of this is known as reinforcement learning. Like supervised learning, the outcome is known. Like unsupervised learning, the source data is unstructured. An example is an algorithm learning how to play a game. The desired end is known (winning the game). The training data is generated internally: each round of play tells the system whether it has moved closer to or farther from the desired end state. Put another way, reinforcement learning involves vast amounts of trial and error, learning from each round. One recent experiment used 500 million rounds of hide-and-seek. The objective is optimization.
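And a bare reinforcement-learning sketch, with an invented game: the goal (winning rounds) is known, but there are no labelled examples. The agent generates its own training data by playing thousands of rounds, keeping a running win rate for each move and gradually favouring whichever has worked best so far.

```python
import random
random.seed(4)

moves = ["defend", "attack", "bluff"]
true_win_rate = {"defend": 0.3, "attack": 0.6, "bluff": 0.45}  # hidden from the agent

wins = {m: 0.0 for m in moves}
plays = {m: 0 for m in moves}

for round_number in range(5000):
    # Mostly exploit the best-known move; sometimes explore a random one.
    if round_number < len(moves) or random.random() < 0.1:
        move = random.choice(moves)
    else:
        move = max(moves, key=lambda m: wins[m] / max(plays[m], 1))
    # Play the round; the game itself supplies the feedback (win or lose).
    reward = 1 if random.random() < true_win_rate[move] else 0
    plays[move] += 1
    wins[move] += reward

for m in moves:
    print(f"{m}: tried {plays[m]} times, "
          f"estimated win rate {wins[m] / max(plays[m], 1):.2f}")
# After thousands of rounds of trial and error, 'attack' dominates.
```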
Here is a summary of the three ways AI algorithms learn:
| | Supervised | Unsupervised | Reinforcement |
| --- | --- | --- | --- |
| Input | known | unknown | internally generated |
| Output | known | unknown | known goal |
| Purpose | classifying the knowns | clustering the unknowns | optimizing how to reach the goal |
There are potential issues with bias
Algorithms are created by humans to achieve a human purpose. Human bias can become embedded, either knowingly or unknowingly. Human bias can be conscious (e.g., I like paintings more than photography) or unconscious (e.g., wedding pictures always feature brides wearing white). Human bias can also appear in:
- Choosing which data to analyze
- Choosing which labelling terms to use
- Setting the criteria for sorting, and more
Keep in mind, an algorithm executes its task the same way every time. Embedded bias may have perpetual effects.
Once an algorithm has absorbed a bias, it can be hard to detect. Algorithms endure for long periods when they work well. They also work in combination with other algorithms, masking bias even more deeply. Bias in an algorithm may be compounded by bias in a dataset, in how the data were selected, annotated, etc. AI research scientist Margaret Mitchell warns of what she calls ‘bias laundering.’
There are potential issues with transparency, too
Transparency is often proposed as the answer. This, too, is complicated. Sometimes companies or individuals gain a competitive advantage from their algorithms. Making them transparent could be commercially damaging. There may be ethical problems. Revealing details could cause hardship to individuals identified, either directly or indirectly. Insights into how an algorithm works could provide the means for hacking or abuses by others.
It comes down to a matter of degree. Disclosure of the kinds of training sets used, for example, provides context for the results without revealing individual records or processes of evaluation. So does disclosure of the goals the algorithm is given. As with other areas in AI, there is more thinking to be done. The problem has become a separate area of research known as explainable AI, or XAI.
Related
- Explainer | An ethical checklist for robot journalism
- Reports/white papers | Statement on Algorithmic Transparency and Accountability
- Panels/talks | How we teach computers to understand pictures
- Academic papers | Algorithmic Accountability
- Academic papers | Mapping the field of algorithmic journalism