Everything you need to know about the history of AI, what we mean by ‘deep learning’, and whether we can really trust artificial intelligence.
Are there different kinds of AI?
Mention AI, and most people think of ‘deep learning’. This kind of AI is loosely inspired by the way our brains work. It uses lots of computers to simulate large networks of artificial ‘neurons’, which are then trained, typically on humongous amounts of data, until they’ve learned to do what we want them to do – for example, understand speech.
This training is the slow and resource-heavy part. Once trained, even a phone can run the AI and instantly perform the right function, such as obeying your voice command. Deep learning is just one kind of AI among many others.
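To make the training-then-inference split concrete, here is a minimal sketch in Python, assuming only the NumPy library. The tiny network, the toy XOR task, the learning rate and the number of training steps are all illustrative choices, not how real speech or vision systems are built:

```python
# A minimal sketch of the idea behind deep learning, not a production system:
# a tiny network of artificial 'neurons' (one hidden layer) is trained on
# example data, and once trained it can answer new queries almost instantly.
# The XOR task and all sizes here are purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: inputs and the answers we want the network to learn (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised 'synapse' weights for a 2 -> 4 -> 1 network.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training: the slow, resource-heavy part. We repeatedly show the network the
# examples and nudge the weights to reduce its error (gradient descent).
for _ in range(20000):
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)
    error = y - output
    # Backpropagation: work out how much each weight contributed to the error.
    d_output = error * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)
    W2 += hidden.T @ d_output * 0.5
    W1 += X.T @ d_hidden * 0.5

# Inference: the fast part. Running an input through the trained weights is
# just a couple of matrix multiplications - cheap enough for a phone.
print(sigmoid(sigmoid(X @ W1) @ W2).round(2))
```

The training loop is the costly phase; the single line at the end is the cheap inference step that a phone could run.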
Some AIs use advanced statistics to help computers make predictions, such as the likely side effects of a new drug; others use logic to make deductions about their environment, such as a robot mapping out a route; and others simulate evolution, or even swarms of bees, to find solutions to difficult problems such as scheduling activities in a factory or optimising the shape of an aircraft wing.
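As an illustration of the simulated-evolution approach, here is a minimal, hypothetical sketch: a population of candidate ‘designs’ is repeatedly mutated and the fittest survive. The target numbers, population size and mutation rule are all invented for the example:

```python
# A minimal sketch of simulated evolution, not a real engineering optimiser:
# candidate solutions compete, the fittest survive, and mutated copies refill
# the population. The 'target design' below is a made-up stand-in for, say,
# wing-shape parameters.
import random

random.seed(0)

TARGET = [3, 1, 4, 1, 5, 9, 2, 6]  # pretend these are the ideal design parameters

def fitness(candidate):
    # Lower is better: total distance from the target design.
    return sum(abs(c - t) for c, t in zip(candidate, TARGET))

def mutate(candidate):
    # Randomly tweak one parameter, like a small genetic mutation.
    child = candidate[:]
    i = random.randrange(len(child))
    child[i] += random.choice([-1, 1])
    return child

# Start with a random population of candidate solutions.
population = [[random.randint(0, 9) for _ in TARGET] for _ in range(20)]

for generation in range(200):
    # Keep the fittest half, then refill the population with mutated copies.
    population.sort(key=fitness)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = min(population, key=fitness)
print(best, fitness(best))
```

Real evolutionary and swarm optimisers use far richer representations, recombination and constraints, but this survive-and-mutate loop is the core idea.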
Will AI ever be as clever as us?
This is a tough question, as it depends on how we measure intelligence. The more advanced AIs can recognise features in images (that’s a half-hidden cat; that’s a corner of a bus) better than us, provide expert opinions more reliably than us (your test results mean you have an 85 per cent chance of condition X), and play many games better than us – most recently Go and classic arcade games like Asteroids.
But no AI can drive a car as well as an experienced human driver, despite billions of dollars of research and training data equivalent to more than 10 billion miles of driving. No AI can control a robot well enough to wash the dishes. No AI has an IQ higher than that of an average six-year-old child.
We have very powerful computers, lots of training data, and some clever algorithms, but we still don’t know how to make an AI with the flexibility and learning capacity of the human brain – and we also don’t know how to make AIs that become cleverer without human input.
What’s more, we still don’t fully understand how intelligence arises in biological organisms, so it’s hard to see how our human-inspired AIs will match our intelligence in the near future.
What else is AI used for?
AI has become a ubiquitous technology. When you unlock your phone by looking at it, an AI has recognised your face. When you speak to your TV or smart speaker, an AI has recognised your voice. When you take a photo with your phone or digital camera, an AI identifies elements in the foreground to help blur the background, and combines several photos taken with different exposures to construct a perfect picture.
AIs check for fraud every time you buy something online. They monitor your online shopping behaviour and present you with adverts tailored to your likes. They suggest news stories you are more likely to be interested in, and answer your questions at online help desks.
AIs are creative, composing music, designing buildings, painting artworks. AIs are also enabling our vehicles to become increasingly autonomous, taking over some of the more tedious and repetitive aspects of driving.
Can we trust AI?
There are always downsides to new technologies. If we trust an AI too much, we may get ourselves into trouble – which is why driverless cars will always need a human override option.
If we train an AI on biased data, the AI will be biased too – studies have found, for example, that some facial-recognition systems identify white male faces more accurately than other faces. Some worry that AI will lead to job losses, which may be true, but AI will also create many new jobs.
AI is nothing new in this regard: a similar thing happened in the Industrial Revolution, and again in the ‘information revolution’ with the advent of computers and the internet.
In the end, artificial intelligence should not be feared. AI is being created to help us, and as with any new technology, we need to ensure that it is used appropriately.
Source: BBC Science News