If you sit down to watch Netflix, most of the shows and movies that appear on your home screen are chosen by machine. Even the images used to advertise shows can change from person to person, based on what the service knows about you. And when you start watching a show, more often than not your TV will use machine learning to add extra detail for a sharper image, interpolate new frames to make motion smoother, or separate dialogue from the rest of the audio track to make it easier to hear.
If you’re using a smartphone, you’re interacting with some form of AI almost constantly, from the obvious (recorder apps that transcribe your voice into text) to the invisible (algorithms that learn your routine to optimise your battery).
Photos taken with smartphones are increasingly AI generated. That is to say, the raw data captured by the camera is interpreted by a range of machine learning processes. This is no surprise to people who have been watching the industry (lenses and sensors are not getting much bigger, yet images keep improving dramatically), but the processing is so fast and seamless you may not notice it. Colours and details can be wholly invented by the phone when shooting in low light or at long zoom, and if a shot is blurry, people’s faces can be reconstructed to an estimate of what they probably look like. Some users of Samsung phones were recently surprised to find their zoomed-in images of the moon were fed through an algorithm designed specifically to add texture and detail of the moon, whose appearance is of course very predictable, to even the most over-exposed or featureless of shots.
You might even ask a smart speaker to dim your lights, hear it respond politely and feel briefly like a Star Trek character, even though you know the thing is hardly more “intelligent” than an analogue dial.
The point is that machine learning and AI have been developing for a long time, resulting not only in new products and technologies but in huge efficiency gains that make our lives nicer in ways we wouldn’t have been able to predict 15 years ago. And their ongoing development could keep delivering them.
RMIT’s Professor Matthew Warren said the current negative dialogue risked drowning out discussions of the opportunities advancing AI research could bring. Beyond consumer products, AI could revolutionise transport, logistics, sustainability efforts and more.
“We can’t get cybersecurity professionals. But the application of AI in security operation centres, in terms of looking at all those security logs and predicting when attacks are occurring, it’s actually going to solve a lot of the skill problems that industry is talking about,” he said.
“In the medical environment, you talk about a lack of specialised services, you’re going to see AI rolled out that can assist doctors in terms of identifying skin cancers for example. Processes at the moment where there’s an element of human error.”
While he acknowledged the string of high-profile experts and groups who have spoken up to say they believe AI is a path to ruin, he said that in many cases the alarm was over-reported, or encouraged by the very companies that make the products.
“A lot of it is hype, around trying to raise the profile of products or services,” he said. “Global warming is our No.1 extinction factor, not AI and androids taking over the earth, which is what some people have jumped to.”
To be clear, not every AI is innocuous, and all indications are that governments, including our own, are trying to get ahead of it, to avoid arriving too late as they did at the dawn of social media.
Chatbots and generative AI images in particular promise to boost efficiency across a range of jobs and industries, but they pose regulatory questions. Do we need to label AI-generated content, so people can properly consider its source? If the models are fed on a diet of original creative works, shouldn’t the humans responsible for that work be compensated when the models are put to commercial use? And do the bots reinforce unfair biases present in their training data?
None of those challenges involve evil robots, yet the stories that cut through do seem to carry a glimmer of those fears, whether it’s the reports of creepy, sinister things said by an early version of Microsoft’s Bing chatbot, or a viral story about a military drone that decided it would be more effective without its human handlers and killed them. The first example is a lot less concerning in the context of a brand-new system, designed for creative writing, being stress-tested. And the second didn’t happen at all; a US Air Force colonel was misunderstood when talking about simulations, and the story got out of control.
Even stories about more grounded AI issues are clearly shaped by our pre-existing expectations of AI disaster. Take the faked photo of a bomb at the Pentagon, which was billed as an attack using AI. Or the recent decision of the German newspaper Bild to shed 200 staff in a reorganisation, which was widely headlined to imply a major news organisation was replacing journalists with AI. In fact, the company is keeping all its journalists and writers; the AI connection was that the chief executive had cited the technology as a competitive pressure, which helped inform the company’s decision to refocus on quality writing and investigative journalism.
It’s no wonder people are scared. But if we’re going to take advantage of the best of AI while avoiding the worst, it’s initiative and healthy scepticism we need rather than fear. And maybe the tiniest little bit of optimism.