Photo credit: Pixabay.
Aftermath is a 2017 movie based on the Überlingen air disaster of 2002. It tells the revenge story of a grieving parent of the victims, who went on to kill the air traffic controller blamed for the crash. The movie had reviewers rolling their eyes at Arnold Schwarzenegger attempting an emotive role, but the true story behind it has an important takeaway on managing new risks as AI becomes part of everyday life.
Alexis Zheng, a San Francisco-based product lead at Uber, brought up the Überlingen story at a panel discussion I moderated earlier this month at Seoul's Startup Festival. Überlingen has little to do with the ride-hailing company apart from a similar name. It's the German city over which two planes collided: a Russian Tupolev passenger jet and a DHL Boeing 757 cargo jet.
For humans to work with humans is hard enough; for humans to work with machines is going to be even harder.
Both planes were equipped with a traffic collision avoidance system (TCAS), which automatically directed one plane to fly 1,000 feet higher and the other to go 1,000 feet lower. But in this case, a Swiss air traffic controller also spotted the impending disaster and gave the pilots the opposite instruction. The Russian pilot followed the human command, whereas the DHL pilot from Western Europe was trained to obey only the TCAS and disregard any contrary human instruction. The two opposite commands neutralized one another, and the planes came back onto a collision course. Everyone aboard the Tupolev and the cargo jet died in the crash.
Today the incident is used to illustrate the challenges of machines and humans working together, says Zheng, who leads a team building machine learning and AI products at Uber. “The biggest challenge in my job is when a machine is making a decision and a human is making a decision at the same time; that’s an area less talked about today, but it’s going to be very important going forward,” she says. “For humans to work with humans is hard enough; for humans to work with machines is going to be even harder.”
Fear of an AI running amok has been a popular theme in science fiction and movies. But rather than a Frankenstein or a Matrix, what we’re starting to see is a myriad of AI applications becoming part of our everyday lives in ways we may not even be aware of: from our Uber rides to news feeds and Netflix recommendations. Add the internet of things to the mix, and we have a scenario where almost every asset in our lives is intelligent, knows about us, and is thinking of ways to be more efficient and personalized. “The term AI will disappear because it’s everywhere,” says Zheng.
The challenge for her and others developing and deploying AI applications then is to anticipate unintended consequences and prevent their pernicious effects. Startups and larger tech companies jumping in to use AI in their products and services have a lot to gain from real-time automation. But how much attention are they paying to how AI applications will behave in various real-life situations?
See: The AI haves are pulling ahead of the have-nots, McKinsey study finds
AI panel discussion at Startup Festival 2017 in Seoul. Photo credit: Jeffrey Broer.
Design problem
It’s a concern that is cropping up in different forums.
Because every decision is expressed at a mathematical level, it forces us as humans to anticipate these problems when we design intelligent machines.
A debate has been raging over the extent to which fake news manufactured in Russia and posted on social media might have influenced the outcome of the 2016 US election. Facebook boss Mark Zuckerberg dismissed it as a “crazy idea” when it first surfaced last year but has since agreed to share with US congressional investigators thousands of political ads linked to fake accounts in Russia. He has promised to double down on a team “working on election integrity.”
It has been well known that algorithms decide Facebook feeds, mostly based on engagement metrics. What has been surprising is the extent to which the system could be manipulated by trolls to influence elections. An added twist to the controversy was Facebook’s decision last year to disband the team of human editors for its trending topics, making it a purely algorithmic exercise. Now it has vowed to push back against “problematic content.”
But preventing misuse is not just a matter of tweaking algorithms. Adobe principal designer Khoi Vinh pointed out at DesignUp, a design conference in Bangalore, that AI engineers should really go back to how user engagement is defined in the first place. And that’s a design problem.
“Recent history shows that new technologies are rolled out and they produce unintended consequences and significant downstream effects on society. But we tend to talk about it in technical terms, like a bug or something in the algorithm that was wrong, and nothing to do with design or the user experience on FB,” he told a hall packed with designers.
See: China’s most addictive news app Toutiao eyes world domination with AI feeds
Alexis Zheng expanded on this idea during the discussion in Seoul. She pointed out that setting objectives is difficult because everything in AI is expressed in mathematical terms, including definitions of good versus bad outcomes. “Because every decision is expressed at a mathematical level, it forces us as humans to anticipate these problems when we design intelligent machines.”
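To make that concrete, here is a minimal, purely hypothetical Python sketch of what such a mathematically expressed objective can look like. The metric names and weights are my own illustration, not anything Uber or Facebook actually uses:

```python
# Purely illustrative: a "good outcome" reduced to a weighted score.
# The metric names and weights are assumptions made up for this example,
# not any company's real objective function.

def feed_item_score(click_prob, watch_seconds, share_prob,
                    w_click=1.0, w_watch=0.01, w_share=2.0):
    """Rank a feed item purely by predicted engagement."""
    return (w_click * click_prob
            + w_watch * watch_seconds
            + w_share * share_prob)

# Nothing in this formula knows whether the content is true, useful,
# or harmful; the system will surface whatever maximizes the number.
print(feed_item_score(click_prob=0.9, watch_seconds=45, share_prob=0.4))
```

Everything the machine will later "care about" has to be anticipated and written into that one number.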
Common sense
AI is a long way from the general intelligence of a human brain, which draws on wide-ranging and deep context in decision-making. An AI brain simply lacks the common sense we’re used to, and that can produce weird results. It’s something AI engineers have to keep in mind at all times when defining objectives for machines. Zheng illustrates this with a game used in AI training.
What’s the AI equivalent of gut feel or intuition?
The game is called Coast Runner, and players race jet skis around a water course. The score depends on the time it takes to complete laps, the player’s ranking, and extra points collected by hitting floating objects along the course. Because success is defined as a weighted average of those three objectives, the machine picks the optimal strategy: it goes into a small loop and keeps hitting the floaters to rack up points, without bothering to complete the course.
“The machine would cheat. It figures it can just drop the first two objectives and win on points without doing the heavy work. It doesn’t get the human context, the grand mission, or what good you’re trying to bring to society. It takes the objective function very literally. And that’s what it’s going to optimize,” explains Zheng. “I find myself in that situation a lot. If I don’t express the objective function well, I find the machine cheating by picking the area where it knows it’s easier to score. Then I won’t accomplish the strategic goal.”
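A back-of-the-envelope version of that weighted average shows how literal the machine can be. The weights and the two strategies' numbers below are invented for illustration; they only mimic the three-objective scoring Zheng describes:

```python
# Illustrative only: a weighted-average score like the one described above.
# The weights and both strategies' numbers are invented for the example.

def race_score(lap_time_s, ranking, points,
               w_time=0.4, w_rank=0.3, w_points=0.3):
    # Lower lap times and rankings are better, so invert them;
    # bonus points are better when higher.
    return (w_time * (1.0 / lap_time_s)
            + w_rank * (1.0 / ranking)
            + w_points * points)

finisher = race_score(lap_time_s=90, ranking=1, points=10)      # completes the course
exploiter = race_score(lap_time_s=9999, ranking=8, points=200)  # loops and farms floaters

print(f"finisher: {finisher:.2f}  exploiter: {exploiter:.2f}")
# The exploiter wins because the points term dominates: the objective,
# taken literally, rewards skipping the race entirely.
```

If the points term is allowed to dwarf the others, the "cheating" behavior is not a bug in the machine at all; it is exactly what the objective asked for.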
See: Half the work people do can be automated: McKinsey
And finally, there’s the question of communicating the logic to those who deploy or use the system. “Business teams come to our product teams and say, ‘Your product is a black box – it’s very difficult for us to understand the decision your product is making.’ AI fundamentally does act a little like a black box. The best explanation I can sometimes use is that it just works this way,” says Zheng, adding that even humans are not always able to explain fully the decisions we make. What’s the AI equivalent of gut feel or intuition?
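Zheng leaves that question hanging. For what it's worth, one partial workaround practitioners sometimes reach for, and not something Zheng describes, is to train a small, readable surrogate model that imitates the black box's predictions. Below is a minimal sketch with made-up data, assuming scikit-learn is available:

```python
# A rough sketch, not Uber's approach: approximate an opaque model with a
# small, readable surrogate tree. The dataset, models, and tree depth are
# all assumptions chosen just so the example runs end to end.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)

black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Train a depth-3 tree to imitate the black box's predictions, then print
# its rules: a partial, human-readable stand-in for an explanation.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(6)]))
```

It is an approximation of an approximation, which is perhaps the closest current analogue to explaining a gut feeling after the fact.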
This post Black boxes, human factors, and the unintended consequences of today’s AI appeared first on Tech in Asia.