Technology is steadily taking over monotonous jobs in today’s economy, and driving may soon become one of them. When it does, the impact will be enormous. By 2050, driverless cars and mobility as a service are expected to grow into an astounding $7 trillion industry worldwide. From 2035 to 2045, consumers are anticipated to regain up to 250 million hours of free time that would otherwise be spent driving. An estimated $234 billion in public costs could be saved by reducing accidents and property damage caused by human error, and driverless cars could eliminate up to 90% of all traffic fatalities, saving over 1 million lives every year. But if people are no longer behind the wheel, how will A.I. make the decisions that we make every time we are on the road?
Real-life applications of ethical A.I. can grow even more complex, given the many variables at play in our day-to-day lives. And as A.I. advances, it becomes responsible for more and more moral and ethical decision making.
But A.I., just like people, can make mistakes. Amazon’s Rekognition is a face-identification system: its algorithms can identify up to 100 faces in a single image, track people in real time through surveillance cameras, and scan footage from police body cameras. A.I. can also learn bias from the data its programmers feed it, adding another flaw that stands in the way of its acceptance in cars. In 2014, Amazon began training an A.I. to review job candidates. The system was trained on resumes submitted mostly by men; it concluded that ‘male’ was a preferred quality in job hires and started to filter out women.
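To see how this kind of bias arises, here is a minimal sketch (a hypothetical toy, not Amazon’s actual system): a naive scorer that learns word weights from past hiring decisions. Because the imagined training data is skewed toward male hires, words that merely correlate with being male end up boosting a resume’s score.

```python
from collections import Counter

# Toy training data: each resume is a bag of words, labeled 1 (hired)
# or 0 (rejected). Most past hires happen to be men, so gendered words
# like "mens" correlate with the "hired" label.
training = [
    (["engineer", "mens", "chess", "club"], 1),
    (["engineer", "mens", "soccer"], 1),
    (["engineer", "womens", "chess", "club"], 0),
    (["analyst", "mens", "debate"], 1),
    (["analyst", "womens", "debate"], 0),
]

hired_words, rejected_words = Counter(), Counter()
for words, label in training:
    (hired_words if label else rejected_words).update(words)

def score(resume):
    # Each word contributes +1 for every past hire it appeared in and
    # -1 for every rejection. Nothing here "knows" about gender; the
    # skewed history alone penalizes "womens".
    return sum(hired_words[w] - rejected_words[w] for w in resume)

# Two resumes identical except for one gendered word score differently:
print(score(["engineer", "chess", "mens"]))
print(score(["engineer", "chess", "womens"]))
```

The scorer never sees a rule like “prefer men”; it simply reproduces the pattern in its training history, which is exactly the failure mode the Amazon experiment exposed.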
If A.I. sometimes can’t be trusted, and people don’t accept its decisions, how can A.I. become ethical? Find out here.