An AI assistant
Why is AI so attractive to those with deep pockets? Because these intelligent machines, which mimic the neural networks of the human brain, can be taught to think and act like humans using what is known as deep learning. By showing the machine real-life situations, say the symptoms of a disease, and then having a group of expert doctors tell it what the disease is and what the best possible cure is, the machine becomes as intelligent as a panel of doctors, if not more so. Instead of going to an expensive doctor, you can choose to access the machine owned by a corporate giant, pay a smaller fee and get cured with the machine's advice.
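This kind of learning from expert-labelled examples can be sketched in a few lines. The example below is a toy nearest-neighbour classifier, not a real medical system: the symptoms, diseases and labelled cases are all invented for illustration, standing in for the much larger labelled datasets that deep learning actually uses.

```python
# Hypothetical training data: symptom vectors (1 = present, 0 = absent)
# labelled by a panel of "expert doctors". All names are invented.
TRAINING_DATA = [
    # (fever, cough, rash) -> diagnosis from the expert panel
    ((1, 1, 0), "flu"),
    ((1, 0, 1), "measles"),
    ((0, 1, 0), "cold"),
    ((1, 1, 1), "measles"),
]

def diagnose(symptoms):
    """Predict a disease by finding the closest labelled example."""
    def distance(a, b):
        # Count how many symptoms differ between two cases.
        return sum(x != y for x, y in zip(a, b))
    _, label = min(TRAINING_DATA, key=lambda ex: distance(ex[0], symptoms))
    return label

# A new patient with fever and rash matches the labelled measles case.
print(diagnose((1, 0, 1)))
```

The principle is the same at scale: the more expert-labelled cases the machine sees, the closer its answers come to the panel's collective judgement.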
AI has already started to take away jobs: chauffeurs' jobs to autonomous cars on the road, private secretaries' jobs to machine assistants that make phone calls for appointments. Many more jobs will go as machines now in the process of deep learning come into the market.
To those who say that AI will create alternative jobs, such as programming AI machines, just weigh the number of jobs lost against the number of jobs created. A recent report by McKinsey predicts that by 2030 as many as 800 million jobs could be lost worldwide to automation.
So who makes money while those who lose their jobs lose money? The ones who own the AI machines, and the employer you worked for before you lost your job to a machine. This holds even though products and services would become cheaper for the consumer, given the greater efficiency of AI machines over humans.
The ethics of AI will become all-important as AI progresses, both in its intelligence and in its reach into the workplace.
At present, the ethics of AI revolves around the 'Three Laws of Robotics' devised by the science fiction author Isaac Asimov:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Asimov also added a fourth, or zeroth law, to precede the others:
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
The Three Laws, together with the zeroth, have shaped thinking on the ethics of artificial intelligence.
But what if a murderer attacks another human being, and the victim whips out a knife in self-defence? By the time the robot sees this circumstance, the two are trying to kill each other. Unable to tell who the murderer is, the robot injures both humans, because its inaction would allow two human beings to come to harm!
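The dilemma can be made concrete with a toy sketch of the First Law as a rule check. Everything here is invented for illustration: the robot scores each available action against the First Law and finds that, in this scenario, no permitted action exists.

```python
# Toy model of the First Law applied to the two-attackers scenario.
# All action names and attributes are hypothetical.

def violates_first_law(action):
    # First Law: a robot may not injure a human being or, through
    # inaction, allow a human being to come to harm.
    return action["injures_human"] or action["allows_harm_by_inaction"]

ACTIONS = [
    {"name": "do nothing",
     "injures_human": False, "allows_harm_by_inaction": True},
    {"name": "restrain person A",
     "injures_human": True, "allows_harm_by_inaction": False},
    {"name": "restrain person B",
     "injures_human": True, "allows_harm_by_inaction": False},
]

permitted = [a["name"] for a in ACTIONS if not violates_first_law(a)]
print(permitted or "dilemma: every available action violates the First Law")
```

The empty result is the point: rules stated as absolutes give the robot no lawful move, which is why the Three Laws alone cannot settle real ethical conflicts.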
A big question – Should we use AI as weapons of destruction?
And, another big question – Does doing away with jobs on a large scale, worldwide, harm humanity?
The ethics of artificial intelligence must be addressed by the UN along with all stakeholders, because if industry organisations alone formulate the ethics, there will certainly be a conflict of interest.
One important decision governments will have to take is the appropriate level of insurance premiums to be paid by automation machine manufacturers and service providers, in order to fund limited-period income support for those who lose their jobs to automation. Among other things, this income support will be needed to re-train jobless people.
The good side of income support for re-training, if you lose your job to automation, is that you can build creative talents and critical thinking skills that cannot be replaced by a machine. And if you are able to earn money from these talents and skills by setting up a small business, you can perhaps spend more quality time with family and friends than you did in your old job!