By Arvin Charles
If there is one memorable scene from the film "The Imitation Game", starring Benedict Cumberbatch as Alan Turing, it is the one where Turing is brought into an interrogation room by a detective. The detective sits across from him in the dimly lit room and poses the question, "Could machines ever think as human beings do?" Turing replies (rather arrogantly, as portrayed by Cumberbatch) with a thought-provoking answer: "Of course machines cannot think as human beings do. A machine is different from a person; hence, they think differently. The interesting question is: just because something thinks differently from you, does that mean it is not thinking?"
Although computing was still in its infancy at the time, Alan Turing had already developed a train of thought on artificial intelligence, and even devised a test, the Turing Test, to measure a machine's capability to exhibit intelligence indistinguishable from that of a human being. Fast forward to today, and numerous artificial intelligence systems already exist, from Siri on your iPhone to self-driving vehicles. Most artificial intelligence systems in use today fall under the category of weak AI, or narrow AI, in which the system is designed to perform one particular, narrow task. Nevertheless, we may be on the verge of a breakthrough in producing artificial general intelligence (AGI), widely known as strong AI: artificial intelligence with an intellectual capability equivalent to that of a human being, possibly even surpassing human intelligence. The advancement of AI would open the window to a whole new world of possibilities and in many circumstances replace the need for both physical and mental human labour. The question is: where do we draw the line before machines start outperforming humankind, making us obsolete?
Benefits of artificial intelligence
The biggest benefit of the advancement of artificial intelligence would be the reduction of errors, particularly in the medical field. According to estimates, approximately 400,000 patients die in hospitals in the United States every year due to medical mishaps. To put things into perspective, Dan Munro, a contributor at Forbes, once compared this figure to the number of passengers who would perish if an Airbus A380 crashed every day for a year. Hence, the introduction of more sophisticated AI to improve existing electronic health records (EHRs) could be a major game changer in reducing this staggering number. One leading company in this field is CloudMedX, which is developing an AI solution that turns the EHR into a predictive tool to help doctors make optimal decisions in treating their patients. The system also draws on information such as a patient's medical history, and even illnesses experienced by the patient's family, and connects it to the patient's symptoms in real time to detect certain diseases at an early stage, improving the chances of treating these chronic conditions before they take hold. This is done using healthcare natural language processing (NLP) technology, which picks up unstructured data and free text, such as doctors' notes, intake assessments and patient discharge summaries, and decodes the information. Machine learning (ML) is coupled with this process: the system advises medical personnel on treatment, and by accepting or rejecting its recommendations, doctors 'teach' the model, which gradually learns and improves.
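The accept/reject feedback loop described above can be sketched in miniature. The class below is purely illustrative (it is not CloudMedX's system; the symptoms, treatments and learning rule are made up for the example): each doctor decision nudges the weight linking the extracted symptoms to a treatment up or down, so the recommender gradually reflects clinical judgement.

```python
from collections import defaultdict

class TreatmentRecommender:
    """Toy online learner: doctors 'teach' it by accepting or rejecting advice."""

    def __init__(self, learning_rate=0.1):
        self.lr = learning_rate
        # weight[(symptom, treatment)] starts neutral at 0.0
        self.weights = defaultdict(float)

    def score(self, symptoms, treatment):
        # sum the learned symptom-treatment associations
        return sum(self.weights[(s, treatment)] for s in symptoms)

    def recommend(self, symptoms, treatments):
        # suggest the treatment with the highest learned score
        return max(treatments, key=lambda t: self.score(symptoms, t))

    def feedback(self, symptoms, treatment, accepted):
        # accepted recommendation -> reinforce; rejected -> penalise
        delta = self.lr if accepted else -self.lr
        for s in symptoms:
            self.weights[(s, treatment)] += delta

rec = TreatmentRecommender()
symptoms = ["chest pain", "shortness of breath"]
options = ["aspirin", "antibiotics"]
rec.feedback(symptoms, "aspirin", accepted=True)
rec.feedback(symptoms, "antibiotics", accepted=False)
print(rec.recommend(symptoms, options))  # -> aspirin
```

Real clinical systems are, of course, far more elaborate, but the core idea is the same: every accept-or-reject click is a training signal.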
With many exciting discoveries, such as the TRAPPIST-1 star, reported to be orbited by seven Earth-like planets, and the prospect of Mars colonization, the presence of complex artificial intelligence could make the field of space exploration flourish. Using intelligent robots that can withstand harsh environments and collect massive amounts of data, scientists can make important discoveries without leaving their laboratories on Earth. Take, for instance, the upgrade to the Mars rover Curiosity, which lives up to its name by having its own built-in curiosity. The new AEGIS system enables the rover to analyze and select interesting rocks using its camera. Information about a rock's shape, contour and brightness is harvested from the image, and the rover then ranks the rocks from most to least interesting. The rover's ChemCam instrument then fires a laser at the chosen rock and analyzes the resulting flash to identify the elements present. Hence, the use of artificial intelligence, albeit expensive, substantially reduces the danger to which humans are exposed in the name of science.
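The ranking step can be illustrated with a simple sketch. This is not the real AEGIS code; the feature names and weights below are invented for the example, but the shape of the idea is the same: score each candidate rock from image features such as brightness, size and contour complexity, then sort the targets from most to least interesting.

```python
from dataclasses import dataclass

@dataclass
class RockCandidate:
    name: str
    brightness: float   # normalised 0..1, from the image
    size: float         # apparent area, normalised 0..1
    edge_count: int     # rough proxy for contour complexity

def interest_score(rock, w_brightness=0.5, w_size=0.3, w_edges=0.2):
    # made-up weights for illustration; the real system has its own criteria
    return (w_brightness * rock.brightness
            + w_size * rock.size
            + w_edges * min(rock.edge_count / 20.0, 1.0))

def rank_targets(rocks):
    # most interesting first, ready for the laser/ChemCam queue
    return sorted(rocks, key=interest_score, reverse=True)

rocks = [
    RockCandidate("dull pebble", brightness=0.2, size=0.1, edge_count=3),
    RockCandidate("bright outcrop", brightness=0.9, size=0.6, edge_count=14),
]
print([r.name for r in rank_targets(rocks)])  # -> ['bright outcrop', 'dull pebble']
```

The value of such autonomy is latency: rather than waiting many minutes for round-trip instructions from Earth, the rover can pick its own next target.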
Drawbacks of artificial intelligence
Further progress in the development of more sophisticated artificial intelligence systems will no doubt bring about human job losses. The impact of robotics alone in the manufacturing industry can already be seen today on production and assembly lines in factories and companies, displacing many low-skilled jobs. The introduction of complex AI technologies, however, could bring about job losses on an entirely grander scale. For example, the mere acceptance of self-driving vehicle systems and driverless cars would make the jobs of millions of truck and taxi drivers obsolete. Furthermore, algorithms for big-data management and processing are also bound to supersede administrative and office occupations. A case in point is Fukoku Mutual Life, an insurance company in Japan that replaced 34 employees with AI software able to read medical certificates and data on hospital stays and surgeries, saving approximately 140 million yen per year in salary expenses.
Artificial intelligence is acknowledged by many as a dual-use technology (one capable of either great good or great harm), and prominent minds such as Stephen Hawking, Elon Musk and Bill Gates have continually warned that artificial intelligence may well spell the extinction of mankind. According to Hawking, whether artificial intelligence is beneficial depends on who controls the technology, and on whether it can be controlled in the first place. Though it may seem eerily similar to the Terminator films, a great deal of research is being done on autonomous systems for military purposes, and this raises serious moral and ethical questions about the use of AI in military missions, especially when human lives are at risk. Today's AI lacks common sense and the ability to make situational judgements, and depending heavily on this technology would be unwise.
As portrayed in many blockbuster films, the fear that AI may supersede human intelligence is not only worrying but may prove to be something that can actually happen. According to Stephen Hawking, humans are limited by slow biological evolution, while AI systems would be able to redesign themselves at an alarming rate once full artificial intelligence is achieved. Hence, there may come a point where humans can no longer compete with this rapid evolution and are superseded. This point where AI outsmarts humankind has been termed "the singularity".
In short, although artificial intelligence may be the next frontier in our development as human beings and would ideally bring many benefits to society, care has to be taken that we do not lose control over artificial intelligence in our attempt to 'play God'. As Elon Musk has said, strict and well-defined regulations for the development of AI have to be enforced internationally. After all, we would not want a Tony Stark somewhere in the world to accidentally create an Ultron, would we?