Apocalypse Now or Paradise Later: AI’s Reality Distorting Tools
March 2nd, 2018 by CHHS RAs
By CHHS Extern Erika Steele
Like a child trapped in a septuagenarian’s body, artificial intelligence (AI) presents an odd dichotomy, one that triggers emotions not seen since the proverbial splitting of the atom: exuberance and resistance, paranoia and terror, hopes for a better life and warnings of Armageddon.
The recent BuzzFeed profile of Aviv Ovadya, an MIT graduate, Chief Technologist at the Center for Social Media Responsibility, and a Knight News Innovation Fellow at the Tow Center for Digital Journalism, is no exception. In it, Ovadya warns that AI-assisted technology, used maliciously, could spread propaganda, manipulate reality, and effectively compete with real humans, displacing the voices of legislators, regulators, technologists, and technology companies. He argues that AI’s distortion of reality will shake the foundations of our democracy because “it can make it appear as if anything has happened, regardless of whether or not it did.”
AI as We Know It
Society’s knowledge of AI is based in part on our shared experiences and in part on the warnings of technology giants like Elon Musk and Stephen Hawking, who cautioned respectively that AI is a “fundamental risk to the existence of human civilization,” and that “[AI] brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy.” Public concern is further fueled by publications like “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” produced by a group of 26 leading AI researchers at a two-day conference at the University of Oxford, which warns that as AI becomes more powerful and plentiful, security attacks will become less expensive, more easily carried out, more precisely targeted, and harder to detect.
The current American political climate, and foreign hacking efforts to swing elections, offer prime examples of how AI automation of tasks involved in political security may expand existing surveillance, persuasion, and deception threats. Likewise, new uses of speech synthesis for impersonation and automated hacking continue to escalate. AI could be used to commandeer an autonomous vehicle, to conduct unlawful surveillance, or to launch coordinated malicious attacks. AI can also carry out cyber attacks by automating certain labor-intensive tasks, such as spear phishing, where attackers craft personalized messages for each potential target in order to steal sensitive information or money. On the political front, AI can be used for surveillance, targeted propaganda, and the spread of misinformation. For instance, advances in image and audio processing could produce highly realistic videos of state leaders making inflammatory comments they never actually made.
We have already seen AI used to superimpose one person’s face onto another’s body in videos. “Deepfakes,” for example, which are fake pornographic videos of celebrities created with a machine learning algorithm, represent an unscrupulous abuse of technology that should be banned. Similarly, Face2Face technology resembles deepfakes in that it swaps faces in real time with incredibly realistic results. It, too, presents possibilities for misuse, but it also demonstrates the need for fraud detection systems, systems that will most likely rely on AI methods to spot fakes.
Ovadya and the Malicious Use of AI report both highlight the use of deepfakes, Face2Face, or similar techniques to manipulate videos of world leaders as a threat to political security. To understand what is at stake, one need only imagine a fake video of Trump declaring war on North Korea surfacing in Pyongyang, and the fallout that would result. The example illustrates that nefarious use of AI could enable unprecedented levels of mass surveillance through data analysis and mass persuasion through targeted propaganda, eroding our democratic system.
At the end of the day, we must concede that deception is common in our daily lives. Some lies are harmless, while others may have severe consequences and can become an existential threat to society. As unsettling as this appears, we as humans have accepted that there are some lies we can live with and usually treat as normal: the small lies, the white lies. Computational propaganda presents a different threat, and some argue that AI could set us back 100 years in how we consume the news, because we will not be able to discern true facts from fake news when artificially amplified and targeted stories dominate political discussions, in essence altering our shared reality.
However, as alarming as Ovadya’s message is, we should take comfort in the fact that society, and science fiction writers, have since the 1950s played upon our fears that robots will one day destroy mankind and take over the world. With the advances of AI technologies in recent decades, Armageddon-like scenarios in which AI threatens the very existence of its creators haunt us ever more. Undeniably, AI functionality is limited today, but researchers believe it will soon give way to transformative applications that far surpass what we see now. That is not to say current technology is not advanced; in fact, AI capabilities have grown significantly, and those advances are what give rise to the doomsday scenarios many fear, as developers and end users discover that AI behaves differently in the real world than in training environments, and as reliance on algorithms grows for beneficial and nefarious purposes alike.
The Promise of AI
Rather, we should follow gestalt principles and find meaningful perceptions in a chaotic world. From chatbots to killer bots, AI runs the gamut of applications, but it is not all bad. AI is transforming many industries, from farming to manufacturing to cognitive computing, bringing benefits that help extend our life expectancy.
The field of medicine is one of the most polarizing industries when it comes to adopting modern deep learning techniques, yet despite the unending deluge of panic-ridden articles declaring that AI will be the end of mankind, we are living in a world where algorithms save lives every day. AI assists in improving a doctor’s bedside manner, and its potential boosts the overall performance of healthcare organizations as they move to the cloud. Thanks to AI, an iPhone can detect cancer and a smart watch can detect a stroke. You can even quit smoking or kick an opiate addiction with the help of AI. In Copenhagen, emergency service dispatchers use Corti, a digital assistant that leverages deep learning to help medical personnel make critical decisions in the heat of the moment. Likewise, Art Medical is using AI to solve a big problem in medicine, namely that people get sicker in hospitals, with smart feeding tubes and monitors designed to prevent life-threatening complications in ICU patients.
Moreover, industry experts like Bill Gates optimistically believe that the rise of AI is not the end of society as we know it. Gates’s views are diametrically opposed to the warnings of other technology leaders, including Musk; he argues that AI will be great for society because “we will be able to do more with less.”
Undeniably, data scientists must continue to identify, examine, and pinpoint bias in the data systems we use, especially since nearly all data inherently contains bias. In sum, though it may not always seem like it, we are living in the most prosperous era in human history. A decade or two ago, many of the advances AI has brought to life were considered out of reach. It appears, then, that the future of AI is not sentient killer machines, but rather longer [and hopefully more prosperous] human lives.
Despite the Changing Landscape, There is Hope
To better protect against the rise of ill-intended AI, policymakers should collaborate with technical researchers and subject matter experts to investigate, prevent, and mitigate potential malicious uses of AI. Researchers and engineers should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable. Best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security, and imported where applicable to the case of AI. All involved should actively seek to expand the range of stakeholders and domain experts participating in discussions of these challenges.
Additionally, as a society we must promote a culture of responsibility in which researchers, organizations, and end users alike recognize the importance of education, ethical statements and standards, and shared norms and expectations. We also need to explore and implement data verification, responsible disclosure of AI vulnerabilities, security tools, and secure hardware. As AI continues to enable novel attacks that exploit an improved capacity to analyze human behaviors, moods, and beliefs from available data, users must be cognizant that cyber attacks that subvert systems may undermine the ability of democracies to sustain truthful debates. Consequently, individuals will bear the additional burden of educating themselves and learning to authenticate and validate their sources of information.
AI’s slow creep into every facet of modern life is causing a social disturbance not seen since the atom bomb, giving rise to concerns that AI-assisted technology will distort reality and shake the foundations of our democracy. As alarming as this premise appears, in today’s ever-changing socioeconomic and technological landscape, we must also take stock of the multitude of benefits AI has brought to our daily lives.
Instead of blaming AI algorithms for the increase in deceptive threats, spear phishing, social and political propaganda, the spread of misinformation, and the like, we must come to terms with the fact that these deceptive practices are not novel; rather, the persons abusing the technology are increasing the frequency and severity of activities that previously required greater resources. AI is not the monster dreamt up by Hollywood, plotting to destroy humanity; today, most AI is quite limited, capable only of executing tasks within parameters defined by its human creators. Thus, the threat to society, democracy, and our way of life is not AI, but rather the persons exploiting it for nefarious purposes.