“We need to be super careful with AI — it’s potentially more dangerous than nukes”
AI is a dynamic constellation of technologies working in unison to let machines sense, act and comprehend with something approaching human-level intelligence: capabilities endowed by us lowly humans in order to better our lives. But whether the AI in question is weak or strong, is there really a dark side?
The answer is a resounding "Yes!" And that is not an alarmist view at all, for the fact of the matter is that the genie has already left the lamp, and its bad masters cannot be stopped. But this is not Star Wars; it is a clear and present danger, and one that could become uncontrollable and far more deadly than the worst form of global virus…
Going Over to the Dark Side
“I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal”
Dark AI is an umbrella term covering all the harm a self-governing system can do once the necessary ingredients (unchecked algorithms, biased data, and so on) are in place.
And while it has to be said that both private-sector and government R&D initiatives have been put in place to confront dark AI, as Forbes notes, clearly it’s:
“AI’s tremendous capacity for evil that makes it so dangerous; and combined with its great relevance to society, it is necessary to create, launch and enforce strict ethical standards and pre-emptive measures as a means to combat artificial intelligence.”
So let’s pause for a moment and take a look at a realistic list of dark AI possibilities, each of which could easily come about through specific malicious applications of artificial intelligence. These include: smart-device listening; fake news and bots; facial recognition and surveillance; and smart dust and drones. Further:
- AI could easily broaden the disparity between rich & poor
- Algorithms can influence our selling & buying transactions & patterns
- Sophisticated deepfakes could be fabricated from mere scraps of personal data
- The AI economy could be controlled by a small number of tech giants (sound familiar?)
Indeed: “armies of undetectable smart dust can work together to obliterate entire power grids and smart infrastructure systems. Facial recognition grants autonomous systems the right to millions of individuals’ characteristics, which, thanks to cloning and bots, can be mobilized in the form of compromising deep fake images and videos. Smart home devices take privacy infringement to the next level, as IoT technologies serve as effective spying conduits for both domestic cyber criminals and foreign agents.”
And as if that were not enough, our governance laws can easily be perverted, and our freedoms quashed, thanks to unrestrained use of AI for surveillance of the masses. This is why human-machine cooperation is essential now more than ever: as we head towards 2050, AI must remain solely an extension of human capability, not a substitution for it.
So Where Do We Go From Here?
Some may think that, in order to exercise caution, it would be advisable to slow down the rate of progress. However, that too has a serious downside: China and other power-hungry countries could overtake the US, Europe and other aligned nations, and the threat of dark AI would still exist.
As The Verge notes: “Elon Musk says we need to regulate AI before it becomes a danger to humanity. [Further], he famously compared work on AI to “summoning the demon,” and has warned time and time again that the technology poses an existential risk to humanity.”
Check out this YouTube video (approximately 48 minutes long), in which he addresses a group of US governors and stresses that “governments need to start regulating AI now.” But there will always be governments that want to go rogue, regardless of what they say in public…
The Potential Terminators
Respected scholars and tech leaders warn that AI is on a path to turning robots into a master class that will subjugate humanity, if not destroy it. Others fear that AI is enabling governments to mass-produce autonomous weapons (“killing machines”) that will choose their own targets, including innocent civilians.
Moreover, Tesla’s Elon Musk stressed: “I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal.”
With regard to regulation, Musk feels that artificial intelligence is an unusual case in which being proactive trumps being reactive, because, in his view, by the time the authorities react it will all be too late.
Indeed, he regards the present model of regulation: “in which governments step in only after a whole bunch of bad things happen, is inadequate for AI because the technology represents a fundamental risk to the existence of civilization.”
And while Musk is not referring to the kind of AI that mega-corporations such as Microsoft, Uber or Google are involved with, but rather to the super-intelligent entities which, until not so long ago, we all thought were confined to films, a large percentage of AI researchers agree with him that “work on the former will eventually lead to the latter…”
Open to Abuse
While a number of researchers are concerned about the potential abuse of present-day forms of “stupid,” narrow AI, a well-known Google Brain scientist, David Ha, stated that he was “more concerned about machine learning being used to mask unethical human activities.”
“International alignment will be critical to making global standards work”
But what about the dark side that cannot be stopped? As the Financial Times notes, the potential negative impact of artificial intelligence, from deepfakes to villainous uses of facial recognition and beyond, should be a genuine concern for everyone. So let’s examine what measures have been taken to address it thus far. Well, the US and the UK are both in the middle of setting out regulatory proposals.
However, in order to make global standards viable, international alignment is paramount, and every player has to agree on fundamental values. But how likely is a cast-iron guarantee that AI will be available to everyone and controlled for the good of humankind? And how can the AI R&D of private companies and rogue governments be policed? It can’t…
The CEO of Alphabet and Google, Sundar Pichai, unsurprisingly, has the view that:
“Sensible regulation must also take a proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities. Regulation can provide broad guidance while allowing for tailored implementation in different sectors.”
Moreover, findings from Stanford University’s ‘One Hundred Year Study on Artificial Intelligence’ came to the consensus that trying to: “regulate AI in general, would be misguided, since there is no clear definition of AI, and the risks and considerations are very different in different domains.”
Indeed, some people think that artificial intelligence will become far more competent than humans; and once the point of “technological singularity” is attained:
“computers will continue to advance and give birth to rapid technological progress that will result in dramatic and unpredictable changes for humanity.”
In fact, some believe that this stage could be reached in just 20 years.
In addition, a team at Oxford University, cautioned: “Such extreme intelligences could not easily be controlled (either by the groups creating them, or by some international regulatory regime)…the intelligence will be driven to construct a world without humans or without meaningful features of human existence.”
Clearly, this renders highly intelligent AI an unprecedented risk, in that extinction could well be the endgame. Further, the Oxford philosopher Nick Bostrom thinks that just as human beings out-competed, and came to utterly dominate, the gorillas of this planet, artificial intelligence will surpass our natural evolution, and we will eventually be dominated by it.
It is also necessary to be mindful of the fact that work on artificial intelligence is conducted in countless nations, by a huge number of academics, business people and government employees. Moreover, its use spans a broad range of machines, from industrial robots to search engines and everything in between.
But What About Autonomous Weapons?
This refers to weapons that use artificial intelligence to determine which targets to fire at, and how much devastation should be left behind. Of note, a degree of autonomy is part of all software that uses algorithms, and such software is included in countless weapon systems. At the far end of the spectrum sit fully autonomous weapons, which operate wholly independently of any human input. So if certain countries of the world, such as North Korea, China and Russia, were asked to agree to a ban on these fully autonomous weapons, what do you think their answer would be? That, along with other lesser but nonetheless important issues, is exactly why AI needs to be regulated…