Vivek Wadhwa and Mauritz Kop recently wrote an op-ed urging governments around the world to anticipate the threat of the emerging technology known as quantum computing. They even went so far as to title their article “Why Quantum Computing Is Even More Dangerous Than Artificial Intelligence.”
Up front: this one gets a very respectful strongly disagree from me. While I do believe quantum computers pose an existential threat to humanity, my reasons are vastly different from those of Wadhwa and Kop.
Wadhwa and Kop open their article with a description of AI’s failures, potential abuse, and how the media’s story exacerbated the danger posed by AI before it gains a powerful edge:
The world’s failure to rein in the demon of AI — or rather, the crude technologies that masquerade as such — should serve as a profound warning. There is an even more powerful emerging technology with the potential to wreak havoc, especially when combined with AI: quantum computing. We urgently need to understand this technology’s potential impact, regulate it, and prevent it from falling into the wrong hands before it’s too late. The world must not repeat the mistakes it made in refusing to regulate AI.
The duo’s article then details the nature of quantum computers and the current state of research before getting to the next key point:
Given the potential scope and capabilities of quantum technology, it’s absolutely critical not to repeat the mistakes made with AI — where regulatory failure has given the world algorithmic bias that supercharges human biases, social media that favors conspiracy theories, and attacks on the institutions of democracy fueled by AI-generated fake news and social media posts. The dangers lie in the machine’s ability to make autonomous decisions, with errors in the computer code resulting in unexpected, often damaging results.
They also describe the problem with current encryption standards and the need for quantum-resistant technology and new standards to prevent corporate and national secrets from being exposed to US adversaries:
Patents, trade secrets and related intellectual property rights must be firmly secured – a return to the kind of technology control that was a key part of security policy during the Cold War. The revolutionary potential of quantum computing takes the risks of intellectual property theft by China and other countries to a new level.
Finally, the article ends with a call for common sense legislation:
Governments urgently need to think about regulations, standards and responsible use – and learn from the way countries have handled or mishandled other revolutionary technologies, including AI, nanotechnology, biotechnology, semiconductors and nuclear fission.
I have written extensively about quantum computing. I believe it has the potential to be the most transformative technology in history. But the threat it poses, in my opinion, is more akin to that of fusion than to, say, a knife.
As Wadhwa and Kop note, the utter inability of the US government to enact even the slightest bit of human-centric regulation or policy regarding the misuse of AI has resulted in a development environment where bias is not only acceptable, but a given.
But no amount of government oversight and policy enforcement will change the fact that anyone with internet access and the will to succeed can create, train, and deploy models.
It’s a bit more difficult to build a functional quantum computer capable of performing hostile decryption tasks.
Wadhwa and Kop are absolutely right to call for some regulation – although I vehemently oppose the idea that the US, or any country for that matter, should in any way “return to the kind of technology control that was a key part of security policy during the Cold War” when it comes to quantum computing. Physics is not a trade or military secret.
Encryption, by its nature, does not require secrecy. And quantum computers, for all the hope they represent, only promise to speed things up. The world is already taking steps to mitigate the threat of quantum decryption.
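As an illustrative aside of my own (not from Wadhwa and Kop’s article), the textbook estimates make the point: Shor’s algorithm would break today’s public-key schemes such as RSA outright, while Grover’s algorithm only halves the effective strength of symmetric ciphers — which is why doubling symmetric key sizes and migrating to post-quantum public-key algorithms is already considered a workable defense. A rough back-of-the-envelope sketch:

```python
# Back-of-the-envelope effective security of common primitives against
# known quantum attacks, using the standard textbook estimates:
#   - Grover's search gives a quadratic speedup, halving the effective
#     bit strength of symmetric ciphers.
#   - Shor's algorithm solves factoring / discrete logs in polynomial
#     time, effectively zeroing out RSA and ECC security.

def effective_bits(classical_bits: int, attack: str) -> int:
    """Rough post-quantum security level in bits for a primitive."""
    if attack == "grover":   # sqrt(2^n) work = 2^(n/2) -> n/2 bits
        return classical_bits // 2
    if attack == "shor":     # polynomial-time break -> ~0 bits
        return 0
    return classical_bits    # no known quantum speedup

primitives = [
    ("AES-128", 128, "grover"),
    ("AES-256", 256, "grover"),
    ("RSA-2048", 112, "shor"),  # ~112-bit classical strength
]

for name, bits, attack in primitives:
    print(f"{name}: {bits}-bit classical -> "
          f"{effective_bits(bits, attack)}-bit quantum")
```

The takeaway: AES-256 retains 128-bit security even against Grover, while RSA falls entirely to Shor — hence the push toward post-quantum key exchange rather than secrecy about the physics.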
The truth is that AI and quantum computing are only as much of a threat as humanity’s willingness to use them for evil.
Right now, billions of people are being actively manipulated by biased algorithms in every industry imaginable, ranging from social media and job search to healthcare and law enforcement.
As mentioned, anyone with internet access and the will to succeed can learn to build and deploy AI models.
It costs millions – billions, in fact – to build a quantum computer. And the companies and labs that build them already face much stricter regulation in the US than their machine-learning-only counterparts.
As for the potential damage that quantum computers could cause? Aside from the duo’s concerns about encryption, their main worry seems to be that quantum computers will exacerbate existing problems with inequality and algorithmic bias.
To that I would argue that quantum computers represent our greatest hope for breaking free from the “bullshit in, bullshit out” paradigm that deep learning has sucked the entire field of artificial intelligence into.
Instead of relying on wall-sized stacks of GPUs for brute-force inference on buckets of unlabeled data and hoping prime rib and filet mignon come out the other side, quantum computers could open up whole new computational methods for dealing with smaller amounts of data more efficiently.
I believe the “threat” of AI could have been mitigated somewhat with stricter regulation (it’s never too late for the US and other actors to follow the EU’s lead). But the accessibility of AI technology makes it a clear and present threat to every human being on Earth.
Quantum computing technology is poised to help us solve many of the problems caused by the “scale is all you need” crowd.
Ultimately, I think it comes down to getting the right technology into the right hands at the right time. Artificial intelligence is a knife: a tool that almost anyone can use for good or ill. Quantum computing is like fusion: it can potentially wreak havoc on an unprecedented scale, but the cost of access is high enough to deter the vast majority of people on the planet.
Historically speaking, there’s little point in judging fusion or quantum computing by the hypothetical things that could go wrong – where would we be without nuclear power? The call for more oversight and common-sense regulation is warranted, but lumping quantum computing in with AI technology seems unjustified.
AI is clearly dangerous, and that’s where I think regulators, the media and the general public should focus their concerns.