Elon Musk & Future of Life’s Call to Halt AI Systems Training Is The Wrong Move
Instead, OpenAI, Google, and other AI labs should develop and train their AI systems while simultaneously innovating in privacy, security, ethics, trust, and safety
The Future of Life Institute recently published an open letter calling on AI labs to immediately halt the training of AI systems more powerful than GPT-4, a call endorsed by Elon Musk and 20,000+ tech professionals. It was a call heard by the Italian data protection regulator (the Garante), which subsequently banned ChatGPT in Italy. Setting aside the proponents’ potentially competing agendas and ulterior motives, their call to halt AI development and training is ineffective, inappropriate, and the wrong move. It suggests a zero-sum game and an unnecessary false tradeoff between AI on one side and privacy, security, ethics, and safety on the other. Instead, I call for AI labs to do what they should’ve done from the very beginning: develop and train AI systems while innovating in privacy, security, trust, and safety.
The call to halt AI development and training is inappropriate given generative AI’s broad utility and legitimate uses. Unlike other technologies (such as social media and facial recognition), AI applications span almost every facet of life, from law, medicine, and engineering to human resources, operations, marketing, and sales, to name a few. In contrast, some governments rightly stepped in to ban facial recognition technologies because their limited uses often didn’t justify their broad data collection and use, violating the privacy principles of data minimization and proportionality. Even more damning is the worldwide regulatory failure to ban harmful social media technologies, which have far less utility than AI.
Moreover, the proponents’ call is unlikely to work. Telling a group of innovators not to develop cutting-edge technology is like leaving shiny new Lego blocks in front of curious toddlers and asking them not to play and build something out of them. A halt is negative and demoralizing, whereas a challenge to innovate is positive and incentivizing. Plus, a halt is unlikely to happen without regulators stepping in with overreaching proscriptions. To be clear, other European regulators are starting to investigate following Italy’s ban. But jurisdictions that value innovation, such as the United States, are unlikely to institute such a ban at such a pivotal time in history. Again, we need look no further than regulators’ failure to ban social media platforms, despite their harmful data practices (see the Facebook Cambridge Analytica scandal, the Instagram teen depression fiasco, and the Clubhouse privacy infringements) and their even lesser utility.
Instead, I call on OpenAI and other AI labs to develop and train their AI systems while simultaneously innovating in privacy, security, trust, and safety. This means not just hiring privacy, security, trust, and safety experts who can help develop and train AI systems with those principles in mind. Perhaps more importantly at this grand scale, it means actually building ethical, privacy-protective, responsible technologies into the AI systems they’re developing.
Through a privacy lens, this also means following Privacy by Design principles, including, in particular, the principle of full functionality: a positive-sum, not zero-sum, approach to innovation. In plain terms, it means developing AI while accommodating legitimate interests and objectives like privacy, security, trust, and safety in a positive-sum, “win-win” manner, not through a dated, zero-sum approach that makes unnecessary trade-offs. It also means practicing privacy engineering throughout the development lifecycle of AI systems, with privacy engineering being the “how” to Privacy by Design’s “what.”
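To make privacy engineering slightly more concrete, here is a deliberately minimal, hypothetical sketch in Python of one such control: scrubbing obvious personal identifiers from raw text before it ever enters a training corpus. The function names and regex patterns are illustrative assumptions on my part, not anyone’s actual pipeline, and real data-minimization work goes far beyond this.

```python
import re

# Illustrative-only patterns for two common identifier types.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_pii(text: str) -> str:
    """Replace email addresses and phone-like numbers with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

def build_training_corpus(raw_documents):
    """Apply the redaction pass to every document before it is stored or trained on."""
    return [redact_pii(doc) for doc in raw_documents]

if __name__ == "__main__":
    sample = ["Contact Jane at jane.doe@example.com or +1 415 555 0100 about the contract."]
    print(build_training_corpus(sample))
    # -> ['Contact Jane at [EMAIL] or [PHONE] about the contract.']
```

The point of the sketch is not the particular regexes; it is that privacy controls like data minimization can be engineered into the pipeline itself, at the earliest point in the lifecycle, rather than bolted on after a model has already memorized the data.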
To be sure, the proponents of halting AI systems training could read this response and say that a halt would allow OpenAI and other AI labs to put in place the aforementioned recommendations: innovating in privacy, following Privacy by Design, and instituting privacy engineering in developing and training AI systems. But that would be a stretch, and their call for a halt remains problematic on several levels. First, their call is not the same as calling for the development and training of AI systems while innovating in privacy, security, trust, and safety. Second, it sets up privacy, security, and ethics as roadblocks, creating false dichotomies. And most importantly, a halt does not accomplish the much-needed innovation in privacy, security, trust, and safety.
To be fair, the proponents call on OpenAI and other AI labs to use the halt to “jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.” While this is an interesting enough suggestion, it still falls short for several reasons. External protocols are reactive, not proactive. They’re often not binding, which makes adoption problematic. Most importantly, they don’t harness the powerful positive incentive to innovate the way a call for privacy, security, trust, and safety innovation does, which means they’re not going to be as effective in proactively building privacy, security, trust, and safety into AI systems.
In summary, OpenAI and other AI labs should develop and train their AI systems while simultaneously innovating in privacy, security, trust, and safety. This helps ensure that we build AI systems responsibly, avoid regressive regulatory bans, engender trust among users and the general public, and build AI systems that serve and protect humanity.
If you support the positive call to innovate in privacy, security, trust, and safety in the development and training of AI systems, you can sign this open letter.
This post is the first in a series exploring AI, privacy, security, and ethics broadly, and OpenAI’s ChatGPT, Google’s Bard, and other generative AI more specifically. This first post is a response to Future of Life’s open letter to halt the development and training of AI systems. The upcoming second post will outline the privacy, security, and ethics challenges in AI, while the third post will provide privacy recommendations for OpenAI, Google, and other AI labs to leverage as a blueprint in developing their AI systems.
If you’d like to be the first to receive Lourdes’ future writings, feel free to subscribe to Lourdes’ Substack.