November 30, 2023
Generative Artificial Intelligence (“AI”) has seen explosive growth in 2023. According to the latest McKinsey Global Survey, AI has moved from a tech-focused conversation to a boardroom priority, with nearly 25% of surveyed C-suite executives reporting that they personally use generative AI tools. As platforms like ChatGPT become increasingly accessible and organizations continue to invest, the technology will only integrate deeper into our corporate and creative spheres.
The response has not been entirely enthusiastic. In March 2023, more than 1,100 signatories (Elon Musk and Steve Wozniak among them) put their names to an open letter calling on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”. That did not happen. If anything, the development of generative AI has accelerated, despite many fervently proclaiming it an existential danger.
Although existing regulatory schemes have been (unsurprisingly) slow to address these advancements, recent events suggest that policymakers are picking up the pace. The Biden administration has released an executive order that sets out “new standards” for AI safety and security. In the United Kingdom, decision-makers attended the AI Safety Summit, aiming to reach global consensus on how to tackle the risks posed by AI. Lawmakers in the European Union have even agreed on a risk-based framework for AI regulation, meaning that pan-EU legislation is imminent.
While effective oversight is surely overdue, the challenge will be striking an adequate balance between facilitating technological innovation and economic benefit on the one hand, and ensuring safety and ethical governance on the other.
Author: Emily Groper, 2023/2024 Articling Student-at-Law