Labour Proposes Licensing AI to Safeguard Against Risks: Should Advanced AI Developers Be Licensed?


In a bid to address the potential risks associated with advanced artificial intelligence (AI) technologies, the UK’s Labour party has proposed licensing for developers working on these cutting-edge tools. Lucy Powell, Labour’s digital spokesperson, has called for stricter regulation of companies that train AI products on vast datasets, such as those OpenAI used to develop ChatGPT.

The push for AI regulation has gained momentum within the government, with Prime Minister Rishi Sunak acknowledging the “existential” threat that AI could pose to humanity. Recent warnings from industry insiders, including an adviser to the government on AI, have emphasized the need for proactive measures to control AI’s development. Powell expressed concern over the lack of regulation surrounding large language models, which can be applied across many AI tools, urging oversight of how they are built, managed, and controlled.

Drawing parallels with the regulation of medicines and nuclear power, Powell suggested that licensing developers could be an effective approach. She argued for the establishment of governmental bodies similar to those governing medicines and nuclear power to oversee AI development. By implementing licensing requirements, the UK could ensure that developers meet specific standards and adhere to ethical guidelines when working on advanced AI projects.

The UK government released a white paper on AI two months ago that highlighted the technology’s potential opportunities but offered little detail on regulation. However, recent developments, including rapid advances in ChatGPT and heightened concern among industry experts, have prompted a reconsideration of the government’s approach. As part of this reassessment, Sunak plans to advocate for the UK taking a leading role in international efforts to establish guidelines for governing the AI industry during his visit to Washington DC.

Labour, too, is actively finalizing its policies on advanced technology. Powell, who will address industry insiders at the TechUK conference, believes that the disruption caused by AI could be as profound as the deindustrialization of the 1970s and 1980s. Labour leader Keir Starmer is expected to deliver a speech on this subject during London Tech Week, underscoring the party’s commitment to addressing the challenges and opportunities presented by advanced technology.

Rather than banning specific AI technologies outright, as the European Union has moved to do with certain uses of facial recognition, Powell emphasized regulating the development process itself. Concerns have arisen that biased or discriminatory training data can perpetuate those biases in the resulting products. To mitigate these risks, Powell suggests that governments require developers to be more transparent about the datasets they use. This would enable proactive measures against potential biases and unintended consequences of AI, for example in employment, where AI tools may influence hiring and firing decisions.

Matt Clifford, chair of the government’s Advanced Research and Invention Agency, highlighted the rapid evolution of AI. Clifford warned that AI could already be used to launch bioweapons or mount large-scale cyber-attacks, and cautioned that humans could soon be surpassed by their own technological creations. As AI continues to progress at an unprecedented pace, an active, interventionist government approach, rather than a laissez-faire one, becomes crucial to managing its risks.

In conclusion, the Labour party’s proposal to license developers of advanced AI tools reflects growing recognition that the field needs regulation. Licensing requirements could help ensure that developers adhere to ethical standards and mitigate potential risks. As the government reconsiders its approach to AI regulation, it will be essential to strike a balance between fostering innovation and safeguarding against the unintended consequences, bias, and discriminatory practices associated with AI technologies.