Calls to regulate AI amid concerns over future risks

Artificial intelligence experts have issued a warning about the risks of emerging technologies in the field, calling for proportionate regulation to protect individuals and wider society from potentially unforeseen and uncontrolled impacts.

Alarming reports from a number of news outlets include warnings that, if left unchecked, AI could lead to the extinction of humanity. Experts – including the heads of OpenAI and Google DeepMind – have warned that some form of governance or control will be needed, and many have supported a statement published on the webpage of the Center for AI Safety, which states: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Earlier this month, European Parliament MEPs adopted a draft proposal for AI rules designed to provide transparency and risk management measures for AI systems. The Parliament’s internal market committee and civil liberties committee voted in favour of the proposals to ensure that AI systems are overseen by people, and are safe, transparent, traceable, non-discriminatory and environmentally friendly. They also want a uniform, technology-neutral definition of AI, so that it can apply to the AI systems of today and tomorrow.

MEPs expanded the classification of high-risk areas to include harm to people’s health, safety, fundamental rights or the environment. They also added to the high-risk list AI systems used to influence voters in political campaigns, and recommender systems used by social media platforms (those with more than 45m users, as designated under the Digital Services Act). Generative foundation models, like GPT, would have to comply with additional transparency requirements, such as disclosing that content was generated by AI, designing the model to prevent it from generating illegal content, and publishing summaries of copyrighted data used for training.

After the vote, co-rapporteur Brando Benifei said: “We are on the verge of putting in place landmark legislation that must resist the challenge of time. It is crucial to build citizens’ trust in the development of AI, to set the European way for dealing with the extraordinary changes that are already happening, as well as to steer the political debate on AI at the global level. We are confident our text balances the protection of fundamental rights with the need to provide legal certainty to businesses and stimulate innovation in Europe.”

MEPs are keen to ensure that AI systems posing an unacceptable level of risk to people’s safety would be strictly prohibited. These include systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities, or are used for social scoring based on factors such as social behaviour, socio-economic status and personal characteristics.
Earlier this year, the UK government published a white paper setting out a proposed set of principles for building public trust and confidence in the AI sector, based on factors including safety, security, transparency and governance.

The government is keen to avoid heavy-handed legislation that could stifle innovation, preferring an adaptable approach to regulating AI. Rather than giving responsibility for AI governance to a new single regulator, the government says it will empower existing regulators – such as the Health and Safety Executive, Equality and Human Rights Commission, and Competition and Markets Authority – to come up with tailored, context-specific approaches that suit the way AI is actually being used in their sectors.

Commenting on developments, Jake Moore, global cyber security advisor at ESET, said: “Although currently a relatively small threat, the extinction statement suggests the realms of what AI will be able to carry out are still largely unknown. With bad actors increasingly using AI to facilitate their attacks, and even AI deciding to think nefariously for itself, there is the risk that attacks will continue to counter standard defences and break through current securities. However, fighting fire with fire is a vital way to limit the chances of seeing AI risks increase, so as more AI is deployed in countermeasures, we will see this balance out again.

“The government was very slow to regulate social media and cryptocurrencies, so it is positive that the AI discussion is now being held as well as heard. Although impressive, ChatGPT and other large language models are still largely in their infancy. However, regulation at this stage is a vital part of the process and can help guide safer use of the technology for future generations.”



