EDITOR'S COMMENT

For readers who, like me, grew up in the 70s and 80s, the term ‘artificial intelligence’ probably conjures up images of Japan’s WABOTs (the first anthropomorphic robots, featuring moveable limbs and the ability to see and converse); the Stanford Cart (probably the very first autonomous vehicle); and, of course, C-3PO and his companion R2-D2 from Star Wars. It was the stuff of science fiction – magic, even.

Fast forward almost fifty years, and AI is becoming more commonplace in the tools and appliances we use at work and at home, and its potential and promise are the topic of almost daily news and debate.

For some, it would seem that the excitement they once felt about AI has been replaced by fear, as research published this quarter suggests that the more digitally advanced a country is, the less its people trust AI.

According to Swiss Re Institute’s analysis, which delves into how technologies such as sensors and AI/automated decision-making are closing the gap between the real and online worlds, consumers in countries considered technologically advanced are the first to declare their mistrust of AI. Germany, France, the UK, Canada and the US are in the top 20 countries most prepared for AI out of the 120 in the study. Yet just a third of respondents in these countries say they understand and trust AI. At the other end of the scale, people in the emerging digital growth markets of India, Nigeria, Mexico, Indonesia, the Philippines and Argentina hold the opposite view.

For consumers, digital trust, the research found, is influenced by a variety of psychological factors, including cultural and generational attitudes, trust in institutions, incidence of online fraud, and ease of use and understanding of technology. For businesses, however, digital trust is vital for progress in analytics and automation.

Readers of this magazine will naturally be focused on the multitude of risks and opportunities of AI – although not necessarily in that order. A ‘risks first’ approach is the one taken by the Center for AI Safety, which in May issued a stark warning around the most important and urgent risks of advanced AI, undersigned by a number of high-profile AI experts and policymakers, including the heads of OpenAI and Google DeepMind.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement reads. A dark time, indeed.
The San Francisco-based research group insists that AI has the potential to “profoundly benefit the world”, but only if developed and used safely, warning that efforts to address many of the field’s basic, inherent problems are not keeping pace with its “dramatic progress”.

Artificial intelligence is so broad and multifaceted that it is impossible to talk about it as a single risk. Whether it’s the operational risks of AI, the geopolitical impacts, the effects on professional roles and the future of work, or the bias and fairness of algorithms, the topic is enormous and fragmented, yet at the same time highly interconnected.

Whether you sit closer to the fear end of the scale or the excitement end, most would agree on the need for regulation and standards that support the safe and ethical development and roll-out of AI tools.

Of note in this arena are two major pieces of AI regulation from two of the bigger global players – the EU and China. The EU’s approach is, naturally, a centralised one, whilst China has taken a sectoral approach to legislating in this advanced technology field.

The UK will be next to create rules for the safe and responsible use of AI – a topic the PM discussed with the CEOs of OpenAI, Google DeepMind and Anthropic earlier this month. We will be covering these developments and more, as the AI saga continues...


This article was published in the Q2 2023 issue of CIR Magazine.
