Governance professionals are concerned about the accuracy of AI-generated content in corporate reporting, according to new research from the Chartered Governance Institute UK and Ireland.
The survey of more than 600 professionals reveals growing anxiety about the reliability, ethics and oversight of AI tools now being used across boardrooms – particularly for minute-taking, risk analysis and corporate reporting.
Despite these concerns, many organisations still lack clear policies governing the use of AI, and only a minority of boards have a defined AI strategy.
According to the research, 74% of respondents are concerned about AI’s impact on reporting accuracy, while 37% say the biggest challenge is board understanding of AI technology.
Meanwhile, around half of respondents expect AI to positively affect their role, and 24% anticipate negative consequences.
Microsoft Copilot was found to be the most commonly adopted tool amongst respondents, possibly due to its ease of access.
Peter Swabey, director of policy at CGIUKI, says training is key if boards are to properly address the risks.
“AI is already being used in governance functions, often informally and without oversight,” he noted. “While tools such as Copilot can offer real efficiency gains, our research shows that governance professionals are deeply concerned about the risks to accuracy, ethics and trust. This report is a wake-up call for boards: they need to develop clear strategies, invest in training and ensure AI use aligns with sound governance principles.”