Regulating Artificial Intelligence: Can Policy Keep Up with Its Potential?
Artificial Intelligence (AI) has surged into daily life and business with unprecedented speed. Its rapid advances in reading, writing, analysis, and generation bring both promise and anxiety. As Sam Altman, CEO of OpenAI, observed, the short-term impact may be overstated, but in the long term “everything changes.” This uncertainty underscores the urgent need for robust, standardised governance. AI’s immense potential for efficiency and creativity must be harnessed responsibly to ensure socially beneficial outcomes and mitigate risks; after all, its future will be shaped by the people who design and direct it.
Current Approaches to Regulating Artificial Intelligence
Many nations and unions base their approach to AI governance on the OECD’s AI Principles, which outline five values-based principles (human-centred growth, respect for human rights and democratic values, transparency, safety, and accountability) and five recommendations for policymakers: investing in AI research, fostering inclusive AI ecosystems, shaping agile governance, preparing for labour market transformation, and strengthening international cooperation. Together, these aim to ensure that AI development remains transparent, secure, and socially beneficial.
At present, 48 countries and one union have adopted the OECD framework, reflecting broad recognition of its importance. Yet practical implementation has been uneven. The EU AI Act, one of the earliest attempts at comprehensive, supranational AI regulation, has faced repeated delays. In Australia, AI Ethics Principles and an AI Safety Standard exist but remain voluntary, limiting their effectiveness. This inconsistency highlights a tension at the heart of regulation: balancing innovation with safety.
As governments seek to maximise AI’s potential, many prioritise speed and competitiveness over robust safeguards, overlooking the borderless nature of AI. Without consistent global guardrails, businesses and nations risk racing to market while neglecting the values of accountability, transparency, and security that the OECD framework was designed to uphold.
The Importance of Innovation
For many governments, reluctance to regulate AI too heavily stems from a desire to preserve space for innovation and attract investment. The UK, for instance, has avoided a comprehensive AI Act, favouring its AI Opportunities Action Plan, which emphasises economic growth and competitiveness over prescriptive, risk-based controls like those in the EU. Prime Minister Keir Starmer has argued that over-regulation risks stifling progress, positioning the UK instead as a hub where AI companies can innovate freely.
This approach is built on the belief that empowering talent and reducing barriers will allow AI to drive growth, enhance public services, and broaden opportunity. By enabling entrepreneurs to experiment without heavy regulatory constraints, policymakers hope that responsible practice will evolve from within the industry rather than being imposed externally.
The US has adopted a similar stance, with President Donald Trump issuing the Executive Order Removing Barriers to American Leadership in AI, which repealed prior regulatory directives. This pro-innovation environment accelerates testing and deployment, but it has also led to costly failures, brand damage, and unstable implementations.
While fostering innovation is essential, unchecked freedom carries risks. As the next section explores, balancing innovation with sound governance is vital to ensure that AI serves societal needs, not just commercial ambition.
The Pace of Change vs. the Pace of Learning
A key concern in AI governance is the widening gap between the Pace of Change (technological evolution) and the Pace of Learning (humanity’s ability to understand and adapt). While these once developed in tandem, the advent of digital computing in the 1950s accelerated AI’s growth exponentially, leaving regulation and public comprehension struggling to keep pace.
Policies in the US and UK illustrate this tension. In the US, Donald Trump’s Executive Order Removing Barriers to American Leadership in AI replaced Joe Biden’s more risk-focused directive, prioritising speed over safety. Likewise, the UK’s AI Opportunities Action Plan took precedence over the shelved Artificial Intelligence (Regulation) Bill. Such approaches encourage rapid innovation but heighten risk if left unchecked.
Even where regulation exists, delays hinder effectiveness. The EU AI Act, published in 2024, only began partial implementation in February 2025, with later stages still uncertain. This lag grants the Pace of Change a head start.
Nonetheless, momentum toward governance is building. The EU has introduced an interim AI Pact, allowing organisations to voluntarily endorse the Act’s principles before enforcement. Over 200 signatories demonstrate a growing recognition that innovation must be balanced with security, accountability, and respect for human rights.
Conclusion
The attitudes towards regulating AI laid out in this article make clear that a balance must be struck between maximising AI’s innovative potential for positive change and ensuring, through robust and holistic governance, that this change remains positive. After all, it is not necessarily the AI tools and platforms themselves that pose the greatest risk, but those who develop and use them. By adhering to a risk- and values-based approach, developers and adopters can ensure that their products are engineered with people-first principles at the forefront.
At Cambridge Management Consulting, we have the knowledge, expertise, and experience to ensure that your AI strategies remain compliant with policy and regulation, avoiding penalties, and are built around the safety of your people and data. Get in touch now to build an approach to AI that balances safety with success: https://www.cambridgemc.com/Digital-and-Innovation