
Ultimate AI

Software, Media & Technology

“A pro-innovation approach to AI regulation”: a bold yet perhaps lacklustre stance on the societal advancements that have engaged us over the past few months.

Now that some time has passed for everyone to digest the hyper-controversial government white paper on AI, it’s time for us at Polestar to dive in and evaluate how this paper has impacted the UK’s stance in the thriving and ever-evolving AI market.

Why does AI matter to the UK?

The UK is host to 3,170 active AI companies. The UK government’s investments in AI are set to reach a total of £3.6 billion since its first investment project in 2014. This funding has supported research projects, mission funds and resources the government believes will push the UK to front and centre stage of the AI game. Boasting one third of Europe’s AI companies, the government hopes that UK residents will feel “empowered” by UK technologies, with the aim of making the UK the “smartest, healthiest, safest and happiest place to live and work”. These sweeping statements are set out by Michelle Donelan MP in the ministerial foreword of the white paper, with terminology such as “AI superpower” setting the scene for this political, data-driven document.

Donelan’s foreword continues by documenting the potential risks AI poses within the workplace, the home and to society at large. It sets out the importance of a flexible regulatory framework that encourages innovation while ensuring safety through monitored development and collaborative conversation. Donelan makes it clear that there will be no new legislation, only a hyper-critical, sector-specific analysis.

The hope is that leading industry experts will deliver analysis on sector-specific AI for legislation to be built around and, in the meantime, companies have the freedom to explore the full capabilities of AI without having to find loopholes in legislation.

However, some industry experts believe that this is a light-handed approach, and that legislation must be put in place to protect humanity’s future. They argue that we must understand the internal workings of the machine, and that the machine must have ethically developed goals that enhance human living.

How do we use AI?

The body of the white paper begins by assessing how AI is being used both in our daily lives and within industry, referencing nuclear fusion control, along with advances in medicine and climate change, as major developments we are already benefiting from. AI has been identified as one of five critical technologies within the UK Science and Technology Framework, with some comparing its infancy to the early development of electricity.

Within the paper there is a positive narrative describing advancement, job opportunities and societal development. One thing experts do agree on is that AI will not take away job opportunities but will bring about new ones. An article in Forbes discusses the growth in happiness among employees whose roles can harness AI to build accurate data predictions, freeing their time to focus on client-facing issues. Societal development is a more contentious issue, with debate over whether the pros or cons should take precedence in decisions about development.

The Consequences of AI

The speed at which AI is developing has encouraged the government to reflect on the possible consequences that come with the technology. The paper discusses the “unintended consequences” of AI, not arising “from the deliberate action of bad actors” but developing with its frequent use and accessibility. It is compared to the beginnings of the internet: its capabilities are vastly good, but negative in the wrong hands. The government’s main fears are the increase in security risks for individuals, organisations and critical infrastructure. Furthermore, there is a fear of the amplification of discrimination within the justice system and a risk to “privacy and human dignity”.

Cybersecurity

Although AI can bring risk to cybersecurity, there are companies that are harnessing AI’s power to combat cybercriminals. New cybersecurity systems recognise an attack much faster, and much more efficiently, than their human counterparts. Companies such as Comply Advantage have integrated AI into their platforms to produce highly efficient anti-money laundering software.
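The paper doesn’t describe how such systems work, but the core idea behind much AI-assisted threat and fraud detection is anomaly detection: flagging behaviour that deviates sharply from an established baseline. A minimal illustrative sketch (not Comply Advantage’s actual method), using a robust median-based statistic:

```python
import statistics

def flag_anomalies(amounts, threshold=5.0):
    """Flag amounts far from the median, measured in units of the
    median absolute deviation (MAD), which is robust to outliers."""
    med = statistics.median(amounts)
    mad = statistics.median(abs(a - med) for a in amounts)
    return [a for a in amounts if abs(a - med) > threshold * mad]

# Mostly routine transaction amounts, with one suspicious outlier.
history = [120, 95, 110, 105, 130, 98, 115, 102, 9800]
print(flag_anomalies(history))  # → [9800]
```

Real anti-money-laundering systems combine many such signals with learned models and human review; this toy example only shows why an automated detector can surface an outlier faster than manual inspection.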

Although these companies have demonstrated the positive power AI can have on our security, the issue lies with those creating the technology and ensuring it cannot be misused in the wrong hands. The same tools that have been used to protect us can also be used to mistreat us. One article describes AI in cybersecurity as a “balance between progress and protection”, arguing for a compound system that blends AI and human security into a more robust security procedure.

Justice System

The fear of AI being utilised in the justice system stems from the human bias AI inherits. Data gathered within the system carries the pre-existing bias of the time it was collected; this bias is passed into the system and produces biased results. The result could be a rise in stop and searches for ethnic minority groups. AI surveillance cameras in “hotspot” areas increase geographical discrimination: these areas are more heavily policed and produce more arrests than others. The justice sector believes legislation needs to be put in place to avoid discrimination and the infringement of fundamental rights.
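The mechanism behind this fear is simple to demonstrate: a model trained on historical records will faithfully reproduce whatever bias those records contain. A toy sketch with invented figures (purely illustrative, not real policing data):

```python
from collections import Counter

def train_stop_rate(records):
    """Learn a per-group stop rate from historical (group, was_stopped) records."""
    stops, totals = Counter(), Counter()
    for group, stopped in records:
        totals[group] += 1
        stops[group] += stopped
    return {g: stops[g] / totals[g] for g in totals}

# Invented historical data reflecting biased past policing:
# group B was stopped four times as often as group A.
history = [("A", 0)] * 90 + [("A", 1)] * 10 + [("B", 0)] * 60 + [("B", 1)] * 40
model = train_stop_rate(history)
print(model)  # → {'A': 0.1, 'B': 0.4}
```

Nothing in the training step corrects for the fact that group B was over-policed to begin with; deploying such a model would simply automate the historical bias.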

Companies’ collaboration

The government wants to ensure that AI being produced in Britain and globally poses no harm to society, democracy or values within the UK. IBM has followed these principles, writing an article outlining clear issues within current AI testing methods. The main issue with testing the security of new AI is the lack of human ability to define unknown risks; a little like the meaning of life, we can’t assess something we can’t fathom. IBM found that only one third of developers truly know how to test these new AI systems. To combat this statistic within their own workplace, they have taken precautionary steps, including the development of a new AI system that tracks weaknesses within machine learning models. IBM hopes this new model will reduce the risk of negative AI capabilities, but it can’t ensure 100% safety. The government’s “proportional intervention” approach will see companies such as IBM increasing their work on innovative solutions to ensure safe AI that fits within the regulatory framework.

Monopoly

The government believes that “some AI risks arise across, or in the gap between, existing regulatory remits.” These gaps produce an unnecessary burden and leave potential risks unmitigated; this could result in public distrust toward AI which would result in a slow adoption of the tech.

A slow adoption of the tech could affect innovation, stifling progress, particularly for small businesses without existing IP, which could lead to these businesses leaving the market. In an interview with Lifewire, Portella pointed out: “Regulation can help to foster innovation and competition by ensuring a level playing field for all businesses. This can help to prevent dominant companies from monopolising the market and stifling innovation, and can promote fair competition that benefits consumers.”

Within the AI sector there are largely two types of company: dedicated, and diversified. Dedicated companies, as the name suggests, are companies whose operations are solely AI-related – the government’s aim is to ensure these stay in the UK. Diversified companies, however, may only have a partial AI segment to their business.


Of the 3,170 active AI companies in the UK, 60% are dedicated AI businesses. These companies make up 60% of AI micro companies, compared to 11% of large AI companies; dedicated AI companies tend to be smaller. The UK government is looking to support these dedicated companies through technological growth and advancement, along with flexible working legislation. The goal is to foster a thriving AI industry that can contribute to the UK’s economic growth and competitiveness on the global stage.

The government has stated that “intervention is needed to improve the regulatory landscape” by building on “existing regimes, maximising the benefits of what we already have while intervening in a proportionate way to address uncertainty and gaps”.

The government has gone on to stress that its approach to legislation is an “incentive to solve important problems.” The white paper argues that regulation can increase innovation by solving problems that could pose potential harm to citizens.

The government believes regulation will strengthen public trust, which will in turn increase public adoption, continuing: “our approach to AI regulation will be proportionate; balancing real risks against the opportunities and benefits that AI can generate.”

The government framework is proposed as three objectives:

“Drive growth and prosperity”

  • Make responsible innovation easier by reducing regulatory uncertainty. This will encourage AI investment and support adoption throughout the economy.
  • The paper goes on to talk about removing the “barriers to innovation” to allow AI companies to capitalise on early development success, describing a strengthened UK position in the AI market once regulations have been adapted.

“Increase public trust in AI”

  • “Trust is a critical driver for AI adoption”
  • “Reluctance can reduce demand for AI products and hinder innovation”

“Strengthening the UK’s position as a global leader in AI”

  • Development will help address solutions to the world’s “most pressing” issues, such as climate change and future pandemics. There is global recognition that AI must be subject to new regulatory requirements, and being at the forefront of this will strengthen the UK’s position as a global leader.


The framework currently follows five principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. The white paper notes that companies have freedom and flexibility within these principles, with no statutory duty binding them. A company may, however, be placed under statutory duty when exercising the functions of a new AI system. In 2017, PwC revealed that 67% of the business leaders who took part in its global CEO survey believed AI would negatively impact their stakeholders’ levels of trust.

However, responses to the white paper six years down the line demonstrate the shift in investors’ opinions. Sir Adrian Smith, Institute Director at The Alan Turing Institute, commented: “This white paper is good news for how AI is used and regulated in the UK. We’re very pleased to see that the government’s approach builds on the strength of existing regulators. A principle-based, sectoral approach is crucial to ensure that AI is regulated in the most effective and efficient way.”

Conclusion

The paper highlights the importance of the technology in the country’s growth and development, while also acknowledging the potential risks it poses. There is an overall emphasis on the need for flexibility with industry experts responding positively to this diverse approach that will lead to innovation and collaboration between sectors.

There is an opportunity for all members of the AI community to be part of the collaborative conversation surrounding the framework within their sector, which will help ensure future legislation is rigorous and ethical without stunting growth. While some experts argue for more stringent legislation, the government believes that leading industry experts will deliver sector-specific analysis for legislation to be built around. At Polestar we are embracing the AI-driven changes within all of our sectors, whilst ensuring we have open and honest discussions about the possible implications AI might have on our society.

By Ella Bertrand on 15/05/2023