
AI Adoption – A Cautious Approach?

Software, Media & Technology

Is this the workplace of the future?

AI is everywhere. You can scarcely visit a blog or news website nowadays without encountering some new story about the technology, and Silicon Valley and governments the world over laud its potential, heralding it simultaneously as both the saviour and the potential destroyer of the modern workplace.

Cutting through all this noise can be difficult, but businesses should be cautious not to adopt these technologies too soon. More than this, the term ‘AI’ is increasingly being applied to all manner of solutions which, when you peel back the marketing jargon, have little to do with what most would consider ‘AI’, diluting the value of the term to investors and consumers alike.

So, what are the current issues with AI? What does the future look like? And when is the right time to invest?

Generative AI versus “Normal” AI

Generative AI (“GenAI”), a term little known to the public consciousness before the stratospheric launch of OpenAI’s ChatGPT in November 2022, is distinct from what we traditionally think of as ‘AI’. While “traditional” AI analyses large data sets and tells you what it sees, GenAI uses that same analysis to create entirely new content, often from a written prompt.

GenAI is typically what most people mean when they talk about ‘AI’, but it is important to know the difference, as each type has its own advantages and pitfalls; this is still a novel technology, and the dust is yet to settle in the field.

For example, GenAI systems like Midjourney can be used to create images like the ones used in this blog (and all our blogs, for that matter) using only a simple, user-generated prompt. The prompt for the image below was: “Photo of the world’s largest cat”


Someone get that cat a saddle!

As you can see, the barrier to entry for using GenAI is incredibly low; the technology has advanced at an incredible pace over the course of just a year, to the point where anyone can type in a natural-language prompt and get a decent result. While the image is not perfect – note the tree branches that appear to clip in front of the face of the person in the background – images like this can be produced so quickly that editing out their errors still takes substantially less time than creating an image from scratch.
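
For those curious about what sits behind tools like this, generating an image programmatically can be as little as a single API call. Midjourney itself is currently used through Discord rather than a public API, so the short sketch below uses OpenAI’s image-generation endpoint purely as an illustrative stand-in (and assumes an API key is already set in your environment):

```python
# Minimal sketch: turning a natural-language prompt into an image via an API.
# Illustrative only -- uses OpenAI's image endpoint, not Midjourney, and assumes
# the OPENAI_API_KEY environment variable is already set.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-3",
    prompt="Photo of the world's largest cat",
    size="1024x1024",
    n=1,
)

print(response.data[0].url)  # URL of the generated image
```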

‘Traditional’ AI, on the other hand, is used where new data is not required and the goal is the analysis of existing data. A great example can be found in the health sector, where AI has been successfully implemented in mammography screening. Here, a machine learning algorithm analyses mammography screenings at a speed impossible for us meagre meat sacks to comprehend. The speed and accuracy of these systems have proved successful, allowing doctors – who would usually interpret the data themselves – to focus on the tasks AI isn’t very good at yet, like dealing with patients face to face.
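
To give a rough sense of what this kind of analytical AI looks like in practice, the sketch below trains a simple supervised classifier on scikit-learn’s built-in breast cancer dataset. It works on tabular measurements rather than mammogram images and is purely illustrative – not the systems actually deployed in screening programmes:

```python
# Illustrative sketch of 'traditional' AI: a supervised classifier that analyses
# existing, labelled data and flags cases -- it creates nothing new.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)  # tabular measurements, not images
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# How well does the model flag cases it has never seen before?
print(classification_report(y_test, model.predict(X_test)))
```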

Concerns with GenAI

GenAI, being a novel technology, is still very much in its infancy. Questions surrounding potential copyright infringement, for example, remain largely unanswered. Since GenAI models are trained on extremely large datasets, often scraped from the internet, many believe the data used to train them is taken without the express consent of its creators, arguing that their work is effectively being plagiarised. The New York Times is one of the first major news outlets to sue OpenAI on these grounds, arguing that articles published by the paper have been used to train OpenAI’s GPT models, which now compete with the outlet as a source of reliable news.

On the other hand, one US judge largely threw out a similar case from the art world, where artists moved to sue the companies behind leading AI image generators Stable Diffusion and Midjourney (which produced the images you see in this blog) over the allegedly copyright-infringing use of artists’ work in their training datasets.


Here I used the prompt: “image in the style of the Simpsons” – so what do we think? Copyright infringement? Or fair use?

Misinformation is also a key concern for regulators and governments the world over. The quality, speed, and accessibility of this technology make it practically impossible to control, especially now that the APIs are out in the open. Many have already been fooled by AI-generated images – notably of the Pope in a white puffer jacket – and AI-generated text is already being used to spread misinformation online, something many are worried about given this year will see the largest number of elections in human history. Steps have been taken to mitigate the problem – Midjourney, for example, no longer lets you create images of political figures (although if you ask for a Boris Johnson “lookalike” running through the streets of St. Petersburg with a Putin lookalike, you get much the same result) – but for all intents and purposes, the genie is well and truly out of the bottle.

This uncertainty will not put investors at ease, and it complicates how businesses incorporating GenAI will be valued. Nor do we know how quickly advances will play out, which is why venture capital funds are not eager to back startups focusing solely on GenAI. Instead, it will be the much larger PE firms and established tech companies that have the capacity to take more leveraged positions on GenAI and absorb the risk.


While under close inspection the cracks begin to show in this image, your casual internet user might be fooled at a glance, given how quickly social media posts are scrolled through.

When is the right time to invest?

Given the uncertainty, most businesses should be cautious about investing too heavily in AI at this early stage of its development. For now, while industry giants such as Microsoft can afford to pour billions into developing large language models (“LLMs”) and then deploy them through their pre-existing infrastructure, it is more important – especially for SMEs – to prioritise investing in their own infrastructure, IT service management, and cybersecurity. This will better position these firms to fully leverage GenAI once it has matured to the point where the business case is more certain, as systems will already be in place to safely onboard the new technology.

The ‘internet of things’ (“IoT”), whilst separate from AI, will aid progression in this field, and is a more proven investment at this stage of AI development. Put simply, IoT is a collective network of connected devices and the technologies that facilitate communication between those devices and the cloud. These nodes collect data about all areas of the network, and that data can then be used to train AI models – which may prove to be one of IoT’s primary sources of value as the technology develops.
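
As a rough, hypothetical sketch of the idea, each node might report a small telemetry payload which is pooled into a dataset that could later feed a model – the node names and readings below are invented for illustration:

```python
# Hypothetical sketch: pooling simple IoT telemetry into a dataset.
# Device names, fields, and values are invented for illustration only.
import json
import random
import time

def read_sensor(node_id: str) -> dict:
    """Stand-in for a real device reading (a temperature sensor is assumed)."""
    return {
        "node": node_id,
        "timestamp": time.time(),
        "temperature_c": round(random.gauss(21.0, 1.5), 2),
    }

# Collect one reading from each of five imaginary nodes.
dataset = [read_sensor(f"node-{i:03d}") for i in range(5)]
print(json.dumps(dataset, indent=2))
```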

At the moment, the data used to train AI models – especially GenAI models – may be a better commodity to invest in than the models themselves. Such data will only become more valuable as AI-generated content continues to proliferate online: GenAI models trained on internet content will eventually start ingesting AI-generated content as their training data, leading to what researchers have called ‘model collapse’, where GenAI models enter a kind of echo chamber, only able to parrot each other’s output.
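
One crude way to picture model collapse is a toy ‘model’ retrained, generation after generation, on its own output. The sketch below fits a simple Gaussian to some data and assumes – as a rough stand-in for how generative models favour their most probable outputs – that each generation keeps only its most ‘typical’ samples; the learned spread shrinks rapidly and the tails vanish:

```python
# Toy illustration of 'model collapse': a Gaussian 'model' repeatedly retrained
# on its own output, keeping only its most probable samples each generation.
# The diversity it has learned (its standard deviation) collapses within a few rounds.
import numpy as np

rng = np.random.default_rng(42)

original = rng.normal(loc=0.0, scale=1.0, size=5_000)  # 'human-made' data
mu, sigma = original.mean(), original.std()

for generation in range(1, 9):
    samples = rng.normal(loc=mu, scale=sigma, size=5_000)    # model-generated content
    typical = samples[np.abs(samples - mu) < 1.28 * sigma]   # keep ~80% most 'likely'
    mu, sigma = typical.mean(), typical.std()                # retrain on that subset
    print(f"generation {generation}: learned std = {sigma:.3f}")
```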

Closing thoughts

Whilst the technology certainly has its caveats, this is not to say that AI has no established use cases – far from it. The proliferation of AI-penned content online is already worrying some regulators due to its potential to spread misinformation, and AI-generated imagery can be found everywhere from low-budget website display adverts to multi-million-dollar game franchises.

Like with any emerging technology, the story is likely to be very different in the next five to ten years. We should, however, adjust our expectations for AI and how it will benefit businesses in the immediate future, and leave the excitement for longer-term advances in the field.

By Rebecca Garland on 15/02/2024