
How AI could transform the education system


With technology changing rapidly, people need to adapt and integrate new software into their lives in order to stay up to date. AI opens new opportunities for the education sector, but there are concerns around its proper use. AI has been used within education for a few years now, with universities using it to detect plagiarism and to analyse student data, but further development of the technology, with the introduction of new software like ChatGPT, has caused an uproar over how such software should be used within education, or whether it should be used at all.

Recently, all 24 Russell Group UK universities have begun drawing up principles to ensure that students and staff are AI literate. At one stage, there were talks of banning software like ChatGPT in order to prevent cheating; however, the guidance now says that students should be taught how to use the software appropriately within their studies, which will help to make them aware of the risks of plagiarism, bias and inaccuracy in generative AI. The universities have reviewed their academic conduct policies and guidance to reflect the emergence of generative AI. The guidance states: “These policies make it clear to students and staff where the use of generative AI is inappropriate and are intended to support them in making informed decisions and empower them to use these tools appropriately and acknowledge their use where necessary.”



The Benefits

AI has been found to be beneficial for the education sector, with it being used by both students and teachers. One benefit is that it allows students’ education to become more personalised. AI systems can analyse vast amounts of data on things like students’ performance, learning styles and preferences, which over time will allow students to be taught in ways specific to their needs rather than in one generalised way. As well as being beneficial for students, teachers have begun using AI systems for administrative tasks. A recent government survey found that one in five teachers work at least 60 hours a week but on average only spend about half of this time teaching. AI tools can help to streamline these tasks, so that teachers are able to spend more time focusing on teaching and supporting students.

The Limitations

Although AI could be beneficial to some extent for the education sector, it does have its limitations. One of the challenges of using AI is the need for training: teachers would need to be shown how to use the technology properly in order to use the software correctly. As well as teachers being taught how to use the software, it is important that this knowledge is passed on to students who are interested in using the software themselves. Another significant issue is accuracy: the information the software provides is not always correct. There have also been issues raised around the ethical use of software like ChatGPT. A recent survey found that more than half of students consider using AI tools in their schoolwork to be a form of cheating, while others say it is not, as they already use online resources to support their work.



In a recent article published earlier this year, The Guardian discusses how AI tools “raise a number of challenges and concerns, particularly in relation to academic honesty and plagiarism”. Software like ChatGPT writes at a very high level, making it harder for lecturers to differentiate between what is written by a human and what is written by a chatbot. Universities are trying to discourage use of the software by detecting it within students’ work; if found using it, students will first be trained on the appropriate use of AI, and with any further use they could be expelled. Irene Glendinning, head of integrity at Coventry University, said: “They’re wasting their money and their time if they aren’t using university to learn.” If students choose to use the software to replace doing work rather than as an aid, punishments like these would deter them from using it incorrectly, and it would be more beneficial for students to be independent in their work.

To Conclude

With some universities and schools beginning to embrace AI software within their learning systems, this will allow for a more versatile educational environment for students. Stating how AI software should and should not be used within education helps students and teachers understand that it should be treated as an aid rather than a replacement for doing work. There are clear positives to using the software, but also challenges that come with it; it will continuously need to be reviewed and re-evaluated in order to meet the needs of its users, but overall it could enhance student learning globally. If used correctly, AI gives all students the opportunity to receive a higher-quality education and to reach their highest potential.

The Real Cost of AI

 

The recent investment of 105 million euros into French AI start-up, Mistral AI, has prompted industry commentators the world over to discuss the legitimacy of such ventures and the technology behind them. How is it that a newly formed company, just a couple of months old, can secure such a vast sum of money with no demonstrable product or technology to speak of? Is this another market bubble waiting to pop? Or is investment of this scale required to get this technology off the ground properly, for the benefit of “everyone”?


 

AI development – cutting-edge science or techno-séance? 

 

The precedent – rivalling the most successful product launch in history

OpenAI, now world-renowned, was not a company most people had heard of before November 2022, when it finally unleashed its flagship product, ChatGPT, onto the public. This would go on to become the most successful product launch in history, with 100 million users signing up within the first two months.

If you’ve ever used ChatGPT, you’ll likely know why there is such a buzz around this product in particular. Typing in a simple prompt and seeing it write out near-perfect responses is, for lack of a better word, magic. It’s not at all hard to see why such a product went as viral as it did, as people started using it to write essays, books, legal documents, and everything else in between; the use cases for ChatGPT seem to increase by the day, with seemingly infinite possibilities promised by OpenAI as it further develops its technology.

This unprecedented launch, coupled with the nature of how these systems are trained (partly on their use and feedback from users), has given OpenAI an incredible first-mover advantage over the rest of the industry. Millions of datapoints are fed into its algorithm every day, enabling the Company to refine its product over time based on user data. Despite this advantage, however, the cost of developing such a system is staggeringly prohibitive and, even with all these users (some of them paying), the Company is still recording mammoth losses.

 

AI development – a costly endeavour

OpenAI has received a swathe of lucrative investments from its now-partner, Microsoft. Having publicly invested $1bn in the Company in 2019, Microsoft has since signed a multi-year, multi-billion-dollar investment deal with OpenAI, reportedly valued at $10bn.

Despite this, the Company made a loss of some $540m last year, owing mostly to development costs associated with its GPT software. These costs are mainly associated with the computing power required to develop such programmes, which is – it should go without saying – enormous.

Shifting the focus to the UK, the Treasury announced it would be investing £900m into building its own AI supercomputer as part of its spring 2023 budget. As part of this initiative, the government plans to set up its own AI research body to explore the possibilities, practicalities, and risks associated with the technology. The computer, when built, will be one of the most powerful in the world, with aims to enhance the fields of cybersecurity, healthcare, and scientific research.

When examining the language used in the government’s announcement, it becomes clear that the move is more than just a desire to break ground on new technology. With the UK’s plan, as with France’s with Mistral AI, there is a large element of sovereignty and nationalist protectionism to the rhetoric.

 

Getting meta – an AI-generated supercomputer – an AI selfie, of sorts

 

The international AI race

As usual, the US is the country leading the way on AI. OpenAI, a US company with largely US backers, is the largest and best-known player in the market right now, which has restoked fears among many European nations of overdependence on the States.

For France’s part, Mistral AI’s president, Arthur Mensch, had the following to say:

“We are proud to initiate this global project from France, our home country, and to contribute, at our level, to the emergence of a credible new player in generative artificial intelligence from Europe.”

From this, as well as the fact that a majority of the investment is coming from French and European investors, it is clear that the 105 million euros invested so far reflect not just the desire to create a good product, but also the fact that France wants to create an independent and sovereign competitor to Silicon Valley.

Similarly, the UK’s £900m investment in its own supercomputer suggests that the government has been similarly swayed by this desire to create and retain its own cutting-edge industries. Adrian Joseph, BT’s Chief Data and Artificial Intelligence Officer, recently stated the following in a session with the Commons science and technology committee:

“We think there’s a risk that we in the UK lose out to the large tech companies, and possibly China, and get left behind… in areas of cybersecurity, of healthcare, and so on. It is a massive arms race that has been around for some time, but the heat has certainly been turned up most recently.”

 

Perhaps a glance at the world that home-grown AI could build on the European continent?

 

So, factoring this line of thought into the equation, it becomes clear that these seemingly outsized investments into relatively new fields make more sense than one might assume at first glance. Not only are they pioneering new – and potentially world-changing – technology, but this capital is also an investment into the future financial sovereignty of the countries in question.

And yet, Mistral AI’s main selling point is the credentials of its founders, who hail from the AI research teams of Meta and Google DeepMind, both American firms. There is a slight irony to this, as the prestige afforded by these companies is, in essence, foreign. And yet, in a way, it demonstrates the need for this prestige to be dispersed more globally – the hope being that in 10 or 20 years’ time, experts from Mistral will be able to command the same legitimacy, but this time with a purely French pedigree.

 

Final thoughts

It is easy to balk at the idea of a four-week-old company receiving 105 million euros with nothing much to show for itself other than the credentials of its founders, but to do so would be to miss the bigger picture.

AI, due to its incredible potential for both good and societal harm, should not be thought about in the same way as other investments; it is not a simple product being manufactured and sold. It may be better compared to something like the development of nuclear technology. While the US was, again, the first to develop this technology, it wasn’t long before other nations had developed their own capabilities – military or otherwise. The desire of other nations to develop it wasn’t just to outcompete the US, but also to secure their own national security and energy systems.

Similarly, the use cases for AI are extremely broad. Legislation is still a long way behind, but the EU is making promising moves to become the first international body to draw up AI-based legislation. For now, it remains the case that the US leads the pack, by a substantial margin. If other countries want to stay relevant in this race, it is up to them to take the necessary – and extremely costly – steps towards AI self-reliance. In the grand scheme of things, 105 million euros is barely a drop in the ocean when it comes to AI investment capital – expect to see a lot more stories like this one over the coming years.

AI – The Future is Now? – February Newsletter

Stephen Hawking, in his final book, listed the rise of Artificial Intelligence (AI) as one of the three biggest issues that mankind would have to deal with. Elon Musk said it is “actually a bigger risk to society than cars or planes or medicine”. Yet regulators are miles behind and, on the face of it, stand little chance of getting a grip on this immense technology, given the investors themselves are all at sea as they balance their excitement with scepticism.

Behind the excitement in Silicon Valley, there is tension between the downturn in tech investment and AI’s amazing power. To quote Gordon Ritter, “everyone has stars in their eyes”. As an early backer of Zoom, he is cautious. The winners will win very big and become the largest businesses in the world; Christine Melas-Kyriazi, a partner at Bain Capital, said: “if ChatGPT is the iPhone, we are seeing a lot of calculator apps. We’re looking for Uber”.

So, what might be the effect on founders, mid-market private equity and debt providers?  Firstly, have a look at OpenAI and its offerings, GPT-4 and DALL-E.  These technologies have only been available for public use for a few months, yet ChatGPT can already write dissertations to a level where the professor will give them a half-decent 2:1, and DALL-E can produce images that mimic any art style with uncanny realism.  The AI is learning exponentially.  It is probable that in very little time we will be able to use AI applications to deliver mainstream business solutions – from dubbing on films, to writing social media blogs, forecasting software and, dare I suggest, business plans.  These will still require human intervention (for the moment) but I believe we will all need to invest in understanding how to harness this technology or risk becoming as irrelevant as the farmers with scythes or hand-spun looms of the 18th century.

Returning to Professor Hawking, alongside AI on his list of the three biggest issues is the future of our planet. Sustainability is central to our lives, becoming an ever-increasing focus for both individuals and businesses.

 

Polestar has been actively working with a wide range of sustainability-related businesses over the years, including most recently, waste recycling, ESG software, consultancy, and people development & wellness. With our recent award-winning transaction in EVORA Global focusing our minds on the collaborative benefits available across Sustainability, we have taken the opportunity of our imminent website relaunch to pull these together as the core of our sector approach.

 

 

Professor Hawking’s third issue, colonising other planets, might seem like a stretch in terms of relevance to most business leaders. However, the real focus is on preparation for a future occurrence, and preparation is, as we all know, key to obtaining the best result.  With this in mind, we are about to launch a new club for founders, shareholders and management, Eddystone.  I will write separately on this in the next month outlining how we hope to help you prepare for a future transaction and bring together a community of people to share ideas and knowledge to deliver optimised deals.

For now, we have a temporary website where you can register your interest early – click here to sign up now.

GPT-4 and Beyond – The Post-AI Workplace

 

AI is the undeniable zeitgeist. Emerging from relative obscurity to near ubiquity over the course of just one year, AI-based technology has taken the world by storm with its photorealistic image generation and near-human-like GPT models.

More recently, we have seen the launch of OpenAI’s latest offering in the form of GPT-4, a model that generates more accurate results and longer output, and adds functionality such as the ability to analyse images. Coupled with this, Microsoft has announced its ‘Copilot’ system, an integrated AI aide that Microsoft claims – among an avalanche of self-aggrandising marketing copy – will “unlock a new wave of productivity growth”. So, is there any basis to these claims? Or will this be the next flop in a series of underwhelming digital assistants that trace their lineage all the way back to Clippy the Office Assistant?

 

GPT-4

GPT-4 is the latest release from market leader OpenAI. The programme launched to a fresh wave of adulation and criticism, with many equally impressed and worried by the quality of its output. While GPT-3.5 passed the bar exam in the bottom 10% of test takers, GPT-4 has managed to pass in the top 10% – in fact, GPT-4 performs better than its predecessor on every exam given to it.

And, unlike GPT-3.5, GPT-4 is multimodal, which means that it can now analyse images as well as text. The use cases for this are still relatively simplistic – plug in an image of ingredients, and GPT-4 will tell you what recipes you can cook with them; show it a picture of a man holding a balloon and ask what will happen if he lets go, and the programme will tell you that the balloon will fly away – but the potential is enormous.

This opens the gates to a whole host of new use cases for the tech in the future: think automatic alt-text writing for web images, auto-generated audio descriptions for the visually impaired, or image archiving, whereby a vast image library can be searched with a complex written query.

More than this, GPT-4 can now generate up to 25,000 words at a time, more than three times the length its predecessor could manage. This means that producing lengthy, tonally consistent documents will be easier than ever before; previously, multiple outputs would need to be stitched together in order to achieve a length of more than 8,000 words.
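For the technically curious, the stitching workaround that the old cap forced can be sketched in a few lines. This is a hypothetical illustration, not OpenAI’s actual API: `generate()` below is a stand-in stub for any text-generation call that returns at most roughly 8,000 words per request.

```python
# Hypothetical sketch of stitching length-capped outputs into one long document.
# `generate()` is a stub standing in for a real text-generation API call that
# returns at most CHUNK_WORDS words per request -- it is NOT a real library call.

CHUNK_WORDS = 8_000  # rough per-response cap of the older model

def generate(prompt: str) -> str:
    """Stub generator: pretend the model replies with a full-length chunk."""
    return " ".join(["word"] * CHUNK_WORDS)

def generate_long(prompt: str, target_words: int) -> str:
    """Repeatedly call the capped generator, feeding the tail of the previous
    chunk back in as context so the tone stays consistent, then stitch the
    pieces together until the target length is reached."""
    pieces: list[str] = []
    total = 0
    while total < target_words:
        context = prompt if not pieces else "Continue from: " + pieces[-1][-200:]
        chunk = generate(context)
        pieces.append(chunk)
        total += len(chunk.split())
    return " ".join(pieces)

draft = generate_long("Write a long report on AI in the workplace.", 25_000)
```

With a 25,000-word window, the loop above collapses to a single call – which is precisely the convenience the longer output limit buys.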

 

Microsoft Copilot and other implementations

At the beginning of this month, Microsoft confirmed the suspicions of many by announcing plans to bring GPT-4’s technology to its flagship product range, Office 365. This seemed a foregone conclusion, given the large investments the Company has made in OpenAI, but the implementation of the technology certainly looks to be very impressive. The only information we currently have comes in the form of a slick marketing trailer and its accompanying blog post so, without a full product demo, a healthy dose of scepticism is due, especially given the numerous pitfalls still present in the GPT models.

First off, Copilot brings the functionality of Bing’s GPT implementation to Office. This means you can summon Copilot at any time, whether you’re in Word, PowerPoint, Excel, Outlook, or Teams, ask it the same questions you would ask ChatGPT, and get similar results. Further to this, you can highlight certain portions of a document and ask the GPT about them specifically. Fairly simple stuff (he says, acknowledging the vast achievement that even this simple implementation represents).

 

Things get interesting when you consider how the GPT model integrates with the features already present in Office 365. Microsoft claims that Copilot will be able to access your calendars, emails, and notes to responsively create output based on these stores of data. For instance, you would be able to type ‘email my team the new business schedule and include a summary of newbusiness.csv’, and Copilot will intelligently create an email that summarises the Excel spreadsheet, works around people’s calendars, and emails the right people accordingly.

More than this, Copilot will be able to create presentations and Word documents on command, based on other documents and even emails. The programme, supposedly, will pick a design appropriate to the work, hand-pick relevant data and copy to fill it with, and have it ready for review faster than a normal human could create one slide’s worth of information.

Additionally, the majority of people (me included) only use a very small amount of the functionality within 365 apps, as most of it is hidden behind tutorials or learned industry knowledge. Instead of having to learn these techniques, however, Copilot aims to make them accessible through natural-language prompts: ‘animate this slide with a fade in and wipe out’, for example.

As previously stated, however, this is all, at the moment, still just talk. Although we’ve seen GPT-4 demonstrate impressive potential in the real world, we’ve also seen its many and glaring pitfalls, most notably its confident portrayal of misinformation as fact (which OpenAI more romantically labels ‘hallucinations’), and the system still struggles with a number of more complex tasks such as arithmetic and sequencing. The true abilities of Copilot remain to be demonstrated, but the promise is certainly alluring.

The post-AI workplace

So, what will be left for us regular old humans? At the moment, the technology is poised to be a productivity enhancer, not a job taker – yet.

Timescales in AI development are almost comically short. Below is an example generated using Midjourney to demonstrate the point. Both images use exactly the same prompt, ‘Photo of a ragdoll kitten’, the only difference being nine months of development time between the two outputs. The first (left) uses V2 of the software, the second (right) uses the shiny new V5.

Although seemingly unrelated, this stratospheric leap in capability demonstrates the exponential progress AI continues to make. If we can go from a cat-like abstract depiction to a fully realised photorealistic image of a cat in nine months, it is hard to predict just how advanced GPT models will be in, say, two years’ time.

Because of this, predictions surrounding the workplace are hard to make but, given what we already know, and with the features of Copilot unveiled to the world, some assumptions can be made:

 

Closing thoughts

AI presents a technological leap that is equally exciting and terrifying to different people. While at first glance, the technology appears to be the phantom job reaper we all feared it would be, upon closer inspection it becomes clear that it still has some way to go before it can fully replace a human worker.

For now, enjoy the increased efficiency it will surely bring. If the claims of its creators are correct, we might well be standing on the edge of “a new wave of productivity growth” that will see businesses of all sizes increase their capacity exponentially.