KPMG and Automation: The Blue J Way

KPMG has entered the artificial intelligence (AI) game. Its new partnership with the software company Blue J promises to leverage AI to free up employee time for more nuanced, client-centric work.

The Promise of AI

As most people are now aware, AI – or, more accurately, algorithm-based software – is far more efficient at certain tasks than its fleshy human counterparts. Large datasets can be processed and analysed much faster than a human could dream of, making the software ideal for such tasks and greatly reducing the need for human labour.

Blue J Way

The software company in question, Blue J Legal, specialises in predictive legal tax research. The platform asks users to complete a pre-built questionnaire and, upon completion, provides a comprehensive breakdown of the tax implications relevant to the advice being sought.

The software claims to speed up tax research by up to 100x, greatly reducing the time taken to complete the research stage of tax advice. The idea is that while the AI handles the more menial, time-consuming work, human employees can spend their time more productively on complex and client-facing tasks.
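To make the questionnaire-to-answer workflow concrete, here is a minimal, purely illustrative sketch. It is a simple rule-based stand-in, not Blue J's actual machine-learning approach, and every field name and rule below is a hypothetical example rather than anything drawn from the real product.

```python
# Hypothetical sketch of a questionnaire-driven tax-research tool.
# Field names and scoring rules are invented for illustration only;
# the real platform uses predictive models, not hand-written rules.

def classify_worker_status(answers: dict) -> dict:
    """Return a mock 'employee vs contractor' assessment from questionnaire answers."""
    score = 0
    if answers.get("sets_own_hours"):      # autonomy points toward contractor
        score += 1
    if answers.get("provides_own_tools"):  # own equipment points toward contractor
        score += 1
    if answers.get("single_client_only"):  # exclusivity points toward employee
        score -= 1
    status = "contractor" if score > 0 else "employee"
    return {"status": status, "score": score}

# A user's completed questionnaire becomes a structured input...
result = classify_worker_status({
    "sets_own_hours": True,
    "provides_own_tools": True,
    "single_client_only": False,
})
# ...and the tool returns a structured assessment: {"status": "contractor", "score": 2}
```

The point of the sketch is the shape of the interaction: the human supplies facts through a structured form, and the software turns them into a repeatable assessment in milliseconds, leaving the adviser to handle judgement and client communication.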

Concerns

While AI promises to alleviate the mundanity of certain tasks, there are legitimate worries surrounding the transition process from a pre-AI workplace to a post-AI one.

The first is the concern that offloading research work to AI will mean employees – especially juniors – miss out on valuable learning. The formative years of many careers involve a lot of grunt work, but this often pays dividends later in the form of experience and knowledge that can only be gained that way.

Secondly, some might fear that this is just the first step in the inexorable march towards the total AI takeover of the workplace. While AI has previously been confined to very limited use cases, more sophisticated programmes are now leaking into previously unaffected areas of business. Why employ a slow human when a one-time payment for a piece of software could do the job faster and more accurately?

The Human Touch

But are these fears legitimate? While AI is almost always more efficient than its human counterparts, the technology is still limited by the biases of the humans who wrote the code. In this way, human biases (conscious or not) could become even more entrenched by AI-powered algorithms; the answers an algorithm produces reflect the assumptions of its creators.

People, for instance, understand the emotional, personal, and professional factors that feed into another person's decision making. If you get off the phone with a disgruntled investor, you might surmise that their anger comes from a place of care for the detail, or from a desire to secure the best outcome for themselves and their family.

Conversely, AI decision making is often thought of as objective and devoid of emotion. While the latter may be accurate, it is important to remember that the code behind these AI processes is subject to the same limitations and biases as the humans who created it. There is no reasoning or subjectivity involved in the process – purely mechanical calculation.

Final Thoughts

What seems to be the most likely path forwards is some form of hybrid working – exactly what firms like KPMG are suggesting. AI has its strengths – its ability to analyse and process huge data sets quickly – but it also has its weaknesses; human oversight seems to be the sensible approach to mitigating some of the technology’s shortcomings.

Perhaps the tech overlords of the future will not be the machines, as many feared, but might, in fact, be us, monitoring our AI-powered minions.