AI could threaten some jobs, but it is more likely to become our personal assistant

BT recently announced that it would be reducing its staff by 55,000, with around 11,000 of these related to the use of artificial intelligence (AI)




Jonathan Aitken, University of Sheffield

The remainder of the cuts was attributed to business efficiencies, such as replacing copper cables with more reliable fibre-optic alternatives.

The point regarding AI raises several questions about its effect on the wider economy: which jobs will be most affected by the technology, how will the changes happen and how will they be felt?

The development of technology and its associated impact on job security has been a recurring theme since the industrial revolution. Where mechanisation was once the cause of anxiety about job losses, today it is increasingly capable AI algorithms. But for many, if not most, categories of job, retaining humans will remain vital for the foreseeable future.

The technology behind this current revolution is primarily what is known as a large language model (LLM), which is capable of producing relatively human-like responses to questions. It is the basis for OpenAI’s ChatGPT, Google’s Bard system and Microsoft’s Bing AI.

These are all neural networks: mathematical computing systems crudely modelled on the way nerve cells (neurons) fire in the human brain. These complex neural networks are trained on – or familiarised with – text, often sourced from the internet.

The training process enables a user to ask a question in conversational language and for the algorithm to break the question down into components. These components are then processed to generate a response that is appropriate to the question asked.
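As a rough illustration of that pipeline – and not how the commercial systems above necessarily work internally – here is a minimal sketch in Python using the open-source Hugging Face transformers library, with the small GPT-2 model standing in for far larger models:

```python
# Minimal sketch of the "break the question down, then generate" pipeline.
# The library (Hugging Face transformers) and model (GPT-2) are illustrative
# stand-ins, not what ChatGPT, Bard or Bing AI actually use.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

question = "Why is the sky blue?"

# Step 1: break the question into components (tokens) the network understands.
inputs = tokenizer(question, return_tensors="pt")

# Step 2: the neural network predicts a continuation, one token at a time.
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)

# Step 3: turn the predicted tokens back into human-readable text.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

GPT-2 will not give the polished answers of its much larger successors, but the shape of the process – tokenise, predict, decode – is the same.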

The result is a system that’s able to provide sensible-sounding answers to almost any question it is asked. The implications are more wide-ranging than they might seem.

Humans in the loop

In the same way that GPS navigation for a driver can replace the need for them to know a route, AI provides an opportunity for workers to have all the information they need at their fingertips, without “Googling”.

Effectively, it removes humans from the loop, meaning that any job which involves looking up pieces of information and making links between them could be at risk. The most obvious example here is call centre work.

However, it remains possible that members of the public would not accept an AI solving their problems, even if call waiting times became much shorter.

Any manual job has a very remote risk of replacement. While robotics is becoming more capable and dexterous, it operates in highly constrained environments. It relies on sensors to provide information about the world, then makes decisions based on this imperfect data.

AI isn’t ready for this kind of work just yet: the world is a messy and uncertain place in which adaptable humans excel. Plumbers, electricians and those in complex manufacturing jobs – in the automotive or aircraft industries, for example – face little or no competition in the long term.

However, AI’s true impact is likely to be felt in terms of efficiency savings rather than outright job replacement. The technology is likely to find quick traction as an assistant to humans. This is already happening, especially in domains such as software development.

Rather than using Google to find out how to write a particular piece of code, it’s much more efficient to ask ChatGPT. The solution that comes back can be tailored strictly to a person’s requirements, delivered efficiently and without unnecessary detail.
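As a sketch of what that workflow can look like when automated, the snippet below calls an LLM from a script; the client library, model name and prompt are illustrative assumptions rather than anything prescribed above, and an API key is assumed to be set in the environment:

```python
# Hedged sketch of using an LLM as a coding assistant via OpenAI's Python
# client (pip install openai). Model choice and prompt are illustrative only;
# an OPENAI_API_KEY environment variable is assumed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {
            "role": "user",
            "content": "Write a Python function that removes duplicate lines "
                       "from a text file while preserving their order.",
        },
    ],
)

# A tailored answer comes back, rather than a page of search results to sift through.
print(response.choices[0].message.content)
```

The point is the shape of the interaction: a specific request goes in and a targeted solution comes out, which is exactly why the technology is finding traction as an assistant.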

Safety-critical systems

This type of application will become more commonplace as future AI tools develop into true intelligent assistants. Whether companies use this as an excuse to reduce their workforces will depend on their workload.

As the UK is suffering a shortage of Stem (science, technology, engineering and mathematics) graduates, especially in disciplines such as engineering, it’s unlikely that there will be a loss of jobs in this area, just a more efficient manner of tackling the current workload.

This relies on staff making the most of the opportunities that the technology affords. Naturally, there will always be scepticism, and the adoption of AI into the development of safety-critical systems, such as medicine, will take a considerable amount of time. This is because trust in the developer is key, and the simplest way to build that trust is to keep a human at the heart of the process.

This is critical, as these LLMs are trained using the internet, so biases and errors are woven in. These can arise accidentally, for example, through a person being linked to a particular event simply because they share a name with someone else. More seriously, they may also occur through malicious intent, where training data that is wrong or intentionally misleading is deliberately fed into the system.

Cybersecurity becomes an increasing concern as systems become more networked, as does the source of data used to build the AI. LLMs rely on open information as a building block that is refined by interaction. This raises the possibility of new methods for attacking systems by creating deliberate falsehoods.

For example, hackers could create malicious sites and put them in places where they are likely to be picked up by an AI chatbot. Because the systems need to be trained on so much data, it’s difficult to verify that everything is correct.

As workers, then, we need to harness the capability of AI systems and use them to their full potential. That means always questioning what we receive from them, rather than trusting their output blindly. This period brings to mind the early days of GPS, when the systems often led users down roads unsuitable for their vehicles.

If we apply a sceptical mindset to how we use this new tool, we’ll maximise its capability while simultaneously growing the workforce – as we’ve seen through all the previous industrial revolutions.

Jonathan Aitken, Senior University Teacher in Robotics, University of Sheffield

This article is republished from The Conversation under a Creative Commons license. Read the original article.
