The Agentic AI Revolution


By Bhavesh Amin on 22 January 2026


In 2025, the Prime Minister set out plans to make the UK an AI maker, not an AI taker, a move that could affect millions of people. Keir Starmer announced a potential additional £47bn a year to the UK economy, based on a 50-step plan. Much of the initial focus is on compute infrastructure, including large-scale data centres to support LLM training.

 

So, how much of the £47bn will taxpayers see? There is limited information on how the £47bn a year will be distributed, and only a conservative mention of 13,250 new jobs created, primarily tied to the data centres the government plans to build. Will it be a trickle-down effect (where are Liz Truss and Kwasi Kwarteng’s trickle-down stochastic models when you need them)?


Figure 1: What a Trussonomics stochastic model might look like


It’s difficult to quantify the benefit most of the population will see. However, it is an important strategy because of the impact of AI on traditional jobs.
 

Charlie Hustle
 

AI doesn’t have to result in companies making huge redundancies. Agentic AI can free up time, allowing employees to strategise more and giving them more time for innovation. The concern will be how some companies will measure the success of agentic AI. If you’ve ever worked in a small team of four, everyone in the team likely worked more hours than contractually obliged and had good synergy while working on projects. When the manager then decides to double the size of the department, there can be a misconception that the output will double. In reality, the department will be lucky to see a 50% improvement in output as the original staff will reduce the number of hours they work, and the new staff will have to be trained.
 

Agentic AI could have a similar impact, providing employees the opportunity to reduce their long work hours and improve their work-life balance. Unfortunately, for some companies, improving their employees’ work-life balance won’t be a primary objective for investing in agentic AI. Not seeing the expected productivity gains from their investment will convince some directors to reduce staff numbers, implement a hustle-type culture, and push up the hours each remaining employee works.
 

The question arises: will companies act responsibly regarding AI and its impact on the workforce? The UK government seems to think so. However, corporate interests have frequently prioritised profit over people, even going back to the 18th century, when the British East India Company effectively took control of Bengal, squeezing every Rupee from the people it could, contributing to the Bengal Famine and the deaths of millions of people living in the region.
 

Tobacco companies, big pharma, banks, and tech companies have all at some point chosen profit over people. Even the recent ban on under-16s using social media in Australia shows how unwilling tech companies are to tackle dangerous content and individuals on their platforms. If one of your employees were acting in a toxic manner towards another, you would deal with the toxic employee. The Australian government is doing the opposite (and the UK could follow) because it can’t get these social media companies to act responsibly. There is no incentive to make these platforms safer for users without a financial benefit, even though the same companies happily mine user data for other purposes, such as advertising, political segmenting, and newsfeeds.

 

While the EU has introduced the AI Act, the UK has opted for a soft, decentralised approach of guidance. Britain has a proven track record of failure with light-touch, deregulated approaches, from banking and utilities to transport and building regulations.
 

Why would the government continue with the same approach?
 

It’s because they are only in power for a short period. Unlike CEOs, who can look to grow old with their companies (and even if they leave, they will likely retain a vested interest), prime ministers are lucky to get two consecutive terms. They can be credited with the short-term benefits of their actions while someone else deals with the long-term fallout. Baby boomers enjoyed the benefits of the privatisation and deregulation of the 1980s, while millennials have suffered the consequences. Interestingly, the EU’s AI Act applies to any products or services provided to the EU market. This means that many UK companies have to comply with it anyway, with PwC recommending that governance frameworks be updated to align with the AI Act.

 

The Online Safety Act provides some measures for protection from the misuse of generative AI, with the UK using the act to investigate Grok’s adult content spicy mode. If one of the biggest and most powerful companies in the world built a terrible and offensive feature like spicy mode with no one in the company stopping it, then can tech companies be trusted to act responsibly? Also, agentic AI brings its own risks, such as coercive chatbots and agentic systems making significant decisions without human review. Bringing in robust, responsible AI regulation that considers agentic AI and generative AI safeguards is crucial for us and future generations. 
 

The job market
 

While the Industrial Revolution brought factory automation to replace manual labour, computers automated office work with humans very much at the centre. Now, with generative and agentic AI, many human-centric tasks can be replaced by AI-driven workflows. The more proficient AI agents become, the more complex the workflows they’ll be able to replicate. Having taken a short agentic AI course for data science last year, I’d say this won’t happen tomorrow (an AI agent with an SQL tool returned an incorrect result for a query identical to the example I fed it in the prompt), but it could be soon. Current roles could evolve into monitoring and supervising agents, orchestrating them, and designing guardrails to prevent agents from completing actions that are out of their scope.
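To give a flavour of what guardrail design might involve, here is a minimal, hypothetical sketch in Python. It assumes an agent framework where a proposed SQL tool call can be intercepted before execution; the table names and rules are illustrative, not from any real system. The guardrail only permits read-only SELECT queries against an approved set of tables, flagging everything else for human review.

```python
import re

# Hypothetical guardrail: vet an agent's proposed SQL before the tool runs it.
# Only read-only SELECT statements on approved tables pass; anything else is
# blocked and escalated to a human reviewer.

ALLOWED_TABLES = {"orders", "customers"}  # illustrative: tables the agent may read
FORBIDDEN = re.compile(r"\b(insert|update|delete|drop|alter|truncate)\b",
                       re.IGNORECASE)

def guardrail(sql: str) -> bool:
    """Return True if the agent's query is in scope, False otherwise."""
    stripped = sql.strip().rstrip(";")
    if FORBIDDEN.search(stripped):
        return False  # write/DDL statements are out of scope
    if not stripped.lower().startswith("select"):
        return False  # only read queries are allowed
    # Every table referenced in FROM/JOIN clauses must be on the approved list
    tables = set(t.lower() for t in
                 re.findall(r"\b(?:from|join)\s+(\w+)", stripped, re.IGNORECASE))
    return bool(tables) and tables <= ALLOWED_TABLES

print(guardrail("SELECT name FROM customers"))  # in scope
print(guardrail("DELETE FROM orders"))          # blocked: write statement
print(guardrail("SELECT * FROM salaries"))      # blocked: unapproved table
```

A real deployment would need a proper SQL parser rather than regexes, but even this toy version shows the shape of the work: encoding what the agent is allowed to do, and failing closed when a request falls outside that scope.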
 

Amazon announced in October last year that it would be reducing its corporate workforce, citing AI as one of the reasons. The Tony Blair Institute for Global Change estimates 1 to 3 million job displacements in the long term as a result of AI. Its report, The Impact of AI on the Labour Market, suggests annual job displacements of between 60,000 and 275,000, which it claims is modest compared with the 450,000 displacements per year over the last decade. To me, 275,000 additional job losses on top of a baseline of 450,000 seems like a lot: an increase of over 60%.

 

The report suggests these numbers will be offset as AI creates new demand for workers, with the worst annual impact on unemployment expected to be in the low hundreds of thousands. Despite the AI training initiatives highlighted in the government’s 50-step plan, there are likely to be significant mismatches between the skillsets of those displaced and the requirements of the new AI roles.

 

I have disappointingly watched a supermarket near me replace more cashiers with automated checkouts; the “doing more with less” culture is already upon us. One employee frantically assisting customers across eight malfunctioning self-checkouts sums up where we might end up if corporations are left to decide whether to do the right thing.
 

Final thoughts
 

Without detailed plans in place to minimise job losses and ensure responsible usage, the AI Age could end up like many of the futures science fiction has predicted. There is another option that I haven’t mentioned: as consumers, people can make choices that push companies to do the right thing. Unfortunately, there has been little evidence of this working in the past. We’ve all read stories about child labour in fashion brands’ supply chains and poor working conditions in Amazon warehouses. Yet we haven’t been able to stop our consumerist ways. Price, convenience, and the emotional connection a brand makes us feel towards a product outweigh all other reasoning. A simple badge for human-first companies (a bit like the RSPCA Assured sticker on meat) could work by at least giving customers a choice over which companies they give their money to. Although when one of your competitors decides to cull its staff for AI agents to save on expenses or improve productivity, it will be difficult for even the most human-friendly company not to follow suit.


Figure 2: What a human-first company logo could look like (ironically, created by generative AI) next to the RSPCA Assured logo

 

Tax breaks for human-first firms could be another option. It is in the government’s interest to prevent job displacements from turning into long-term unemployment. A tax credit programme based on the number of redundancies per year, and possibly on employee growth, could incentivise companies to retain staff. The downside is that this type of system could hinder the innovation and efficiency improvements that drive long-term growth.

 

What will the AI revolution for workers look like? Maybe there is already an indication. Quite a few companies are appearing on LinkedIn looking to employ people on short-term projects to help train their LLMs. I don’t know how genuine some of these companies are, but the idea of bringing in people only when needed for short-term projects doesn’t seem an implausible business model if the need for human workers is vastly reduced. A large portion of the population could become contractors, having to deal with the financial uncertainty when a project ends. Can the government help lighten this burden? Yes. Will they? For now, that is very much unclear.

 
