AI is in need of critical thinking to steer safely

Peter Rose, Group Chief Information Officer, TEKenable: ‘In IT, problem-solving is in large part a question of feeling the problem, not just seeing it as academic’

The relentless pace of technological development is forcing companies to reconsider their most valuable assets, shifting the focus from technical proficiency to fundamental human qualities. As AI takes over repetitive tasks, the most durable skills for the modern knowledge worker may be ones that resist automation and provide the essential context machines lack.

According to Peter Rose, group chief information officer at TEKenable, these skills can be hard to pin down as AI is advancing so rapidly, but the right approach is to think about what machines can’t do. “‘I don’t know’ is the honest answer. How do you identify skills gaps when the technology platform is developing so quickly? What I can say is that familiarity with tools and products is probably not the skill we need to be looking for. What we should be looking for is critical thinking, adaptability and empathy,” he said.

Rose argues that the very attributes AI is now elevating are among the hardest to teach. “Some of these things aren’t trainable: it’s very hard to train someone to be empathetic, giving them the ability to swivel around in their head and ask what people really need,” he said.

But the shift isn’t merely about developing new talent; it’s about re-emphasising foundational intellectual honesty in how problems are approached. Here, soft skills pay dividends, supporting the exploratory dialogue needed to get to the root of issues.

“In IT, problem-solving is in large part a question of feeling the problem, not just seeing it as an academic problem. You’re dealing with a real company, a real set of directors, a market context, a company context, and an individual with ambitions and goals,” he said.

Staff often have only one part of the picture: their own. It is important to get underneath any proposed solution and find out what the problem really is. This, Rose suggests, includes the courage to critique requests. “You can admit ignorance and get people to explain things to you, but everything has to be in some way critiqued. That is the foundation.”

Beyond the immediate technical challenges, there is a long-term trade-off between efficiency gains and operational resilience—this concern is highlighted by reports suggesting that when AI replaces humans, there may be “no human to then step back in”.

Rose argues that this risk exists only if AI is adopted without clear governance and that, again, proper AI-augmented development requires the ability to understand a problem and state it, and the proposed solution, accurately. “There’s an inherent presumption in that question,” he said, suggesting that simply asking AI to build an application in isolation is where the risk lies. “I would definitely argue that if you set down the standards you want it to build to: use this, consider security, consider testability, it will incorporate them.”

This skills shift is compounded by the structural challenge of securing a permanent hybrid workforce. Businesses must navigate a digital landscape where employees operate on inherently untrusted networks, necessitating an immediate pivot toward advanced security services like behavioural observation and granular access controls.

Taken together, this means the future of work hinges not only on new infrastructure investment but also on a radical rethinking of employee training and AI governance, ensuring operational resilience is not sacrificed for short-term efficiency.

TEKenable itself has to take a strong interest in hybrid work security due to the extremely high demands of its customers regarding data and privacy. Ironically, though, remote work offers an unexpected benefit: inherent micro-segmentation. While organisations constantly strive for micro-segmentation to prevent a compromise spreading across a network, when people work remotely, “everyone is on their own network, unable to see anyone else’s machines”.

However, this benefit is offset by the disadvantage of sitting on a LAN that cannot be trusted. The answer? A layered approach: behavioural observation to note changes in activity on a machine, network traffic monitoring, and scrutiny of the actions of access brokers. In other words, a comprehensive defence designed to protect endpoints to a reasonable degree.

“The answer to that is a lot of security services,” Rose said.

The above text was reproduced from the interview published in Business Post on November 9th, 2025.

AI, Critical Thinking, and Hybrid Security FAQs:

Why is critical thinking becoming more important in the age of AI?

As AI automates repetitive tasks, the most valuable skills are those that resist automation—critical thinking, adaptability, and empathy. These enable humans to provide context and judgement that machines lack.

What skills should companies prioritise for future-proofing their workforce?

  • Critical thinking: questioning assumptions and analysing problems deeply.
  • Adaptability: adjusting to rapid technological changes.
  • Empathy: understanding human needs and perspectives.

Can soft skills like empathy be trained?

Empathy and intellectual honesty are among the hardest skills to teach. While technical skills can be learned, qualities like empathy often require experience and mindset shifts rather than formal training.

How does critical thinking improve IT problem-solving?

Problem-solving in IT is not just academic; it involves understanding real-world contexts such as company goals, market conditions, and individual ambitions. Critical thinking helps uncover the root cause of issues rather than just applying surface-level fixes.

What risks come with replacing humans with AI?

The main risk is operational fragility—if AI systems fail, there may be “no human to step back in”. This happens when AI is deployed without proper governance or standards.

How can businesses mitigate AI-related risks?

By setting clear governance standards:

  • Define security and compliance requirements.
  • Ensure testability and resilience in AI-driven development.
  • Avoid delegating tasks to AI without human oversight.

What security challenges arise from hybrid work?

Hybrid work introduces untrusted networks, making endpoints vulnerable. While remote work offers micro-segmentation benefits, it also requires:

  • Behavioural observation of devices.
  • Network traffic monitoring.
  • Scrutiny of access brokers.

Why is micro-segmentation considered a benefit of remote work?

When employees work remotely, each device operates on its own network, reducing the risk of lateral attacks across a corporate LAN. However, this advantage is offset by the need for stronger endpoint security.

What is TEKenable’s approach to hybrid security?

TEKenable uses a layered defence strategy:

  • Behavioural monitoring.
  • Granular access controls.
  • Advanced security services to protect endpoints and maintain compliance.
