Are workers more willing to trust AI over their boss?

Most of us will admit to using AI at some point - whether to summarise or reword information, plan holidays, troubleshoot a piece of tech, sense-check a financial decision, or investigate health symptoms. In fact, a recent KPMG report found that 42% of people in the UK are willing to trust AI and 59% have personally experienced or observed benefits from AI use, suggesting it’s quickly becoming a go-to source of advice for millions.
While many may assume this shift is down to convenience, mental health practitioner Ankita Guchait, writing in Psychology Today, points to something deeper: psychologically, AI can feel like a safer space.
There’s no fear of judgement, no concern about asking a “silly” question, and no anxiety about being perceived as difficult or uninformed; all fears that are arguably intensified at work.
The risks of relying on generic AI for employment-related advice
It’s not surprising that this behaviour is now spilling into the workplace. Increasingly, employees are turning to generic AI platforms like ChatGPT or Google Gemini with questions such as “what does my payslip mean?” or “how much holiday am I entitled to?”, questions that would previously have gone to HR, payroll, or a manager.
The issue isn’t that people are using AI. It’s that most mainstream AI tools, while extremely capable, are fundamentally general-purpose. They’re trained on vast amounts of public information and excel at producing fluent, helpful answers, but they are not natively connected to the reality of an organisation. They don’t have secure, governed access to internal systems, up-to-date policies, contractual terms, payroll rules, country-specific configurations, or the historical decisions that shape “how things are done here”. And even when they can hold context within a single conversation, they can’t reliably carry the right context across time, teams, and workflows in the way a professional environment requires.
That limitation matters because employment questions are rarely generic. They’re personal, policy-bound, and jurisdiction-dependent. The correct answer to something as simple as “how much holiday do I have left?” depends on contract type, seniority, working pattern, local law, company policy, carried-over leave, recent absence, and what’s already been approved in the system. Without that context, a generic model can only guess, often convincingly.
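To make the point concrete, here is a minimal sketch of what answering “how much holiday do I have left?” actually requires. Every field name and rule below is invented for illustration - real systems encode far more (local law, contract type, recent absence) - but even this toy version shows how much organisational context a generic model simply never sees.

```python
from dataclasses import dataclass

@dataclass
class EmployeeContext:
    """Hypothetical context a contextual AI would pull from internal systems."""
    annual_entitlement_days: float   # from the employment contract
    fte_fraction: float              # working pattern (1.0 = full time)
    carried_over_days: float         # from the company's carry-over policy
    taken_days: float                # from the absence system
    approved_pending_days: float     # already booked but not yet taken

def remaining_leave(ctx: EmployeeContext) -> float:
    """Remaining holiday, pro-rated for the employee's working pattern."""
    accrued = ctx.annual_entitlement_days * ctx.fte_fraction
    return accrued + ctx.carried_over_days - ctx.taken_days - ctx.approved_pending_days

ctx = EmployeeContext(
    annual_entitlement_days=25,
    fte_fraction=0.8,        # four-day week
    carried_over_days=3,
    taken_days=10,
    approved_pending_days=2,
)
print(remaining_leave(ctx))  # 25 * 0.8 + 3 - 10 - 2 = 11.0
```

Without access to those five inputs, any answer is a guess; with them, it is a one-line calculation. That gap is the whole argument for grounding AI in internal systems rather than public training data.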
The result is a widening gap between the questions employees feel comfortable asking AI and what non-contextual AI is actually equipped to answer. In practice, that means guidance can be confidently delivered while still being wrong, outdated, or irrelevant, exactly the kind of failure mode that becomes risky in HR, payroll, and compliance.
There’s also a second-order problem: to make a general AI tool “useful enough”, people are tempted to paste in details - payslip lines, absence histories, compensation changes, employee identifiers, even client data. That introduces avoidable exposure across privacy, regulation, security, and ultimately brand trust. In other words, the more you try to force generic AI to be context-aware, the more you risk crossing lines you wouldn’t cross with any other workplace system.
And despite the perception of convenience, generic AI often creates hidden work. Because it doesn’t sit inside the workflow or have durable organisational memory, employees end up repeatedly re-explaining the situation, reconstructing context from scratch, and manually double-checking answers against source systems. That friction defeats the purpose. Instead of removing effort, it shifts effort onto the employee and moves the organisation further away from a single source of truth.
This is why specialised, contextual AI is emerging as the real unlock for professional use cases. Generic LLMs provide the foundation - remarkable language capability and reasoning patterns - but value in the workplace comes from what’s built on top: secure connections to internal data, policy and compliance guardrails, auditability, and answers grounded in the systems where decisions actually live.
Why the future isn’t about less AI at work
For business leaders, the challenge isn’t stopping employees from using AI altogether. Without it, employees face information overload - which is increasingly impossible for humans to navigate effectively - and a much narrower psychological safety window.
With it, they’re able to free up time and mental energy for higher-value work, access 24/7 support and instant answers and, of course, have a “safe space” to ask questions.
As such, the key here is to recognise where trust in AI is growing, understand the risks of the wrong implementation and use of AI, and ensure staff can benefit from AI enabled access to reliable, contextual information within the business framework. And the answer lies in contextualised AI tools - platforms that understand a company’s policies, processes, and data, and adapt to each team or employee over time.
These platforms can also handle far larger contexts than ever before, which opens up new possibilities that were previously impossible: cross-functional analysis linking CRM, support, product, and finance; intelligent onboarding tailored to company history; informed decision-making drawing on the full organisational context; and automation of complex, context-dependent tasks.
Advice for businesses to help guide AI use
When adopting contextualised AI tools, it’s important to keep the following advice in mind:
- Anchor on outcomes, not just tools. Define a few priority use cases where AI can add real value. This might be, for example, support deflection, faster HR responses, or more efficient internal reporting. Then, let your teams choose the approaches that fit best, provided they can show measurable impact.
- Adopt freedom within guardrails. By this, I mean make sure governance doesn’t slow teams down. Consider setting a small number of non-negotiable rules around data, security, and legal compliance (think a simple 3-4 level data classification with examples and clear “can/can’t paste into AI” rules), but keep everything else as guidance. This helps avoid unnecessary restrictions. You can also create a small AI stewardship group - one owner plus rotating reps from key functions - that can unblock teams, publish decisions, and maintain standards without slowing adoption. A tiered risk model can work well too: low-risk internal tasks (drafting, summarising) are “self-serve”, medium-risk tasks need peer review, and high-risk tasks (customer-facing decisions, regulated data) need formal approval.
- Keep humans in the loop. It sounds obvious, but defining exactly where humans must review outputs (external comms, HR decisions, financial commitments) and what “review” means (fact-check, policy compliance) is key here. AI is a helper, not a replacement for human judgement.
- Create safe experimentation lanes. Make AI experimentation normal, safe, and repeatable, because iteration is where the value shows up. Provide an environment and a small monthly budget/time allocation so teams can try ideas without procurement friction. It’s also important to show commitment and lead from the top, even if you aren’t the most AI-literate person in your company.
- Always measure and improve. Track outcomes such as time saved, error rates, and user satisfaction. Encourage teams to refine prompts, create templates, and share successful patterns - so learning becomes cumulative rather than siloed - and make sure they feel comfortable being transparent by encouraging them to label when AI was used.
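The tiered risk model described above can be captured in a few lines. The tier names, task examples, and sign-off rules here are illustrative assumptions, not a prescribed standard - the point is that a policy this simple is easy to publish, automate, and audit.

```python
# Illustrative mapping from risk tier to the sign-off an AI-assisted task needs.
APPROVAL_BY_TIER = {
    "low": "self-serve",        # e.g. drafting, summarising internal notes
    "medium": "peer review",    # e.g. internal reports shared across teams
    "high": "formal approval",  # e.g. customer-facing decisions, regulated data
}

def required_approval(tier: str) -> str:
    """Return the sign-off required for a task at the given risk tier."""
    if tier not in APPROVAL_BY_TIER:
        raise ValueError(f"Unknown risk tier: {tier!r}")
    return APPROVAL_BY_TIER[tier]

print(required_approval("low"))   # self-serve
print(required_approval("high"))  # formal approval
```

In practice a stewardship group would own this table, extend the examples, and wire the check into whatever workflow tooling the organisation already uses.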
And this is why the future isn’t just about more or less AI at work - it’s about the shift from ‘AI that answers’ to ‘AI that understands and acts’. It’s about AI agents becoming reliable collaborators that genuinely understand your business and workflows, helping to accelerate productivity at both an individual and collective scale. Tightly integrated, deployed responsibly, and with humans in the loop where the risk demands it, this kind of AI can help redefine work in more creative and social ways.

