
AI Managers Are Here And They Don’t Do Small Talk


The new boss doesn’t pull you aside for a hallway check-in. It doesn’t ask how your weekend was. It doesn’t soften feedback with a compliment sandwich.

It just updates the dashboard.

Your schedule tightens by eight minutes. Your “productive time” score dips. Your call tone gets flagged. Your performance review arrives pre-written: confident, clinical, and oddly certain about what “excellent” looks like.

This is the rise of the AI manager: not a humanoid robot in a blazer, but a growing stack of software that instructs, monitors, and evaluates work at scale.

The official name is algorithmic management: software (sometimes AI-powered) that fully or partially automates tasks traditionally done by human managers. (OECD)

And it’s not a niche phenomenon anymore.

An OECD study drawing on a survey of 6,000+ mid-level managers across six countries (June–August 2024) found algorithmic management is already widespread, with reported adoption rates as high as 90% in the United States, 79% on average in surveyed European countries, and 40% in Japan. (OECD)

So yes, AI managers are here.

The real question is: what kind of workplace do they create?

What an “AI manager” really is (and where you’ll meet it first)

Most people won’t “meet” an AI manager in a meeting. They’ll meet it in three places:

1) Instruction systems

Tools that assign tasks, route tickets, optimize schedules, set targets, and decide what comes next. The OECD categorizes these as tools that instruct workers. (OECD)

2) Monitoring systems

Tools that track time, speed, clicks, location, interactions, productivity signals, and sometimes even the “tone” of communication. (OECD)

In the OECD survey, 55% of U.S. firms reported monitoring the content and tone of conversations, voice calls, or emails (versus 6% in Europe and 8% in Japan). (OECD)

3) Evaluation systems

Tools that score performance, recommend promotions, flag underperformance, and generate performance summaries, often using AI to synthesize what used to be a manager’s narrative. (OECD)

Put those three together and you get something that functions like a manager: it shapes priorities, measures behavior, and influences outcomes.

Why this is exploding now: the unbossing + AI combo

Two trends are colliding:

Companies are flattening management

Gartner predicts that through 2026, 20% of organizations will use AI to flatten organizational structures, eliminating more than half of current middle management positions. (Gartner)

In other words: fewer human managers, wider spans of control.

Employees are already using AI whether leadership is ready or not

Microsoft and LinkedIn report that 75% of global knowledge workers use AI at work, and many bring their own tools when employers don’t provide them. (Microsoft/LinkedIn)

And 2025 data adds fuel: a Gallup poll (reported by Business Insider) found 23% of U.S. workers use AI at least a few times per week, up from 12% in mid-2024. (Business Insider)

When adoption is rising and org charts are thinning, companies reach for systems that scale. And the easiest thing to scale is not coaching; it’s measurement.

That’s how you end up with AI managers that don’t do small talk.

The promise: speed, consistency, and “less bias”

Algorithmic management gets adopted because it sells a seductive story:

  • Faster decisions (no meeting required)
  • Consistency (same rules for everyone)
  • Visibility (leaders can “see” operations instantly)
  • Efficiency (less managerial overhead)

Even the OECD notes the potential upsides (productivity and efficiency gains, more consistent decision-making) while also stressing the risks. (OECD)

In the OECD survey, managers often reported improvements like increased information, decision speed, and autonomy, while also reporting trustworthiness concerns. (OECD)

So the pro-AI case isn’t fantasy. It’s real.

But it comes with a trade: what you measure becomes what you manage.

And what you manage becomes what people optimize for.

The reality: work intensifies, trust cracks, and accountability blurs

The OECD report explicitly points to documented downsides in prior evidence: work intensification, stress linked to digital surveillance, and doubts about decision quality. (OECD)

Researchers are also mapping how algorithmic management can reshape psychosocial risks and health outcomes beyond gig platforms, spreading into “traditional” sectors like logistics, retail, and healthcare. (sjweh.fi; ScienceDirect)

Here are the four workplace shifts I’m watching most closely:

1) The workplace becomes a game because the scoreboard is always on

When performance is continually quantified, people naturally adapt:

  • work becomes more “trackable”
  • collaboration becomes more “performable”
  • creativity becomes harder to justify (because it’s harder to measure)

That’s not because workers are cynical. It’s because they’re rational.

2) Surveillance creates compliance… and quiet quitting in the mind

Monitoring can boost short-term output. But long-term, it can sap ownership.

The OECD highlights concerns about trustworthiness, including unclear accountability and difficulty following the tools’ logic. (OECD)

When people can’t explain how decisions are made, they stop believing decisions are fair, even if they are.

3) The manager role shifts from “coach” to “enforcer of the system”

In theory, automation frees managers to coach.

In practice, many managers become the human face of rules they didn’t design and can’t fully explain.

That’s how you end up with one of the most corrosive workplace dynamics: “I didn’t decide it; the system did.”

4) Bias doesn’t disappear; it changes shape

Algorithms can reduce some human inconsistency. But they can also encode bias through data, proxies, and feedback loops.

The OECD notes that algorithms can perpetuate or mitigate bias, depending on design and implementation. (OECD)

So the key question is no longer “Is it biased?” but:

Where does bias enter the pipeline, and who audits it?

Regulation is catching up (slowly), and HR is now a risk surface

This trend is pushing governments to define rules for AI in employment.

EU: Employment-related AI can be “high-risk”

Under the EU AI Act, employment and worker-management use cases appear in the list of “high-risk” areas subject to stronger obligations. (EU AI Act; European Parliament)

NYC: Bias audits and notice requirements for certain tools

New York City’s Local Law 144 restricts the use of “automated employment decision tools” unless they’ve had a bias audit within the past year, with public posting and required notices. (NYC government)
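To make “bias audit” concrete: audits under rules like Local Law 144 typically report selection rates per demographic category and an impact ratio against the highest-selected group. Here is a minimal Python sketch of that style of check; the group names and numbers are hypothetical, and a real audit follows the law’s specific methodology.

```python
# Toy sketch (not legal advice) of the selection-rate / impact-ratio
# style of check that bias audits of automated employment decision
# tools commonly report. All names and data below are hypothetical.

def selection_rate(selected: int, total: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / total if total else 0.0

def impact_ratios(groups: dict) -> dict:
    """groups: {name: (selected, total)} -> each group's selection
    rate divided by the highest group's selection rate."""
    rates = {g: selection_rate(s, t) for g, (s, t) in groups.items()}
    best = max(rates.values())
    return {g: (r / best if best else 0.0) for g, r in rates.items()}

# Hypothetical screening outcomes per demographic category.
audit = impact_ratios({
    "group_a": (40, 100),  # 40% selected -> the reference group
    "group_b": (24, 100),  # 24% selected
})
print(audit)  # group_a -> 1.0, group_b -> 0.6
```

A low impact ratio (for instance, under the classic four-fifths guideline) is what flags a tool for closer scrutiny; the audit itself doesn’t fix bias, it makes it visible.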

Translation: AI managers are not just a productivity story anymore. They’re a compliance story.

What this means for the workforce: the next five years

Here’s the future I see emerging fast.

1) “Performance” becomes a data product

Your work will increasingly be represented by a trail of metrics and machine-readable signals.

People who can shape those signals (with clarity, documentation, and outcome framing) will advance faster than people who assume excellence “speaks for itself.”

2) Middle management doesn’t vanish; it gets redesigned

Gartner’s flattening prediction points to fewer managers. (Gartner)
The OECD data suggests the systems that replace managerial tasks are already widespread. (OECD)

That combination produces a new kind of manager:

  • fewer 1:1s
  • more system design
  • more exception handling
  • more judgment calls around fairness and context

3) Two-tier workplaces widen

Knowledge workers are rapidly adopting AI. (Microsoft/LinkedIn)
Meanwhile, algorithmic management is expanding in operational environments too (logistics is a frequent case-study context). (ScienceDirect)

That can widen divides: who gets autonomy (AI as copilot) versus who gets control (AI as overseer).

4) “Explainability” becomes a workplace expectation

If AI influences pay, promotion, scheduling, or discipline, people will demand:

  • clear accountability
  • audit trails
  • transparent decision rights
  • human review pathways

The OECD notes concerns precisely around accountability and the difficulty of following tools’ logic. (OECD)
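What would an audit trail with clear accountability actually look like? One plausible sketch, in Python, is a decision record logged for every automated action: which signals were used, which system version decided, which named human owns the outcome, and how a worker appeals. All field names here are illustrative assumptions, not a standard schema.

```python
# Hypothetical sketch of a per-decision audit record an "explainable"
# algorithmic-management system could log. Field names are illustrative.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    worker_id: str
    decision: str            # e.g. "shift_reassigned"
    inputs_used: list        # which data signals fed the decision
    model_version: str       # audit trail: which system version decided
    accountable_owner: str   # a named human, never just "the system"
    appeal_path: str         # how the worker contests the decision
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = DecisionRecord(
    worker_id="w-1042",
    decision="shift_reassigned",
    inputs_used=["availability", "forecasted_demand"],
    model_version="scheduler-2.3",
    accountable_owner="ops-manager@example.com",
    appeal_path="/hr/appeals",
)
print(asdict(record))  # serializable, so it can be stored and reviewed
```

The design point is the `accountable_owner` and `appeal_path` fields: if the record can’t name a human and a review route, the system fails the explainability expectation before any metric is computed.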

How to win in an AI-managed workplace (without losing your humanity)

If you’re an employee

  • Document outcomes: don’t let your work be reduced to activity metrics.
  • Write your impact in plain language: AI summaries often favor what’s explicit.
  • Learn the system’s incentives: does it reward speed, volume, quality, satisfaction?
  • Ask for clarity: “What data is used? What’s the appeal path? Who owns final decisions?”

If you’re a manager

  • Be the translator: connect metrics to meaning and context.
  • Protect discretion: build “human override” norms for edge cases.
  • Measure the right things: what you instrument is what you create.

If you’re a leader / HR

  • Govern before you scale: audits, policies, training, and accountability first.
  • Separate monitoring from mistrust: explain why signals exist and how they’ll be used.
  • Build due process: notice, review, and correction mechanisms, especially where the system can cause harm.

The bottom line

AI managers won’t replace leadership.

But they will replace a lot of what we used to call leadership: visibility, coordination, evaluation, scheduling, reporting, especially in flattened organizations. (Gartner)

The workplaces that thrive won’t be the ones with the most monitoring.

They’ll be the ones that can answer four questions clearly:

  1. What are we measuring and why?
  2. Who is accountable when the system is wrong? (OECD)
  3. Where is the human judgment layer?
  4. How do workers contest decisions and correct data? (NYC government)

Because the future of work isn’t just AI at work.

It’s AI as work’s invisible management layer.

FAQs:

What are “AI managers”?

“AI managers” are software systems, often called algorithmic management tools, that partially or fully automate managerial tasks like instruction, monitoring, scheduling, and performance evaluation. (OECD)

How common is algorithmic management?

An OECD study based on a survey of 6,000+ mid-level managers across six countries (June–August 2024) reports high adoption, including 90% in the U.S. and an average of 79% in surveyed European countries. (OECD)

Is algorithmic management legal?

It depends on jurisdiction and use case. For example, NYC’s Local Law 144 requires bias audits and notices for certain automated employment decision tools. (NYC government)

What’s the biggest risk of AI-managed workplaces?

Research and policy discussions highlight risks like work intensification, stress from surveillance, unclear accountability, and potential bias depending on design and implementation. 
