The new boss doesn’t pull you aside for a hallway check-in. It doesn’t ask how your weekend was. It doesn’t soften feedback with a compliment sandwich.
It just updates the dashboard.
Your schedule tightens by eight minutes. Your “productive time” score dips. Your call tone gets flagged. Your performance review arrives pre-written: confident, clinical, and oddly certain about what “excellent” looks like.
This is the rise of the AI manager: not a humanoid robot in a blazer, but a growing stack of software that instructs, monitors, and evaluates work at scale.
The official name is algorithmic management: software (sometimes AI-powered) that fully or partially automates tasks traditionally done by human managers (OECD).
And it’s not a niche phenomenon anymore.
An OECD study drawing on a survey of 6,000+ mid-level managers across six countries (June–August 2024) found algorithmic management is already widespread, with reported adoption rates as high as 90% in the United States, 79% on average in surveyed European countries, and 40% in Japan.
So yes, AI managers are here.
The real question is: what kind of workplace do they create?
Most people won’t “meet” an AI manager in a meeting. They’ll meet it in three places:
Tools that assign tasks, route tickets, optimize schedules, set targets, and decide what comes next. The OECD categorizes these as tools that instruct workers.
Tools that track time, speed, clicks, location, interactions, productivity signals, and sometimes even the “tone” of communication (OECD).
In the OECD survey, 55% of U.S. firms reported monitoring the content and tone of conversations, voice calls, or emails (versus 6% in Europe and 8% in Japan).
Tools that score performance, recommend promotions, flag underperformance, and generate performance summaries, often using AI to synthesize what used to be a manager’s narrative (OECD).
Put those three together and you get something that functions like a manager: it shapes priorities, measures behavior, and influences outcomes.
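The mechanics are usually mundane. Here is a toy sketch of how monitored signals become an automated evaluation; every name, weight, and threshold below is invented for illustration and comes from no real product:

```python
from dataclasses import dataclass

# Hypothetical monitored signals for one worker-week.
# Field names and weights are illustrative, not from any real tool.
@dataclass
class Signals:
    tickets_closed: int
    active_minutes: int
    schedule_minutes: int
    negative_tone_flags: int

def productivity_score(s: Signals) -> float:
    """Toy weighted score: utilization plus throughput, minus tone penalties."""
    utilization = s.active_minutes / max(s.schedule_minutes, 1)
    throughput = min(s.tickets_closed / 40, 1.0)  # 40 tickets = "full" output
    return round(0.6 * utilization + 0.4 * throughput
                 - 0.05 * s.negative_tone_flags, 3)

def review_line(name: str, s: Signals, threshold: float = 0.7) -> str:
    """The 'evaluate' layer: a pre-written review sentence derived from a number."""
    score = productivity_score(s)
    verdict = "meets expectations" if score >= threshold else "flagged for review"
    return f"{name}: score {score} -> {verdict}"

print(review_line("A. Worker", Signals(38, 2000, 2400, 1)))
# prints: A. Worker: score 0.83 -> meets expectations
```

Nothing in this pipeline is exotic AI; the managerial power sits in who picks the signals, the weights, and the threshold.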
Two trends are colliding:
Gartner predicts that through 2026, 20% of organizations will use AI to flatten organizational structures, eliminating more than half of current middle management positions.
In other words: fewer human managers, wider spans of control.
Microsoft and LinkedIn report that 75% of global knowledge workers use AI at work, and many bring their own tools when employers don’t provide them.
Then 2025 data adds fuel: a Gallup poll (reported by Business Insider) found 23% of U.S. workers use AI at least a few times per week, up from 12% in mid-2024.
When adoption is rising and org charts are thinning, companies reach for systems that scale. And the easiest thing to scale is not coaching; it’s measurement.
That’s how you end up with AI managers that don’t do small talk.
Algorithmic management gets adopted because it sells a seductive story:
Even the OECD notes the potential upsides (productivity and efficiency gains, more consistent decision-making) while also stressing the risks.
In the OECD survey, managers often reported improvements like increased information, decision speed, and autonomy, while also reporting concerns about trustworthiness.
So the pro-AI case isn’t fantasy. It’s real.
But it comes with a trade: what you measure becomes what you manage.
And what you manage becomes what people optimize for.
The OECD report explicitly points to documented downsides in prior evidence: work intensification, stress linked to digital surveillance, and doubts about decision quality.
Researchers are also mapping how algorithmic management can reshape psychosocial risks and health outcomes beyond gig platforms, spreading into “traditional” sectors like logistics, retail, and healthcare (SJWEH; ScienceDirect).
Here are the four workplace shifts I’m watching most closely:
When performance is continually quantified, people naturally adapt: they optimize for what the system measures.
That’s not because workers are cynical. It’s because they’re rational.
Monitoring can boost short-term output. But long-term, it can sap ownership.
The OECD highlights concerns about trustworthiness, including unclear accountability and difficulty following the tools’ logic.
When people can’t explain how decisions are made, they stop believing decisions are fair, even if they are.
In theory, automation frees managers to coach.
In practice, many managers become the human face of rules they didn’t design and can’t fully explain.
That’s how you end up with one of the most corrosive workplace dynamics: “I didn’t decide it; the system did.”
Algorithms can reduce some human inconsistency. But they can also encode bias through data, proxies, and feedback loops.
The OECD notes that algorithms can perpetuate or mitigate bias, depending on design and implementation.
So the key question is no longer “Is it biased?” but how bias gets detected, measured, and corrected.
This trend is pushing governments to define rules for AI in employment.
Under the EU AI Act framework, employment and worker management use cases appear in the list of “high-risk” areas subject to stronger obligations (European Parliament).
New York City’s Local Law 144 restricts the use of “automated employment decision tools” unless they’ve had a bias audit within the past year, with public posting and required notices.
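The arithmetic behind such an audit is simple. A minimal sketch in the spirit of Local Law 144’s “impact ratio” (each category’s selection rate divided by the most-selected category’s rate); the data is invented, and the classic four-fifths (0.8) threshold is a widely used rule of thumb, not a cutoff the law itself mandates:

```python
from collections import Counter

def selection_rates(outcomes):
    """outcomes: list of (category, selected: bool) pairs from a hiring tool."""
    applied, selected = Counter(), Counter()
    for category, was_selected in outcomes:
        applied[category] += 1
        selected[category] += was_selected  # True counts as 1
    return {c: selected[c] / applied[c] for c in applied}

def impact_ratios(outcomes):
    """Each category's selection rate relative to the most-selected category."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {c: r / best for c, r in rates.items()}

# Invented sample: category A selected 40/100, category B selected 25/100.
sample = ([("A", True)] * 40 + [("A", False)] * 60
          + [("B", True)] * 25 + [("B", False)] * 75)
for cat, ratio in impact_ratios(sample).items():
    flag = "" if ratio >= 0.8 else "  <- below four-fifths rule of thumb"
    print(cat, round(ratio, 2), flag)
```

Here category B’s ratio is 0.625, which is exactly the kind of disparity a bias audit is meant to surface before the tool makes another hiring decision.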
Translation: AI managers are not just a productivity story anymore. They’re a compliance story.
Here’s the future I see emerging fast.
Your work will increasingly be represented by a trail of metrics and machine-readable signals.
People who can shape those signals (with clarity, documentation, and outcome framing) will advance faster than people who assume excellence “speaks for itself.”
Gartner’s flattening prediction points to fewer managers.
The OECD data suggests the systems that replace managerial tasks are already widespread.
That combination produces a new kind of manager: a human with a wide span of control, assisted (and constrained) by algorithmic tools.
Knowledge workers are rapidly adopting AI (Microsoft, LinkedIn).
Meanwhile, algorithmic management is expanding in operational environments too; logistics is a frequent case-study context (ScienceDirect).
That can widen divides: who gets autonomy (AI as copilot) versus who gets controlled (AI as overseer).
If AI influences pay, promotion, scheduling, or discipline, people will demand transparency, explanations, and ways to contest decisions.
The OECD notes concerns precisely around accountability and the difficulty of following tools’ logic.
AI managers won’t replace leadership.
But they will replace a lot of what we used to call leadership: visibility, coordination, evaluation, scheduling, and reporting, especially in flattened organizations (Gartner).
The workplaces that thrive won’t be the ones with the most monitoring.
They’ll be the ones that can answer four questions clearly: what is measured, why it is measured, who is accountable, and how a decision can be contested.
Because the future of work isn’t just AI at work.
It’s AI as work’s invisible management layer.
“AI managers” are software systems, often called algorithmic management tools, that partially or fully automate managerial tasks like instruction, monitoring, scheduling, and performance evaluation (OECD).
An OECD study based on a survey of 6,000+ mid-level managers across six countries (June–August 2024) reports high adoption, including 90% in the U.S. and an average of 79% in surveyed European countries.
It depends on jurisdiction and use case. For example, NYC’s Local Law 144 requires bias audits and notices for certain automated employment decision tools.
Research and policy discussions highlight risks like work intensification, stress from surveillance, unclear accountability, and potential bias depending on design and implementation.