Your new teammate is a machine. Are you ready?

Companies across various industries are investing heavily in AI to enhance employee productivity. A leader at the consulting firm McKinsey says he envisions an AI agent for every human employee. Soon, a factory manager will oversee a production line where human workers and intelligent robots seamlessly develop new products. A financial analyst will partner with an AI data analyst to uncover market trends. A surgeon will guide a robotic system with microscopic precision, while an AI teammate monitors the operation for potential complications.

These scenarios represent the forefront of human-machine collaboration, a significant shift that is quickly moving from research labs into every critical sector of our society.

In short, we are on the verge of deploying AI not just as a tool, but as an active partner in our most important work. The potential is clear: If we effectively combine the computational power of AI with the intuition, creativity, and ethical judgment of a human, the team will achieve more than either could alone.

But we aren’t prepared to harness this potential. The biggest risk is what’s called “automation bias.” Humans tend to over-rely on automated systems and, worse, to favor their suggestions even when correct contradictory information is available. Automation bias can lead to critical errors of commission (acting on flawed advice) and omission (failing to act when a system misses something), particularly in high-stakes environments.

Even improved proficiency with AI doesn’t reliably mitigate automation bias. For example, a study of the effectiveness of Clinical Decision Support Systems in health care found that individuals with moderate AI knowledge were the most over-reliant; both novices and experts showed more calibrated trust. What did lead to lower rates of automation bias was making study participants accountable for either their overall performance or their decision accuracy.

This leads to the most pressing question for every leader: When the AI-human team fails, who will be held accountable? If an AI-managed power grid fails or a logistics algorithm creates a supply chain catastrophe, who is responsible? Today, our legal and ethical frameworks are built around human intent, creating a “responsibility gap” when an AI system causes harm.

This leads to significant legal, financial, and reputational risks.

First, it produces a legal vacuum. Traditional liability models are designed to assign fault to a human agent with intent and control. But an AI is not a moral agent, and its human operators or programmers may lack sufficient control over its emergent, learned behaviors, making it nearly impossible to assign blame to any individual. This leaves the organization that deployed the technology as the primary target of lawsuits, potentially liable for damages it could neither predict nor directly control.

Second, this ambiguity around responsibility cripples an organization’s ability to respond effectively. The “black box” nature of many complex AI systems means that even after a catastrophic failure, it may be impossible to determine the root cause. This prevents the organization from fixing the underlying problem, leaves it vulnerable to repeated incidents, and undermines public trust by making it appear unaccountable.

Finally, it invites regulatory backlash. In the absence of a clear chain of command and accountability, industry regulators are more likely to impose broad, restrictive rules, stifling innovation and creating significant compliance burdens.


The gaps in liability frameworks were laid bare after a 2018 fatal accident involving an Uber self-driving car. Debate arose over whether Uber, the system manufacturer, or the human safety driver was at fault. The case ended five years later with “the person sitting behind the wheel” pleading guilty to an endangerment charge, even though the automated driving system had failed to identify the pedestrian walking a bicycle and to brake.

Such ambiguities complicate the implementation of human-machine teams. Research reflects this tension, with one study finding that while most C-suite leaders believe the responsibility gap is a serious challenge, 72% admit they do not have an AI policy in place to guide responsible use.

This isn’t a problem that Washington or Silicon Valley alone can solve. Leaders in any organization, whether public or private, can take steps to de-risk and maximize their return on investment. Here are three practical actions every leader can take to prepare their teams for this new reality. 

Start with responsibility. Appoint a senior executive responsible for the ethical implementation of AI-enabled machines in your organization. Each AI system must have a documented human owner—not a committee—who is accountable for its performance and failures. This ensures clarity from the start. Require your teams to define the level of human oversight for each AI-driven task, deciding whether a human needs to be “in the loop” (approving decisions) or “on the loop” (supervising and able to intervene). Accountability should be the first step, not an afterthought.

Onboard AI like a new hire. Train your staff not only on how to use AI but also on how it thinks, its limitations, and its potential failure points. The aim is to build calibrated trust, not blind trust. Approach AI integration with the same thoroughness as onboarding a new employee. Begin with less critical tasks to help your team understand the AI’s strengths and weaknesses. Establish feedback channels so that human team members can help improve the AI. When AI is treated as a teammate, it is more likely to become one.

Integrating AI as a teammate in our work is inevitable, but ensuring success and safety requires proactive leadership. Leaders who establish clear accountability, invest in comprehensive training, and prioritize fairness will thrive. Those who treat AI as just another tool will face the consequences. Our new machine teammates are here; it’s time to lead them effectively.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

