Who’s Afraid of AI? 2025 Mega‑Roundup: Why Everyone—From CEOs to Coders—Is Nervous (and Where the Fear Is Overblown)

September 18, 2025

Key facts (scan this first)

  • Public mood: In a fresh Pew Research Center survey (Sept. 17, 2025), more Americans say they’re anxious than excited about AI’s spread—especially around deepfakes, job losses, and loss of control.
  • Jobs at risk (or reshaped): The IMF estimates ~40% of global employment is exposed to AI; exposure rises to ~60% in advanced economies. Effects range from augmentation to displacement.
  • Cyber + election risks: OpenAI and Microsoft disclosed state‑linked operations from Russia, China, Iran and North Korea experimenting with generative AI for influence and cyber tradecraft; takedowns occurred, but capabilities are maturing.
  • Copyright + creators: Legal battles over training data have intensified (NYT v. OpenAI/Microsoft; multiple author suits). Anthropic reached a settlement framework with book authors in 2025, signaling expensive compromises ahead.
  • Deepfakes hit real people: A Biden‑voice robocall targeted New Hampshire voters in Jan. 2024; the FTC has repeatedly warned families about AI voice‑cloning scams.
  • Regulation is catching up: The EU AI Act entered into force in 2024 (phased implementation), the U.N. adopted its first global AI resolution in 2024, and the U.K. Bletchley Declaration (Nov. 2023) set a common risk language. The U.S. has an AI Executive Order (Oct. 30, 2023) and a shifting federal posture in 2025.
  • Healthcare caution: WHO (2024/2025) and the AMA (2024) issued guidance: use large language models carefully, manage bias, and maintain human oversight.
  • Experts are split on “x‑risk”: Thousands of AI researchers surveyed in 2024 gave a median ~5% chance that AI could cause human extinction; views vary widely.

Who’s afraid—and why

1) AI pioneers & safety researchers

  • Geoffrey Hinton (“godfather of AI”): “There’s a chance the machines could take over.”
  • Yoshua Bengio (Turing Award co‑laureate) on deepfakes/identity fraud: “We absolutely need to ban the counterfeiting of human identities.”
  • Sam Altman (OpenAI) to the U.S. Senate: “If this technology goes wrong, it can go quite wrong.”

Why they’re worried: fast capability jumps, misuse (bio/cyber), and the unsolved problem of reliably aligning very powerful systems with human values. A 2024 survey of thousands of AI authors found that 37.8–51.4% of respondents assign at least a 10% chance to outcomes “as bad as human extinction,” underscoring real uncertainty even among optimists.

2) Builders & execs: caution vs. contrarianism

  • Dario Amodei (Anthropic): warned in 2025 that AI could eliminate up to half of entry‑level white‑collar jobs within ~5 years; he puts roughly a 25% chance on things ending “really, really badly.”
  • Yann LeCun (Meta): “AI will bring a lot of benefits… [but] we’re running the risk of scaring people away from it.” He calls existential‑risk fears premature.

Why they’re split: near‑term upside (productivity, science) versus hard‑to‑quantify worst‑case risks and real externalities (bias, IP, safety). Studies show meaningful productivity gains—e.g., 14–25% for some tasks—especially for less‑experienced workers, alongside new error modes and over‑reliance.

3) Workers, unions & creators

  • IMF: AI exposes ~40% of jobs globally; the mix of augmentation vs. automation is likely to widen inequality absent policy intervention.
  • Authors/Newsrooms: The NYT lawsuit and a wave of author suits argue training used copyrighted works without permission; in 2025, Anthropic struck a proposed settlement, hinting at licensing becoming the norm.
  • Artists & civil rights advocates: Joy Buolamwini (Algorithmic Justice League) warns of the “coded gaze”—bias embedded in AI systems that misrecognize or mistreat marginalized groups.

4) Voters & platform watchdogs

The New Hampshire “Biden voice” robocall showed how cheaply AI can impersonate leaders; regulators responded with enforcement threats, and platforms updated policies after public outcry over other high‑profile deepfakes. The FTC has issued plain‑language guidance for families about voice‑cloning scams.

5) Doctors, patients & public‑health officials

  • WHO (2024/2025) and AMA (2024) stress guardrails, transparency, and human oversight for LMMs in health; they highlight bias, safety, and liability risks.
  • Research and reporting have flagged racial bias in medical chatbot outputs, reinforcing caution for clinical use.

6) Cyber defenders & election‑security teams

  • OpenAI + Microsoft disclosed five state‑linked actors experimenting with LLMs (e.g., for phishing, content ops). The early efforts had limited reach, but defenders expect rapid learning cycles.

7) Environmental & labor advocates

  • Kate Crawford (Atlas of AI): “AI is neither artificial nor intelligent.” She argues the industry is extractive (minerals, energy, data, human labeling), raising concerns that go well beyond pure “bits.”

What governments are doing (and why that matters)

  • European Union: The EU AI Act (OJ L, July 2024) creates a risk‑tiered regime (prohibitions, high‑risk duties, transparency). Rollout phases run into 2025–2026, with an AI Office coordinating enforcement. Expect heavy compliance work for “high‑risk” uses and disclosures for generative systems.
  • United Nations: In March 2024, the General Assembly unanimously adopted the first global AI resolution on safe, secure, trustworthy AI—non‑binding, but a strong signal of norms.
  • U.K. & partners: The Bletchley Declaration (Nov. 2023) gave the world a shared vocabulary for “frontier AI” risks; follow‑on summits (Seoul 2024; report in 2025) kept safety testing and evaluations on the agenda.
  • United States: Executive Order 14110 (Oct. 30, 2023) directed federal agencies on AI safety, red‑teaming, and civil‑rights protections; 2025 policy signals reflect a debate between innovation acceleration and new rules.

Bottom line: Regulators focus less on “sci‑fi apocalypse” and more on concrete harms: discrimination, misinformation, safety‑critical failures, privacy, and market power. That’s where they see near‑term fear—and legal liability.


Where the experts actually disagree

  1. Tail risk vs. tangible harms:
    • The existential‑risk contingent points to non‑zero probabilities of catastrophic failure from misaligned systems or from AI enabling bio/cyber super‑threats (see the CAIS statement: “Mitigating the risk of extinction from AI should be a global priority.”).
    • “Normalists” (e.g., Yann LeCun) argue current models are far from human‑level; panic distracts from real safety work and innovation.
  2. Open‑source vs. licensing of “frontier” models:
    Proponents of licensing say we need pre‑deployment safety checks; open‑source advocates fear centralization and slower fixes. (See California’s evolving state bills on frontier transparency.)
  3. Labor impacts:
    The IMF flags macro risk; some CEOs (e.g., Amodei, Jim Farley) predict large white‑collar cuts; others expect task re‑mixing and new roles. Evidence so far: productivity gains with uneven quality effects and new failure modes.

What different communities fear most (with expert voice)

  • Civil‑rights advocates: Bias and surveillance creep. Buolamwini coined the “coded gaze” to describe embedded bias in AI; regulators like the EEOC warn about algorithmic discrimination in hiring.
  • Health leaders: Patient‑safety, liability, and equity. WHO and AMA push governance, testing, and transparency before clinical reliance.
  • Cyber/election officials: Scalable deception. Microsoft/OpenAI detail nation‑state experimentation; defenders expect more AI‑assisted influence ops.
  • Creators & publishers: Uncompensated training, market substitution, and reputational harm. (See NYT v. OpenAI/Microsoft; author settlements.)
  • Global leaders: Governance gaps. At Davos 2024, U.N. Secretary‑General António Guterres warned of an “existential threat” from “runaway development…without guardrails.”

Not every fear holds up

  • “AI is already superhuman.” Today’s leading systems still fail at reasoning, factuality, and grounding. Gary Marcus argues LLMs are “fundamentally unreliable” without a more robust architecture.
  • “All jobs will vanish.” High‑quality studies show big task‑level gains—especially for less‑experienced workers—suggesting augmentation and skill leveling in the near term (with real disruption at entry levels).

What’s being done—and what you can do

Companies:

  • Stand up an AI risk register mapped to the NIST AI Risk Management Framework; require red‑teaming and model‑eval evidence before deployment (a minimal sketch follows this list).
  • Label synthetic media internally; adopt provenance tech; train staff to spot prompt‑injection, hallucinations, and deepfake threats. (The FTC has practical voice‑cloning tips for families, too.)
  • Data governance & IP: Track training inputs, licensing, and opt‑outs; watch evolving copyright caselaw (and settlements) closely.
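
To make the first bullet concrete, here is a minimal sketch in Python of what a risk‑register entry and a pre‑deployment gate could look like, loosely keyed to the four NIST AI RMF functions (Govern, Map, Measure, Manage). The schema, field names, scoring rule, and example values are illustrative assumptions, not part of the framework itself.

```python
from dataclasses import dataclass, field
from enum import Enum


class RmfFunction(Enum):
    # The four core functions of the NIST AI Risk Management Framework.
    GOVERN = "govern"
    MAP = "map"
    MEASURE = "measure"
    MANAGE = "manage"


@dataclass
class RiskEntry:
    # One row in an AI risk register; field names are illustrative, not a standard schema.
    system: str                 # model or product the risk belongs to
    description: str            # plain-language statement of the risk
    rmf_function: RmfFunction   # where it sits in the RMF lifecycle
    severity: int               # 1 (low) .. 5 (critical)
    likelihood: int             # 1 (rare) .. 5 (frequent)
    owner: str                  # accountable person or team
    evidence: list = field(default_factory=list)  # red-team reports, eval results, etc.

    @property
    def score(self) -> int:
        # Simple severity x likelihood triage score; real programs weight these differently.
        return self.severity * self.likelihood


def deployment_gate(register, threshold=15):
    # Block deployment if any risk scores at or above the threshold,
    # or lacks red-teaming / evaluation evidence, per the bullet above.
    return all(e.score < threshold and e.evidence for e in register)


if __name__ == "__main__":
    register = [
        RiskEntry(
            system="support-chatbot-v2",
            description="Prompt injection via pasted customer emails",
            rmf_function=RmfFunction.MEASURE,
            severity=4,
            likelihood=3,
            owner="app-security",
            evidence=["redteam-2025-08.pdf"],
        ),
    ]
    print("Cleared for deployment:", deployment_gate(register))
```

In practice the register would live in a governance tool rather than a script, but even a lightweight check like this makes “no evidence, no deployment” enforceable in a release pipeline.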

Governments & institutions:

  • Implement risk‑tiered rules (EU‑style) and impact assessments; use public procurement to demand transparency and safe‑use attestations.
  • Fund independent evaluations and safety benchmarks; align with U.N./Bletchley cooperation to curb cross‑border abuse.

Individuals:

  • Treat AI outputs like first drafts; verify facts.
  • Be skeptical of urgent voice messages (possible clones); confirm through a second channel.
  • Learn how your workplace tools use data; opt out where possible.

Short expert‑quote roundup (diverse lenses)

  • Hinton: “There’s a chance the machines could take over.”
  • Bengio: “We absolutely need to ban the counterfeiting of human identities.”
  • Altman: “If this technology goes wrong, it can go quite wrong.”
  • LeCun: “AI will bring a lot of benefits… [but] we’re running the risk of scaring people away from it.”
  • Kate Crawford: “AI is neither artificial nor intelligent.”
  • Joy Buolamwini: “The coded gaze” (her term) captures bias embedded in AI.
  • WHO (guidance): Human oversight and risk governance are essential for LMMs in health.
  • CAIS statement (2023): “Mitigating the risk of extinction from AI should be a global priority.”

FAQs

Is fear mostly about “Terminator” scenarios?
No. Policymakers emphasize present harms (fraud, bias, safety) while researchers debate low‑probability, high‑impact tail risks. The U.N. and EU measures reflect this dual track.

Are jobs doomed?
Evidence shows augmentation and quality gains now, with serious displacement risk at entry‑level white‑collar roles if adoption is rapid and unbuffered. Policy and firm choices will heavily shape outcomes.

Why are creators so alarmed?
Because the legal status of training on copyrighted works is unsettled and deepfake abuse is surging; courts and settlements are beginning to set the contours.


The takeaway

  • Who’s afraid? A lot of people—for different reasons: citizens (fraud/deepfakes), workers (displacement), doctors (safety), civil‑rights groups (bias/surveillance), security teams (cyber ops), creators (IP), and yes, some AI pioneers (tail risk).
  • What’s rational? Focus your energy on controls for real, current harms while insisting on serious evaluation of frontier systems.
  • What’s next? Watch EU implementation, U.S. federal/state rulemaking, copyright settlements, and whether AI‑assisted cyber ops move from “experiments” to material impact in 2025–2026.

Sources & further reading

Pew Research Center; IMF; CBS/60 Minutes; C‑SPAN; Wired; EU Official Journal; AP/Reuters; UK Government (Bletchley); WHO/AMA; OpenAI & Microsoft threat reports; FTC. (See citations throughout.)

Artur Ślesik

I have been fascinated by the world of new technologies for years – from artificial intelligence and space exploration to the latest gadgets and business solutions. I passionately follow premieres, innovations, and trends, and then translate them into language that is clear and accessible to readers. I love sharing my knowledge and discoveries, inspiring others to explore the potential of technology in everyday life. My articles combine professionalism with an easy-to-read style, reaching both experts and those just beginning their journey with modern solutions.
