‘Don’t Hold Your Breath’: Scott Galloway Warns Against Leaving Ethical AI Regulation to Tech CEOs

In an era where artificial intelligence permeates every facet of modern life—from the algorithms curating our social media feeds to the predictive models guiding national security decisions—the question of who holds the reins on ethical oversight has never been more pressing. On a recent episode of CNN’s Inside Politics, podcast host and business provocateur Scott Galloway delivered a stark rebuke to the notion that tech industry titans can be trusted to self-regulate. “Don’t hold your breath,” Galloway quipped to anchor Dana Bash, underscoring a profound skepticism toward the self-serving impulses of Silicon Valley leaders. His admonition arrives amid escalating tensions between the U.S. Department of Defense and AI firm Anthropic, a dispute that exemplifies the perilous intersection of innovation, ethics, and unchecked power.

Galloway, a professor of marketing at New York University’s Stern School of Business and co-host of the influential Pivot podcast, has long positioned himself as a gadfly in the tech world. Having built ventures ranging from digital agencies to e-commerce platforms, he brings a blend of insider savvy and outsider critique to his commentary. In the Inside Politics segment, which aired on February 16, 2026, Galloway argued that society has dangerously “outsourced our ethics, our civic responsibility, what is good for the public to the CEOs of companies of tech.” This delegation, he contended, represents not just a lapse in governance but a systemic failure of democratic institutions to assert authority over technologies that could redefine human agency itself.

At the heart of Galloway’s critique lies a simple yet sobering observation: profit motives and ethical imperatives rarely align. Tech CEOs, incentivized by shareholder demands and the relentless pace of market competition, prioritize scalability and monetization over safeguards against misuse. “This is another example of how government is failing to step in and provide thoughtful, sensible regulations,” Galloway told Bash, his tone laced with frustration. He invoked historical parallels, likening today’s laissez-faire approach to AI to the deregulation of financial markets in the lead-up to the 2008 crash, a cautionary tale of innovation run amok. Yet Galloway’s warning transcends economic analogy; it probes the philosophical underpinnings of progress. If AI systems can autonomously make life-altering decisions, whether in hiring, judicial sentencing, or military targeting, who ensures they reflect societal values rather than corporate incentives?

The timing of Galloway’s remarks could hardly have been more pointed, coinciding with revelations of a brewing crisis at the Pentagon. Just days earlier, reports surfaced that the U.S. military was scrutinizing its multimillion-dollar contract with Anthropic, the AI company founded by former OpenAI executives with a mission to build “safe and reliable” systems. At issue are the so-called “guardrails” embedded in Anthropic’s flagship model, Claude, which is designed to refuse queries that could enable mass surveillance of American citizens or the development of autonomous weapons. According to sources familiar with the negotiations, the Pentagon has grown “fed up” after months of stalemated talks, with defense officials demanding broader latitude to deploy Claude in sensitive operations. The friction escalated dramatically when it emerged that Claude had been integral to a high-stakes military raid that captured Venezuelan leader Nicolás Maduro in early February 2026. While the operation’s success bolstered U.S. foreign policy objectives, it exposed a chasm between Anthropic’s ethical commitments and the military’s pragmatic needs.

Anthropic, valued at more than $180 billion and backed by investors including Amazon and Google, has marketed itself as an ethical counterweight to less scrupulous AI developers. Its “Constitutional AI” framework, which trains models against principles drawn from documents like the Universal Declaration of Human Rights, aims to build a bias toward safety into the models themselves. Pentagon negotiators, however, view these constraints as impediments to national security. “Anthropic wants to put guardrails in place to stop Claude from being used for mass surveillance of Americans or to develop weapons that can be indiscriminately lethal,” one report noted, highlighting the company’s reluctance to dilute these protections even for government clients. In response, defense officials have floated the possibility of severing ties, a move that could reverberate through the AI ecosystem. Critics, including ethicists at organizations like the Electronic Frontier Foundation, applaud Anthropic’s stance as a rare bulwark against militarized AI, while hawks in Congress decry it as naive obstructionism in an era of great-power rivalry with China.

This standoff is not an isolated incident but a microcosm of the broader regulatory vacuum Galloway decries. As 2026 unfolds, the global landscape of AI governance is a patchwork of ambition and inertia. In the European Union, the AI Act, hailed as the world’s first comprehensive AI law, entered into force in 2024, and most of its obligations for “high-risk” systems, such as those used in biometrics or critical infrastructure, become applicable this August. Companies must demonstrate transparency, conduct risk assessments, and maintain human oversight, with fines of up to 7% of global annual turnover for the most serious violations. Proponents argue it sets a gold standard, fostering trust through accountability. Yet even here, implementation challenges loom: a proposed “omnibus” package could delay or dilute key provisions, a reflection of Big Tech’s lobbying muscle.

Across the Atlantic, the United States remains a regulatory laggard, fragmented by partisan divides and federal-state tensions. President Trump’s December 2025 executive order on AI, which emphasizes “innovation-friendly” policies, has drawn fire for prioritizing deregulation over safety. Meanwhile, states like Colorado and California are forging ahead with bespoke frameworks. Colorado’s AI Act, effective February 2026, mandates impact assessments for algorithmic decision-making in employment and housing, while California’s updates to its privacy laws extend scrutiny to generative AI outputs. These piecemeal efforts, while commendable, underscore Galloway’s point: without unified federal action, ethical oversight devolves into a compliance lottery, where resource-rich firms game the system and smaller innovators falter.

Globally, the ethics conundrum intensifies. China’s amended Cybersecurity Law, which took effect on January 1, 2026, bolsters AI risk assessments but ties them inextricably to state surveillance goals, raising alarms about authoritarian co-optation. In India and Brazil, emerging guidelines grapple with cultural nuances, such as AI’s role in combating misinformation during elections. Yet, as Wharton School professor Peter Cappelli warned in a January 2026 op-ed, “Ethics is the defining issue for the future of AI. And time is running short.” Fragmented regulations risk a race to the bottom, in which lax jurisdictions undermine global standards.

Galloway’s interview with Bash also touched on adjacent flashpoints, such as the ethical quandaries of AI-driven social media. Echoing concerns raised by Pivot co-host Kara Swisher in a prior CNN appearance, he highlighted how platforms operated by X and Meta amplify division through unchecked recommendation engines. These systems, ostensibly neutral, often perpetuate racial, gender, and ideological biases that erode civic discourse. Galloway advocated “thoughtful, sensible” interventions, such as mandatory audits and algorithmic transparency, drawing on antitrust precedents to argue that AI monopolies warrant similar scrutiny.

The implications of this regulatory shortfall extend far beyond boardrooms and battlefields. For everyday citizens, the stakes include privacy erosion, job displacement, and the specter of autonomous weapons escalating conflicts without human intervention. Ethicists like Timnit Gebru, formerly of Google, have long warned that self-regulation invites “accountability washing”: cosmetic gestures that mask deeper flaws. In 2026, as AI ethics trends pivot toward verifiable standards for fairness and explainability, the onus falls on policymakers to reclaim authority.

Galloway’s plea on Inside Politics serves as a clarion call: entrusting AI’s moral compass to tech CEOs is akin to handing the keys of democracy to those who profit from its vulnerabilities. As the Anthropic saga unfolds and global frameworks coalesce, the question remains whether governments will heed the warning or keep outsourcing the future. One thing is certain: holding our breath won’t suffice. Ensuring that AI serves humanity, rather than subjugating it, demands vigilant, collective action.