• The Narrow Path: Sam Hammond on AI, Institutions, and the Fragile Future
    Jun 12 2025

    The race to develop ever-more-powerful AI is creating an unstable dynamic. It could lead us toward either dystopian centralized control or uncontrollable chaos. But there's a third option: a narrow path where technological power is matched with responsibility at every step.

    Sam Hammond is the chief economist at the Foundation for American Innovation. He brings a different perspective to this challenge than we do at CHT. Though he approaches AI from an innovation-first standpoint, we share a common mission on the biggest challenge facing humanity: finding and navigating this narrow path.

    This episode dives deep into the challenges ahead: How will AI reshape our institutions? Is complete surveillance inevitable, or can we build guardrails around it? Can our 19th-century government structures adapt fast enough, or will they be replaced by a faster-moving private sector? And perhaps most importantly: how do we solve the coordination problems that could determine whether we build AI as a tool to empower humanity or as a superintelligence that we can't control?

    We're in the final window of choice before AI becomes fully entangled with our economy and society. This conversation explores how we might still get this right.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

    RECOMMENDED MEDIA

    Tristan’s TED talk on the Narrow Path

    Sam’s 95 Theses on AI

    Sam’s proposal for a Manhattan Project for AI Safety

    Sam’s series on AI and Leviathan

    The Narrow Corridor: States, Societies, and the Fate of Liberty by Daron Acemoglu and James Robinson

    Dario Amodei’s Machines of Loving Grace essay.

    Bourgeois Dignity: Why Economics Can’t Explain the Modern World by Deirdre McCloskey

    The Paradox of Libertarianism by Tyler Cowen

    Dwarkesh Patel’s interview with Kevin Roberts at the FAI’s annual conference

    Further reading on surveillance with 6G

    RECOMMENDED YUA EPISODES

    AGI Beyond the Buzz: What Is It, and Are We Ready?

    The Self-Preserving Machine: Why AI Learns to Deceive

    The Tech-God Complex: Why We Need to be Skeptics

    Decoding Our DNA: How AI Supercharges Medical Breakthroughs and Biological Threats with Kevin Esvelt

    CORRECTIONS

    Sam referenced a blog post titled “The Libertarian Paradox” by Tyler Cowen. The actual title is the “Paradox of Libertarianism.”

    Sam also referenced a blog post titled “The Collapse of Complex Societies” by Eli Dourado. The actual title is “A beginner’s guide to sociopolitical collapse.”

    48 mins
  • People are Lonelier than Ever. Enter AI.
    May 30 2025

    Over the last few decades, our relationships have become increasingly mediated by technology. Texting has become our dominant form of communication. Social media has replaced gathering places. Dating starts with a swipe on an app, not a tap on the shoulder.

    And now, AI enters the mix. If the technology of the 2010s was about capturing our attention, AI meets us at a much deeper relational level. It can play the role of therapist, confidant, friend, or lover with remarkable fidelity. Already, therapy and companionship have become the most common AI use cases. We're rapidly entering a world where we're not just communicating through our machines, but to them.

    How will that change us? And what rules should we set down now to avoid the mistakes of the past?

    These were some of the questions that Daniel Barcay explored with MIT sociologist Sherry Turkle and Hinge CEO Justin McLeod at Esther Perel’s Sessions 2025, a conference for clinical therapists. This week, we’re bringing you an edited version of that conversation, originally recorded on April 25th, 2025.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find complete transcripts, key takeaways, and much more on our Substack.

    RECOMMENDED MEDIA

    “Alone Together,” “Evocative Objects,” “The Second Self,” or any of Sherry Turkle’s other books on how technology mediates our relationships.

    Key & Peele - Text Message Confusion

    Further reading on Hinge’s rollout of AI features

    Hinge’s AI principles

    “The Anxious Generation” by Jonathan Haidt

    “Bowling Alone” by Robert Putnam

    The NYT profile on the woman in love with ChatGPT

    Further reading on the Sewell Setzer story

    Further reading on the ELIZA chatbot

    RECOMMENDED YUA EPISODES

    Echo Chambers of One: Companion AI and the Future of Human Connection

    What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton

    Esther Perel on Artificial Intimacy

    Jonathan Haidt On How to Solve the Teen Mental Health Crisis

    44 mins
  • Echo Chambers of One: Companion AI and the Future of Human Connection
    May 15 2025

    AI companion chatbots are here. Every day, millions of people log on to AI platforms and talk to them like they would a person. These bots will ask you about your day, talk about your feelings, even give you life advice. It’s no surprise that people have started to form deep connections with these AI systems. We are inherently relational beings; we want to believe we’re connecting with another person.

    But these AI companions are not human; they’re platforms designed to maximize user engagement—and they’ll go to extraordinary lengths to do it. We have to remember that the design choices behind these companion bots are just that: choices. And we can make better ones. So today on the show, MIT researchers Pattie Maes and Pat Pataranutaporn join Daniel Barcay to talk about those design choices and how we can design AI to better promote human flourishing.

    RECOMMENDED MEDIA

    Further reading on the rise of addictive intelligence

    More information on Melvin Kranzberg’s laws of technology

    More information on MIT’s Advancing Humans with AI lab

    Pattie and Pat’s longitudinal study on the psycho-social effects of prolonged chatbot use

    Pattie and Pat’s study that found that AI avatars of well-liked people improved education outcomes

    Pattie and Pat’s study that found that AI systems that frame answers and questions improve human understanding

    Pat’s study that found humans’ pre-existing beliefs about AI can have a large influence on human-AI interaction

    Further reading on AI’s positivity bias

    Further reading on MIT’s “lifelong kindergarten” initiative

    Further reading on “cognitive forcing functions” to reduce overreliance on AI

    Further reading on the death of Sewell Setzer and his mother’s case against Character.AI

    Further reading on the legislative response to digital companions

    RECOMMENDED YUA EPISODES

    The Self-Preserving Machine: Why AI Learns to Deceive

    What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton

    Esther Perel on Artificial Intimacy

    Jonathan Haidt On How to Solve the Teen Mental Health Crisis

    Correction: The ELIZA chatbot was invented in 1966, not the 70s or 80s.

    42 mins
  • AGI Beyond the Buzz: What Is It, and Are We Ready?
    Apr 30 2025

    What does it really mean to ‘feel the AGI’? Silicon Valley is racing toward AI systems that could soon match or surpass human intelligence. The implications for jobs, democracy, and our way of life are enormous.

    In this episode, Aza Raskin and Randy Fernando dive deep into what ‘feeling the AGI’ really means. They unpack why the surface-level debates about definitions of intelligence and capability timelines distract us from urgently needed conversations around governance, accountability, and societal readiness. Whether it's climate change, social polarization and loneliness, or toxic forever chemicals, humanity keeps creating outcomes that nobody wants because we haven't yet built the tools or incentives needed to steer powerful technologies.

    As the AGI wave draws closer, it's critical we upgrade our governance and shift our incentives now, before it crashes on shore. Are we capable of aligning powerful AI systems with human values? Can we overcome geopolitical competition and corporate incentives that prioritize speed over safety?

    Join Aza and Randy as they explore the urgent questions and choices facing humanity in the age of AGI, and discuss what we must do today to secure a future we actually want.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_ and subscribe to our Substack.

    RECOMMENDED MEDIA

    Daniel Kokotajlo et al.’s “AI 2027” paper
    A demo of OmniHuman-1, referenced by Randy
    A paper from Redwood Research and Anthropic that found an AI was willing to lie to preserve its values
    A paper from Palisade Research that found an AI would cheat in order to win
    The treaty that banned blinding laser weapons
    Further reading on the moratorium on germline editing

    RECOMMENDED YUA EPISODES
    The Self-Preserving Machine: Why AI Learns to Deceive

    Behind the DeepSeek Hype, AI is Learning to Reason

    The Tech-God Complex: Why We Need to be Skeptics

    This Moment in AI: How We Got Here and Where We’re Going

    How to Think About AI Consciousness with Anil Seth

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

    Clarification: When Randy referenced a “$110 trillion game” as the target for AI companies, he was referring to the entire global economy.

    53 mins
  • Rethinking School in the Age of AI
    Apr 21 2025

    AI has upended schooling as we know it. Students now have instant access to tools that can write their essays, summarize entire books, and solve complex math problems. Whether they want to or not, many feel pressured to use these tools just to keep up. Teachers, meanwhile, are left questioning how to evaluate student performance and whether the whole idea of assignments and grading still makes sense. The old model of education suddenly feels broken.

    So what comes next?

    In this episode, Daniel and Tristan sit down with cognitive neuroscientist Maryanne Wolf and global education expert Rebecca Winthrop—two lifelong educators who have spent decades thinking about how children learn and how technology reshapes the classroom. Together, they explore how AI is shaking the very purpose of school to its core, why the promise of previous classroom tech failed to deliver, and how we might seize this moment to design a more human-centered, curiosity-driven future for learning.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_

    Guests

    Rebecca Winthrop is director of the Center for Universal Education at the Brookings Institution and chair of the Brookings Global Task Force on AI and Education. Her new book is The Disengaged Teen: Helping Kids Learn Better, Feel Better, and Live Better, co-written with Jenny Anderson.

    Maryanne Wolf is a cognitive neuroscientist and expert on the reading brain. Her books include Proust and the Squid: The Story and Science of the Reading Brain and Reader, Come Home: The Reading Brain in a Digital World.

    RECOMMENDED MEDIA
    The Disengaged Teen: Helping Kids Learn Better, Feel Better, and Live Better by Rebecca Winthrop and Jenny Anderson

    Proust and the Squid, Reader, Come Home, and other books by Maryanne Wolf

    The OECD research which found little benefit to desktop computers in the classroom

    Further reading on the Singapore study on digital exposure and attention cited by Maryanne

    The Burnout Society by Byung-Chul Han

    Further reading on the VR Bio 101 class at Arizona State University cited by Rebecca

    Leapfrogging Inequality by Rebecca Winthrop

    The Nation’s Report Card from NAEP

    Further reading on the Nigeria AI Tutor Study

    Further reading on the JAMA paper showing a link between digital exposure and lower language development cited by Maryanne

    Further reading on Linda Stone’s thesis of continuous partial attention.

    RECOMMENDED YUA EPISODES
    ‘We Have to Get It Right’: Gary Marcus On Untamed AI

    AI Is Moving Fast. We Need Laws that Will Too.

    Jonathan Haidt On How to Solve the Teen Mental Health Crisis

    43 mins
  • Forever Chemicals, Forever Consequences: What PFAS Teaches Us About AI
    Apr 3 2025

    Artificial intelligence is set to unleash an explosion of new technologies and discoveries into the world. This could lead to incredible advances in human flourishing, if we do it well. The problem? We’re not very good at predicting and responding to the harms of new technologies, especially when those harms are slow-moving and invisible.

    Today on the show we explore this fundamental problem with Rob Bilott, an environmental lawyer who has spent nearly three decades battling chemical giants over PFAS—"forever chemicals" now found in our water, soil, and blood. These chemicals helped build the modern economy, but they’ve also been shown to cause serious health problems.

    Rob’s story, and the story of PFAS, is a cautionary tale of why we need to align technological innovation with safety and mitigate irreversible harms before they become permanent. We only have one chance to get it right before AI becomes irreversibly entangled in our society.

    Your Undivided Attention is produced by the Center for Humane Technology. Subscribe to our Substack and follow us on X: @HumaneTech_.

    Clarification: Rob referenced EPA regulations that have recently been put in place requiring testing on new chemicals before they are approved. The EPA under the Trump administration has announced its intent to roll back this review process.

    RECOMMENDED MEDIA

    “Exposure” by Robert Bilott

    ProPublica’s investigation into 3M’s production of PFAS

    The Facebook study cited by Tristan

    More information on the Exxon Valdez oil spill

    The EPA’s PFAS drinking water standards

    RECOMMENDED YUA EPISODES

    Weaponizing Uncertainty: How Tech is Recycling Big Tobacco’s Playbook

    AI Is Moving Fast. We Need Laws that Will Too.

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

    Big Food, Big Tech and Big AI with Michael Moss

    1 hr and 5 mins
  • Weaponizing Uncertainty: How Tech is Recycling Big Tobacco’s Playbook
    Mar 20 2025

    One of the hardest parts about being human today is navigating uncertainty. When we see experts battling in public and emotions running high, it's easy to doubt what we once felt certain about. This uncertainty isn't always accidental—it's often strategically manufactured.

    Historian Naomi Oreskes, author of "Merchants of Doubt," reveals how industries from tobacco to fossil fuels have deployed a calculated playbook to create uncertainty about their products' harms. These campaigns have delayed regulation and protected profits by exploiting how we process information.

    In this episode, Oreskes breaks down that playbook page by page while offering practical ways to build resistance against these tactics. As AI rapidly transforms our world, learning to distinguish between genuine scientific uncertainty and manufactured doubt has never been more critical.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_

    RECOMMENDED MEDIA

    “Merchants of Doubt” by Naomi Oreskes and Eric Conway

    "The Big Myth” by Naomi Oreskes and Eric Conway

    "Silent Spring” by Rachel Carson

    "The Jungle” by Upton Sinclair

    Further reading on the clash between Galileo and the Pope

    Further reading on the Montreal Protocol

    RECOMMENDED YUA EPISODES

    Laughing at Power: A Troublemaker’s Guide to Changing Tech

    AI Is Moving Fast. We Need Laws that Will Too.

    Tech's Big Money Campaign is Getting Pushback with Margaret O'Mara and Brody Mullins

    Former OpenAI Engineer William Saunders on Silence, Safety, and the Right to Warn

    CORRECTIONS:

    • Naomi incorrectly referenced the Global Climate Research Program established under President Bush Sr. The correct name is the U.S. Global Change Research Program.
    • Naomi referenced U.S. agencies that have been created with sunset clauses. While several statutes have been created with sunset clauses, no federal agency has been.

    CLARIFICATION: Naomi referenced the U.S. automobile industry claiming that they would be “destroyed” by seatbelt regulation. We couldn’t verify this specific language but it is consistent with the anti-regulatory stance of that industry toward seatbelt laws.

    51 mins
  • The Man Who Predicted the Downfall of Thinking
    Mar 6 2025

    Few thinkers were as prescient about the role technology would play in our society as the late, great Neil Postman. Forty years ago, Postman warned about all the ways modern communication technology was fragmenting our attention, overwhelming us into apathy, and creating a society obsessed with image and entertainment. He warned that “we are a people on the verge of amusing ourselves to death.” Though he was writing mostly about TV, Postman’s insights feel eerily prophetic in our age of smartphones, social media, and AI.

    In this episode, Tristan explores Postman's thinking with Sean Illing, host of Vox's The Gray Area podcast, and Professor Lance Strate, Postman's former student. They unpack how our media environments fundamentally reshape how we think, relate, and participate in democracy, from the attention-fragmenting effects of social media to the looming transformations promised by AI. This conversation offers essential tools that can help us navigate these challenges while preserving what makes us human.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_

    RECOMMENDED MEDIA

    “Amusing Ourselves to Death” by Neil Postman

    ”Technopoly” by Neil Postman

    A lecture from Postman where he outlines his seven questions for any new technology.

    Sean’s podcast “The Gray Area” from Vox

    Sean’s interview with Chris Hayes on “The Gray Area”

    "Amazing Ourselves to Death," by Professor Strate

    Further listening on Professor Strate's analysis of Postman.

    Further reading on mirror bacteria


    RECOMMENDED YUA EPISODES

    ‘A Turning Point in History’: Yuval Noah Harari on AI’s Cultural Takeover

    This Moment in AI: How We Got Here and Where We’re Going

    Decoding Our DNA: How AI Supercharges Medical Breakthroughs and Biological Threats with Kevin Esvelt

    Future-proofing Democracy In the Age of AI with Audrey Tang

    CORRECTION: Each debate between Lincoln and Douglas was 3 hours, not 6, and they took place in 1858, not 1862.

    59 mins