Episodes

  • The AI Risks (Almost) No One is Talking About, w/ Tech Critic Sara M. Watson
    Sep 19 2024

    In this episode of AI-Curious, Jeff Wilser dives deep into AI critiques with Sara Watson, a leading technology critic. The conversation explores the often overlooked risks of AI, moving beyond the usual dystopian scenarios of robots taking over and instead addressing the more subtle, real-world implications of AI.

    Sara is the former lead analyst at Forrester, the co-founder of A People’s History of Tech, and has written for outlets such as The New Yorker, The Atlantic, and Wired. She brings a thoughtful and nuanced perspective to the current AI landscape. On this episode we cover AI’s environmental toll, the concentration of power, bias in AI training data, and the challenges of distinguishing truth in an era of deepfakes. If you’re curious about how technology shapes society and what AI means for our future, this is an episode you don’t want to miss.

    Key Topics:

    • [00:02:00] The Role of Criticism: It’s Not About Negativity or Nihilism

    • [00:04:00] Being “Cautiously Optimistic” and Understanding the Hype Cycle

    • [00:05:00] Investigating AI’s Claims in a Critical Moment of Hype

    • [00:06:00] The Importance of Asking the Right Questions

    • [00:07:00] Long-Term Existential AI Risks vs. Present Harms

    • [00:08:00] Bias in AI: The Historical Injustices Baked into Data Sets

    • [00:11:00] Environmental Costs of AI and the Hidden Resource Strain

    • [00:12:00] The Concentration of Power: Who Owns and Controls AI?

    • [00:14:00] The Ethical Challenges of AI Development

    • [00:15:00] How Bias in Training Data Reinforces Systemic Injustice

    • [00:16:00] Human Choices Behind Algorithms: The Myth of Objectivity

    • [00:19:00] Deepfakes and the Erosion of Trust in Visual Media

    • [00:22:00] The Weaponization of AI to Cast Doubt on Authentic Media

    • [00:24:00] Sci-Fi’s Role in AI Critique: From Killer Robots to Governance

    • [00:25:00] The Fragility of the Systems Built on AI: A Conversation on Tech Infrastructure

    • [00:26:00] Tech Criticism Today: How It’s Evolved Over the Last Decade

    • [00:27:00] What Are We Optimizing For? How This Question Shapes AI’s Development

    • [00:29:00] Regulation, Children, and the Role of Government in Tech

    • [00:31:00] Progress for Whom? Criticism as Part of AI’s Development

    • [00:34:00] Future Justice and What AI Should Be Used For

    • [00:35:00] Creativity, Human Time, and AI’s Role in Freeing Us for Innovation

    • [00:36:00] What Sara Means by Being a “Radical Futurist”

    • [00:38:00] AI and Capitalism: Can We Imagine a Different System?


    Key Topics Covered:
    • The role of AI criticism in driving thoughtful progress
    • The environmental impact of AI and its hidden costs
    • Bias in AI algorithms and its real-world consequences
    • The risks of AI misinformation and deepfakes
    • How concentration of power in AI development affects us all


    Sara M. Watson
    http://www.saramwatson.com/

    Toward a Constructive Technology Criticism
    https://www.cjr.org/tow_center_reports/constructive_technology_criticism.php

    43 mins
  • OpenAI’s New o1 Model: Game-Changer or Overhyped? (And 3 quick things.)
    Sep 16 2024

    In this episode of AI-Curious, here's your quick, bite-sized weekly roundup of the latest developments in artificial intelligence, starting with OpenAI’s newest model, the bizarrely-named “o1.” Is it a groundbreaking advancement or just another step in AI’s evolution? We explore both sides, offering a balanced look at the bullish and skeptical takes on the model.

    Key Topics:

    [00:00] - Introduction to OpenAI’s o1 Model

    A quick primer on OpenAI’s “o1” model, highlighting its unique approach to problem-solving and how it compares to previous models like ChatGPT.

    [01:00] - Bullish Take on o1’s Performance

    How o1 outperformed ChatGPT-4 in key benchmarks, including PhD-level science questions, coding, and even scoring an IQ of 120. There’s also the example where o1 developed a 200-year plan to terraform Mars.

    [02:00] - The Skeptical Take

    On the flip side, we explore skepticism from OpenAI CEO Sam Altman, who admits the model is still flawed. AI expert Gary Marcus also critiques o1’s limitations, noting its failure in tasks like playing chess.

    [03:00] - Ethan Mollick’s Middle Ground

    We take a closer look at Ethan Mollick’s perspective, where he acknowledges o1’s strength in handling complex work tasks but notes that most people might not find it useful for everyday applications.

    [04:00] - 3 Quick Things:

    [04:00] - The White House’s new AI infrastructure task force and its role in managing AI’s growing energy consumption, which could account for 17% of U.S. electricity demand by 2030.

    [05:00] - Fei-Fei Li’s $230 million spatial AI startup, World Labs, and how it’s contributing to the future of humanoid robots.

    [06:00] - The legacy of James Earl Jones and how AI voice cloning could preserve iconic voices for the future.

    Mars terraform AI prompt:
    https://docs.google.com/document/d/1JTF411tMmicqEe6HJ9OnhRNswTq8qIjcZUzh_GhppPM/edit

    Futurism's article on James Earl Jones:
    https://futurism.com/the-byte/james-earl-jones-voice-rights-ai

    Jeff Wilser on Twitter/X:
    https://x.com/jeffwilser

    AI-Curious on YouTube:
    https://www.youtube.com/playlist?list=PLT9Zee6EXhjoVyGT7ihvv8EeaN1BQLExN

    8 mins
  • AI and Race, w/ Dr. Broderick Turner & Angela Yi
    Sep 12 2024

    Two researchers from Virginia Tech recently published a study: “Representations and Consequences of Race in AI Systems.”

    I'm thrilled that the co-authors of the study, Dr. Broderick Turner (Assistant Professor of Marketing at Virginia Tech) and Angela Yi (Doctoral Candidate at Virginia Tech) could join AI-Curious to discuss the takeaways from their study, and more broadly, help us unpack the complicated intersection between AI and race. When should AI models include race? When should they *ignore* race?

    Or more to the point, how should AI developers be thinking about race?

    Dr. Turner and Angela get into all of this, offering concrete advice for those building AI.

    https://news.vt.edu/articles/2024/08/pamplin-race-ai.html

    https://www.sciencedirect.com/science/article/abs/pii/S2352250X24000447?dgcid=author

    https://www.jointhetrap.com/

    44 mins
  • Robot Butlers are (Sort of) Here, and Apple AI Looms Large
    Sep 11 2024

    Welcome to the debut episode of The Weekly AI Edge!

    This is part of the AI-Curious podcast. Each week you'll be getting short, punchy roundups of key AI news. We'll get you in and out.

    This week:

    A company called Weave Robotics announced that it's now taking pre-orders for humanoid robots that can do household chores -- folding laundry, feeding your dog, and more. This is just the latest development in humanoid robots, coming fast on the heels of updates from companies ranging from OpenAI to Tesla.

    And 3 quick things:
    1. Apple gets ready to launch Apple Intelligence... and why this is a sneaky-big deal.
    2. Trump gets AI-pic treatment with kittens and ducks...in more AI-fueled election misinformation.
    3. OpenAI ready to release its biggest update since ChatGPT4?

    Enjoy the ep!

    https://www.weaverobots.com/

    https://www.theinformation.com/articles/new-details-on-openais-strawberry-apples-siri-makeover-larry-ellison-doubles-down-on-data-centers

    10 mins
  • Trailer for AI-Curious: The Weekly Edge
    Sep 5 2024

    Fun announcement!

    Starting next week, AI-Curious will be serving up two episodes each week:

    1) One long-form interview with thought leaders in the space (what we've been doing for the past year), and..

    2) "The Edge," a weekly roundup of AI news. This will be short, punchy, and fun(ish). Each week we'll have one headline topic, and then three quick things. That's it. No muss, no fuss. For busy people who want to stay abreast of key developments in the AI space, but don't want to devote a ton of time to do it.

    Gonna be a fun ride!

    3 mins
  • AI in Hollywood (part ii), with Toonstar's John Attanasio and Luisa Huang
    Aug 30 2024

    Happy Anniversary!

    Exactly one year ago, we launched AI-Curious. We've published an episode every week since, rain or shine, work or holiday, human or AI. We've spoken with CEOs, philosophers, artists, inventors -- but it all started with a conversation with two Hollywood producers: John Attanasio and Luisa Huang, the cofounders of Toonstar studios.

    Toonstar has been using AI for years to make technology -- particularly animation -- easier, cheaper, and faster for creators. This makes it more inclusive. And in the past year they have been VERY busy. So for our one-year anniversary, I wanted to invite John and Luisa back on the pod as our first-ever repeat guests.

    Since we spoke, Toonstar launched the hit series StEvEn & Parker, which now draws over 30 million weekly YouTube views. AI is a big part of the story.

    We get into the current vibes of AI and Hollywood (the good, the bad, and the contrarian), how projects like StEvEn & Parker help "expand the pie," and much of the nuance of how the industry is thinking about artificial intelligence.

    Enjoy the episode and Happy Anniversary.

    StEvEn & Parker
    https://www.youtube.com/channel/UCT62Tu2qTUQUsHfo4E0X9_Q

    Deadline's story on StEvEn & Parker
    https://deadline.com/2024/03/steven-parker-random-house-toonstar-parker-james-deal-1235848736/

    Toonstar
    https://www.toonstar.com

    34 mins
  • The Case for Decentralized AI, w/ Aethir Cofounder Mark Rydon
    Aug 23 2024

    AI, as even its supporters acknowledge, has many problems today. Just a few: concerns over data privacy, creators not being compensated for supplying the data that trains AI models, lack of trust in Big Tech AI, a shortage of the high-powered chips needed to train AI, debates over censorship, and on and on.

    Could Decentralized AI be the answer?

    I'm joined today by Mark Rydon, cofounder of Aethir, one of the leading companies in Decentralized AI. Aethir is working, essentially, to make it easier for more people to gain access to the GPUs needed to train AI -- wherever they are on the globe.

    Mark and I walk through the current challenges of "centralized" (or Big Tech) AI, consider the merits of a Decentralized approach, take a sober look at the challenges that need to be overcome, and then hear about how Aethir is working to crack the problem.

    Fun episode - enjoy!

    Aethir:
    https://aethir.com/

    Mark Rydon on Twitter/X:
    https://x.com/MRRydon

    Mark Rydon on LinkedIn:
    https://www.linkedin.com/in/markrydon/?originalSubdomain=sg

    50 mins
  • AI, Transhumanism, and the Quest for Longevity, w/ Futurist Dr. Natasha Vita-More
    Aug 15 2024

    Can AI help us live forever?

    Okay, that's the click-baitey version of the question, but the real version is far more nuanced. Our guest today is a leader of the transhumanism movement -- Dr. Natasha Vita-More -- and she explains what it is, what it's *not*, how AI is involved, and why we are likely to see a dramatic improvement to our lifespan...

    Dr. Vita-More dispels a common misconception: that transhumanism is just about turning us into cyborgs. The roots of transhumanism, at heart, go back to the birth of philosophy -- she's as inspired by Plato as she is by the latest tech.

    We get into all of this, from advancements in data analysis to nano-robots that can repair our cells.

    Super fun conversation - hope you enjoy.

    Dr. Natasha Vita-More on Wikipedia:
    https://en.wikipedia.org/wiki/Natasha_Vita-More

    https://www.natashavita-more.com/

    https://x.com/NatashaVitaMore


    44 mins