AI-Curious with Jeff Wilser

By: Jeff Wilser
  • Summary

  • A podcast that explores the good, the bad, and the creepy of artificial intelligence. Weekly longform conversations with key players in the space, ranging from CEOs to artists to philosophers. Exploring the role of AI in film, health care, business, law, therapy, politics, and everything from religion to war.

    Featured by Inc. Magazine as one of "4 Ways to Get AI Savvy in 2024": "Host Jeff Wilser [gives] you a more holistic understanding of AI--such as the moral implications of using it--and his conversations might even spark novel ideas for how you can best use AI in your business."

    © 2024 AI-Curious with Jeff Wilser
Episodes
  • The AI Risks (Almost) No One is Talking About, w/ Tech Critic Sara M. Watson
    Sep 19 2024

    In this episode of AI-Curious, Jeff Wilser dives deep into AI critiques with Sara Watson, a leading technology critic. The conversation explores the often overlooked risks of AI, moving beyond the usual dystopian scenarios of robots taking over and instead addressing the more subtle, real-world implications of AI.

    Sara is the former lead analyst at Forrester, the co-founder of A People’s History of Tech, and has written for outlets such as The New Yorker, The Atlantic, and Wired. She brings a thoughtful and nuanced perspective to the current AI landscape. On this episode we cover AI’s environmental toll, the concentration of power, bias in AI training data, and the challenges of distinguishing truth in an era of deepfakes. If you’re curious about how technology shapes society and what AI means for our future, this is an episode you don’t want to miss.

    Key Topics:

    • [00:02:00] The Role of Criticism: It’s Not About Negativity or Nihilism

    • [00:04:00] Being “Cautiously Optimistic” and Understanding the Hype Cycle

    • [00:05:00] Investigating AI’s Claims in a Critical Moment of Hype

    • [00:06:00] The Importance of Asking the Right Questions

    • [00:07:00] Long-Term Existential AI Risks vs. Present Harms

    • [00:08:00] Bias in AI: The Historical Injustices Baked into Data Sets

    • [00:11:00] Environmental Costs of AI and the Hidden Resource Strain

    • [00:12:00] The Concentration of Power: Who Owns and Controls AI?

    • [00:14:00] The Ethical Challenges of AI Development

    • [00:15:00] How Bias in Training Data Reinforces Systemic Injustice

    • [00:16:00] Human Choices Behind Algorithms: The Myth of Objectivity

    • [00:19:00] Deepfakes and the Erosion of Trust in Visual Media

    • [00:22:00] The Weaponization of AI to Cast Doubt on Authentic Media

    • [00:24:00] Sci-Fi’s Role in AI Critique: From Killer Robots to Governance

    • [00:25:00] The Fragility of the Systems Built on AI: A Conversation on Tech Infrastructure

    • [00:26:00] Tech Criticism Today: How It’s Evolved Over the Last Decade

    • [00:27:00] What Are We Optimizing For? How This Question Shapes AI’s Development

    • [00:29:00] Regulation, Children, and the Role of Government in Tech

    • [00:31:00] Progress for Whom? Criticism as Part of AI’s Development

    • [00:34:00] Future Justice and What AI Should Be Used For

    • [00:35:00] Creativity, Human Time, and AI’s Role in Freeing Us for Innovation

    • [00:36:00] What Sara Means by Being a “Radical Futurist”

    • [00:38:00] AI and Capitalism: Can We Imagine a Different System?


    Key Topics Covered:
    • The role of AI criticism in driving thoughtful progress
    • The environmental impact of AI and its hidden costs
    • Bias in AI algorithms and its real-world consequences
    • The risks of AI misinformation and deepfakes
    • How concentration of power in AI development affects us all


    Sara M. Watson
    http://www.saramwatson.com/

    Toward a Constructive Technology Criticism
    https://www.cjr.org/tow_center_reports/constructive_technology_criticism.php

    43 mins
  • OpenAI’s New o1 Model: Game-Changer or Overhyped? (And 3 quick things.)
    Sep 16 2024

    In this episode of AI-Curious, here's your quick, bite-sized weekly roundup of the latest developments in artificial intelligence, starting with OpenAI’s newest model, the bizarrely-named “o1.” Is it a groundbreaking advancement or just another step in AI’s evolution? We explore both sides, offering a balanced look at the bullish and skeptical takes on the model.

    Key Topics:

    [00:00] - Introduction to OpenAI’s o1 Model

    A quick primer on OpenAI’s “o1” model, highlighting its unique approach to problem-solving and how it compares to previous models like ChatGPT.

    [01:00] - Bullish Take on o1’s Performance

    How o1 outperformed ChatGPT-4 in key benchmarks, including PhD-level science questions and coding, and even scored an IQ of 120. There’s also the example where o1 developed a 200-year plan to terraform Mars.

    [02:00] - The skeptical take

    On the flip side, we explore skepticism from OpenAI CEO Sam Altman, who admits the model is still flawed. AI expert Gary Marcus also critiques o1’s limitations, noting its failure in tasks like playing chess.

    [03:00] - Ethan Mollick’s Middle Ground

    We take a closer look at Ethan Mollick’s perspective, where he acknowledges o1’s strength in handling complex work tasks but notes that most people might not find it useful for everyday applications.

    [04:00] - 3 Quick Things:

    [04:00] - The White House’s new AI infrastructure task force and its role in managing AI’s growing energy consumption, which could account for 17% of U.S. electricity demand by 2030.

    [05:00] - Fei-Fei Li’s $230 million spatial AI startup, World Labs, and how it’s contributing to the future of humanoid robots.

    [06:00] - The legacy of James Earl Jones and how AI voice cloning could preserve iconic voices for the future.

    Mars terraform AI prompt:
    https://docs.google.com/document/d/1JTF411tMmicqEe6HJ9OnhRNswTq8qIjcZUzh_GhppPM/edit

    Futurism's article on James Earl Jones:
    https://futurism.com/the-byte/james-earl-jones-voice-rights-ai

    Jeff Wilser on Twitter/X:
    https://x.com/jeffwilser

    AI-Curious on YouTube:
    https://www.youtube.com/playlist?list=PLT9Zee6EXhjoVyGT7ihvv8EeaN1BQLExN

    8 mins
  • AI and Race, w/ Dr. Broderick Turner & Angela Yi
    Sep 12 2024

    Two researchers from Virginia Tech recently published a study: “Representations and Consequences of Race in AI Systems.”

    I'm thrilled that the co-authors of the study, Dr. Broderick Turner (Assistant Professor of Marketing at Virginia Tech) and Angela Yi (Doctoral Candidate at Virginia Tech) could join AI-Curious to discuss the takeaways from their study, and more broadly, help us unpack the complicated intersection between AI and race. When should AI models include race? When should they *ignore* race?

    Or more to the point, how should AI developers be thinking about race?

    Dr. Turner and Angela get into all of this, offering concrete advice for those building AI.

    https://news.vt.edu/articles/2024/08/pamplin-race-ai.html

    https://www.sciencedirect.com/science/article/abs/pii/S2352250X24000447?dgcid=author

    https://www.jointhetrap.com/

    44 mins
