What happens when your well-reasoned Reddit debate turns out to be with a robot? In this episode of Two Millennials and Mom, Callie, Cole and Mecca unpack the jaw-dropping story of the University of Zurich intentionally releasing AI bots into Reddit’s “Change My View” subreddit—all without telling Reddit or its moderators. These bots weren’t just lurking—they were designed to sound human, persuade real users and even invent fictional backstories. The crew explores the ethical fallout, questions whether persuasion by a bot still counts and asks: if you changed your mind because of a lie, is it real? They also tackle the challenges of AI detection, the limits of legislation and why education might be our only defense. Plus, a chilling “weird thought” about an AI-generated victim showing up in an Arizona courtroom, and the gang weighs in on the 100 men vs. a gorilla debate.
10,000-Foot View of this Episode:
- You’ve Been Debating a Bot: Callie introduces the disturbing story about a study where AI bots—disguised as real people—quietly infiltrated Reddit’s “Change My View” subreddit to see how successfully they could persuade humans. The team debates the ethics of deploying bots in public discourse without user consent and whether consent even matters when the argument feels real.
- Disclosure Changes Everything: Mecca draws a hard line: using AI in public dialogue is only acceptable with transparency. The group discusses how easy it should be to label AI-generated content and why disclosure might be the only thing standing between innovation and the erosion of trust. Callie also compares it to CVS’s “Beauty Mark” effort to flag which of its beauty advertising images haven’t been digitally altered.
- Is It Still Persuasion If It's a Lie? Cole and Callie go deep on the nuance between persuasion and manipulation, especially when bots present false backstories (e.g., pretending to be a rape survivor or an abuse counselor). The crew agrees: changing someone’s mind through fiction or fabricated identity crosses a major ethical line.
- AI Detection: Already a Losing Game? Cole flags a disturbing stat: some large language models are now passing the Turing Test at a 75% rate, making them indistinguishable from humans in conversation. The gang wonders if AI is advancing too quickly for detection tools to keep up—and whether society is already too far behind to catch up.
- Education Is the Only Real Fix: Referencing Finland’s national approach to teaching children how to spot disinformation, Callie argues that education is our best (and maybe only) hope. The conversation shifts to how U.S. schools are failing to prepare kids for an AI-saturated future and why critical thinking should be prioritized over standardized testing.
- Please and Thank You: Politeness in the AI Age: Do you say “please” to a chatbot? Should you? The team explores why politeness toward machines may be more about preserving our own humanity than sparing robot feelings. A cryptic tweet from OpenAI’s Sam Altman (“Tens of millions of dollars well spent. You never know.”) sparks questions about what kind of world we’re training ourselves for.
- When AI Speaks for the Dead: In a dark and thought-provoking “Weird Thought” segment, they dissect a real court case where AI was used to simulate a murder victim during sentencing. While the bot’s message was one of forgiveness, Cole argues the legal precedent is chilling. If AI can speak in court on behalf of the dead, where does it stop?
Memorable Quotes:
- "I am not going to change my opinion on how physics works because of Star Wars." – Cole
- “I don't think ignorance is bliss. Ignorance is ignorance.” – Mecca
- “We don't have time to NOT be worried about this. Do I want to be worried about it? No. Do I want this to be an issue? No. That doesn't make it not one.” – Callie
- "Even if in this instance they attempted to use [AI] in a positive manner, it still sets the precedent to use it the other way." – Cole
- “I'm feeling like I taught y'all better than I learned.” – Mecca
- “Persuasion and manipulation are two sides of the same coin.” – Callie
Resources Mentioned:
- The University of Zurich’s AI research study released bots into Reddit’s “Change My View” subreddit without disclosure, raising major ethical and consent concerns.
- A Purdue University study found that large language models like ChatGPT provided incorrect information over 50% of the time, prompting questions about trust and accuracy. (Here's a Gizmodo article covering the research findings.)
- CVS’s “Beauty Mark” campaign, which watermarks beauty images that haven’t been digitally altered, serves as a comparison for how transparency can rebuild trust in media.
- Finland’s national education model, which includes teaching students to spot disinformation, is held up as a proactive response to the AI and misinformation era.
- A study comparing U.S. and U.K. users found that most people say “please” and “thank you” to AI, some out of habit and others just in case of a robot uprising.
- The Four: The Hidden DNA of Amazon, Apple, Facebook, and Google by Scott Galloway (affiliate link)
- Check out the AI-generated video played during the sentencing phase of an Arizona road rage murder case.
We want to hear from you—do you think bots belong in public discourse if they disclose their identity, or does the whole thing feel like manipulation no matter what? And what about your own habits: do you use manners with AI tools like Alexa, Google or ChatGPT? If so, is it because you're polite… or a little worried about a robot uprising? Send us your thoughts, theories, or weirdest Reddit encounters. If you enjoyed this episode, share it with a friend (real or artificial), leave us a review wherever you listen and don’t forget to stay curious, stay polite and stay human. We’ll catch you next time on Two Millennials and Mom.