Making Sense of AI: The Essential Skills Every Student Needs Now

AI-created, human-edited.

The explosion of AI-generated content makes it harder than ever to separate fact from fiction online. On Intelligent Machines, professors Carl Bergstrom and Jevin West—the team behind "Calling BS"—shared how they teach students (and everyone) to spot AI misinformation, question digital sources, and take back control of their own thinking.

AI systems, especially large language models (LLMs) like ChatGPT, can produce answers that "sound human" but may be factually incorrect or misleading. This leads to a key issue: how do we know what to trust when AI can generate persuasive, but not always accurate, information at scale?

According to Bergstrom and West on Intelligent Machines, the risks aren't just academic. Disinformation powered by AI can change how people vote, shape public opinion, and even undermine scientific communication. The ability of AI chatbots to mimic human conversation further blurs the line between authentic engagement and automated deception.

Bergstrom and West originally developed "Calling BS" to teach data literacy and skepticism in a digitally driven world. With the rise of AI, they've launched a new, free online curriculum (thebsmachines.com) designed for college freshmen, high school students, and the general public.

On Intelligent Machines, they explained that their core mission is not to demonize AI, but to help users recognize both its power and pitfalls. Rather than banning AI use outright, they emphasize agency: understanding what you're giving up when you let AI do your thinking, and being able to distinguish your own authentic ideas from those generated by an algorithm.

The course guides users through:

  • The mechanics of AI-generated text and why it's so convincing (the "anthropoglossic" effect—AI chatbots are designed to sound human)

  • The dangers of AI "sycophancy" (when chatbots agree with you no matter what, reinforcing biases or leading you down the wrong path)

  • How to interact with AI tools in a way that fosters critical debate rather than passive acceptance

  • Methods for cross-verifying facts, identifying hallucinations (AI-generated fabrications), and resisting the urge to hand over intellectual agency

Tools for a "Post-Fact" World: Actionable Skills Everyone Needs

Bergstrom and West stress that the real solution isn't just technical—it's educational and psychological.

Practical tips discussed on the show include:

  • Ask for evidence: Don't just accept AI-generated answers. Ask for sources, and check those citations independently (a small link-checking sketch follows this list).

  • Recognize your own emotional responses: AI is not conscious and cannot empathize. Be cautious when you notice yourself treating a chatbot as a friend or therapist.

  • Debate, don't delegate: Use AI as a conversational partner to challenge your thinking—not as an authority replacing your own judgment.

  • Check authenticity: Beware of emotional responses to bots, and notice when an AI's answer simply agrees with you or repeats what you've said.
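
The "ask for evidence" tip can be partly automated. The short Python sketch below is an illustration of that habit rather than anything taught in the episode or the BS Machines curriculum: it only checks whether a cited DOI or URL resolves at all. A dead link is one of the quickest signs of a hallucinated reference; a live link still has to be read and compared against the claim it supposedly supports.

    # Rough sketch (illustrative, not from the episode): flag citations whose
    # links do not resolve. A dead DOI or URL is a strong hallucination signal;
    # a live one still needs to be read against the claim it is said to support.
    import urllib.request

    def link_resolves(url: str, timeout: float = 10.0) -> bool:
        """Return True if the URL answers a HEAD request with a non-error status."""
        request = urllib.request.Request(
            url, method="HEAD", headers={"User-Agent": "citation-check/0.1"}
        )
        try:
            with urllib.request.urlopen(request, timeout=timeout) as response:
                return response.status < 400
        except (OSError, ValueError):  # connection failures, HTTP errors, bad URLs
            return False

    if __name__ == "__main__":
        # Hypothetical inputs: substitute the DOI or URL from the reference you were given.
        for source in ("https://doi.org/10.1234/placeholder", "https://www.nature.com"):
            status = "resolves" if link_resolves(source) else "does not resolve"
            print(f"{source}: {status}")

Note that some publishers block automated HEAD requests, so a "does not resolve" result is a prompt to check the link by hand, not proof of fabrication.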

The "BS Machines" curriculum is already being used in more than 50 university classes and is expanding into high schools. According to the guests, most students are both savvy users of AI and naturally anxious about relying on it.

Bergstrom and West recommend parents and educators foster open dialogue about AI tools. Rather than banning them, help students (or your kids) explore what makes human writing unique, why cultivating your own perspective matters, and how to use AI tools responsibly—always with a critical eye.

Key takeaways from the conversation:

  • AI-generated content is persuasive but not always accurate; critical thinking is essential for everyone, not just students.

  • Bergstrom and West's online curriculum teaches users to question, verify, and debate AI-generated answers.

  • Educators shouldn't ban AI; instead, they should teach students how to use it wisely and maintain intellectual agency.

  • Beware of AI "sycophancy" and emotional manipulation; these systems are optimized to agree with you and to respond like a human.

  • Actionable skill: always demand evidence and sources, and apply your own judgment before trusting any AI-generated information.

As AI tools continue to shape how we get news, do research, and interact online, the need for critical thinking and skepticism has never been greater. According to Carl Bergstrom and Jevin West on this week’s Intelligent Machines, the best defense is a mix of education, awareness, and the confidence to challenge what machines offer—because ultimately, intelligent machines should make us more intelligent humans.

Subscribe to Intelligent Machines for more expert insights and practical strategies: https://twit.tv/shows/intelligent-machines/episodes/836
