In three low-tech, high-engagement activities—storytelling, role-play, and Scratch—students learn how algorithms work, how data carries bias, and how sensors drive action, while simultaneously growing academic language.
AI already shapes what students see, learn, and believe. For most, the systems are invisible—feeds rank content, recommenders suggest videos, and chatbots answer questions without revealing why.
Those “why” questions matter. And for multilingual learners—students navigating school while learning English—they matter even more.
AI systems are trained on non-neutral datasets that reflect dominant patterns; performance can vary across languages, dialects, and cultures. Without guidance, multilingual learners may be positioned as passive consumers of systems that don’t always represent them—or worse, may assume these tools are “for” more fluent peers.
Yet the potential is enormous. When students make and interrogate AI—telling stories about it, acting out its decisions, or programming simple pipelines—they see how rules and data shape outcomes, and they practice purposeful language at the same time. AI literacy becomes a language-rich, collaborative bridge between home and school, first language and English, identity and innovation. Most importantly, equity becomes tangible: a learner explaining how an algorithm works is claiming ownership over the tools that shape daily life.
AI literacy isn’t just spotting that “something uses AI.” It’s the ability to ask and answer: What is this system? How does it work? Why did it choose that? What might it miss—and who gets to shape it?
Multilingual learners bring powerful lenses. Many use translators, voice assistants, and captions daily—experiencing first-hand when technology includes or misunderstands. That lived experience fuels critical questions and authentic language use.
Equity goes beyond access to devices. It’s about participation and voice: Who asks the hard questions? Who feels invited to discuss fairness, bias, and impact? Done well, AI literacy doubles as language development—concepts like decision trees, patterns, training data, and feedback loops naturally introduce precise vocabulary and complex syntax. When learners translanguage—brainstorming in one language, drafting in another, clarifying in both—they do deeper cognitive work, not “use a crutch.”
Working with multilingual middle schoolers in a summer program, the UC Irvine team explored how to introduce core AI ideas in ways that are accessible, engaging, and language-forward. Rather than a rigid curriculum, they surfaced three flexible strategies—storytelling, role-play, and basic programming—that map to widely cited AI-literacy competencies (e.g., decision-making, programmability, learning from data, sensors, action–reaction).
Where the pillars came from. The team asked: Which competencies matter for middle-grade learners? And how can those be taught in ways that scaffold language growth (multimodality, translanguaging, conversational-agent examples)? From that synthesis emerged three high-leverage strategies designed for low-tech classrooms and libraries.
AI focus: algorithms & decision trees
Language focus: conditionals (if/then), narrative sequencing
Students examined how a voice assistant might “wake,” then authored choose-your-own-adventure stories with two-layer decision trees. Writing frames like “If the sensor hears ‘Hey Siri,’ then activate” helped students internalize both algorithmic logic and conditional syntax in English and the home language.
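The students’ two-layer tree can be sketched in a few lines of Python. The wake phrase comes from the activity itself; the function name, the second condition (volume), and the action strings are illustrative, not a real assistant’s API:

```python
# A two-layer decision tree for a "wake word" story, mirroring the writing
# frame "If the sensor hears 'Hey Siri', then activate."
# All names here are illustrative, not a real assistant API.

def wake_word_tree(heard_phrase: str, volume: str) -> str:
    # Root node: did the sensor hear the wake phrase?
    if heard_phrase == "Hey Siri":
        # Second layer: branch again on another condition.
        if volume == "loud":
            return "activate and answer aloud"
        return "activate and answer quietly"
    # "No" branch from the root.
    return "keep listening"

print(wake_word_tree("Hey Siri", "loud"))      # activate and answer aloud
print(wake_word_tree("good morning", "loud"))  # keep listening
```

Each `if` is a node and each `return` a leaf, so students can trace their paper storyboards line by line.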
Reported observations from the UCI project included high engagement (every student produced a branching story), natural uptake of terms like node, branch, root, and early critical questions (“What happens when the AI mis-hears?”).
AI focus: training data, accuracy, iterative improvement
Language focus: command–response pairs, labels, metacognitive talk
In pairs, one student played the “trained agent,” the other a tester feeding prompts. Competitive scoring surfaced misclassifications and sparked analysis: Why was the response wrong? Which data were missing?
Reported observations noted sustained talk (often bilingually) about prompts, training, testing, and accuracy; embodied experience helped cement that systems learn only from the data humans provide.
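The role-play maps directly onto code: the “trained agent” can only answer from the command–response pairs it was given. A minimal sketch, with invented training pairs, makes the tester’s discovery concrete:

```python
# The "trained agent" role-play as code: the agent answers only from the
# command-response pairs it was trained on. Pairs are invented examples.

training_data = {
    "turn on the light": "light on",
    "play music": "playing music",
}

def trained_agent(prompt: str) -> str:
    # The agent knows only what its training data contains; any other
    # phrasing of the same request is a misclassification.
    return training_data.get(prompt, "misclassified: no matching training example")

print(trained_agent("play music"))       # playing music
print(trained_agent("play some songs"))  # misclassified: no matching training example
```

The second call shows exactly what testers surface in the game: the data, not the intent, were missing.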
AI focus: perception via sensors, modality conversion, chained algorithms
Language focus: translanguaging, technical process narration
Using Scratch blocks, learners built a three-step pipeline: capture speech → translate → speak output. Questions about pronunciation and mistranslation linked dataset limitations to linguistic diversity. This illustrates the classic “sensor → processor → actuator” loop common to AI-adjacent systems.
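Outside Scratch, the same sensor → processor → actuator loop can be sketched with three plain functions. The real blocks call speech and translation services; here each stage is a stand-in stub so the chain itself stays visible:

```python
# The three-step pipeline as plain functions. Stage names and the tiny
# glossary are stand-ins, not real speech or translation services.

def capture_speech() -> str:
    # Sensor: stand-in for a microphone / speech-to-text block.
    return "hola"

def translate(text: str) -> str:
    # Processor: a tiny hand-written glossary; real systems learn
    # this mapping from data, which is why coverage gaps appear.
    glossary = {"hola": "hello"}
    return glossary.get(text, "[no translation in training data]")

def speak(text: str) -> None:
    # Actuator: stand-in for a text-to-speech block.
    print(text)

speak(translate(capture_speech()))  # prints: hello
```

Swapping the glossary’s contents (or leaving a word out) reproduces the mistranslation discussions students had about their own languages.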
Download or print classroom-ready templates for decision-tree storytelling, AI role-play, and Scratch STT→TTS builds—complete with sentence frames and assessment prompts.
Concept: Narrative provides a concrete way to experience rules and consequences—exactly how algorithms and decision trees behave.
Elementary application—“The Robot Who Couldn’t Decide”: Learners design rules (“If it sees a red block, it will…”) and test edge cases. Tools: storyboards, drawings, or a digital comic tool (e.g., StoryboardThat).
Middle/High—“My AI Recommender”: Students model what data a recommender collects, how it ranks, and where bias may creep in. ELA tie-ins via first-person narratives.
Why it works for MLs: familiar story structures, visual supports, conditional-logic vocabulary, and low-pressure discussions of fairness/bias.
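The “My AI Recommender” model can also run as a toy program. The posts, the collected “data,” and the ranking rule below are invented for the exercise, not drawn from any real platform:

```python
# A toy recommender: what data it collects (past taps per topic) and how
# it ranks. Posts and history are invented classroom examples.

posts = [
    {"title": "Soccer highlights", "topic": "sports"},
    {"title": "New album review", "topic": "music"},
    {"title": "Study tips", "topic": "school"},
]

# "Collected data": how often this user tapped each topic before.
user_history = {"sports": 5, "music": 1, "school": 0}

def rank(posts, history):
    # Rank by past clicks; topics the user never tapped sink to the
    # bottom. This is one place bias creeps in: new interests rarely
    # surface once the history locks in.
    return sorted(posts, key=lambda p: history.get(p["topic"], 0), reverse=True)

for post in rank(posts, user_history):
    print(post["title"])
```

Students can edit `user_history` and predict the new order before running it, which is exactly the “where does bias creep in” conversation.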
Concept: Acting out systems turns abstraction into social, talk-rich learning.
Elementary—“Sorting Robot Game”: One student is the AI; classmates are labeled “data.” The rule changes; so does the behavior.
Middle/High—“News Feed Algorithm”: Students sort posts for profiles (sports fan, music lover, activist) and witness how feeds shape different realities.
Why it works for MLs: kinesthetic learning, visual cues, collaborative negotiation of rules, and repeated practice with target vocabulary.
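The “News Feed Algorithm” role-play has an equally small code counterpart: the same posts, ranked for different profiles, produce different feeds. Profiles and interest scores below are invented:

```python
# The same posts ranked for different profiles yield different "realities."
# Profiles and scores are invented for the exercise.

posts = ["game recap", "concert clip", "climate protest", "tryout schedule"]

interests = {
    "sports fan": {"game recap": 3, "tryout schedule": 2},
    "activist":   {"climate protest": 3},
}

def feed_for(profile: str):
    scores = interests[profile]
    # Higher interest score ranks higher; unscored posts keep their order.
    return sorted(posts, key=lambda p: scores.get(p, 0), reverse=True)

print(feed_for("sports fan"))  # game recap ranks first
print(feed_for("activist"))    # climate protest ranks first
```

Printing both feeds side by side is the programmatic version of students witnessing how one algorithm shows classmates different worlds.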
Concept: Input → processing → output becomes tangible, even in block-based tools.
Elementary—“My Talking Machine”: Scratch or CS First chatbots with preset responses. Sentence frames: “If you say ___, the bot says ___.”
Middle/High—“AI Translator Bot”: Build a simple speech-to-text → translate → text-to-speech chain. Discuss accent handling, error sources, and dataset limits. Tools: Scratch, Google CS First, or beginner-friendly Python hosts.
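For classes moving from Scratch toward Python, the “My Talking Machine” chatbot is a natural first script. It mirrors the sentence frame “If you say ___, the bot says ___”; the phrases and replies are invented placeholders:

```python
# "My Talking Machine": a chatbot with preset responses, matching the
# frame "If you say ___, the bot says ___." Replies are invented.

responses = {
    "hello": "Hi there!",
    "how are you": "I am a program, but thanks for asking!",
}

def talking_machine(you_say: str) -> str:
    # If you say a known phrase, the bot says its preset reply;
    # otherwise it admits the phrase is not in its data yet.
    return responses.get(you_say.lower().strip(), "I do not know that phrase yet.")

print(talking_machine("Hello"))  # Hi there!
print(talking_machine("hola"))   # I do not know that phrase yet.
```

The fallback line doubles as a discussion prompt: whose greetings are in the bot’s data, and whose are not?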
If the sensor hears ___, then it ___. If it does not hear ___, it ___.
Root/Node (question): “Did the mic hear ‘Hey Siri’?”
Branches: Yes → Action: Wake | No → Action: Keep listening
If [condition], then [action]; else [action].
Root (Node): “Heard ‘Hey Siri’?”
├── Yes (Branch) → Action: Wake
└── No (Branch) → Action: Keep listening
“I answered ___ because the rule was ___.” “I misclassified because ___ was missing.”
Which pronunciations worked best? What mistakes appeared? What does that say about the data?
Teaching AI literacy through hands-on, linguistically inclusive activities is equity in action. Each small experiment—drawing a branching story, acting out a classifier, chaining Scratch blocks—moves learners from passive users to active shapers.
What is AI literacy? It’s the knowledge and skills to understand how AI systems work, question their outputs, and use them responsibly—covering concepts like decision-making, training data, and sensors.
How does this support language development? Activities require precise vocabulary, complex sentences (e.g., conditionals), and explanation—authentic contexts that promote academic language and translanguaging.
Do students need a device each? No. All three strategies have unplugged or low-tech variants; libraries and classrooms can run them with paper, labels, and a few devices for demonstrations.
How do the activities differ by grade band? Elementary: rules and consequences via stories and sorting. Middle: training/testing, accuracy, and multimodal pipelines (speech→text→speech) with reflection on bias.
How can teachers raise bias and fairness? Use concrete examples (misheard names, captioning errors). Ask who is represented in the data and how rules might change outcomes.