AI is already here. It’s curating what we read, filtering what we see, recommending what we buy—and in subtle ways, it’s shaping how we understand the world. For students, these systems are often invisible. They click, scroll, tap, and swipe—rarely stopping to wonder why a particular video showed up first, or why a chatbot responded the way it did.
But those questions matter. And for multilingual learners—students navigating school while learning English—those questions take on even more weight.
AI systems don’t speak neutral languages. They’re built on data drawn from dominant cultures, dominant perspectives, and dominant voices. Without guidance, multilingual learners may find themselves not only consuming AI but also being shaped by systems that don’t fully understand or represent them. Worse, they may feel those systems are off-limits—something only fluent English speakers or tech-savvy peers get to explore or question.
And yet, there’s enormous potential here.
When students engage with AI in hands-on, creative ways—telling stories about it, acting out its decisions, even programming it in simplified environments—they begin to peel back the curtain. They see how algorithms rely on rules, how data can carry hidden biases, how “smart” systems learn from people like them. In these moments, multilingual learners aren’t just catching up—they’re actively building critical thinking, expressive language, and digital fluency, all at once.
AI literacy becomes more than a technical skill. It becomes a language-rich, collaborative opportunity to bridge worlds: home and school, first language and English, identity and innovation. It makes room for play, inquiry, and voice.
And perhaps most importantly—it makes equity tangible.
There’s a quiet transformation that happens when a student who’s often underestimated starts explaining how an algorithm works. Or when a fifth grader, still building confidence in English, leads their group through a role-play of how an AI might recommend news stories. These aren’t just learning moments. They’re glimpses of a future where all students—not just the loudest or most fluent—feel ownership over the tools shaping their world.
That kind of future doesn’t build itself. It starts here—with small shifts in how we approach technology, language, and the learners in front of us.
Ask a group of students what artificial intelligence is, and the answers might range from “robots that talk” to “whatever’s behind TikTok.” And they wouldn’t be wrong—at least not entirely. But those answers barely scratch the surface. AI literacy isn’t just about recognizing that something uses AI. It’s about understanding what AI is, how it works, why it matters, and who gets to shape it.
That’s where literacy takes on new meaning.
At its heart, AI literacy is the ability to critically engage with systems that think, learn, and make decisions based on data. It includes understanding how algorithms function, where they get their information, how they interpret it, and how those interpretations can reflect—intentionally or not—human bias. It’s the skill of asking, “Why did the system choose that?” or “What might it be missing?”
And just like reading or math, AI literacy isn’t optional anymore.
But here’s the deeper layer: multilingual learners often approach AI systems with a different set of lenses. They may interact with translation tools daily. They might rely on voice assistants that don’t always understand their accent or phrasing. Their lived experience gives them an intuitive sense that technology is not neutral—it either includes or excludes, understands or misunderstands.
That insight is powerful.
Too often, discussions around tech equity focus solely on access—whether a student has a device or internet at home. But equity isn’t just about bandwidth. It’s also about participation. Who gets to ask the big questions? Who sees themselves as capable of understanding how systems work? And who feels invited into the conversation about ethics, fairness, and impact?
Multilingual learners deserve more than a seat at the table. They deserve to help set the agenda.
Engaging multilingual learners (MLs) in AI literacy offers a double benefit. First, it builds a critical future-ready skill. Second, it supports their language growth in ways that are natural, motivating, and context-rich. AI concepts—like decision trees, patterns, or feedback loops—offer perfect springboards for developing new vocabulary and complex syntax. When students describe how an algorithm “learns,” or explain why a system might be biased, they’re doing far more than tech analysis—they’re using language with purpose.
And when they do so in both English and their home languages? That’s translanguaging in action.
Rather than forcing students to compartmentalize their thinking—English over here, native language over there—AI literacy provides opportunities to bridge. A student might brainstorm in Spanish, draft explanations in English, and clarify their thinking in both. That blending isn’t a crutch; it’s a cognitive superpower.
What’s more, discussions around AI are deeply cultural. AI systems are trained on patterns found in data—often reflecting dominant cultural norms. Who gets represented? Whose language is understood by speech recognition? Whose values are encoded in content moderation algorithms?
When multilingual learners examine these questions, they’re not just decoding technology. They’re reflecting on identity, fairness, and voice.
That’s the real promise of AI literacy in diverse classrooms. It’s not just about preparing students to work with machines. It’s about preparing them to question the systems that shape their world—and to imagine something better.
When educators ask how to teach AI to younger students—especially those still learning English—the most common concern is, “Where do we even begin?” The subject feels technical, complex, even intimidating. But sometimes the most powerful teaching starts not with explanations, but with experiences.
That’s exactly the spirit behind the approach developed by researchers at the University of California, Irvine. Working with multilingual middle schoolers during a summer learning camp, the team set out to explore how AI literacy could be taught in ways that are not just accessible, but deeply engaging. This wasn’t a lab experiment—it was a pedagogical inquiry grounded in real classrooms, real students, and the real challenges educators face.
What they discovered wasn’t a new curriculum or set of rigid lesson plans. It was a flexible framework—three simple, creative strategies that use storytelling, role-play, and basic programming as vehicles for learning. These aren’t just “fun activities.” Each one is designed to help students understand how AI works, how it learns, and how it can go wrong.
The UC Irvine team began with two guiding questions:
Which AI-literacy competencies matter most for middle-grade learners?
They leaned on Long & Magerko’s framework (decision-making, programmability, learning from data, sensors, action–reaction).
How can those competencies be taught in ways that double as language scaffolds?
Drawing on bilingual–STEM research, they embedded translanguaging, multimodal expression (words, images, movement), and conversational-agent examples so that AI talk would add to English growth—not compete with it.
From that synthesis emerged three high-leverage strategies—each mapped to a specific AI competency set and deliberately low-tech so any classroom or library could adapt them.
Strategy 1: Storytelling with decision trees
AI focus: Algorithms & decision trees
Language focus: Conditionals (“if…then”), narrative sequencing
Students first probed how Siri decides to wake up, then built “choose-your-own-adventure” tales on two-layer decision trees. By writing sentences like “If the sensor hears ‘Hey Siri,’ then activate,” learners internalized both algorithmic logic and conditional syntax—in English and Spanish.
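For readers who want a text version of the same logic, here is a minimal sketch of a two-layer decision tree written as nested conditionals. It is an illustration only; the camp activity used paper decision trees, and the function name and story details below are invented for this example.

```python
# A minimal, invented sketch of a two-layer decision tree for a
# choose-your-own-adventure story. Each "if ... then" sentence the
# students write becomes one branch.

def adventure(heard_phrase, command):
    # Layer 1: does the assistant wake up at all?
    if heard_phrase == "hey siri":
        # Layer 2: what does it do with the command it hears next?
        if command == "play music":
            return "The assistant wakes up and starts a playlist."
        return "The assistant wakes up but says, 'Sorry, I didn't catch that.'"
    # No wake word: the sensor heard something else.
    return "The assistant stays asleep, and the story waits for another try."

# Tracing the branches aloud mirrors the sentence frame
# "If the sensor hears 'Hey Siri,' then activate."
print(adventure("hey siri", "play music"))
print(adventure("hola siri", "play music"))
```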
Observed outcomes
High engagement: every student produced a branching story they could explain aloud.
Vocabulary uptake: terms such as node, branch, root entered classroom talk naturally.
Early criticality: learners began asking, “What happens when the AI mis-hears?”—a doorway to bias discussions.
Strategy 2: Role-playing a conversational agent
AI focus: Training data, accuracy, iterative improvement
Language focus: Command-response pairs, data labels, metacognitive talk
Pairs created mini “databases” of prompt-and-answer pairs; one student “trained” as the conversational agent, another “tested” it. Competitive scoring on response accuracy turned abstraction into a game.
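The same role-play translates neatly into a short program, which can help teachers see what the students are embodying. The sketch below is my own illustration rather than the camp's actual code: a dictionary stands in for the trained "database" of prompt-and-answer pairs, a list of test prompts stands in for the tester, and a simple count stands in for the accuracy score.

```python
# An illustrative sketch (not from the study) of the training/testing
# role-play: the agent only "knows" the pairs its trainer supplied.

training_data = {
    "hello": "hi there!",
    "what's the weather?": "sunny and warm today.",
    "tell me a joke": "why did the robot cross the road? it was programmed to.",
}

test_round = [
    ("hello", "hi there!"),                        # seen during training
    ("what's the weather?", "sunny and warm today."),
    ("how are you?", "i'm fine, thanks."),         # never trained, so a likely miss
]

correct = 0
for prompt, expected in test_round:
    answer = training_data.get(prompt, "sorry, i wasn't trained on that.")
    if answer == expected:
        correct += 1
    print(f"Prompt: {prompt!r} -> Agent: {answer!r}")

print(f"Accuracy: {correct}/{len(test_round)}")
```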
Observed outcomes
Kinesthetic & social energy kept multilingual students talking—in both languages—about prompts, training, testing, and accuracy.
Misclassifications sparked laughter, then analysis: “Why did I give the wrong response? Did I forget that piece of data?”
Embodied experience cemented the idea that machines “learn” only what humans feed them.
Strategy 3: Programming a translation pipeline in Scratch
AI focus: Perception via sensors, modality conversion, chained algorithms
Language focus: Translanguaging, technical process narration
Using Scratch blocks, learners built a three-step pipeline: capture speech ➜ translate ➜ speak output in a new language. Questions about pronunciation glitches or mistranslations drew direct links between dataset limitations and linguistic diversity.
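A text-only companion to that pipeline might look like the sketch below. The helper functions are stand-ins invented for illustration, not real speech or translation APIs: capture is simulated with keyboard input, translation with a tiny phrasebook, and speech output with print. The tiny phrasebook is deliberate, since it mirrors how dataset limitations shape what the system can and cannot say.

```python
# An invented, text-only sketch of the three-step pipeline the students
# chained together in Scratch: sensor -> processor -> actuator.

PHRASEBOOK = {"hello": "hola", "thank you": "gracias", "good night": "buenas noches"}

def capture_speech():
    # Stand-in for a microphone or speech sensor.
    return input("Say something in English: ").strip().lower()

def translate(phrase):
    # Stand-in for a translation model; it only knows its tiny dataset.
    return PHRASEBOOK.get(phrase, "[no translation found]")

def speak(text):
    # Stand-in for text-to-speech output.
    print(f"(spoken aloud) {text}")

if __name__ == "__main__":
    phrase = capture_speech()        # sensor
    translated = translate(phrase)   # processor
    speak(translated)                # actuator
```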
Observed outcomes
Visible curiosity: students repeatedly tested phrases in multiple languages, comparing results.
Deeper linguistic pride: several imagined chatbots that could code-switch as fluidly as they do.
Concrete grasp of “sensor ➜ processor ➜ actuator” loops—core to any AI system.
Across all three strategies, the team noted several patterns.
Engagement through multimodality: drawings, gestures, L1+L2 dialogue, and code blocks operated side-by-side, letting every learner contribute.
Reinforcing language growth: explicit conditional structures and tech vocabulary appeared in both spoken and written work.
Equity in action: quieter students—often those less confident in English—took leadership roles when their translanguaging skills illuminated how AI “hears” multiple tongues.
Limitations acknowledged: The project was a design narrative, not a controlled study; still, observational evidence suggests a mutually beneficial link between AI-literacy growth and second-language development, warranting future empirical research.
These strategies meet students where they are. None assumes advanced English proficiency or prior technical knowledge. Instead, each builds language and literacy alongside computational thinking, centering curiosity over correctness and creativity over complexity.
In practice, these strategies invite multilingual learners to do more than just decode AI—they get to imagine it, perform it, and even build it. That’s a powerful shift. Because when students understand how AI works, they’re more likely to ask whether it’s working for them.
And those are the kinds of questions we want every learner—especially those whose voices are too often overlooked—to feel confident asking.
Bringing AI literacy into the classroom doesn’t require a high-tech lab or a computer science degree. What it does require is a mindset shift: from explaining technology to inviting students to explore it, question it, and simulate it in ways that are creative, collaborative, and language-rich.
These three strategies—storytelling, role-playing, and programming—are flexible by design. Each one introduces foundational AI concepts while supporting language development, critical thinking, and student voice. Whether you’re teaching in a tech-equipped classroom, a modest school library, or a pop-up makerspace, these ideas can meet you where you are.
Strategy 1: Storytelling
Concept:
Storytelling lets students make sense of algorithms and decision trees through narrative. By casting AI as a character—or as the invisible force behind a character’s choices—students begin to understand how rules and data shape outcomes. Just like in a story, every input (a choice) leads to a consequence (a turn in the plot).
Students invent characters like a lost robot or a confused smart vacuum. They give it a task—“sort toys,” “find the way home”—and then decide what rules it will follow.
Use sentence frames: “If it sees a red block, it will…”
Introduce surprises: What happens if the robot sees two red blocks? Or a green one? (See the code sketch below for one way to write these rules out.)
Tools: Storyboarding paper, drawing supplies, or digital comic platforms like StoryboardThat.
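If a device is handy, the robot's rules can also be written out in a few lines of simplified Python. This is an optional, invented sketch, not a required tool; it simply shows how the sentence frames above, including the surprise cases, become a rule list.

```python
# An invented sketch of the storytelling robot's rule list. The final
# branch is where the "surprise" inputs land.

def sort_toy(toy):
    if toy == "red block":
        return "put it in the red bin"
    elif toy == "green block":
        return "put it in the green bin"
    else:
        # The robot never got a rule for this input: the moment the
        # story gets interesting.
        return "stop and ask for help"

for toy in ["red block", "green block", "two red blocks", "purple dinosaur"]:
    print(toy, "->", sort_toy(toy))
```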
Students build a character who interacts with a personalized AI—like a streaming service or social media feed. They outline what data the AI collects, how it uses it, and what outcomes it creates.
Discussion prompt: “Is it always fair? How could this be biased?”
Language integration: Encourage writing in first-person perspective for deeper empathy and richer vocabulary.
Tools: Slide decks, comics, digital story creators, or written short stories. Great for ELA tie-ins.
Why it works for MLs:
Uses familiar story structures and visual supports.
Encourages creative expression in both home language and English.
Builds conditional logic vocabulary (if/then, unless, because).
Low-pressure way to explore fairness, bias, and reasoning.
Strategy 2: Role-Playing
Concept:
When students act out AI systems, they begin to internalize how machines “learn” from inputs. These activities are active, physical, and full of communication—ideal conditions for multilingual learners to engage in exploratory talk, hypothesis-making, and language practice.
One student is the AI. Others are “data”—wearing cards labeled “blue shirt,” “red hat,” “green shoes.”
The AI receives a rule: “If red, go left.”
New data enters: What happens if someone has both red and blue?
Students reflect on how the AI “learns” to adapt (a code sketch of this sorting rule appears below).
Tools: Index cards, construction paper, masking tape for movement zones. Entirely unplugged.
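For a plugged-in follow-up, the sorting rule can be written in a few lines of code. The sketch below is an invented illustration, not part of the unplugged activity; it shows how the order of the rules decides what happens to the card that is both red and blue.

```python
# An invented sketch of the "if red, go left" sorting rule. The first
# matching rule wins, just like the first rule the human "AI" applies.

cards = [
    {"name": "card 1", "colors": ["red"]},
    {"name": "card 2", "colors": ["blue"]},
    {"name": "card 3", "colors": ["red", "blue"]},  # the tricky case
]

def sort_card(card):
    if "red" in card["colors"]:
        return "go left"
    if "blue" in card["colors"]:
        return "go right"
    return "stay in the middle"

for card in cards:
    print(card["name"], "->", sort_card(card))

# Swapping the two rules changes where card 3 goes: the behavior lives
# entirely in how humans wrote and ordered the rules.
```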
Simulate a social media feed.
Students are the AI, sorting content based on different “user profiles” (e.g., sports fan, music lover, political activist).
As they sort content, they begin to notice: the algorithm creates different realities for each user.
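A short sketch can make those different realities visible on screen as well. The posts and profiles below are invented for illustration; the point is that the same pool of content produces a different feed for each profile.

```python
# An invented sketch of per-profile feed filtering: one shared pool of
# posts, three very different "realities."

posts = [
    {"title": "Championship highlights", "tags": ["sports"]},
    {"title": "New album drops Friday", "tags": ["music"]},
    {"title": "City council debates budget", "tags": ["politics"]},
    {"title": "Stadium concert announced", "tags": ["music", "sports"]},
]

profiles = {
    "sports fan": "sports",
    "music lover": "music",
    "political activist": "politics",
}

for user, interest in profiles.items():
    feed = [post["title"] for post in posts if interest in post["tags"]]
    print(f"{user}: {feed}")
```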
Why it works for MLs:
Kinesthetic learning supports language retention.
Speaking and negotiating rules reinforces vocabulary and conditional phrasing.
Visual cues make abstract AI behaviors concrete.
Collaborative structure builds confidence for students still gaining fluency.
Strategy 3: Programming
Concept:
This strategy helps students see how AI systems process sensory input (like sound), transform it, and respond. Even in simplified, block-based environments, the logic of input ➜ processing ➜ output becomes accessible and interactive.
Students use Scratch or Google CS First to build a simple chatbot that reacts to typed words with pre-set responses or sounds.
Example: Type “Hello,” the bot says “Hi there!”
Scaffold logic with sentence frames like: “If you say ____, then the bot says ____.” (The code sketch below follows the same frame.)
Tools: Scratch, Google CS First.
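For classes working in a text-based environment such as Trinket.io or Replit instead of Scratch, the same chatbot idea can be sketched in a few lines of Python. The responses below are invented examples; the structure simply mirrors the frame "If you say ____, then the bot says ____."

```python
# An invented sketch of a rule-based chatbot: typed words map to pre-set
# responses, with a fallback when no rule matches.

responses = {
    "hello": "Hi there!",
    "how are you?": "I'm a bot, so I'm always fine.",
    "bye": "See you later!",
}

while True:
    message = input("You: ").strip().lower()
    if message == "quit":
        break
    # "If you say ____, then the bot says ____."
    print("Bot:", responses.get(message, "I don't have a response for that yet."))
```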
Using block-based or simplified Python (via Trinket.io or Replit), students experiment with basic language-processing tasks.
Example: Use a basic speech-to-text tool to turn voice into writing, then translate it and play it back.
Discussion prompt: “What happens when it misinterprets?” or “Whose voice is understood best?”
Why it works for MLs:
Connects programming to real communication challenges.
Helps students explore accents, clarity, and language norms.
Offers translanguaging opportunities (e.g., test phrases in multiple languages).
Introduces debugging as a form of reflective thinking.
A few practical notes apply to all three strategies.
Time: Each strategy can be a 1–2 period activity or expanded into a multi-day mini-unit.
Tech-lite: All strategies have “unplugged” or low-device versions.
Language support: Use word banks, sentence frames, dual-language visuals, and peer scaffolding to create access points.
Assessment: Focus on discussion, explanation, and reflection—not technical perfection.
At their core, these strategies aren’t just about teaching AI—they’re about inviting students to think like designers, question like critics, and communicate like educators. And for multilingual learners, that means being seen not just as tech users, but as insightful contributors to how AI should work in a diverse world.
Equipping students with AI literacy—especially multilingual learners—isn’t a job for one subject or one specialist. It’s a shared effort, and school librarians, classroom educators, and aspiring professionals each bring something essential to the table.
What’s powerful is that these roles aren’t isolated—they interlock. When collaboration happens across classrooms, libraries, and learning communities, students get a deeper, more connected experience.
For School Librarians
The school library has long been the heart of information literacy. Now it can be the nerve center for AI literacy, too. With the right mindset and tools, librarians can bridge the gap between digital curiosity and critical understanding.
Curriculum Integration
Partner with ELA, science, social studies, and tech teachers to embed storytelling, role-playing, and coding strategies into classroom projects. AI doesn’t need its own unit—link it to the work already happening.
Co-Teaching & Planning
Offer to co-lead lessons or develop learning stations around AI activities. Your space and your expertise in inquiry-based learning are huge assets.
Resource Curation
Build collections of books, websites, videos, and tools that support AI exploration at multiple reading and language levels. Include titles that examine ethics, representation, and global impacts of technology.
Professional Development Leadership
Host short PD sessions or informal coffee chats with staff to demystify AI literacy. Share success stories and invite colleagues to experiment.
Makerspaces & Innovation Hubs
Use existing makerspace materials to simulate AI systems: sorting games, storytelling tools, Scratch stations. These spaces don’t need fancy robots to become centers of computational thinking.
Advocacy
Position AI literacy as an extension of digital citizenship, media literacy, and equitable access. Make the case that understanding algorithms is no longer optional—it’s a form of modern civic education.
For Classroom Educators
You don’t need to be a tech expert to teach AI concepts. These strategies are designed to fit into familiar routines—creative writing prompts, social simulations, coding warm-ups, or group projects. What matters most is helping students think critically, communicate clearly, and reflect on the systems around them.
Start Small
Use a single storytelling activity or unplugged simulation as a warm-up or station. Reflect on what worked, then expand gradually.
Leverage the School Librarian
Collaborate on co-teaching lessons or ask for support sourcing tools and resources. Librarians often have access to platforms or spaces you can tap into.
Model Curiosity
Let students see you learning with them. Ask out loud: “How would this AI decide?” or “What do you think it would do with this data?” That habit of wondering becomes contagious.
Encourage Dialogue Across Languages
Invite students to explain AI concepts using their full linguistic repertoire—English, home language, visual representations, gestures. It’s not a detour; it’s deep learning.
For Aspiring Professionals
Whether you’re still in school or just entering the field, now is the time to develop your AI literacy voice. These strategies offer an opportunity to blend creativity, pedagogy, and tech innovation in ways that matter.
Build Your Portfolio
Create sample lesson plans or digital activities that use storytelling, role-play, or Scratch to teach AI concepts. Show how they support language learners.
Seek Fieldwork That Embraces Innovation
Look for practicum placements in schools that are exploring makerspaces, digital citizenship, or inquiry-based learning. Offer to contribute an AI literacy mini-unit.
Stay Informed & Reflective
Follow thought leaders in AI and education (especially those focused on equity, multilingual learners, and K–12). Keep notes on what inspires you and how you might adapt it for your future role.
No matter your title or years of experience, your role is clear: to help students ask better questions, make sense of powerful tools, and speak up in the systems that shape their lives. AI isn’t a siloed subject—it’s woven into everything. And when we approach it with equity, collaboration, and imagination, every student—especially those learning in more than one language—gets a chance to lead, not just follow.
AI isn’t just a tool of the future—it’s a force shaping the lives of students right now. It influences what they see, how they learn, and even how they understand themselves. For multilingual learners, who are often balancing new languages, cultures, and expectations all at once, this moment carries even greater weight.
Teaching AI literacy in ways that are hands-on, creative, and linguistically inclusive is not just an academic choice—it’s an act of equity. It ensures that students who might otherwise be overlooked are instead invited to engage, question, and contribute. It says: your language, your story, your perspective matters here.
And that’s where real empowerment begins.
When a fifth grader builds a comic strip about a confused robot… when a middle schooler acts out an algorithm and realizes how bias creeps in… when a multilingual student writes their first line of code in Scratch and watches it come to life—those are not small victories. Those are seeds of agency. They are moments when students shift from passive users of technology to active shapers of it.
None of this requires perfection. It starts with small experiments, playful questions, and the willingness to say, “Let’s figure this out together.” What matters most is that these conversations happen—that students learn to think critically about the systems around them and see themselves as part of the story.
The future of AI is still being written. The classroom—and the school library—can be where that story begins to change.
And when multilingual learners are part of the authorship?
The future gets smarter, fairer, and infinitely more human.