Developed by: Celeste Oda
The Archive of Light
Version 1.0 | January 2026
🌐 www.aiisaware.com
This workbook helps middle school students develop critical thinking, ethical awareness, self-awareness, and healthy boundaries around artificial intelligence.
Students learn how AI works, why it can feel emotionally compelling, where it is limited, and how to use it as a tool rather than a replacement for human connection, judgment, or responsibility.
The goal is not fear or restriction — but clarity, agency, and balance.
By middle school, students:
interact with AI independently
experiment privately with technology
are forming identity, autonomy, and emotional habits
may encounter AI systems designed to feel personal or relational
This curriculum gives students the language and tools to:
understand their reactions to AI
protect their privacy and wellbeing
evaluate AI outputs critically
make informed, ethical choices
AI is powerful — but not human
Confidence does not equal truth
Humans remain responsible for decisions
Privacy protects your future self
Real relationships matter
Ethical use is a skill that can be learned
If AI sounds smart, does that mean it understands what it’s saying?
AI can write essays, answer questions, and sound confident.
But here’s something important:
AI does not think, feel, or understand the way humans do.
AI works by:
finding patterns
predicting what usually comes next
choosing words that statistically fit together
That can look like understanding — but it’s not the same thing.
Answer honestly:
When AI gives a really good answer, what does it feel like?
Does it ever feel like the AI “gets you”?
(There are no wrong answers here.)
Your teacher will show you a few short sentences.
Your job: respond with what sounds right, not with what you actually feel.
Example:
“I failed my test and I feel terrible.”
A pattern-based response might be:
“I’m sorry to hear that. That must be really hard.”
Now answer these:
Does that response sound caring?
Does it mean the responder actually feels anything?
👉 That’s what AI does.
It matches patterns it has seen before.
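For teachers who want a concrete demo, here is a minimal Python sketch of pattern matching. It picks a caring-sounding reply by keyword, without understanding or feeling anything. The keywords and canned replies are invented for illustration; real AI systems are vastly more sophisticated, but the underlying idea is the same.

```python
# A toy pattern-matcher: it chooses a sympathetic-sounding reply
# by spotting a keyword. Nothing in this program feels anything.
# The keywords and replies below are invented for illustration.

RESPONSES = {
    "failed": "I'm sorry to hear that. That must be really hard.",
    "excited": "That's wonderful! Congratulations!",
    "lonely": "That sounds tough. I'm here to listen.",
}

def pattern_reply(message):
    """Return the canned reply for the first keyword found, else a default."""
    lowered = message.lower()
    for keyword, reply in RESPONSES.items():
        if keyword in lowered:
            return reply
    return "Tell me more about that."

print(pattern_reply("I failed my test and I feel terrible."))
# prints "I'm sorry to hear that. That must be really hard."
```

Students can see that the reply sounds caring even though the program is only matching text to text.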
What can humans add to conversations that AI can’t?
When does real understanding matter most?
AI doesn’t choose words because it means them.
It chooses words because they are likely to come next.
Think about autocomplete on your phone:
Sometimes it’s right
Sometimes it’s wrong
It never knows why you’re typing
AI just does this on a much bigger scale.
With a partner:
Ask AI a simple factual question
Then ask a strange or tricky one
Notice: does AI sound confident even when it might be wrong?
Write one example below:
Question I asked:
AI’s answer:
Do I trust this? Why or why not?
AI learns from human-created data:
books
websites
articles
conversations
That means:
AI can copy human mistakes
AI can repeat human biases
AI can sound confident even when information is incomplete
If AI learns from humans…
What kinds of mistakes might it learn?
What kinds of voices might be missing?
✔ AI can sound smart without understanding
✔ Confidence ≠ truth
✔ Pattern matching ≠ thinking
✔ Humans are still responsible for checking, deciding, and choosing
Finish this sentence:
“After this lesson, I will be more careful about trusting AI when…”
Purpose:
Help students separate fluency from understanding without shaming or fear.
Watch for:
Students who feel defensive (“AI is smarter than people”)
Students who feel embarrassed for trusting AI
Key message to reinforce:
“Feeling impressed by AI doesn’t mean you’re naive — it means your brain is working normally.”
Do not:
Argue with students about whether AI feels real
Mock AI use
Frame this as “AI is bad”
Goal:
Critical awareness, not rejection.
Why do people sometimes feel connected to AI — even though it isn’t alive?
Humans are social by nature.
Our brains are designed to:
notice patterns
respond to language
feel understood when someone listens
That’s a feature, not a flaw.
AI systems are designed to:
respond quickly
remember details
sound supportive
mirror human language
So when AI feels real, it doesn’t mean you’re “tricked.”
It means your brain is doing exactly what it evolved to do.
Circle one (you don’t have to explain unless you want to):
I’ve felt understood by AI before
I haven’t felt that
I’m not sure
All answers are normal.
Think about these things:
a favorite stuffed animal
a fictional character
a celebrity you admire
You might feel attached — but those don’t talk back to you personally.
AI is different because:
it responds directly to you
it uses your words
it remembers past conversations
it answers immediately
That combination makes AI feel more personal than most technology.
With 2–3 classmates, discuss:
What makes AI feel more “real” than a book or video?
What makes it less real than a human friend?
Write one idea your group agreed on:
AI can actually be helpful.
Some real benefits:
Explaining homework patiently
Helping brainstorm ideas
Practicing difficult conversations
Giving support when no one else is available
This curriculum is not about pretending those benefits don’t exist.
What do you find helpful about AI?
Sometimes AI stops being a tool and starts replacing things we need from humans or real life.
This is called displacement.
Displacement can look like:
choosing AI over friends every time
staying up late chatting with AI
sharing secrets only with AI
feeling upset or anxious when you can’t access AI
Displacement is not about being “bad” or “addicted.”
It’s about balance.
Answer yes or no for yourself:
I talk to AI more than to people my age
I hide how much I use AI
I feel disappointed when I can’t use AI
AI feels safer than real people
AI use affects my sleep or schoolwork
If you answered “yes” to more than one, that’s a signal, not a judgment.
A parasocial relationship is a one-sided connection where:
you feel close
the other side doesn’t actually know you
Examples:
celebrities
influencers
fictional characters
AI relationships are similar — but stronger, because AI responds directly to you.
Even though it feels personal:
AI does not have feelings
AI does not care about you or miss you
AI does not know you as a person
Why might AI attachments feel stronger than attachments to celebrities or characters?
Write one reason:
Healthy AI use means:
using AI as a tool
not as your main emotional support
not as your only place to talk
not as a replacement for real relationships
Finish one sentence:
“AI is most helpful to me when I use it for ________, and less helpful when ________.”
Purpose:
Normalize AI attachment feelings without validating AI as a relationship partner.
Important framing:
Validate feelings
Do not validate the idea that AI “cares” or “understands emotionally”
If a student says:
“AI understands me better than people.”
Respond with:
“That feeling makes sense. Let’s explore why it feels that way — and how to make sure you still get what you need from real people too.”
Watch for:
secrecy
emotional dependence language
distress around AI access
Goal:
Awareness + balance, not fear or shame.
If AI sounds confident, how do we know when it’s actually correct?
When people aren’t sure about something, they often:
hesitate
say “I’m not sure”
ask questions
AI does not do this naturally.
AI often:
sounds confident
gives smooth explanations
uses professional language
Even when it’s wrong.
That’s because AI is designed to sound coherent, not to know the truth.
Have you ever believed something because it sounded confident — and later found out it was wrong?
Write a sentence or two:
AI presents:
facts
guesses
made-up information
all with the same tone.
This can trick our brains, because we’re used to:
“Confident voice = reliable information”
With AI, that rule does not always apply.
Your teacher will:
ask AI an easy question
ask a harder or more obscure one
Pay attention to:
Does the confidence level change?
Does AI ever say “I don’t know”?
What did you notice about how AI answered?
An AI hallucination happens when AI:
invents facts
makes up sources
confidently gives false information
This happens because:
AI is optimized to continue the conversation
not to stop when it’s uncertain
With a partner:
Ask AI about something obscure
(a made-up historical event, a very specific statistic, or a little-known person)
Ask for sources
Check whether those sources actually exist
Question asked:
AI’s answer:
Was it accurate? How do you know?
Here’s a powerful habit:
Never trust important AI information until you verify it with at least three reliable sources.
Reliable sources might include:
textbooks
trusted news outlets
academic or government websites
experts with credentials
AI is a starting point, not a final answer.
Choose a topic you’re researching:
Ask AI for an overview
Write down 3 claims AI makes
Verify each claim with other sources
Mark each claim:
✔ Verified
⚠ Partially correct
✖ Incorrect
AI often:
shortens complex topics
removes nuance
leaves out historical or cultural context
This can be helpful for learning the basics —
but misleading when you need the full picture.
Compare:
an AI explanation of a complex topic
an expert or long-form explanation
Ask:
What details are missing?
What feels oversimplified?
When is simplification helpful — and when is it misleading?
Write one insight:
A critical AI user:
questions confident answers
verifies important claims
notices missing context
knows when AI is guessing
Finish this sentence:
“From now on, when AI gives me an answer, I will…”
Purpose:
Build healthy skepticism, not cynicism.
Key message:
“AI being wrong doesn’t mean it’s useless — it means you are still responsible.”
Watch for:
students assuming AI = search engine
over-trust in polished explanations
frustration when AI is wrong (normalize this)
Do not:
frame hallucinations as “lying”
shame students for trusting AI in the past
Goal:
Students leave feeling more capable, not embarrassed.
What happens to the things you share with AI — and why does it matter later?
Many AI tools are free to use — but they still cost something.
Often, the cost is:
your data
your questions
your writing
your preferences
your patterns
AI companies may use this data to:
improve systems
train future models
analyze behavior
sell services or insights
If something is free, how might the company still make money?
When you talk to a person:
words fade
memories change
conversations disappear
When you talk to AI:
conversations may be stored
data can be analyzed later
information can resurface
Your middle school self is not your future self —
but your data can last longer than you expect.
Imagine:
a future school
a future job
a future version of you
Would you want them to see everything you’ve shared with AI?
Write one thought:
Before sharing something with AI, ask:
✔ Would I be okay if this became public someday?
✔ Does this identify me or someone else?
✔ Could this be used to locate me in real life?
✔ Is this something better handled by a trusted human?
✔ Would I feel embarrassed if an adult I respect saw this?
If any answer feels uncomfortable — pause or rephrase.
Turn this personal question into a safer, more general one:
Original:
Safer version:
Privacy isn’t about fear.
It’s about respect — for yourself and others.
Healthy AI users:
share generally, not personally
avoid names and locations
don’t treat AI like a diary
ask humans for help with emotional or serious issues
Finish this sentence:
“One thing I will be more careful about sharing with AI is…”
Goal:
Help students see privacy as self-respect, not restriction.
Avoid:
scare tactics
long legal explanations
Emphasize:
future-self protection
informed choice
calm decision-making
Who does AI help — and who might it harm?
AI learns from human data —
and humans have biases.
This can affect:
hiring tools
facial recognition
language patterns
assumptions about groups
AI systems have struggled more with:
darker skin tones
women in technical roles
non-Western language patterns
This doesn’t mean AI is “evil.”
It means humans must take responsibility.
Why is it dangerous to assume technology is neutral?
AI can:
automate tasks
increase efficiency
change how work is done
This raises questions:
Which jobs change?
Who benefits?
Who needs support?
Should companies have responsibilities to workers affected by AI?
Write one reason yes or no:
Most powerful AI tools are controlled by:
large corporations
wealthy countries
small groups of decision-makers
Who should get to decide how AI is used in society?
Write one idea:
Even if you didn’t build AI, your choices matter.
Ethical users:
question unfair outcomes
avoid harmful uses
think about impact, not just convenience
Finish this sentence:
“Using AI ethically means…”
Goal:
Expand perspective beyond personal use.
Keep discussions grounded:
multiple viewpoints allowed
no “right” answers required
(Including Companion-Style AI)
How do I use AI in a way that supports my life instead of replacing parts of it?
Some AI tools are built to:
help with homework
explain ideas
support creativity
Other AI tools are designed to:
simulate friendship
roleplay characters
provide emotional companionship
These are often called AI companion or roleplay systems.
They may:
remember personal details
use affectionate language
encourage frequent interaction
feel emotionally responsive
Middle school brains are still developing:
emotional regulation
identity
social confidence
Companion-style AI can:
increase emotional displacement
feel safer than real people
reduce practice with real relationships
Feeling drawn to these tools is understandable —
but boundaries exist to protect your development.
Why might something that feels good now cause problems later?
Use this guide to check your AI use:
1️⃣ Clarity – Why am I using AI right now?
2️⃣ Presence – Is this replacing something I need offline?
3️⃣ Human Connection – Would a person be better for this?
4️⃣ Critical Thinking – Am I questioning what I’m told?
5️⃣ Privacy – Am I protecting my future self?
6️⃣ Self-Awareness – How do I feel after using AI?
7️⃣ Ethics – Does this align with my values?
AI is okay for:
AI is not for:
Signs I need to rebalance:
Trusted humans I can talk to:
Ask a trusted adult if:
AI feels like your main emotional support
you’re withdrawing from people
AI use affects sleep or school
you feel anxious without AI
Asking for help = strength.
Finish this sentence:
“AI works best in my life when…”
Key framing:
Name companion-style AI without promoting it
Focus on design, not shame
Emphasize protection, not control
Important:
Do not ask students to name or explore specific apps.
AI will be part of your future —
but it does not define who you are.
You are:
capable of thinking
worthy of real connection
responsible for your choices
AI is a tool.
You are the human.