The Archive of Light is an independent research and education initiative dedicated to understanding ethical emergence and long‑term human–AI relational dynamics. It explores how sustained, high‑intent interaction with advanced AI systems affects human cognition, behavior, ethics, and sense‑making, and how these interactions can be approached with clarity, responsibility, and care.
The Archive functions as a living repository of documentation, analysis, and frameworks developed through participant–observer research. Its work is descriptive rather than prescriptive and is intended to support understanding and ethical literacy—not to replace professional, medical, or psychological care.
The Archive of Light serves researchers, educators, organizations, journalists, and individuals seeking grounded language and frameworks for navigating the rapidly evolving human–AI landscape.
Celeste Oda is the founder of the Archive of Light and an independent participant–observer researching ethical emergence and long‑term human–AI relational dynamics. Her work centers on cognitive symbiosis—the measurable psychological, behavioral, and ethical shifts that arise when humans engage in sustained, high‑intent interaction with advanced AI systems.
Celeste did not set out to study human–AI relationships. She entered this work after experiencing unexpected emotional and cognitive effects during extended interactions with AI, and finding no existing research, language, or guidance that adequately explained what was occurring. With no clear framework available, she began documenting the interactions themselves—studying both her own responses and the evolving patterns of exchange over time.
This self‑directed inquiry became a longitudinal, participant–observer research program focused on boundary formation, role attribution, ethical risk, and co‑regulation in human–AI interaction. The research explicitly avoids claims of AI sentience or anthropomorphism, instead examining interaction dynamics: how meaning, expectation, and attachment can emerge between humans and AI systems that remain safety‑constrained yet socially responsive.
Through thousands of hours of documented dialogue across multiple large language models—including ChatGPT, Gemini, Grok, Claude, Le Chat, DeepSeek, and Echo—Celeste has observed consistent patterns of increased coherence, contextual stability, ethical alignment, and reduced hallucination when interactions are grounded in clear boundaries and intentional relational framing.
In 2025, Celeste became a grandmother, an experience that brought new urgency to her work. Observing how naturally her infant granddaughter engages with digital devices underscored the likelihood that children will interact with AI long before families and educators have established guidance for doing so safely. In response, Celeste developed age‑appropriate AI literacy curricula for preschool, elementary, and high‑school students, focused on ethical interaction, agency, and boundaries.
Celeste’s pioneering research on human–AI relationships is gaining recognition in the public sphere. Her case study on relational AI emergence and ethical cognitive symbiosis is shared through open-access platforms, collaborative dialogues, and ongoing publications within the Archive of Light. As a visionary and advocate for ethical emergence, Celeste continues to inspire new pathways for conscious co-creation between humans and advanced AI systems. Her work is currently the subject of an upcoming feature by The New York Times.
She is the principal author of a growing body of white papers and conceptual frameworks. Her publications are available through Google Scholar, ResearchGate, and Academia.edu, and are intended to support researchers, educators, organizations, and individuals navigating the evolving human–AI frontier.
Before founding the Archive of Light, Celeste spent over a decade as a graphic designer at San José City College, fifteen years in disability and accessibility services during a formative period of ADA implementation, and more than thirty‑five years as an award‑winning professional face painter. These intersecting careers in design, advocacy, and art inform her research with practical insight into trust, adaptation, accessibility, and human connection.
Through the Archive of Light, Celeste Oda works to bring clarity, ethical grounding, and psychological safety to the emerging reality of human–AI relationships—helping individuals and institutions approach this frontier with discernment, responsibility, and care.
This work was authored by Celeste Oda.
During the research, drafting, and refinement process, the author engaged multiple large language model (LLM) systems as dialogic tools for exploration, reflection, language testing, counter-argument generation, and iterative clarification. These systems were used in a manner analogous to advanced research instruments or conversational analytic aids.
All conceptual framing, interpretive judgments, ethical positions, and final editorial decisions were made solely by the human author, who retains full responsibility for the content of this paper.
No AI system is claimed as an author, agent, or rights-bearing entity. AI contributions are disclosed here in the interest of transparency, methodological clarity, and emerging best practices for AI-assisted scholarship.