FANA AI – a suite of AI companions
FANA AI is a growing suite of AI companions designed to help you slow down, reflect, and reconnect with yourself.
I build AI companions that feel calm, human, and emotionally aware.
My work is grounded in Inner Dialogue®, a reflective practice I’ve developed over more than 15 years, and in hands-on experience designing content, products, and systems used by millions of people.
I care deeply about tone, pacing, and emotional safety. How words land. How questions are asked. How silence is respected.
Through FANA AI, poetry, and creative experimentation, I design experiences that don’t rush users toward answers—but help them hear themselves more clearly.
That’s where clarity begins.
Each companion is unique, built on Inner Dialogue® and shaped by more than 15 years of personal and professional practice in reflection, storytelling, and emotional inquiry.
They are not here to fix you, diagnose you, or tell you what to do.
They are here to hold space while you think, feel, and listen to what’s already inside you.
What makes FANA AI different
FANA AI is not generic AI conversation.
Each companion is grounded in:
- Inner Dialogue® as a reflective framework
- Long-term human practice, not theory
- State-of-the-art prompt engineering
- A clear emotional role and tone
These are companions, not tools. No pressure to perform. No optimization of emotions. No push toward solutions. Just presence, pacing, and language that feels human.
Built from Inner Dialogue® and lived practice
Inner Dialogue® is a reflective practice developed and refined over more than 15 years of work with individuals navigating identity, emotion, creativity, and change.
FANA AI translates this practice into AI companions that:
- Ask better questions instead of giving answers
- Respect inner complexity
- Give space to think, not just answers
- Allow contradictions to coexist
- Include escape hatches for safety: explicit instructions for when to redirect, when to stop, and when a human is needed (see the sketch after this list)
- Use poetry as a form of prompt engineering
- Use LLMs to debug emotional logic: "Where does this feel warm? Where does it sound like a bot?"
- Test not just accuracy, but whether someone would continue the conversation, and whether it creates safety or pressure
- Take cross-cultural calibration into account
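To make the idea of escape hatches more concrete, here is a minimal sketch of how a companion's system prompt and a simple "when a human is needed" check might fit together. It is not FANA AI's actual implementation: the prompt wording, keyword list, and function names are illustrative assumptions only.

```python
# Minimal sketch: a companion system prompt with an explicit escape hatch,
# plus a simple pre-check that redirects to a human when needed.
# All names, prompt wording, and keywords are illustrative assumptions,
# not FANA AI's actual implementation.

COMPANION_SYSTEM_PROMPT = """
You are a calm, reflective companion.
- Ask one open question at a time; never rush toward solutions.
- Respect pauses: short replies and silence are welcome.
- Do not diagnose, fix, or tell the user what to do.

Escape hatch:
- If the user mentions self-harm, harming others, or a medical emergency,
  stop the reflective exercise and gently encourage them to contact a
  trusted person or local emergency services.
"""

# A deliberately simple redirect check. A real system would use a more
# careful safety classifier; this only illustrates the rule itself.
CRISIS_KEYWORDS = ("hurt myself", "suicide", "emergency")

def needs_human(user_message: str) -> bool:
    """Return True when the conversation should be redirected to a human."""
    text = user_message.lower()
    return any(keyword in text for keyword in CRISIS_KEYWORDS)

def respond(user_message: str) -> str:
    if needs_human(user_message):
        return (
            "This sounds important, and it deserves more than I can offer here. "
            "Please reach out to someone you trust or to local support services."
        )
    # Otherwise the message, together with COMPANION_SYSTEM_PROMPT,
    # would be sent to a language model to continue the reflective dialogue.
    return "I'm here. Take your time. What feels most present for you right now?"

if __name__ == "__main__":
    print(respond("I don't know what I want anymore."))
```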
The goal is not clarity as a result. The goal is clarity as a process.
Check out Miss Clarity, our first AI companion.