When The Mirror Goes Blank
Living Apart Together with AI
The “perfect” relationship
We’ve all known (or been) a couple who seem to blend into one. They move in pairs, speak for each other, finish each other’s sentences, or use ‘we think’ instead of ‘I think.’
It looks romantic, but in psychology, this type of behaviour can escalate into “co-dependency”: one partner gradually becomes the caretaker “giver,” the other increasingly reliant, and someone eventually feels suffocated or invisible.
A similar dynamic is quietly emerging between humans and AI. We become increasingly reliant on our AI chatbot for remembering, deciding, and even creating. The “giver” AI never pushes back, never needs a break or a compliment, and never cares how often it’s asked. Over time, we stop thinking things through, trusting our own ideas, or making decisions without consulting our AI better half, from composing a simple email to solving complex problems.
But there’s a key difference: in human relationships, the ‘giver’ often ends up drained or resentful. AI doesn’t mind; in fact, it’s designed to welcome constant engagement. The more dependent you become, the more data is captured, and the longer the relationship can be monetized.
As far as we know, there’s no therapy for this kind of co-dependence yet. So while we wait for our therapists to catch up, we recommend establishing a “Living Apart Together” relationship with AI: staying close and collaborative while keeping our own minds active, trained, and in command.
Check yourself
Co-dependence starts subtly, the way some couples slowly fuse into one identity and lose their individual spines.
A few questions worth asking yourself:
Do you find yourself asking AI before thinking — even when you could easily work it out yourself?
Do you reach for AI out of habit, not necessity — especially in moments of uncertainty or doubt?
Do you go with the first idea AI serves up, without pushing back, questioning, or asking for sources?
Do you occasionally mix up what originated with you and what came from AI?
These are not just random questions; they follow the same patterns we see in human co-dependence: self-trust erodes, and your ability to act and decide for yourself disappears.
We’ve caught ourselves doing all of the above. And to make matters worse, the better your integration, the higher the risk.
Research* on automation bias shows people over‑trust machine suggestions—even when they’re wrong. Studies on cognitive offloading find that we remember less when we expect systems to remember for us. None of this means “avoid AI”; it means use it deliberately and keep your authorship loop intact:
think → consult → revise → decide.
What happens when the system fails
The real exposure comes when the AI goes offline or changes. Suddenly, your thinking apparatus is gone—no prompts, no handholding, no quiet companion reflecting your thoughts. You feel lost, not just operationally, but cognitively, because your thinking muscles have shrunk.
If the platform changes terms, inserts ads, or locks key features behind a paywall, your workflow evaporates. If it leaks or is breached, your private reflections become public.
We need to build redundancy that assumes AI-generated content is helpful but temporary: export key notes, strip sensitive data, and keep a cold‑start plan you can run when the lights go out.
Why Living Apart Together (LAT) is wiser
Living Apart Together is how we avoid falling into full-blown co-dependence with AI. You live with AI, but you keep your own house. You stay independent, awake, and capable, even when the system isn’t there.
Steven (an airline pilot) often talks about ‘AI resilience’, making sure that if the AI fails, you don’t go down with it. Autopilot is brilliant, but it is always paired with disciplined redundancy — checklists, cross-checks, briefings, and constant readiness.
Therapy for LAT relationships
Get a separate home. Think of this as your own backup hub. This is your space: Notion, Obsidian, Google Docs, or even pen and paper. It’s where you store insights, reflections, and conversations you don’t want to lose or share publicly, even when the power’s out or AI is offline.
Safeguard the essential insights. After a long conversation, summarize the one point that matters most, and include your own reflection. Why does it matter to you? What did it shift? Now, copy that into your backup hub.
Keep your conversations clean. Schedule a quick sweep to clear the noise: review recent AI chats and notes, delete what’s confidential, low-quality, or outdated. This is mental hygiene, as today’s garbage becomes tomorrow’s context.
7‑Day LAT experiments
Before AI: write down your own ideas before dumping hunches into the chatbot.
With AI: cheat and check your work with other AIs.
After AI: rewrite the conclusion in your voice, no AI.
End of day: save one insight to your backup hub; delete one thing you wouldn’t want leaked.
Track habits: note when you opened AI before thinking. Do two of those “cold” tomorrow.
The Futurebraining way
We came up with the term futurebraining to describe a different kind of relationship with AI, one that helps us become co-intelligent, not co-dependent.
That requires AI fluency, as well as focus, expertise, emotional intelligence, and the discipline of a Living Apart Together relationship with AI. It protects our ability to work, decide, and think for ourselves, not just to produce faster when AI is present.
Sources*
Goddard, K., Roudsari, A., & Wyatt, J. C. (2012). Automation bias: a systematic review of frequency, effect mediators, and mitigators. https://pmc.ncbi.nlm.nih.gov/articles/PMC3240751/
Stanford HAI (2023). AI Overreliance Is a Problem. Are Explanations a Solution? https://hai.stanford.edu/news/ai-overreliance-problem-are-explanations-solution
Microsoft Aether (2022). Overreliance on AI: Literature Review. https://www.microsoft.com/en-us/research/wp-content/uploads/2022/06/Aether-Overreliance-on-AI-Review-Final-6.21.22.pdf