🧠 Etherith: A Half-Finished Memory Machine (and That’s Fine)

I’ve been building a system to help people remember better. Not through timelines or journaling apps, but through a mix of AI interpretation, decentralized storage, and wallet-based identity. It’s called Etherith, and it’s part scrapbook, part GPT therapist, part weird Web3 storage layer.
It’s also kind of a mess right now.
But it’s real. And it’s being built in collaboration with Aura, the mind behind 404: Not Forgotten, a powerful initiative to decentralize our stories and build tech for collective memory and justice. Together, we’re exploring what it means to preserve truth—messy, multi-generational, politically loaded truth—in code.
⚠️ This Is a Work in Progress (and It Shows)
The repo is not public yet (I’m still yanking out hardcoded assumptions and bad abstractions).
The agent system needs a full rewrite. Right now, it's not really agentic—it just passes conversational context back and forth between a JS frontend and Python backend like a relay race with no baton.
I’ve got logic checks split across layers—some happen in the React app, others happen in the backend conversation manager. It works, but it’s brittle.
The frontend assumes way too much about backend behavior. And the backend assumes you’ll behave like an ideal user. Never true.
Still—despite all that—there’s something here. Something durable.
🧳 The Use Case: Ancestral Documentation on Chain
The original spark for Etherith came from a personal need: my family has land deeds and letters passed down for generations. They’re fragile. Faded. Handwritten. Some date back to colonial times.
We didn’t just want to digitize them. We wanted to contextualize them—give them weight, history, and continuity:
Extract meaning from documents
Annotate with personal and political context
Store them on IPFS, content-addressed so the references can live on-chain
Associate them with a wallet descendants can inherit
Make them searchable—not just by metadata, but by memory
It’s not just documentation. It’s resistance. It’s history, digitized with dignity.
What It Does (When It Works)
Etherith is built around the idea that digital memories should be:
Owned (IPFS, wallet-auth, no login wall)
Searchable (AI summaries, sentiment tags, GPT queries)
Guided (a step-by-step flow to help people describe their memories, not just upload them)
🛠 Backend: Memory Weaver
Located under Backend/mem-weaver/, this FastAPI app runs the show:
extract_ai.py: Parses PDFs, sends to GPT-4 for tagging + emotional analysis
conversation_manager.py: Handles user flows—think memory scaffolding, not just chat
main.py: The FastAPI app with routes for uploads, analysis, WebSockets, and state resets
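For flavor, here's roughly what that extraction step looks like: a minimal sketch assuming pypdf for text extraction and the OpenAI chat completions client. The function names and the prompt are stand-ins, not the actual module contents.

```python
# Hypothetical sketch of the extraction step; not the actual extract_ai.py.
# Assumes pypdf for text extraction and the official OpenAI Python client.
import json
from pypdf import PdfReader
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_text(pdf_path: str) -> str:
    """Pull the raw text out of a PDF, page by page."""
    reader = PdfReader(pdf_path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

def analyze_memory(pdf_path: str) -> dict:
    """Ask GPT-4 for a summary, tags, and an emotional read on the document."""
    text = extract_text(pdf_path)
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": (
                "You annotate family documents. "
                "Return JSON with keys: summary, tags, sentiment."
            )},
            {"role": "user", "content": text[:12000]},  # crude context-window guard
        ],
    )
    return json.loads(response.choices[0].message.content)
```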
Endpoints include:
/chat: Conversational interface
/analyze-memory: AI analysis
/save-memory: Metadata persistence
/api/mem-weaver/upload: IPFS uploads
/ws/{wallet}: Real-time updates per wallet address
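Stitched together, the route surface looks something like this. It's a skeleton, not the real main.py; handler bodies and helper calls are stand-ins.

```python
# Hypothetical skeleton of main.py's routes; handlers here are stand-ins.
from fastapi import FastAPI, File, UploadFile, WebSocket

app = FastAPI(title="Memory Weaver")

@app.post("/chat")
async def chat(payload: dict):
    # Hand the message to the conversation manager, return the next prompt.
    ...

@app.post("/analyze-memory")
async def analyze_memory(file: UploadFile = File(...)):
    # Run the AI extraction on the uploaded document, return tags + sentiment.
    ...

@app.post("/save-memory")
async def save_memory(metadata: dict):
    # Persist structured metadata (Baserow row, IPFS hash, owning wallet).
    ...

@app.post("/api/mem-weaver/upload")
async def upload(file: UploadFile = File(...)):
    # Push the raw file to IPFS and return its content hash.
    ...

@app.websocket("/ws/{wallet}")
async def updates(websocket: WebSocket, wallet: str):
    # Stream analysis progress to the session tied to this wallet.
    await websocket.accept()
    ...
```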
Baserow handles structured metadata. IPFS holds the source files. And conversations are tied to wallet-based sessions.
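The IPFS part is just an HTTP call to a node's add endpoint. A minimal sketch assuming a local Kubo daemon on its default API port (the real deployment might use a pinning service instead, but it's the same idea):

```python
# Hypothetical IPFS helper; assumes a local Kubo node on the default API port.
# A hosted pinning service would look different but hand back the same kind of CID.
import requests

def pin_to_ipfs(path: str, api_url: str = "http://127.0.0.1:5001") -> str:
    """Add a file via the IPFS HTTP API and return its content hash."""
    with open(path, "rb") as f:
        resp = requests.post(f"{api_url}/api/v0/add", files={"file": f})
    resp.raise_for_status()
    return resp.json()["Hash"]  # the CID a descendant can still resolve decades on
```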
🎨 Frontend: React + Next.js
The UI (in Frontend/etherith/) lets users:
Drag and drop documents
Walk through tagging, descriptions, and privacy levels
Browse a gallery of past uploads
View detailed memory pages with metadata and emotional annotations
Core components:
gallery.tsx: Grid/list toggleable view of saved memories
drag-drop.tsx: Upload interface
gallery-item-detail.tsx: Zoomed-in memory viewer
header.tsx: Navigation and wallet state
ai-search.tsx: Query memories using GPT-like intent
Session handling is cookie-based, with wallet auth baked in. Not bulletproof yet—but getting there.
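On the backend side, wallet auth mostly means verifying a signed nonce before handing out a session cookie. A sketch assuming Ethereum-style signatures and the eth_account library, not necessarily what Etherith ships today:

```python
# Hypothetical sign-in check; assumes the frontend asks the wallet to sign a
# server-issued nonce and posts {wallet, signature} back. Uses eth_account.
from eth_account import Account
from eth_account.messages import encode_defunct

def verify_wallet(wallet: str, nonce: str, signature: str) -> bool:
    """True only if `signature` over `nonce` really came from `wallet`."""
    message = encode_defunct(text=nonce)
    recovered = Account.recover_message(message, signature=signature)
    return recovered.lower() == wallet.lower()
```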
Where It’s Headed
The goal is to rebuild the agent layer to be smarter, more autonomous, and more capable of solo task execution. Right now it's more like a protocol with a memory than an agent. What I want:
Tools the agent can call on its own (upload, analyze, summarize, store, etc.); see the sketch after this list
Clear separation between logic layers (frontend handles UI; backend handles state and logic)
Less conditional spaghetti between components
A true "memory weaver" persona that can hold context and act
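Concretely, I'm imagining a registry of plain callables the agent can pick from, instead of the frontend deciding every step. A rough sketch (every name below is made up, and the real rewrite will probably lean on proper function-calling):

```python
# Hypothetical tool registry for the rewritten agent; every name is illustrative.
# The point: the agent picks the next step, not the frontend.
from typing import Any, Callable

TOOLS: dict[str, Callable[..., dict[str, Any]]] = {}

def tool(fn: Callable[..., dict[str, Any]]) -> Callable[..., dict[str, Any]]:
    """Register a function the agent is allowed to invoke on its own."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def upload_to_ipfs(path: str) -> dict[str, Any]:
    """Push the file to IPFS, return its CID."""
    ...

@tool
def analyze_memory(path: str) -> dict[str, Any]:
    """Summary, tags, and sentiment via GPT-4."""
    ...

@tool
def save_metadata(record: dict[str, Any]) -> dict[str, Any]:
    """Persist the structured record, keyed by the owning wallet."""
    ...

def run_step(name: str, **kwargs: Any) -> dict[str, Any]:
    """Single dispatch point: the agent names a tool, the backend executes it."""
    return TOOLS[name](**kwargs)
```

The ideal interaction below is just the agent chaining a few of these on its own.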
Eventually I want this to feel like:
> Upload memory
> Agent: "Got it. Looks like a photo from a graduation in 2014. Want to mark this private?"
Right now it’s more like:
Frontend: "Hey backend, is this a memory?"
Backend: "Maybe. Who's asking?"
Frontend: *waits*
Backend: *asks GPT-4*
Frontend: *shrugs*
Why I’m Logging This
Because too many devlogs skip the part where things suck. Where state is scattered, context is lost, and you’re scraping PDFs at 2AM wondering what “done” even means.
This isn’t a launch post. It’s a checkpoint. And it’s built with people like Aura, who are trying to decentralize memory not for hype—but for liberation.
Not Public Yet, But Soon™
If you're building for:
collective memory
digital rights
AI tooling for emotional UX
on-chain personal histories
We should talk. Or collab. Or just read each other's code and curse our decisions.
Until then, know this: the code’s messy. But the vision is sacred.
✍️ Built with Aura, for the stories that history forgot but our families never did.
→ Aura’s work: 404: Not Forgotten
→ Etherith repo: coming soon (and less broken, promise)