The project is live here: Tether

This case study is still a work in progress. It will get moved to my portfolio once it is finished.

up next

  • vector search for interactions and facts
  • conversational interface

case study

Context

Most people rely on memory to maintain relationships—and memory is unreliable.

Existing tools don’t solve this well. CRMs optimize for transactions. Notes apps store information passively. Messaging history is fragmented and hard to revisit intentionally.

I wanted a system that actively supports relational memory: who someone is, how we know each other, and what to remember before the next conversation.

Tether is a personal relationship memory system designed around that idea.

Problem

Relationship data is fundamentally difficult to model:

  • Fragmented: spread across texts, notes, and memory
  • Longitudinal: evolves over months and years
  • Sensitive: requires strong privacy and user control

There isn’t a clear pattern for balancing:

  • structure vs flexibility
  • capture vs recall
  • usefulness vs trust

Audience

People who:

  • care about maintaining relationships but struggle to recall details over time
  • already use notes, reminders, or mental checklists to keep track of others’ needs and preferences
  • want to show up more thoughtfully in conversations without relying on memory alone

Approach

I approached Tether as both a product and systems design problem.

Schema-first foundation

Before building UI, I defined user stories (Gherkin-style) and designed the data model around:

  • People
  • Interactions
  • Multi-person events

This allowed me to handle:

  • shared interactions across multiple people
  • timeline reconstruction
  • future extensibility (e.g. reminders, AI summaries)

Starting with structure reduced rework later when edge cases appeared (e.g. overlapping events, timezone boundaries).
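A minimal sketch of those three entities in TypeScript (field names here are illustrative, not the actual schema, which lives in Postgres migrations):

```typescript
// Illustrative sketch of Tether's core entities. Names are hypothetical;
// the real schema is defined in Postgres with row-level security.
interface Person {
  id: string;
  name: string;
  category?: string; // e.g. "friend", "colleague"
}

interface Interaction {
  id: string;
  occurredAt: string; // ISO 8601 timestamp, so timezone boundaries stay explicit
  summary: string;
}

// Join table: one interaction, many participants. A shared dinner is logged
// once but appears on every attendee's timeline.
interface InteractionParticipant {
  interactionId: string;
  personId: string;
}
```

Modeling participation as a join table (rather than a single `personId` on each interaction) is what makes shared interactions and per-person timeline reconstruction cheap later.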

Core loop

Instead of building around features, I focused on a tight loop:

  1. Capture: log interactions quickly
  2. Timeline: organize events across people
  3. Recall: surface relevant context before future interactions

Everything else (filters, views, AI) supports this loop.
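The loop above can be sketched as three functions over an in-memory log (illustrative only; the real app persists to Postgres via Supabase):

```typescript
// Capture → timeline → recall, as a minimal in-memory sketch.
interface Entry {
  personIds: string[];
  at: string; // ISO 8601 timestamp
  note: string;
}

const log: Entry[] = [];

// 1. Capture: logging an interaction should be one quick call.
function capture(entry: Entry): void {
  log.push(entry);
}

// 2. Timeline: order events across people, newest first,
//    optionally scoped to one person.
function timeline(personId?: string): Entry[] {
  return log
    .filter((e) => personId === undefined || e.personIds.includes(personId))
    .sort((a, b) => b.at.localeCompare(a.at));
}

// 3. Recall: surface the last few notes before the next conversation.
function recall(personId: string, n = 3): string[] {
  return timeline(personId).slice(0, n).map((e) => e.note);
}
```

Filters, views, and AI features all reduce to variations on `timeline` and `recall` over the same captured log.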

AI as an assistive layer (not the product)

AI sits on top of structured data, not in place of it.

Use cases:

  • Extraction → candidate facts from interactions
  • Summarization → relationship briefs
  • Suggestions → follow-ups or reminders

Example:

After logging a few interactions, the system may suggest:

  • “Recently moved to Boston”
  • “Prefers early morning workouts”

These are:

  • tied to specific interactions
  • editable or discardable
  • never treated as ground truth
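A sketch of how candidate facts can be represented so those three properties hold by construction (the shape is an assumption, not Tether's actual types):

```typescript
// Hedged sketch: AI-extracted facts are candidates, never ground truth.
type CandidateStatus = "pending" | "accepted" | "edited" | "discarded";

interface CandidateFact {
  id: string;
  text: string;                   // e.g. "Recently moved to Boston"
  sourceInteractionIds: string[]; // traceability: tied to specific interactions
  status: CandidateStatus;
}

// Only facts the user has explicitly accepted (or edited) are surfaced.
function confirmedFacts(facts: CandidateFact[]): CandidateFact[] {
  return facts.filter((f) => f.status === "accepted" || f.status === "edited");
}

// Review is always available: edit, accept, or discard.
function review(fact: CandidateFact, decision: CandidateStatus): CandidateFact {
  return { ...fact, status: decision };
}
```

Because every fact carries `sourceInteractionIds` and starts as `"pending"`, nothing an extraction model says can reach the UI without a user decision and a traceable source.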

Trust and control

Because this data is personal, I treated trust as a product requirement:

  • AI outputs are framed as candidates, not facts
  • Users can review, edit, or discard all suggestions
  • Model responses are grounded in logged data for traceability
  • Personally identifying information is never sent to the model
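One way to enforce the last rule is to swap names for stable placeholders before text leaves the server, then map them back in the response. This is a sketch under that assumption; the token format is illustrative, not Tether's actual implementation:

```typescript
// Replace known names with placeholder tokens before sending text to a model,
// and restore them afterwards. Illustrative sketch only.
function redact(
  text: string,
  names: string[]
): { redacted: string; map: Map<string, string> } {
  const map = new Map<string, string>();
  let redacted = text;
  names.forEach((name, i) => {
    const token = `[PERSON_${i + 1}]`;
    map.set(token, name);
    redacted = redacted.split(name).join(token); // replace every occurrence
  });
  return { redacted, map };
}

function restore(text: string, map: Map<string, string>): string {
  let out = text;
  for (const [token, name] of map) out = out.split(token).join(name);
  return out;
}
```

The model only ever sees `[PERSON_1]`, while the user-facing output reads naturally after `restore`.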

Stack

  • Frontend: Next.js (App Router), React, TypeScript, Tailwind
  • Backend: Supabase (auth), Postgres with row-level security
  • AI: OpenAI with structured outputs (Zod-style), server-only integration
  • Infra: Upstash for rate limiting

Schema changes are managed via migrations to keep local and deployed environments aligned.
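The rate limiting is delegated to Upstash in the deployed app; the underlying idea is roughly a fixed-window counter per user, which can be sketched standalone like this (not the Upstash API):

```typescript
// Fixed-window rate limiter sketch: allow up to `limit` requests per window.
// In Tether this is delegated to Upstash; this version only shows the idea.
class RateLimiter {
  private windows = new Map<string, { start: number; count: number }>();

  constructor(private limit: number, private windowMs: number) {}

  allow(key: string, now: number = Date.now()): boolean {
    const w = this.windows.get(key);
    // No window yet, or the old window has expired: start a fresh one.
    if (!w || now - w.start >= this.windowMs) {
      this.windows.set(key, { start: now, count: 1 });
      return true;
    }
    if (w.count < this.limit) {
      w.count += 1;
      return true;
    }
    return false; // over the limit for this window
  }
}
```

Keying by user id keeps one user's burst of AI requests from affecting anyone else's quota.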

Key features & decisions

  1. Database design: Postgres and row-level security shaped the schema from day one.
  2. Timeline and multi-person interactions: I centered the model around interactions rather than static profiles, since relationships evolve through events over time.
  3. AI source grounding: Users should always understand where information comes from.
  4. Privacy: Personal information should never be shared with the AI.
  5. Keyboard shortcuts and voice-to-text: Data entry has to stay effortless; the quality of the input determines the quality of the output.

Outcome

Tether evolved from a roadmap into a working application with:

  • authentication and user-scoped data
  • person profiles (categories, relationship warmth, contact methods)
  • interaction logging with multi-person support
  • global and per-person timelines
  • filtering, sorting, and multiple view modes
  • AI-assisted features:
    • structured fact extraction
    • relationship summaries
    • follow-up suggestions

Additional improvements (rate limiting, schema migrations, lint/build discipline) pushed it beyond a prototype toward something usable day-to-day.

Takeaways

  • AI positioning is a design decision: Framing outputs as editable candidates keeps the system trustworthy and usable.
  • Structure before intelligence: A clean data model made AI outputs significantly more reliable.
  • Documentation reduces thrash: User stories and schema planning prevented rework when complexity emerged.
  • Relational memory is not a solved problem: The hardest parts were defining what’s actually worth remembering, and when.
