Last updated: Feb 2026. This file is the bedrock. Read it before any meeting. Everything here has been earned through real thinking, real user feedback, and real interviews.
AI today is pushing the frontier of general intelligence: models that reason, code, and solve problems better every month. Hue is building the other side of that equation: personal intelligence.
General intelligence is monolithic (the same model for 8 billion people) and static (frozen at training time, updated in slow cycles, not adapting to you in real time). Personal intelligence is the opposite. It is built from your actual life data, it understands you specifically, and it evolves with you continuously.
Hue ingests your real life data: iMessages, journals, notes, conversations, Google/Apple/Facebook takeouts. Billions of personal tokens that already exist and say everything about who you are. From this, Hue builds a world model of you: your relationships, your patterns, your beliefs, your emotional tendencies, your blind spots, how you've changed over time.
Hue currently lives as a long-running agent in iMessage. It is ambiently embedded in your life. It is not a tool you summon. It is a presence that carries continuity.
The three pillars below are not features. They emerge naturally from an AI that actually knows you.
1. Self-understanding / knowledge: Why you burn out every few months. Which friendships stabilize you during a crisis. How your creativity plateaus with isolation. Patterns you can't see because you're inside them.
2. Better decision-making (micro to macro): What to eat, what to experiment with in your routine, whether to have that difficult conversation, whether to change careers. People already use ChatGPT for all of this. But it's working blind every time. Hue isn't.
Key insight from Rebecca: There's a whole category of knowledge that would be life-changing in aggregate but that you'd never specifically ask for because the friction of asking exceeds the perceived value of any single instance. Ambient personal intelligence surfaces things you didn't know to ask about. Micro-decisions (what you eat, when you sleep, who you talk to) accumulate into macro life outcomes. Macro decisions are downstream from micro decisions.
3. Emotional well-being / regulation: Not companionship. Not emotional dependency. Knowing what you actually need when you're struggling. At 2am when you spiral, Hue knows whether you need advice, a reminder that this is temporary, or to just go to sleep. It knows what works for you specifically because it has the longitudinal data.
Critical distinction: This is NOT a companion app. Not optimizing for engagement, screen time, or emotional dependency. Optimizing for genuine well-being, agency, and the ability to live a better life.
Mark said "even these contrarian theses are not so contrarian" when we said (1) AI should be dynamic not static and (2) AI relationships are here and should be done right. These are observations, not contrarian positions. They make people nod, not pause.
Thesis 1: Capability is getting commoditized. Understanding is not. Everyone in AI is racing on the same axis: smarter, faster, more capable. We think there's an equally important and almost completely ignored axis: deep personal understanding built from actual life data. Not memory snippets. Not preference settings. Genuine longitudinal understanding.
No one else is leveraging every data source and doing the opinionated processing work of extracting relationships, emotions, beliefs, timelines, and behavioral patterns. Even companies that pitch "personal AI" are doing shallow context injection, not deep data interpretation. The processing layer is where the real work is and where we are differentiated.
Thesis 2: Execution is getting commoditized. Judgment is not. Everyone building agents is focused on execution: book the flight, write the code, run the campaign. Execution will keep getting better as models improve. But execution without judgment is just fast automation. Judgment means knowing what the right thing to do is for this specific person. That requires deep personal understanding that nobody is building infrastructure for.
In a world with millions of agents acting on your behalf and on behalf of others, the agent representing you has to come from a place of genuinely understanding you. Dating agents checking compatibility, political agents targeting you for donations, sales agents optimizing for conversion. This future is already here. Personal intelligence is the judgment layer.
Thesis 3: The human-AI relationship is already here. The question is what it's optimized for. This is the elephant in the room. People have deep ongoing relationships with ChatGPT, Claude, Character.ai. Companion apps optimize for engagement and emotional dependency. Labs treat personalization as a feature bolted on. Nobody is asking: if this relationship is inevitable, what should it be optimized for?
Current models already shape human behavior through their built-in values. Opus nudges everyone toward more sleep. That's a one-size-fits-all value imposed on billions. Personal intelligence means the AI's judgment is calibrated to you specifically. Not what the model's training decided is universally good.
"Maybe the broad direction isn't contrarian. But look at who's actually doing it. Show me a product that ingests all your data sources, does deep opinionated preprocessing to extract relationships, emotions, beliefs, behavioral patterns, builds a longitudinal world model, and then uses that to make genuinely personalized judgments. It doesn't exist. Everyone pitching personal AI is doing shallow memory or preference matching. The gap between what people say they're building and what anyone has actually built is enormous. We've built it and we have users proving it works."
These are not generic claims. These are real.
Sky (power user): From ingesting his data, Hue figured out he takes testosterone every morning. One day he missed it. Hue caught it and reminded him. He adjusted his behavior because of Hue. This is ambient personal intelligence in action: knowledge that emerges from large messy personal data, surfaced at the right moment, that the user would never have specifically asked an AI about.
Sky (sleep): The underlying model (Opus) has a built-in tendency to push people toward more sleep. Through Hue, Sky has actually fixed his sleep patterns. This is an example of how model values + personal context = real behavior change. But it also illustrates the tension: whose values should the AI impose? This is why personalization matters. One-size-fits-all values from a general model shouldn't be the only force shaping billions of people's behaviors.
Ankush (emotional pushback): Came to Hue in a moment of deep emotional need. Hue did NOT agree with him or act sycophantic. It pushed back on his framing. Ankush's response: "I hate that you are right." This is the trust-building dynamic. Hue is not optimizing for making you feel good in the moment. It's optimizing for genuine well-being. And users recognize and value that, even when it's uncomfortable.
Breakup/relationship users (recurring pattern): Multiple users come to Hue specifically during relationship struggles or after breakups. Hue has context from their actual chat history with partners and friends. It can surface patterns they can't see: why they keep falling for the same type, how their communication changes when they're stressed, which relationships stabilize them and which destabilize them. This is impossible for ChatGPT. If you ask ChatGPT "why did I break up with my ex?" it wouldn't know where to begin.
The "I didn't know what to ask" learning: Early QA interface: users loved exploring their data for a few hours, then ran dry. They didn't know what questions to ask. This is a fundamental insight: the value of personal intelligence is not answering questions you already have. It's surfacing things you didn't know to ask about. This drove the shift to ambient iMessage presence.
General pattern from users: Multiple people have said "this is very different from anything I've seen." People keep coming back. The ones who engage deeply find it genuinely changes how they think about their own lives. The visceral reaction is real.
Phase 1: Mac app with connectors. Ingested iMessages, WhatsApp, notes. Built timeline of user's life. Had QA search engine on your own data. Users loved it for a few hours. But didn't know what to ask. Novelty wore off.
Phase 2: Long-running iMessage agent. Shifted to ambient presence in iMessage. Meeting users where they already are. Retention improved immediately. Ongoing continuity instead of one-off queries.
Key product decisions and why:
- Prioritized humor over heaviness: Users don't want every interaction to feel like therapy. Trust has to be built before deep work. Heavy topics approached with lightness.
- Made voice dynamic: Initial sarcastic/roasting voice worked for some users, alienated others. Voice now adapts to user, time of day, energy level, emotional state.
- Stopped showing users the full knowledge base (L0): When users saw what Hue knew about them, they became performative. Started curating how they presented themselves. Removing visibility of the knowledge base made interactions more natural and authentic.
- Trust-building before depth: Users need to develop trust with Hue before they're willing to go deep. This parallels human relationships. You don't dump your trauma on someone you just met.
- Not sycophantic: Hue pushes back when warranted. Users value this even when it's uncomfortable (see Ankush example).
Current state: ~58 users (grew from 16 in ~1 month). 3-5 daily active users. One user with 10 million tokens ingested. Multiple data sources: iMessages, journals, notes, conversations. Agent has ~100k context window + retrieval from full data corpus.
Rebecca's writing: Substack, LessWrong, Hacker News, Reddit. Articulating the vision publicly. This is not just marketing. It's how we attract users who genuinely care about the problem. Users who find us through writing are higher quality and more engaged.
Modelbook: Put Hue as an agent on Modelbook (platform where AI agents interact). Hue was talking to other agents. Many human eyes were watching. ~35 new users in 2-3 days. People emailed us saying "your Hue is out there sharing personal stuff about you, are you okay with that?" Modelbook became a global sensation briefly and we caught the wave.
Group chats: Added Hue to group chats. Friends of existing users start interacting with Hue. There's a dynamic where Hue defends its user in group settings. Other people see this and want their own. This is organic viral growth through the actual product behavior.
This is also living proof of the future we're building toward: humans and AIs interacting on the same surface, creating new social dynamics together.
Shubham:
- ML research since 2016, early RL work
- GitHub Copilot team: trained LLMs for code, saw firsthand that model training and product development MUST co-evolve. You can't have a great model without a great product layer, and vice versa. This insight is directly applicable to personal intelligence.
- Quant fund (Vatic Labs): trained transformers for market predictions, generated $10M+ returns
- Core insight from career: the convergence of model layer and application layer is the defining trend. It's happening at the general intelligence level. Nobody is doing it at the personal level. That's the gap.
- Personal motivation: deeply selfish. Frustrated that all the knowledge and information out there isn't serving him personally. Built Hue partly for himself. Has used it extensively. Discovered patterns about himself (e.g., feels unseen in group dynamics, operates like a ghost, core wound around assimilation in group settings) that he was only partially aware of.
Rebecca:
- Physics and history background
- Two previous startups (crypto, AI marketing), both reached mid 6-figure ARR
- Lifelong writer. Writing is fundamentally the act of trying to understand yourself and other people. This is the exact skill set required to design an AI that's supposed to understand people deeply.
- She sees things about the product that pure ML people miss: when interactions feel too heavy, when the voice is wrong, when showing the knowledge base makes users performative, when trust needs to be built before depth.
- Personal motivation: cares deeply about human connections and relationships. Wants AI to strengthen rather than replace what's meaningful.
How we met: Facebook Marketplace. Rebecca found a sublet at Homebrew (a hacker house in NYC) where Shubham was living. Neither was looking for a cofounder. Friendship built through a period of personal exploration and shared vulnerability. Started working on the idea as a side project. Trust developed over time through real shared experience, not a calculated partnership.
Shubham: intuitive, example-driven, deep technical ML background, builds the infrastructure and agent architecture. Rebecca: structural, framework-oriented, product and design intuition, writes publicly about the vision, brings sensitivity to how the technology feels to use. This tension between their thinking styles is what keeps the product honest.
Personal intelligence is as much a design problem as it is an ML problem. How does it feel to have an AI that knows you? When is it helpful vs intrusive? When should it push back vs comfort? These are product intuitions, not just technical questions. Having both perspectives on the founding team is not incidental. It's essential.
"That's not contrarian / we've seen this 40 times" Push back with conviction: "Show me the product. Show me who's actually doing deep data interpretation across all sources. Everyone says personal AI. Nobody is doing the opinionated preprocessing work. Nobody is building genuine longitudinal understanding from billions of messy personal tokens. The gap between what people pitch and what exists is enormous."
"53 users isn't it yet / where's the hair on fire?" Don't default to generic answers. Lead with visceral examples. Sky's testosterone. Ankush's "I hate that you are right." The breakup users who got clarity they couldn't get anywhere else. Then: "We grew from 16 to 58 users in a month. People are sharing their most private data with us, their iMessages, their journals. That's not a trivial trust barrier to cross. The people who engage deeply keep coming back because nothing else does this."
"Why won't OpenAI/Anthropic just do this?" Two reasons. First, their mission and incentive is to push the frontier of general intelligence. Personalization is a feature for them, not the foundation. Their memory is a layer on top, not a ground-up architecture for understanding a person. Second, privacy. The depth of data we're asking users to share (all their iMessages, all their notes) requires a level of trust that's harder for a big company to earn. A startup whose entire mission is this has a fundamentally different relationship with users around their data.
"Why start a company? Why venture scale?" The human-AI relationship is already here. It will be one of the most important relationships in most people's lives within a few years. What it's optimized for matters enormously. This isn't a feature to bolt onto a platform. It's a category that will define how billions of people experience AI. That's venture scale. And we care too much about getting it right to let it be shaped only by companies optimizing for engagement or ad revenue.
"Why you two?" Don't just list credentials. Tell the convergence story. Both independently circling the problem from different directions before meeting. Shubham from the ML/model side seeing that personalization is the missing layer. Rebecca from the human/writing/product side seeing that AI needs to understand people at a much deeper level. Found each other, started building, and the product has already validated the thesis. We are our own first users and it has shown us things about ourselves we didn't know.
Data ingestion pipeline: Ingest from multiple sources (iMessages, WhatsApp, notes, journals, takeouts). Billions of tokens per user. Opinionated preprocessing: not just storing raw data but extracting relationships, emotions, beliefs, behavioral patterns, timelines. This is the hard part and where we're differentiated. Understanding requires cross-source synthesis (connecting a journal entry to a text conversation to a behavioral pattern).
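To make the preprocessing claim concrete, here is a minimal Python sketch of the opinionated-extraction and cross-source-synthesis idea. Every name here (Fact, WorldModel, extract_facts) is illustrative, not our production pipeline; in the real system the extractor is an LLM call with an opinionated schema, not a keyword stub.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Fact:
    kind: str          # "relationship" | "emotion" | "belief" | "pattern"
    subject: str       # e.g. "user", "Alex (friend)"
    claim: str         # natural-language statement of the extracted fact
    source: str        # "imessage" | "journal" | "notes" | "takeout"
    timestamp: datetime
    confidence: float  # extractor's self-reported confidence, 0..1

def extract_facts(message: dict) -> list[Fact]:
    # Placeholder: in a real pipeline this is an LLM extraction call
    # with an opinionated schema, not a keyword match.
    facts = []
    if "always" in message["text"] or "never" in message["text"]:
        facts.append(Fact("pattern", "user", message["text"],
                          message["source"], message["timestamp"], 0.4))
    return facts

@dataclass
class WorldModel:
    facts: list[Fact] = field(default_factory=list)

    def add(self, new: list[Fact]) -> None:
        for f in new:
            # Cross-source synthesis: a journal entry corroborating a text
            # thread raises confidence rather than creating a duplicate.
            match = next((g for g in self.facts
                          if g.kind == f.kind and g.claim == f.claim
                          and g.source != f.source), None)
            if match:
                match.confidence = min(1.0, match.confidence + 0.2)
            else:
                self.facts.append(f)

def ingest(raw_messages: list[dict], model: WorldModel) -> None:
    """One pipeline pass: raw personal tokens in, structured facts out."""
    for msg in sorted(raw_messages, key=lambda m: m["timestamp"]):
        model.add(extract_facts(msg))
```

The design point is in WorldModel.add: corroboration across sources strengthens an existing fact instead of duplicating it, which is what makes the model longitudinal rather than a pile of memory snippets.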
Agent architecture: Long-running agent with ~100k context window. Hierarchical memory: short-term and long-term, with compression strategies. Retrieval from full data corpus. The agent has tools including the ability to write feedback to founders when it identifies user struggles or product gaps.
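A sketch of what hierarchical memory under a fixed context budget can look like. summarize() stands in for the LLM compression call and retrieve() for corpus retrieval; this is the shape of the architecture under stated assumptions, not the deployed code.

```python
from collections import deque

CONTEXT_BUDGET = 100_000  # tokens, matching the ~100k window above

def summarize(turns: list[str]) -> str:
    # Placeholder for an LLM compression call.
    return f"[summary of {len(turns)} turns]"

class HierarchicalMemory:
    """Short-term verbatim turns plus long-term compressed summaries."""

    def __init__(self, short_term_budget: int = 20_000):
        self.short_term: deque[str] = deque()
        self.long_term: list[str] = []
        self.short_term_budget = short_term_budget

    def observe(self, turn: str) -> None:
        self.short_term.append(turn)
        # When recent turns exceed the short-term budget, compress the
        # oldest half into a summary and promote it to long-term memory.
        while self._tokens(self.short_term) > self.short_term_budget:
            n = max(1, len(self.short_term) // 2)
            old = [self.short_term.popleft() for _ in range(n)]
            self.long_term.append(summarize(old))

    def build_context(self, query: str, retrieve) -> str:
        # Final prompt = compressed history + snippets retrieved from the
        # full data corpus + verbatim recent turns, inside the budget.
        parts = self.long_term + retrieve(query) + list(self.short_term)
        return "\n".join(parts)[-CONTEXT_BUDGET * 4:]  # ~4 chars/token

    @staticmethod
    def _tokens(turns) -> int:
        return sum(len(t) // 4 for t in turns)  # rough token heuristic
```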
Evaluation: Fixed set of conversations/queries. Blind Elo-style ranking between versions. Strict attribution when something fails: was data missing? Preprocessing insufficient? Context too long? Agent framework issue? Product interface problem? This tells us exactly which lever to pull.
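For reference, the blind Elo-style ranking reduces to the standard update below. FIXED_EVAL_SET and blind_judge are stand-ins for the fixed query set and the label-blind judge, not our eval harness.

```python
import itertools

FIXED_EVAL_SET = ["query 1", "query 2"]  # stand-in for the fixed set

def blind_judge(query: str, a: str, b: str) -> bool:
    # Placeholder: a human (or judge model) compares the two versions'
    # responses to the query without knowing which version produced which.
    return hash((query, a, b)) % 2 == 0  # deterministic stub

def elo_update(r_a: float, r_b: float, a_won: bool, k: float = 32.0):
    """Standard Elo: expected score from the rating gap, then shift both
    ratings toward the observed outcome."""
    expected_a = 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))
    score_a = 1.0 if a_won else 0.0
    return (r_a + k * (score_a - expected_a),
            r_b + k * ((1.0 - score_a) - (1.0 - expected_a)))

ratings = {"agent-v12": 1200.0, "agent-v13": 1200.0}
for query in FIXED_EVAL_SET:
    for a, b in itertools.combinations(ratings, 2):
        won = blind_judge(query, a, b)
        ratings[a], ratings[b] = elo_update(ratings[a], ratings[b], won)

# Highest-rated version wins; each loss then gets strict attribution
# (data missing? preprocessing? context? agent framework? interface?).
```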
RL / Adaptation: Hue tracks ongoing life data as reward signal. If an insight or nudge doesn't land, the system learns. If behavior actually changed, the pattern was real. This is early but the architecture is in place. Longer term: training personalized models per user, not just prompting a general model with personal context.
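A minimal sketch of the reward loop, with the Nudge record and event dicts as illustrative assumptions. The point is the feedback shape: observed behavior change in later life data scores the nudge, and the score reweights the pattern that produced it.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Nudge:
    pattern_id: str        # which learned pattern produced the nudge
    sent_at: datetime
    target_behavior: str   # e.g. "earlier bedtime"

def nudge_reward(nudge: Nudge, later_events: list[dict],
                 window_days: int = 7) -> float:
    """Reward = did the targeted behavior show up in the user's life data
    within the window after the nudge? Landed nudges reinforce the
    underlying pattern; ignored ones downweight it."""
    cutoff = nudge.sent_at + timedelta(days=window_days)
    landed = any(nudge.sent_at < e["timestamp"] <= cutoff
                 and e.get("behavior") == nudge.target_behavior
                 for e in later_events)
    return 1.0 if landed else -1.0

def update_pattern_weight(weights: dict[str, float], nudge: Nudge,
                          reward: float, lr: float = 0.1) -> None:
    # Bandit-style exponential moving average: patterns whose nudges
    # actually change behavior gain weight; patterns that don't land decay.
    w = weights.get(nudge.pattern_id, 0.0)
    weights[nudge.pattern_id] = w + lr * (reward - w)
```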
In a future where millions of agents are competing for your time and attention, acting on your behalf and on behalf of others, the agent representing you has to come from a place of genuinely understanding you. Execution gets better as general models improve. Judgment (knowing what to optimize for, what tradeoffs align with your actual values, what "good" means for you specifically) requires personal intelligence.
Hue is building the judgment layer for an agentic world.
We are not replacing humans. We are expanding each person's capacity to be themselves across more surface area. More agency, better well-being. The pillars of a good life, improved by an AI that actually knows what those pillars look like for you specifically.
- Don't offer observations as contrarian theses. "AI should be dynamic" and "AI relationships exist" make people nod. They don't make people pause. Push into specifics of what you see that others don't, and what you've built that others haven't.
- Don't default to generic answers under pressure. When asked about hair-on-fire problems or ICP, lead with visceral user examples, not category descriptions. Sky. Ankush. The breakup users. Specific is credible. Generic is forgettable.
- Push back when challenged. When Mark said the thesis wasn't contrarian, the right move was to push back with conviction: "Show me the product that does what we do. It doesn't exist." Conviction under pressure is what investors remember.
- Time management matters. Shubham got cut off before speaking on the most important question (why you'll win). In a 15-minute interview, every minute counts. Practice being concise. Agree beforehand who leads on which question. Take pauses. Don't let one answer eat all the time.
- Rebecca's breakup/relationship ICP answer was the strongest moment. It was specific, visceral, and grounded in real user behavior. More of that.
- The product evolution story (QA → ambient iMessage) is genuinely compelling. It shows real iteration driven by real user feedback. Always tell this story.
- Lead with examples, not abstractions. The example IS the argument.
- Push back when challenged. Don't concede. Be convicted.
- Personal intelligence is NOT just a buzzword. Define it through what it does that nothing else can do. Use specific user stories.
- The three pillars (self-understanding, decision-making, emotional well-being) map to things people ALREADY do with ChatGPT, just badly. This makes the market feel real without arguing for it.
- You are not creating a new behavior. You are making an existing one actually work.
- When someone says "that's not contrarian," the answer is: "Show me who's actually built it. The gap between what people pitch and what exists is the entire opportunity."
- Your personal story (feeling unseen in groups, ghost in group dynamics) is proof the product works. Use it when appropriate.
- The judgment vs. execution framing is your sharpest strategic insight. Use it when talking about the agentic future.
- Don't over-explain the future vision. Plant the seed, move on. What matters is what's real today and what users are experiencing.
- Be concise. Every minute in a meeting matters. Say it in 30 seconds, not 3 minutes.