
Product Sense and Business Sense for Engineers

A Director's Guide to Building Business-Savvy Engineering Organizations


Table of Contents

  1. Introduction
  2. Core Concepts
  3. Glossary
  4. Core Principles
  5. Framework for Application
  6. Practical Examples
  7. Case Studies
  8. Building These Skills
  9. For Engineering Leaders

Introduction

Why Product and Business Sense Matter for Engineers

Over my years leading engineering organizations, I've observed a clear pattern: the engineers who advance fastest and create the most impact aren't necessarily the most technically brilliant. They're the ones who understand why we're building what we're building, and how it connects to business outcomes.

When I joined my first startup, I watched two senior engineers approach the same problem. Engineer A spent three weeks building a perfect, scalable solution that could handle 10x our current load. Engineer B asked one simple question: "What happens to our revenue if we ship this in three days instead of three weeks?" That question changed everything. We shipped the MVP, learned our assumptions were wrong, and pivoted before wasting another two weeks.

The reality is this: In today's environment, being a great engineer means being a business partner, not just a code producer.

The Competitive Advantage of Business-Savvy Engineers

Engineers with product and business sense:

  • Make better technical decisions because they understand the context
  • Communicate more effectively with stakeholders across the organization
  • Get promoted faster because they demonstrate impact, not just output
  • Build products users actually want instead of technically impressive solutions nobody needs
  • Save the company money by knowing when to choose pragmatism over perfection
  • Become trusted advisors to product and executive leadership

How This Impacts Career Trajectories

Let me be direct: if you want to move into senior IC roles (Staff+) or engineering leadership, product and business sense aren't optional. They're table stakes.

Here's what I look for when evaluating engineers for promotion:

| Level  | Technical Skill               | Product/Business Sense         | Impact Scope         |
|--------|-------------------------------|--------------------------------|----------------------|
| Junior | Learning fundamentals         | Understands team goals         | Individual tasks     |
| Mid    | Executes independently        | Connects work to product value | Features/projects    |
| Senior | Technical authority in domain | Shapes product direction       | Product area         |
| Staff+ | Sets technical strategy       | Drives business outcomes       | Org-wide initiatives |

Notice that progression isn't linear on technical skill alone. The multiplier is understanding business context.


Core Concepts

What is Product Sense?

Product sense is the ability to understand what makes a product valuable to users and how to make it better. It's the skill of:

  1. Understanding User Needs

    • Empathizing with user pain points, not just stated requirements
    • Recognizing the difference between what users ask for and what they actually need
    • Identifying unmet needs that users haven't articulated
  2. Anticipating Product Requirements

    • Seeing downstream implications of current decisions
    • Asking "what happens next?" before building anything
    • Understanding how features fit into the larger product vision
  3. Evaluating Product Decisions

    • Judging whether a feature will actually solve the problem
    • Understanding trade-offs between user experience and technical complexity
    • Knowing when "good enough" is better than "perfect"

Example: A PM asks you to add a search filter to a dashboard.

  • Without product sense: You build exactly what was asked for
  • With product sense: You first ask: "What are users trying to accomplish? Is search the right solution, or do they need better default filtering? How will this scale as we add more data types?"

What is Business Sense?

Business sense is understanding how the company makes money and how your work contributes to that. It encompasses:

  1. Understanding Business Models

    • How does the company generate revenue?
    • What are the key cost drivers?
    • Which customers are most valuable and why?
  2. ROI and Opportunity Cost Thinking

    • Every engineering hour has a cost
    • Choosing to build X means not building Y
    • Understanding the economic value of technical decisions
  3. Market Dynamics and Competitive Positioning

    • What makes us different from competitors?
    • What are the market trends affecting our business?
    • How do technical decisions impact our competitive advantage?

Example: Your team is debating whether to rewrite a legacy system.

  • Without business sense: Focus solely on technical debt and engineering happiness
  • With business sense: You calculate: "This will take 4 engineers × 3 months = $240K in opportunity cost. What revenue-generating features could we build instead? Can we refactor incrementally while shipping new features?"

The Engineering-Product-Business Triangle

Great companies operate at the intersection of three domains:

        PRODUCT
           /\
          /  \
         /    \
        /      \
       /________\
ENGINEERING   BUSINESS
  • Engineering alone: Technically impressive but useless
  • Product alone: Great ideas that can't be built or don't make money
  • Business alone: Profitable in the short term but not sustainable

Your job as an engineer is to operate in the center of this triangle.

This means:

  • Bringing technical feasibility to product discussions
  • Bringing user value to technical architecture decisions
  • Bringing business constraints to both

Glossary

Product Terms

MVP (Minimum Viable Product) The smallest version of a product that allows you to learn from real users. Not "minimum" quality, but minimum scope to test hypotheses.

Product-Market Fit (PMF) When you've built something people desperately want. You can feel it: organic growth, high retention, users upset when it breaks.

User Story A description of a feature from the user's perspective: "As a [user type], I want to [action] so that [benefit]."

Job-to-be-Done (JTBD) The underlying problem a user is trying to solve. Users don't want a drill; they want a hole in the wall.

Persona A semi-fictional representation of your target user, based on research and data. Helps teams build empathy and make user-centered decisions.

Feature Flag A technique to enable/disable features without deploying new code. Critical for testing and gradual rollouts.

A/B Test Comparing two versions of a feature to see which performs better. The foundation of data-driven product development.

Business Terms

Revenue Money coming in from customers. This is what keeps the lights on.

ARR (Annual Recurring Revenue) The yearly value of recurring revenue from subscriptions. Key metric for SaaS businesses.

MRR (Monthly Recurring Revenue) The monthly value of recurring revenue. ARR ÷ 12, but more useful for tracking month-to-month growth.

CAC (Customer Acquisition Cost) How much it costs to acquire a new customer: (marketing + sales costs) ÷ number of new customers.

LTV (Lifetime Value) The total revenue you expect from a customer over their entire relationship with your company.

LTV:CAC Ratio A key health metric. Generally want 3:1 or better. If LTV is less than CAC, you lose money on every customer.

Churn Rate The percentage of customers who leave in a given period. The silent killer of SaaS businesses.

Gross Margin Revenue minus cost of goods sold, expressed as a percentage. Shows how much you keep from each dollar earned.

Burn Rate How much money the company is losing per month. Critical for runway calculations.

Runway How long the company can operate before running out of money (cash ÷ burn rate).

TAM/SAM/SOM

  • TAM (Total Addressable Market): Everyone who could possibly use your product
  • SAM (Serviceable Addressable Market): The portion you can reach with your model
  • SOM (Serviceable Obtainable Market): What you can realistically capture in the near term

Unit Economics The revenue and costs associated with a single unit (customer, transaction, etc.). Needs to be positive for business viability.

Gross Profit Revenue minus direct costs. The money available to cover operating expenses.

EBITDA Earnings Before Interest, Taxes, Depreciation, and Amortization. A measure of operational profitability.

Metrics Terms

North Star Metric The single metric that best captures the core value you deliver to customers. For Airbnb: nights booked. For Uber: rides completed.

KPI (Key Performance Indicator) Metrics that indicate whether you're achieving business objectives. Different from vanity metrics.

OKR (Objectives and Key Results) A goal-setting framework: Objective (what you want to achieve) + Key Results (how you'll measure success).

Conversion Rate The percentage of users who complete a desired action (signup, purchase, etc.).

Engagement Rate How actively users interact with your product. Can be measured many ways: DAU/MAU, sessions per user, etc.

DAU/MAU (Daily/Monthly Active Users) Users who used the product in the last day/month. DAU/MAU ratio indicates stickiness.

Retention Rate The percentage of users who return after their first visit. Cohort-based retention is more informative.

NPS (Net Promoter Score) "How likely are you to recommend us?" (0-10). NPS = % Promoters (9-10) minus % Detractors (0-6).


Core Principles

1. User-Centricity: Building for Outcomes, Not Outputs

The Principle: Users don't care about features. They care about accomplishing their goals.

What This Means in Practice:

  • Before building, ask: "What is the user trying to accomplish?"
  • Measure success by user outcomes, not feature completion
  • Be willing to throw away code if it doesn't serve users

Example: Your PM asks for a "bulk edit" feature. Instead of immediately designing the API:

  1. Ask: "What are users trying to accomplish with bulk edit?"
  2. Learn: They're fixing data quality issues from a CSV import
  3. Realize: The real problem is import validation
  4. Build: Better import validation instead of bulk edit

This saved weeks of engineering time and solved the actual problem.

2. Business Impact Orientation: Every Line of Code Should Tie to Value

The Principle: Engineering is an investment. Every story point spent should generate returns.

What This Means in Practice:

  • Before starting work, articulate the business value
  • Push back on work that doesn't have clear value
  • Understand which metrics your work should move

The Impact Hierarchy:

  1. Tier 1: Directly increases revenue or reduces churn
  2. Tier 2: Improves conversion funnels or user engagement
  3. Tier 3: Reduces operational costs or technical risk
  4. Tier 4: Improves team productivity or code quality
  5. Tier 5: "Nice to have" or purely technical preference

Not all work needs to be Tier 1, but you should know which tier you're operating in and why.

3. Data-Driven Decision Making

The Principle: Opinions are cheap. Data is valuable. But you need to know which data matters.

What This Means in Practice:

  • Instrument your features from day one
  • Define success metrics before building
  • Use data to validate assumptions, not just confirm biases
  • Know the difference between correlation and causation

The Data Hierarchy:

  1. User behavior data: What users actually do
  2. User feedback: What users say they do
  3. Stakeholder opinions: What we think users do

Trust in that order.

Anti-Pattern: Building features because "the CEO's friend mentioned it" or "our competitor has it."

Better Approach: "Let's instrument the current flow, understand where users drop off, and validate whether this feature addresses the actual problem."

4. Strategic vs. Tactical Thinking

The Principle: Junior engineers solve the immediate problem. Senior engineers solve the right problem for the next 2-3 years.

Strategic Thinking Questions:

  • Where is this product going in 12-24 months?
  • What are we building today that we'll wish we hadn't?
  • What are we not building today that we'll desperately need?
  • How does this decision constrain or enable future decisions?

Tactical Thinking Questions:

  • Can we ship this by Friday?
  • Will this pass code review?
  • Does this fix the bug?

Both are necessary. Great engineers know when to switch modes.

Example: You're building a notification system.

  • Tactical: Use email, ship in a week
  • Strategic: Build an abstraction layer that supports email, SMS, push, and in-app notifications

The strategic approach takes 2x as long but saves 10x the time when you need to add new channels.
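To make the strategic option concrete, here's a minimal sketch of what that abstraction layer might look like (Python; class and method names are illustrative, and the real interface would depend on your stack):

```python
from abc import ABC, abstractmethod

class NotificationChannel(ABC):
    """One class per delivery channel; callers never know which ones exist."""
    @abstractmethod
    def send(self, user_id: str, message: str) -> None: ...

class EmailChannel(NotificationChannel):
    def send(self, user_id: str, message: str) -> None:
        print(f"email to {user_id}: {message}")  # stand-in for a real email provider call

class Notifier:
    """Call sites depend on this, not on any concrete channel."""
    def __init__(self, channels: list[NotificationChannel]):
        self.channels = channels

    def notify(self, user_id: str, message: str) -> None:
        for channel in self.channels:
            channel.send(user_id, message)

# Ship with email only; SMS, push, and in-app slot in later as new classes.
notifier = Notifier([EmailChannel()])
notifier.notify("user-42", "Your report is ready")
```

The tactical version hardcodes the email call everywhere; this version costs a little more now so that adding a channel later is one new class, not a hunt through every call site.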

5. The "Why" Before the "How"

The Principle: If you don't understand why you're building something, you can't make good decisions about how to build it.

What This Means in Practice:

  • Ask "why?" three times before starting work
  • Understand the user story behind every ticket
  • Challenge requirements that don't have clear reasoning

The Five Whys Example:

  • PM: We need to add OAuth login
  • You: Why?
  • PM: Users are asking for it
  • You: Why are users asking for it?
  • PM: They don't want to create another password
  • You: Why is that a problem for them?
  • PM: They're signing up from mobile and typing passwords is painful
  • You: Why don't we just improve the password input UX or add passwordless magic links?
  • PM: ... that's actually a better solution

This is not being difficult. This is being a partner.


Framework for Application

Daily Engineering Decisions

Feature Prioritization Input

Your Role: You're not just executing the roadmap. You're helping shape it.

Questions to Ask:

  1. Value: What user problem does this solve? How many users have this problem?
  2. Effort: What's the engineering complexity? Are there hidden costs?
  3. Risk: What could go wrong? What's the rollback plan?
  4. Alternatives: What else could we build with these resources?
  5. Sequencing: Does this unlock future work? Do we need other work first?

Framework: RICE Scoring

  • Reach: How many users will this impact?
  • Impact: How much will it improve their experience? (0.25x to 3x)
  • Confidence: How sure are we? (percentage)
  • Effort: How many engineering weeks?

Score = (Reach × Impact × Confidence) ÷ Effort

Example: Feature A: Social sharing

  • Reach: 1000 users/month
  • Impact: 1x (moderate improvement)
  • Confidence: 80%
  • Effort: 4 weeks
  • Score: (1000 × 1 × 0.8) ÷ 4 = 200

Feature B: Onboarding optimization

  • Reach: 5000 users/month
  • Impact: 2x (high improvement)
  • Confidence: 70%
  • Effort: 2 weeks
  • Score: (5000 × 2 × 0.7) ÷ 2 = 3500

Feature B is 17.5x more valuable. This is how you have productive prioritization conversations.
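The arithmetic is simple enough to script. A minimal sketch (Python, using the numbers from the two features above):

```python
def rice_score(reach: float, impact: float, confidence: float, effort_weeks: float) -> float:
    """Score = (Reach x Impact x Confidence) / Effort."""
    return (reach * impact * confidence) / effort_weeks

print(rice_score(1000, 1.0, 0.8, 4))  # Feature A: 200.0
print(rice_score(5000, 2.0, 0.7, 2))  # Feature B: 3500.0
```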

Technical Architecture Choices

Framework: Architecture Decision Records (ADRs)

Every significant technical decision should answer:

  1. Context: What's the business/product situation?
  2. Decision: What are we doing?
  3. Alternatives: What else did we consider?
  4. Consequences: What are the trade-offs?
  5. Business Impact: How does this affect user value and costs?

Example Decision: Choosing a Database

Context: We're building a user analytics dashboard. We need to query 1M+ events per user across multiple dimensions. Business goal is to increase user retention by surfacing insights.

Alternatives:

  1. Postgres with materialized views

    • Pros: We already use it, team knows it, low operational cost
    • Cons: Query performance degrades with scale, complex indexing
    • Cost: $500/month in current infra
  2. Elasticsearch

    • Pros: Fast queries, great for analytics, powerful aggregations
    • Cons: Another system to maintain, higher operational cost, team learning curve
    • Cost: $2000/month + 2 weeks of engineering time
  3. ClickHouse

    • Pros: Built for analytics, extremely fast, columnar storage
    • Cons: Specialized tool, team learning curve, limited ecosystem
    • Cost: $800/month + 3 weeks of engineering time

Decision: Postgres with materialized views for MVP, with abstraction layer for future migration.

Rationale:

  • Time to market is critical (PM wants to test this in 2 weeks)
  • We're not sure which analytics users want yet
  • We can test with Postgres, learn what queries matter, then optimize
  • The abstraction layer means we can swap out the backend without changing application code
  • If this feature drives retention, we'll have budget for Elasticsearch

This is business-aware technical decision-making.
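For illustration, the "abstraction layer" in that decision can be as small as one interface that hides the storage engine (a hypothetical sketch; the class and method names are mine, not from the ADR):

```python
from abc import ABC, abstractmethod

class AnalyticsStore(ABC):
    """Dashboard code depends on this; only subclasses know the backend."""
    @abstractmethod
    def events_by_dimension(self, user_id: str, dimension: str) -> dict[str, int]: ...

class PostgresAnalyticsStore(AnalyticsStore):
    def events_by_dimension(self, user_id: str, dimension: str) -> dict[str, int]:
        # MVP path: in production this would query a materialized view.
        return {"page_view": 120, "click": 45}  # canned data for the sketch

# If the feature earns its keep, a ClickHouseAnalyticsStore or
# ElasticsearchAnalyticsStore implements the same interface and swaps in
# without touching dashboard code.
store: AnalyticsStore = PostgresAnalyticsStore()
print(store.events_by_dimension("user-42", "event_type"))
```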

Build vs. Buy Decisions

Framework: Total Cost of Ownership (TCO)

Build Costs:

  • Development time (engineer hours × hourly cost)
  • Opportunity cost (what else could we build?)
  • Maintenance burden (ongoing)
  • Technical debt accumulation

Buy Costs:

  • License/subscription fees
  • Integration time
  • Vendor lock-in risk
  • Customization limitations

Decision Matrix:

| Factor          | Weight | Build     | Buy          |
|-----------------|--------|-----------|--------------|
| Time to market  | High   | Slow      | Fast         |
| Customization   | Medium | Total     | Limited      |
| Core competency | High   | Relevant  | Not relevant |
| Long-term cost  | High   | Variable  | Predictable  |
| Maintenance     | High   | Your team | Vendor       |

Example: Should we build our own email sending system?

Analysis:

  • Not core competency ✗
  • Commodity feature ✗
  • Email deliverability is hard ✗
  • Sendgrid is $10/month ✓

Decision: Buy. This took 5 minutes to decide, not 5 weeks to build.

Counter-Example: Should we build our own recommendation engine?

Analysis:

  • Core product differentiator ✓
  • Competitive advantage ✓
  • Unique data/algorithms ✓
  • Off-the-shelf solutions don't fit ✓

Decision: Build. This justifies 2 engineers for 6 months.

Cross-Functional Collaboration

Asking the Right Questions

To Product Managers:

  1. "What user problem are we solving?" (not what feature)
  2. "How will we measure success?" (define metrics upfront)
  3. "What did we learn from talking to users?" (validate assumptions)
  4. "What's the rollback plan?" (de-risk deployments)
  5. "What happens if we don't build this?" (understand urgency)
  6. "Can we test this assumption with less code?" (MVP thinking)

To Business/Leadership:

  1. "What's the revenue impact of this work?" (quantify value)
  2. "How does this affect our runway?" (understand financial constraints)
  3. "What's the competitive pressure?" (understand market timing)
  4. "What's the penalty for being late vs. being wrong?" (risk assessment)
  5. "Are we building for this quarter or for the next three years?" (strategic vs. tactical)

To Design:

  1. "Have we validated this with users?" (research-backed)
  2. "What's the technical complexity of this interaction?" (feasibility check)
  3. "Can we simplify this?" (ruthless simplification)
  4. "What's the mobile experience?" (don't forget mobile)
  5. "How does this scale with more data?" (edge cases)

Challenging Requirements Productively

The Wrong Way: "This requirement is stupid. It'll never work."

The Right Way: "I want to make sure I understand the goal here. We're adding this feature because [restate the business goal]. I'm concerned that [specific technical/user issue]. What if we tried [alternative approach] instead? That would give us [same business outcome] with [these benefits]."

Framework: "Yes, and..." not "No, but..."

Example:

PM: "We need real-time notifications for every user action."

Wrong Response: "That's not possible. It'll bring down the server."

Right Response: "I love that we're prioritizing responsiveness. Real-time for everything would be challenging at scale. What if we identified the top 3 most important actions that need to be real-time, and batched the rest? We'd get 80% of the user value with 20% of the technical complexity. Which actions matter most for the user experience?"

This is collaborative problem-solving, not gatekeeping.

Communication Patterns

Translating Technical Concepts to Business Value

The Pattern: [Technical thing] → enables → [User benefit] → which leads to → [Business outcome]

Examples:

Bad: "We need to refactor the authentication system to use JWT tokens instead of session cookies."

Good: "We're updating our authentication system. This will let users stay logged in across devices, which should reduce friction in our onboarding funnel and improve conversion from 40% to 50%. That's an extra $50K/month in revenue."


Bad: "We're implementing database indexing on the queries."

Good: "We're optimizing database performance to reduce page load times from 3 seconds to under 1 second. Faster load times typically increase conversion by 10-15%, which would add ~200 new customers per month."


Bad: "We need to implement caching."

Good: "We're adding caching to reduce our AWS bill by 40% (~$10K/month) while making the app faster for users. This is a high ROI investment: 1 week of engineering time for $120K/year in savings."

Writing Effective Proposals and RFCs

Structure:

  1. Executive Summary (2-3 sentences)

    • What are we doing and why does it matter?
  2. Business Context

    • What's the opportunity or problem?
    • What's the user impact?
    • What's the business impact?
  3. Proposal

    • What exactly are we building?
    • How does it work? (high level)
  4. Alternatives Considered

    • What else did we look at?
    • Why did we reject those options?
  5. Implementation Plan

    • What are the phases?
    • What are the milestones?
    • What's the timeline?
  6. Risks and Mitigations

    • What could go wrong?
    • How will we handle it?
  7. Success Metrics

    • How will we know this worked?
    • What KPIs will we track?
  8. Resource Requirements

    • How many engineers?
    • For how long?
    • What's the opportunity cost?

Example RFC Structure:

# RFC: Implement Recommendation Engine

## Executive Summary
Build a personalized recommendation engine to increase user engagement
and reduce churn by surfacing relevant content. Expected impact: +15%
DAU/MAU ratio, -10% churn rate.

## Business Context
**Problem:** 60% of users don't return after their first session. Exit
surveys indicate users can't find relevant content.

**Opportunity:** Competitors with recommendations see 2x engagement.
Our data science team has prototyped an algorithm with promising results.

**Impact:**
- Revenue: +$500K/year from reduced churn
- Engagement: +15% DAU/MAU (from 0.35 to 0.40)
- Retention: Day-7 retention from 25% to 35%

## Proposal
[Technical details...]

## Alternatives Considered
1. **Manual curation:** Doesn't scale, too much content
2. **Third-party service:** Tried two, neither worked with our data model
3. **Simple popularity algorithm:** Prototyped, showed limited improvement

## Implementation Plan
- Phase 1 (2 weeks): Infrastructure and data pipeline
- Phase 2 (3 weeks): Algorithm implementation
- Phase 3 (1 week): A/B test and iteration
**Total: 6 weeks, 2 engineers**

## Risks and Mitigations
- Risk: Algorithm performs poorly → Mitigation: A/B test before full rollout
- Risk: Latency impact → Mitigation: Pre-compute recommendations hourly
- Risk: Cold start problem → Mitigation: Fall back to popularity for new users

## Success Metrics
**Primary:** DAU/MAU ratio increases from 0.35 to 0.40
**Secondary:** Day-7 retention improves from 25% to 35%
**Guardrail:** P95 page load time stays under 2 seconds

## Resource Requirements
- 2 senior engineers × 6 weeks
- Data science support (already committed)
- Opportunity cost: Delays mobile app improvements by 1 sprint

This RFC makes it easy for stakeholders to understand the value and make informed decisions.


Practical Examples

Example 1: The Premature Optimization Trap

Situation: Your team is building a new feature. An engineer proposes spending 3 weeks building a caching layer because "we'll need it eventually."

Without Product/Business Sense: "Sounds good, let's build for scale from day one."

With Product/Business Sense:

Analysis:

  1. How many users will use this feature in the first month? (100)
  2. What's the performance threshold for good UX? (<2 seconds)
  3. What's the current performance without caching? (0.5 seconds)
  4. What's the opportunity cost of 3 weeks? (Could build two other features)

Decision: "Let's ship without the caching layer and instrument performance. If we see load times over 1.5 seconds or get beyond 1000 users, we'll add caching then. That gives us 3 extra weeks to ship value now."

Outcome: Shipped the feature early, learned users wanted something different entirely, pivoted without having wasted time on infrastructure we didn't need.

Lesson: Optimize for learning, not hypothetical scale.


Example 2: The Build vs. Buy Decision

Situation: Product wants an in-app chat feature. An engineer is excited to build a real-time chat system from scratch.

Without Product/Business Sense: "This will be fun to build! WebSockets, message queues, the works!"

With Product/Business Sense:

Analysis:

  1. Is chat a core differentiator? (No, it's table stakes)
  2. What's the engineering cost? (2 engineers × 8 weeks = $80K opportunity cost)
  3. What's the ongoing maintenance? (On-call, scaling, spam prevention, etc.)
  4. What's the third-party cost? (Stream or Sendbird: $500/month)

Decision: "Let's use Stream for chat. It'll cost $6K/year vs. $80K to build plus ongoing maintenance. We can ship in 1 week instead of 8, and spend those 7 weeks building features that actually differentiate us."

Outcome: Shipped chat in a week, spent the saved time building the core product, grew faster.

Lesson: Not everything needs to be built in-house.
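The arithmetic behind that decision fits in a few lines (the loaded cost per engineer-week is my assumption; the rest comes from the example):

```python
# Build: 2 engineers x 8 weeks, at an assumed loaded cost per engineer-week.
COST_PER_ENG_WEEK = 5_000
build_cost = 2 * 8 * COST_PER_ENG_WEEK  # $80,000 up front, before maintenance

# Buy: $500/month subscription.
buy_cost_per_year = 500 * 12            # $6,000/year

print(build_cost / buy_cost_per_year)   # ~13.3 years before building breaks even
```

And that break-even ignores ongoing maintenance, which only pushes it further out.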


Example 3: The API Design Trap

Situation: You're designing an API. Your instinct is to make it maximally flexible with dozens of optional parameters.

Without Product/Business Sense: Build a highly generic API that can handle every possible use case.

With Product/Business Sense:

Analysis:

  1. Who are the API consumers? (Just our own frontend right now)
  2. What are the actual use cases? (Two: listing items, getting a single item)
  3. What's the maintenance cost of a complex API? (High - versioning, docs, support)
  4. Can we add flexibility later? (Yes, via backward-compatible additions)

Decision: "Let's build the simplest API that handles our current use cases. Two endpoints: GET /items and GET /items/:id. We can add filtering, sorting, and pagination when we have real requirements."

Outcome: Shipped faster, simpler code, easier to maintain. Added complexity only when needed.

Lesson: YAGNI (You Aren't Gonna Need It) - Build for today's requirements, not tomorrow's hypotheticals.


Example 4: The Feature Flag Decision

Situation: Product wants to test a new pricing model. They suggest deploying it to 10% of users.

Without Product/Business Sense: "Sure, I'll build it and we'll deploy it."

With Product/Business Sense:

Questions:

  1. How will we measure success? (Conversion rate, revenue per user)
  2. What's the rollback plan if it hurts conversion? (Feature flag)
  3. How will we avoid bias in the test? (Random assignment, track cohorts)
  4. What's the minimum sample size? (Statistical significance calculator: need 2000 users)
  5. How long will the test run? (2 weeks to collect enough data)

Approach:

  1. Implement with feature flag
  2. Build instrumentation first (before the feature)
  3. Create dashboard to track key metrics by cohort
  4. Document the experiment hypothesis
  5. Set a decision deadline (2 weeks)

Outcome: Ran a clean experiment, got clear data, made an informed decision.

Lesson: Experiments need infrastructure and planning, not just code.
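For reference, the rollout mechanics can be this small. A sketch of deterministic 10% assignment (the flag name and prices are illustrative; a real system would log exposures too):

```python
import hashlib

def in_experiment(user_id: str, flag: str = "new-pricing", percent: int = 10) -> bool:
    """Hash user_id so assignment is random-looking but stable across sessions."""
    bucket = int(hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < percent

def monthly_price(user_id: str) -> int:
    return 49 if in_experiment(user_id) else 39  # hypothetical test vs. control price

# Stable assignment is what makes cohort tracking over the 2-week window valid:
print(monthly_price("user-42") == monthly_price("user-42"))  # True
```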


Example 5: The Refactoring Business Case

Situation: Your codebase has technical debt. You want to refactor a messy module.

Without Product/Business Sense: "This code is ugly. I'm going to rewrite it."

With Product/Business Sense:

Analysis:

  1. What's the business cost of this tech debt?

    • Bugs per month: 2
    • Customer impact: 50 users affected
    • Engineering time: 1 day/month firefighting
  2. What's the cost of the refactor?

    • Time: 1 week
    • Risk: Potential for new bugs
  3. What's the ROI?

    • Save 1 day/month = 12 days/year
    • Payback period: ~5 months (5 engineer-days of refactor time ÷ 1 day/month saved)
    • Improved velocity for future features in this area

Presentation to Leadership: "I'd like to spend 1 week refactoring the payment module. Here's why:

  • Current state: 2 bugs/month, affecting 50 customers each
  • Cost: 1 day/month of firefighting = $15K/year in engineering time
  • Benefits: Cleaner code, fewer bugs, faster feature development
  • Payback: ~5 months
  • Low risk: Comprehensive test coverage, feature flag for rollout"

Outcome: Got approval because you spoke in terms of business value, not just code quality.

Lesson: Technical debt is a business problem, not just an engineering problem.
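The payback math is worth writing out (the daily rate is my assumption; the effort and savings come from the example):

```python
ENG_DAY_COST = 1_250                  # assumed loaded cost per engineer-day
refactor_cost = 5 * ENG_DAY_COST      # 1 week = 5 engineer-days
monthly_savings = 1 * ENG_DAY_COST    # 1 day/month of firefighting avoided

print(refactor_cost / monthly_savings)  # 5.0 months to pay back
print(12 * monthly_savings)             # $15,000/year saved thereafter
```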


Example 6: The Metrics Instrumentation Debate

Situation: Building a new feature. Team wants to ship fast and "add analytics later."

Without Product/Business Sense: "Sure, we'll instrument it after we ship."

With Product/Business Sense:

Pushback: "If we don't instrument this from day one, we won't know if it's working. We'll be flying blind. Let's spend 1 extra day now to:

  1. Track feature usage
  2. Monitor key user flows
  3. Set up error tracking
  4. Create a basic dashboard

This will save us weeks of guessing later."

Outcome: Discovered within 3 days that 80% of users dropped off at step 2. Fixed the issue immediately. Without instrumentation, we would have assumed the feature was working fine.

Lesson: You can't improve what you don't measure. Instrument early.
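Day-one instrumentation doesn't have to be heavy. A minimal sketch (event names are illustrative; the print is a stand-in for your analytics pipeline):

```python
import json, time

def track(event: str, user_id: str, **props) -> None:
    record = {"event": event, "user_id": user_id, "ts": time.time(), **props}
    print(json.dumps(record))  # stand-in for sending to an analytics service

# One call per step of the new flow:
track("onboarding_step_viewed", "user-42", step=2)
track("onboarding_step_completed", "user-42", step=2)
# Comparing viewed vs. completed counts per step is exactly the analysis
# that surfaced the 80% drop-off at step 2.
```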


Example 7: The Scope Creep Situation

Situation: You're building feature A. Midway through, PM says "While you're in there, can you also add B and C?"

Without Product/Business Sense: "Sure, I'll add those too." [Feature ships 3 weeks late]

With Product/Business Sense:

Response: "Let's talk about the trade-offs. Adding B and C will add 2 more weeks. That means:

  1. Feature A ships 3 weeks late (missing the planned launch)
  2. We push back other roadmap items
  3. We haven't validated that users want A yet

What if we:

  1. Ship A next week as planned
  2. Get user feedback
  3. Prioritize B and C based on what we learn

If A doesn't work, we'll have wasted less time. If it does work, we'll know which follow-ups matter most."

Outcome: Shipped A on time, learned users loved it but didn't care about B or C. Built D instead based on feedback.

Lesson: Protect scope. Learn before iterating.


Example 8: The "Competitor Has It" Fallacy

Situation: Leadership says "Competitor X has feature Y. We need it too."

Without Product/Business Sense: "Okay, I'll start building it."

With Product/Business Sense:

Questions:

  1. Do our users actually need this? Have they asked for it?
  2. Why does competitor X have it? Different market segment?
  3. What's the cost to build? Is it worth it?
  4. What are we not building instead? Opportunity cost?

Investigation:

  • Talked to 10 customers: Only 1 mentioned this feature
  • Competitor targets enterprise; we target SMBs
  • Feature would take 6 weeks to build
  • Roadmap has 3 features customers are actively requesting

Recommendation: "Competitor X has this because they target enterprise customers who need it for compliance. Our SMB customers haven't asked for it. Let's stay focused on the features our users are actually requesting. We can revisit if we see demand."

Outcome: Avoided 6 weeks of wasted effort. Built features users actually wanted instead.

Lesson: Don't blindly copy competitors. Understand your unique value proposition.


Example 9: The Technical Perfectionism Problem

Situation: Engineer wants to refactor code to follow a new architecture pattern they learned about.

Without Product/Business Sense: "This is the right way to architect this. Let's do it."

With Product/Business Sense:

Analysis:

  1. What problem does this solve? Cleaner code (subjective)
  2. What's the user impact? None (it's internal)
  3. What's the business impact? None
  4. What's the cost? 2 weeks of engineering time
  5. What's the opportunity cost? Two features we could ship instead

Decision: "This pattern is interesting, but it doesn't solve a user or business problem right now. Let's apply it to new code as we write it, but not refactor existing working code. If this area becomes a bottleneck, we'll revisit."

Outcome: Team stayed focused on user value. Applied the pattern incrementally where it made sense.

Lesson: Perfect is the enemy of good. Optimize for business outcomes, not code aesthetics.


Example 10: The Data-Driven Feature Validation

Situation: Product team proposes a new feature based on user interviews.

Without Product/Business Sense: "Sounds good, let's build it."

With Product/Business Sense:

Questions:

  1. How many users did we interview? (5)
  2. Are they representative of our user base? (All power users)
  3. What's the market size for this feature? (Unknown)
  4. Can we validate demand with less code? (Yes)

Proposal: "Before we build this, let's validate demand:

  1. Add a fake door test: Show a button for the feature, track clicks
  2. If >10% of users click, send them a survey
  3. If survey shows strong interest, build it
  4. Total cost: 1 day of engineering time instead of 3 weeks"

Outcome: Fake door test showed only 2% of users clicked. Saved 3 weeks of building something nobody wanted.

Lesson: Validate demand before building. Use data, not just interviews.
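A fake door test is close to trivial to implement, which is the point (a sketch with illustrative names; in-memory sets stand in for real event tracking):

```python
exposures: set[str] = set()
clicks: set[str] = set()

def show_feature_button(user_id: str) -> None:
    exposures.add(user_id)  # user saw the fake door

def on_feature_button_click(user_id: str) -> str:
    clicks.add(user_id)     # user wanted the feature
    return "Coming soon! We'll let you know when it's ready."

def click_through_rate() -> float:
    return len(clicks) / max(len(exposures), 1)

# In the example, this came back around 2%, well under the 10% bar,
# so the 3-week build never happened.
```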


Example 11: The Database Migration Decision

Situation: The product has grown, and the current database is showing signs of strain. Team debates migrating to a different database technology.

Without Product/Business Sense: "Let's migrate to [trendy database]. It's faster and more scalable."

With Product/Business Sense:

Analysis:

  1. What's the actual problem?

    • Specific queries are slow (>5 seconds)
    • Affecting 5% of users
  2. Can we solve it without migration?

    • Add indexes: 1 day of work
    • Optimize queries: 2 days of work
    • Add read replicas: 3 days of work
  3. What's the migration cost?

    • Engineering: 2 engineers × 6 weeks = $120K
    • Risk: Data loss, downtime, bugs
    • Opportunity cost: Entire quarter of feature development
  4. What's the business urgency?

    • Users are complaining but not churning (yet)
    • We have 6-12 months before it's critical

Decision: "Let's optimize the current database first:

  1. Week 1: Add indexes and optimize queries
  2. Week 2: Add read replicas if needed
  3. Measure: If query times drop below 2 seconds, problem solved
  4. If not, then we consider migration

This buys us time to understand whether we need migration or just optimization."

Outcome: Indexing and query optimization solved 90% of the problem. Saved $120K and a quarter of feature development.

Lesson: Try the simple solution first. Migrations are expensive. Make sure the juice is worth the squeeze.


Example 12: The Mobile-First Oversight

Situation: Designing a complex data visualization dashboard. Design mocks look great on desktop.

Without Product/Business Sense: "These designs are beautiful. Let's build them."

With Product/Business Sense:

Questions:

  1. What percentage of our users are on mobile? (Check analytics: 65%)
  2. How do these complex visualizations work on small screens? (They don't)
  3. What are mobile users trying to accomplish? (Quick status checks, not deep analysis)

Insight: Building a desktop-first dashboard for a mobile-first user base is backwards.

Proposal: "Our analytics show 65% of users are on mobile. These desktop designs are great, but they don't work on mobile. Let's redesign with mobile as the primary experience:

  1. Mobile: Simple cards with key metrics, drill-down for details
  2. Desktop: Enhanced with the full visualizations

This matches how users actually access the product."

Outcome: Built mobile-first, desktop users got enhanced experience. Mobile engagement increased 40%.

Lesson: Design for how users actually use your product, not how you wish they would.


Example 13: The "Quick Fix" That Wasn't

Situation: A bug is reported. PM says "can we get a quick fix for this?"

Without Product/Business Sense: "Sure, I'll patch it." [Applies band-aid fix without understanding root cause]

With Product/Business Sense:

Questions:

  1. How many users are affected? (5000)
  2. What's the business impact? (Blocking checkout - $50K/day in lost revenue)
  3. What's the root cause? (Race condition in payment processing)
  4. Is a quick fix safe? (No - could cause data corruption)

Response: "This bug is blocking $50K/day in revenue, so I understand the urgency. However, a quick patch is risky - it could corrupt payment data. Here are two options:

  1. Quick workaround (2 hours): Add a retry mechanism. Reduces failure rate from 10% to 2%, but doesn't fix root cause.
  2. Real fix (1 day): Fix the race condition properly. Eliminates the bug entirely.

Given the revenue impact, I recommend the quick workaround now to stop the bleeding, then the real fix tomorrow."

Outcome: Deployed workaround in 2 hours, recovered 80% of lost revenue. Fixed properly the next day.

Lesson: Understand urgency and risk. Sometimes you need a band-aid, but always fix the root cause.
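For concreteness, the 2-hour workaround might look like this (a sketch; the error type and backoff parameters are illustrative). Note what it is: a mask for the race condition, not a fix, which is why the real fix still ships the next day.

```python
import random
import time

class TransientPaymentError(Exception):
    """Hypothetical error type for retriable payment failures."""

def charge_with_retry(charge_fn, attempts: int = 3):
    """Retry a flaky payment call with exponential backoff and jitter."""
    for attempt in range(attempts):
        try:
            return charge_fn()
        except TransientPaymentError:
            if attempt == attempts - 1:
                raise  # out of retries; surface the failure
            time.sleep((2 ** attempt) + random.random())
```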


Example 14: The Internationalization Question

Situation: Building a new feature. Engineer asks: "Should we build this with internationalization support?"

Without Product/Business Sense: "Yeah, let's add i18n. It's best practice."

With Product/Business Sense:

Questions:

  1. Are we planning to expand internationally? (Not in the next 12 months)
  2. What's the cost of adding i18n now? (2 extra weeks)
  3. What's the cost of adding it later? (3 weeks, plus refactoring)
  4. What's the opportunity cost? (Other features we could ship)

Analysis:

  • We're not going international soon
  • Adding i18n now: 2 weeks
  • Adding i18n later: 3 weeks
  • Extra cost if we wait: 1 week
  • But we get 2 weeks now to ship other features

Decision: "Let's ship without i18n for now, but:

  1. Use string constants (not hardcoded text) so we're ready to translate
  2. Document that we'll need i18n eventually
  3. Budget for it when international expansion is on the roadmap

This saves us 2 weeks now, costs us 1 extra week later, but we're not paying that cost until we need it."

Outcome: Shipped 2 weeks earlier. When international expansion came 18 months later, refactoring took 3 weeks as expected.

Lesson: Don't pay for optionality you don't need yet. Make future changes easy, but don't build them today.
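The "string constants" hedge from step 1 costs almost nothing up front. A sketch (keys are illustrative):

```python
# All user-facing copy lives in one table keyed by message ID.
MESSAGES = {
    "report.ready": "Your report is ready",
    "report.failed": "We couldn't generate your report",
}

def t(key: str) -> str:
    # Today: English lookup. When i18n lands, this grows a locale
    # parameter and per-locale tables; call sites don't change.
    return MESSAGES[key]

print(t("report.ready"))
```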


Example 15: The Notification Overload

Situation: Product wants to add notifications for every user action to "increase engagement."

Without Product/Business Sense: "Sure, more notifications means more engagement, right?"

With Product/Business Sense:

Analysis:

  1. Look at data on existing notifications

    • Email open rate: 8%
    • Push notification click rate: 3%
    • Unsubscribe rate: increasing 2% per month
  2. Research industry benchmarks

    • Notification overload is a top reason users uninstall apps
    • Quality > quantity for notifications
  3. Understand the actual goal

    • Goal: Increase DAU/MAU from 0.35 to 0.45
    • Hypothesis: More notifications = more engagement

Counter-Proposal: "I'm concerned that more notifications will hurt, not help engagement. Our current notification metrics suggest users are already tuning us out. Instead of more notifications, what if we:

  1. Audit existing notifications - which ones actually drive engagement?
  2. Implement smart batching - group related notifications
  3. Add user preferences - let users choose what they care about
  4. A/B test notification frequency - find the optimal cadence

This approach is more likely to increase engagement without annoying users."

Outcome: Reduced notification volume by 40%, increased click-through rate by 3x, unsubscribe rate dropped. DAU/MAU improved to 0.42.

Lesson: More isn't always better. Understand the user experience and actual goals.


Case Studies

Case Study 1: Stripe's API Design - Business-Aware Engineering

Background: Stripe is known for having one of the best developer experiences in the industry. Their API design reflects deep product and business sense.

The Engineering Decision: When designing their API, Stripe made several business-aware choices:

  1. Idempotency keys: Prevent duplicate charges during network failures
  2. Webhooks with retries: Reliable event delivery
  3. Test mode with live-like data: Easy integration testing
  4. Versioning that doesn't break: API changes are additive

The Product/Business Thinking:

These weren't purely technical decisions. They were driven by understanding:

  • User pain point: Developers fear breaking payment systems
  • Business impact: Easier integration = faster adoption = more revenue
  • Competitive advantage: Developer experience is a moat

Cost: Each of these features took 2-3x longer to build than a basic implementation.

Result:

  • Stripe's API is considered the gold standard
  • Faster enterprise sales cycles
  • Lower customer acquisition cost
  • Engineers recommend Stripe to their companies

Lesson: Engineering excellence aligned with business goals creates competitive advantage. The "extra" work on developer experience directly drove revenue.

How This Applies to You: When building APIs or developer tools:

  1. Talk to users (developers) about their pain points
  2. Understand how your API affects your company's sales cycle
  3. Invest in DX (developer experience) as a growth lever
  4. Measure API adoption metrics, not just technical metrics

Case Study 2: Slack's "Tiny Speck" Pivot - Product Sense Saves the Company

Background: Slack wasn't originally a team communication tool. It was an internal tool built by Tiny Speck while making a game called "Glitch."

The Situation:

  • Glitch (the game) was failing
  • Team was using internal chat tool to collaborate
  • Running out of money, needed to pivot or shut down

The Engineering Observation: Stewart Butterfield (CEO, but also an engineer) noticed:

  • Team loved their internal chat tool
  • Other dev tools (IRC, HipChat) were terrible
  • Every company has this problem

The Product Sense:

  • User need: Teams need better communication
  • Market gap: Existing solutions were poor
  • Unfair advantage: They had already built a great version

The Business Sense:

  • TAM: Every company with more than 10 employees
  • Monetization: Freemium model, upgrade for history and integrations
  • Go-to-market: Bottom-up adoption (developers bring it into companies)

The Decision: Shut down the game. Bet everything on the chat tool (Slack).

The Engineering Execution:

  • Spent 6 months polishing the internal tool for external use
  • Focused on "quality over features" (unusual for a pivot)
  • Obsessed over tiny details: emoji reactions, loading messages, search

Result:

  • Fastest-growing SaaS company in history
  • $27B market cap at IPO
  • "Slack" became a verb

Lesson: Product sense isn't just about building features. It's about recognizing what's valuable and having the courage to bet on it.

How This Applies to You:

  1. Pay attention to internal tools your team loves - there might be a product there
  2. Don't just build what's on the roadmap - advocate for what users need
  3. Market gaps are opportunities
  4. Quality compounds - sweating the details matters

Case Study 3: Amazon's Two-Pizza Team Rule - Business-Aware Organizational Design

Background: Amazon famously organizes teams by the "two-pizza rule": if a team can't be fed by two pizzas, it's too large.

The Problem:

  • Large teams had communication overhead
  • Decisions were slow
  • Ownership was diffuse
  • Innovation was stalling

The Engineering Insight: Jeff Bezos (who has a CS degree) understood:

  • Communication overhead grows as n² (pairwise channels = n(n-1)/2, where n = team size)
  • Small teams ship faster
  • Autonomy drives innovation

The Business Insight:

  • Time to market is competitive advantage
  • Fast iteration beats slow perfection
  • Distributed decision-making scales better than centralized

The Implementation:

  • Broke up large engineering orgs into small, autonomous teams
  • Each team owns a service or product area end-to-end
  • Teams have P&L responsibility (understand business impact)
  • Built internal tools to enable team autonomy (AWS grew from this)

Result:

  • Amazon can now run thousands of small teams independently
  • Each team can deploy without waiting for others
  • Innovation accelerated
  • AWS became a $60B business

Lesson: Organizational design is an engineering problem with business implications.

How This Applies to You:

  1. Advocate for small, autonomous teams
  2. Push for end-to-end ownership (not just "backend" or "frontend")
  3. Understand that team structure affects product velocity
  4. When team communication feels slow, that's a business problem, not just a process problem

Case Study 4: Airbnb's Near-Death and Resurrection - Product Sense in Crisis

Background: In 2008, Airbnb was failing. Revenue was $200/week. They were running out of money.

The Hypothesis: The founders (engineers and designers) studied why listings weren't converting.

The Discovery:

  • Talked to hosts and guests
  • Noticed: Listings with professional photos got 2-3x more bookings
  • Problem: Hosts were using terrible iPhone photos

The "Unscalable" Solution:

  • Founders flew to New York
  • Personally photographed listings
  • Professional photos led to more bookings

The Product Sense:

  • Quality photos = trust = bookings
  • This was a critical variable, not a nice-to-have

The Business Decision:

  • Hired photographers in every city (doesn't scale! VCs said no!)
  • Gave free professional photography to hosts
  • Cost: Millions in photographer fees

The Business Sense:

  • Photography cost: $100 per listing
  • Increased booking conversion: 2-3x
  • LTV increase: $300-500 per listing
  • ROI: 3-5x on photography investment

Result:

  • Revenue went from $200/week to $400/week immediately
  • Within a year: Millions in revenue
  • Professional photography became a core part of the platform
  • Airbnb is now worth $80B

Lesson:

  • Do things that don't scale when it proves the model
  • Understand what actually drives conversion
  • Be willing to make high-touch investments if the unit economics work
  • Product sense sometimes means doing manual work to validate assumptions

How This Applies to You:

  1. Don't dismiss "unscalable" solutions if they validate the model
  2. Calculate unit economics - sometimes high-touch is high-ROI
  3. Talk to users to understand what actually drives value
  4. Be willing to do manual work to prove a hypothesis before automating

Case Study 5: GitHub's Failed Meritocracy - When Engineering Culture Ignores Business Reality

Background: Early GitHub had a "flat" organization with no managers. Pure meritocracy. Engineers worked on what they wanted.

The Philosophy:

  • Trust engineers to self-organize
  • No managers, no hierarchy
  • Pure technical decision-making
  • "Rubyist culture" - do what you love

The Problem:

  • No accountability for business goals
  • Engineers worked on pet projects, not customer needs
  • Critical bugs went unfixed because they weren't "interesting"
  • Sales team couldn't get engineering support
  • Product roadmap didn't exist

The Business Impact:

  • Enterprise customers frustrated
  • Competitors (GitLab, Bitbucket) gaining ground
  • Revenue growth slowing
  • Valuation concerns

The Reality Check:

  • 2014: GitHub raises $250M at $2B valuation
  • Pressure to grow into valuation
  • Realization: Need to build what customers need, not just what engineers want

The Pivot:

  • Hired experienced leadership
  • Implemented product management
  • Created roadmap tied to business goals
  • Engineers learned to balance technical interests with customer needs

Result:

  • Product quality improved
  • Enterprise adoption accelerated
  • Microsoft acquired GitHub for $7.5B in 2018

Lesson: Engineering excellence requires both technical skill and business context. Pure meritocracy without direction leads to chaos.

What Went Wrong:

  • No connection between engineering work and business outcomes
  • Assumed technical quality would automatically create business value
  • Ignored customer needs in favor of engineer preferences

How This Applies to You:

  1. Engineering autonomy is good, but it needs guardrails
  2. Someone needs to connect engineering work to business goals
  3. "Working on what interests you" is a privilege earned by also delivering business value
  4. Technical excellence without product/business sense is insufficient

Case Study 6: Superhuman's Path to Product-Market Fit - Data-Driven Product Sense

Background: Superhuman is an email client that charges $30/month. They spent 2 years in beta before launching publicly.

The Challenge: How do you know when you've achieved product-market fit?

The Framework: Rahul Vohra (founder/CEO, former engineer) created a systematic approach:

  1. The Survey:

    • "How would you feel if you could no longer use Superhuman?"
    • Options: Very disappointed, Somewhat disappointed, Not disappointed
  2. The Metric:

    • PMF benchmark: 40%+ answer "Very disappointed"
    • They were at 22% - not good enough
  3. The Analysis:

    • Segmented users by response
    • Studied "very disappointed" users: What did they love?
    • Studied "not disappointed" users: Why didn't it work for them?
  4. The Strategy:

    • Double down on what "very disappointed" users loved
    • Fix or remove features that "not disappointed" users mentioned
    • Explicitly chose not to build for everyone

The Product Decisions:

  • Focused on "email power users" (their core segment)
  • Removed features casual users wanted but power users didn't care about
  • Added keyboard shortcuts, speed, and powerful search
  • Deliberately didn't build a mobile app initially (contrary to conventional wisdom)

The Business Sense:

  • TAM: Smaller (power users only), but willingness to pay is 10x higher
  • CAC: Lower (word of mouth from passionate users)
  • Churn: Lower (users are addicted)
  • Better to have 1000 users who love you than 10,000 who think you're okay

Result:

  • PMF score went from 22% to 58%
  • Built a waitlist of 300,000 people
  • Sustainable business at $30/month (most email clients are free)
  • Team stayed small and profitable

Lesson: Product-market fit is measurable. Use data to make product decisions, not opinions.
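The score itself is a one-liner. A sketch of the survey math (response labels and the sample split are illustrative):

```python
from collections import Counter

def pmf_score(responses: list[str]) -> float:
    """Share of respondents who'd be 'very disappointed' without the product."""
    return Counter(responses)["very_disappointed"] / len(responses)

answers = (["very_disappointed"] * 22
           + ["somewhat_disappointed"] * 48
           + ["not_disappointed"] * 30)
print(f"{pmf_score(answers):.0%}")  # 22% -- below the 40% benchmark
```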

How This Applies to You:

  1. You can quantify product-market fit - use surveys and data
  2. Building for everyone means building for no one
  3. It's okay to explicitly exclude user segments
  4. Deeply understanding your core users beats shallow understanding of everyone
  5. Use data to prioritize ruthlessly

Case Study 7: Spotify's Squad Model - Engineering at Scale with Business Alignment

Background: Spotify grew from a small startup to 1000+ engineers. They needed a way to scale without losing velocity.

The Problem:

  • Traditional org structure (frontend, backend, mobile) created handoffs
  • Features required coordination across multiple teams
  • Velocity was slowing as the company grew

The Solution: Squad Model

Structure:

  • Squad: Small autonomous team (6-12 people), cross-functional (engineers, designers, PM)
  • Tribe: Collection of related squads (e.g., "Discovery Tribe" has Playlist squad, Search squad, Radio squad)
  • Chapter: Engineers with similar skills across squads (e.g., "Backend Chapter")
  • Guild: Informal groups around interests (e.g., "Web Technology Guild")

Key Principles:

  1. Autonomous: Each squad owns a feature area end-to-end
  2. Aligned: Squads understand company strategy and make decisions accordingly
  3. Loosely coupled: Squads can ship without coordinating with other squads
  4. Mission-oriented: Squads own outcomes (metrics), not just outputs (features)

Example:

  • Search Squad Mission: "Help users find music they'll love"
  • Their Metrics: Search usage, successful searches, music plays from search
  • Their Autonomy: They can change search algorithms, UI, data pipelines without asking permission

The Business Alignment:

  • Each squad understands how their work affects business metrics
  • Quarterly business reviews where squads present impact
  • Squads are accountable for outcomes, not story points

Result:

  • Spotify scaled to 6000+ employees while maintaining velocity
  • Squads ship independently, multiple times per day
  • Innovation didn't slow down with scale

Challenges:

  • Requires mature engineers who can make autonomous decisions
  • Can create duplication (multiple squads solving similar problems)
  • Needs strong technical standards (guilds help with this)

Lesson: Organization structure should optimize for business velocity, not engineering efficiency.

How This Applies to You:

  1. Push for mission-oriented teams, not function-oriented teams
  2. Understand and articulate how your work affects business metrics
  3. Advocate for end-to-end ownership
  4. Small autonomous teams ship faster than large coordinated teams
  5. When you join a project, ask: "What's the mission? What metric are we moving?"

Building These Skills

Self-Assessment Framework

Rate yourself on each dimension (1-5):

Product Sense

1. User Understanding

  • 1: I build what I'm told without questioning
  • 3: I ask about user needs before building
  • 5: I proactively talk to users and advocate for their needs

2. Product Thinking

  • 1: I focus only on technical implementation
  • 3: I consider how features fit into the broader product
  • 5: I contribute to product strategy and roadmap decisions

3. Problem Definition

  • 1: I take requirements at face value
  • 3: I ask clarifying questions about requirements
  • 5: I challenge requirements and propose better solutions

Business Sense

4. Business Model Understanding

  • 1: I don't know how my company makes money
  • 3: I understand our revenue model at a high level
  • 5: I can explain our unit economics and key business drivers

5. Impact Orientation

  • 1: I measure success by shipping code
  • 3: I understand which metrics my work should move
  • 5: I proactively calculate and communicate business impact

6. Resource Thinking

  • 1: I don't consider cost or opportunity cost
  • 3: I think about engineering time as a resource
  • 5: I make build vs. buy decisions based on ROI analysis

Cross-Functional Collaboration

7. Communication

  • 1: I speak only in technical terms
  • 3: I can explain technical concepts to non-engineers
  • 5: I translate technical decisions to business value automatically

8. Stakeholder Management

  • 1: I avoid talking to non-engineering stakeholders
  • 3: I respond to stakeholder questions when asked
  • 5: I proactively involve stakeholders and manage expectations

Target Scores by Level:

  • Junior: 12-16 (focus on learning)
  • Mid: 20-24 (building competence)
  • Senior: 28-32 (driving impact)
  • Staff+: 36-40 (organizational influence)

Practical Exercises and Habits

Daily Habits

1. The "Why?" Habit Before starting any task, write down:

  • What user problem does this solve?
  • What business metric will this move?
  • What's the expected impact?

2. The "Opportunity Cost" Habit For any work estimated over 1 week, ask:

  • What else could we build with this time?
  • Which option has better ROI?
  • Can we validate the assumption with less work?

3. The "User First" Habit When reviewing PRs or designs:

  • How does this affect the user experience?
  • Is this solving a real user problem?
  • Would I want to use this?

Weekly Exercises

1. Business Metric Review (30 minutes)

  • Review your company's key metrics dashboard
  • Understand what's moving up or down
  • Identify how your work connects to these metrics

2. User Feedback Review (30 minutes)

  • Read recent user feedback, support tickets, or user research
  • Identify patterns and pain points
  • Consider how your current work addresses (or doesn't address) these

3. Competitive Analysis (30 minutes)

  • Check out competitor products
  • Identify what they're doing well
  • Consider how your work compares

Monthly Deep Dives

1. Business Model Deep Dive Pick one aspect of your company's business:

  • How does this revenue stream work?
  • What are the unit economics?
  • What are the key drivers and constraints?

2. Product Strategy Session

  • Read product/company strategy docs
  • Map your team's work to strategic goals
  • Identify gaps or misalignments

3. Customer Interview

  • Talk to one customer (sales can connect you)
  • Understand their use case and pain points
  • Bring insights back to your team

Questions to Ask in Every Project

Before Starting

  1. User Questions:

    • Who is the user?
    • What problem are they trying to solve?
    • Have we talked to them about this?
    • How will we know this solved their problem?
  2. Business Questions:

    • What business metric will this move?
    • What's the expected impact (with numbers)?
    • What's the cost (engineering time, infrastructure, etc.)?
    • What's the ROI?
  3. Strategic Questions:

    • How does this fit into our roadmap?
    • What does this enable for the future?
    • What constraints does this create?
    • Are we building for today or for 3 years from now?

During Development

  1. Reality Check Questions:

    • Are our assumptions still valid?
    • What have we learned so far?
    • Should we adjust the plan?
    • Are we still solving the right problem?
  2. Scope Questions:

    • What's the MVP? (minimum to learn, not minimum to ship)
    • What can we cut to ship faster?
    • What can we add later based on feedback?

After Shipping

  1. Impact Questions:
    • Did we move the target metric?
    • What did we learn?
    • What should we do next?
    • Was this worth the investment?
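
A rough before/after readout is often enough to start answering "did we move the target metric?" Here's a minimal sketch with hypothetical numbers; a real decision deserves a proper A/B test:

```python
# Rough before/after lift readout for a conversion-style metric.
# Hypothetical numbers; use a controlled experiment for real calls.

def relative_lift(before_rate: float, after_rate: float) -> float:
    return (after_rate - before_rate) / before_rate

before = 1_200 / 40_000  # conversions / visitors, month before launch
after = 1_380 / 40_500   # conversions / visitors, month after launch

lift = relative_lift(before, after)
print(f"Conversion: {before:.2%} -> {after:.2%} ({lift:+.1%} relative)")

# Pair the lift with what the project cost to answer the last question:
# "was this worth the investment?"
```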

Resources and Recommended Reading

Books

Product Sense:

  • "The Mom Test" by Rob Fitzpatrick - How to talk to users
  • "Inspired" by Marty Cagan - Product management fundamentals
  • "The Lean Startup" by Eric Ries - Build-measure-learn
  • "Hooked" by Nir Eyal - How products build habits

Business Sense:

  • "The Personal MBA" by Josh Kaufman - Business fundamentals
  • "Zero to One" by Peter Thiel - Startup strategy
  • "Good Strategy Bad Strategy" by Richard Rumelt - Strategic thinking
  • "The Innovator's Dilemma" by Clayton Christensen - Market dynamics

Decision Making:

  • "Thinking in Bets" by Annie Duke - Decision-making under uncertainty
  • "The Lean Product Playbook" by Dan Olsen - Product-market fit
  • "Measure What Matters" by John Doerr - OKRs and goal-setting

Blogs and Podcasts

Blogs:

  • Lenny's Newsletter (lennysnewsletter.com) - Product + growth
  • First Round Review - Startup advice
  • Stratechery by Ben Thompson - Business strategy
  • Your company's internal product/business updates

Podcasts:

  • "Lenny's Podcast" - Product and growth
  • "Acquired" - Deep dives on company strategies
  • "The Product Podcast" - Product management

Internal Resources

Make Use of What You Have:

  1. Company dashboards: Bookmark and review regularly
  2. All-hands meetings: Pay attention to business updates
  3. Product roadmap: Understand the big picture
  4. User research: Read every report
  5. Sales/CS calls: Ask to listen in occasionally

Mentorship and Peer Learning Approaches

Finding Mentors

Look for:

  • Senior engineers who understand business
  • Product managers who respect engineering
  • Engineering leaders who've scaled teams
  • Anyone who asks "why?" before "how?"

What to Ask:

  • "How do you evaluate whether to build something?"
  • "How do you prioritize technical work?"
  • "How do you communicate technical decisions to non-engineers?"
  • "What's the biggest business-aware decision you've made?"

Peer Learning

Start a Book Club:

  • Pick a book from the reading list
  • Meet monthly to discuss
  • Apply concepts to real work examples

Create a Study Group:

  • Review business metrics together
  • Analyze product decisions
  • Practice ROI calculations
  • Mock product/business reviews

Cross-Functional Shadowing:

  • Shadow a PM for a day
  • Sit in on sales calls
  • Attend customer interviews
  • Join business review meetings

Teaching Others

The best way to learn is to teach:

  • Give a brown bag talk on business metrics
  • Write internal docs on ROI thinking
  • Mentor junior engineers on product sense
  • Review RFCs with a business lens

For Engineering Leaders

Coaching Your Team

As an engineering leader, developing product and business sense in your team is one of your most important jobs. Here's how:

1. Model the Behavior

Your team learns by watching you.

In every 1:1, meeting, and code review, demonstrate:

  • Asking "why?" before "how?"
  • Connecting technical decisions to business outcomes
  • Speaking in terms of user value, not just features
  • Calculating opportunity cost

Example:

Instead of: "Let's use PostgreSQL for this."

Model: "Let's use PostgreSQL because we already have expertise, it handles our scale, and we can ship in 1 week instead of 3. The opportunity cost of learning a new database isn't justified for this project."

2. Create Learning Opportunities

Exposure Creates Understanding

Tactics:

  • Invite engineers to product roadmap meetings
  • Share business metrics dashboards and explain them
  • Have engineers present at business reviews
  • Rotate engineers through customer support for a week
  • Bring engineers to sales calls and user interviews
  • Share board deck or investor updates (when appropriate)

Make it safe to ask "dumb" questions about business.

3. Change How You Review Work

RFCs and PRs Are Teaching Moments

When reviewing technical proposals, always ask:

  1. What's the user/business impact?
  2. What alternatives were considered?
  3. What's the opportunity cost?
  4. How will we measure success?
  5. What happens if we're wrong?

Reject proposals that don't answer these questions.

4. Tie Promotions to Product/Business Sense

What gets measured gets managed.

Include in your engineering ladder:

  • Mid: "Understands how their work connects to product goals"
  • Senior: "Contributes to product decisions, understands business metrics"
  • Staff: "Drives business outcomes, influences product strategy"

Make it clear: You can't advance without developing these skills.

5. Coaching Conversations

1:1 Framework:

For Junior Engineers:

  • "Tell me about the feature you're building. What user problem does it solve?"
  • "What metric do you think this will move?"
  • "Why do you think the PM prioritized this?"

For Mid-Level Engineers:

  • "How did you decide between option A and option B?"
  • "What's the opportunity cost of this approach?"
  • "Have you talked to users about this?"

For Senior Engineers:

  • "What's the business impact of this project?"
  • "How does this align with our strategy?"
  • "What would you prioritize differently if you were the PM?"

6. Create Accountability

Set expectations:

  • Every project should have defined success metrics
  • Engineers should present impact, not just completion
  • Postmortems should cover "was this worth building?" not just "did we build it well?"

Creating a Culture of Business-Awareness

1. Information Transparency

Kill the "Need to Know" Mentality

Share:

  • Business metrics (revenue, growth, churn)
  • Strategic priorities and why
  • Customer feedback and market dynamics
  • Financial constraints and runway

Why: Engineers can't make business-aware decisions without business context.

2. Change the Metrics You Celebrate

Stop Celebrating:

  • "We shipped 50 story points this sprint!"
  • "Zero bugs in production!"
  • "100% code coverage!"

Start Celebrating:

  • "We increased conversion by 15%"
  • "We saved $50K/month in infrastructure costs"
  • "We validated our assumption before building"
  • "We chose not to build this based on data"

What you celebrate shapes what people optimize for.

3. Embed Product Thinking in Processes

Sprint Planning:

  • Start with "what's the user problem?" not "what's the ticket?"
  • Require business impact statements for all work
  • Challenge work that doesn't have clear value

Standups:

  • Instead of "what did you do?", ask "what impact did you create?"
  • Discuss blockers in terms of user/business impact

Retros:

  • Add "Did we build the right thing?" not just "Did we build it well?"

4. Build Cross-Functional Relationships

Break Down Silos:

  • Co-locate engineers with product and design
  • Have engineers attend customer calls
  • Invite product to architecture reviews
  • Set joint OKRs across engineering and product

The more engineers interact with product, design, and business, the more they'll internalize these perspectives.

Performance Evaluation Criteria

Include in Performance Reviews:

For All Engineers:

  • Understands and can articulate how their work connects to business goals
  • Asks questions about user needs and business impact
  • Makes technical decisions with user/business context

For Senior ICs:

  • Contributes to product roadmap discussions
  • Identifies opportunities for business impact
  • Communicates technical decisions in business terms
  • Challenges requirements based on user/business analysis

For Staff+ ICs:

  • Drives business outcomes through technical decisions
  • Influences product strategy
  • Makes build vs. buy decisions with ROI analysis
  • Mentors others on product/business thinking

Sample Review Questions:

  1. "Describe a time you changed a technical decision based on business context."
  2. "What business metric did your work this quarter impact? How much?"
  3. "Tell me about a time you pushed back on a requirement. What was your reasoning?"
  4. "What's the ROI of the largest project you worked on?"

Interview Questions to Assess Product/Business Sense

When hiring, assess product and business sense alongside technical skills:

For Mid-Level Engineers:

"Tell me about a feature you built. Why did the company build it?"

  • Poor answer: "The PM asked for it"
  • Good answer: Explains user problem, business context, success metrics

"Describe a time you had to make a trade-off between code quality and shipping speed. How did you decide?"

  • Poor answer: "Code quality always comes first"
  • Good answer: Evaluates based on business context, urgency, user impact

For Senior Engineers:

"Walk me through a technical decision where you had to consider business constraints."

  • Look for: Explicit cost/benefit analysis, understanding of opportunity cost

"Describe a time you disagreed with a product requirement. How did you handle it?"

  • Look for: Respectful challenge, proposing alternatives, understanding the underlying goal

For Staff+ Engineers:

"Tell me about a project where you had to choose between technical perfection and business pragmatism."

  • Look for: Sophisticated analysis of trade-offs, long-term thinking, business impact quantification

"How do you prioritize technical work when everything seems important?"

  • Look for: Clear framework (impact vs effort, strategic vs tactical), business value assessment

"Describe a time you influenced the product roadmap through technical insight."

  • Look for: Bridging technical and business perspectives, data-driven advocacy

Conclusion

Product sense and business sense aren't "soft skills." They're the skills that separate engineers who write code from engineers who create value.

Every engineer in my organization, from junior to staff+, is expected to develop these skills. It's not optional. It's how we:

  • Make better technical decisions
  • Ship products users love
  • Move faster than competitors
  • Build careers, not just code

Start small:

  1. This week: Ask "why?" three times before starting work
  2. This month: Talk to one user
  3. This quarter: Calculate the ROI of one project
  4. This year: Influence one product decision through technical insight

Remember:

  • Users don't care about your code. They care about their problems being solved.
  • The company doesn't exist to give you interesting technical problems. It exists to create value.
  • Your job is to find the intersection: elegant technical solutions to real user problems that drive business outcomes.

That's the job. Now go do it.


This guide is a living document. As you develop these skills and learn new frameworks, I encourage you to contribute examples, case studies, and lessons learned back to this guide.


Appendix: Quick Reference

The Product Sense Checklist

Before building anything:

  • What user problem does this solve?
  • Have we talked to users about this?
  • What's the simplest version that tests our hypothesis?
  • How will we measure success?
  • What could we learn with less code?

The Business Sense Checklist

Before committing to work:

  • What business metric will this move?
  • What's the expected impact (in numbers)?
  • What's the engineering cost (in time/money)?
  • What's the opportunity cost?
  • What's the ROI?

The Decision-Making Framework

When making any technical decision:

  1. Context: What's the business/user situation?
  2. Options: What are the alternatives?
  3. Analysis: What are the trade-offs?
  4. Decision: What are we doing?
  5. Rationale: Why, with business justification?

The Communication Template

When proposing technical work:

Problem: [User problem or business need]
Proposal: [What we'll build]
Impact: [Expected business/user outcome with metrics]
Cost: [Time, money, opportunity cost]
Alternatives: [What else we considered]
Success Criteria: [How we'll measure success]
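
A filled-in version might look like this (every number below is hypothetical):

Problem: Returning users abandon checkout at the payment step (28% drop-off)
Proposal: Saved payment methods for returning users
Impact: Estimated +3% checkout conversion, roughly $15K/month in revenue
Cost: 2 engineer-weeks; pushes the invoicing work back one sprint
Alternatives: Third-party checkout widget (faster to ship, but adds a per-transaction fee)
Success Criteria: Payment-step drop-off below 22% within 30 days of launch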

Key Questions to Ask

To Product: "What's the user problem we're solving?" To Business: "What's the revenue impact?" To Design: "Have we validated this with users?" To Yourself: "Is this the simplest solution?" To Everyone: "What happens if we're wrong?"


yowainwright commented Dec 3, 2025

This is great! Definitely worth a re-read and even another re-read!

As I grow into my role as an engineering leader, I find myself most attached to this section.
In practice, this makes perfect sense! However, I struggle here, especially during "full pivot" interactions. I'd love to read more of your wisdom on keeping calm when, for example, a company-wide RFC has been approved and is beginning to be built, and then gets re-prioritized by the business. It's so important to perform well in these situations and I, ah, struggle. How do you stick to the "yes, and" framework?
