Metrics That Matter: Data Points for Technology Leaders — Open Source CXO Ep. 18 | Active Logic
What should engineering leaders actually measure? In a world of dashboards and data, it’s easy to track everything and understand nothing. In this episode of Open Source CXO, Sebastian Kline, VP of Engineering at Nelnet Community Engagement, cuts through the noise to identify the metrics that genuinely indicate engineering health — and the ones that create perverse incentives.
Sebastian’s perspective is shaped by a unique career arc: building engineering culture at Aware3 (a startup) and then navigating the integration of that team into Nelnet’s larger corporate structure. He’s seen metrics work at both scales and knows which ones survive the transition.
Key Insight: Deployment Frequency as a Health Indicator
Sebastian identifies deployment frequency as one of the most revealing metrics for engineering teams. It’s not about deploying fast for its own sake — it’s about what frequent deployment implies about the rest of your engineering process.
Teams that deploy frequently tend to have:
- Strong automated testing (you can’t deploy often if tests are slow or unreliable)
- Good CI/CD infrastructure (the deployment pipeline has to be fast and reliable)
- Small, well-scoped changes (large changes are risky to deploy frequently)
- Confidence in their systems (you deploy what you trust)
Low deployment frequency, conversely, often indicates accumulated fear — teams that are afraid to ship because they don’t trust their tests, their infrastructure, or their code. Tracking this metric doesn’t fix the problem, but it reveals where the problems are.
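Deployment frequency is easy to compute from deploy timestamps. As a minimal sketch (not from the episode; the function name and data shape are illustrative assumptions), here is one way to bucket deploy dates by ISO week so a team can watch the trend over time:

```python
from datetime import date
from collections import Counter

def weekly_deploy_frequency(deploy_dates):
    """Count deployments per ISO (year, week) bucket.

    deploy_dates: iterable of datetime.date objects, one per deployment.
    Returns a dict mapping (iso_year, iso_week) -> number of deploys.
    """
    counts = Counter(d.isocalendar()[:2] for d in deploy_dates)
    return dict(counts)

# Example: two deploys in ISO week 10 of 2024, one in week 11.
deploys = [date(2024, 3, 4), date(2024, 3, 5), date(2024, 3, 14)]
print(weekly_deploy_frequency(deploys))  # {(2024, 10): 2, (2024, 11): 1}
```

The absolute number matters less than the trend: a sustained drop in weekly deploys is the signal to start asking what is blocking the team.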
Key Insight: Customer-Detected Bugs vs. Internal Detection
One of the more nuanced metrics Sebastian advocates is the ratio of bugs detected by customers versus bugs detected internally (through testing, code review, or monitoring). This ratio tells you something important about the effectiveness of your quality practices.
A high ratio of customer-detected bugs means your internal quality gates are failing. Not necessarily that your code is bad — but that your testing doesn’t cover the scenarios your customers encounter. This often happens when test suites are written by the same developers who wrote the code, testing the same mental model rather than challenging it.
Bringing this ratio down requires investing in QA practices that think like customers, not like developers — scenario testing, exploratory testing, and production monitoring that catches issues before users report them.
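The ratio itself is simple to track if bug reports are tagged with where they were found. A minimal sketch, assuming a hypothetical record shape with a `source` field (the episode does not prescribe a data model):

```python
def customer_detected_ratio(bugs):
    """Fraction of bugs first detected by customers.

    bugs: list of dicts, each with a 'source' field that is either
    'customer' (reported by a user) or 'internal' (caught by tests,
    code review, or monitoring before users saw it).
    Returns 0.0 for an empty list to avoid division by zero.
    """
    if not bugs:
        return 0.0
    customer = sum(1 for b in bugs if b["source"] == "customer")
    return customer / len(bugs)

# Example: 2 of 4 bugs reached customers before internal gates caught them.
sample = [
    {"id": 1, "source": "customer"},
    {"id": 2, "source": "internal"},
    {"id": 3, "source": "internal"},
    {"id": 4, "source": "customer"},
]
print(customer_detected_ratio(sample))  # 0.5
```

A falling ratio over successive releases is the evidence that investments in scenario testing, exploratory testing, and monitoring are actually working.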
Key Insight: Building a No-Blame Culture with Data
Sebastian makes an important distinction: metrics should inform coaching conversations, not punishment. The moment metrics become punitive — “your code coverage dropped, explain yourself” — people game them. Code coverage becomes a checkbox exercise. Deployment frequency becomes reckless shipping.
The right approach is to use metrics as conversation starters. If deployment frequency drops, the question isn’t “why are you slow?” — it’s “what’s blocking you?” Maybe the test suite is flaky. Maybe the deployment pipeline is broken. Maybe scope creep is making changes too large to ship safely.
A no-blame culture doesn’t mean no accountability. It means accountability is forward-looking (“how do we fix this?”) rather than backward-looking (“whose fault is this?”).
Key Insight: Startup Culture Inside a Corporate Structure
Sebastian’s experience integrating Aware3 into Nelnet provides practical lessons for anyone navigating an acquisition. Startups move fast because they’re small, focused, and unconstrained by process. Large organizations move deliberately because they’re managing risk across many teams and products.
The integration challenge is preserving what made the startup effective (speed, ownership, direct communication) while adopting what the large organization provides (stability, resources, compliance). Sebastian describes tactics that worked: maintaining team identity within the larger org, advocating for engineering practices that proved effective at startup scale, and finding allies in the corporate structure who value the same things.
Key Insight: Retention Through Engagement, Not Just Compensation
The conversation addresses engineer retention — a persistent challenge in the industry. Sebastian argues that compensation is table stakes (you have to be competitive), but retention is driven by engagement: meaningful work, growth opportunities, autonomy, and a culture where people feel valued.
The metrics that indicate retention risk aren’t always obvious. Watch for: declining code review participation, reduced initiative in proposing improvements, withdrawal from team discussions, and increased use of unplanned time off. By the time someone gives notice, the decision was usually made months earlier.
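Some of these signals are measurable. As an illustrative sketch (my own construction, not a method from the episode), one can compare an engineer’s recent code review activity against their prior baseline and flag a sustained decline as a prompt for a check-in conversation:

```python
def review_participation_delta(weekly_reviews, window=4):
    """Change in average reviews per week: recent window vs. the prior one.

    weekly_reviews: list of per-week review counts, oldest first.
    Returns the recent average minus the prior average, or None when
    there is not enough history to compare two full windows.
    A clearly negative value may signal disengagement worth a
    no-blame conversation -- not a number to confront someone with.
    """
    if len(weekly_reviews) < 2 * window:
        return None
    recent = weekly_reviews[-window:]
    prior = weekly_reviews[-2 * window:-window]
    return sum(recent) / window - sum(prior) / window

# Example: averaged 8 reviews/week, now averaging 3.5 -- a drop of 4.5.
print(review_participation_delta([8, 7, 9, 8, 5, 4, 3, 2]))  # -4.5
```

Consistent with the no-blame framing above, such a number should open a coaching conversation (“what’s changed for you lately?”), never serve as evidence against someone.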
Takeaways
- Deployment frequency reveals engineering health. Low frequency indicates accumulated technical or process debt.
- Track the customer-detected bug ratio. It measures the effectiveness of your quality practices better than code coverage alone.
- Use metrics for coaching, not punishment. Punitive metrics get gamed; coaching metrics get addressed.
- Preserve startup culture through acquisitions intentionally. It doesn’t survive by accident.
- Retention is an engagement problem, not a compensation problem. Pay competitively, then focus on meaningful work and growth.
- Networking isn’t optional for leaders. Sebastian emphasizes that professional development through industry connections is as important as technical skills.