1) Improved delivery predictability across multiple teams
- At HelloFresh, I led a migration away from a legacy vendor with 8 months left on the contract.
- I worked with the PM to map all use cases and dependencies, then with a staff engineer and partner teams to size the work and define migration paths.
- We started with low-risk use cases first, then moved to a phased migration plan by team and week, which gave everyone clarity and reduced surprises.
2) Project did not go as planned
- Early as an EM, my team took ownership of the browser SDK because backend improvements alone were not enough.
- Our first SDK version was too much of a black box, which led to more support requests when behaviour was unexpected.
- I ran user interviews and shadowed developers, then redesigned the SDK with a more explicit interface that improved troubleshooting and implementation quality.
3) Right level of abstraction in the second version
- We defined a clearer boundary between what belonged to the developer and what belonged to the platform.
- Developers could pass inputs like user context because that was part of their domain.
- We kept implementation details like cache size hidden so the platform could manage them consistently and safely.
4) Influenced without direct authority
- At Zalando, just before Cyber Week, a team wanted a new feature to measure the network effect of a video.
- Building the full solution at scale would have put Cyber Week readiness at risk, so I first spoke to them to understand the real rollout need.
- They only needed it in one country initially, so I proposed a narrower solution, aligned the trade-offs with leadership, and enabled the business use case without increasing operational risk.
5) Quarterly planning and capacity planning
- I start with goals agreed with the PM, based on customer discovery, leadership priorities, and team input.
- Then I run a workshop with the team to identify initiatives, high-level estimates, and confidence levels.
- From there, I build a roadmap around impact, dependencies, delivery risk, and actual capacity, while leaving room for support work and unexpected issues.
6) Pushed back on leadership request
- At Zalando, when the company moved to React Native, my team was asked to do a one-to-one replacement of the existing SDK.
- I felt that would carry legacy problems into the new architecture, so I worked with the staff engineer to make the case for a redesign.
- We showed that the redesign would improve developer experience, experiment velocity, and maintainability without affecting timeline, and got alignment on the better approach.
7) Measured whether a team was delivering well
- I used DORA metrics to understand delivery speed and reliability.
- I gathered stakeholder feedback through a quarterly NPS-style survey and direct input from partner teams.
- I checked team health through one-to-ones, retrospectives, and general team health signals.
- Tell me about a time you improved delivery predictability across multiple teams or stakeholders.
For me, delivery predictability starts with making scope, risks and sequencing visible early, and then communicating proactively as reality changes.
One example was at HelloFresh, where I was responsible for migrating away from a legacy vendor to a new platform. The challenge was that the existing vendor had been in place for around seven years, was deeply embedded across the ecosystem, and we only had about eight months left on the contract. So the delivery risk was not just technical migration, but also unknown dependencies across teams.
The first thing I did with the product manager was build a complete view of the use cases and team dependencies, so we were working from shared scope rather than assumptions. Then I worked with a staff engineer and partner teams across the organisation to do a deeper technical assessment and come up with rough sizing and migration paths.
Instead of trying to move everything at once, we started with lower-risk, more straightforward use cases. That helped us validate the platform, learn where the real friction was, and build confidence with stakeholders. Once we had that, I translated it into a phased migration plan that showed which teams could start in which weeks, what the prerequisites were, and where the main risks sat.
I think that is what improved predictability: teams were not surprised, the roadmap was aligned to incremental value delivery, and we could have early conversations when something needed to move rather than reacting too late.
In general, I have found that predictability does not come from trying to control everything upfront. It comes from making uncertainty visible early, reducing it step by step, and keeping stakeholders closely aligned throughout.
- Tell me about a time a project or migration did not go as planned. What happened, and what did you change afterwards?
A good example was early in my time as an Engineering Manager. My team mainly owned backend systems for the experimentation platform, but I realised that to really improve the customer experience we needed end-to-end ownership, including the browser SDK. Otherwise, backend improvements only solved half the problem.
After aligning with leadership, my team took on ownership of the SDK. Our first version did not go as planned. We designed it as a black box, which looked neat from an engineering perspective because developers could call a function and get a result. But that abstraction hid too much. When behaviour was unexpected, developers could not easily understand what was happening, and support requests increased.
The mistake was that we had not spent enough time understanding the right level of abstraction for our users. We designed for elegance, not for how developers actually worked day to day.
To fix that, I spoke directly with users, ran interviews, and shadowed developers in their workflow. Based on that, we redesigned the SDK to make the interface more explicit and easier to reason about. That improved developer experience, made troubleshooting easier, and improved implementation quality as well.
The main lesson for me was that platform ownership is not just about owning more surface area. It is about owning the full user experience, and that means staying close to the people using what you build.
- How did you decide what the right level of abstraction was in the second version?
In the second version, we decided the abstraction level by being much clearer about the boundary between what belonged to the developers’ domain and what belonged to the platform’s domain.
The principle was that developers should control the inputs that were meaningful to their product and use case, but they should not have to manage lower-level platform concerns that were better standardised centrally.
For example, we allowed developers to explicitly pass user context, because that was part of their domain and important for correct behaviour. But we did not expose things like cache size, because that was an implementation detail the platform could manage more consistently and safely.
So the goal was not to expose more knobs for the sake of it. It was to expose the right controls, while keeping platform internals hidden where that helped reliability and consistency.
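The boundary described above can be sketched in code. This is a minimal, hypothetical illustration of the principle, not the real SDK API: the names, types, and the stubbed assignment logic are all made up, but they show the split between explicit developer-domain inputs (user context) and hidden platform internals (cache size).

```typescript
// Hypothetical sketch of the abstraction boundary -- illustrative names only.

// Inputs in the developer's domain are explicit parameters.
interface UserContext {
  userId: string;
  country: string;
}

// Platform-internal concerns stay private; callers never see or tune this.
const CACHE_SIZE = 500;

// The result is explicit about where the assignment came from, so
// unexpected behaviour is debuggable rather than a black box.
interface Assignment {
  experiment: string;
  variant: string;
  source: "cache" | "network";
}

function getVariant(experiment: string, ctx: UserContext): Assignment {
  // A real implementation would consult a bounded cache and fall back to
  // the network; this deterministic stub exists only for illustration.
  const variant = ctx.country === "DE" ? "treatment" : "control";
  return { experiment, variant, source: "cache" };
}

const result = getVariant("checkout-redesign", { userId: "u-1", country: "DE" });
```

The design point is that `UserContext` is a required, visible argument because only the developer knows it, while `CACHE_SIZE` never appears in any signature.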
- How do you influence teams and engineering managers without direct authority, especially when priorities conflict?
One example from Zalando was shortly before Cyber Week, when a team requested a new feature to measure the network effect of a video, essentially how viral a video becomes.
The challenge was that building the full solution at Zalando scale would either have pushed back our Cyber Week preparations or forced us to release a new feature after code freeze, both of which were risky options.
Since I did not have direct authority over the requesting team or the broader business priority, my first step was to understand the actual requirement rather than responding to the initial ask at face value. I spoke with the team about their goals, rollout plans, and what success looked like.
That discussion surfaced an important detail: before Cyber Week, they only planned to use the capability in one country. Once I knew that, it became clear that we did not need the full-scale solution immediately. We could build a narrower version limited to one geo, which would unlock the business use case while keeping delivery and operational risk under control.
I then wrote up a plan that made the trade-offs explicit, including the limitations of the geo-specific solution, the follow-up work needed afterwards, and why this was the safest path given the timing. I aligned that with leadership and the relevant stakeholders, got approval, and we executed against that plan.
The outcome was that we enabled a use case that supported a top-line business opportunity, while still protecting Cyber Week readiness and avoiding unnecessary risk to a critical period.
For me, that is usually how influence works without direct authority: understand the real need, reframe the problem if needed, and make the trade-offs visible so stakeholders can align around a pragmatic path forward.
- How do you approach quarterly planning and capacity planning when there are many dependencies and priorities competing for time?
For me, quarterly planning starts with clarity on goals. I work with the Product Manager to align on the outcomes that matter most, based on customer discovery, leadership priorities, and ideas from the team.
Once those goals are clear, I involve the team to identify initiatives that can contribute to them, along with high-level estimates and a confidence level. From there, I work with the PM to shape a roadmap that reflects not just impact, but also dependencies, sequencing, delivery risk, and actual capacity. Surfacing capacity constraints explicitly is important, because a lot of missed commitments come from hidden assumptions rather than poor intent. I also try not to plan to full utilisation, since support work and unexpected issues always appear.
If priorities compete, I make the trade-offs explicit so we can decide consciously. My goal is to build a plan that is impactful, but also realistic enough that the team can execute it with confidence.
Looking ahead, I think AI can make this process much faster and lighter. It can reduce manual planning overhead and help keep plans current. So while I still believe in quarterly alignment, I am increasingly interested in moving towards a more continuous discovery and delivery model, where strategy remains stable but execution adapts more fluidly as new information comes in.
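The capacity logic above can be sketched as a back-of-the-envelope check. All numbers, initiative names, and the 20% buffer are invented for illustration; the point is padding low-confidence estimates and not planning to full utilisation.

```typescript
// Illustrative capacity check -- figures and names are made up, not from a real plan.

interface Initiative {
  name: string;
  estimateWeeks: number; // high-level estimate in engineer-weeks
  confidence: number;    // 0..1, agreed in the planning workshop
}

// Pad low-confidence estimates instead of taking them at face value.
function riskAdjusted(i: Initiative): number {
  return i.estimateWeeks / i.confidence;
}

// Plan against buffered capacity, keeping headroom for support work
// and unexpected issues rather than planning to full utilisation.
function plannableCapacity(engineers: number, weeks: number, buffer = 0.2): number {
  return engineers * weeks * (1 - buffer);
}

const backlog: Initiative[] = [
  { name: "vendor migration phase 2", estimateWeeks: 16, confidence: 0.8 },
  { name: "SDK redesign follow-ups", estimateWeeks: 6, confidence: 0.6 },
];

const demand = backlog.reduce((sum, i) => sum + riskAdjusted(i), 0); // 30 engineer-weeks
const capacity = plannableCapacity(5, 12); // 5 engineers, 12-week quarter -> 48
const fits = demand <= capacity;
```

A mismatch here (demand above buffered capacity) is exactly the hidden assumption worth surfacing before committing the roadmap.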
- Tell me about a time you had to push back on a stakeholder or leadership request. How did you do it?
A recent example from Zalando was when the company made a strategic decision to move its client-side architecture to React Native. My team was responsible for building the React Native SDK for the experimentation platform.
The original ask from leadership was to do a one-to-one replacement of the existing SDK. But my view was that if the wider initiative was meant to correct legacy architectural decisions, then simply reproducing the old SDK in React Native would carry forward the same limitations into the new setup.
So I pushed back, but in a constructive way. I worked closely with the staff engineer to build a case for redesigning the SDK so it was better suited to the new architecture. We framed the benefits not only in technical terms, but also in terms of developer experience, experiment velocity, and long-term maintainability.
A key part of that conversation was showing that this redesign would not materially affect the timeline. That made the trade-off much easier to discuss, because it was not a choice between quality and speed.
We were able to align stakeholders around the revised approach, which meant we did not carry old design debt into the new client architecture. For me, that is how I usually handle pushback: I try to understand the intent behind the ask, make the trade-offs visible, and propose an alternative that better serves the long-term goal without ignoring delivery constraints.
- How do you measure whether a team is delivering well?
I do not think one metric tells you whether a team is delivering well. I usually look at it from three angles: flow of work, customer perception, and team sustainability.
First, I use DORA metrics to understand how quickly and reliably the team is delivering. They help me spot bottlenecks or stability issues, but I treat them as directional indicators rather than targets to optimise blindly.
Second, I look at customer or stakeholder perception. Especially in platform teams, that matters a lot, because a team can look productive internally but still create friction for others. So I gather that signal through things like a quarterly NPS-style survey and direct feedback from partner teams.
Third, I look at team health. I want to know whether people have clarity, whether there are recurring tensions, and whether the pace is sustainable. I usually get those signals through one-to-ones, retrospectives, and team health checks.
If I see a mismatch between those signals, that is usually where the real insight is. For example, strong delivery metrics with poor team health often means the performance is not sustainable. So for me, a team is delivering well when execution is reliable, customers feel supported, and the team can sustain that level over time.
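Two of the delivery-flow signals mentioned above can be computed from simple deployment records. This is a minimal sketch; the record shape and sample data are invented, and a real setup would pull this from CI/CD and incident tooling.

```typescript
// Sketch of two DORA-style signals from deployment records (illustrative data).

interface Deployment {
  timestamp: string;      // ISO date of the deployment
  causedIncident: boolean; // whether it triggered a production incident
}

// Deployment frequency: how often the team ships over a period.
function deploymentFrequencyPerWeek(deploys: Deployment[], weeks: number): number {
  return deploys.length / weeks;
}

// Change failure rate: share of deployments that caused an incident.
function changeFailureRate(deploys: Deployment[]): number {
  if (deploys.length === 0) return 0;
  const failures = deploys.filter((d) => d.causedIncident).length;
  return failures / deploys.length;
}

const lastMonth: Deployment[] = [
  { timestamp: "2024-01-03", causedIncident: false },
  { timestamp: "2024-01-10", causedIncident: true },
  { timestamp: "2024-01-17", causedIncident: false },
  { timestamp: "2024-01-24", causedIncident: false },
];

const freq = deploymentFrequencyPerWeek(lastMonth, 4); // 1 deploy per week
const cfr = changeFailureRate(lastMonth);              // 0.25
```

Treating these as directional indicators means watching the trend and pairing them with the perception and team-health signals, not setting them as targets.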