
How our product engineering workflow has evolved

As we explained in a previous blog post, we decided to pivot at the end of summer 2020. Pivoting our products was a major change for our cross-functional team’s organization, and we used it as an opportunity to rebuild our UI/UX and engineering processes from scratch.

One aspect of that change was organizational, driven by our desire to iterate fast with the product’s first pioneer users, who were, and still are, helping us build it.

Settling on our current organization didn’t happen in one month; we went through many trials and errors to get where we are now. This blog post presents what we tried, what we dropped, and what we kept. Much of it is specific to our needs, but I think there are lessons here that can help improve your own cross-functional teams.

Step 1: challenge everything.

As we decided to drastically change the vision of our product and its target market, we also decided to stop piling onto the legacy codebase of our old product and start anew. Two factors drove this decision:

  1. The complexity of the technical stack
  2. A fundamental change in the product’s purpose

In the legacy product, we worked in three-week sprints. All the tickets attached to a given sprint were meant to ship within that same sprint. Meanwhile, the product and design team prepared the next sprint’s tickets, with specifications aligned with the previous sprint’s outcome.

Working with sprints had one main issue: we were constantly late on deadlines. In our experience, this came from two main factors:

  • You can’t know the full complexity of a feature until you start digging into it. However thoughtful, complete, and researched the feature may be, surprises will emerge.
  • We had a split front-end/back-end architecture built on two different technologies. This required two engineers per ticket: one on the front end, one on the back end. Any complex interface change needed both sides to sync and work together.

So we decided to switch to a magnificent monolithic architecture, with one clear goal: any engineer on the team should be capable of shipping a complete feature on their own. This meant our engineers would lean into full-stack roles and handle both the front end and back end of a feature.

While preparing for that big reboot, we also dropped the sprints. Rebooting your technical stack is a big endeavor: all the foundational features are gone, and you need to rebuild them one by one. On the product side, it meant rebooting the feature map and being very picky about what to carry over.

For example, we had a complex approach to team management and user roles in the legacy product because it was conceived as a bottom-up, self-onboarded developer tool with limited security concerns. A single user would only need one account across the whole platform, similar to how other products in the space handled users.

Our new product was drastically different: a staff-guided onboarding, top-down approach with high security concerns. There was no point back-porting the whole team management and onboarding features, so we decided to start with a straightforward team management feature built around organizations.

We imposed a very ambitious release deadline to push us to prioritize and make those tough decisions. We weren’t working in sprints anymore, so setting a timeline ensured we weren’t running behind. The product/engineering team, our GTM, and the founders decided on three months to build it all and start onboarding our early adopters.

Step 2: adjusting to our new market.

As we redesigned the minimum viable product, we needed to explain the new direction to our users. To make sure we were on the right track, we relied heavily on Figma prototyping. Figma is a great product for this, because a motivated designer can make their prototypes feel like a real product. It’s a comment we often received when showing prototypes: “Is this production already?”

Full-fledged application prototyping aims to validate the natural path of users’ navigation without waiting for the actual app to be developed. Assessing the value of an application based solely on a prototype is challenging, so we make sure our prototypes always contain accurate, realistic data, looking very close to what the user would see in the live product.

This was also when we expanded our personas to add legal and compliance users to the workflow. It was a challenge, as we mostly came from engineering backgrounds, and we had to learn a lot about the legal implications of privacy and data security. We fast-tracked our learning by hiring experts to guide us through it. To learn even faster, we decided to dog-food the whole flow on ourselves and to be exemplary in our implementation choices for maximizing data privacy. Have a look at our privacy policy for more details.

From a product/engineering team’s point of view, it meant: 

  • Challenging the choices we made for user behavior data collection. We switched from Fullstory to a self-hosted product, Posthog, to make sure all private data stays with us.
  • Rebuilding our whole emailing system and removing unnecessary tracking features.
  • Changing our decision-making strategy for new third-party services and assessing their data impact.
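As a sketch of what the first point can look like in practice, here is a minimal, hedged example of a thin event wrapper that strips personal fields before anything leaves the app for a self-hosted analytics instance. The function names and the `analytics.example.com` host are illustrative assumptions, not our actual code:

```typescript
type AnalyticsEvent = {
  name: string;
  properties: Record<string, unknown>;
};

// Fields we never forward, even to our own self-hosted instance.
const PRIVATE_FIELDS = ["email", "ip", "fullName"];

// Remove private properties from an event before it is sent anywhere.
function sanitizeEvent(event: AnalyticsEvent): AnalyticsEvent {
  const properties = Object.fromEntries(
    Object.entries(event.properties).filter(
      ([key]) => !PRIVATE_FIELDS.includes(key),
    ),
  );
  return { ...event, properties };
}

// Capture an event on our own infrastructure, not a third-party SaaS.
async function capture(event: AnalyticsEvent): Promise<void> {
  await fetch("https://analytics.example.com/capture", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(sanitizeEvent(event)),
  });
}
```

The point of the wrapper is that no call site can forget the sanitization step: every event goes through `capture`, and private data stays with us.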

While learning our new market and customers' priorities, we made big, sweeping changes throughout the product. Rather than releasing small changes as they were finished, we went with a more traditional iteration approach. This allowed us to better assess the app’s direction as a whole and receive more holistic feedback from users.

We called each iteration a “Version”, and since these weren’t small changes, the whole process was called “Big Versions”. It was a very waterfall-y approach to product development. Shipping a big version went like this:

  • Build the new navigation, information architecture, and copy for the whole product for a given version.
  • Create a mile-long specification with all the tiny details we researched with our users.
  • Wait for the development team to be ready with the previous version (pushing it to QA).
  • While product & design did QA on the previous version, engineering would read, comment, and ask questions on the next version.
  • Once QA was done on the product/design side, we would answer all the engineering team’s questions while they implemented the QA feedback.
  • Finally, when everything had been QAed and refined, the previous version would be shipped to users and the next one put into development.

The whole process of QAing and refining specifications of the next version would take roughly two weeks. Sometimes three weeks. So the entire timeframe to get a new “big version” out there would be nine weeks. 

We had good reasons to work this way: 

  • At this time, we landed our first customer. We had to make sure the product would be production-ready, despite the significant changes we would make to UI, UX, and copy. 
  • UI, UX, and copy were always coherent across a given version, increasing the product's perceived quality.
  • Big specification files let the engineering team decide how to slice it for implementation and in which order to tackle it. 
  • It allowed us to make large, sweeping changes without worrying about carrying over partial progression. This led to some major product shifts that still live on today.

But it was outweighed by the drawbacks:

  • The size of each big version’s plan made it difficult to spot the tiny details or problems before engineering started development. We were touching almost 100 screens each time, and that’s without counting the backend part of the product when analyzing the codebase.
  • Making any change to the plan in flight was almost impossible. The pressure on product/design to nail the “right” behavior was enormous, as keeping everything coherent was a nightmare. Even changing the copy was painful.
  • The size of the QA to perform at the end of development was huge and painful for everybody. 
  • Pressure on a delivery date meant a lot of legacy code from the previous version was left behind. 
  • We did rituals like kick-offs, planning, or retrospectives only once per version, which meant they were almost two months apart.

This organization was optimized for early consistency and perceived quality, at the expense of frictionless collaboration between product, design, and engineering. As a result, we didn’t communicate often enough to realign priorities, nor could we change our plans without generating frustration among both those who made them and those who implemented them. We needed to change and bring agility back.

Step 3: acknowledge reality and bring agility back. 

From the beginning, we wanted to behave as one team working on one product, where everyone had to know everything about everything. This isn’t realistic: you can’t be an expert in everything. Engineers would have needed expertise in everything from detail-driven front-end work to the intricacies of data detection algorithms in a specific language. It’s too broad. It took us some time to realize that behaving as one team didn’t mean everyone needed to perform the same role and know everything.

When looking at the state of the product and planning the next big feature push, we didn’t want to go into another tunnel. We knew that our big technological push toward a very scalable detection capability would be a road paved with holes, and our only hope of not crashing was to adjust course just in time.

We split up into two teams with different goals and objectives. 

  • The detection engine team: building the core feature that reads the codebase and finds components, integrations, and data. Very technical, very sophisticated, driven by detection quality KPIs.
  • The dashboard team: building our SaaS dashboard that displays the detected data in a way that fits our user’s workflows. User-centric, fast-paced, driven by adoption and usage KPIs.

We also made an inventory of what we wanted to keep, drop, and start in our product/engineering organization. 

A recurring issue in agile development is sizing how big something is to build and estimating how long it will take. From an engineering perspective, we didn’t differentiate between what was experimental and what was core to our product. Everything was developed with the same level of care and the same expectation to last. As a result, we over-invested in features that still needed to be refined through user interaction.

We had tried many ways to assess the size of work, from t-shirt sizes to Fibonacci-sequence points meant to give the number of days an engineer should spend on a ticket. None worked, because asking engineering “how much time will it take to build this feature?” is the wrong question. So instead, we decided to frame each feature with a quality level, informed by our confidence in the feature.

We decided on three levels of feature investment: 

  • Experimental features: new features built to research behavior or validate findings as quickly as possible. Minimal investment as the feature may be gone in a few weeks or completely rebuilt for scale.
  • Edge features: production features that aren’t relied on heavily. They are built to last but are less critical than core features, as they won’t be part of failure chains.
  • Core features: production features that other features rely on. Heavy investment, as they define the essence of our product and everything else depends on them.

This framework helps us share context and intention around upcoming features. So far, it has been very effective at limiting over-investment in new features. After a while, an experiment can either be promoted to a production-grade level or simply discarded. We don’t start a new feature without experimenting first and getting user feedback from the live product.

Switching back to short iterations, we also decided to reintroduce a faster-paced ritual habit. “Big versions” didn’t allow much time for kick-offs and retrospectives, and no demos happened. Yet these rituals are critical for adjusting course and for the team’s morale and cohesion. So we decided to bring back sprint-like iterations, with clear differences from our old way of working with sprints. For clarity, we now call them cycles.

  • A Cycle lasts two weeks. We do a kick-off at the beginning, a demo on the first Friday, and a demo + retro on the last Friday. Everybody demos (PMs, PDs, EMs, Devs).
  • Each contains many features, grouped by epics, to be tackled by the team.
  • A new feature’s launch in production isn’t tied to a specific cycle and may drift.
  • We use cycles to share the product’s pace and velocity with stakeholders and customers. As with sprints, we say something is likely to release in cycle X.

We changed the way we do retros. We ask team members to prepare them independently. Product and engineering leadership then reads all the notes and writes a summary note, which every team member reads and debates to decide on actions:

  • It pushes everyone to remember what happened in the cycle, the good and the bad.
  • It asks for actionable items to tackle and an assessment of the team’s performance.
  • The personal note gives a quiet space for team members less open about their feelings or more comfortable with written communication to express their opinions and be heard.

Both our teams use the same rituals to maintain coherence throughout the company. The general feeling after implementing them was that the teams were thrilled. Here are some examples from various cycles:

Cycle 1:

Cycle 2:

We also know when things aren’t too good and can adjust.

Cycle 10:

From a stakeholder perspective, we can give better predictions of when a given feature will land, and we can change course as needed. Our overall velocity has increased; the new organization isn’t the only reason, but it is a major contributing factor.

Our product/design/engineering methodology is a living thing. It has loose rules and focuses on ensuring a healthy and enjoyable work environment. Our main goal isn’t to build a comprehensive work methodology to ship more features; it is to deliver the full value and potential of our technology to our users as quickly and efficiently as possible. This way of working will change over time as new people join our team and we run more and more parallel development threads. The implementation will change for sure, but we hope to retain the core values and principles we’ve learned.

We hope this piece of our company’s knowledge will help you shape a great work environment for your organization's product, design, and engineering teams. Feel free to reach out if you have any questions on how we do things here at Bearer! 

Cheers

The Bear Den
