How our tech stack evolved
A bit of Bearer history. We started as a Ruby on Rails-heavy team, but as our original product grew, so did the needs of our architecture. Pretty soon, it was heavily reliant on microservices, and for good reasons: we were handling over a billion requests a month, and we needed resiliency because we sat between our users' requests and the outside world. The stack was, to put it lightly, very diverse. We ran a Rails API, GraphQL, and ReactJS on the front end, plus a handful of other services running Elixir, NodeJS, and Rust, to name a few. This was compounded by the in-app agents that our customers installed in their applications, which, by the time we transitioned away from the product, covered PHP, Go, Ruby, NodeJS, and Python. It was quite the engineering burden for a small team. We’ve spoken in the past about how we processed all that data effectively.
This stack quickly became fragile. The front end never had a true owner, we ran into caching and loading issues, and development time ballooned. It became very tricky to deploy changes to the React app and the Rails app at the same time without one side getting out of sync. We ended up with something close to this process:
- Develop a dummy endpoint on the backend. This meant generating GraphQL schemas that the front-end developers could use immediately, without waiting for backend features to finish. We used graphql-code-generator to make schema validation with TypeScript easier.
- Develop the feature on the frontend.
- Develop the feature on the backend (in parallel with the frontend).
We were essentially maintaining two applications where one should have been enough. The big learning was that this architecture might make sense for large, stable applications, but it didn’t make sense for a company iterating fast to find product-market fit.
Time for a change
Since the system above wasn’t working well for prototyping, our lead engineers built a proof of concept for a new product in pure Rails. It let them work quickly and keep things really simple: a quick feedback loop, meaning quick iteration, and no context switching.
Right around this time (late December 2020), Basecamp announced Hotwire.
It seemed like the timing lined up perfectly to seriously consider a change in architecture. The new product was very different—and is even more different today—and it was less complicated than the old product. It was much closer to a pure SaaS application than a mix of services and embeddable agents. Our engineering team sat down and looked at the current state of the app, where we wanted to go, and the technical choices we had. It was really important to get everyone’s input, as this was going to be a big departure from the past product. Everyone was on board to move forward with a Rails and Hotwire-centric approach.
How we use Hotwire
- Hotwire = HTML-over-the-wire = Turbo + Stimulus (for now)
- Turbo: techniques to handle HTML responses (Turbo Frames, Turbo Streams, …)
- Stimulus: a lightweight JavaScript framework
It began with Stimulus
For us, the Hotwire journey actually began with the idea of using Rails with Stimulus. Coming from a place where we had Rails and React, we knew we needed some JS, and Stimulus seemed like an appropriate choice. This got us quite far, but we found that the Stimulus controllers we were writing became large and complex. It turned out that this was a pretty big code smell in Hotwire, as the intention is to write less custom JS and instead rely on partial page updates to handle complex UI changes.
To better understand Turbo, let’s look at two ways we use it at Bearer.
Multiple Turbo Streams from a single request

Using Turbo Streams allowed us to greatly reduce JS usage for common UI behaviors, like modal popups, adding and removing items from a list, and responsively disclosing form elements. Streams let us send a request to the server and receive back multiple Turbo Streams that update multiple blocks in the UI. We’re also using streams to keep the application state on the server and push any updates down to the client.
Consider one example. When the “X” button for an item was clicked, we used custom JS to hide the item and update a hidden form field so the item would be deleted on form submission.
This led to an edge case: all items could be deleted, but the state of the “Next” button would not be updated, meaning we could submit the form with zero items selected. We addressed this with Turbo. Now, when the “X” button for an item is clicked, it sends a DELETE request to the server, which returns two Turbo Streams: one removes the HTML element containing the deleted item, and the other updates the “Next” button to reflect the state change.
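To make the idea concrete, here is a minimal sketch of the payload such a DELETE response carries. In a real Rails app you would render this with the `turbo_stream.remove` and `turbo_stream.replace` helpers; the sketch below assembles the raw `<turbo-stream>` elements by hand just to show what goes over the wire. The element IDs (`item_42`, `next_button`) and the button markup are hypothetical.

```ruby
# Build the raw Turbo Stream elements Turbo expects in the response body.
# "remove" needs no content; "replace" wraps its new markup in <template>.
def remove_stream(target)
  %(<turbo-stream action="remove" target="#{target}"></turbo-stream>)
end

def replace_stream(target, html)
  %(<turbo-stream action="replace" target="#{target}"><template>#{html}</template></turbo-stream>)
end

# A single DELETE request can answer with several streams concatenated together:
body = [
  remove_stream("item_42"),                                         # drop the deleted item
  replace_stream("next_button", %(<button disabled>Next</button>))  # sync the button state
].join("\n")

puts body
```

Turbo applies each stream independently on the client, so both the item list and the “Next” button stay in sync with the server-side state from one round trip.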
Lazy-loaded Turbo frames for a quick performance win
For many of our original components, we loaded everything at page load and then showed or hid components as needed. This resulted in a large page load, and quite a waste in cases where the user never sees most of those components. A typical case for us is responsively disclosing more content when a user toggles a show/hide control, a pretty common interaction in most applications.
Instead of that heavy direct load, we lazy-load a Turbo frame with very little code. For example:
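A minimal sketch of such a frame follows; the frame `id`, the path, and the placeholder markup are hypothetical:

```erb
<%# The frame is empty at page load; Turbo fetches the content from `src`
    only when the frame scrolls into view, thanks to loading="lazy". %>
<turbo-frame id="usage_stats" src="/usage_stats" loading="lazy">
  <p>Loading…</p>
</turbo-frame>
```

The server response just needs to contain a matching `<turbo-frame id="usage_stats">` element, and Turbo swaps its contents into place, so the initial page stays light.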
The big pro: built for the Rails ecosystem. As a team with tons of Rails fluency, it was low risk to move over to Hotwire. This meant it was quick to implement and easy to maintain, especially since most Turbo use cases involve “regular” Rails routes, controller actions, and partials.
We also found that it allowed us to quickly iterate. We could build a product feature using standard HTML pages for our proof of concept, and then gradually introduce Turbo as needed to optimize flow and improve the rendering performance.
The other main advantage, especially compared to our old stack, is that data and validation can all be kept in one place: the server. This avoids logic duplication and typing duplication (we’re looking at you, TypeScript and GraphQL).
The primary con is that Hotwire is still relatively new. The documentation is good, but not extensive.
Those coming from Rails are accustomed to an opinionated set of best practices. Right now, many of Hotwire’s best practices are still up for debate. For example, is it acceptable to return multiple Turbo Streams in a single response, and if so, what’s the preferred way to do it? How should we approach dynamic forms in Hotwire? It’s never a great experience drilling through GitHub issues for the best approach to a problem. There is good news here, though: it looks like Hotwire will become a core, default part of Rails 7. That should go a long way toward establishing the solid conventions that have been missing so far.
Where we go from here
If you couldn’t tell, we’re pretty happy with the pace, performance, and quality we’ve been able to achieve with Hotwire. On top of that, in our recent hiring round we kept hearing how attractive the stack was to prospective Rails developers. It's normally easy to find a "hot new tech stack", but it's hard to find one that scales, works for your product cadence, and is useful for hiring.
We look forward to the community stabilizing further as best practices become solidified. To help with that, we have some new Rails and Hotwire articles in the pipeline so keep an eye out for them. The team is always happy to share their findings, so reach out to us @BearerSH on Twitter if you’d like to know more.