Wigwam Travel backend

Challenge: serving millions of prices

Wigwam Travel is a startup that offers travel packages: trips that combine a flight and a hotel. One difference between travel e-commerce and traditional retail e-commerce is that a single travel product contains a multitude of prices. The price depends on a set of parameters: the outbound and inbound travel dates, the number of travelers and their ages, and your airport of departure.

Combining all of these parameters quickly results in thousands of prices per travel package. You can find an example of these combinations in the Wigwam calendar of a package: every price you see in the calendar represents a unique combination of the parameters mentioned above.
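To get a feel for how quickly these combinations add up, here's a rough back-of-the-envelope sketch in Python. The specific numbers (airports, date range, durations, party sizes) are illustrative assumptions, not Wigwam's actual figures:

```python
from datetime import date, timedelta
from itertools import product

# Illustrative assumptions, not Wigwam's actual figures.
departure_airports = ["AMS", "EIN", "BRU"]
outbound_dates = [date(2024, 6, 1) + timedelta(days=d) for d in range(90)]
durations_in_nights = [7, 10, 14]
party_sizes = [1, 2, 3, 4]

combinations = list(
    product(departure_airports, outbound_dates, durations_in_nights, party_sizes)
)

# 3 airports * 90 departure dates * 3 durations * 4 party sizes = 3,240 prices
# for a single travel package.
print(len(combinations))
```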

The Wigwam calendar for a travel package. Every price represents a unique combination of parameters.

Implementing the calendar didn't prove too difficult. Even if a specific package has thousands of prices, that's still a number that can be served relatively quickly. A bigger problem, as it turned out, was what we call aggregated prices.

The overview of all available Wigwam deals. The price shown is the lowest available price, given the filters the user configured.

An aggregated price is the minimum over a subset of all prices for a given deal. The simplest example is the "from" price of a deal ("vanaf" in the screenshot): the absolute lowest price for which one could book that specific deal.
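Expressed as a query, the "from" price is simply a MIN over one deal's rows. A minimal sketch, assuming a hypothetical prices table with deal_id and price columns (not Wigwam's actual schema):

```python
import sqlite3

def from_price(conn: sqlite3.Connection, deal_id: int) -> float | None:
    # The "from" price: the lowest price across every parameter combination
    # (dates, airports, party size) stored for this one deal.
    (minimum,) = conn.execute(
        "SELECT MIN(price) FROM prices WHERE deal_id = ?",
        (deal_id,),
    ).fetchone()
    return minimum
```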

An experienced database engineer might already sense the problem here. If we're rendering a page with all available deals (over 300 in our case), each of which requires an aggregated price computed over that deal's full set of prices, we end up with a database query that scans every price in the database. In our case, that meant millions of rows.
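The deals overview needs that same aggregate for every deal at once. A naive sketch of that page's query, under the same hypothetical schema as above; because nothing is precomputed, the database has to touch every single price row on each request:

```python
import sqlite3

def naive_overview_prices(conn: sqlite3.Connection) -> dict[int, float]:
    # One aggregate per deal, computed on demand: with 300+ deals and
    # millions of price rows, this GROUP BY scans the whole prices table.
    rows = conn.execute(
        "SELECT deal_id, MIN(price) FROM prices GROUP BY deal_id"
    ).fetchall()
    return {deal_id: min_price for deal_id, min_price in rows}
```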

Quick exploratory testing showed us that implementing this page naively would result in queries lasting seconds, if not minutes. This is unacceptable, especially in e-commerce, where every millisecond matters for the conversion rate.

Solution: rollups

Our dataset started relatively small: a handful of deals, each with around one hundred prices. As we gradually expanded our offering with more deals and more prices per deal, the total number of prices quickly grew. We needed a way to tame the sheer volume of data in order to serve these aggregated prices quickly. Enter rollups.

As Matthias Mullie describes on his blog:

Rollup tables are tables where totals for (combinations of) conditions are saved. A “summary table”, that holds pre-aggregated values for all conditions you may need to fetch totals for.

Intuitively, it makes sense that multiple customers requesting the "from" prices of deals will receive the exact same responses. Flight and hotel prices can be volatile, but they won't change from second to second. Building on this assumption, it's fair to say that recalculating the aggregate price from scratch for every individual user is wasteful. We can calculate all aggregates in advance and store them in a table, ready to be served to users.
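For Wigwam, that means precomputing the per-deal minimum into a rollup table and letting the overview page read from it. A minimal sketch of both sides, still assuming the hypothetical prices table; the deal_price_rollup table and its columns are my own illustration, not Wigwam's actual schema:

```python
import sqlite3
from datetime import datetime, timezone

def refresh_rollup(conn: sqlite3.Connection) -> None:
    # Recompute every aggregate in one pass and swap in the new snapshot.
    rolled_up_at = datetime.now(timezone.utc).isoformat()
    with conn:  # one transaction: readers see either the old or the new snapshot
        conn.execute("DELETE FROM deal_price_rollup")
        conn.execute(
            """
            INSERT INTO deal_price_rollup (deal_id, min_price, rolled_up_at)
            SELECT deal_id, MIN(price), ? FROM prices GROUP BY deal_id
            """,
            (rolled_up_at,),
        )

def overview_prices(conn: sqlite3.Connection) -> dict[int, float]:
    # The overview page now reads a few hundred precomputed rows instead of
    # aggregating over millions of price rows on every request.
    rows = conn.execute(
        "SELECT deal_id, min_price FROM deal_price_rollup"
    ).fetchall()
    return {deal_id: min_price for deal_id, min_price in rows}
```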

Tradeoffs

As with almost anything in software, rollups are no silver bullet. When implementing a rollup procedure, the first thing you'll notice is the added complexity. Previously, all you were doing was writing to and reading from the database on demand; now you also need a separate background process that periodically rolls up your data. This adds a bit of infrastructure and code overhead, but it should not be your biggest issue.
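How that background process is scheduled depends on your stack: a cron job, a queue worker, or a plain loop all do the job. As a minimal sketch, a standalone worker that reruns the refresh_rollup function from the previous snippet at a fixed, illustrative interval:

```python
import sqlite3
import time

ROLLUP_INTERVAL_SECONDS = 15 * 60  # illustrative; tune to how fresh prices need to be

def rollup_worker(db_path: str) -> None:
    conn = sqlite3.connect(db_path)
    while True:
        refresh_rollup(conn)  # the rollup function sketched earlier
        time.sleep(ROLLUP_INTERVAL_SECONDS)
```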

An arguably more difficult problem is inconsistency. Without rollups, you're writing and reading on demand. Provided you've properly normalized your underlying database schema, your reads and writes will always be consistent: the changes made by a write are instantly observed by subsequent reads.

Using rolled-up data changes this. Your data becomes more stale as time goes on, until you perform the next rollup. In many cases this is quite okay: for Wigwam, we know that updating a single price is very unlikely to change the minimum price for that deal. Moreover, the minimum price is just an indication; the prices shown in the calendar are always live reads. This means we never run the risk of selling a deal for a stale price, which would effectively cut into our bottom line.
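In code, that distinction becomes two read paths: the overview serves the rolled-up indication, while the calendar and checkout always query the live prices. A sketch of that split, using the same hypothetical tables and columns as the earlier snippets:

```python
import sqlite3

def listing_price(conn: sqlite3.Connection, deal_id: int) -> float | None:
    # Possibly stale, but cheap: the precomputed "from" price shown on the overview.
    row = conn.execute(
        "SELECT min_price FROM deal_price_rollup WHERE deal_id = ?",
        (deal_id,),
    ).fetchone()
    return row[0] if row else None

def bookable_price(
    conn: sqlite3.Connection,
    deal_id: int,
    outbound_date: str,
    inbound_date: str,
    departure_airport: str,
    travelers: int,
) -> float | None:
    # Always live: the price a customer actually books never comes from the
    # rollup, so a stale aggregate can never turn into a stale sale.
    row = conn.execute(
        """
        SELECT price FROM prices
        WHERE deal_id = ? AND outbound_date = ? AND inbound_date = ?
          AND departure_airport = ? AND travelers = ?
        """,
        (deal_id, outbound_date, inbound_date, departure_airport, travelers),
    ).fetchone()
    return row[0] if row else None
```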

Conclusion

As your dataset grows, you'll have to start making tradeoffs between performance and consistency. There is no way around this; even the biggest tech companies are bound by this constraint. Just think of Google search results: it can take hours, or even days, before Google picks up on changes to your website. The reason is their search index, which is effectively an immense rollup table.

My go-to strategy regarding rollups is to start out without them. I opt for consistency at first and only reach for rollups when I'm actually observing performance problems. After all: premature optimization is the root of all evil.
