When we were due to implement our Alert Costs feature we decided it was time to try out Redux on something real, over the Backbone models and collections we’d been using in the past. We’d heard about, and certainly wanted to benefit from, something that stored, tamed and consolidated state (something we have a lot of).
Onwards we went to use Redux for Alert Costs. We made numerous mistakes along the way, but we learnt from them, removed our pain points and made changes to get even more from Redux going forward. Redux now powers a much larger portion of our data layer than just Alert Costs, and with the incremental improvements we’ve made, working with Redux continually becomes more enjoyable. This post outlines three of our biggest “uh-ohs”.
A lack of normalisation
Redux allows you to store state in any format you want; there are no hard and fast rules about state structure other than “what works best for your application”. This flexibility is both amazing and a little scary at first.
We all know we should pick the correct data structures for the jobs we’re going to perform, but until you’ve been through the full cycle (fetching data via an action, storing it via a reducer, creating a container component to consume the state and a presentational component to display that state as props) it isn’t always obvious what the best representation would be. This is especially true the first time you switch to a radically different way of thinking about state.
However, as with database design, a solid starting point is to normalise all data unless it’s been proven that denormalisation is needed (for performance, etc.). Normalisation is, at its core, the process of efficiently organising data in a database (or, in our case, our store).
Looking at the two core goals of normalisation we have:
- Reduce data redundancy
- Improve data integrity
In the case of a database, this would ensure we’re utilising less space and that things are logically stored – and the great news is, it’s no different for our store.
At the very start of our Redux journey we hadn’t quite “locked down” the final outline of our store structure, i.e. the very top level keys we would hold. This is where we made mistakes through a lack of normalisation. Our original Alert Costs state structure looked like this:
{
  modules: {
    alertCosts: {
      pagination: {
        perPage: 50,
        total: 0
      },
      isFetching: false,
      isFetchingEventsFor: [],
      errors: [],
      entries: []
    }
  }
}
Oh no, no no no. Looking back now this little snippet makes me instantly feel uncomfortable but at the time it wasn’t so obvious. When we took on the Alert Costs feature, our store was empty – just a blank slate. For this feature we needed to fetch and store data (the various Alert Cost rows / entries) and also store some transient UI information (pagination data, which will change upon interaction). There would also be the ability to view “events” under a top level row.
In this first stab, everything was just bundled under a key of alertCosts, under modules. When the top level container for this route mounted, it would perform a fetch action which would look up pagination information from this location, and the page itself from another portion of our store that holds query parameters, then fetch the relevant set of data. Upon arrival, that data would blitz whatever was in the entries array.
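To make that concrete, here’s a rough sketch of how that first reducer behaved (the action types and payload shape are illustrative rather than our exact code). The important part is that every successful fetch replaced the entries array wholesale:

initialState =
  pagination: { perPage: 50, total: 0 }
  isFetching: false
  isFetchingEventsFor: []
  errors: []
  entries: []

alertCostsReducer = (state = initialState, action) ->
  switch action.type
    when 'ALERT_COSTS_FETCH_REQUEST'
      Object.assign({}, state, isFetching: true)
    when 'ALERT_COSTS_FETCH_SUCCESS'
      # Whatever page was previously loaded is blitzed by the new entries.
      Object.assign({}, state,
        isFetching: false
        pagination: action.pagination
        entries: action.entries
      )
    else
      state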
There are several smaller mistakes here, which add up to a very brittle, hard-to-work-with architecture.
Our first big mistake in this example was not separating data from UI state; in the original example they’re just bundled into one location. We assessed this and decided to separate our data into a portion of the store called entities (a popular pattern in the community), saving the modules portion for transient, UI-related state (state that would actually map to a “screen”): things like what’s open, what’s selected, where pagination currently sits, and so on. We later renamed modules to features, as “modules” was something derived from our navigation menu (other popular keys include things like ui, screen, stage, etc.).
Our new store structure looked like so:
{
  entities: {},
  features: {
    alertCosts: {}
  }
}
Separating data and UI concerns in the store immediately means things start to slip into place. Remember that original fetch action the top level route container dispatched? Much of that flow would remain the same: there’d still be a fetch action for the Alert Costs feature, and it would still create a query based off of the UI’s pagination state, but things would be reduced differently this time. Rather than just storing everything under alertCosts, our normalised approach would look like the following:
{
  entities: {
    alertCosts: {
      byId: {
        1: {},
        2: {},
        3: {}
      }
    }
  },
  features: {
    alertCosts: {
      pagination: {
        perPage: 50,
        total: 0
      },
      alertCostIds: [1, 2, 3]
    }
  }
}
I’ve left out some keys / functionality for brevity, but this demonstrates the separation, which yields multiple benefits:
- If an item with a certain ID is updated, it only needs to change in one place and that will be reflected everywhere.
- Reducers can deal with much simpler logic, because more state can exist as simple-to-access top level keys. In our first pass at Alert Costs we stored “events” (which are themselves entities) directly under each Alert Cost item; this type of nesting makes reducer logic more complicated and data harder to access overall.
- There’s a consistent location to store / lookup uniquely identifiable pieces of data.
In our concrete example above, any other part of the application now has access to all of the stored entities. And when pages are navigated the items themselves won’t be lost; the only thing that changes is the set of IDs being referenced.
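As a hedged sketch of how this might reduce (again, the action types are illustrative rather than our exact code), the success case now touches two places: the entity reducer merges items into byId, while the feature reducer only records the pagination details and the IDs the current page references.

alertCostEntitiesReducer = (state = { byId: {} }, action) ->
  switch action.type
    when 'ALERT_COSTS_FETCH_SUCCESS'
      byId = Object.assign({}, state.byId)
      byId[entry.id] = entry for entry in action.entries
      Object.assign({}, state, byId: byId)
    else
      state

initialFeatureState =
  pagination: { perPage: 50, total: 0 }
  alertCostIds: []

alertCostsFeatureReducer = (state = initialFeatureState, action) ->
  switch action.type
    when 'ALERT_COSTS_FETCH_SUCCESS'
      # Only the referenced IDs change when a page is navigated; the entities
      # themselves stay cached under entities.alertCosts.byId.
      Object.assign({}, state,
        pagination: action.pagination
        alertCostIds: action.entries.map((entry) -> entry.id)
      )
    else
      state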
One of our most reused entities is that of inventory items – these cover all devices and services (for server monitoring and availability monitoring respectively). These, as you can imagine, are used / displayed / referenced all over the application. When the application boots we dispatch a bootstrap action which in turn dispatches a preliminary inventory fetch action – this will fetch and store (up to) the first 400 items. From this point on, items are fetched as and when they’re needed (i.e. when they’re not already in the store) via a fetchById action which accepts either a single ID or an array of IDs. Because all of these entities are cached they can simply be reused, or refreshed / updated if a certain amount of time has passed.
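As a hedged sketch of how such an action might look, assuming redux-thunk (or similar) middleware, with the API helper and action types standing in for our real implementation:

fetchById = (idOrIds) ->
  (dispatch, getState) ->
    wanted = if Array.isArray(idOrIds) then idOrIds else [idOrIds]
    loaded = getState().entities.inventory.entries.map((entry) -> entry.id)
    # Only ask the API for the items we don't already have cached.
    missing = wanted.filter((id) -> id not in loaded)
    return Promise.resolve() if missing.length is 0
    dispatch(type: 'INVENTORY_FETCH_BY_ID_REQUEST', ids: missing)
    # Api.fetchInventoryItems is a stand-in for our real API client.
    Api.fetchInventoryItems(missing).then (items) ->
      dispatch(type: 'INVENTORY_FETCH_BY_ID_SUCCESS', items: items)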
Our normalised entities.inventory state structure looks like so:
{
  isFetching: false,
  isFetchingIds: [],
  isPerformingSearches: [],
  entries: []
}
This is one of the oldest parts of our store now, and whilst data is separated out, it could still be improved with better organisation / use of data structures. Currently we still use a key of entries which has an array of items. This is hard to work with and requires looping through the array to find the item you’re looking for. A much more efficient structure would be:
{
  isFetching: false,
  isFetchingIds: [],
  isPerformingSearches: [],
  byId: {
    abc123: {
      name: 'My test server'
    }
  }
}
Legacy code, huh? The important thing is to learn from these mistakes and do better next time. In our latest work we’ve done just that with the portion of our store that handles metrics information:
{
  isFetching: false,
  isFetchingForInventoryIds: [],
  isFetchingForSearchTerms: [],
  metadata: {
    'some.key': {},
    'some.other.key': {}
  },
  byInventoryId: {
    someDeviceId: ['some.key'],
    someServiceId: ['some.other.key']
  },
  bySearchTerm: {
    someSearchTerm: ['some.key', 'some.other.key'],
    anotherSearchTerm: []
  }
}
By using the correct structure, lookups will now be much easier and much more efficient. For example, _.pick(thingWithKeys, keysToPick) is much easier to parse mentally and work with than nested loops and arrays.
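As a hedged illustration (using lodash, with made-up helper names), compare gathering the metadata for a device under the two shapes:

_ = require('lodash')

# With the keyed structure above, it's a straightforward pick of the keys
# referenced for that device.
getMetadataForInventoryId = (metricsState, inventoryId) ->
  keys = metricsState.byInventoryId[inventoryId] or []
  _.pick(metricsState.metadata, keys)

# Versus the older entries-array style, which needs a scan on every lookup.
getMetadataFromEntries = (entries, keys) ->
  entries.filter((entry) -> entry.key in keys)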
So far on our journey we haven’t had to deal with many-to-many relationships, but this would also follow the ideas set by relationship handling in databases. A normalised database will use a join table for many-to-many relationships, and the idea is the same in Redux.
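As a purely hypothetical shape (these entities aren’t from our store), a many-to-many relationship between, say, dashboards and widgets could be held in a join-table-style slice that stores nothing but the relationships:

{
  entities: {
    dashboards: {
      byId: {
        d1: {},
        d2: {}
      }
    },
    widgets: {
      byId: {
        w1: {},
        w2: {}
      }
    },
    dashboardWidgets: {
      byDashboardId: {
        d1: ['w1', 'w2'],
        d2: ['w2']
      },
      byWidgetId: {
        w1: ['d1'],
        w2: ['d1', 'd2']
      }
    }
  }
}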
These days the official Redux docs have a great page on normalising state shape.
Incorrect usage of containers
In Redux we have container components (how things work) and presentational components (how things look). Our container component will read state from the store, map that state to props, and then pass those props to a presentational component. These can be wired together manually, or you can use something like React-Redux’s connect method (which is what we use).
In the days of old, “one container” thinking was pushed, i.e. containers would sit at the top level and effectively be route handlers. But then Dan Abramov did a 180 on this advice. This is important because not only does it highlight our next mistake, it also links back to our normalisation point from before:
“Note that a normalized state structure generally implies that more components are connected and each component is responsible for looking up its own data, as opposed to a few connected components looking up large amounts of data and passing all that data downwards. As it turns out, having connected parent components simply pass item IDs to connected children is a good pattern for optimizing UI performance in a React Redux application, so keeping state normalized plays a key role in improving performance.”
In the beginning we got locked into very “one container” thinking – we were trying to cram far too much information and functionality into one container.
In our codebase we do very well at extracting generic functionality into generic (presentational) components. Yet for some reason we weren’t thinking about data in the same generic sense. Once this “clicked”, it made the consumption of data much easier.
In our application we have several locations where we need to fetch and display available metrics (configuring a graph, configuring an alert, etc.). This is a data concern in its own right. These available metrics are real, tangible entities, yet they’re never the main data concern – they serve to inform other pieces of data. For example, when a metric is selected in the context of a graph, that selection is stored against the graph’s configuration and we then go on to fetch the relevant time series data to match it. The main concern here is the time series data (so we can view the graph), but it can’t be configured without the help of the available metrics.
In some of our latest work we’ve been refactoring the data layer in these areas. Now we have a container component that can be composed wherever metrics need to be selected. It can be used like so:
<MetricsSelector
  store={Store}
  selectedMetricKey={'a.metric.key'}
  onMetricSelection={(key) -> console.log(key)}
/>
The available metrics can also be scoped to an inventory item (by passing an inventoryId prop) or to a search term (by passing a searchTerm prop).
The mapStateToProps function can take ownProps as a second argument alongside state, and that makes this type of flexibility incredibly easy. For instance, the mapStateToProps function for the previous container component looks like this:
mapStateToProps = (state, ownProps) =>
  if ownProps.inventoryId then return getPropsForInventoryId(state, ownProps)
  else if ownProps.searchTerm then return getPropsForSearchTerm(state, ownProps)
  else return getPropsForAll(state)
These small, succinct functions take what they need from the store’s state and pass it to the presentational counterpart. The beauty of this is that the presentational component just needs to know it’ll receive a few props, the most important in our example being isFetching and the metrics themselves.
The presentational component then has the power to decide when various actions should be dispatched; e.g. in our example, if we’re not fetching and the data doesn’t exist, then we need to fetch it. We perform these checks in lifecycle methods, which are the perfect place to check whether we have what we need (and to fetch it if we don’t).
componentWillReceiveProps: ->
  if not @props.isFetching and not @props.metrics then @props.fetchMetrics()
Whilst @props.isFetching and @props.metrics were derived from mapStateToProps, our @props.fetchMetrics action comes from mapDispatchToProps. In the past we made the further mistake of attempting things like fetching missing data (via an action) within mapStateToProps itself. This was, understandably, nasty: we were complicating something very pure and simple, taking a function that just maps state to props and causing side effects within it.
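For completeness, a hedged sketch of the dispatch side of the same container (the action creator and presentational component names are made up for illustration):

mapDispatchToProps = (dispatch, ownProps) ->
  fetchMetrics: ->
    # Scope the fetch the same way mapStateToProps scopes the lookup.
    if ownProps.inventoryId then dispatch(fetchMetricsForInventoryId(ownProps.inventoryId))
    else if ownProps.searchTerm then dispatch(fetchMetricsForSearchTerm(ownProps.searchTerm))
    else dispatch(fetchAllMetrics())

MetricsSelector = ReactRedux.connect(mapStateToProps, mapDispatchToProps)(MetricsSelectorView)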
Our metrics selector container component crops up in multiple places; for example, in the device list you can quickly see a particular metric across all of your servers. The same component is also used when configuring graphs.
We do still have multiple instances of this type of selection taking place in old Backbone based code, and whilst keeping the old code around might not be ideal, knowing that we can instantly replace it as soon as a refactor opportunity crops up is a very nice feeling. No longer will code relating to this type of data be repeated; instead we just have a single component we can compose and use as normal.
Focussing on the correct granularity of container components has served us very well. In one of our previous posts, Richard goes into detail on how a new container component saved the performance of our graph tooltips.
The mileage for “correct” will always be different for every application. If a single top level route handler container component works, then there’s absolutely nothing wrong with that. Sometimes multiple data concerns can be handled in one place without any issues. But if things start to feel uncomfortable or repetitive, it might be time to look at introducing more specialised container components lower in the view hierarchy.
“Remember, components don’t have to emit DOM. They only need to provide composition boundaries between UI concerns.” – Dan Abramov
Awkward usage of the <Provider> component
React-Redux provides a <Provider> component. Usage of this component makes the Redux store available to all connect()-ed components in the hierarchy below it. Its general usage looks like this:
<Provider store={store}>
  <App />
</Provider>
Here the store would be available to absolutely any component that’s connect()-ed from <App /> downwards. At first we tried to use this component, and it was incredibly awkward:
ConnectedBase = ReactRedux.connect(mapStateToProps)(Base)

ProviderComponent = React.createClass
  render: ->
    return (
      <Provider store={store}>
        <ConnectedBase />
      </Provider>
    )

module.exports = ProviderComponent
Rather than the module’s export being the ReactRedux.connect() call itself, it was instead an out-of-place wrapper component that tried valiantly to use the <Provider /> component. This was one of those cases of missing something critical in the docs:
“If you really need to, you can manually pass store as a prop to every connect()-ed component, but we only recommend to do this for stubbing store in unit tests, or in non-fully-React codebases. Normally, you should just use <Provider>.”
The key phrase here is “non-fully-React codebases”, which is very much our codebase. The simple fix was to stop this awkward wrapping and just pass the store as a prop manually. Unfortunately, this little mistake did cause some other side effects: we’d given far, far too much ceremony to the idea of a “provider” and had actually ended up naming certain files (which simply exported a container component) provider.csx.coffee. Along with making sure we just passed the store manually as a prop, we also made changes to our directory structure and how we name things.
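In practice the pattern now looks roughly like this (file layout and names are illustrative): the module exports the connect()-ed component itself, and the store is handed in as a plain prop wherever the component is rendered from non-React code.

# metrics_selector.csx.coffee (illustrative) – export the connect()-ed container directly.
MetricsSelector = ReactRedux.connect(mapStateToProps, mapDispatchToProps)(MetricsSelectorView)
module.exports = MetricsSelector

# A non-fully-React call site then passes the store in as a regular prop.
ReactDOM.render(<MetricsSelector store={Store} />, mountNode)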
Conclusion
All in all I consider these to be our three biggest mistakes when we first started using Redux. We’ve certainly made other, less obvious mistakes. For instance, there are several places where we dispatch several actions at once, rather than dispatching a single action with multiple reducers responding. Both are legitimate approaches depending on what’s happening, but in our case we’ve certainly used the wrong option in some places.
We’ve learnt from all of these, and where we haven’t fixed particular things already, we know how to fix them when we get to them. I personally found the shift from thinking in terms of Backbone Models and Collections to a single top level store quite big (and very different), but when things “clicked” it was beautiful. I hope this helps anyone starting their journey with Redux (or Other State Management Library™).