Category: WordPress

  • Prefer Jest real timers when testing with React Testing Library

    When testing React components with Testing Library, we should almost always use real timers; fake timers should be a rare exception. Let me offer some reasons why.

    The philosophy of Testing Library is that it runs your React code in an environment as close as possible to the browser. React components are rendered using the default DOM renderer, and a real DOM tree is constructed. The DOM is jsdom, so you get very limited CSS styles, no layout or painting, and element dimensions are always 0, but other than that, it’s a pretty good DOM. You perform your test assertions on this DOM, not on some artificial data structure like a component tree produced by react-test-renderer (which is not used by Testing Library at all, except in the react-native flavor). Events are dispatched to this DOM tree, too: Testing Library’s fireEvent is a very thin wrapper around element.dispatchEvent().

    userEvent also tries to be as realistic as possible. Part of that is inserting a delay: 0 between events, because that’s close to what the browser does. Consider this code:

    function EventLogger() {
      function handleEvent(e) {
        console.log('one', e.type);
        Promise.resolve().then(() => console.log('two', e.type));
      }
      return <div onMouseDown={handleEvent} onClick={handleEvent} />;
    }

    It logs the mousedown and click events, once synchronously and once after a microtask tick.

    In a browser you get this sequence logged to console:

    one mousedown
    two mousedown
    one click
    two click

    Both events are dispatched in separate event loop ticks, and all microtasks scheduled by mousedown run before click is dispatched.

    If you used userEvent.click() with delay: null, you would get a different order:

    one mousedown
    one click
    two mousedown
    two click

    Here both mouse events are dispatched synchronously, with no tick between them. The microtasks have a chance to run only after both dispatches. That’s why the default delay is delay: 0. It leads to a setTimeout(0) wait between the events, which leaves room for the scheduled microtasks to finish. The result is more realistic scheduling.
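
    The scheduling difference can be reproduced without any DOM at all. The sketch below (hypothetical handle and dispatchAll helpers, not userEvent internals) produces the same two log orders as the cases above:

    ```javascript
    // Toy model of event dispatching: `handle` logs synchronously and
    // schedules a microtask, like the handleEvent component handler.
    function handle( type, log ) {
      log.push( `one ${ type }` );
      Promise.resolve().then( () => log.push( `two ${ type }` ) );
    }

    // Dispatch a list of "events". With delay: 0, a setTimeout(0) tick
    // between dispatches lets pending microtasks run first; with
    // delay: null, both dispatches happen in the same tick.
    async function dispatchAll( types, log, { delay } = {} ) {
      for ( const type of types ) {
        handle( type, log );
        if ( delay !== null ) {
          await new Promise( ( resolve ) => setTimeout( resolve, delay ) );
        }
      }
    }
    ```

    With { delay: 0 } the log comes out in the browser-like order; with { delay: null } the "two …" entries pile up at the end.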

    Generally, Testing Library offers an environment with as little mocking and as little magic as possible. But Jest fake timers? They are very magical. For example, one striking feature of a test like this:

    function callAfterSecondAndThenAgain(cb) {
      setTimeout(() => {
        cb();
        setTimeout(() => {
          cb();
        }, 1000 );
      }, 1000 );
    }
    
    it('calls the callbacks', () => {
      const cb = jest.fn();
      callAfterSecondAndThenAgain(cb);
      jest.advanceTimersByTime(2000);
      expect(cb).toHaveBeenCalledTimes(2);
    });

    is that although the tested function is clearly async, the test is completely synchronous: it executes within a single event loop tick. There is no done callback to be called, no promise returned and awaited. Fake timers keep track of scheduled timeouts, and advanceTimersByTime() will synchronously execute them one by one before returning.

    But that’s no longer true when your code uses promises. Promises are always async, they are not affected by fake timers at all. If your async code uses both setTimeout (or setInterval or setImmediate) and promises, fake timers convert it into something that’s half-sync/half-async, and the execution environment is no longer realistic.
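
    A tiny hand-rolled fake-timer queue makes this half-sync/half-async split visible. This is a toy sketch, not Jest’s implementation:

    ```javascript
    // Toy fake timers: setTimeout callbacks are recorded and executed
    // synchronously by advance(). Promises are not under our control;
    // any microtasks scheduled by a timer callback run only after
    // advance() has already returned.
    function createFakeTimers() {
      let now = 0;
      const queue = []; // pending { time, fn } entries

      return {
        setTimeout( fn, delay ) {
          queue.push( { time: now + delay, fn } );
        },
        advance( ms ) {
          const end = now + ms;
          for ( ;; ) {
            queue.sort( ( a, b ) => a.time - b.time );
            if ( ! queue.length || queue[ 0 ].time > end ) {
              break;
            }
            const timer = queue.shift();
            now = timer.time;
            timer.fn(); // runs synchronously, in the same event loop tick
          }
          now = end;
        },
      };
    }
    ```

    If a timer callback does Promise.resolve().then( ... ), the .then handler runs only after advance() has returned, which is exactly the mismatch that makes the test in the next example so confusing.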

    There’s this example code posted on one StackOverflow question:

    jest.useFakeTimers() 
    
    it('simpleTimer', async () => {
      async function simpleTimer(callback) {
        await callback() // without await here, test works as expected.
        setTimeout(() => {
          simpleTimer(callback)
        }, 1000)
      }
    
      const callback = jest.fn()
      await simpleTimer(callback)
      jest.advanceTimersByTime(8000)
      expect(callback).toHaveBeenCalledTimes(9)
    } )

    With the await before callback(), the test fails, calling callback only two times. Removing the await “fixes” it, calling callback nine times. Let’s dissect what happens:

    The await case:

    1. simpleTimer is called, callback is called (call 1)
    2. in next microtask tick (after await), timeout is scheduled. simpleTimer returns.
    3. advanceTimersByTime is called. It sees one scheduled timeout, so it executes it. The timeout callback calls simpleTimer again.
    4. This simpleTimer calls callback immediately and synchronously (call 2), and then immediately returns a promise. That’s because it’s an async function: async functions execute synchronously until the first await and then return a promise for the rest. The setTimeout call is scheduled for the next microtask tick, after the await.
    5. The timeout callback returns (the promise returned by simpleTimer is ignored) and advanceTimersByTime takes control again. There are no more timers scheduled, so it returns.
    6. expect checks the number of calls to callback and finds two.
    7. The test finishes, and only after it has finished does the microtask with the setTimeout call execute. A new timer is added to the fake timers queue, but nobody cares anymore: advanceTimersByTime has already finished. The scheduled timer will probably be removed in some afterEach fake-timers cleanup.

    The no-await case:

    The crucial difference is in step 4. The setTimeout call in simpleTimer will schedule another timer before simpleTimer returns. When control returns to advanceTimersByTime, the timer is already scheduled and advanceTimersByTime sees it. So it will advance timers by another 1000ms and execute the timer callback. This (infinite) loop will continue until advanceTimersByTime spends its entire budget of 8000ms and then it returns. Now callback has been called 9 times.

    That’s fairly complex, isn’t it? You need to track the tasks very carefully to understand this. In real-life complex code, I’d argue that fake timers combined with promises become intractable. In the Testing Library codebase, in the part of the waitFor implementation that handles the fake timers + promises combo, even the library author admits he doesn’t really know what he’s doing:

    It’s really important that checkCallback is run *before* we flush in-flight promises. To be honest, I’m not sure why, and I can’t quite think of a way to reproduce the problem in a test, but I spent an entire day banging my head against a wall on this.

    Kent C. Dodds
  • What’s the point of generators and controls in @wordpress/data?

    At the end of the Motivation for Thunks post we arrived at a thunk function that fetches stuff from a REST endpoint and stores it into state by dispatching an action:

    function fetchFeatures() {
      return async ( { dispatch } ) => {
        const response = await window.fetch( '/features' );
        const { features } = await response.json();
        dispatch.receiveFeatures( features );
        return features.length;
      };
    }

    This is a good JavaScript function that’s going to do the fetching and receiving, and the return value from the thunk is available as the return value from the dispatch call (asynchronously):

    const count = await dispatch( 'features' ).fetchFeatures();
    console.log( `fetched ${ count } features` );

    It all works perfectly. But! For a functional programmer, the fetchFeatures function has a very serious issue: it’s not a pure function. Instead of just returning a value and nothing else, it performs side effects like calling window.fetch or dispatch.receiveFeatures. In a purely functional language like Haskell or Elm, you couldn’t do this at all. So, what if we wanted to write our fetchFeatures JavaScript function in a purely functional way? That looks quite impossible, doesn’t it? We want fetchFeatures to be a pure function that merely returns a value, and at the same time we want it to perform network fetches and store updates. You can’t get both at the same time.

    The functional solution, used by Haskell or Elm, and one we’re going to implement now in JavaScript, is to divide the problem into two parts:

    • pure function fetchFeatures that returns descriptions of effects it wants to perform.
    • an effect runtime that reads these descriptions and performs them.

    Now please look carefully at this weird fetchFeatures function:

    function fetchFeatures() {
      return {
        type: 'fetch',
        params: { path: '/features' },
        next: ( { features } ) => {
          return {
            type: 'dispatch',
            params: { action: receive( features ) },
            next: () => {
              return {
                type: 'return',
                params: { value: features.length }
              };
            }
          }
        }
      }
    }

    What does it do? It returns an object with shape { type, params, next }. The type of this object could be called Effect and it contains a description of what to do, and what to do next. We want to perform a fetch effect and when it’s done, to call the next callback with the result.

    The next callback again returns the same Effect type, this time requesting a dispatch effect. And so on. Finally the return effect requests to “exit” the program, and to return a certain value to the caller.

    This fetchFeatures function is indeed a pure function. It does nothing but return a value of type Effect. You could write this function in Haskell, too, and actually Haskell programmers really do it this way — only instead of Effect, Haskell names the effect type as IO.

    Now to actually execute the effects, you need an effect runtime that takes an Effect as a parameter and executes it:

    function runEffect( effect, next ) {
      switch ( effect.type ) {
        case 'fetch':
          // keep running the chain with the effect returned by `next`
          window.fetch( effect.params.path ).then( ( body ) => runEffect( effect.next( body ), next ) );
          break;
        case 'dispatch':
          registry.dispatch( 'foos' )( effect.params.action );
          runEffect( effect.next(), next );
          break;
        case 'return':
          next( effect.params.value );
          break;
        default:
          throw new Error( `unknown effect: ${ effect.type }` );
      }
    }

    This little runEffect function will bring life to our inert and purely functional fetchFeatures function. Running them together like this:

    runEffect( fetchFeatures(), ( count ) => {
      console.log( 'number of features:', count );
    } );

    will actually do all the fetching and storing and will print the count of received features.

    This is exactly how Haskell or Elm works, too. The runEffect runtime is hidden from you, because it’s part of the language runtime (or the Elm “kernel”) and is likely written in C. You, as a functional programmer, write purely functional programs that return instances of the IO type (i.e., effects), and the language runtime then looks at what kind of IO you returned, executes it, and calls a next callback, which is encapsulated in a monad type (something like a Promise with a then handler).

    A Haskell example if you’re curious

    Here is an example of a Haskell program that prints a prompt, then reads a line, and then prints a greeting using the line that was just read:

    main = putStrLn "your name?" >>= (
      \_ -> getLine >>= (
        \s -> putStrLn ("Hello " ++ s)
      )
    )

    The >>= operator (called bind) is something like a .then method on a promise, or the next callback in our fetchFeatures example. The (\_ -> ...) syntax is a lambda function. This program constructs a structure of IO operations, with callbacks saying what to do next, and returns it from the main program. The language runtime is then responsible for executing these IO operations and calling the callbacks with their results.

    You can try this program out in an online Haskell REPL.

    Doing it with generators

    One ugly thing about our purely functional fetchFeatures function is that it contains a lot of nested callbacks, and it’s common knowledge that as a program gets more complex, these nested callbacks turn into callback hell.

    So, with a little bit of syntactic magic we can convert these nested callbacks into generators. This is a generator version of the fetchFeatures function:

    function* fetchFeatures() {
      const { features } = yield {
        type: 'fetch',
        params: { path: '/features' },
      };
      yield {
        type: 'dispatch',
        params: { action: receive( features ) },
      };
      return features.length;
    }

    We are still working with Effect objects, but this time we’re yielding them from a generator. The next callbacks are gone. We are still purely functional, just with a bit of syntactic sugar on top.

    The effect runtime that works with a generator/iterator is a bit more complex. You need to understand generators and iterators in some detail to follow it, and nextEffect and doEffect call each other recursively. It looks like this:

    function doEffect( effect, next ) {
      switch ( effect.type ) {
        case 'fetch':
          window.fetch( effect.params.path ).then( body => next( body ) );
          break;
        case 'dispatch':
          registry.dispatch( 'foos' )( effect.params.action );
          next();
          break;
        default:
          throw new Error( `unknown effect: ${ effect.type }` );
      }
    }
    
    function runEffect( effectIterator, next ) {
      function nextEffect( value ) {
        const nextItem = effectIterator.next( value );
        // process return statement
        if ( nextItem.done ) {
          next( nextItem.value );
          return;
        }
        // process effects
        doEffect( nextItem.value, nextEffect );
      }
      nextEffect();
    }
    

    The code that connects the generator function and the runtime and brings them to life is exactly the same as for the first callback version!

    runEffect( fetchFeatures(), ( count ) => {
      console.log( 'number of features:', count );
    } );

    Calling the fetchFeatures() generator returns an iterator (sequence of Effects) and the runtime loops through the iterator and executes the effects.
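
    Because the generator is pure, you can test it by stepping the iterator by hand and feeding in canned results, with no fetching or dispatching at all. A sketch (receive is a stand-in action creator):

    ```javascript
    // The generator from above, with a stand-in `receive` action creator.
    const receive = ( features ) => ( { type: 'RECEIVE_FEATURES', features } );

    function* fetchFeatures() {
      const { features } = yield {
        type: 'fetch',
        params: { path: '/features' },
      };
      yield {
        type: 'dispatch',
        params: { action: receive( features ) },
      };
      return features.length;
    }

    // Step the iterator manually, playing the role of the runtime:
    const iterator = fetchFeatures();
    const fetchEffect = iterator.next(); // the yielded fetch effect
    const dispatchEffect = iterator.next( { features: [ 'a', 'b' ] } ); // feed a fake body
    const result = iterator.next(); // resume to the return statement
    ```

    fetchEffect.value describes the fetch, dispatchEffect.value describes the dispatch, and result.value is the returned count: the whole exchange is just data.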

    If you’re still interested in analogies with Haskell, this generator syntactic sugar we just described is equivalent to the Haskell do notation. Our example program that reads and prints lines would be rewritten to:

    main = do
      putStrLn "your name?"
      s <- getLine
      putStrLn ("Hello " ++ s)

    Instead of a series of nested callbacks with the >>= operator, we can write the same program using a do syntax that has a structure similar to async/await.

    The connection to @wordpress/data

    Looking at the fetchFeatures generator, it probably looks similar to what you’ve seen in @wordpress/data stores, and you’re starting to see the connection.

    These generators are pure functions that yield effect descriptions.

    The various effect types that the runtime can handle in the big switch statement are controls and they can be registered dynamically in the @wordpress/data store runtime. There are controls for selecting (reading) and dispatching (writing) to a store, the apiFetch control etc.

    What’s the point of this additional complexity? Well, that’s a good question. If you want to write purely functional code without explicit side effects, then the runEffect or rungen runtime gives you the tools to do exactly that, and that fact alone is probably a sufficient justification for you.

    If you’re more pragmatic and believe that even code with explicit side-effects can be good code, the answers are not that clear. Some claim that the purely functional code is easier to test and mock. Instead of mocking window.fetch and other random APIs, you create one super-mock for the runEffect runtime and then test your actions against that. There is a well-known Effects as Data talk by Richard Feldman from the Elm team that explains the case for the functional approach in great detail. But I’m personally not very convinced.

    Thunks or Generators?

    A final note about the relationship between thunks and generators. These two concepts are not on the same level of abstraction, I would say. It’s more precise to say that generators are a layer on top of thunks. What I mean is that I can write a thunk that is implemented with a generator and an effect runtime:

    function* fetchFeatures() {
      const { features } = yield { type: 'fetch', /* ... */ };
      /* ... */
    }
    
    function fetchFeaturesThunk() {
      return ( runEffect ) => {
        // adapt the callback-based runtime to a promise-returning thunk
        return new Promise( ( resolve ) => runEffect( fetchFeatures(), resolve ) );
      };
    }

    In other words, runEffect( fetchFeatures() ) is a normal, impure and side-effect-ful function call that can be used anywhere in imperative JavaScript code. The runEffect runtime call is the boundary between the purely functional and imperative world.

  • Motivation for thunks

    The redux-thunk package is by far the most widely used middleware in Redux, and now our own @wordpress/data package also supports its own flavor of thunks. Yet the concept of thunks is often poorly understood, the motivation for them is unclear, and they are thought of as something magical.

    In this section I will show how even in a very simple Redux store, without any middlewares, we can run into serious limitations when trying to implement seemingly trivial operations. And how these limitations can be overcome with thunks. We won’t need any asynchronous operations or side effects (i.e., code reaching outside the store) to run into these issues.

    So, look at this @wordpress/data store that has a reducer composed from two sub-reducers with combineReducers, one selector and two actions:

    function defaults( state = {}, action ) {
      if ( action.type === 'SET_DEFAULT' ) {
        return { ...state, [ action.feature ]: action.value };
      } else {
        return state;
      }
    }
    
    function flags( state = {}, action ) {
      if ( action.type === 'SET_FEATURE' ) {
        return { ...state, [ action.feature ]: action.value };
      } else {
        return state;
      }
    }
    
    const isFeatureActive = ( state, feature ) => (
      state.flags[ feature ] ??
      state.defaults[ feature ] ??
      false
    );
    
    function setDefault( feature, value ) {
      return { type: 'SET_DEFAULT', feature, value };
    }
    
    function setFeature( feature, value ) {
      return { type: 'SET_FEATURE', feature, value };
    }
    
    register( createReduxStore( 'features', {
      reducer: combineReducers( { flags, defaults } ),
      selectors: { isFeatureActive },
      actions: { setDefault, setFeature }
    } ) );

    This store acts as a key-value map for feature flags. I can set a flag value:

    dispatch( 'features' ).setFeature( 'gallery', true );

    and then read the flag value with the selector:

    select( 'features' ).isFeatureActive( 'gallery' );

    If a feature was not explicitly set with setFeature, it defaults either to false or to a default I previously set with setDefault:

    dispatch( 'features' ).setDefault( 'likes', true );

    Now, isFeatureActive( 'likes' ) will return true if I never set it before with setFeature.

    I could also easily implement a resetFeature action that resets a feature flag value back to the default, by adding a new branch to the flags reducer that removes a key from the state map, forcing the selector back to using a default.
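
    That resetFeature could look like this (a sketch with a hypothetical RESET_FEATURE action type): removing the key from state.flags makes the selector fall through to state.defaults again.

    ```javascript
    function resetFeature( feature ) {
      return { type: 'RESET_FEATURE', feature };
    }

    // New branch in the flags reducer: drop the key entirely.
    function flags( state = {}, action ) {
      if ( action.type === 'SET_FEATURE' ) {
        return { ...state, [ action.feature ]: action.value };
      }
      if ( action.type === 'RESET_FEATURE' ) {
        const { [ action.feature ]: removed, ...rest } = state;
        return rest;
      }
      return state;
    }
    ```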

    So far, this looks like a textbook example of a Redux store, doesn’t it? A reducer nicely composed from two sub-reducers, a selector that looks at two places in the state tree, several actions with some reducers reacting to them and some ignoring them.

    Our task now will be to add a new action to the store, one that allows us to toggle a feature flag value, i.e., change it to false if it was true and vice versa:

    dispatch( 'features' ).toggleFeature( 'gallery' );

    You might be tempted to add a new case statement to the flags reducer:

    if ( action.type === 'TOGGLE_FEATURE' ) {
      return {
        ...state,
        [ action.feature ]: ! state[ action.feature ],
      };
    }

    But this is not going to work correctly because the reducer doesn’t know what the old value of the flag really is. When the state (which is state.flags in the combined reducer) doesn’t have a record for the feature flag, we need to look at state.defaults but the flags reducer doesn’t have access to that. It’s not possible to make the following test pass:

    dispatch( 'features' ).setDefault( 'likes', true );
    dispatch( 'features' ).toggleFeature( 'likes' );
    expect( select( 'features' ).isFeatureActive( 'likes' ) ).toBe( false );

    Wow! The fact that our reducer is nicely decomposed into sub-reducers makes it impossible to implement something as trivial as toggleFeature! That’s quite a serious limitation.

    On the other hand, it’s quite straightforward to implement toggleFeature as a little helper function:

    function toggleFeature( feature ) {
      const active = select( 'features' ).isFeatureActive( feature );
      dispatch( 'features' ).setFeature( feature, ! active );
    }

    See, the isFeatureActive selector can look at both state.flags and state.defaults, and we can implement the desired behavior in just two lines of JavaScript code.

    But we can’t package toggleFeature as yet another action on the store, on par with setFeature or resetFeature, because toggleFeature can’t be implemented as an action object processed by a reducer. And that’s a bit silly.

    Here, thunks come to the rescue. What thunks do is that they expand the meaning of what is a Redux action. In addition to treating plain objects with a type field as actions:

    function toggleFeature( feature ) {
      return { type: 'TOGGLE_FEATURE', feature };
    }

    a store with thunk support treats functions as actions, too!

    function toggleFeature( feature ) {
      return () => {
        const active = select( 'features' ).isFeatureActive( feature );
        dispatch( 'features' ).setFeature( feature, ! active );
      };
    }

    Now this toggleFeature function still has one serious problem: it uses the external identifiers select and dispatch. Where do these come from? Do we need to import them from some module, and how? We need to define them somehow before the thunk function is really executable. Our solution is to inject them as thunk parameters:

    function toggleFeature( feature ) {
      return ( { select, dispatch } ) => {
        const active = select.isFeatureActive( feature );
        dispatch.setFeature( feature, ! active );
      };
    }

    The engine that executes the thunks (i.e., the thunk middleware in our store) provides these parameters, binding select and dispatch to the current store, calling the thunk function something like this:

    thunkAction( {
      select: select( 'features' ),
      dispatch: dispatch( 'features' ),
    } );
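
    To demystify that engine, here is a toy store with thunk support (an illustration only, far simpler than the real @wordpress/data middleware): dispatch checks whether the action is a function and, if so, calls it with the bound helpers instead of sending it to the reducer.

    ```javascript
    // Minimal store: plain action objects go through the reducer,
    // function actions (thunks) are invoked with { select, dispatch }.
    function createThunkStore( reducer, selectors ) {
      let state = reducer( undefined, { type: '@@INIT' } );

      const select = {};
      for ( const [ name, selector ] of Object.entries( selectors ) ) {
        select[ name ] = ( ...args ) => selector( state, ...args );
      }

      function dispatch( action ) {
        if ( typeof action === 'function' ) {
          return action( { select, dispatch } ); // run the thunk
        }
        state = reducer( state, action );
      }

      return { select, dispatch };
    }
    ```

    A toggleFeature thunk dispatched against this store reads the current flag via select and writes the flipped value via dispatch, just like in the example above.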

    This latest version of toggleFeature will actually work in practice and can be registered as an action with our store:

    const store = createReduxStore( 'features', {
      /* ... */
      actions: {
        setDefault,
        setFeature,
        resetFeature,
        toggleFeature,
      }
    } );

    Some of these action creators return objects with a type field and some return thunk functions, but the store user doesn’t need to care. It’s an implementation detail that’s completely invisible.

    So, we’ve seen that the motivation for thunks is something as banal as being able to write JavaScript code and call functions from other functions: we’re using the isFeatureActive and setFeature functions to write a new function, toggleFeature.

    A thunk doesn’t need to do anything asynchronous to be a useful thunk. While it’s true that we often write thunks to communicate with a REST API:

    function fetchFeatures() {
      return async ( { dispatch } ) => {
        const response = await window.fetch( '/features' );
        const { features } = await response.json();
        dispatch.receiveFeatures( features );
      };
    }

    the fact that the function is async doesn’t matter that much. It’s a piece of code that is able to select and dispatch things from/to the store, and can be exposed as an action on the store, that’s all.

    The fact that the window.fetch call reaches out of the store and does a network request is also not fundamental. Yes, you’d better be aware that your store talks to the network, and yes, this is a side effect in functional programming terminology, but so what? There’s nothing magical about it, is there?

    In the next post in this series we will compare thunks to the classic @wordpress/data generators and controls, which in turn are very similar to the redux-saga middleware in classic Redux.

  • Best practices for using useSelect() from @wordpress/data

    During a recent debugging session where I was trying to discover why e2e tests for code that uses @wordpress/data are failing, I found out that we often use the useSelect hook in a suboptimal way. One that either introduces outright bugs, or is slower and causes more React rerenders than it needs to.

    In this post I’m describing several best practices that aim to prevent some common issues. Hopefully it’s helpful and gives you a better understanding of one of WordPress core JavaScript libraries.

    Always call selector functions inside the callback

    Suppose the onboard store has a getSiteTitle() selector. It might be tempting to call it in a React component like this:

    function Title() {
      const { getSiteTitle } = useSelect( ( select ) => select( 'onboard' ), [] );
      return <h1>{ getSiteTitle() }</h1>;
    }

    But this code is buggy — the component will not be reliably rerendered when the store’s siteTitle state changes and will keep showing the old value.

    Why? useSelect doesn’t just read the desired values from the store, which is the obvious and visible part of what it does. It also establishes a subscription to the store and triggers the rerender of the component when the relevant parts of the state change. What does “relevant” mean here? The precise condition is: when the new return value of useSelect is not shallowly equal to the previous one.

    In our case, the return value is the { getSiteTitle } object and the getSiteTitle property value never changes. It’s still the same function. It’s only the return value of that function that changes, but that’s not what we are checking. The return value is always shallowly equal to the previous one.
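
    This failure is easy to reproduce outside React with a plain shallow comparison (a hand-written sketch; @wordpress/is-shallow-equal behaves the same way for plain objects):

    ```javascript
    // Hand-written shallow equality, enough for plain objects.
    function shallowEqual( a, b ) {
      const keysA = Object.keys( a );
      const keysB = Object.keys( b );
      return keysA.length === keysB.length && keysA.every( ( key ) => a[ key ] === b[ key ] );
    }

    let siteTitle = 'Old title';
    const getSiteTitle = () => siteTitle;

    // What the buggy callback returns vs. what the fixed callback returns:
    const selectorsCallback = () => ( { getSiteTitle } );
    const valueCallback = () => ( { siteTitle: getSiteTitle() } );

    const staleSelectors = selectorsCallback();
    const staleValue = valueCallback();
    siteTitle = 'New title'; // the store state "changes"
    ```

    After the change, shallowEqual( staleSelectors, selectorsCallback() ) is still true, so no rerender would be triggered; shallowEqual( staleValue, valueCallback() ) is false, so the fixed component rerenders.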

    The Title component will be rerendered with the new siteTitle value only when the rerender is triggered by something else. Maybe the component has some internal state that changes, a prop changes, or the rerender is triggered merely by a parent component rerendering.

    The fixed component looks like this:

    function Title() {
      const { siteTitle } = useSelect( ( select ) => ( {
        siteTitle: select( 'onboard' ).getSiteTitle()
      } ), [] );
      return <h1>{ siteTitle }</h1>;
    }

    Here useSelect triggers a rerender whenever the old siteTitle and new siteTitle are different, just as we’d expect.

    Calling selectors inside event handlers

    There is, however, one case where returning the selector function itself from useSelect makes sense: when the selector is called inside an event handler, to get data from the store that is valid at the time when the event handler is called (as opposed to the time when the component was last rendered). Then this code works:

    const { getSiteTitle } = useSelect( ( select ) => select( 'onboard' ), [] );
    function onClick() {
      recordAnalyticsEvent( 'click', { site: getSiteTitle() } );
    }
    return <Button onClick={ onClick } />;

    This code works, but we can do better! One thing that’s suboptimal is that the useSelect call will establish a subscription to the onboard store, and the select( 'onboard' ) callback will be executed on every update in that store. But that’s all pointless work because the set of the store’s selectors is constant for the entire lifetime of the store. The getSiteTitle function is guaranteed to be always the same.

    useSelect has a special form where it just returns the set of selectors, without any reactivity:

    const { getSiteTitle } = useSelect( 'onboard' );

    You can also write this, which is exactly the same thing:

    const { getSiteTitle } = useRegistry().select( 'onboard' );

    This code is a simple map lookup in the store registry that returns the store’s selectors, nothing else.

    Select the data you need inside the callback

    Now suppose the onboard store maintains values for several fields like siteTitle, siteDesign and siteDomain, and provides a getState() selector that returns an object with all these fields together. Then you might get the siteTitle value like this:

    function Title() {
      const { siteTitle } = useSelect( ( select ) => select( 'onboard' ).getState() );
      return <h1>{ siteTitle }</h1>;
    }

    This code behaves correctly and doesn’t cause bugs with missed updates, like the getSiteTitle example, but its performance is suboptimal. Rerenders will be triggered too often.

    Although the component is interested only in the siteTitle value, useSelect doesn’t know that. It’s asked to return the entire getState(), including the siteDesign and siteDomain values. And it will trigger a rerender whenever any of them changes, even when siteTitle remains the same.

    A more performant version would be:

    function Title() {
      const { siteTitle } = useSelect( ( select ) => {
        const state = select( 'onboard' ).getState();
        return { siteTitle: state.siteTitle };
      }, [] );
      return <h1>{ siteTitle }</h1>;
    }

    I can see why the slower version may look more intuitive and elegant: the faster version is not as concise, and you’ll often find yourself fighting the “siteTitle is already declared in the upper scope” ESLint error. But it’s faster.

    Another variation of the same principle is this component:

    function ContinueButton() {
      const { siteTitle } = useSelect( ( select ) => ( {
        siteTitle: select( 'onboard' ).getSiteTitle()
      } ), [] );
      return <button disabled={ siteTitle.length === 0 }>Continue</button>;
    }

    This button will rerender every time siteTitle changes, e.g., as you type into an input field. But most of these rerenders will be wasted, because the disabled prop will remain true. It’s better to calculate the boolean derived value inside the useSelect callback:

    function ContinueButton() {
      const { hasTitle } = useSelect( ( select ) => ( {
        hasTitle: select( 'onboard' ).getSiteTitle().length > 0
      } ), [] );
      return <button disabled={ ! hasTitle }>Continue</button>;
    }

    Prefer returning objects with properties from the callback

    You could rewrite the Title example into a more concise form:

    function Title() {
      const siteTitle = useSelect( ( select ) => select( 'onboard' ).getState().siteTitle );
      return <h1>{ siteTitle }</h1>;
    }

    Can I return the siteTitle value directly instead of the { siteTitle } object? Is it a good idea?

    The answer is that this will almost always work, and changes of siteTitle will almost always be correctly detected, but not 100% of the time.

    The return values will be compared using the shallow-comparison function from @wordpress/is-shallow-equal, and it depends on whether that library can compare values of your data type correctly. Consider this counterexample where the library fails:

    const { default: eq } = require( '@wordpress/is-shallow-equal' );
    
    function get() {
        return this.value;
    }
    
    function createBox( value ) {
        const rv = { get };
        Object.defineProperty( rv, 'value', { enumerable: false, value } );
        return rv;
    }
    
    const one = createBox( 1 );
    const two = createBox( 2 );
    
    console.log( `Are ${ one.get() } and ${ two.get() } equal? ${ eq( one, two ) }` );

    If you try to run this script in Node.js, you’ll see a surprising result:

    Are 1 and 2 equal? true

    The value property is semi-private, and Object.keys won’t return it. The shallow compare function will see only the get property which is always the same.

    If you store instances of createBox in your data store, useSelect might fail to see changed values.
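    To see why the comparison fails, here’s a minimal sketch of how a typical shallow-equality check works. This is an illustration, not the actual @wordpress/is-shallow-equal implementation: it iterates only over the enumerable own keys that Object.keys returns, so the hidden value property never enters the comparison.

```javascript
// A minimal shallow-equality sketch — an illustration of the idea, not the
// real @wordpress/is-shallow-equal code. It inspects only enumerable own keys.
function isShallowEqual( a, b ) {
  const aKeys = Object.keys( a );
  const bKeys = Object.keys( b );
  if ( aKeys.length !== bKeys.length ) {
    return false;
  }
  return aKeys.every( ( key ) => a[ key ] === b[ key ] );
}

// The same `get` function is shared by all boxes, on purpose:
function get() {
  return this.value;
}

function createBox( value ) {
  const rv = { get };
  // `value` is non-enumerable, so Object.keys never reports it:
  Object.defineProperty( rv, 'value', { enumerable: false, value } );
  return rv;
}

const one = createBox( 1 );
const two = createBox( 2 );

console.log( Object.keys( one ) ); // [ 'get' ] — `value` is invisible
console.log( isShallowEqual( one, two ) ); // true, despite different values
```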

    Another fun way to implement objects with hidden private properties, and to trick @wordpress/is-shallow-equal, is to use a WeakMap:

    const valueMap = new WeakMap();
    
    function get() {
      return valueMap.get( this );
    }
    
    function createBox( value ) {
      const rv = { get };
      valueMap.set( rv, value );
      return rv;
    }

    This technique is used by Babel to transpile JavaScript private class properties, so expect to see these WeakMaps in your transpiled code soon 🙂

    You might say that you only use nice objects and strings and numbers in your state, and you’d be right. Returning them directly from useSelect will be fine. But shallow-comparing arbitrary objects can become messy and one day it might backfire. On the other hand, returning an object to be destructured is guaranteed to always be safe.

    Be careful about transforming data inside the callback

    The following useSelect call will have surprising behavior. It will cause the component to rerender on each update in the taxonomies store, even if the tags haven’t changed at all:

    const { tagNames } = useSelect( ( select ) => {
      const tags = select( 'taxonomies' ).getTags();
      return { tagNames: tags.map( ( t ) => t.name ) };
    }, [] );

    This happens because every invocation of the callback returns a different array (the result of .map) even though the raw tags array in the store hasn’t changed. Because the returned array is not equal (===) to the previous value, it is detected as a change that needs to trigger a re-render.
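    The identity problem is easy to reproduce with plain arrays: every .map call returns a brand-new array, equal in content but not in identity, so any === comparison reports a change:

```javascript
const tags = [ { name: 'wordpress' }, { name: 'javascript' } ];

// Two identical transformations of the same unchanged input…
const firstRender = tags.map( ( t ) => t.name );
const secondRender = tags.map( ( t ) => t.name );

// …produce equal content but two distinct array objects:
console.log( firstRender ); // [ 'wordpress', 'javascript' ]
console.log( firstRender === secondRender ); // false — looks like a "change"
```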

    The solution to this problem is to move the data transformation outside the useSelect hook, and wrap it in useMemo:

    const { tags } = useSelect( ( select ) => ( {
      tags: select( 'taxonomies' ).getTags(),
    } ), [] );
    
    const tagNames = useMemo( () => {
      return tags.map( ( t ) => t.name );
    }, [ tags ] );

    This will cause a re-render only when the raw tags really change.

    Do all selections from the same store in one callback

    Which one of the following is better?

    const { siteTitle } = useSelect( ( select ) => ( {
      siteTitle: select( 'onboard' ).getSiteTitle()
    } ), [] );
    const { siteDesign } = useSelect( ( select ) => ( {
      siteDesign: select( 'onboard' ).getSiteDesign()
    } ), [] );

    or

    const { siteTitle, siteDesign } = useSelect( ( select ) => {
      const store = select( 'onboard' );
      return {
        siteTitle: store.getSiteTitle(),
        siteDesign: store.getSiteDesign(),
      };
    }, [] );

    The answer is that the second one is faster and it also uses resources more economically. Two calls to useSelect will make the component establish two subscriptions to the data store. On each change, the subscription handler inside useSelect will be called twice, and at least one of the calls will be redundant and wasted.
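    A toy model makes the cost concrete. This sketches only the idea, not the real @wordpress/data subscription code: each hook call registers its own listener on the store, and a single store update fires all of them.

```javascript
// A toy store with Redux-like subscriptions — an illustration of the idea,
// not the real @wordpress/data internals.
function createToyStore() {
  const listeners = [];
  return {
    subscribe( listener ) {
      listeners.push( listener );
    },
    // Simulate a state change notifying every subscriber:
    notify() {
      listeners.forEach( ( listener ) => listener() );
    },
    listenerCount() {
      return listeners.length;
    },
  };
}

const store = createToyStore();
let handlerCalls = 0;

// Two separate useSelect calls behave like two separate subscriptions:
store.subscribe( () => handlerCalls++ );
store.subscribe( () => handlerCalls++ );

store.notify(); // a single store update…
console.log( store.listenerCount() ); // 2
console.log( handlerCalls ); // 2 — …runs both handlers, one redundantly
```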

    The fact that every useSelect hook call establishes its own store subscription is a weak spot of the Redux architecture, performance-wise. If you’re writing a component that can get mounted many times in the editor, like when registering an editor.BlockEdit filter that wraps every instance of every block, you should be aware of how many store subscriptions are being created. Without care, their numbers can grow into thousands and tens of thousands.

    What about selecting from multiple stores?

    When your component selects from multiple stores, and if some of the selected values are used only conditionally, there are two facts to consider:

    1. Store subscriptions are granular: the useSelect hook subscribes to each store individually.
    2. Store subscriptions are established only when the corresponding select( store ) call is really executed.

    For example, consider this useSelect call:

    const showBlockSidebar = useSelect( ( select ) => {
      const sidebarOpened = select( 'editor' ).isSidebarOpened();
      if ( ! sidebarOpened ) {
        return false;
      }
      return select( 'blocks' ).hasSelection();
    }, [] );

    This code returns a boolean value that says whether “the block sidebar is opened”. It’s a combination of two conditions: whether the editor sidebar is opened at all, and whether it should show a block sidebar UI (that happens only when a block is selected).

    There are a few notable details about this hook call. First, if the sidebarOpened value, selected from the editor store, is false, the select( 'blocks' ) call is not going to be executed. That means the hook won’t subscribe to the blocks store and the callback won’t run on a blocks store update. That’s fine, because these updates would be irrelevant anyway: they can’t change the return value. The blocks store subscription will be established just-in-time, only when the sidebarOpened value becomes true. This way we can optimize the number of store subscriptions and eliminate those that are guaranteed to be redundant.
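    The just-in-time behavior can be sketched with a toy selector runner (hypothetical names, not the real implementation): the hook can only learn which stores to subscribe to by recording the select() calls the callback actually makes.

```javascript
// Toy model of conditional subscription tracking — hypothetical helper, not
// the real @wordpress/data code. It records which stores the callback touches.
function runSelector( callback, state ) {
  const touched = new Set();
  const select = ( storeName ) => {
    touched.add( storeName );
    return state[ storeName ];
  };
  return { result: callback( select ), touched: [ ...touched ] };
}

const selector = ( select ) => {
  if ( ! select( 'editor' ).sidebarOpened ) {
    return false;
  }
  return select( 'blocks' ).hasSelection;
};

// Sidebar closed: the 'blocks' store is never touched, so no subscription
// to it would be needed.
console.log( runSelector( selector, {
  editor: { sidebarOpened: false },
  blocks: { hasSelection: true },
} ).touched ); // [ 'editor' ]

// Sidebar open: now the callback reaches into 'blocks', too.
console.log( runSelector( selector, {
  editor: { sidebarOpened: true },
  blocks: { hasSelection: true },
} ).touched ); // [ 'editor', 'blocks' ]
```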

    Second, in this case we select from both stores in one useSelect hook. The alternative would be:

    const sidebarOpened = useSelect( ( select ) => select( 'editor' ).isSidebarOpened(), [] );
    const hasSelection = useSelect( ( select ) => select( 'blocks' ).hasSelection(), [] );
    const showBlockSidebar = sidebarOpened && hasSelection;

    This would be inefficient because the two values are not used independently. A React component re-render is triggered on every hasSelection change, even though the showBlockSidebar value, the only one that’s really used by the component, doesn’t change.

    But you’ll want to prefer the two independent useSelect calls when the values are used independently to render the component, like:

    return (
      <div>
        <div>sidebar: { String( sidebarOpened ) }</div>
        <div>selection: { String( hasSelection ) }</div>
      </div>
    );

    Then you’ll be better off with two useSelect calls because each of them will do its own select from its own store, on that specific store update, without wasting time on selecting from the other store.