The setup function should return the data your tests need. Anything it returns is available via the DATA
variable inside the test cases. Running the setup function is not part of the benchmark, and it is run separately for each test case. To learn more, check out this more advanced example.
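For example, a setup function might build an array once, and each test case then reads it through DATA. This is a minimal sketch; the array name and size are illustrative, and the exact shape of the setup function may differ slightly in the tool:

```js
// Setup function: runs once per test case, outside the measured benchmark.
// Whatever it returns is exposed to the test case as DATA.
() => {
  const numbers = Array.from({ length: 10_000 }, (_, i) => i)
  return { numbers }
}

// Test case: only this code is measured.
// DATA.numbers is the array returned by the setup function above.
DATA.numbers.reduce((sum, n) => sum + n, 0)
```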
Global dependencies are available in the setup function and every test case.
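As a sketch, with a library such as Lodash added as a global dependency (the CDN URL and the global `_` are illustrative assumptions), both the setup function and the test cases can call it directly:

```js
// Global dependency (added via the benchmark's dependency settings, shown here for illustration):
// https://cdn.jsdelivr.net/npm/lodash@4.17.21/lodash.min.js  -> exposes the global `_`

// Setup function: the global `_` is already available here.
() => {
  return { items: _.range(1_000) }
}

// Test case: the same global is available here as well.
_.shuffle(DATA.items)
```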
Note: No statistical analysis is used to validate the results. The tests run in parallel (unless disabled) for 3 seconds (with a 500ms warmup), and operations per second are then calculated.
Each test runs in a separate web worker. This means the absolute ops/s might be higher in a real-world scenario, but the relative differences between the tests should still be accurate.
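Conceptually, the measurement loop looks roughly like the sketch below. This is a simplification for illustration, not the tool's actual code: the test is executed repeatedly for the measurement window, and ops/s is the iteration count divided by the elapsed time.

```js
// Simplified illustration of how operations per second are derived.
// The real runner executes inside a web worker and performs a 500ms warmup first.
function measureOpsPerSecond(testFn, durationMs = 3000) {
  const start = performance.now()
  let iterations = 0
  while (performance.now() - start < durationMs) {
    testFn()
    iterations++
  }
  const elapsedSeconds = (performance.now() - start) / 1000
  return iterations / elapsedSeconds
}
```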