# Developer UX comparison of JavaScript testing libraries
Hi! Over a year ago I made this project for personal use, with the intention of sharing the results once they were good enough. Since that might never happen, here it is for those who want to help improve it, give me some tips, or just make something useful out of it.
- "flicker free" watch mode, that is I hit "CTRL + S" and VScode's terminal shows my test result without even blinking
- I use almost exclusively deep equal comparison in all of my tests
- it should be easy to change between testing libraries
- makes for a more consistent experience when reading other people code with different test frameworks
- I only need one line of stacktrace to find my error, I don't want it to be the 5th of 10 lines
- if the change I made broke hundreds of test, I don't need to see all of them
- unit run in parallel, integration in serial
- this include many of the tap reporters
- for one off tests, some need to be run in watch mode because of the costly startup time (Jest)
- for the whole test suite, some need to run in parallel mode to really shine
- some are fast regardless how you use them (zora, tape)
- the best one, tap-difflet needed to be merged with tap-dot for less verbose outputs
- currently I don't need these
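As a sketch of what almost every assertion in the suite looks like, here it is with nothing but Node's built-in `assert` module (`buildUser` is a made-up function under test):

```js
const assert = require('assert');

// Made-up function under test.
function buildUser(name) {
  return { name, roles: ['admin'] };
}

// One deep-equal call per test: compares structure, not object identity.
assert.deepStrictEqual(buildUser('Ada'), { name: 'Ada', roles: ['admin'] });
```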
## Jest
- needed to include `testEnvironment`
  - otherwise there is a huge performance cost (~80% on cold-start tests)
- needed to include `testRegex`
  - it didn't recognize my command-line pattern
- weird errors when mocking improperly
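A minimal sketch of the two options in question, assuming a pre-27 Jest where `jsdom` is still the default environment; the regex is only an example pattern:

```js
// jest.config.js
module.exports = {
  // Run in plain Node instead of jsdom; jsdom setup was the big cold-start cost.
  testEnvironment: 'node',
  // Pin the test-file pattern explicitly, since my CLI pattern was not picked up.
  testRegex: '\\.test\\.js$',
};
```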
## Mocha

## Ava

## Lab
## Tape

- nested tests via `t.test`
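For reference, a minimal tape test with a nested `t.test` subtest (names and values are made up):

```js
const test = require('tape');

test('user', (t) => {
  // Subtests hang off the parent assertion object.
  t.test('builds the expected shape', (st) => {
    st.deepEqual({ name: 'Ada' }, { name: 'Ada' });
    st.end(); // tape requires an explicit end (or a plan)
  });
  t.end();
});
```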
## Zora
- has its own reporter, but only if using its own runner
- supports several test styles (`t.test`, `await test`, and others)
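A minimal zora sketch showing both styles, assuming zora's `eq` deep-equality assertion and awaitable subtests (names and values are made up):

```js
const { test } = require('zora');

test('user', async (t) => {
  // `eq` is a deep comparison, which fits the deep-equal-only style above.
  t.eq({ name: 'Ada' }, { name: 'Ada' }, 'builds the expected shape');

  // Subtests are created with t.test and can be awaited.
  await t.test('roles', (t) => {
    t.ok(Array.isArray(['admin']), 'roles is an array');
  });
});
```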
npm scripts have an overhead when they are first invoked, so bash scripts give me more flexibility.

| Library | Score |
| ------- | ----- |
| mocha   | 10    |
| zora    | 9     |
| tape    | 8     |
| jest    | 7     |
| lab     | 7     |
| ava     | 6     |
- most of the time, people run tests in a batch, not like this

Run `./perf.sh` to run all of them.

| Library    | Time (s) |
| ---------- | -------- |
| zoraSingle | 0.37     |
| zora       | 0.52     |
| tape       | 0.70     |
| zoraReport | 1.12     |
| tapeReport | 1.26     |
| mocha      | 1.72     |
| lab        | 3.60     |
| tap        | 5.40     |
| ava        | 8.06     |
| jest       | 9.52     |