“Urunjikudikunna manga”: literally, a mango you suck the juice out of. It’s part of my childhood memories.
I recently came to know that they’re sold in Bangalore under the name “sugar babies”. Yum!
TLDR: Integration tests often provide the most bang for the buck, but unlike unit tests, their benefits are hard to quantify. Linking the success of integration tests to user stories can provide a framework to think about integration testing success.
Kent C. Dodds’s testing trophy provides a great way to think about the software QA practice. Unlike the earlier test pyramid, which focuses on the speed of tests and the stability of the product, the trophy focuses on delivering customer value, and that’s probably the index by which product teams should be measured. The testing trophy effectively makes the novel case that integration tests (not unit tests) provide the most value, and that most tests in a codebase should be integration tests, even at the expense of unit tests.
Now, this topic is more nuanced, and Kent’s later articulation is probably more correct:
But it’s undeniable that for the vast majority of web applications written now, integration tests (& not unit tests as the earlier test pyramid would suggest) are the most valuable.
However, there is a problem: unit tests are easily measured by automated tools that output code coverage. While code coverage is a very basic measure of code quality, and getting to 100% is not desirable, teams often aspire to coverage in the high 70s or 80s. It’s a good metric to aim for, and a nice, clean way to measure proactive QA success.
How do you measure integration tests? While tools like Jest with react-testing-library, and even newer ones like Playwright, allow you to test components in isolation and can hence generate code-coverage-equivalent measures, I would argue that these are not the right measures to use when we think from the integration testing perspective.
In the vein of Kent’s tweet above, the way you measure integration testing should resemble the way your software is used, not how it’s made.
Efforts such as BDD and the Gherkin syntax already bring much of this user-story thinking into testing, and what follows is just a logical extension:
Story coverage = % of user stories that are covered by integration tests.
This is a better measure because most product teams already have a user story library. If they don’t, then it’s easy enough for product owners or even engineering managers to reverse-engineer user stories from a working product or a design spec.
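To make the arithmetic concrete, here’s a minimal sketch of how a team might track the metric. The story IDs and the tagging convention are hypothetical, invented just for this example; the only real idea is dividing the stories that have integration tests by the total story library.

```typescript
// Hypothetical sketch: compute story coverage from a product's user story
// library and the story IDs referenced by integration tests.
// The IDs and the tagging convention below are made up for illustration.

const storyLibrary = ["US-101", "US-102", "US-103", "US-104", "US-105"];

// Imagine each integration test is annotated with the story it exercises,
// e.g. a `@story US-101` tag, and these IDs are collected at build time.
const storiesWithTests = new Set(["US-101", "US-103", "US-105"]);

const covered = storyLibrary.filter((id) => storiesWithTests.has(id)).length;
const storyCoverage = (covered / storyLibrary.length) * 100;

console.log(`Story coverage: ${storyCoverage.toFixed(0)}%`); // "Story coverage: 60%"
```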
Well-written user stories reveal not only the persona of the user, but also their intent (or JTBD). This provides a lot of context to write integration tests around. As an example, a user story that adds a 1-click checkout link to a product page will naturally emphasize convenience, and a good engineer can then convert that into integration tests that measure performance regressions too.
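For illustration, here’s a rough sketch of what an integration test for that 1-click checkout story could look like in Playwright. The URL, the button label, the confirmation copy, and the 3-second budget are all assumptions made up for this example, not part of any real product.

```typescript
import { test, expect } from "@playwright/test";

// Sketch of an integration test derived from a hypothetical user story:
// "As a returning shopper, I want a 1-click checkout link on the product
// page so that I can buy without re-entering my details."
test("returning shopper can buy via 1-click checkout", async ({ page }) => {
  await page.goto("https://shop.example.com/products/alphonso-mango");

  const start = Date.now();
  await page.getByRole("button", { name: "1-click checkout" }).click();

  // The story emphasizes convenience, so assert on the outcome the user
  // actually cares about: the order is placed...
  await expect(page.getByText("Order confirmed")).toBeVisible();

  // ...and guard against performance regressions with a coarse time budget.
  expect(Date.now() - start).toBeLessThan(3000);
});
```

Note how the test reads like the story itself, a journey from intent to outcome, rather than an inventory of component internals.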
Product owners often have an innate grasp of which user journeys are “critical”, so it’s easy to prioritize which integration tests to write. Similarly, engineering managers often know which product areas are the most brittle and the biggest source of bugs (possibly due to underlying tech debt), and they can prioritize important user journeys in those areas.
And finally, one of the hardest bits of code coverage is understanding which % number is good enough. The answers to this Stack Overflow question are clear indicators that perhaps the question is wrong: most answers are heuristic or experience-based, and while everything should be interpreted based on context, it’s nice to have a deeper understanding. When you link testing quality to user behavior, then you have the right instincts: if you prioritize and cover all user stories that are business critical, you have good-enough coverage. And if you prioritize and cover all product areas that are brittle, and you continuously work on improving the failure rates of such tests, you are working on reducing your technical debt.
If you don’t have committed engineering managers, or your product owners lack adequate business context, it’s hard to know which bits of a user story matter most for tests. This is particularly important when junior engineers are asked to write integration tests with just a design spec as input. Engineers often have trouble interpreting design specs and getting to the most valuable bits, so some collaboration with product or a senior engineering manager is essential to write quality tests.

I’ve tried using the Gherkin syntax for this in the past to promote collaboration between engineers and product, but it has done more harm than good: people contest the details of vocabulary, and discussions derail into the vagaries of the syntax. What is important is for the person writing the tests to have a good understanding of what matters from a product and user perspective.

What I’ve found works best to start is a meeting between product folks and engineers (ideally with an EM refereeing) where product folks walk through the design or the product, explaining the ideal customer journey and the business objectives. Good engineers pick up “what’s important” very quickly, and are then driven to write tests that emulate customer behavior. High-performing or async teams can often replace this meeting with verbose, async descriptions of user journeys, and by thinking about integration tests early, even while they develop the feature.
Another problem is figuring out when coverage is good enough. Even within a single user story, it’s possible to write hundreds of tests, each covering an edge case a customer might encounter. While unit tests often encourage writing such tests (in the name of covering every code path), I consider this bad practice for integration tests. Remember: the test pyramid still applies, and writing tests for every edge case of every user story will make your test suite horribly slow. It’s better, then, to focus on what is important from the user’s perspective and write tests for just that. Here are a couple of rules of thumb I follow:
So that’s it! I hope you can employ story coverage in your own product organizations and let me know how that goes.
I’ve started to wear two watches: a mechanical one, and an Apple Watch.
For anybody who knows me, the Apple Watch is obvious: I’m a huge Apple fan, and the Watch has made a measurable difference in my life. The watch face above is what I use most of the time; it helps me track my activity, sleep, and water intake. It also has my calendar, and a shortcut that lets me start an exercise. In short: it’s the utility watch. A beautiful utility watch.
The other one is probably the cheapest good-looking mechanical watch I could find: a Fossil. Watch aficionados will probably cringe at the brand, but I’m still very early into mechanical watches, and this one looked sweet at that price point. The mechanical watch is pure indulgence, because, just like with my Leica and my record player, it’s the love of the device, its history, how it feels, and the intangibles that attract me. It’s delightful to think about a purely mechanical contraption, entirely without batteries, on my wrist. Feels like magic.
A few of my friends have already said I’m a bit crazy for wearing not one but two anachronistic devices for telling the time, when most of them make do with just their phone. I thought a bit about the why of dual-wristing, and a couple of things came to mind:
Our future selves do not have to let go of our past. There is beauty, delight, and wonder to be found in things that are outdated, especially if, for their time and technology, they are well designed. I’ve always had this dual nature in me: I love the newest gadgets, but I seek out the old and mellow. I’ve tried at times to analyze why it is that the Leica M (with its really old, inefficient manual rangefinder focus) is still my ideal camera. Part of it is that, in its search for efficiency, a lot of new technology has forgotten that we also buy gadgets for the experience. What does it feel like when you sit in a chair? Or when you see a delightful interaction on screen?
But frankly: a larger part is just self-serving idiosyncrasy.
This is just the way I am. Time to embrace it.
Get promoted to team lead at Automattic, and you get these really sweet goodies. Loved the fountain pen; it seems like an unsubtle nod for me to go back to writing Morning pages instead of typing them out in Day One.
Yesterday we did a road trip from Bangalore to the Lepakshi temple. It’s around 110 km each way, takes about 2 hours of driving, and makes for an easy day trip from north Bangalore. The drive was good, although on the way to the temple we ended up detouring through a stretch of bad roads (courtesy of Google Maps and its insistence on the shortest route instead of the best one).
There are really only two things to see at Lepakshi: the temple, and a recently constructed statue of Garuda, which we visited first.
You can’t really go up to the Garuda statue (we assumed because of the crazy Indian crowds that would desecrate it), but there are a couple of viewpoints from which you can take photos.
The temple itself is the real highlight. Constructed under the old Vijayanagara empire, it brought back memories of my trip to Hampi. An interesting difference is that this temple is still functional and has a daily pooja.
A good trip overall, and very much recommended. I took the photos in the post using an iPhone 12 mini with Halide, and then edited them using RNI Films.