{"id":943,"date":"2025-11-05T09:00:00","date_gmt":"2025-11-04T22:00:00","guid":{"rendered":"https:\/\/jm.armijo.au\/dev\/?p=943"},"modified":"2025-11-04T22:39:12","modified_gmt":"2025-11-04T11:39:12","slug":"fixing-flaky-tests-matters","status":"publish","type":"post","link":"https:\/\/jm.armijo.au\/dev\/blog\/2025\/11\/05\/fixing-flaky-tests-matters\/","title":{"rendered":"Fixing Flaky Tests Matters"},"content":{"rendered":"\n<p>When automated tests seem to pass or fail randomly, we say that they are unstable, non-deterministic, or flaky. Beyond being an annoyance, fixing them is critical to the success of any software product, as I will explain later.<\/p>\n\n\n\n<p>Flaky tests present themselves in different ways. For example, a couple of weeks ago, I encountered an automated test that would randomly fail in my team&#8217;s repository. The test was supposed to check that we could filter a list of users by their name. It was something like this:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: ruby; gutter: false; title: ; notranslate\" title=\"\">\nlet!(:user_1) { FactoryBot.create(:user, name: &quot;Bill&quot;) }\nlet!(:user_2) { FactoryBot.create(:user) }\nlet!(:user_3) { FactoryBot.create(:user) }\n\nit &quot;filters users by name&quot; do\n  all_users = users.get_all_from_db\n  filtered_users = all_users.filter_by_name(&quot;Bill&quot;)\n\n  expect(filtered_users.length).to eq(1)\nend\n<\/pre><\/div>\n\n\n<p>This test uses FactoryBot to create the database records the test needs. We pass arguments when needed, or use defaults otherwise. 
In our case, we just needed to specify the name of the first user to test the filter functionality.<\/p>\n\n\n\n<p>Simple as it is, this test sometimes failed because the filter returned two users instead of one.<\/p>\n\n\n\n<p>In my experience, automated tests are usually unstable because the tests are not properly isolated (they depend on other tests, shared components, or the environment), or because non-deterministic operations run in the code under test or in the test itself.<\/p>\n\n\n\n<p>Given how the test was written, I was confident the issue was not a dependency problem, so I started looking for non-deterministic operations, but the code was immaculate. The issue, however, was in the test itself.<\/p>\n\n\n\n<p>It&#8217;d be very boring (and sometimes inconvenient) if all test users had the same name. For this reason, our user factory sets the default name using Faker, a convenient utility that can generate names, emails, and other useful strings at <strong>random<\/strong>.<\/p>\n\n\n\n<p>To set a name, <code>Faker<\/code> randomly selects one from a <a href=\"https:\/\/github.com\/faker-ruby\/faker\/blob\/8fbda8de78d6827cae94bc2dee7dfa4d9a91f261\/lib\/locales\/en-US.yml\" target=\"_blank\" rel=\"noreferrer noopener\">pre-defined list of names<\/a>. It turns out that the list contains seven names that could match our filter:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: ruby; gutter: false; title: ; notranslate\" title=\"\">\n# Male names:\nBill\nBillie\nBilly\n\n# Female names:\nBilli\nBillie\nBilly\nBillye\n<\/pre><\/div>\n\n\n<p>When the test initialises <code>user_2<\/code> and <code>user_3<\/code>, Faker selects their names at random. In most cases it will select names other than these, for example, Antonia and Stephen. But sometimes it will select one (or even two) of them. 
The probability of that happening is ~0.25% per run, which, given all the builds in our pipeline, works out to about one failure per week.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Trust<\/h3>\n\n\n\n<p>In my opinion, the biggest issue with unstable tests is trust: flaky tests undermine trust in our automated test suite.<\/p>\n\n\n\n<p>Once, I worked for a company whose main product had so many flaky tests that if we re-ran them we&#8217;d still get failing tests, just not always the same ones. When a test failed, we could not tell whether the failure was real or just flakiness.<\/p>\n\n\n\n<p>In the end, nobody would even try to confirm whether test failures were legitimate or false alarms. Failures were assumed to be flakiness, and simply ignored. We were driving blind.<\/p>\n\n\n\n<p>It&#8217;s no secret that we make mistakes as we write software. It is thanks to automated tests that we can prevent some of these mistakes from becoming bugs in production. If we let test instability grow out of control, we\u2019ll eventually be doomed to either go back to manual testing or spend a lot of time working on incidents.<\/p>\n\n\n\n<p>Whenever possible, we should fix flaky tests as we encounter them, or at least document them so the team can prioritise fixing them later. These simple steps can have a big positive impact on our teams.<\/p>\n\n\n\n<p>Happy coding!<br>Jos\u00e9 Miguel<\/p>\n\n\n\n<p><em>Share if you find this content useful, and follow me on&nbsp;<a href=\"http:\/\/www.linkedin.com\/comm\/mynetwork\/discovery-see-all?usecase=PEOPLE_FOLLOWS&amp;followMember=jmarmijo\">LinkedIn<\/a>&nbsp;to be notified of new articles.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>When automated tests seem to pass or fail randomly, we say that they are unstable, non-deterministic, or flaky. 
Beyond the annoyance this can be, fixing them is in fact critical to the success of any software product, as I will explain later. Flaky tests present themselves in different ways. For example, a couple of weeks [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":955,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-943","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/jm.armijo.au\/dev\/wp-json\/wp\/v2\/posts\/943","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/jm.armijo.au\/dev\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/jm.armijo.au\/dev\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/jm.armijo.au\/dev\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/jm.armijo.au\/dev\/wp-json\/wp\/v2\/comments?post=943"}],"version-history":[{"count":13,"href":"https:\/\/jm.armijo.au\/dev\/wp-json\/wp\/v2\/posts\/943\/revisions"}],"predecessor-version":[{"id":957,"href":"https:\/\/jm.armijo.au\/dev\/wp-json\/wp\/v2\/posts\/943\/revisions\/957"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/jm.armijo.au\/dev\/wp-json\/wp\/v2\/media\/955"}],"wp:attachment":[{"href":"https:\/\/jm.armijo.au\/dev\/wp-json\/wp\/v2\/media?parent=943"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/jm.armijo.au\/dev\/wp-json\/wp\/v2\/categories?post=943"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/jm.armijo.au\/dev\/wp-json\/wp\/v2\/tags?post=943"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}