Life sometimes teaches us lessons that we can apply in the workplace. During my final years at university, I learnt that assuming can have serious consequences. This is that story.
As part of my master’s thesis, I needed to take detailed measurements of the environment to build 3D computer models. For this, I used a SICK LMS200 laser scanner.

One of the first things I did was, of course, read the user manual. It said the device could measure distance across 180 degrees in a horizontal plane, taking a reading every 0.25 degrees. It measured up to 80,000 millimetres, and its accuracy was ±15 millimetres. All seemed fine except the accuracy, but my supervisor assured me that the real accuracy was ±1 mm and that the manual had a typo, so we were good.
The device communicated with a computer over a serial port, receiving commands and returning measurement data. I spent the first year or so doing research, learning the skills I needed to complete my project, and writing the code.
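To give a flavour of what that kind of communication involves, here is a minimal sketch in Python using the pyserial library. It is purely illustrative and not what my thesis code looked like: the port name, baud rate, and command bytes are invented placeholders, not the real LMS200 telegram protocol.

```python
import serial  # pyserial

# Illustrative sketch only: the port, baud rate and command bytes are
# placeholders, NOT the real SICK LMS200 telegram protocol.
PORT = "/dev/ttyS0"
BAUD_RATE = 9600

with serial.Serial(PORT, BAUD_RATE, timeout=1.0) as device:
    request = bytes([0x02, 0x00, 0x01, 0x00, 0x31])  # made-up "send one scan" command
    device.write(request)                            # send the command
    response = device.read(1024)                     # read the raw measurement telegram
    print(f"Received {len(response)} bytes of measurement data")
```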
When I finally finished writing the source code, and after months of debugging and trying different things, I was still unable to get it to work as expected. There were many moving parts and anything could have been wrong, yet after close examination nothing seemed to be. I did not know what to do.
As a last resort, I decided to reverse engineer my own code. I took the bad results produced for a specific input and worked backwards, function by function, confirming that every piece of code behaved according to the algorithms I had implemented. The whole process took a couple of days, as I needed to check complex numerical matrix operations and be completely sure that they were correct.
In retrospect, it would have been much easier to do this with unit tests, but unit testing was only just taking off in those days and I had not even heard of it, so I did this the old-school way, literally.
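Just to show what I mean, here is the sort of unit test that would have let me verify each routine in minutes rather than days. The function and the expected values are hypothetical, standing in for the kind of matrix operations I was checking by hand.

```python
import unittest
import numpy as np

# Hypothetical example: 'transform_points' and the values below stand in for
# the matrix routines I was verifying manually.
def transform_points(points, rotation, translation):
    """Apply a rigid transform to a set of 3D points (one point per row)."""
    return points @ rotation.T + translation

class TransformTests(unittest.TestCase):
    def test_known_rotation(self):
        points = np.array([[1.0, 0.0, 0.0]])
        # 90-degree rotation around the Z axis
        rotation = np.array([[0.0, -1.0, 0.0],
                             [1.0,  0.0, 0.0],
                             [0.0,  0.0, 1.0]])
        translation = np.array([0.0, 0.0, 0.0])
        expected = np.array([[0.0, 1.0, 0.0]])
        np.testing.assert_allclose(
            transform_points(points, rotation, translation), expected, atol=1e-9)

if __name__ == "__main__":
    unittest.main()
```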
In any case, after completing this very detailed process, I could confidently say that there weren’t any bugs in the code. It was spotless, immaculate.
It was then that the problem became clear to me. If for a given input I was not getting the right output, and the code was right, the only possible explanation was that the input was wrong!
Thanks to my reverse-engineering debugging session, I had the exact values that the code expected in order to work correctly. I calculated the average difference against the actual values I had obtained from the device, and it was close to 15 mm, just as the manual described. It turned out that my supervisor was wrong. He had made an honest mistake, and given his broad experience, I had simply assumed that he was right.
I had a mix of feelings at that moment: satisfaction at finally root-causing the problem, despair at realising that the whole thesis was at risk, and anger at myself for not having verified such an important piece of data.
Over the years I have given this a lot of thought, particularly from the trust perspective. Part of effective collaboration is to trust others when they make decisions or share information with us. We shouldn’t have to doubt others, or should we?
The answer lies in risk analysis. Expected loss is usually defined as the probability of a bad event, times the impact of the event. If the cost of validating is lower than the expected loss, we should validate that the information is correct.
In my situation, let’s say the impact was 1.5 years of work. My supervisor’s reassurance put the probability of a bad event low, let’s say 1%. On that basis, it would have been worth doing any kind of validation that took less than about 5.5 days (1.5 years ≈ 547 days, times 1%). In that time I would not have been able to test the device, but I could have emailed the manufacturer to ask whether the figure was a typo. I didn’t.
Life would later give me another opportunity to verify whether the manual was right. About six months in, I had already completed the code to communicate with the device. With roughly a year of work remaining, the expected loss would have been 3.65 days (365 days × 1%). Running such a test would have taken half that time at most, but the idea did not cross my mind. I didn’t even remember that there was a potential discrepancy.
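To make the arithmetic concrete, here is the same back-of-the-envelope check as a tiny Python sketch, using the rough figures from the story rather than precise numbers.

```python
# Rough check: validate when the cost of checking is lower than the
# expected loss (probability of a bad event x impact).
def expected_loss_days(probability, impact_days):
    return probability * impact_days

P_MANUAL_IS_RIGHT = 0.01  # supervisor's reassurance -> low probability

# At the start: roughly 1.5 years of thesis work at stake.
print(expected_loss_days(P_MANUAL_IS_RIGHT, 1.5 * 365))  # ~5.5 days

# Six months in, with the device code already working: about a year left.
print(expected_loss_days(P_MANUAL_IS_RIGHT, 365))        # 3.65 days
```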
In summary, we should validate when the cost of checking is lower than the expected loss. Despite the high impact involved and the low cost of checking, I decided to assume that my supervisor was right, and that gamble didn’t pay off for me. In the end, I managed to save my thesis by achieving reasonably good results after applying some normalisations to the errors, but that was not the badass outcome that we had planned.
Now, whenever I receive some piece of information, I use this approach to decide whether it needs to be confirmed. In most day-to-day cases, wrong data will reveal itself soon enough, so usually there’s nothing to do. But other times, I prefer to check. After all, “when you assume, you make an ass out of you and me”.
Cheers!
José Miguel
Share this article if you find it useful, and follow me on LinkedIn to be notified of new articles.

