The funny thing is that many problems aren't big enough to need the fanciest big data solutions. Sure, companies like Google or Yahoo track all of our web browsing; they have data files measured in petabytes or yottabytes. But most companies have data sets that can easily fit in the RAM of a basic PC. I'm writing this on a PC with 16GB of RAM, enough for a billion events with a handful of bytes each. In most algorithms, the data doesn't even need to be read into memory, because streaming it from an SSD is fine. (InfoWorld, April 16, 2018, "21 hot programming trends—and 21 going cold")
I simply showed how long, on average, it was taking us to fix a defect. Everyone howled. It was way too long. Outrageously so. "Why are you now taking so long?" someone demanded. My reply? We weren't taking any longer than we had in the past. The difference was that I had taken our defect data, analyzed it, and summarized it. I was simply showing them the average based on our actual data. It was about twice the answer product development typically gave when asked how long it would take to fix the current critical defect. Three days was the stock reply for how long it takes to fix a newly found defect. It was actually seven days at this point in the project. A month earlier the average had been closer to ten days. Our historical data showed that the best we had ever achieved during the life of the project was an average of five days.
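The calculation itself is nothing exotic. A minimal sketch in Python, with hypothetical records; any defect-tracker export that includes an opened date and a fixed date would work the same way:

```python
from datetime import date

# Hypothetical defect records as (opened, fixed) date pairs. A real
# export would carry many more fields; only these two matter for the
# average fix time.
defects = [
    (date(2023, 3, 1), date(2023, 3, 8)),
    (date(2023, 3, 2), date(2023, 3, 10)),
    (date(2023, 3, 5), date(2023, 3, 11)),
]

def average_fix_days(records):
    """Mean number of days from a defect being opened to being fixed."""
    days = [(fixed - opened).days for opened, fixed in records]
    return sum(days) / len(days)

print(average_fix_days(defects))  # prints 7.0
```

The point is not the arithmetic but that the answer comes from the actual records, not from anyone's stock reply.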
This organization had around twenty projects going at any one time. Everyone had access to the defect tracking database for these projects and all past projects. I simply pulled up the data for each project, past and present, plotted each one's defect repair trend, and compared the projects side by side. There was noticeable consistency across them: the defect arrival rate increased, peaked, and then decreased over the life of each project. Our current project was right on the characteristic defect curve of our past projects. So not only could I tell them the average time to fix a defect, but also how many more defects we would statistically encounter before the customer would finally accept our product.
For more, see Defect Reports Are Our Best Friend.
Of course, for this company, which regularly and predictably delivered its products late and buggy, the numbers I showed and the delivery dates those numbers predicted were … unacceptable! The end result was that simple numbers from our "little" data, consisting of tens of thousands of defects, characterized and managed the project better than all the predictions offered up by our highly promoted and experienced managers. I did all my analysis by simply downloading the defect database to my PC and working through it with Microsoft Excel and Microsoft Access.
What “little” data do you have at your fingertips that might make you become an even more successful project manager?