We Probably Already Have All The Information We Need For Success
Activities executed by people, machines, and software leave trails in so-called event logs. What events (such as entering a customer order into SAP, a passenger checking in for a flight, a doctor changing a patient’s dosage, or a planning agency rejecting a building permit) have in common is that all are recorded by information systems. … Business processes thus ought to be managed, supported, and improved based on event data rather than on subjective opinions or obsolete experience. Process Mining, Communications of the ACM, August 2012.
Amen. Sorry, but things like lessons-learned meetings, retrospectives, or initiatives to implement new metrics just don’t have the same impact as simply using the data that already exists. Using what I call “naturally occurring metrics,” data that comes from the actual work being done and is often created by the tools we use, is almost always more reliable and more predictive than trying to remember what we did in the past or asking people to start collecting or recording data.
One day the software build and integration team announced that they had gotten faster and better at doing their part of the process (software builds, in this case), so now everyone else needed to speed up (development feature deliveries and defect fixes). This was to help reduce the total cycle time of getting a new software release out the door. The sore spot was that the build team got the Corporate VP to make this announcement, which seemed to make everyone else look bad.
Well, it turns out that the build team was correct: their process had indeed gotten faster. What they didn’t know was that the development teams had also sped up, and in fact had sped up more than the build team. So as a percentage of the total cycle time of a release, the build team’s portion of the whole process was now larger than it had been in the past!
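The arithmetic behind that surprise is worth making concrete. A minimal sketch, using made-up numbers (the post gives none): both teams get faster in absolute days, yet the build team's share of the release cycle still grows because development sped up more.

```python
# Illustrative numbers only (not from the post): days per release.
before_build, before_dev = 5.0, 25.0   # earlier release cycle
after_build, after_dev = 4.0, 12.0     # current release cycle

# Both absolute times dropped, but compare the build team's *share*.
share_before = before_build / (before_build + before_dev)
share_after = after_build / (after_build + after_dev)

print(f"Build share before: {share_before:.0%}")  # 17%
print(f"Build share after:  {share_after:.0%}")   # 25%
```

Faster in absolute terms, yet a bigger slice of the total — exactly the kind of thing only the data can reveal.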
How did we know this? I had been pulling data out of the build process logs and out of the development process (the bug-fix database). As a project manager, all I really wanted was to correlate when a bug fix went into a release and, as was normal for me, to figure out the average time it was taking for this to happen. I didn’t ask anyone for a regular status report on how long things were taking; instead, I just used the data from our build and development systems to capture and calculate it.
For more on simple averages see: Your Average Is Powerful
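The alignment step can be sketched in a few lines. This is a hypothetical reconstruction, not the author's actual tooling: the record layouts, release names, and dates below are all invented, standing in for whatever the build log and bug tracker really contained.

```python
from datetime import datetime
from statistics import mean

# Hypothetical data standing in for two real systems:
# the build log (release -> build date) and the bug tracker
# (fix id, date fixed, release the fix shipped in).
build_log = {
    "r1.0": datetime(2012, 3, 1),
    "r1.1": datetime(2012, 4, 15),
}
bug_fixes = [
    {"id": 101, "fixed": datetime(2012, 2, 20), "release": "r1.0"},
    {"id": 102, "fixed": datetime(2012, 2, 25), "release": "r1.0"},
    {"id": 103, "fixed": datetime(2012, 4, 1), "release": "r1.1"},
]

# Align the two sources: days from "fix recorded" to "fix released".
days_to_release = [
    (build_log[fix["release"]] - fix["fixed"]).days for fix in bug_fixes
]
print(f"Average days from fix to release: {mean(days_to_release):.1f}")
```

The hard part in practice is not the average; it is finding, parsing, and joining the two sources so that a line in one log can be matched to a record in the other.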
No, it was not easy to get this data. It was all there; it just had to be coaxed out (found, parsed, interpreted) and aligned with other data before it would divulge its secrets. In many ways it was a science experiment: extract the data, figure out what it meant (i.e., how it reflected the real world), and then understand what it was telling us. I was mining the data that already existed.
“Process discovery often provides surprising insight that can be used to redesign processes or improve management ….” Process Mining, Communications of the ACM, August 2012.
I once spent an afternoon in a room full of archived defect reports. Yes, these were paper reports of past defects bundled into folders. These were the forms used by a military organization to keep track of defects moving through the repair and test process. I am fairly certain I was the only person in the history of this military unit to ever spend any time digging through all this data. The results? We finally understood the nature of the defects we were making (i.e., what dumb mistakes we kept repeating!). We even got to a point where our test team went into a panic because they could find very few defects, and no serious defects, in our software releases.
See related: Want To Stress Your Test Team? Give Them Good Quality!
Most organizations probably have all the data they need to understand how their process performs and where to improve it. We just need to dig the data out and make use of it. It won’t necessarily be easy, but it is almost certainly worth the effort and will often provide more reliable and more comprehensive insight than trying to implement new metrics, new reporting or new meetings.
What information systems used by your project might contain useful project management data?
Comments from around the web:
Andrea Herrmann • Even more important is the implicit knowledge in people’s heads! If people would pool their knowledge, they could improve their processes without any external consultant. The main problems are tactics and politics. That is why an external consultant can serve as a mediator.
Bruce Benson • Andrea,
Good point. I’ve found a few catches with that “implicit knowledge”: too often, when we compare it to actual performance data, the two differ significantly, and the implicit part is just about always the wrong part. I encourage our most experienced folks to also capture and record actual data on what they do and what they know, to help tune up their recommendations and insights.
My classic example is our best people telling us we’ll finish the project by “next week” and we went nine months saying each week: “next week, for sure.” Another example includes “we fix defects in 3 days” turning out to be 7-10 days when looking at actual performance over the life of the project.
Too often our smart people are part of the organization’s inherent under-performance: everyone trusts them, yet they can consistently lead us down the wrong path. The solution I’ve found that works well is to help them better track what they know, which has often included asking to see their past performance data (i.e., OK, I hear what you say, now show me the evidence). When they dig up the information themselves (as opposed to me doing it), they are much quicker to update their insight, provide improved estimates, and keep doing so into the future.
So an external consultant can be useful, but I find the best results seem to come from engagements that finish with “those consultants really didn’t do much – we did it all, but yeah we didn’t mind having them around.” Working as a mediator is a good approach.
Good feedback, thanks.
Jean-Pierre Fayolle • I’d even say that a lot of companies don’t use what they have, even when it is easy to use. It’s as if the most important thing were to have as much data as possible, because the more you have, the more you can do. It should instead be “how do I do better with what I have,” not “how do I get more.”
http://qualilogy.com/en/10-to-20-metrics/
Clifford Shelley • Yes – absolutely right. Now that so much of our work is electronically mediated and servers are groaning under the weight of data, we have a major asset and don’t even know it. This “pervasive data” is a game changer: before, getting hold of data meant setting up collection tools, irritating busy developers, and waiting for data to accrue. Now it’s there by default, waiting, and free. The trick is knowing how to refine this data into information. (Shameless plug: http://www.osel.co.uk/analytics.htm.)
Bruce Benson • Clifford, Jean-Pierre,
Agreed, except I never found it particularly easy to mine this kind of data. Once we figured out where it was and how to use it (which usually meant meshing it together with other data so that it provided insight), it turned out to be amazingly useful. I do hope more tools become available that make this much easier to do.
Thanks for the feedback and references.