Part I identified lessons learned from a complex challenge. One lesson was that projects failed and were restarted for the same reasons. Part II explores how repeated project failure for the same reasons became normal.
Until the mid-1990s, the data centres where I worked had practical versions of ITIL (a service management best practice) and other best practices in production. The process descriptions were only half a page long. Becoming stuck in a process was an exception, because it was common practice to get to the root cause of a problem and fix it once and for all. Solutions were identified with the following question in mind: “Considering all clients, the company and employees, what is the best possible solution for all involved?” We also constantly strove to answer a second question: “Do we feel that the solution is executable, that risks and costs are acceptable and that it will deliver on its value proposition?”
But then, from the mid-1990s onwards, something changed. Enterprises sought ways to manage increasing complexity and to achieve new levels of efficiency. As best practices had produced good results, managers and clients demanded their implementation and certification. Best practices became popular. In the IT services company where I worked, this trend led to replacing practical implementations of best practices with their formal counterparts. At the time, doing so made sense. From today’s perspective, however, things look rather different. The consequences included:
- The process approach changed from guiding users through processes to leaving users to find their own way through them. Becoming stuck became normal.
- Having a best practice and its associated software tools in production became more important than achieving its value proposition.
- Following a best practice became a higher priority than meeting client, enterprise and employee needs.
- In an increasingly complex world, root cause analysis changed from unrestricted analysis to analysis within defined boundaries, such as the boundary of an organisation or a best practice.
Closer inspection revealed that traditional best practices (governance, ITIL, quality management and the like) worked well in environments with low levels of complexity. When software tools made it possible to bring best practices to a new level, a new line of thinking emerged: just implement a best practice, and efficiency will increase. If it does not, the cause must be poor implementation. Overlooked, though, were three crucial factors:
- Traditional best practices worked well with predictable situations in both stable and complex environments. However, they lacked the capability to handle unpredictable situations, which are common in complex environments. (Note that traditional best practices remain highly effective with repeatable tasks and predictable situations.)
- With root cause analysis confined within boundaries, certain root causes of failure, as well as lessons learned, no longer received attention because they lay outside those boundaries.
- As projects had to follow the processes of traditional best practices, and as project management methods themselves belong to this group, projects were confronted with more issues than they could handle.
This situation leads to the following conclusions:
- With root cause analysis confined within boundaries, matters such as unpredictable situations and lessons learned (see also Part I) were no longer addressed. As nobody felt responsible for them, projects were restarted and failed for the same reasons.
- As projects were confronted with more issues than they could handle, project success rates decreased.
The question now is: What can we do about it? Check out Part III.