The huge problem with Project Dashboards
The negative ‘behaviour’ they encourage and the damage they do to real data:
In recent years, Project Dashboards have become very popular in major businesses engaged in large-scale projects.
The principle behind them is simple and sound (in theory): to give senior managers and executives visibility of the performance of individual projects. However, what they often ‘encourage’ destroys the chances of meeting this goal.
The format and content can vary greatly, but usually they will include key metrics, for example the financial risk exposure of each project. Typically, they will also include metrics on performance against cost, schedule and other targets.
These are important pieces of data, in which project managers and others (especially Commercial Managers in certain environments) will have a great interest.
The picture becomes even more ‘sensitive’ to what we are about to describe if we add a ‘traffic light’ element (red/amber/green – known as RAG) to the dashboard, either at the project level or at the level of individual KPIs. No project manager wants to flag their project as red – and, in truth, not even as amber.
If we include the RAG element in the dashboard, we have to define thresholds or tolerances for amber and red (e.g. a CPI of less than 0.8 triggering a Red).
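As a rough sketch of how such tolerances translate into a status, the following assumes the 0.8 red trigger from the example above; the 0.95 amber tolerance and the function name are hypothetical illustrations, since real thresholds are set per organisation:

```python
def rag_status(cpi: float, red_below: float = 0.8, amber_below: float = 0.95) -> str:
    """Classify a Cost Performance Index (CPI) into a RAG status.

    The 0.8 red threshold comes from the example in the text; the 0.95
    amber threshold is an assumed illustration, not a standard value.
    """
    if cpi < red_below:
        return "red"      # performance well below plan
    if cpi < amber_below:
        return "amber"    # drifting below plan
    return "green"        # at or near plan
```

The hard part, of course, is not the arithmetic but agreeing thresholds that people will report against honestly.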
Things can become even more sensitive if the same data is then reported outside the organisation – to customers, for example, and in the UK even to regulators in some industries, as in the real example below. By now, you may be able to guess where this is heading.
So that’s the theory – what happens in practice?
In practice, human nature kicks in (in most cases) as soon as projects reach the amber condition, and certainly once they approach or reach red. To be fair, corporate culture often drives this. Having received their first ‘roasting’ for publishing a project as ‘red’, Project Managers quickly learn never to do that again – or at least to delay its publication for as long as they can.
Others may even have a veto on the data. It is not uncommon in some industries, for example, for Commercial Managers to determine the status of Cost and Risk. What happens? You get a Commercial view of a project – not a statement of actual performance.
What’s the evidence?
So is this an isolated issue or event?
Sadly, no – not at all. In the UK, it is endemic to certain industries in particular. It has to be said that ‘contracting’ environments elevate the ‘temptation’ to do this, but it is far from the only motivation. Internal pressures can be just as strong.
A real example of a shocking scale:
We see this time and time again. In one recent example, I was given the equivalent output for all the projects in an annual multi-billion pound (£) investment programme, only to find that not one of them had ever reported anything but green, for every single item, on every project (¹). Quite remarkable. It took no time at all to figure out how this was being achieved, and it was not outstanding performance by the projects: the baselines against which all the projects were measured were being changed every single month. What a massive waste of time, money and opportunity. The information was totally worthless, and for many of the projects it was inaccurate and highly misleading.
So what are the solutions?
Audit – perhaps, or perhaps not the solution?
Is audit, or reliance upon independent assurance, the answer? No – too often this checks conformance to process, not data quality. It is also far too easy for project teams to delay the disclosure of bad news on bigger projects if they want to. We believe the combination of dashboards and thresholds has had a hugely negative impact on the availability of accurate project data in recent years.
A huge move forward would be:
I would say the simplest major improvement is to remove thresholds, and if that means losing RAG, in my book that would be a major step forward. There are other things too – allow people to publish the numbers: good, bad and ugly.
Best of all:
One practice that is a huge improvement is to provide regular reviews of projects, often called Business Reviews, where executives (with an appropriate background) discuss progress with project managers on a face-to-face (or at least interactive) basis. You may not be able to do this for every project every month, but you can do it more often for strategic projects, and for other projects on a random basis. I have witnessed this practice alone have a hugely positive impact on the accuracy and timeliness of crucial project data and, more importantly, on the actual management of individual projects.
1 – This example included CPI/SPI for every project, and every project reported 1.0 every single month for both CPI and SPI (meaning cost and schedule performance were precisely ‘on budget’ and ‘on plan’). This level of performance is unprecedented in projects and should have triggered suspicion. However, it had been going on for years.
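A pattern like the one in this footnote is straightforward to screen for mechanically. The sketch below is a hypothetical illustration (the function name and the six-month minimum are assumptions, not anything used on the programme described): it flags a monthly CPI or SPI series that reports exactly 1.0 every single month, which in practice is far more likely to indicate re-baselining than genuine performance.

```python
def looks_implausibly_perfect(monthly_values, target=1.0, min_months=6):
    """Flag a CPI/SPI series that reports exactly `target` every month.

    Real projects show month-to-month variance; a long unbroken run of
    precisely 1.0 is a red flag for monthly re-baselining rather than
    evidence of flawless delivery. `min_months` is an assumed cut-off.
    """
    if len(monthly_values) < min_months:
        return False  # too little history to judge
    return all(value == target for value in monthly_values)
```

A check like this will not prove anything on its own, but it would have surfaced the programme above in seconds rather than years.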