March 2013
Newsletter Article

Managing and Improving Through Measurement

What doesn’t get measured doesn’t get done, or at least doesn’t get done well. The serious and responsible practice of performance improvement requires the use of data to diagnose performance gaps. It is equally important to track those gaps over time, through both periodic needs assessments and evaluations.

Without accurate and timely performance feedback—provided by ongoing measurement and tracking of performance indicators—it becomes nearly impossible to see progress toward closing performance gaps efficiently and effectively. It becomes equally difficult to make intelligent decisions about what to change, how to change it, what to leave alone, and what to abandon altogether. The feedback provided by performance measurement therefore plays a unique and crucial role in the improvement of human and organizational results.

A compass to keep everything on course 

Performance measurement provides a compass that keeps an organization on course toward a desired destination, while providing the intelligence to make day-to-day decisions about how best to get there. Performance measurement can speak both to effectiveness (Was the target destination reached?) and to efficiency (Was the destination reached in the most economical way, whether in terms of time, cost, or other resources?).

Performance dashboards can facilitate the continuous tracking of performance measures required to improve performance, from needs assessment to evaluation and everything in between. The following discussion of performance dashboards is based on previous work by Ingrid Guerra-López.

Performance dashboards 

Performance dashboards go by several labels, including performance data systems, executive dashboards, performance measurement systems, automated performance management systems, performance intelligence systems, and automated performance measurement and management systems. Regardless of the label, they are a collective set of measures or metrics used to gauge performance and, in turn, manage and improve it. They are computerized, often web-based instruments that can support objective and proactive decision making for improving performance. The concept was adapted from automobile dashboards, which provide drivers with critical data that helps them drive and maintain the automobile safely, efficiently, and effectively.

From a global perspective, such a system is a multicriteria instrument for informing decision makers about multiple aspects of performance. For example, it can track current levels of performance, the factors contributing to poor or good performance, and the criteria required for improvement, all in an efficient and timely manner.

Performance dashboards can provide multiple views to multiple levels of users so that each group has access to information related to its responsibilities. For example, the executive team could have access to mega- and macro-level data and only access other levels as desired, while middle management could have ready access to the macro- and micro-level data most relevant to carrying out their responsibilities. Likewise, lower management, supervisors, and employees could have access to the micro-level data required for competently carrying out their responsibilities. However, a strong argument for universal access could also be made, because it better ensures that everyone understands not only their own contributions but also the impact and consequences at all other levels. What you want to avoid is overwhelming all users with all data. For daily use, each user should focus on tracking the performance data central to his or her job responsibilities and the results for which he or she is accountable, while reviews of the “whole” could be more periodic and take place in an appropriate team discussion setting.
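
As an illustration only, here is a short Python sketch of how such level-based views might be filtered in software. The mega, macro, and micro labels follow the discussion above; the indicators, role names, and access map are hypothetical placeholders rather than part of any actual dashboard product.

# Illustrative sketch (not from the article): filter dashboard indicators
# by organizational level so each user group sees only the data tied to
# its responsibilities. Indicators, roles, and the access map are invented.

from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    level: str   # "mega", "macro", or "micro"
    value: float

# Hypothetical role-to-level access map, mirroring the example above.
ACCESS = {
    "executive": {"mega", "macro"},
    "middle_management": {"macro", "micro"},
    "supervisor": {"micro"},
}

def dashboard_view(role: str, indicators: list[Indicator]) -> list[Indicator]:
    """Return only the indicators a given role tracks day to day."""
    allowed = ACCESS.get(role, set())
    return [i for i in indicators if i.level in allowed]

data = [
    Indicator("community impact index", "mega", 0.72),
    Indicator("quarterly revenue (millions)", "macro", 14.3),
    Indicator("orders processed per day", "micro", 412),
]
for ind in dashboard_view("middle_management", data):
    print(ind.name, ind.value)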

What users see are usually graphical representations of quantitative data that enable them to detect gaps between optimal and current levels of performance. Depending on the design of the system, root causes can be linked to such indicators, although the complexity of organizations makes it challenging to track every possible factor affecting the indicators.

Performance dashboard views can also provide aggregate information, summaries, reports, context, and highlighted exceptions. Some dashboards provide strata for various levels of concern (for example, high risk, moderate risk, low risk), which stakeholders can define with specific criteria. This also enables users to detect trends more easily, without the use of more sophisticated analysis techniques. Some performance dashboards are configured to offer various plausible courses of action, related in part to potential causes and the level of risk.
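
To make the idea concrete, the brief Python sketch below shows one way a dashboard might compute the gap between a desired and a current value and assign it to a stakeholder-defined risk stratum. The 10 percent and 25 percent cutoffs are assumed placeholders; in practice, stakeholders would set the criteria for each indicator.

# Illustrative sketch: compute the gap between desired ("optimal") and
# current values for an indicator, then map it to a risk stratum.
# The threshold values below are hypothetical stakeholder-set criteria.

def performance_gap(desired: float, current: float) -> float:
    """Gap expressed as a fraction of the desired result."""
    return (desired - current) / desired

def risk_stratum(gap: float, moderate: float = 0.10, high: float = 0.25) -> str:
    """Classify a relative gap as low, moderate, or high risk."""
    if gap >= high:
        return "high risk"
    if gap >= moderate:
        return "moderate risk"
    return "low risk"

gap = performance_gap(desired=100.0, current=68.0)        # 0.32
print(f"gap = {gap:.0%}, stratum = {risk_stratum(gap)}")  # high risk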

Issues with performance dashboards 

If measurement systems are to really facilitate the continual improvement process through monitoring and sound decision making, some issues must be addressed. In 2002, Sérgio P. Santos, Valerie Belton, and Susan Howick published an article in the International Journal of Operations & Production Management (“Adding value to performance measurement by using system dynamics and multicriteria analysis”) in which they point to two key issues that inhibit performance measurement and management systems from reaching their full potential: problems in their design and implementation and problems with the analysis and use of the information produced by the measurements. 

Design. Poorly designed measurement systems can compromise their implementation and, in turn, their effectiveness. One important factor for organizations to consider is the selection of an appropriate measurement framework. Some strides have been made in designing procedures to identify and group performance measures in a way that makes interpretation more straightforward. However, much remains to be done in identifying relationships between measures.

While some may recognize the importance of understanding relationships between the various performance measures tracked, organizations continue to design performance dashboards without formally accounting for the interdependencies between the measures, which could ultimately undermine the validity and utility of the information produced by the system. 

To address the identification of relationships, in 2000 P. Suwignjo, U.S. Bititci, and A.S. Carrie developed quantitative models for performance measurement systems (QMPMS) using cognitive maps, cause-and-effect diagrams, tree diagrams, and the analytic hierarchy process. They describe a technique for identifying the factors affecting performance and their relationships, structuring them hierarchically, quantifying the effect of the factors on performance, and expressing those effects quantitatively.
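
As a rough illustration of one ingredient of this approach, the Python sketch below derives relative weights for a few hypothetical performance factors from a pairwise comparison matrix, using the common row geometric-mean approximation of analytic hierarchy process priorities. The factors and judgments are invented for the example and are not taken from the QMPMS authors.

# Illustrative sketch: derive relative weights of performance factors from
# a pairwise comparison matrix, in the spirit of the analytic hierarchy
# process. Factors and judgments are hypothetical; weights are approximated
# with the row geometric-mean method.

import math

factors = ["cycle time", "defect rate", "customer satisfaction"]

# comparisons[i][j] = how much more factor i contributes than factor j,
# on a 1-9 judgment scale; the matrix is reciprocal by construction.
comparisons = [
    [1.0, 3.0, 0.5],
    [1/3, 1.0, 0.25],
    [2.0, 4.0, 1.0],
]

# Geometric mean of each row, normalized so the weights sum to 1.
geo_means = [math.prod(row) ** (1 / len(row)) for row in comparisons]
total = sum(geo_means)
weights = [g / total for g in geo_means]

for factor, weight in zip(factors, weights):
    print(f"{factor}: {weight:.2f}")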

We suggest using Kaufman’s Organizational Elements Model and Guerra-López’s Impact Evaluation Process as two integrated frameworks for the design of effective performance dashboards that can be used for both continuous needs assessments and evaluations.  


Moreover, we suggest a customized causal analysis framework as a complementary element of this design. Together, we believe these three dimensions will help ensure a performance dashboard that is better positioned to improve decision making and performance.

Implementation. In addition to design considerations, organizations interested in using performance measurement systems must also consider implementation. Poor implementation is a common reason that new organizational initiatives fail. Effective implementation requires careful planning and management of the desired change. Leadership must play an active role in establishing expectations and appropriate consequences, modeling desired behaviors, and motivating those affected. A performance measurement and management system must be seen as one component of the broader performance management system, not as an add-on seemingly unrelated to work and management responsibilities.

Analysis and interpretation. Another set of challenges facing these systems relates to the proper analysis of the data and the use of the resulting information to improve performance. A rigorous analysis must take into account the context of the performance data observed, including the many other factors that are actually affecting performance. Because neither the human mind nor the measurement system can account for every performance factor, the task is not straightforward.

For instance, one may have to account for the fact that gains in one performance indicator come at the expense of another. Studying the latter independently of the former can lead to wrong conclusions, which in turn can lead to poor and costly decisions. Performance improvement professionals often face this situation in conducting needs assessments and analyses when they limit their search to symptoms and stop before identifying actual performance gaps and root causes. Organizations are dynamic, and the design and delivery of change must take that dynamism into account. Dashboards can provide excellent information about what is going on so that managers of change can quickly see what is working and what is not.

Santos, Belton, and Howick point out that many authors (including Wickham Skinner in 1974 and Giovani Da Silveira and Nigel Slack in 2001) have argued that organizations cannot succeed in every single performance indicator and that explicit decisions about tradeoffs must be made. Prioritization of indicators, objectives, and gaps is therefore both relevant and useful.
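
One way to make such tradeoffs explicit is to rank performance gaps by comparing the estimated cost of closing each gap with the estimated cost of ignoring it. The short Python sketch below illustrates this kind of prioritization with hypothetical figures; real estimates would come from stakeholders and from the dashboard’s own data.

# Illustrative sketch: rank performance gaps by the ratio of the estimated
# cost of ignoring each gap to the estimated cost of closing it.
# All figures are hypothetical placeholders for stakeholder estimates.

gaps = [
    # (gap description, cost to close, cost to ignore)
    ("order errors above target", 40_000, 250_000),
    ("slow onboarding of new hires", 90_000, 120_000),
    ("outdated intranet design", 60_000, 15_000),
]

# The larger the ignore/close ratio, the stronger the case for acting first.
ranked = sorted(gaps, key=lambda g: g[2] / g[1], reverse=True)

for name, close_cost, ignore_cost in ranked:
    print(f"{name}: ignore/close = {ignore_cost / close_cost:.1f}")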

When to apply

Performance dashboards are particularly useful when organizational leaders are committed to integrating them into their management practices and decisions. They are especially helpful in supporting the processes of clarifying and deploying appropriate resources to meet organizational objectives and plans, such as in needs assessments, in tracking the status and relative effectiveness of various organizational initiatives, and in making timely decisions about what to change, what to keep, what to modify, and how. For example, if a top-level executive wants to ensure that everyone within the organization clearly understands the strategy over the next five to 10 years, and wants sound and justifiable leadership decisions that are well aligned with that strategy, she may call for the implementation and use of a performance dashboard as a tool.

Avoiding pitfalls of dashboards 

However, if the intended users are not on board with the idea of using performance dashboards, or if they do not receive support on the proper use of the dashboard (for example, if the performance indicators most relevant to their areas of responsibility and management are not made explicit, or if they are not supported in interpreting the data and “translating” it into useful information and recommendations), the dashboards will likely not be used consistently or appropriately—or at all.

In most organizations, timing plays an important role in the effectiveness of performance dashboards. Having timely, and in some cases real-time, performance data can save time, costs, and other precious resources that can be lost while waiting for end-of-cycle reports (such as monthly, quarterly, or annual reports). Performance dashboards reduce the effort and time required to access and use these data.

Performance dashboards will not be helpful—in fact, they could destroy organizational effectiveness—if the wrong measures are being tracked, that is, if irrelevant or generic measures are being tracked independently of important management decisions. They also will not help if the intended users do not actually use the information from the system to support their decision making. Finally, cost is an important consideration. While a useful system will require resources, it is important to weigh the costs against the potential benefits. The system does not have to be the most expensive or sophisticated to work well; it just has to enhance the management function, particularly as it relates to decision making. To accomplish this without spending exorbitant amounts of resources on a system, you may want to limit the measures and system functions to the most critical.


Note: This article is excerpted from Needs Assessment by Roger Kaufman and Ingrid Guerra-López.


© 2013 ASTD, Alexandria, VA. All rights reserved.

About the Author

Roger Kaufman, PhD, is professor emeritus, Florida State University and distinguished research professor at the Sonora Institute of Technology (Mexico). Kaufman is the recipient of a U.S. Homeland Security/U.S. Coast Guard medal for Meritorious Public Service. He has also been awarded the International Society for Performance Improvement’s (ISPI) top two honors: Honorary Member for Life and the Thomas F. Gilbert Award. He is a past ISPI president and a founding member, and is the recipient of ASTD’s Distinguished Contribution to Workplace Learning and Performance recognition. Kaufman has published 41 books and more than 285 articles; his latest book is Needs Assessment for Organizational Success (ASTD Press).

About the Author
Ingrid Guerra-López, PhD, is an internationally recognized performance improvement expert and bestselling author. She is the chief executive officer of the Institute for Needs Assessment and Evaluation, a firm that provides consulting, coaching, and training and development services focused on strategic measurement, management, and alignment of learning and performance improvement programs. Ingrid is also a professor at Wayne State University, where she conducts research and teaches graduate courses focused on performance measurement, management, and strategic alignment. She recently completed a term as director on the board of the International Society for Performance Improvement, and completed her tenure as editor-in-chief of Performance Improvement Quarterly.

Ingrid has authored seven books, including Needs Assessment for Organizational Success and Performance Evaluation: Proven Approaches to Improving Programs and Organizations. She has also authored approximately 100 articles and facilitated hundreds of international and national presentations and workshops on topics related to performance assessment, monitoring and evaluation, and strategic alignment. Her clients include international development agencies, government, education, military, healthcare, and corporate organizations. Ingrid has coached and mentored hundreds of graduate students, executives, managers, and other professionals, disseminating evidence-based performance improvement practices in more than 30 countries.