Performance Measurement for a Customer Focus Strategy

January 1, 2011

Introduction

Many organizations are in a perpetual state of change. Changing markets, changing competition, changing organization structures, total quality initiatives and reengineering are often the rule rather than the exception. Often these initiatives fail to yield the desired results or, in the extreme, fail entirely. The reasons for this failure, of course, can be many. However, one that often stands out is the failure of the performance measurement system to change as the needs for measurement change.

The recommended solution is often to establish new measures appropriate to the new techniques employed and append them to the existing measurement system, without regard for the contradictory nature of the old and new measures and without any understanding of which measures are more important. At worst, the organization continues to emphasize and use the old “hard” measures, ignoring their negative effect on the new organizational initiatives.

What is at work here is a lack of basic understanding of what measurement is, what it is designed to accomplish and what its effect is on the behavior and culture of the organization. Therefore, the objectives of this paper are to: 1) define what performance measurement is, 2) describe its characteristics, 3) analyze the requirements of an effective measurement system, 4) determine how measurement affects behavior and culture in an organization and 5) recommend a direction toward improving performance measurement in the knowledge organization.

Performance Measurement Today

Peter Drucker, in a recent article in The Wall Street Journal (12/1/92), stated that information technology has provided the means of collecting vast amounts of data, but, in order for data to be converted into information, it must be organized for the task, directed toward specific performance and applied to a decision. Many managers don’t know what information they need to do their job or how to get that information. Others don’t understand how the availability of that information has changed their management task. Finally, few managers know what information they owe to the organization to ensure its success.

Chester Barnard, in his book The Functions of the Executive (1938), recognized the difference between efficiency, that is, doing things right, and effectiveness, that is, doing the right thing. However, information systems have historically measured only efficiency and largely ignored effectiveness. There are compelling reasons for this: 1) our ability to gather information in a timely manner has been limited, 2) efficiency measures are much easier to identify, 3) efficiency data is much easier to gather and 4) efficiency measures are much easier to quantify.

This emphasis on efficiency has led to a subtle transformation of performance from the “greatest benefit (effectiveness) for the least cost (efficiency)” to the “greatest MEASURABLE benefit for the least MEASURABLE cost”. This has resulted in three significant problems:

  • Cost is more measurable than benefit, so performance is reduced to economy.
  • Efficiency assumes that costs that are not measurable do not exist.
  • Measurable benefits drive out unmeasurable benefits, even when they miss the point.

Efficiency without effectiveness is like working on the budget while the building is on fire. If this seems extreme, consider these common examples of efficiency-driven measures found in many companies:

  • Scurrying around finding things to ship at the end of the month, even things not completed, inspected or due, in order to “make the numbers”.
  • Being unable to purchase a critical piece of equipment, or being forced to make useless expenditures, because it’s “in the budget”.
  • Purchasing a two-year supply of something (can you imagine a supermarket produce department doing this?) because the volume price break will improve your Purchase Price Variance.
  • Producing product you don’t need in order to make the department Machine Utilization Percentage look good.
  • Making the decision not to improve quality because it’s “too expensive”.

These examples point out clearly that controls often lead to performance that is not beneficial, indeed often detrimental, to the organization. How does this happen? No one deliberately sets out to establish measures that degrade organization performance. To understand how this happens, it is necessary to understand some characteristics of controls.

Because organizations are social systems and social systems cannot be measured outside of themselves, controls can be neither objective nor neutral. What gets measured affects how the organization behaves. This is a critical characteristic, little understood by traditional measurement systems. Certainly many measurements are instituted to influence behavior (increase production, meet a schedule, etc.). However, controls have both intended and unintended behavioral consequences. This accounts for some of the behavior mentioned above.

Characteristics of Performance Measurement Systems

There are nine characteristics of performance measurement systems that fall into two categories as follows:

  • Structural Characteristics.
    • Static measures vs. Vector measures.
    • Hard vs. Probability vs. Soft measures.
    • Direct vs. Indirect measures.
    • Precision vs. Accuracy.
    • Level of Detail.
    • Underlying Assumptions.
  • Behavioral Characteristics.
    • Intended vs. Unintended consequences.
    • Adaptability to performance measures.
    • Measurement sets to manage behavior.

This paper will briefly discuss each of these characteristics in turn and describe their implications for performance measurement systems.

Static vs. Vector measures

Static measures capture the position of a variable at an instant in time, while vector measures capture the velocity and direction in which a variable is moving. While both are necessary, traditional performance measurement systems tend to emphasize static measures. There are several reasons for this: 1) they are easier both to understand and to collect data for, 2) they are more easily quantified and compared and 3) they require less information in order to make a measurement of a variable. On the other hand, vector measures require 1) a baseline from which to measure, 2) a goal toward which the variable should be moving and 3) a time series of measurements in order to measure direction and velocity. The minimum is two measurements if the variable is linear, and more if the variable is nonlinear or probabilistic, or if the velocity is undergoing acceleration or deceleration.
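
To illustrate, the velocity and direction of a vector measure can be estimated by fitting a slope to the time series. The following sketch (in Python, with invented data and names, and assuming roughly linear movement) computes a least-squares slope and reads it against a stated baseline and goal:

    # A minimal sketch of a vector measure, assuming roughly linear movement.
    # All data and names are invented for illustration.
    from statistics import mean

    def vector_measure(readings, baseline, goal):
        """Return (velocity per period, direction relative to the goal)."""
        if len(readings) < 2:
            raise ValueError("a vector measure needs at least two readings")
        t = range(len(readings))
        t_bar, y_bar = mean(t), mean(readings)
        # Least-squares slope of reading vs. time = velocity per period.
        velocity = (sum((ti - t_bar) * (yi - y_bar) for ti, yi in zip(t, readings))
                    / sum((ti - t_bar) ** 2 for ti in t))
        toward = (goal - baseline) * velocity > 0
        return velocity, "toward goal" if toward else "not toward goal"

    # Example: on-time delivery %, baseline 82, goal 95, five monthly readings.
    print(vector_measure([82, 84, 83, 87, 89], baseline=82, goal=95))
    # -> (1.7, 'toward goal')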

In a continuous improvement environment, vector measures take on increasing importance, since the focus is no longer on where we are (static) but on the direction in which performance is heading (vector).

Hard vs. Probability vs. Soft measures

Hard measures are those that can be quantified directly. Soft measures are those that can be measured only in relative terms. Few of the things we measure in a business organization (inventory counts are one example) can truly be considered hard measures. Many things we have traditionally treated as hard measures, such as performance standards, budgets and forecasts, are actually probability distributions.

In reality there is a whole range of measures between a truly hard measure (an inventory count) and a truly soft one (e.g. this year is “better” than last year). These measures fall into two categories: 1) relative and 2) probability. Relative measures are those that can be bracketed between two more concrete measures. For example, we can know (by focus group, survey or some other research) that our customer service has improved over a baseline such as last year but has not yet attained our goal. Assigning a quantifiable value may be possible, but only very indirectly (e.g. through the number of customer complaints). However, we can be fairly certain that customer service is improving via the vector measurement.

Probability measures can determine whether a particular value or set of values of a variable is within the same probability distribution as an expected value. The validity of the measure increases as the number of data points increases for both the expected value (the standard) and the variable (the measured value). Standard statistical hypothesis testing methods can then be used to determine whether the variable is from the same distribution as the expected value. As an example, the performance of several employees can be tested to see if it is within the probability distribution of a performance standard.
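
As a concrete sketch of such a test (illustrative data, units and significance threshold, and assuming the standard is treated as the mean of a distribution; uses the scipy library), a one-sample t-test can ask whether a set of observations is consistent with the standard:

    # A minimal sketch: test observed performance against a standard treated
    # as a distribution mean. Data and the 0.05 threshold are illustrative.
    from scipy import stats

    standard = 100.0  # expected units per shift, treated as a distribution mean
    observed = [97, 103, 95, 101, 99, 96, 104, 98]  # one employee's recent shifts

    # One-sample t-test: are the observations consistent with the standard?
    t_stat, p_value = stats.ttest_1samp(observed, popmean=standard)
    if p_value < 0.05:
        print(f"performance differs from the standard (p = {p_value:.3f})")
    else:
        print(f"performance is consistent with the standard (p = {p_value:.3f})")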

Relative and probability measures can be combined in powerful ways. For example, a series of customers can be surveyed and the results compared to a similar series from the previous year. If it can be shown that the two samples came from different probability distributions, it can be inferred that customer service is improving (or deteriorating) relative to a previously measured baseline and a pre-established goal.
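
A sketch of this combination (invented survey scores on a 1-10 scale; uses the scipy library) applies a two-sample t-test to this year’s and last year’s samples:

    # Compare this year's customer survey sample to last year's baseline
    # sample. Scores are invented for illustration.
    from statistics import mean
    from scipy import stats

    last_year = [6.1, 5.8, 6.4, 5.9, 6.0, 6.2, 5.7, 6.3]  # baseline sample
    this_year = [6.6, 6.9, 6.4, 7.1, 6.8, 6.5, 7.0, 6.7]  # current sample

    # Two-sample t-test: do the samples come from different distributions?
    t_stat, p_value = stats.ttest_ind(this_year, last_year)
    if p_value < 0.05:
        direction = "improving" if mean(this_year) > mean(last_year) else "deteriorating"
        print(f"customer service is {direction} vs. the baseline (p = {p_value:.3f})")
    else:
        print("no statistically detectable movement from the baseline")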

Direct vs. Indirect measures

Most of what we measure in a business organization is indirect to the result we hope to obtain. For example, taking “smile surveys” (“How did you like us?”) or tracking returns and warranty claims, while useful, allows us only to infer that our customer service is adequate, improving, declining, etc. This is because there is no direct way to “get into the customer’s mind” to determine what they really believe about our customer service and our performance compared to competitors.

This differs from our traditional belief that measurements are direct. In the days when the primary goal of measurement was to “count pieces and hours”, this assumption was largely correct. Today, however, most of the things that are important to know in a world-class, customer-oriented organization cannot be known directly, any more than they are hard numbers.

If indirect measures are misinterpreted as hard measures, several pitfalls follow: 1) indirect measures require judgment in order to draw the correct inference about the desired result; a literal interpretation or incorrect judgment can lead to an inappropriate conclusion regarding the meaning of the measurement; 2) both those measuring and those measured must clearly understand and agree on the purpose of the measurement, its relation to the desired result and the fact that the measurement is not an end in itself; 3) interpreting a measure as direct when it is indirect, or failing to understand the result for which the measurement is a surrogate, will lead to incorrect motivation and behavior on the part of those being measured. This is discussed further in the section on Behavioral Characteristics.

Precision vs. Accuracy

Calculating a measurement to five decimal places is precision and has little to do with accuracy; in most cases precision adds no value to the measurement. Accuracy can be measured as the difference between the observed and actual value in the case of existing discrete quantities (e.g. inventory quantities), and as the standard deviation of a probability distribution for forecasts, standards or goals. Many measurement systems exhibit weakness here by substituting precision for accuracy and by treating probability distributions as “hard” numbers, ignoring the standard deviation.
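
A small worked sketch of the distinction, with invented figures:

    # Precision vs. accuracy in the paper's terms; all figures invented.
    from statistics import stdev

    # Discrete quantity: a balance stated to five decimals is precise,
    # but accuracy is the gap between observed (book) and actual.
    actual_on_hand = 1412
    book_balance = 1398.00000
    inventory_error = book_balance - actual_on_hand  # -14: precise yet inaccurate

    # Forecast: accuracy is characterized by the spread of forecast errors,
    # i.e. the standard deviation of the underlying distribution.
    forecast = [500, 520, 480, 510, 505]
    actuals = [512, 495, 501, 530, 488]
    errors = [f - a for f, a in zip(forecast, actuals)]
    print(f"inventory error: {inventory_error}, "
          f"forecast error std dev: {stdev(errors):.1f}")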

Levels of Detail

An incorrect level of detail can lead to an improper focus on the wrong activities. Before the computer and modern data collection methods, organizations collected whatever data, at whatever level of detail, they could, because it was all that was readily available. Also, because performance measurement has not been treated as a system and has been automated but little changed, many functions that were formerly important but now are less so continue to be measured in excruciating detail, while others that have taken on importance are glossed over.

A clear example of this is the continuing detailed measurement of direct labor distribution (currently averaging 8-12% of product cost in the United States) while overhead (often more than 50% of product cost) is collected in little detail and “painted” over products in an arbitrary manner. This leads to an overemphasis on well-known, detailed costs and a lack of emphasis on factors not as well known and understood. An organization might know that Ralph achieved 98 or 99% efficiency on some 2-hour job yesterday, yet have no idea what its customers think of it or what types of problems they perceive with the organization. Customer service incidents should be tracked individually, while production might be better served by knowing only how much quality, salable product came off the line and whether the schedule was met and the customer satisfied.
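
A hypothetical illustration of the distortion (all figures invented): when a shared overhead pool is allocated by direct-labor hours, a low-volume product that actually consumes half the support effort is charged only a tenth of the pool:

    # Overhead "painted" over products by direct-labor hours; all figures
    # are invented for illustration.
    overhead_pool = 600_000.0
    labor_hours = {"high_volume_product": 9_000, "low_volume_product": 1_000}
    actual_support_share = {"high_volume_product": 0.5, "low_volume_product": 0.5}

    total_hours = sum(labor_hours.values())
    for product, hours in labor_hours.items():
        allocated = overhead_pool * hours / total_hours           # labor-hour basis
        consumed = overhead_pool * actual_support_share[product]  # actual consumption
        print(f"{product}: allocated {allocated:,.0f}, "
              f"actually consumed {consumed:,.0f}")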

Underlying Assumptions

If the critical measures in a customer-service-focused organization are mostly vector, probability or soft and indirect, which is often the case, it is important that the underlying assumptions of any performance measurement be made explicit. Historically, most measurements were made “because we could get the data”, and little thought was given to designing performance measurement systems or to what was assumed about the measurement and the related expected result. Today a business organization must not only be clear about the structural characteristics of each performance measurement and system, but must also be clear about the rationale that makes the indirect measurement a valid surrogate for the intended result. Drucker (1973) indicates that, among other things, performance measures must be meaningful (a good test: what is meaningful to the customer?), congruent (accurately reflecting the movement of the result) and understandable by those who benefit from them. This requires a conscious definition of each performance measurement and its place and purpose in the performance measurement system.

Behavioral Characteristics

Intended vs. Unintended consequences

Drucker (1973) has indicated that performance measurement in a social system can be neither objective nor neutral. One of the purposes of performance measurement is to get people to perform to a certain standard or goal. If the performance measurement is an indirect measure of the goal, and if the measure is interpreted as a hard number, the measure often becomes a substitute for the goal itself. This can lead to the unintended consequence of meeting the measure while ignoring the goal: at best, the measure provides no relevant motivation toward the expected result; at worst, it is counterproductive.

Examples of performance measures leading to unintended consequences are numerous and exhibit themselves as a focus on meeting or exceeding the “hard” measure while neglecting the expected result. Several specific examples were listed earlier in the paper.

Unintended consequences are more likely to occur when measures are: 1) less direct, 2) static rather than vector or a combination, 3) singular or predominant measures of performance rather than combinations of measures, 4) statistical expected values treated as hard-number measures and 5) used for command and control of individuals rather than as information that can be used to adjust the process.

It appears that performance measurement systems with certain characteristics run the risk of, at best, measuring nothing meaningful and, at worst, producing unintended consequences that interfere with good performance, especially in a continuous improvement or customer-service-focused environment. Those systems that focus on precise, hard numbers measuring specific performance against specific indirect parameters, to the exclusion of vector measurement, relative and statistical measurement and management judgment, seem to be most at risk.

In short, this is the very performance measurement system many companies have carried over from their hierarchical command-and-control days and are now trying to use to coax quality, participation and continuous improvement from their organizations. It is not hard to understand why many organizations meet what seems like intentional resistance to organizational change and continuous improvement initiatives.

Adaptability to performance measures

A wise executive once said, “Tell me how you measure someone and I’ll tell you what he’s doing.” Especially in organizations that treat all measures as hard and direct, people over time figure out how to maximize their performance against the performance measure, often without regard for, and sometimes to the detriment of, the underlying expected result. People shipping anything that is not nailed down at the end of the month, or spending unused budget on unneeded equipment and activities, are acting in ways that are counterproductive to the organization while benefiting themselves by maximizing performance against the measurement.

Clearly, something must be done to resolve this problem. Four possible solutions are: 1) recognize the nature of the measure (hard/soft, direct/indirect, etc.) and use it accordingly; 2) clarify the underlying assumptions of the measure and make sure they are understood by both the person measuring and the person measured; 3) create measurement sets that manage behavior (see below); and 4) continually adjust the measurement system parameters over time to a) reflect changing goals and results and b) compensate for behavior that has adapted to the requirements of the performance measure and drifted away from the underlying expected result.

Measurement Sets to manage behavior

Performance Measurement as an Integrated System

Too often, performance measurement in an organization is an outgrowth of the financial system. Measurements are added haphazardly, as needs to clarify or control are perceived, without regard for a performance measurement system integrated with and supporting organizational goals and objectives. Just as, in the pre-MRP days, we believed that if each department produced as much as it could the result would be maximum production, many organizations still believe that if we measure each individual and function against some quantifiable standard, the sum of the results will be organizational effectiveness.

The understanding of what constitutes a comprehensive performance measurement system is still evolving, but here are some key elements that are believed to be necessary:

  • Performance measurement must be a system designed as a part of the implementation plan for overall corporate strategy.
  • Each measurement should be traceable to and shown to support overall corporate purpose.
  • Measurement systems and methodologies must be aligned with desired corporate cultural values.
  • Vector measures must predominate in a continuous improvement strategy.
  • A clear distinction must be drawn in the definition and use of performance measures between those that are deterministic (e.g. matching physical inventory counts to “book” balances) and those that are statistical in nature (e.g. standards, forecasts, budgets).
  • The system must focus on measurement as information, not measurement as control.
  • Measurement systems must leave room for management judgment.
  • Measurement systems must be constantly re-evaluated:
    • With each change in strategy or goals.
    • With each system or process revision.
    • When a measure or set of measures becomes dysfunctional (i.e. exhibits unintended consequences).
    • Variables measured and measurement methodology must be reviewed in addition to measurement values and goals.

Clearly there is work to be done in the performance measurement area. While the research is still new and there is much to be learned, most organizations are so far behind what we already know that there is much to be gained from analyzing the performance measurement system from an organizational perspective. Those organizations that do will benefit; those that do not will fall behind.

References

Barnard, Chester I., The Functions of the Executive, Harvard University Press, Cambridge, MA (1938).

Cooper, Robin & Robert S. Kaplan, The Design of Cost Management Systems, Prentice Hall, Englewood Cliffs, NJ (1991).

Drucker, Peter F., Management Tasks, Responsibilities, Practices, Harper & Row, New York (1973).

—, Managing for the Future, Truman Talley Books/Dutton, New York (1992).

—, “Be Data Literate — Know What to Know”, The Wall Street Journal, 12/1/92, p. 16.

Kaplan, Robert S., editor, Measures for Manufacturing Excellence, Harvard Business School Press, Boston (1990).

Maskell, Brian H., Performance Measurement for World Class Manufacturing, Productivity Press, Cambridge, MA (1991).

Merchant, Kenneth A., Rewarding Results, Motivating Profit Center Managers, Harvard Business School Press, Boston (1989).

Miller, DeMeyer & Nakane, Benchmarking Global Manufacturing, Irwin, Homewood, IL (1992).

Senge, Peter M., The Fifth Discipline, Doubleday, New York (1990).

Simon, Herbert A., Administrative Behavior, The Free Press, New York (1976).
