The practice of including ratings in internal audit reports to highlight or summarize results is not something new. I began exploring and lecturing on the pros and cons of ratings more than 10 years ago. But the subject came up recently at a CAE roundtable, reminding me how popular — yet controversial — the practice continues to be.
Almost 40 percent of those in the room use ratings in some form, and the last time I formally surveyed on the practice, more than two-thirds of respondents said they were including ratings in their audit reports.
Ratings are often assigned based on the overall results of the audit, and they can take adjectival forms, such as "satisfactory," "needs improvement," or "unsatisfactory." More creative approaches include assigning ratings to individual findings or using color-coded indicators, such as green, yellow, and red. Regardless of the methodology, the objective is typically the same: Ratings are a powerful way to draw management and the board's attention to the bottom line of an internal audit.
In my experience, executive management and the audit committee tend to have the greatest appreciation for ratings, which enable them to quickly focus on what's important in the internal audit report. A CEO once told me that, when he received an internal audit report, he looked first at the overall rating. If it was "satisfactory," he said, he "threw it in the trash can." If the rating was "needs improvement," he placed it in his in-box for review the next day. And if the result was "unsatisfactory," he stuck the report in his briefcase to read on the train home that evening.
Meanwhile, an audit committee chairman observed that ratings can "shine a light" and help the audit committee quickly focus on the most important findings in a report. However, while ratings may be a "light" for some, they are ultimately a "lightning rod" for others.
Ratings can be a powerful tool, but if management and the audit committee place undue emphasis on them, they tend to have a polarizing effect on line and operating managers whose performance ends up being summarized in a single word: "unsatisfactory." In a lecture I delivered several years ago, I summarized the undesirable consequences of ratings in internal audit reports:
- Ratings may foster friction between internal audit and operating management. This is particularly true when ratings are used as negative indicators in performance management plans. In such cases, responsible managers can lose some or all of their incentive compensation. In other organizations, managers whose areas of responsibility earn unsatisfactory ratings must come before the audit committee to explain their corrective action plans. A CAE once told me that, on those rare occasions when he included an "unsatisfactory" rating in an audit report, it was his signal for the responsible manager to be fired! No wonder ratings turn into lightning rods.
- Ratings add time to the reporting process. One of the most significant contributors to delays in finalizing an audit report is the time it takes to receive management's response to, and concurrence with, the draft report. The CAEs in the recent roundtable discussion acknowledged that this delay is often exacerbated by negative ratings. The problem can be so acute that some CAEs won't assign a rating until after management has responded, a practice that certainly doesn't endear internal audit to management.
- Ratings may diminish the significance of important audit findings. If ratings are assigned only to the report as a whole, and not to individual findings or issues, the reader may overlook important results. Rushing to the rating may impair the reader's ability to see the trees for the forest.
- Management is less likely to openly share known control weaknesses with audit teams. It's only human nature not to draw attention to your flaws. If the consequences of alerting internal audit to known control or risk management weaknesses are likely to be severe (loss of incentive compensation, for example), many managers will take the attitude of "let the internal auditors find it for themselves." That attitude only slows the audit process and diminishes internal audit's overall value.
It would be easy to conclude that ratings are more trouble than they are worth. But it is important to remember that internal audit's key stakeholders often derive a lot of value from them. So, before you make a hasty retreat from this practice, it would serve you well to have an extensive discussion with executive management and the audit committee. For those who do use ratings in internal audit reports, there are five important points to remember to mitigate some of the challenges:
- Identify adjectival or numeric ratings that are clearly understood and accurately reflect the results of the audit. Be as objective as possible in assigning ratings. Never afford those you audit the opportunity to accuse you of bias.
- Communicate the rating scheme in advance. It is only fair for management to know the rules before the "game is played."
- Identify objective criteria for assigning findings/report ratings, coordinate with management in advance, and stick to them. Clearly defined criteria will always afford you a more defensible position in the event of disagreement over the ratings you assign.
- Afford management an opportunity to respond to draft ratings, and include those responses in the final report. I strongly discourage assigning ratings only after management has provided responses to the draft report. You may win the battle, but lose the war.
- Try to discourage the use of ratings for punitive actions against management or operating officials. This is the single biggest reason that ratings become lightning rods.
Good luck as you grapple with ratings. I welcome your thoughts on how to enhance the process.