I may be way behind the curve on this one. And if I am — if I am accusing internal auditors of doing something they no longer do — then feel free to correct me. But I have a feeling …
Why do auditors conduct tests?
And here I have to interrupt our regularly scheduled program to admit that I learned something new. You see, at this point I fully intended to quote, at length, what the IPPF had to say about "testing." However, after lo these many years, I learned an interesting tidbit. The word "testing" does not appear in the Standards. And the word "test" appears only once, in relation to technology-based audit techniques. And, in that instance, it is used as an adjective modifying the noun "data generators."
Does that mean that, because the Standards are silent on that particular word, we don't have to do any testing? Perish the thought. It all falls under a broader, umbrella-like passage: "[Audit] work programs must include the procedures for identifying, analyzing, evaluating, and documenting information during the engagement." Read on and you will find that all of this must be in support of the engagement's objectives.
(I know, I know. You already know all this. Hang in there. I find that it is never a waste to go back to what we assume we already know to figure out why we do the things we do. For example, did you know the word "testing" didn't appear? I didn't think so. Let's continue.)
So, now that we're back on course, let's try another authoritative source. What does Sawyer's Guide for Internal Auditors say about testing?
Unfortunately, I must interrupt one more time for a confession. My current version of Sawyer's is not that … current. In fact, it was published in 1973 and is called The Practice of Modern Internal Auditing. (That was back when Sawyer was younger than I am now. But let's move past that sobering thought.) One works with the resources one has. And I'll go out on a limb and bet that, while some of the words may be different in more recent editions, the concepts are the same.
In my version, Sawyer calls testing "the measurement of selected transactions or processes, and the comparison of the results of those measurements with established standards."
There are volumes in that simple statement. For example, if you're paying attention, you can see the foundations of findings: the standards are the criteria, the measurement is the condition, and the comparison of the two is what we examine when searching for a cause. But let's keep moving.
Sawyer goes on to state (a long time ago): "The audit test usually implies the evaluation of transactions, records, activities, functions, and assertions, by examining all or part of them."
I've just provided a lot of verbiage that may seem elementary. But clear back in 1973, the experts recognized that testing could not be put in a cubbyhole. For example, I'll bet that, when I said the word testing to you, your first thought was pulling samples, reviewing documents, and completing spreadsheets (hard copy and electronic). I'll bet interviews didn't even come into the conversation.
And therein lies the rub. Again, why are we testing?
I had a simple rule for my auditors. If the auditees have already indicated they are doing it wrong, there is almost no need to do any more testing. One exception: if we needed to determine how big an issue it was (dollars, hours, etc.). Otherwise, why test to prove they were doing things wrong when they had already admitted to doing things wrong? It was time to move on to figuring out why and developing corrective action.
The concept is simple, and it seems to be borne out time and time again. When someone says they did it wrong, you can usually take their word for it; when they say they do it right, you're going to want just a bit more confirmation.
(And to prove there is an exception to every rule: there was an auditor hired shortly after I joined the department who was not well liked in the office. In fact, one line supervisor disliked him so much that she would always tell him the opposite of what was occurring. If they were doing it wrong, she said they were doing it right; if they were doing it right, she said they were doing it wrong. There are more stories there, but I'm running short on time.)
So, if the answer to "Why are we testing?" is to support the objectives of the engagement, and the objectives of the engagement include assurance that things are working right, and we already know that things are not working right, then there is a much better question internal audit should be asking itself:
"Why are we doing so many tests we don't need to conduct?"
Again, going out on a limb, I'd bet that a significant portion of the testing being conducted is unnecessary. If the initial audit work has been completed correctly, that is, if the auditors have had robust, in-depth conversations and just plain watched what was going on, then the amount of testing needed to prove what is already known can probably be sharply reduced.
As I say, maybe I'm way behind on this one. Maybe everyone already knows this and makes the necessary adjustments. Feel free to correct me if I'm misguided.
But based on discussions I've had where auditors tell stories of being told to complete the tests that were outlined when the audit was first planned — the work program that was developed before anyone really knew what was happening — I'm guessing this is still a real problem.
Or to put it another way, take a closer look at how often you change your program based on what you have discovered. I think you'll learn something interesting about why there is never enough time to complete the audit work you think you need to get done.