At one stage of my career, I was the guy who was kind of in charge of overseeing our fraud investigations. Which meant I was the one the auditors would call for advice, suggestions, please-tell-me-I-didn't-mess-this-up confirmations, and other pearls, nuggets, and morsels of wisdom related to forthcoming, ongoing, and completed investigations.
Interestingly, the main calls came not while the auditors were determining if an investigation was warranted, while gathering facts and information prior to the investigation, or while performing the investigation. No, the phone calls generally came when the auditors were preparing the final report. They would send a rough draft and then call to discuss how it might be improved. Generally, they couldn't explain what was wrong; they just knew they weren't saying what they thought needed to be said.
The issues were seldom the simple cases of grammar, word choice, information flow, or logical structure the auditors expected. That meant, to provide any real insight, I needed information on how the investigation had been conducted. I would ask the auditors questions that dug deeper and deeper into what had and what had not been done. And, more often than not, the auditors didn't like the way that conversation went. They were looking for quick, easy writing tips, and I was asking questions about the strategies used in conducting the investigation. And, far too often, it became quickly apparent that the investigation had not been done that well.
The root cause of the struggle with completing the reports was not that the reports were written poorly; it was that the reports were being written about work that had been done poorly.
I learned a very long time ago that many of the report-writing issues people struggle with have less to do with writing skills than with audit skills. (Or, as in the case above, investigation skills.) At one point, our department began doing in-depth report reviews led by the chief audit executive. In front of the entire audit department he would point out what was wrong (and, very occasionally, right) with every report that was going to be issued. It didn't take me too long to realize: 1) this wasn't the best approach in the world and 2) we were beating the wrong end of the snake. We could work forever on trying to get a report to look perfect, but it would never be any good until we addressed issues with how the work was being performed.
These stories came to mind while reading Annie Duke's book Thinking in Bets. Ms. Duke is a world champion poker player who came to the game while conducting graduate-level research in psychology. While the book does include many references to poker, it is not a poker book. Instead, it is a book that closely examines how we think — in particular, how we do it poorly.
She talks about her initial forays into discussions with other players about hands of poker that had recently been played. Yes, those conversations included much commiseration about how "bad luck" had caused each of them to lose. But she quickly realized that real learning occurred when the discussion went beyond the luck behind an individual hand into discussions about what had gone right or wrong with the underlying strategies. And it required putting her ego on the shelf, admitting that she was not doing everything perfectly, accepting that every hand offered the opportunity to learn, and being willing to accept criticism, even when things had gone well, with the ultimate goal of becoming even better.
Why did my stories of poor report writing and poorer audit work come to mind while reading about the analysis of poker hand strategies? Because Ms. Duke's approach, the idea of looking beyond specific results to determine whether the overall strategy was sound while recognizing that even the best approaches offer opportunities to be better, contains the recognition that part of the reason the auditors wanted writing tips and not audit tips was that they were not looking for honest feedback about the work being done. Their assumption was that they were doing it correctly. And people, in spite of protestations to the contrary, want to hear how well they are doing, not how they might do better (and, relatedly, what they did wrong).
Which raises the question: when we look back at the success or failure of an individual audit, are we looking at the success or failure of individual tests/interviews/meetings/documentation, or are we looking at the success or failure of the audit's overall strategy? And do we even want to hear what we might have done wrong?
Which means, I guess, that this whole blog post is just a teaser to a couple of posts that will follow.
So, I hope you take to heart the concept that much of the problem with reporting is often a problem with our audit work. But I also hope you will come back Friday when we'll talk more about honest feedback, avoiding groupthink, and the need to find an individual or a group we can trust to tell us the truth.