Audit Data Science: How to improve the quality of audits over the long term
Executive Education / 15 June 2021
Patrick Müller holds a degree in business informatics (Diplom-Wirtschaftsinformatiker) and has worked as a forensic data analyst in consulting and as a data scientist in industry. Since 2020 he has been self-employed, focusing on the preparation and implementation of data-analytics projects. He lectures on the "Certified Fraud Manager" and "Certified Audit Data Scientist" certificate programmes at Frankfurt School. His professional passion: "Turn fraud into value and insights into EBIT".


A major cause of sleepless nights for auditors and forensic accountants is the thought that a finished report might be challenged in court because the auditor’s opinion is based on an incomplete set of data and consequently draws erroneous conclusions. This scenario becomes much more likely when data forensics experts work with companies whose systems and data landscapes are unfamiliar to them. Even internal auditors are not immune to this risk.

Detecting erroneous or fraudulent operations in large datasets extracted from business process systems is always a challenge. Most methods of data analysis are equivalent to rules-based auditing procedures. Hypothesis-based test procedures are often applied to isolated steps in a process; they make it possible to identify, for example, orders without matching requisitions, or transactions posted outside normal working hours. Practical experience shows that while these methods are useful for revealing many different kinds of violations and non-compliances, they may still fail to detect certain patterns of behaviour that are harmful to the company. The truth is, auditors often leave IT forensic considerations out of the equation.
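To make this concrete, a hypothesis-based test of this kind can be expressed in a few lines of Python with pandas. The sketch below flags orders without a matching requisition and postings outside normal working hours; the column names (requisition_id, posted_at) and the working-hours window are illustrative assumptions, not fields of any specific ERP system.

```python
import pandas as pd

def hypothesis_tests(orders: pd.DataFrame) -> pd.DataFrame:
    """Flag orders without a requisition or posted outside working hours."""
    hour = pd.to_datetime(orders["posted_at"]).dt.hour
    suspicious = (
        orders["requisition_id"].isna()   # order without matching requisition
        | (hour < 6) | (hour >= 20)       # posted outside normal working hours
    )
    return orders[suspicious]
```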

To maximise the chances of detecting fraudulent transactions, we recommend the following steps:

Ensure the data set is complete

Forensic procedural models focus on ensuring that all steps in a work process are considered, correctly executed and documented. This involves reviewing not just the data-driven aspects of the process, but also the origins of the data and the IT processes in the data-generating (source) systems. This is intended to ensure that the auditors are working with the complete, unmodified raw data. Data validation includes several steps for verifying that this is indeed the case, and a variety of exploratory and rules-based analytical methods have become established for the purpose. Was the data in the system under analysis entered by users, or was it copied into the system via interfaces or automated processes? If the latter, additional information in the form of trace data may have been lost or modified during the copying process – information that could be especially important for pattern-checking or pattern-recognising analytical procedures.
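As a minimal sketch of such validation steps, the following Python function checks a raw-data extract against a control total from the source system, looks for gaps in the document-number sequence, and profiles how records entered the system. The column names doc_number and entry_type are hypothetical placeholders.

```python
import pandas as pd

def validate_extract(df: pd.DataFrame, control_total: int) -> dict:
    """Basic completeness checks on a raw-data extract."""
    findings = {}

    # 1. Record count against a control total reported by the source system.
    findings["count_matches_control_total"] = len(df) == control_total

    # 2. Gap analysis: missing document numbers hint at an incomplete extract.
    numbers = pd.to_numeric(df["doc_number"], errors="coerce").dropna().astype(int)
    expected = set(range(numbers.min(), numbers.max() + 1))
    findings["missing_doc_numbers"] = sorted(expected - set(numbers))

    # 3. Origin profile: manual entries vs. records copied in via interfaces,
    #    where trace data may have been lost or modified.
    findings["records_by_entry_type"] = df["entry_type"].value_counts().to_dict()

    return findings
```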

Analyse known patterns of behaviour

Rules-based analysis can be divided into several different subtypes. Process analysis checks compliance with process rules – so for example, checks whether every order is accompanied by a purchase requisition. Business-rules analysis checks compliance with the dual-control or multiple-control principle, and with the segregation-of-duties principle. System-rules analysis focuses on how the IT system works, identifying missing technical references or operations involving input by maintenance programs. Additional rules are used for red-flag analysis (e.g. rarely used delivery addresses) and fraud analysis (e.g. critical changes to bank master records). All these analyses can be combined in what are called scoring or pattern analyses.
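As a minimal sketch of such a scoring analysis, the following Python function combines one business rule (dual control), one red flag (rarely used delivery address) and one fraud rule (recent change to a vendor's bank master record) into a per-record score. All column names, tables and thresholds are illustrative assumptions.

```python
import pandas as pd

def score_records(orders: pd.DataFrame, bank_changes: pd.DataFrame) -> pd.Series:
    """Combine several rule-based checks into a simple per-record score."""
    flags = pd.DataFrame(index=orders.index)

    # Business rule: dual control -- the creator must not also be the approver.
    flags["dual_control_violation"] = orders["created_by"] == orders["approved_by"]

    # Red flag: rarely used delivery address (here: fewer than 3 occurrences).
    addr_freq = orders["delivery_address"].map(
        orders["delivery_address"].value_counts())
    flags["rare_delivery_address"] = addr_freq < 3

    # Fraud rule: order for a vendor whose bank master record changed recently.
    recent = set(
        bank_changes.loc[bank_changes["days_since_change"] <= 30, "vendor_id"])
    flags["recent_bank_change"] = orders["vendor_id"].isin(recent)

    # Scoring: each violated rule adds one point; highest scores first.
    return flags.sum(axis=1).sort_values(ascending=False)
```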

Testing for the segregation of duties along the entire process chain has become well-established practice, and shows why a completeness audit is essential. For example, when auditing the procurement process, it is important to verify which users were involved at each procurement stage: who recorded the purchase requisition, the purchase order, the (optional) delivery notice, the invoice, the final payment? The various user IDs are used to check for violations of the segregation-of-duties principle. If some of the data was copied from process variants running on legacy or upstream systems, it is always possible that the copying process itself was based on embedded technical user IDs, making it impossible to detect segregation-of-duties conflicts.
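A segregation-of-duties check along the process chain might be sketched as follows: the user ID recorded at each procurement step is pivoted per purchase order and compared across steps, and the presence of embedded technical user IDs (the names here are hypothetical) is flagged as a warning that real conflicts may be masked.

```python
import pandas as pd

# One row per process event: columns "po_id", "step", "user_id", with step
# in {"requisition", "order", "delivery", "invoice", "payment"} (assumed).
def sod_conflicts(events: pd.DataFrame) -> pd.DataFrame:
    """Flag segregation-of-duties conflicts per purchase order."""
    users = events.pivot_table(index="po_id", columns="step",
                               values="user_id", aggfunc="first")
    conflicts = pd.DataFrame(index=users.index)

    # Same user records both the order and the invoice -> SoD violation.
    conflicts["order_vs_invoice"] = users["order"] == users["invoice"]
    conflicts["invoice_vs_payment"] = users["invoice"] == users["payment"]

    # Caveat from the text above: data copied in under an embedded technical
    # user ID shows the same ID at every step and masks real conflicts.
    TECHNICAL_IDS = {"BATCH_USER", "RFC_INTERFACE"}  # hypothetical names
    conflicts["technical_id_present"] = users.isin(TECHNICAL_IDS).any(axis=1)
    return conflicts
```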

Identify unknown data anomalies

Innovative AI-based methods – such as deep learning – could be a valuable way of boosting traditional hypothesis-based test procedures. This is particularly true as auditing requirements continue to evolve due to the ongoing digitisation of business processes. The paradigm underlying AI-based audit procedures is fundamentally different from traditional analytical auditing activities, and is best described as “learning rather than programming”. The focus is on learning procedures capable of recognising regularities (standard patterns) in datasets autonomously – without human supervision or input – and differentiating them from irregularities (anomalies). The findings of these AI-based audit procedures can subsequently be combined with, for example, the above-mentioned rules-based audit procedures.
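As a toy illustration of the "learning rather than programming" paradigm, the following PyTorch sketch trains a small autoencoder on numeric transaction features: the network learns to reconstruct the standard patterns in the data, and records with high reconstruction error stand out as anomalies. This is a generic sketch, not the specific architecture taught in the course; in practice the features would be scaled first and the error threshold calibrated against known-good periods.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Compress records to a small bottleneck and reconstruct them."""
    def __init__(self, n_features: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 8), nn.ReLU(),
                                     nn.Linear(8, 3))
        self.decoder = nn.Sequential(nn.Linear(3, 8), nn.ReLU(),
                                     nn.Linear(8, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_scores(x: torch.Tensor, epochs: int = 100) -> torch.Tensor:
    """Train on the full dataset; return per-record reconstruction error."""
    model = Autoencoder(x.shape[1])
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        optimiser.zero_grad()
        loss = nn.functional.mse_loss(model(x), x)
        loss.backward()
        optimiser.step()
    # Records the model reconstructs badly deviate from the learned
    # standard patterns and are candidates for manual review.
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)
```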

Bottom line

Analyses for auditing purposes should always be based on detailed raw data. If, for example, datasets or records are consolidated during copying processes, it is always possible that key data properties could go missing – properties that would otherwise play a vital role in establishing the facts. This can cause anomalies to be overlooked, even if the analyses are performed correctly using modern methods.

Not every data analysis must be defended before a Supervisory Board committee or court of law. But even so, audit results based on data analytics must meet high standards of reliability. Data validation and sampling are directly relevant to staff working in internal audit, risk management, ICS and compliance functions, as well as lawyers and external auditors. Our Certified Audit Data Scientist course covers practical examples of the latest technological advances and associated applications, such as visual analytics (continuous auditing) and deep learning (anomaly detection).

Marco Schreyer co-authored this blog post and lectures on Frankfurt School’s Certified Fraud Manager and Certified Audit Data Scientist courses.
