Audit has always been a support function for banks, serving as the last line of defence that keeps the organisation safe from external attacks and provides assurance to the business and stakeholders about the safety of the business from any malicious attempt to harm its data or assets. With virtually all companies turning into data goldmines, Audit is transforming into an IT function that ensures the ultimate defence of our prized data assets. Imagine Audit as the plumber / janitor who ensures there are no leaks and the supply pipes are safe from corrosion or any other issues that may have started to infest the system.
Many of us see our touch point with Audit as a drag: an unending demand for application logs, access matrices, process documents (loads of them) and sometimes data trails, seemingly for no reason. Development teams see certain BCP / BCM exercises as a huge overhead on the development process, and from there comes the resentment towards any department that enforces these processes or asks for evidence of compliance with them.
People can be motivated to comply with processes. Programs can be built to comply with certain norms. But with so much now getting automated, and the potential for machine learning and AI to churn through the repetitive processes, does that mean Audit teams will be irrelevant? Maybe, or maybe not.
People get ahead of themselves when they talk about AI. AI as a field has advanced more in the past two years than in the ten years before that, thanks to the change in computational trends and the contributions of the open-source community, which provides libraries that others can build on. This enables a lot of AI programs that rely on the default libraries available in the market.
This brings some tough questions to the fore. Yes, people are working hard to answer these questions and put them to rest, but once in a while they resurface. Some of these are:
- Are these libraries plagued with their own risks?
- Who assessed these libraries for the bias they bring with the data?
- What types of data sets are being run through these libraries?
- How do we manage inherent bias in the data, given that it could be a Garbage In, Garbage Out case?
These questions start as logical ones and stumble into the ethical fabric of the AI that we're pledging ourselves to use. For an Audit function, the exercise of inspecting a system for any sort of vulnerability gets even tougher. If the audit process misses the inherent risk introduced by the base model / algorithm used, then all the other holes we plug will not provide a full solution.
We've seen different models provide different levels of accuracy, which can be improved by training, re-training and supplying more and more data for the model to evolve and build its own connections among the data sets. But if multiple risk assessment systems deploy AI independently, wouldn't 80% accuracy in each system bring the overall accuracy down to 64%?
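The arithmetic behind that 64% can be sketched as follows. This is an illustrative assumption, not a description of any real audit tooling: it assumes the systems are chained in series (a case must pass both) and that their errors are independent, in which case per-system accuracies multiply.

```python
def combined_accuracy(*accuracies: float) -> float:
    """Probability that every system in a serial chain is correct,
    assuming each system's errors are independent of the others'."""
    result = 1.0
    for a in accuracies:
        result *= a
    return result

# Two independent systems at 80% each: the chance both are right
# on the same case compounds down to roughly 64%.
print(round(combined_accuracy(0.80, 0.80), 2))

# Add a third 80% system to the chain and it drops further, to about 51%.
print(round(combined_accuracy(0.80, 0.80, 0.80), 2))
```

In practice errors are rarely fully independent, so the true combined accuracy sits somewhere between this multiplicative worst case and the weakest single system, which is exactly why an overarching view of the whole chain matters.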
It is the Audit function's responsibility to have an overarching view of this complex ecosystem and its moving parts to keep things in perspective. As I mentioned, Audit's main function is to provide business assurance through independent assessment. I am sure that AI adoption in organisations will pose more headaches for Audit than for any other department.