The chief of Australian corporate regulator ASIC has warned of the security and privacy risks of placing “blind trust” in new financial and regulatory technologies such as artificial intelligence (AI) and data analytics.
Speaking at the ASIC annual forum in Sydney today, Greg Medcraft highlighted changing customer expectations in the “fourth industrial revolution”.
While financial and regulatory technologies have the potential to reduce costs, empower customers, and improve the customer experience, he warned against blind trust of the algorithms underpinning artificial intelligence and automation products.
“As we see the potential of automated financial services, and AI playing a role, we need to look to the algorithms behind these services,” Medcraft said.
“These algorithms will need to be transparent so their decisions can be challenged, and not just be considered opaque ‘black boxes’.”
It’s an issue that has hit home for the federal government in recent months as it grapples with the fallout of its controversial Centrelink data matching program, which is underpinned by an algorithm that appears to generate a high rate of errors.
Medcraft said that using technology to give customers greater control over services and their data, without an appropriate understanding of the risks involved, could have negative results.
“There will be new channels through which fraud may be perpetrated both within and across borders,” he said.
“The growing use of technology to capture, store and analyse data increases the risks of that data being misused, and the systems used to capture and store the data being subject to cyber attacks.”
Privacy is also an issue, he said – data might be stored, used, and shared without a customer’s consent.
As the finance industry transitions to a state where information is transmitted in real-time via mobile and digital channels, thought needs to be given to its underlying foundation, Medcraft said.
“We are entering a period where responsible leadership has never been more critical, especially in the leaders who drive the innovative strategies of the future,” he said.
“Our efforts must be focused on fostering both inclusion, and trust and confidence. Our future success requires this foundation of trust, and that trust will not be given automatically – it must be genuinely earned.”
The challenge for ASIC is to create an environment that encourages innovation without compromising consumer and investor trust and confidence, or the operation of markets, he said.
It has already opened a regulatory sandbox that gives fintechs the opportunity to test new technologies without having to apply for a financial services licence.
ASIC dips into machine learning, analytics
However, whilst warning of the potential risks with new technologies, Medcraft revealed ASIC is implementing machine learning and data analytics in its own operations.
It is embedding analytics into its surveillance, investigations, and enforcement teams, including using machine learning to identify misleading marketing, algorithms to spot suspicious trading, and social media analytics to monitor certain hotspots.
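By way of illustration, one of the simplest building blocks behind spotting suspicious trading is statistical outlier detection. The sketch below is a toy example, not ASIC's actual software; the data, the z-score approach, and the threshold are all hypothetical.

```python
# Toy illustration of flagging anomalous trading activity with a
# z-score check: days whose volume deviates sharply from the mean
# are flagged for a human analyst to review.
from statistics import mean, stdev

def flag_suspicious(volumes, threshold=2.5):
    """Return indices of volumes more than `threshold` sample
    standard deviations away from the mean."""
    mu = mean(volumes)
    sigma = stdev(volumes)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(volumes)
            if abs(v - mu) / sigma > threshold]

# Ten ordinary trading days, then one day with an extreme spike.
daily_volumes = [100, 98, 103, 101, 99, 102, 100, 97, 104, 100, 500]
print(flag_suspicious(daily_volumes))  # → [10]
```

Real market-surveillance systems are far more sophisticated, but the principle is the same: model normal behaviour, then surface the deviations.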
“We are tailoring machine learning software for use in investigations, which allows visualisation pattern matching across our evidence database, using algorithms for both structured and unstructured data. It allows us to visually map relationships of persons and entities and create time chronologies,” he said.
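The relationship mapping and chronologies Medcraft describes can be pictured with a small sketch. This is a hypothetical toy, not ASIC's evidence-database tooling; the record format and field names are invented for illustration.

```python
# Toy sketch: from a list of evidence records, build (1) a map of
# which entities each person is linked to, and (2) a chronology of
# the records ordered by date.
from collections import defaultdict

records = [
    {"date": "2017-02-01", "person": "A. Smith", "entity": "Acme Pty Ltd"},
    {"date": "2017-01-15", "person": "A. Smith", "entity": "Beta Corp"},
    {"date": "2017-03-10", "person": "B. Jones", "entity": "Acme Pty Ltd"},
]

# Relationship map: person -> set of linked entities.
links = defaultdict(set)
for r in records:
    links[r["person"]].add(r["entity"])

# Chronology: the same records sorted into date order.
timeline = sorted(records, key=lambda r: r["date"])

print(sorted(links["A. Smith"]))      # → ['Acme Pty Ltd', 'Beta Corp']
print([r["date"] for r in timeline])  # → dates in ascending order
```

In practice such links form a graph that investigators explore visually, but the underlying structure is this simple: entities as nodes, shared records as edges, and timestamps ordering the story.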
ASIC is also piloting Nuix technology to better identify the most relevant documents for court purposes.
The agency is currently recruiting data analytics and behavioural insights specialists, and has developed a Dynamics-based data repository managed by a new chief data office and data governance council.
It is similarly creating “data labs” that will provide both ASIC and, potentially, regtech companies with access to “datasets, open source tools and an environment to test new technologies”.