An algorithm may decide your next compliance review

By Simon Doyle | November 14, 2017 | Last updated on January 23, 2024

If you don’t like machines deciding your next portfolio rebalance, wait until they decide your next compliance review.

Regulators’ use of data-based, machine-learning platforms to support audit decisions and compliance investigations is not far off. Indeed, it’s already begun.

IIROC, for instance, recently signed a contract with Nasdaq to use its new Nasdaq SMARTS software, a surveillance technology that uses cognitive computing to identify risks and abnormal events among dealers. (IIROC currently uses an older version without this functionality.)

The newest version of the software is capable of sifting through emails, instant messages, social media, phone calls and other unstructured data, from which it builds personal profiles and provides a complete view of a trader’s communications both within the organization and externally. The aim is to detect wrongdoing such as insider trading, market manipulation or collusion.

The program can use “unsupervised learning,” meaning it is capable of discovering traders’ patterns, behaviours and relationships, all without human instruction. While IIROC will use the new version of the Nasdaq platform starting in April 2018, the regulator says it will be taking a very “measured approach” toward the use of such artificial intelligence (AI).

“Although we’ve always had the data, how we approach that data will be different,” says Victoria Pinnington, senior vice-president for market regulation at IIROC in Toronto. “What we’re going to start looking at is adding in more behavioural analysis, and looking at who is behaving differently [or] who is an outlier, based on a number of measures.”
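The behavioural outlier analysis Pinnington describes can be illustrated with a minimal sketch: compare each trader's measure against the peer group and flag anyone far from the norm. Everything below is invented for illustration (the metric, the trader IDs, the threshold); a platform like Nasdaq SMARTS uses far richer, multi-dimensional models.

```python
import statistics

def find_outliers(metrics, threshold=2.0):
    """Flag traders whose behavioural measure deviates sharply from peers.

    metrics: dict mapping trader id -> one behavioural measure
    (e.g., a daily order-to-trade ratio). Returns the ids whose
    z-score exceeds the threshold in absolute value.
    """
    values = list(metrics.values())
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    if stdev == 0:
        return []
    return [tid for tid, v in metrics.items()
            if abs((v - mean) / stdev) > threshold]

# Hypothetical order-to-trade ratios for nine traders; T9 is the outlier
ratios = {"T1": 4.1, "T2": 3.9, "T3": 4.3, "T4": 4.0, "T5": 4.2,
          "T6": 3.8, "T7": 4.1, "T8": 4.0, "T9": 21.7}
print(find_outliers(ratios))  # → ['T9']
```

The point of the sketch is the unsupervised shape of the analysis: no one tells the system what "bad" looks like; it simply surfaces whoever behaves differently from the group, and a human analyst decides what the deviation means.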

Compliance reviews

IIROC, like many of its peers, is going where technology is going. While securities regulators continue to rely significantly on tips and risk assessment questionnaires, they are increasingly looking at new ways to identify risks.

The Alberta Securities Commission says it has developed a risk model using “qualitative and quantitative” data inputs for its compliance review process. The British Columbia Securities Commission, with assistance from external consultants, has created a data-based, predictive risk model that helps the regulator identify firms with higher probabilities of compliance issues. While BCSC would not disclose its full criteria, its examinations risk assessment model uses criteria such as business model risk, resource risk and economic risk.

IIROC currently runs risk models on all registered firms at the beginning of each fiscal year. These include analyses of trading conduct, business conduct, and financial and operations compliance. The risk models produce comparative scores for firms and, based on those results, IIROC designs a compliance exam schedule for the year. Nasdaq SMARTS is meant to dig deeper into the data, allowing for more specific types of alerts and red flags, perhaps related to anomalies in a dealer’s behaviour.
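As a toy illustration of how comparative scores might drive an exam schedule, the sketch below combines per-category risk scores with weights and sorts firms from riskiest to least risky. The categories echo those named above, but the scores, weights, and firm names are all hypothetical; IIROC's actual model is not public.

```python
def risk_score(firm, weights):
    """Combine per-category compliance scores (0-10, higher = riskier)
    into one weighted score. Categories and weights are illustrative."""
    return sum(firm[cat] * w for cat, w in weights.items())

weights = {"trading_conduct": 0.4, "business_conduct": 0.3, "finops": 0.3}

firms = {
    "Firm A": {"trading_conduct": 2, "business_conduct": 3, "finops": 1},
    "Firm B": {"trading_conduct": 8, "business_conduct": 6, "finops": 7},
    "Firm C": {"trading_conduct": 5, "business_conduct": 4, "finops": 6},
}

# Riskiest firms are examined first in the year's schedule
schedule = sorted(firms, key=lambda f: risk_score(firms[f], weights),
                  reverse=True)
print(schedule)  # → ['Firm B', 'Firm C', 'Firm A']
```

A model like this only ranks firms relative to one another; the regulator still decides where on the list the targeted or moved-up audits begin.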

“If we start to see a firm’s trading starting to change, or if there’s new behaviours that may require us to ask questions, we may make a decision to move up an audit or do a targeted audit based on that,” Pinnington says.

More data

Faced with more complex regulation and the availability of a growing amount of data, resource-strapped regulators are opting for more advanced tools. Regulators around the globe are incorporating additional inputs into their risk analyses — using, for instance, news and social media posts to deepen profiles and feed behavioural analysis.

And where the regulators go, firms will follow. Financial services firms are also turning to big-data platforms, or “regtech,” for more sophisticated monitoring tools.

Micro Focus, a U.K. technology firm that recently acquired HPE’s software assets, offers real-time, preventative compliance software called Investigative Analytics. The customizable, algorithmic software examines communications (e.g., phone calls and messages) between employees internally, as well as between employees and customers, to develop patterns and profiles, and identify risks.

Results are offered via a dashboard for managers or the firm’s compliance department, raising red or yellow alerts where the software sees something suspicious. Communications may be cross-referenced with other data, depending on how the firm wants its rules customized.

“It’s searching for patterns that would indicate some sort of malicious behaviour, or violations of business conduct, or any type of concessions given in emails. It is searching for word patterns in emails, in any type of chat message. It would then [use that to] start to raise flags,” says Markus Ogurek, Palo Alto-based global financial services and strategic alliances lead at HPE. “We are looking, real time, at any type of communication, to simply make sure we can probably prevent a situation from escalating, or [prevent] a type of agreement or a wrongdoing from happening in the first place.”
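The word-pattern matching Ogurek describes can be approximated, very loosely, with keyword rules over message text. The rule names and phrases below are invented for illustration; a production rule set would be far richer, customized per firm, and combined with the behavioural and statistical models described above.

```python
import re

# Illustrative phrase patterns; real rule sets are customized per firm
RULES = {
    "possible collusion": re.compile(
        r"\b(keep (this|it) between us|off the record)\b", re.I),
    "possible concession": re.compile(
        r"\b(waive the fee|make an exception)\b", re.I),
}

def flag_message(text):
    """Return the names of every rule whose pattern appears in a message."""
    return [name for name, pattern in RULES.items() if pattern.search(text)]

msg = "Let's keep this between us -- I can waive the fee on that trade."
print(flag_message(msg))  # → ['possible collusion', 'possible concession']
```

In a real deployment these flags would feed the red/yellow alert dashboard described earlier, with cross-referencing against other data before anything reaches a compliance officer.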

The Investigative Analytics software is designed to get smarter from each analysis it performs. It can also learn from another of the company’s products, eDiscovery, which helps financial services firms generate a response when faced with a regulatory investigation or legal action.

eDiscovery sifts through archived data (e.g., phone calls, emails, chat systems and collaborative work platforms) for internal evidence of what took place. The results might indicate whether the firm should settle or object to the allegation.

Total surveillance?

For advisors and their supervisory programs, is total surveillance the new normal? Intelligent surveillance systems call to mind Reginald Whitaker’s treatise The End of Privacy, penned almost two decades ago. You already knew there was little margin for compliance risk. Now there is less. Machines will let your compliance division know if you’re close to the margins, because they will know everything.

For regulators, the challenge is ensuring the machines don’t implicate the innocent. Pinnington says Nasdaq’s new AI technology has mechanisms that help identify false positives, allowing analysts to ignore them and focus on real risks. “You don’t want to extend your net so broadly that you’re capturing innocence and innocent behaviour, but broadly enough that you’re capturing the bad behaviour,” she says.

In a total surveillance scenario, there arises the question of what happens when both the regulators and the regulated access similar data revealing similar compliance alerts. Would it be unfair if financial services firms possessed a compliance dashboard like the regulator’s? Or, is this a more sophisticated preventative supervision model, with superior investor protection?

Ogurek acknowledges that regulators worldwide use Micro Focus products, though each has its own customized, private rule set. What regulators are watching for, and what kinds of alerts they receive, does not necessarily map directly onto their regulations. That means a large bank, for example, is unlikely to see the exact same dashboard and alerts as its regulator.

Pinnington says IIROC doesn’t specify how firms should do their monitoring and supervision; firms must decide based on their own risk tolerance and budgets. She suggests, however, that she would not be fussed if firms adopted the same or a similar alerting system as the regulator’s. “All of us play a role in protecting the markets,” Pinnington says. “If we’re using the same system, that’s great.”

by Simon Doyle, an Ottawa-based financial writer.
