Banking professionals are good at crunching numbers. It’s in their DNA—a big part of the entire industry is based on measuring and managing risk. They make macro predictions on the global economy, and decide on the smallest small business loans by running numbers through various algorithms. It’s not a perfect science by any means, but it is science.
Many of the new technologies that come into the infrastructure are supposed to help with those equations, and they surely do. But when it comes time to make predictions about the security issues related to those technologies, it’s a whole other story.
To be blunt, banks don’t like to make predictions on cyber-crime. They don’t say when they expect the next hack, which department it will affect, and how far it will go. They don’t say because they don’t know.
The issue is taking on new relevance because Bank of America just announced that it has developed a set of metrics for this area. It’s applying sophisticated data analytics to evaluate cyber-crime prevention, detection and mitigation. Again, this isn’t a perfect science, but there’s more science than before.
“Unlike most things in banking, where you model history to understand what is likely to happen in the future, that’s not true of cybersecurity,” Catherine Bessant, chief operations and technology officer at BofA, recently told the Wall Street Journal. Given how fluid the field is, with new threat matrices emerging on a regular basis, “You’re only as good as your last minute of safety.”
This is a huge issue by any measure. BofA spends some $3 billion a year on IT, and the company’s tech staff is in excess of 100,000. The other major financial services institutions have similar outlays, and their smaller counterparts are equally diligent. No corporation takes the issue lightly, and yet the hacks continue apace: in 2014, fellow behemoth JP Morgan Chase got hit with a data breach that compromised more than 80 million accounts, most of them personal.
In BofA’s case, one of the metrics is based on tracking how frequently system scans are performed, and another counts the problems identified during those scans. Correlations between those metrics are also factored in. Then there’s the question of ‘dwell time,’ which measures the hours and minutes needed to locate, isolate and remove the source of the problem.
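To make the idea concrete, here is a minimal sketch of how metrics like these might be computed. The function names, fields, and data are hypothetical illustrations, not BofA’s actual systems or schema.

```python
from datetime import datetime, timedelta

def dwell_time(detected: datetime, resolved: datetime) -> timedelta:
    """Dwell time: elapsed time to locate, isolate, and remove an issue."""
    return resolved - detected

def scan_frequency(scan_times: list, window_days: int = 30) -> float:
    """Average scans per day over a trailing window ending at the latest scan."""
    cutoff = max(scan_times) - timedelta(days=window_days)
    recent = [t for t in scan_times if t >= cutoff]
    return len(recent) / window_days

# Illustrative incident: issue detected in the morning, resolved that afternoon
found = datetime(2016, 3, 1, 9, 0)
fixed = datetime(2016, 3, 1, 14, 30)
print(dwell_time(found, fixed))  # 5:30:00
```

A real program would pull these timestamps from scan and incident logs; tracking the two metrics side by side is what makes the correlations mentioned above possible.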
For the record, IT security metrics have always been a thorny topic—even high-ranking executives in the discipline are loath to quantify their fears of the next hack. The metrics can involve arcane mathematics and standard deviations based on probability, with details that make any reader’s eyes glaze over. However, applying economic principles to security investments isn’t just reasonable, it’s vital.
CSO recently ran a lengthy profile of different ways in which metrics can be applied to enterprise computing at virtually all levels, and the options range from baseline defense coverage (anti-virus, anti-spyware, firewall, etc.) and patch latency to password strength and legitimate e-mail traffic analysis. While it’s not simple, it doesn’t take a Nobel Prize-winning mathematician to do it.
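One of the baseline measures mentioned above, patch latency, really is simple arithmetic: the delay between a patch’s release and its deployment. The sketch below uses made-up dates purely for illustration.

```python
from datetime import date
from statistics import mean

# Hypothetical patch records: release date vs. deployment date
patches = [
    {"released": date(2016, 1, 12), "deployed": date(2016, 1, 20)},
    {"released": date(2016, 2, 9),  "deployed": date(2016, 2, 11)},
    {"released": date(2016, 3, 8),  "deployed": date(2016, 3, 25)},
]

# Latency in days for each patch, then the average across all of them
latencies = [(p["deployed"] - p["released"]).days for p in patches]
print(latencies)         # [8, 2, 17]
print(mean(latencies))   # 9
```

Nothing here requires a Nobel Prize-winning mathematician; the hard part in practice is collecting clean data, not the calculation.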
Many institutions presumably have their own metrics and don’t publicize the fact, and that’s understandable. No one expects these companies to tell the world their weaknesses, or what kind of trouble they expect, and when.
However, transparency is a key concept in modern technology. Cyber-crime is a reality of modern life, and the fact that certain financial services providers are confident enough in their own analytics to make quantifiable predictions regarding vulnerabilities in the infrastructure and even future hacks is a sign of strength, not an invitation to disaster. BofA deserves credit for even a top-line discussion of its defense strategies, and more conglomerates should follow suit.