Euro Security Watch with Mathew J. Schwartz


Cyber Insurer Sees Remote Access, Cloud Databases Under Fire

Reminder: Real-World Attacks Often Focus on Small Subset of Known Vulnerabilities

Criminals lately have been prioritizing two types of attacks: exploiting Remote Desktop Protocol and penetrating cloud databases.


So warns cyber insurer Coalition, based on evidence collected via a year's worth of underwriting and claims data, scans of billions of IP addresses and the global network of honeypots that it runs.

Security experts have long warned that criminals, and especially initial access brokers and ransomware groups, have a predilection for exploiting RDP.

Despite repeated warnings that organizations using RDP need to lock it down, Coalition reports that attackers' top scanning activity remains searching for open or poorly secured RDP endpoints.
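Defenders can audit their own perimeter for the same exposure attackers scan for with a simple TCP probe of the RDP port. Below is a minimal Python sketch; the addresses are hypothetical placeholders from a reserved documentation range, and a real audit would run against your own asset inventory:

```python
import socket

def port_open(host: str, port: int = 3389, timeout: float = 2.0) -> bool:
    """Return True if the host accepts TCP connections on the given port.

    Port 3389 is RDP's default; a successful connect means the service is
    reachable, not necessarily vulnerable -- but reachable is what the
    mass scanners are looking for.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical internal addresses to audit (TEST-NET range, for illustration).
for host in ["192.0.2.10", "192.0.2.11"]:
    status = "OPEN" if port_open(host, timeout=1.0) else "closed/filtered"
    print(f"{host}:3389 -> {status}")
```

Anything reporting OPEN from the public internet deserves immediate attention: put it behind a VPN, restrict source IPs or disable it.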

Poorly secured big data repositories also remain a top target of extortionists. "Elasticsearch and MongoDB databases have a high rate of compromise, with signals showing that a large number have been captured by ransomware attacks," Coalition reports.
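One quick way to check whether such a database is answering the internet without credentials is to request its banner anonymously, as attackers' scanners do. A hedged sketch, assuming Elasticsearch's default HTTP port 9200 and using a hypothetical address; only ever point this at systems you own:

```python
import json
import urllib.error
import urllib.request

def es_unauthenticated(host: str, port: int = 9200, timeout: float = 2.0):
    """Return the cluster banner if an Elasticsearch-style endpoint answers
    without credentials, else None.

    A 401/403 response (some auth layer present), a connection failure or
    a non-JSON reply all come back as None.
    """
    url = f"http://{host}:{port}/"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.loads(resp.read().decode("utf-8", "replace"))
    except (urllib.error.URLError, ValueError, OSError):
        return None

# Hypothetical address (TEST-NET range) -- substitute your own host.
banner = es_unauthenticated("198.51.100.7")
print("EXPOSED" if banner else "no unauthenticated access detected")
```

If this returns cluster details with no credentials supplied, anyone on the internet can read - and, typically, delete or encrypt - that data.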

In the bigger picture, Tiago Henriques, Coalition's vice president of research, says the findings are a reminder that despite thousands of new vulnerabilities coming to light every month, attackers remain most likely to target only a small set of them. Again, when attackers find something that works, they often prefer to spend their time putting it to work, rather than coming up with brand-new tactics. This goes for criminals as well as nation-state attackers, and of course sometimes one moonlights as the other.

What to Fix First

Last year, the Five Eyes intelligence alliance - comprising Australia, Canada, New Zealand, the United Kingdom and the United States - released a joint advisory detailing the 15 most routinely exploited vulnerabilities they'd seen in real-world attacks over the prior year. The message to organizations: Patch these flaws first - if you haven't already done so - since they're most likely to get targeted by attackers.

This is useful information, but of course it's just one source of intelligence, and every organization needs to draw on multiple sources to help identify which systems to patch first. As Britain's National Cyber Security Centre says: "Patching remains the single most important thing you can do to secure your technology, and is why applying patches is often described as 'doing the basics.'"

Fresh Flaw Overload

But the bug-squashing discipline - better known as vulnerability management - faces numerous hurdles.

In part, that's because there are just so many fresh bugs to keep patching. Last year, the U.S. National Vulnerability Database, run by the National Institute of Standards and Technology, listed 25,059 new Common Vulnerabilities and Exposures, or CVEs.

About 10% of all vulnerabilities pose a critical risk, scoring nine or above on the 10-point Common Vulnerability Scoring System, according to CVE Details.
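The "critical" threshold cited above corresponds to the CVSS v3.x qualitative bands (9.0 and up is critical; 7.0 to 8.9 is high; 4.0 to 6.9 is medium). A toy triage sketch using those bands, with invented CVE IDs and scores:

```python
# Hypothetical CVE records -- IDs and scores are made up for illustration.
CVES = [
    {"id": "CVE-2023-0001", "cvss": 9.8},
    {"id": "CVE-2023-0002", "cvss": 7.5},
    {"id": "CVE-2023-0003", "cvss": 5.3},
    {"id": "CVE-2023-0004", "cvss": 9.1},
]

def severity(score: float) -> str:
    """Map a CVSS v3.x base score to its qualitative severity band."""
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    return "low" if score > 0 else "none"

# The subset that scores critical -- the roughly 10% to triage first.
critical = [c["id"] for c in CVES if severity(c["cvss"]) == "critical"]
print(critical)  # -> ['CVE-2023-0001', 'CVE-2023-0004']
```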

Based on CVE trends from the past decade, Coalition predicts that this year, nearly 2,000 new CVEs will appear monthly, "including 270 high-severity and 155 critical-severity vulnerabilities." That would be a 13% increase from the monthly volumes seen in 2022.

That's a lot of bugs for IT teams to prioritize - and a lot of patches to then test and roll out to production systems.

To handle this systematically, research firm Gartner recommends that organizations pursue a continuous threat exposure management - aka CTEM - program, which reconciles an inventory of the organization's assets with intelligence about vulnerabilities, weighted by the organization's region and industry.

Handled correctly, such a program gives organizations a robust way of knowing which vulnerabilities to patch - or otherwise mitigate - first, to best minimize their exposure.
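At its core, that reconciliation step is a join between what you actually run and what attackers are actually exploiting. A minimal, hypothetical sketch - every hostname, product and CVE ID below is invented for illustration:

```python
# Hypothetical asset inventory: what software runs where.
ASSETS = [
    {"host": "web-01", "software": "nginx", "version": "1.18"},
    {"host": "db-01", "software": "mongodb", "version": "4.2"},
]

# Hypothetical vulnerability intelligence feed.
VULN_INTEL = [
    {"cve": "CVE-2023-9999", "software": "mongodb", "exploited_in_wild": True},
    {"cve": "CVE-2023-8888", "software": "apache", "exploited_in_wild": True},
]

def exposures(assets, intel):
    """Yield (host, cve) pairs where an actively exploited vulnerability
    affects software we actually run -- the intersection a CTEM-style
    program prioritizes."""
    by_software = {}
    for v in intel:
        by_software.setdefault(v["software"], []).append(v)
    for a in assets:
        for v in by_software.get(a["software"], []):
            if v["exploited_in_wild"]:
                yield a["host"], v["cve"]

print(list(exposures(ASSETS, VULN_INTEL)))  # -> [('db-01', 'CVE-2023-9999')]
```

Note that the Apache flaw drops out despite being exploited in the wild: nothing in the inventory runs it, which is exactly the noise reduction the reconciliation buys you. A real implementation would also match version ranges, not just product names.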

Maintaining a rapid patching cadence can have a massive payoff in terms of risk reduction. Based on its review of vulnerabilities that came to light last year, Coalition found that when a newly discovered flaw got exploited, a majority of the time that happened less than 30 days after details about it became public. If a flaw wasn't exploited within 90 days of becoming public, chances are it never got targeted.
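Those windows suggest a simple triage rule of thumb. The sketch below is one possible reading of Coalition's 30- and 90-day figures, with invented CVE IDs and a fixed "today" for reproducibility:

```python
from datetime import date

# Fixed reference date so the example is reproducible.
TODAY = date(2023, 3, 1)

# Hypothetical flaws and their (invented) public-disclosure dates.
DISCLOSURES = {
    "CVE-2023-1111": date(2023, 2, 20),  # 9 days old
    "CVE-2023-2222": date(2023, 1, 10),  # 50 days old
    "CVE-2022-3333": date(2022, 10, 1),  # 151 days old
}

def urgency(disclosed: date, today: date = TODAY) -> str:
    """Bucket a flaw by the exploitation windows in Coalition's data:
    most exploitation happens within 30 days of disclosure, and flaws
    untouched after 90 days rarely get targeted."""
    age = (today - disclosed).days
    if age <= 30:
        return "patch now"
    if age <= 90:
        return "patch soon"
    return "monitor"

for cve, disclosed in DISCLOSURES.items():
    print(cve, "->", urgency(disclosed))
```

The point isn't that 91-day-old flaws are safe - an unpatched system stays exploitable indefinitely - but that the first 30 days after disclosure are where rapid patching buys the most risk reduction.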

So rather than stressing about the large - and growing - number of new CVEs, organizations need to stay focused on what to do about them. "When the growing number of vulnerabilities becomes overwhelming, organizations need to focus on the small number of vulnerabilities that are actually being exploited in the wild," Coalition's Henriques says.



About the Author

Mathew J. Schwartz


Executive Editor, DataBreachToday & Europe, ISMG

Schwartz is an award-winning journalist with two decades of experience in magazines, newspapers and electronic media. He has covered the information security and privacy sector throughout his career. Before joining Information Security Media Group in 2014, where he now serves as executive editor of DataBreachToday and of European news coverage, Schwartz was the information security beat reporter for InformationWeek and a frequent contributor to DarkReading, among other publications. He lives in Scotland.




