Feature Some vulnerabilities remain undiscovered for the longest time. The 12-year-old Dell SupportAssist remote code execution (RCE) flaw – finally unearthed earlier this year – is one example.
Others, however, have not only been long since reported and had patches released, but continue to pose a threat to enterprises. A joint advisory from the National Cyber Security Centre (NCSC) and the US Cybersecurity and Infrastructure Security Agency (CISA), published in late July, listed the top 30 publicly known vulnerabilities that are routinely being exploited by threat actors. Many of these are a good few years old, including one Microsoft Office RCE that was patched in 2017 but had been around since the year 2000.
Eoin Keary, CEO and founder of Edgescan, told The Register that the oldest common vulnerability discovered in its latest quarterly vulnerability scans report (CVE-1999-0517, impacting Simple Network Management Protocol) dated back to 1999. Which raises the question, why are threat actors being allowed to party like it’s, um… 1999?
These elderly vulnerabilities still pack quite a punch
Before we look at the why, let’s explore some of the what: the old vulnerabilities still being used in very real-world enterprise attacks to this day. Whether botnet-driven or reliant upon more hands-on reconnaissance, old vulnerabilities are commonplace in the initial-access armoury of threat actors, not least ransomware groups.
“We have seen several SSL-VPN vulnerabilities in Citrix (CVE-2019-19781), Fortinet (CVE-2018-13379) and Pulse Secure (CVE-2019-11510) used by threat actors over the last few years in ransomware attacks, as well as by nation-state groups,” Satnam Narang, a staff research engineer at Tenable, tells us.
The four-digit year field in a CVE identifier typically, but not always, reveals the year the flaw was disclosed to and/or recognized by the affected vendor. The Microsoft Equation Editor memory corruption vulnerability (CVE-2017-11882) mentioned earlier, patched since 2017, is “used especially in attacks on the healthcare industry” according to Topher Tebow, a cybersecurity analyst with Acronis. Milad Aslaner, senior director of cyber-defence strategy at SentinelOne, says it is also “frequently used by state-sponsored threat actors from China, Russia, North Korea and Iran.”
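As a quick illustration of that numbering scheme, the year field can be pulled straight out of the identifier. The function name and regex below are our own sketch, not part of any official CVE tooling, and note the caveat in the comment: the year in the ID is not necessarily the age of the vulnerable code.

```python
import re

def cve_year(cve_id):
    """Extract the year field from a CVE identifier such as 'CVE-2017-11882'.

    Caveat: this is roughly the year the ID was assigned or the flaw was
    disclosed, not necessarily when the vulnerable code shipped --
    CVE-2017-11882 affected a component dating back to the year 2000.
    """
    match = re.fullmatch(r"CVE-(\d{4})-(\d{4,})", cve_id.strip(), re.IGNORECASE)
    return int(match.group(1)) if match else None

print(cve_year("CVE-2017-11882"))  # 2017
print(cve_year("CVE-1999-0517"))   # 1999
```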
Then there’s the aptly named EternalBlue, an exploit originally developed by the National Security Agency and used to devastating effect in the 2017 WannaCry ransomware attacks. You’d think that would have been well and truly laid to rest as a threat by now, wouldn’t you? But no.
“We saw from the leak of the Conti gang’s technical manuals earlier this month that they still regularly exploit EternalBlue, despite Microsoft releasing a patch in 2017,” says George Glass, head of threat intelligence at Redscan. And, for that matter, despite the incredibly high-profile nature of the WannaCry attack.
Then there’s the critical 2019 Citrix vulnerability (CVE-2019-19781) that Don Smith, senior director of the counter threat unit cyber intelligence cell at Secureworks, tells us has been used in “several different incident response engagements” across this last year. Threat actors have used it to deliver “a range of malware, including web shells and cryptocurrency miners”.
Patch management is a moving target
Patch management is always a moving target, Glass says. And while a vulnerability may have been identified and a fix made available back in 2018, it’s far too simplistic to argue that every enterprise should therefore have patched it by now. “Identifying which vulnerabilities to prioritise is a perennial challenge in IT security, especially as the volume of CVEs only continues to grow,” Glass states.
Indeed, Redscan’s own analysis of the vulnerabilities recorded by NIST in its National Vulnerability Database last year revealed that 57 per cent were rated as high or critical severity. That is the highest recorded figure for any year to date. Inevitably, this means organisations must “prioritise patching those vulnerabilities with the highest potential impact, and which are readily being exploited in the wild,” Glass says. “However, both variables can and will change over time.
“So, a low-risk, low-impact vulnerability in 2018 may become a high-risk one this year. Or it could become part of an exploit chain, including other lower-level vulnerabilities, that presents a much more impactful problem.”
The complexity of patch management is highlighted further by the alarming statistic that NIST’s National Vulnerability Database expands by around 1,500 CVEs every month.
“Just triaging new vulnerabilities is a mammoth task, whether that’s being done manually or via scanning,” says Charl van der Walt, head of security research with Orange Cyberdefense. Now throw in the risk of applying untested patches, the determination of system ownership, accurate inventory maintenance, budget and licence constraints and the sheer logistical challenges of deploying diverse patches across an enterprise.
“The result of all this is what you see,” Van der Walt says. “Very few teams consistently get it right.” Not least, he adds, as “vulnerability and disclosure timing philosophies vary across vendors, leaving the security team with no opportunity to plan or structure their efforts, and the threat and potential impact associated with a mitigation are frequently difficult to articulate, leading to customers deferring necessary actions or inappropriately accepting risk”.
A situation 20 years in the making
The current situation enterprises find themselves in has been 20 years in the making, says Bob Rudis, chief security data scientist at Rapid7, adding that “it is going to take many years to dig out from under it.”
This is partly because organisations are generally either in the business of crunching data or building widgets, and tend to focus on said processes instead of dedicating time, personnel and finances towards discovery and mitigation of system, network and software weaknesses.
“For quite a while, this focus on the core business processes worked pretty well,” Rudis continues, suggesting that executives have had their confirmation-bias dopamine fix reinforced year after year by the absence of downtime or breaches.
“Organisations also try to keep capital investments (computer systems) going for as long as possible with as little interaction (updates) as possible,” he says.
“Not upgrading hardware, operating systems, and/or software for years makes it difficult to just patch, especially when there are no systematic processes in place to perform said actions without causing harm to the business processes themselves.”
The legacy problem reaches much deeper than just the vulnerabilities themselves, it would seem.
The small matter of the constant legacy threat
Legacy tools are a constant threat in the security landscape, according to Busra Demir, a senior solutions architect at HackerOne. “Not every company can be like Microsoft and afford to support a product for a decade after release,” Demir says. But every vendor “should set out criteria for a support lifetime at the outset”.
Apple would be a good example, setting a five-year cap on supporting products but still patching outside that window if the exploit impact is significant enough. “In an enterprise environment, we have to take in another consideration: the cost of evolution,” Demir continues. Legacy products are often king, especially in certain sectors such as manufacturing. The cost of replacing a single management system could equate to upgrading an entire plant. “All of a sudden, a hundred thousand dollar software/hardware package upgrade could be in the tens or hundreds of millions of dollars,” Demir explains.
While some software vendors provide the source code to companies to allow them to self-patch, or do so by way of a third-party vendor, ultimately “it’s up to the IT and security teams to figure out how to fence off and protect these fragile legacy systems,” Demir concludes. “Otherwise, they are an open door to anyone who stumbles on the right key.”
Mitigating the mitigation mess
According to Rudis, at a very minimum, organisations need to have a vulnerability triage process and a patch cadence (emergency, 7-day, 30-day, 60-day, 90-day) plan in place along with a regularly updated inventory of systems and software. That’s the starting point to mitigate the vulnerability mitigation mess.
“They should monitor vulnerability disclosures from vendors and dedicated forums such as attackerkb.com and be prepared to evaluate and categorise vulnerabilities as they come in,” Rudis suggests. Patching should be prioritised based on system/asset risk analysis and there should be processes in place to validate that the mitigations remain in place.
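As a rough sketch of what that triage step might look like in practice – the CVSS thresholds, function name and input flags below are illustrative assumptions on our part, not an industry standard – each incoming vulnerability could be mapped to one of the patch-cadence buckets Rudis describes:

```python
# Illustrative triage sketch: map a vulnerability to a patch-cadence bucket
# (emergency, 7-day, 30-day, 60-day, 90-day). The thresholds and the
# 'exploited in the wild' / 'internet-facing' inputs are assumptions
# for the example, not a recognised standard.

def patch_cadence(cvss, exploited_in_wild, internet_facing):
    if exploited_in_wild and internet_facing:
        return "emergency"
    if cvss >= 9.0:
        return "7-day"
    if cvss >= 7.0:
        return "30-day"
    if cvss >= 4.0:
        return "60-day"
    return "90-day"

print(patch_cadence(9.8, exploited_in_wild=True, internet_facing=True))    # emergency
print(patch_cadence(7.5, exploited_in_wild=False, internet_facing=False))  # 30-day
```

The point of such a scheme is not the exact numbers but that the decision is made consistently and can be audited later against the asset inventory.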
All of which, Rudis helpfully tells The Register, “is also a seriously unreasonable ask for any organisation that hasn’t been performing those tasks for a while – it’s asking an infant to run a marathon before they even crawl.”
Not that it’s impossible, he says, for an organisation that is starting from zero to get to that level of maturity: “I see it every day when I talk to organisations of every flavour.”
Ultimately, what is needed is a change of thinking to accept that patch prioritisation isn’t driven by the vulnerability scanning cycle, Van der Walt says.
“Instead, vulnerability scanning and other processes create a rich dataset that can be queried to determine priorities and plan the patch cycle.”
With that, questions such as “which of my internet-facing systems have vulnerabilities that are most likely to be targeted right now?” or “do I have any critical systems that could be impacted by PrintNightmare?” can be answered.
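To make that dataset-querying idea concrete, here is a minimal sketch of the first of those questions; the inventory structure, field names and sample records are entirely hypothetical, invented for illustration (CVE-2019-19781 and PrintNightmare’s CVE-2021-34527 are real identifiers mentioned above, the rest of the data is not):

```python
# Hypothetical asset/vulnerability inventory, as might be built up from
# scanning output. All hosts and the shape of the records are invented
# for this example.
findings = [
    {"host": "vpn-gw-01", "internet_facing": True,
     "cve": "CVE-2019-19781", "exploited_in_wild": True},
    {"host": "build-srv-07", "internet_facing": False,
     "cve": "CVE-2021-34527", "exploited_in_wild": True},   # PrintNightmare
    {"host": "www-02", "internet_facing": True,
     "cve": "CVE-2020-1234", "exploited_in_wild": False},
]

# "Which of my internet-facing systems have vulnerabilities that are
# most likely to be targeted right now?"
urgent = [f for f in findings if f["internet_facing"] and f["exploited_in_wild"]]
for f in urgent:
    print(f["host"], f["cve"])  # vpn-gw-01 CVE-2019-19781
```

In a real deployment the same query would run against a scanner export or CMDB rather than an in-memory list, but the shift Van der Walt describes – data first, patch cycle second – is the same.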
“This approach is by no means a silver bullet,” Van der Walt concedes. “But supported by the right technologies and processes, we believe that the shift in paradigm would have a meaningful impact.” ®