In Vulnerability Assessment, Accuracy is Vital
By Brian Pearce, COO of Beyond Security
Vulnerability Assessment, also known as Vulnerability Management, is a process that identifies and classifies the security holes in a computer, a network, or communications infrastructure.
The primary requirement for a Vulnerability Assessment (VA) solution is testing accuracy. Ease of use and clear reports are important, but if accuracy isn't there then very little else matters. Poor accuracy in VA produces two kinds of testing error.
Overlooking a vulnerability (a false negative) leaves a security flaw you are unaware of, one that can be damaging if an attacker discovers it. Reporting a vulnerability that does not exist (a false positive) wastes time. Finding vulnerabilities is essential, but an inaccurate report can be more trouble than it's worth.
If the first 4 vulnerabilities reported by your VA solution are false positives, it becomes difficult to take the 5th seriously. 'Crying wolf' creates complacency. A VA report that claims dozens of serious security issues when there are only 2 is more a distraction than an aid.
The hidden cost of most VA systems is the man-hours it takes to chase down false positives, prove that they are false and check them off the list. The total cost of ownership of a VA system with a 5 to 8% false positive rate is nearly doubled when the time to verify and eliminate false positives is included. Even a 2% error rate can be a headache.
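To make the hidden cost concrete, here is a back-of-the-envelope triage model. The numbers (findings per scan, hours to disprove one false positive, hourly rate) are illustrative assumptions, not figures from the article:

```python
# Rough triage-cost sketch. All default values are assumptions chosen
# for illustration; plug in your own scan and staffing numbers.
def triage_cost(findings, fp_rate, hours_per_fp=2.0, hourly_rate=75.0):
    """Estimate the false positives in a report, the man-hours spent
    disproving them, and the resulting labor cost."""
    false_positives = findings * fp_rate
    hours = false_positives * hours_per_fp
    return false_positives, hours, hours * hourly_rate

# A 400-finding report at a 5% false positive rate:
fps, hours, cost = triage_cost(findings=400, fp_rate=0.05)
print(f"{fps:.0f} false positives -> {hours:.0f} hours -> ${cost:,.0f}")
```

Even at these modest assumptions, 20 false positives consume a full work-week of verification time, which is the "nearly doubled" ownership cost the paragraph above describes.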
Nearly all VA solutions depend upon version checking as their primary method of assessing the presence of vulnerabilities. VA solutions typically look at the response header and from the version data reported there they deduce whether the hardware or software is vulnerable. If an old application or operating system version is known to have 5 vulnerabilities and the header says that the old version is in use, then it is assumed that all 5 of those vulnerabilities exist.
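The version-checking approach described above can be sketched in a few lines. The product name, version, and advisory IDs below are hypothetical placeholders; a real scanner would maintain a much larger signature database:

```python
import re

# Hypothetical signature database mapping (product, version) to the
# advisories assumed to apply to that version.
KNOWN_VULNS = {
    ("ExampleHTTPd", "2.4.1"): ["ADV-001", "ADV-002"],
}

def version_check(banner):
    """Deduce vulnerabilities from the Server header alone: if the
    banner names a known-old version, report every flaw on record for
    it, whether or not the flaw is actually present."""
    m = re.search(r"Server:\s*(\S+)/(\S+)", banner)
    if not m:
        return []
    return KNOWN_VULNS.get((m.group(1), m.group(2)), [])

print(version_check("HTTP/1.1 200 OK\r\nServer: ExampleHTTPd/2.4.1\r\n"))
```

Note that the function never probes the host; it trusts the banner, which is exactly why a patched or backported build still gets reported as carrying all the old flaws.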
Version checking has an advantage for the vendor and a disadvantage for the user. It is easy to program these tests and so the vendor's cost to maintain their test database is low. Additionally, a version analysis test can produce a long and impressive list of vulnerabilities.
However, the disadvantage of version analysis is poor accuracy. It can miss issues and can list dozens of vulnerabilities that are nonexistent. The version information in a header does not reflect the presence or absence of a security issue with high accuracy.
The alternative is to use behavior based testing. The conclusive indicator of a vulnerability is 'unwanted response to a query'. Vulnerabilities can be exactly and accurately identified by how the host responds when given a special query.
The most accurate Vulnerability Assessment solutions send specially crafted queries and rely on the resulting behavior of network components and web applications as the primary indicator of whether a specific vulnerability exists. This strategy requires far more effort in programming the vulnerability tests, but it produces very few false positives.
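A minimal sketch of a behavior-based test, assuming a hypothetical path-traversal flaw as the example: instead of trusting a banner, the test sends a crafted query and judges what actually comes back. The payload and helper names are illustrative, and the fetch function is injected so the probe logic stays testable without a live target:

```python
# Hypothetical traversal payload used as the 'special query'.
TRAVERSAL_PAYLOAD = "/..%2f..%2f..%2fetc/passwd"

def judge_response(body):
    """The 'unwanted response to a query': genuine /etc/passwd content
    in the reply is conclusive evidence; an error page is not."""
    return "root:" in body and "/bin/" in body

def behavior_test(fetch, base_url):
    """fetch(url) -> response body string (e.g. a thin wrapper around
    urllib.request.urlopen). Returns True only when the host's actual
    behavior proves the flaw, regardless of what version it claims."""
    try:
        body = fetch(base_url + TRAVERSAL_PAYLOAD)
    except OSError:
        return False  # unreachable or request refused: no evidence
    return judge_response(body)
```

The design choice matters: a fully patched server running an "old" version string passes this test, and an unpatched one fails it, which is exactly the accuracy version checking cannot deliver.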
The version number reported in the header is only a general indicator of vulnerability. It is not accurate enough for mission-critical use in Vulnerability Assessment. It is the behavior of a host, its response to a query, that is conclusive proof that a vulnerability exists.
A 5% (or greater) false positive rate may not be a problem for small networks, depending upon the time the admin has available to follow up. If there are 15 false positives in a network of 300 IPs, that may not seem like a big deal. What if you have 1000 IPs with 150 false positives? It may take weeks to sort out, or, worse, your trust in VA as a key tool for improving your network security is lost.