Vulnerability Assessment

A vulnerability is considered a weakness that could be used in some manner to compromise the confidentiality, integrity, or availability of an information system.

In a vulnerability assessment, your objective is to create a simple inventory of discovered vulnerabilities within the target environment. This concept of a target environment is extremely important. You must be sure to stay within the scope of your client’s target network and required objectives.

Creeping outside the scope of an assessment can cause an interruption of service, a breach of trust with your client, or legal action against you and your employer.

In most cases, an automated tool, such as the ones in the Vulnerability Analysis and Web Applications categories of the Kali Tools site and Kali desktop Applications menu, is used to discover live systems in a target environment, identify listening services, and enumerate them to discover as much information as possible such as the server software, version, platform, and so on.
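
To make the discovery and enumeration step more concrete, the sketch below performs a basic TCP connect check and banner grab against a few common ports. It is only a minimal stand-in for what dedicated scanners do far more thoroughly, and the target address and port list are placeholders rather than part of any real engagement.

    import socket

    # Placeholder target (TEST-NET address) and a few common service ports.
    # Only ever point this at hosts that are explicitly in scope.
    TARGET = "192.0.2.10"
    PORTS = [21, 22, 25, 80, 443]

    def grab_banner(host, port, timeout=3):
        """Attempt a TCP connection and read whatever banner the service offers."""
        try:
            with socket.create_connection((host, port), timeout=timeout) as sock:
                sock.settimeout(timeout)
                try:
                    banner = sock.recv(1024).decode(errors="replace").strip()
                except socket.timeout:
                    banner = ""  # port is open, but the service sent nothing
                return True, banner
        except OSError:
            return False, ""  # closed, filtered, or unreachable

    for port in PORTS:
        is_open, banner = grab_banner(TARGET, port)
        if is_open:
            print(f"{TARGET}:{port} open, banner: {banner!r}")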

This information is then checked for known signatures of potential issues or vulnerabilities. These signatures are made up of data point combinations that are intended to represent known issues. Multiple data points are used because the more data points you use, the more accurate the identification.

A very large number of potential data points exist, including but not limited to:

  • Operating System Version: It is not uncommon for software to be vulnerable on one operating system version but not on another. Because of this, the scanner will attempt to determine, as accurately as possible, what operating system version is hosting the targeted application.
  • Patch Level: Many times, patches for an operating system will be released that do not increase the version information, but still change the way a vulnerability will respond, or even eliminate the vulnerability entirely.
  • Processor Architecture: Many software applications are available for multiple processor architectures such as Intel x86, Intel x64, multiple versions of ARM, UltraSPARC, and so on. In some cases, a vulnerability will only exist on a specific architecture, so knowing this bit of information can be critical for an accurate signature.
  • Software Version: The version of the targeted software is one of the basic items that needs to be captured to identify a vulnerability.

These, and many other data points, will be used to make up a signature as part of a vulnerability scan.
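
To illustrate how such data points come together, the sketch below models a signature as a small record and compares it against what the scanner enumerated from a host. The field names and exact-match logic are a simplification invented for this example; real scanners use much richer signature formats.

    from dataclasses import dataclass, fields

    @dataclass
    class Fingerprint:
        """Data points describing a target service or a known-issue signature."""
        software: str
        software_version: str
        os_version: str
        patch_level: str
        architecture: str

    def matching_points(signature: Fingerprint, host: Fingerprint) -> int:
        """Count how many data points agree; more matches means higher confidence."""
        return sum(
            getattr(signature, f.name) == getattr(host, f.name)
            for f in fields(Fingerprint)
        )

    # Hypothetical signature for an outdated service build, and an enumerated host.
    signature = Fingerprint("exampled", "2.4.1", "Linux 5.10", "patch-3", "x86_64")
    host = Fingerprint("exampled", "2.4.1", "Linux 5.10", "patch-3", "x86_64")

    if matching_points(signature, host) == len(fields(Fingerprint)):
        print("All data points match: flag this finding for review")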

As expected, the more data points that match, the more accurate the signature will be. When dealing with signature matches, you can have a few different potential results:

  • True Positive: The signature is matched and it captures a true vulnerability. These results are the ones you will need to follow up on and correct, as these are the items that malicious individuals can take advantage of to hurt your organization (or your client’s).
  • False Positive: The signature is matched; however, the detected issue is not a true vulnerability. In an assessment, these are often considered noise and can be quite frustrating. You never want to dismiss a true positive as a false positive without more extensive validation.
  • True Negative: The signature is not matched and there is no vulnerability. This is the ideal scenario, verifying that a vulnerability does not exist on a target.
  • False Negative: The signature is not matched but there is an existing vulnerability. As bad as a false positive is, a false negative is much worse. In this case, a problem exists but the scanner did not detect it, so you have no indication of its existence.
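
These four outcomes are easiest to see when scan output is compared against findings that have been manually validated. The short sketch below tallies them from two sets of finding identifiers; the identifiers themselves are made up for illustration.

    # Hypothetical finding IDs: everything checked, what the scanner flagged,
    # and what manual validation confirmed actually exists on the target.
    all_checks = {"ISSUE-A", "ISSUE-B", "ISSUE-C", "ISSUE-D"}
    flagged = {"ISSUE-A", "ISSUE-B"}
    confirmed = {"ISSUE-A", "ISSUE-C"}

    true_positives = flagged & confirmed       # flagged and real
    false_positives = flagged - confirmed      # flagged but not real
    false_negatives = confirmed - flagged      # real but missed
    true_negatives = all_checks - flagged - confirmed

    print(f"TP={true_positives} FP={false_positives}")
    print(f"FN={false_negatives} TN={true_negatives}")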

As you can imagine, the accuracy of the signatures is extremely important for accurate results. The more data the scanner has to work with, the greater the chance of accurate results from an automated signature-based scan, which is why authenticated scans are often so popular.

With an authenticated scan, the scanning software will use provided credentials to authenticate to the target. This provides a deeper level of visibility into a target than would otherwise be possible.
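
Conceptually, an authenticated scan simply adds credentials to the scan definition so the scanner can log in and inspect the target from the inside (installed packages, patch levels, local configuration). The sketch below shows a hypothetical scan configuration; the field names are invented for illustration and do not correspond to any particular scanner's API, and the credentials are pulled from the environment rather than hard-coded.

    import os

    # Hypothetical scan configuration -- field names are illustrative only.
    scan_config = {
        "targets": ["192.0.2.0/24"],   # stay strictly within the agreed scope
        "threads": 5,                  # concurrent checks (see the next section)
        "authenticated": True,         # enables local checks on each target
        "credentials": {
            "type": "ssh",
            "username": os.environ.get("SCAN_USER"),      # injected at run time,
            "password": os.environ.get("SCAN_PASSWORD"),  # never stored in code
        },
    }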

A well-conducted vulnerability assessment presents a snapshot of potential problems in an organization and provides metrics to measure change over time. This is a fairly lightweight assessment, but even still, many organizations will regularly perform automated vulnerability scans in off-hours to avoid potential problems during the day when service availability and bandwidth are most critical.

The cost of running a scan is that it occupies system and network resources for its duration.


Scanning Threads

Most vulnerability scanners include an option to set threads per scan, which equates to the number of concurrent checks that occur at one time. Increasing this number will have a direct impact on the load on the assessment platform as well as the networks and targets you are interacting with.
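
As a rough sketch of what that setting amounts to, the example below uses Python's standard thread pool: max_workers caps how many checks run at once, and raising it trades a faster scan for more load on the scanner, the network, and the targets. The run_check function is a placeholder for a real check.

    from concurrent.futures import ThreadPoolExecutor
    import time

    def run_check(check_id):
        """Placeholder for a single vulnerability check against a target."""
        time.sleep(0.1)  # stand-in for network I/O
        return f"check {check_id} done"

    checks = range(20)

    # max_workers is the knob most scanners expose as "threads per scan".
    with ThreadPoolExecutor(max_workers=5) as pool:
        for result in pool.map(run_check, checks):
            print(result)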

When a vulnerability scan is finished, the discovered issues are typically linked back to industry-standard identifiers such as CVE numbers, EDB-IDs, and vendor advisories.

This information, along with the vulnerability's CVSS score, is used to determine a risk rating. Along with false negatives (and false positives), these arbitrary risk ratings are common issues that need to be considered when analyzing the scan results.
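
As a point of reference, CVSS v3.x publishes a standard mapping from the numeric base score to a qualitative severity band, which is typically where a scanner's rating comes from. A small helper reflecting that mapping is shown below; note that the band alone says nothing about whether the rating is appropriate for your environment.

    def cvss_severity(score: float) -> str:
        """Map a CVSS v3.x base score to its qualitative severity rating."""
        if score == 0.0:
            return "None"
        if score <= 3.9:
            return "Low"
        if score <= 6.9:
            return "Medium"
        if score <= 8.9:
            return "High"
        return "Critical"

    print(cvss_severity(7.5))  # "High"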

Since automated tools use a database of signatures to detect vulnerabilities, any slight deviation from a known signature can alter the result and likewise the validity of the perceived vulnerability.

A scanner is often said to only be as good as its signature rule base. For this reason, many vendors provide multiple signature sets: one that might be free to home users and another fairly expensive set that is more comprehensive, which is generally sold to corporate customers.

The other issue that is often encountered with vulnerability scans is the validity of the suggested risk ratings.

Depending on your environment, these ratings may or may not be applicable so they should not be accepted blindly.

While there is no universal agreement on risk ratings, NIST Special Publication 800-30 is recommended as a baseline for evaluating risk ratings and their accuracy in your environment. NIST SP 800-30 defines the true risk of a discovered vulnerability as a combination of the likelihood of occurrence and the potential impact.
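
Following that framing, the sketch below combines a qualitative likelihood and impact into an overall risk level. The three-level scales and the lookup table are an illustrative simplification rather than the publication's exact tables.

    # Simplified qualitative scales, in the spirit of NIST SP 800-30.
    RISK_MATRIX = {
        ("low", "low"): "low",
        ("low", "moderate"): "low",
        ("low", "high"): "moderate",
        ("moderate", "low"): "low",
        ("moderate", "moderate"): "moderate",
        ("moderate", "high"): "high",
        ("high", "low"): "moderate",
        ("high", "moderate"): "high",
        ("high", "high"): "high",
    }

    def risk_rating(likelihood: str, impact: str) -> str:
        """Combine likelihood of occurrence and potential impact into a risk level."""
        return RISK_MATRIX[(likelihood, impact)]

    print(risk_rating("moderate", "high"))  # "high"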
