Device Hardening, Vulnerability Scanning and Threat Mitigation for Compliance and Security

All security standards and corporate governance compliance policies, such as PCI DSS, GCSx CoCo, SOX (Sarbanes-Oxley), NERC CIP, HIPAA, HITECH, GLBA, ISO 27000 and FISMA, require devices such as PCs, Windows servers and Unix servers, and network devices such as firewalls, Intrusion Protection Systems (IPS) and routers, to be secured so that they keep confidential data protected.

There are a number of buzzwords in this area: security vulnerabilities and device hardening. 'Hardening' a device requires known security 'vulnerabilities' to be eliminated or mitigated. A vulnerability is any weakness or flaw in the design, implementation or administration of a system that provides a mechanism for a threat to exploit it. There are two main areas to address in order to eliminate security vulnerabilities: configuration settings, and software flaws in program and operating system files. Eliminating vulnerabilities requires either 'remediation' (typically a software upgrade or patch for program or OS files) or 'mitigation' (a configuration-settings change). Hardening is required equally for servers, workstations and network devices such as firewalls, switches and routers.

How do I identify vulnerabilities? A vulnerability scan or external penetration test will report on all vulnerabilities applicable to your systems and applications. You can buy in third-party scanning and penetration-testing services; penetration testing, by its very nature, is performed externally via the public internet, as this is where any threat would be exploited from. Vulnerability scanning services need to be delivered on site. This can either be performed by a third-party consultant with scanning hardware, or you can purchase a 'black box' solution in which a scanning appliance is permanently sited within your network and scans are provisioned remotely. Of course, the results of any scan are only accurate at the time of the scan, which is why solutions that continuously track configuration changes are the only real way to guarantee the security of your IT estate is maintained.
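
As a purely illustrative sketch of the very first step a scanner performs, the Python snippet below probes a list of hosts for listening TCP services. The host addresses and port list are hypothetical placeholders, and a real scanning service or appliance goes far beyond this, fingerprinting software versions and matching them against vulnerability databases.

# Minimal sketch: probe hypothetical in-scope hosts for listening TCP services.
# A real vulnerability scanner identifies the software behind each port and
# checks it against known-vulnerability databases.
import socket

TARGETS = ["192.168.1.10", "192.168.1.11"]          # illustrative host list
COMMON_PORTS = [21, 22, 23, 80, 135, 443, 445, 3389]

def scan_host(host, ports, timeout=0.5):
    """Return the ports on 'host' that accepted a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            try:
                sock.connect((host, port))          # raises on refusal or timeout
                open_ports.append(port)
            except OSError:
                pass                                # closed, filtered or unreachable
    return open_ports

if __name__ == "__main__":
    for target in TARGETS:
        print(target, "listening on:", scan_host(target, COMMON_PORTS))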

What is the difference between 'remediation' and 'mitigation'? 'Remediation' of a vulnerability results in the flaw being removed or fixed permanently, so this term generally applies to any software update or patch. Patch management is increasingly automated by the operating system and product developer; as long as you implement patches when they are released, in-built vulnerabilities will be remediated. As an example, the recently reported Operation Aurora, classified as an Advanced Persistent Threat (APT), was successful in infiltrating Google and Adobe. A vulnerability within Internet Explorer was used to plant malware on targeted users' PCs that allowed access to sensitive data. The remediation for this vulnerability is to 'fix' Internet Explorer using Microsoft-released patches. Vulnerability 'mitigation' via configuration settings ensures vulnerabilities are disabled. Configuration-based vulnerabilities are no more or less potentially damaging than those needing to be remediated via a patch, although a securely configured device may well mitigate a program- or OS-based threat. The biggest issue with configuration-based vulnerabilities is that they can be re-introduced or enabled at any time; just a few clicks are needed to change most configuration settings.

How often are new vulnerabilities discovered? Unfortunately, all of the time! Worse still, often the only way the global community discovers a vulnerability is after a hacker has found and exploited it. It is only when the damage has been done and the hack traced back to its source that a preventative course of action, either a patch or a configuration change, can be formulated. There are various centralized repositories of threats and vulnerabilities on the web, such as the MITRE CCE lists, and many security product vendors compile live threat reports or 'storm center' websites.

So all I need to do is work through the checklist and then I am secure? In theory, yes, but there are literally hundreds of known vulnerabilities for each platform, and even in a small IT estate, verifying the hardened status of each and every device is almost impossible to do manually.

Even if you automate the vulnerability scanning task, using a scanning tool to identify how hardened your devices are before you start, you will still have work to do to mitigate and remediate vulnerabilities. And this is only the first step. Consider a typical configuration vulnerability: a Windows server should have the Guest account disabled. If you run a scan, identify where this vulnerability exists on your devices, and then disable the Guest account, you will have hardened those devices. However, if another user with Administrator privileges then accesses these same servers and re-enables the Guest account for any reason, you will be left exposed. Of course, you won't know that the server has been rendered vulnerable until you next run a scan, which may not be for another 3 or even 12 months. There is another factor that hasn't yet been covered, which is how you protect systems from an internal threat; more on this later.
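
To make the Guest-account example concrete, here is a minimal sketch of re-checking that single setting on a schedule instead of waiting for the next quarterly scan. It shells out to the built-in Windows 'net user' command and parses its English-language output, which is an assumption; a production configuration-tracking tool would query the account state through a proper API and cover hundreds of settings, not one.

# Minimal sketch: periodically re-check whether the local Guest account has
# been re-enabled. Parsing the English 'Account active' line of 'net user'
# output is an assumption and will not work on other locales.
import subprocess
import time

CHECK_INTERVAL_SECONDS = 3600   # illustrative: re-check every hour

def guest_account_enabled():
    """Return True if the local Guest account is reported as active."""
    result = subprocess.run(["net", "user", "Guest"],
                            capture_output=True, text=True, check=True)
    for line in result.stdout.splitlines():
        if line.startswith("Account active"):
            return "Yes" in line
    raise RuntimeError("Could not determine Guest account state")

if __name__ == "__main__":
    while True:
        if guest_account_enabled():
            print("ALERT: Guest account re-enabled - device is no longer hardened")
        time.sleep(CHECK_INTERVAL_SECONDS)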

So tight change management is essential for ensuring we remain compliant? Indeed. Section 6.4 of the PCI DSS describes the requirements for a formally managed change management process for this very reason. Any change to a server or network device may have an impact on the device's 'hardened' state, and therefore it is imperative that this is considered when making changes. If you are using a continuous configuration change tracking solution, you will have an audit trail available, giving you 'closed loop' change management: the detail of the approved change is documented, along with details of the exact changes that were actually implemented. Furthermore, the devices changed will be re-assessed for vulnerabilities and their compliant state confirmed automatically.
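
At its core, the 'closed loop' reduces to a simple comparison: the configuration observed on the device after a change should match the approved baseline recorded when the change was signed off. The sketch below illustrates only that comparison; the settings and baseline values are hypothetical placeholders, and a real solution gathers them from the device itself and ties each difference back to an approved change record.

# Minimal sketch: compare a device's current configuration snapshot against
# an approved baseline and report every deviation (drift). Setting names and
# values are illustrative only.
APPROVED_BASELINE = {
    "guest_account": "disabled",
    "smb_signing": "required",
    "password_min_length": "14",
}

def report_drift(current, baseline):
    """Return human-readable descriptions of every deviation from the baseline."""
    drift = []
    for setting, expected in baseline.items():
        actual = current.get(setting, "<missing>")
        if actual != expected:
            drift.append(f"{setting}: expected '{expected}', found '{actual}'")
    for setting in current.keys() - baseline.keys():
        drift.append(f"{setting}: not in approved baseline (unauthorised addition?)")
    return drift

if __name__ == "__main__":
    current_config = {"guest_account": "enabled", "smb_signing": "required"}
    for finding in report_drift(current_config, APPROVED_BASELINE):
        print("DRIFT:", finding)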

What about internal threats? Cybercrime is joining the organised crime league, which means this is not just about stopping malicious hackers proving their skills as a fun pastime. Firewalling, Intrusion Protection Systems, anti-virus software and fully implemented device hardening measures will still not stop, or even detect, a rogue employee who works as an 'inside man'. This kind of threat could result in malware being introduced to otherwise secure systems by an employee with Administrator rights, or even backdoors being programmed into core business applications. Similar risks come with the advent of Advanced Persistent Threats (APT), such as the publicized 'Aurora' hacks, which use social engineering to dupe employees into introducing 'zero-day' malware. 'Zero-day' threats exploit previously unknown vulnerabilities: a hacker discovers a new vulnerability and formulates an attack process to exploit it. The job then is to understand how the attack happened and, more importantly, how to remediate or mitigate future re-occurrences of the threat. By their very nature, anti-virus measures are often powerless against 'zero-day' threats. In fact, the only way to detect these types of threats is to use file-integrity monitoring technology. "All the firewalls, Intrusion Protection Systems, Anti-virus and Process Whitelisting technology in the world won't save you from a well-orchestrated internal hack where the perpetrator has admin rights to key servers or legitimate access to application code – file integrity monitoring used in conjunction with tight change control is the only way to properly govern sensitive payment card systems" (Phil Snell, CTO, NNT)

See our other whitepaper, 'File-Integrity Monitoring – The Last Line of Defense of the PCI DSS', for more background to this area, but here is a brief summary. Clearly, it is important to verify all adds, changes and deletions of files, as any change may be significant in compromising the security of a host. This can be achieved by monitoring for changes to file attributes and file size.
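
As a rough sketch of that attribute-based monitoring, and assuming a single monitored directory, the snippet below snapshots the size and modification time of every file and diffs two snapshots to spot adds, changes and deletions. As the next paragraph explains, attributes alone can be spoofed, which is why a hash is also needed.

# Minimal sketch: snapshot file size and modification time under a monitored
# path, then diff two snapshots to detect added, deleted and changed files.
import os

def snapshot(root):
    """Map each file path under 'root' to its (size, mtime) attributes."""
    state = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            info = os.stat(path)
            state[path] = (info.st_size, info.st_mtime)
    return state

def diff(before, after):
    """Print files that were added, deleted or changed between two snapshots."""
    for path in after.keys() - before.keys():
        print("ADDED:", path)
    for path in before.keys() - after.keys():
        print("DELETED:", path)
    for path in before.keys() & after.keys():
        if before[path] != after[path]:
            print("CHANGED:", path)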

However, since we are looking to prevent one of the most sophisticated types of hack, we need a far more reliable means of guaranteeing file integrity. This calls for each file to be 'DNA fingerprinted', typically using a hash algorithm. A hash algorithm, such as SHA1 or MD5, produces a hash value that is effectively unique to the contents of the file, ensuring that even a single character changing in a file will be detected. This means that even if a program is modified to expose payment card details, and the file is then 'padded' to make it the same size as the original with all other attributes edited to make the file look and feel the same, the modifications will still be exposed. This is why the PCI DSS makes file-integrity monitoring a mandatory requirement and why it is increasingly considered as vital a component in system security as firewalling and anti-virus defences.
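
A minimal sketch of that fingerprinting step is shown below. It uses SHA-256 rather than the SHA1 or MD5 mentioned above, on the assumption that a stronger digest is preferred today; the baseline of known-good hashes is assumed to be captured while the system is in a trusted state and stored securely.

# Minimal sketch: hash the full contents of each monitored file and compare
# against a stored baseline. A padded or attribute-spoofed file still changes
# its digest and is flagged.
import hashlib

def fingerprint(path):
    """Return the SHA-256 digest of the file's contents."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(baseline):
    """Compare each file's current hash against its recorded baseline hash."""
    for path, expected in baseline.items():
        if fingerprint(path) != expected:
            print(f"INTEGRITY FAILURE: {path} no longer matches its baseline hash")

# Usage sketch: baseline = {path: fingerprint(path) for path in monitored_files}
# is captured on a known-good system, stored securely, and re-verified on a
# schedule or whenever a change is detected.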

Conclusion

Device hardening is an essential discipline for any organization serious about security. Furthermore, if your organization is subject to any corporate governance or formal security standard, such as PCI DSS, SOX, HIPAA, NERC CIP, ISO 27K or GCSx CoCo, then device hardening will be a mandatory requirement.

- All servers, workstations and network devices need to be hardened via a combination of configuration settings and software patch deployment.
- Any change to a device may adversely affect its hardened state and render your organization exposed to security threats.
- File-integrity monitoring must also be employed to mitigate 'zero-day' threats and the threat from the 'inside man'.
- Vulnerability checklists will change regularly as new threats are identified.



Source by Mark Kedgley
