Friday, October 29, 2010

What's new with PCI-DSS 2.0 - Part 1

The much-awaited PCI-DSS standard v2.0 is out now. The PCI Security Standards Council (SSC) released the standard on 28 October 2010, and it is available for download on their site, along with the Summary of Changes from v1.2.1 to 2.0. Some of the changes are small tweaks in verbiage that merely clarify the Council's stance on certain issues, while others may have medium to large-scale impacts in certain PCI environments. In this two-part blog post I will discuss some of the key changes that may have that kind of impact.

The first thing I noticed was the amount of detail provided to QSAs at the beginning of the document. The Report on Compliance details, scoping guidance and so on have been fleshed out heavily. This is a good thing: there have been fears about the lack of quality norms among QSAs, with reports that were lax on detail (often because the testing itself was lax). The SSC has been consistently trying to improve the quality of assessments, and the initial sections of the Security Assessment Procedures are evidence of that.

Requirement 1, the firewall and network security requirement, has largely gone through changes in verbiage, clarifying some of the questions about the implementation of firewalls and network segmentation. The IP masquerading requirement, which previously cited NAT/PAT as the benchmark, has been extended to include load balancers, content caches, firewalls and similar measures. Also, employees with personal firewall software on computers in the PCI-scoped environment should be unable to turn it off. Sensible, but largely basic.

Clarity on the "One Primary Function Per Server" rule has been given at last. The rule has been interpreted in several ways, but there is a measure of clarity with 2.0. The standard stipulates that you must implement only one primary function per server where the security levels of those functions differ, for instance a DNS, web and database server, or a card management application server and its database server. They have also indicated that in a virtualized environment, one primary function per virtual machine is in order. This was a requirement that was being taken to ridiculous levels on both sides of the spectrum.

Another important change, which comes across as innocuous but can have far-reaching implications, is the non-console administrative access requirement. The standard stipulates that when system components like network devices and servers are administered over anything other than the console, the session has to be encrypted using technologies such as SSH or SSL/TLS. In earlier avatars of the standard this was just a simple allusion to SSL, SSH or IPSec, but the Council has now mandated strong cryptography. This causes quite an issue with network devices whose SSL stacks still allow SSLv2 or ship with MD5-signed certificates, or, in the case of SSH, still have SSHv1 enabled. These were taken as compliant with PCI 1.2.1 because they supported encrypted non-console admin access. Now, with the strong-crypto requirement, they will have to be overhauled with better SSL certificates and SSH configurations, and, even in the case of IPSec, stronger ciphers.
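
As a rough illustration of what "strong cryptography for non-console access" means in practice, here is a minimal Python sketch (the host and port are hypothetical) that refuses to complete a handshake with a management interface unless it negotiates at least TLS 1.2; a device that only speaks SSLv2/SSLv3 or old TLS versions would simply fail this check. This is only a sketch of the idea, not a QSA testing procedure.

```python
import socket
import ssl

# Hypothetical management interface; replace with a real host and port.
ADMIN_HOST = "192.0.2.10"
ADMIN_PORT = 443

def check_strong_tls(host, port):
    """Refuse anything weaker than TLS 1.2 when connecting to a
    non-console admin interface. Certificate validation is also enforced
    by the default context."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # disallow SSLv2/v3, TLS 1.0/1.1
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. 'TLSv1.2' or 'TLSv1.3'

if __name__ == "__main__":
    print("Negotiated:", check_strong_tls(ADMIN_HOST, ADMIN_PORT))
```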

One of the requirements that I believe will set PCI back by some measure is Requirement 3.2. This requirement mandates that entities must not store Sensitive Authentication Data after authorization, even if encrypted. This was extremely difficult to enforce at issuing banks and issuing processors, as several of them run on legacy mainframe apps that not only store the CVV (aka Card Security Code) but also log the full track data and transaction in cleartext. However, certain issuing banks/processors have adopted an approach where the CVV is generated on the fly by a Hardware Security Module (HSM) during authorization and compared with the CVV received in the transaction; if the two match, the transaction is authorized. This is a good practice, as it ensures that CVVs aren't stored by the organization at all, but such implementations are (in my experience) still the minority. PCI 2.0, however, now permits issuers and organizations that support issuing services to store sensitive authentication data such as the CVV where there is a business justification, effectively exempting them from this requirement. I believe this is a bad move, because issuing organizations no longer have an impetus to change over to better (and more secure) practices for handling Sensitive Authentication Data.
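
To make the "generate and compare" approach concrete, here is a heavily simplified Python sketch. It is not the real CVV algorithm (which is DES-based, defined by the card brands, and performed inside an HSM); the HMAC key and field names below are stand-ins, purely to show that a check value can be derived per transaction and compared, with nothing retained in storage.

```python
import hmac
import hashlib

# Hypothetical issuer-side key; in practice this lives inside an HSM
# and never leaves it.
ISSUER_VERIFICATION_KEY = bytes.fromhex("00112233445566778899aabbccddeeff")

def derive_check_value(pan, expiry, service_code):
    """Derive a 3-digit check value from card data on the fly."""
    msg = "{}|{}|{}".format(pan, expiry, service_code).encode()
    digest = hmac.new(ISSUER_VERIFICATION_KEY, msg, hashlib.sha256).hexdigest()
    return str(int(digest, 16) % 1000).zfill(3)

def authorize(pan, expiry, service_code, cvv_from_txn):
    """Compare the freshly derived value with the CVV presented in the
    transaction. Nothing is looked up from storage, so nothing is retained."""
    expected = derive_check_value(pan, expiry, service_code)
    return hmac.compare_digest(expected, cvv_from_txn)
```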

The standard also changes some key issues with key management (no pun intended). It now states that key-encrypting keys (KEKs) must be at least as strong as the data-encrypting keys (DEKs) they protect. As the names suggest, the DEK encrypts the data, and the KEK, as an additional measure of security, is used to encrypt the DEK. In many applications the DEK is a symmetric key (AES-256 or 128-bit 3DES, say) while the KEK is an asymmetric keypair (RSA, DSA, etc.). This is usually done for reasons of efficiency: bulk data encryption is heavy, so a fast symmetric cipher is used for the data, while asymmetric crypto is used for the KEK because a public-private keypair is easier to manage and secure than yet another symmetric key. However, with the PCI mandate that KEKs be of equivalent strength to the DEKs they protect, the tables are turned: the equivalent of a 256-bit symmetric key is roughly a 15,360-bit asymmetric (RSA) key. Ouch.
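
One straightforward way to satisfy the "KEK at least as strong as the DEK" rule is to keep both keys symmetric. The sketch below, which assumes the third-party Python cryptography package, wraps an AES-256 DEK with an AES-256 KEK (envelope encryption); in a real deployment the KEK would of course live in an HSM or key manager rather than in application memory.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

# Both keys are AES-256, so the KEK is at least as strong as the DEK.
kek = AESGCM.generate_key(bit_length=256)   # in practice, held in an HSM/key manager
dek = AESGCM.generate_key(bit_length=256)

def encrypt_record(plaintext):
    """Envelope encryption: the DEK encrypts the data, the KEK wraps the DEK."""
    data_nonce = os.urandom(12)
    ciphertext = AESGCM(dek).encrypt(data_nonce, plaintext, None)

    wrap_nonce = os.urandom(12)
    wrapped_dek = AESGCM(kek).encrypt(wrap_nonce, dek, None)

    # Only the wrapped DEK is stored alongside the ciphertext.
    return {"ciphertext": ciphertext, "data_nonce": data_nonce,
            "wrapped_dek": wrapped_dek, "wrap_nonce": wrap_nonce}

def decrypt_record(rec):
    """Unwrap the DEK with the KEK, then decrypt the data with the DEK."""
    plain_dek = AESGCM(kek).decrypt(rec["wrap_nonce"], rec["wrapped_dek"], None)
    return AESGCM(plain_dek).decrypt(rec["data_nonce"], rec["ciphertext"], None)
```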

A good fallout of the key management requirements concerns the 'split knowledge' requirement. Earlier, the standard mandated split knowledge and dual control of encryption keys by key custodians for key generation and similar operations. This was a serious issue for applications: key management could be automated, but because of this requirement developers used to come up with clunky executables (or other kludges) that would let key custodians enter half a key each for key generation or key changes. Now the stance has changed: split knowledge and dual control of keys by custodians are only required for manual key management processes (where they actually make sense). Great news for applications, where key management processes are, and ideally should be, automated.
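
For context, this is roughly what manual split knowledge and dual control amount to: each custodian holds only a key component that reveals nothing on its own, and both components are needed to form the working key. A minimal, illustrative Python sketch (XOR-based components, hypothetical key length):

```python
import secrets

KEY_LEN = 32  # 256-bit key, as an example

def make_components(full_key):
    """Split a key into two components. Each custodian alone learns
    nothing about the full key (XOR-based split knowledge)."""
    component_a = secrets.token_bytes(len(full_key))
    component_b = bytes(x ^ y for x, y in zip(full_key, component_a))
    return component_a, component_b

def combine_components(component_a, component_b):
    """Dual control: both custodians must supply their components
    to reconstitute the working key."""
    return bytes(x ^ y for x, y in zip(component_a, component_b))

if __name__ == "__main__":
    key = secrets.token_bytes(KEY_LEN)
    a, b = make_components(key)
    assert combine_components(a, b) == key
```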

This ends Part 1 of my PCI-DSS 2.0 review. This will be followed up with the rest of the review in Part 2. Hope you find it useful!

Thursday, October 28, 2010

What's wrong with Penetration Tests, and how we can set them right (India Edition)

Penetration testing is complicated, especially for organizations that then have to fix the issues unearthed in the debris that is their IT infrastructure. Over the past few months I have led and handled a great many pen-tests for companies in the subcontinent, and I have decided to rant. I thought I would write up some of the problems that are out there, along with possible solutions that would make these pen-tests a whole lot easier and more efficient. Here they are:
Test Everything - Fix Nothing
This is a condition I have typically seen with internal pen-tests or large application pen-tests. Management would like to include a ridiculously large scope of components to test: everything from their database servers right down to the laptops they use at home gets pulled into the pen-testing activity. The result in most cases (especially with internal pen-tests involving client-side systems) is really ugly: multiple exploits, backdoors and, in some cases, traces of popular worms like Conficker (yes, I am not kidding). What follows is a gigantic report, and what follows that is... nothing. I have often seen that organizations which adopt this policy usually don't get anywhere in fixing the problems. They find so many loose ends to tie up that they huff and puff and eventually pack up and go home. This brings me to my first point in this rant-post:
Start small (or manageable) - Many organizations (especially ones new to VAs and pen-tests) usually go gung-ho and then fizzle out after seeing adverse reports. My advice: test everything, but in doses you can handle. Prioritize the critical components first and then phase your testing across the year. Also, testing everything (literally) may not be required; mirroring results across similar IT components/applications is easier than literally testing every single one. For instance, upon finding security holes on one Debian server, it is probably a good idea to fix the issues on a sample of servers and roll the same fixes out across the other similar servers in the environment. Additionally, a solid hardening standard for Debian servers, coupled with patching, would remove the need to test every single one of these similar components.
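As an example of what "a solid hardening standard, coupled with patching" can reduce to in practice, here is a minimal, hypothetical Python sketch that checks a handful of sshd_config settings against a baseline. The baseline entries are illustrative only; a real standard (a CIS benchmark, say) covers far more.

```python
import re
import sys

# Hypothetical baseline: a few sshd_config settings drawn from a
# Debian hardening standard.
BASELINE = {
    "PermitRootLogin": "no",
    "PasswordAuthentication": "no",
    "Protocol": "2",
}

def check_sshd_config(path="/etc/ssh/sshd_config"):
    """Report baseline settings that are missing or mis-set."""
    with open(path) as f:
        text = f.read()
    failures = []
    for key, wanted in BASELINE.items():
        match = re.search(r"^\s*{}\s+(\S+)".format(key), text,
                          re.MULTILINE | re.IGNORECASE)
        actual = match.group(1) if match else None
        if actual is None or actual.lower() != wanted.lower():
            failures.append("{}: expected {!r}, found {!r}".format(key, wanted, actual))
    return failures

if __name__ == "__main__":
    problems = check_sshd_config()
    for p in problems:
        print("FAIL", p)
    sys.exit(1 if problems else 0)
```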
Ridiculous Time Frames
Sometimes we are given ridiculous time frames to work with. "Hey, can you test my e-commerce app in two days? I don't have more time than that. Also, I have to fix the problems after that." My reaction to that is usually, "Oh, you won't have much to worry about fixing; I would probably need two days just to understand your site, and since you only have two days, I will give you a clean report", delivered in my most sarcastic (and borderline mocking) voice. While time is always a constraint, such super-constrained timelines only result in an under-tested and potentially insecure environment. I am sure no one, whether management or the pen-tester, would like to turn up false negatives and find that their application/server/network component was super-vulnerable only because there wasn't time to test it extensively. That will come back to bite a lot of people.
Fixers are Breakers
This is a condition I normally see with application pen-tests where management is extremely clued in on the test before it commences, making statements like, "We would like you to be extremely comprehensive and give us all our security holes right between the eyes." Later, when we deliver our report on their Hindenburg-like app and discuss it with them, the very same people become the worst enemies of everything sane and secure. I will be discussing a gaping business logic flaw, say a broken authorization check that lets an ordinary user play admin with the application, and they try to find explanations along the lines of "but the user would not be able to do much even if he/she became admin" or "I think this was an intentional feature we had in place for this eventuality." Earlier I used to have a small explosion in my head, but yoga has taught me to react like Mr. Wolf from Pulp Fiction: "I am here to help; if my help's not appreciated, tough luck, gentlemen."

Disclaimer

The views presented in this blog are entirely mine and are not those of my company.

© Abhay Bhargav 2010