Friday, October 29, 2010

What's new with PCI-DSS 2.0 - Part 1

The much-awaited PCI-DSS standard v2.0 is out now. The PCI Security Standards Council (SSC) released the standard on October 28, 2010, and it is available for download at their site, along with the Summary of Changes from v1.2.1 to v2.0. Some of the changes are small adjustments in verbiage that merely clarify the Council's stance on certain issues, while others may have medium- to large-scale impacts in certain PCI environments. In this two-part blog post I will discuss some of the key changes that may have that kind of impact.

The first thing I noticed was the amount of detail provided to QSAs at the beginning of the document. The Report on Compliance details, scoping guidance, and so on have been spelled out very thoroughly. This is good: amid fears over a lack of quality norms among QSAs, reports had become lax on detail (often reflecting lax testing). The SSC has been consistently trying to improve the quality of assessments, and the initial sections of the Security Assessment Procedures are evidence of that.

Requirement 1, the firewall and network security requirement, has largely gone through changes in verbiage, clarifying some of the questions about the implementation of firewalls and network segmentation. The IP masquerading requirement, which used NAT/PAT as the benchmark, has been extended to include load balancers, content caches, firewalls, and so on. Also, employees with personal firewall software on computers in the PCI-scoped environment must not be able to turn it off. Sensible, but largely basic.

Clarity on the "one primary function per server" rule has been given at last. The rule has been interpreted in several ways, but there is a measure of clarity with v2.0. The standard stipulates that a server must implement only one primary function where the security levels of those functions differ: for instance, DNS, web, and database servers, or a card management application server and its database server, should be separate. The Council has also indicated that in a virtualized environment, one primary function per virtual machine is in order. This was a requirement that was being taken to ridiculous extremes on both sides of the spectrum.

Another important change, innocuous at first glance but with far-reaching implications, is the non-console administrative access requirement. The standard stipulates that when system components such as network devices and servers are administered over the network (i.e., non-console access), the session must be encrypted using technologies such as SSH or SSL. Earlier avatars of the standard simply alluded to SSL, SSH, or IPSec; v2.0 now mandates strong cryptography. This causes quite an issue with network devices that ship with SSL implementations still supporting SSLv2 or MD5-signed certificates, or with SSHv1. These were taken as compliant with PCI 1.2.1 because they supported encrypted non-console admin access. Now, however, with the strong-cryptography requirement, they will have to be overhauled with better SSL certificates and SSH implementations and, even in the case of IPSec, stronger ciphers.
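As a rough, modern-day illustration of what the "strong cryptography" language demands of an admin interface, here is a sketch using Python's standard `ssl` module (the context setup is my own example, not anything the standard prescribes):

```python
import ssl

# Build a client-side TLS context that refuses weak protocol versions.
# SSLv2/SSLv3 cannot even be negotiated with modern OpenSSL builds;
# setting a minimum_version additionally rules out old TLS versions,
# which is the spirit of the "strong cryptography" requirement.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version.name)  # prints "TLSv1_2"
```

A device whose management interface can only negotiate SSLv2 or SSHv1 simply cannot pass a check like this, which is why those boxes now need firmware or certificate overhauls.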

One of the requirements that I believe will set PCI back by some measure is Requirement 3.2. This requirement mandates that entities must not store Sensitive Authentication Data under any circumstances (even if encrypted). This was extremely difficult to enforce at issuing banks and issuing processors, as several of them run mainframe legacy apps that not only store the CVV (aka Card Security Code) but also log the full track data and transaction in cleartext. However, certain issuing banks/processors have adopted a better approach: the CVV is generated on the fly (by a Hardware Security Module) during authorization and compared with the CVV sent in the transaction; if the two match, the transaction is authorized. This is a good practice, ensuring that CVVs aren't stored by the organization at all. But these implementations (in my experience) are still the minority. PCI 2.0 now allows issuing organizations to store sensitive authentication data like the CVV; issuers and processors supporting issuing services have been exempted from this requirement. I believe this is a bad move, because issuing orgs now have no impetus to change over to better (and more secure) practices for handling Sensitive Authentication Data.
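The generate-and-compare approach can be sketched as follows, with loud caveats: the real CVV algorithm runs on DES-based Card Verification Keys inside an HSM, so the HMAC derivation, the key, and the field layout below are purely hypothetical stand-ins that make the idea runnable without an HSM. The point is that nothing is stored; the reference CVV is re-derived for every transaction:

```python
import hashlib
import hmac

# Stand-in for the Card Verification Key that would live inside the HSM.
ISSUER_KEY = b"demo-cvk-not-a-real-key"

def derive_cvv(pan: str, expiry: str, service_code: str) -> str:
    """Re-derive the card's CVV from card data (HMAC stand-in, 3 digits)."""
    digest = hmac.new(ISSUER_KEY, f"{pan}|{expiry}|{service_code}".encode(),
                      hashlib.sha256).hexdigest()
    # Keep only decimal digits, loosely mimicking the real decimalization step.
    digits = "".join(ch for ch in digest if ch.isdigit())
    return digits[:3]

def authorize(pan: str, expiry: str, service_code: str, cvv_presented: str) -> bool:
    """Generate the CVV on the fly and compare in constant time; store nothing."""
    return hmac.compare_digest(derive_cvv(pan, expiry, service_code), cvv_presented)

good_cvv = derive_cvv("4111111111111111", "2512", "101")
print(authorize("4111111111111111", "2512", "101", good_cvv))  # True
print(authorize("4111111111111111", "2512", "101", "0000"))   # False
```

Because the issuer can always regenerate the CVV from data it already holds, there is no functional need to persist it, which is exactly why the new exemption removes a useful incentive.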

The standard also changes some key issues with key management (no pun intended). It now states that key-encrypting keys (KEKs) must be at least as strong as the data-encrypting keys (DEKs) they protect. As can be imagined, the DEK encrypts the data, and the KEK, as an additional measure of security, encrypts the DEK. In many applications the DEK is a symmetric key (AES-256 or 3DES, say) while the KEK is an asymmetric keypair (RSA, etc.). This is usually done for reasons of efficiency: bulk data encryption is a heavy process, so fast symmetric ciphers are used for it, while asymmetric crypto is used for KEKs because a private/public keypair is easier to manage and secure than yet another shared symmetric key. However, with the PCI mandate that KEKs be of equivalent strength to their DEKs, the tables are turned. The asymmetric (RSA) equivalent of a 256-bit symmetric key is roughly 15360 bits. Ouch.
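For reference, the comparable-strength figures (per NIST SP 800-57 Part 1) can be captured in a small lookup; the variable names here are mine:

```python
# Comparable key strengths per NIST SP 800-57 Part 1: the RSA modulus
# size needed to match a given symmetric security level. This is why
# "KEK at least as strong as the DEK" hurts when the KEK is RSA and
# the DEK is AES-256.
COMPARABLE_RSA_BITS = {
    80: 1024,    # legacy strength, already on its way out
    112: 2048,   # roughly two-key 3DES level
    128: 3072,   # AES-128
    192: 7680,   # AES-192
    256: 15360,  # AES-256
}

dek_strength = 256  # an AES-256 data-encrypting key
print(f"RSA KEK needed: {COMPARABLE_RSA_BITS[dek_strength]} bits")
```

The practical takeaway: either move the KEK to a symmetric key of matching strength, or be prepared for very large (and slow) RSA moduli.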

A good fallout of the key management requirements concerns the 'split-knowledge' requirement. Earlier, the standard mandated split knowledge and dual control of encryption keys by key custodians for key generation and similar operations. This was a serious issue for applications: key management could be automated, but because of this requirement, developers had to come up with clunky executables (or other kludges) that would let key custodians each enter half a key for generation or key change. Now, however, the standard clarifies that split knowledge and dual control of keys by custodians are necessary only for manual key-management processes (where they make sense). Great news for applications, where key management processes are (and ideally should be) automated.
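For completeness, here is a toy sketch of what the manual ceremony achieves. The XOR-of-components scheme is a common convention for split knowledge, but the sizes and helper names below are illustrative assumptions; in practice the combination step happens inside an HSM:

```python
import secrets

# Split knowledge / dual control: each custodian holds one random
# component, and the working key is the XOR of all components. No
# single custodian ever knows, or can reconstruct, the full key.

KEY_LEN = 16  # e.g. a 128-bit key

def make_components(n_custodians: int) -> list:
    """One independent random component per custodian."""
    return [secrets.token_bytes(KEY_LEN) for _ in range(n_custodians)]

def combine(components: list) -> bytes:
    """XOR all components together to form the working key."""
    key = bytes(KEY_LEN)
    for comp in components:
        key = bytes(a ^ b for a, b in zip(key, comp))
    return key

parts = make_components(2)
key = combine(parts)
assert key != parts[0] and key != parts[1]  # neither component reveals the key
```

When key management is automated end to end, no human ever handles key material, so this ceremony adds nothing, which is precisely the clarification v2.0 makes.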

This ends Part 1 of my PCI-DSS 2.0 review. This will be followed up with the rest of the review in Part 2. Hope you find it useful!


Disclaimer

The views presented in this blog are entirely mine and are not those of my company.

© Abhay Bhargav 2010