Tuesday, January 25, 2011

Post Pen-Test scenarios for the CISO

A Penetration Test can be hard for a company. Over the past few years of doing penetration testing for companies, I have seen many of them undergo a pen-test for the first time in their corporate existence. The results are anything but cheery. It is usually a gruesome mess of unpatched servers, misconfigured network devices and wildly insecure apps that can weaken the resolve of the steeliest of CISOs (Chief Information Security Officers) or their equivalent.

A typical scenario emerges. The CISO receives this 'bloodstained' Pen-test report from the pen-testing company and one of the following usually happens:
  • The CISO utters words like 'This problem will be fixed in the next week, or I will not rest', subsequently files the report in his cupboard and forgets about it for posterity, until there is a massive security breach.
  • The CISO delegates to the IT team or the team handling the implementation, and they provide an Excel sheet saying that they fixed everything (even though they didn't). The CISO believes it because they are (supposed to be) doing their job. The problem festers...
  • The CISO receives the report and shares very limited (or even wrong) information with his key staff, citing 'security reasons' or reasons of distrust. This leads to limited action because no one trusts anyone, and the organization stays blissfully insecure (and not only emotionally).
  • The CISO doesn't understand the technicalities. Blanket statements about IT security and governance are made, and the ground realities are forgotten until a security breach shakes the very ground beneath them.
A successful pen-test is the first step towards better information security for an organization. The focus of a CISO should be to leverage the results of a pen-test and drive efforts to ensure tactical information security and strategic security directives for the organization. Here are a few focus areas for the CISO who wants to get the best from a pen-test:

Quality: A good quality pen-test should be the first focus area for the CISO. The pen-test is conducted against the scoped IT environment (the IT components in scope for the test). The pen-tester should have used a solid methodology (look for a specific 'methodology' section in the report). Most pen-tests are not really pen-tests but cursory vulnerability assessments where the pen-tester has run some automated tools and identified vulnerabilities. The focus should be on depth, and depth is achieved through penetration attempts against the target IT component and the results of those attempts. A critical aspect of a quality pen-test is the report. The report has to be clear, comprehensive and provide specific recommendations for the identified vulnerabilities, and it should ensure that the implementers are able to comprehend and implement those recommendations. Another note to CISOs: DO NOT automatically accept a lower-cost penetration test; it usually means lower quality. These are matters of your organization's information security.

Communication: I recently came across a CISO who wouldn't share the results of a pen-test with the implementation teams that were required to fix the issues, citing 'security reasons', as he didn't trust any of them to deliver. Many CISOs do not follow the 'trust but verify' rule because they do not have the skills to verify. They provide limited (or even wrong) information to their implementation teams on fixes, and these cursory fixes are of little value in effectively correcting the vulnerability in the system(s). CISOs should communicate effectively with the implementation teams, providing them enough information to comprehensively fix the issue. Meaningful data from the pen-tester's results, such as screenshots and downloaded files, should be passed on to the implementation team so that they can grasp the issue and fix the vulnerability as effectively as possible.

Project Management: A penetration test, from a CISO's standpoint, is a project in itself. The CISO has to define a project management plan for fixing the vulnerabilities. Based on the risk ranking of the vulnerabilities, the fixes should happen against defined timelines. A culture of information security is tough to implement, as people naturally tend to be convenience-oriented rather than security-oriented, and this behaviour manifests itself as vulnerabilities in the IT systems of the organization. The CISO has to cultivate a culture of security execution, where every team responsible for the fixes delivers on them, and the fixes are verified for effectiveness and propriety before being signed off on. Sometimes there are long-term or deep-rooted issues that take much more time or effort to fix; for instance, implementing encryption on a production database containing millions of records. The implementation of the fix in this scenario is a complicated one, requiring redesign and downtime. In that case, the CISO should actively consider and design compensating controls to ensure that the lack of the primary control is suitably covered by secondary controls. The CISO should also contract with the pen-tester (either internal or external) to perform a re-test of the previous scope to ensure that the vulnerabilities have actually been fixed.
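To make this concrete, here is a minimal sketch (in Python, purely illustrative) of how a remediation plan can be tracked against risk rankings. The SLA windows, finding IDs and owners are hypothetical and should be replaced with your organization's own risk ranking and timelines.

```python
from datetime import date, timedelta

# Hypothetical SLA windows per risk ranking (illustrative, not from any standard)
REMEDIATION_SLA_DAYS = {"Critical": 7, "High": 30, "Medium": 60, "Low": 90}

def remediation_plan(findings, report_date):
    """Attach a due date and sign-off checklist to each pen-test finding."""
    plan = []
    for f in findings:
        due = report_date + timedelta(days=REMEDIATION_SLA_DAYS[f["risk"]])
        plan.append({
            "id": f["id"],
            "risk": f["risk"],
            "owner": f["owner"],
            "due_date": due,
            "fixed": False,               # set by the implementation team
            "verified_by_retest": False,  # set only after the pen-tester re-tests
        })
    return plan

findings = [
    {"id": "VULN-001", "risk": "Critical", "owner": "DB team"},
    {"id": "VULN-002", "risk": "Medium", "owner": "Network team"},
]
for item in remediation_plan(findings, date(2011, 1, 25)):
    print(item["id"], item["risk"], "due", item["due_date"])
```

The two flags at the end are the important bit: a fix is not 'done' when the implementation team marks it fixed, only when it survives a re-test.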

Review of Processes: Oftentimes, I have noticed that CISOs blindly fix individual findings without paying heed to the flawed process or culture that allowed the vulnerability to exist in the first place. This is commonly seen with patch management, where a flawed patch management process leads to inconsistent application of security patches across critical systems, leaving vulnerabilities in older versions at large and providing easy access to an attacker through a code execution or denial of service exploit against the vulnerable system. The CISO should review the processes and procedures that exist in the organization and consider amending or overhauling them based on evolving security threats and the organization's response to them. This can ideally be achieved by reviewing pen-test or vulnerability assessment results from previous quarters or years.
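As a rough illustration of the kind of process check I mean, a sketch like the one below can surface patch inconsistency across systems before an attacker does. The inventory, platforms and baseline versions are made up; in practice this data would come from your asset inventory or patch management tooling.

```python
# A minimal sketch of a patch-consistency check: compare the patch level reported
# by each server against the current baseline for its platform. The inventory data
# and baseline versions below are hypothetical.
BASELINE = {"debian-5": "5.0.8", "windows-2003": "SP2"}

inventory = [
    {"host": "web01", "platform": "debian-5", "patch_level": "5.0.8"},
    {"host": "web02", "platform": "debian-5", "patch_level": "5.0.4"},   # lagging
    {"host": "dc01", "platform": "windows-2003", "patch_level": "SP1"},  # lagging
]

def patch_gaps(inventory, baseline):
    """Return hosts whose reported patch level does not match the baseline."""
    return [h for h in inventory if h["patch_level"] != baseline[h["platform"]]]

for host in patch_gaps(inventory, BASELINE):
    print(f"{host['host']}: at {host['patch_level']}, baseline is {BASELINE[host['platform']]}")
```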

Consistency: A penetration test is not a one-time activity (or at least, it shouldn't be). The CISO should ensure that a pen-test is conducted bi-annually, with a quarterly vulnerability assessment. Threats evolve constantly and exploit code is written every day for myriad software and applications. This necessitates repeated assessment of the organization's IT environment over time. If the organization's IT environment is massive, the penetration test should cover representative samples of all IT components: routers, firewalls, desktops, servers and applications. The findings from these tests should then be used to harden the rest of the components in those sample classes.
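For the sampling approach, something as simple as stratified random selection per asset class works as a starting point. The asset lists and the sample size of three per class below are purely illustrative.

```python
import random

# A rough sketch of stratified sampling for a large environment: pick a handful of
# representatives from each asset class rather than testing every component.
assets = {
    "routers": ["rtr01", "rtr02", "rtr03", "rtr04"],
    "firewalls": ["fw01", "fw02"],
    "servers": [f"srv{i:02d}" for i in range(1, 21)],
    "desktops": [f"dsk{i:03d}" for i in range(1, 201)],
}

def sample_per_class(assets, k=3, seed=None):
    """Pick up to k random representatives from every asset class."""
    rng = random.Random(seed)
    return {cls: rng.sample(items, min(k, len(items))) for cls, items in assets.items()}

print(sample_per_class(assets, k=3, seed=42))
```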

Penetration testing is a tough gig for entities to take on. More often than not, they find themselves staring at very adverse results. They often lose heart, and the exercise becomes nothing more than lip service or a compliance check. Using some of the techniques I have discussed (and I am sure there are many other concepts), I think CISOs or their equivalents in organizations can make a positive and meaningful change in the security stance of their organization.

Friday, October 29, 2010

What's new with PCI-DSS 2.0 - Part 1

The much-awaited PCI-DSS standard v2.0 is out now. The PCI Security Standards Council (SSC) released the standard on the 28th of Oct 2010, and it is available for download at their site, along with the Summary of Changes from v1.2.1 to 2.0. Some of the changes are small changes in verbiage, just clarifying the stance on certain issues, while others may have medium to large-scale impacts in certain PCI environments. In this two-part blogpost I will be discussing some of the key changes that may have that kind of impact.

The first thing I noticed was the amount of detail provided to QSAs at the beginning of the document. The Report on Compliance details, scoping, etc. have been spelled out very thoroughly. This is good, because amid fears over the lack of quality norms among QSAs, reports had been lax on detail (due to laxity of testing). The SSC has been consistently trying to improve the quality of assessment, and the initial sections of the Security Assessment Procedures are evidence of that.

Requirement 1, the firewall and network security requirement, has largely gone through changes in verbiage, clarifying some of the questions about implementation of firewalls and network segmentation. The IP masquerading requirement, which used NAT/PAT as the benchmark, has been extended to include load balancers, content caches, firewalls, etc. Also, employees with personal firewall software on their computers in the PCI-scoped environment should be unable to turn it off. Sensible, but largely basic.

Clarity on the "One Primary Function Per Server" rule has been given at last. The rule has been interpreted in several ways, but there is a measure of clarity with the 2.0. The Standard stipulates that you must use one primary server per function where the security levels of those functions vary, for instance, DNS, Web and DB server, or Card Management Application Server and Database Server. They have also indicated that in a VM environment, one primary function per virtual machine is in order. This was a requirement that was being taken to a ridiculous level on both sides of the spectrum.

Another important change, which comes across as innocuous but can have far-reaching implications, is the non-console administrative access requirement. The standard stipulates that when accessing system components like network devices and servers for non-console administration, encrypted protocols like SSH, SSL/TLS or IPSec have to be used. In earlier avatars of the standard, this was just a simple allusion to SSL or SSH or IPSec, but the standard now mandates strong cryptography. This causes quite an issue with network devices that ship with SSL implementations that still support SSLv2 or MD5, or, in the case of SSH, SSHv1. These were taken as compliant (with PCI 1.2.1) because they supported encrypted non-console admin access. Now, however, with the strong-crypto requirement for non-console admin access, these will have to be overhauled with better SSL certificates and SSH implementations, and, even in the case of IPSec, stronger crypto.
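If you want a quick way to check whether a management interface still negotiates legacy protocols, here is a small Python sketch that attempts a handshake capped at TLS 1.0. This is only a rough probe under assumptions: the hostname is hypothetical, modern OpenSSL builds cannot even offer SSLv2/v3 from the client side, and depending on your local OpenSSL security level you may need to relax it for the legacy handshake to be attempted at all.

```python
import socket
import ssl

def accepts_legacy_tls10(host, port=443, timeout=5):
    """Return True if the server completes a handshake when the client caps
    the protocol at TLS 1.0. This only probes the oldest version this client
    can still offer; SSLv2/SSLv3 support needs a dedicated scanner."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE        # we only care about protocol support
    ctx.maximum_version = ssl.TLSVersion.TLSv1
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False

# Example: flag management interfaces that still negotiate TLS 1.0
# print(accepts_legacy_tls10("mgmt.example.internal", 443))
```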

One of the requirements that I believe will set PCI back by some measure is Requirement 3.2. This requirement mandates that entities should not store Sensitive Authentication Data under any circumstances (even if encrypted). This requirement was extremely difficult to enforce in issuing banks or issuing processors, as several of them run on legacy mainframe apps, and these apps not only store the CVV (aka Card Security Code) but also log the full card track data and transaction in cleartext. However, certain issuing banks/processors have adopted the practice where the CVV is generated on the fly (by a Hardware Security Module) for authorization and compared with the CVV sent in the transaction; if the CVVs match, the transaction is authorized. This is a good practice, which ensures that CVVs aren't stored by the organization. But these implementations (in my experience) are still the minority. PCI 2.0 now allows issuing organizations to store sensitive authentication data like the CVV; issuers and their processors have been excepted from this requirement. I believe this is a bad move, because issuing orgs now have no impetus to change over to better (and more secure) practices for handling Sensitive Authentication Data.
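For readers unfamiliar with the 'generate and compare' pattern, here is an illustrative sketch of the flow. To be clear, this is not the real CVV/CVC scheme (which is a DES-based algorithm run inside an HSM with issuer keys); the HMAC below is only a stand-in to show that the value is recomputed per transaction and never stored.

```python
import hmac
import hashlib

# Hypothetical issuer key; in reality this never leaves the HSM
ISSUER_KEY = b"hypothetical-key-held-inside-the-HSM"

def derive_cvv(pan, expiry, service_code):
    """Recompute an illustrative 3-digit check value from card data."""
    digest = hmac.new(ISSUER_KEY, f"{pan}|{expiry}|{service_code}".encode(),
                      hashlib.sha256).hexdigest()
    # Take the first three decimal digits of the digest as the illustrative CVV
    return "".join(ch for ch in digest if ch.isdigit())[:3]

def authorize(pan, expiry, service_code, cvv_from_transaction):
    expected = derive_cvv(pan, expiry, service_code)
    # Constant-time comparison; nothing is stored, only recomputed per request
    return hmac.compare_digest(expected, cvv_from_transaction)

print(authorize("4111111111111111", "1312", "201",
                derive_cvv("4111111111111111", "1312", "201")))
```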

The standard also changes some key issues with key management (no pun intended). It now states that key-encrypting keys (KEKs) need to be at least as strong as the data-encrypting key (DEK) they protect. As can be imagined, the DEK encrypts the data, and the KEK, as an additional measure of security, is used to encrypt the DEK. In many applications the DEK is a symmetric cipher key (like AES-256 or Triple DES), while the KEK is an asymmetric keypair (RSA, DSA, etc.). This is usually done for reasons of efficiency: data encryption is a heavy process, so symmetric encryption is used for its higher speed, while asymmetric keys are used for KEKs because a private-public keypair is easier to secure and manage than yet another symmetric key. However, with the PCI mandate that DEKs and KEKs be of equivalent strength, the tables are turned. The equivalent of a 256-bit symmetric cipher is roughly a 15360-bit RSA key. Ouch.
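To illustrate the DEK/KEK pattern being discussed, here is a minimal sketch using the third-party Python 'cryptography' package: an AES-256 DEK encrypts the data and an RSA keypair acts as the KEK that wraps the DEK. The key sizes here are illustrative and are not a claim about PCI-equivalent strength.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Data-encrypting key: fast symmetric cipher for the bulk data
dek = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(dek).encrypt(nonce, b"PAN: 4111111111111111", None)

# Key-encrypting key: asymmetric keypair used only to wrap the DEK
kek = rsa.generate_private_key(public_exponent=65537, key_size=3072)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_dek = kek.public_key().encrypt(dek, oaep)

# Decryption path: unwrap the DEK with the KEK, then decrypt the data
recovered_dek = kek.decrypt(wrapped_dek, oaep)
plaintext = AESGCM(recovered_dek).decrypt(nonce, ciphertext, None)
assert plaintext == b"PAN: 4111111111111111"
```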

A good fallout of the key management requirements is the 'split knowledge' requirement. Earlier, the standard mandated split knowledge and dual control of encryption keys by key custodians for key generation and similar operations. This was a serious issue for applications, where key management could be automated; because of this requirement, developers used to come up with clunky executables (or other kludges) that would allow key custodians to enter half a key each for generation or key change. Now, however, the stance has changed: split knowledge and dual control of keys by custodians are only necessary for manual key management processes (where they make sense). Great news for applications, where key management processes are (and ideally should be) automated.
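For those who haven't seen a manual key ceremony, here is a toy sketch of split knowledge with dual control: each custodian holds a random component and the working key only exists when both components are combined. This is an illustration of the concept, not a production key-loading procedure.

```python
import secrets

# Two custodians each hold one random key component; neither alone knows the key.
KEY_LEN = 32  # 256-bit key

def make_components():
    component_a = secrets.token_bytes(KEY_LEN)   # held by custodian A
    component_b = secrets.token_bytes(KEY_LEN)   # held by custodian B
    return component_a, component_b

def combine(component_a, component_b):
    """Recreate the working key from both components (dual control)."""
    return bytes(a ^ b for a, b in zip(component_a, component_b))

comp_a, comp_b = make_components()
working_key = combine(comp_a, comp_b)
print(len(working_key), "byte key assembled from two components")
```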

This ends Part 1 of my PCI-DSS 2.0 review. This will be followed up with the rest of the review in Part 2. Hope you find it useful!

Thursday, October 28, 2010

What's wrong with Penetration Tests, and how we can set it right (India Edition)

Penetration testing is complicated, especially so for organizations that have to fix the issues found in the debris that is their IT infrastructure. Over the past few months, I have had tons of experience leading and handling pen-tests for companies in the sub-continent, and I have decided to rant. I thought of writing down some of the problems that are out there, along with possible solutions, which would make these pen-tests a whole lot easier and a whole lot more efficient. They are:
Test Everything - Fix Nothing
This is a condition I have typically seen management have with internal pen-tests or large application pen-tests. Management would like to include a ridiculously large scope of components to test. They include everything from their database servers right down to the laptops that they use at home as part of the pen-testing activity. The result in most cases (especially with internal pen-tests involving client-side systems) is really ugly: multiple exploits, backdoors and, in some cases, traces of popular worms like Conficker (yes, I am not kidding). What follows is a gigantic report, and what follows that is... nothing. I have often seen that organizations that adopt this policy usually don't get anywhere in fixing the problems. They find so many loose ends to tie up, they huff and puff and eventually pack up and go home. This brings me to my first point in this rant-post:
Start small (or manageable) - Many organizations (especially ones new to VAs and pen-tests) usually go gung-ho and then fizzle out after seeing adverse reports. My advice to you is to test everything in doses that you can handle. Prioritize the critical components first and then phase your testing across the year. Also, testing everything (literally) may not be required. Mirroring results across similar IT components/applications is easier than literally testing every single component. For instance, upon finding security holes in a Debian server, it is probably a good idea to fix the issues on a sample of servers and roll out the same fixes across the other similar servers in the environment. Additionally, creating a solid hardening standard for Debian servers, coupled with patching, would remove the need to test every single one of these similar components.
Ridiculous Time Frames
Sometimes, we are given ridiculous time frames to work with. "Hey, can you test my e-commerce app in two days? I don't have more time than that. Also, I have to fix the problems after that." My reaction to that is usually, "Oh, you won't have too much to worry about fixing; I would probably need two days just to understand your site, and since you only have two days, I will give you a clean report", in my most sarcastic (and borderline mocking) voice. While time is a constraint, such super-constrained timelines only result in an untested and potentially insecure environment. I am sure no one, neither management nor the pen-tester, would like to turn up with false negatives and find that their application/server/network component was super-vulnerable only because there hadn't been the time to test extensively. That will come back to bite a lot of people.
Fixers are Breakers
This is a condition I normally see with application pen-tests, where the management is extremely clued in on the test before it commences, with statements like "We would like you to be extremely comprehensive and give us all our security holes right between the eyes". Later, when we deliver our report of their Hindenburg-like app and discuss it with them, the very same people become the worst enemies of everything sane and secure. I would be discussing a gaping business logic flaw, say a failed authorization check where a user would get to play admin with the application, and they try to find explanations along the lines of "but the user would not be able to do much even if he/she became admin" or "I think this was an intentional feature that we had in case of this eventuality". Earlier, I used to have a small explosion in my head, but yoga has taught me to react like Mr. Wolf from Pulp Fiction: "I am here to help; if my help's not appreciated, tough luck, gentlemen".

Saturday, March 27, 2010

we45's Newsletter 'The Fortitude' released today

'The Fortitude' is we45's maiden Information Security Newsletter. Our aim is to bring the latest news, views and information from the world of Information Security. This month's articles focus on the following:

Website Security - Organizational Identity Attacks: This article focuses on some of the newer threats affecting an organization's online identity, its website. It has been authored by Rahul Raghavan and the we45 Consulting Group and provides real-life examples from the world of website security.

Information Technology Act 2000 - An Evolution: Is the IT Act 2000 enough for a dynamic and ever-changing information security landscape? Sumana Naganand, Partner-Justlaw, explores some of the evolutionary trends of the Information Technology Act 2000 with reference to 'Phishing'.

Access Control Flaws - Chinks in the Web Application Armour: we45's CTO, Abhay Bhargav delves into some of the serious flaws in access control logic that can cause your company to lose reputation and revenue.

Download it here!


Hope you enjoy it.

Wednesday, March 17, 2010

Targeted Phishing - for the Big Fish

Another article on a similar topic prompted me to chronicle my own experiences with "Targeted Phishing". Targeted phishing is a variant of phishing that is specifically directed at an organization and its employees. Someone pretending to be a part of (or somehow connected to) your org would send you an email with news of an "Important Update" requiring you to log in to an application to perform the update. The rest, as they say, is history. While some would scoff at the notion that employees of an organization would fall for this sort of thing, I would like to tell you that my experience with some organizations (some of whom are our clients) says otherwise. Let us explore the why and how, and more importantly some of the sticky situations that can transpire as a result of targeted phishing:

The Situation: More organizations are taking to SaaS apps and apps in the proverbial cloud. While this is great for cost savings and ROI, it is also great for an individual intent on harvesting your organization's most sensitive information, leveraging the lack of awareness your people have about phishing in general.

Imagine someone inside your organization setting up a dummy application by copying the HTML code from Google Apps, Salesforce.com, or several others, with a login page interfacing to a database that he/she controls. Worse, imagine someone on the outside setting up a similar application and sending emails to your employees requesting them to log in to this application with their usernames and passwords. An even worse situation would be if this was set up to target an internal application that your organization hosts, one that carries customer data or other sensitive information.

What is the aftermath? Mostly, nothing. It is quite likely that this sort of attack would never be detected (even if you have a security team, sometimes - personal experience with one of our clients). This attack would never be published on the Internet as an attack (because it is targeted at YOUR organization). There will be no advisories or newspaper articles (a la the Income Tax phishing email). This attack will most likely never be discovered unless someone really looks at every form he/she is submitting, along with a lot of other details like the SSL certificate. Most people want to believe things, and they would forget about this "Update" as soon as they "sign in" to the dummy app. So, your CRM application's data may be harvested by an attacker for months.

What is the Solution?
I am sure your first reaction would be "No more SaaS and no more cloud", but I urge you to abandon this abstinent approach and focus on some of the constructive solutions.

Education: My first suggestion would be to educate users about such attacks. Some of our clients engage us to conduct targeted phishing attacks against their organizations and prove this point beyond doubt (because most of them fall for it), forcing their employees to take the security awareness training really seriously. In my opinion, the awareness trainings that happen today lack solid material, live examples and case studies. Ensure that your awareness trainings have solid material, or get an outside agency to perform awareness training to drive this point home to your employees.

Monitoring: Monitoring is rarely taken seriously. People are the only defense (or the vulnerability) in the case of social engineering attacks like phishing. Regular monitoring in the form of security surveys and questionnaires would give the organization some insight into user security awareness and responses. Supplementing this with pattern matching on emails flowing into the organization might also be a good way to keep this sort of attack at bay.
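As a flavour of what that pattern matching could look like, here is a small sketch that flags sender domains resembling, but not matching, the SaaS domains your staff actually use. The legitimate domains, lookalike patterns and sample addresses are all hypothetical.

```python
import re

# Flag inbound mail whose sender domain looks like, but is not, a domain we trust.
LEGITIMATE_DOMAINS = {"salesforce.com", "google.com"}
LOOKALIKE_PATTERNS = [re.compile(p) for p in (r"sales-?force", r"g[o0]{2}gle")]

def is_suspicious(sender):
    """Return True for lookalike sender domains that are not on the trusted list."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in LEGITIMATE_DOMAINS:
        return False
    return any(p.search(domain) for p in LOOKALIKE_PATTERNS)

for addr in ("alerts@salesforce.com", "update@sales-force-login.net", "it@g00gle-apps.org"):
    print(addr, "->", "suspicious" if is_suspicious(addr) else "ok")
```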

Saturday, February 13, 2010

My talk at the Business Technology Summit

I spoke on AppSec at the BT Summit on "Web Application Security for the Payment Card Industry". The talk was very well received and drew great responses from a very "in-tune" audience. The slide-deck was supposed to be made available on the BT site, but it requires some kind of authentication to access. Therefore, by popular demand, I have included the slide-deck in this blog post. Hope it is useful!

Sunday, November 15, 2009

Why you might be 'Californicated' by SB-1386

SB 1386 is something most of us haven't heard of. Amid the PCI and (fading) ISO juggernaut, organizations (especially outsourcing companies) have not taken cognizance of an important legal statute that might be a game changer for the way they do business with their principals in the US. Let me throw some light on what SB 1386 is all about. This is based on a conversation I had with another person from the outsourcing industry. The conversation might make a lot of sense to many people reading this...

What is the SB 1386?

SB 1386 is popularly known as the California Security Breach Information Act. It was enacted in the year 2002 and came into effect in 2003. The act focuses on the privacy of the personal information of residents of the state of California. It states that any organization that believes there has been a breach of unencrypted personal information of California residents is required to disclose the breach to those affected.

What is 'Personal Information'? It is very vague...
No, it's not vague. The act defines 'personal information' as the individual's first name or initial and last name in combination with one of the following: Social Security number, California state identification number, credit/debit card number, or PINs or access codes.

Ok. I am listening. Who does it apply to?
It applies to anyone doing business with anyone who is a California resident. If you have employees or customers in California, even a single one, it applies to you. If you are an outsourcing company with a customer whose employees or customers are California residents, it applies to you. If you store data for entities that hold information on California residents, it applies. Large or small makes no difference; it applies all the same.

That's alright. It's just a disclosure clause. No big deal...
That is where you are very wrong. You will have to disclose the breach to all those affected by it. This leads to a public relations war, which you might have to wage at a great deal of reputational and financial expense. Your reputation WILL go to the cleaners because of a breach. You WILL face lawsuits from angry consumers, and IF you are an outsourcing company, your customers will probably walk away from you and your prospects will NOT return your calls. Catch my point?

Yes, I think so. Wow, that seems worrying. I am an outsourcing partner for a lot of clients in the US. Can you tell me how it affects me?
Well, for starters, if you are a call center or a similar entity making outbound calls to US customers, you probably hold the information that the act defines as 'personal information', so you are in scope. If you are a back-end data processing center handling accounting, payroll or any other data processing activity for your client, then you are in scope. You will need to start securing all that data, and doing it seriously. A lot of companies have had their customers' data breached, and you don't want that to happen to you. See here and here

I think I need some water now. My throat has gone dry. Anyway, what do I do now? How do I prevent a disaster from occurring?
For starters, call in a professional to audit your information security practices, and let it be a thorough technical review, not just a documentation and policy audit. Conduct a risk assessment for the data you handle and store, and then formulate protection strategies in conjunction with your client. Have the auditor issue a formal audit report on completion and, please, for heaven's sake, follow all the advice you are given. Don't try to cut corners on security practices, or you will be in for a rude shock. Also, be especially vigilant about the employees working in your processes. It is very important to conduct periodic assessments and actively investigate any traces of malpractice by employees. Remember that insiders are the greatest cause of data theft in your industry.

Right then, but didn't you say something about encrypted data? So, if I encrypt data, will I not have to disclose?
Well, yes, but have you encrypted your data? And are you confident that your data has been consistently encrypted and the encryption keys managed properly for all of it?

I don't think I have encrypted any data. I am not really sure. I have got to check...
Then you most probably haven't. Anyway, you had better get going and do something about SB 1386, otherwise you might be in for a world of pain. Think on the lines of being shot in the face with an AK-47.

(Gulps) Yes, not a pleasant situation. Anyway, I got to go. See you then...
Bye...

Disclaimer

The views presented in this blog are entirely mine and are not those of my company.

© Abhay Bhargav 2010