
PCI DSS checklist: Mistakes and problem areas to avoid
Friday, November 27th, 2009

Neil Roiter, Senior Technology Editor
11.25.2009

The Payment Card Industry Data Security Standard (PCI DSS) has been a world-changing experience for many midmarket businesses, retailers and credit card processors that previously had little or no regulatory oversight for security.

“PCI has been their baptism,” said Steve Alameda, principal consultant of Data SafeGuard of San Francisco. “It’s one heck of a way to get baptized.”

Consultants who devote much of their practice to helping smaller organizations — mostly those with Level 3, 4 and some Level 2 self-assessment requirements — share some of the difficult lessons learned in the trenches.

Lesson 1: Don’t Underestimate PCI
Astonishingly, there’s anecdotal evidence that some smaller companies are still unaware they must comply with PCI. Level 4 merchants, those processing fewer than 20,000 e-commerce transactions annually, are the slowest to get the word.

Assuming your business is not in that situation, you’re facing requirements that are growing increasingly demanding. Self-Assessment Questionnaire D, which most covered organizations are required to complete, is far more detailed than the original 2007 version. Companies turn to consulting help for a variety of reasons:

  1. Lack of knowledge about their own environment. Small companies are wrapped up in doing business, not doing security. Once they realize what they have to protect and all the ways they might be exposed, light bulbs go off.
  2. Inability to comprehend the requirements. Few small companies have security people and most have, at most, a small IT staff that lacks the time and/or expertise to understand and complete the assessment.
  3. The requirements sink in. Organizations start out doing a self-assessment, then realize as they proceed they may have bitten off more than they can chew.
  4. Nobody wants to get it wrong. No one wants to go to the president and tell him/her that after all the time and money spent, the company is still not compliant.
  5. Companies think they have adequate security to meet the requirements. To most small businesses, that’s desktop AV and a firewall.


This may also mean underestimating cost. Companies that do their homework, either internally or in combination with outside help, will have a realistic expectation of what they’ll need to spend in terms of manpower, technology and services. For example, while they have AV and a firewall, chances are they have never given a thought to purchasing log management, IDS or file integrity-monitoring tools, let alone a Web application firewall.
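
To give a sense of what one of those tools actually does, here is a minimal file-integrity-monitoring sketch in Python. It is a hypothetical illustration, not any particular product: hash the files you care about, save a baseline, and flag anything that later differs.

    import hashlib
    import json
    from pathlib import Path

    BASELINE = Path("fim_baseline.json")  # hypothetical location for the baseline

    def sha256_of(path: Path) -> str:
        """Return the SHA-256 digest of a file's contents."""
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def snapshot(paths):
        """Hash every monitored file into a {path: digest} map."""
        return {str(p): sha256_of(p) for p in paths if p.is_file()}

    def check(paths):
        """Compare current hashes against the stored baseline and report drift."""
        current = snapshot(paths)
        if not BASELINE.exists():
            BASELINE.write_text(json.dumps(current, indent=2))
            print("Baseline created.")
            return
        baseline = json.loads(BASELINE.read_text())
        for path, digest in current.items():
            if baseline.get(path) != digest:
                print(f"ALERT: {path} is new or changed since baseline")

    if __name__ == "__main__":
        check(Path("/etc").glob("*.conf"))  # example watch list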

Also, small companies, unlike enterprises or smaller organizations in heavily regulated industries, are not accustomed to refreshing equipment, such as point-of-sale systems, every few years. In many cases, they need to either upgrade or replace older equipment to become or remain compliant.

Companies typically do not anticipate that they will have to make some fundamental changes in the way they do business. It’s not a matter of tacking on security, even for the little guys. You may, for example, store credit card information in Excel spreadsheets; now you need to convert all that information into databases and protect it.
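
To give a rough idea of what that conversion can look like, the Python sketch below drains a spreadsheet’s CSV export into a database while keeping only a masked card number and a salted hash, so the full number never lands in the new store. The file and column names are hypothetical, and whether masking plus hashing is enough depends on what the business still needs the data for.

    import csv
    import hashlib
    import os
    import sqlite3

    # Hypothetical file and column names; real exports will differ.
    SALT = os.urandom(16)  # in practice, persist and protect this outside the script

    def mask(pan: str) -> str:
        """Keep the first six and last four digits, masking the rest."""
        digits = pan.replace(" ", "")
        return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

    conn = sqlite3.connect("cards.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS cards
                    (masked_pan TEXT, pan_hash TEXT, customer TEXT)""")

    with open("cards_export.csv", newline="") as f:
        for row in csv.DictReader(f):
            pan = row["card_number"]
            digest = hashlib.sha256(SALT + pan.encode()).hexdigest()
            conn.execute("INSERT INTO cards VALUES (?, ?, ?)",
                         (mask(pan), digest, row["customer"]))
    conn.commit()
    conn.close()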

“It’s one of the hidden costs of PCI. I can’t tell you how many businesses we walk into where they have paper records — a warehouse of credit card receipts that’s intermixed with invoices, etc.” said Seth Peter, CTO of Minneapolis-based consultancy NetSPI. “One big area where companies underestimate costs is how do you stop doing that and how do you go back and clean it up?”

“They feel their environment is in pretty good shape, and don’t think they’ll need to make many changes,” said Data SafeGuard’s Alameda. “Then the reality hits that there will be a lot of changes.”

Lesson 2: Learn PCI Problem Areas
PCI presents a laundry list of prescriptive data security requirements, many of which can be a challenge for smaller companies, but some are especially likely to prove problematic.

Encryption: The PCI requirement that stored credit card data must be encrypted can be a formidable challenge. Face it: Many large enterprises have flinched at encryption projects. The reason is not the encryption itself — that’s relatively easy. But key management, with all its complexity and administrative overhead, and concern about recovering data if keys are lost, is another matter.

The PCI practitioners we spoke to said most of their clients — somewhat to their surprise — had some encryption in place, but mostly in one-off situations where they could more or less set it in place and forget it. With PCI, the requirements become more complex and companies need to turn to products that simplify key management or seek outside help to manage it for them.
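
To make the key-management point concrete, here is a minimal envelope-encryption sketch in Python using the cryptography package (our choice of tooling; the consultants named none). The data key that encrypts card records is stored only in wrapped form under a master key held elsewhere, which is exactly the administrative burden the consultants describe.

    from cryptography.fernet import Fernet

    # Envelope encryption in miniature. The master key would live in an HSM or
    # key-management service, not on the host that stores the data; only the
    # wrapped form of the data key is kept alongside the records.
    master_key = Fernet.generate_key()      # stand-in for an externally held key
    master = Fernet(master_key)

    data_key = Fernet.generate_key()        # key that actually encrypts records
    wrapped_data_key = master.encrypt(data_key)

    # Encrypt a record with the data key.
    record = b"4111111111111111,Jane Doe"
    ciphertext = Fernet(data_key).encrypt(record)

    # Later: unwrap the data key with the master key, then decrypt the record.
    recovered_key = master.decrypt(wrapped_data_key)
    assert Fernet(recovered_key).decrypt(ciphertext) == record

Losing the master key means losing every record wrapped beneath it, which is why the consultants point smaller companies toward products or services that manage keys for them.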

Policy: Midmarket companies are unlikely to have anything resembling a comprehensive security policy, unless they are already in a highly regulated industry, such as financial services. PCI Requirement 12 says that companies must maintain a policy that addresses information security. Sounds simple on the face of it, but when you dig into the details, this is really a complex set of requirements that impact many aspects of the business. It addresses all the other PCI requirements, and how to ensure that your employees and partners adhere to them.

It’s a complex area because it touches every part of the business and requires attention to things such as change management policy, which may be foreign to smaller businesses.

The best advice is to start with a set of base policies that can get a company through its assessment, then build from there. There are good resources, such as the SANS Institute, that provide policy templates organizations can use as a starting point.

“We help companies to set policies specific to their environment and general enough to work with and expand,” said Michael LaBarge, president and CEO of Datassurant Inc. of Reston, Va. “It gives them a starting point to improve their security posture, checks the box, and gets them on the right road.”

Application security: Section 6.6 of version 1.2 of PCI DSS now requires either application code review or a Web application firewall (WAF). Even large enterprises have been slow to adopt strong application security in code development, application security assessments or even Web application firewalls.

Most companies, lacking the expertise for internal reviews, have opted for WAFs, but the requirement has come as something of a shock to small businesses. Small organizations can consider outsourcing if they can find a service provider at a reasonable price.
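
For readers who have never seen one in action, the toy filter below shows the core idea of a WAF in miniature: inspect each request for hostile patterns before the application sees it. It is a deliberately crude Python sketch; real products ship rule sets orders of magnitude richer.

    import re
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import unquote

    # Crude signatures for illustration only; real WAF rule sets are far richer.
    BLOCK_PATTERNS = [
        re.compile(r"(?i)union\s+select"),  # naive SQL injection signature
        re.compile(r"(?i)<script"),         # naive cross-site scripting signature
    ]

    class FilteringHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Decode the URL first so encoded payloads don't slip past the filter.
            if any(p.search(unquote(self.path)) for p in BLOCK_PATTERNS):
                self.send_error(403, "Request blocked by filter")
                return
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"OK\n")

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), FilteringHandler).serve_forever()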

Lesson 3: PCI Compliance is Continuous
PCI ain’t over when it’s over. It’s very common for companies that don’t have a well-developed compliance program to pour time and intense effort into PCI compliance, then ease off once they pass. They’re setting themselves up for a lot of unnecessary and redundant work when the next year’s assessment comes around.

Compliance often requires changing some basic business practices. Once a company is compliant, the processes it laid out often stop being followed, because they cease to be urgent priorities and management may have little appetite for changing operations.

In addition, if your smaller company is typical, the effort put into achieving compliance has pulled people away from their day jobs. That means everyone plays catch-up with responsibilities that were neglected, and focus on compliance slips. That underscores the point that compliance processes need to become part of normal business operations, not simply a stack of “to-do” tasks.

Finally, roles and responsibilities at small companies are often not clearly defined. Duties go undocumented and may change quickly if the person who usually handles a task gets pulled off to do something else, is out sick or goes on vacation. If it’s not mission-critical for the business, it might not get done.

http://searchsecurity.techtarget.co.uk/tip/0,289483,sid180_gci1375428,00.html?track=NL-988&ad=736946&asrc=EM_NLT_10083467&uid=5392292

2010 - A Security Odyssey
Thursday, November 26th, 2009

By David Bell
Nov 25, 2009 5:30 PM

NetIQ’s David Bell presents his predictions for the IT security industry in 2010.

As yet another year draws to a close, it’s natural for any industry to glance back over the past 12 months and wonder what the future holds. For IT security professionals, 2009 has been a year of stretching constricted budgets to properly secure the enterprise against an ever-expanding array of threats.

Virtualisation and cloud computing have well and truly exploded, bringing with them a fresh breed of nasties for businesses to fend off. Compliance initiatives have continued to dominate our radars, especially in the credit card and online banking spaces where the challenge of securing customers’ electronic data has become a major focus in the boardroom.

In an effort to be one step ahead of whatever is on the horizon, it’s time to start asking what’s in store for 2010. Here are three predictions based on my discussions with customers out in security land: 

1. The slow rise of automated fraud detection
As financial institutions face ever more devious threats, automated fraud detection has been positioned as the next big thing. While it makes sense to automate information-gathering and event responses where possible, the technology is still too complex to be effective.

Part of the challenge is a lack of integration between security technology and processes. Fraud is typified by a complicated set of activities that cross many different elements of the organisation. The effectiveness of automated fraud detection programs is still a few years away because security programs lack the necessary maturity and information flow between technologies and operational silos.

I see more immediate value in the ability to monitor abnormal activity from privileged users, which could signify a potential breach.
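
As a sketch of what that kind of monitoring might look like at its simplest, the Python below (with a hypothetical event format) builds a baseline of each privileged user’s normal login hours and flags sign-ins that fall well outside it.

    from collections import defaultdict

    history = defaultdict(list)  # user -> login hours seen during the baseline period

    def observe(user: str, hour: int) -> None:
        """Record a normal-period login to build the per-user baseline."""
        history[user].append(hour)

    def is_abnormal(user: str, hour: int, slack: int = 2) -> bool:
        """Flag a login more than `slack` hours outside the user's usual range."""
        seen = history[user]
        if not seen:
            return True  # no baseline at all is itself worth a look
        return hour < min(seen) - slack or hour > max(seen) + slack

    # Build a baseline from business-hours activity, then test two logins.
    for h in (9, 10, 11, 14, 16, 17):
        observe("db_admin", h)
    print(is_abnormal("db_admin", 3))   # True: a 3 a.m. login stands out
    print(is_abnormal("db_admin", 15))  # False: within normal hours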

2. Keeping your data secure AND accessible
The second trend that will continue into 2010 is the focus on securing critical data while ensuring that data remains available to support business operations. Organisations have become very concerned with the security of large database management systems. These databases often hold particularly sensitive data and require highly specialised Database Activity Monitoring technologies to administer and audit access to them.

Protecting critical data, such as customer information, from being exposed in a breach has become the number one priority for organisations, and that will continue to be a main concern. Government legislation, industry mandates, and corporate best-practices all demand a data-centric and integrated security program. The real challenge for security teams over the coming year is how to take their existing investment in a broad range of security technologies and build a defence around sensitive, and therefore valuable, data stores.

3. More to compliance than security

Nearly every organisation faces the pressure of regulatory compliance. This is forcing security teams to provide far greater visibility into organisational risk, and to a larger number of stakeholders, than ever before. More and more people within the business now expect to see the results of the security team’s efforts in a form that is easy to understand. In 2010, this expectation will continue to drive a need for greater capabilities to measure risk and exposure, and to present that information in layman’s terms to stakeholders, particularly board members.

The challenge here is that board members see this investment and expect the money spent to benefit the business overall; they equate compliance success with good security. As security teams strive to demonstrate compliance to regulators and business stakeholders, they will also have to educate senior executives about the reality of security as an ongoing process. Technology and its threats evolve at such a rapid pace that a part of your network that’s secure today could easily be at risk tomorrow.

Onwards and upwards
These three predictions are intrinsically linked: in 2010 database security will be the defining goal of security and compliance teams. The visibility of breaches has reached the highest levels of the organisation, and the desire to avoid costly and embarrassing data violations has become something that everyone, from the CEO down, now takes seriously.

Data is the lifeblood of global businesses, and the costs of breaches are simply too high – we will have to adapt to a more managed, policy-driven and secure workplace. While 2009 was a year devoted to the security of newer technologies such as cloud computing, we should anticipate that the coming year will focus on the processes and policies surrounding data security and compliance. From awareness training, to policies on mobile computing, to greater scrutiny of user activity – process-driven security strategies will be key to protecting the reputation and ‘crown jewels’ of every enterprise.

David Bell is a Systems Engineer at NetIQ.

http://www.securecomputing.net.au/Opinion/161362,2010-a-security-odyssey.aspx

DIY: Defending Against A DDoS Attack
Friday, October 16th, 2009

Proactive self-defense can make DDoS attacks less painful and damaging

Oct 14, 2009 | 05:41 PM

By Kelly Jackson Higgins
DarkReading

There’s no way to prevent a distributed denial-of-service (DDoS) attack, but there are some do-it-yourself techniques and strategies for fighting back and minimizing its impact.

DDoS victims can “tarpit,” or force the attacking bot to drastically scale back its payload, enlist the help of the botnet hunter community, or even get help to wrest control of the botnet. Joe Stewart, a researcher with SecureWorks’ Counter Threat Unit, says these self-defense techniques are little known or used today by victims of DDoS attacks, but they offer an alternative to purchasing a commercial DDoS product or service and working with ISPs to try to stop an attack.

“You can’t prevent someone from launching the attack, but you can do a better job at mitigating it through technical measures,” Stewart says. Tarpitting doesn’t work in every case, he says, but it’s easy to deploy and doesn’t cost anything.

“Just being able to respond better to these attacks is something that requires relationship-building with people who have pieces of the puzzle,” such as the research community, he says.

Tarpitting works against HTTP-based attacks, which researchers say make up the majority of DDoS attacks today. HTTP-based DDoS attacks are often more effective than SYN flood DDoS attacks, and it’s easier to max out the Web server’s connections or CPU/memory than to overload the pipe with a SYN flood, experts say.

The tarpit method works with TCP/IP features embedded in Linux, namely the Netfilter framework, according to Stewart, and can be used on a Windows server with the help of a tarpit toolkit, such as LaBrea. Tarpitting basically forces the bot to send the victim’s server less traffic. “You use it to say to the attacker, ‘I’m so congested that you can’t send me any more than 1 byte before I respond to you,’ for instance,” Stewart says. “The attacker gets in a loop trying to send 1 byte and waiting for a response [he] never gets.”
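
Stewart’s description maps onto standard socket behavior. The Python sketch below is a hypothetical illustration, not his actual setup: it shrinks the receive buffer before listening so the kernel advertises a tiny TCP window, then drains each connection as slowly as possible.

    import socket
    import time

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    # Shrink the receive buffer *before* listen() so the small TCP window is
    # advertised from the handshake on; the kernel may round the value up to
    # its minimum, which is still far below the default.
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 32)
    srv.bind(("0.0.0.0", 8080))
    srv.listen(128)

    while True:
        conn, addr = srv.accept()
        print(f"tarpitting {addr}")
        try:
            while conn.recv(1):   # accept one byte at a time...
                time.sleep(10)    # ...and make each byte cost ten seconds
        except OSError:
            pass
        finally:
            conn.close()

This single-threaded sketch holds only one bot at a time; the in-kernel Linux approach Stewart refers to, or a toolkit like LaBrea on Windows, achieves the same effect across thousands of connections at once.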

And unless the botnet operator is closely monitoring his bots, he won’t notice the slowdown. The only clue that the DDoS attack was foiled? Its target didn’t go down as the attacker had expected, Stewart says.

Stewart says when he tested tarpitting against an attack bot, he found another interesting side effect of the defense method: One bot’s CPU hit 100 percent, rendering the system unusable. “It almost reflected the DDoS attack back onto them. In their attempt to maintain all these connections and retries, it started using up all the CPU time on the system,” Stewart says.

Jose Nazario, manager of security research for Arbor, says he sees few DDoS victims using these techniques today. “Unfortunately, it’s pretty rare. It’s valuable,” he says. “The [tradeoff] is that it can have a negative impact on legitimate PC users [who are bot-infected]. After a while, they can’t make any requests at all.”

The safest defense against DDoS attacks is to recruit the help of researchers with expertise in botnets. Stewart recommends IT security teams get out and meet their peers and researchers, attending ISSA and InfraGard meetings, for instance. The key is getting help in tracking down the offending botnet’s command and control (C&C) servers, he says. “It could be something as simple as getting a hosting provider to take down a C&C by providing them proof that a host [using their service] was attacking you,” he says.

And there are some researchers willing to venture into a grey legal area and actually go in and take over a botnet, he says. “Gaining unauthorized access to an infected computer is not something [SecureWorks] would do here,” he says. “But there are some other researchers who’ve shown they are willing to take over botnets and issue them commands. If you’re under attack, it’s a kind of self-preservation.”

Stewart says C&C servers are often vulnerable themselves to common Web attacks, like cross-site scripting and SQL injection. “They are usually sloppily programmed,” he says. “And you can get a lot of knowledge from a SQL injection [vulnerability] in their script. But legally, this is probably not a good idea.”

Meanwhile, some security experts like HD Moore have used more aggressive methods to fight a DDoS attack. Moore, creator of Metasploit, had a little fun at his DDoS attackers’ expense earlier this year, turning the tables on the botnet that hammered away at Metasploit’s servers. Moore changed DNS records in an attempt to evade the attackers, and also tried using Google Sites’ Web hosting to mitigate the DDoS, but once Google Sites hit its page limits, he had to abandon that tack.

He was able to eventually narrow down the C&C domains after enlisting the help of botnet researchers. The researchers black-holed one of the domains, and Moore then executed a “reverse” on the other two C&C domains, pointing the traffic that was flooding his Metasploit site back onto the attackers’ domains so they were DDoS’ing themselves.

But these techniques are a bit too technical and risky for most enterprises. SecureWorks’ Stewart, who was one of the researchers who helped Moore find the culprit C&C domains, says it would be possible for an enterprise hit by a DDoS to follow Moore’s lead by changing its IP address to that of the C&C’s IP. “If the bots are attacking you by looking up your host name, you can change your IP address to the C&C IP once you learn where it is. This is easy, but causes [your site] to be down still, and causes your legit traffic to visit a botmaster-owned site — a little scary if it comes back up before you change the DNS back,” he says.

He says it’s best to use legitimate abuse-reporting channels in the security community to help take down a botnet.

Reference:

http://www.darkreading.com/security/attacks/showArticle.jhtml?articleID=220600886&cid=nl_DR_WEEKLY_H