In Nov 2008, Microsoft announced that it is going to start offering free anti-virus/spyware/trojan/rootkit protection. Say bye-bye to Symantec and McAfee’s cash cows. It looks like it took about five years to make it happen, assuming they are using the technology they acquired back in 2003 via GeCAD.
So the big question is how long it will take them to go free, or almost free, in the enterprise market. My guess: late 2009 or early 2010, based on this acquisition.
How good will it be? Who the heck knows, but competing against free is always hard. It is especially hard when people already hate buying anti-* software. Why buy that when I can get this for free from MS?
The last question is how Symantec, McAfee, Trend Micro, et al. are going to recoup all that lost revenue. I have not looked lately, but not long ago the home and SMB markets were major piles of cash for those companies. The smart ones will look at other acquisitions to bolster their bottom line. I don’t think it can be just one; they are going to have to go on a bit of a spending spree or die.
Cory Doctorow has a good article on the gap between the speed of detecting an attack and automatically responding to it, and the slowness of recovering from a mis-applied block.
As usual, Cory outlines it in brilliantly simple, straightforward terms that anyone can understand. I used this tactic quite a bit when doing a no-holds-barred pen-test. Basically, I spoofed UDP-based attacks against a host, using the central log server’s IP address as the source of the attack. This triggered the HIDS on that host to block traffic to and from the log server. Instant invisibility from the folks monitoring the central log server.
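For the curious, here is a minimal sketch of how such a spoofed packet is constructed. The addresses and ports are made up for illustration, and actually sending a hand-built IP header requires root privileges and a raw socket; this just shows why the trick works — nothing in the packet proves who really sent it.

```python
import socket
import struct

def build_spoofed_udp(src_ip, dst_ip, src_port, dst_port, payload):
    """Build an IPv4+UDP packet whose source address is forged.

    The HIDS on the target sees src_ip as the attacker and blocks it,
    even though src_ip (here, the central log server) sent nothing.
    """
    # UDP header: source port, dest port, length, checksum (0 = unset)
    udp_len = 8 + len(payload)
    udp_header = struct.pack("!HHHH", src_port, dst_port, udp_len, 0)

    # IPv4 header: version/IHL, TOS, total length, ID, flags/fragment,
    # TTL, protocol (17 = UDP), checksum (0; the kernel fills it on a
    # raw send), forged source address, real destination address.
    ip_header = struct.pack(
        "!BBHHHBBH4s4s",
        0x45, 0, 20 + udp_len, 0, 0,
        64, socket.IPPROTO_UDP, 0,
        socket.inet_aton(src_ip),
        socket.inet_aton(dst_ip),
    )
    return ip_header + udp_header + payload

# Hypothetical addresses: 10.0.0.5 is the log server, 10.0.0.9 the
# victim host running the HIDS; 514 is the classic syslog port.
pkt = build_spoofed_udp("10.0.0.5", "10.0.0.9", 514, 514, b"fake attack")
```

Sending it would be a one-liner with `socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_RAW)` as root; the point is that the forged source address is just four bytes anyone can write.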
This also really drives home my issue with auto-blocking web application firewalls. When your WAF decides to block someone from using your website, they have no recourse that will respond as fast as the blocking of their traffic did. If Mr. O’Reilly can’t check out, he will just move along to someplace he can.
When you deploy any mechanism that auto-blocks traffic, you must take great care to think through the methods of blocking, the thresholds at which something gets blocked, and how to recover from inadvertently blocking a legitimate person or host.
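Those three safeguards can be sketched in a few lines. This is a toy policy, not any real product’s design — the class name, thresholds, and allowlist idea are all illustrative — but it shows the shape of a blocker that at least can’t be turned against your own log server and that un-blocks itself over time:

```python
import time

class AutoBlocker:
    """Toy sketch of a cautious auto-blocking policy.

    Three safeguards: a threshold before anything is blocked, blocks
    that expire on their own (TTL), and an allowlist so critical hosts
    (log servers, partners) can never be auto-blocked.
    """

    def __init__(self, threshold=10, block_ttl=300, allowlist=()):
        self.threshold = threshold        # suspicious events before blocking
        self.block_ttl = block_ttl        # seconds a block lasts
        self.allowlist = set(allowlist)   # never auto-block these sources
        self.counts = {}
        self.blocked_until = {}

    def record_event(self, src_ip, now=None):
        now = time.time() if now is None else now
        if src_ip in self.allowlist:
            return False                  # critical host: log it, never block
        self.counts[src_ip] = self.counts.get(src_ip, 0) + 1
        if self.counts[src_ip] >= self.threshold:
            self.blocked_until[src_ip] = now + self.block_ttl
        return self.is_blocked(src_ip, now)

    def is_blocked(self, src_ip, now=None):
        now = time.time() if now is None else now
        return self.blocked_until.get(src_ip, 0) > now

    def unblock(self, src_ip):
        """Manual recovery path for a mis-applied block."""
        self.blocked_until.pop(src_ip, None)
        self.counts.pop(src_ip, None)
```

The spoofed-log-server attack above fails against this sketch precisely because of the allowlist; the TTL and the explicit `unblock` are the recovery paths for everyone else.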
Sometimes I just want to give up. I really hate XSS because it is a genuinely tricky issue to explain to people who don’t understand it. It basically boils down to bad people using my website to compromise its clients. What they do with those compromised clients can range from fairly benign replicating worms and phishing scams all the way to total remote control of the end user’s browser. The fine folks at ScanAlert clearly don’t think this is a problem, though.
It is hard enough to educate website owners that this is a problem and how it impacts them, without having to fight against people in our own industry telling them it is OK to have XSS vulnerabilities.
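For anyone who still needs the two-minute version: the whole vulnerability class comes down to echoing user input into a page without encoding it. A minimal sketch (the function names and the page markup are invented for illustration):

```python
import html

def render_search_results(query):
    """Naive rendering: echoing the query verbatim into the page is the
    classic reflected-XSS mistake, commonly found in search functions."""
    return "<p>Results for: " + query + "</p>"

def render_search_results_safe(query):
    """Same page, but with output encoding applied."""
    return "<p>Results for: " + html.escape(query) + "</p>"

payload = "<script>alert('xss')</script>"
unsafe = render_search_results(payload)       # script runs in the victim's browser
safe = render_search_results_safe(payload)    # rendered as inert text
```

One library call is the difference between "my site attacks my users" and a harmless string on the screen, which is exactly why shrugging it off is so maddening.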
Jeremiah provides more great commentary. By the way, how are you liking the electric wheelchair you bought?
Jordan Wiens posts about exactly this problem with offensive defense systems on his blog: How Not to Protect Your Webapp. Great job, Apple — the word “script” is so evil I can’t do a search for AppleScript. Some would argue this is a good thing, especially if they have seen any of the AppleScript I have written. This smells like someone who does not understand web application security, specifically cross-site scripting, creating a rule to block the one bad vector they could think of without thinking through the impact of that rule. Now if someone could just import that rule onto the MS website, maybe we could rid the world of VBScript.
The latest SANS Top 20 has been released, and according to SANS the #1 issue impacting the security of your servers is the web application code that lives on top of them.
I agree with them (in a totally biased way, of course), but the data they cite leaves me with an uneasy feeling. SANS likes to go by reported vulnerabilities, and reporting of web application vulnerabilities is basically non-existent outside of open source PHP applications. Every once in a while you will see a reported vulnerability in something like PeopleSoft or MS SharePoint, but a large percentage of reported web application vulnerabilities are in things like Jim Bob’s PHP Guestbook 0.00001alpha — and really, who cares about that?
PHP include issues are most certainly bad, but they are far from the most prevalent issue found in large enterprises. In our customer base we scan only 22 PHP sites out of around 650 sites today. There is just not a lot of PHP adoption in the enterprise, at least among our customers.
So while I agree with the conclusion, I feel leery about how it was reached.
Someone has found an XSS vulnerability on mastercard.com. The place it was found, the search function, is a notorious location for XSS vulnerabilities. The payload that triggers the vulnerability leads me to believe there was a fair amount of filtering going on — but evidently not enough.
Who does Mastercard pay PCI penalties to?
1. We will see the first multi-website XSS worm.
I think we will finally get a true cross-site XSS worm in 2008, combining CSRF and XSS to propagate across multiple sites and multiple domains. The first one will be benign, but the ones after it will be much more malicious in nature. The leading victim candidates are social networking sites that are becoming increasingly open.
2. More consolidation in the security industry.
There is still a great difference between the small security players and the giant ones in terms of cash flow. As the old guard (McAfee, Symantec, etc.) sees dwindling revenue on various fronts, they will begin to convert some of that pesky cash into acquisitions. Could this be the year Qualys gets gobbled up?
3. PCI will clarify section 6.6
This is more of a hope, really. Since it goes into full effect in mid-2008, I hope to see some clearer definitions around what companies are expected to do.
4. 2008 will set another record for breaches
Yeah, big shocker! The trend will continue with more, smaller breaches this year, as opposed to a few massive ones.
5. RBN will disappear again. Someone related to them will get busted.
With the spotlight too bright, they will morph again and change tactics. Money will still flow in to them by the millions, though. However, with increasing public knowledge of the group, someone will get busted and connected to them — no one high up in the group, just some poor sucker in the wrong place at the wrong time. Law enforcement will trumpet it as a “significant” blow to the group. RBN won’t notice.
I’ve not been this pumped about something in a long time. Jeremiah has actually been pulling me toward liking this idea for a very long time. I hated it at first. I mean, WAFs — bleh. Plus, didn’t we already try scanners + WAFs before? Oh yeah, the total trainwreck that was AVDL. The thing I failed to realize was that Jeremiah’s approach is a bit different, and when combined with WhiteHat Sentinel (aka NOT a scanner) it is a no-brainer.
WAFs generally struggle in a few areas: the people running them are not web app security experts, and applying a default-deny policy, while a great idea in theory, is pretty hard in the real world. There is just way too much movement in most applications to pin them down. Even if the app does not change frequently, WAF admins are very hesitant to come anywhere close to blocking legitimate traffic.
What really sold me, though, was seeing it in action for the first time. From the Sentinel UI we clicked a button that updated the F5 with a rule to block a vulnerability. The rule is automatically generated based on the vulnerability. We then clicked the retest button, and the vulnerability was no longer exploitable. Note my careful choice of words: exploitable vs. “not there anymore.” The vulnerability certainly still exists in the code, but now that the attack is blocked, the business can decide whether this is a good enough solution or whether they need to fix the actual flaw.
The geek in me is screaming that it still needs to be fixed; the business side is saying the rule is good enough and they are not going to commit resources to fixing it until that code is worked on again. From the PCI section 6.6 perspective this gives the business some great options. As our customers become aware of the PCI requirements and PCI auditors get tougher on web application vulnerabilities, we run into a difficult situation: the PCI audit is coming up and the app is riddled with vulnerabilities, so I have to dedicate precious development resources to fixing them ASAP. With this solution I can apply these rules and effectively mitigate the issues.
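To make the "virtual patching" idea concrete, here is a toy sketch of turning a confirmed finding into a block rule. The record format and rule shape are entirely illustrative — neither WhiteHat’s nor F5’s actual formats — but it captures the mechanic: block the specific attack vector at the WAF, not the flaw in the code:

```python
import re

def rule_from_vulnerability(vuln):
    """Generate a WAF block rule from a confirmed vulnerability record.

    Toy example: deny requests to the vulnerable path where the
    vulnerable parameter contains the value that reproduced the finding
    (matched case-insensitively to catch trivial variants).
    """
    return {
        "path": vuln["path"],
        "parameter": vuln["parameter"],
        "deny_pattern": re.compile(re.escape(vuln["attack_value"]), re.IGNORECASE),
    }

def waf_allows(rule, path, params):
    """Return True if the request passes the rule."""
    if path != rule["path"]:
        return True
    value = params.get(rule["parameter"], "")
    return not rule["deny_pattern"].search(value)

# Hypothetical finding from a scan of a search page:
vuln = {"path": "/search", "parameter": "q", "attack_value": "<script>"}
rule = rule_from_vulnerability(vuln)
```

Note what this does and doesn’t do: the vulnerability is no longer exploitable through this vector, but the flaw is still sitting in the code — which is exactly the "exploitable vs. not there anymore" distinction above, and why the geek in me still wants the fix.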
I am pretty excited to be part of this. I think we have moved the industry forward today, even if it was just a small step. People now have some more options to mitigate risk besides running to the development team with yet another fire.
This seems pretty scary. Apparently the FBI posted a link on some online forum that claimed to display kiddy porn. The story is here.
Upon reading this, the first thing that popped into my mind was CSRF (Cross-Site Request Forgery). Now, this is not classic CSRF, since CSRF generally implies I am exercising some functionality on the target site. I am using CSRF as a handy term for “if you visit a page I control content on, I can make you request any other link I want.” Remember, this is not only pages like this blog where I clearly control the content, but any other place I can provide links, usually to images: social networking sites, forums, image hosting sites, and even email signatures.
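The mechanics are depressingly simple. A sketch, with a placeholder URL standing in for whatever link the attacker wants every visitor’s browser to fetch:

```python
# Anywhere an attacker can post content that allows image links (a forum
# comment, a profile page, an email signature), an image tag forces every
# visitor's browser to silently request an arbitrary URL. The URL here is
# a placeholder, not the real link from the story.
forced_url = "http://bait.example/flagged-page"
forum_post = (
    'Nice thread! <img src="' + forced_url + '" width="1" height="1">'
)
```

The browser fetches the 1×1 "image" automatically — no click, no consent — and the request shows up in server logs and browser history exactly as if the victim had typed the URL themselves.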
This is an even better scam than the now-famous 911 swatting scams. Now, instead of SWAT busting in to rescue you, the FBI busts in to arrest you. What great fun! It will be interesting to see how many of these stick, since they seem to be based on some pretty flimsy evidence. The article above points out that open wireless networks are just one way someone could fool the system. CSRF is better, because your browser will actually go to the page, and a forensic examination of your machine will show that you went there. Not a good position to be in in court, with a jury and often a judge with no technical background at all.
Update from my buddy Zeno: the file that keeps track of places IE has been, index.dat, does not log referers, and apparently that file and its contents have held up in court…
I called this one the day after the first wave of mass SQL injection attacks came out. I told Jeremiah that we would see botnets doing this attack shortly, as it was much more efficient. A few weeks later and boom: botnets performing mass SQL injection.
The interesting thing about these attacks so far is what they are actually doing. They are not attempting to steal data out of these databases directly; they are populating pages with links that attempt drive-by malware installs by exploiting browser vulnerabilities. It has been pretty successful, but SQL injection is a vulnerability on the decline (and it will decline even more after this attack). So I began thinking about vulnerabilities that would do the same thing but have a much broader reach.
Our good friends XSS and CSRF.
So here is the attack.
- Find a few permanent XSS vulnerabilities in some high traffic sites.
- Find some CSRF vulns in popular blog and forum software.
- Craft your payload.
So the bot software basically sits back and waits until the computer it is on visits a vulnerable site, then places its payload in the vulnerable spot. It could of course do this without you visiting a site, with a little more coding to check whether you are permanently logged in.
Considering the number of sites with XSS and CSRF vulnerabilities, this attack would dwarf the SQL injection attacks happening today.
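The propagation logic described above can be sketched harmlessly. This is a benign simulation of the decision loop only — the site names, the "payload" string, and the logged-in check are all made up, and the actual injection step is just a set insert:

```python
# Benign simulation of the bot's decision loop: watch which sites the
# infected browser visits and, when one matches the list of vulnerable
# targets where the victim holds a session, submit the payload there.
VULNERABLE_SITES = {"socialsite.example", "forum.example"}
PAYLOAD = "<worm payload placeholder>"

def on_page_visit(site, logged_in_sites, injected):
    """Called whenever the infected browser loads a page.

    Returns True if this visit resulted in a (simulated) injection.
    """
    if (site in VULNERABLE_SITES
            and site in logged_in_sites
            and site not in injected):
        injected.add(site)   # the stored-XSS/CSRF submission would go here
        return True
    return False

injected = set()
on_page_visit("news.example", {"socialsite.example"}, injected)        # no-op
on_page_visit("socialsite.example", {"socialsite.example"}, injected)  # "infects"
```

The scary part is how little logic it takes: the bot needs no exploit of its own, just a riding position inside a browser that already has sessions everywhere.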