Sunday, December 03, 2006

AJAX (In)security

AJAX (Asynchronous JavaScript + XML) is a combination of web browser technologies that allows web page content to be updated “on-the-fly” without the user moving from page to page. In the background of an AJAX-enabled web page, data (typically formatted in XML, but also HTML, JavaScript, etc.) is transferred to and from the web server. In the case of Gmail, new email messages are displayed automatically as they arrive. In Google Maps, a user may mouse-drag through street maps without visiting additional pages. The mechanism for performing these asynchronous data transfers is an API built into all modern web browsers called XMLHttpRequest (XHR). XHR is the key to a website earning the “AJAX” moniker. Otherwise, it’s just fancy JavaScript.
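For illustration, here is a minimal sketch of an XHR request; the /messages URL and the “inbox” element are hypothetical:

    // Create the request object (IE6 would use new ActiveXObject("Microsoft.XMLHTTP"))
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/messages?since=12345", true);  // true = asynchronous
    xhr.onreadystatechange = function () {
      if (xhr.readyState == 4 && xhr.status == 200) {
        // Update one part of the page; the user never leaves it
        document.getElementById("inbox").innerHTML = xhr.responseText;
      }
    };
    xhr.send(null);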

If you’re thinking that none of this sounds security related, you’re right. AJAX technology makes website interactivity smoother and more responsive. That’s it. Nothing changes on the web server, where security is supposed to reside. If that’s the case, then what is everyone talking about? Word on the cyber-street is that AJAX is the harbinger of larger attack surfaces, increased complexity, fake requests, denial of service, deadly cross-site scripting (XSS), reliance on client-side security, and more. In reality, these issues existed well before AJAX. And the recommended security best practices remain unchanged. If you’re like me, you want to know what’s really important, so let’s take a closer look.



Does AJAX cause a larger “Attack Surface”? No.

“Attack surface” is a concept used to measure security by analyzing the points in a system that are open to attack. For software, these points are the areas of data input and output that a third party can manipulate. Obviously, the smaller an application’s attack surface, the easier it is to secure. What’s also obvious is that web applications, like any applications, only have as much functionality (attack surface) as has been programmed in. It doesn’t matter whether the user interface uses AJAX, Flash, ASCII art, or anything else. Again, AJAX is a web browser (client-side) technology; it does not execute on the server. While the coolness factor of AJAX drives developers to publicly expose more functionality - which may introduce new “server-side” vulnerabilities - this can hardly be blamed on AJAX. New code has always meant an increased risk of vulnerabilities.

Furthermore, in my experience, AJAX-enabled web applications are no more functionally complex than standard web applications. Google Maps is actually a less sophisticated application than the seemingly simple craigslist. Gmail is less complex than Outlook Web Access. Also, web applications (re)designed using AJAX stand a better chance of being developed on more up-to-date platforms (.NET, J2EE, etc.). These platforms are inherently more secure than previous generations and less prone to vulnerabilities such as SQL Injection, Credential/Session Prediction, Directory Traversal, and a dozen other common threats.


Does AJAX make the “Attack Surface” harder to find? Yes and No.
A corporate security program is incomplete without measurable results. The most common way to measure the security of a website is by simulating attacks--thousands of them (i.e., a vulnerability assessment). A vulnerability assessment can be performed manually, with an automated scanning tool, or preferably with a combination of the two. One of the first steps in the process is to locate the input points of the web application - the “attack surface.” A complete vulnerability assessment therefore requires finding them all.

Automatically crawling the entire website and mapping its links is standard practice. This method works well on some websites, not at all on others, and the rest fall somewhere in between. The challenge is that newer websites often make heavy use of JavaScript, Flash, ActiveX, Applets, and AJAX, where links are either buried in or dynamically generated by complex client-side code. Parsing out these links is often hard and sometimes impossible. Automated scanning therefore becomes increasingly less reliable as a method for validating the security of an AJAX-enhanced website.

Humans, on the other hand, have an easier time sifting through code and inferring relationships. Many times the JavaScript source documents every input point into the website, almost like an XML web service description - useful not only for the good guys, but for the bad guys as well.
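As a hypothetical illustration, client-side code like the following effectively documents the application’s input points for anyone who bothers to read it:

    // Invented example: endpoints spelled out in the page's own JavaScript
    var endpoints = {
      search:  "/ajax/search?q=",
      profile: "/ajax/profile?id=",
      update:  "/ajax/update"       // accepts POST fields: name, email, phone
    };
    function loadProfile(id) {
      var xhr = new XMLHttpRequest();
      xhr.open("GET", endpoints.profile + encodeURIComponent(id), true);
      xhr.send(null);
    }

A link crawler following anchor tags would miss these URLs entirely; a human reading the script sees them at a glance.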

In a normal website, there would be no such resource and an assessor must rely on link crawling. The conclusion is that AJAX doesn’t make websites less secure, but it can make them more challenging to assess.


Can AJAX cause “Denial of Service”? Not really.
It has been claimed that AJAX-enabled websites use an application design in which a larger volume of smaller HTTP requests replaces fewer, larger requests. For instance, Google Suggest may fire off a tiny HTTP request for each user keystroke in an attempt to perform automatic word completion. The assumption is that if there are 1,000 users on the system, moving to the AJAX rapid-fire model will multiply the number of requests to the system many times over. This could potentially result in a denial of service (DoS) scenario. I suppose this is possible, but whose fault is this really?
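A sketch of that rapid-fire pattern, with an invented /suggest endpoint and display function:

    // One request per keystroke: ten characters typed means ten server hits
    document.getElementById("q").onkeyup = function () {
      var xhr = new XMLHttpRequest();
      xhr.open("GET", "/suggest?prefix=" + encodeURIComponent(this.value), true);
      xhr.onreadystatechange = function () {
        if (xhr.readyState == 4 && xhr.status == 200) {
          showSuggestions(xhr.responseText);  // hypothetical display function
        }
      };
      xhr.send(null);
    };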

In my view, this problem is not caused by AJAX or even a bad software design strategy, but instead by a lack of proper implementation and performance testing. The solution is to tune the configuration or add more web servers. And to be realistic, if someone wanted to DoS a network, they could flood the network with HTTP traffic whether AJAX was used or not.


Does AJAX rely on client-side security? No.
OK, let’s return to web application security 101. Web applications must NEVER trust the client (web browser). This is gospel whether the web page interface uses JavaScript, Flash, ActiveX, Applets, AJAX, or any other client-side technology. Every developer should be aware that a basic HTTP proxy can alter anything about an HTTP request, even one generated by XHR. Great care should be taken to ensure that all security checks are performed on the server--no exceptions.

Does this mean that developers should not use client-side security checks? No, quite the opposite. I actually recommend using client-side checks in forms and other business process flows because they make the user experience more responsive. There’s no need for a round trip to the server to inform the user that he’s typed a letter into the phone number field. This also lessens server load by pushing some processing onto the client.
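A minimal sketch of that kind of convenience check (the form and field names are invented); note the comment - the server must repeat the check:

    // Client-side check for responsiveness only; the server MUST re-validate
    function validatePhone(form) {
      if (!/^[0-9() +.-]{7,20}$/.test(form.phone.value)) {
        alert("The phone number contains invalid characters.");
        return false;  // block submission without a server round trip
      }
      return true;
    }
    // Used as: <form onsubmit="return validatePhone(this)"> ... </form>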


Does AJAX lead to poor security decisions? Sort of.
The new Web 2.0 websites often include data from one or more third-party websites, creating something known as a “mash-up.” AJAX developers would prefer the user’s browser to pull the data directly from the third party, thereby reducing bandwidth; but this is not possible with XHR. XHR has a security protection built in - the same-origin policy - preventing a page on Website A from making XHR connections to Website B. This helps protect users from malicious websites, where JavaScript Malware on a page could otherwise force the browser to fetch, say, the user’s bank account information from another site. Web developers, not wanting to stifle innovation, created a workaround to enable access to third-party sites.

What developers often do is create a local HTTP proxy on the host web server. To have the client pull in data from a third-party website, they direct an XHR request through the local proxy, pointing it at the intended destination. Consider the following example request generated by the web browser:

http://websiteA/proxy?url=http://websiteB/

Website A takes the incoming request, and its “proxy” web application sends a request of its own to the Website B designated by the “url” parameter value. With the proxy, developers can use XHR to make off-domain requests. And because the user’s browser never connects to Website B directly, the user’s authentication cookies for Website B are never sent, so the pattern is reasonably safe for users as well.
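The browser-side half of that pattern looks roughly like this (renderData is a hypothetical display function):

    // The XHR stays on-domain (Website A); Website A's proxy fetches Website B
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/proxy?url=" + encodeURIComponent("http://websiteB/"), true);
    xhr.onreadystatechange = function () {
      if (xhr.readyState == 4 && xhr.status == 200) {
        renderData(xhr.responseText);  // third-party data, same-origin rules intact
      }
    };
    xhr.send(null);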

The security issue is that Website A is now hosting an unrestricted HTTP proxy. Attackers love finding open proxies because they can initiate attacks that cannot easily be traced back to their origin. The capabilities of the proxy should be carefully controlled and restricted with regard to which websites it will connect to and how. In my opinion, the problem lies with developers circumventing a security control without adding appropriate safeguards, not with AJAX.
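A sketch of one such safeguard - the check the proxy should run before forwarding anything. It is written here in JavaScript for consistency with the rest of the post, though in practice it would live in the server-side language; the allowed hosts are invented:

    // Whitelist of third-party hosts the proxy may contact (invented examples)
    var ALLOWED_HOSTS = { "api.websiteB.com": true, "feeds.websiteC.com": true };

    function isAllowedDestination(url) {
      // Permit only http/https, and only to hosts on the whitelist
      var match = /^https?:\/\/([^\/:?#]+)/.exec(url);
      return match != null && ALLOWED_HOSTS[match[1]] === true;
    }
    // The proxy calls isAllowedDestination() on the "url" parameter before
    // forwarding; it should also strip cookies and refuse internal IP addresses.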


Does AJAX make Cross-Site Scripting (XSS) attacks worse? I hope not.
Can it get worse? During my presentation entitled “Hacking Intranet Websites from the Outside” at BlackHat 2006, I demonstrated how JavaScript Malware can acquire internal NAT’ed IP addresses, port scan, blindly fingerprint web servers, steal browser history, and exploit web-based interfaces on an intranet. The Washington Post called it “disturbing.” All of the proof-of-concept code was achieved without AJAX - just plain old JavaScript.

XHR can initiate just about any desired HTTP request - provided the request remains on-domain - and view the response. Plain JavaScript can make the same requests without the on-domain limitation, but typically can’t view the response. This means that if a user is on Website A, XHR cannot be used to force connections to Website B and read the data that comes back; plain JavaScript, however, can force the connection - it just can’t see the reply. If you look at it that way, XHR (AJAX) is more secure!
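The contrast in (hypothetical) code:

    // XHR: limited to the page's own domain, but the response is readable
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/data", true);  // on-domain: allowed, response readable
    // xhr.open("GET", "http://websiteB/data", true) would be blocked by the browser

    // Plain JavaScript: can force an off-domain request, but the reply is opaque
    var img = new Image();
    img.src = "http://websiteB/some/page";  // request fires; response unreadable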

AJAX has fired up interest in JavaScript, and research in JavaScript has led to new malware discoveries whose potential severity is amplified by ubiquitous XSS vulnerabilities. To be fair, the Samy worm that hit MySpace and the Yamanner worm that hit Yahoo! Mail exploited XHR for propagation. However, those attacks could just as easily have been perpetrated with plain JavaScript; AJAX is irrelevant in this scenario. What matters is finding and fixing the XSS vulnerabilities in web applications. The WhiteHat Security white paper “Cross-Site Scripting Worms and Viruses” is an additional information resource.


Does AJAX change security best practices? No.
If a web application has vulnerabilities, it will be insecure no matter what techniques are used to develop it. If a web application is well designed, no amount of “insecure AJAX” will reduce its security posture.

Following are five tips for securing Web applications:

1) Secure by design. Start secure and stay secure by including security as a component in each stage of the software development lifecycle.
2) Rock-solid input validation. Never trust the client, ever (see the sketch after this list).
3) Use reliable software libraries. From encryption to session management, it’s best to use components that are tried and thoroughly tested. No need to reinvent the wheel and repeat the mistakes of others.
4) Secure configuration. Every component of the website should be configured with separation of duties, least privilege, unused features disabled, and error messages suppressed.
5) Find and fix vulnerabilities. Continuous vulnerability assessments are the best way to prevent attackers from accessing corporate and customer data. You can’t control what you can’t measure.
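On point 2, here is a sketch of what rock-solid validation means in practice: a whitelist check applied on the server to every incoming parameter. The field names and rules are invented, and it is written in JavaScript only for consistency with the rest of the post; the same check belongs in your server-side language of choice.

    // Server-side whitelist validation: accept only what is known to be good
    var RULES = {
      username: /^[A-Za-z0-9_]{3,16}$/,        // letters, digits, underscore
      phone:    /^[0-9() +.-]{7,20}$/,         // same rule the client enforced
      email:    /^[^@\s]+@[^@\s]+\.[^@\s]+$/
    };

    function validateParams(params) {
      for (var name in RULES) {
        if (!RULES[name].test(params[name] || "")) {
          return false;  // reject the request on any bad or missing field
        }
      }
      return true;
    }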

Following these best practices is the first step. Validation is the second. No company can be expected to write flawless code, or have staff available around-the-clock to address all its Web application vulnerability issues. That’s why WhiteHat created WhiteHat Sentinel, a continuous vulnerability assessment and management service for web applications. WhiteHat Sentinel is available 24/7, enabling companies to identify, prioritize and ultimately remediate the vulnerabilities that leave web applications open to attack.

Remember the fundamentals, use defense-in-depth, and your online business will be safer.


Help with the accounts

Microsoft has announced Office Accounting Express 2007 for small businesses and others still struggling to do their accounts using paper and pencil.

There is a free downloadable program and seven online services. As well as offerings from eBay and PayPal, the range includes more specialised services from Equifax for credit ratings and ADP for payroll, which are priced separately.

For large businesses, Microsoft offers Office Accounting Professional 2007, available next year for $149.


Web 2.0 suite

Intel is also trying to jump on the Web 2.0 bandwagon. It plans to promote a suite of web-based applications, called SuiteTwo, to small businesses. The suite comprises a variety of third-party tools for blogging, wikis and social networking. Intel's contribution seems to be providing a single sign-on capability, so you do not have to log in to each service separately.