Monday, May 07, 2007

Security for Websites - Breaking Sessions to Hack Into a Machine

Security on websites is based on session management. When a user connects to a secure website, they present credentials that testify to their identity, usually in the form of a username and password. Because the HTTP protocol is "stateless," the web server has no way of knowing that a particular user has already logged in as they browse from page to page. Session management allows the web-based system to create a 'session' so that the user will not have to re-authenticate every time they wish to perform a new action, or browse to a new page.

In essence, session management ensures that the client currently connected is the same person who originally logged in. Unfortunately, however, sessions are an obvious target for a malicious user, because hijacking one may allow access to the web server without needing to authenticate.

A typical scenario would involve a user logging on to an online service. Once the user is authenticated, the web server presents this user with a "session id." This session ID is stored by the browser and is presented wherever authentication is necessary. This avoids repeating the login/password process over and over. It all happens in the background and is transparent to the user, making the browsing experience much more pleasant in general. Imagine having to enter your username and password every time you browsed to a new page!

One way to compromise a session is to attack the client. Microsoft Internet Explorer, for example, has had numerous flaws that allowed web sites to read cookies (often used to store the Session ID) to which they did not belong. Ideally, only the site that created the cookie should have access to it. Unfortunately, this is not always the case, and there are many instances of cookies being accessible to anyone. On top of this, a browser's cache is often accessible to anyone with access to that computer. It may be a hacker who has compromised the computer using some other attack, or a publicly accessible computer in an Internet café or kiosk. Either way, a cookie persistently stored in the browser cache is a tempting target.

Unencrypted transmissions are all too common and allow communication to be observed by an attacker. Unless the HTTPS protocol is used, a Session ID could be intercepted in transit and re-used. In fact, it is possible to mark cookies as 'secure' so they will only be transmitted over HTTPS. This is something I have rarely seen developers do. Such a simple thing can go such a long way.
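As a rough sketch of how the "secure" flag looks in practice (Python's standard cookie helper is used here; the cookie name and value are placeholders), marking the cookie this way keeps it off plain HTTP connections:

from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["sid"] = "3f9a1c0d7b..."        # session identifier issued at login (placeholder)
cookie["sid"]["secure"] = True         # only ever transmitted over HTTPS
cookie["sid"]["httponly"] = True       # also hidden from client-side scripts
cookie["sid"]["path"] = "/"

print(cookie.output())                 # Set-Cookie: sid=3f9a1c0d7b...; HttpOnly; Path=/; Secure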

Another way to compromise a Session ID is to attempt to predict it. Prediction occurs when an attacker realises that a pattern exists between session IDs. For example, some web-based systems increment the session ID each time a user logs on, so knowing one session ID allows a malicious user to identify the previous and next ones. Others use a brute force attack: a simple yet potentially effective method in which a malicious user repeatedly tries numerous session identifiers until they happen upon a valid one. Although it is not complicated, it can be highly effective.
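By way of illustration, here is a minimal sketch (Python, purely illustrative) of generating session identifiers that are neither sequential nor practically brute-forceable:

import secrets, string

ALPHABET = string.ascii_letters + string.digits     # A-Z, a-z, 0-9: 62 characters

def new_session_id(length=32):
    # 62^32 is roughly 2.27e57 possibilities, about 190 bits of entropy,
    # drawn from a cryptographically strong random source.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(new_session_id())                             # e.g. pQ7rT0m2...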

So what can you do to mitigate these attacks?

1. Always use strong encryption during transmission. Failure to encrypt the session identifier could render the online system insecure. In addition, for cookie-based sessions, set the cookie's "secure" attribute for a little added security. This reduces the chance that an XSS attack on the unencrypted section of the site could capture the session ID, because those pages will not be able to read the cookie.

2. Expire sessions quickly. Force the user to log out after a short period of inactivity. This way, an abandoned session will only be live for a short duration and thus will reduce the chance that an attacker could happen upon an active session. It is also wise to avoid persistent logins. Persistent logins typically leave a session identifier (or worse, login and password information) in a cookie that resides in the user's cache. This substantially increases the opportunity that an attacker has to get a valid SID.

3. Never make the Session ID viewable. This is a major problem with the GET method: GET variables appear in the URL's query string, visible in the browser's address bar (and typically in bookmarks and server logs). Use the POST method or a cookie instead, or cycle the SID out for a new one frequently.

4. Always select a strong session identifier. Many attacks occur because the SID is too short or easily predicted. The identifier should be pseudo-random, retrieved from a securely seeded random number generator. For example, a 32-character session identifier that contains the letters A-Z, a-z and the digits 0-9 has about 2.27e57 possible values, which is equivalent to a 190-bit password and is sufficiently strong for most web applications in use today.

5. Always double check critical operations. The server should re-authenticate anytime the user attempts to perform a critical operation. For example, if a user wishes to change their password, they should be forced to provide their original password first.

6. Always log out the user securely. Perform the logout operation so that the server invalidates the session, rather than relying on the client to delete session information. Delete the session ID on logout. Some applications even force the browser to close down completely, tearing down the session and ensuring the session ID is deleted.

7. Always prevent client-side caching of pages that display sensitive information. Use HTTP headers to set the page expiration so that the page is not cached; setting an expiration date in the past causes the browser to discard the page contents from its cache (see the sketch after this list).

8. Always require that users re-authenticate themselves after a specified period, even if their session is still active. This places an upper limit on the length of time that a successful session hijack can last. Otherwise, an attacker could keep a connection open for an extremely long time after a successful attack.

9. It is possible to perform other kinds of sanity checking. For example, use user-agent string analysis, SSL client certificate checks and some level of IP address checking to provide basic assurance that clients are who they say they are.
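Returning to point 7, the sketch below shows the kind of response headers meant there (plain Python, for illustration only; how they are attached to a response depends entirely on your framework):

# Headers that tell the browser not to cache a page with sensitive content.
no_cache_headers = {
    "Cache-Control": "no-cache, no-store, must-revalidate",
    "Pragma": "no-cache",                          # honoured by older HTTP/1.0 clients
    "Expires": "Thu, 01 Jan 1970 00:00:00 GMT",    # a date in the past discards the page
}
# e.g. response.headers.update(no_cache_headers) in many Python web frameworks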

All in all, web applications rely on good session management to stay secure. If you follow the steps outlined in this article and remain aware of the risks, you are well on your way to leveraging the full benefits of web applications.

Cookie Path Best Practice

1. Introduction

Cookies provide a method for creating a stateful HTTP session and their recommended use is formally defined within RFC2965 and BCP44. Although they are used for many purposes, they are often used to maintain a Session ID (SID), through which an individual user can be identified throughout their interaction with the site. For a site that requires authentication, this SID is typically passed to the user after they have authenticated and effectively maintains the authentication state. If an attacker can use a mechanism (such as sniffing or cross-site scripting) to gain access to the SID, then potentially they can incorporate it within their own session and successfully assume the user's identity.
The cookie specifications provide arguments for restricting the domain and path for which the user agent (browser) will supply the cookie. Both of these should be matched by the request before the user agent sends the cookie data to the server. It is common for the path argument to be specified as the root of the origin server; a practice that can expose the application cookies to unnecessary additional scrutiny. It is worth noting, however, that whilst the various “same origin” security issues still afflict the browser vendors, the specification of the cookie path argument is somewhat of a moot point.

2. The Problem

The cookie standard is formally defined in RFC2965 [1]. This makes reference to the optional path argument that allows a cookie originator to specify “the subset of URLs on the origin server to which this cookie applies” [1]. The vast majority of web based applications simply set this argument to the root “/” of the origin server, either for simplicity or merely for lack of knowing any better. Where this oversight becomes useful is in conducting attacks against the session cookies of an application that does not suffer from any exploitable validation flaws, but that shares the same server environment with one that does.
As an example we shall imagine that a secure application shares a host with some sample files that were installed at the same time as the web server. Obviously, this would never happen in a live production environment (pauses to insert tongue firmly in cheek). The secure application is located within the “/secure” folder but sets the cookie path argument to the root “/”. An attacker knows that the secure application has no useable vulnerabilities in itself.

However, they also know that the sample files have an exploitable cross-site scripting (XSS) flaw that would give them access to the all-important session cookies. All they now need is a method to get a valid user to access the sample files (a completely different problem to solve).
The secure application vendor might have otherwise followed all the best practice recommendations when developing their application, but they could still be exposing sensitive information through the loosely specified path argument.

3. The Solution

Fortunately the solution to this issue is a straightforward one. By simply specifying the cookie path argument accurately, an application can take measures to protect itself from flawed products that share the same hosting environment.
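As a sketch of what that looks like in code (Python's standard cookie helper again; the /secure path and cookie value are taken from the example above and are illustrative):

from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["sid"] = "3f9a1c0d7b..."          # placeholder session identifier
cookie["sid"]["path"] = "/secure"        # only requests under /secure will carry this cookie
cookie["sid"]["secure"] = True

print(cookie.output())                   # Set-Cookie: sid=3f9a1c0d7b...; Path=/secure; Secure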

References

[1] http://www.faqs.org/rfcs/rfc2965.html

RSS Security Threats With Financial Services

Web 2.0 technologies are penetrating deeper into the financial services sector as Enterprise 2.0 solutions, adding value to financial services. Analysts can leverage information sources to go beyond the obvious. Trading and banking companies like Wells Fargo and E*Trade are developing their next generation technologies using Web 2.0 components; components that will be used in banking software, trading portals and other peripheral services. The true advantage of RSS components is to push information to the end user rather than pull it from the Internet. The financial industry estimates that 95% of information exists in non-RSS formats and could become a key strategic advantage if it can be converted into RSS format. Wells Fargo has already implemented systems on the ground and these have started to yield benefits. RSS comes with its own security issues that assume critical significance with regard to financial services. In this article we will look at some of the security concerns and attack vectors around RSS.

RSS feed manipulation with JavaScript and HTML tags

An RSS stream is built from databases or from input supplied by users. RSS streams can also source information from third-party sources such as news sites, blogs, etc. Financial services incorporate this information for the end user's benefit, and it gets served in the browser along with other sensitive information. If RSS feeds originate from untrusted sources then they are likely to be injected with JavaScript or other HTML tags. These malicious tags can have capabilities to exploit the browser. Financial systems must have sound filtering in place prior to forwarding any information coming from the end user to the system, or must filter certain character sets before they hit the end browser. Increasing RSS consumption is going to put clients in financial sectors at risk. To combat the threat, RSS content must be filtered and sanitized before it reaches the end client.
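As an illustration of the kind of filtering meant here, the sketch below (Python; the field names are invented) HTML-encodes untrusted feed fields before they are served to the browser. A production system would normally add an allow-list of permitted tags on top of this:

import html

def sanitize_feed_item(title, description):
    # Encode <, >, &, and quotes so an injected <script> tag renders as inert text.
    return {
        "title": html.escape(title),
        "description": html.escape(description),
    }

item = sanitize_feed_item("Q2 results", "<script>alert('xss')</script>")
# item["description"] is now "&lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;"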

Cross site scripting (XSS/CSS) with RSS feeds

The cause of successful RSS exploitation with XSS lies in RSS script injection. RSS that is injected with JavaScript and successfully passed to end clients in financial systems can lead to exploits: feeds containing SCRIPT tags, or HREFs with “onClick” handlers, can succeed on these systems. Several exploits written on top of XSS exist, by which attackers can hijack sessions or run keyloggers in the session. All these exploits can put the financial system at risk. Once again, the countermeasure to this threat lies in “filtering” the characters before they hit the end client. Browsers don't have any built-in filtering capabilities, so the application layer needs to support it for better security. Extra precaution is needed against cross-domain calls as well as cross-site RSS access.

CSRF with RSS feeds

Cross Site Request Forgery is another attack vector that can be exploited through RSS feeds. If a feed is injected with HTML tags such as <img>, or any other tags that allow cross-domain calls, these calls replay the cookie, causing a CSRF exploit to be run. CSRF attacks expand the possibilities for exploits to be run on financial applications that are vulnerable. An attacker has greater opportunity since the target set and scope is defined.

Consider a financial portal for banking operations that runs with an RSS feed reader component. This component sits in front of a set of applications for trading and other services running on different domains. One of these domain applications is vulnerable to CSRF and shares “single sign-on” with the others, either by cookie or by common database access. In this case, an attacker can craft an RSS feed that is best suited for CSRF exploitation, or distribute the CSRF exploit over a broad range of feeds for maximum effect. Targeting RSS feed readers can help in leveraging this attack vector when the end user can be identified.

SQL injection for RSS feed manipulation

Usually SQL injection is a synchronous attack vector directed at Web applications. In a SQL Injection attack, an attacker sends a particular payload and observes the response. If responses conform to SQL injection success signatures then the situation can be exploited further.

Now, new applications provide RSS feeds for your customized needs; for example, feeds for the last 10 transactions or statements for a particular period. All these parameters can be supplied by the end user and will be used to craft the SQL query in the RSS feed generation program. If the RSS feed generation program is vulnerable to SQL injection, a SQL payload can be crafted and passed in to cause an asynchronous SQL injection attack: the attack succeeds later, when the feed generator runs the user's request and builds a customized RSS feed for the client, leading to unauthorized information access. A proper code review of the RSS feed generation routine is a must to prevent this attack; the attack vector is asynchronous and difficult to identify using a black-box approach.
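A minimal sketch of the defensive half of this (Python DB-API style; the table and column names are hypothetical): building the customized feed query with bound parameters instead of string concatenation keeps user-supplied values out of the SQL itself:

import sqlite3

def last_transactions_for_feed(conn, account_id, limit):
    # Placeholders ensure the user-supplied values are treated as data, not SQL.
    cur = conn.execute(
        "SELECT posted_on, amount, memo FROM transactions "
        "WHERE account_id = ? ORDER BY posted_on DESC LIMIT ?",
        (account_id, int(limit)),
    )
    return cur.fetchall()    # rows to be rendered into the customized RSS feed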

Authentication and Authorization issues with RSS feed

RSS doesn't have an authentication header mechanism over HTTP, so RSS feed delivery must be authenticated at the web server or at the application level. RSS is a static XML feed; from a security perspective, this is a difficult equation. It is possible to retrieve an RSS feed that is kept open without any authentication. If an application is serving RSS feeds with hidden parameters or security tokens then it may be possible to guess or brute-force the parameters based on the minimum available information. A legitimate user of a banking application who knows the URL to access his feed may try different combinations of the URL and get access to another user's feed. This scenario is possible depending on the way the application layer is implemented for RSS feeds. Often RSS feeds that are locked using Basic/NTLM authentication can be brute-forced. A strong application layer feed defense integrated with session checking is required for critical financial information. Sensitive information such as passwords being passed to online RSS readers makes for another security issue that must be addressed. Hence, “where to read your RSS feed” is very important when dealing with financial services.
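To make the application-layer defence concrete, here is a minimal sketch (Python; the secret, URL layout and function names are all hypothetical) of issuing a per-user feed URL that carries an unguessable HMAC token and verifying it before the feed is served:

import hmac, hashlib

SERVER_SECRET = b"long-random-value-known-only-to-the-server"

def feed_token(user_id):
    # Derive an unguessable token from the user id and a server-side secret.
    return hmac.new(SERVER_SECRET, user_id.encode(), hashlib.sha256).hexdigest()

def feed_url(user_id):
    return "/feeds/statement.rss?user=%s&token=%s" % (user_id, feed_token(user_id))

def is_authorized(user_id, presented_token):
    # Constant-time comparison; a guessed or borrowed URL fails this check.
    return hmac.compare_digest(feed_token(user_id), presented_token)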

RSS encryption issues

RSS encryption is not possible at the XML level. Unlike Web services, there are no existing RSS security standards. Atom has XML encryption and signature methods but is yet to gain in popularity. To secure RSS information in transit one needs to use it over HTTPS. If a customized encryption mechanism is in place then one needs to pass “key” information somewhere, either to the browser or to a third-party application; this, in itself, is a risk. RSS encryption needs to be point-to-point for better security, otherwise it could be sniffed in transit, needlessly opening up a security issue. Hence, one needs to check whether the target RSS feed is served over HTTP or HTTPS before making a final decision on configuring or consuming it. It is imperative to have HTTPS when we are looking at financial services.

RSS widgets

JavaScript widgets are popular and are available for RSS feeds as well. Third-party RSS widgets are easy to implement and integrate in web applications. Source code reviews must be conducted on RSS widgets used in the financial application to smooth out security issues and mitigate risk. It is also possible to use these widgets on personal pages or desktops – another scenario in which unsecured widgets can compromise user sessions.

Conclusion

RSS is getting popular, as a result of which it is being linked to important financial databases. It poses a threat in two dimensions. On the server side customized feed routines can be exploited by an attacker. On the client side session hijacking and malicious code execution is possible. RSS offers great flexibility and capability to push data to the client but the security cost involved is high. Feeds are available for the end client to read; how to consume feeds is up to the end client entirely. This makes the equation difficult and less secure. This feed may be consumed by vulnerable software running in a specific zone context and the client may be vulnerable to exploits. For financial services it is important to control the consumption of the RSS feeds along with the content to make a secure RSS compartment.



Apache Rewrite Module mod_rewrite for URLs

Looking around the web, you’ve run across plenty of URLs that look like:

/content.cgi?date=2000-02-21
/article.cgi?id=46&page=1

Server-side scripts generate the content of those pages. The content of a particular page is uniquely determined by the URL, just as if you had requested a page with the URL /content/2000-02-21.html or /article/46.1.html. These pages are different from server-generated pages created in response to a form, like a shopping cart or enrollment form. However, search engines will not index these content pages, because search engines treat pages generated by CGI scripts as potential blind alleys and ignore them.

A search engine would follow a URL like

/content/2000/02/21,

so some way of mapping a URL like /content/2000/02/21 to the script /content.cgi?date=2000-02-21 would be useful. Not only will search engines follow such a link, but the URL itself is easy to remember. A frequent visitor to the site would know how to reach the page for any day the site published content. When I changed the interface for viewing entries by topic in my WebLog from /meta.php3?meta=XML to /meta/XML, search engines such as Google started indexing those pages, and I'm getting more visits referred by search engines.

The trick is to tell the outside world that your interface is one thing: /content/YYYY/MM/DD, but when you fetch the page, you’re accessing /content.cgi?date=YYYY-MM-DD. Web servers such as Apache and content management systems such as Userland’s Manila and the open source Zope support this abstraction.

The abstraction is also useful because a site’s infrastructure is rarely stable over time. When engineering replaces the Perl CGI scripts with Java Server Pages, and the URLs become /content.jsp?date=YYYY-MM-DD, your users’ bookmarked URLs break. When you use an abstraction, your users bookmark /content/YYYY/MM/DD, and when you change your back end, you update /content/YYYY/MM/DD to point at /content.jsp?date=YYYY-MM-DD without breaking bookmarks.

If you’re not publishing content dynamically, and have URIs like:

/content-YYYY-MM-DD.html,

you don’t have the problem with indexing that the dynamic content has. However, you still may want to adopt this type of URI for consistency with other sites. Remember people coming to your site want to use an interface they are familiar with, and URIs are part of your interface.

Rewriting the URL in Apache

The Apache Web server is ubiquitous on both Unix and NT, and it has an optional component, mod_rewrite, that will rewrite URLs for you. It ships with Apache but is not always compiled in or enabled by default. Pair Networks, Dreamhost, and Hurricane Electric have it enabled on their servers. If you are running your own server, check with your systems administrator to see if it's installed, or have her install it for you.

The mod_rewrite module works by examining each requested URL. If the requested URL matches one of the URL rewriting rules, that rule is triggered, and the request is handled by the rewritten URL.

If you’re not familiar with Apache, you’ll want to read up on the way its configuration files work. The best place to run mod_rewrite from is your server’s httpd.conf file, but you can call it from the per directory .htaccess file as well. If you don’t have control of your server’s configuration files, you’ll need to use .htaccess, but understand there’s a performance hit because Apache has to read .htaccess every time a URL is requested.
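For orientation, a typical httpd.conf fragment that loads the module and allows per-directory rules in .htaccess looks something like this (the module path and directory are examples and vary by distribution):

LoadModule rewrite_module modules/mod_rewrite.so

<Directory "/var/www/html">
    Options FollowSymLinks
    AllowOverride FileInfo
</Directory>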

The Goal

The goal is to create a mod_rewrite ruleset that will turn a URL such as the one shown below:

/content/YYYY/MM/DD

into a parameterized version such as the one shown next, or into something similar, as long as it's the right URI for your script.

/content.cgi?date=YYYY-MM-DD

The Plan

We start with the URI /content/YYYY/MM/DD and want to get to /content.cgi?date=YYYY-MM-DD. So we need to do a few things:

  1. Recognize the URI
  2. Extract /YYYY/MM/DD and turn it into YYYY-MM-DD
  3. Write the final form of the URI /content.cgi?date=YYYY-MM-DD

Regular Expressions and RewriteRule

This transform will require two of the directives from mod_rewrite: RewriteEngine and RewriteRule. RewriteEngine is a directive that flips the rewrite switch on and off; it's there to save administrators typing when they want or need to disable URL rewriting. RewriteRule uses a regular-expression parser that compares the URL or URI to a rule and fires if it matches.

If we're setting the rule in the .htaccess file of the directory where it fires, then we need the following:

RewriteEngine On
RewriteRule ^archives/([0-9]+)/([0-9]+)/([0-9]+) archives.cgi?date=$1-$2-$3

This rule first matches the string 'archives' followed by three groups of one or more digits (the [0-9]+) separated by '/' characters, and rewrites the URL as archives.cgi?date=YYYY-MM-DD. The parser keeps a back-reference for each match string in parentheses, and we can substitute those back in using $1, $2, $3, and so on.

If your page has relative links, the links will resolve relative to /archives/YYYY/MM/DD, not /archives, which means your relative links will break. You should use the base element in the head of the page to re-anchor the page.
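For example, a page whose real location is a script at the document root might include something like the following in its head (the host name is, of course, a placeholder):

<base href="http://www.yoursite.com/" />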

RewriteRule for Static Content

If you have a series of static HTML files at your document root:

/news-1999-12-31.html
/news-2000-01-01.html
/news-2000-01-02.html

...and want your readers to access them with URLs like /archives/1999/12/31, then you would need a rewrite rule at the document root, such as:

RewriteRule ^archives/([0-9]+)/([0-9]+)/([0-9]+)$ /news-$1-$2-$3.html
RewriteRule ^archives$ /index.html

If the news-YYYY-MM-DD.html files are in a folder called /archives, the rewrite rule should be:

RewriteRule ^/archives/([0-9]+)/([0-9]+)/([0-9]+)$ /archives/news-$1-$2-$3.html

If you want to use an .htaccess file at the archive folder level, then the rule becomes:

RewriteRule ^([0-9]+)/([0-9]+)/([0-9]+)$ news-$1-$2-$3.html

Also, you may delete the second rewrite rule since you can use a DirectoryIndex rule instead.

DirectoryIndex index.html

Corner Cases

What if someone enters http://www.yoursite.com/archives instead of http://www.yoursite.com/archives/YYYY/MM/DD? The rule is that mod_rewrite steps through each rewrite rule in turn until one matches or no rules are left. We can add another rule to handle that case.

RewriteEngine On
RewriteRule ^archives/([0-9]+)/([0-9]+)/([0-9]+) archives.cgi?date=$1-$2-$3
RewriteRule ^archives$ index.html

In this case, we redirect to an index page, but you could just as easily redirect to a page that generates a search interface.

What If My Server’s not Apache?

Unfortunately IIS does not come with a rewrite mechanism. You can write an ISAPI filter to do this for you.

If you are running the Manila content management system that comes with Userland’s Frontier, the options allow you to map a particular story in the system to a simple URL.

The Zope publishing system also supports mapping of paths into arguments for server scripts.

References

Good URLs are part of interface design. Jakob Nielsen discusses this in his Alertbox column: http://www.useit.com/alertbox/990321.html.

This article was inspired in part by Tim Berners-Lee's observation that good URLs don't change: http://www.w3.org/Provider/Style/URI

Ralf S. Engelschall has many examples of mod_rewrite in ‘cookbook’ form at his site: http://www.engelschall.com/pw/apache/rewriteguide/.

The 7 myths about protecting your web applications

Today, web applications deliver critical information to a growing number of employees and partners. Most organizations have already invested heavily in network security devices and therefore often believe they are also protected at the application layer; in fact, they are not.

Myth 1: IPS defeat application attacks

Intrusion Prevention Systems, initially developed to monitor and alert on suspicious activity and system behavior, are becoming widely deployed. IPSs are useful for detecting known attacks, but they are inadequate for protecting against new types of attack targeting web applications and are often blind to traffic secured by SSL technology.

Myth 2: Firewalls protect the application layer

Most companies have deployed firewall technology to protect and control traffic in and out of the network. Firewalls are designed to control access by allowing or blocking IP addresses and port numbers. Just as firewalls still fail to protect against worms and viruses, they are not suited to protecting web applications against application-layer attacks either.

Network firewalls only protect or "validate" the HTTP protocol and do not secure the most critical part: the application.

Myth 3: Application vulnerabilities are similar to network and system vulnerabilities

A common problem in web applications is the lack of input validation in web forms. For example, a web form field requesting an email address should only accept characters that are allowed to appear in email addresses, and should carefully reject all other characters. By filling SQL query syntax into an unvalidated email field, an attacker could exploit a SQL injection vulnerability and potentially delete or modify a database that is 'safely' hidden behind state-of-the-art network firewalls, IPSs and web servers!
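A minimal sketch of the kind of server-side input validation meant here (Python; the pattern is deliberately conservative and illustrative rather than a full RFC-compliant email check):

import re

EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def accept_email(value):
    # Quotes, semicolons, SQL keywords and the like simply never match the
    # allow-list pattern, so they are rejected before reaching the database.
    if not EMAIL_RE.fullmatch(value):
        raise ValueError("invalid email address")
    return value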


Web application attacks are not targeting protocols, but target badly written applications using HTTP(s).

Myth 4: Network devices can understand the application context

To correctly protect web applications and web services, a full understanding of the application structure and logic must be acquired. The application state and associated sessions must be tracked. Different technologies, such as cookie insertion, automated process detection, application profiling and web single sign-on technology, are required to obtain adequate application protection.

Myth 5: SSL secures the application

SSL technology was initially developed to secure and authenticate traffic in transit. SSL protects against man-in-the-middle attacks (eavesdropping) and data-alteration attacks (modifying data in transit), but it does not secure the application logic.

Most vulnerabilities found in today’s web servers are exploitable via unsecured HTTP connections as well as via ‘secured’ HTTPS connections.

Myth 6: Vulnerability scanners protect the web environment

Vulnerability scanners look for weaknesses based on signature matching. When a match is found a security issue is reported.

Vulnerability scanners work almost perfectly for popular systems and widely deployed applications, but they prove inadequate at the web application layer, because companies do not all use the same web environment software; most of them even opt for creating their own web applications.

Myth 7: Vulnerability assessment and patch management will do the job

While it is often required to have yearly security assessments performed on a web site, the common web application life cycle requires more frequent security reviews. As each new revision of a web application is developed and pushed, the potential for new security issues increases. Penetration tests and vulnerability assessments will always be somewhat out of date.

Furthermore, it is illusory to think that patch management will allow you to respond rapidly to the vulnerabilities that are identified.

Real life

Web applications are currently proving to be one of the most powerful communication and business tools. But they also come with weaknesses and potential risks that network security devices are simply not designed to protect against.

Key security concepts such as Security Monitoring, Attack Prevention, User Access control and Application Hardening, remain true. Since the web application domain is so wide and different, these concepts need to be implemented with new “application oriented” technologies.



Top 10 Ajax Security Holes and Driving Factors

One of the central ingredients of Web 2.0 applications is Ajax, built around JavaScript. This phase of evolution has transformed the Web into a superplatform. Not surprisingly, this transformation has also given rise to a new breed of worms and viruses such as Yamanner, Samy and Spaceflash. Portals like Google, Netflix, Yahoo and MySpace have witnessed new vulnerabilities in the last few months. These vulnerabilities can be leveraged by attackers to perform Phishing, Cross-site Scripting (XSS) and Cross-Site Request Forgery (XSRF) exploitation.

There is no inherent security weakness in Ajax but adaptation of this technology vector has changed the Web application development approach and methodology significantly. Data and object serialization was very difficult in the old days when DCOM and CORBA formed the core middleware tier. Ajax can consume XML, HTML, JS Array, JSON, JS Objects and other customized objects using simple GET, POST or SOAP calls; all this without invoking any middleware tier. This integration has brought about a relatively seamless data exchange between an application server and a browser. Information coming from the server is injected into the current DOM context dynamically and the state of the browser’s DOM gets recharged. Before we take a look at security holes let’s examine the key factors that seem to be driving Web 2.0 vulnerabilities.

Multiple scattered end points and hidden calls – One of the major differences between Web 2.0 applications and Web 1.0 is the information access mechanism. A Web 2.0 application has several endpoints for Ajax as compared to its predecessor, Web 1.0. Potential Ajax calls are scattered all over the browser page and can be invoked by their respective events. This scattering of Ajax calls not only makes them difficult for developers to handle, but also tends to induce sloppy coding practices, given that these calls are hidden and not easily obvious.

Validation confusion – One of the important factors in an application is input and outgoing content validation. Web 2.0 applications use bridges, mashups, feeds, etc. In many cases it is assumed that the “other party” (read server-side or client-side code) has implemented validation and this confusion leads to neither party implementing proper validation control.

Untrusted information sources – Web 2.0 applications fetch information from various untrusted sources such as feeds, blogs, search results. This content is never validated prior to being served to the end browser, leading to cross-site exploitation. It is also possible to load JavaScript in the browser that forces the browser to make cross-domain calls and opens up security holes. This can be lethal and leveraged by virus and worms.

Data serialization – Browsers can invoke an Ajax call and perform data serialization. It can fetch JS array, Objects, Feeds, XML files, HTML blocks and JSON. If any of these serialization blocks can be intercepted and manipulated, the browser can be forced to execute malicious scripts. Data serialization with untrusted information can be a lethal combination for end-user security.

Dynamic script construction & execution – Ajax opens up a backend channel and fetches information from the server and passes it to the DOM. In order to achieve this one of the requirements is the dynamic execution of JavaScripts to update the state of the DOM or the browser’s page memory. This is achieved by calling customized functions or the eval() function. The consequence of not validating content or of making an insecure call can range from a session compromise to the execution of malicious content.

Web 2.0 applications can become vulnerable with one or more lapses mentioned above. If developers have not taken enough precautions in putting in place security controls, then security issues can be opened up on both the server as well as browser ends. Here is a list and brief overview of ten possible security holes.

(1) Malformed JS Object serialization

JavaScript supports Object-Oriented Programming (OOP) techniques. It has many different built-in objects and allows the creation of user-defined objects as well. A new object can be created using new Object() or with simple inline code as shown next:

message = {
    from: "john@example.com",
    to: "jerry@victim.com",
    subject: "I am fine",
    body: "Long message here",
    showsubject: function(){ document.write(this.subject) }
};

Here is a simple message object that has the different fields required for email. This object can be serialized using Ajax and consumed by JavaScript code. The programmer can either assign it to a variable and process it, or pass it to eval(). If an attacker sends a malicious “subject” line embedded with script, the reader becomes a victim of a cross-site scripting attack. A JS object can carry both data and methods. Improper usage of JS object serialization can open up a security hole that can be exploited with crafted injection code.

(2) JSON pair injection

JavaScript Object Notation (JSON) is a simple and effective lightweight data-exchange format, one that can contain object, array, hash table, vector and list data structures. JSON is supported by JavaScript, Python, C, C++, C# and Perl, among other languages. Serialization of JSON is a very effective exchange mechanism in Web 2.0 applications. Developers very frequently use JSON with Ajax to fetch and pass required information to the DOM. Here is a simple JSON “bookmarks” object with different name-value pairs:

{"bookmarks":[{"Link":"www.example.com","Desc":"Interesting link"}]}

It is possible to inject a malicious script in either Link or Desc. If it gets injected into the DOM and executes, it falls into the XSS category. This is another way of serializing malicious content to the end-user.

(3) JS Array poisoning

JS array is another very popular object for serialization. It is easy to port across platforms and is effective in a cross-language framework. Poisoning a JS array spoils the DOM context. A JS array can be exploited with simple cross-site scripting in the browser. Here is a sample JS array:

new Array("Laptop", "Thinkpad", "T60", "Used", "900$", "It is great and I have used it for 2 years")

This array is passed by an auction site for a used laptop. If this array object is not properly sanitized on the server-side, a user can inject a script in the last field. This injection can compromise the browser and can be exploited by an attack agent.
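As a sketch of the server-side fix (Python shown; the auction fields are the ones from the example above), untrusted strings can be encoded before they are written into the array literal. A JSON array is also a valid JavaScript array literal, which makes the json module a convenient way to get the quoting right:

import html, json

fields = ["Laptop", "Thinkpad", "T60", "Used", "900$",
          "<script>alert('xss')</script>"]          # last field is attacker-supplied

# HTML-encode each field, then let json.dumps produce a correctly quoted literal.
safe_literal = json.dumps([html.escape(f) for f in fields])
js = "var item = " + safe_literal + ";"
# var item = ["Laptop", ..., "&lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;"];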

(4) Manipulated XML stream

An Ajax call consumes XML from various locations. These XML blocks originate from Web services running on SOAP, REST or XML-RPC. These Web services are consumed over proxy bridges from third-parties. If this third-party XML stream is manipulated by an attacker then the attacker can inject malformed content.

The browser consumes this stream with its own little XML parser. This XML parser can be vulnerable to different XML bombs. It is also possible to inject a script into this stream, which can, again, lead to cross-site scripting (XSS). XML consumption in the browser without proper validation can compromise the end client.

(5) Script injection in DOM

The first four holes were the result of issues with serialization. Once this serialized object stream is received in the browser, developers make certain calls to access the DOM. The objective is to “repaint” or “recharge” the DOM with new content. This can be done by calling eval(), a customized function or document.write(). If these calls are made on untrusted information streams, the browser is left open to DOM manipulation attacks. There are several document.*() calls that can be utilized by attack agents to inject XSS into the DOM context.

For example, consider this line of JavaScript code: document.write(productReview)

Here, productReview is a variable originating from a third-party blog. What if it contains JavaScript? The answer is obvious: it will get executed in the browser.

(6) Cross-domain access and Callback

Ajax cannot access cross-domains from the browser. One of the browser security features that exists in all flavors of browsers is the blocking of cross-domain access. There are several Web services that provide a callback mechanism for object serialization. Developers can use this callback mechanism to integrate Web services in the browser itself. The callback function name can be passed back so that as soon as the callback object stream is retrieved by the browser it gets executed by the specific function name originally passed from the browser.

This callback puts an extra burden on developers to have in-browser validation. If the incoming object stream is not validated by the browser then developers are putting the end client’s fate at the mercy of cross-domain targets. Intentionally or unintentionally, this cross domain service can inject malicious content into the browser. This cross domain call runs in the current DOM context and so makes the current session vulnerable as well. This entire cross-domain mechanism needs to be looked at very closely before implementation into an application.

(7) RSS & Atom injection

Syndicated feeds, RSS and Atom, are among the most popular ways of passing site-update information over the Internet. Many news sites, blogs, portals, etc. share more than one feed over the Internet. A feed is a standard XML document and can be consumed by any application. Web 2.0 applications integrate syndicated feeds using widgets or in-browser components. These components make Ajax calls to access feeds.

These feeds can be selected by end users easily. Once selected, these feeds are parsed and injected into the DOM. But if a feed is not properly validated prior to injecting it into the DOM, several security issues can crop up. It is possible to inject a malicious link or JavaScript code into the browser. Once this malicious code is injected into the DOM, the game is over. The end result is XSS and session hijacking.

(8) One-click bomb

Web 2.0 applications may not be compromised at the first instance itself, but it is possible to make an event-based injection. A malicious link with “onclick” can be injected with JavaScript. In this case, the browser is sitting on an exploit bomb waiting for the right event from the end-user to trigger the bomb. The exploit succeeds if that particular event is fired by clicking the link or button. This can lead to session hijacking through malicious code.

Once again this security hole is opened up as a result of information processing from untrusted sources without the right kind of validation. To exploit this security hole an event is required to be fired from an end-client. This event may be an innocuous event such as clicking a button or a link but the consequences can be disastrous. A malicious event that is fired may send current session information to the target or execute fancy inline exploit scripts in current browser context.

(9) Flash-based cross domain access

It is possible to make GET and POST requests from JavaScripts within a browser by using a Flash plugin’s Ajax interface. This also enables cross-domain calls to be made from any particular domain. To avoid security concerns, the Flash plugin has implemented policy-based access to other domains. This policy can be configured by placing the file crossdomain.xml at the root of the domain. If this file is left poorly configured – as is quite often the case – it opens up the possibility of cross-domain access. Here is a sample of a poorly configured XML file:
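A minimal illustration of such a policy follows; the wildcard domain is the dangerous part, since it lets any site make calls and read the responses:

<?xml version="1.0"?>
<cross-domain-policy>
    <!-- illustrative only: "*" grants every domain cross-domain access -->
    <allow-access-from domain="*" />
</cross-domain-policy>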



Now, it is possible to make cross-domain calls from within the browser itself. There are a few other security issues concerning this framework as well. Flash-based Rich Internet Applications (RIA) can be vulnerable to a cross-domain access bug over Ajax if deployment is incorrect.

(10) XSRF

Cross-Site Request Forgery is an old attack vector in which a browser can be forced to make HTTP GET or POST requests to cross-domains; requests that may trigger an event in the application logic running on the cross-domain. These can be requests for a change of password or email address. When the browser makes this call it replays the cookie and adopts an identity. This is the key aspect of the request. If an application makes a judgment on the basis of cookies alone, this attack will succeed.

In Web 2.0 applications Ajax talks with backend Web services over XML-RPC, SOAP or REST. It is possible to invoke them over GET and POST. In other words, it is also possible to make cross-site calls to these Web services. Doing so would end up compromising a victim’s profile interfaced with Web services. XSRF is an interesting attack vector and is getting a new dimension in this newly defined endpoints scenario. These endpoints may be for Ajax or Web services but can be invoked by cross-domain requests.

Exploitation of security holes and Countermeasures

Web 2.0 applications have several endpoints; each an entry point for threat modeling. To provide proper security it is imperative to guard each of these entry points. Third-party information must be processed thoroughly prior to sending it to the end-client.

To deal with Ajax serialization issues, validation must be placed on incoming streams before they hit the DOM. XML parsing and cross-domain security issues need extra attention and better security controls. Follow the simple rule of thumb of not processing cross-domain information in the browser without proper validation. Interestingly, up until now, the use of client-side scripts for input validation was thoroughly discouraged by security professionals because such checks can be circumvented easily.

Web 2.0 opens up several new holes around browser security. Exploitation of these security holes is difficult but not impossible. Combinations of these security issues and driving factors can open up exploitable holes that impact the sizeable Web community and that can be leveraged by attackers, worms and viruses. Identity compromise may be the final outcome.

Conclusion

This article has briefly touched upon a few likely security holes around Ajax. There are a few more lurking around, such as the ones leveraging cross-domain proxies to establish a one-way communication channel or memory variable access in the browser.

With Web 2.0, a lot of the logic is shifting to the client-side. This may expose the entire application to some serious threats. The urge for data integration from multiple parties and untrusted sources can increase the overall risk factor as well: XSS, XSRF, cross-domain issues and serialization on the client-side and insecure Web services, XML-RPC and REST access on the server-side. Conversely, Ajax can be used to build graceful applications with seamless data integration. However, one insecure call or information stream can backfire and end up opening up an exploitable security hole.

These new technology vectors are promising and exciting to many, but even more interesting to attack, virus and worm writers. To stay secure, this is all the more reason for developers to pay attention to implementation detail.











Web 2.0 Threats and Risks for Financial Services

Web 2.0 technologies are gaining momentum worldwide, penetrating all industries as enterprise 2.0 applications. Financial services are no exception to this trend. One of the key driving factors behind the penetration of Web 2.0 into the financial services sector is the “timely availability of information”. Wells Fargo, Merrill Lynch and JP Morgan are developing their next generation technologies using Web 2.0 components; components that will be used in banking software, trading portals and other peripheral services. The true advantage of RSS components is to push information to the end user rather than pull it from the Internet. The financial industry estimates that 95% of information exists in non-RSS formats and could become a key strategic advantage if it can be converted into RSS format. Wells Fargo has already implemented systems on the ground and these have started to yield benefits. Financial services are tuning into Web 2.0 but are simultaneously exposing their systems to next generation threats such as Cross-site Scripting (XSS), Cross-Site Request Forgery (CSRF) and application interconnection issues due to SOA.

With regard to security, two dimensions are very critical for financial systems – Identity and Data privacy. Adopting the Web 2.0 framework may involve risks and threats against these two dimensions along with other security concerns. Ajax, Flash (RIA) and Web Services deployment is critical for Web 2.0 applications. Financial services are putting these technologies in place; most without adequate threat assessment exercises. Let’s look at threats to financial services applications using Web 2.0.

Cross site scripting with Ajax

In the last few months, several cross-site scripting attacks have been observed, where malicious JavaScript code from a particular Web site gets executed on the victim’s browser thereby compromising information on the victim’s system. Poorly written Ajax routines can be exploited in financial systems. Ajax uses DOM manipulation and JavaScript to leverage a browser’s interface. It is possible to exploit document.write and eval() calls to execute malicious code in the current browser context. This can lead to identity theft by compromising cookies. Browser session exploitation is becoming popular with worms and viruses too. Infected sessions in financial services can be a major threat. The attacker is only required to craft a malicious link to coax unsuspecting users to visit a certain page from their Web browsers. This vulnerability existed in traditional applications as well but AJAX has added a new dimension to it.

RSS injection

RSS feeds exist in Web 2.0 data format. This format can be pushed to the web application to trigger an event. RSS feeds are a common means of sharing information on portals and Web applications. These feeds are consumed by Web applications and sent to the browser on the client-side. Literal JavaScripts can be injected into RSS feeds to generate attacks on the client browser. An end user visits a particular Web site that loads a page with an RSS feed. A malicious script – a script that can install software or steal cookies – embedded in the RSS feed gets executed. Financial services that use RSS feeds aggressively can pose a potential threat to resource integrity and confidentiality. RSS readers bundled with applications run by end clients can cause identity thefts if they fail to sanitize incoming information.

Untrusted data sources

One of the key elements of a Web 2.0 application is its flexibility to talk with several data sources from a single application or page. This is a great feature but, from a security perspective, it can be deadly. Financial services running Web 2.0 applications provide key features to users such as selecting RSS feeds, search triggers, news feeds, etc. Using these features, end users can tune various sources from one location. All these sources can have different points of origin and are totally untrusted. What if one of these sources injects a malicious JavaScript code snippet camouflaged as a hyperlink? Applications that trust these sources blindly can backfire. Clicking such a link can compromise the browser session and lead to identity theft. Dealing with untrusted sources in an application framework is a challenge on the security front.

Client-side routines

Web 2.0 based financial applications use Ajax routines to do a lot of work on the client-side, such as client-side validation for data types, content-checking, date fields, etc. Normally client-side checks must be backed up by server-side checks as well. Most developers fail to do so; their reasoning being the assumption that validation is taken care of in Ajax routines. Ajax has shifted a lot of business logic to the client side. This itself is a major threat because it is possible to reverse-engineer or decode these routines and extract internal information. This can help an attacker to harvest critical information about the system.

Widgets exploitation

Widgets are small components that can be integrated into an application very easily without obtaining the actual source code. These widgets are offered as part of larger libraries or created by users and posted on the Internet. It is very tempting to use them to achieve short-term goals, but keep in mind that poorly written widgets can be exploited by an attacker. If financial applications use widgets then widgets must be made a focal point for analysis. Any weak spot in a widget can lead to script injection on the browser side. It is imperative to analyze the source code of the widget for viruses, worms or possible weaknesses.

Web Services enumeration

Web Services are picking up in the financial services sector and are becoming part of trading and banking applications. Service-oriented architecture is a key component of Web 2.0 applications. WSDL (Web Services Description Language) is an interface to Web services. This file provides sensitive information about technologies, exposed methods, invocation patterns, etc. that can aid in defining exploitation methods. Unnecessary functions or methods kept open can spell potential disaster for Web services. Web Services must follow WS-Security standards to counter the threat of information leakage from the WSDL file. WSDL enumeration helps an attacker build an exploit, and WSDL file access by unauthorized users can lead to private data access.

XML poisoning and Injections

SOAP, XML-RPC and REST are the new standard protocols for information sharing and object invocation. These standards use XML as the underlying format, and financial applications use them for client-to-server or application-to-application communication. A not-uncommon technique is to apply recursive payloads, repeating similar XML nodes many times; an engine's poor handling of such XML information may result in a denial of service on the server.

Web services consume information and variables from SOAP messages. It is possible to manipulate these variables. For example, if one of the nodes in a SOAP message carries the value 10, an attacker can start manipulating this node by trying different injection attacks – SQL, LDAP, XPATH, command shell – and exploring possible attack vectors to get a hold of internal machines. XML poisoning and payload injection are another emerging threat domain for Web 2.0 financial applications.

CSRF with Web 2.0 applications

CSRF allows transactions to be carried out without an end user's consent, making it one of the most effective attack vectors against financial applications. In Web 2.0 applications Ajax talks with backend Web services over XML-RPC, SOAP or REST, and these can be invoked using GET and POST methods. In other words, it is also possible to make cross-site calls to these Web services and, in doing so, compromise a victim's profile interfaced with Web services. CSRF is an interesting attack vector that takes on a new dimension in this newly defined endpoints scenario. These endpoints may be for Ajax or Web services but can also be invoked by cross-domain requests. Key financial transactions cannot depend simply on authenticated sessions, but must take extra care to process information, either by manually re-validating the password or by using CAPTCHA.
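As an illustrative sketch of one such extra check (Python; the session object and function names are hypothetical and assume a framework-provided server-side session), a per-session anti-CSRF token can be embedded in each form or Ajax endpoint and verified before any state-changing transaction is executed:

import hmac, secrets

def issue_csrf_token(session):
    # Store a random token in the server-side session and embed it in the page.
    token = secrets.token_hex(16)
    session["csrf_token"] = token
    return token

def verify_csrf_token(session, submitted):
    # A cross-site <img> or auto-submitted form cannot read the page, so it
    # cannot know the token; reject the transaction when it does not match.
    expected = session.get("csrf_token", "")
    return hmac.compare_digest(expected, submitted)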

Conclusion

A lot more analysis needs to be done before financial applications can be integrated with their core businesses using Web 2.0. The Web security space is filling up with new attacks, and with new ways of delivering old attacks – both are dangerous where “monetary transactions” are involved. Here, we have seen just a small set of attacks. There are several other attack vectors with respect to Web 2.0 frameworks. A better threat model is required to undertake a thorough security analysis. Web 2.0 is a promising technology, but one that needs careful coding and usage practices prior to being consumed in applications.

Top 5 JavaScript frameworks

5) Yahoo! User Interface Library

The Yahoo! User Interface (YUI) Library is a set of utilities and controls, written in JavaScript, for building richly interactive web applications using techniques such as DOM scripting, DHTML and AJAX. The YUI Library also includes several core CSS resources. All components in the YUI Library have been released as open source under a BSD license and are free for all uses.

Features

Two different types of components are available: utilities and controls. The YUI utilities simplify in-browser development that relies on cross-browser DOM scripting, as do all web applications with DHTML and AJAX characteristics. The YUI Library controls provide highly interactive visual design elements for your web pages. These elements are created and managed entirely on the client side and never require a page refresh.

Utilities available:

  • Animation: Create “cinematic effects” on your pages by animating the position, size, opacity or other characteristics of page elements. These effects can be used to reinforce the user’s understanding of changes happening on the page.
  • Browser History Manager: Developers of rich internet applications want bookmarks to target not just pages but page states and they want the browser’s back button to operate meaningfully within their application’s screens. Browser History Manager provides bookmarking and back button control in rich internet applications.
  • Connection Manager: This utility helps manage XMLHttpRequest (commonly referred to as AJAX) transactions in a cross-browser fashion, including integrated support for form posts, error handling and callbacks. Connection Manager also supports file uploading. A short usage sketch appears after this list.
  • DataSource Utility: DataSource provides an interface for retrieving data from arrays, XHR services, and custom functions with integrated caching and Connection Manager support.
  • Dom Collection: The DOM Utility is an umbrella object comprising a variety of convenience methods for common DOM-scripting tasks, including element positioning and CSS style management.
  • Drag & Drop: Create draggable objects that can be picked up and dropped elsewhere on the page. You write code for the “interesting moments” that are triggered at each stage of the interaction (such as when a dragged object crosses over a target); the utility handles all the housekeeping and keeps things working smoothly in all supported browsers.
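
As a small sketch of the Connection Manager pattern (assuming the YUI yahoo, event and connection scripts are already included on the page, and using a hypothetical URL and element ID):

  // Asynchronous GET via YUI 2.x Connection Manager; the callback object
  // supplies success and failure handlers.
  var callback = {
      success: function (o) {
          // o.responseText holds the raw response body
          document.getElementById('quotes').innerHTML = o.responseText;
      },
      failure: function (o) {
          alert('Request failed: ' + o.statusText);
      }
  };
  YAHOO.util.Connect.asyncRequest('GET', '/quotes/latest', callback);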

Controls available:

  • AutoComplete: The AutoComplete Control allows you to streamline user interactions involving text-entry; the control provides suggestion lists and type-ahead functionality based on a variety of data-source formats and supports server-side data-sources via XMLHttpRequest.
  • Button Control: The Button Control provides checkbox, radio button, submit and menu-button UI elements that are more impactful visually and more powerful programmatically than the browser’s built-in form widgets.
  • Calendar: The Calendar Control is a graphical, dynamic control used for date selection (a short usage sketch appears after this list).
  • Container: The Container family of controls supports a variety of DHTML windowing patterns including Tooltip, Panel, Dialog and SimpleDialog. The Module and Overlay controls provide a platform for implementing additional, customized DHTML windowing patterns.
  • DataTable Control: DataTable leverages the semantic markup of the HTML table and enhances it with sorting, column-resizing, inline editing of data fields, and more.
  • Logger: The YUI Logger provides a quick and easy way to write log messages to an on-screen console, the FireBug extension for Firefox, or the Safari JavaScript console. Debug builds of YUI Library components are integrated with Logger to output messages for debugging implementations.
  • Menu: Application-style fly-out menus require just a few lines of code with the Menu Control. Menus can be generated entirely in JavaScript or can be layered on top of semantic unordered lists.
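
A short sketch of the Calendar control, assuming the calendar script is included, a container div with the ID 'calContainer' exists, and 'tradeDate' is a hypothetical text field:

  // Render a calendar and copy the selected date into a form field.
  var cal = new YAHOO.widget.Calendar('cal', 'calContainer');
  cal.selectEvent.subscribe(function (type, args) {
      // args[0] holds the selected dates as [year, month, day] triples
      var date = args[0][0];
      document.getElementById('tradeDate').value = date.join('/');
  }, cal, true);
  cal.render();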

Download and more information: here

4) Prototype

Prototype is a JavaScript Framework that aims to ease development of dynamic web applications.

Featuring a unique, easy-to-use toolkit for class-driven development and the nicest Ajax library around, Prototype is quickly becoming the codebase of choice for web application developers everywhere.

Features

  • Easily deploy Ajax applications: Besides simple requests, this module also deals in a smart way with JavaScript code returned from a server and provides helper classes for polling. A short Ajax.Request sketch appears after this list.
  • DOM extending: adds many convenience methods to elements returned by the $() function: for instance, you can write $('comments').addClassName('active').show() to get the element with the ID 'comments', add a class name to it and show it (if it was previously hidden).
  • Utilizes JSON (JavaScript Object Notation): JSON is a lightweight and fast alternative to XML in Ajax requests.
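
A small Ajax.Request sketch (the URL and the 'balance' element are hypothetical):

  // Fetch a value from the server and write it into the page.
  new Ajax.Request('/account/balance', {
      method: 'get',
      parameters: 'account=12345',
      onSuccess: function (transport) {
          $('balance').update(transport.responseText);
      },
      onFailure: function () {
          alert('Could not fetch the balance.');
      }
  });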

Download and more information here

3) Rico

Rico is designed for building rich Internet applications.

Features

  • Animation Effects: provides responsive animation for smooth effects and transitions that can communicate change in richer ways than traditional web applications have explored before. Unlike most effects, Rico 2.0 animation can be interrupted, paused, resumed, or have other effects applied to it, enabling responsive interaction that the user does not have to wait on.
  • Styling: Rico provides several cinematic effects as well as some simple visual style effects in a very simple interface.
  • Drag And Drop: Desktop applications have long used drag and drop in their interfaces to simplify user interaction. Rico provides one of the simplest interfaces for enabling your web application to support drag and drop: just register any HTML element or JavaScript object as a draggable and any other HTML element or JavaScript object as a drop zone, and Rico handles the rest (see the sketch after this list).
  • AJAX Support: Rico provides a very simple interface for registering Ajax request handlers as well as HTML elements or JavaScript objects as Ajax response objects. Multiple elements and/or objects may be updated as the result of one Ajax request.
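
A drag-and-drop registration sketch is shown below; the element IDs are hypothetical, and the dndMgr / Rico.Draggable / Rico.Dropzone names follow the registration pattern used in Rico's examples, so verify them against the version you download:

  // Register a draggable element and a drop zone; Rico handles the mouse
  // tracking, ghosting and drop logic once both are registered.
  dndMgr.registerDraggable(new Rico.Draggable('order-item', 'pendingOrder'));
  dndMgr.registerDropZone(new Rico.Dropzone('approvedOrders'));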

Download and more information here

2) Qooxdoo

qooxdoo is one of the most comprehensive and innovative Open Source multipurpose AJAX frameworks, dual-licensed under LGPL/EPL. It includes support for professional JavaScript development, a state-of-the-art GUI toolkit and high-level client-server communication.

Features

  • Client detection: qooxdoo knows what browser is being used and makes this information available to you.
  • Browser abstraction: qooxdoo includes a browser abstraction layer which tries to abstract all browser specifics into one common “standard”. This simplifies the coding of countless objects by allowing you to focus on what you want rather than on how each browser wants it. The abstraction layer also comes with some basic functions often needed when creating real GUIs, for example reading the runtime styles or positions (relative to the page, client and screen) of any element in your document.
  • Advanced property implementation: qooxdoo supports “real” properties for objects. This means any class can define properties that its instances should have. The addProperty handler also adds getter and setter functions; the only thing you need to add, should you need it, is a modifier function.
  • Event Management: qooxdoo comes with its own event interface. This includes event registration and deregistration functions.

    Furthermore, the target function can be called in any object context (the default is the object that defines the event listener). The event system normalizes differences between the browsers and includes support for mousewheel, double-click and other fancy stuff. qooxdoo also comes with an advanced capture feature which allows you to capture all events, for example while a user drags something around.

Download and more information here

1) Dojo

Dojo allows you to easily build dynamic capabilities into web pages and any other environment that supports JavaScript sanely. You can use the components that Dojo provides to make your web sites more usable, responsive, and functional. With Dojo you can build degradable user interfaces more easily, prototype interactive widgets quickly, and animate transitions. You can use the lower-level APIs and compatibility layers from Dojo to write portable JavaScript and simplify complex scripts. Dojo’s event system, I/O APIs, and generic language enhancements form the basis of a powerful programming environment. You can use the Dojo build tools to write command-line unit tests for your JavaScript code. The Dojo build process helps you optimize your JavaScript for deployment by grouping sets of files together and reusing those groups through “profiles”.
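
A small sketch using Dojo 0.9-era core APIs (the URL and element ID are hypothetical):

  // Wait for the page to load, fetch some text and write it into the page.
  dojo.addOnLoad(function () {
      dojo.xhrGet({
          url: '/portfolio/summary',
          handleAs: 'text',
          load: function (data) {
              dojo.byId('summary').innerHTML = data;
          },
          error: function (err) {
              alert('Request failed: ' + err);
          }
      });
  });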

Features

  • Multiple Points Of Entry: A fundamental concept in the design of Dojo is “multiple points of entry”: Dojo works hard to make sure that users can start using it at the level they are most comfortable with.
  • Interpreter Independence: Dojo tries very hard to ensure that at least the very core of the system can be supported on as many JavaScript-enabled platforms as possible. This allows Dojo to serve as a “standard library” for JavaScript programmers as they move between client-side, server-side, and desktop programming environments.
  • Unifies several codebases: Dojo builds on several contributed code bases, including nWidgets, Burstlib, and f(m).