On occasion, developers fail to restrict access to administrative pages, privileged pages, or other file resources, which can then be accessed simply by a user who manually changes a URL. This happens when an application or page fails to properly protect a requested resource. The condition can stem from:

  • An improper configuration.
  • A lack of access-control checks in the code that handles requests.
  • A lack of awareness of which pages need protection.
  • Relying on hiding links and buttons from the user to prevent page access.

Typical attacks that target URL access failures include:

  • Path Manipulation – an intruder specifies a path used in a file system operation and gains unauthorized access to a resource.
  • Path Traversal – an intruder attempts to access files and directories stored outside the web root folder using combinations of “dot-dot-slash (../)” sequences; flaws in input validation allow this condition to exist. A deliberately vulnerable handler is sketched after this list.
  • Forced Browsing – an intruder seeks to find and connect to resources that are not referenced by the application but are still available on the server. Forced browsing often takes the form of a brute-force attack that attempts to discover unlinked content in the domain directory, such as temporary directories and files or old backup and configuration files. Such an attack may yield source code, credentials, or internal network addressing information.
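
To make the dot-dot-slash sequence concrete, the sketch below shows a deliberately vulnerable file-download handler. It assumes a Flask application; the route, parameter name, and directory are hypothetical. A request such as /download?file=../../../etc/passwd walks out of the intended directory because the user-supplied name is joined to the path unchecked.

```python
# Deliberately VULNERABLE sketch -- do not use as-is.
from flask import Flask, request, send_file

app = Flask(__name__)

@app.route("/download")
def download():
    # The file name comes straight from the query string, so a value like
    # "../../../etc/passwd" escapes the reports directory via ../ sequences.
    filename = request.args.get("file", "")
    return send_file("static/reports/" + filename)
```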

The impact of these failures is that some or all accounts may be accessed without authorization.

Common flaws that lead to this type of failure include allowing hidden URLs for administrators or privileged users, allowing access to hidden files, having out-of-date or insufficient code or access control policies, and testing for access privileges only on the client rather than on the server. Vulnerabilities should be assessed by examining every page to determine (a sketch of these checks follows the list):

  • Does it require authentication?
  • Which authenticated users should be able to access it?
  • Are both authentication and authorization checks performed?
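
One way to answer those three questions uniformly is to attach the checks to each page on the server. The sketch below assumes a Flask application with session-based login; the route, role names, and secret key are illustrative assumptions.

```python
from functools import wraps
from flask import Flask, session, abort

app = Flask(__name__)
app.secret_key = "change-me"  # placeholder value for this sketch

def requires_role(role):
    """Per-page guard: the page requires authentication, names which users
    may access it, and performs both checks on the server, not the client."""
    def decorator(view):
        @wraps(view)
        def wrapper(*args, **kwargs):
            if "user" not in session:        # authentication check
                abort(401)
            if session.get("role") != role:  # authorization check
                abort(403)
            return view(*args, **kwargs)
        return wrapper
    return decorator

@app.route("/admin/users")
@requires_role("admin")
def admin_users():
    return "admin-only page"
```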

This examination should be followed by penetration tests.

What techniques can developers use to prevent this kind of attack?

  • Never assume that users will be unaware of hidden functionality.
  • Make authentication and authorization policies role-based.
  • Make the access control rules part of the business, architecture, and design of the application.
  • Make policies configurable rather than hard-coded.
  • Have enforcement mechanisms deny all access by default (see the sketch after this list).
  • Check that pages in a workflow are accessed only under the proper conditions.
  • Ensure that all URLs and business functions are protected by an access control system.
  • Ensure that include files cannot be accessed directly.
  • Block access to file types not handled by the application.
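
The deny-by-default, role-based, and configurable points can be combined in a single rule table. The sketch below uses an assumed configuration format; in a real deployment the table would be loaded from a configuration file rather than defined in code.

```python
# URL prefix -> roles explicitly allowed; anything unlisted is denied.
ACCESS_RULES = {
    "/admin/":   {"admin"},
    "/reports/": {"admin", "auditor"},
    "/account/": {"admin", "auditor", "user"},
}

def is_allowed(url_path: str, role: str) -> bool:
    """Grant access only when an explicit rule covers the URL and role;
    everything else falls through to a default deny."""
    for prefix, roles in ACCESS_RULES.items():
        if url_path.startswith(prefix):
            return role in roles
    return False  # default deny: unmapped URLs are never served

assert is_allowed("/admin/users", "admin")
assert not is_allowed("/admin/users", "user")     # wrong role
assert not is_allowed("/backup/db.bak", "admin")  # unmapped path
```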

To prevent path traversal attacks:

  • Do not pass user input directly to file system calls.
  • Use indexes rather than actual file names (combined with a path check in the sketch after this list).
  • Do not allow users to supply portions of a path.
  • Validate user input by accepting only known-good values rather than sanitizing the data.
  • Use chrooted jails and code access policies to restrict where files can be read or written.
  • Verify the user’s permission restrictions: can they read or modify files?
  • Use the most restrictive permissions possible when developing and deploying web applications.
  • If possible, deploy the application on read-only media.
  • Remove writable directories and files from the server.
  • Remove ‘everyone’ and ‘guest’ permissions.
  • Disable insecure indexing of directories.
  • Remove unneeded files from the web root.
  • Rename include files.
  • Map remaining files to an error handler.
  • Protect temporary files by giving them random names, placing them outside the web root, and removing them through garbage collection.
  • Check for vulnerabilities by attempting to access files on the server.
  • Use a robots.txt file to restrict crawling by search engines.
  • When using PHP, disable the allow_url_fopen setting.
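
Several of these rules reinforce one another. The sketch below combines an index instead of a file name with a canonical-path containment check as a second layer of defense; the base directory and file names are assumptions for illustration.

```python
import os

BASE_DIR = os.path.realpath("/var/app/reports")   # assumed location
FILE_INDEX = {1: "summary.pdf", 2: "totals.csv"}  # known-good names only

def open_report(index: int):
    # Accept only a known-good index; the user never supplies a file name
    # or any portion of a path.
    name = FILE_INDEX.get(index)
    if name is None:
        raise PermissionError("unknown report index")
    # Defense in depth: even a known-good name must resolve inside BASE_DIR,
    # so a symlink or a future bug cannot escape the directory.
    path = os.path.realpath(os.path.join(BASE_DIR, name))
    if not path.startswith(BASE_DIR + os.sep):
        raise PermissionError("path escapes the report directory")
    return open(path, "rb")
```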

For more information, see:

  • Failure to Restrict URL Access
  • Forced Browsing
  • Testing for Path Traversal