Proxy server logs for incident response

When you do incident response, having access to detailed logs is crucial. One of those treasure troves is proxy server logs.

Proxy server logs contain the requests made by users and applications on your network. This includes not only the most obvious part, website requests made by users, but also requests made to the internet by applications or services (for example application updates).

Ideally you have a transparent proxy, meaning that all outgoing requests are redirected by a firewall to a proxy. Unfortunately not all applications behave properly when they have to go through a proxy. As a result, in a lot of corporate environments you’ll find the use of a proxy forced on users or applications via a configuration setting. If you’re using PAC files for proxy configuration, then now might be a good time to read the notification Proxy auto-config (PAC) files have access to full HTTPS URLs.

It would be a shame to have proxy server logs for incident response, only to find out during an investigation that they do not contain the information you need. This post covers some of the settings you should take into consideration when configuring your proxy server.

Configuring proxy server logs for incident response

Time synchronization

If you try to reconstruct a timeline, correct timestamps are crucial. So make sure that your proxy server is NTP-synchronized. Also make note of the timezone used for logging. Ideally you use UTC.

Log retention

A lot of security incidents are detected long after the initial compromise took place. According to Mandiant, the median number of days that attackers were present on a victim’s network is 146 days (320 days for data breaches with external notification and 56 days with internal discovery). If you can afford the storage, you should keep proxy logs for a relatively long time (this means years, not weeks or months). If you don’t have enough storage, you can include the logs in your backup procedure and restore them when you conduct an investigation. Make sure that logs (and backups) are properly protected, both for access and integrity.

Proxy log settings

Proxy server logs should track the information below to be useful during an investigation:

  • Date and time
  • HTTP protocol version
  • HTTP request method
  • Content type
  • User agent
  • HTTP referer
  • Length of the content response
  • Authenticated username of the client
  • Client IP and source port
  • Target host IP and destination port
  • Target hostname (DNS)
  • The requested resource
  • HTTP status code of reply
  • Time needed to provide the reply back to the client
  • Proxy action (from cache, not from cache, …)
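As an illustration of how to capture these fields, here is a sketch of a Squid logformat directive. The format codes are from Squid 3.x documentation; verify them against your own Squid version (and adapt if you use another proxy product):

```
# Hypothetical custom log format covering the fields listed above
logformat ir_log %ts.%03tu %tr %>a:%>p %<a:%<p %un %rm %rv %>Hs %Ss %mt %<st "%{Referer}>h" "%{User-Agent}>h" %ru
access_log /var/log/squid/access-ir.log ir_log
```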

Alerts on proxy server entries

Besides being useful during an incident you can also raise alerts based on the content of the proxy server logs.

Unusual protocol version

Most modern clients will now use HTTP/1.1. Requests with HTTP/1.0 require deeper inspection. Don’t be alarmed immediately: some older applications might just not support HTTP/1.1. Keep a list of those applications to exclude them from raising an alert.
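A minimal sketch of such an alert rule in Python; the log-entry field names and the allowlist entry are illustrative assumptions, not from any specific product:

```python
# Illustrative allowlist of legacy applications still on HTTP/1.0 (made-up entry)
LEGACY_AGENTS = {"LegacyUpdater/2.1"}

def flag_http10(entries):
    """Return entries using HTTP/1.0 whose user agent is not on the legacy allowlist.

    entries: iterable of dicts with 'version' and 'user_agent' keys.
    """
    return [e for e in entries
            if e["version"] == "HTTP/1.0"
            and e["user_agent"] not in LEGACY_AGENTS]
```

Maintaining the allowlist as data (rather than hardcoded exclusions in the alert logic) makes it easy to review which legacy applications you are tolerating.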

User agents

You should not blindly trust user agent information; it’s something that can easily be crafted. But building statistics on the user agents can prove useful. Look out for user agents that indicate the use of a scripting language (Python, for example) or user agents that don’t make sense. You can use UserAgentString.com as a reference.

If you control your environment then you can develop a list of “known” and “accepted” user agents. Everything that’s out of the ordinary should then trigger an alarm.
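That idea can be sketched as follows, assuming you already extract the user agent field per request; the known-agent entry is a made-up example you would replace with your environment’s baseline:

```python
from collections import Counter

# Illustrative allowlist; populate from your own environment's baseline
KNOWN_AGENTS = {"Mozilla/5.0 (Windows NT 10.0; rv:45.0) Gecko/20100101 Firefox/45.0"}

def unusual_agents(log_agents):
    """Count user agents and return those not on the known list, rarest first."""
    counts = Counter(ua for ua in log_agents if ua not in KNOWN_AGENTS)
    return sorted(counts.items(), key=lambda kv: kv[1])
```

Sorting rarest-first helps during triage: a user agent seen only once on the whole network is usually more interesting than one seen thousands of times.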

If your proxy server logs the computer name, you can use it as an extra check to validate the trustworthiness of the user agent field.

HTTP request methods

Log the HTTP request method (for example GET, POST) and graph / alert on (an increase of) unusual methods (for example CONNECT, PUT).

Focus on POSTs with content types different from text/html. Especially POSTs with application/octet-stream or any of the MS Office document file types should raise suspicion. Repeated requests can indicate that something or someone is uploading a lot of (corporate?) documents.
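A hedged sketch of such a rule; the field names are illustrative, and the content-type set is a starting point you would extend with the other Office MIME types:

```python
# Content types worth flagging on POST requests (extend with further Office types)
SUSPICIOUS_TYPES = {
    "application/octet-stream",
    "application/msword",
    "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
}

def suspicious_posts(entries):
    """Return POST entries whose content type is on the suspicious list.

    entries: iterable of dicts with 'method' and 'content_type' keys.
    """
    return [e for e in entries
            if e["method"] == "POST" and e["content_type"] in SUSPICIOUS_TYPES]
```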

GET requests contain the query string in the URL, which can easily be logged. POST requests, however, carry their data in the HTTP message body, which is not always straightforward to log. But without this information it’s sometimes very difficult to determine the actual payload that was exchanged. You’ll have to look into something like ModSecurity for logging HTTP POST bodies. Also don’t forget that logging the entire query string or request body, regardless of GET or POST, can raise privacy concerns. Consult the HR and legal departments for advice.

Length of the content response

Track the length of the content response. A host that repeatedly sends or receives responses of the same content length might require further inspection. It can mean an application update, but also malware beaconing out to command and control servers.

Also, excessive content lengths should raise an alarm.
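Both checks can be sketched together; the field names and the thresholds are assumptions to tune for your environment:

```python
from collections import Counter

def length_alerts(entries, repeat_threshold=10, max_length=50_000_000):
    """Flag (client, length) pairs repeated >= repeat_threshold times (possible
    beaconing) and any single response exceeding max_length bytes.

    entries: iterable of dicts with 'client_ip' and 'length' keys.
    Thresholds are illustrative, not recommendations.
    """
    counts = Counter((e["client_ip"], e["length"]) for e in entries)
    repeated = [pair for pair, n in counts.items() if n >= repeat_threshold]
    oversized = [e for e in entries if e["length"] > max_length]
    return repeated, oversized
```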

Target host IP, destination port, hostname and requested resource

Requests that go to non-standard HTTP or HTTPS ports should always raise an alert.
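A trivial sketch of that check, assuming you extract the destination port per request; the set of expected ports is an assumption you should adapt:

```python
# Ports considered normal for web traffic in this example environment
STANDARD_WEB_PORTS = {80, 443, 8080}

def nonstandard_port(dest_port: int) -> bool:
    """True when a request targets a port outside the expected web ports."""
    return dest_port not in STANDARD_WEB_PORTS
```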

Last but not least, you should use the information provided by threat intelligence platforms, for example MISP, to track requests for hosts or resources that are known to be bad.

As a bonus, you can also use passive DNS information in addition to inspecting the requested resources. This becomes especially useful if your proxy server logs both the target IP and the hostname. If a domain was hosting something malicious on a specific IP during a limited timeframe, you can use both sets of data to check if you were affected.
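This cross-check can be sketched as follows; all field names and indicator values are hypothetical:

```python
def matches_indicator(entry, bad_host, bad_ip, window_start, window_end):
    """Check whether a proxy log entry hit a hostname/IP pair during the
    timeframe in which passive DNS says the pair was malicious.

    entry: dict with 'timestamp' (epoch seconds), 'host' and 'dest_ip' keys.
    """
    return (entry["host"] == bad_host
            and entry["dest_ip"] == bad_ip
            and window_start <= entry["timestamp"] <= window_end)
```

Matching on both hostname and IP within the window avoids false positives from the period when the same domain or address was serving benign content.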

Collecting proxy server logs

If you are using a BlueCoat proxy, then you can use the article BlueCoat Proxy log search and analytics with ELK as a guideline on how to use ELK to analyze those logs.
