HTTP 304 and Apache sinkhole

This is a short post, put here as a “reminder to self” on browser caching.

A colleague recently set up an HTTP sinkhole with Apache. The setup redirected all user requests to one specific resource.
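The post doesn't include the colleague's actual configuration, but a sinkhole of this kind can be sketched with mod_rewrite (a minimal sketch; the /sinkhole.html target is an illustrative placeholder):

```text
# Sketch of an Apache mod_rewrite sinkhole: rewrite every request to one
# fixed resource, excluding the resource itself to avoid a rewrite loop.
RewriteEngine On
RewriteCond %{REQUEST_URI} !^/sinkhole\.html$
RewriteRule ^ /sinkhole.html [L]
```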

When deploying the sinkhole, the web server logs showed that the first requests were logged with HTTP status code 200 (“OK”). Subsequent requests, however, were logged with HTTP status code 304 (“Not Modified”).

The HTTP 304 code basically… Read more.
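The 200-then-304 pattern in the logs is conditional-GET behavior, which can be sketched as follows (an illustrative sketch, not the actual Apache logic; the dates are made up):

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def conditional_status(resource_mtime, if_modified_since=None):
    # First request: the browser has no cached copy, so it sends no
    # validator header and the server answers 200 with the full body.
    if if_modified_since is None:
        return 200
    # Revalidation: the browser replays the resource's Last-Modified
    # value in If-Modified-Since; if nothing changed, the server
    # answers 304 and sends no body.
    cached_at = parsedate_to_datetime(if_modified_since)
    return 304 if resource_mtime <= cached_at else 200

mtime = datetime(2016, 1, 1, tzinfo=timezone.utc)
print(conditional_status(mtime))                                   # 200
print(conditional_status(mtime, "Fri, 01 Jan 2016 00:00:00 GMT"))  # 304
```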

The New Glibc Getaddrinfo Vulnerability: Is It GHOST 2.0?

I had a guest post published on Security Training for Incident Handlers: The New Glibc Getaddrinfo Vulnerability: Is It GHOST 2.0?.

The post describes the critical issue found in glibc getaddrinfo (CVE-2015-7547) and gives you advice on patch management to deal with current (and future) issues in glibc.

Using Passive DNS for Incident Response

According to isc.org “Passive DNS” or “passive DNS replication” is a technique invented by Florian Weimer in 2004 to opportunistically reconstruct a partial view of the data available in the global Domain Name System into a central database where it can be indexed and queried.

In practical terms, passive DNS describes a historical database of DNS resolutions. What does this all mean? It means that you can look up to what IP address a domain resolved… Read more.
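Conceptually, such a database is an append-only log of observed resolutions, indexed both by name and by IP. A toy sketch (the domain and addresses below are made up for illustration):

```python
from collections import defaultdict
from datetime import date

class PassiveDNS:
    """Minimal in-memory passive DNS store: record every observed
    resolution so you can later ask "what did this name resolve to,
    and when?" and "what other names used this IP?"."""

    def __init__(self):
        self._by_name = defaultdict(list)  # name -> [(seen, rrtype, rdata)]
        self._by_ip = defaultdict(set)     # rdata -> {names}

    def observe(self, name, rrtype, rdata, seen):
        self._by_name[name].append((seen, rrtype, rdata))
        self._by_ip[rdata].add(name)

    def history(self, name):
        # Chronological resolution history for a domain.
        return sorted(self._by_name[name])

    def domains_for(self, ip):
        # Pivot from an IP to every domain seen resolving to it.
        return sorted(self._by_ip[ip])

pdns = PassiveDNS()
pdns.observe("evil.example", "A", "198.51.100.7", date(2016, 1, 3))
pdns.observe("evil.example", "A", "203.0.113.9", date(2016, 2, 1))
print(pdns.history("evil.example"))
print(pdns.domains_for("198.51.100.7"))
```

During incident response the second query direction (IP to domains) is often the useful pivot: one sinkhole or C2 address frequently serves many throwaway domains.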

Introduction to Modbus TCP traffic

Modbus is a serial communication protocol. It is the most widely used protocol within ICS (industrial control systems).

It works in a Master / Slave mode. This means the Master has to pull the information from a Slave at regular intervals.

Modbus is a clear text protocol with no authentication.

Although it was initially developed for serial communication, it is now often used over TCP. Other versions of Modbus (used in serial communication) are for example Modbus RTU… Read more.
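To make the "clear text, no authentication" point concrete, here is a sketch of a Modbus TCP request frame for function code 3 (Read Holding Registers); the field layout follows the Modbus/TCP framing, while the transaction and unit ids below are arbitrary examples:

```python
import struct

def read_holding_registers(transaction_id, unit_id, start, count):
    """Build a Modbus TCP request ADU for function code 3.

    MBAP header: transaction id, protocol id (always 0), length
    (unit id + PDU, in bytes), unit id.
    PDU: function code, starting address, number of registers.
    Every field is plain big-endian binary: no encryption, no
    authentication, anyone on the network can forge one.
    """
    pdu = struct.pack(">BHH", 3, start, count)
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = read_holding_registers(1, 1, 0, 10)
print(frame.hex())
```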

Logging nfsen queries

In two previous posts I covered “What is netflow and when do you use it?” and “Use netflow with nfdump and nfsen”.

Nfsen provides a web interface to netflow data made available via nfdump. Because of the sensitive nature of netflow data, it is important to have strict access controls and extensive logging of nfsen access. You should have a complete access and query log showing who did what at any given time.

Access… Read more.

What is netflow and when do you use it?

Netflow is a feature that was introduced on Cisco routers and provides the ability to collect IP network traffic as it enters or exits an interface. Netflow data gives you an overview of traffic flows, based on the network source and destination. Because of this it lets you understand who is using the network, the destination of your traffic, when the network is utilized and the type of applications that consume the… Read more.

Intro to basic forensic investigation of a hard drive

For a recent project I had to do a basic forensic investigation of a hard drive. The assignment included two questions:

- detect if there were viruses on the system
- analyze the surf behavior of one of the users (policy related)

I want to share the steps that I took to do basic forensics on a cloned disk image. This is not an in-depth forensic investigation but it was enough for this assignment.

Read more.

Client side certificate authentication

TLS (Transport Layer Security) and its predecessor SSL provide secure communication over a computer network. The most common use for TLS/SSL is for establishing an encrypted link between a web server and a browser. This allows you to guarantee that all data passed between the browser and the web server is private and not tampered with.

You can use certificates on both sides, the server side and the client side.

Web site certificates, or server… Read more.
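The server-side half can be sketched with Python's ssl module (a minimal sketch; the certificate paths, shown commented out, are placeholders — the essential piece is CERT_REQUIRED, which makes the handshake fail unless the client presents a certificate signed by a trusted CA):

```python
import ssl

# TLS context for a server that demands a client certificate (mutual TLS).
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.verify_mode = ssl.CERT_REQUIRED

# Placeholders for the real deployment:
# context.load_cert_chain("server.crt", "server.key")  # server's own identity
# context.load_verify_locations("client-ca.crt")       # CA that signs client certs

print(context.verify_mode == ssl.CERT_REQUIRED)
```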

Bind DNS Sinkhole, Elasticsearch and Logstash

I wanted to track DNS queries that get sent to nameservers that do not serve a particular domain or network. I used a Bind DNS server that logged the query and returned a fixed response. The logs get parsed by Logstash and stored in Elasticsearch for analysis.

Installing bind is easy via the bind9 package:

  apt-get install bind9

This will add a new user ‘bind’ and store the configuration files in /etc/bind.
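A minimal sinkhole configuration might look like the sketch below (file names and the 192.0.2.1 answer address are illustrative; declaring the root zone as master makes this server answer authoritatively for every name that reaches it):

```text
// /etc/bind/named.conf.local (sketch): claim the root zone so every
// query that reaches this server is answered from the sinkhole zone
zone "." {
    type master;
    file "/etc/bind/db.sinkhole";
};

; /etc/bind/db.sinkhole (sketch): a wildcard record returns one fixed
; address for any name; 192.0.2.1 is a documentation-range example
$TTL 60
@  IN SOA sinkhole. root.sinkhole. ( 1 3600 600 86400 60 )
@  IN NS  sinkhole.
*  IN A   192.0.2.1
```

In practice you would also turn on query logging (for example with `rndc querylog` or a logging clause in named.conf) so Logstash has something to parse.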

For this setup I… Read more.

Using ELK as a dashboard for honeypots

The Elasticsearch ELK Stack (Elasticsearch, Logstash and Kibana) is an ideal solution for a search and analytics platform on honeypot data.

There are various how-tos describing how to get ELK running (see here, here and here), so I assume you already have a working ELK system.

This post describes how to import honeypot data into ELK. The easiest way to get all the necessary scripts and configuration files is by cloning the full repository.

If… Read more.