5G – 101

A short introduction to 5G. What is 5G, why do we need it and where will it be used?

If you want to read about the security threats to 5G, these are a couple of interesting resources:

Handle phishing e-mails with a phishing alert button and TheHive

Phishing alert button

Your users are the first line of defence against threats, especially when it comes to phishing. One of the ways to get them more involved is to offer a simple and easy way to report suspicious messages, such as phishing e-mails. You can do this via a phishing alert button that allows users to notify the helpdesk of a suspicious message. The technology behind such a button is straightforward:

  • Forward the message;
  • Remove the message from the inbox.

Instead of starting from scratch you can use the free Phish Alert Button (PAB) from KnowBe4.

Pipeline to handle phishing e-mails

The pipeline to handle phishing e-mail notifications from users is as follows.

  1. A user reports a suspicious message via the phish alert button. This button is integrated into Outlook.
  2. The message is sent to the helpdesk. The mail flow to the helpdesk must be unfiltered, in the sense that you do not want your security controls to mangle the message you're about to investigate. If these controls detect it, that's good. It means your users are protected (and they shouldn't have received the message in the first place), but you don't want to investigate mangled URLs or attachments. Mails to the helpdesk are stored in a dedicated mailbox, accessible via IMAP.
  3. The mails in that IMAP mailbox are read with a tool from Xavier Mertens: IMAP2TheHive. This tool transforms the phishing e-mail into a security case in the case handling system TheHive. Cases are created according to a phishing handling playbook.
  4. The investigation is done with the playbook, and automated enrichment of observables is done via Cortex. This enrichment allows your helpdesk staff to check whether a specific URL in the phishing e-mail is found on a list of known malicious sites, or whether an attachment is known at VirusTotal.
  5. From TheHive a threat event is created in MISP. The threat data in MISP is then used to automatically update the block list for the proxy server.



Phish Alert Button (PAB) by KnowBe4 in Outlook

The first step is to create the button. You can use this video as a guide. I made these modifications:

  • 1 – Recipient / Helpdesk: the address used to forward the notices;
  • 2 – E-mail prefix: the prefix added to the forwarded e-mails;
  • 3 – Button text: the button text to display in Outlook.



Apart from these settings, there’s really nothing more that you need to configure. If you save your changes you can then download the button as an installable MSI file. After the installation of the MSI and restarting Outlook you’ll notice the button in the Outlook ribbon.



Users can then report a suspicious e-mail by clicking the button. They are prompted to confirm the action, shown a thank-you message afterwards, and the e-mail is then forwarded to the helpdesk and removed from their inbox.








Case creation in TheHive

The IMAP2TheHive tool from Xavier Mertens does the heavy lifting. This tool reads the IMAP folder that receives the phishing notices and then creates individual security cases in TheHive. I made these changes to the configuration file:

  • the IMAP server address, user and password
  • the URL and API for TheHive
  • the case configuration uses TLP:Amber (2) and a phishing template (playbook). One important note: the Phish Alert Button sends the phishing messages as attachments. To make sure the original phishing e-mail is included in the case, verify that the files directive in the script contains application/pdf,application/octet-stream.
  • The whitelist is extended with the URLs from w3.org and Microsoft.
[case]
tlp: 2
tags: email
template: Phishing
files: application/pdf,application/octet-stream

I run the IMAP2TheHive script via cron; it's executed every 15 minutes. Afterwards you'll see a new case created in TheHive. The new case contains a number of tasks, which have been defined in the phishing handling playbook.
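
For reference, a minimal crontab entry could look like the one below. The script location, interpreter and log file are assumptions; adapt them to your own installation.

*/15 * * * * cd /opt/imap2thehive && python3 imap2thehive.py >> /var/log/imap2thehive.log 2>&1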



One of the observables in the security case in TheHive contains the original e-mail. This allows analysts to review, for example, additional e-mail headers.



The import script automatically extracts useful indicators such as URLs found in the phishing e-mail. You can then use Cortex to check these (phishing?) URLs against Google Safe Browsing or to look up what is known about them at VirusTotal.



Export to MISP and create block lists

I described in another post how to create a block list from data in MISP; see Feed honeypot data to MISP for blocklist and RPZ creation. The integration of MISP with TheHive is explained in detail in a post from TheHive: TheHive, Cortex and MISP: How They All Fit Together.
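
As an illustration of the blocklist approach, a restSearch query such as the one below could produce a plain-text list of phishing domains that a proxy can consume. The tag, the type filter and the output format are assumptions; adapt them to your own classification scheme.

curl -k \
 -d '{"returnFormat":"text","type":"domain","tags":"rsit:fraud=\"phishing\"","enforceWarninglist":true}' \
 -H "Authorization: API-KEY" \
 -H "Accept: application/json" \
 -H "Content-type: application/json" \
 -X POST https://MISP-URL/attributes/restSearch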

Happy filtering!

How to Support Defenders with the Permissible Actions Protocol

PAP and Courses of Action

In a previous article I described how to defend with the courses of action matrix and indicator lifecycle management. The courses of action matrix describes passive and active actions that defenders can take, with varying impact on the attacker (or the intrusion). The Permissible Actions Protocol, or PAP, achieves something similar, but with a focus on what defenders are allowed to do.

What is PAP?

PAP is a protocol that describes how much we accept that an attacker can detect of the current analysis state or defensive actions. It is designed to indicate what the receiver may do with the information, and it achieves this by using a colour scheme.

Colour scheme

PAP bears a resemblance to TLP, the Traffic Light Protocol, because it makes use of the same colour scheme.

  • PAP:RED: Non-detectable actions only. Recipients may not use PAP:RED information on the network; only passive actions on logs that are not detectable from the outside.
  • PAP:AMBER: Recipients may use PAP:AMBER information for conducting online checks, like using services provided by third parties (e.g. VirusTotal), or to set up a monitoring honeypot.
  • PAP:GREEN: Active actions. Recipients may use PAP:GREEN information to ping the target, block incoming/outgoing traffic from/to the target or specifically configure honeypots to interact with the target.
  • PAP:WHITE: Open, no restrictions.

Note that contrary to TLP, where sources can specify additional sharing limits for TLP:AMBER, no such exceptions exist for PAP:AMBER.

For whom?

Foremost, PAP is designed to be used by analysts, operational staff or defenders.

Implementations

PAP is included as a MISP taxonomy and is supported by TheHive.
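
Because PAP is a regular MISP taxonomy, you can attach the PAP tags to events or attributes via the REST API once the taxonomy is enabled. A minimal sketch, assuming the tags/attachTagToObject endpoint and a placeholder UUID:

curl -k \
 -d '{"uuid":"<event-or-attribute-uuid>","tag":"PAP:GREEN"}' \
 -H "Authorization: API-KEY" \
 -H "Accept: application/json" \
 -H "Content-type: application/json" \
 -X POST https://MISP-URL/tags/attachTagToObject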

Practical use

Automation and human consumption

PAP is primarily designed for human consumption. Whereas the courses of action can be used to automate follow-up actions, for example to automatically create filter deny lists, PAP is rather meant to guide humans.



Passive and active

There are overlaps between PAP and the courses of action matrix.

  • The distinction between active and passive actions. If you want to prevent the attacker from noticing the analysis stage, limit the use of the threat data to passive actions under PAP:AMBER or PAP:RED. Whenever a change ('active') is expected, such as filtering traffic (CoA:Deny), use PAP:GREEN or PAP:WHITE.
  • Because PAP:WHITE does not add a lot of extra context to threat data, it can also be omitted.
  • For passive actions: if you're not allowed to use external systems, use PAP:RED; otherwise PAP:AMBER is fine.



Beware of enrichment

Some automated enrichment processes can hinder the proper use of PAP. For example, certain SIEMs or anti-virus consoles do host lookups (DNS) in the background. This almost always involves querying external DNS servers, which can alert adversaries of an ongoing investigation. Note that this is also the case for MISP if you enable the hover enrichment plugins.
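
If you want to avoid this kind of background lookup in MISP, you can switch the hover enrichment off. A sketch, assuming a default installation running as www-data and the Plugin.Enrichment_hover_enable setting:

sudo -u www-data /var/www/MISP/app/Console/cake Admin setSetting "Plugin.Enrichment_hover_enable" false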

Difference between MISP REST API search for events and attributes

MISP and REST API

MISP includes a powerful REST API that allows you to automate the dissemination of threat intelligence and threat data. If you aren’t familiar with the API you can explore its features (and the inline documentation) via Event Actions, REST client.




In the latest versions of MISP the REST API client supports autocompletion, which is useful if you want to search for events or attributes with specific tags. These tags are the vocabularies we use to classify events and attributes.

Events and attributes

One thing that is sometimes confusing is the difference in results between searching for events and searching for attributes. Hence this small overview.

Searching for events is done via the endpoint events/restSearch. If you search for events with tag XYZ, then:

  • If an event is tagged with XYZ, all the attributes of that event are returned;
  • If an attribute in an event is tagged with XYZ, then all the attributes of that event are returned. Even if the event itself is not tagged with XYZ.

Searching for attributes is done via the endpoint attributes/restSearch. If you search for attributes with tag XYZ, then:

  • If an attribute is tagged with XYZ, then only that attribute is returned;
  • If an event is tagged with XYZ, then all the attributes of that event are returned.



Apart from the tags, there are some other useful selection criteria that you can apply, such as

  • type and category: filter on specific MISP types and categories;
  • last: return only the results since a given time, for example the last day or week;
  • enforceWarninglist: exclude most likely false positives;
  • excludeDecayed: exclude the aged out indicators;
  • published: only include published data. Note that 'published' is only documented for events, but it also works on attributes.

For reference, if you prefer to try out these queries via the command line you can also use this curl command (which queries for the items classified as phishing):

curl -k \
 -d '{"returnFormat":"csv","tags":"rsit:fraud=\"phishing\""}' \
 -H "Authorization: API-KEY" \
 -H "Accept: application/json" \
 -H "Content-type: application/json" \
 -X POST https://MISP-URL/attributes/restSearch
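
For comparison, the equivalent query against the event endpoint, combined with a few of the selection criteria above, could look like this (the tag and the time window are just examples):

curl -k \
 -d '{"returnFormat":"json","tags":"rsit:fraud=\"phishing\"","last":"7d","published":true}' \
 -H "Authorization: API-KEY" \
 -H "Accept: application/json" \
 -H "Content-type: application/json" \
 -X POST https://MISP-URL/events/restSearch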

Additional information is available via the MISP automation documentation.

Mindmap Demystifying the “SVCHOST.EXE” Process and Its Command Line Options

Nasreddine Bencherchali published an article on Demystifying the “SVCHOST.EXE” Process and Its Command Line Options where he describes how the svchost.exe process works, the different command line flags it uses and which two registry keys are important. For my own notes I documented his article in a mindmap.



From threat intelligence to client scanning

IOC / APT scanner

An antivirus solution is an indispensable component in your defence arsenal, but it does not protect you against all threats. Complementary to an antivirus is Loki, an open-source IOC scanner that allows you to search for intrusion activity such as:

  • Network connections to C2 servers or malicious domains;
  • Presence of files related to APT activity;
  • Process anomalies such as malicious implants or patches in memory;
  • Credential dump activities;
  • Checks for Reginfs (Regin malware) or DoublePulsar.

The most common use case is a "Triage" or "APT Scan" scenario in which you scan all your machines to identify threats that haven't been detected by common antivirus solutions. What makes Loki different from common antivirus solutions is that you can provide it with your own set of detection rules. Why is this important? Organisations such as ICS CERT, Kaspersky ICS CERT and ESET publish detection rules on a regular basis. In addition to this, if you are part of a threat sharing group (e.g. see MISP communities) you receive frequent updates on the new threats targeting your sector. You can leverage this information with Loki to hunt for malicious activity. Loki supports:

  • File hashes (MD5, SHA1, SHA256);
  • Hard and soft indicator filenames based on regular expressions;
  • C2 IOCs in the form of IP addresses and domain names;
  • YARA rules.

YARA rules are a flexible way to detect various types of malicious activity by combining detection elements with some logic.
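
As a minimal illustration, the sketch below drops a hypothetical YARA rule into the Loki signature folder. The rule name, the string and the path are assumptions for demonstration purposes, not a real detection rule.

cat > signature-base/yara/custom_phish_lure.yar << 'EOF'
rule Custom_Phish_Lure_Document
{
    meta:
        description = "Example: document luring the user to enable macros"
    strings:
        $lure = "Enable editing and content to view this document" ascii wide nocase
    condition:
        $lure
}
EOF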

Use case

My use case was

  • Collect threat information for specific sectors;
  • Use this information to create detection rules for specific target environments;
  • Make these rules available for Loki and have Loki scan the systems;
  • Collect and process the logs and make the results accessible in a dashboard.

Get the threat data!

The easiest way to collect threat data is by setting up MISP, an Open Source Threat Intelligence Platform, and connecting your instance with those of CSIRTs, vendors and threat sharing groups. The next step is to extract the data from MISP and make it available in a format that can be used by Loki. The signatures of Loki are provided via a separate GitHub repository, which also contains tooling to fetch threat data from MISP. Unfortunately, the connector was outdated and didn't support MISP warning lists (lists of well-known indicators that can be associated with potential false positives, errors or mistakes) or selection of threat data based on a taxonomy (e.g. via MISP tagging). I submitted a pull request with a new version of get-misp-iocs.py.

Using the threat connector is easy:

/var/www/MISP/venv/bin/python3 ./get-misp-iocs.py -t '["PAP:GREEN", "Target:CIIP", "Target:ICS"]' -w 1

The above command collects threat data and writes it to the files iocs/misp-<item>-iocs.txt. The data is

  • classified with PAP:GREEN (Permissible Actions Protocol) and the target sectors CIIP or ICS;
  • filtered against the MISP warning lists (the -w 1 option) so that most likely false positives are removed.

When all is done, your set of custom detection rules will be available in the folder iocs.

Client scanning with Loki

Loki does not require an installation and can be run directly from the command line. You do need to take some preparatory actions.

  • Extract the latest release of Loki on a trusted system;
  • Run loki-upgrader;
  • Copy the IOCs collected in the previous step to the folder signature-base/iocs (a small refresh sketch is shown after this list);
  • Make the Loki folder available to the other systems. For isolated environments it's best to copy it to individual USB drives (one drive per system).
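
The refresh of the detection rules can be wrapped in a small script and scheduled via cron on the trusted system. This is a sketch; the path to the connector and to the Loki folder are assumptions.

#!/bin/bash
# Pull the latest IOCs from MISP and refresh the Loki signature set
cd /opt/misp-connector
/var/www/MISP/venv/bin/python3 ./get-misp-iocs.py -t '["PAP:GREEN", "Target:CIIP", "Target:ICS"]' -w 1
# Copy the generated IOC files into the Loki signature folder
cp iocs/misp-*-iocs.txt /opt/loki/signature-base/iocs/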

Loki requires administrator privileges to execute properly. A full system scan can take quite a while (on moderate systems, easily up to an hour).

loki.exe --reginfs --pesieveshellc

This will start Loki and have it scan for APT threat data based on your detection rules. In addition, it will also scan for the presence of the Regin file system and attempt to detect shellcode (memory regions that are not part of any module but contain executable code). Loki stores its output in a text log file in the directory where it was executed (you can change this with the option --logfolder). Loki also supports export to CSV (option --csv), but for the rest of the process I'll use the default log format.
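
If you run Loki from a USB drive and want the results to end up directly on that drive, you can combine the options mentioned above. The drive letter and folder name are assumptions:

loki.exe --reginfs --pesieveshellc --logfolder E:\loki-logs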

Loki logs dashboard

Once Loki has finished scanning and you have collected the individual log files, you can process and present them in an easily accessible format. I use the Elastic stack and rely on a previous project for analysing Linux logs: a setup of an Elastic DFIR cluster.

The cluster configuration contains an updated Logstash configuration file to process Loki log files and a patterns file for the timestamp. The docker-compose file is also changed to make the patterns directory available for the Logstash docker containers.

Importing the log files can be done by first copying the log files into the directory logs and then running Filebeat from the ~/elastic-dfir-cluster directory.

docker run -d --net=elastic-dfir-cluster_elastic --name=filebeat --volume="$(pwd)/filebeat/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:ro" --volume="$(pwd)/logs:/volume/:ro" docker.elastic.co/beats/filebeat:7.9.2 filebeat -e -strict.perms=false

The logs are available in Kibana once Logstash has finished processing them all. I created a number of dashboards and visualisations that you can use as inspiration to set up your own dashboards. One of the extra outcomes of this approach is that the processing of the Loki log files gives you a detailed view on

  • The running processes on each system;
  • The active and listening network connections on each system.

UPDATE 20201114: I added an export of the Kibana dashboard to the GitHub repository. You can import this dashboard directly to get visualisations such as the ones below.

curl -X POST "<hostname>:5601/api/kibana/dashboards/import" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d'
... paste here everything from loki-dashboard.json
'











EDR

Some of the features of Loki are also available via EDR tools to detect and respond to security incidents. However, not all organisations have an EDR tool, and such tools are certainly very rare in isolated (not connected to the Internet) environments.

A walkthrough of Watcher

One of the nice things of working in infosec is that there is always a new tool available to make your work easier. It can also cause a lot of frustration, as there is yet another new tool that you need to master. A tool I recently discovered is Watcher, a platform for discovering new cybersecurity threats targeting your organisation. Some of its key features include

  • Detect emerging trends via social networks and RSS feeds;
  • Monitor for information leaks, for example in Pastebin;
  • Monitor domains for changes;
  • Detect potentially malicious domain names targeting your organisation.

As I'm already using the AIL framework, I was interested to see whether Watcher is complementary to AIL or could replace it. My use case is pretty straightforward. For a number of organisations (in fact, keywords relevant for organisation asset descriptions) I want to receive notifications if:

  • A new certificate is issued;
  • The keyword turns up in a phishing database;
  • A potential information leak with reference to the keyword was found;
  • New domains resembling a keyword are registered (cybersquatting/typosquatting).

I created my own tooling (Digital Footprint Light) for the first two cases and I’m using AIL to cover the third. For the new domains I use dnstwist with manual post-processing. Based on its feature list, Watcher might be an answer to both my third and fourth case. Hence a walkthrough of Watcher.
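
For reference, the manual dnstwist step boils down to something like the command below; the flags and the domain are illustrative, so check the option names of your dnstwist version.

dnstwist --registered --format csv example.com > example.com-permutations.csv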

Getting started with Watcher

Watcher is available as a Docker instance, which makes it very easy to evaluate. Clone the GitHub repository and make sure you have Docker and Docker Compose installed. Before starting the Docker containers, edit the .env file. This file also allows you to configure the integration with TheHive and MISP.

TZ=timezone
DJANGO_SECRET_KEY=the secret key 
ALLOWED_HOST=set this to the FQDN of your server

After these changes you can start the containers with

docker-compose up    

If successful, create an admin user and populate your database, as described in the installation notes.
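
Since Watcher is a Django application, creating the admin user typically goes through the Django management command inside the container. This is a sketch; the service name is an assumption, so follow the installation notes for the exact steps.

docker-compose exec watcher python manage.py createsuperuser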

Watcher

Monitoring trends

The start page of Watcher presents you with a tag cloud of trending topics. Note that the tag cloud will only contain relevant information after running Watcher for a couple of hours.




You can then select one of the items in the tag cloud and get a list of articles related to the term, together with a trending graph of the term.


The tag cloud is based on the monitoring of a set of sources, defined in the administration interface.


The tag cloud and the user interface for trend monitoring are simple and easy to use, but the quality of the monitoring depends entirely on the sources that you provide. This feature can prove very valuable if you want to watch a limited set of websites for new topics, but it's perhaps less useful for more 'global' trend watching. Out of the box Watcher includes more than 100 sources.

Data leak alerts

The second option is monitoring for potential data (or information) leaks. You have to provide a number of keywords in which you’re interested and then Watcher will alert you if a hit is found. Apart from the web interface, Watcher can also send you e-mail notifications.



In the background Watcher uses a Docker image of Searx, a privacy-respecting, hackable metasearch engine. The straightforward user interface of Watcher allows you to easily add new terms, but it does not offer additional filtering (for example, excluding specific combinations). In order to use Watcher's Pastebin API feature, you need to subscribe to a Pastebin Pro account and whitelist the public IP of your Watcher instance.

Website monitoring

The third feature in Watcher is monitoring websites for changes. These changes can be on the website itself, in the IP address or in the e-mail records. When you add the website, you can give an RTIR ticket number as reference.




When Watcher has detected a change it supports the option to export the details of that change to either TheHive or MISP.


Dnstwist

The last feature is a graphical layer on top of Dnstwist. Dnstwist allows you to discover permutations of domain names, ideal for spotting cybersquatting. As before, Watcher makes it easy to add new domains to monitor.




The output of Watcher indicates the fuzzer, the discovered domain name and the domain to which it relates. A very nice feature is that you can immediately send the discovered domain to the list of websites to monitor. This makes for a great chain:

  1. Dnstwist discovers a new potential cybersquatting domain;
  2. You can then monitor the domain until it goes live;
  3. And then create a case in TheHive or an event in MISP.

Conclusion

During this walkthrough of Watcher, I found the user interface and the ease of use very good: it is very easy to add an item you'd like to have monitored. I very much like the web interface around the Dnstwist feature. In some cases having the domain monitored is a perfectly valid solution; in other cases I'd like to report it directly to TheHive/MISP (see: issue 16). The website monitoring is certainly useful but has room for improvement. For example, monitoring changes in the web server environment (see: issue 17) or in included libraries would be an interesting addition. This can partially be covered by integrating with services such as Urlscan.io. And finally, compared to the AIL framework, the leak detection options are limited. This shouldn't be a problem as such (also see AIL integration), but it does require running multiple tools.

I'm now primarily using the website monitoring and Dnstwist features of Watcher, pending further integration with the AIL framework.

Analyse Linux (syslog, auditd, …) logs with Elastic

Introduction

The Elastic stack is a great tool to quickly visualise large volumes of log files. In a previous post I described how to load stored Windows EVTX logs in Security Onion with the help of Winlogbeat. In this new post I describe something similar, with the goal of analysing Linux auditd logs with Elastic. Instead of using the Elastic stack of Security Onion I use an Elastic cluster via Docker, and instead of storing the Windows EVTX files, I now store traditional Linux log files such as syslog, cron and auditd in Elastic. For shipping the logs I'll be using Filebeat instead of Winlogbeat.

Setup the Elastic DFIR cluster

The first step is to deploy an Elastic cluster with Docker. I created a GitHub repository with all the necessary files: elastic-dfir-cluster. Make sure you have Docker and Docker Compose installed, clone the repository and you're ready to go.

git clone https://github.com/cudeso/elastic-dfir-cluster.git

The key configuration is in the Docker Compose file docker-compose.yml. This will start two Elasticsearch nodes, one Logstash node and one Kibana node. The data is stored as a local volume. If your machine is sufficiently powerful, you can add extra Elasticsearch nodes in the configuration.

First you have to init the cluster to remove any remaining old volumes and networks. When this is done, start the cluster. Both init and start are handled with bash scripts.

./init-elastic-dfir.sh
Removing kibana        ... done
Removing logstash      ... done
Removing elasticdfir02 ... done
Removing elasticdfir01 ... done
Removing network elastic_elastic
elastic_data01
elastic_data02
WARNING! This will remove all custom networks not used by at least one container.
Are you sure you want to continue? [y/N] y
WARNING! This will remove all local volumes not used by at least one container.
Are you sure you want to continue? [y/N] y

./start-elastic-dfir.sh
There is a screen on:
	215088.elastic_dfir	(10/24/20 16:45:46)	(Detached)
1 Socket in /run/screen/S-dfir.

The containers are started in a screen, to allow you to periodically review the status of the cluster. Just attach to the screen. Alternatively, you can also dump the container logs.

screen -r elastic_dfir

Filebeat

Filebeat is a log shipper, similar to Winlogbeat. Whereas Winlogbeat is specific to Windows event logs, Filebeat can ship almost any log you can think of. For this post, I use Filebeat via Docker. There are two things to do to get Filebeat working:

  • Point it to the correct log files;
  • Setup dashboards and visualisations in Kibana.

Configuration

All the required configuration is in filebeat/filebeat.docker.yml and uses Filebeat modules. This makes it easier to get the correct field mappings between the data in the log files and the storage in Elastic, and it doesn't require you to write your own Logstash log parser (grok!). The configuration enables the modules auditd and system; the iptables module is included as well, but disabled by default in the configuration below. The system module supports syslog and authentication files. In summary, this config allows you to process:

  • Auditd log files (Red Hat Linux etc.);
  • Syslog messages;
  • Cron jobs (via system);
  • Sudo activity (via system);
  • Users or groups added (via system);
  • Remote access via SSH (via system);
  • Authentication activity (via system);
  • Firewall events (via iptables).

filebeat.config.modules:
  enabled: true
  path: /modules.d/*.yml


filebeat.modules:
- module: auditd
  log:
    enabled: true
    var.paths: ["/volume/audit*"]
    exclude_files: ['\.gz$']    

- module: system
  syslog:
    enabled: true
    var.paths: ["/volume/syslog*", "/volume/messages*", "/volume/cron*"]
    exclude_files: ['\.gz$']
  auth:
    enabled: true
    var.paths: ["/volume/auth.log*", "/volume/secure*"]
    exclude_files: ['\.gz$']

- module: iptables
  log:
    enabled: false
    var.paths: ["/volume/iptables.log*"]
    var.input: "file"
    exclude_files: ['\.gz$']

output.elasticsearch:
  hosts: ["elasticdfir01:9200"]

setup.kibana:
  host: "kibana:5601"  

The only important thing to remember is that the log files you want to process need to be stored in the directory ‘logs’. This directory needs to exist in the folder where docker-compose was executed. In most circumstances this will be the folder that you used to clone the repository (elastic-dfir-cluster).

mkdir elastic-dfir-cluster/logs

Once you have the log files in the correct location, start the container with

docker run -d --net=elastic_elastic --name=filebeat --volume="$(pwd)/filebeat/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:ro" --volume="/home/dfir/elastic/logs/:/volume/:ro" docker.elastic.co/beats/filebeat:7.9.2 filebeat -e -strict.perms=false

Create the dashboards and visualisations

While the Elastic stack is processing the log files, you can continue with the second step: creating the dashboards and visualisations. You don't have to create them from scratch; Filebeat can do the bulk of the work for you.

docker exec -it filebeat sh -c "/usr/share/filebeat/filebeat setup -e"

Filebeat will greet you with a message when all is done.

2020-10-24T17:34:45.197Z	INFO	eslegclient/connection.go:99	elasticsearch url: http://elasticdfir01:9200
2020-10-24T17:34:45.199Z	INFO	[esclientleg]	eslegclient/connection.go:314	Attempting to connect to Elasticsearch version 7.9.2
2020-10-24T17:34:45.199Z	INFO	cfgfile/reload.go:262	Loading of config files completed.
2020-10-24T17:34:45.871Z	INFO	fileset/pipelines.go:139	Elasticsearch pipeline with ID 'filebeat-7.9.2-system-auth-pipeline' loaded
2020-10-24T17:34:46.014Z	INFO	fileset/pipelines.go:139	Elasticsearch pipeline with ID 'filebeat-7.9.2-system-syslog-pipeline' loaded
2020-10-24T17:34:46.428Z	INFO	fileset/pipelines.go:139	Elasticsearch pipeline with ID 'filebeat-7.9.2-auditd-log-pipeline' loaded

Note that in some circumstances, Filebeat will not immediately ingest the logs. If this is the case, you can restart the processing of the log files by restarting the container.

docker stop filebeat ; docker rm filebeat
docker run -d --net=elastic_elastic --name=filebeat --volume="$(pwd)/filebeat/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:ro" --volume="/home/dfir/elastic/logs/:/volume/:ro" docker.elastic.co/beats/filebeat:7.9.2 filebeat -e -strict.perms=false

Elastic dashboards and visualisations

If all worked out fine, you’ll now have a couple of dashboards available in Kibana.


Analyse auditd logs with Elastic

Auditd

Apart from the traditional Linux log files, my objective was to analyse auditd logs with Elastic. The module is already enabled in Filebeat; there are just a few additional steps to take.

Log process execution with auditd

Auditd provides very good visibility into Linux system activity, but in order to track the launch of every process you need to tweak the auditd config a bit. As described by secopsmonkey, add a file /etc/audit/rules.d/10-procmon.rules with these lines:

-a exit,always -F arch=b64 -S execve -k procmon
-a exit,always -F arch=b32 -S execve -k procmon

Then restart the auditd service. If you can't restart the auditd service, check that the systemd unit allows manual restarts. In /etc/systemd/system/multi-user.target.wants/auditd.service, RefuseManualStop should be commented out.

...
#RefuseManualStop=yes
...

systemctl restart auditd.service

Kibana dashboard

Now there's one final change you need to make. The Filebeat auditd module transforms event.action (in fact, the action logged by auditd) to lowercase. In /usr/share/filebeat/module/auditd/log/ingest/pipeline.yml you'll notice:

- lowercase:
    ignore_failure: true
    field: event.action 

Unfortunately the dashboard for auditd doesn't take this lowercase transform into account. But there's a fix. Open the visualisation Top Exec Commands [Filebeat Auditd] ECS. In the top bar you'll see a filter for EXECVE. Change this to "execve". Then click Save (and make sure that the checkbox next to 'Save as new visualization' is not enabled).



Note that if this change doesn’t work the first time, then first refresh the Kibana field list (Stack Management, Index Patterns) and try again.

Now head to the auditd dashboard. If all went well, all visualisations should contain data.



Process tracking

The Kibana dashboards give you an initial overview of what's happening on your system. But there's more. The tracking of process execution does not log the user who executed the process (execve); this information is stored in another log line (the syscall entry). To get this information you have to use the raw data, via Discover.

For example, in the initial dashboard there is a notice that wget is used.




Then in Kibana Discover, filter on the auditd module (via event.module) and then on wget. There will be two events: an execve (event.action) event, preceded by a syscall (event.action) event.



This entry contains the PID, parent PID, and the user ID.
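
If you prefer querying Elasticsearch directly instead of using Discover, a simple query string search against the Filebeat indices returns both entries. The index pattern and the host are assumptions based on the setup described earlier, and the command assumes it is run from a machine that can reach the elasticdfir01 node:

curl "http://elasticdfir01:9200/filebeat-*/_search?q=event.module:auditd%20AND%20wget&pretty"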


Visualise process tracking

Apart from the method above, there's another interesting option to track which processes have been executed. In the Discover section of Kibana, add process.executable as one of the columns to display. The field list on the left has a Visualize button, which gives you direct access to a visual representation of all the process executions.




Monitor the stack

Optionally you can also enable Stack Monitoring to monitor the statistics and health of the Elastic stack. Under the Management menu, choose Stack Monitoring and then choose self monitoring.



Incident Response: 5 Steps to Prevent False Positives

I published an article on the IBM Security Intelligence blog: Incident Response: 5 Steps to Prevent False Positives. The article describes what false positives look like and how they can interfere with your incident response and threat intelligence processes.

I propose 5 steps to prevent false positives, including

  • Prevent false positives from being added to threat intel reports
  • Notify analysts of the likelihood of false positives in threat intel reports
  • Report sightings, observables and false positives
  • Inform analysts about sightings
  • Disable the indicator to streamline cyber threat intel

MISP service monitoring with Cacti

I published a post on the misp-project website on MISP service monitoring with Cacti.

The post covers how to use Cacti to monitor the performance and health of a MISP server. This includes:

  • CPU, load average, memory usage and swap usage (based on default Cacti templates)
  • Interface statistics, logged in users and running processes (based on default Cacti templates)
  • MISP workers and job count
  • MISP event, attribute, user and organisation statistics
  • HTTP response time