What Detection Can Look Like: Open Source Options
Previously, we talked about Detection Engineering and how it has become a career path within Cybersecurity. What used to be a subset of the job within Detection and Response or IR is now a branch of its own. It’s crazy to think how much this field has changed over the years.
Detection Engineering is the practice of designing, implementing, and maintaining detection systems to identify and respond to security threats. It includes analyzing log data and developing automation for your alerting.
Why Detection Engineering Matters
Detection engineering ensures that threats are identified before they can cause significant damage.
It's all about creating a proactive defense rather than a reactive one. By setting up robust detection systems, you can catch threats in their early stages. Heck, even if its detected in the later stages of the attack lifecycle, this is better than most companies. The average dwell time was reported to be around 8 days.
This proactive stance is crucial in an era where cyber threats are constantly evolving and becoming more sophisticated.
Here are the main components when it comes to successful Detection Engineering.
Key Components of Detection Engineering
Data Collection: Gathering logs and telemetry from various sources is the first step. This could include logs from firewalls, endpoints, network devices, and applications. The goal is to capture as much relevant data as possible.
Normalization: Once data is collected, it should be standardized. Normalization ensures that data from different sources is formatted consistently, making it easier to analyze and correlate on common fields.
The idea is that you can then search across all of your data sources with those fields.
Correlation: Correlating data points across different sources helps in identifying patterns that might indicate a threat. This involves linking seemingly unrelated events to form a coherent picture of potential malicious activity (there’s a small sketch of this after the list).
Alerting: Setting up alerts for suspicious activities. These alerts should be configured to notify the appropriate teams when a potential threat is detected.
Response: Developing and implementing response strategies. This includes defining procedures and priorities for different types of alerts, for example a standard user-based alert vs. a high-severity PagerDuty page. The response is obviously different for these.
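To make the Normalization and Correlation pieces a little more concrete, here’s a minimal Python sketch. The source names, field mappings, and sample events are made up for illustration; in a real pipeline this work usually lives in Logstash, an ECS mapping, or your SIEM rather than hand-rolled code.

```python
# A toy pipeline: normalize events from two hypothetical sources into a
# common schema, then correlate them on a shared field.
# Source names, field mappings, and events are illustrative only.

# Map each source's field names onto the common schema.
FIELD_MAP = {
    "firewall": {"src": "src_ip", "dst": "dst_ip", "host": "hostname"},
    "endpoint": {"ip": "src_ip", "computer_name": "hostname", "user": "username"},
}

def normalize(source, event):
    """Rename source-specific fields to the common schema."""
    mapping = FIELD_MAP[source]
    return {mapping.get(field, field): value for field, value in event.items()}

def correlate(events, key="hostname"):
    """Group normalized events by a shared field; keep groups with multiple hits."""
    grouped = {}
    for event in events:
        grouped.setdefault(event.get(key), []).append(event)
    return {k: v for k, v in grouped.items() if k and len(v) > 1}

raw = [
    ("firewall", {"src": "10.0.0.5", "dst": "203.0.113.9", "host": "wkstn-12"}),
    ("endpoint", {"ip": "10.0.0.5", "computer_name": "wkstn-12", "user": "jdoe"}),
]

normalized = [normalize(source, event) for source, event in raw]
for host, related in correlate(normalized).items():
    # In a real pipeline, this is where an alert would be raised with a severity
    # attached, so the Response side knows how urgent it is.
    print(f"Correlated activity on {host}: {related}")
```

Even a toy version like this shows why Normalization comes before Correlation: the grouping only works because both sources end up speaking the same field names.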
In a nutshell, these are the building blocks of Detection Engineering. Yes, you will find someone saying there are more pieces that are crucial, but these get you most of the way there. (I don’t want to be dogmatic anyways 😉)
Now let’s take a look at what the above components are NOT.
Data Collection: Gathering logs from all sources and ingesting everything. A good way to blow up your license.
Normalization: Trying to normalize everything. It’s usually not feasible; focus on the crucial fields that tend to come up the most (hostname, username, src_ip, dst_ip, etc.).
Correlation: This one is simply not doing any kind of correlation. Think Alerts on an island.
Alerting: Alerting on all the things. Similar to the Data Collection piece, you don’t need to alert on everything. All you will do is inundate your team and build bad habits through alert fatigue.
The one caveat here is if you run heavy automation that does alert clustering and lets you close high volumes of alerts at once.
Response: Not having defined priorities and severities. If everything looks like a P1, nothing is a P1.
This is a good exercise for you to go through internally, to see if you’re focusing on the right things. There’s always room to improve.
Hands-On Practice
To get some hands-on experience, start by setting up a lab.
Here is one option.
ELK Stack: Use open-source tools like the ELK Stack (Elasticsearch, Logstash, Kibana) for log management.
Elasticsearch is the search and analytics engine, Logstash processes the logs, and Kibana is the visualization tool you, as the user, search with. To round out the stack, Beats is used as the agent that ships the logs to Elasticsearch.
Simulate Attacks: Use tools like Atomic Red Team to simulate attacks. This helps you understand how your detection systems match up by triggering test events and identifying areas for improvement.
Refine Detection Rules: Continuously refine your detection rules based on the outcomes of your simulations. This iterative process is essential for maintaining an effective detection system (there’s a small sketch below of what a rule can look like as a query).
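To give a feel for what a detection rule can look like against that stack, here’s a small sketch using the Elasticsearch Python client. The index pattern, ECS-style field names, and the “encoded PowerShell” logic are assumptions for illustration, not a rule to ship as-is.

```python
from elasticsearch import Elasticsearch

# Sketch of a detection rule expressed as an Elasticsearch query, using the
# 8.x Python client. Assumes a lab instance with security turned off; the
# index pattern and ECS-style field names assume a Beats setup.
es = Elasticsearch("http://localhost:9200")

# Hypothetical rule: encoded PowerShell command lines in the last 15 minutes.
rule_query = {
    "bool": {
        "must": [
            {"match": {"process.name": "powershell.exe"}},
            {"wildcard": {"process.command_line": "*-enc*"}},
        ],
        "filter": [{"range": {"@timestamp": {"gte": "now-15m"}}}],
    }
}

resp = es.search(index="winlogbeat-*", query=rule_query, size=10)
for hit in resp["hits"]["hits"]:
    doc = hit["_source"]
    print(doc.get("host", {}).get("name"), "->", doc.get("process", {}).get("command_line"))
```

Running your simulations, checking whether queries like this actually fire, and then tightening the logic is what the refinement loop looks like in practice.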
You could configure alerting with Kibana. However, I’ve found Kibana’s bigger strength to be dashboards for visualizing your alerts. The platform is good for visualizations and stats around your alerting, but it has its limitations when it comes to investigations and actively searching your data.
For another open source option, you could give ElastAlert a try for alerting. ElastAlert was originally created at Yelp.
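ElastAlert rules themselves are written as YAML config, but the core idea behind its frequency-style rules boils down to something like the Python sketch below: count matches for a query over a time window and fire when you cross a threshold. The index pattern, query, and numbers here are all placeholder assumptions.

```python
import time
from elasticsearch import Elasticsearch

# Rough idea of a frequency-based alert: count matches for a query over a
# window and alert past a threshold. ElastAlert itself is configured in YAML;
# this loop only illustrates the concept.
es = Elasticsearch("http://localhost:9200")

QUERY = {"match": {"event.action": "authentication_failure"}}  # placeholder
THRESHOLD = 20      # alert if we see more than 20 hits...
WINDOW = "now-5m"   # ...within the last five minutes

while True:
    resp = es.count(
        index="filebeat-*",
        query={
            "bool": {
                "must": [QUERY],
                "filter": [{"range": {"@timestamp": {"gte": WINDOW}}}],
            }
        },
    )
    if resp["count"] > THRESHOLD:
        # Swap the print for Slack, email, PagerDuty, etc.
        print(f"ALERT: {resp['count']} failed logins in the last 5 minutes")
    time.sleep(60)
```

The point of reaching for ElastAlert over a loop like this is that it handles the scheduling, realerting, and notification plumbing for you.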
For more ideas on setting up Detection, see this post.
Wrapping Up
In the end, this is a process. Detection gives teams eyes on the valuable data they protect, and it takes continuous iteration to do it successfully.
By continuously refining your approach, you can build a solid foundation in Detection Engineering.
See you in the next one.