Asymmetry of Cyber Security And The Daunting Task Of Defense

Article by Stuart McIntosh

Villainize cyber criminals all you want.  The truth is cyber-attacks happen for one basic reason – economics.

 

Attackers are human – just like you and me.  They are motivated by the same things we all are – money and power – as guided by emotional and rational drivers.

 

I believe that one of the major drivers making cyber-crime so attractive is the asymmetry that exists in our domain.

 

What do I mean by asymmetry?  Simply stated, it is the difference between the effort it takes to mount an attack and the effort it takes to defend against one.  Compound this across our large & complex IT environments, and this asymmetry tips immensely in favor of the attackers.

 

Add to this that there is almost zero downside for attackers to launch a failed attack.  Even if we could quickly attribute and locate the source of the attacks, rarely are the bad actors prosecuted or fined.  Many of them have set up “legitimate” business operations in non-extradition countries or sovereign enemies of the US.

 

For me, this puts cyber attackers in a new light.  Applying a first principles approach leads me to break down attacks into their core elements:

 

  • Why are they attacking?  What is their primary motivation?  This helps you “get inside their heads” somewhat, to understand what makes them tick – but the answers to these questions are realities that you probably can’t change.

  • What are they attacking specifically?  What are they targeting?  We can potentially influence the systems and data that attackers target, but it does require definition and prioritization on our part, then implementing borders and defenses in the order of the business impact priorities (the “crown jewels” approach).

  • How do they attack?  I do not mean specifics here, but pattern recognition of the types of elements involved – the “hit list” of where attackers strike or succeed most often.  Identifying these gives us another priority list, but this one tells us where to focus our defensive prevention & detection.

 

There is significant asymmetry in cybersecurity today – arguably more than in any other industry.  Worse still is the high rate of success attackers eventually achieve.  Maybe they have 20,000 failed attacks, but all they need is one to work in order to extract a financial or strategic return.

 

So the true goal is to reduce the significant asymmetry we face; anything less will only produce temporary gains.  Economically put – we need to raise the cost of attacks and lower the cost to defend.  But how?

 

In situations like this, I like to look at other examples of asymmetric attackers vs. defenders.  What were effective approaches to mitigate or eliminate the systematic imbalance?

 

A relatively recent example of this is from the battle fronts of Iraq and Afghanistan.  In both engagements, the allied forces faced an opposing force that would lie in wait and only strike when they could maximize damage and fatalities – typically in locations or situations where defenses were at the bare minimum.


One of the most successful approaches against this style of warfare was called Seize-Clear-Build-Hold (https://manassaloi.com/2020/03/16/seize-clear-hold-build.html).  The essence of the approach was to penetrate the most dangerous, insurgent-controlled neighborhoods in a city and establish a foothold; “Seize”.  The second step was to branch out from that foothold – going house to house if necessary – to root out the sleeping enemies; “Clear”.  With the danger cleared from the “controlled” area, the operation extended to the residents living in those areas – thousands of civilians just trying to survive in a brutal time of war – now with at least some protection offered by the allied forces; “Build”.  Finally, as the foothold expanded and the military gained the trust of the residents on “their side”, there was a good chance of longer-term stability; “Hold”.

How can we use this as inspiration for bolstering our defense in cyber security?  I see the Seize-Clear-Build-Hold phases relating directly to the activities we are already doing.  We just haven’t aligned them in terms of a repeatable battle plan.  I think we need to start by believing ALL of the areas of our company need to be re-taken.  While it sounds daunting, it allows you to reestablish your knowledge and test assumptions about your organization's security.  Let’s relate the phases to the security space:

Seize - We identify an area we will be focusing on to extend the boundary of safety and security.  Then we organize the data – patches, assets, and users – through logs as well as the knowledge experts of the area.  The first move into the area is to find anomalies or malicious behavior.

Clear - Once anomalies and malicious activities begin to be processed and cleared, we need to cast wider nets.  Focus our security tooling on all assets: proper inventory, logging, secured privileged accounts, & full vulnerability scans.

Build - With control taken of the environment, we can begin incorporating proper and appropriate security into it.  This includes ports closed, systems patched, endpoint and network security controls deployed, and systems hardened.  We also work with the knowledge experts on the systems and processes to incorporate security into the lifecycle of their environment.  This can be software release management, code scanning, or audits.  This is a cooperative exercise leveraging the strengths of all involved.

Hold - We tend to have a difficult time holding onto advantages in technology for very long.  Staff turnover, competing projects, budget reductions, and the speed of technology adoption all contribute to this.  It is important to outline the activities necessary for holding the defense of our assets, including the evolution of detections, preventative controls, and addressing the vulnerability and testing gaps identified.  One method of aiding this is to develop central talent and procedures, as well as continually incorporating additional environments into a central strategy.  A good example of this is onboarding a newly acquired company.

Most importantly, this is not an academic approach. We have seen this process work inside large enterprises.  Executed as iterative feedback loops, these repeated activities gain their own momentum and gravity. Talent flourishes, teams communicate effectively, and understanding of your environment grows.

We cannot change why attackers attack.  And they will most certainly attack.  What we can do is be ready for them, fight them when the attack comes, and make their jobs hard enough that they decide it’s not worth it to keep fighting us and move on to the next town.

True Metrics for Cybersecurity Effectiveness

Article by Will Robus

Metrics provided by Outpost RBA

Almost every company we work with struggles with cybersecurity metrics. Personally, I am not surprised. At the end of the day effective security means you didn’t get breached. From that perspective, the thing we are trying to measure is essentially the absence of “bad”. 

“How do our security metrics look this month?”

“They look great – Zero breaches.” 

“Great job – keep spending millions of dollars to keep it there!”

I know it sounds absurd, but this vignette isn’t terribly far from the truth. The reality is cybersecurity is much more complex than a few numbers can describe. Compounding this complexity is how interrelated everything is. 

In previous articles I’ve highlighted that some of these relationships create competing incentives, which in turn create friction. Detection engineers want to increase visibility and add new detections. The existing detections are already generating a high volume of alerts that SOCs struggle to keep up with. Too often a new detection is released into production and immediately floods the SOC with a volume of new alerts that end up being false positives.

The natural result is competing incentives between the two groups – increased visibility for one group at the cost of reduced efficiency for another.

These competing incentives can be extrapolated to underlying security metrics themselves. A common frustration we see in Fortune 500 SOCs is a focus on just one or two metrics that are driven by a myriad of contributing factors. MTTR may go up 10% one month, which by itself is bad. However, if that increase is qualified with the fact that a new security data source was added and new detections went into production, the MTTR increase may indicate the SOC’s successful absorption of the increased visibility.
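As a toy sketch of that qualification, assume we snapshot MTTR alongside the number of production detections each month. The field names and the comparison rule below are invented for illustration, not a standard Splunk or RBA schema:

```python
# Sketch: interpret an MTTR change in light of detection (visibility) growth.
# All field names and thresholds here are illustrative assumptions.

def contextualize_mttr(prev: dict, curr: dict) -> str:
    """Compare month-over-month MTTR against detection growth."""
    mttr_change = (curr["mttr_hours"] - prev["mttr_hours"]) / prev["mttr_hours"]
    detection_growth = (curr["detections"] - prev["detections"]) / prev["detections"]
    if mttr_change <= 0:
        return "MTTR improved"
    if detection_growth >= mttr_change:
        return "MTTR rose, but visibility grew at least as fast: likely healthy absorption"
    return "MTTR rose faster than visibility: investigate SOC capacity"

jan = {"mttr_hours": 10.0, "detections": 100}
feb = {"mttr_hours": 11.0, "detections": 120}  # +10% MTTR, +20% detections
print(contextualize_mttr(jan, feb))
```

The point is not the specific rule, but that the headline number only becomes meaningful when paired with its contributing factors.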

Risk Based Alerting not only changes the game for detection engineering and incident response workflows in a large enterprise, it opens up new potential for how we create and track associated security metrics. Based on many conversations with SOCs and security managers, we would like to propose a new set of metrics, enabled by Splunk® with Outpost RBA, that tell a complete story about just how effective your security program is and how quickly it is improving.



Category #1 – Visibility Metrics

Visibility is the blessing and the curse of security analytics and alerting. On one hand you have ALL the data in system and traffic logs, and Splunk allows you to ingest and search that data in near-real time. 

The curse is that there is A LOT of data, and most of it is “business as usual” activity.  It is the goal and the challenge of cybersecurity to identify the “bad” amongst all of this “good”.

Security Data

Data is the foundation. If you don’t have the data, you are blind to whatever may be happening. This is our first set of metrics:

Proposed Metrics:

Data sources – How many do we have, and is the number growing over time?

Data volume – What is the volume of activity we are monitoring and how is it changing over time?
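As a sketch (not an actual Splunk search – in practice you might derive these counts from your indexes and sourcetypes), here is how the two visibility metrics could be trended from simple monthly snapshots. The snapshot fields are illustrative assumptions:

```python
# Sketch: trend the two visibility metrics from monthly snapshots.
# The {month, sources, volume_gb} structure is a hand-built example.

snapshots = [
    {"month": "2023-01", "sources": 42, "volume_gb": 800},
    {"month": "2023-02", "sources": 45, "volume_gb": 950},
]

def trend(snaps, key):
    """Return absolute and percentage change between first and last snapshot."""
    first, last = snaps[0][key], snaps[-1][key]
    return last - first, round(100 * (last - first) / first, 1)

delta, pct = trend(snapshots, "sources")
print(f"Data sources: {delta:+d} ({pct:+.1f}%)")
delta, pct = trend(snapshots, "volume_gb")
print(f"Data volume: {delta:+d} GB ({pct:+.1f}%)")
```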



Security Detections

Security detections are what we use to find things in these terabytes a day of log data. How we use these large volumes of data is our second set of metrics:

Proposed Metrics:

Detections – How many do we have, and are they growing? (We call these risk rules in RBA.)

Data Diversity – What categories of security data are we searching? (e.g. Endpoint, Email, Network)

MITRE Coverage – How do these detections map to MITRE ATT&CK and is that coverage growing?

Threat Intelligence – What volume of Threat Intelligence are we collecting and matching in our security detections and is it changing?

Overall – these metrics give us context around how our security visibility is changing.  An increase in any of these numbers is, I would argue, positive: simply by increasing visibility, you are decreasing risk. However, an increase in visibility has a potential 2nd order effect of inflating incident response metrics. More visibility, more things to respond to, more work for IR.

A decrease in any of these metrics is, on the surface, bad. Especially if you simply stop seeing data that you were searching before. However, some of the decreases could easily be explained by the 2nd set of metrics we have defined.



Category #2 – Environment Metrics

IT & security data comes from somewhere – your enterprise’s environment. And as Captain Obvious would point out here – large environments are very dynamic.  These dynamics directly contribute to the volumes of security data that we are measuring in the visibility metrics. Naturally for added context, we need to measure these as well.



IT Environment Changes

Users – how many and are they growing over time?

Business units – where do these users come from? (e.g. adding a newly acquired company)

Infrastructure – what infrastructure changes have taken place since the last measurement period? (e.g. did we add a new security tool? Go live with a significant cloud migration?)

These are a few direct root causes of changes in visibility metrics. While some of them may be qualitative, they can at least be observed as influencers and help make sense of more dramatic changes in visibility, for the positive or the negative.



Category #3 – Incident Response Metrics

SOC metrics seem to be the most popular reports up to the CISO and the board. I’m not going to argue against them, but I do believe these are the easiest to calculate, which is why they’ve become mainstays. Herein lies a source of frustration for incident response directors and managers: these metrics taken at face value don’t tell the whole story. In addition to MTTD, MTTR, and any other performance metrics, we propose adding top-of-funnel metrics to quantify the true production of your incident response program.



Detection Data/IR Metrics

Risk Events – the volume of risk events produced by your security detections – in essence, the output of all of your detections: a raw volume of “potential IOCs”.

Notables – the volume of security alerts generated by the analysis of the risk events, which an analyst needs to review and make a decision on.

Historically, we have seen these metrics have a natural ebb & flow, based on a number of contributing factors (many of which we are measuring in earlier categories). Simply put, user activity varies from day to day, week to week, as does attacker activity. One example is an observable increase in phishing campaigns on Tuesdays and Wednesdays between 8-10am.

Time and resources are limited. When you report the productivity metrics of an incident response program, you also need to include contextual volume metrics to understand if the overall performance is improving, staying the same, or decreasing.



Bonus Category – Outcome Metrics

The metrics presented so far are meant to be very straightforward and easily measured or reported. (We are happy to show you our Beta dashboard in Splunk – just ask). However, they all fail to capture the most important measurement of all – and that is EFFECTIVENESS.

How effective is our security and is it improving?

Great question. Trust that Highland Defense is committed to help your company answer this question. In the meantime, let me introduce you to some of our thoughts.

If we want to measure security effectiveness, then we need to measure the outcomes of security operations activities and investments. Fortunately, the SIEM with Outpost RBA has the foundational elements to do so. Let’s use this simple rubric to frame the universe of security alert outcomes.



Alert Outcomes and Desired IR

Note: These outcomes are represented in hindsight – it’s very difficult to classify each alert with this information at the time it is generated.

We already measure speed in relation to alerts or notables. Notables are closed with a “status” field, so we have some indication of their outcome. If we could add “accuracy” and “impact” to these measurements, we could EASILY tell if a security program is becoming more effective. Couple these metrics with the visibility metrics above and imagine the confidence your CISO could present to the board in demonstrating the robustness of your security posture quarter after quarter.



New Security Metrics Made Possible by Outpost RBA

We hope that reading this has given you a fresh perspective on creating and presenting security metrics that tell the whole story. We see security teams kicking butt every day in some of the world’s largest companies. The challenge is capturing and reporting the magnitude of their contributions, and putting numbers to just how much risk is being reduced or eliminated by the work of detection engineers and security analysts.

At Highland Defense we are working to solve this challenge and believe the approach summarized in this article is a step on the path forward. Please reach out to us to discuss in depth or see some of the dashboards and reports we’ve built to tell these important stories. The stories of your security teams working hard in the trenches to level up the security of your organization every day. Let’s put some metrics behind them and let the CISO beam with pride at the work you are doing.

Finally, here is just a sample of the metrics we can pull automatically from your Splunk data.

Example Metrics in Splunk

The CISO's 5 Agenda Killers

& How to Beat Them - By Will Robus, CEO of Highland Defense

Admittedly I’m new to the cybersecurity industry. I have spent my career in advanced technology development and deployment, in public sector as well as private, so tech is very much a part of my professional DNA. Maybe almost half.

The other half of my DNA is solving problems. Building and delivering amidst the largest of obstacles. I’ve even tried healthcare for a while (but unfortunately that industry is a stack of problems that I’m not sure anyone can solve).  One of the things that attracted me to cybersecurity, and a big reason we started Highland Defense, is the dynamic challenge that all companies face in trying to get ahead of would-be attackers to keep themselves safe from breach or loss.

I’ve spent a lot of time over the last two years trying to understand these challenges from a leadership perspective. To “get in the heads” of CISOs & understand how they see the world from their screen, what their goals are, and what their agenda is for executing what they believe will be the difference maker for their organizations. 

I’ve gathered data from as many CISOs as I could. I’ve listened to many podcasts and interviews, and these conversations are very insightful; I’m usually left impressed at the savvy and fundamental approaches detailed in them. Articles and interviews are helpful too. (I will confess that I get most excited when I read a quote from a CISO that aligns with how we are attacking the challenges of cybersecurity at Highland Defense.)  Finally, one-on-one conversations with CISOs are the most informative; however, even when it’s not a sales pitch, I’m still a security vendor, and I imagine they edit their responses accordingly.

What follows in this article is my current observation of the goals and challenges CISOs face daily, based on the last two years of my subjective research.


A CISO's #1 Concern – Close the Gap

If I summed up what a CISO’s job description is in one line it would be this:

A CISO is responsible for closing the gap between existing security risks at an organization, and that organization’s ability to mitigate or eliminate those risks. 

A CISO has limited resources to accomplish this; limited time, limited money, limited talent. A CISO has an agenda, a battle plan, a path that organizes technology, people, and processes via objectives and budgets. An agenda of priorities they communicate up to the CEO & the board, then delegate across their direct reports & their teams.

Sounds simple. Make a plan. Get buy-in for that plan. Fund & staff the plan. So why aren’t all CISOs crushing it? Why is the average tenure of a CISO at a Fortune 500 only about two years?

Simple? Yes. 

Easy? Rarely.


A CISO's #1 Problem – Agenda Killers

As every leader knows, things rarely work out how you expect them to. As the proverb reminds us: “The best laid plans of mice & men often go awry.”

Salespeople love to ask, and CISOs hate to answer, the question “What do you need? What are your biggest problems?” I’ve received all kinds of answers to this question – from the frustration of the day to “I want you to tell me the biggest problem that I have that I don’t know about yet!”

The truth is – CISOs are very capable leaders with strong teams of strategic, managerial, and technical talent. However, the “nature of the beast” is the everyday realities of internal and external change and complexity. We’ve summarized these CISO agenda killers into 5 distinct forces.


Agenda Killer #1 – Business Transformation

Wouldn’t it be much easier to complete all of your security projects and roll-outs if the business just stood still for a little while? Of course it would, but that’s not reality. The meta stakeholder of a security organization is the business itself, and the business only grows if it is moving ahead. This means there will be constant change; new business lines, deprecation of old business lines, mergers, acquisitions, joint ventures, key supplier on-boarding.  Just getting the business to consider security in these types of business transformations can be a challenge. 

Delivering security in an ever changing business context is a constant battle.


Agenda Killer #2 – Technology Transformation

Coming right alongside business transformation is technology transformation. All of those newly acquired operating companies have their own tech stacks and infrastructure that instantly become the new problem of the CISO. There is also the nagging issue of tech debt – legacy systems, aging infrastructure long past its useful life, or simply bad decisions made by previous leaders that current leaders were forced to inherit.  I’d also like to introduce the idea of entropy here. Because of technology scope and infrastructure changes, CISOs cannot afford to keep doing what they are doing today and expect to be secure tomorrow.

There is a constant decay of security as time progresses – the natural trade-off of successful business and technology transformation.


Agenda Killer #3 – Attacker Evolution

On top of the complexity and change introduced internally at an organization, there are the constantly evolving threats of external attackers, as well as insiders, to manage. APTs and hacker groups are continually refining their approaches. From a pure economic standpoint, the cost of launching or automating a new attack, even with age-old techniques, is very, very small vs. the costs a company incurs to maintain adequate defenses (see entropy above).

From a speed and volume standpoint – it’s hard for security leaders to not feel outmanned and outgunned at times.


Agenda Killer #4 – Talent Market

We are all aware of the tight labor market for cybersecurity talent. Technical talent is difficult to find in any industry, as well as expensive to acquire and retain.  Cybersecurity is especially challenging, I believe, because it requires a mashup of technical skills not commonly found in other traditional IT roles. A good cybersecurity professional needs to understand all aspects of the IT environment – endpoints, network traffic, web traffic, email domains, IAM, as well as database architecture and application layer security. This is hard to “teach”, as it usually comes from on-the-job experience.


Our company is focused on leveraging Splunk to deliver world class security, and our CTO Stuart McIntosh frequently tells me “It’s easy to teach someone Splunk, but it’s a lot harder to teach someone security.”


Agenda Killer #5 – Security Solutions Transformation

Finally, as if the first four agenda killers weren’t enough, we have a constant onslaught of security solutions that will “solve the problems”. Vendors hawking thousands of solutions, accelerating activity in startup investment and acquisitions, not to mention the churn of “buzzwords of the year” promising a new silver bullet for all that ails a CISO's security program. There is a lot to keep up on, which is followed by the constant fear of investing in the wrong solution for tomorrow (and creating your own technical debt in the future).

A CISO could spend 100% of their time just keeping up on the latest and greatest solutions for what ails their security program.


What we can do about it

I opened the article with my admission of cybersecurity naivety. With that naivety, however, comes the unique value of a fresh perspective. A new set of eyes to see for the first time what others have been looking at for a long time.

Naturally, we at Highland Defense have some ideas on how to overcome these obstacles.

We design these considerations into our products and services, as well as work closely with our customers coaching them to get better at overcoming these challenges. This enables them to push the security agendas for their security organization and their CISOs forward.

We believe in a holistic approach that requires a sound technical foundation (ideally grounded in first principles) and also requires designs to mitigate or eliminate the agenda killers above.


The Highland Defense Approach

In summary, we believe the way to eliminate these CISO agenda killers is by deploying three distinct and complementary approaches.

1)   Infrastructure - Aggregate and normalize ALL of your data

2)   Security Solutions – Centralize security tooling and related data

3)   Process – Integrate security teams and their processes w/ the data and solutions from step 1 & 2

In a previous article about the future of threat detection I outlined three keys to achieving exponential returns around threat detection and alerting. It turns out that logic can apply here as well:

1)   Infrastructure – Blow it up (Aggregate and normalize ALL of your data)

2)   Security Solutions – Stitch it up (Centralize and integrate security tooling and related data)

3)   Process – Roll it up (Integrate security teams and their processes w/ the data and solutions from step 1 & 2)

I’ll detail these in a future article, but for now, here is a preview of how these approaches mitigate the 5 agenda killers we identified earlier.


The Biggest Problem That CISOs May Not Realize

In our experience working with Fortune 500 companies, the biggest issue we see across organizations is team alignment. This isn’t to say there are bad leaders or teams; it’s actually just a fallout of the 5 agenda killers above – the friction and overhead created by constant change and growing complexity. The same can be said for security technology and operational processes. And when things are out of sync, sometimes because of competing priorities, everyone suffers, costs go up, and we see the agenda killers flex their power to stall or kill progress.

So how do these three approaches resolve this? 

1)   Data aggregation and normalization – tech is tech, data is data, and by classifying each event as “what type of data is this” and “who is this really”, we essentially commoditize the source of the data, making the specific technology unimportant. For example – firewall data is firewall data. If we can read Cisco firewall data the same as Palo Alto data, does it matter which one we have in the environment?  Deploy this approach as a framework and we have endless scalability. New acquisition? No problem – drop their data into the framework and let’s start securing them.

This approach takes a big bite out of the business transformation and technology transformation agenda killers.
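As an illustration, here is a minimal sketch of that normalization idea. The raw field names are simplified stand-ins, not the actual Cisco or Palo Alto log formats; a production deployment would lean on something like Splunk's Common Information Model:

```python
# Sketch: map vendor-specific firewall fields onto one common schema,
# so downstream detections never care which vendor produced the event.

def normalize_firewall(event: dict) -> dict:
    """Commoditize firewall data into a vendor-agnostic record."""
    if event["vendor"] == "cisco":
        return {"src": event["src_ip"], "dest": event["dst_ip"],
                "action": event["action"].lower()}
    if event["vendor"] == "palo_alto":
        return {"src": event["source_address"], "dest": event["destination_address"],
                "action": "blocked" if event["act"] == "deny" else "allowed"}
    raise ValueError(f"unknown vendor: {event['vendor']}")

cisco = {"vendor": "cisco", "src_ip": "10.0.0.5", "dst_ip": "8.8.8.8",
         "action": "Blocked"}
palo = {"vendor": "palo_alto", "source_address": "10.0.0.5",
        "destination_address": "8.8.8.8", "act": "deny"}

# The same activity, from either vendor, yields an identical record:
assert normalize_firewall(cisco) == normalize_firewall(palo)
```

Onboarding a new acquisition then reduces to writing one more mapping, not re-engineering the detections.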

2)   Centralized and integrated security tooling & data – Attackers win by gaining a foothold and exploiting it, regardless of the tactic or technique. By centralizing your security tools you can see everything through a single lens and, more importantly, see all of the ripples as they cross IT stacks and security solutions. You own your entire environment – the attackers do not – so use your home field advantage. By pushing this further and integrating those tools, speed can be on your side. Fewer clicks, less “finding”, and more doing.

This approach, coupled with the previous one, commoditizes security tooling and data as well, greatly reducing the technology transformation and attacker evolution agenda killers.

3)   Process integration with security teams – We now have commoditized infrastructure data and commoditized security data in a central place. By “stitching it all together” with integrated processes (a unified framework with built-in lifecycle management), all of a sudden we have alignment between technology & teams.

The result of a successful framework is instant bandwidth for your security talent, aligned with clear ways to contribute and defined paths to execute the CISO's agenda.

This rounds out to address the final agenda killers: talent market and security solutions transformation.


To Create a Solution, we must first define the problem

The thoughts in this article are simply my current views based on the last two years of subjective research. And again, I’m a cybersecurity newbie. I completely expect these views to evolve over time, if not change in their entirety.

Please give me the gift of your thoughts and feedback based on your knowledge & experience. I welcome comments of validation and rebuke.   Our core belief is that we will only win the war of securing every company by working together for the benefit of all.

Cross posted on Linked-In: https://www.linkedin.com/pulse/cisos-5-agenda-killers-outpost-security

How to Make the Future of Security Alerting a Reality Today

Risk Based Alerting (RBA) is Here to Stay

The security talks at Splunk®’s annual .conf this year had a lot to say about Risk Based Alerting (RBA).  My co-founder gave the very first talk on RBA at .conf 2018.  Since then it has gained viral popularity, and Splunk formally included some RBA features in Enterprise Security’s fall 2020 release.

 

The focus on shifting to RBA was apparent in the .conf talks this year, as well as in the product architecture itself.  It’s clear Splunk is making a long investment in RBA for security alerting.

 

And we believe Splunk should.  As an RBA pioneer, we have seen it make huge impacts on the security operations of some of the world’s largest companies.  Consistently we see performance metrics like alert volume, true positive percentages, and mean time to resolution improve exponentially just weeks after going fully live with RBA in a SOC.  Perhaps even more significant is the alignment we see across the security teams; from Splunk admins, to the threat intel teams, to the SOC analysts themselves.

 

The operational gains and the team alignment combine to change how security teams do their daily work.  This adds up to months and quarters of strong metrics to present to the board, as well as increasing cyber resiliency across the entire company.

 

We’ve released a technical guide entitled “Getting Started with RBA in Splunk® Enterprise Security” https://outpost-security.com/rba-getting-started . Following the guide will deliver next generation alerting for most small to mid-sized companies. But for larger enterprises, the complexities and size of large IT environments, as well as the distribution and diversity of their security teams, will introduce some challenges. 

 

Unfortunately, in some companies, we’ve seen these challenges significantly stall progress and eventually kill the hopes of making RBA successful.

 

The purpose of this article is to take a step back and outline the three main principles that we are leveraging that make RBA the future of security alerting.  These are first principles that when executed consistently allow you to present the most relevant information to a security analyst, enabling them to make the best decisions in the least amount of time.

 

These principles are – Expand, Relate, Enrich.  I’ll discuss each one in detail.  To be honest, since we put these principles to paper, I’ve had a hard time remembering them.  They can be stated another way – and we’ll use those names to start:

Blow-it up – Stitch-it up – Roll-it up

That’s right – the three steps in implementing the future of security alerting are: Blow-it up – Stitch-it up – Roll-it up.

Let’s get started…

Step 1 – Blow-it Up – See All the Events

 

One of the most interesting aspects of cybersecurity to me is that WE HAVE ALL THE DATA.  We can see EVERYTHING.  Everything is logged across the entirety of the IT stacks – and Splunk makes it possible to search and find things in these logs IN NEAR REAL TIME.

 

Seriously, that is amazing.  Almost perfect visibility.

 

But the challenge of course is using all that information effectively.  We’ll address that in the next steps, but for now I’ll ask you to wrap your head around this single paradigm shift.

 

Security engineers and our existing alerting toolsets are currently focused on looking at indicators of compromise (IOCs) that may indicate a potential threat.  This is how detections are written and how alerts are triggered.  The trade-off is we ignore a lot of data that is too noisy – or contains a lot of “business as usual” events that are hard to distinguish from actual bad actors.

 

The first step is to revisit this data in its entirety.  Risk Based Alerting gives us the capability to filter the noise automatically – so we can broaden our detections and look at any event of interest – even if it has a low probability of being malicious.

 

One good example is “First time logon”.  Incredibly noisy, especially in large environments.  We’d never look at this event if it triggered a single alert, but with Risk Based Alerting we can record this, score it as low-probability risk, and use it to stitch together a pattern of behavior when correlated with other “risk” events.

 

We call this a shift from “one-to-one” detections, to “many-to-one” detections. 

 

The takeaway from step 1 – expand your detections to look for as many clues as possible – automatically rank those clues using the RBA scoring methodology.  
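The step 1 shift can be sketched in a few lines of code. This is an illustrative model of the idea, not Splunk ES internals: a detection that is far too noisy to alert on records a scored risk event instead, and the names and score values here are assumptions for the sake of the example.

```python
# A minimal sketch of step 1: a noisy detection records a scored
# "risk event" for later correlation instead of firing an alert.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class RiskEvent:
    object_id: str       # the "who" the event is attributed to (user or system)
    detection: str       # which detection produced this clue
    score: int           # low score = low probability of being malicious
    timestamp: datetime

risk_index = []  # stands in for the risk index where clues accumulate

def record_risk(object_id: str, detection: str, score: int) -> None:
    """Record a clue for later correlation rather than alerting on it."""
    risk_index.append(
        RiskEvent(object_id, detection, score, datetime.now(timezone.utc))
    )

# "First time logon" is too noisy to alert on directly,
# but cheap to record with a small score.
record_risk("jdoe2", "first_time_logon", score=10)
```

Nothing alerts here – that is the point. The clue sits in the risk index until steps 2 and 3 relate it to other events on the same object.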

 

But remember this is just the collection part of the process, there is more to the novel approach of RBA.

 

Step 2 – Stitch-it Up – Relate the “Who”

 

Sometimes I think Risk Based Alerting is a misnomer.  I remember we had a discussion once around the name RBA.  In some industries – like banking – the word “risk” is taboo, for compliance reasons, not just cultural and business-model ones.

 

The truth is the “R” should really stand for “Relational” – because that is the key element of RBA – and where RBA gets its true power.

In step 1 we expanded our collection of events.  We’ll call that the “what”.  Step 2 addresses the “who”. 

 

Another challenge of log and security data is the variety of sources and technology stacks that it originates from.  While a user accesses the IT environment from an endpoint, the information they generate and consume travels across these stacks: authentication systems, email, network, firewalls, the web itself.

 

The simple disconnect of knowing a “user” by a single name adds complexity right out of the gate.  Jane Doe – is it janedoe2, jdoe2@outpost-security.com, Jane R Doe – CIO, or Employee ID 8675309?  The problem also extends to systems – “what is the IP address of this machine name?”

 

Every event has at least one “who”, and most events have more than one.  A “user” and a “system” is the most elemental pairing. (But you can make this instantly smarter by tagging the system as a source or a destination. Why not add vector physics to our alerting searches!)

 

If we have a source of truth that identifies our known users and systems across all technology stacks and log sources, we have achieved a massive simplification win.

 

As we find and record risk events, we can identify the “who” immediately and consistently, and record that along with the event or behavior itself.  Don’t forget we have a risk score added to each event as well.  Even if we see a user or a system that we don’t know, we can still remember their identity, and use it to find evidence of their actions in other events.
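The identity “source of truth” idea above can be sketched simply. The alias data here is illustrative, not from any real asset inventory, and the canonical names are assumptions for the example:

```python
# Hypothetical sketch of an identity source of truth: every alias seen in
# the logs resolves to one canonical object, so risk events relate correctly.
IDENTITY_MAP = {
    "janedoe2": "jane.doe",
    "jdoe2@outpost-security.com": "jane.doe",
    "8675309": "jane.doe",          # employee ID
    "wks-042": "host:wks-042",
    "10.1.2.3": "host:wks-042",     # current IP of that machine
}

def canonical(who: str) -> str:
    """Resolve any alias to a canonical identity. Unknown identities are
    kept as-is so their actions can still be stitched together later."""
    return IDENTITY_MAP.get(who.lower(), who.lower())

# All of Jane's aliases collapse to one object the risk events can attach to.
assert canonical("JaneDoe2") == canonical("jdoe2@outpost-security.com")
```

The payoff is that a logon event keyed by username and a firewall event keyed by IP land on the same object, instead of looking like two unrelated actors.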

 

The takeaway from step 2 – relate everything together by the person or thing doing the actions (the “object” – e.g. user, endpoint, IP address, email sender).

 

Step 3 – Roll-it Up – See the “who”, all the “what”, and “who else” all at Once

We now have a gigantic collection of the “what” and the “who”.  The final step is to bring everything together.

 

Traditionally this is called correlation.  The difference that you see immediately in RBA though is that we’ve taken the concept of correlation and supercharged it. 

 

First and foremost, we don’t trigger an alert for review until we see enough “risk” events accumulate on a single object.  There are a couple of ways to calculate this, but for this article, we don’t need to go to that level of detail.
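The accumulation idea can be sketched as a simple “many-to-one” roll-up. The scores and the threshold of 100 below are illustrative assumptions, not Splunk ES defaults, and the detection names are made up for the example:

```python
# Sketch of the many-to-one roll-up: sum risk scores per object and only
# surface an alert once the accumulated risk crosses a threshold.
from collections import defaultdict

ALERT_THRESHOLD = 100  # illustrative; real deployments tune this per object type

def rollup(risk_events, threshold=ALERT_THRESHOLD):
    """Return objects whose accumulated risk warrants analyst review,
    along with the event narrative that justifies the alert."""
    by_object = defaultdict(list)
    for obj, detection, score in risk_events:
        by_object[obj].append((detection, score))
    return {
        obj: events
        for obj, events in by_object.items()
        if sum(score for _, score in events) >= threshold
    }

events = [
    ("jdoe2", "first_time_logon", 10),
    ("jdoe2", "powershell_download", 60),
    ("jdoe2", "volume_shadow_copy_deleted", 80),
    ("asmith", "first_time_logon", 10),
]
# Only jdoe2 crosses the threshold; asmith's single noisy event stays quiet.
alerts = rollup(events)
```

Note that the alert carries the whole list of contributing events – that list is the narrative the analyst reads, not a single opaque trigger.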

 

The important thing is what you (your security analyst) sees in this alert – a narrative of recent events of interest, generated by your broad detection sets.  This reads like a literal script of what this user did, when they did it, and what they did next.

 

In this screenshot we see a series of risk events for a user and system that individually could be benign, but when we see them together, another story becomes clear.  We also make it really easy by adding handy risk messages that explain in plain English what each event means. What would have been difficult to see via individual events, we see almost immediately here: this is a ransomware attack in progress.

(Thank you Splunk® for attack_data, https://github.com/splunk/attack_data )

In practice, this has reduced MTTRs for our customers from hours to less than 20 minutes.

 

The takeaway from step 3 – using the expanded detection data that is all tagged with at least one object, create a single pane of glass that shows all of the risk activity logged on an object over time.  Also show any objects related to those risk activities.

 

 

The Future of Security Alerting at Scale

 

Hopefully you have a better understanding of how RBA works in principle.   As I mentioned before, it is consistent execution of these core principles that allows RBA to be successful at scale. 

 

Why does that matter?  Execution at scale not only means security alerting metrics that will be the envy of every CISO, but it also means reduced overhead, increased bandwidth for your top talent, and the ability to build resiliency without adding staff or outsourcing to 3rd parties who may not be as effective.

 

Finally, Highland Defense was founded to help you achieve all of these things in your company.   Reach out for a demo so we can show you how you can make the future of security alerting a reality in your organization in about 10 weeks.

Living off the land Phishing Attacks - A Feedback Loop Story

The greatest driver of business value in security is feedback loops. However, the true value of the successful completion of one such loop can be difficult to assess. Traditional KPIs are retrospective and fail to capture the systemic value of the complex “wins” that security feedback loops deliver.

One way to communicate the complex value of these loops is through a simple story. What follows is a conversation between the Highland Defense founders, Stuart & Will, about a successful feedback loop that Stuart navigated with a Fortune 500 company.

————————————-

Stuart: A recurring problem we were seeing was phishing attacks coming into the company that used PowerShell commands from malicious Word docs or Excel files to infect computers. We knew that chasing phishing emails alone was not going to be enough. So we looked at how we could potentially stop the initial call-out.

 

Will: What do you mean by that?

 

Stuart: When a user opens a malicious attachment, it usually fires a command that calls out to the Internet to download additional files or information to infect and attack. This technique works because the email itself is very “light” and difficult to identify based on the email signature alone. Most of the attacks at the time were using PowerShell to do the call-out.  There was also growth in the use of Living Off the Land binaries, or LOLBins. These are Microsoft executables that can call out to the Internet and download malware as well. An example is msiexec.exe, the Windows Installer executable. We were seeing an increasing number of attacks using either of these techniques.

 

Will: To summarize the attack, it is a system executable, with a generic name, that runs when a user opens a malicious attachment. At the time the attachment is opened, the executable infects the user’s machine with whatever payload it is programmed to retrieve from somewhere on the Internet.

 

Stuart: Yes. The whole point was to leverage the computer to download the malware as a way to avoid any security controls.

 

Will: Initiated by PowerShell – the command line interface – or some other .exe installer?

 

Stuart: Correct. In the old days, it used to be Visual Basic scripts. As we saw attacks evolving, they switched to JavaScript, leveraging JavaScript in documents to download malware as well. Knowing this, we took some examples and wrote a couple of attacks ourselves to run on a test machine and see if we could find a choke point.  Was there a security control we could invoke to prevent malicious use of these executables, but still allow business or authorized use of the “good” files?

 

Will: As a result of this you narrowed down the point of compromise to the call out to the internet to download the malicious target payload.

 

Stuart: We wanted to focus on the call-out because then the computer was less damaged; catch it early and prevent other changes to the computer. At the time we had an antivirus product that also had a local, host-based firewall.  That gave us an idea – what if we could prevent the call-out with the firewall? This would be great, but there are a lot of different firewalls, and all are rather limited in what they can do, especially on an endpoint.

With the firewalls on users’ workstations, you have three choices of action:

1) Block by filename

2) Block by port

3) Block by file hash

We looked at all three.

If you blocked by port, you’d essentially block all Web traffic, which is an obvious non-starter. We can’t block by an IP or destination because we don’t know where they are going to download the malware from. That left file hash and file name. The challenge with file hashes is they vary widely, depending on what version of PowerShell you have installed, what patches you have applied, the version of the Windows OS running, etc. We ended up with such a large inventory of file hashes that we knew it was going to be difficult to maintain.

 

Will: The file hash method is not very repeatable or scalable. The variances are so frequent that it makes for a fragile detection?

 

Stuart: Correct. It's fragile. We were left only with filenames. Now, bad actors can rename executables to get around a filename block, but we noticed that we were intervening so early in the attack that they didn't have time to rename the file.  They almost always rename it later to cover their tracks and keep a back door. Again, we were seeing it early enough and could observe that the base executables were not being renamed or copied to another location under another name. We put in a firewall block for the big ones – powershell.exe, powershell_ise.exe. The result was an immediate stop to those phishing attacks running on workstations.

Then we added cscript.exe and wscript.exe. Those two prevented any VBScript or JavaScript attacks from happening. It's also important to note that these blocking controls had very minimal impact on the business. Most people didn't even know we put them in. We deployed the control out to twenty thousand endpoints and had seven users call about an issue before we put exclusions in for them. They were heavy PowerShell users who downloaded modules from the Internet. We worked with them to create an exclusion allowing call-outs to certain web sites that we knew were good.

Overall, it was a rather invisible control to end users. When we pulled the numbers, we dropped our infection rate on phishing emails by almost 80 percent, just by putting in this one control.

 

Will: How is the file name identification less fragile than the hash? Why can't they just rename the files before getting caught - like you mentioned before?

 

Stuart: You have to remember, they don't have control of your workstation yet. Anything they do on that workstation prior to getting control has to be in the script that they sent to you in the phish.

One of the big things that endpoint controls monitor is whether you're copying or renaming system executables. That’s really easy to catch because nothing should be doing that.

 

Will:  That makes sense. And if the executable doesn’t have a generic or benign name, then the endpoint control is going to catch it as malware.

 

Stuart: Yes, they have to avoid strange or coded filenames. They have to hide in plain sight essentially.

That's what attackers have relied on for the last five years – leveraging known good executables to use them in a malicious way. And that's what this takes advantage of. It augments whatever else you're doing on your endpoint to really close out how infections get installed on the workstation.

 

Will: So at the end of the day, it was an endpoint firewall config?

 

Stuart: Yes. We used our antivirus to deploy it, but you can do the exact same configuration in the Windows host firewall and deploy it using Active Directory. There are a lot of options, and most host-based firewalls have the same options to configure.

The other key was that we only blocked executables going out to public IP addresses. That's what minimized the business impact: if it's a public IP address range, block it.

That still allows you to use PowerShell to connect to other computers within your company to manage, monitor, or query them, while at the same time preventing PowerShell from being abused to hit the open internet.
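The public-versus-internal distinction Stuart describes can be sketched with Python’s standard ipaddress module. This is just the decision logic for illustration – in a real deployment it is expressed as host firewall rules, not code running on the endpoint:

```python
# Sketch of "block script-engine call-outs to public IPs only":
# globally routable destinations are blocked, while internal
# (RFC 1918 / loopback / link-local) management traffic is left alone.
import ipaddress

def should_block(dest_ip: str) -> bool:
    """Return True if a script engine's connection to dest_ip should be
    blocked, i.e. the destination is on the public Internet."""
    return ipaddress.ip_address(dest_ip).is_global

# Call-out to the open Internet: blocked.
assert should_block("93.184.216.34") is True
# PowerShell remoting to an internal server: allowed.
assert should_block("10.20.30.40") is False
```

This is why the control was nearly invisible to the business: legitimate internal PowerShell administration never matched the block condition.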

 

Will: Simple, specific, and effective, with almost no negative impacts on business operations. Sounds like a win-win-win.

 

Stuart:  There’s one more thing. When we put in this firewall rule, we started logging whenever it was hit by a workstation. Those logs flowed into Splunk, and we used that data to write alerts and enrich our RBA data. Splunk could tell us, “Hey, this workstation attempted to reach out to the Internet with a script. It was ‘malicious_script_z’.” With that information we could track down where the potential malware came from, what e-mail it was, and who else might have been hit by the same phish.  It allowed us to correlate and coordinate a response across the entire company in near real time.

 

Will:  Of course – that is brilliant. Knowing the one potential compromise or near compromise lets you spread that across all objects of interest throughout the entire network. You immediately say “We know this is bad and in our network. We stopped it once, where else do we see it so we can stop those too?”

 

Stuart: Exactly. What it gives away is the IP address that it was attempting to reach out to on the Internet. You can see if the attack centered around one IP address or multiple IPs.   Then you can say “Hey, if I know this IP address was bad, I’m going to check the rest of my events to find out who else reached out to it.”  Even if your firewall control is on 50% of your machines, you can pivot off of the known bad IP to see and block other machines that are calling out to it.
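The pivot Stuart describes is simple to sketch: one blocked call-out yields a known-bad IP, and a sweep of the rest of the network logs finds every other machine that reached it. The log records and hostnames below are made up for illustration:

```python
# Sketch of pivoting off a known-bad IP: given one blocked call-out,
# find every other source host that contacted the same destination -
# including machines the firewall control never covered.
def pivot_on_bad_ip(network_logs, bad_ip):
    """Return every source host seen contacting the known-bad destination."""
    return sorted({log["src"] for log in network_logs if log["dest"] == bad_ip})

network_logs = [
    {"src": "wks-001", "dest": "203.0.113.7"},
    {"src": "wks-042", "dest": "198.51.100.9"},
    {"src": "wks-077", "dest": "203.0.113.7"},
]

# One blocked call-out told us 203.0.113.7 is bad; the pivot surfaces
# wks-001 and wks-077 for containment.
hits = pivot_on_bad_ip(network_logs, "203.0.113.7")
```

In Splunk this is a search over firewall or proxy data rather than a Python loop, but the shape of the question is the same: “who else talked to this address?”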

 

Will: The attackers would have to constantly vary that IP, which isn’t that hard to do, if they wanted to have a successful attack at that company?

 

Stuart: Correct, but then all it takes is for them to hit one of the machines with the block rule on; the log file gets created and sent to Splunk, and the new IP is known bad. It’s a loop that’s always checking.

 

Will: A truly anti-fragile security control.

Splunk User Group - Takeaways

We had the pleasure of meeting with fellow Splunk enthusiasts to talk Risk Based Alerting (RBA) as well as security in general. We wanted to share the detection and prevention quick wins we covered, and this seemed like a great place to keep them so they can be referenced anytime. These are meant to be ideas – approachable by design, but they may need adjustments to fit your environment.

Detection

Outside IP Determination - Web Traffic

This is a technique used by attackers to identify what external IP address a computer may have. The search takes a lookup of known websites that provide this service, finds any systems connecting to them, and leverages the Web data model in Splunk ES. You can easily modify it to match whatever your web traffic data may be.

Search:

| tstats `summariesonly` count as connection_count, max(_time) as event_time
    from datamodel=Web
    where
    [| inputlookup outside_ip_determination.csv
    | fields url
    | eval url=url."*"
    | rename url as Web.url]
    by Web.user, Web.url, Web.src, sourcetype
| `drop_dm_object_name("Web")`

Contents of outside_ip_determination.csv:

url

ip-api.com

ipinfo.io

freegeoip.net

IP-info.org

tracemyip.org

curlmyip.com

ifconfig.co

icanhazip.com

api.ipify.org

Prevention

SANS IP Blocklist - Network Traffic

The idea of using a blocklist on a firewall is not new, but it continues to be effective at reducing the burden your other security controls have to face. In some environments I have seen Intrusion Detection hits from external IP addresses reduced 60-70% simply by configuring the DShield top 20 blocklist at the firewall. The ripple effect also means fewer alerts for your SOC.

DShield blocklist: https://feeds.dshield.org/block.txt

Below is a post specific to implementing the DShield list on a Palo Alto firewall, but if you have another platform I hope it helps you see how it works:

https://isc.sans.edu/forums/diary/Subscribing+to+the+DShield+Top+20+on+a+Palo+Alto+Networks+Firewall/19365/

The Market Failure of Cybersecurity

The Current State of Disservice of Security Products and their Vendors

Outstanding keynote given at VB2019 in London in early October by Haroon Meer & Adrian Sanabria.

Here are some highlights:

2:15 - Visual representation of a “crowded” provider market from Momentum Cyber

5:55 - “Median time for attackers to exist on a network before being discovered is 205 days”

7:00 - “For Infosec, VC model is broken”

9:10 - “Is your security software actually good? Most people can’t tell.”

12:10 - “Complexity is the opposite of Security.”

16:35 - “30% of the security vulnerabilities in the US Government come from Security Products.”

18:15 - “Most security products are not going under any sort of security review.”

20:20 - “Inferior tech is OK - as long as you have a good go-to-market plan.”

21:10 - “Bank of America CISO single biggest security challenge? Dealing with the bazillion vendors knocking on my door.”

25:30 - Hacking industry and product awards.

37:40 - Definition of market failure: “We have all these products but none of them do anything”

38:50 - Hope kernel #1 - 2FAC at Facebook (https://www.youtube.com/watch?v=pY4FBGI7bHM)

40:20 - Hope kernel #2 - bottom up product / market growth

42:27 - The new way - “You can go pretty far by caring about your product and caring about your users.”

Splunk .conf Talks Posted

Splunk was extremely quick in posting the slides and audio from all of the .conf sessions. We wanted to provide the link to the talk we gave on what we learned after implementing a Risk-Based Approach (RBA) in production and processing over 15k RBA alerts. We hope it provides insight and ideas for others who choose this path.

SEC1908 - Tales From a Threat Team: Lessons and Strategies for Succeeding with a Risk-Based Approach

We also want to highlight the RBA work that others are sharing:

And of course Stuart & Jim’s original RBA talk in 2018:

SEC1479 - Say Goodbye to Your Big Alert Pipeline, and Say Hello to Your New Risk-Based Approach