
Q&A: Defense-in-Depth Strategies for Ransomware Threats


In our latest webinar, Defense-in-Depth Strategies for Ransomware Threats, we received so many compelling questions that there wasn’t enough time to answer them all. After the presentation, we sat down with Jonathan Echavarria, Enterprise Architect at ReliaQuest, to uncover the answers and bring them to you here in this blog.


Q: What are the key IOCs to monitor within an MDR tool to ensure ransomware is identified?


A: There are three parts to this answer.


The first is Active Directory usage and logging. You’ll want to monitor Kerberos service ticket requests, which are a typical indicator of attacks like Kerberoasting. For logins as a whole, you’ll want to track and understand where your users are authenticating FROM; anything that falls outside their normal authentication locations should raise a red flag. Additionally, you’ll want to track your users’ 2FA activity: denies should raise an immediate alarm, and you should also look for abnormal user agents (e.g., a python-requests client vs. a mobile user agent).
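
To make that concrete, here is a minimal sketch of how those sign-in signals could be flagged, assuming your authentication logs have already been normalized into simple records; the field names, the per-user location baseline, and the scripted-agent list are illustrative assumptions rather than any particular product’s schema.

```python
# Minimal sketch of flagging risky sign-ins; field names, the per-user location
# baseline, and the scripted-agent list are illustrative assumptions.
KNOWN_LOCATIONS = {"jdoe": {"US", "CA"}}                   # hypothetical baseline
SCRIPTED_AGENTS = ("python-requests", "curl", "powershell")

def flag_auth_event(event: dict) -> list[str]:
    """Return the reasons this sign-in deserves review (empty list if none)."""
    reasons = []
    user, country = event["user"], event["country"]
    if country not in KNOWN_LOCATIONS.get(user, set()):
        reasons.append(f"login from unusual location: {country}")
    if event.get("mfa_result") == "deny":
        reasons.append("2FA deny")
    agent = event.get("user_agent", "").lower()
    if agent.startswith(SCRIPTED_AGENTS):
        reasons.append(f"scripted user agent: {agent}")
    return reasons

# Example: a python-requests login from an unexpected country trips two flags.
print(flag_auth_event({"user": "jdoe", "country": "RO",
                       "mfa_result": "allow", "user_agent": "python-requests/2.31.0"}))
```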


The second is detailed logging of process execution. You can do this with your EDR, or use native system logging and ensure alerting is in place. Capture as much detail as you can afford to collect.
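
As one hedged example of the native-logging route, the sketch below pulls recent process-creation events (Event ID 4688) from the Windows Security log with the built-in wevtutil utility; this assumes process-creation auditing is enabled, and an EDR or Sysmon pipeline would normally collect this at scale instead.

```python
# Minimal sketch: query recent process-creation events (Event ID 4688) from the
# Windows Security log via the built-in wevtutil utility. Assumes "Audit Process
# Creation" (ideally with command-line logging) is enabled.
import subprocess

def recent_process_creations(count: int = 10) -> str:
    """Return the newest `count` process-creation events as rendered text."""
    cmd = [
        "wevtutil", "qe", "Security",
        "/q:*[System[(EventID=4688)]]",  # XPath filter: process creation
        f"/c:{count}",                   # max number of events
        "/rd:true",                      # newest first
        "/f:text",                       # human-readable rendering
    ]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(recent_process_creations(5))
```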


The third is file writes. Logging sensitive file writes, and especially overwrites, is particularly important for file servers. Combined with strategies such as canaries, this lets you start implementing automated remediation. For example, place a file on the file server that should never be touched in normal operations and name it “AAAA Legal Wire Instructions.docx.” By naming it AAAA, we abuse sorting behavior so it is hopefully the first file touched. Once a write is detected on that file, you kill the calling process tree and lock the user account, which should help mitigate a ransomware attack in progress.
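
For illustration, here is a minimal sketch of the detection half of that canary strategy, using the third-party Python watchdog package to watch a hypothetical share path; the response functions are placeholders, since in practice the process-tree kill and account lock would be driven by your EDR/SOAR, which can attribute the write far better than a local script.

```python
# Minimal sketch of canary-file monitoring, meant to run on the file server.
# Requires the third-party "watchdog" package; the share path, canary name,
# and response actions are illustrative assumptions.
import time

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

SHARE_PATH = r"D:\Shares\Legal"                              # hypothetical share
CANARY = SHARE_PATH + r"\AAAA Legal Wire Instructions.docx"  # canary file

def kill_calling_process_tree(path: str) -> None:
    # Placeholder: hand the offending path to your EDR to terminate the
    # process tree that touched it.
    print(f"[RESPONSE] kill process tree that wrote {path}")

def lock_user_account(path: str) -> None:
    # Placeholder: disable the account behind the write (e.g., the SMB
    # session user) via your identity tooling.
    print(f"[RESPONSE] lock account that wrote {path}")

class CanaryHandler(FileSystemEventHandler):
    def on_modified(self, event):
        if event.src_path == CANARY:     # any write to the canary is hostile
            kill_calling_process_tree(event.src_path)
            lock_user_account(event.src_path)

if __name__ == "__main__":
    observer = Observer()
    observer.schedule(CanaryHandler(), path=SHARE_PATH, recursive=False)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()
```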


Q: What are some of the biggest “living off the land” attack vectors being utilized, and how do you best detect and prevent them?


A: Threat actors love to take advantage of living-off-the-land vectors because they allow attackers to blend into normal activity. In most cases, attackers tend to use utilities such as nltest.exe, the native net.exe, WMI, and so on.


The best strategy for mitigating living-off-the-land vectors is to have a solid understanding of how administrative tools have historically been used within your environment. Understanding who uses the tools, how they use them, and which tools they use will allow you to develop accurate, insightful detections that are relevant to your environment. If your Windows administrators don’t typically use net.exe but rely on some third-party administration tool, then it becomes viable to create detections for when those LOLBins are leveraged. Knowing who uses administrative tools also makes it easy to spot someone who shouldn’t be. For example, the chances of someone in accounting executing a command via cmd.exe are likely extremely low. Detections scoped to user groups that flag use of administrative “living off the land” tooling abnormal for that group are high-value detections.
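
As a rough sketch of what such a group-scoped detection could look like, the snippet below flags LOLBin executions by users outside an expected admin population; the binary list, the admin user set, and the event field names are illustrative assumptions, not any particular product’s schema.

```python
# Rough sketch of a group-scoped LOLBin detection; the binary list, admin user
# set, and event field names are illustrative assumptions.
LOLBINS = {"net.exe", "net1.exe", "nltest.exe", "wmic.exe", "cmd.exe", "powershell.exe"}

# Hypothetical population of users expected to run admin tooling.
ADMIN_USERS = {"jdoe", "asmith"}

def is_suspicious(event: dict) -> bool:
    """Flag LOLBin execution by anyone outside the expected admin population."""
    image = event["Image"].rsplit("\\", 1)[-1].lower()
    user = event["User"].split("\\")[-1].lower()
    return image in LOLBINS and user not in ADMIN_USERS

# Example: someone in accounting spawning cmd.exe should trip the detection.
print(is_suspicious({"Image": r"C:\Windows\System32\cmd.exe",
                     "User": "CORP\\accounting_user"}))
```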


Q: Is it better to fool intruders into a fake rabbit hole or block them completely?


A: Both are viable, and they are separate but complementary strategies. “Blocking attacks completely” does nothing against an attacker who can circumvent your mitigations, and if all of your effort goes into first-line prevention, it becomes difficult to evict an attacker in time. It is unrealistic to expect to block every single attack against an environment, so some effort must go into laying traps and creating opportunities to detect intruders. Strategies such as canaries are effective for this, and so is building detections around abnormal administrative activity.


Q: Besides backup, patches, updates, firewalls, and user education, how else do we stay protected from ransomware?


A: Detections, canaries, and exercising your remediation strategies! It’s imperative that you have containment and remediation playbooks in place and that they are exercised regularly. As you run through each scenario, take detailed notes on what is and isn’t working, and adjust accordingly. Conduct regular tabletop sessions to ensure that all stakeholders know their responsibilities and are well educated on what they need to do, and how, in the event of an incident. One can never TRULY eliminate the risk of an attack, but we can do a lot to reduce the cost and impact of one.


Q: Do you see moving more backups into the cloud causing more security risk? Which one do you prefer?


A: “It depends.” A lot of factors go into the risk calculation, but it primarily comes down to how the backups are managed and who can access the data, and that applies regardless of where the backups are located. You have to ask and answer: Is it easy to access the backups? Who has access to them? Are they segmented? What attack paths could a threat actor leverage to gain administrative access to them? It is easier to ensure backups are air-gapped from your environment in a local backup deployment, but that may increase maintenance complexity.


A big “thank you” to Jonathan for sharing his insights in our webinar and follow-up Q&A. As ransomware continues to morph, we’ll be sure to cover the latest protection techniques. In the meantime, you can find a wealth of resources on our Ransomware Protection microsite.