Excessive permissions in cloud infrastructure can easily lead to unauthorized access, but regularly auditing AWS permissions makes that access much harder for attackers to obtain.
In Neuvik’s previous blog, we highlighted a typical policy misconfiguration: granting excessive permissions. Because access management is hard to implement, scale, and observe (which is why this flaw is so commonly exploited), this misconfiguration makes it easier for an attacker to gain unauthorized access and potentially compromise sensitive data.
Knowing this, three immediate questions come to mind:
- How does AWS evaluate existing permissions?
- How do misconfigured permissions result in unauthorized access and the compromise of sensitive data?
- How does one evaluate existing permissions in AWS and fix them?
Over-permissions exist in other cloud providers such as Microsoft Azure and Google Cloud, but here we focus on Amazon Web Services (AWS).
How does AWS evaluate permissions?
AWS publishes its policy evaluation logic on its website, including an illustrative flowchart of the decision gates. When performing a Cloud Security Assessment, the first item our team checks is whether the fewest possible permissions are applied to an environment.
We have modified the above graphic to highlight two important sections of the flowchart. The first is that all AWS policy evaluation starts from a default “Deny.” Each policy type that can change that decision lives in its own swim lane. Note that a user would have both resource-based policies and identity-based policies to flow through for access, and the resource-based policy is evaluated before the identity-based policy.
From the user’s perspective, more than one policy can be attached to a single identity. These policies do not supersede one another; they are additive and apply in combination. The one overriding rule, highlighted in the graphic above, is that an explicit “Deny” always wins over an “Allow” (a short CLI sketch after the list below demonstrates this). These identity policies can be applied to:
- Users
- Groups
- Roles (attached to systems)
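To see the “Deny wins” behavior concretely, the AWS CLI’s policy simulator can evaluate a set of candidate policies together. The sketch below is illustrative only; the two inline policies are hypothetical and not drawn from a real environment.

# Simulate two identity policies evaluated together: a broad Allow and a targeted Deny
aws iam simulate-custom-policy \
    --policy-input-list \
        '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"s3:*","Resource":"*"}]}' \
        '{"Version":"2012-10-17","Statement":[{"Effect":"Deny","Action":"s3:PutObject","Resource":"*"}]}' \
    --action-names s3:GetObject s3:PutObject
# Expected output: s3:GetObject evaluates to "allowed", while s3:PutObject
# evaluates to "explicitDeny" because the Deny overrides the Allow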
A user’s “effective permissions” (or “total permissions”) in the system are a combination of resource-based policies and identity-based policies. In practice, a user’s policies can be defined in all of the following ways:
- Directly applied to the user
- Applied to the user as a managed policy attachment, which can be reused across many users
- Applied directly to a group of which the user is a member
- Applied to a group through a managed policy attachment, which can be reused across users and groups
In AWS, each of these permissions can be very granular, as each maps to a specific API call.
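One way to enumerate all of these sources for a given user, and to check the effective result for a specific action, is with the built-in IAM CLI commands below; the user name, group name, and account ID are hypothetical placeholders.

# Managed policies attached directly to the user
aws iam list-attached-user-policies --user-name example-user
# Inline policies embedded directly in the user
aws iam list-user-policies --user-name example-user
# Groups the user belongs to
aws iam list-groups-for-user --user-name example-user
# Managed and inline policies on one of those groups
aws iam list-attached-group-policies --group-name example-group
aws iam list-group-policies --group-name example-group
# Ask AWS to evaluate the combined (effective) result for a specific action
aws iam simulate-principal-policy \
    --policy-source-arn arn:aws:iam::111122223333:user/example-user \
    --action-names s3:PutObject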
How do misconfigured permissions result in unauthorized access and the compromise of sensitive data?
To understand how these misconfigurations are created, consider one example: resource-based policies. Resource policies are applied directly to resources such as S3. An S3 bucket can carry its own policy, so access can be controlled at the resource level. To explain further, look at the following S3 policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::super-public-bucket/*",
        "arn:aws:s3:::super-public-bucket"
      ]
    }
  ]
}
On the surface, this statement may look innocuous. However, a few issues exist here. For the purposes of this example, this particular bucket is supposed to be publicly accessible; perhaps it hosts an organization’s website, or the static portions of one, such as the JavaScript and CSS files. The permissions may seem adequate because they allow only Get*, List*, and PutObject. Even so, Get* is not a restrictive permission set, because Get* grants, among other things:
- GetObject: This is what the user actually needs: the ability to retrieve objects stored in S3. It is the only Get permission that should be granted here.
Additionally, Get* grants API calls that an organization would not want to expose to the public (a short sketch of these calls follows the list):
- GetBucketPolicy: Retrieves the bucket policy for the bucket itself.
- GetObjectAcl: Retrieves the access control list for each object in the bucket.
- GetObjectAttributes: Similar to GetObjectAcl, this retrieves metadata about objects, potentially including version information. If bucket versioning is disabled, that can also be detected.
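As a rough illustration, these are the kinds of unauthenticated calls that become possible when Get* is exposed publicly; the bucket and object key are the hypothetical ones from this example.

# Read the ACL of an individual object (s3:GetObjectAcl), unauthenticated
aws s3api get-object-acl --bucket super-public-bucket --key index.html --no-sign-request
# Read object attributes (s3:GetObjectAttributes), which can hint at versioning state
aws s3api get-object-attributes --bucket super-public-bucket --key index.html \
    --object-attributes ObjectSize StorageClass --no-sign-request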
Another example of misconfiguration abuse is using the AWS CLI to talk to improperly configured S3 buckets. For example, look at the bucket policy retrieved below:
aws s3api get-bucket-policy --bucket super-public-bucket
{
  "Policy": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Sid\":\"PublicReadGetObject\",\"Effect\":\"Allow\",\"Principal\":\"*\",\"Action\":\"s3:GetObject\",\"Resource\":\"arn:aws:s3:::super-public-bucket/*\"}]}"
}
Would an organization want a random user to be able to read this bucket policy? Could it divulge sensitive pieces of data? What if an account ID were leaked? Part of risk management is being aware of who has access to what data and what risks that access might pose.
How does this get worse? Imagine that this is no longer simply a public bucket, but instead serves www.super-public-site.com.
Because the bucket allows GetObject, a visitor can retrieve all three files a browser would request for the site: /index.html, /js/super-public-javascript.js, and /css/super-public-css.css. Remember, however, that other permissions were granted on this bucket: List* and PutObject.
This means that any user can fully enumerate the files in the bucket, which is a potential vulnerability in itself, as it can lead to information disclosure. More nefarious is PutObject, which allows anyone to write objects into the bucket, including replacing existing files.
What could an attacker do with this information?
- List the files in the bucket
- Retrieve the bucket policy, which reveals the permissions in place
- Replace JavaScript files, embedding scripts in the new files that will load in visitors’ browsers (a sketch of these steps follows below)!
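A minimal sketch of that abuse from an unauthenticated attacker’s terminal follows; the bucket, key, and local file names are hypothetical and carried over from this example.

# Enumerate every object in the bucket (s3:List*)
aws s3api list-objects-v2 --bucket super-public-bucket --no-sign-request
# Overwrite the site's JavaScript with an attacker-controlled file (s3:PutObject)
aws s3 cp ./malicious.js s3://super-public-bucket/js/super-public-javascript.js --no-sign-request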
How does one evaluate existing permissions in AWS and fix them?
Our first recommendation is to use a security tool to better understand the permissions in your current AWS environment. A Cloud Security Posture Management (CSPM) tool is ideal, but if that proves cost-prohibitive, there are built-in and free tools you can use to address this issue:
- AWS provides a tool called IAMCTL, which can diff IAM roles and policies across accounts. (https://aws.amazon.com/blogs/security/new-iamctl-tool-compares-multiple-iam-roles-and-policies/)
- AWS publishes best practices for S3 buckets specifically. One of the controls is to use built-in tools such as AWS Trusted Advisor and AWS Config rules (see s3-bucket-public-read-prohibited for this example; a sketch of enabling that rule follows this list).
- NCC Group has an open source tool for permissions auditing called ScoutSuite. We use this quite often to enumerate permissions across clouds.
- Ermetic has a tool called AccessUndenied that analyzes access-denied events and can help build least-privilege AWS policies.
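As a sketch of the built-in option, the AWS Config managed rule mentioned above can be enabled from the CLI, and ScoutSuite can then be run against the environment; the exact invocations below are illustrative, so check each tool’s documentation for your setup.

# Enable the managed Config rule that flags publicly readable S3 buckets
aws configservice put-config-rule --config-rule '{"ConfigRuleName":"s3-bucket-public-read-prohibited","Source":{"Owner":"AWS","SourceIdentifier":"S3_BUCKET_PUBLIC_READ_PROHIBITED"}}'
# Run ScoutSuite against an AWS environment (from a cloned ScoutSuite checkout)
python scout.py aws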
Once the policies in the current AWS environment are understood, it is recommended to begin updating them, for example replacing the earlier policy with the one below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::super-public-bucket/*"
      ]
    }
  ]
}
As can be seen above, this is a far more restrictive policy that grants only the necessary permission. Addressing policies in this manner minimizes the risk of misconfiguration and reduces potential vulnerabilities.
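Note that if this tightened policy is applied as a bucket (resource-based) policy rather than an identity policy, it also needs a Principal element. With that in place, a minimal rollout sketch might look like the following; the policy file name is hypothetical.

# Attach the tightened policy to the bucket
aws s3api put-bucket-policy --bucket super-public-bucket --policy file://restrictive-policy.json
# Verify what is now in effect
aws s3api get-bucket-policy --bucket super-public-bucket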
Do you need a Cloud Security Assessment Review?
Take a moment to look at the user roles in your cloud environment and check to see if you have any vulnerabilities from excess user permissions. If you need help with an AWS Security Assessment Review, we at Neuvik Solutions are here to help.
Contact us here and let us know what we can do for you.