AWS S3 Access Points
Beware of mistakes in Bucket policies
Published: Tuesday, Oct 7, 2025 Last modified: Tuesday, Oct 7, 2025
Many AWS infrastructure teams rely on S3 bucket policies to police access. Because bucket policies can easily become complex, AWS offers Amazon S3 Access Points, which add a second layer of authorization that is easy to overlook. When those access point policies are misconfigured, non-admin users can quietly bypass the defences you thought were in place.
This post walks through the controls we expect on an access point, the common mistakes, and a repeatable way to audit every policy in your estate.
Why Access Points Exist
Access points give you named entryways into a bucket. Each access point has:
- Its own DNS alias (for applications and data pipelines)
- A dedicated policy document (separate from the bucket policy)
- Optional prefix-level restrictions
The defence-in-depth pattern we follow looks like this:
- Admins: continue to hit the bucket directly (s3://bucket-name/).
- Non-admins: blocked from the bucket; they must use an access point.
- Access points: scoped to a single prefix (for example s3accesslogs/) and locked to a single IAM role.
In theory this cleanly separates admin and application traffic. In practice, it only works when each access point policy enforces the correct denies.
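As a sketch of that split, a bucket policy along the following lines denies direct bucket access to everyone except an admin role, while requests arriving through any access point in the account fall through to the access point policies instead. The ARNs, bucket name, and role name are placeholders; s3:DataAccessPointAccount is the documented condition key for requests made via an access point, and because both keys sit under one StringNotEquals, the deny only fires when the caller is not the admin role and the request did not come through an account access point:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Deny direct access except admins and access points",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::bucket-name",
        "arn:aws:s3:::bucket-name/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:PrincipalArn": "arn:aws:iam::ACCOUNT:role/admin-role",
          "s3:DataAccessPointAccount": "ACCOUNT"
        }
      }
    }
  ]
}
```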
Choosing a Guardrail Strategy
There is a spectrum:
- Role-level only. Keep the access point policy minimal—just block everyone except the intended role—and rely on IAM identity policies for day-to-day control.
- Defence in depth. Layer explicit allows and denies so the access point protects you even if the role’s IAM permissions drift over time.
Both patterns start from the same idea: only the SensitiveCommercialSupp-style role should reach this access point. The question is whether you trust the IAM policies attached to that role to stay perfectly aligned.
“Role-level only” policy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Lock access point to one role",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:REGION:ACCOUNT:accesspoint/ACCESS_POINT",
"arn:aws:s3:REGION:ACCOUNT:accesspoint/ACCESS_POINT/object/*"
],
"Condition": {
"StringNotEquals": {
"aws:PrincipalArn": "arn:aws:iam::ACCOUNT:role/ROLE_NAME"
}
}
}
]
}
Merits
- Tiny, easy to reason about, hard to misconfigure.
- Stops every other principal—no cross-account back doors, no anonymous access.
- Lets IAM identity policies continue doing the heavy lifting; if the role doesn’t have s3:GetObject, the access point still blocks the call.
Drawbacks
- The access point itself is “hands off”: once the IAM role gains s3:PutObject, s3:Delete*, or a wider resource ARN, the access point goes along with it.
- Every read/write/list guardrail lives in identity policies, which are more likely to change quickly (emergency fixes, onboarding contractors, attaching AWS managed policies).
If your trust boundary is “that single IAM role” and you have strong controls on how its policies evolve, this can be enough.
Defence in depth
Want the access point to stay read-only and prefix-scoped even if IAM changes later? Add a resource-based allow for the happy path plus three targeted denies for the role.
AWS’ own examples show that the minimal pattern relies on allow statements that scope both the principal and the resource. Example 1 in Configuring IAM policies for using access points grants a single IAM user access to a prefixed path through an access point, and example 3 adds a targeted s3:ListBucket allow for the same user. We start from that pattern, then layer a set of explicit denies as the backstop.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Allow reads through the access point",
"Effect": "Allow",
"Principal": {"AWS": "arn:aws:iam::ACCOUNT:role/audit-role"},
"Action": ["s3:GetObject"],
"Resource": "arn:aws:s3:REGION:ACCOUNT:accesspoint/logs-ap/object/s3accesslogs/*"
},
{
"Sid": "Allow listing within the prefix",
"Effect": "Allow",
"Principal": {"AWS": "arn:aws:iam::ACCOUNT:role/audit-role"},
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:REGION:ACCOUNT:accesspoint/logs-ap",
"Condition": {"StringLike": {"s3:prefix": "s3accesslogs/*"}}
},
{
"Sid": "Deny everyone except the audit role",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:REGION:ACCOUNT:accesspoint/logs-ap",
"arn:aws:s3:REGION:ACCOUNT:accesspoint/logs-ap/object/*"
],
"Condition": {
"StringNotEquals": {
"aws:PrincipalArn": "arn:aws:iam::ACCOUNT:role/audit-role"
}
}
},
{
"Sid": "Deny write actions for the audit role",
"Effect": "Deny",
"Principal": {"AWS": "arn:aws:iam::ACCOUNT:role/audit-role"},
"Action": [
"s3:AbortMultipartUpload",
"s3:DeleteObject",
"s3:DeleteObjectVersion",
"s3:PutObject",
"s3:PutObjectAcl",
"s3:PutObjectTagging",
"s3:PutObjectVersionAcl",
"s3:PutObjectVersionTagging",
"s3:RestoreObject"
],
"Resource": "arn:aws:s3:REGION:ACCOUNT:accesspoint/logs-ap/object/*"
},
{
"Sid": "Deny list outside s3accesslogs/",
"Effect": "Deny",
"Principal": {"AWS": "arn:aws:iam::ACCOUNT:role/audit-role"},
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:REGION:ACCOUNT:accesspoint/logs-ap",
"Condition": {
"StringNotLike": {"s3:prefix": "s3accesslogs/*"}
}
},
{
"Sid": "Deny reads outside s3accesslogs/",
"Effect": "Deny",
"Principal": {"AWS": "arn:aws:iam::ACCOUNT:role/audit-role"},
"Action": "s3:GetObject",
"NotResource": "arn:aws:s3:REGION:ACCOUNT:accesspoint/logs-ap/object/s3accesslogs/*"
}
]
}
Those final four statements are the “divergence” from the AWS documentation. They shut every other principal out, deny write-style APIs, and prevent the intended role from listing or reading outside its prefix. Without them the policy still works, but we lose the extra assurance that misconfigured IAM permissions can’t punch through the access point.
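Why the backstop works comes down to IAM’s evaluation order: an explicit deny always overrides an allow, and anything unmatched is implicitly denied. The toy evaluator below illustrates that ordering on two simplified statements mirroring the policy above; it is a deliberate simplification (string principals, no conditions, no NotResource), not AWS’s real evaluator.

```python
def evaluate(statements, principal, action, resource):
    """Return 'Deny', 'Allow', or 'ImplicitDeny' for a simplified request.

    Statements use lists for Principal/Action/Resource; '*'-suffixed
    resources match by prefix. Explicit Deny wins; default is deny.
    """
    def matches(stmt):
        p_ok = stmt["Principal"] == "*" or principal in stmt["Principal"]
        a_ok = "s3:*" in stmt["Action"] or action in stmt["Action"]
        r_ok = any(
            resource == r or (r.endswith("*") and resource.startswith(r[:-1]))
            for r in stmt["Resource"]
        )
        return p_ok and a_ok and r_ok

    if any(matches(s) for s in statements if s["Effect"] == "Deny"):
        return "Deny"          # explicit deny overrides any allow
    if any(matches(s) for s in statements if s["Effect"] == "Allow"):
        return "Allow"
    return "ImplicitDeny"      # nothing matched: default deny

# Two statements echoing the policy above: a prefix-scoped read allow
# and a write guard deny for the same role.
stmts = [
    {"Effect": "Allow", "Principal": ["audit-role"],
     "Action": ["s3:GetObject"], "Resource": ["ap/object/s3accesslogs/*"]},
    {"Effect": "Deny", "Principal": ["audit-role"],
     "Action": ["s3:PutObject"], "Resource": ["ap/object/*"]},
]
print(evaluate(stmts, "audit-role", "s3:GetObject", "ap/object/s3accesslogs/a.log"))  # Allow
print(evaluate(stmts, "audit-role", "s3:PutObject", "ap/object/s3accesslogs/a.log"))  # Deny
print(evaluate(stmts, "audit-role", "s3:GetObject", "ap/object/secrets/key"))         # ImplicitDeny
```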
Defence in depth is not simple
We have found defence-in-depth policies that skipped the prefix scoping in their allow statements: the Resource looked like .../object/*, so the role could read or write anything reachable through the access point. Others granted s3:* to "Principal": "*", which effectively turned the access point into an open door. The deny backstop would have caught both errors.
Because access point policies are independent from the bucket policy, these mistakes slipped through normal checks. IAM role permissions were still limited, but the access point no longer enforced the isolation we depend on for logs, audit trails, and other sensitive prefixes.
Detection: Automate the Review
To make the findings repeatable we rewrote validate_policy.py so that it checks for the AWS-documented allow patterns and the deny overlay. The helper now verifies that:
- only the expected IAM role appears in Principal;
- every object Resource ends with /object/<prefix>*;
- any s3:ListBucket allows are gated by an s3:prefix condition for that same prefix; and
- the four deny statements above are present and scoped correctly (principal lockdown, write guard, list guard, get guard).
Usage:
python3 validate_policy.py policy.json \
--role arn:aws:iam::123456789012:role/audit-role \
--prefix s3accesslogs/
For each failure, the tool prints the actual JSON that was found (in red) and the expected policy snippet (in green). That side-by-side view makes it easy to coach teams through the required changes and prevents regressions.
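The validator itself is internal, but the first check (principal lockdown) can be sketched along these lines; the helper name is hypothetical and the real validate_policy.py is considerably more thorough:

```python
def has_principal_lockdown(policy: dict, role_arn: str) -> bool:
    """Check for a Deny statement that blocks every principal except
    role_arn via a StringNotEquals condition on aws:PrincipalArn."""
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Deny" or stmt.get("Principal") != "*":
            continue
        cond = stmt.get("Condition", {}).get("StringNotEquals", {})
        if cond.get("aws:PrincipalArn") == role_arn:
            return True
    return False

# A minimal policy containing only the lockdown statement.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "Lock access point to one role",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": ["arn:aws:s3:REGION:ACCOUNT:accesspoint/logs-ap"],
        "Condition": {"StringNotEquals": {
            "aws:PrincipalArn": "arn:aws:iam::ACCOUNT:role/audit-role"}},
    }],
}
print(has_principal_lockdown(policy, "arn:aws:iam::ACCOUNT:role/audit-role"))  # True
```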
Remediation Playbook for SecOps
- Inventory access points. Use AWS Config or aws s3control to list every access point in scope.
- Export the policies. Run aws s3control get-access-point-policy for each.
- Run the validation script. Feed each policy through the helper with the role ARN and expected prefix.
- Fix failures quickly. Copy the green snippet, swap in the real ARN and access point name, and update the policy.
- Add to CI. If you treat access point JSON as code, add the script to your pipeline so broken policies never ship again.
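For the CI step, one hedged sketch (assuming you export each access point’s policy to a directory of JSON files, one per access point; this is illustrative, not our actual pipeline) is a scanner that flags any policy missing the four-statement deny overlay:

```python
import json
import pathlib

def deny_count(policy: dict) -> int:
    """Count explicit Deny statements; the overlay described above needs four."""
    return sum(1 for s in policy.get("Statement", []) if s.get("Effect") == "Deny")

def scan(directory: str) -> list:
    """Return names of exported policy files lacking the four-deny overlay."""
    failures = []
    for path in sorted(pathlib.Path(directory).glob("*.json")):
        policy = json.loads(path.read_text())
        if deny_count(policy) < 4:
            failures.append(path.name)
    return failures
```

Wire `scan` into the pipeline and fail the build whenever it returns a non-empty list; a count-based check like this only catches missing statements, so the full validator should still verify their scoping.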
Key Takeaways
- Treat S3 access point policies with the same rigor as bucket policies.
- Tight allow statements (principal + resource) do most of the work; we still layer explicit denies so the access point stays safe even if IAM permissions drift.
- Prefix isolation is not automatic—you have to enforce it.
- Automation is the only practical way to keep dozens (or hundreds) of access points aligned with policy.
If your estate already uses access points, grab the validator and run it on the entire fleet. The output will tell you exactly where your guardrails are missing and how to fix them.