27

I use a data processing pipeline constructed of

S3 + SNS + Lambda

because S3 cannot send notifications outside of its storage region, so I used SNS to forward the S3 notifications to a Lambda in another region.
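
For reference, a minimal sketch of the S3-to-SNS wiring with boto3 follows; the bucket name, topic ARN, and region are placeholders, and the SNS topic's access policy must already allow S3 to publish to it.

import boto3

# Placeholders: substitute your bucket, topic ARN, and region.
# The topic's access policy must already allow S3 to publish to it.
s3 = boto3.client("s3", region_name="us-east-1")
s3.put_bucket_notification_configuration(
    Bucket="my-source-bucket",
    NotificationConfiguration={
        "TopicConfigurations": [
            {
                "TopicArn": "arn:aws:sns:us-east-1:123456789012:s3-events",
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)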

The Lambda function is coded as follows:

from __future__ import print_function
import boto3


def lambda_handler(event, context):
    # Pull the bucket and key out of the S3 event record
    input_file_bucket = event["Records"][0]["s3"]["bucket"]["name"]
    input_file_key = event["Records"][0]["s3"]["object"]["key"]

    input_file_name = input_file_bucket + "/" + input_file_key

    s3 = boto3.resource("s3")
    obj = s3.Object(bucket_name=input_file_bucket, key=input_file_key)
    response = obj.get()

    return event  # echo the event back for now

When I ran Save and Test, I got the following error:

{
  "stackTrace": [
    [
      "/var/task/lambda_function.py",
      20,
      "lambda_handler",
      "response = obj.get()"
    ],
    [
      "/var/runtime/boto3/resources/factory.py",
      394,
      "do_action",
      "response = action(self, *args, **kwargs)"
    ],
    [
      "/var/runtime/boto3/resources/action.py",
      77,
      "__call__",
      "response = getattr(parent.meta.client, operation_name)(**params)"
    ],
    [
      "/var/runtime/botocore/client.py",
      310,
      "_api_call",
      "return self._make_api_call(operation_name, kwargs)"
    ],
    [
      "/var/runtime/botocore/client.py",
      395,
      "_make_api_call",
      "raise ClientError(parsed_response, operation_name)"
    ]
  ],
  "errorType": "ClientError",
  "errorMessage": "An error occurred (AccessDenied) when calling the GetObject operation: Access Denied"
}

I configured the Lambda role with

full S3 access

and set the bucket policy on my target bucket so that

everyone can do anything (list, delete, etc.)

It seems that I haven't set the policy up correctly.

7 Answers

32

Omuthu's answer correctly identified my problem, but it didn't provide a solution, so I thought I'd do that.

It's possible that when you set up your permissions in IAM, you made something like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::test"
            ]
        }
    ]
}

Unfortunately, that's not correct. You need to apply the object permissions to the objects in the bucket, not just to the bucket itself, so it has to look like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::test"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::test/*"
            ]
        }
    ]
}

Note the second ARN with the /* at the end of it.
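
If you manage the role from code rather than the console, a minimal sketch of attaching the corrected policy inline with boto3 might look like this (the role and policy names are made up):

import json
import boto3

iam = boto3.client("iam")

# The corrected policy from above: ListBucket on the bucket,
# object actions on the objects
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::test",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::test/*",
        },
    ],
}

# "my-lambda-role" and "s3-read-write" are illustrative names
iam.put_role_policy(
    RoleName="my-lambda-role",
    PolicyName="s3-read-write",
    PolicyDocument=json.dumps(policy),
)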


3 Comments

Hey, where can I see these settings? I cannot find them anywhere...
s3:ListBucket is needed to create a bucket
The Amplify CLI set it up the incorrect way when I added a storage trigger. This was a frustrating bug...
17

I had a similar problem; I solved it by attaching the appropriate policy to my user.

IAM -> Users -> Username -> Permissions -> Attach policy.

Also make sure you add the correct access key and secret access key; you can do so using the AWS CLI. A quick sanity check is sketched below.
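
As a way to verify that your credentials resolve to the identity you expect, something like this works (STS echoes back the caller's ARN):

import boto3

# Prints the ARN of whatever identity the current credentials belong to
print(boto3.client("sts").get_caller_identity()["Arn"])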

Comments

10

It's possible that the specific S3 object you are looking for has limited permissions; see the diagnostic sketch after this list.

  1. Object-level read permission on the S3 object is denied
  2. The role attached to the Lambda does not have permission to get/read S3 objects
  3. If access is granted via an S3 bucket policy, verify that read permissions are included
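
A minimal way to probe the first two cases (the bucket and key below are placeholders):

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
try:
    # Placeholders: substitute your real bucket and key
    s3.head_object(Bucket="my-bucket", Key="path/to/object")
    print("object is readable")
except ClientError as e:
    # "403" means read access is denied; "404" means the object is
    # missing (S3 only reveals 404 when s3:ListBucket is granted)
    print("HeadObject failed with", e.response["Error"]["Code"])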

5 Comments

This is pretty vague. Could you possibly point out how one might address this issue?
Two possibilities: 1. S3 object-level permission for read is denied. 2. The role attached to the Lambda does not have permission to get/read S3 objects.
For me, adding s3:GetObject to the policy helped.
I had to write a bucket policy to grant access: ` { "Version": "2012-10-17", "Statement": [ { "Sid": "Allow All", "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::<userid>:user/<username>" ] }, "Action": [ "s3:GetObject", "s3:PutObject", "s3:PutObjectAcl" ], "Resource": "arn:aws:s3:::<resource-name>/*" } ] } `
Also worth pointing out that AWS S3 returns Access Denied for objects that don't exist so as to not reveal whether an object exists or not...
7

I had a similar problem; the difference was that the bucket was encrypted with a KMS key.

Fixed with: IAM -> Encryption keys -> YOUR_AWS_KMS_KEY -> add your role or account to the key policy.
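
For example, the KMS key policy needs a statement along these lines so the Lambda role can decrypt the objects (<account-id> and <lambda-role> are placeholders):

{
    "Sid": "AllowLambdaRoleToDecrypt",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::<account-id>:role/<lambda-role>"
    },
    "Action": "kms:Decrypt",
    "Resource": "*"
}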

1 Comment

you saved me! :) I need to buy you a coffee!
7

Adding to Amri's answer, if your bucket is private and you have the credentials to access it, you can use boto3.client:

import boto3

# Pass explicit credentials instead of relying on the default chain
s3 = boto3.client('s3', aws_access_key_id='ACCESS_KEY', aws_secret_access_key='SECRET_KEY')
response = s3.get_object(Bucket='BUCKET', Key='KEY')

*For this file: s3://bucket/a/b/c/some.text, Bucket is 'bucket' and Key is 'a/b/c/some.text'

---EDIT---

You can easily change the script to read the keys from environment variables, for instance, so they are not hardcoded; a sketch follows. I left it like this for simplicity.
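
A sketch of that environment-variable version (the CROSS_ACCOUNT_* names are made up; the standard AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY variables are reserved by the Lambda runtime, so use different names):

import os
import boto3

# CROSS_ACCOUNT_* are illustrative variable names set in the Lambda
# configuration; the default AWS_* variables cannot be overridden there
s3 = boto3.client(
    "s3",
    aws_access_key_id=os.environ["CROSS_ACCOUNT_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["CROSS_ACCOUNT_SECRET_ACCESS_KEY"],
)
response = s3.get_object(Bucket='BUCKET', Key='KEY')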

3 Comments

This is a bad idea in a Lambda function. There's no reason to hardcode keys.
@tedder42 Sometimes it makes sense. Let's say you need to copy between buckets with different permissions.
It has solved my problem! ;)
1

In my case, the Lambda I was running had a role, blahblahRole, and this blahblahRole didn't have permission on the S3 bucket.
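
For example, attaching an AWS-managed read-only S3 policy to the role fixes this case (the role name here is the placeholder from above):

import boto3

iam = boto3.client("iam")

# "blahblahRole" stands in for the Lambda's actual execution role
iam.attach_role_policy(
    RoleName="blahblahRole",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)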

1 Comment

I have s3:* permissions but I'm still getting the error.
1

The following is a simple JSON IAM snippet that could be added to the IAM role. Although this grants full access to the bucket, that is generally not a problem, because dedicated buckets are used for Lambda operations in production.

But if full access is not desired, then whatever operations are throwing the AccessDenied error will need to be added separately.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAllOnThisBucket",
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": "arn:aws:s3:::bucket_name/*"
        }
    ]
}

Comments
