
I'm attempting to establish whether an EC2 instance can reach S3. Currently I'm doing this through an upload:

import subprocess

try:
    # Create an empty ping file
    subprocess.run(['touch', '/tmp/ping'], check=True)

    # Upload it to S3 via the AWS CLI
    upload_result = subprocess.run(['aws', 's3', 'cp', '/tmp/ping', 's3://mybucket/ping'])

    # Raise CalledProcessError if the upload failed
    upload_result.check_returncode()
except subprocess.CalledProcessError:
    print('Could not reach S3')

However, I'm wondering if there's a more efficient (non-boto) way of doing this. The EC2 instance does not have `s3:GetObject` permissions, only `s3:PutObject`, which is intentional. But if there's a way to establish connectivity with a simple HTTPS request or something similar, I would love to hear about it.

1 Answer

A couple of things here are not ideal:

  1. shelling out to the awscli (I would use the boto3 SDK instead)
  2. invoking a mutating operation (PutObject) simply to test connectivity
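For point 1, the boto3 equivalent of the shelled-out upload would look roughly like this (bucket and key names are the placeholders from the question):

```python
def upload_ping(bucket="mybucket", key="ping"):
    """Upload an empty sentinel object, replacing touch + `aws s3 cp`."""
    import boto3
    s3 = boto3.client("s3")
    # put_object with an empty body requires only s3:PutObject
    s3.put_object(Bucket=bucket, Key=key, Body=b"")
```

This avoids spawning two subprocesses per check, though per point 2 it still mutates the bucket.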

You might consider granting this EC2 instance's role read access to a specific sentinel object (e.g. s3://mybucket/headtest) and then invoking HeadObject against it.
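A minimal sketch of that approach with boto3 (the bucket and sentinel key are placeholders; `head_object` requires `s3:GetObject` on that one key):

```python
def can_reach_s3(bucket="mybucket", key="headtest"):
    """Return True if a HeadObject call against the sentinel object succeeds."""
    import boto3
    import botocore.exceptions
    s3 = boto3.client("s3")
    try:
        s3.head_object(Bucket=bucket, Key=key)
        return True
    except botocore.exceptions.EndpointConnectionError:
        # Network-level failure: the S3 endpoint could not be reached
        return False
    except botocore.exceptions.ClientError:
        # S3 was reachable, but the call was denied (403) or the object
        # is missing (404) -- connectivity itself is fine in this case
        return True
```

HeadObject is non-mutating and returns only metadata, so it is both cheaper and safer than PutObject as a connectivity probe.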

