AWS Python script vs AWS CLI - amazon-web-services

I downloaded the AWS CLI and was able to successfully list objects from my bucket, but doing the same from a Python script does not work; it fails with a Forbidden error.
How should I configure boto to use the same default AWS credentials as the AWS CLI?
Thank you
import logging
import sys
import time
import urllib, subprocess, boto, boto.utils, boto.s3

logger = logging.getLogger("test")
formatter = logging.Formatter('%(asctime)s %(message)s')
file_handler = logging.FileHandler("test.log")
file_handler.setFormatter(formatter)
stream_handler = logging.StreamHandler(sys.stderr)
logger.addHandler(file_handler)
logger.addHandler(stream_handler)
logger.setLevel(logging.INFO)

# wait until user data is available
while True:
    logger.info('**************************** Test starts *******************************')
    userData = boto.utils.get_instance_userdata()
    if userData:
        break
    time.sleep(5)

bucketName = ''
deploymentDomainName = ''

if bucketName:
    from boto.s3.key import Key
    s3Conn = boto.connect_s3('us-east-1')
    logger.info(s3Conn)
    bucket = s3Conn.get_bucket('testbucket')
    key = Key(bucket)
    key.key = 'test.py'
    key.get_contents_to_filename('test.py')
The CLI command that works is:
aws s3api get-object --bucket testbucket --key test.py my.py

Is it possible to use the latest Python SDK from Amazon (Boto 3)? If so, set up your credentials as outlined here: Boto 3 Quickstart.
Also, you might check your environment variables. If they don't exist, that is okay; if they don't match the credentials on your account, that could be the problem, since some AWS SDKs and other tools will use environment variables over the config files.
*nix:
echo $AWS_ACCESS_KEY_ID && echo $AWS_SECRET_ACCESS_KEY
Windows:
echo %AWS_ACCESS_KEY_ID% & echo %AWS_SECRET_ACCESS_KEY%
(sorry if my windows-foo is weak)
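As a minimal sketch (assuming Boto 3 is installed and the same default credential chain the CLI uses, e.g. ~/.aws/credentials, is configured), listing the bucket from the question might look like this:
import boto3

# Uses the same default credential chain as the AWS CLI
# (environment variables, ~/.aws/credentials, instance profile, ...).
s3 = boto3.client('s3')

# List a few objects from the bucket in the question.
resp = s3.list_objects_v2(Bucket='testbucket', MaxKeys=10)
for obj in resp.get('Contents', []):
    print(obj['Key'])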

When you use the CLI, by default it takes credentials from the ~/.aws/credentials file, but for boto you will have to specify the access key and secret key in your Python script.
import boto
import boto.s3.connection

access_key = 'put your access key here!'
secret_key = 'put your secret key here!'

conn = boto.connect_s3(
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    host='bucketname.s3.amazonaws.com',
    # is_secure=False,  # uncomment if you are not using ssl
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

Related

problem using ~/.aws/credentials and ~/.aws/config

I am trying to call an S3 bucket using a Python script. I already created the credentials and config files too, using this kind of format in credentials:
[default]
aws_access_key_id= my_key_id
aws_secret_access_key= my_secret_access_key
and config file:
[default]
region=ap-southeast-1
I already set my environment variables too, like this:
(screenshot of the environment variables)
I tried to run this script:
import boto3
# Create an S3 client
s3 = boto3.client('s3')
s3.put_object(Body='testing', Bucket='file-server-datalake', Key= 'test.txt')
and I got NoCredentialsError.
Is there any way to solve this?
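A quick way to check which credentials boto3 actually resolves (a minimal debugging sketch using the default session) is:
import boto3

session = boto3.Session()
creds = session.get_credentials()

# Prints None if boto3 could not find any credentials at all,
# otherwise the source it resolved them from (env, shared-credentials-file, ...).
print(creds if creds is None else creds.method)
print(session.region_name)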

Get secrets for GCP deployments from KMS

I want to deploy a Cloud VPN tunnel in GCP using Deployment Manager.
I set up a deployment script using Python for this, and I don't want the shared secret for the VPN tunnel to appear in plain text in my configuration.
So I tried to include the secret encrypted via KMS and then perform a call to KMS in the Python script to get the plain-text secret.
The python code to decrypt the secret looks like this:
import base64
import googleapiclient.discovery

def decryptSecret(enc_secret, context):
    """Decrypts the given secret via KMS."""
    # KMS configuration
    KEY_RING = <Key Ring>
    KEY_ID = <Key>
    KEY_LOCATION = REGION
    KEY_PROJECT = context.env['project'],
    # Creates an API client for the KMS API.
    kms_client = googleapiclient.discovery.build('cloudkms', 'v1')
    key_name = 'projects/{}/locations/{}/keyRings/{}/cryptoKeys/{}'.format(
        KEY_PROJECT, KEY_LOCATION, KEY_RING, KEY_ID)
    crypto_keys = kms_client.projects().locations().keyRings().cryptoKeys()
    request = crypto_keys.decrypt(
        name=key_name,
        body={'ciphertext': enc_secret})
    response = request.execute()
    plaintext = base64.b64decode(response['plaintext'].encode('ascii'))
    return plaintext
But if I deploy this code I just get the following error message from deployment manager:
Waiting for update [operation-<...>]...failed.
ERROR: (gcloud.deployment-manager.deployments.update) Error in Operation [operation-1517326129267-5640004f18139-450d8883-8d57c3ff]: errors:
- code: MANIFEST_EXPANSION_USER_ERROR
message: |
Manifest expansion encountered the following errors: Error compiling Python code: No module named googleapiclient.discovery Resource: cloudvpn-testenv.py Resource: config
I also tried to include the complete google-api-python-client library in my configuration yaml, but I still get this error.
Does anyone have an idea?
To answer your question directly:
# requirements.txt
google-api-python-client

# main.py
import base64
import os

import googleapiclient.discovery

crypto_key_id = os.environ['KMS_CRYPTO_KEY_ID']

def decrypt(client, s):
    response = client \
        .projects() \
        .locations() \
        .keyRings() \
        .cryptoKeys() \
        .decrypt(name=crypto_key_id, body={"ciphertext": s}) \
        .execute()
    return base64.b64decode(response['plaintext']).decode('utf-8').strip()

kms_client = googleapiclient.discovery.build('cloudkms', 'v1')
auth = decrypt(kms_client, '...ciphertext...')
You can find more examples and samples on GitHub.
To indirectly answer your question, you may be interested in Secret Manager instead.
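For illustration, a minimal Secret Manager sketch (assuming the google-cloud-secret-manager package; the project and secret IDs below are placeholders) could look like this:
from google.cloud import secretmanager

client = secretmanager.SecretManagerServiceClient()

# Placeholder project and secret IDs.
name = "projects/my-project/secrets/vpn-shared-secret/versions/latest"

# Access the latest version of the secret and decode its payload.
response = client.access_secret_version(request={"name": name})
shared_secret = response.payload.data.decode("utf-8")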

How to change permission recursively to folder with AWS s3 or AWS s3api

I am trying to grant permissions to an existing account in s3.
The bucket is owned by the account, but the data was copied from another account's bucket.
When I try to grant permissions with the command:
aws s3api put-object-acl --bucket <bucket_name> --key <folder_name> --profile <original_account_profile> --grant-full-control emailaddress=<destination_account_email>
I receive the error:
An error occurred (NoSuchKey) when calling the PutObjectAcl operation: The specified key does not exist.
while if I do it on a single file the command is successful.
How can I make it work for a full folder?
This can only be achieved by using pipes. Try:
aws s3 ls s3://bucket/path/ --recursive | awk '{cmd="aws s3api put-object-acl --acl bucket-owner-full-control --bucket bucket --key "$4; system(cmd)}'
The other answers are ok, but the FASTEST way to do this is to use the aws s3 cp command with the option --metadata-directive REPLACE, like this:
aws s3 cp --recursive --acl bucket-owner-full-control s3://bucket/folder s3://bucket/folder --metadata-directive REPLACE
This gives speeds of between 50 MiB/s and 80 MiB/s.
The suggestion from John R in the comments, to use a 'dummy' option like --storage-class STANDARD, also works, but only gave me copy speeds between 5 MiB/s and 11 MiB/s.
The inspiration for trying this came from AWS's support article on the subject: https://aws.amazon.com/premiumsupport/knowledge-center/s3-object-change-anonymous-ownership/
NOTE: If you encounter 'access denied' for some of your objects, this is likely because you are using AWS creds for the bucket-owning account, whereas you need to use creds for the account where the files were copied from.
You will need to run the command individually for every object.
You might be able to short-cut the process by using:
aws s3 cp --acl bucket-owner-full-control --metadata Key=Value --profile <original_account_profile> s3://bucket/path s3://bucket/path
That is, you copy the files to themselves, but with the added ACL that grants permissions to the bucket owner.
If you have sub-directories, then add --recursive.
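A boto3 equivalent for a single object (a sketch; the bucket and key names are placeholders, and it reuses the --metadata-directive REPLACE trick from the other answer so S3 accepts a copy onto itself):
import boto3

s3 = boto3.client('s3')

# Copy the object onto itself, adding the bucket-owner-full-control ACL.
s3.copy_object(
    Bucket='bucket', Key='path/to/object',
    CopySource={'Bucket': 'bucket', 'Key': 'path/to/object'},
    ACL='bucket-owner-full-control',
    MetadataDirective='REPLACE'
)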
Use Python to set the permissions recursively:
#!/usr/bin/env python
import boto3
import sys

client = boto3.client('s3')
BUCKET = 'enter-bucket-name'

def process_s3_objects(prefix):
    """Set the ACL on every key under the given prefix in the bucket."""
    kwargs = {'Bucket': BUCKET, 'Prefix': prefix}
    failures = []
    while_true = True
    while while_true:
        resp = client.list_objects_v2(**kwargs)
        for obj in resp['Contents']:
            try:
                print(obj['Key'])
                set_acl(obj['Key'])
                kwargs['ContinuationToken'] = resp['NextContinuationToken']
            except KeyError:
                # No NextContinuationToken means this was the last page.
                while_true = False
            except Exception:
                failures.append(obj["Key"])
                continue
    print("failures :", failures)

def set_acl(key):
    client.put_object_acl(
        GrantFullControl="id=%s" % get_account_canonical_id(),
        Bucket=BUCKET,
        Key=key
    )

def get_account_canonical_id():
    return client.list_buckets()["Owner"]["ID"]

process_s3_objects(sys.argv[1])
One thing you can do to get around the need for setting the ACL for every single object is disabling ACLs for the bucket. All objects in the bucket will then be owned by the bucket owner, and you can use policies for access control instead of ACLs.
You do this by setting the "object ownership" setting to "bucket owner enforced". As per the AWS documentation, this is in fact the recommended setting:
For the majority of modern use cases in S3, we recommend that you disable ACLs by choosing the bucket owner enforced setting and use your bucket policy to share data with users outside of your account as needed. This approach simplifies permissions management and auditing.
You can set this in the web console by going to the "Permissions" tab for the bucket, and clicking the "Edit" button in the "Object Ownership" section. You can then select the "ACLs disabled" radio button.
You can also use the AWS CLI. An example from the documentation:
aws s3api put-bucket-ownership-controls --bucket DOC-EXAMPLE-BUCKET --ownership-controls Rules=[{ObjectOwnership=BucketOwnerEnforced}]
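A boto3 equivalent is sketched below (assuming you have the s3:PutBucketOwnershipControls permission; the bucket name is the documentation placeholder):
import boto3

s3 = boto3.client('s3')

# Disable ACLs by enforcing bucket-owner ownership of all objects.
s3.put_bucket_ownership_controls(
    Bucket='DOC-EXAMPLE-BUCKET',
    OwnershipControls={
        'Rules': [{'ObjectOwnership': 'BucketOwnerEnforced'}]
    }
)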
This was my PowerShell-only solution:
aws s3 ls s3://BUCKET/ --recursive | %{ "aws s3api put-object-acl --bucket BUCKET --key "+$_.ToString().substring(30)+" --acl bucket-owner-full-control" }
I had a similar issue with taking ownership of log objects in quite a large bucket.
Total number of objects: 3,290,956; total size: 1.4 TB.
The solutions I was able to find were far too sluggish for that number of objects, so I ended up writing some code that was able to do the job several times faster than
aws s3 cp
You will need to install requirements:
pip install pathos boto3 click
#!/usr/bin/env python3
import logging
import os
import sys

import boto3
import botocore
import click
from time import time
from botocore.config import Config
from pathos.pools import ThreadPool as Pool

logger = logging.getLogger(__name__)
streamformater = logging.Formatter("[*] %(levelname)s: %(asctime)s: %(message)s")
logstreamhandler = logging.StreamHandler()
logstreamhandler.setFormatter(streamformater)


def _set_log_level(ctx, param, value):
    if value:
        ctx.ensure_object(dict)
        ctx.obj["log_level"] = value
        logger.setLevel(value)
        if value <= 20:
            logger.info(f"Logger set to {logging.getLevelName(logger.getEffectiveLevel())}")
    return value


@click.group(chain=False)
@click.version_option(version='0.1.0')
@click.pass_context
def cli(ctx):
    """
    Take object ownership of S3 bucket objects.
    """
    ctx.ensure_object(dict)
    ctx.obj["aws_config"] = Config(
        retries={
            'max_attempts': 10,
            'mode': 'standard'
        }
    )


@cli.command("own")
@click.argument("bucket", type=click.STRING)
@click.argument("prefix", type=click.STRING, default="/")
@click.option("--profile", type=click.STRING, default="default", envvar="AWS_DEFAULT_PROFILE", help="Configuration profile from ~/.aws/{credentials,config}")
@click.option("--region", type=click.STRING, default="us-east-1", envvar="AWS_DEFAULT_REGION", help="AWS region")
@click.option("--threads", "-t", type=click.INT, default=40, help="Threads to use")
@click.option("--loglevel", "log_level", hidden=True, flag_value=logging.INFO, callback=_set_log_level, expose_value=False, is_eager=True, default=True)
@click.option("--verbose", "-v", "log_level", flag_value=logging.DEBUG, callback=_set_log_level, expose_value=False, is_eager=True, help="Increase log_level")
@click.pass_context
def command_own(ctx, *args, **kwargs):
    ctx.obj.update(kwargs)
    profile_name = ctx.obj.get("profile")
    region = ctx.obj.get("region")
    bucket = ctx.obj.get("bucket")
    prefix = ctx.obj.get("prefix").lstrip("/")
    threads = ctx.obj.get("threads")
    pool = Pool(nodes=threads)
    logger.addHandler(logstreamhandler)
    logger.info(f"Getting ownership of all objects in s3://{bucket}/{prefix}")
    start = time()
    try:
        SESSION: boto3.Session = boto3.session.Session(profile_name=profile_name)
    except botocore.exceptions.ProfileNotFound as e:
        logger.warning(f"Profile {profile_name} was not found.")
        logger.warning("Falling back to environment variables for AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN")
        AWS_ACCESS_KEY_ID = os.environ.get("AWS_ACCESS_KEY_ID", "")
        AWS_SECRET_ACCESS_KEY = os.environ.get("AWS_SECRET_ACCESS_KEY", "")
        AWS_SESSION_TOKEN = os.environ.get("AWS_SESSION_TOKEN", "")
        if AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY:
            if AWS_SESSION_TOKEN:
                SESSION: boto3.Session = boto3.session.Session(aws_access_key_id=AWS_ACCESS_KEY_ID, aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
                                                               aws_session_token=AWS_SESSION_TOKEN)
            else:
                SESSION: boto3.Session = boto3.session.Session(aws_access_key_id=AWS_ACCESS_KEY_ID, aws_secret_access_key=AWS_SECRET_ACCESS_KEY)
        else:
            logger.error("Unable to find AWS credentials.")
            sys.exit(1)
    s3c = SESSION.client('s3', config=ctx.obj["aws_config"])

    def bucket_keys(Bucket, Prefix='', StartAfter='', Delimiter='/'):
        Prefix = Prefix[1:] if Prefix.startswith(Delimiter) else Prefix
        if not StartAfter:
            del StartAfter
            if Prefix.endswith(Delimiter):
                StartAfter = Prefix
        del Delimiter
        for page in s3c.get_paginator('list_objects_v2').paginate(Bucket=Bucket, Prefix=Prefix):
            for content in page.get('Contents', ()):
                yield content['Key']

    def worker(key):
        logger.info(f"Processing: {key}")
        s3c.copy_object(Bucket=bucket, Key=key,
                        CopySource={'Bucket': bucket, 'Key': key},
                        ACL='bucket-owner-full-control',
                        StorageClass="STANDARD")

    object_keys = bucket_keys(bucket, prefix)
    pool.map(worker, object_keys)
    end = time()
    logger.info(f"Completed for {end - start:.2f} seconds.")


if __name__ == '__main__':
    cli()
Usage:
get_object_ownership.py own -v my-big-aws-logs-bucket /prefix
The bucket mentioned above was processed in ~7 hours using 40 threads.
[*] INFO: 2021-08-05 19:53:55,542: Completed for 25320.45 seconds.
Some more speed comparison using AWS cli vs this tool on the same subset of data:
aws s3 cp --recursive --acl bucket-owner-full-control --metadata-directive REPLACE
53.59s user 7.24s system 20% cpu 5:02.42 total
vs
[*] INFO: 2021-08-06 09:07:43,506: Completed for 49.09 seconds.
I used this Linux Bash shell oneliner to change ACLs recursively:
aws s3 ls s3://bucket --recursive | cut -c 32- | xargs -n 1 -d '\n' -- aws s3api put-object-acl --acl public-read --bucket bucket --key
It works even if file names contain () characters.
The Python code is more efficient this way (the continuation token is fetched once per page, outside the per-object loop); otherwise it takes a lot longer.
import boto3
import sys

client = boto3.client('s3')
BUCKET = 'mybucket'

def process_s3_objects(prefix):
    """Set the ACL on every key under the given prefix in the bucket."""
    kwargs = {'Bucket': BUCKET, 'Prefix': prefix}
    failures = []
    while_true = True
    while while_true:
        resp = client.list_objects_v2(**kwargs)
        for obj in resp['Contents']:
            try:
                set_acl(obj['Key'])
            except KeyError:
                while_true = False
            except Exception:
                failures.append(obj["Key"])
                continue
        try:
            kwargs['ContinuationToken'] = resp['NextContinuationToken']
        except KeyError:
            # No NextContinuationToken means this was the last page.
            while_true = False
    print("failures :", failures)

def set_acl(key):
    print(key)
    client.put_object_acl(
        ACL='bucket-owner-full-control',
        Bucket=BUCKET,
        Key=key
    )

def get_account_canonical_id():
    return client.list_buckets()["Owner"]["ID"]

process_s3_objects(sys.argv[1])

Errno 11004 getaddrinfo failed error in connecting to Amazon S3 bucket

I am trying to use the boto (ver 2.43.0) library in Python to connect to S3, but I keep getting socket.gaierror: [Errno 11004] when I try to do this:
from boto.s3.connection import S3Connection
access_key = 'accesskey_here'
secret_key = 'secretkey_here'
conn = S3Connection(access_key, secret_key)
mybucket = conn.get_bucket('s3://diap.prod.us-east-1.mybucket/')
print("success!")
I can connect to and access folders in mybucket using AWS CLI by using a command like this in Windows:
> aws s3 ls s3://diap.prod.us-east-1.mybucket/
<list of folders in mybucket will be here>
or using software like CloudBerry or S3Browser.
Is there something that I am doing wrong here to access S3 bucket and folders properly?
get_bucket() expects a bucket name.
get_bucket(bucket_name, validate=True, headers=None)
Try:
mybucket = conn.get_bucket('mybucket')
If it doesn't work, show the full stack trace.
[Update]: There is a bug in the boto library for bucket names with dots. Update your boto config:
[s3]
calling_format = boto.s3.connection.OrdinaryCallingFormat
Or
from boto.s3.connection import S3Connection, OrdinaryCallingFormat
conn = S3Connection(access_key, secret_key, calling_format=OrdinaryCallingFormat())
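Putting the two fixes together (a sketch; the bucket name is taken from the question's CLI listing, and access_key/secret_key are assumed to be defined as above):
from boto.s3.connection import S3Connection, OrdinaryCallingFormat

conn = S3Connection(access_key, secret_key, calling_format=OrdinaryCallingFormat())

# Pass just the bucket name, not an s3:// URL.
mybucket = conn.get_bucket('diap.prod.us-east-1.mybucket')
for key in mybucket.list():
    print(key.name)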

AWS S3 Bucket Upload/Transfer with boto3

I need to upload files to S3 and I was wondering which boto3 api call I should use?
I have found two methods in the boto3 documentation:
http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Client.upload_file
http://boto3.readthedocs.io/en/latest/reference/customizations/s3.html
Do I use the client.upload_file() ...
#!/usr/bin/python
import boto3

session = boto3.Session(aws_access_key_id, aws_secret_access_key, region_name=region)
s3 = session.resource('s3')
s3.Bucket('my_bucket').upload_file('/tmp/hello.txt', 'hello.txt')
or do I use S3Transfer.upload_file() ...
#!/usr/bin/python
import boto3
from boto3.s3.transfer import S3Transfer

session = boto3.Session(aws_access_key_id, aws_secret_access_key, region_name=region)
S3Transfer(session.client('s3')).upload_file('/tmp/hello.txt', 'my_bucket', 'hello.txt')
Any suggestions would be appreciated. Thanks in advance.
A possible solution:
# http://boto3.readthedocs.io/en/latest/reference/services/s3.html#examples
# http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Client.put_object
# http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Client.get_object
import boto3

client = boto3.client("s3", "us-west-1", aws_access_key_id="xxxxxxxx", aws_secret_access_key="xxxxxxxxxx")

with open('drop_spot/my_file.txt') as file:
    client.put_object(Bucket='s3uploadertestdeleteme', Key='my_file.txt', Body=file)

response = client.get_object(Bucket='s3uploadertestdeleteme', Key='my_file.txt')
print("Done, response body: {}".format(response['Body'].read()))
It's better to use the method on the client. They're the same, but using the client method means you don't have to set things up yourself.
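For example, a minimal sketch with the client method (bucket and file names are just placeholders):
import boto3

s3 = boto3.client('s3')

# upload_file handles multipart uploads and retries for you.
s3.upload_file('/tmp/hello.txt', 'my_bucket', 'hello.txt')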
You can use the client (low-level service access); I saw sample code at https://www.techblog1.com/2020/10/python-3-how-to-communication-with-aws.html