Django Storage and Boto3 not retrieving Media from AWS S3 - django

I am using a development server to test uploading and retrieving media files from AWS S3 using django-storages and Boto3. The file upload worked, but I cannot retrieve the files.
This is what I get when I check the URL of the file in another tab:
**This XML file does not appear to have any style information associated with it. The document tree is shown below.**
<Error>
<Code>IllegalLocationConstraintException</Code>
<Message>The me-south-1 location constraint is incompatible for the region specific endpoint this request was sent to.</Message>
<RequestId></RequestId>
<HostId></HostId>
</Error>
I also configured settings.py with my own credentials and IAM user:
AWS_ACCESS_KEY_ID = <key>
AWS_SECRET_ACCESS_KEY = <secret-key>
AWS_STORAGE_BUCKET_NAME = <bucket-name>
AWS_DEFAULT_ACL = None
AWS_S3_FILE_OVERWRITE = False
AWS_S3_REGION_NAME = 'me-south-1'
AWS_S3_USE_SSL = True
AWS_S3_VERIFY = False
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'

Please check in your AWS Identity & Access Management Console (IAM) whether your access keys have proper S3 permissions assigned to them.
Also, make sure you have installed the AWS CLI and set up your credentials on your machine.
You can verify by running the command below:
$ aws s3 ls
2018-12-11 17:08:50 my-bucket
2018-12-14 14:55:44 my-bucket2
Reference: https://docs.aws.amazon.com/cli/latest/userguide/cli-services-s3-commands.html
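If the CLI works but Django still returns the IllegalLocationConstraintException, it can also help to confirm from Python that the same credentials and region actually reach the bucket. A minimal sketch using boto3 (the bucket name and region below are placeholders for your own values):
import boto3

BUCKET = "my-bucket"    # placeholder: your bucket name
REGION = "me-south-1"   # placeholder: the region you configured

# Show which identity the credentials resolve to.
sts = boto3.client("sts")
print(sts.get_caller_identity()["Arn"])

# Check that the bucket is reachable and report its real region.
s3 = boto3.client("s3", region_name=REGION)
s3.head_bucket(Bucket=BUCKET)
print(s3.get_bucket_location(Bucket=BUCKET))
If the region reported by get_bucket_location does not match AWS_S3_REGION_NAME, update the setting to the bucket's actual region.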

Related

failed to download files from AWS S3

Scenario:
submit an Athena query with boto3 and output the result to S3
download the result from S3
Error: An error occurred (404) when calling the HeadObject operation: Not Found
It's weird that the file exists in S3 and I can copy it down with the aws s3 cp command, but I just cannot download it with boto3, which fails on the head-object call.
aws s3api head-object --bucket dsp-smaato-sink-prod --key /athena_query_results/c96bdc09-d545-4ee3-bc66-be3be928e3f2.csv
It does work. I've checked the account policies, and the account has been granted the admin policy.
# snippets
import os
import logging
from urllib.parse import urlparse

import boto3

import constant  # project module that defines AWS_REGION

logger = logging.getLogger(__name__)

def s3_donwload(url, target=None):
    # s3 = boto3.resource('s3')
    # client = s3.meta.client
    client = boto3.client("s3", region_name=constant.AWS_REGION,
                          endpoint_url='https://s3.ap-southeast-1.amazonaws.com')
    s3_file = urlparse(url)
    if target:
        target = os.path.abspath(target)
    else:
        target = os.path.abspath(os.path.basename(s3_file.path))
    logger.info(f"download {url} to {target}...")
    client.download_file(s3_file.netloc, s3_file.path, target)
    logger.info(f"download {url} to {target} done!")
Take a look at the value of s3_file.path -- does it start with a slash? If so, it needs to change because Amazon S3 keys do not start with a slash.
I suggest that you print the content of netloc, path and target to see what values it is actually passing.
It's a bit strange to use os.path with an S3 URL, so it might need some tweaking.
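If the path does start with a slash, a minimal adjustment (based on the urlparse call in the snippet above) is to strip it before passing the key to download_file:
from urllib.parse import urlparse

url = "s3://my-bucket/athena_query_results/result.csv"  # hypothetical example URL
s3_file = urlparse(url)

bucket = s3_file.netloc
key = s3_file.path.lstrip("/")  # S3 object keys must not start with '/'
# client.download_file(bucket, key, target)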

Using aws profile with fs S3Filesystem

I am trying to use a specific AWS profile with Apache PyArrow. The documentation shows no option to pass a profile name when instantiating S3FileSystem using pyarrow.fs [https://arrow.apache.org/docs/python/generated/pyarrow.fs.S3FileSystem.html].
I tried to get around this by creating a session with boto3 and using that:
import boto3
from pyarrow import fs

# include mfa profile
session = boto3.session.Session(profile_name="custom_profile")
# create filesystem with session
bucket = fs.S3FileSystem(session_name=session)
bucket.get_file_info(fs.FileSelector('bucket_name', recursive=True))
but this too fails:
OSError: When listing objects under key '' in bucket 'bucket_name': AWS Error [code 15]: Access Denied
Is it possible to use fs with a custom AWS profile?
~/.aws/credentials:
[default]
aws_access_key_id = <access_key>
aws_secret_access_key = <secret_key>
[custom_profile]
aws_access_key_id = <access_key>
aws_secret_access_key = <secret_key>
aws_session_token = <token>
Additional context: all user actions require MFA. The custom AWS profile in the credentials file stores the token generated after MFA-based authentication on the CLI, and I need to use that profile in the script.
I think it is better this way:
session = boto3.session.Session(profile_name="custom_profile")
credentials = session.get_credentials()
s3_files = fs.S3FileSystem(
    secret_key=credentials.secret_key,
    access_key=credentials.access_key,
    region=session.region_name,
    session_token=credentials.token)
One can specify a token, but must also specify the access key and secret key:
s3 = fs.S3FileSystem(access_key="",
                     secret_key="",
                     session_token="")
One would also have to implement some way of parsing the ~/.aws/credentials file to get these values (a sketch is shown below), or fill them in manually each time.
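A minimal sketch of that parsing, assuming the standard INI layout of ~/.aws/credentials shown above (the profile name is a placeholder and credentials_for is a hypothetical helper):
import configparser
import os

from pyarrow import fs

def credentials_for(profile):
    # Read one profile's keys from the standard credentials file.
    config = configparser.ConfigParser()
    config.read(os.path.expanduser("~/.aws/credentials"))
    section = config[profile]
    return (section["aws_access_key_id"],
            section["aws_secret_access_key"],
            section.get("aws_session_token"))

access_key, secret_key, token = credentials_for("custom_profile")
s3 = fs.S3FileSystem(access_key=access_key,
                     secret_key=secret_key,
                     session_token=token)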

How to download data from AWS in python

I am new to AWS and boto. The data I want to download is on AWS, and I have the access key and the secret key. My problem is I do not understand the approaches I found. For instance, this code:
import boto
import boto.s3.connection

def download_data_connect_s3(access_key, secret_key, region, bucket_name, key, local_path):
    conn = boto.connect_s3(aws_access_key_id=access_key,
                           aws_secret_access_key=secret_key,
                           host='s3-{}.amazonaws.com'.format(region),
                           calling_format=boto.s3.connection.OrdinaryCallingFormat())
    bucket = conn.get_bucket(bucket_name)
    key = bucket.get_key(key)
    key.get_contents_to_filename(local_path)
    print('Downloaded File {} to {}'.format(key, local_path))

region = 'us-west-1'
access_key = ''    # the key here
secret_key = ''    # the secret key here
bucket_name = 'temp_name'
key = '<folder…/filename>'  # unique identifier
local_path = ''    # local path
download_data_connect_s3(access_key, secret_key, region, bucket_name, key, local_path)
What I don't understand are 'key', 'bucket_name', and 'local_path'. What is 'key' in comparison to the access key and secret key? I was not given a 'key'. Also, is 'bucket_name' the name of the bucket on AWS (I was not provided with a bucket name), and is 'local_path' the directory where I want to save the data?
You are right.
bucket_name = name of your S3 bucket
key = the object key. It is the full path of the file inside the bucket (e.g. a file named a.txt in folder x has key = x/a.txt).
local_path = where you want to save the data on your local machine
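If you are not tied to the old boto library, the same download is shorter with boto3; a minimal sketch with placeholder values:
import boto3

bucket_name = 'temp_name'        # placeholder: your bucket name
key = 'folder/filename.csv'      # placeholder: the object key inside the bucket
local_path = 'filename.csv'      # placeholder: where to save it locally

s3 = boto3.client('s3',
                  aws_access_key_id='ACCESS_KEY',
                  aws_secret_access_key='SECRET_KEY',
                  region_name='us-west-1')
s3.download_file(bucket_name, key, local_path)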
It sounds like the data is stored in Amazon S3.
You can use the AWS Command-Line Interface (CLI) to access Amazon S3.
To view the list of buckets in that account:
aws s3 ls
To view the contents of a bucket:
aws s3 ls bucket-name
To copy a file from a bucket to the current directory:
aws s3 cp s3://bucket-name/filename.txt .
Or sync a whole folder:
aws s3 sync s3://bucket-name/folder/ local-folder/

Errno 11004 getaddrinfo failed error in connecting to Amazon S3 bucket

I am trying to use the boto (ver 2.43.0) library in Python to connect to S3, but I keep getting socket.gaierror: [Errno 11004] when I try to do this:
from boto.s3.connection import S3Connection
access_key = 'accesskey_here'
secret_key = 'secretkey_here'
conn = S3Connection(access_key, secret_key)
mybucket = conn.get_bucket('s3://diap.prod.us-east-1.mybucket/')
print("success!")
I can connect to and access folders in mybucket using AWS CLI by using a command like this in Windows:
> aws s3 ls s3://diap.prod.us-east-1.mybucket/
<list of folders in mybucket will be here>
or using software like CloudBerry or S3Browser.
Is there something that I am doing wrong here to access S3 bucket and folders properly?
get_bucket() expects a bucket name.
get_bucket(bucket_name, validate=True, headers=None)
Try:
mybucket = conn.get_bucket('mybucket')
If it doesn't work, show the full stack trace.
[Update]: There is a bug in the boto library for bucket names containing a dot. Update your boto config:
[s3]
calling_format = boto.s3.connection.OrdinaryCallingFormat
Or
from boto.s3.connection import S3Connection, OrdinaryCallingFormat
conn = S3Connection(access_key, secret_key, calling_format=OrdinaryCallingFormat())

S3ResponseError: 301 Moved Permanently - Django

I have a very big issue with Amazon S3. I am working on a Django app and I want to store files on S3.
My settings are:
AWS_STORAGE_BUCKET_NAME = 'tfjm2-inscriptions'
AWS_ACCESS_KEY_ID = 'id'
AWS_SECRET_ACCESS_KEY = 'key'
AWS_S3_CUSTOM_DOMAIN = '%s.s3-eu-west-1.amazonaws.com' % AWS_STORAGE_BUCKET_NAME
And I get this error: S3ResponseError: 301 Moved Permanently
Similar issues reported on the Internet say it is because it is a non-US bucket; I did try with a US Standard bucket, but I get a 401 Forbidden error instead.
I do not know what to do.
Please help me.
Thank you
You can do this:
AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME
In a terminal, run 'nano ~/.boto'.
If there are some configs in there, try commenting them out or renaming the file, then connect again (it helped me).
http://boto.cloudhackers.com/en/latest/boto_config_tut.html
That page lists the boto config file locations. Take a look at them one by one and clean them all out, and it will work with the default configs. Configs may also live in .bash_profile, .bashrc, and similar files.
I guess you must allow only the key and secret from your settings to be used.
Solve it by changing your settings to:
AWS_STORAGE_BUCKET_NAME = 'tfjm2-inscriptions'
AWS_ACCESS_KEY_ID = 'id'
AWS_SECRET_ACCESS_KEY = 'key'
AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME
AWS_S3_REGION_NAME = 'us-east-2'  # use the region where your bucket was created
As an optimization, AWS serves each bucket from a region-specific endpoint, so if you create a bucket in, say, 'us-west-2' you will get a 301 if you try to access it through an endpoint for a different region (Africa, Europe, and even the eastern US).
You should specify the bucket's region when requesting it from outside that region.
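If you are not sure which region the bucket was created in, you can look it up first; a small sketch using boto3 and the bucket name from the question:
import boto3

s3 = boto3.client('s3')
# LocationConstraint is None for buckets created in us-east-1.
location = s3.get_bucket_location(Bucket='tfjm2-inscriptions')['LocationConstraint']
print(location or 'us-east-1')
Use the printed value for AWS_S3_REGION_NAME.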