I have developed a web app in which files need to be uploaded from a local PC to an AWS EC2 instance via a Flask web call, and a machine learning model is then run in the back-end. I could not find any related resources on how to do that.
Can we upload to AWS S3 instead, and link the EC2 instance's EBS volume and S3?
Any help would be appreciated!
Use boto3 to upload files to S3.
In Flask, create an endpoint that takes the local file and pushes it to S3.
import logging

import boto3
from botocore.exceptions import ClientError

def upload_file(file_name, bucket, object_name=None):
    """Upload a file to an S3 bucket

    :param file_name: File to upload
    :param bucket: Bucket to upload to
    :param object_name: S3 object name. If not specified then file_name is used
    :return: True if file was uploaded, else False
    """
    # If S3 object_name was not specified, use file_name
    if object_name is None:
        object_name = file_name

    # Upload the file
    s3_client = boto3.client('s3')
    try:
        s3_client.upload_file(file_name, bucket, object_name)
    except ClientError as e:
        logging.error(e)
        return False
    return True
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-uploading-files.html
For the Flask side of the upload, read this: https://www.javatpoint.com/flask-file-uploading
Save the uploaded file to a temporary directory first, then upload it to S3.
I have integrated django-storages in a Django project. For large files, I use a pre-signed URL so uploads happen externally, without putting load on my server.
Files upload successfully to the S3 bucket via the pre-signed URL; after the upload I need to update the name of the file in the FileField.
You probably need something like the following: use boto3 to download the object from S3 under the new name, then attach the local copy to the model's FileField. (The original suggestion of calling os.rename on the response's Body does not work, because the streaming body is not a local file.)
import boto3
from django.core.files import File

s3 = boto3.client('s3')

# Download the object to a local file with the desired name
new_filename = 'new_file_name.txt'
s3.download_file('your-bucket-name', 'object-key', new_filename)
...
with open(new_filename, 'rb') as f:
    file_obj = File(f, name=new_filename)
    my_model.file_field_name = file_obj
    my_model.save()
How do you create a pre-signed URL for a specific version of a file in AWS S3?
If the bucket has versioning enabled and a file has more than one version, how do you create a pre-signed URL for a specific version of that file?
You just need to pass the version ID along with the key when creating the pre-signed URL.
Python Example:
import logging
import os

import boto3
from botocore.exceptions import ClientError

def get_pre_signed_url(bucket, file_name, version_id):
    try:
        s3_client = boto3.client(
            's3',
            aws_access_key_id=os.environ.get("aws_access_key_id"),
            aws_secret_access_key=os.environ.get("aws_secret_access_key"),
            region_name=os.environ.get("region_name"))
        # Pass VersionId in Params to sign a URL for that specific version
        response = s3_client.generate_presigned_url(
            'get_object',
            Params={'Bucket': bucket,
                    'Key': os.environ.get('folder_location') + file_name,
                    'VersionId': version_id},
            ExpiresIn=300)
    except ClientError as e:
        logging.error(e)
        return None
    return response
Check this repo for more information.
I have a file containing URLs in my S3 bucket. I would like to use a Python Lambda function to fetch those URLs and upload the results to the S3 bucket.
For example, my uploaded file contains:
http://...
http://...
Each line corresponds to a file to be fetched and stored in S3.
Here is the code:
import urllib.parse

import boto3

print('Loading functions')

s3 = boto3.client('s3')

def get_file_size(response):
    """Read the object's size from the GetObject response headers."""
    try:
        # Cast to int so the size check below compares numbers, not strings
        size = int(response['ResponseMetadata']['HTTPHeaders']['content-length'])
        print("[+] Size retrieved")
        return size
    except (KeyError, ValueError):
        print("[-] Size can not be retrieved")
        return 0

def lambda_handler(event, context):
    # Defining bucket objects
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8')

    # Get the file from S3
    print('[+] Getting file from S3 bucket')
    response = s3.get_object(Bucket=bucket, Key=key)
    try:
        # Checking file size
        print('[+] Checking file size')
        file_size = get_file_size(response)
        if file_size == 0:
            print('File size is equal to 0')
            return False
        else:
            # Create new directories
            print('[+] Creating new directories')
            bucket_name = "triggersnextflow"
            directories = ['backups/sample/', 'backups/control/']
            # Loop to create new dirs (zero-byte "folder" placeholder objects)
            for dirs in directories:
                s3.put_object(Bucket=bucket_name, Key=dirs, Body='')
            # NOW I WOULD LIKE TO DOWNLOAD THE FILES FROM THE URLS INSIDE THE S3 OBJECT
            return True
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
        raise e
Download an S3 object to a file:
import boto3
s3 = boto3.resource('s3')
s3.meta.client.download_file('mybucket', 'hello.txt', '/tmp/hello.txt')
You will find a great resource of information here:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Client.download_file
I have the following function:
import logging

import boto3
from botocore.exceptions import ClientError

def upload_file(file_name, bucket, object_name=None):
    """Upload a file to an S3 bucket -> from aws docs

    :param file_name: File to upload
    :param bucket: Bucket to upload to
    :param object_name: S3 object name. If not specified then file_name is used
    :return: True if file was uploaded, else False
    """
    # If S3 object_name was not specified, use file_name
    if object_name is None:
        object_name = file_name

    # Upload the file
    s3_client = boto3.client('s3')
    try:
        s3_client.upload_file(file_name, bucket, object_name)
    except ClientError as e:
        logging.error(e)
        return False
    return True
I am trying to upload an HTML file to an S3 bucket acting as a web server. When I manually upload the HTML file to S3, it works as expected and displays the page when I navigate to the S3 bucket's URL.
If I programmatically upload the file using the above function, the HTML file is no longer served as a page, and my browser attempts to download it as an XZ file.
Am I missing a parameter or something?
Courtesy of @jarmod, I learned I was setting an incorrect content type.
Here is the updated function, which uploads an HTML file as text/html.
import logging

import boto3
from botocore.exceptions import ClientError

def upload_file(file_name, bucket, object_name=None):
    """Upload a file to an S3 bucket -> from aws docs

    :param file_name: File to upload
    :param bucket: Bucket to upload to
    :param object_name: S3 object name. If not specified then file_name is used
    :return: True if file was uploaded, else False
    """
    # If S3 object_name was not specified, use file_name
    if object_name is None:
        object_name = file_name

    # Upload the file, tagging it with the right Content-Type
    s3_client = boto3.client('s3')
    try:
        s3_client.upload_file(file_name, bucket, object_name,
                              ExtraArgs={'ContentType': "text/html"})
    except ClientError as e:
        logging.error(e)
        return False
    return True
I created a Python script that should upload a file from my EC2 instance to the S3 bucket:
import boto3

s3 = boto3.resource('s3')
with open('backupFile.txt', 'rb') as data:
    s3.Bucket('mlsd').put_object(Key='backupFile.txt', Body=data)
I went to AWS account details and got the credentials.
I executed aws configure to set credentials on my EC2.
Here is the output of the credentials using aws configure list:
I went to .aws/credentials and pasted the access_key_id, secret_access_key, and token.
I ensured that the token is not expired.
When I ran the script, I got the following output:
Not sure what the problem is.
Boto3 looks for your credentials in a number of possible locations, as described here, so it should find your access_key_id and secret_access_key.
Make sure the user whose access_key_id you use has access to the S3 bucket.
I tried this code example and it works:
import logging
import boto3
from botocore.exceptions import ClientError
def upload_file(file_name, bucket, object_name=None):
    """Upload a file to an S3 bucket

    :param file_name: File to upload
    :param bucket: Bucket to upload to
    :param object_name: S3 object name. If not specified then file_name is used
    :return: True if file was uploaded, else False
    """
    # If S3 object_name was not specified, use file_name
    if object_name is None:
        object_name = file_name

    # Upload the file
    s3_client = boto3.client('s3')
    try:
        s3_client.upload_file(file_name, bucket, object_name)
    except ClientError as e:
        logging.error(e)
        return False
    return True