Swisscom Appcloud S3 Connection reset by peer - django

We have a Django web service that uses Swisscom AppCloud's S3 solution. So far we had no problems, but without changing anything in the application we are now getting ConnectionError: ('Connection aborted.', error(104, 'Connection reset by peer')) when we try to upload files. We are using boto3 1.4.4.
Edit:
The error occurs somewhere between 10 and 30 s into the upload. When I try from my local development machine it works.
from django.conf import settings
from boto3 import session
from botocore.exceptions import ClientError


class S3Client(object):

    def __init__(self):
        s3_session = session.Session()
        self.s3_client = s3_session.client(
            service_name='s3',
            aws_access_key_id=settings.AWS_ACCESS_KEY,
            aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY,
            endpoint_url=settings.S3_ENDPOINT,
        )

    # ...

    def add_file(self, bucket, fileobj, file_name):
        self.s3_client.upload_fileobj(fileobj, bucket, file_name)
        url = self.s3_client.generate_presigned_url(
            ClientMethod='get_object',
            Params={
                'Bucket': bucket,
                'Key': file_name
            },
            ExpiresIn=60*24*356*10  # signed for 10 years. Should be enough..
        )
        url, signature = self._split_signed_url(url)
        return url, signature, file_name
Could this be a version problem or anything else on our side?
Edit:
I made some tests with s3cmd: I can list the buckets I have access to, but for all other commands, such as listing all objects or just the objects in one bucket, I get Retrying failed request: / ([Errno 54] Connection reset by peer)

After some investigation I found the cause:
Swisscom's implementation of S3 is somehow not up to date with Amazon's. To solve the problem I had to downgrade botocore from 1.5.78 to 1.5.62.
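For what it's worth, a minimal way to keep that downgrade from being undone on the next deploy is to pin the versions (the exact file depends on how the app is deployed; the version numbers are simply the ones mentioned above):

# requirements.txt -- pin botocore to the last version that still worked
# against the Swisscom S3 endpoint
boto3==1.4.4
botocore==1.5.62

boto3 1.4.4 accepts botocore in the 1.5.x range, so this combination installs cleanly.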

Related

AWS Lambda gets the content of a private wiki in Azure

I'm trying to access a private wiki in Azure DevOps (I'm an admin in the project), but from a Lambda in AWS.
import requests
import base64


def lambda_handler(event, context):
    # Replace with your own Azure DevOps organization name
    organization_name = "name of the organization"
    # Replace with the project name that contains the wiki
    project_name = "project name"
    # Replace with the name of the wiki
    wiki_name = "wiki name"
    # Replace with the version of the wiki to retrieve
    wiki_version = "1234"
    # Replace with your own personal access token (PAT)
    personal_access_token = "my pat"

    # Build the URL to retrieve the wiki content
    url = f"https://dev.azure.com/{organization_name}/{project_name}/_apis/wiki/wikis/{wiki_name}/{wiki_version}/content?api-version=7.0"

    # Set the authorization header with the base64-encoded PAT
    auth_header = f"Basic {base64.b64encode(f'{personal_access_token}'.encode('utf-8')).decode('utf-8')}"
    headers = {
        "Authorization": auth_header,
        "Accept": "application/json"
    }

    # Make the request to retrieve the wiki content
    response = requests.get(url, headers=headers)

    # Return the response if successful, otherwise return an error message
    if response.status_code == 200:
        return response.json()
    else:
        return {"error": response.status_code, "message": response.text}
My Lambda has a security group allowing all inbound and outbound traffic, and it sits in a public subnet.
Also, the Lambda has a very permissive role, lol.
But I always get a
"[ERROR] ConnectionError: HTTPSConnectionPool(host='dev.azure.com', port=443): Max retries exceeded with url:xxxxx (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f405ba34790>: Failed to establish a new connection: [Errno 110] Connection timed out'))"
Not sure if the issue is in the Lambda or in Azure.
I already tried changing the permissions and the role of the AWS Lambda.
I also created another PAT with all permissions allowed.
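As an aside, hedged accordingly since the error above is a connection timeout (a networking symptom) rather than a 401: the Azure DevOps documentation builds the Basic header from an empty username plus the PAT, so the string that gets base64-encoded starts with a colon. A sketch of only that header-construction detail, assuming the same requests-based call as above:

import base64

def pat_auth_headers(personal_access_token):
    # Azure DevOps Basic auth: base64(":" + PAT) -- empty username, PAT as password
    token = base64.b64encode(f":{personal_access_token}".encode("utf-8")).decode("utf-8")
    return {"Authorization": f"Basic {token}", "Accept": "application/json"}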

Save file into EC2 directly through Lambda

I've made a Lambda function that stores a binary file into S3 and it works fine.
Now, instead, I would like to save this file directly onto my EC2 instance's storage volume.
I searched a lot but I still don't understand whether it's possible. Do you know?
I've already made an SSH connection (inside the Lambda) to run SSH commands, but I don't know how to use it in my case, or whether it is the right way to save my data. Do you have any ideas?
I know that it is possible to connect S3 to EC2, but first I would like to understand the option above.
Thanks
I made a solution (Python):
Using the Boto3 and Paramiko packages I built an SSH client to EC2, and then moved my file to S3 with the AWS CLI.
If it is useful for anyone, I post part of the code below:
import json
import boto3
import paramiko


def lambda_handler(event, context):
    # My parameters
    myBucket = "lorem"
    myPemKeyFile = "lorem.pem"
    myEc2Username = "lorem"

    ec2_client = boto3.client('ec2')
    s3_client = boto3.client("s3")
    OutFileName = "lorem.txt"

    # PREPARING FOR SSH CLIENT
    try:
        # GETTING INSTANCE INFORMATION
        describeInstance = ec2_client.describe_instances()
        hostPublicIP = []
        # fetching public IP address of the running instances
        for i in describeInstance['Reservations']:
            for instance in i['Instances']:
                if instance["State"]["Name"] == "running":
                    hostPublicIP.append(instance['PublicIpAddress'])
        #print(hostPublicIP)

        # DOWNLOADING PEM FILE FROM S3
        s3_client.download_file(myBucket, myPemKeyFile, '/tmp/file.pem')
        # reading pem file and creating key object
        key = paramiko.RSAKey.from_private_key_file("/tmp/file.pem")

        # CREATING paramiko.SSHClient
        ssh_client = paramiko.SSHClient()
        # setting policy to connect to unknown host
        ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        host = hostPublicIP[0]
        #print("Connecting to : " + host)
        # connecting to server
        ssh_client.connect(hostname=host, username=myEc2Username, pkey=key)
        #print("Connected to :" + host)
    except:
        raise Exception('Oops, there was a crash preparing the SSH client! 500')

    # MOVING FILE INTO S3
    commands = [
        "aws s3 mv ~/directoryFrom/" + OutFileName + " s3://" + myBucket + "/" + OutFileName
    ]
    try:
        for command in commands:
            stdin, stdout, stderr = ssh_client.exec_command(command)
            SSHout = stdout.read()
    except:
        raise Exception("Oops, something happened to the SSH client. Moving the file to S3 didn't run. 500")

What am I doing wrong in allowing a user to download a file from AWS with django boto3?

Hello Awesome People!
I will try to be clear with my question.
All of my media files are uploaded to AWS, and I created a view that allows each user to download images. Before, I did it without boto, which ended with
"this back-end doesn't support absolute path."
Now, after some research, I'm using the s3 boto3 connection.
Model
class Photo(models.Model):
    file = models.ImageField(upload_to="uploaded_documents/")
    total_download = models.PositiveIntegerField()
View
def download_file(request, id):
    photo = get_object_or_404(Photo, id=id)
    photo.total_download += 1
    photo.save()

    path = os.path.basename(photo.file.name)
    # path = '/media/public/uploaded_documents/museum.jpg'

    client = boto3.client('s3', aws_access_key_id=settings.AWS_ACCESS_KEY_ID,
                          aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY)
    resource = boto3.resource('s3')
    bucket = resource.Bucket(settings.AWS_STORAGE_BUCKET_NAME)
    bucket.download_file(path, 'my_local_image.jpg')
Here I don't know what to do to trigger it. When I run it, I get the following error:
NoCredentialsError at /api/download-file/75
Exception Type: NoCredentialsError
Exception Value: Unable to locate credentials
UPDATE
I used the credentials in the resource instead of the client:
client = boto3.client('s3')
resource = boto3.resource('s3', aws_access_key_id=settings.AWS_ACCESS_KEY_ID,
                          aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY)
and it seems to be authenticated. But now I get the error:
Exception Type: ClientError
Exception Value: An error occurred (404) when calling the HeadObject operation: Not Found
Please try looking at the answers in:
Boto3 Error: botocore.exceptions.NoCredentialsError: Unable to locate credentials
Also check that the file path is correct in S3.
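A likely cause of the 404 from HeadObject is that only the basename is passed as the key, while the object was stored under the upload_to prefix. A rough sketch of the download using the full stored name as the key, with credentials on the resource that actually performs the download (the exact key layout depends on your storage configuration, so treat this as an assumption):

import boto3
from django.conf import settings

def download_photo(photo, local_path='my_local_image.jpg'):
    # Use the full key as stored by the ImageField, e.g.
    # 'uploaded_documents/museum.jpg', not just os.path.basename(...)
    resource = boto3.resource(
        's3',
        aws_access_key_id=settings.AWS_ACCESS_KEY_ID,
        aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY,
    )
    bucket = resource.Bucket(settings.AWS_STORAGE_BUCKET_NAME)
    bucket.download_file(photo.file.name, local_path)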

AWS S3 Bucket Upload/Transfer with boto3

I need to upload files to S3 and I was wondering which boto3 api call I should use?
I have found two methods in the boto3 documentation:
http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Client.upload_file
http://boto3.readthedocs.io/en/latest/reference/customizations/s3.html
Do I use the client.upload_file() ...
#!/usr/bin/python
import boto3

session = boto3.Session(aws_access_key_id=aws_access_key_id,
                        aws_secret_access_key=aws_secret_access_key,
                        region_name=region)
s3 = session.resource('s3')
s3.Bucket('my_bucket').upload_file('/tmp/hello.txt', 'hello.txt')
or do I use S3Transfer.upload_file() ...
#!/usr/bin/python
import boto3
from boto3.s3.transfer import S3Transfer

client = boto3.client('s3', region,
                      aws_access_key_id=aws_access_key_id,
                      aws_secret_access_key=aws_secret_access_key)
S3Transfer(client).upload_file('/tmp/hello.txt', 'my_bucket', 'hello.txt')
Any suggestions would be appreciated. Thanks in advance.
.
.
.
possible solution...
# http://boto3.readthedocs.io/en/latest/reference/services/s3.html#examples
# http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Client.put_object
# http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Client.get_object
import boto3

client = boto3.client("s3", "us-west-1",
                      aws_access_key_id="xxxxxxxx",
                      aws_secret_access_key="xxxxxxxxxx")

with open('drop_spot/my_file.txt') as file:
    client.put_object(Bucket='s3uploadertestdeleteme', Key='my_file.txt', Body=file)

response = client.get_object(Bucket='s3uploadertestdeleteme', Key='my_file.txt')
print("Done, response body: {}".format(response['Body'].read()))
It's better to use the method on the client. They're the same under the hood, but using the client method means you don't have to set things up yourself.
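For instance, a minimal sketch of the client-level managed upload (region, credentials, bucket and file names are placeholders reused from the snippets above):

import boto3

# upload_file is a managed transfer: it splits large files into multipart
# uploads and retries failed parts for you.
client = boto3.client('s3', 'us-west-1',
                      aws_access_key_id='xxxxxxxx',
                      aws_secret_access_key='xxxxxxxxxx')
client.upload_file('/tmp/hello.txt', 'my_bucket', 'hello.txt')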
You can use the client (low-level service access). I saw some sample code at https://www.techblog1.com/2020/10/python-3-how-to-communication-with-aws.html

django boto3: NoCredentialsError -- Unable to locate credentials

I am trying to use boto3 in my django project to upload files to Amazon S3. Credentials are defined in settings.py:
AWS_ACCESS_KEY = 'xxxxxxxx'
AWS_SECRET_KEY = 'xxxxxxxx'
S3_BUCKET = 'xxxxxxx'
In views.py:
import os

import boto3

s3 = boto3.client('s3')
path = os.path.dirname(os.path.realpath(__file__))
s3.upload_file(path + '/myphoto.png', S3_BUCKET, 'myphoto.png')
The system complains about Unable to locate credentials. I have two questions:
(a) It seems that I am supposed to create a credentials file, ~/.aws/credentials. But in a Django project, where do I have to put it?
(b) The s3 method upload_file takes a file path/name as its first argument. Is it possible to provide a file stream obtained from a form input element <input type="file" name="fileToUpload"> instead?
This is what I use for a direct upload; I hope it provides some assistance.
import boto
from boto.exception import S3CreateError
from boto.s3.connection import S3Connection

conn = S3Connection(settings.AWS_ACCESS_KEY,
                    settings.AWS_SECRET_KEY,
                    is_secure=True)
try:
    bucket = conn.create_bucket(settings.S3_BUCKET)
except S3CreateError as e:
    bucket = conn.get_bucket(settings.S3_BUCKET)

k = boto.s3.key.Key(bucket)
k.key = filename
k.set_contents_from_filename(filepath)
Not sure about (a), but Django is very flexible with file management.
Regarding (b), you can also sign the upload and do it directly from the client to reduce bandwidth usage; it's quite sneaky and secure too. You need some JavaScript to manage the upload. If you want details I can include them here.
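As a rough sketch of that idea with boto3 (rather than the boto 2 API shown above): generate_presigned_post returns a URL plus form fields that the browser can POST the file to directly. The bucket and setting names below are the placeholders from the question, and the helper name is made up for illustration.

import boto3
from django.conf import settings

def get_upload_form(key):
    # Create a short-lived presigned POST so the browser uploads straight
    # to S3 instead of routing the bytes through Django.
    s3 = boto3.client(
        's3',
        aws_access_key_id=settings.AWS_ACCESS_KEY,
        aws_secret_access_key=settings.AWS_SECRET_KEY,
    )
    post = s3.generate_presigned_post(
        Bucket=settings.S3_BUCKET,
        Key=key,
        ExpiresIn=3600,  # valid for one hour
    )
    # post['url'] is the form action; post['fields'] become hidden form inputs
    return post

For question (b), if you do want to push the upload through Django instead, boto3's client.upload_fileobj accepts a file-like object, so something like request.FILES['fileToUpload'] can be passed directly rather than a path.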