Protecting Amazon S3 Files for User-Specific Display

My server would communicate with S3. There are two possibilities as far as I understand:
1) Load the file to my server and send it to the user, keeping S3 access only to my server's IP
2) Redirect to S3 while handling authentication on my server
I've understood (I think) how to do #1 from:
Does Amazon S3 support HTTP request with basic authentication
But is there any way to accomplish #2? I want to avoid the latency of first loading the file to my server and then sending it to the user.
I'm not sure how to keep the S3 url protected from public access in #2. Someone might go through my authentication, get a download link, but that link will be publicly accessible.
I'm new to S3 in general, so bear with me if I've misunderstood anything.
Edit: I've looked into signed links with expiration times, but they can still be accessed by others. I would also prefer to use my own authentication so I can allow access to a link only while a user is signed in.

You should try the code below: your server produces a URL that expires after, say, 60 seconds, and the user downloads the file directly from S3.
First, download HMAC.php from here:
http://pear.php.net/package/Crypt_HMAC/redirected
<?php
require_once('Crypt/HMAC.php');

echo getS3Redirect("/test.jpg") . "\n";

function getS3Redirect($objectName)
{
    $S3_URL = "http://s3.amazonaws.com";
    $keyId = "your key";
    $secretKey = "your secret";
    $expires = time() + 60;
    $bucketName = "/your bucket";
    $stringToSign = "GET\n\n\n$expires\n$bucketName$objectName";
    $hasher = new Crypt_HMAC($secretKey, "sha1");
    $sig = urlencode(hex2b64($hasher->hash($stringToSign)));
    return "$S3_URL$bucketName$objectName?AWSAccessKeyId=$keyId&Expires=$expires&Signature=$sig";
}

function hex2b64($str)
{
    $raw = '';
    for ($i = 0; $i < strlen($str); $i += 2) {
        $raw .= chr(hexdec(substr($str, $i, 2)));
    }
    return base64_encode($raw);
}
?>
Give it a try.
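Note that the snippet above uses the old Signature Version 2 query-string signing. If your server side happens to be Python, a roughly equivalent sketch with boto3's generate_presigned_url (the bucket and key names below are placeholders) would be:

import boto3

# Assumes credentials are available to boto3 (environment, profile, or IAM role);
# "my-bucket" and "test.jpg" are placeholder names.
s3 = boto3.client('s3')
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-bucket', 'Key': 'test.jpg'},
    ExpiresIn=60,  # the link stops working after 60 seconds
)
print(url)

Your server hands this URL to the user only after your own authentication check passes; a leaked link keeps working, but only until it expires.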

Related

How to efficiently allow users to view Amazon S3 content?

I am currently creating a basic app with React-Native (frontend) and Flask/MongoDB (backend). I am planning on using AWS S3 as cheap cloud storage for all the images and videos that are going to be uploaded and viewed. My current idea (and this could be totally off) is that when a user uploads content, it will go through my Flask API and then to S3 storage. When a user wants to view content, I am not sure what the plan of attack is here. Should I use my Flask API as a proxy, or is there a way to simply send a link to the content directly on S3 (which would avoid the extra traffic through my API)?
I am quite new to using AWS and if there is already a post discussing this topic, please let me know, and I'd be more than happy to take down this duplicate. I just can't seem to find anything.
Should I use my Flask API as a proxy, or is there a way to simply send a link to the content directly on S3 (which would avoid the extra traffic through my API)?
If the content is public, you just provide a URL which points directly to the file in the S3 bucket.
If the content is private, you generate a presigned URL on your backend for the file you want to give access to. This URL should be valid for a short amount of time (for example, 15-30 minutes). You can regenerate it if it expires.
Moreover, you can generate a presigned URL which can be used for uploads directly from the front-end to the S3 bucket. This might be an option if you don't want the upload traffic to go through the backend or you want faster uploads.
There is an API for this, boto3; try using it.
It is not difficult; I have done something similar and will post the code here.
I have done it like @Ervin said:
1. Frontend asks backend to generate credentials.
2. Backend sends the credentials to the frontend.
3. Frontend uploads the file to S3.
4. Frontend tells the backend it has finished.
5. Backend validates that everything is OK.
6. Backend creates a link to download; you have a lot of security options here.
Example of item 6: generate a presigned URL to download content.
bucket = app.config.get('BOTO3_BUCKET', None)
client = boto_flask.clients.get('s3')
params = {}
params['Bucket'] = bucket
params['Key'] = attachment_model.s3_filename
params['ResponseContentDisposition'] = 'attachment; filename={0}'.format(attachment_model.filename)
if attachment_model.mimetype is not None:
    params['ResponseContentType'] = attachment_model.mimetype
url = client.generate_presigned_url('get_object', ExpiresIn=3600, Params=params)
Example of item 2: the backend creates presigned credentials to POST the file to S3 and sends s3_credentials to the frontend.
acl_permission = 'private' if private_attachment else 'public-read'
condition = [{'acl': acl_permission},
             ["starts-with", "$key", '{0}/'.format(folder_name)],
             {'Content-Type': mimetype}]
bucket = app.config.get('BOTO3_BUCKET', None)
fields = {"acl": acl_permission, 'Bucket': bucket, 'Content-Type': mimetype}
client = boto_flask.clients.get('s3')
s3_credentials = client.generate_presigned_post(bucket, s3_filename, Fields=fields, Conditions=condition, ExpiresIn=3600)
Example of item 5: here is how the backend can check whether the file on S3 is OK.
bucket = app.config.get('BOTO3_BUCKET', None)
client = boto_flask.clients.get('s3')
response = client.head_object(Bucket=bucket, Key=s3_filename)
if response is None:
    return None, None
md5 = response.get('ETag').replace('"', '')
size = response.get('ContentLength')
Here is an example of how the frontend asks for credentials, uploads the file to S3, and informs the backend when it is done.
I have stripped out a lot of app-specific code.
// frontend asks backend to create credentials; frontend sends some file metadata
AttachmentService.createPostUrl(payload).then((responseCredentials) => {
    let form = new FormData();
    Object.keys(responseCredentials.s3.fields).forEach(key => {
        form.append(key, responseCredentials.s3.fields[key]);
    });
    form.append("file", file);
    let payload = {
        data: form,
        url: responseCredentials.s3.url
    };
    // Frontend sends the file to S3
    axios.post(payload.url, payload.data).then((res) => {
        return Promise.resolve(true);
    }).then((result) => {
        // when it is done, frontend informs the backend
        AttachmentService.uploadSuccess(...).then((refreshCase) => {
            // Success
        });
    });
});

Get a non-expiring S3 URL to store in DynamoDB (Flutter)

I am working on an application that needs to upload an image to S3 and keep its URL in a DynamoDB table. However, the getUrl function I have generates a URL that is only valid for a certain time and then expires. How do I get a URL with no expiry?
Future<String> getUrl() async {
  try {
    print('In getUrl');
    String key = _uploadFileResult;
    try {
      GetUrlResult result = await Amplify.Storage.getUrl(key: key);
      print(result.url);
      return result.url;
    } on StorageException catch (e) {
      print(e.message);
    }
  } catch (e) {
    print('GetUrl Err: ' + e.toString());
  }
}
All S3 pre-signed URLs have an expiration time. Pre-signed URLs exist to share private S3 objects for a limited time.
If that's a problem for you then one option is to make the object public, if appropriate, and simply store its URL of the form https://mybucket.s3.amazonaws.com/images/cat.png.
Alternatively, write a small application that responds to a specific URL (e.g. https://myapi.mydomain.com/images/cat.png) and have that app create a pre-signed URL for the related object and issue a 302 redirect to send the client to the temporary, pre-signed URL.
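A minimal sketch of that redirect approach, assuming a small Flask app with boto3 (the bucket name, route, and the user_is_signed_in() check are placeholders for your own setup):

import boto3
from flask import Flask, abort, redirect

app = Flask(__name__)
s3 = boto3.client('s3')
BUCKET = 'my-bucket'  # placeholder bucket name

@app.route('/images/<path:key>')
def serve_image(key):
    # Placeholder: replace with your own session/auth check
    if not user_is_signed_in():
        abort(403)
    url = s3.generate_presigned_url(
        'get_object',
        Params={'Bucket': BUCKET, 'Key': key},
        ExpiresIn=900,  # 15 minutes
    )
    # The 302 redirect sends the client to the short-lived S3 URL
    return redirect(url, code=302)

Only signed-in users ever receive the pre-signed URL, and even a shared link stops working once it expires.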

The best way to send files to GCS without user confirmation

I am developing an application that needs to send files to Google Cloud Storage.
The webapp will have an HTML page where the user chooses files to upload.
The users do not have Google Accounts.
The number of files to send is 5 or fewer.
I do not want to send files to GAE and have GAE forward them to GCS; I would like my users to upload directly to GCS.
I wrote this code for the upload:
function sentStorage() {
    var file = document.getElementById("myFile").files[0];
    var url = 'https://www.googleapis.com/upload/storage/v1/b/XXX/o?uploadType=resumable&name=' + file.name;
    var xhr = new XMLHttpRequest();
    var token = 'ya29.XXXXXXXXXXXXXXX';
    xhr.open('POST', url);
    xhr.setRequestHeader('Content-Type', file.type);
    // resumable
    //url = 'https://www.googleapis.com/upload/storage/v1/b/XXXXXX/o?uploadType=resumable&name=' + file.name;
    //xhr.setRequestHeader('Content-Type', 'application/json; charset=UTF-8');
    //xhr.setRequestHeader('Content-Length', file.size);
    xhr.setRequestHeader('x-goog-project-id', 'XXXXXXXXXX');
    xhr.setRequestHeader('Authorization', 'Bearer ' + token);
    xhr.send(file);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4) {
            var response = JSON.parse(xhr.responseText);
            if (xhr.status === 200) {
                alert('code 200');
            } else {
                var message = 'Error: ' + response.error.message;
                console.log(message);
                alert(message);
            }
        }
    };
}
I got service account credentials from the Google Console and generated a Bearer token for them, using a Python script that reads the JSON account file and produces the token.
My requirement is that users do not need to confirm any Google Account information to send files; that obligation belongs to my application (users do not have Google Accounts). The HTML page must send the files directly to GCS without going through GAE or GCE, so I need to use an HTML form or JavaScript. I prefer JavaScript.
Only users of this application can upload (the application authenticates against a database), so anonymous users cannot do it.
My questions are:
Will this token expire? I used a service account to generate it.
Is there a better JavaScript API to do this?
Is this security solution good, or should I use a different approach?
Sending either a refresh or an access token to an untrusted end user is very dangerous. The bearer of an access token has complete authority to act as the associated account (within the scope used to generate it) until the access token expires a few minutes later. You don't want to do that.
There are a few good alternatives. The easiest way is to create exactly the upload request you want, then sign the URL for that request using the private key of a service account. That signed URL, which will be valid for a few minutes, could then be used to upload a single object. You'll need to sign the URL on the server side before giving it to the customer. Here's the documentation on signed URLs: https://cloud.google.com/storage/docs/access-control/signed-urls
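If the signing happens on a Python backend, a minimal sketch with the google-cloud-storage client (the key file path, bucket, object name, and content type below are placeholders) could look like this:

from datetime import timedelta
from google.cloud import storage

# Placeholder values: service account key file, bucket, and object name
client = storage.Client.from_service_account_json("service-account.json")
bucket = client.bucket("my-bucket")
blob = bucket.blob("uploads/photo.jpg")

# V4 signed URL that lets the browser PUT this one object for 15 minutes
url = blob.generate_signed_url(
    version="v4",
    expiration=timedelta(minutes=15),
    method="PUT",
    content_type="image/jpeg",
)
print(url)

The browser then uploads the file with a plain PUT to that URL (using the same Content-Type), so the bytes never pass through your app server and no OAuth token is exposed to the user.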

AWS: cannot sign CloudFront URLs

Expected: I want to generate signed URLs for my AWS CloudFront URL.
What I have done: I have created an AWS CloudFront distribution and enabled the Restrict Viewer Access option, with Trusted Signers set to Self.
Below is the PHP code I use to sign the URL:
function getSignedURL()
{
    $resource = 'http://d2qui8qg6d31zk.cloudfront.net/richardcuicks3sample/140-140.bmp';
    $timeout = 300;
    // This comes from the key pair you generated for CloudFront
    $keyPairId = "YOUR_CLOUDFRONT_KEY_PAIR_ID";
    $expires = time() + $timeout; // Timeout in seconds
    $json = '{"Statement":[{"Resource":"'.$resource.'","Condition":{"DateLessThan":{"AWS:EpochTime":'.$expires.'}}}]}';

    // Read the CloudFront private key pair
    $fp = fopen("private_key.pem", "r");
    $priv_key = fread($fp, 8192);
    fclose($fp);

    // Create the private key
    $key = openssl_get_privatekey($priv_key);
    if (!$key)
    {
        echo "<p>Failed to load private key!</p>";
        return;
    }

    // Sign the policy with the private key
    if (!openssl_sign($json, $signed_policy, $key, OPENSSL_ALGO_SHA1))
    {
        echo '<p>Failed to sign policy: '.openssl_error_string().'</p>';
        return;
    }

    // Create URL-safe signed policy
    $base64_signed_policy = base64_encode($signed_policy);
    $signature = str_replace(array('+','=','/'), array('-','_','~'), $base64_signed_policy);

    // Construct the URL
    $url = $resource.'?Expires='.$expires.'&Signature='.$signature.'&Key-Pair-Id='.$keyPairId;
    return $url;
}
For $keyPairId and private_key.pem, I logged in to my root account and generated these two in the Security Credentials -> CloudFront Key Pairs section.
If I access http://d2qui8qg6d31zk.cloudfront.net/richardcuicks3sample/140-140.bmp directly in the browser, it responds with:
<Error>
<Code>MissingKey</Code>
<Message>
Missing Key-Pair-Id query parameter or cookie value
</Message>
</Error>
After I ran the function, I got a long signed URL. Opening that URL in the Chrome browser responds with:
<Error>
<Code>InvalidKey</Code>
<Message>Unknown Key</Message>
</Error>
Question: I have searched the AWS documentation and Google a lot about this. Could anyone tell me why this happened, or whether I am missing something? Thanks in advance!
$priv_key=fread($fp,8192);
If I understand correctly, you generated the key yourself. If so, it looks like you are using a key size that is not supported.
The key pair must be an SSH-2 RSA key pair.
The key pair must be in base64-encoded PEM format.
The supported key lengths are 1024, 2048, and 4096 bits.
Docs: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-trusted-signers.html#private-content-creating-cloudfront-key-pairs
I opted for Trusted Key Groups and got that InvalidKey/UnknownKey error when I initially assumed the key pair ID was the same as the access key ID under "My Security Credentials". The correct one to use is the ID of your public key (CloudFront > Key Management > Public Keys).
Thanks @imperalix for answering this question.
I have solved this issue.
Inspired by this site, I found that I was signing the wrong CloudFront URL.
Before: http://d2qui8qg6d31zk.cloudfront.net/richardcuicks3sample/140-140.bmp
After: http://d2qui8qg6d31zk.cloudfront.net/140-140.bmp
Because I created the CloudFront distribution for the richardcuicks3sample bucket, the bucket name does not need to be included in the URL. After I changed the URL, the signed URL works well.
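As a side note, if you ever need to produce the same kind of signed URL from Python, botocore ships a CloudFrontSigner helper. A minimal sketch (the key pair ID, private key path, and URL are placeholders, and it relies on the third-party rsa package):

import datetime
import rsa
from botocore.signers import CloudFrontSigner

KEY_PAIR_ID = "YOUR_CLOUDFRONT_KEY_PAIR_ID"  # placeholder key pair / public key ID

def rsa_signer(message):
    # Sign the policy with the CloudFront private key (path is a placeholder)
    with open("private_key.pem", "rb") as key_file:
        private_key = rsa.PrivateKey.load_pkcs1(key_file.read())
    return rsa.sign(message, private_key, "SHA-1")

signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)
url = "http://d2qui8qg6d31zk.cloudfront.net/140-140.bmp"
expires = datetime.datetime.utcnow() + datetime.timedelta(minutes=5)

# Canned policy: just the resource URL and an expiry time
signed_url = signer.generate_presigned_url(url, date_less_than=expires)
print(signed_url)

Calling generate_presigned_url with only date_less_than uses a canned policy, which corresponds to the Expires/Signature/Key-Pair-Id query string built by hand in the PHP code above.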

Amazon AWS: Sending email via SES beyond painfully slow

We have a PHP-based app (running on a t2.medium instance) that sends emails (to opted-in users only) via SES, and both are located in the same region. The app was launched earlier this year and the sending of emails worked properly for months. We recently switched to sending via Mailgun (so we could get more information on a problem we were having), but we did not change any of our SES settings. (Note: Our account is approved to send 50k emails per hour; we are trying to send several hundred.)
I wrote an adjunct utility for our app, which also sends emails, and I decided to continue using SES for this utility. A simplified version of the code follows. Note that I kept the layout of this test program as close to the actual utility as possible (and it should be obvious that the utility makes a database call, etc.)
<?php
require_once dirname(__FILE__) . '/PHPMailer-master/PHPMailerAutoload.php';

$mail = new PHPMailer;
$mail->isSMTP();
$mail->Host = 'email-smtp.us-west-2.amazonaws.com';
$mail->SMTPAuth = true;
$mail->Username = 'my_user_name';
$mail->Password = 'my_password';
$mail->SMTPSecure = 'tls';
$mail->From = 'from_sender';
$mail->FromName = 'WebTeam';
$mail->IsHTML(true);

$oldt = microtime(true);
while (true) {
    $first_name = 'first_name';
    $email = 'to_recipient';
    $strCnt = 'many';
    $subject = "Lots of great new things to buy";
    $body = "<p>" . $first_name . ",</p>";
    $body = $body . "<p>You have " . $strCnt . " new things to buy waiting for you. Don't let them slip by! ";
    $body = $body . "Click <a href='http://fake_url.com'>here</a> to see them!</p>";
    $body = $body . "<p>The Web Team</p>";
    $mail->addAddress($email);
    $mail->Subject = $subject;
    $mail->Body = $body;
    $newt = microtime(true);
    echo 'email build done: ' . ($newt - $oldt) . PHP_EOL;
    $oldt = $newt;
    if (!$mail->send()) {
        echo 'error sending email: ' . $mail->ErrorInfo . PHP_EOL;
    } else {
        $newt = microtime(true);
        echo 'email sent: ' . ($newt - $oldt) . PHP_EOL . PHP_EOL;
        $oldt = $newt;
    }
    $mail->ClearAllRecipients(); // added line
}
?>
Quite simple!
But, here's the rub. When I ran this the first time, the first email took less than one second to send, the second one took 31 seconds, and the third one required 191 seconds. I then added one more line of code (the ClearAllRecipients call marked above) and ran the program again. This time, the first email took 63 seconds to send. After about 20 minutes, I ran the program a third time. This time, the first three emails were sent in less than one second each, but the fourth one took 191 seconds. I then ran it a fifth time, and the first email took 135 seconds to send. (Do note that all of the emails were received.)
What the heck is going on? More importantly, how do I resolve the problem?
This is not SES being slow. This is a documented, deliberate limitation on EC2 itself, with two possible workarounds.
From the SES documentation:
Important
Amazon Elastic Compute Cloud (Amazon EC2) throttles email traffic over port 25 by default. To avoid timeouts when sending email through the SMTP endpoint from EC2, use a different port (587 or 2587) or fill out a Request to Remove Email Sending Limitations to remove the throttle.
http://docs.aws.amazon.com/ses/latest/DeveloperGuide/smtp-connect.html
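The PHPMailer setup in the question never sets a port, so it falls back to PHPMailer's default of 25, which is exactly the throttled port; setting it to 587 (or 2587) sidesteps the throttle. For reference, a minimal sketch of the same SMTP connection on port 587 using Python's smtplib (host, credentials, and addresses are placeholders):

import smtplib
from email.message import EmailMessage

# Placeholders: SES SMTP credentials and addresses
msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "Lots of great new things to buy"
msg.set_content("You have many new things to buy waiting for you.")

# Port 587 (or 2587) avoids EC2's default throttling of outbound port 25
with smtplib.SMTP("email-smtp.us-west-2.amazonaws.com", 587) as server:
    server.starttls()
    server.login("my_user_name", "my_password")
    server.send_message(msg)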