I'm trying to integrate AWS S3 into my app for sharing images.
I'm currently using Branch.io for sharing content between devices using deep links, but this approach has a problem: I cannot send image data with deep links, as explained in this post.
So, using the same post as a reference, I tried AWS S3 for uploading files and then sharing the links. But I believe this approach requires my app to have a login function for Cognito, which I do not want.
I also tried generating presigned URLs for the images, but then my credentials expire after an hour and I get an "Expired token" error. I'm currently using this approach; I just wish the URLs wouldn't expire in an hour. Seven days or so would be enough for me, but an hour is too short.
I could also make the images uploaded for sharing public, and therefore freely accessible from anywhere, but I don't like that either, for security reasons.
What would be the best way to handle this?
When you generate the URL, you need to pass in the Expires parameter:
var params = {Bucket: 'myBucket', Key: 'myKey', Expires: 604800};
Sample code to generate a signed GET URL:
var AWS = require('aws-sdk');
var s3 = new AWS.S3();
var params = {Bucket: 'myBucket', Key: 'myKey', Expires: 604800};
var url = s3.getSignedUrl('getObject', params);
console.log("get URL is", url);
604,800 seconds is one week.
Hope it helps.
You can use S3 pre-signed URLs, which have a maximum expiration of 7 days. You can set the expiration with the query string parameter "X-Amz-Expires".
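For illustration, here is a minimal sketch with the Node aws-sdk (the bucket and key are placeholders); when the client is configured for Signature Version 4, the Expires value you pass shows up in the URL as X-Amz-Expires:
var AWS = require('aws-sdk');
// Signature Version 4 carries the expiry as X-Amz-Expires,
// capped at 604800 seconds (7 days).
var s3 = new AWS.S3({signatureVersion: 'v4'});
var url = s3.getSignedUrl('getObject', {Bucket: 'myBucket', Key: 'myKey', Expires: 604800});
console.log(url); // ...&X-Amz-Expires=604800&...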
Amruta from Branch here:
Unfortunately, there is no way to add images to the link using the Branch SDK. As a workaround, as mentioned in the post you have tagged, you can create links on the Branch dashboard and upload images in the Social Media tab. Once the link is saved, the same tab will provide you with the link for the image hosted by Branch.
Unfortunately, this would only work if you have a predefined set of images.
I solved the issue by not using the temporary credentials generated by the AWS SDK. Instead, I created a new user for the app through the AWS dashboard and gave it the necessary permissions to upload the objects. In the code, I created a basic AWS credential with the access key id and secret key of the new user.
Now I can create a presigned URL and it works for up to 1 week. I can even increase the expiration by using V2 signing.
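For reference, a minimal sketch of this approach with the Node aws-sdk; the key id and secret below are placeholders for the new IAM user's credentials, which in a real app should come from secure storage rather than being hard-coded:
var AWS = require('aws-sdk');
// Long-lived credentials of the dedicated IAM user (placeholders).
// Permanent credentials carry no session token, so the URL stays
// valid for the full Expires period instead of dying with the token.
var credentials = new AWS.Credentials('AKIA_PLACEHOLDER', 'SECRET_PLACEHOLDER');
var s3 = new AWS.S3({credentials: credentials});
var url = s3.getSignedUrl('getObject', {Bucket: 'myBucket', Key: 'myKey', Expires: 604800});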
I am using the Power BI API to upload some pbix files. Most of these files use Import mode for the SQL connection.
When I use the REST API to upload the files, the credentials do not get updated on the website. I know the credentials do not live in the actual file, and I know there is an API to patch these credentials, but I have not been able to make it work with Import mode; it only seems to work with DirectQuery.
I have also tried Set All Connections, which is documented to work only with DirectQuery connections, using this format:
Data Source=xxxx.mydb.net; Initial Catalog=dbname;User ID=xxx;Password=xxxx;
My problem now is that the way Power BI manages cached credentials makes it hard to figure out which credentials are being used. There is some magic happening where updating one file sometimes makes the other files that use the same credentials also allow refresh.
This is the error I am getting for all files uploaded via the API:
Data source error: Scheduled refresh has been disabled due to at least one data source not having credentials provided. Please provide credentials for all data sources, and then turn scheduled refresh back on.
Cluster: This-is-not-relevant.net
Activity ID: 00000000-0000-0000-0000-000000000000
Request ID: 00000000-0000-0000-0000-000000000000
Time: 2020-09-99 99:54:11Z
Thank you,
Chéyo
This is the solution using the Power BI C# SDK. Make sure the JSON payload is properly escaped.
var request = new UpdateDatasourceRequest
{
    CredentialDetails = new CredentialDetails
    {
        Credentials = $"{{\"credentialData\":[{{\"name\":\"username\",\"value\":{JsonConvert.SerializeObject(credential.Username)}}},{{\"name\":\"password\",\"value\":{JsonConvert.SerializeObject(credential.Password)}}}]}}",
        CredentialType = "Basic",
        EncryptedConnection = "Encrypted",
        EncryptionAlgorithm = "None",
        PrivacyLevel = "None"
    }
};

await PowerBI.Client().Gateways.UpdateDatasourceAsync(gatewayId: datasource.GatewayId, datasourceId: datasource.DatasourceId, updateDatasourceRequest: request);
I am creating a signed URL with AWS so I can safely pass it to another API for temporary use. The signed URL points to an S3 resource. The problem is that the other API does not accept such long links, so I am trying to shorten it. I tried shorteners like goo.gl or bit.ly to no avail, because the URL was too long for them. I even built my own private shortener with AWS (AWS url shortener), but it hit the same limit: "The length of website redirect location cannot exceed 2,048 characters."
I am creating the signed URLs in iOS (Swift) with AWSS3PreSignedURLBuilder.default().getPreSignedURL(preSignedURLRequest) while using AWS Cognito as an unauthorised user.
I have tried the following things to no avail:
Choosing the shortest possible S3 bucket name (3 characters)
Shortening the file name as much as possible. I limited the file name to 10 characters plus the file extension (14 characters in total). Shorter file names are not viable for me because they need to stay unique to a certain extent.
But even with all these minor tweaks, the signed URL returned by AWS is sometimes too long. The token parameter (X-Amz-Security-Token) in particular is really long. With my minor tweaks I sometimes get URLs shorter than 2,048 characters, but sometimes slightly longer. I would like a solution that guarantees the URL is short enough to be shortened.
In my own private AWS URL shortener, the following code snippet creates the S3 object that redirects to the actual long URL.
s3.putObject({
    Bucket: s3_bucket,
    Key: key_short,
    Body: "",
    // The actual long URL lives in the redirect header; S3 caps it at 2 KB.
    WebsiteRedirectLocation: url_long,
    ContentType: "text/plain"
  },
  (err, data) => {
    if (err) {
      console.log(err);
      done("", err.message);
    } else {
      const ret_url = "https://" + cdn_prefix + "/" + id_short;
      console.log("Success, short_url = " + ret_url);
      done(ret_url, "");
    }
  });
The method returns with the following error:
The length of website redirect location cannot exceed 2,048 characters.
The documentation of putObject for the header "x-amz-website-redirect-location" in the object metadata states the following (see: put object documentation):
The length of the value is limited to 2 KB
How can I make sure that the initial AWS signed URL is not too long for the URL shorteners?
EDIT:
One of the problems I have identified is that I create the signed URL as an unauthenticated user in AWS Cognito. Therefore the signed URL includes this ridiculously long token as a parameter. I did not want to embed my accessKey and secretKey in the iOS app; that's why I switched to AWS Cognito (see aws cognito). But currently there are no authorised users, just unauthorised ones, and I need to create the signed URL as an unauthorised AWS Cognito user. If I create the signed URL with regular credentials using accessKey and secretKey, I get a much shorter URL. But for that I would have to embed my accessKey and secretKey in the iOS app, which is not recommended.
I solved the problem by creating an AWS Lambda that generates a presigned URL and returns it. The presigned URL allows the caller to access (getObject) the S3 resource. There are two options here:
The role assigned to the AWS Lambda has the S3 permission for getObject. The resulting presigned URL will include a much shorter token than a presigned URL created with the temporary credentials issued by AWS Cognito in the iOS app.
Embed the access key and secret key of a user with the S3 permission for getObject directly into the AWS Lambda, which gives you an even shorter URL because no token is included in the resulting presigned URL (e.g. sample AWS code).
I call this Lambda from within my iOS app as an unauthorised Cognito user. After receiving the presigned URL from the AWS Lambda, I am able to shorten it, because with this method the presigned URLs are much shorter. A sketch of the first option follows.
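A minimal sketch, assuming a Node.js Lambda whose execution role allows s3:GetObject; the bucket name, the event shape, and the returned payload are placeholders:
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = function(event, context, callback) {
  var params = {
    Bucket: 'my-bucket',   // placeholder bucket
    Key: event.key,        // object key supplied by the caller
    Expires: 3600          // URL lifetime in seconds
  };
  s3.getSignedUrl('getObject', params, function(err, url) {
    if (err) {
      callback(err);
    } else {
      // Signed with the role's credentials, so the embedded
      // X-Amz-Security-Token is much shorter than Cognito's.
      callback(null, {url: url});
    }
  });
};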
There is an older method of generating pre-signed URLs that produces a very short link, e.g.:
https://s3-ap-southeast-2.amazonaws.com/my-bucket/foo.png?AWSAccessKeyId=AKI123V12345RYTP123&Expires=1508620600&Signature=oB1/jca2JFXw5DbN7gBKEXkUQk8%3D
However, this method pre-dates SigV4, so it does not work in the newer regions (Frankfurt onwards).
You can find sample code at:
mdwhatcott/s3_presign.py
S3 Generate Pre Signed Url
It can also be used to sign for uploads: Correct S3 Policy For Pre-Signed URLs
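If you want to try this from the Node aws-sdk, you can ask the client for the legacy signature explicitly. A sketch, with placeholder bucket and key; note it only works in regions that still accept Signature Version 2:
var AWS = require('aws-sdk');
// 'v2' yields the short AWSAccessKeyId/Expires/Signature form shown
// above; it is rejected by regions launched after SigV2 was retired.
var s3 = new AWS.S3({signatureVersion: 'v2', region: 'ap-southeast-2'});
var url = s3.getSignedUrl('getObject', {Bucket: 'my-bucket', Key: 'foo.png', Expires: 3600});
console.log(url);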
I've got a Django Rest API and a React Native app. I'd like to upload some files to my S3 bucket from my app.
I could do this :
User would like to upload an image --> GET my_api/s3/credentials/
App --> POST image directly to S3 using credentials (access/private keys)
The problem is that once the user has the accessKey and privateKey, they can be used indefinitely.
Is there a way to retrieve temporary credentials I could give to the user after a call on my_api/s3/credentials/ ?
I've found an answer: it is possible to generate, server side, a temporary URL to which you can POST your content.
details here
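The idea is a presigned POST generated server side. A minimal sketch with the Node aws-sdk (the question's backend is Django, where boto3's generate_presigned_post does the same job); the bucket, key, and limits are placeholders:
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

var params = {
  Bucket: 'my-bucket',
  Fields: {key: 'uploads/image-1234.jpg'},              // key the client must use
  Expires: 300,                                         // policy valid for 5 minutes
  Conditions: [['content-length-range', 0, 10485760]]   // cap uploads at 10 MB
};

s3.createPresignedPost(params, function(err, data) {
  if (err) return console.error(err);
  // Return data.url and data.fields to the app; it POSTs the file
  // as multipart/form-data together with those fields.
  console.log(data.url, data.fields);
});
This way the app never sees long-lived keys, and the upload policy expires on its own.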
I am using CarrierWaveDirect to upload high-resolution images to S3. I then use each image to process multiple versions, which are made public through CloudFront URLs.
The uploaded high-res files need to remain private to anonymous users, but the web application needs to access the private file in order to do the processing for the other versions.
I am currently setting all uploaded files to private in the CarrierWave initializer via
config.fog_public = false
I have an IAM policy for the web application that allows full admin access. I have also set the access key and secret key of that IAM user in the app. Given these two things, I would think that the web app could access the private file and continue with processing, but it is denied access to the private file.
When I log into the user account associated with the web app, I am able to access the private file, because a token is appended to the URL.
I can't figure out why the app cannot access the private file given the access key and secret key.
I had a hard time pinning down your problem. I am quite certain your question is not
unable to access private s3 file even though IAM policy grants access
but rather
how to handcraft a presigned URL for GETting a private file on S3
The gist shows you're trying to create the presigned URL for GET yourself. While this is perfectly fine, it's also very error-prone.
Please verify that what you're trying to do works at all, using the AWS SDK for Ruby (I only post code known to work with version 1 here, but if you aren't held back by legacy code, start with version 2):
s3 = AWS::S3.new
bucket = s3.buckets["your-bucket-name"]
obj = bucket.objects["your-object-path"]
obj.url_for(:read, expires: 10*60) # generate a URL that expires in 10 minutes
See the docs for AWS::S3::S3Object#url_for and Aws::S3::Object#presigned_url for details.
You may need to read up on passing args to AWS::S3.new here (for credentials, regions, and so on).
I'd advise you take the following steps:
Make it work locally using the access_key_id and secret_access_key
Make it work in your worker
If it works, you can compare the query string the SDK returned with the one you handcrafted yourself. Maybe you can spot an error.
But in any case, I suggest you use higher-level SDKs to do things like that for you.
If this doesn't get you anywhere, please post a comment with your new findings.
I am totally new to S3 buckets. I know we can save images, videos, and any other kind of resource there.
When I want to access these images from my app, I can do so through a web URL, but how do I make sure that no unauthorised user can see/download my image using that URL? (The URL is easy to guess if the bucket name is known.)
How to make the URL secure?
Can I use my preferred username & password in the URL to make it secure?
I also have not found any way to make the resources inaccessible through the URL that Amazon uses (http://s3.amazonaws.com/bucketname/resourcename.extension); is that possible?
Any help would be appreciated...
Thanks.
First, make your bucket private.
Then, generate a signed URL to access your content.
The format will be https://<bucket>.s3.amazonaws.com/objectname?AWSAccessKeyId=<accesskey>&Expires=<expiretime>&Signature=<signature string>
Please see https://s3.amazonaws.com/doc/s3-developer-guide/RESTAuthentication.html
You can also try it with this tool: http://www.dancartoon.com/projects/s3-siggenerator/
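For reference, a sketch of how that signature is computed for query-string authentication per the linked guide (SigV2; the keys, bucket, and object name are placeholders, and in practice you would let an SDK do this for you):
var crypto = require('crypto');

var accessKey = 'AKIA_PLACEHOLDER';
var secretKey = 'SECRET_PLACEHOLDER';
var bucket = 'bucketname';
var key = 'resourcename.extension';
var expires = Math.floor(Date.now() / 1000) + 3600;  // absolute Unix expiry time

// String to sign for a plain GET with no extra headers:
// HTTP-Verb \n Content-MD5 \n Content-Type \n Expires \n CanonicalizedResource
var stringToSign = 'GET\n\n\n' + expires + '\n/' + bucket + '/' + key;
var signature = crypto.createHmac('sha1', secretKey)
  .update(stringToSign)
  .digest('base64');

var url = 'https://' + bucket + '.s3.amazonaws.com/' + key +
  '?AWSAccessKeyId=' + accessKey +
  '&Expires=' + expires +
  '&Signature=' + encodeURIComponent(signature);
console.log(url);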
I was working on the same task (securely getting images and videos from an S3 bucket) and found an npm module:
aws-sdk
I resolved my issue with the aws-sdk solution.