x-googl-acl isn't making uploaded files public - google-cloud-platform

Currently I have been trying to upload objects (videos) to Google Cloud Storage. I have found out that the reason (possibly) I haven't been able to make them public is an ACL or IAM permission. The way it is currently done is that I get a signed URL from the backend:
const { Storage } = require('@google-cloud/storage');

const getGoogleSignedUrl = async (root, args, context) => {
  const { filename } = args;

  const googleCloud = new Storage({
    keyFilename: 'something',
    projectId: 'something'
  });

  const options = {
    version: 'v4',
    action: 'write',
    expires: Date.now() + 15 * 60 * 1000, // 15 minutes
    contentType: 'video/quicktime',
    extensionHeaders: { 'x-googl-acl': 'public-read' }
  };

  const bucketName = 'something';

  // Get a v4 signed URL for uploading the file
  const [url] = await googleCloud
    .bucket(bucketName)
    .file(filename)
    .getSignedUrl(options);

  return { url };
};
Once I have gotten the temporary permission from the backend as a URL, I try to make a PUT request to upload the file:
const response = await fetch(url, {
  method: 'PUT',
  body: blob,
  headers: {
    'x-googl-acl': 'public-read',
    'content-type': 'video/quicktime'
  }
}).then(res => console.log("the res is ", res)).catch(e => console.log(e));
Even though the file does get uploaded to Google Cloud Storage, it always shows Public access as Not public. Any help would be appreciated, since I am starting to not understand how making an object public with Google Cloud works.
Within AWS (previously) it was easy to make an object public by adding x-amz-acl to a PUT request.
Thank you in advance.
Update
I have changed the code to reflect what it currently looks like. Also, when I look at the object in Google Storage after it has been uploaded, I see:
Public access: Not authorized
Type: video/quicktime
Size: 369.1 KB
Created: Feb 11, 2021, 5:49:02 PM
Last modified: Feb 11, 2021, 5:49:02 PM
Hold status: None
Retention policy: None
Encryption type: Google-managed key
Custom time: —
Public URL: Not applicable
Update 2
As stated, the issue where I wasn't able to upload the file after adding the recommended header was that I wasn't providing the header correctly. I changed the header from x-googl-acl to x-goog-acl, which has allowed me to upload to the cloud.
The new problem is that Public access is now showing as Not authorized.
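For clarity, a minimal sketch of what the corrected header looks like on both sides; the header name is x-goog-acl, and because it is part of the v4 signature it has to be sent with the PUT exactly as it was signed (everything else mirrors the code above):

const options = {
  version: 'v4',
  action: 'write',
  expires: Date.now() + 15 * 60 * 1000, // 15 minutes
  contentType: 'video/quicktime',
  extensionHeaders: { 'x-goog-acl': 'public-read' } // note: x-goog-acl, not x-googl-acl
};

// The upload must send the same signed header, or the signature check fails
const response = await fetch(url, {
  method: 'PUT',
  body: blob,
  headers: {
    'x-goog-acl': 'public-read',
    'content-type': 'video/quicktime'
  }
});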
Update 3
In order to try something new I followed the directions listed here: https://www.jhanley.com/google-cloud-setting-up-gcloud-with-service-account-credentials/. Once I finished everything, the next steps I took were:
1 - Upload a new video to the cloud. This is done using the new JSON key provided:
const googleCloud = new Storage({
  keyFilename: json_file_given,
  projectId: 'something'
});
Once the file had been uploaded I noticed there were no changes with regard to it being public. It still shows Public access Not authorized.
2 - After checking the status of the uploaded object, I went on to follow a similar approach as below, to make sure I am using the same account as the JSON key that uploaded the file.
gcloud auth activate-service-account test@development-123456.iam.gserviceaccount.com --key-file=test_google_account.json
3 - Once I confirmed I was using the right account with the right permissions, I performed the next step:
gsutil acl ch -u AllUsers:R gs://example-bucket/example-object
This actually resulted in a response of No changes to gs://crit_bull_1/google5.mov.
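For reference, a minimal sketch of the equivalent call through the Node client used above (assuming the bucket uses fine-grained, not uniform bucket-level, access control); makePublic() adds the same AllUsers reader ACL that the gsutil command above applies:

// Same effect as `gsutil acl ch -u AllUsers:R` on the uploaded object
await googleCloud
  .bucket('crit_bull_1')
  .file('google5.mov')
  .makePublic();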

Related

How do I make putObject request to presignedUrl using s3 AWS

I am working with an AWS S3 bucket and trying to upload an image from a React Native project managed by Expo, with Express on the backend. I have created an s3 file on the backend that handles getting the presigned URL; this works and returns the URL to the front end inside the thunk function below from Redux Toolkit. I used axios to send the request to my server, and this works.

I have used both axios and fetch to try the final PUT to the presigned URL, but when it reaches the S3 bucket there is nothing in the file, just an empty file of 200 bytes every time. When I use the same presigned URL from Postman and upload an image in the binary section, then send the request, the image uploads to the bucket with no problems. When I send binary or base64 to the bucket from the RN app it just uploads those values in text form.

I attempted react-native-image-picker but was having problems with that too. Any ideas would be helpful, thanks. I have included a snippet from the Redux slice; if you need more info let me know.
redux slice projects.js
import axios from "axios";
import { createAsyncThunk } from "@reduxjs/toolkit";

// create a project
// fancy function here ......
export const createProject = createAsyncThunk(
  "projects/createProject",
  async (postData) => {
    // sending image to s3 bucket and getting a url to store in db
    const response = await axios.get("/s3");

    // post image directly to s3 bucket
    const s3Url = await fetch(response.data.data, {
      method: "PUT",
      body: postData.image
    });
    console.log(s3Url);
    console.log(response.data.data);

    // make another request to my server to store extra data
    try {
      const response = await axios.post('/works', postData);
      return response.data.data;
    } catch (err) {
      console.log("Create projects failed: ", err);
    }
  }
);
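For what it's worth, a minimal sketch of the binary upload this flow is aiming for, assuming postData.image is a local file URI from the Expo image picker and the content type is JPEG (both assumptions): fetching the local URI and converting it to a Blob gives the PUT a real binary body instead of a base64/URI string.

// Turn the local file URI into a Blob, then PUT the Blob itself
const localUri = postData.image; // e.g. file:///... from the image picker
const blob = await (await fetch(localUri)).blob();

const s3Response = await fetch(response.data.data, {
  method: 'PUT',
  body: blob, // binary body, not a base64/URI string
  headers: { 'Content-Type': 'image/jpeg' } // assumed content type
});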

How to efficiently allow for users to view Amazon S3 content?

I am currently creating a basic app with React Native (frontend) and Flask/MongoDB (backend). I am planning on using AWS S3 as cheap cloud storage for all the images and videos that are going to be uploaded and viewed. My current idea (and this could be totally off) is that when a user uploads content, it will go through my Flask API and then to S3 storage. When a user wants to view content, I am not sure what the plan of attack is here. Should I use my Flask API as a proxy, or is there a way to simply send a link to the content directly on S3 (which would avoid the extra traffic through my API)?
I am quite new to using AWS and if there is already a post discussing this topic, please let me know, and I'd be more than happy to take down this duplicate. I just can't seem to find anything.
Should I use my Flask API as a proxy, or is there a way to simply send a link to the content directly on S3 (which would avoid the extra traffic through my API)?
If the content is public, you just provide a URL which points directly to the file in the S3 bucket.
If the content is private, you generate presigned url on your backend for the file for which you want to give access. This URL should be valid for a short amount of time (for example: 15/30 minutes). You can regenerate it, if it becomes unavailable.
Moreover, you can generate a presigned URL which can be used for uploads directly from the front-end to the S3 bucket. This might be an option if you don't want the upload traffic to go through the backend or you want faster uploads.
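For illustration, a minimal sketch of generating such a presigned download URL on a Node backend with the aws-sdk v2 client (bucket and key names here are assumptions; a Flask/boto3 equivalent is shown in the answer below):

const AWS = require('aws-sdk');
const s3 = new AWS.S3({ signatureVersion: 'v4', region: 'us-east-1' });

// URL is valid for 15 minutes; the client can GET it directly, bypassing the API
const url = s3.getSignedUrl('getObject', {
  Bucket: 'my-bucket',       // assumed bucket name
  Key: 'uploads/video.mp4',  // assumed object key
  Expires: 15 * 60
});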
There is an API, boto3; try to use it.
It is not so difficult. I have done something similar and will post the code here.
I have done it as @Ervin said.
1 - Frontend asks backend to generate credentials
2 - Backend sends the credentials to the frontend
3 - Frontend uploads the file to S3
4 - Frontend warns the backend it has done so.
5 - Backend validates that everything is ok.
6 - Backend creates a link to download; you have a lot of security options.
Example of item 6) To generate a presigned URL to download content:
bucket = app.config.get('BOTO3_BUCKET', None)
client = boto_flask.clients.get('s3')

params = {}
params['Bucket'] = bucket
params['Key'] = attachment_model.s3_filename
params['ResponseContentDisposition'] = 'attachment; filename={0}'.format(attachment_model.filename)
if attachment_model.mimetype is not None:
    params['ResponseContentType'] = attachment_model.mimetype

url = client.generate_presigned_url('get_object', ExpiresIn=3600, Params=params)
Example of item 2) The backend creates presigned credentials to POST your file to S3 and sends s3_credentials to the frontend:
acl_permission = 'private' if private_attachment else 'public-read'

condition = [
    {'acl': acl_permission},
    ["starts-with", "$key", '{0}/'.format(folder_name)],
    {'Content-Type': mimetype}
]

bucket = app.config.get('BOTO3_BUCKET', None)
fields = {"acl": acl_permission, 'Bucket': bucket, 'Content-Type': mimetype}
client = boto_flask.clients.get('s3')

s3_credentials = client.generate_presigned_post(bucket, s3_filename, Fields=fields, Conditions=condition, ExpiresIn=3600)
Example of item 5) Here is an example of how the backend can check that the file on S3 is ok:
bucket = app.config.get('BOTO3_BUCKET', None)
client = boto_flask.clients.get('s3')

response = client.head_object(Bucket=bucket, Key=s3_filename)
if response is None:
    return None, None

md5 = response.get('ETag').replace('"', '')
size = response.get('ContentLength')
Here is an example of how the frontend asks for credentials, uploads the file to S3, and informs the backend when it is done.
I tried to remove a lot of code particular to my case.
// frontend asking backend to create credentials; frontend will send some file metadata
AttachmentService.createPostUrl(payload).then((responseCredentials) => {
  let form = new FormData();
  Object.keys(responseCredentials.s3.fields).forEach(key => {
    form.append(key, responseCredentials.s3.fields[key]);
  });
  form.append("file", file);

  let payload = {
    data: form,
    url: responseCredentials.s3.url
  };

  // frontend sends the file to S3
  axios.post(payload.url, payload.data).then((res) => {
    return Promise.resolve(true);
  }).then((result) => {
    // when it is done, frontend informs the backend
    AttachmentService.uploadSuccess(...).then((refreshCase) => {
      // Success
    });
  });
});

S3 presigned url fails when large files

I have a presigned POST that works fine for any small file.
When I try to upload larger files, I get ACCESS DENIED on the POST without any other message in the body.
The funny thing about it is that if I keep trying, after a few denied hits it works. It is totally random...
When access is not denied, the condition works, giving the correct error return with a message when the file is larger than 100 MB... but the problem is that a good part of the posts get denied.
This denial happens in the POST to the Amazon address, so I don't have access to any log of it.
The same POST and script produced both results (screenshots omitted): one file OK, the other ACCESS DENIED.
Here is the code:
const AWS = require('aws-sdk');

const S3 = new AWS.S3({
  signatureVersion: 'v4',
  region: region
});

const params = {
  Expires: linkExpiresSecs,
  Bucket: bucketName,
  Conditions: [
    ["content-length-range", 1, 104857600]
  ],
  Fields: {
    key: keyFile
  }
};

const response = await S3.createPresignedPost(params);
I think the validity of the link expires before the transfer finishes for larger files.
As for the behavior where it sometimes succeeds, that could be due to the network situation, e.g. less congestion, or some part of the file having been previously cached.
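If expiry really is the culprit, a minimal sketch of lengthening the validity of the presigned POST; Expires is in seconds, and the one-hour value below is an assumption:

const params = {
  Expires: 3600, // one hour, instead of the shorter linkExpiresSecs used above
  Bucket: bucketName,
  Conditions: [
    ["content-length-range", 1, 104857600]
  ],
  Fields: {
    key: keyFile
  }
};

const response = await S3.createPresignedPost(params);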

Stream download from S3 timeout

I'm facing some timeout problems with a "middleware" service (from now on, file-service) developed with NestJS and AWS S3.
The file-service has two main purposes:
Act as an object storage abstraction layer, to allow the backend to upload files to different storage services completely transparently to the user.
Receive signed tokens as a URL query parameter with file/object information, verify access to the resource, and stream it.
Uploads work without problems.
Downloading small files has no problems either.
But when I try to download large files (> 50 MB), after a few seconds the connection breaks down because of a timeout and, as you can figure out, the download fails.
I've been spending some days looking for solutions and reading docs.
Here are some of them:
About KeepAlive
Use an instance of S3 each time
But nothing works.
Here is the code:
Storage definition class
export class S3Storage implements StorageInterface {
  config: any;
  private s3;

  constructor() {}

  async initialize(config: S3ConfigInterface): Promise<void> {
    this.config = config;
    this.s3 = new AWS.S3();

    // initialize S3 configuration
    AWS.config.update({
      accessKeyId: config.accessKeyId,
      secretAccessKey: config.secretAccessKey,
      region: config.region
    });
  }

  async downloadFile(target: FileDto): Promise<Readable> {
    const params = {
      Bucket: this.config.Bucket,
      Key: target.sourcePath
    };

    return this.s3.getObject(params).createReadStream();
  }
}
Download method
private async downloadOne(target: FileDto, request, response) {
  const storage = await this.provider.getStorage(target.datasource);

  response.setHeader('Content-Type', mime.lookup(target.filename) || 'application/octet-stream');
  response.setHeader('Content-Disposition', `filename="${path.basename(target.filename)}";`);

  const stream = await storage.downloadFile(target);
  stream.pipe(response);

  // await download and exit
  await new Promise((resolve, reject) => {
    stream.on('end', () => {
      resolve(`${target.filename} has been downloaded`);
    });
    stream.on('error', () => {
      reject(`${target.filename} could not be downloaded`);
    });
  });
}
If anyone has faced the same issue (or a similar one), or has any idea (useful or not), I would appreciate any help or advice.
Thank you in advance.
I had the same issue, and here is how I solved it on my side: instead of processing the file by directly getting the stream from S3, I decided to download the content to a temp file (on the Amazon backend server for my API) and stream from that temp file instead. Afterwards, I removed the temp file so as not to fill the hard drive.
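A minimal sketch of that approach with the same aws-sdk v2 client (the temp-file location and function name below are assumptions): download the object into a temporary file, stream that file to the client, then delete it.

const fs = require('fs');
const os = require('os');
const path = require('path');

async function downloadViaTempFile(s3, params, response) {
  // 1. Copy the S3 object into a temp file on the API server
  const tmpFile = path.join(os.tmpdir(), path.basename(params.Key));
  await new Promise((resolve, reject) => {
    const s3Stream = s3.getObject(params).createReadStream();
    const fileStream = fs.createWriteStream(tmpFile);
    s3Stream.on('error', reject);
    fileStream.on('error', reject);
    fileStream.on('finish', resolve);
    s3Stream.pipe(fileStream);
  });

  // 2. Stream the temp file to the HTTP response
  await new Promise((resolve, reject) => {
    const readStream = fs.createReadStream(tmpFile);
    readStream.on('error', reject);
    readStream.on('end', resolve);
    readStream.pipe(response);
  });

  // 3. Remove the temp file so the disk does not fill up
  await fs.promises.unlink(tmpFile);
}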

uploading an image to AWS using react-native-aws-signature

Hi, I am trying to upload an image to Amazon S3 using react-native-aws-signature. Here is the sample code I am attaching:
var AWSSignature = require('react-native-aws-signature');
var awsSignature = new AWSSignature();

var source1 = { uri: response.uri, isStatic: true }; // this is the uri I got from the image picker
console.log("source:" + JSON.stringify(source1));

var credentials = {
  SecretKey: 'security-key',
  AccessKeyId: 'AccesskeyId',
  Bucket: 'Bucket_name'
};

var options = {
  path: '/?Param2=value2&Param1=value1',
  method: 'POST',
  service: 'service',
  headers: {
    'X-Amz-Date': '20150209T123600Z',
    'host': 'xxxxx.aws.amazon.com'
  },
  region: 'us-east-1',
  body: response.uri,
  credentials
};

awsSignature.setParams(options);

var signature = awsSignature.getSignature();
var authorization = awsSignature.getAuthorizationHeader();
Here I am declaring source1; response.uri, which comes from the image picker, is passed in the body. Can anyone suggest whether there is anything wrong in my code, and if there is, how to resolve it? Any help is much appreciated.
awsSignature.getAuthorizationHeader(); will return the authorization header when given the correct parameters, and that's all it does. It is just one step in the whole process of making a signed call to the AWS API.
When sending a POST request to S3, here is a link to the official documentation that you should read: S3 Documentation.
It seems you need to send the image as a form parameter.
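As a rough illustration (the presigned variable and field names below are assumptions, not part of the question's code), posting the image to S3 as multipart form data from React Native would look something like this:

// Build the multipart form: the presigned POST fields first, the file last
const form = new FormData();
Object.keys(presigned.fields).forEach((key) => {
  form.append(key, presigned.fields[key]);
});
// React Native accepts a { uri, name, type } object as a file part
form.append('file', { uri: response.uri, name: 'photo.jpg', type: 'image/jpeg' });

await fetch(presigned.url, {
  method: 'POST',
  body: form
});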
You can also leverage the new AWS Amplify library on the official AWS repo here: https://github.com/aws/aws-amplify
This has a storage module for signing requests to S3: https://github.com/aws/aws-amplify/blob/master/media/storage_guide.md
For React Native you'll need to install that:
npm install aws-amplify-react-native
If you're using Cognito User Pool credentials you'll need to link the native bridge as outlined here: https://github.com/aws/aws-amplify/blob/master/media/quick_start.md#react-native-development
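Once Amplify is configured for the project, a minimal sketch of an upload through its storage module (the key, content type, and imageBlob variable are assumptions):

import { Storage } from 'aws-amplify';

// Puts the object into the S3 bucket configured for the Amplify project
async function uploadImage(imageBlob) {
  const result = await Storage.put('photos/photo.jpg', imageBlob, {
    contentType: 'image/jpeg' // assumed content type
  });
  console.log('Stored key:', result.key);
}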