Hi, I am trying to upload an image to Amazon S3 using react-native-aws-signature. Here is my sample code:
var AWSSignature = require('react-native-aws-signature');
var awsSignature = new AWSSignature();
var source1 = {uri: response.uri, isStatic: true}; // this is the uri returned by the image picker
console.log("source:" + JSON.stringify(source1));
var credentials = {
    SecretKey: 'security-key',
    AccessKeyId: 'AccesskeyId',
    Bucket: 'Bucket_name'
};
var options = {
    path: '/?Param2=value2&Param1=value1',
    method: 'POST',
    service: 'service',
    headers: {
        'X-Amz-Date': '20150209T123600Z',
        'host': 'xxxxx.aws.amazon.com'
    },
    region: 'us-east-1',
    body: response.uri,
    credentials
};
awsSignature.setParams(options);
var signature = awsSignature.getSignature();
var authorization = awsSignature.getAuthorizationHeader();
Here I declare source1, and response.uri (which comes from the image picker) is what I pass as the body. Can anyone tell me whether there is anything wrong in my code, and if so, how to resolve it? Any help is much appreciated.
awsSignature.getAuthorizationHeader(); will return the authorization header when given the correct parameters, and that's all it does. It is just one step in the whole process of making a signed call to an AWS API.
When sending a POST request to S3, here is a link to the official documentation that you should read: S3 Documentation.
It seems you need to send in the image as a form parameter.
You can also leverage the new AWS Amplify library on the official AWS repo here: https://github.com/aws/aws-amplify
This has a storage module for signing requests to S3: https://github.com/aws/aws-amplify/blob/master/media/storage_guide.md
For React Native you'll need to install it:
npm install aws-amplify-react-native
If you're using Cognito User Pool credentials you'll need to link the native bridge as outlined here: https://github.com/aws/aws-amplify/blob/master/media/quick_start.md#react-native-development
Currently I have been trying to upload objects (videos) to my Google Cloud Storage bucket. I have found that the reason I (possibly) haven't been able to make them public is an ACL or IAM permission. The way it currently works is that I get a signed URL from the backend as:
const getGoogleSignedUrl = async (root, args, context) => {
const { filename } = args;
const googleCloud = new Storage({
keyFilename: ,
projectId: 'something'
});
const options = {
version: 'v4',
action: 'write',
expires: Date.now() + 15 * 60 * 1000, // 15 minutes
contentType: 'video/quicktime',
extensionHeaders: {'x-goog-acl': 'public-read'}
};
const bucketName = 'something';
// Get a v4 signed URL for uploading file
const [url] = await googleCloud
.bucket(bucketName)
.file(filename)
.getSignedUrl(options);
return { url };
}
Once I have gotten the temporary signed URL from the backend, I try to make a PUT request to upload the file as:
const response = await fetch(url, {
    method: 'PUT',
    body: blob,
    headers: {
        'x-goog-acl': 'public-read',
        'content-type': 'video/quicktime'
    }
}).then(res => console.log("res is ", res)).catch(e => console.log(e));
Even though the file does get uploaded to Google Cloud Storage, it always shows Public access as Not public. Any help would be appreciated, since I am starting to not understand how making an object public works in Google Cloud.
Within AWS it was (previously) easy to make an object public by adding x-amz-acl to a PUT request.
Thank you in advance.
Update
I have changed the code above to reflect what it currently looks like. Also, when I look at the object in Google Storage after it has been uploaded, I see:
Public access: Not authorized
Type: video/quicktime
Size: 369.1 KB
Created: Feb 11, 2021, 5:49:02 PM
Last modified: Feb 11, 2021, 5:49:02 PM
Hold status: None
Retention policy: None
Encryption type: Google-managed key
Custom time: —
Public URL: Not applicable
Update 2
As stated above, the reason I wasn't able to upload the file after adding the recommended header was that I wasn't providing the header correctly. I changed the header from x-googl-acl to x-goog-acl, which has allowed me to upload it to the cloud.
The new problem is that Public access is now showing as Not authorized.
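For reference, here is a minimal sketch (my own illustration of the fix described above) of the corrected header pair. The header must be spelled x-goog-acl on both the signing side and the PUT request, because any header included in the signature has to be sent with the upload:

```javascript
// The header must be spelled 'x-goog-acl' (not 'x-googl-acl'), and it must
// appear both in extensionHeaders when signing and on the PUT request.
const extensionHeaders = { 'x-goog-acl': 'public-read' };

const putHeaders = {
    'x-goog-acl': 'public-read',
    'content-type': 'video/quicktime'
};

// Every header that was part of the signature must be sent on the request.
const signedAndSent = Object.keys(extensionHeaders)
    .every((h) => h in putHeaders);
console.log(signedAndSent); // true
```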
Update 3
In order to try something new, I followed the directions listed here: https://www.jhanley.com/google-cloud-setting-up-gcloud-with-service-account-credentials/. Once I finished everything, the next steps I took were:
1 - Upload a new video to the cloud, using the new JSON key provided:
const googleCloud = new Storage({
keyFilename: json_file_given,
projectId: 'something'
});
Once the file had been uploaded, I noticed no change with regard to it being public; it still shows Public access Not authorized.
2 - After checking the status of the uploaded object, I went on to follow a similar approach below, to make sure I am using the same account as the JSON key that uploaded the file:
gcloud auth activate-service-account test@development-123456.iam.gserviceaccount.com --key-file=test_google_account.json
3 - Once I confirmed I am using the right account with the right permissions, I performed the next step:
gsutil acl ch -u AllUsers:R gs://example-bucket/example-object
This actually resulted in a response of No changes to gs://crit_bull_1/google5.mov.
I am using the API method "Directly making calls to Selling partner APIs".
I followed all the steps correctly:
Request a Login with Amazon access token - this API works successfully and I am getting the access token.
Construct a Selling Partner API URI - after that, I took the access_token and passed it in the request header 'x-amz-access-token'.
Add headers to the URI
Create and sign your request
After calling API https://sellingpartnerapi-eu.amazon.com/vendor/orders/v1/purchaseOrders
I am getting a signature error in the response:
The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method.
My code:
let options = {
service: 'execute-api',
region:'eu-west-1',
method: 'GET',
url: 'https://sellingpartnerapi-eu.amazon.com/vendor/orders/v1/purchaseOrders/?includeDetails=true&createdBefore=2020-04-20T14:00:00-08:00&createdAfter=2020-04-14T00:00:00-08:00',
headers: {
'Content-Type': 'application/json',
'host':'sellingpartnerapi-eu.amazon.com',
'x-amz-access-token':access_token,
'x-amz-date': '20200604T061745Z',
'user-agent':'My App(Language=Node.js;Platform=Windows/10)'
}
}
let signedRequest = aws4.sign(options,
{
secretAccessKey: secretAccessKey,
accessKeyId: accessKeyId,
})
console.dir("signedRequest");
console.dir(signedRequest);
delete signedRequest.headers['Host']
delete signedRequest.headers['Content-Length']
request(signedRequest, function(err, res, body) {
console.dir("err");
console.dir(err);
console.dir("res");
console.dir(res.body);
});
I would imagine the signature is calculated the same way it is calculated in other AWS services.
Signature Version 4 signing process docs.
@aws-sdk/signature-v4-node -- the official AWS SDK v3 signature package, though v3 is still very much in beta. I imagine they have figured out the signature package, as it wouldn't be possible to interact with the API without it.
aws-signature-v4 NodeJS package
My express server has a credentials.json containing credentials for a google service account. These credentials are used to get a jwt from google, and that jwt is used by my server to update google sheets owned by the service account.
var jwt_client = null;
// load credentials from a local file
fs.readFile('./private/credentials.json', (err, content) => {
if (err) return console.log('Error loading client secret file:', err);
// Authorize a client with credentials, then call the Google Sheets API.
authorize(JSON.parse(content));
});
// get JWT
function authorize(credentials) {
const {client_email, private_key} = credentials;
jwt_client = new google.auth.JWT(client_email, null, private_key, SCOPES);
}
var sheets = google.sheets({version: 'v4', auth: jwt_client });
// at this point i can call google api and make authorized requests
The issue is that I'm trying to move from Node/Express to the Serverless framework on AWS. I'm using the same code but getting 403 Forbidden.
errors:
[ { message: 'The request is missing a valid API key.',
domain: 'global',
reason: 'forbidden' } ] }
Research has pointed me to many things including: AWS Cognito, storing credentials in environment variables, custom authorizers in API gateway. All of these seem viable to me but I am new to AWS so any advice on which direction to take would be greatly appreciated.
It is late, but this may help someone else. Here is my working code.
const {google} = require('googleapis');
const KEY = require('./keys');
const _ = require('lodash');
const sheets = google.sheets('v4');
const jwtClient = new google.auth.JWT(
KEY.client_email,
null,
KEY.private_key,
[
'https://www.googleapis.com/auth/drive',
'https://www.googleapis.com/auth/drive.file',
'https://www.googleapis.com/auth/spreadsheets'
],
null
);
async function getGoogleSheetData() {
await jwtClient.authorize();
const request = {
// The ID of the spreadsheet to retrieve data from.
spreadsheetId: 'put your id here',
// The A1 notation of the values to retrieve.
range: 'put your range here', // TODO: Update placeholder value.
auth: jwtClient,
};
return await sheets.spreadsheets.values.get(request)
}
And then call it in the Lambda handler. One thing I don't like is storing key.json as a file in the project root. I will try to find a better place to save it.
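One option for the key-storage concern is to put the service-account JSON in a Lambda environment variable (or a secrets store) instead of a bundled file. This is only a sketch; the GOOGLE_CREDS variable name and the dummy values are my own assumptions:

```javascript
// Sketch: read service-account credentials from an environment variable
// instead of a key.json file bundled with the code.
function loadCredentials() {
  const raw = process.env.GOOGLE_CREDS; // assumed variable name
  if (!raw) throw new Error('GOOGLE_CREDS is not set');
  const { client_email, private_key } = JSON.parse(raw);
  return { client_email, private_key };
}

// Example usage with a dummy value (in Lambda, the variable would be set
// in the function configuration, not in code).
process.env.GOOGLE_CREDS = JSON.stringify({
  client_email: 'svc@example.iam.gserviceaccount.com',
  private_key: '-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n'
});
const creds = loadCredentials();
console.log(creds.client_email);
```

The returned client_email and private_key can then be passed to new google.auth.JWT exactly as in the code above.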
I added website authentication for an S3 bucket using a Lambda function, and then connected the Lambda function to CloudFront using the behavior settings in the distribution settings. It worked fine and added authentication (like htaccess authentication on a simple server). Now I want to change the password for my website authentication. For that, I updated the password and published a new version of the Lambda function, and then in the distribution settings I created a new invalidation to clear the cache. But it didn't work, and the website authentication password didn't change. Below is my Lambda function code that adds the authentication.
'use strict';
exports.handler = (event, context, callback) => {
// Get request and request headers
const request = event.Records[0].cf.request;
const headers = request.headers;
// Configure authentication
const authUser = 'user';
const authPass = 'pass';
// Construct the Basic Auth string
const authString = 'Basic ' + Buffer.from(authUser + ':' + authPass).toString('base64');
// Require Basic authentication
if (typeof headers.authorization == 'undefined' || headers.authorization[0].value != authString) {
const body = 'Unauthorized';
const response = {
status: '401',
statusDescription: 'Unauthorized',
body: body,
headers: {
'www-authenticate': [{key: 'WWW-Authenticate', value:'Basic'}]
},
};
return callback(null, response);
}
// Continue request processing if authentication passed
callback(null, request);
};
Can anyone please help me solve this problem? Thanks in advance.
In the Lambda function view, after you save your changes (using Firefox could be the safer option, see below for why), you will see a menu item under Configuration -> Designer -> CloudFront, with the following screens.
After you deploy:
You can publish your change to the CloudFront distribution. Once you publish it, deployment of the CloudFront distribution starts automatically, which you can watch in the CloudFront console.
Also, I would prefer using "Viewer Request" as the CloudFront trigger event (not sure which one you are using), as this should avoid CloudFront caching. On top of this, Chrome sometimes fails to save changes on Lambda; there seems to be a bug in the AWS console. Try Firefox just to be safe when you are editing Lambda functions.
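Separately from the deployment question, you can sanity-check a password change locally by invoking the handler with a mock viewer-request event before publishing. This sketch re-implements the same Basic-auth check with assumed credentials user/newpass:

```javascript
// Minimal re-implementation of the Basic-auth check from the question,
// invoked locally with a mock CloudFront viewer-request event.
const handler = (event, context, callback) => {
  const request = event.Records[0].cf.request;
  const headers = request.headers;
  const expected = 'Basic ' + Buffer.from('user:newpass').toString('base64');
  if (!headers.authorization || headers.authorization[0].value !== expected) {
    return callback(null, { status: '401', statusDescription: 'Unauthorized' });
  }
  // Authentication passed: let the request through unchanged.
  callback(null, request);
};

// Mock event carrying the correct credentials.
const event = {
  Records: [{ cf: { request: { headers: {
    authorization: [{ value: 'Basic ' + Buffer.from('user:newpass').toString('base64') }]
  } } } }]
};

handler(event, null, (err, result) => {
  console.log(result.status || 'request passed through');
});
```

If the local check behaves as expected but CloudFront still accepts the old password, the problem is the published version/trigger association rather than the code.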
I am trying to access a simple AWS IoT REST service, but I have not been able to do so successfully yet. Here is what I did:
I created an IAM user in my AWS account and downloaded the access key and secret key.
Logged into AWS IoT with that user and created a "thing".
From the thing's properties I found the REST URL for the shadow.
Used Postman with the new "AWS Signature" feature and provided it with the access key, secret key, region (us-east-1) and service name (iot).
Tried to GET the endpoint, and this is what I got:
{
"message": "Credential should be scoped to correct service. ",
"traceId": "be056198-d202-455f-ab85-805defd1260d"
}
I thought there was something wrong with Postman, so I tried the aws-sdk sample of connecting to S3 and changed it to connect to the IoT URL.
Here is my program snippet (Java)
String awsAccessKey = "fasfasfasdfsdafs";
String awsSecretKey = "asdfasdfasfasdfasdfasdf/asdfsdafsd/fsdafasdf";
URL endpointUrl = null;
String regionName = "us-east-1";
try {
endpointUrl = new URL("https://dasfsdfasdf.iot.us-east-1.amazonaws.com/things/SOMETHING/shadow");
}catch (Exception e){
e.printStackTrace();
}
Map<String, String> headers = new HashMap<String, String>();
headers.put("x-amz-content-sha256", AWSSignerBase.EMPTY_BODY_SHA256);
AWSSignerForAuthorizationHeader signer = new AWSSignerForAuthorizationHeader(
endpointUrl, "GET", "iot", regionName);
String authorization = signer.computeSignature(headers,
null, // no query parameters
AWSSignerBase.EMPTY_BODY_SHA256,
awsAccessKey,
awsSecretKey);
// place the computed signature into a formatted 'Authorization' header
// and call S3
headers.put("Authorization", authorization);
String response = HttpUtils.invokeHttpRequest(endpointUrl, "GET", headers, null);
System.out.println("--------- Response content ---------");
System.out.println(response);
System.out.println("------------------------------------");
This gives me the same error -
--------- Request headers ---------
x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
Authorization: AWS4-HMAC-SHA256 Credential=fasfasfasdfsdafs/20160212/us-east-1/iot/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=3b2194051a8dde8fe617219c78c2a79b77ec92338028e9e917a74e8307f4e914
x-amz-date: 20160212T182525Z
Host: dasfsdfasdf.iot.us-east-1.amazonaws.com
--------- Response content ---------
{"message":"Credential should be scoped to correct service. ","traceId":"cd3e0d96-82fa-4da5-a4e1-b736af6c5e34"}
------------------------------------
Can someone tell me what I am doing wrong, please? The AWS documentation does not have much information on this error. Please help.
Sign your request with iotdata instead of iot.
example:
AWSSignerForAuthorizationHeader signer = new AWSSignerForAuthorizationHeader(
endpointUrl, "GET", "iotdata", regionName);
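The error in the question comes from the credential scope embedded in the Authorization header; here is a small sketch of how the service name appears there (values illustrative, taken from the question's request):

```javascript
// The SigV4 credential scope must name the service that owns the endpoint.
// Data-plane (device shadow) calls are scoped to 'iotdata', not 'iot'.
function credentialScope(dateStamp, region, service) {
  return `${dateStamp}/${region}/${service}/aws4_request`;
}

console.log(credentialScope('20160212', 'us-east-1', 'iot'));     // what the question sent
console.log(credentialScope('20160212', 'us-east-1', 'iotdata')); // what the shadow endpoint expects
```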
In your 4th step, don't fill in anything for Service Name; Postman will then default the value to execute-api.
Hope this works!
It's basically because the service name is not given correctly; use service name 'iotdata' instead of 'iot'.
If you use Key Management Service, the service name would be kms.
For EC2, the service name would be ec2, etc.
Use the AWS IoT SDK for Node.js instead. Download the IoT-console-generated private key and client cert, as well as the CA root cert, from here. Start with the scripts in the examples directory.