I am trying to access a simple AWS IoT REST service, but I have not been able to do so successfully yet. Here is what I did:
1. I created an IAM user in my AWS account and downloaded the access key and secret key.
2. Logged into AWS IoT with that user and created a "thing".
3. From the thing's properties I found the REST URL for the shadow.
4. Used Postman with the new "aws signature" feature and provided it with the access key, secret key, region (us-east-1) and service name (iot).
5. Tried to "GET" the endpoint and this is what I got -
{
"message": "Credential should be scoped to correct service. ",
"traceId": "be056198-d202-455f-ab85-805defd1260d"
}
I thought there was something wrong with Postman, so I tried the aws-sdk-sample example of connecting to S3 and changed it to connect to the IoT URL.
Here is my program snippet (Java):
String awsAccessKey = "fasfasfasdfsdafs";
String awsSecretKey = "asdfasdfasfasdfasdfasdf/asdfsdafsd/fsdafasdf";
URL endpointUrl = null;
String regionName = "us-east-1";
try {
    endpointUrl = new URL("https://dasfsdfasdf.iot.us-east-1.amazonaws.com/things/SOMETHING/shadow");
} catch (Exception e) {
    e.printStackTrace();
}

Map<String, String> headers = new HashMap<String, String>();
headers.put("x-amz-content-sha256", AWSSignerBase.EMPTY_BODY_SHA256);

AWSSignerForAuthorizationHeader signer = new AWSSignerForAuthorizationHeader(
        endpointUrl, "GET", "iot", regionName);
String authorization = signer.computeSignature(headers,
        null, // no query parameters
        AWSSignerBase.EMPTY_BODY_SHA256,
        awsAccessKey,
        awsSecretKey);

// place the computed signature into a formatted 'Authorization' header
// and call the IoT endpoint
headers.put("Authorization", authorization);
String response = HttpUtils.invokeHttpRequest(endpointUrl, "GET", headers, null);

System.out.println("--------- Response content ---------");
System.out.println(response);
System.out.println("------------------------------------");
This gives me the same error -
--------- Request headers ---------
x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
Authorization: AWS4-HMAC-SHA256 Credential=fasfasfasdfsdafs/20160212/us-east-1/iot/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=3b2194051a8dde8fe617219c78c2a79b77ec92338028e9e917a74e8307f4e914
x-amz-date: 20160212T182525Z
Host: dasfsdfasdf.iot.us-east-1.amazonaws.com
--------- Response content ---------
{"message":"Credential should be scoped to correct service. ","traceId":"cd3e0d96-82fa-4da5-a4e1-b736af6c5e34"}
------------------------------------
Can someone tell me what I am doing wrong, please? The AWS documentation does not have much information on this error.
Sign your request with iotdata instead of iot.
Example:

AWSSignerForAuthorizationHeader signer = new AWSSignerForAuthorizationHeader(
        endpointUrl, "GET", "iotdata", regionName);
In your 4th step, don't fill in anything for Service Name. Postman will default the value to execute-api.
Hope this works!
It's basically because the service name is not given correctly: use service name 'iotdata' instead of 'iot'.
If you use Key Management Service, the service name would be kms.
For EC2, the service name would be ec2, and so on.
Use the AWS IoT SDK for Node.js instead. Download the private key and client certificate generated by the IoT console, as well as the CA root certificate, from here. Start with the scripts in the examples directory.
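For anyone starting from that suggestion, here is a minimal sketch using the aws-iot-device-sdk package's thingShadow client. The certificate file paths and client ID below are placeholders; the endpoint and thing name are reused from the question.

// Minimal sketch with aws-iot-device-sdk (v1). Paths and clientId are placeholders.
const awsIot = require('aws-iot-device-sdk');

const thingShadows = awsIot.thingShadow({
  keyPath: './private.pem.key',      // private key from the IoT console
  certPath: './certificate.pem.crt', // client certificate from the IoT console
  caPath: './root-CA.pem',           // CA root certificate
  clientId: 'my-shadow-client',
  host: 'dasfsdfasdf.iot.us-east-1.amazonaws.com' // your IoT endpoint
});

thingShadows.on('connect', () => {
  thingShadows.register('SOMETHING', {}, () => {
    // get() returns a client token; the shadow document arrives via the 'status' event
    thingShadows.get('SOMETHING');
  });
});

thingShadows.on('status', (thingName, stat, clientToken, stateObject) => {
  console.log(`Shadow ${stat} for ${thingName}:`, JSON.stringify(stateObject));
});

Note that the shadow document is delivered through the 'status' event rather than as a return value of get().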
Related
I'm trying to set up Google Cloud Translation in a Firebase Cloud Function. I'm using the demo code provided by Google Cloud Translation:
// Imports the Google Cloud Translation client library
const { TranslationServiceClient } = require('@google-cloud/translate');

// Instantiates a client
const translationClient = new TranslationServiceClient();

const projectId = 'languagetwo-cd94d';
const location = 'global';
const text = 'Hello, world!';

async function translateText() {
  // Construct request
  const request = {
    parent: `projects/${projectId}/locations/${location}`,
    contents: [text],
    mimeType: 'text/plain', // mime types: text/plain, text/html
    sourceLanguageCode: 'en',
    targetLanguageCode: 'es',
  };

  // Run request
  const [response] = await translationClient.translateText(request);

  for (const translation of response.translations) {
    console.log(`Translation: ${translation.translatedText}`);
  }
}

translateText();
This demo tutorial makes a second file called key.json:
{
  "type": "service_account",
  "project_id": "myAwesomeApp",
  "private_key_id": "1234567890",
  "private_key": "-----BEGIN PRIVATE KEY-----\noPeeking=\n-----END PRIVATE KEY-----\n",
  "client_email": "translation-quickstart@myAwesomeApp.iam.gserviceaccount.com",
  "client_id": "1234567890",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/translation-quickstart%40myAwesomeApp.iam.gserviceaccount.com"
}
I uploaded my credentials from the CLI:
gcloud auth login
gcloud iam service-accounts create translation-quickstart --project myAwesomeApp
gcloud projects add-iam-policy-binding myAwesomeApp
gcloud iam service-accounts keys \
create key.json --iam-account \
translation-quickstart@myAwesomeApp.iam.gserviceaccount.com
export GOOGLE_APPLICATION_CREDENTIALS=key.json
I then entered node app.js at the CLI and it runs perfectly. ¡Hola Mundo!
How do I import my credentials into a Firebase Cloud Function? I tried this:
// Triggers when the browser writes a request word to the database
exports.ENtranslateES = functions.firestore.document('Users/{userID}/English/Translation_Request').onUpdate((change) => {

  // Google Cloud
  const { TranslationServiceClient } = require('@google-cloud/translate');

  // Instantiates a client
  const translationClient = new TranslationServiceClient();

  const projectId = 'languagetwo-cd94d';
  const location = 'global';
  const text = 'Hello, world!';

  async function translateText() {
    // Construct request
    const request = {
      parent: `projects/${projectId}/locations/${location}`,
      contents: [text],
      mimeType: 'text/plain', // mime types: text/plain, text/html
      sourceLanguageCode: 'en',
      targetLanguageCode: 'es',
    };

    // Run request
    const [response] = await translationClient.translateText(request);

    for (const translation of response.translations) {
      console.log(`Translation: ${translation.translatedText}`);
    }
  }

  return translateText();
});
I only added a return at the bottom, because Firebase Cloud Functions require that something be returned.
The result is that the function triggers and translateText() fires. Then I get an error message:
Error: 7 PERMISSION_DENIED: Cloud IAM permission
That looks like the credentials weren't imported. How do I import the key.json credentials into the Firebase Cloud Function?
Normally, you do not import a service account into a Google compute service such as Cloud Functions. Those services have an attached service account. There are methods of securely storing a service account using services like Google Cloud Secret Manager. In your case there is a better solution.
The following line in your source code uses the Cloud Function's attached service account, which defaults to the App Engine default service account PROJECT_ID@appspot.gserviceaccount.com:
const translationClient = new TranslationServiceClient();
Since you did not specify a credential when creating the translationClient, ADC (Application Default Credentials) searches for credentials. In your example, the search found valid credentials from the Cloud Function service account.
The solution is to add the required role to that service account.
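As a sketch of what that could look like for this project (assuming the Cloud Translation API User role is sufficient for translateText, and using the project ID from the question):

# grant the Cloud Translation user role to the Cloud Function's attached service account
gcloud projects add-iam-policy-binding languagetwo-cd94d \
    --member="serviceAccount:languagetwo-cd94d@appspot.gserviceaccount.com" \
    --role="roles/cloudtranslate.user"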
If you want to use the service account that you created instead, attach that service account's identity (email address) to the Cloud Function.
Access control with IAM
I got Google Cloud Translate to work in Postman. This is a step towards an answer, not a full answer. (Google has its own version of Postman, called Google Cloud API, but it doesn't work with Translate.)
I followed this blog post to set up Google Cloud API in Postman. I started at my Google Cloud Console. I selected my project. I clicked APIs & Services, then Credentials, then + Create Credentials, then OAuth client ID. Under Application type I selected Web application. I named the client ID Postman. Lastly I added an Authorized redirect URI: https://console.cloud.google.com/. Now when I click on my Postman API in my Google Cloud Console I see a Client ID and a Client secret.
In Postman, I changed GET to POST and entered the URL from the Google Cloud Translation page:
https://cloud.google.com/translate/docs/reference/rest/v2/translate
Under the Authorization tab I put in:
Token Name: GCP Token
Grant Type: Authorization Code
Callback URL: https://www.getpostman.com/oauth2/callback
Auth URL: https://accounts.google.com/o/oauth2/auth
Access Token URL: https://accounts.google.com/o/oauth2/token
Client ID: 1234567890abc.apps.googleusercontent.com
Client Secret: ABCDE-NoPeeking-1234567890
Scope: https://www.googleapis.com/auth/cloud-platform
State:
Client Authorization: Send as Basic Auth header
I then clicked Get New Access Token and an access token appeared at the top of all this. The token is good for one hour.
Under Params I entered:
q: rain
target: es
source: en
Google Cloud Translate returned lluvia.
Now I know what the auth properties and query parameters are. I don't know how to put them into a Firebase Cloud Function.
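For reference, here is a rough Node.js sketch of the same v2 call outside of Postman (and not yet inside a Cloud Function). It assumes Node 18+ for the built-in fetch and a hypothetical ACCESS_TOKEN environment variable holding the OAuth token that Postman obtained.

// Rough sketch: the same v2 translate call as the Postman request above.
// ACCESS_TOKEN is assumed to hold a valid OAuth 2.0 access token.
const ACCESS_TOKEN = process.env.ACCESS_TOKEN;

(async () => {
  const params = new URLSearchParams({ q: 'rain', target: 'es', source: 'en' });
  const response = await fetch(
    `https://translation.googleapis.com/language/translate/v2?${params}`,
    {
      method: 'POST',
      headers: { Authorization: `Bearer ${ACCESS_TOKEN}` },
    }
  );
  const result = await response.json();
  console.log(result.data.translations[0].translatedText); // expected: "lluvia"
})();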
I'm developing a JavaScript (browser) client for HTTP APIs in AWS API Gateway. The APIs use an IAM authorizer. In my JavaScript app I log in through a Cognito identity pool (developer identity). Next, I convert the OpenID token into an access key ID, secret access key and session token using AWS.CognitoIdentityCredentials.
I then want to use these credentials to make the API call, using the code below. I see the call being executed, but I get an HTTP 403 error back. The reply does not contain any further indication of the cause. I'd appreciate any help in understanding what is going wrong. When the IAM authorizer is disabled, the HTTP API works nicely.
I also tried the JWT authorizer, passing the OpenID token received from the Cognito Identity Pool (using http://cognito-identity.amazon.com as provider). When doing so I get the error: Bearer scope="" error="invalid_token" error_description="unable to decode "n" from RSA public key" in the www-authenticate response header.
Thanks a lot.
// Credentials will be available when this function is called.
var accessKeyId = AWS.config.credentials.accessKeyId;
var secretAccessKey = AWS.config.credentials.secretAccessKey;
var sessionToken = AWS.config.credentials.sessionToken;

let test_url = 'https://xxxxxxx.execute-api.eu-central-1.amazonaws.com/yyyyyy';

var httpRequest = new AWS.HttpRequest(test_url, "eu-central-1");
httpRequest.method = "GET";

AWS.config.credentials = {
    accessKeyId: accessKeyId,
    secretAccessKey: secretAccessKey,
    sessionToken: sessionToken
}

var v4signer = new AWS.Signers.V4(httpRequest, "execute-api");
v4signer.addAuthorization(AWS.config.credentials, AWS.util.date.getDate());

fetch(httpRequest.endpoint.href, {
    method: httpRequest.method,
    headers: httpRequest.headers,
    //body: httpRequest.body
}).then(function (response) {
    if (!response.ok) {
        $('body').html("ERROR: " + JSON.stringify(response.blob()));
        return;
    }
    $('body').html("SUCCESS: " + JSON.stringify(response.blob()));
});
After some debugging and searching the web, I found the solution. Generating a correct signature seems to require a 'host' header:
httpRequest.headers.host = 'xxxxxxx.execute-api.eu-central-1.amazonaws.com'
After adding this host header the API call succeeds.
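Putting that together with the question's snippet, the order of operations would be roughly as follows (same placeholder endpoint as above; the key point is that the host header is set before addAuthorization is called):

// Sketch: set the host header before signing, then send the signed request.
var test_url = 'https://xxxxxxx.execute-api.eu-central-1.amazonaws.com/yyyyyy';

var httpRequest = new AWS.HttpRequest(test_url, "eu-central-1");
httpRequest.method = "GET";
httpRequest.headers.host = 'xxxxxxx.execute-api.eu-central-1.amazonaws.com';

var v4signer = new AWS.Signers.V4(httpRequest, "execute-api");
v4signer.addAuthorization(AWS.config.credentials, AWS.util.date.getDate());

fetch(httpRequest.endpoint.href, {
    method: httpRequest.method,
    headers: httpRequest.headers,
}).then(function (response) {
    console.log(response.status); // 200 once the signature covers the host header
});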
I am using the API method "Directly making calls to Selling partner APIs".
I followed all the steps correctly:
1. Request a Login with Amazon access token. This API works successfully and I get an access token.
2. Construct a Selling Partner API URI. After that, I took the access_token and passed it in the request headers as 'x-amz-access-token'.
3. Add headers to the URI.
4. Create and sign your request.
After calling the API https://sellingpartnerapi-eu.amazon.com/vendor/orders/v1/purchaseOrders, I am getting a signature error in the response:
The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method.
My code:
let options = {
    service: 'execute-api',
    region: 'eu-west-1',
    method: 'GET',
    url: 'https://sellingpartnerapi-eu.amazon.com/vendor/orders/v1/purchaseOrders/?includeDetails=true&createdBefore=2020-04-20T14:00:00-08:00&createdAfter=2020-04-14T00:00:00-08:00',
    headers: {
        'Content-Type': 'application/json',
        'host': 'sellingpartnerapi-eu.amazon.com',
        'x-amz-access-token': access_token,
        'x-amz-date': '20200604T061745Z',
        'user-agent': 'My App(Language=Node.js;Platform=Windows/10)'
    }
}

let signedRequest = aws4.sign(options,
    {
        secretAccessKey: secretAccessKey,
        accessKeyId: accessKeyId,
    })

console.dir("signedRequest");
console.dir(signedRequest);

delete signedRequest.headers['Host']
delete signedRequest.headers['Content-Length']

request(signedRequest, function(err, res, body) {
    console.dir("err");
    console.dir(err);
    console.dir("res");
    console.dir(res.body);
});
I would imagine the signature is calculated the same way it is calculated in other AWS services.
Signature Version 4 signing process docs.
@aws-sdk/signature-v4-node -- the official AWS SDK v3 signature package. v3 is still very much in beta, but I imagine they have figured out the signature package, as it wouldn't be possible to interact with the API without it.
aws-signature-v4 Node.js package
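For illustration, here is a minimal sketch with the aws4 package that passes the signed options straight to Node's built-in https module and leaves the generated Host header in place. The environment variable names are hypothetical; access_token, accessKeyId and secretAccessKey correspond to the values used in the question.

// Sketch: sign with aws4 and send with Node's built-in https module.
const aws4 = require('aws4');
const https = require('https');

// Hypothetical sources for the secrets; substitute your own handling.
const access_token = process.env.SPAPI_ACCESS_TOKEN;
const accessKeyId = process.env.AWS_ACCESS_KEY_ID;
const secretAccessKey = process.env.AWS_SECRET_ACCESS_KEY;

const opts = {
  host: 'sellingpartnerapi-eu.amazon.com',
  service: 'execute-api',
  region: 'eu-west-1',
  method: 'GET',
  path: '/vendor/orders/v1/purchaseOrders?createdAfter=2020-04-14T00:00:00-08:00',
  headers: {
    'x-amz-access-token': access_token,
    'user-agent': 'My App(Language=Node.js;Platform=Windows/10)',
  },
};

// aws4 adds Host, X-Amz-Date and Authorization headers to opts in place.
aws4.sign(opts, { accessKeyId, secretAccessKey });

https
  .request(opts, (res) => {
    let body = '';
    res.on('data', (chunk) => (body += chunk));
    res.on('end', () => console.log(res.statusCode, body));
  })
  .end();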
My Express server has a credentials.json file containing credentials for a Google service account. These credentials are used to get a JWT from Google, and that JWT is used by my server to update Google Sheets owned by the service account.
const fs = require('fs');
const { google } = require('googleapis');

var jwt_client = null;

// load credentials from a local file
fs.readFile('./private/credentials.json', (err, content) => {
    if (err) return console.log('Error loading client secret file:', err);
    // Authorize a client with credentials, then call the Google Sheets API.
    authorize(JSON.parse(content));
});

// get JWT
function authorize(credentials) {
    const { client_email, private_key } = credentials;
    jwt_client = new google.auth.JWT(client_email, null, private_key, SCOPES);
}

var sheets = google.sheets({ version: 'v4', auth: jwt_client });

// at this point I can call the Google API and make authorized requests
The issue is that I'm trying to move from Node/Express to the Serverless framework (npm serverless) on AWS. I'm using the same code but getting 403 Forbidden.
errors:
[ { message: 'The request is missing a valid API key.',
domain: 'global',
reason: 'forbidden' } ] }
Research has pointed me to many things, including AWS Cognito, storing credentials in environment variables, and custom authorizers in API Gateway. All of these seem viable to me, but I am new to AWS, so any advice on which direction to take would be greatly appreciated.
It is late, but it may help someone else. Here is my working code.
const { google } = require('googleapis');
const KEY = require('./keys');
const _ = require('lodash');

const sheets = google.sheets('v4');

const jwtClient = new google.auth.JWT(
    KEY.client_email,
    null,
    KEY.private_key,
    [
        'https://www.googleapis.com/auth/drive',
        'https://www.googleapis.com/auth/drive.file',
        'https://www.googleapis.com/auth/spreadsheets'
    ],
    null
);

async function getGoogleSheetData() {
    await jwtClient.authorize();

    const request = {
        // The ID of the spreadsheet to retrieve data from.
        spreadsheetId: 'put your id here',
        // The A1 notation of the values to retrieve.
        range: 'put your range here', // TODO: Update placeholder value.
        auth: jwtClient,
    };

    return await sheets.spreadsheets.values.get(request);
}
And then call it in the Lambda handler. One thing I don't like is storing key.json as a file in the project root; I will try to find a better place to keep it.
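For completeness, the handler wiring might look roughly like this (the handler name and response shape are illustrative, not part of the original answer):

// Illustrative Lambda handler that calls the helper above.
exports.handler = async (event) => {
  const result = await getGoogleSheetData();
  return {
    statusCode: 200,
    body: JSON.stringify(result.data.values),
  };
};

As for the key file, one option already mentioned in the question is environment variables: the key JSON could be stored in one (or in a secrets manager) and parsed at startup with something like JSON.parse(process.env.GOOGLE_SERVICE_KEY), where GOOGLE_SERVICE_KEY is a hypothetical variable name.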
I have a serverless website on AWS S3, but S3 has a limitation that I want to overcome: it doesn't allow me to have friendly URLs.
For example, I would like to replace URL:
www.mywebsite.com/user.html?login=daniel
With this URL friendly:
www.mywebsite.com/user/daniel
So, I would like to know if I can use Lambda together with API Gateway to achieve this.
My idea is:
API Gateway ---> Lambda function ---> fetch S3 resource
API Gateway will receive ANY request and pass the information to a Lambda function, which will process some logic using the request URL (maybe including a database query) and then fetch the resource from S3.
I know AWS API Gateway's main purpose is to be a gateway to REST APIs, but can we also use it as a proxy for an entire website?
A good option can be to use CloudFront as a reverse proxy: you can use a viewer/origin request or response event to trigger a Lambda function and fetch the resource from S3.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-examples.html
https://aws.amazon.com/blogs/networking-and-content-delivery/amazon-s3-amazon-cloudfront-a-match-made-in-the-cloud/
It is possible to use API Gateway as a reverse proxy for a S3 website.
I was able to do that by following the steps below:
1. In AWS API Gateway, create a "proxy resource" with resource path = "{proxy+}".
2. Go to AWS Certificate Manager and request a wildcard certificate for your website (*.mywebsite.com).
3. AWS will tell you to create a CNAME record with your domain registrar, to verify that you own the domain.
4. After your certificate is validated, go to AWS API Gateway and create a Custom Domain Name (click on "Custom Domain Names" and then "Create Custom Domain Name"). In "domain name" type your domain (www.mywebsite.com) and select the ACM certificate that you just created. Create a "Base Path Mapping" with path = "/" and, in "destination", select your API and stage.
5. After that, you will need to add another CNAME record, with the CloudFront "Target Domain Name" that was generated for that Custom Domain Name.
In the Lambda, we can route the requests:
'use strict';

const AWS = require('aws-sdk');
const s3 = new AWS.S3();
const myBucket = 'myBucket';

exports.handler = async (event) => {
    var responseBody = "";

    if (event.path == "/") {
        responseBody = "<h1>My Landing Page</h1>";
        responseBody += "<a href='/xpto'>link to another page</a>";
        return buildResponse(200, responseBody);
    }

    if (event.path == "/xpto") {
        responseBody = "<h1>Another Page</h1>";
        responseBody += "<a href='/'>home</a>";
        return buildResponse(200, responseBody);
    }

    if (event.path == "/my-s3-resource") {
        var params = {
            Bucket: myBucket,
            Key: 'path/to/my-s3-resource.html',
        };
        const data = await s3.getObject(params).promise();
        return buildResponse(200, data.Body.toString('utf-8'));
    }

    return buildResponse(404, '404 Error');
};

function buildResponse(statusCode, responseBody) {
    var response = {
        "isBase64Encoded": false,
        "statusCode": statusCode,
        "headers": {
            "Content-Type": "text/html; charset=utf-8"
        },
        "body": responseBody,
    };
    return response;
}
A good bet would be to use CloudFront and Lambda@Edge.
Lambda@Edge allows you to run Lambda functions in the edge locations of the CloudFront CDN network.
CloudFront gives you the option to hook into various events during its lifecycle and apply logic.
This article looks like it might be describing something similar to what you're talking about.
https://aws.amazon.com/blogs/networking-and-content-delivery/implementing-default-directory-indexes-in-amazon-s3-backed-amazon-cloudfront-origins-using-lambdaedge/
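To make that concrete, here is a hypothetical Lambda@Edge origin-request handler that rewrites the question's friendly URL /user/daniel into /user.html?login=daniel before CloudFront forwards the request to the S3 origin (a sketch, assuming the /user/<login> pattern from the question):

'use strict';

// Hypothetical origin-request handler: rewrites /user/<login> to
// /user.html?login=<login> before CloudFront forwards the request to S3.
exports.handler = async (event) => {
  const request = event.Records[0].cf.request;
  const match = request.uri.match(/^\/user\/([^/]+)$/);

  if (match) {
    request.uri = '/user.html';
    request.querystring = 'login=' + encodeURIComponent(match[1]);
  }

  return request; // returning the request lets CloudFront continue to the origin
};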