Message":"User: anonymous is not authorized to perform: iam:PassRole - amazon-web-services

I am following the link below for "Use Amazon S3 to Store a Single Amazon Elasticsearch Service Index":
https://aws.amazon.com/blogs/database/use-amazon-s3-to-store-a-single-amazon-elasticsearch-service-index/
When I try
curl -XPUT 'http://localhost:9200/_snapshot/snapshot-repository' -d'{
    "type": "s3",
    "settings": {
        "bucket": "es-s3-repository",
        "region": "us-west-2",
        "role_arn": "arn:aws:iam::123456789012:role/es-s3-repository"
    }
}'
with the bucket, region, and role_arn updated, but I am getting the error below:
{"Message":"User: anonymous is not authorized to perform: iam:PassRole on resource: arn:aws:iam...}
To resolve this issue, I also followed this link: https://aws.amazon.com/premiumsupport/knowledge-center/anonymous-not-authorized-elasticsearch/ but it is still not working.

You need to sign your requests to AWS Elasticsearch. The blog post that you linked describes using a proxy server to create the signature; did you do that?
As an alternative to using such a proxy server with curl, you can make the requests from a program. The AWS Elasticsearch docs give you an example in Python, with a link to a Java client.
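For example, here is a rough sketch of a signed request in Python, using the third-party requests and requests-aws4auth packages (the package choice, domain endpoint, and credentials are assumptions, not from the original post; the payload is the one from the question):

import requests
from requests_aws4auth import AWS4Auth  # assumed helper: pip install requests requests-aws4auth

# Credentials of an IAM identity that is allowed to pass the snapshot role (placeholders)
awsauth = AWS4Auth('ACCESS_KEY', 'SECRET_KEY', 'us-west-2', 'es')

# Placeholder Amazon ES domain endpoint
host = 'https://your-domain.us-west-2.es.amazonaws.com'

payload = {
    "type": "s3",
    "settings": {
        "bucket": "es-s3-repository",
        "region": "us-west-2",
        "role_arn": "arn:aws:iam::123456789012:role/es-s3-repository"
    }
}

# Signed PUT that registers the snapshot repository
r = requests.put(host + '/_snapshot/snapshot-repository', auth=awsauth, json=payload)
print(r.status_code, r.text)

Separately, the iam:PassRole part of the error means the identity whose credentials sign the request must itself be allowed to pass the snapshot role, along the lines of this sketch (using the role ARN from the question):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::123456789012:role/es-s3-repository"
        }
    ]
}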

Related

AWS AccessDenied when calling the UploadServerCertificate

I ran into a problem with an AWS instance when I was trying to import a self-signed SSL certificate into the IAM console, following this tutorial -> https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https-ssl.html
Basically, the tutorial shows how to self-sign a certificate and upload it to IAM so the application can serve HTTPS for testing purposes.
I SSHed into my instance and ran all those commands, but at the end, when I need to import the certificate, I get an error that my account is not authorized...
An error occurred (AccessDenied) when calling the
UploadServerCertificate operation: User:
arn:aws:sts::xxxxxxxxx:assumed-role/aws-elasticbeanstalk-ec2-role/xxxxxxx
is not authorized to perform: iam:UploadServerCertificate on resource:
arn:aws:iam::xxxxxxxxx:server-certificate/elastic-beanstalk-x509
I'm logged into the instance as ec2-user because I didn't find a way to log in as any other user...
I tried running the command with sudo and nothing changed. On a similar post I saw that I need to create a specific IAM user and attach a group policy granting "IAMFullAccess". But I don't understand how I can specify that I want to run this command as that user, since I am logged in as ec2-user over SSH...
You need to do some reading: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html
Create an IAM role with Upload permission
Add a trust policy to the role that allows it to be assumed by your EC2 instance
Attach the role to the EC2 instance
From your error it seems that you are using Elastic Beanstalk. This means that you already have a role that is assumed by your EC2 instance. Find this role (xxxxx in the error message) and add the appropriate permissions, for example:
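A sketch of a permissions policy that would cover the upload (the account ID and certificate path mirror the ones in the error message; adjust to your setup):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:UploadServerCertificate",
            "Resource": "arn:aws:iam::xxxxxxxxx:server-certificate/*"
        }
    ]
}

Attach it to the role your instance already assumes (aws-elasticbeanstalk-ec2-role in the error message).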
Okay, I have managed to add the certificate to the instance...
aws iam list-server-certificates
{
    "ServerCertificateMetadataList": [
        {
            "ServerCertificateId": "id",
            "ServerCertificateName": "elastic-beanstalk-x509",
            "Expiration": "2022-10-21T13:07:11Z",
            "Path": "/",
            "Arn": "arn",
            "UploadDate": "2021-10-21T13:42:39Z"
        }
    ]
}
I also added a listener and process under "Modify Application Load Balancer", but the site is still not responding to HTTPS requests... Any idea?

AWS Appsync Http resolver for IOT device shadow

I'm trying (in vain) to get a device shadow through AppSync HTTP resolvers.
{
    "version": "2018-05-29",
    "method": "GET",
    "resourcePath": "/things/${ctx.args.id}/shadow",
    "params": {
        "headers": $utils.toJson($utils.http.copyHeaders($ctx.request.headers))
    }
}
All I'm managing to get as a response is "Credential should be scoped to correct service".
I can see that the Authorization header for the call contains
"Credential = ---/---/eu-west-1/appsync/aws4_request"
When I call GET "deviceShadow" as REST in my application today (which works), the same values are
"Credential = ---/---/eu-west-1/iotdata/aws4_request"
So it seems like appsync is being set as the signing service, and that is messing up the call?
Any tips on how to get this working?
I think you'll need to add a role and IAM signing configuration to the Data Source. Perform the following steps with the AWS CLI.
Attach an IAM role to the data source that grants the appropriate permissions to call the IoT Device Shadow operations. I think it's iot:GetThingShadow for this example.
Add an IAM configuration section to the AWS AppSync Data Source. This is NOT the resolver template.
{
    "endpoint": "https://<iot-endpoint>",
    "authorizationConfig": {
        "authorizationType": "AWS_IAM",
        "awsIamConfig": {
            "signingRegion": "eu-west-1",
            "signingServiceName": "iot"
        }
    }
}
When AWS AppSync invokes your resolver, it will generate a SigV4 signature using the attached role and call the AWS IoT Device Shadow service. Try this out.
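For example, updating an existing HTTP data source with the AWS CLI might look roughly like this (the API ID, data source name, and role ARN are placeholders, not taken from the question):

aws appsync update-data-source \
    --api-id <appsync-api-id> \
    --name IotShadowDataSource \
    --type HTTP \
    --service-role-arn arn:aws:iam::<account-id>:role/<appsync-iot-shadow-role> \
    --http-config '{
        "endpoint": "https://<iot-endpoint>",
        "authorizationConfig": {
            "authorizationType": "AWS_IAM",
            "awsIamConfig": {
                "signingRegion": "eu-west-1",
                "signingServiceName": "iot"
            }
        }
    }'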

AWS API Gateway WebSocket Connection Error

I created an API with AWS API Gateway and Lambda, the same as 'https://github.com/aws-samples/simple-websockets-chat-app'. But the API is not working. I get an error when I try to connect. Its message is "WebSocket connection to 'wss://b91xftxta9.execute-api.eu-west-1.amazonaws.com/dev' failed: Error during WebSocket handshake: Unexpected response code: 500"
My connection code:
var ws = new WebSocket("wss://b91xftxta9.execute-api.eu-west-1.amazonaws.com/dev");
ws.onopen = function(d) {
    console.log(d);
};
Try adding $context.error.validationErrorString and $context.integrationErrorMessage to the logs for the stage.
I added a bunch of stuff to the Log Format section, like this:
{ "requestId":"$context.requestId", "ip": "$context.identity.sourceIp",
"requestTime":"$context.requestTime", "httpMethod":"$context.httpMethod",
"routeKey":"$context.routeKey", "status":"$context.status",
"protocol":"$context.protocol", "errorMessage":"$context.error.message",
"path":"$context.path",
"authorizerPrincipalId":"$context.authorizer.principalId",
"user":"$context.identity.user", "caller":"$context.identity.caller",
"validationErrorString":"$context.error.validationErrorString",
"errorResponseType":"$context.error.responseType",
"integrationErrorMessage":"$context.integrationErrorMessage",
"responseLength":"$context.responseLength" }
In early development this allowed me to see this type of error:
{
    "requestId": "QDu0QiP3oANFPZv=",
    "ip": "76.54.32.210",
    "requestTime": "21/Jul/2020:21:37:31 +0000",
    "httpMethod": "POST",
    "routeKey": "$default",
    "status": "500",
    "protocol": "HTTP/1.1",
    "integrationErrorMessage": "The IAM role configured on the integration or API Gateway doesn't have permissions to call the integration. Check the permissions and try again.",
    "responseLength": "35"
}
Try using wscat -c wss://b91xftxta9.execute-api.eu-west-1.amazonaws.com/dev in a terminal. This should allow you to connect to it. If you don't have wscat installed, just run npm install -g wscat.
To get more details, enable logging for your API: Stages -> Logs/Tracing -> CloudWatch Settings -> Enable CloudWatch Logs. Then send a connection request again and monitor your API logs in CloudWatch. In my case, I had the following error:
Execution failed due to configuration error: API Gateway does not have permission to assume the provided role {arn_of_my_role}
So I added API Gateway to my role's trust relationships, as mentioned here, and it fixed the problem.
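For reference, the trust relationship that allows API Gateway to assume a role generally looks like this (a standard sketch, not copied from the poster's setup):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "apigateway.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}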

Permission Denied When Making Request to GCP Video Intelligence API

So I am able to make a valid request to the Video Intelligence API with the sample video given in the quickstart: https://cloud.google.com/video-intelligence/docs/getting-started. I have tried many different ways of authenticating to the API as well. The API token I am using was created from the Credentials page in the console. There is no option to tie it to the Video API, so I figured it should automatically work. The API has been enabled on my account.
export TOKEN="foobar"
curl -XPOST -s -k -H"Content-Type: application/json" "https://videointelligence.googleapis.com/v1beta1/videos:annotate?key=$TOKEN" --data '{"inputUri": "gs://custom-bucket/IMG_3591.mov", "features": ["LABEL_DETECTION"]}'
{
    "error": {
        "code": 403,
        "message": "The caller does not have permission",
        "status": "PERMISSION_DENIED"
    }
}
curl -XPOST -s -k -H"Content-Type: application/json" "https://videointelligence.googleapis.com/v1beta1/videos:annotate?key=$TOKEN" --data '{"inputUri": "gs://cloud-ml-sandbox/video/chicago.mp4", "features": ["LABEL_DETECTION"]}'
{
    "name": "us-east1.18013173402060296928"
}
Update:
I set the file as public and it worked. But I need to access it as private, so I gave the service account access to the file and tried to get the API key as suggested.
export TOKEN="$(gcloud auth print-access-token)"
curl -XPOST -s -k -H"Content-Type: application/json" "https://videointelligence.googleapis.com/v1beta1/videos:annotate?key=$TOKEN" --data '{"inputUri": "gs://custom-bucket/IMG_3591.mov", "features":["LABEL_DETECTION"]}'
{
    "error": {
        "code": 400,
        "message": "API key not valid. Please pass a valid API key.",
        "status": "INVALID_ARGUMENT",
        "details": [
            {
                "@type": "type.googleapis.com/google.rpc.Help",
                "links": [
                    {
                        "description": "Google developers console",
                        "url": "https://console.developers.google.com"
                    }
                ]
            }
        ]
    }
}
It seems like the token returned by this print-access-token function does not work. I do have an API key, but it does not have access to the bucket and I don't see a way to give an API key access.
Update 2:
So it looks like we were setting our token wrong. We were following this example https://cloud.google.com/video-intelligence/docs/analyze-labels#videointelligence-label-file-protocol, which is where we got the apiKey=$TOKEN from. But it looks like we needed to set the Bearer header instead. We did try this at first, but we were hitting the first issue of not having access to the bucket. So thank you.
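For reference, passing the token in an Authorization header instead of the key query parameter would look roughly like this (a sketch based on the request in the question; the bucket, file, and v1beta1 endpoint are the ones from the post):

export TOKEN="$(gcloud auth print-access-token)"
curl -XPOST -s -H "Content-Type: application/json" \
     -H "Authorization: Bearer $TOKEN" \
     "https://videointelligence.googleapis.com/v1beta1/videos:annotate" \
     --data '{"inputUri": "gs://custom-bucket/IMG_3591.mov", "features": ["LABEL_DETECTION"]}'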
TL;DR - Video Intelligence service is unable to access the file on your Cloud storage bucket because of lack of permissions. Since the API uses the permissions of the service account token being passed, you will need to grant your service account permissions to read the file in the GCS bucket or the entire GCS bucket itself.
Long version
The access token you pass should correspond to an IAM service account key. The service account will belong to a project (where you need to enable the Video intelligence API access) and the service account should have permissions to access the GCS bucket you're trying to access.
Each such service account has an associated email id in the form SERVICE_ACCOUNT_NAME@PROJECT_NAME.iam.gserviceaccount.com.
In Cloud console, you can go to the Cloud storage bucket/file and grant Reader permissions for the IAM service account email address. There is no need to make this bucket public.
If you use gsutil, you can run the following equivalent command:
gsutil acl ch -u SERVICE_ACCOUNT_NAME@PROJECT_NAME.iam.gserviceaccount.com:READ gs://custom-bucket/IMG_3591.mov
I confirmed this myself with an IAM service account that I created in my project and used this to invoke the video intelligence API. The file was not made public, but granted Reader permissions only to the service account.
I used gcloud to activate the service account and fetch the access token, although you can do this manually as well using the google OAuth APIs:
gcloud auth activate-service-account SERVICE_ACCOUNT_KEY.json
export TOKEN="$(gcloud auth print-access-token)"
The steps for creating the IAM service account using gcloud are in the same page.
I can repro this issue. I believe the problem is that you don't have the proper permissions set up for your video file in your GCS bucket. To test this hypothesis, try sharing it publicly (checkbox next to the blob in Google Storage) and then run the request again.

AWS ImportImage operation: S3 bucket does not exist

I'm trying to import a Windows 2012 OVA file into AWS. I'm using this documentation:
AWS VMWare Import
I've created an s3 bucket to store the OVA files, and the OVA files have been uploaded there.
And when I try to import the image files into AWS, I get an error:
aws ec2 import-image --description "server1" --disk-containers file://containers.json --profile=company-dlab_us-east-1
An error occurred (InvalidParameter) when calling the ImportImage operation: S3 bucket does not exist: s3://companyvmimport/
Which is strange because I can list the bucket I'm trying to upload to using the aws command line:
aws s3 ls --profile=company-dlab_us-east-1
2016-10-20 09:52:33 companyvmimport
This is my containers.json file:
[
    {
        "Description": "server1",
        "Format": "ova",
        "UserBucket": {
            "S3Bucket": "s3://companyvmimport/",
            "S3Key": "server1.ova"
        }
    }
]
Where am I going wrong? How can I get this to work?
I think you have a copy/paste issue: in your containers.json file you reference the bucket as s3://companyvmimport, but the error is about kpmgvmimport.
Anyway, you don't need to indicate the s3:// protocol in the JSON.
Your JSON file should look like:
[
    {
        "Description": "server1",
        "Format": "ova",
        "UserBucket": {
            "S3Bucket": "companyvmimport",
            "S3Key": "server1.ova"
        }
    }
]
If the file is not right at the "root" of the bucket, you need to indicate the full path.
I think the comment in the answer, about setting the custom policy on the S3 bucket, contains JSON that may grant access to everyone (which may not be what's desired).
If you add a Principal statement to the JSON, you can limit the access to just yourself, for example:
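A rough sketch of such a bucket policy (the account ID is a placeholder, and the actions are just the read permissions VM Import typically needs; treat this as illustrative rather than the poster's actual policy):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:root"
            },
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::companyvmimport",
                "arn:aws:s3:::companyvmimport/*"
            ]
        }
    ]
}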
I ran into this same issue (User does not have access to the S3 object), and after fighting it for a while, with the help of this post and some further research, finally figured it out. I opened a separate post specifically for this "does not have access" issue.