403 Forbidden when trying to query AWS Elasticsearch cluster

I'm having issues performing requests with Jest (the Java Elasticsearch client) against an AWS Elasticsearch cluster v5.3.
The reason given is:
The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.
I am using Windows 10 with Java 11, Spring Boot 2, WebFlux, Jest, and the AWS HTTP request signer that their documentation points to.
I've checked and double-checked the access and secret keys of the IAM user. I also attached a policy granting the IAM user full control over the cluster; still the 403 message.
Removing or adding the Content-Length header yields the same error.
Not sure where to go from here. Any help would be appreciated.
Thanks

What I discovered is that the network issue had something to do with the corporate proxy. I created a tunnel between my laptop and the Elasticsearch cluster, removed the proxy from the HTTP client used by Jest, and things work smoothly now.
I wasn't able to figure out exactly how the proxy affected the request signature, though the likely explanation is that the proxy rewrote headers (or re-encoded the body) that were covered by the SigV4 signature, invalidating it. Either way, I'll stick with the tunnel solution.
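For anyone debugging a similar mismatch, here is a minimal sketch of what SigV4 signing covers, written in TypeScript with the aws4 npm package rather than the Java signer from the question; the domain endpoint, region, and path are placeholders. The point it illustrates: the signature is computed over the method, path, headers, and body, so anything in the network path that alters those after signing produces exactly this 403.

import aws4 from "aws4";
import https from "https";

// Placeholders: substitute your own domain endpoint, region, and path.
const opts = {
  host: "my-domain.us-east-1.es.amazonaws.com",
  path: "/_search",
  service: "es",
  region: "us-east-1",
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query: { match_all: {} } }),
};

// aws4.sign() adds Authorization and X-Amz-Date headers computed over the
// method, path, headers, and body above.
aws4.sign(opts, {
  accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
});

// Send directly, with no proxy in the path: a proxy that rewrites any signed
// header or re-encodes the body will fail the server-side signature check.
https
  .request(opts, (res) => res.pipe(process.stdout))
  .end(opts.body);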

Related

CloudRun Service to Service returning 403 After Setup

I have a service-to-service setup that I completed using the Google Cloud tutorial (https://cloud.google.com/run/docs/authenticating/service-to-service#nodejs)
1. Changed the Cloud Run service account to have roles/run.invoker (they both share the same role).
2. Make a request to get the identity token: http://metadata/computeMetadata/v1/instance/service-accounts/default/identity?audience=https://XXXX-XXXX-XXXX-xx.a.run.app
3. (failing) Use that identity token to make a request to https://XXXX-XXXX-XXXX-xx.a.run.app/my-endpoint: axios.post('https://XXXX-XXXX-XXXX-xx.a.run.app/my-endpoint', {myData}, {headers: {Authorization: 'Bearer eyJhbGciOiJSUz.....'}})
However, on step 3, making the call to my service, I receive a 403 error. Any thoughts on what I missed?
Note: I have tried deploying the invoked service with --allow-unauthenticated and without it. I am not using a custom domain; I am using the Cloud Run-created URL.
PS: If I change the ingress from "internal and load balancer" to "all" it works, however I'm not sure this is the correct thing to do.
The HTTP 403 Forbidden error message when accessing your Cloud Run service means that your client is not authorized to invoke this service.
You have not granted the service account permission to call the receiving service. Your question states that you added roles/run.invoker but the error message indicates you did not complete this step correctly.
1. Go to the Google Cloud Console.
2. Select the receiving service (this is the Cloud Run service you are calling).
3. Click Show Info Panel in the top right corner to show the Permissions tab.
4. In the Add members field, enter the identity of the calling service.
5. Select the Cloud Run Invoker role from the Select a role drop-down menu.
6. Click Add.
Note: When requesting the Identity Token, do not specify the custom domain. Your question's wording is confusing on that point.
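For reference, a minimal sketch of steps 2 and 3 from the question, assuming axios on a Node runtime and keeping the service URL as a placeholder. It fetches an identity token from the metadata server (the Metadata-Flavor header is required, and the audience must be the receiving service's run.app URL), then passes it as a Bearer credential:

import axios from "axios";

// Placeholder: the receiving service's Cloud Run URL (also the token audience).
const audience = "https://XXXX-XXXX-XXXX-xx.a.run.app";

async function callReceivingService(myData: object) {
  // Fetch an identity token (a JWT, not an OAuth access token) from the
  // metadata server; the Metadata-Flavor header is mandatory.
  const { data: idToken } = await axios.get(
    "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity",
    { params: { audience }, headers: { "Metadata-Flavor": "Google" } }
  );

  // Call the receiving service with the token as a Bearer credential.
  return axios.post(`${audience}/my-endpoint`, myData, {
    headers: { Authorization: `Bearer ${idToken}` },
  });
}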
[UPDATE]
The OP has enabled the "internal and load balancer" ingress setting. Calling such a service requires setting up Serverless VPC Access.
Connecting to a VPC network
The solution was to add a VPC connector and route all traffic through it. I added --vpc-egress all-traffic to the deploy script. Originally I had --vpc-egress private-ranges-only to connect to Redis (Memorystore); however, this was insufficient to connect to my other service (internal-only ingress).
Credit to excellent insight from @JohnHanley and @GuillaumeBlaquiere.
Interesting note about Node.js: my container wouldn't start when I switched --vpc-egress to all-traffic, and I had no idea why because there were no logs. It turns out running Node v16.2 caused some weird issues with --vpc-egress all-traffic that I couldn't debug, so downgrading to 14.7 allowed the container to start.

Serverless Django app (AWS Lambda via Zappa) times out when trying to OAuth to Twitter

I've got a Django app set up to use django-allauth to connect to Twitter. The flow all works locally, and I've followed the same setup steps on Lambda to add my tokens, site, etc.
When I try to access the login URL (/accounts/twitter/login/), the request eventually times out with this message from AWS Lambda:
{"message": "Endpoint request timed out"}
The last message from zappa tail before the timeout event is:
[1619019159940] [DEBUG] 2021-04-21T15:32:39.939Z 7f66a0e3-58de-4612-82c0-54590d69676f Starting new HTTPS connection (1): api.twitter.com:443
I've seen that API Gateway has a 30-second timeout, but I don't think the request should be taking this long anyway; locally it takes a couple of seconds.
Does anyone with knowledge of these platforms have an idea where the bottleneck might be? Or any pointed questions to help me debug?
Things I've already checked and (tentatively) ruled out:
The database backend is AWS Aurora Serverless, and I did worry that the double-serverless setup might be causing the slow speeds. However, a simple Django management command (zappa manage dev migrate) returns in less than a second, so I've ruled that out for now. Plus, the admin dashboard, which also accesses the DB, loads fine.
I've got both the dev and live URLs added to Twitter's dashboard as valid OAuth callback URLs.
Leaving this answer to help future searches, although it's not the route I'll take.
Thanks to @Jens in the comments for pointing towards the VPC issue: a Lambda function attached to a private VPC has no public internet access, so the HTTPS connection to api.twitter.com hangs until the gateway times out. You need to add a NAT gateway to give the function a route out.
"To grant internet access to your function, its associated VPC must have a NAT gateway (or NAT instance) in a public subnet."
Source: https://aws.amazon.com/premiumsupport/knowledge-center/internet-access-lambda-function/
A NAT gateway is a per-hour billed extra, so it might defeat the point of using serverless (if, like me, you're using it for low cost and simplicity rather than scale).
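If you do keep the Lambda inside the VPC, the subnets it attaches to must be private subnets whose route table points at the NAT gateway. In Zappa that attachment lives in zappa_settings.json; a minimal sketch, where every ID (and the settings module) is a placeholder to substitute with your own:

{
  "dev": {
    "django_settings": "myproject.settings",
    "vpc_config": {
      "SubnetIds": ["subnet-xxxxxxxx", "subnet-yyyyyyyy"],
      "SecurityGroupIds": ["sg-zzzzzzzz"]
    }
  }
}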

AWS SFTP Gateway API test works but SFTP clients return authentication failure

I am setting up AWS SFTP (AWS Transfer for SFTP) using Cognito as the identity provider. I have a working API Gateway API and can run tests on it successfully.
When I come to connect to the SFTP instance, the username and password are rejected. I've checked the log files and there are no entries indicating that any calls to the API Gateway API were made at all. This suggests to me that something is wrong with the IAM role associated with the SFTP instance, and that the API Gateway API is therefore never being called. From my reading, the configuration appears to be correct.
I'm working from the following blog post:
https://agilevision.io/blog/aws/2019/02/06/integrate-aws-sftp-with-custom-identity-provider.html
Can anyone suggest what might be wrong?
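Not part of the original post, but a common culprit with this setup: the Transfer server's invocation role must be allowed to call the identity-provider API, along the lines of the policy in the AWS custom-identity-provider docs. A hedged sketch, with region, account ID, API ID, and stage all as placeholders:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "execute-api:Invoke",
      "Resource": "arn:aws:execute-api:REGION:ACCOUNT_ID:API_ID/prod/GET/*"
    }
  ]
}

The role's trust relationship must also allow transfer.amazonaws.com to assume it. If either piece is missing, Transfer never calls the API and nothing appears in the API's logs, which matches the symptom described above.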

Dynatrace AWS access key verification domains (url/ip)

We have implemented Dynatrace, and we need to add our AWS account to pull CloudWatch logs.
The problem: we have a corporate proxy and firewall that are super locked down and seem to block whatever requests Dynatrace makes to AWS to authenticate with the key and secret.
The infra guys have allowed a bunch of AWS domains per region for reading CloudWatch logs, but we still can't authenticate; the access key verification fails.
I have set up a Dynatrace security gateway (SGW) in AWS which our local gateways are able to access, and we are successfully getting logs from OneAgents through the SGW. The problem is getting the CloudWatch logs/integration going.
The verification error leads me to believe that Dynatrace is not able to communicate with the AWS auth servers at all.
Any advice would be appreciated.
I'm just a dev; we are not allowed to open the Amazon domain wholesale.
#corporate-devlife

AWS Elasticsearch and CORS

I'm trialing the AWS Elasticsearch service:
https://aws.amazon.com/elasticsearch-service/
Very easy to set up: basically just hit deploy. Unfortunately, I can't get any of the Elasticsearch GUIs (ElasticHQ, Elasticsearch Head) to connect, as CORS is not enabled in the AWS build, and there is no way to change the Elasticsearch config or install plugins that I can see.
Does anyone know how to change these options on AWS?
My workaround, while still staying inside the AWS ecosystem, was to create an API using API Gateway.
I created a new POST endpoint that forwards to the address of my Elasticsearch instance, and then followed this guide: CORS on AWS API Gateway, to add CORS to the endpoint. This allowed my front-end code to make requests from a different domain.
In case it's useful to anyone else: you can disable CORS for testing purposes using a Chrome plugin.
ElasticHQ and Elasticsearch Head still won't work properly with AWS Elasticsearch, though (at the time of writing), as they make calls to /_cluster/state, which is not currently one of the supported AWS Elasticsearch operations.
Disabling CORS and performing a GET on /_cluster/state returns:
{
  Message: "Your request: '/_cluster/state' is not allowed."
}
Some functionality still works in ElasticHQ but I'm unable to get Elasticsearch Head to work.
Like @Le3wood said, the workaround could be integrating with the AWS ecosystem. Besides API Gateway, using AWS Lambda also works.
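To illustrate the Lambda variant: a minimal sketch of a proxy handler that forwards requests to the Elasticsearch domain and adds the CORS header the managed cluster won't let you configure. The endpoint is a placeholder, it assumes a Node 18+ runtime (for the built-in fetch), and a real deployment would also restrict the allowed origin and sign requests if the domain's access policy requires it:

import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

// Placeholder: your Elasticsearch domain endpoint.
const ES_ENDPOINT = "https://my-domain.us-east-1.es.amazonaws.com";

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  // Forward the incoming method, path, and body to the cluster.
  const res = await fetch(`${ES_ENDPOINT}${event.path}`, {
    method: event.httpMethod,
    headers: { "Content-Type": "application/json" },
    body: event.body ?? undefined,
  });

  return {
    statusCode: res.status,
    headers: {
      // The CORS header the managed cluster does not let you set.
      "Access-Control-Allow-Origin": "*",
      "Content-Type": "application/json",
    },
    body: await res.text(),
  };
};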