I'm trialing the AWS Elasticsearch service:
https://aws.amazon.com/elasticsearch-service/
Very easy to set up: basically just hit deploy. Unfortunately I can't get any of the Elasticsearch GUIs (ElasticHQ, Elasticsearch Head) to connect, because CORS is not enabled in the AWS build, and as far as I can see there is no way to change the Elasticsearch config or install plugins.
Does anyone know how to change these options on AWS?
My workaround while still staying inside of the AWS ecosystem was to create an API using the API Gateway.
I created a new POST endpoint pointing at my Elasticsearch instance, then followed this guide: CORS on AWS API Gateway, to add CORS to the endpoint. This allowed my front-end code to make requests from a different domain.
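For reference, here is a minimal sketch (names are hypothetical, not from the original answer) of what a Lambda proxy handler behind such an API Gateway endpoint returns so that browsers accept the cross-origin response:

```python
import json

def handler(event, context):
    """Hypothetical Lambda proxy handler: in a real setup you would
    forward event["body"] to the Elasticsearch domain here; this sketch
    just echoes it back with the CORS headers the browser needs."""
    return {
        "statusCode": 200,
        "headers": {
            # Allow any origin; fine for testing, lock down in production.
            "Access-Control-Allow-Origin": "*",
            "Access-Control-Allow-Headers": "Content-Type",
            "Access-Control-Allow-Methods": "POST,OPTIONS",
        },
        "body": json.dumps({"echo": event.get("body")}),
    }
```

With "Use Lambda Proxy integration" enabled, API Gateway passes these headers straight through to the caller.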
In case it's useful to anyone else - you can disable CORS for testing purposes using a Chrome plugin.
ElasticHQ and Elasticsearch Head still won't work properly with AWS Elasticsearch (at the time of writing), because they make calls to /_cluster/state, which is not currently one of the supported AWS Elasticsearch operations.
Disabling CORS and performing a GET on /_cluster/state returns:
{
  "Message": "Your request: '/_cluster/state' is not allowed."
}
Some functionality still works in ElasticHQ but I'm unable to get Elasticsearch Head to work.
As @Le3wood said, the workaround could be integrating with the AWS ecosystem. Besides API Gateway, using AWS Lambda also works.
I am trying to set up an app on AWS that ...
Deploys a react app to an S3 bucket
Deploys a node backend that interacts with an AWS RDS database
Connects the react app front end to the node backend to do CRUD operations
Doing part 1 is easy and there are plenty of tutorials. However, parts 2 and 3 seem totally foreign to me. I have found nothing that explains how to tie the front end to the database or how to tie the front end to the back end.
Do I need an API Gateway?
Does the node backend have to be hosted on an EC2 instance?
If so, how do I do this?
Where does cloudformation come into play?
I have found nothing that explains how to tie the front end to the database or how to tie the front end to the back end.
The frontend connects to the backend by making HTTP API calls (via fetch or a library like axios) to the URL of the backend server.
The backend connects to the database using a Node.js database driver (for example, the pg package for PostgreSQL or mysql2 for MySQL).
The frontend should never connect directly to the database.
Do I need an API Gateway?
Using API Gateway is entirely optional.
Does the node backend have to be hosted on an EC2 instance?
The Node backend needs to be deployed on a compute service that can run NodeJS code, such as AWS EC2, ECS, EKS, Lambda...
If so, how do I do this?
This part of your question is so broad it is off-topic for this site. Given your level of experience I suggest looking at AWS Elastic Beanstalk for deploying your backend.
Where does cloudformation come into play?
CloudFormation is a tool for defining your AWS infrastructure as code: instead of clicking around in the AWS console to create everything (and then being unable to reproduce it reliably when you need to), everything is defined in template files that can be tracked in source control.
Where it "comes into play" is if you decide you want to use an Infrastructure as Code tool, you might use CloudFormation. It is entirely optional.
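As a small, hedged illustration (bucket name is a placeholder), a CloudFormation template covering just the S3 bucket from step 1 might look like:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  FrontendBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-react-app-bucket   # placeholder name
      WebsiteConfiguration:
        IndexDocument: index.html
```

You would grow the same template to describe the backend compute and the RDS instance, and deploy it all with one `aws cloudformation deploy` call.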
I'm testing the waters for running Apache Airflow on AWS through the Managed Workflows for Apache Airflow (MWAA). The version of Airflow that AWS have deployed and are managing for me is 1.10.12.
When I try to access the v1 REST API at /api/experimental/test I get back status code 403 Forbidden.
Is it possible to enable the experimental API in MWAA? How?
I think MWAA provides a REST endpoint for running CLI commands:
https://$WEB_SERVER_HOSTNAME/aws_mwaa/cli
It's quite confusing because you first need to create a CLI token using the AWS CLI, then hit the endpoint using that token. You will also need an IAM policy that allows your AWS CLI user to request that token.
Lastly, not all commands are supported, just a subset.
Anyway, it's all explained in the user guide:
https://docs.aws.amazon.com/mwaa/latest/userguide/amazon-mwaa-user-guide.pdf
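The token-then-call flow described above can be sketched like this (environment name and hostname are placeholders; requires AWS credentials allowed to call airflow:CreateCliToken):

```python
import json
from urllib import request

def cli_endpoint(web_server_hostname):
    """Build the aws_mwaa/cli endpoint URL for a given web server hostname."""
    return "https://" + web_server_hostname + "/aws_mwaa/cli"

def run_mwaa_cli(env_name, command):
    """Sketch: request a CLI token for an MWAA environment, then POST an
    Airflow CLI command to the aws_mwaa/cli endpoint using that token."""
    # Imported here so the pure helper above works without boto3 installed.
    import boto3

    token = boto3.client("mwaa").create_cli_token(Name=env_name)
    req = request.Request(
        cli_endpoint(token["WebServerHostname"]),
        data=command.encode(),
        headers={
            "Authorization": "Bearer " + token["CliToken"],
            "Content-Type": "text/plain",
        },
    )
    with request.urlopen(req) as resp:
        # The response carries base64-encoded stdout/stderr of the command.
        return json.load(resp)
```

For example, `run_mwaa_cli("my-env", "version")` would run `airflow version` against the managed environment.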
By default, the api.auth_backend configuration option is set to airflow.api.auth.backend.deny_all in MWAA environments. You need to override it with one of the authentication methods mentioned in the documentation.
Note: it is highly discouraged to use airflow.api.auth.backend.default, as that leaves your environment publicly accessible.
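In the MWAA console this is done via an Airflow configuration option override; for example (the value shown is one of the documented Airflow auth backends, pick the one that fits your setup):

```ini
api.auth_backend = airflow.api.auth.backend.basic_auth
```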
[2021/07/29] Edit:
Based on this comment, AWS blocked access to the REST API.
I'm having issues performing requests with the Jest client against an AWS Elasticsearch cluster v5.3.
Reason is:
The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details
I am using Windows 10 with Java 11, Spring Boot 2, WebFlux, Jest, and the AWS HTTP request signer that they point to in their documentation.
I've checked and double-checked the access and secret keys of the IAM user. I also gave the IAM user a policy with full control over the cluster; still the 403 message.
Removing or adding the Content-Length header yields the same error.
Not sure where to go from here.
Any help would be appreciated.
Thanks.
So from what I discovered, the issue had something to do with the corporate proxy. I created a tunnel between my laptop and the Elasticsearch cluster, removed the proxy from the HTTP client used by Jest, and things work smoothly now.
I wasn't able to figure out exactly how the proxy affected the request signature, but I'll stick with the tunnel solution.
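A proxy breaking the signature is plausible: SigV4 signs a hash of the exact request bytes, so anything that rewrites headers or the body produces a different signature than the one AWS recomputes. The signing-key derivation can be sketched with the standard library (all key/date/region values below are placeholders, not real credentials):

```python
import hashlib
import hmac

def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def sigv4_signature(secret_key, date, region, service, string_to_sign):
    """SigV4 signing-key derivation chain as documented by AWS.
    The string-to-sign embeds a SHA-256 of the canonical request, so
    any modification in transit (e.g. by a proxy) invalidates it."""
    k_date = _hmac(("AWS4" + secret_key).encode(), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    k_signing = _hmac(k_service, "aws4_request")
    return _hmac(k_signing, string_to_sign).hex()
```

Even a one-byte change in the signed material (say, a proxy touching Content-Length) yields a completely different signature, which is exactly the 403 "signature does not match" symptom.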
I've recently gotten into AWS serverless architecture with .NET Core 1.0. In my application we run Elasticsearch on its own machine in order to maintain it. What I am trying to do is use the AWS Elasticsearch Service from AWS API Gateway, proxied by an AWS Lambda function (I believe I have typed this correctly).
Whenever my code accesses my Elasticsearch domain I receive a timeout error. As of right now, my Elasticsearch domain is left wide open, so anyone can access the information. I would like to lock it down so only the API Gateway and Lambda function can reach it.
I've tried messing with the policies and roles to no success. Has anyone tried to do what I am trying to do, if so, how were they able to connect it? Or is there a better way?
The simple solution is to move all of your services out of the VPC they are in right now (I suspect they are not in the same one, since your I/O calls time out).
My answer here would give you a nice background on AWS Lambda with VPC and why external IO calls time out.
AWS lambda invoke not calling another lambda function - Node.js
note: the answer is not related to NodeJS.
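For the lock-down part, one common approach is a resource-based access policy on the Elasticsearch domain that only allows the Lambda function's execution role (account ID, role name, region, and domain name below are all placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:role/my-lambda-role" },
      "Action": "es:ESHttp*",
      "Resource": "arn:aws:es:us-east-1:123456789012:domain/my-domain/*"
    }
  ]
}
```

With a policy like this in place, requests must be SigV4-signed with credentials from that role, so the wide-open domain is no longer publicly readable.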
I am fairly new to AWS Lambda but can sure see the benefits of it, and I stumbled upon the superb Serverless Framework to help me build solutions on Lambda.
I started out building solutions using AWS API Gateway, but I really need "internal" VPC APIs, not public Internet-facing APIs like API Gateway creates.
I found that Serverless can indeed expose an HTTP endpoint, but I can't figure out how this is done and how the URL is created.
When I deploy the Lambda from Serverless it gives me the URL, e.g.:
https://uxezd6ry8z.execute-api.eu-west-1.amazonaws.com/dev/ping
I would like to be able to find (or create) this same HTTP listener for already existing Lambdas, so my question is: how is the URL created, and where is the actual HTTP listener deployed?
You might be looking for the invoke URL:
1. Go to https://console.aws.amazon.com/apigateway
2. Select the API you deployed for your Lambda.
3. Select Stages in the left-side panel and see the invoke URL.
You can add an HTTP listener by going to your Lambda function, selecting the 'Triggers' tab, clicking 'Add trigger', and choosing API Gateway. As others mentioned, though, this does create a public-facing URL.
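The invoke URL from the console always follows the same pattern, so it can be reconstructed from the API ID, region, and stage (using the values from the question's example URL here purely as placeholders):

```python
def invoke_url(api_id, region, stage, resource_path=""):
    """Build the API Gateway invoke URL shown under Stages in the console.
    All arguments are placeholders for your own API's values."""
    return "https://{}.execute-api.{}.amazonaws.com/{}/{}".format(
        api_id, region, stage, resource_path.lstrip("/")
    )
```

This also explains where the "listener" lives: it is the API Gateway regional endpoint, not anything running inside your Lambda.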
Duh, I was logged into the wrong AWS account previously, so the API Gateway console was not showing any API matching the Serverless one, and that was why I couldn't understand how they did it...
Once I logged into the AWS account that hosts the Serverless stack, I can see the API Gateway GET APIs for the Serverless HTTP listener.