I know this issue has already been discussed before, yet I feel my question is a bit different.
I'm trying to figure out how to enable access to Kibana over the self-managed AWS Elasticsearch domain which I have in my AWS account.
It could be that what I am about to say here is inaccurate or complete nonsense.
I am pretty much a novice in the whole AWS VPC area and with the ELK stack.
Architecture:
I have a VPC.
Within the VPC I have several subnets.
Each server sends its data to Elasticsearch using Logstash, which runs on the server itself. For simplicity, let's assume I have a single server.
The Elasticsearch HTTPS URL, which can be found in the Amazon console, resolves to an internal IP within the subnet that I have defined.
Resources:
I have found the following link, which suggests using one of two options:
https://aws.amazon.com/blogs/security/how-to-control-access-to-your-amazon-elasticsearch-service-domain/
Solutions:
Option 1: resource-based policy
Allow access through a resource-based policy for Elasticsearch by introducing a condition that specifies certain IP addresses.
This was discussed in the following thread, but unfortunately it did not work for me.
Proper access policy for Amazon Elastic Search Cluster
When I try to implement it in the Amazon console, Amazon notifies me that because I'm using a security group, I should resolve this by using the security group.
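For reference, the kind of IP-restricted access policy that the article describes can be applied with boto3; here is a minimal sketch, where the domain name, account ID, and IP address are all placeholders (note that IP-based conditions are generally not honored for VPC domains, which may be exactly why this option fails in a VPC setup):

```python
import json
import boto3

# Hypothetical domain name, account ID, and allowed IP; adjust to your setup.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "*"},
        "Action": "es:*",
        "Resource": "arn:aws:es:us-east-1:123456789012:domain/my-domain/*",
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.10/32"}},
    }],
}

es = boto3.client("es", region_name="us-east-1")
es.update_elasticsearch_domain_config(
    DomainName="my-domain",
    AccessPolicies=json.dumps(policy),
)
```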
Security group rules:
I tried to set a rule which allows my personal computer's (router's) public IP to access the Amazon Elasticsearch ports, and even tried opening all ports to my public IP.
But that didn't work out.
I would be happy to get a more detailed explanation of why, but I'm guessing it's because Elasticsearch has only an internal IP and not a public one, and because it is encapsulated within the VPC I am unable to access it from outside, even if I define a rule allowing a public IP to access it.
Option 2: using a proxy
I'm disinclined to use this solution unless I have no other choice.
I'm guessing that if I set up another server with both a public and an internal IP within the same subnet and VPC as the Elasticsearch domain, and used it as a proxy, I would then be able to access that server from the outside by defining the same rules on its newly created security group, like the article suggested.
Sources:
I found an out-of-the-box solution that someone already made for this issue using a proxy server, at the following link.
It is available as either an executable or a Docker container.
https://github.com/abutaha/aws-es-proxy
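For context, what that proxy automates is SigV4 request signing. A rough Python equivalent of the same idea, assuming the commonly used requests-aws4auth and elasticsearch packages (the endpoint below is a placeholder for your domain's HTTPS URL):

```python
import boto3
from requests_aws4auth import AWS4Auth
from elasticsearch import Elasticsearch, RequestsHttpConnection

region = "us-east-1"
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key,
                   region, "es", session_token=credentials.token)

# Placeholder endpoint; use the HTTPS URL shown in the Amazon console.
es = Elasticsearch(
    hosts=[{"host": "my-domain.us-east-1.es.amazonaws.com", "port": 443}],
    http_auth=awsauth,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection,
)
print(es.info())
```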
Option 3: other
Can you suggest another solution? Is it possible to use an Amazon load balancer or Amazon API Gateway to accomplish this task?
I just need a proof of concept, not something that goes into a production environment.
Bottom line:
I need to be able to access Kibana from a browser in order to search the Elasticsearch indexes.
Thanks a lot
The best way is with the just-released Cognito authentication.
https://aws.amazon.com/about-aws/whats-new/2018/04/amazon-elasticsearch-service-simplifies-user-authentication-and-access-for-kibana-with-amazon-cognito/
This is a great way to authenticate A SINGLE USER. This is not a good way for the system you're building to access Elasticsearch.
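For what it's worth, Cognito authentication for Kibana can also be enabled through boto3; a minimal sketch, assuming you already have a user pool, identity pool, and IAM role set up (all identifiers below are placeholders):

```python
import boto3

es = boto3.client("es", region_name="us-east-1")
# All identifiers below are placeholders for your own resources.
es.update_elasticsearch_domain_config(
    DomainName="my-domain",
    CognitoOptions={
        "Enabled": True,
        "UserPoolId": "us-east-1_EXAMPLE",
        "IdentityPoolId": "us-east-1:11111111-2222-3333-4444-555555555555",
        "RoleArn": "arn:aws:iam::123456789012:role/CognitoAccessForAmazonES",
    },
)
```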
The current setup is as follows:
I have a Cloud Run service which acts as the "back-end": it needs to reach external services but should be reachable ONLY by a second Cloud Run service, which acts as the "front-end" and needs to reach auth0 and the "back-end" while being reachable by any client with a browser.
I recognize that the setup is not optimal, but I've inherited it as-is and we cannot migrate to another solution (maybe k8s). I'm trying to make this work with the least amount of impact on the infrastructure and, ideally, without having to touch the services themselves.
What I've tried is to restrict the ingress of the back-end service to INTERNAL and place two serverless VPC connectors (one per service), so that the front-end service would be able to reach the back-end but no one else could.
But I've encountered a huge issue: if I route all of the front-end's egress through the VPC it works, but then the front-end cannot reach auth0 and therefore users cannot authenticate. If I set the egress to "mixed" (only internal IP ranges go through the VPC), the Cloud Run URL (*.run.app) is not resolved through the VPC and therefore it returns a big bad 403.
What I tried so far:
Placing a load balancer in front of the back-end service. But the serverless NEG only supports the global HTTP load balancer, and I'd need an internal one if I wanted an internal IP to resolve against.
Checking whether the VPC connector itself maybe provides an internal (static) IP, but it doesn't seem so.
Someone in another question suggested a "MIG as a proxy", but I haven't managed to figure that out (Can I run Cloud Run applications on a private IP (inside dedicated VPC network)?).
Fooling around with the Gateway API, but it seems that I'd have to provide an OpenAPI specification for the back-end, and I'm still under the delusion that this might be resolved with a cheaper (in terms of effort) approach.
So, I get that the Cloud Run instance cannot possibly have an internal IP by itself, but is there any kind of GCP product that can act as a proxy? Can someone elaborate on the "MIG as a proxy" approach (Managed Instance Group? Of what, though?), which might be the solution I'm looking for? (Sadly, I do not have the reputation needed to comment on that question or I would have).
Any kind of pointer is, as always, deeply appreciated.
You are designing this wrong. Use Cloud Run's identity-based access control instead of trying to route traffic. Google IAP (Identity-Aware Proxy) will block all traffic that is not authorized.
See the Cloud Run documentation on authenticating service-to-service.
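As a rough sketch of that documented pattern (the back-end URL below is a placeholder), the front-end fetches an identity token whose audience is the back-end's URL and passes it as a bearer token:

```python
import requests
import google.auth.transport.requests
import google.oauth2.id_token

# Placeholder: the protected back-end service's URL.
BACKEND_URL = "https://backend-abc123-uc.a.run.app"

# On Cloud Run, this mints an ID token for the calling service's
# own service account, with the back-end URL as the audience.
auth_req = google.auth.transport.requests.Request()
token = google.oauth2.id_token.fetch_id_token(auth_req, BACKEND_URL)

resp = requests.get(BACKEND_URL, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
```

With the back-end's invoker role granted only to the front-end's service account, any caller without a valid token is rejected before it reaches your code.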
This is my first time setting up a dynamic website, so bear with me. My goal is to have SSL/HTTPS working on my PHP single-instance AWS Elastic Beanstalk web app.
I already know that SSL is easy to set up with a load balancer, and that ACM certificates only work with a load balancer.
I want a single instance since it is cheaper. My project is small; I don't expect a lot of traffic, at most 1 user per day.
... back to the problem. I did some research and came across this link, which is a "how to" from Amazon:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-singleinstance-php.html
The problem I'm running into is the part where I'm supposed to put my "certificate contents here".
From my research, what goes here is an SSL certificate from a third party. When I purchased my domain from Namecheap, I also purchased PositiveSSL. Where I'm confused is how to create these "certificate contents". I found this link on Namecheap:
https://www.namecheap.com/support/knowledgebase/article.aspx/9446/14/generating-csr-on-apache-opensslmodsslnginx-heroku/
I know that I have to generate a CSR through SSH with commands, where they will ask for info about my site, which is needed to make the request and get the certificate. It says I have to do this where I'm hosting my website. My question is: how do I do this in Elastic Beanstalk? Or is there another way to do this, or am I understanding it wrong? I'm a bit lost here.
I've spent 2 days researching but can't find how to do this. I've found some people linking GitHub repositories in other similar questions, but they don't seem to help me understand how to do this.
I was more or less in your shoes, but with a Java app instead of PHP. The way I see it, you have three broad tasks to solve.
Generate a proper certificate. You can either go with the one you already have from PositiveSSL or generate a free one for test purposes with Let's Encrypt and certbot (this might give you more control and understanding over what (sub)domain you're using it for). The end result is a certificate and key pair for the desired domain; see the CSR sketch after this list.
Make sure the certificate and key are on the Elastic Beanstalk instance in question and are picked up by your web server. For this you need to properly package your app before deploying it, paying attention to the paths and to the AWS docs for the single instance which you mentioned. Paste your certificate data into .ebextensions/https-instance.config, and it will be deployed as files under the specified paths. Once you're done with the whole process, consider sourcing private certs and keys from S3, and never commit private data to version control.
Make sure the HTTPS traffic flows through. For this you'll need to make sure that your Elastic Beanstalk VPC security group has an inbound rule for port 443 (also covered in the AWS docs).
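On the first task: a CSR is normally generated with openssl on the host, but the same thing can be sketched in Python with the cryptography package (all subject fields below are placeholders):

```python
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate a private key; keep it out of version control.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Build and sign the CSR; the subject fields are placeholders.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COUNTRY_NAME, "US"),
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example Org"),
        x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com"),
    ]))
    .sign(key, hashes.SHA256())
)

# Write the CSR (sent to the CA) and the private key (kept secret).
with open("domain.csr", "wb") as f:
    f.write(csr.public_bytes(serialization.Encoding.PEM))
with open("domain.key", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))
```

You submit the .csr to the certificate authority (PositiveSSL in your case) and keep the .key for the web server; the CSR does not have to be generated on the Elastic Beanstalk instance itself.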
I have an EC2 instance running inside an ECS cluster within a VPC.
On the instance, I need to run an ECS task that needs access to DynamoDB.
When I try running the same task using Fargate, I can use the assignPublicIp = 'ENABLED' option to allow my task to have access to other AWS services, and everything works fine.
However, the assignPublicIp option is not available for the EC2 launch type, and I cannot figure out how to allow my EC2 instance to have access to other AWS services.
I read the AWS docs and followed guides like this one to set up a VPC endpoint for DynamoDB.
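For reference, setting up such a gateway endpoint boils down to one call; a sketch with boto3, where the VPC and route table IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for DynamoDB; the IDs below are placeholders.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```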
I also made sure that there aren't any network access restrictions by making sure that the inbound/outbound rules for my NACL and security group for the VPC are wide open (at least for the sake of testing).
Here is how the rules look, for both the NACL and my security group:
Finally, I used the VPC > Reachability Analyzer to check whether AWS could detect any problems along the connection path between my EC2 instance and DynamoDB, but the analysis reported a Reachable status.
It basically told me that there were no issues establishing a connection along the following path:
Network interface for my EC2 instance (source)
Security group for the VPC
NACL for the VPC
Route table for the VPC
which includes the following route added by the VPC endpoint for DynamoDB
Destination: pl-02cd2c6b (com.amazonaws.us-east-1.dynamodb, 3.218.182.0/24, 3.218.180.0/23, 52.94.0.0/22, 52.119.224.0/20)
Target: the endpoint ID (e.g., vpce-foobar)
VPC endpoint for DynamoDB (destination)
Despite AWS telling me that I have a "Reachable" status, I still think it might be a network reachability problem, because when I run the task, the script I am running gets stuck right after it makes a GetItem call to DynamoDB.
If it were a permission error or an invalid-parameter issue, I would get an error immediately, but everything just "hangs" there until the task eventually times out.
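One practical debugging step is to give the boto3 client aggressive timeouts and no retries, so the GetItem fails fast with a clear network error instead of hanging; a sketch where the table name and key are placeholders:

```python
import boto3
from botocore.config import Config

# Fail fast: short connect/read timeouts and no retries.
cfg = Config(connect_timeout=5, read_timeout=5, retries={"max_attempts": 0})
dynamodb = boto3.client("dynamodb", region_name="us-east-1", config=cfg)

# Placeholder table name and key.
resp = dynamodb.get_item(
    TableName="my-table",
    Key={"pk": {"S": "example"}},
)
print(resp.get("Item"))
```

A connect timeout here points at routing or endpoint issues rather than IAM permissions.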
Any pointers on what I might be missing here, or other workarounds would be very appreciated.
Thank you.
EDIT 1 (2021/02/13):
So I went back to the AWS docs to see if I had missed anything in setting up the VPC endpoints. I originally had one set up for DynamoDB, but since I also need to use S3 in my service, I went ahead and set up a gateway VPC endpoint for S3 too (I also wanted to see whether the issue I am having is a generic network problem or specific to DynamoDB).
Then, I made some changes to my script so that it makes a call to S3 (to get the bucket's location, for simplicity) as the very first thing it does. I knew that the call would end up timing out, so I wanted to trigger the error immediately upon starting the script's execution.
I waited until my task would eventually fail because of the timeout, and this time I noticed something interesting.
Here are the error logs I got when the task failed:
The IP address that my task was trying to reach was 52.85.146.194:443.
And here are the IP address ranges in the managed prefix list for S3, which I found in the VPC console:
The IP address I got the timeout error from is not on the list. Could this be a hint at the cause of the issue? Or am I missing something, and there is actually nothing wrong with that?
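For what it's worth, that containment check can be done programmatically; a small sketch with the standard ipaddress module (the CIDRs below are illustrative, not the real prefix list; pull yours from the VPC console):

```python
import ipaddress

# Illustrative CIDRs only; fetch the real entries from your region's
# com.amazonaws.us-east-1.s3 managed prefix list.
s3_cidrs = ["52.216.0.0/15", "54.231.0.0/16", "3.5.0.0/19"]

ip = ipaddress.ip_address("52.85.146.194")
print(any(ip in ipaddress.ip_network(cidr) for cidr in s3_cidrs))
```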
I've got an Elasticsearch cluster hosted in AWS, which currently has open permissions.
I'm looking to lock that down so it is accessible only from the AWS account in which it lives. However, if I do this (with a Principal statement in the Elasticsearch access policy) then I can no longer use the AWS-provided Kibana plugin: it fails, saying that the anonymous user cannot perform ESHttpGet.
I can find lots of questions on how to link a self-hosted Kibana to an AWS Elasticsearch, but not the provided one. Can anyone help with what access I need to allow for this to work?
I found this section in an AWS documentation page:
Because Kibana is a JavaScript application, requests originate from the user's IP address. IP-based access control might be impractical due to the sheer number of IP addresses you would need to whitelist in order for each user to have access to Kibana. One workaround is to place a proxy server between Kibana and Amazon ES. Then you can add an IP-based access policy that allows requests from only one IP address, the proxy's. The following diagram shows this configuration.
So the solution is to also whitelist any IP addresses from which you'd like to be able to use Kibana.
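A sketch of what that combined policy could look like; the account ID, domain name, and IPs below are placeholders, and the aws:SourceIp condition accepts a list, so the proxy's IP and any direct-user IPs can be whitelisted together:

```python
import json

# All values below are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "*"},
        "Action": "es:ESHttp*",
        "Resource": "arn:aws:es:us-east-1:123456789012:domain/my-domain/*",
        "Condition": {
            "IpAddress": {
                # Proxy IP plus any user IPs that should reach Kibana directly.
                "aws:SourceIp": ["198.51.100.5/32", "203.0.113.0/24"]
            }
        },
    }],
}
print(json.dumps(policy, indent=2))
```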
I have a simple question which I don't think has a simple answer.
I would like to use Amazon WorkSpaces, but a requirement is that I can restrict the IP addresses that can access a given workspace (or any workspace).
I kind of get the impression this should be possible through rules on the security group on the directory, but I'm not really sure, and I don't know where to start.
I've been unable to find any instructions for this, or other examples of people having done this. Surely I'm not the first/only person to want to do this?!
Can anyone offer any pointers?
Based on the comments given by @Mayb2Moro: he obtained information from AWS Support that restriction based on the security group or VPC wouldn't be possible, as the WorkSpaces connectivity goes via another external endpoint (a management interface in the backend).
Yes, you are right; you need to work on the security group configured when the workspace is set up. The process goes like this:
Pick the security group used when the WorkSpaces bundle was created.
Go to EC2 -> Security Groups, select that security group, and restrict it to your office's exit IP.
Now you can assign IP Access Control Groups to a directory that is associated with your WorkSpaces.
In an IP Access Control Group, you can specify the IPs that you wish to allow access to the WorkSpaces.
Refer to the IP Access Control Groups for Your WorkSpaces for the official documentation.
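A sketch of that setup with boto3, where the directory ID and CIDR are placeholders:

```python
import boto3

ws = boto3.client("workspaces", region_name="us-east-1")

# Create an IP access control group allowing only the office exit IP
# (placeholder CIDR).
group = ws.create_ip_group(
    GroupName="office-only",
    GroupDesc="Allow WorkSpaces access from the office IP only",
    UserRules=[{"ipRule": "203.0.113.10/32", "ruleDesc": "Office exit IP"}],
)

# Associate the group with the WorkSpaces directory (placeholder ID).
ws.associate_ip_groups(
    DirectoryId="d-1234567890",
    GroupIds=[group["GroupId"]],
)
```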