AWS Elasticsearch Kibana Plugin Access Denied - amazon-web-services

I've got an Elasticsearch cluster hosted in AWS, which currently has open permissions.
I'm looking to lock that down so it's only accessible from the AWS account in which it lives. However, if I do this (with a Principal statement in the Elasticsearch access policy), I can no longer use the AWS-provided Kibana plugin: it fails, saying that the anonymous user cannot perform ESHttpGet.
I can find lots of questions on how to link a self-hosted Kibana to an AWS Elasticsearch, but not the provided one. Can anyone help with what access I need to allow for this to work?

I found this section in an AWS documentation page:
Because Kibana is a JavaScript application, requests originate from the user's IP address. IP-based access control might be impractical due to the sheer number of IP addresses you would need to whitelist in order for each user to have access to Kibana. One workaround is to place a proxy server between Kibana and Amazon ES. Then you can add an IP-based access policy that allows requests from only one IP address, the proxy's. The following diagram shows this configuration.
So the solution is to also whitelist any IP addresses from which you'd like to be able to use Kibana.
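For reference, a combined policy might look something like the sketch below. The account ID, region, domain name, and source IP ranges are placeholders to replace with your own values, and the second statement's action is one reasonable guess at what Kibana needs (the error above only mentions ESHttpGet, but Kibana also issues POSTs):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "FullAccessFromOwnAccount",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:root" },
      "Action": "es:*",
      "Resource": "arn:aws:es:eu-west-1:123456789012:domain/my-domain/*"
    },
    {
      "Sid": "KibanaFromWhitelistedIPs",
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": "es:ESHttp*",
      "Resource": "arn:aws:es:eu-west-1:123456789012:domain/my-domain/*",
      "Condition": {
        "IpAddress": { "aws:SourceIp": [ "203.0.113.0/24", "198.51.100.7/32" ] }
      }
    }
  ]
}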

Related

Gmail Access Filter for GCP VM Instance

How can I set up access to a VM instance with a static IP through Google OAuth, like Cloudflare Access?
Currently I can only set up access for a service account, tags, and a range of IP addresses.
How can it be configured only for specific Gmail users?
(A screenshot of the Cloudflare Access prompt shown when trying to connect to the VM's static IP was attached here.)
Jan Hernandez already answered this in a comment, but I still want to post an answer:
Identity-Aware Proxy lets you do exactly what you're looking for. It's a GCP component that gives you the option to restrict access to resources by access level, and this can be done for Gmail users as well; however, you'll need to set up an OAuth consent screen first.
Give it a look and try it.
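As a rough sketch (the project, instance, zone, and user address below are placeholders, and this assumes the OAuth consent screen is already configured), granting a specific Gmail user access through IAP and connecting over the IAP tunnel instead of the public IP looks roughly like this:

# Allow one specific Gmail user to reach instances through IAP TCP forwarding
gcloud projects add-iam-policy-binding my-project \
    --member="user:alice@gmail.com" \
    --role="roles/iap.tunnelResourceAccessor"

# The user then connects through the IAP tunnel rather than the VM's public IP
gcloud compute ssh my-instance --zone=us-central1-a --tunnel-through-iap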

Set an IP restriction on service account

First, some background: I'm using a service account with delegated credentials to call the Apps Script API and run a function in Google Apps Script from a Python script via Google's Python client library, and that works fine.
I'd like to add an IP restriction for it, to make sure it can only be used from a specific IP.
I have tried adding a firewall rule in the VPC which denies all ingress from 0.0.0.0/0 and sets the target to the service account. However, running the script after setting the VPC rule behaves no differently than before.
The firewall rule seems to only target the VM instance used by the service account.
Is there any better way to set an IP restriction for a service account?
You can't restrict access to APIs based on the requester's IP, only through IAM permissions (with service accounts). Therefore you cannot configure the service account to be usable only from a specific IP address.
As mentioned here, a service account "is a special type of Google account that belongs to your application or a virtual machine (VM), instead of to an individual end user." I don't know why you are looking to restrict by IP, but please keep in mind that the service account uses a private key, which should not be shared between environments/users/apps, should be stored in a safe place, and must be used only on the server(s) running the application.

Amazon AWS elasticsearch Kibana access from browser

I know this issue has already been discussed before, yet I feel my question is a bit different.
I'm trying to figure out how to enable access to Kibana for the self-managed AWS Elasticsearch domain which I have in my AWS account.
It could be that what I'm about to say here is inaccurate or complete nonsense.
I'm pretty new to AWS VPCs and to the ELK stack.
Architecture:
Here is the "Architecture":
I have a VPC.
Within the VPC I have several subnets.
Each server sends its data to Elasticsearch using Logstash, which runs on the server itself. For simplicity let's assume I have a single server.
The Elasticsearch HTTPS URL shown in the Amazon console resolves to an internal IP within the subnet I have defined.
Resources:
I have found the following link, which suggests one of two options:
https://aws.amazon.com/blogs/security/how-to-control-access-to-your-amazon-elasticsearch-service-domain/
Solutions:
Option 1: resource-based policy
Use a resource-based access policy for Elasticsearch with a condition that specifies certain IP addresses.
This was discussed in the following thread but unfortunately did not work for me.
Proper access policy for Amazon Elastic Search Cluster
When I try to implement it in the Amazon console, Amazon notifies me that because I'm using a security group, I should handle this with security group rules instead.
Security group rules:
I tried to set a rule which allows my personal computer's (router's) public IP to access the Amazon Elasticsearch ports, and even tried opening all ports to my public IP.
But that didn't work.
I would be happy to get a more detailed explanation as to why, but I'm guessing it's because the Elasticsearch domain only has an internal IP, not a public one, and because it is encapsulated within the VPC I am unable to access it from outside, even if I define a rule allowing a public IP to access it.
Option 2: Using a proxy
I'm reluctant to use this solution unless I have no other choice.
I'm guessing that if I set up another server with a public and an internal IP within the same subnet and VPC as the Elasticsearch domain, and use it as a proxy, I would then be able to access this server from the outside by defining the same rules on its newly created security group, like the article suggested.
Sources:
I found an out-of-the-box solution that someone already made for this issue, using a proxy server, at the following link:
It can run either as an executable or as a Docker container.
https://github.com/abutaha/aws-es-proxy
Option 3: Other
Can you suggest another solution? Is it possible to use an Amazon load balancer or Amazon API Gateway to accomplish this task?
I just need a proof of concept, not something that goes into a production environment.
Bottom line:
I need to be able to access Kibana from a browser in order to search the Elasticsearch indexes.
Thanks a lot
The best way is with the just released Cognito authentication.
https://aws.amazon.com/about-aws/whats-new/2018/04/amazon-elasticsearch-service-simplifies-user-authentication-and-access-for-kibana-with-amazon-cognito/
This is a great way to authenticate a single user. It is not a good way for the system you're building to access Elasticsearch.
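If you want to try it from the CLI rather than the console, enabling Cognito authentication for Kibana looks roughly like the sketch below; the domain name, pool IDs, and role ARN are placeholders, and the Cognito user pool, identity pool, and IAM role have to exist already:

# Attach an existing Cognito user pool / identity pool to the domain's Kibana
aws es update-elasticsearch-domain-config \
    --domain-name my-domain \
    --cognito-options Enabled=true,UserPoolId=us-east-1_XXXXXXXXX,IdentityPoolId=us-east-1:11111111-2222-3333-4444-555555555555,RoleArn=arn:aws:iam::123456789012:role/CognitoAccessForAmazonES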

AWS Elasticsearch & VPC - configuring network access from my fixed IP

I am unable to access AWS Elasticsearch Kibana with a browser.
I have set up an Elasticsearch instance within my VPC exactly as described here:
https://aws.amazon.com/blogs/aws/amazon-elasticsearch-service-now-supports-vpc/
I used the default IAM access policy template, which is basically all current IAM profiles (*).
My EC2 webapp (xenforo forum) is happily connected and chugging away.
I would like to access my elasticsearch domain kibana endpoint via browser from my home PC.
The security group I attached to the cluster configuration includes a rule to allow ALL TCP inbound from my home broadband fixed IP address.
I log into the AWS console, click the Kibana link from the elasticsearch domain overview and... nothing, times out.
I have read everything I can find on the matter. No joy - except perhaps I should be signing my HTTPS requests as well, which seems crazy complicated, and my understanding is that IP access should be configurable with security groups?
Can anyone clarify?
To access Kibana, it seems the only way is to pass the proper (signed) headers with your requests.
We solved it by using https://github.com/abutaha/aws-es-proxy - it's not the nicest, but it works for us.
It requires aws-cli to be installed.
It requires a bit of setup, but works well afterwards.
Hope it helps.
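For illustration, running it looks roughly like this; the endpoint is a placeholder, and the flags are the ones the project's README documented when we used it, so double-check them against the current README:

# Start a local signing proxy in front of the Amazon ES endpoint
./aws-es-proxy -endpoint https://vpc-my-domain-abc123.eu-west-1.es.amazonaws.com -listen 127.0.0.1:9200

# Then open Kibana through the proxy in your browser:
# http://127.0.0.1:9200/_plugin/kibana/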
Hi, there are many ways to access Kibana; here are some of the ones I found:
Use an SSH tunnel (an example command is sketched after this list). For information on how to do this, see: https://aws.amazon.com/premiumsupport/knowledge-center/kibana-outside-vpc-ssh-elasticsearch
Advantages: Provides a secure connection over the SSH protocol. All connections use the SSH port.
Disadvantages: Requires client-side configuration and a proxy server.
Use an NGINX proxy. For information on how to do this, see: https://aws.amazon.com/premiumsupport/knowledge-center/kibana-outside-vpc-nginx-elasticsearch
Advantages: Setup is easier, because only server-side configuration is required. Uses standard HTTP (port 80) and HTTPS (port 443).
Disadvantages: Requires a proxy server. The security level of the connection depends on how the proxy server is configured.
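To illustrate the SSH tunnel option, forwarding a local port through a bastion host in the same VPC might look like the sketch below; the key path, bastion address, and domain endpoint are placeholders. You would then browse to https://localhost:9200/_plugin/kibana/ and accept the browser's certificate warning (the certificate is issued for the Elasticsearch endpoint, not localhost).

# Forward local port 9200 to the VPC-only Elasticsearch endpoint via a bastion host
ssh -i ~/.ssh/bastion-key.pem -N \
    -L 9200:vpc-my-domain-abc123.eu-west-1.es.amazonaws.com:443 \
    ec2-user@bastion-public-ip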

Transitioning from Amazon AWS to a different Hosting provider

This task fell into my lap and I have no experience with Amazon AWS. We run a simple informational site along with Redmine (as a subdomain) on Amazon AWS and want to switch to Simple Helix. I have researched how to switch providers and I haven't found any posts that show how to do this step by step. Is there a simple way to move from Amazon AWS to another provider? I think it would be best to create a duplicate of what we have on Amazon AWS on the Simple Helix server before totally dropping Amazon AWS. As far as I know I only have login details for the EC2 console, and no SSH or FTP login details for Amazon AWS.
When an AWS instance is launched a public/private key pair is specified and installed in the running instance. You can find the name of the key-pair by looking at details of the instance in the console. Check for "Key pair name".
Hopefully, you'll have the private key of that pair somewhere at hand. If it's lost I'm not sure how to recover it without tech support from Amazon.
If you have the private key then ssh is simple, just type:
ssh -i my.private.key -l ubuntu servername
or something similar and you're in.
FTP access might require opening up a port in the firewall. Look at the security group settings for the server to see what ports are open. Secure ftp is available if you can ssh into the machine using the same private key.
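For example, assuming the same key pair works for the default user, an SFTP session would look something like:

sftp -i my.private.key ubuntu@servername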