Elasticsearch Master not discovered exception - Version 2.3.0 - amazon-web-services

This is the first time I am working with elasticsearch.
The following is my environment/configuration.
I have 3 EC2 Ubuntu 14.04 instances.
I have downloaded and extracted elasticsearch-2.3.0.tar.gz.
I have changed the elasticsearch.yml file under elasticsearch/config on each of the instances.
I have made the following changes in each of the elasticsearch.yml files.
3.1. EC2 Instance number 1 (my client node)
cluster.name: MyCluster
node.name: Client
node.master: false
node.data: false
path.data: /home/ubuntu/elasticsearch/data/elasticsearch/nodes/0
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["aa.aa.aa.aa" , "aa.aa.aaa.aa" , "aaa.a.aa.aa"]
In the above bracket I have provided the IPs of all my 3 instances.
3.2. EC2 Instance number 2 (my Master node)
cluster.name: MyCluster
node.name: Master
node.master: true
node.data: true
path.data: /home/ubuntu/elasticsearch/data/elasticsearch/nodes/0
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["aa.aa.aa.aa" , "aa.aa.aaa.aa" , "aaa.a.aa.aa"]
In the above bracket I have provided the IPs of all my 3 instances.
Note that I have made node.data: true (according to this link)
3.3. EC2 Instance number 3 (my data node)
cluster.name: MyCluster
node.name: slave_1
node.master: false
node.data: true
path.data: /home/ubuntu/elasticsearch/data/elasticsearch/nodes/0
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["aa.aa.aa.aa" , "aa.aa.aaa.aa" , "aaa.a.aa.aa"]
In the above bracket I have provided the IPs of all my 3 instances.
After this configuration I run the elasticsearch service on each instance, starting with the data node, then the master node and the client node at the end.
If I check the node status using curl http://localhost:9200, I am getting JSON which states that the node is running.
But when I check the cluster health using curl -XGET 'http://localhost:9200/_cluster/health?pretty=true' I am getting the following error on my client instance.
I hope I am clear with my question and I am going in the right direction.
Thank you

Elasticsearch 2.0+ defaults to binding all sockets to localhost. This means, by default, nothing outside of that machine can talk to it.
This is explicitly for security purposes and simple development setups. Locally, it works great, but you need to configure it for your environment when it gets more serious. This is also why you can talk to the node via localhost. Basically, you want this when you want more than one node across other machines using the network settings. This works with ES 2.3+:
network:
  bind_host: [ _local_, _global_ ]
  publish_host: _global_
Then other nodes can talk to the public IP, but you still have localhost to simplify working with the node locally (e.g., you--the human--never have to know the IP when SSHed into a box).
As you are in EC2 with Elasticsearch 2.0+, I recommend that you install the cloud-aws plugin (future readers beware: this plugin is being broken into 3 separate plugins in ES 5.x!).
$ bin/plugin install cloud-aws
With that installed, you get a bit more awareness out of your EC2 instances. With this great power, you can add more detail to your ES configurations:
# Guarantee that the plugin is installed
plugin.mandatory: cloud-aws
# Discovery / AWS EC2 Settings
discovery:
  type: ec2
  ec2:
    availability_zones: [ "us-east-1a", "us-east-1b" ]
    groups: [ "my_security_group1", "my_security_group2" ]
# The keys here need to be replaced with your keys
cloud:
  aws:
    access_key: AKVAIQBF2RECL7FJWGJQ
    secret_key: vExyMThREXeRMm/b/LRzEB8jWwvzQeXgjqMX+6br
    region: us-east-1
  node.auto_attributes: true
# Bind to the network on whatever IP you want to allow connections on.
# You _should_ only want to allow connections from within the network
# so you only need to bind to the private IP (the setting is network.host,
# not node.host)
network.host: _ec2:privateIp_
# You can bind to all hosts that are possible to communicate with the
# node but advertise it to other nodes via the private IP (less
# relevant because of the type of discovery used, but not a bad idea).
#network:
#  bind_host: [ _local_, _ec2:privateIp_, _ec2:publicIp_, _ec2:publicDns_ ]
#  publish_host: _ec2:privateIp_
This will allow them to talk by binding the IP address to what is expected. If you want to be able to SSH into those machines and communicate with ES over localhost (you probably do for debugging), then you will want the version commented out with _local_ as a bind_host in that list.

Related

How do I properly configure Cassandra in EC2 to connect to it?

I have an AWS EC2 instance with CentOS 8.
Inside this instance, I have successfully installed the Cassandra (3.11.10) database.
Inside this database, I have successfully created keyspace via this CQL query:
create keyspace if not exists dev_keyspace with replication={'class': 'SimpleStrategy', 'replication_factor' : 2};
Then I edited the configuration file (/etc/cassandra/default.conf/cassandra.yaml):
cluster_name: "DevCluster"
seeds: <ec2_private_ip_address>
listen_address: <ec2_private_ip_address>
start_rpc: true
rpc_address: 0.0.0.0
broadcast_rpc_address: <ec2_private_ip_address>
endpoint_snitch: Ec2Snitch
After that, I restarted the database:
Datacenter: eu-central
======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN <ec2_private_ip_address> 75.71 KiB 256 100.0% XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX 1a
When I try to connect to the Cassandra database with these credentials it raises an error:
host: <ec2_public_ip_address>
port: 9042
keyspace: dev_keyspace
username: cassandra (default)
password: cassandra (default)
ERROR:
All host(s) tried for query failed (tried:
/<ec2_private_ip_address>:9042
(com.datastax.driver.core.exceptions.TransportException:
[/<ec2_private_ip_address>:9042] Cannot connect))
What did I forget to configure? Let me know if you need more information.
You won't be able to access your cluster remotely because you've configured Cassandra to only listen for clients on the private IP with this setting:
broadcast_rpc_address: <ec2_private_ip_address>
For the node to accept requests from external clients, you need to set the following in cassandra.yaml:
listen_address: private_ip
rpc_address: public_ip
Note that you don't need to set the broadcast RPC address. You will need to restart Cassandra for the changes to take effect.
You will also need to define a security group with inbound rules on the AWS Management Console to allow ingress to your EC2 instances on port 9042. Cheers!

Self-hosted gitlab-runner cannot register with self-hosted GitLab on GKE

GitLab version 13.8.1-ee (installed with Helm)
GKE version: 1.16.15-gke.6000
I installed GitLab and gitlab-runner on GKE, in a private cluster.
Also, I have nginx-ingress-controller for the firewall rule, following the docs:
https://gitlab.com/gitlab-org/charts/gitlab/blob/70f31743e1ff37bb00298cd6d0b69a0e8e035c33/charts/nginx/index.md
nginx-ingress:
  controller:
    scope:
      enabled: true
      namespace: default
    service:
      loadBalancerSourceRanges:
        ["IP","ADDRESSES"]
With this setting, the gitlab-runner pod has this error:
couldn't execute POST against https://gitlab.my-domain.com/api/v4/runners: Post https://gitlab.my-domain.com/api/v4/runners: dial tcp [my-domain's-IP]: i/o timeout
The issue is the same as this one:
Gitlab Runner can't access Gitlab self-hosted instance
But I have already set up Cloud NAT and a Cloud Route, and also added the IP address of Cloud NAT to loadBalancerSourceRanges in GitLab's values.yaml.
To check whether Cloud NAT worked or not, I exec'd into the pod and checked the IP:
$ kubectl exec -it gitlab-gitlab-runner-xxxxxxxx /bin/sh
wget -qO- httpbin.org/ip
and it showed the IP address of Cloud NAT.
So the request must be made using the Cloud NAT IP as the source IP.
https://gitlab.my-domain.com/api/v4/runners
What can I do to solve it?
It worked when I added the Kubernetes pod-internal IP address range to loadBalancerSourceRanges. Both stable/nginx and https://kubernetes.github.io/ingress-nginx worked.
gitlab-runner called https://my-domain/api/v4/runners. I thought it would go through the public network, so I added only the Cloud NAT IP, but maybe it did not.
Still, it's a little bit weird.
The first time, I set 0.0.0.0/0 in loadBalancerSourceRanges, then added only the Cloud NAT IP in the FW, and https://my-domain/api/v4/runners worked.
So loadBalancerSourceRanges may be used in 2 places: one is the FW rule which we can see on GCP, the other is hidden.

K8s service type ELB stuck in progress

Deployed a K8s service with type LoadBalancer. The K8s cluster is running on an EC2 instance. The service is stuck in "pending" state.
Does the service type 'ELB' require any stipulation in terms of AWS configuration parameters?
Yes. Typically you need the option --cloud-provider=aws on:
All kubelets
kube-apiserver
kube-controller-manager
Also, you have to make sure that all your K8s instances (master/nodes) have an AWS instance role that allows them to create/remove ELBs and routes (All access to EC2 should do).
Then you need to make sure all your nodes are tagged:
Key: KubernetesCluster, Value: 'your cluster name'
Key: k8s.io/role/node, Value: 1 (For nodes only)
Key: kubernetes.io/cluster/kubernetes, Value: owned
Make sure your subnet is also tagged:
Key: KubernetesCluster, Value: 'your cluster name'
Also, in your Kubernetes node definition, you should have something like this:
ProviderID: aws:///<aws-region>/<instance-id>
Generally, all of the above is not needed if you are using the Kubernetes Cloud Controller Manager, which is in beta as of K8s 1.13.0.

Connect to AWS Lightsail instance on port 9200 from AWS Lambda

I'm trying to set up Elasticsearch on my AWS Lightsail instance, and got it running on port 9200; however, I'm not able to connect from AWS Lambda to the instance on that port. I've updated my Lightsail instance-level networking setting to allow port 9200 to accept traffic; however, I'm neither able to connect to port 9200 through the static IP, nor am I able to get my AWS Lambda function to talk to my Lightsail host on port 9200.
I understand that AWS has a separate Elasticsearch offering that I can use; however, I'm doing a test setup and need to run vanilla ES on the same Lightsail host. ES is up and running and I can connect to it through an SSH tunnel, but it doesn't work when I try to connect using the static IP or from another AWS service.
Any pointers would be appreciated.
Thanks.
Update elasticsearch.yml:
network.host: _ec2:privateIpv4_
We are running multiple versions of Elasticsearch clusters on AWS Cloud:
elasticsearch-2.4 cluster elasticsearch.yml (on a classic EC2 instance, i3.2xlarge)
cluster.name: ES-CLUSTER
node.name: ES-NODE-01
node.max_local_storage_nodes: 1
node.rack_id: rack_us_east_1d
index.number_of_shards: 8
index.number_of_replicas: 1
gateway.recover_after_nodes: 1
gateway.recover_after_time: 2m
gateway.expected_nodes: 1
discovery.zen.minimum_master_nodes: 1
discovery.zen.ping.multicast.enabled: false
cloud.aws.access_key: ***
cloud.aws.secret_key: ***
cloud.aws.region: us-east-1
discovery.type: ec2
discovery.ec2.groups: es-cluster-sg
network.host: _ec2:privateIpv4_
elasticsearch-6.3 cluster elasticsearch.yml (inside a VPC, i3.2xlarge instance)
cluster.name: ES-CLUSTER
node.name: ES-NODE-01
gateway.recover_after_nodes: 1
gateway.recover_after_time: 2m
gateway.expected_nodes: 1
discovery.zen.minimum_master_nodes: 1
discovery.zen.hosts_provider: ec2
discovery.ec2.groups: vpc-es-eluster-sg
network.host: _ec2:privateIpv4_
path:
  logs: /es-data/log
  data: /es-data/data
discovery.ec2.host_type: private_ip
discovery.ec2.tag.es_cluster: staging-elasticsearch
discovery.ec2.endpoint: ec2.us-east-1.amazonaws.com
I recommend not opening ports 9300 and 9200 to the outside. Allow only EC2 instances to communicate with your Elasticsearch.
Now, how to access Elasticsearch from my local box?
Use tunnelling (port forwarding) from your system using this command:
$ ssh -i es.pem ec2-user@es-node-public-ip -L 9200:es-node-private-ip:9200 -N
It is as if you are running Elasticsearch on your local system.
I might be late to the party, but anyone still struggling with this sort of problem should know that new versions of Elasticsearch bind to localhost by default, as mentioned in this answer. To override this behavior you should set:
network.bind_host: 0
to allow the node to be accessed from outside of localhost.

Elasticsearch ERROR not enough master nodes discovered during pinging

# ======================== Elasticsearch Configuration =========================
#cluster.name: my-application
node.name: node-1
node.master: true
node.data: true
network.host: 172.31.24.193
discovery.zen.ping.unicast.hosts: ["172.31.24.193","172.31.25.87","172.31.23.237"]
node-2 elasticsearch.yml configuration
# ======================== Elasticsearch Configuration =========================
#cluster.name: my-application
node.name: node-2
node.master: true
node.data: true
network.host: 172.31.25.87
discovery.zen.ping.unicast.hosts: ["172.31.24.193","172.31.25.87","172.31.23.237"]
node-3 elasticsearch.yml configuration
# ======================== Elasticsearch Configuration =========================
#cluster.name: my-application
node.name: node-3
node.master: true
node.data: true
network.host: 172.31.23.237
discovery.zen.ping.unicast.hosts: ["172.31.24.193","172.31.25.87","172.31.23.237"]
Error description: I have installed the ec2-discovery plugin. I am passing the AWS access key, secret key and endpoint in the Elasticsearch keystore.
I am using the latest Elasticsearch, 6.2. I have started all the nodes on Amazon EC2 instances. I have three EC2 instances.
I am getting the error on all three nodes, like this:
[node-2] not enough master nodes discovered during pinging (found [[Candidate{node={node-2}{TpI8T4GBShK8CN7c2ruAXw}{DAsuqCnISsuiw6BGvqrysA}{172.31.25.87}{172.31.25.87:9300}, clusterStateVersion=-1}]], but needed [2]), pinging again
First, to use ec2-discovery, you need to have this in your elasticsearch.yml:
discovery.zen.hosts_provider: ec2
and remove discovery.zen.ping.unicast.hosts. Please check https://www.elastic.co/guide/en/elasticsearch/plugins/current/discovery-ec2-usage.html
The idea of ec2-discovery is not to hardcode the nodes IPs in the config file, but rather auto 'discover' them.
Second, the error you've provided shows that the nodes are not able to ping each other; make sure you set a rule in your security group to allow this. In the Inbound tab, add a new rule:
Type: All TCP
Source: your security group id (sg-xxxxxx)
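As an added example (not code from any of the posts above), a small Python audit of an elasticsearch.yml for the issues these threads keep running into: binding 9200/9300 to a public or all-interfaces address instead of the private IP, hardcoding unicast hosts next to the ec2 hosts provider, and a minimum_master_nodes below quorum (the quorum check goes a step beyond the answers, which only describe the pinging error). It parses only the flat dotted `key: value` style used in the posts; the sample config, the PUBLIC_BINDS set and the master-eligible node count are constructed inputs.

```python
"""Audit an elasticsearch.yml for the exposure and discovery mistakes discussed in
these threads (added example, not code from any post above; flat dotted keys only)."""
# Constructed sample: binds every interface, hardcodes unicast hosts beside the ec2
# provider, and runs 3 master-eligible nodes with minimum_master_nodes 1.
SAMPLE_YML = """\
cluster.name: MyCluster
node.master: true
network.host: 0.0.0.0
discovery.zen.hosts_provider: ec2
discovery.zen.ping.unicast.hosts: ["172.31.24.193", "172.31.25.87", "172.31.23.237"]
discovery.zen.minimum_master_nodes: 1
"""
# Bind values that expose ports 9200/9300 beyond the private network (constructed set).
PUBLIC_BINDS = {"0", "0.0.0.0", "::", "_global_", "_ec2:publicIp_", "_ec2:publicIpv4_", "_ec2:publicDns_"}

def parse_flat_yaml(text):
    """Parse dotted `key: value` lines; inline `[a, b]` lists become Python lists."""
    settings = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and padding
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        value = value.strip()
        if value.startswith("[") and value.endswith("]"):
            value = [v.strip().strip("\"'") for v in value[1:-1].split(",") if v.strip()]
        else:
            value = value.strip("\"'")
        settings[key.strip()] = value
    return settings

def audit(settings, master_eligible):
    """Return findings; master_eligible is the cluster's master-eligible node count."""
    findings = []
    for key in ("network.host", "network.bind_host"):
        hosts = settings.get(key, [])
        if any(h in PUBLIC_BINDS for h in (hosts if isinstance(hosts, list) else [hosts])):
            findings.append(f"{key} exposes the node on a public/all-interfaces address; "
                            "prefer _ec2:privateIpv4_ or _site_ behind a security group")
    if (settings.get("discovery.zen.hosts_provider") == "ec2"
            and "discovery.zen.ping.unicast.hosts" in settings):
        findings.append("unicast.hosts is hardcoded although hosts_provider is ec2; "
                        "drop the list and let the plugin discover peers")
    quorum = master_eligible // 2 + 1  # (N / 2) + 1 master-eligible nodes avoids split brain
    mmn = int(settings.get("discovery.zen.minimum_master_nodes", 1))
    if master_eligible > 1 and mmn < quorum:
        findings.append(f"minimum_master_nodes={mmn} is below quorum {quorum} "
                        f"for {master_eligible} master-eligible nodes")
    return findings
```

Running audit(parse_flat_yaml(SAMPLE_YML), 3) yields three findings, while a config using network.host: _ec2:privateIpv4_ with hosts_provider: ec2 and minimum_master_nodes: 2 yields none.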