# ======================== Elasticsearch Configuration =========================
#cluster.name: my-application
node.name: node-1
node.master: true
node.data: true
network.host: 172.31.24.193
discovery.zen.ping.unicast.hosts: ["172.31.24.193", "172.31.25.87", "172.31.23.237"]
node-2 elasticsearch.yml configuration:
# ======================== Elasticsearch Configuration =========================
#cluster.name: my-application
node.name: node-2
node.master: true
node.data: true
network.host: 172.31.25.87
discovery.zen.ping.unicast.hosts: ["172.31.24.193", "172.31.25.87", "172.31.23.237"]
node-3 elasticsearch.yml configuration:
# ======================== Elasticsearch Configuration =========================
#cluster.name: my-application
node.name: node-3
node.master: true
node.data: true
network.host: 172.31.23.237
discovery.zen.ping.unicast.hosts: ["172.31.24.193", "172.31.25.87", "172.31.23.237"]
Error description: I have installed the ec2-discovery plugin, and I am passing the AWS access key, secret key, and endpoint via the Elasticsearch keystore.
I am using the latest Elasticsearch, 6.2, and have started all the nodes on Amazon EC2 instances. I have three EC2 instances.
I am getting an error like this on all three nodes:
[node-2] not enough master nodes discovered during pinging (found [[Candidate{node={node-2}{TpI8T4GBShK8CN7c2ruAXw}{DAsuqCnISsuiw6BGvqrysA}{172.31.25.87}{172.31.25.87:9300}, clusterStateVersion=-1}]], but needed [2]), pinging again
First, to use ec2-discovery, you need to have this in your elasticsearch.yml:
discovery.zen.hosts_provider: ec2
and remove discovery.zen.ping.unicast.hosts. Please check https://www.elastic.co/guide/en/elasticsearch/plugins/current/discovery-ec2-usage.html
The idea of ec2-discovery is not to hardcode the node IPs in the config file, but rather to auto-'discover' them.
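To illustrate, here is a minimal elasticsearch.yml sketch for EC2 discovery on 6.x; the cluster name, tag, and endpoint values are illustrative, and it assumes the discovery-ec2 plugin is installed with the AWS credentials already stored in the keystore:
cluster.name: my-application
node.name: node-1
network.host: _ec2:privateIpv4_
discovery.zen.hosts_provider: ec2
# illustrative: limit discovery to instances carrying this tag
discovery.ec2.tag.es_cluster: my-application
# illustrative: use the regional endpoint your instances run in
discovery.ec2.endpoint: ec2.us-east-1.amazonaws.com
# with three master-eligible nodes, a quorum of 2 avoids split brain
discovery.zen.minimum_master_nodes: 2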
Second,
the error you've provided shows that the nodes are not able to ping each other. Make sure you set a rule in your security group to allow this. In the Inbound tab, add a new rule:
Type: All TCP
Source: your security group id (sg-xxxxxx)
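If you would rather codify that rule instead of clicking it together in the console, a rough CloudFormation sketch could look like the following (the resource name and group id are placeholders):
EsNodeIngress:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: sg-xxxxxx                    # the nodes' security group (placeholder)
    IpProtocol: tcp
    FromPort: 0                           # "All TCP", as described above
    ToPort: 65535
    SourceSecurityGroupId: sg-xxxxxx      # the same group, so the nodes can reach each other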
I have an AWS EC2 instance with Centos 8.
Inside this instance, I have successfully installed the Cassandra (3.11.10) database.
Inside this database, I have successfully created keyspace via this CQL query:
create keyspace if not exists dev_keyspace with replication={'class': 'SimpleStrategy', 'replication_factor' : 2};
Then I edited the configuration file (/etc/cassandra/default.conf/cassandra.yaml):
cluster_name: "DevCluster"
seeds: <ec2_private_ip_address>
listen_address: <ec2_private_ip_address>
start_rpc: true
rpc_address: 0.0.0.0
broadcast_rpc_address: <ec2_private_ip_address>
endpoint_snitch: Ec2Snitch
After that, I restarted the database and checked the cluster status:
Datacenter: eu-central
======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN <ec2_private_ip_address> 75.71 KiB 256 100.0% XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX 1a
When I try to connect to the Cassandra database with the following credentials, it raises an error:
host: <ec2_public_ip_address>
port: 9042
keyspace: dev_keyspace
username: cassandra (default)
password: cassandra (default)
ERROR:
All host(s) tried for query failed (tried:
/<ec2_private_ip_address>:9042
(com.datastax.driver.core.exceptions.TransportException:
[/<ec2_private_ip_address>:9042] Cannot connect))
What did I forget to configure? Let me know if you need more information.
You won't be able to access your cluster remotely because you've configured Cassandra to only listen for clients on the private IP with this setting:
broadcast_rpc_address: <ec2_private_ip_address>
For the node to accept requests from external clients, you need to set the following in cassandra.yaml:
listen_address: private_ip
rpc_address: public_ip
Note that you don't need to set the broadcast RPC address. You will need to restart Cassandra for the changes to take effect.
You will also need to define a security group with inbound rules on the AWS Management Console to allow ingress to your EC2 instances on port 9042. Cheers!
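Once those settings are in place, the security group rule is added, and Cassandra has been restarted, you can sanity-check remote connectivity from your workstation with cqlsh (the host is a placeholder; cassandra/cassandra are the default credentials mentioned in the question):
cqlsh <ec2_public_ip_address> 9042 -u cassandra -p cassandra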
When I try to specify a scaling trigger, it keeps failing with:
Service:AmazonCloudFormation, Message:[/Resources/AWSEBCloudwatchAlarmHigh/Type/Dimensions/0/Value/Fn::GetAtt/0] 'null' values are not allowed in templates
I have a saved template and I am trying to add:
aws:autoscaling:trigger:
BreachDuration: 5
LowerBreachScaleIncrement: -1
LowerThreshold: 0.75
MeasureName: Latency
Period: 1
EvaluationPeriods: 1
Statistic: Average
Unit: Seconds
UpperBreachScaleIncrement: 2
UpperThreshold: 1
So I created it without this block, which created the auto scaling alarms. I then tried to update to this setting through the console, but that also failed with the message
Service:AmazonCloudFormation, Message:[/Resources/AWSEBCloudwatchAlarmHigh/Type/Dimensions/0/Value/Fn::GetAtt/0] 'null' values are not allowed in templates
Here is my saved template:
Platform:
PlatformArn: arn:aws:elasticbeanstalk:eu-west-2::platform/Python 3.6 running on 64bit Amazon Linux/2.9.14
OptionSettings:
aws:elasticbeanstalk:command:
BatchSize: '30'
BatchSizeType: Percentage
AWSEBAutoScalingScaleUpPolicy.aws:autoscaling:trigger:
UpperBreachScaleIncrement: '2'
aws:elasticbeanstalk:application:environment:
DJANGO_SETTINGS_MODULE: domain.settings
PYTHONPATH: $PYTHONPATH
ALLOWED_CIDR_NETS: 10.0.0.0/16
DATABASE_NAME: domainproductionplus
DATABASE_HOST: domain-production-plus.coz8h02qupfe.eu-west-2.rds.amazonaws.com
ENVIRONMENT: production
DATABASE_PORT: '5432'
EMAIL_BACKEND: django.core.mail.backends.console.EmailBackend
DEBUG: '0'
DATABASE_ENGINE: django.db.backends.postgresql_psycopg2
REDIS_LOCATION: aws-co-qemfpydhs2ly.ubjsxm.0001.euw2.cache.amazonaws.com
AWS_S3_REGION_NAME: eu-west-2
ALLOWED_HOSTS: '*'
VAPID_ADMIN_EMAIL: email@domain.com
DATABASE_USER: domainprodplus
AWS_STORAGE_BUCKET_NAME: domain-production-plus
REDIS_LOCATION_X: domain-production-plus-001.domain-production-plus.ubjsxm.euw2.cache.amazonaws.com
DATABASE_PASSWORD: '{{resolve:ssm:domain-api-production-plus-DATABASE_PASSWORD:1}}'
HASHID_SALT: '{{resolve:ssm:domain-api-production-plus-HASHID_SALT:1}}'
VAPID_PRIVATE_KEY: '{{resolve:ssm:domain-api-production-plus-VAPID_PRIVATE_KEY:1}}'
VAPID_PUBLIC_KEY: '{{resolve:ssm:domain-api-production-plus-VAPID_PUBLIC_KEY:1}}'
SECRET_KEY: '{{resolve:ssm:domain-api-production-plus-SECRET_KEY:1}}'
AWS_SECRET_ACCESS_KEY: '{{resolve:ssm:domain-api-production-plus-AWS_SECRET_ACCESS_KEY:1}}'
aws:autoscaling:updatepolicy:rollingupdate:
RollingUpdateType: Health
RollingUpdateEnabled: true
aws:elb:policies:
ConnectionDrainingEnabled: true
aws:ec2:instances:
InstanceTypes: t2.micro
AWSEBAutoScalingGroup.aws:autoscaling:asg:
Cooldown: '120'
MaxSize: '6'
aws:elasticbeanstalk:container:python:
WSGIPath: domain/wsgi.py
StaticFiles: /static/=www/static/
aws:ec2:vpc:
VPCId: vpc-0fddefb70e6c8b32a
Subnets: subnet-04497865d7eb17b70
AssociatePublicIpAddress: false
aws:elasticbeanstalk:environment:process:default:
DeregistrationDelay: 20
HealthCheckInterval: 15
HealthCheckPath: /app-version-updates
HealthCheckTimeout: 5
HealthyThresholdCount: 3
MatcherHTTPCode: 200
Port: 80
Protocol: HTTP
StickinessEnabled: false
StickinessLBCookieDuration: 86400
StickinessType: lb_cookie
UnhealthyThresholdCount: 5
aws:elbv2:listener:80:
ListenerEnabled: true
Protocol: HTTP
Rules: domainapiproductionplus
aws:elbv2:listener:443:
ListenerEnabled: true
SSLCertificateArns: arn:aws:acm:eu-west-2:799479065523:certificate/5fb4f19c-f377-4ef6-8a7a-9657832c0d17
Protocol: HTTPS
Rules: domainapiproductionplus
SSLPolicy: ELBSecurityPolicy-TLS-1-2-2017-01
aws:elbv2:listenerrule:domainapiproductionplus:
HostHeaders: api-production-plus.hcidomain.digital
PathPatterns: /*
Priority: 2
process: default
aws:elb:loadbalancer:
CrossZone: true
ManagedSecurityGroup: sg-0ac0850967d4d2929
aws:elbv2:loadbalancer:
ManagedSecurityGroup: sg-0ac0850967d4d2929
SharedLoadBalancer: arn:aws:elasticloadbalancing:eu-west-2:799479065523:loadbalancer/app/domain-production-plus/206b8390c82843a3
aws:elasticbeanstalk:environment:
ServiceRole: arn:aws:iam::799479065523:role/aws-elasticbeanstalk-service-role
LoadBalancerType: application
LoadBalancerIsShared: true
aws:autoscaling:launchconfiguration:
IamInstanceProfile: aws-elasticbeanstalk-ec2-role
EC2KeyName: domain
SecurityGroups: sg-0ac0850967d4d2929,sg-095397beca170840e,sg-02f17712a24784d64
MonitoringInterval: 1 minute
aws:autoscaling:trigger:
BreachDuration: 5
LowerBreachScaleIncrement: -1
LowerThreshold: 0.75
MeasureName: Latency
Period: 1
EvaluationPeriods: 1
Statistic: Average
Unit: Seconds
UpperBreachScaleIncrement: 2
UpperThreshold: 1
aws:elasticbeanstalk:healthreporting:system:
SystemType: enhanced
EnvironmentTier:
Type: Standard
Name: WebServer
AWSConfigurationTemplateVersion: 1.1.0.0
Tags:
project: domain
product: domain
I am using a shared load balancer, could this be the issue? With a classic load balancer it works fine, setting the autoscale metric to use Latency.
To create the environment from the CLI I run:
% eb create domain-api-production-plus --cfg domain-api-production-plus \
--cname domain-api-production-plus \
--elb-type application \
--shared-lb arn:aws:elasticloadbalancing:eu-west-2:799479065523:loadbalancer/app/domain-production-plus/206b8390c82843a3 \
--vpc \
--vpc.ec2subnets subnet-04497865d7eb17b70,subnet-032624d3e62d499f1 \
--vpc.elbsubnets subnet-0b3c3aa9b190a2546,subnet-05453d986413e8ae2 \
--vpc.id vpc-0fddefb70e6c8b32a \
--vpc.securitygroups sg-02f17712a24784d64,sg-095397beca170840e,sg-0ac0850967d4d2929 \
--tags project=domain,Name=domain-api-production-plus \
--service-role aws-elasticbeanstalk-service-role \
--region eu-west-2 \
--platform "arn:aws:elasticbeanstalk:eu-west-2::platform/Python 3.6 running on 64bit Amazon Linux/2.9.14" \
--keyname domain
Do you want to associate a public IP address? (Y/n): n
Do you want the load balancer to be public? (Select no for internal) (Y/n):
Creating application version archive "app-5aac-200929_084247".
Uploading Domain/app-5aac-200929_084247.zip to S3. This may take a while.
Upload Complete.
Environment details for: domain-api-production-plus
Application name: Domain
Region: eu-west-2
Deployed Version: app-5aac-200929_084247
Environment ID: e-tcwd2awzvs
Platform: arn:aws:elasticbeanstalk:eu-west-2::platform/Python 3.6 running on 64bit Amazon Linux/2.9.14
Tier: WebServer-Standard-1.0
CNAME: domain-api-production-plus.eu-west-2.elasticbeanstalk.com
Updated: 2020-09-29 07:42:50.765000+00:00
Printing Status:
2020-09-29 07:42:49 INFO createEnvironment is starting.
2020-09-29 07:42:50 INFO Using elasticbeanstalk-eu-west-2-799479065523 as Amazon S3 storage bucket for environment data.
2020-09-29 07:42:55 INFO Created security group named: awseb-AWSEBManagedLBSecurityGroup-dw7edzemvt.
2020-09-29 07:43:13 INFO Created target group named: arn:aws:elasticloadbalancing:eu-west-2:799479065523:targetgroup/awseb-domain--default-38qig/440561c9ab287e68
2020-09-29 07:43:13 INFO Created security group named: sg-0e16398cbceab94d6
2020-09-29 07:43:14 INFO Created Auto Scaling launch configuration named: awseb-e-tcwd2awzvs-stack-AWSEBAutoScalingLaunchConfiguration-1I6T492EE9NN1
2020-09-29 07:43:30 INFO Created Load Balancer listener rule named: arn:aws:elasticloadbalancing:eu-west-2:799479065523:listener-rule/app/domain-production-plus/206b8390c82843a3/57e3715de5b24a02/0f045a9230df4191
2020-09-29 07:43:30 INFO Created Load Balancer listener rule named: arn:aws:elasticloadbalancing:eu-west-2:799479065523:listener-rule/app/domain-production-plus/206b8390c82843a3/57e3715de5b24a02/f42374f1622dbd49
2020-09-29 07:43:30 INFO Created Load Balancer listener rule named: arn:aws:elasticloadbalancing:eu-west-2:799479065523:listener-rule/app/domain-production-plus/206b8390c82843a3/457d5212b3cacc19/aa1f1cd2117f1290
2020-09-29 07:44:17 INFO Created Auto Scaling group named: awseb-e-tcwd2awzvs-stack-AWSEBAutoScalingGroup-1I5GB63XJKP1Y
2020-09-29 07:44:17 INFO Waiting for EC2 instances to launch. This may take a few minutes.
2020-09-29 07:44:17 INFO Created Auto Scaling group policy named: arn:aws:autoscaling:eu-west-2:799479065523:scalingPolicy:664aebe1-ba1f-4d20-aed5-b44204b3a702:autoScalingGroupName/awseb-e-tcwd2awzvs-stack-AWSEBAutoScalingGroup-1I5GB63XJKP1Y:policyName/awseb-e-tcwd2awzvs-stack-AWSEBAutoScalingScaleDownPolicy-ZTC3D3FZQPZT
2020-09-29 07:44:17 INFO Created Auto Scaling group policy named: arn:aws:autoscaling:eu-west-2:799479065523:scalingPolicy:c2431b1c-1efa-4927-a7ad-cdba75fa47ae:autoScalingGroupName/awseb-e-tcwd2awzvs-stack-AWSEBAutoScalingGroup-1I5GB63XJKP1Y:policyName/awseb-e-tcwd2awzvs-stack-AWSEBAutoScalingScaleUpPolicy-2MVWR52GOSTF
2020-09-29 07:44:32 INFO Created CloudWatch alarm named: awseb-e-tcwd2awzvs-stack-AWSEBCloudwatchAlarmHigh-1OBABSCE98Y89
2020-09-29 07:44:32 INFO Created CloudWatch alarm named: awseb-e-tcwd2awzvs-stack-AWSEBCloudwatchAlarmLow-UOJ9YBPCAIOH
2020-09-29 07:45:36 INFO Successfully launched environment: domain-api-production-plus
UPDATE
So I cannot find TargetResponseTime on the Beanstalk environment.
I am using a shared load balancer, could this be the issue? With a classic load balancer it works fine, setting the autoscale metric to use Latency.
Latency metric is only for CLB, not other load balancer types:
Latency: [HTTP listener] The total time elapsed, in seconds, from the time the load balancer sent the request to a registered instance until the instance started to send the response headers.
For the ALB, the closest metric would be:
TargetResponseTime: The time elapsed, in seconds, after the request leaves the load balancer until a response from the target is received. This is equivalent to the target_processing_time field in the access logs.
In your config file there is a mixture of ALB and CLB settings. For example, aws:elb:loadbalancer is for the CLB, while aws:elbv2:loadbalancer with SharedLoadBalancer is only for the ALB.
Your aws:autoscaling:trigger is using Latency, which as explained above, is only for CLB. For ALB, it should be TargetResponseTime.
I can't verify whether changing MeasureName in your ASG trigger will solve all the issues you are having, but it is definitely a part that contributes to the problem.
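Concretely, the trigger block from the saved template would then become something like the following sketch (all values carried over from the question; only MeasureName changes, and Unit stays Seconds because TargetResponseTime is also reported in seconds):
aws:autoscaling:trigger:
  BreachDuration: 5
  LowerBreachScaleIncrement: -1
  LowerThreshold: 0.75
  MeasureName: TargetResponseTime
  Period: 1
  EvaluationPeriods: 1
  Statistic: Average
  Unit: Seconds
  UpperBreachScaleIncrement: 2
  UpperThreshold: 1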
I am trying to do an OpenShift 3.11 install with a 3-master setup, 2 infra nodes and 2 compute nodes. I didn't use an LB node since I figured the AWS ELB would take care of that for me.
My current issue is that the installation fails on the wait-for-control-plane task.
failed: [ip-10-0-4-29.us-east-2.compute.internal] (item=etcd) => {"attempts": 60, "changed": false, "item": "etcd", "msg": {"cmd": "/usr/bin/oc get pod master-etcd-ip-10-0-4-29.us-east-2.compute.internal -o json -n kube-system"
Different errors are shown below.
I've done the following.
Because this is only a demo system, I wanted to go the cheap route and create self-signed certs, so I ran the following:
openssl req -new -key openshift.key -out openshift.csr
openssl x509 -req -days 1095 -in openshift.csr -signkey openshift.key -out openshift.crt
Then within my hosts file I added the following:
openshift_master_named_certificates=[{"certfile": "/home/ec2-user/certs/openshift.crt", "keyfile": "/home/ec2-user/certs/openshift.key"}]
Next I created an ELB accepting HTTP traffic on port 8443 and directing it via HTTP to 8443 on any of the masters.
When I do this, I get the following failure when re-running the command that fails the task:
[root@ip-10-0-4-29 ~]# /usr/bin/oc get pod master-etcd-ip-10-0-4-29.us-east-2.compute.internal -o json -n kube-system
Unable to connect to the server: http: server gave HTTP response to HTTPS client
If I change the ELB to take HTTP traffic and direct it to HTTPS 8443, I get the following error:
[root@ip-10-0-4-29 ~]# /usr/bin/oc get pod master-etcd-ip-10-0-4-29.us-east-2.compute.internal -o json -n kube-system
The connection to the server os.domain-name.net:8443 was refused - did you specify the right host or port?
If I change the ELB to accept HTTPS traffic, I need to follow the guide to create SSL certs for use in AWS, but even then, accepting HTTPS traffic on 8443 and sending it via either HTTP or HTTPS to 8443 on the master node results in this error:
[root@ip-10-0-4-29 ~]# /usr/bin/oc get pod master-etcd-ip-10-0-4-29.us-east-2.compute.internal -o json -n kube-system
Unable to connect to the server: x509: certificate signed by unknown authority
I've also copied in my hosts file, just in case I have something off with it.
# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=origin
openshift_cloudprovider_aws_access_key="{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
openshift_cloudprovider_aws_secret_key="{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"
openshift_clusterid=openshift
openshift_cloudprovider_kind=aws
openshift_hosted_manage_registry=true
openshift_hosted_registry_storage_kind=object
openshift_hosted_registry_storage_provider=s3
openshift_hosted_registry_storage_s3_accesskey="{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
openshift_hosted_registry_storage_s3_secretkey="{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"
openshift_hosted_registry_storage_s3_bucket=os-test-os-bucket
openshift_hosted_registry_storage_s3_region=us-west-2
openshift_hosted_registry_storage_s3_chunksize=26214400
openshift_hosted_registry_storage_s3_rootdirectory=/registry
openshift_hosted_registry_pullthrough=true
openshift_hosted_registry_acceptschema2=true
openshift_hosted_registry_enforcequota=true
openshift_hosted_registry_replicas=3
#openshift_enable_excluders=false
openshift_disable_check=memory_availability
openshift_additional_repos=[{'id': 'centos-okd-ci', 'name': 'centos-okd-ci', 'baseurl' :'https://rpms.svc.ci.openshift.org/openshift-origin-v3.11', 'gpgcheck' :'0', 'enabled' :'1'}]
openshift_node_groups=[{'name': 'node-config-master', 'labels': ['node-role.kubernetes.io/master=true']}, {'name': 'node-config-infra', 'labels': ['node-role.kubernetes.io/infra=true']}, {'name': 'node-config-compute', 'labels': ['node-role.kubernetes.io/compute=true']}]
openshift_router_selector='node-role.kubernetes.io/infra=true'
openshift_registry_selector='node-role.kubernetes.io/infra=true'
openshift_metrics_install_metrics=true
openshift_master_named_certificates=[{"certfile": "/home/ec2-user/certs/openshift.crt", "keyfile": "/home/ec2-user/certs/openshift.key"}]
# uncomment the following to enable htpasswd authentication; defaults to AllowAllPasswordIdentityProvider
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
# Native high availability cluster method with optional load balancer.
# If no lb group is defined installer assumes that a load balancer has
# been preconfigured. For installation the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=os.domain-name.net
openshift_master_cluster_public_hostname=os.domain-name.net
# host group for masters
[masters]
ip-10-0-4-29.us-east-2.compute.internal
ip-10-0-5-54.us-east-2.compute.internal
ip-10-0-6-8.us-east-2.compute.internal
[etcd]
ip-10-0-4-29.us-east-2.compute.internal
ip-10-0-5-54.us-east-2.compute.internal
ip-10-0-6-8.us-east-2.compute.internal
# host group for nodes, includes region info
[nodes]
#master
ip-10-0-4-29.us-east-2.compute.internal openshift_node_group_name='node-config-master'
ip-10-0-5-54.us-east-2.compute.internal openshift_node_group_name='node-config-master'
ip-10-0-6-8.us-east-2.compute.internal openshift_node_group_name='node-config-master'
#infra
ip-10-0-4-28.us-east-2.compute.internal openshift_node_group_name='node-config-infra'
ip-10-0-5-241.us-east-2.compute.internal openshift_node_group_name='node-config-infra'
#node
ip-10-0-4-162.us-east-2.compute.internal openshift_node_group_name='node-config-compute'
ip-10-0-5-146.us-east-2.compute.internal openshift_node_group_name='node-config-compute'
Please, if anyone can help me get past this hurdle so I can finally demo a CI/CD pipeline using OpenShift, I'd be truly grateful.
I know this is an old thread, but I was running into the same issue with my ELB configured as HTTPS. I changed the listener to TCP and used port 443 for both the Load Balancer Port and the Instance Port. For the Health Check, make sure you are using Ping Protocol HTTPS, Ping Port 443 and a Ping Path of "/". Those configuration changes allowed the installation to proceed.
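For anyone managing the load balancer outside the console, the same TCP passthrough setup could be sketched roughly as a CloudFormation resource (the resource name and subnet are placeholders; the point is passing TLS through on 443 and health-checking HTTPS on /):
MasterElb:
  Type: AWS::ElasticLoadBalancing::LoadBalancer
  Properties:
    Subnets:
      - subnet-xxxxxxxx                 # placeholder
    Listeners:
      - LoadBalancerPort: '443'
        InstancePort: '443'
        Protocol: TCP                   # pass TLS through so the masters terminate it
        InstanceProtocol: TCP
    HealthCheck:
      Target: HTTPS:443/
      Interval: '30'
      Timeout: '5'
      HealthyThreshold: '3'
      UnhealthyThreshold: '5'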
I'm trying to set up Elasticsearch on my AWS Lightsail instance and have it running on port 9200; however, I'm not able to connect from AWS Lambda to the instance on that port. I've updated my Lightsail instance-level networking settings to allow traffic on port 9200, but I'm neither able to connect to port 9200 through the static IP, nor able to get my AWS Lambda function to talk to my Lightsail host on port 9200.
I understand that AWS has a separate Elasticsearch offering that I could use; however, I'm doing a test setup and need to run vanilla ES on the same Lightsail host. ES is up and running and I can connect to it through an SSH tunnel, but it doesn't work when I try to connect using the static IP or from another AWS service.
Any pointers would be appreciated.
Thanks.
Update elasticsearch.yml:
network.host: _ec2:privateIpv4_
We are running multiple versions of Elasticsearch clusters on AWS Cloud:
elasticsearch-2.4 cluster elasticsearch.yml (on classic EC2 instances, i3.2xlarge):
cluster.name: ES-CLUSTER
node.name: ES-NODE-01
node.max_local_storage_nodes: 1
node.rack_id: rack_us_east_1d
index.number_of_shards: 8
index.number_of_replicas: 1
gateway.recover_after_nodes: 1
gateway.recover_after_time: 2m
gateway.expected_nodes: 1
discovery.zen.minimum_master_nodes: 1
discovery.zen.ping.multicast.enabled: false
cloud.aws.access_key: ***
cloud.aws.secret_key: ***
cloud.aws.region: us-east-1
discovery.type: ec2
discovery.ec2.groups: es-cluster-sg
network.host: _ec2:privateIpv4_
elasticsearch-6.3 cluster elasticsearch.yml (inside a VPC, i3.2xlarge instances):
cluster.name: ES-CLUSTER
node.name: ES-NODE-01
gateway.recover_after_nodes: 1
gateway.recover_after_time: 2m
gateway.expected_nodes: 1
discovery.zen.minimum_master_nodes: 1
discovery.zen.hosts_provider: ec2
discovery.ec2.groups: vpc-es-eluster-sg
network.host: _ec2:privateIpv4_
path:
  logs: /es-data/log
  data: /es-data/data
discovery.ec2.host_type: private_ip
discovery.ec2.tag.es_cluster: staging-elasticsearch
discovery.ec2.endpoint: ec2.us-east-1.amazonaws.com
I recommend not opening ports 9300 and 9200 to the outside. Allow only EC2 instances to communicate with your Elasticsearch.
Now, how do you access Elasticsearch from your local box?
Use tunnelling (port forwarding) from your system with this command:
$ ssh -i es.pem ec2-user@es-node-public-ip -L 9200:es-node-private-ip:9200 -N
It is as if you were running Elasticsearch on your local system.
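With the tunnel open, requests from your local machine behave as if the cluster were local, e.g.:
curl 'http://localhost:9200/_cluster/health?pretty'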
I might be late to the party, but anyone still struggling with this sort of problem should know that newer versions of Elasticsearch bind to localhost by default, as mentioned in this answer. To override this behavior you should set:
network.bind_host: 0
to allow the node to be accessed from outside of localhost.
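For example, on the Lightsail host the relevant part of elasticsearch.yml could look like this sketch (0.0.0.0 binds all interfaces, equivalent in effect to the bind_host setting above, so keep the Lightsail firewall restricted to trusted sources; note that binding to a non-loopback address makes Elasticsearch enforce its production bootstrap checks):
network.host: 0.0.0.0
http.port: 9200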
This is the first time I am working with Elasticsearch.
The following is my environment/configuration.
I have 3 EC2 Ubuntu 14.04 instances.
I have downloaded and extracted elasticsearch-2.3.0.tar.gz.
I have changed the elasticsearch.yml file under elasticsearch/config on each of the instances.
I have made the following changes in each elasticsearch.yml file.
3.1. EC2 Instance number 1 (my client node)
cluster.name: MyCluster
node.name: Client
node.master: false
node.data: false
path.data: /home/ubuntu/elasticsearch/data/elasticsearch/nodes/0
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["aa.aa.aa.aa" , "aa.aa.aaa.aa" , "aaa.a.aa.aa"]
In the brackets above I have provided the IPs of all 3 of my instances.
3.2. EC2 Instance number 2 (my Master node)
cluster.name: MyCluster
node.name: Master
node.master: true
node.data: true
path.data: /home/ubuntu/elasticsearch/data/elasticsearch/nodes/0
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["aa.aa.aa.aa" , "aa.aa.aaa.aa" , "aaa.a.aa.aa"]
In the brackets above I have provided the IPs of all 3 of my instances.
Note that I have made node.data: true (according to this link)
3.3. EC2 Instance number 3 (my data node)
cluster.name: MyCluster
node.name: slave_1
node.master: false
node.data: true
path.data: /home/ubuntu/elasticsearch/data/elasticsearch/nodes/0
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["aa.aa.aa.aa" , "aa.aa.aaa.aa" , "aaa.a.aa.aa"]
In the brackets above I have provided the IPs of all 3 of my instances.
After this configuration I ran the Elasticsearch service on each instance, starting with the data node, then the master node, and finally the client node.
If I check the node status using curl http://localhost:9200, I get JSON stating that the node is running.
But when I check the cluster health using curl -XGET 'http://localhost:9200/_cluster/health?pretty=true', I get the following error on my client instance.
I hope my question is clear and that I am going in the right direction.
Thank you.
Elasticsearch 2.0+ defaults to binding all sockets to localhost. This means, by default, nothing outside of that machine can talk to it.
This is explicitly for security purposes and simple development setups. Locally, it works great, but you need to configure it for your environment when things get more serious. This is also why you can talk to the node via localhost. Basically, you want the settings below whenever you want more than one node, spread across machines, to communicate over the network. This works with ES 2.3+:
network:
  bind_host: [ _local_, _global_ ]
  publish_host: _global_
Then other nodes can talk to the public IP, but you still have localhost to simplify working with the node locally (e.g., you--the human--never have to know the IP when SSHed into a box).
As you are in EC2 with Elasticsearch 2.0+, I recommend that you install the cloud-aws plugin (future readers beware: this plugin is being broken into 3 separate plugins in ES 5.x!).
$ bin/plugin install cloud-aws
With that installed, you get a bit more awareness out of your EC2 instances. With this great power, you can add more detail to your ES configurations:
# Guarantee that the plugin is installed
plugin.mandatory: cloud-aws
# Discovery / AWS EC2 Settings
discovery:
  type: ec2
  ec2:
    availability_zones: [ "us-east-1a", "us-east-1b" ]
    groups: [ "my_security_group1", "my_security_group2" ]
# The keys here need to be replaced with your keys
cloud:
  aws:
    access_key: AKVAIQBF2RECL7FJWGJQ
    secret_key: vExyMThREXeRMm/b/LRzEB8jWwvzQeXgjqMX+6br
    region: us-east-1
  node.auto_attributes: true
# Bind to the network on whatever IP you want to allow connections on.
# You _should_ only want to allow connections from within the network
# so you only need to bind to the private IP
network.host: _ec2:privateIp_
# You can bind to all hosts that are possible to communicate with the
# node but advertise it to other nodes via the private IP (less
# relevant because of the type of discovery used, but not a bad idea).
#network:
#  bind_host: [ _local_, _ec2:privateIp_, _ec2:publicIp_, _ec2:publicDns_ ]
#  publish_host: _ec2:privateIp_
This will allow them to talk by binding the IP address to what is expected. If you want to be able to SSH into those machines and communicate with ES over localhost (you probably do for debugging), then you will want the version commented out with _local_ as a bind_host in that list.