How do I properly configure Cassandra in EC2 to connect to it?

I have an AWS EC2 instance running CentOS 8.
Inside this instance, I have successfully installed the Cassandra (3.11.10) database.
Inside this database, I have successfully created keyspace via this CQL query:
create keyspace if not exists dev_keyspace with replication={'class': 'SimpleStrategy', 'replication_factor' : 2};
Then I edited the configuration file (/etc/cassandra/default.conf/cassandra.yaml):
cluster_name: "DevCluster"
seeds: <ec2_private_ip_address>
listen_address: <ec2_private_ip_address>
start_rpc: true
rpc_address: 0.0.0.0
broadcast_rpc_address: <ec2_private_ip_address>
endpoint_snitch: Ec2Snitch
After that, I restarted the database and checked it with nodetool status:
Datacenter: eu-central
======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN <ec2_private_ip_address> 75.71 KiB 256 100.0% XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX 1a
When I try to connect to the Cassandra database with the following credentials, it raises an error:
host: <ec2_public_ip_address>
port: 9042
keyspace: dev_keyspace
username: cassandra (default)
password: cassandra (default)
ERROR:
All host(s) tried for query failed (tried:
/<ec2_private_ip_address>:9042
(com.datastax.driver.core.exceptions.TransportException:
[/<ec2_private_ip_address>:9042] Cannot connect))
What did I forget to configure? Let me know if you need more information.

You won't be able to access your cluster remotely because you've configured Cassandra to only listen for clients on the private IP with this setting:
broadcast_rpc_address: <ec2_private_ip_address>
For the node to accept requests from external clients, you need to set the following in cassandra.yaml:
listen_address: private_ip
rpc_address: public_ip
Note that you don't need to set the broadcast RPC address. You will need to restart Cassandra for the changes to take effect.
You will also need to define a security group with inbound rules on the AWS Management Console to allow ingress to your EC2 instances on port 9042. Cheers!
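If it helps, here is a rough sketch of that security group rule using the AWS CLI, plus a quick connection test with cqlsh. The group ID and client CIDR are placeholders you would replace with your own values:

# allow client connections to the CQL native transport port (9042)
aws ec2 authorize-security-group-ingress \
    --group-id sg-xxxxxxxx \
    --protocol tcp --port 9042 \
    --cidr <your_client_ip>/32
# then verify from the client machine using the credentials above
cqlsh <ec2_public_ip_address> 9042 -u cassandra -p cassandra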

Related

Why can't I connect to my AWS Redshift Serverless cluster from my laptop?

I've set up a Redshift Serverless cluster w/ a workgroup and a namespace.
I turned on the "Publicly Accessible" option
I've created an inbound rule for the 5439 port w/ Source set to 0.0.0.0/0
I've created an IAM credential for access to Redshift
I ran aws configure and added the keys
But when I run
aws redshift-data list-databases --cluster-identifier default --database dev --db-user admin --endpoint http://default.530158470050.us-east-1.redshift-serverless.amazonaws.com:5439/dev
I get this error:
Connection was closed before we received a valid response from endpoint URL: "http://default.XXXXXX.us-east-1.redshift-serverless.amazonaws.com:5439/dev".
In Node, when trying to use the AWS.RedshiftDataClient to do the same thing, I get this:
{
  code: 'TimeoutError',
  path: null,
  host: 'default.XXXXXXX.us-east-1.redshift-serverless.amazonaws.com',
  port: 5439,
  localAddress: undefined,
  time: 2022-07-09T02:20:47.397Z,
  region: 'us-east-1',
  hostname: 'default.XXXXXX.us-east-1.redshift-serverless.amazonaws.com',
  retryable: true
}
What am I missing?
What Security Group and VPC have you configured for your Redshift Serverless Cluster?
Make sure the security group allows traffic from "My IP" so that you can reach the VPC.
If that is not enough, check that the cluster is deployed in public subnets (an Internet Gateway should be attached to the VPC, the route tables should route traffic to it, and the "Publicly Accessible" option must be enabled).
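As a rough way to verify the networking once those pieces are in place (a sketch only; the endpoint, database, and user are taken from the question, and nc/psql must be available on your machine):

# TCP-level check: should succeed if the security group and public-subnet routing are correct
nc -vz default.XXXXXX.us-east-1.redshift-serverless.amazonaws.com 5439
# direct SQL connection over the JDBC/ODBC port
psql "host=default.XXXXXX.us-east-1.redshift-serverless.amazonaws.com port=5439 dbname=dev user=admin sslmode=require"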

Elasticsearch and Kibana setup on separate AWS EC2 servers

I have installed Elasticsearch on one instance and Kibana on another instance.
Both services are running, and I can connect to Elasticsearch using curl via its instance public IP on port 9200.
Version: 7.9.2 (both)
Assume these public IPs:
elasticsearch - x.x.x.x
kibana - y.y.y.y
Issue:
Can't connect to the Kibana instance using curl via its public IP on port 5601.
Error: Failed to connect to y.y.y.y port 5601: connection refused
Question:
What is the correct config for elasticsearch.yml and kibana.yml?
kibana.yml:
port: 5601
server.host: "y.y.y.y"
elasticsearch.hosts: ["http://x.x.x.x:9200"]
elasticsearch.yml:
network.host: 0.0.0.0
http.port: 9200
It is extremely likely you have not configured the correct security group rules on the kibana instance to permit you to access the service. You need an ingress rule permitting tcp to port 5601 from whatever your ingress range is.
Likewise, it is extremely likely you have not granted access to elasticsearch (x.x.x.x:9200) from y.y.y.y
Check your security group rules.
Also, please ensure your Elasticsearch public IP does not permit access from 0.0.0.0/0 - publicly accessible Elasticsearch clusters are a prime target for naughty people.
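For illustration, the two ingress rules could look roughly like this with the AWS CLI (the security group IDs are placeholders; y.y.y.y is the Kibana instance's IP from the question):

# Kibana instance: allow only your own IP to reach the Kibana UI on 5601
aws ec2 authorize-security-group-ingress --group-id sg-kibana --protocol tcp --port 5601 --cidr <your_ip>/32
# Elasticsearch instance: allow only the Kibana host to reach 9200
aws ec2 authorize-security-group-ingress --group-id sg-elasticsearch --protocol tcp --port 9200 --cidr y.y.y.y/32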

version 3.11 | AWS installation fails due to HTTPS / x509 issues on waiting for control plane

Trying to do an OpenShift 3.11 install with a 3-master setup, 2 infra and 2 compute nodes. I didn't use an LB node since I figured the AWS ELB would take care of that for me.
My current issue is that the installation fails on the "wait for control plane" task.
failed: [ip-10-0-4-29.us-east-2.compute.internal] (item=etcd) => {"attempts": 60, "changed": false, "item": "etcd", "msg": {"cmd": "/usr/bin/oc get pod master-etcd-ip-10-0-4-29.us-east-2.compute.internal -o json -n kube-system"
Different errors are shown below.
I've done the following.
Because this is only a demo system, I wanted to go the cheap route and create self-signed certs, so I ran the following:
openssl req -new -key openshift.key -out openshift.csr
openssl x509 -req -days 1095 -in openshift.csr -signkey openshift.key -out openshift.crt
Then within my hosts file I added the following:
openshift_master_named_certificates=[{"certfile": "/home/ec2-user/certs/openshift.crt", "keyfile": "/home/ec2-user/certs/openshift.key"}]
Next I created an ELB accepting HTTP traffic on port 8443 and directing it as HTTP to port 8443 on any of the masters.
When I do this, I get the following failure when re-running the command from the failing task:
[root@ip-10-0-4-29 ~]# /usr/bin/oc get pod master-etcd-ip-10-0-4-29.us-east-2.compute.internal -o json -n kube-system
Unable to connect to the server: http: server gave HTTP response to HTTPS client
If I change the ELB to take HTTP traffic and direct it to HTTPS 8443, I get the following error:
[root@ip-10-0-4-29 ~]# /usr/bin/oc get pod master-etcd-ip-10-0-4-29.us-east-2.compute.internal -o json -n kube-system
The connection to the server os.domain-name.net:8443 was refused - did you specify the right host or port?
If I change the ELB to accept HTTPS traffic, I need to follow the guide to create SSL certs to use in AWS, but even then, accepting HTTPS traffic on 8443 and sending it either via HTTP or HTTPS to 8443 on the master node results in this error:
[root@ip-10-0-4-29 ~]# /usr/bin/oc get pod master-etcd-ip-10-0-4-29.us-east-2.compute.internal -o json -n kube-system
Unable to connect to the server: x509: certificate signed by unknown authority
I've also copied in my hosts file, just in case I have something off with it.
# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=origin
openshift_cloudprovider_aws_access_key="{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
openshift_cloudprovider_aws_secret_key="{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"
openshift_clusterid=openshift
openshift_cloudprovider_kind=aws
openshift_hosted_manage_registry=true
openshift_hosted_registry_storage_kind=object
openshift_hosted_registry_storage_provider=s3
openshift_hosted_registry_storage_s3_accesskey="{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
openshift_hosted_registry_storage_s3_secretkey="{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"
openshift_hosted_registry_storage_s3_bucket=os-test-os-bucket
openshift_hosted_registry_storage_s3_region=us-west-2
openshift_hosted_registry_storage_s3_chunksize=26214400
openshift_hosted_registry_storage_s3_rootdirectory=/registry
openshift_hosted_registry_pullthrough=true
openshift_hosted_registry_acceptschema2=true
openshift_hosted_registry_enforcequota=true
openshift_hosted_registry_replicas=3
#openshift_enable_excluders=false
openshift_disable_check=memory_availability
openshift_additional_repos=[{'id': 'centos-okd-ci', 'name': 'centos-okd-ci', 'baseurl' :'https://rpms.svc.ci.openshift.org/openshift-origin-v3.11', 'gpgcheck' :'0', 'enabled' :'1'}]
openshift_node_groups=[{'name': 'node-config-master', 'labels': ['node-role.kubernetes.io/master=true']}, {'name': 'node-config-infra', 'labels': ['node-role.kubernetes.io/infra=true']}, {'name': 'node-config-compute', 'labels': ['node-role.kubernetes.io/compute=true']}]
openshift_router_selector='node-role.kubernetes.io/infra=true'
openshift_registry_selector='node-role.kubernetes.io/infra=true'
openshift_metrics_install_metrics=true
openshift_master_named_certificates=[{"certfile": "/home/ec2-user/certs/openshift.crt", "keyfile": "/home/ec2-user/certs/openshift.key"}]
# uncomment the following to enable htpasswd authentication; defaults to AllowAllPasswordIdentityProvider
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
# Native high availability cluster method with optional load balancer.
# If no lb group is defined installer assumes that a load balancer has
# been preconfigured. For installation the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=os.domain-name.net
openshift_master_cluster_public_hostname=os.domain-name.net
# host group for masters
[masters]
ip-10-0-4-29.us-east-2.compute.internal
ip-10-0-5-54.us-east-2.compute.internal
ip-10-0-6-8.us-east-2.compute.internal
[etcd]
ip-10-0-4-29.us-east-2.compute.internal
ip-10-0-5-54.us-east-2.compute.internal
ip-10-0-6-8.us-east-2.compute.internal
# host group for nodes, includes region info
[nodes]
#master
ip-10-0-4-29.us-east-2.compute.internal openshift_node_group_name='node-config-master'
ip-10-0-5-54.us-east-2.compute.internal openshift_node_group_name='node-config-master'
ip-10-0-6-8.us-east-2.compute.internal openshift_node_group_name='node-config-master'
#infra
ip-10-0-4-28.us-east-2.compute.internal openshift_node_group_name='node-config-infra'
ip-10-0-5-241.us-east-2.compute.internal openshift_node_group_name='node-config-infra'
#node
ip-10-0-4-162.us-east-2.compute.internal openshift_node_group_name='node-config-compute'
ip-10-0-5-146.us-east-2.compute.internal openshift_node_group_name='node-config-compute'
Please, if anyone can help me get past this hurdle so I can finally try to demo a CI/CD pipeline using OpenShift, I'd be truly grateful.
I know this is an old question, but I was running into the same issue with my ELB configured as HTTPS. I changed the listener to TCP and used port 443 for both the Load Balancer Port and the Instance Port. For the Health Check, make sure you are using Ping Protocol HTTPS, Ping Port 443, and a Ping Path of "/". Those configuration changes allowed the installation to proceed.
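For reference, a rough classic-ELB equivalent of that change via the AWS CLI might look like the following (the load balancer name, subnet, and security group IDs are placeholders):

# TCP pass-through listener on 443 so TLS terminates on the masters, not on the ELB
aws elb create-load-balancer --load-balancer-name os-master-elb \
    --listeners "Protocol=TCP,LoadBalancerPort=443,InstanceProtocol=TCP,InstancePort=443" \
    --subnets subnet-xxxxxxxx --security-groups sg-xxxxxxxx
# health check: Ping Protocol HTTPS, Ping Port 443, Ping Path "/"
aws elb configure-health-check --load-balancer-name os-master-elb \
    --health-check Target=HTTPS:443/,Interval=30,Timeout=5,HealthyThreshold=2,UnhealthyThreshold=2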

Connect to AWS Lightsail instance on port 9200 from AWS Lambda

I'm trying to set up Elasticsearch on my AWS Lightsail instance and have it running on port 9200; however, I'm not able to connect from AWS Lambda to the instance on that port. I've updated my Lightsail instance-level networking settings to allow traffic on port 9200, but I'm neither able to connect to port 9200 through the static IP, nor able to get my Lambda function to talk to my Lightsail host on port 9200.
I understand that AWS has a separate Elasticsearch offering that I could use; however, I'm doing a test setup and need to run vanilla ES on the Lightsail host. ES is up and running and I can connect to it through an SSH tunnel, but it doesn't work when I try to connect using the static IP or from another AWS service.
Any pointers would be appreciated.
Thanks.
Update elasticsearch.yml
network.host: _ec2:privateIpv4_
We are running multiple versions of Elasticsearch clusters on AWS Cloud:
elasticsearch-2.4 cluster elasticsearch.yml (on classic EC2 instances, i3.2xlarge):
cluster.name: ES-CLUSTER
node.name: ES-NODE-01
node.max_local_storage_nodes: 1
node.rack_id: rack_us_east_1d
index.number_of_shards: 8
index.number_of_replicas: 1
gateway.recover_after_nodes: 1
gateway.recover_after_time: 2m
gateway.expected_nodes: 1
discovery.zen.minimum_master_nodes: 1
discovery.zen.ping.multicast.enabled: false
cloud.aws.access_key: ***
cloud.aws.secret_key: ***
cloud.aws.region: us-east-1
discovery.type: ec2
discovery.ec2.groups: es-cluster-sg
network.host: _ec2:privateIpv4_
elasticsearch-6.3 cluster elasticsearch.yml (inside a VPC, i3.2xlarge instances):
cluster.name: ES-CLUSTER
node.name: ES-NODE-01
gateway.recover_after_nodes: 1
gateway.recover_after_time: 2m
gateway.expected_nodes: 1
discovery.zen.minimum_master_nodes: 1
discovery.zen.hosts_provider: ec2
discovery.ec2.groups: vpc-es-eluster-sg
network.host: _ec2:privateIpv4_
path:
  logs: /es-data/log
  data: /es-data/data
discovery.ec2.host_type: private_ip
discovery.ec2.tag.es_cluster: staging-elasticsearch
discovery.ec2.endpoint: ec2.us-east-1.amazonaws.com
I recommend not opening ports 9200 & 9300 to the outside. Allow only your EC2 instances to communicate with your Elasticsearch.
Now, how do you access Elasticsearch from your local box?
Use tunnelling (port forwarding) from your system with this command:
$ ssh -i es.pem ec2-user@es-node-public-ip -L 9200:es-node-private-ip:9200 -N
It is as if you are running Elasticsearch on your local system.
I might be late to the party, but anyone still struggling with this sort of problem should know that new versions of Elasticsearch bind to localhost by default, as mentioned in this answer. To override this behavior you should set:
network.bind_host: 0
to allow the node to be accessed from outside localhost.
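A minimal sketch of that change on the Lightsail host, assuming the package layout (/etc/elasticsearch/elasticsearch.yml) and a systemd service named elasticsearch; the discovery.type: single-node line is an extra assumption that only applies to a single test node on a recent (7.x) version, so drop it otherwise:

sudo tee -a /etc/elasticsearch/elasticsearch.yml <<'EOF'
network.host: 0.0.0.0          # or network.bind_host: 0, as above
discovery.type: single-node    # assumption: single test node, skips multi-node bootstrap checks (7.x)
EOF
sudo systemctl restart elasticsearch
# once the Lightsail firewall allows 9200, test from outside the host:
curl http://<lightsail_static_ip>:9200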

Elasticsearch Master not discovered exception - Version 2.3.0

This is the first time I am working with Elasticsearch.
The following is my environment/configuration.
I have 3 EC2 Ubuntu 14.04 instances.
I have downloaded and extracted elasticsearch-2.3.0.tar.gz.
I have changed the elasticsearch.yml file under elasticsearch/config on each of the instances.
I have made the following changes in each elasticsearch.yml file.
3.1. EC2 instance number 1 (my client node):
cluster.name: MyCluster
node.name: Client
node.master: false
node.data: false
path.data: /home/ubuntu/elasticsearch/data/elasticsearch/nodes/0
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["aa.aa.aa.aa" , "aa.aa.aaa.aa" , "aaa.a.aa.aa"]
In the brackets above I have provided the IPs of all 3 of my instances.
3.2. EC2 instance number 2 (my master node):
cluster.name: MyCluster
node.name: Master
node.master: true
node.data: true
path.data: /home/ubuntu/elasticsearch/data/elasticsearch/nodes/0
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["aa.aa.aa.aa" , "aa.aa.aaa.aa" , "aaa.a.aa.aa"]
In the brackets above I have provided the IPs of all 3 of my instances.
Note that I have made node.data: true (according to this link)
3.3. EC2 instance number 3 (my data node):
cluster.name: MyCluster
node.name: slave_1
node.master: false
node.data: true
path.data: /home/ubuntu/elasticsearch/data/elasticsearch/nodes/0
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["aa.aa.aa.aa" , "aa.aa.aaa.aa" , "aaa.a.aa.aa"]
In the brackets above I have provided the IPs of all 3 of my instances.
After this configuration I ran the Elasticsearch service on each instance, starting with the data node, then the master node, and finally the client node.
If I check the node status using curl http://localhost:9200, I get JSON which states that the node is running.
But when I check the cluster health using curl -XGET 'http://localhost:9200/_cluster/health?pretty=true', I get the following error on my client instance.
I hope my question is clear and that I am going in the right direction.
Thank you.
Elasticsearch 2.0+ defaults to binding all sockets to localhost. This means, by default, nothing outside of that machine can talk to it.
This is explicitly for security purposes and simple development setups. Locally, it works great, but you need to configure it for your environment when it gets more serious. This is also why you can talk to the node via localhost. Basically, you want this when you want more than one node across other machines using the network settings. This works with ES 2.3+:
network:
  bind_host: [ _local_, _global_ ]
  publish_host: _global_
Then other nodes can talk to the public IP, but you still have localhost to simplify working with the node locally (e.g., you--the human--never have to know the IP when SSHed into a box).
As you are in EC2 with Elasticsearch 2.0+, I recommend that you install the cloud-aws plugin (future readers beware: this plugin is being broken into 3 separate plugins in ES 5.x!).
$ bin/plugin install cloud-aws
With that installed, you get a bit more awareness out of your EC2 instances. With this great power, you can add more detail to your ES configurations:
# Guarantee that the plugin is installed
plugin.mandatory: cloud-aws

# Discovery / AWS EC2 Settings
discovery:
  type: ec2
  ec2:
    availability_zones: [ "us-east-1a", "us-east-1b" ]
    groups: [ "my_security_group1", "my_security_group2" ]

# The keys here need to be replaced with your keys
cloud:
  aws:
    access_key: AKVAIQBF2RECL7FJWGJQ
    secret_key: vExyMThREXeRMm/b/LRzEB8jWwvzQeXgjqMX+6br
    region: us-east-1
  node.auto_attributes: true

# Bind to the network on whatever IP you want to allow connections on.
# You _should_ only want to allow connections from within the network
# so you only need to bind to the private IP
network.host: _ec2:privateIp_

# You can bind to all hosts that are possible to communicate with the
# node but advertise it to other nodes via the private IP (less
# relevant because of the type of discovery used, but not a bad idea).
#network:
#  bind_host: [ _local_, _ec2:privateIp_, _ec2:publicIp_, _ec2:publicDns_ ]
#  publish_host: _ec2:privateIp_
This will allow the nodes to talk by binding the IP address to what is expected. If you want to be able to SSH into those machines and communicate with ES over localhost (you probably do, for debugging), then you will want the commented-out version, with _local_ added as a bind_host in that list.
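Once the nodes are bound to their private IPs, a quick way to confirm they can actually see each other is to query one node from another over the network (a sketch; substitute one of your instances' private IPs):

# from the client node: basic reachability on the HTTP port
curl http://<master_private_ip>:9200
# cluster health should report number_of_nodes: 3 once discovery is working
curl 'http://<master_private_ip>:9200/_cluster/health?pretty=true'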