Connect to AWS Lightsail instance on port 9200 from AWS Lambda - amazon-web-services

I'm trying to set up Elasticsearch on my AWS Lightsail instance and have it running on port 9200, but I'm not able to connect from AWS Lambda to the instance on that port. I've updated my Lightsail instance-level networking settings to allow port 9200 to accept traffic; however, I'm neither able to connect to port 9200 through the static IP, nor am I able to get my AWS Lambda function to talk to my Lightsail host on port 9200.
I understand that AWS has a separate Elasticsearch offering that I could use; however, I'm doing a test setup and need to run vanilla ES on the same Lightsail host. ES is up and running and I can connect to it through an SSH tunnel, but it doesn't work when I try to connect using the static IP or from another AWS service.
Any pointers would be appreciated.
Thanks.

Update elasticsearch.yml:
network.host: _ec2:privateIpv4_
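Note that the _ec2:privateIpv4_ value is resolved by the EC2 discovery plugin, so that plugin has to be installed first. A minimal sketch, assuming an Elasticsearch 5.x+ install layout (on 2.x the equivalent plugin was cloud-aws):
# Install the EC2 discovery plugin, then restart the node
$ sudo bin/elasticsearch-plugin install discovery-ec2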
We are running multiple versions of Elasticsearch clusters on AWS Cloud:
elasticsearch-2.4 cluster elasticsearch.yml (on a classic EC2 instance, i3.2xlarge):
cluster.name: ES-CLUSTER
node.name: ES-NODE-01
node.max_local_storage_nodes: 1
node.rack_id: rack_us_east_1d
index.number_of_shards: 8
index.number_of_replicas: 1
gateway.recover_after_nodes: 1
gateway.recover_after_time: 2m
gateway.expected_nodes: 1
discovery.zen.minimum_master_nodes: 1
discovery.zen.ping.multicast.enabled: false
cloud.aws.access_key: ***
cloud.aws.secret_key: ***
cloud.aws.region: us-east-1
discovery.type: ec2
discovery.ec2.groups: es-cluster-sg
network.host: _ec2:privateIpv4_
elasticsearch-6.3 cluster elasticsearch.yml (inside a VPC, i3.2xlarge instance):
cluster.name: ES-CLUSTER
node.name: ES-NODE-01
gateway.recover_after_nodes: 1
gateway.recover_after_time: 2m
gateway.expected_nodes: 1
discovery.zen.minimum_master_nodes: 1
discovery.zen.hosts_provider: ec2
discovery.ec2.groups: vpc-es-eluster-sg
network.host: _ec2:privateIpv4_
path:
  logs: /es-data/log
  data: /es-data/data
discovery.ec2.host_type: private_ip
discovery.ec2.tag.es_cluster: staging-elasticsearch
discovery.ec2.endpoint: ec2.us-east-1.amazonaws.com
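Once the nodes are up, you can confirm that EC2 discovery actually formed the cluster from any instance inside the security group (a quick sanity check; the node list will vary):
$ curl http://<es-node-private-ip>:9200/_cat/nodes?v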
I recommend not opening ports 9200 & 9300 to the outside. Allow only your EC2 instances to communicate with your Elasticsearch.
Now, how do you access Elasticsearch from your local box?
Use tunnelling (port forwarding) from your system using this command:
$ ssh -i es.pem ec2-user@es-node-public-ip -L 9200:es-node-private-ip:9200 -N
It is as if you were running Elasticsearch on your local system.
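With the tunnel open, a quick check from another terminal should return the node's JSON banner (cluster name and version will vary):
$ curl http://localhost:9200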

I might be late to the party, but anyone still struggling with this sort of problem should know that new versions of Elasticsearch bind to localhost by default, as mentioned in this answer. To override this behavior you should set:
network.bind_host: 0
to allow the node to be accessed from outside localhost.
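For a single-node test setup like the one in the question, a minimal elasticsearch.yml sketch (assuming ES 7.x, where discovery.type: single-node skips the production bootstrap checks that kick in once you bind to a non-loopback address):
network.host: 0.0.0.0
http.port: 9200
discovery.type: single-node
Even with this, keep the instance firewall restricted to callers you trust rather than the whole internet.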

Related

Why does AWS ECS allow inbound traffic to ALL ports by default?

I am deploying the following relatively simple docker-compose.yml file on AWS ECS via the Docker CLI.
It uses the Tomcat server image, which can also be replaced by any other container that does not exit on startup.
services:
  tomcat:
    image: tomcat:9.0
    command: catalina.sh run
    ports:
      - target: 8080
        published: 8080
        x-aws-protocol: http
Commands used
docker context use mycontextforecs
docker compose up
The cluster, services, task, target, security groups and application load balancer are automatically created as expected.
But the security group created by AWS ECS allows inbound traffic on ALL ports by default instead of only the exposed 8080.
Following is a screenshot of the security group, which also has the comment "tomcat:8080/ on default network", but the port range is "All" instead of 8080.
I've read the following and some other stackoverflow links but could not get an answer.
https://docs.docker.com/cloud/ecs-compose-features/
https://docs.docker.com/cloud/ecs-architecture/
https://docs.docker.com/cloud/ecs-integration/
I understand that the default "Fargate" launch type gets a public IP assigned.
But why does ECS allow traffic on all ports?
If I add another service to the docker-compose file, the default security group gets shared between both of them.
As a result, anyone can telnet into the port exposed by either service due to this security group rule.
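One documented way to regain control here is to stop letting the integration generate the security group: the Docker ECS integration maps Compose networks to security groups, so declaring the default network as external lets you attach a group you created yourself with only port 8080 open. A hedged sketch (the sg-… ID is a placeholder for your own pre-created group):
services:
  tomcat:
    image: tomcat:9.0
    command: catalina.sh run
    ports:
      - target: 8080
        published: 8080
networks:
  default:
    external: true
    name: sg-0123456789abcdef0  # pre-created security group with a single 8080 ingress rule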

How do I properly configure Cassandra in EC2 to connect to it?

I have an AWS EC2 instance with Centos 8.
Inside this instance, I have successfully installed the Cassandra (3.11.10) database.
Inside this database, I have successfully created keyspace via this CQL query:
create keyspace if not exists dev_keyspace with replication={'class': 'SimpleStrategy', 'replication_factor' : 2};
Then I edited the configuration file (/etc/cassandra/default.conf/cassandra.yaml):
cluster_name: "DevCluster"
seeds: <ec2_private_ip_address>
listen_address: <ec2_private_ip_address>
start_rpc: true
rpc_address: 0.0.0.0
broadcast_rpc_address: <ec2_private_ip_address>
endpoint_snitch: Ec2Snitch
After that, I restarted the database and checked it with nodetool status:
Datacenter: eu-central
======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN <ec2_private_ip_address> 75.71 KiB 256 100.0% XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX 1a
When I try to connect to the Cassandra database with the following credentials, it raises an error:
host: <ec2_public_ip_address>
port: 9042
keyspace: dev_keyspace
username: cassandra (default)
password: cassandra (default)
ERROR:
All host(s) tried for query failed (tried:
/<ec2_private_ip_address>:9042
(com.datastax.driver.core.exceptions.TransportException:
[/<ec2_private_ip_address>:9042] Cannot connect))
What did I forget to configure? Let me know if you need more information.
You won't be able to access your cluster remotely because you've configured Cassandra to only listen for clients on the private IP with this setting:
broadcast_rpc_address: <ec2_private_ip_address>
For the node to accept requests from external clients, you need to set the following in cassandra.yaml:
listen_address: private_ip
rpc_address: public_ip
Note that you don't need to set the broadcast RPC address. You will need to restart Cassandra for the changes to take effect.
You will also need to define a security group with inbound rules on the AWS Management Console to allow ingress to your EC2 instances on port 9042. Cheers!
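The console works for that, but the equivalent AWS CLI call is a one-liner (a sketch; the group ID and source range are placeholders to substitute):
$ aws ec2 authorize-security-group-ingress \
    --group-id <your-sg-id> \
    --protocol tcp --port 9042 \
    --cidr <your-client-ip>/32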

elasticsearch kibana setup in separate aws ec2 servers

I have installed Elasticsearch on one instance and Kibana on another instance.
Both services are running, and I can connect to Elasticsearch using curl with its instance public IP and port 9200.
Version: 7.9.2 for both.
Assume public IPs:
elasticsearch - x.x.x.x
kibana - y.y.y.y
Issue:
Can't connect to the Kibana instance with curl and its public IP on port 5601.
Error: Failed to connect to y.y.y.y port 5601: connection refused
Query:
What is the correct config for elasticsearch.yml and kibana.yml?
kibana.yml:
port: 5601
server.host: "y.y.y.y"
elasticsearch.hosts: ["http://x.x.x.x:9200"]
elasticsearch.yml:
network.host: 0.0.0.0
http.port: 9200
It is extremely likely you have not configured the correct security group rules on the Kibana instance to permit you to access the service. You need an ingress rule permitting TCP to port 5601 from whatever your ingress range is.
Likewise, it is extremely likely you have not granted access to Elasticsearch (x.x.x.x:9200) from y.y.y.y.
Check your security group rules.
Also, please ensure your Elasticsearch public IP does not permit access from 0.0.0.0/0 - publicly accessible Elasticsearch clusters are a prime target for naughty people.
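A CLI sketch of the two ingress rules described above (the group IDs are placeholders; tighten the source ranges to what you actually need):
# Allow your workstation to reach Kibana on 5601
$ aws ec2 authorize-security-group-ingress \
    --group-id <kibana-sg-id> --protocol tcp --port 5601 --cidr <your-client-ip>/32
# Allow the Kibana host to reach Elasticsearch on 9200
$ aws ec2 authorize-security-group-ingress \
    --group-id <es-sg-id> --protocol tcp --port 9200 --cidr y.y.y.y/32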

Accessing GCP Memorystore from local machines

What's the best way to access Memorystore from local machines during development? Is there something like Cloud SQL Proxy that I can use to set up a tunnel?
You can spin up a Compute Engine instance and use port forwarding to connect to your Redis machine.
For example if your Redis machine has internal IP address 10.0.0.3 you'd do:
gcloud compute instances create redis-forwarder --machine-type=f1-micro
gcloud compute ssh redis-forwarder -- -N -L 6379:10.0.0.3:6379
As long as you keep the SSH tunnel open, you can connect to localhost:6379.
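With the tunnel up, a quick sanity check from your local machine (assuming redis-cli is installed) should answer PONG:
redis-cli -h localhost -p 6379 ping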
Update: this is now officially documented:
https://cloud.google.com/memorystore/docs/redis/connecting-redis-instance#connecting_from_a_local_machine_with_port_forwarding
I created a VM on Google Cloud:
gcloud compute instances create redis-forwarder --machine-type=f1-micro
then SSHed into it and installed haproxy:
sudo su
apt-get install haproxy
then updated the config file /etc/haproxy/haproxy.cfg:
....existing file contents
frontend redis_frontend
    bind *:6379
    mode tcp
    option tcplog
    timeout client 1m
    default_backend redis_backend

backend redis_backend
    mode tcp
    option tcplog
    option log-health-checks
    option redispatch
    log global
    balance roundrobin
    timeout connect 10s
    timeout server 1m
    server redis_server [MEMORYSTORE IP]:6379 check
restart haproxy
/etc/init.d/haproxy restart
I was then able to connect to memory store from my local machine for development
You can spin up a Compute Engine instance and set up haproxy using the haproxy docker image; haproxy will then forward your TCP requests to Memorystore.
For example, I wanted to access a Memorystore instance with IP 10.0.0.12, so I added the following haproxy config:
frontend redis_frontend
    bind *:6379
    mode tcp
    option tcplog
    timeout client 1m
    default_backend redis_backend

backend redis_backend
    mode tcp
    option tcplog
    option log-health-checks
    option redispatch
    log global
    balance roundrobin
    timeout connect 10s
    timeout server 1m
    server redis_server 10.0.0.12:6379 check
So now you can access Memorystore from your local machine using the following command:
redis-cli -h <your-haproxy-public-ipaddress> -p 6379
Note: replace <your-haproxy-public-ipaddress> with your actual haproxy public IP address.
Hope that helps you solve your problem.
This post builds on earlier ones and should help you bypass firewall issues.
Create a virtual machine in the same region (and zone, to be safe) as your Memorystore instance. On this machine:
Add a network tag with which we will create a firewall rule to allow traffic on port 6379
Add an external IP with which you will access this VM
SSH into this machine and install haproxy:
sudo su
apt-get install haproxy
Add the following below the existing config in the /etc/haproxy/haproxy.cfg file:
frontend redis_frontend
    bind *:6379
    mode tcp
    option tcplog
    timeout client 1m
    default_backend redis_backend

backend redis_backend
    mode tcp
    option tcplog
    option log-health-checks
    option redispatch
    log global
    balance roundrobin
    timeout connect 10s
    timeout server 1m
    server redis_server [MEMORYSTORE IP]:6379 check
restart haproxy
/etc/init.d/haproxy restart
Now create a firewall rule that allows traffic on port 6379 on the VM (a sketch follows this list). Ensure:
It has the same target tag as the network tag we created on the VM.
It allows traffic on port 6379 for the TCP protocol.
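A minimal sketch of such a rule, assuming the network tag on the VM is redis-forwarder and you only want your own IP to reach it:
gcloud compute firewall-rules create allow-redis-6379 \
    --allow tcp:6379 \
    --target-tags redis-forwarder \
    --source-ranges <your-client-ip>/32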
Now you should be able to connect remotely like so:
redis-cli -h [VM IP] -p 6379
Memorystore does not allow connecting from local machines, and the other ways, like going through Compute Engine or App Engine, are expensive, especially if your project is small or still in the development phase. I suggest you create a Cloud Function to query Memorystore; it's a serverless service, which means a lower fee to execute. I wrote a small tool for this, and the result is similar to running it on a local machine. You can check whether it helps you.
Like @Christiaan answered above, it almost worked for me, but I needed to check a few other things to make it work well.
Firstly, in my case, my Redis instance is running in a specific network other than the default network, so I had to create the jump box inside the same network (let's call it my-network).
Secondly, I needed to apply a firewall rule to open port 22 in that network.
Putting all the commands I needed together, it looks like this:
gcloud compute firewall-rules create default-allow-ssh --project=my-project --network my-network --allow tcp:22 --source-ranges 0.0.0.0/0
gcloud compute instances create jump-box --machine-type=f1-micro --project my-project --zone europe-west1-b --network my-network
gcloud compute ssh jump-box --project my-project --zone europe-west1-b -- -N -L 6379:10.177.174.179:6379
Then I have access to Redis locally on port 6379.

AWS instance can't be accessed from browser

I set up a Kubernetes cluster and then deployed it on AWS. It created one load balancer, one master and 4 minion nodes.
I can use the kubectl proxy command to check whether it works locally, and it turned out that it does: I am able to connect to a particular pod.
The problem is that I can't access it externally. I have an ELB hostname which looks like this:
ab0154f2bcc5c11e6aff30a71ada8ce9-447509613.eu-west-1.elb.amazonaws.com
I also modified the security groups, so each node has the following security group rules:
Ports  Protocol  Source
80     tcp       0.0.0.0/0
8080   tcp       0.0.0.0/0
All    All       sg-4dbbce2b, sg-4ebbce28, sg-e6a4d180
22     tcp       0.0.0.0/0
What might be wrong with this configuration?
Does the service which created the ELB have endpoints? Do a kubectl describe svc <serviceName> and check the Endpoints section. If not, then you need to match up the selectors better. If you do see them, then I would try hitting the NodePort from one of the machines to verify it works; a simple curl should work. If that works, then I would look deeper into the AWS security groups.
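A quick sketch of those checks (service name and node address are placeholders):
# Does the service have endpoints backing it?
kubectl describe svc <serviceName>
kubectl get endpoints <serviceName>
# From one of the nodes, hit the NodePort directly
curl http://<node-private-ip>:<nodePort>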