Why can't I connect to my AWS Redshift Serverless cluster from my laptop? - amazon-web-services

I've set up a Redshift Serverless cluster w/ a workgroup and a namespace.
I turned on the "Publicly Accessible" option
I've created an inbound rule for the 5439 port w/ Source set to 0.0.0.0/0
I've created an IAM credential for access to Redshift
I ran aws configure and added the keys
But when I run
aws redshift-data list-databases --cluster-identifier default --database dev --db-user admin --endpoint http://default.530158470050.us-east-1.redshift-serverless.amazonaws.com:5439/dev
I get this error:
Connection was closed before we received a valid response from endpoint URL: "http://default.XXXXXX.us-east-1.redshift-serverless.amazonaws.com:5439/dev".
In Node, when trying to use the AWS.RedshiftDataClient to do the same thing, I get this:
{
code: 'TimeoutError',
path: null,
host: 'default.XXXXXXX.us-east-1.redshift-serverless.amazonaws.com',
port: 5439,
localAddress: undefined,
time: 2022-07-09T02:20:47.397Z,
region: 'us-east-1',
hostname: 'default.XXXXXX.us-east-1.redshift-serverless.amazonaws.com',
retryable: true
}
What am I missing?

Which Security Group and VPC have you configured for your Redshift Serverless cluster?
Make sure the Security Group allows inbound traffic from "My IP" so that you can reach the VPC.
If that is not enough, check that the cluster is deployed on public subnets: an Internet Gateway should be attached to the VPC, the route tables should eventually route traffic to it, and the "Publicly accessible" option must be enabled.
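If you prefer the CLI, here is a minimal sketch of narrowing that 0.0.0.0/0 rule so port 5439 is only open to your current IP (the security-group ID is a placeholder; substitute the one attached to your workgroup):

```shell
# Look up your current public IP (checkip.amazonaws.com is one common source)
MY_IP=$(curl -s https://checkip.amazonaws.com)

# Allow inbound TCP 5439 from that IP only, instead of 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 5439 \
  --cidr "${MY_IP}/32"
```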

Related

Cloudflare Zero Trust kubectl connection - private cluster

I'm following this article in order to secure the kubectl connection with Cloudflare Zero Trust (using the cloudflared daemon):
https://developers.cloudflare.com/cloudflare-one/tutorials/kubectl/
My cluster is a private EKS cluster in private subnets. Now, how would you typically set this flow up? Would cloudflared sit on the worker nodes? Or should there be a bastion host in front of the cluster (with a NAT gateway)?
Here (in the article) I can see a service attribute. It seems to be pointing to the Kubernetes API. But what is the address inside EKS? Is it what I see as the API server endpoint in my EKS dashboard?
tunnel: 6ff42ae2-765d-4adf-8112-31c55c1551ef
credentials-file: /root/.cloudflared/6ff42ae2-765d-4adf-8112-31c55c1551ef.json
ingress:
  - hostname: azure.widgetcorp.tech
    service: tcp://kubernetes.docker.internal:6443
    originRequest:
      proxyType: socks
  - service: http_status:404
Many thanks for helping!

K8s service type ELB stuck at inprogress

I deployed a K8s Service with type LoadBalancer. The K8s cluster is running on EC2 instances. The Service is stuck in the "pending" state.
Does the Service type LoadBalancer require any stipulation in terms of AWS configuration parameters?
Yes. Typically you need the option --cloud-provider=aws on:
All kubelets
kube-apiserver
kube-controller-manager
Also, you have to make sure that all your K8s instances (masters/nodes) have an AWS instance role that allows them to create/remove ELBs and routes (full access to EC2 will do).
Then you need to make sure all your nodes are tagged:
Key: KubernetesCluster, Value: 'your cluster name'
Key: k8s.io/role/node, Value: 1 (For nodes only)
Key: kubernetes.io/cluster/kubernetes, Value: owned
Make sure your subnet is also tagged:
Key: KubernetesCluster, Value: 'your cluster name'
Also, in your Kubernetes node definition you should have something like this:
ProviderID: aws:///<aws-region>/<instance-id>
Generally, all of the above is not needed if you are using the Kubernetes Cloud Controller Manager, which is in beta as of K8s 1.13.0.
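For reference, the tags above can be applied with the AWS CLI; a sketch with placeholder instance/subnet IDs and cluster name:

```shell
# Tag a node instance for the AWS cloud provider (placeholder ID and cluster name)
aws ec2 create-tags \
  --resources i-0123456789abcdef0 \
  --tags Key=KubernetesCluster,Value=my-cluster \
         Key=k8s.io/role/node,Value=1 \
         Key=kubernetes.io/cluster/kubernetes,Value=owned

# Tag the subnet the ELB should be created in
aws ec2 create-tags \
  --resources subnet-0123456789abcdef0 \
  --tags Key=KubernetesCluster,Value=my-cluster
```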

Connect to AWS Lightsail instance on port 9200 from AWS Lambda

I'm trying to set up Elasticsearch on my AWS Lightsail instance and got it running on port 9200; however, I'm not able to connect from AWS Lambda to the instance on the same port. I've updated my Lightsail instance-level networking settings to allow port 9200 to accept traffic, but I'm neither able to connect to port 9200 through the static IP, nor able to get my AWS Lambda function to talk to my Lightsail host on port 9200.
I understand that AWS has a separate Elasticsearch offering that I can use; however, I'm doing a test setup and need to run vanilla ES on the same Lightsail host. ES is up and running and I can connect to it through an SSH tunnel, but it doesn't work when I try to connect using the static IP or from another AWS service.
Any pointers shall be appreciated.
Thanks.
Update elasticsearch.yml
network.host: _ec2:privateIpv4_
We are running multiple versions of Elasticsearch clusters on AWS Cloud:
elasticsearch-2.4 cluster elasticsearch.yml (on classic EC2 instances, i3.2xlarge):
cluster.name: ES-CLUSTER
node.name: ES-NODE-01
node.max_local_storage_nodes: 1
node.rack_id: rack_us_east_1d
index.number_of_shards: 8
index.number_of_replicas: 1
gateway.recover_after_nodes: 1
gateway.recover_after_time: 2m
gateway.expected_nodes: 1
discovery.zen.minimum_master_nodes: 1
discovery.zen.ping.multicast.enabled: false
cloud.aws.access_key: ***
cloud.aws.secret_key: ***
cloud.aws.region: us-east-1
discovery.type: ec2
discovery.ec2.groups: es-cluster-sg
network.host: _ec2:privateIpv4_
elasticsearch-6.3 cluster elasticsearch.yml (inside a VPC, i3.2xlarge instances):
cluster.name: ES-CLUSTER
node.name: ES-NODE-01
gateway.recover_after_nodes: 1
gateway.recover_after_time: 2m
gateway.expected_nodes: 1
discovery.zen.minimum_master_nodes: 1
discovery.zen.hosts_provider: ec2
discovery.ec2.groups: vpc-es-eluster-sg
network.host: _ec2:privateIpv4_
path:
  logs: /es-data/log
  data: /es-data/data
discovery.ec2.host_type: private_ip
discovery.ec2.tag.es_cluster: staging-elasticsearch
discovery.ec2.endpoint: ec2.us-east-1.amazonaws.com
I recommend not opening ports 9200 and 9300 to the outside. Allow only your EC2 instances to communicate with your Elasticsearch.
Now, how do you access Elasticsearch from your local box?
Use tunnelling (port forwarding) from your system with this command:
$ ssh -i es.pem ec2-user@es-node-public-ip -L 9200:es-node-private-ip:9200 -N
It is then as if you were running Elasticsearch on your local system.
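Once the tunnel is up, you can verify it from another terminal (assuming Elasticsearch's default HTTP port and a default configuration that returns cluster info at the root path):

```shell
# With the SSH tunnel running, this request reaches the remote node via localhost
curl -s http://localhost:9200
```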
I might be late to the party, but anyone still struggling with this sort of problem should know that new versions of Elasticsearch bind to localhost by default, as mentioned in this answer. To override this behavior and allow the node to be accessed from outside localhost, set:
network.bind_host: 0

Docker container deployed via Beanstalk cannot connect to the database on RDS

I'm new to both Docker and AWS. I just created my very first Docker image. The application is a backend microservice with REST controllers persisting data in a MySQL database. I manually created the database in RDS and, after running the container locally, the REST APIs work fine in Postman.
Here is the Dockerfile:
FROM openjdk:8-jre-alpine
MAINTAINER alireza.online
COPY ./target/Practice-1-1.0-SNAPSHOT.jar /myApplication/
COPY ./target/libs/ /myApplication/libs/
EXPOSE 8080
CMD ["java", "-jar", "./myApplication/Practice-1-1.0-SNAPSHOT.jar"]
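For local testing, the image can be built and run from the directory containing the Dockerfile like this (a sketch; the tag matches the image name used in the Dockerrun file below):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t aliam/backend .

# Run it locally, mapping the exposed port to the host
docker run --rm -p 8080:8080 aliam/backend
```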
Then I deployed the docker image via AWS Beanstalk. Here is the Dockerrun.aws.json:
{
"AWSEBDockerrunVersion": "1",
"Image": {
"Name": "aliam/backend",
"Update": "true"
},
"Ports": [
{
"ContainerPort": "8080"
}
],
"Logging": "/var/log/nginx"
}
And everything went well:
But now, I'm getting "502 Bad Gateway" in postman when trying to run "backend.us-east-2.elasticbeanstalk.com/health".
I checked the log on Beanstalk and realized that the application has problem connecting to the RDS database:
"Could not create connection to database server. Attempted reconnect 3 times. Giving up."
What I tried to do to solve the problem:
1- I tried to assign the same security group the EC2 instance is using to my RDS instance, but it didn't work.
2- I tried to add more inbound rules to the security group for the public and private IPs of the EC2 instance, but I was not sure about the port and the CIDR I should define and couldn't make it work.
Any comment would be highly appreciated.
Here are the resources in your stack:
LoadBalancer -> EC2 instance(s) -> MySQL database
All of them need to have SecurityGroups assigned to them, allowing connections on the right ports to the upstream resources.
So, if you assign sg-1234 security group to your EC2 instances, and sg-5678 to your RDS database, there must be a rule existing in the sg-5678 allowing inbound connections from sg-1234 (no need for CIDRs, you can open a connection from SG to SG). The typical MySQL port is 3306.
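As a sketch, that SG-to-SG rule could be added from the CLI (using the example group IDs above; substitute your real ones):

```shell
# Allow MySQL (3306) into the RDS security group from the EC2 instances' group
aws ec2 authorize-security-group-ingress \
  --group-id sg-5678 \
  --protocol tcp \
  --port 3306 \
  --source-group sg-1234
```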
Similarly, the LoadBalancer (which is automatically created for you by Elastic Beanstalk) must have access to your EC2 instances' port 8080. Furthermore, if you want to access your instances via the "backend.us-east-2.elasticbeanstalk.com/health" domain name, the load balancer has to listen on port 80 and have a target group of your instances on port 8080.
Hope this helps!

Unable to connect to EC2 instance via ssh

I'm having trouble connecting to my EC2 instance via SSH. Currently my session times out when I try to connect.
I have a security group with the following settings
Inbound:
Type: All traffic
Protocol: All
Port Range: All
Source: 0.0.0.0/0
Outbound:
Type: All traffic
Protocol: All
Port Range: All
Destination: 0.0.0.0/0
I followed the instructions on saving the private key and converting it for use with PuTTY. When I put the public DNS into PuTTY, I am unable to connect. I verified that the host name resolves with an online DNS checker.
On the client side, I launch putty and put the following information in:
Host name (or IP address): ec2-user@<Public DNS value>
Port: 22
Connection Type: ssh
In Connection -> SSH -> Auth -> Private key file for authentication, I point it to my private key from AWS after it has been transformed to a .ppk.
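For reference, those PuTTY settings correspond to this OpenSSH invocation (using the original .pem rather than the .ppk; the key file name is a placeholder), whose verbose output can show where the connection stalls:

```shell
# -v prints verbose output, useful for seeing at which step the connection hangs
ssh -v -i my-key.pem -p 22 ec2-user@<Public DNS value>
```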
Is there anything else I need to setup to be able to connect to the EC2 instance?
It turned out to be an issue with the account and not a technical one. For whatever reason, my account was set to isolated mode by Amazon. AWS tech support verified that all of the settings were correct.