AWS Redshift blocks my IP

I cannot connect to an AWS Redshift cluster, but I am able to connect with exactly the same configuration when I'm on a different Wi-Fi network. Here are some details:
I use a Mac with SQL Workbench/J and the AWS Redshift driver.
The error I'm getting:
[Amazon] (500150) Error setting/closing connection: Operation timed out.
Using Wireshark, I see outbound TCP requests with no answer.
When I use my smartphone as a hotspot (instead of my home Wi-Fi), the same connection works fine.
Here are my security group details:
Inbound: Redshift TCP 5439 0.0.0.0/0
Outbound: All traffic All All 0.0.0.0/0
Also, I tested this on two different AWS accounts and hit the same problem on both.
Any idea would be of great help.

I found an answer here:
http://docs.aws.amazon.com/redshift/latest/mgmt/connecting-firewall-guidance.html
Idle connections are terminated by an intermediate network component (e.g. a firewall). To fix it on macOS, increase the frequency of TCP keepalives (the values are in milliseconds, so 20000 means roughly every 20 seconds):
sudo sysctl net.inet.tcp.keepintvl=20000
sudo sysctl net.inet.tcp.keepidle=20000
sudo sysctl net.inet.tcp.keepinit=20000
sudo sysctl net.inet.tcp.always_keepalive=1
If this works, add the following to /etc/sysctl.conf to persist the settings across reboots:
net.inet.tcp.keepidle=20000
net.inet.tcp.keepintvl=20000
net.inet.tcp.keepinit=20000
net.inet.tcp.always_keepalive=1
After a restart, verify the values with:
sysctl net.inet.tcp.keepidle
sysctl net.inet.tcp.keepintvl
sysctl net.inet.tcp.keepinit
sysctl net.inet.tcp.always_keepalive
The linked page also describes changing the DSN timeout settings as an alternative.
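Keepalive behavior can also sometimes be handled at the driver level instead of the OS. A minimal sketch, assuming your version of the Redshift JDBC driver supports the TCPKeepAlive connection option (check the driver documentation; the cluster endpoint is a placeholder), used as the connection URL in SQL Workbench/J:
jdbc:redshift://<cluster-endpoint>:5439/dev?TCPKeepAlive=true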

Related

Connect to CloudSQL Postgres from a different project

I'm trying to connect from an instance in Project-A (custom VPC) to CloudSQL Postgres in Project-B (default VPC). The documentation says that I need to peer these two VPCs. The peering status is in the "Active" state. In Project-A I also have cloudsql_auth_proxy. Once I execute cloudsql_auth_proxy, I get this:
root@cloudsql-auth-proxy:~# ./cloud_sql_proxy -instances=projectB:us-west1:postgres=tcp:0.0.0.0:5432
2022/12/29 16:46:59 current FDs rlimit set to 1048576, wanted limit is 8500. Nothing to do here.
2022/12/29 16:47:01 Listening on 0.0.0.0:5432 for -instances=projectB:us-west1:postgres=tcp:0.0.0.0:5432
2022/12/29 16:47:01 Ready for new connections
2022/12/29 16:47:01 Generated RSA key in 244.541948ms
When I try to connect to the proxy with psql -h xxx.xxx.xxx.xxx -p 5432 -U proxyuser -d postgres, it hangs.
The output of cloudsql_auth_proxy looks like this:
2022/12/29 16:48:00 New connection for "-instances=projectB:us-west1:postgres"
2022/12/29 16:48:00 refreshing ephemeral certificate for instance -instances=projectB:us-west1:postgres
2022/12/29 16:48:00 Scheduling refresh of ephemeral certificate in 55m0s
: dial tcp 10.35.144.3:3307: connect: connection timed out
Any thoughts about this?
You'll need to deploy a SOCKS5 proxy in the Project B VPC to provide a network path between the VPCs. Dante is a popular choice.
Once you have a SOCKS5 proxy running, you can launch the Cloud SQL Auth Proxy pointing at it.
See https://github.com/GoogleCloudPlatform/cloud-sql-proxy#running-behind-a-socks5-proxy.
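For example, assuming the Dante proxy in Project B listens on port 1080 (the address below is a placeholder), the v1 proxy can be routed through it with the ALL_PROXY environment variable described in that README:
ALL_PROXY=socks5://<dante-vm-ip>:1080 ./cloud_sql_proxy -instances=projectB:us-west1:postgres=tcp:0.0.0.0:5432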
I think you might have posted this on the GCP subreddit too! :P
To expand on @enocom's answer: VPC non-transitivity in GCP makes this a bit awkward.
I am a bit puzzled by a GCP design that would require running two extra GCE constructs + a socks proxy + a cloud_sql_auth proxy. That's a lot of bits to interconnect GCP native services like CloudSQL and Datastream.
I don't think I can remove any of the current pieces. If we remove vm-002, Datastream won't be able to reach vm-001 due to the lack of transitivity.
Here is a reference Dante config that removes authentication from the SOCKS proxy. Don't do this in prod; it's just for a quick test ;)
In /etc/danted.conf:
logoutput: syslog
clientmethod: none
socksmethod: none
# The listening network interface or address.
internal: 0.0.0.0 port=1080
# The proxying network interface or address.
external: ens4
client pass {
    from: 0.0.0.0/0 to: 0.0.0.0/0
}
socks pass {
    from: 0.0.0.0/0 to: 0.0.0.0/0
}
Then restart the service and check its status:
systemctl restart danted.service
systemctl status danted.service
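To sanity-check the SOCKS proxy on its own before involving the Cloud SQL Auth Proxy, you can push an arbitrary request through it with curl (the VM address is a placeholder):
curl -x socks5://<dante-vm-ip>:1080 http://example.com/
If that returns a page, the Dante side works, and any remaining timeout lies between the proxy VM and the CloudSQL instance.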

Tomcat in a Docker container on Linux mapped to anything other than 8080 is not accessible from the Internet

I tested on AWS EC2 with Amazon Linux and Ubuntu 18.04.
Tomcat is reachable at localhost:8081, but not from the outside network.
After pulling the Tomcat image:
docker pull tomcat
Then running a container with port mapping:
docker run -d --name container-test -p 8081:8080 tomcat
The Tomcat web page is not accessible; the browser says:
This site can’t be reached 13.49.148.112:8081 took too long to respond.
But when mapping to port 8080 instead, it works fine:
docker run -d --name container-test2 -p 8080:8080 tomcat
I opened ALL/ALL/ALL in the AWS security groups.
netstat shows that the ports are listening correctly.
The network ACLs are at the default rule 100, allowing everything.
I also ran nmap and found that the port is filtered:
$ nmap -p8081 172.217.27.174
PORT STATE SERVICE
8081/tcp filtered blackice-icecap
Tried adding a rule to iptables, but no luck:
iptables -I INPUT 3 -s 0.0.0.0/0 -d 0.0.0.0/0 -p tcp --dport 8081 -m state --state NEW -j ACCEPT
What can be done?
UPDATE:
I spent two good days trying to solve the issue on Amazon Linux 2 with no success at all, then switched to Ubuntu 22.04 and it's working. Also, the same setup works with a different AMI image in the Mumbai region,
hence there is a high chance the image is faulty in the Stockholm region specifically.
It could be one of these:
check the port mappings of the container in your task definition
check the entries of the NACL (network access control list) of your subnet (check if it's public)
check if you allowed the traffic in the security group for your IP or 0.0.0.0/0
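To narrow this down from the instance itself, it can help to confirm that Docker actually published the port and that the host answers locally, then compare against the public path. A small sketch using the container name and public IP from the question:
docker port container-test                # should print 8080/tcp -> 0.0.0.0:8081
sudo iptables -t nat -L DOCKER -n         # the DNAT rule for port 8081 should appear here
curl -m 5 http://localhost:8081/          # works locally per the question
curl -m 5 http://13.49.148.112:8081/      # times out if something upstream filters the port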

Unable to access Kibana on AWS EC2 instance using url

I have Elasticsearch and Kibana installed on an EC2 instance, where I am able to access Elasticsearch at http://public-ip:9200. But I am unable to access Kibana at http://public-ip:5601.
I have configured kibana.yml and added certain fields.
server.port: 5601
server.host: 0.0.0.0
elasticsearch.url: 0.0.0.0:9200
On running wget http://localhost:5601, I get the following output:
--2022-06-10 11:23:37-- http://localhost:5601/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:5601... connected.
HTTP request sent, awaiting response... 200 OK
Length: 83731 (82K) [text/html]
Saving to: ‘index.html’
What am I doing wrong?
Server host set to 0.0.0.0 means it should be accessible from outside localhost, but double-check that the listener is actually accepting external connections on that port using netstat -nltpu. The server is already reachable on its public IP on port 9200, so try the following:
The EC2 security group should allow inbound TCP traffic on port 5601 from your IP address.
Network ACLs should allow inbound/outbound TCP traffic on port 5601.
The OS firewall (e.g. ufw or firewalld) should allow traffic on that port. You can run iptables -L -nxv to check the firewall rules.
Try connecting to that port from a different EC2 instance in the same VPC. It is possible that whatever internet connection you are using has a firewall blocking connections on that port; this is common with corporate firewalls.
If these fail, next check whether packets are reaching your EC2 instance: run a packet capture on that port using tcpdump -ni any port 5601 and see if any packets come in or out on that port.
If you don't see any packets in tcpdump, use VPC Flow Logs to see whether packets are arriving on that port.
Assuming the Kibana port (5601) is open via the security groups:
I was able to resolve the issue by updating the config from server.host: localhost to server.host: 0.0.0.0,
and setting elasticsearch.hosts: ["http://localhost:9200"] (in my case Kibana and ES both run on the same machine) in kibana.yml.
https://discuss.elastic.co/t/kibana-url-gives-connection-refused-from-outside-machine/122067/8
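Putting those fixes together, a minimal kibana.yml for this setup might look like the sketch below, assuming Kibana and Elasticsearch run on the same instance (note that elasticsearch.url was replaced by elasticsearch.hosts in Kibana 6.6, so use the key your version expects):
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://localhost:9200"]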

Timeout when connecting to Amazon RDS (Microsoft SQL Server / PostgreSQL)

I cannot connect to my brand-new SQL Server Express Edition instance from the Internet.
~$ sudo nc -vz <HOST>.eu-west-3.rds.amazonaws.com 1433
nc: connect to <HOST>.eu-west-3.rds.amazonaws.com port 1433 (tcp) failed: Connection timed out
I have already configured the AWS security group assigned to this database instance. My inbound and outbound rules are:
type: all traffic
protocol: all
port range: all
source: ::/0
Also, everything looks fine in the AWS Management Console:
DB instance status: available
Pending maintenance: none
Publicly accessible: Yes
Locally, I have also disabled my ufw:
~$ sudo ufw status verbose
Status: inactive
and iptables:
~$ sudo iptables -P INPUT ACCEPT && sudo iptables -P OUTPUT ACCEPT && sudo iptables -P FORWARD ACCEPT && sudo iptables -F
But still, nothing works. (The same happens with both my SQL Server Express Edition and PostgreSQL 9.4.15 on AWS.)
From your description, I assume you want to access your RDS instance from the Internet.
To make the RDS instance reachable, ensure these points:
RDS must be in a public VPC subnet.
RDS must be configured with "Public accessibility" = "Yes"
Security Group should contain 0.0.0.0/0 (IPv4) and ::/0 (IPv6); note that the rules quoted above list only ::/0, which does not match IPv4 clients.
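As a rough sketch, the "Publicly accessible" flag can be checked and the SQL Server port opened with the AWS CLI (the instance identifier and security group ID are placeholders):
aws rds describe-db-instances --db-instance-identifier <my-instance> --query 'DBInstances[0].PubliclyAccessible'
aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 1433 --cidr 0.0.0.0/0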

Accessing GCP Memorystore from local machines

What's the best way to access Memorystore from local machines during development? Is there something like Cloud SQL Proxy that I can use to set up a tunnel?
You can spin up a Compute Engine instance and use port forwarding to connect to your Redis machine.
For example if your Redis machine has internal IP address 10.0.0.3 you'd do:
gcloud compute instances create redis-forwarder --machine-type=f1-micro
gcloud compute ssh redis-forwarder -- -N -L 6379:10.0.0.3:6379
As long as you keep the SSH tunnel open, you can connect to localhost:6379.
Update: this is now officially documented:
https://cloud.google.com/memorystore/docs/redis/connecting-redis-instance#connecting_from_a_local_machine_with_port_forwarding
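With the tunnel open, a quick check from the local machine looks like this (it should answer PONG):
redis-cli -h localhost -p 6379 ping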
I created a VM on Google Cloud:
gcloud compute instances create redis-forwarder --machine-type=f1-micro
then SSHed into it and installed HAProxy:
sudo su
apt-get install haproxy
then updated the config file /etc/haproxy/haproxy.cfg:
....existing file contents
frontend redis_frontend
    bind *:6379
    mode tcp
    option tcplog
    timeout client 1m
    default_backend redis_backend

backend redis_backend
    mode tcp
    option tcplog
    option log-health-checks
    option redispatch
    log global
    balance roundrobin
    timeout connect 10s
    timeout server 1m
    server redis_server [MEMORYSTORE IP]:6379 check
Then restart haproxy:
/etc/init.d/haproxy restart
I was then able to connect to Memorystore from my local machine for development.
You can spin up a Compute Engine instance and set up HAProxy using the haproxy Docker image; HAProxy will then forward your TCP requests to Memorystore.
For example, I want to access a Memorystore instance with IP 10.0.0.12, so I added the following HAProxy config:
frontend redis_frontend
    bind *:6379
    mode tcp
    option tcplog
    timeout client 1m
    default_backend redis_backend

backend redis_backend
    mode tcp
    option tcplog
    option log-health-checks
    option redispatch
    log global
    balance roundrobin
    timeout connect 10s
    timeout server 1m
    server redis_server 10.0.0.12:6379 check
So now you can access Memorystore from your local machine using the following command:
redis-cli -h <your-haproxy-public-ipaddress> -p 6379
Note: replace <your-haproxy-public-ipaddress> with your actual HAProxy public IP address.
Hope that helps you solve your problem.
This post builds on earlier ones and should help you bypass firewall issues.
Create a virtual machine in the same region (and zone, to be safe) as your Memorystore instance. On this machine:
Add a network tag with which we will create a firewall rule to allow traffic on port 6379.
Add an external IP with which you will access this VM.
SSH into this machine and install haproxy:
sudo su
apt-get install haproxy
Add the following below the existing config in the /etc/haproxy/haproxy.cfg file:
frontend redis_frontend
    bind *:6379
    mode tcp
    option tcplog
    timeout client 1m
    default_backend redis_backend

backend redis_backend
    mode tcp
    option tcplog
    option log-health-checks
    option redispatch
    log global
    balance roundrobin
    timeout connect 10s
    timeout server 1m
    server redis_server [MEMORYSTORE IP]:6379 check
Restart haproxy:
/etc/init.d/haproxy restart
Now create a firewall rule that allows traffic on port 6379 to the VM (see the gcloud sketch below). Ensure:
It has the same target tag as the network tag we created on the VM.
It allows traffic on port 6379 over the TCP protocol.
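As a sketch, assuming the network tag on the VM is redis-proxy (any name works as long as the rule's target tag matches), the rule could be created like this; prefer restricting --source-ranges to your own IP over 0.0.0.0/0:
gcloud compute firewall-rules create allow-redis-6379 --allow tcp:6379 --target-tags redis-proxy --source-ranges <your-ip>/32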
Now you should be able to connect remotely like so:
redis-cli -h [VM IP] -p 6379
Memorystore does not allow connections from local machines, and other routes such as going through GCE or GAE are expensive, especially when your project is small or still in development. I suggest creating a Cloud Function to run commands against Memorystore; it's a serverless service, which means a lower fee to execute. I wrote a small tool for this, and the result is similar to running it on a local machine. You can check whether it helps you.
As @Christiaan answered above, it almost worked for me, but I needed to check a few other things to make it work well.
Firstly, in my case my Redis instance is running in a specific network other than the default one, so I had to create the jump box inside the same network (let's call it my-network).
Secondly, I needed to apply a firewall rule to open port 22 in that network.
Putting together all the commands I needed, it looks like this:
gcloud compute firewall-rules create default-allow-ssh --project=my-project --network my-network --allow tcp:22 --source-ranges 0.0.0.0/0
gcloud compute instances create jump-box --machine-type=f1-micro --project my-project --zone europe-west1-b --network my-network
gcloud compute ssh jump-box --project my-project --zone europe-west1-b -- -N -L 6379:10.177.174.179:6379
Then I have access to Redis locally on port 6379.