I'm trying to connect from an instance in Project-A (custom VPC) to a Cloud SQL Postgres instance in Project-B (default VPC). The documentation says I need to peer the two VPCs, and the peering status is in the "Active" state. In Project-A I also run the Cloud SQL Auth Proxy. When I start it, I get this:
root#cloudsql-auth-proxy:~# ./cloud_sql_proxy -instances=projectB:us-west1:postgres=tcp:0.0.0.0:5432
2022/12/29 16:46:59 current FDs rlimit set to 1048576, wanted limit is 8500. Nothing to do here.
2022/12/29 16:47:01 Listening on 0.0.0.0:5432 for -instances=projectB:us-west1:postgres=tcp:0.0.0.0:5432
2022/12/29 16:47:01 Ready for new connections
2022/12/29 16:47:01 Generated RSA key in 244.541948ms
When I try to connect through the proxy like this: psql -h xxx.xxx.xxx.xxx -p 5432 -U proxyuser -d postgres, it hangs.
The Cloud SQL Auth Proxy output then looks like this:
2022/12/29 16:48:00 New connection for "-instances=projectB:us-west1:postgres"
2022/12/29 16:48:00 refreshing ephemeral certificate for instance -instances=projectB:us-west1:postgres
2022/12/29 16:48:00 Scheduling refresh of ephemeral certificate in 55m0s
: dial tcp 10.35.144.3:3307: connect: connection timed out
Any thoughts about this?
You'll need to deploy a SOCKS5 proxy in the Project B VPC to provide a network path between the VPCs. Dante is a popular choice.
Once you have a SOCKS5 proxy running, you can launch the Auth Proxy pointing at it.
See https://github.com/GoogleCloudPlatform/cloud-sql-proxy#running-behind-a-socks5-proxy.
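As a rough sketch: assuming the Dante instance in Project B listens on 10.x.x.x:1080 (a placeholder address) and your proxy version supports the ALL_PROXY environment variable described in that README, the launch on the Project-A VM might look like this:
ALL_PROXY=socks5://10.x.x.x:1080 ./cloud_sql_proxy -instances=projectB:us-west1:postgres=tcp:0.0.0.0:5432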
I think you might have posted this on the GCP subreddit too! :P
To expand on @enocom's answer with some diagrams.
For reference: VPC non-transitivity in GCP makes this a bit awkward.
I am a bit puzzled by a GCP design that requires running two extra GCE instances plus a SOCKS proxy plus the Cloud SQL Auth Proxy. That's a lot of pieces to interconnect GCP-native services like Cloud SQL and Datastream.
I don't think I can remove any of the current pieces. If we remove vm-002, Datastream won't be able to reach vm-001 due to the lack of transitivity.
Reference Dante config that removes authentication from the SOCKS proxy. Don't do this in prod; it's just for the sake of a simple test ;)
In /etc/danted.conf:
logoutput: syslog
clientmethod: none
socksmethod: none
# The listening network interface or address.
internal: 0.0.0.0 port=1080
# The proxying network interface or address.
external: ens4
client pass {
    from: 0.0.0.0/0 to: 0.0.0.0/0
}
socks pass {
    from: 0.0.0.0/0 to: 0.0.0.0/0
}
Then restart the service and check its status:
systemctl restart danted.service
systemctl status danted.service
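To sanity-check the SOCKS path from the Project-A VM before pointing the Auth Proxy at it, something like the following should work (the Dante VM address 10.x.x.x is a placeholder, and the -x/-X options require the OpenBSD variant of netcat):
nc -z -X 5 -x 10.x.x.x:1080 10.35.144.3 3307 && echo "SOCKS path to Cloud SQL OK"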
I have Elasticsearch and Kibana installed on an EC2 instance, where I am able to access Elasticsearch at http://public-ip:9200. But I am unable to access Kibana at http://public-ip:5601.
I have configured kibana.yml and added certain fields.
server.port: 5601
server.host: 0.0.0.0
elasticsearch.url: 0.0.0.0:9200
When I run wget http://localhost:5601 I get the output below:
--2022-06-10 11:23:37-- http://localhost:5601/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:5601... connected.
HTTP request sent, awaiting response... 200 OK
Length: 83731 (82K) [text/html]
Saving to: ‘index.html’
What am I doing wrong?
server.host set to 0.0.0.0 means Kibana should be accessible from outside localhost, but double-check that the listener is actually accepting external connections on that port using netstat -nltpu. The server is also reachable on its public IP on port 9200, so try the following checks (a combined snippet of these commands follows the list):
The EC2 Security Group should allow inbound TCP traffic on port 5601 from your IP address.
Network ACLs should allow inbound/outbound TCP traffic on port 5601.
The OS firewall (e.g. ufw or firewalld) should allow traffic on that port. You can run iptables -L -nxv to check the firewall rules.
Try connecting to that port from a different EC2 instance in the same VPC. It is possible that whatever internet connection you are using has a firewall blocking connections on that port; this is common with corporate firewalls.
If these fail, next check whether the packets are reaching your EC2 instance: run a packet capture on that port using tcpdump -ni any port 5601 and see if any packets come in/out on that port.
If you don't see any packets in tcpdump, use VPC Flow Logs to see whether packets are coming in/out on that port.
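Collected into a quick snippet to run on the instance itself (just the commands mentioned above):
sudo netstat -nltpu | grep 5601   # is Kibana listening on 0.0.0.0:5601?
sudo tcpdump -ni any port 5601    # are any packets arriving on port 5601?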
Assuming the Kibana port (5601) is open via security groups, I was able to resolve the issue by updating the config from server.host: localhost to server.host: 0.0.0.0,
and setting elasticsearch.hosts: ["http://localhost:9200"] (in my case Kibana and ES are both running on the same machine) in kibana.yml.
https://discuss.elastic.co/t/kibana-url-gives-connection-refused-from-outside-machine/122067/8
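For reference, a minimal sketch of the relevant kibana.yml lines after that change (for Kibana versions that use elasticsearch.hosts rather than elasticsearch.url):
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://localhost:9200"]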
I am very new to coding, so trying to figure this out has been very hard for me. I'm trying to deploy my code with Docker and run it inside an EC2 instance, but I can't seem to get the instance's URL to work. I set my inbound rules (security group) to HTTP (80) => 0.0.0.0/0, HTTPS (443) => 0.0.0.0/0, and SSH (22) => my IP. I read that setting my SSH rule to 0.0.0.0/0 was a bad idea, so I went with my IP (there was an option called 'My IP'). Also, I am using Ubuntu for my AMI.
After successfully bringing everything up with Docker (docker-compose up), I used curl http://localhost:3001 (3001 is the port exposed inside my code) and it works fine. But when I use curl ec2-XX-XXX-XXX-XXX.us-west-1.compute.amazonaws.com, it outputs:
curl: (6) Could not resolve host: ssh and
curl: (7) Failed to connect to ec2-XX-XXX-XXX-XXX.us-west-1.compute.amazonaws.com port 80: Connection refused
curl ec2-xxx-xx-amazonaws.com sends the request on port 80, while your Docker container is listening on port 3001.
First verify that you have published a host port for the container. Something like this should show up in docker ps -a:
0.0.0.0:3001->3001/tcp (the first 3001 can be any host port)
Next, make sure that whichever host port you used is present in the security group and opened for your IP.
If the VPC and route table settings are also fine, then adding :3001 to the URL (or whatever host port you used instead of 3001) should make it all work.
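A minimal sketch of the port mapping in docker-compose.yml, assuming the app listens on 3001 inside the container (the service name here is hypothetical):
services:
  web:
    build: .
    ports:
      - "80:3001"   # host port 80 -> container port 3001, so plain curl on port 80 reaches the app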
I just started a new AWS EC2 instance. In the instance's security group I added a new rule to open port 8080 as well as port 80.
I created a Docker image and container that runs an Apache server, as per the AWS tutorial.
When I run docker run -p 80:80 hello-world (where hello-world is the apache container image), everything works fine and I can access the server from the public network (using a web browser, or a curl command).
However, when I run docker run -p 8080:80 hello-world and I try to send a GET request (web browser, or curl) I get a connection timeout.
If I login to the host that is running the docker container, the curl command works fine. This tells me that port 8080 isn't really open to the public network, and something is blocking it, what could that be?
I tried to reproduce this and wasn't able to (it worked for me), so here are the things you should check:
1) Check that the security group has indeed opened ports 80 and 8080 to your IP (or to 0.0.0.0/0, just for a test, to confirm that this is not a firewall issue).
2) Check that the container is running:
docker ps -a
you should see: 0.0.0.0:8080->80/tcp under ports.
3) Check that when you send the GET request you are specifying port 8080, so in your browser it should look something like:
http://your.ip:8080
or curl:
curl http://your.ip:8080
Warning: just for testing.
For testing, opening up the security group can solve the problem:
Security Groups > Inbound > Edit inbound rules > Add rule > All TCP
What's the best way to access Memorystore from local machines during development? Is there something like the Cloud SQL Proxy that I can use to set up a tunnel?
You can spin up a Compute Engine instance and use port forwarding to connect to your Redis machine.
For example if your Redis machine has internal IP address 10.0.0.3 you'd do:
gcloud compute instances create redis-forwarder --machine-type=f1-micro
gcloud compute ssh redis-forwarder -- -N -L 6379:10.0.0.3:6379
As long as you keep the SSH tunnel open, you can connect to localhost:6379.
Update: this is now officially documented:
https://cloud.google.com/memorystore/docs/redis/connecting-redis-instance#connecting_from_a_local_machine_with_port_forwarding
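With the tunnel from the command above still open, a quick check from the local machine might look like this (assuming redis-cli is installed locally):
redis-cli -h 127.0.0.1 -p 6379 PING
which should reply with PONG.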
I created a VM on Google Cloud:
gcloud compute instances create redis-forwarder --machine-type=f1-micro
then SSHed into it and installed haproxy:
sudo su
apt-get install haproxy
then updated the config file
/etc/haproxy/haproxy.cfg
# ...existing file contents...
frontend redis_frontend
bind *:6379
mode tcp
option tcplog
timeout client 1m
default_backend redis_backend
backend redis_backend
mode tcp
option tcplog
option log-health-checks
option redispatch
log global
balance roundrobin
timeout connect 10s
timeout server 1m
server redis_server [MEMORYSTORE IP]:6379 check
Then restart haproxy:
/etc/init.d/haproxy restart
I was then able to connect to Memorystore from my local machine for development.
You can spin up a Compute Engine instance and set up haproxy using the official haproxy Docker image; haproxy will then forward your TCP requests to Memorystore.
For example, I want to access a Memorystore instance with IP 10.0.0.12, so I added the following haproxy config:
frontend redis_frontend
bind *:6379
mode tcp
option tcplog
timeout client 1m
default_backend redis_backend
backend redis_backend
mode tcp
option tcplog
option log-health-checks
option redispatch
log global
balance roundrobin
timeout connect 10s
timeout server 1m
server redis_server 10.0.0.12:6379 check
So now you can access memorystore from your local machine using the following command:
redis-cli -h <your-haproxy-public-ipaddress> -p 6379
Note: replace <your-haproxy-public-ipaddress> with your actual haproxy IP address.
Hope that helps you solve your problem.
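For what it's worth, here is a rough sketch of running that config with the official haproxy image; the image tag, config path on the host, and host networking are assumptions rather than part of the original answer:
docker run -d --name redis-forwarder \
  --network host \
  -v "$(pwd)/haproxy.cfg":/usr/local/etc/haproxy/haproxy.cfg:ro \
  haproxy:2.8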
This post builds on earlier ones and should help you bypass firewall issues.
Create a virtual machine in the same region (and zone, to be safe) as your Memorystore instance. On this machine:
Add a network tag with which we will create a firewall rule to allow traffic on port 6379
Add an external IP with which you will access this VM
SSH into this machine and install haproxy
sudo su
apt-get install haproxy
Add the following below the existing config in the /etc/haproxy/haproxy.cfg file:
frontend redis_frontend
bind *:6379
mode tcp
option tcplog
timeout client 1m
default_backend redis_backend
backend redis_backend
mode tcp
option tcplog
option log-health-checks
option redispatch
log global
balance roundrobin
timeout connect 10s
timeout server 1m
server redis_server [MEMORYSTORE IP]:6379 check
Then restart haproxy:
/etc/init.d/haproxy restart
Now create a firewall rule that allows traffic on port 6379 to the VM (a gcloud sketch follows this list). Ensure:
It has the same target tag as the network tag we created on the VM.
It allows traffic on port 6379 for the TCP protocol.
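For example, with gcloud (the rule name, network, target tag, and source range below are placeholders to adjust for your setup):
gcloud compute firewall-rules create allow-redis-6379 \
  --network default \
  --allow tcp:6379 \
  --target-tags redis-forwarder \
  --source-ranges <your-ip>/32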
Now you should be able to connect remotely like so:
redis-cli -h [VM IP] -p 6379
Memorystore does not allow connecting from local machines, and other routes such as going through GCE or GAE are expensive, especially if your project is small or still in development. I suggest you create a Cloud Function to talk to Memorystore; it's a serverless service, which means a lower fee to execute. I wrote a small tool for this, and the result is similar to running on a local machine. You can check whether it helps you.
Like @Christiaan answered above, it almost worked for me, but I needed to check a few other things to make it work well.
Firstly, in my case my Redis instance is running in a specific network other than the default network, so I had to create the jump box inside the same network (let's call it my-network).
Secondly, I needed to apply a firewall rule to open port 22 in that network.
So, putting all the needed commands together, it looks like this:
gcloud compute firewall-rules create default-allow-ssh --project=my-project --network my-network --allow tcp:22 --source-ranges 0.0.0.0/0
gcloud compute instances create jump-box --machine-type=f1-micro --project my-project --zone europe-west1-b --network my-network
gcloud compute ssh jump-box --project my-project --zone europe-west1-b -- -N -L 6379:10.177.174.179:6379
Then I have access to Redis locally on port 6379.
I set up a Kubernetes cluster and then deployed it on AWS. It created one load balancer, one master, and 4 minion nodes.
I can use the kubectl proxy command to check whether it works locally, and it does; I am able to connect to a particular pod.
The problem is that I can't access it externally. I have an address which looks like this:
ab0154f2bcc5c11e6aff30a71ada8ce9-447509613.eu-west-1.elb.amazonaws.com
I also modified the security groups, so each node has the following security group:
Ports Protocol Source
80 tcp 0.0.0.0/0
8080 tcp 0.0.0.0/0
All All sg-4dbbce2b, sg-4ebbce28, sg-e6a4d180
22 tcp 0.0.0.0/0
What might be wrong with this configuration ?
Does the service which created the ELB have endpoints? Do a kubectl describe svc <serviceName> and check the Endpoints section. If not, then you need to match up the selectors better. If you do see them, I would try hitting the NodePort from one of the machines to verify it works; a simple curl should work. If that works, then I would look deeper into the AWS security settings.
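A rough sketch of those checks (the service name, node IP, and NodePort are placeholders):
kubectl describe svc <serviceName>     # look at the Endpoints: line
kubectl get endpoints <serviceName>
curl -v http://<node-ip>:<nodePort>/   # run this from one of the nodes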