What's the best way to access Memorystore from local machines during development? Is there something like Cloud SQL Proxy that I can use to set up a tunnel?
You can spin up a Compute Engine instance and use port forwarding to connect to your Redis instance.
For example, if your Redis instance has the internal IP address 10.0.0.3, you'd do:
gcloud compute instances create redis-forwarder --machine-type=f1-micro
gcloud compute ssh redis-forwarder -- -N -L 6379:10.0.0.3:6379
As long as you keep the SSH tunnel open, you can connect to localhost:6379.
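To confirm the tunnel is working, a quick check from the local machine (assuming redis-cli is installed there); it should reply with PONG:
redis-cli -h localhost -p 6379 ping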
Update: this is now officially documented:
https://cloud.google.com/memorystore/docs/redis/connecting-redis-instance#connecting_from_a_local_machine_with_port_forwarding
I created a VM on Google Cloud:
gcloud compute instances create redis-forwarder --machine-type=f1-micro
then SSHed into it and installed HAProxy:
sudo su
apt-get install haproxy
Then I updated the config file /etc/haproxy/haproxy.cfg:
# ... existing file contents ...
frontend redis_frontend
    bind *:6379
    mode tcp
    option tcplog
    timeout client 1m
    default_backend redis_backend

backend redis_backend
    mode tcp
    option tcplog
    option log-health-checks
    option redispatch
    log global
    balance roundrobin
    timeout connect 10s
    timeout server 1m
    server redis_server [MEMORYSTORE IP]:6379 check
Then I restarted HAProxy:
/etc/init.d/haproxy restart
I was then able to connect to Memorystore from my local machine for development.
You can spin up a Compute Engine instance and set up HAProxy using the official haproxy Docker image; HAProxy will then forward your TCP requests to Memorystore.
For example, I want to access a Memorystore instance with IP 10.0.0.12, so I added the following HAProxy config:
frontend redis_frontend
    bind *:6379
    mode tcp
    option tcplog
    timeout client 1m
    default_backend redis_backend

backend redis_backend
    mode tcp
    option tcplog
    option log-health-checks
    option redispatch
    log global
    balance roundrobin
    timeout connect 10s
    timeout server 1m
    server redis_server 10.0.0.12:6379 check
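A minimal sketch of how the HAProxy container could be started with this config, assuming it is saved as haproxy.cfg in the current directory (the container name here is made up):
docker run -d --name redis-proxy -p 6379:6379 \
  -v "$(pwd)/haproxy.cfg":/usr/local/etc/haproxy/haproxy.cfg:ro \
  haproxy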
Now you can access Memorystore from your local machine using the following command:
redis-cli -h <your-haproxy-public-ipaddress> -p 6379
Note: replace <your-haproxy-public-ipaddress> with your actual HAProxy public IP address.
Hope that helps you solve your problem.
This post builds on earlier ones and should help you bypass firewall issues.
Create a virtual machine in the same region (and zone, to be safe) as your Memorystore instance. On this machine:
Add a network tag with which we will create a firewall rule to allow traffic on port 6379
Add an external IP with which you will access this VM
SSH into this machine and install HAProxy:
sudo su
apt-get install haproxy
Add the following below the existing config in the /etc/haproxy/haproxy.cfg file:
frontend redis_frontend
    bind *:6379
    mode tcp
    option tcplog
    timeout client 1m
    default_backend redis_backend

backend redis_backend
    mode tcp
    option tcplog
    option log-health-checks
    option redispatch
    log global
    balance roundrobin
    timeout connect 10s
    timeout server 1m
    server redis_server [MEMORYSTORE IP]:6379 check
Restart HAProxy:
/etc/init.d/haproxy restart
Now create a firewall rule that allows traffic on port 6379 to the VM (an example gcloud command is sketched after this list). Ensure:
It has the same target tag as the network tag we created on the VM.
It allows traffic on port 6379 for the TCP protocol.
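A minimal sketch of that rule with gcloud; the rule name and the redis-proxy network tag are assumptions, and you should tighten --source-ranges to your own IP rather than 0.0.0.0/0 if you can:
gcloud compute firewall-rules create allow-redis-6379 \
  --direction=INGRESS --action=ALLOW --rules=tcp:6379 \
  --target-tags=redis-proxy --source-ranges=0.0.0.0/0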
Now you should be able to connect remotely like so:
redis-cli -h [VM IP] -p 6379
Memorystore does not allow connections from local machines, and other options such as Compute Engine or App Engine are expensive, especially if your project is small or still in development. I suggest creating a Cloud Function to run commands against Memorystore; it's a serverless service, which means a lower fee to execute. I wrote a small tool for this, and the result is similar to running on a local machine. You can check whether it helps you.
As @Christiaan answered above, it almost worked for me, but I needed to check a few other things to make it work well.
Firstly, in my case my Redis instance is running in a specific network other than the default network, so I had to create the jump box inside the same network (let's call it my-network).
Secondly, I needed to apply a firewall rule to open port 22 in that network.
Putting all the commands I needed together, it looks like this:
gcloud compute firewall-rules create default-allow-ssh --project=my-project --network my-network --allow tcp:22 --source-ranges 0.0.0.0/0
gcloud compute instances create jump-box --machine-type=f1-micro --project my-project --zone europe-west1-b --network my-network
gcloud compute ssh jump-box --project my-project --zone europe-west1-b -- -N -L 6379:10.177.174.179:6379
Then I have access to Redis locally on port 6379.
I'm trying to connect from an instance in Project-A (custom VPC) to Cloud SQL Postgres in Project-B (default VPC). The documentation says that I need to peer these two VPCs, and the peering status is in the "Active" state. In Project-A I also have the cloudsql_auth_proxy. Once I execute cloudsql_auth_proxy, I get this:
root@cloudsql-auth-proxy:~# ./cloud_sql_proxy -instances=projectB:us-west1:postgres=tcp:0.0.0.0:5432
2022/12/29 16:46:59 current FDs rlimit set to 1048576, wanted limit is 8500. Nothing to do here.
2022/12/29 16:47:01 Listening on 0.0.0.0:5432 for -instances=projectB:us-west1:postgres=tcp:0.0.0.0:5432
2022/12/29 16:47:01 Ready for new connections
2022/12/29 16:47:01 Generated RSA key in 244.541948ms
When I try to connect through the cloudsql_auth_proxy like this: psql -h xxx.xxx.xxx.xxx -p 5432 -U proxyuser -d postgres, it hangs.
The output of cloudsql_auth_proxy looks like this:
2022/12/29 16:48:00 New connection for "-instances=projectB:us-west1:postgres"
2022/12/29 16:48:00 refreshing ephemeral certificate for instance -instances=projectB:us-west1:postgres
2022/12/29 16:48:00 Scheduling refresh of ephemeral certificate in 55m0s
: dial tcp 10.35.144.3:3307: connect: connection timed out
Any thoughts about this?
You'll need to deploy a SOCKS5 proxy in the Project-B VPC to provide a network path between the VPCs. Dante is a popular choice.
Once you have a SOCKS5 proxy running, you can launch the Cloud SQL Auth Proxy pointing at it.
See https://github.com/GoogleCloudPlatform/cloud-sql-proxy#running-behind-a-socks5-proxy.
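Based on that README, a hedged sketch of what launching the proxy through Dante might look like; the Dante VM's internal IP and the default SOCKS port 1080 are assumptions, and this presumes your proxy version supports the ALL_PROXY environment variable described there:
ALL_PROXY=socks5://[DANTE VM INTERNAL IP]:1080 ./cloud_sql_proxy -instances=projectB:us-west1:postgres=tcp:0.0.0.0:5432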
I think you might have posted this on the GCP subreddit too! :P
To expand on @enocom's answer with some diagrams.
VPC non-transitivity in GCP makes this a bit awkward.
I am a bit puzzled by a GCP design that requires running two extra GCE instances plus a SOCKS proxy plus the Cloud SQL Auth Proxy. That's a lot of pieces just to interconnect GCP-native services like Cloud SQL and Datastream.
I don't think I can remove any of the current pieces. If we remove vm-002, Datastream won't be able to reach vm-001 due to the lack of transitivity.
Here is a reference Dante config with authentication removed from the SOCKS proxy. Don't do this in prod; it's just for the sake of a simple test ;)
In /etc/danted.conf:

logoutput: syslog
clientmethod: none
socksmethod: none

# The listening network interface or address.
internal: 0.0.0.0 port=1080

# The proxying network interface or address.
external: ens4

client pass {
    from: 0.0.0.0/0 to: 0.0.0.0/0
}

socks pass {
    from: 0.0.0.0/0 to: 0.0.0.0/0
}

Then restart the service and check its status:

systemctl restart danted.service
systemctl status danted.service
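To check that the SOCKS proxy accepts unauthenticated connections, a quick test from a machine that can reach it (the VM IP placeholder is an assumption; curl's socks5h scheme resolves hostnames through the proxy):
curl -x socks5h://[DANTE VM IP]:1080 https://www.google.com -o /dev/null -w "%{http_code}\n"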
I tested this on AWS EC2 with Amazon Linux and Ubuntu 18.04.
Tomcat is reachable from localhost:8081, but not from the outside network.
After pulling the tomcat image:
docker pull tomcat
Then running a container with port mapping:
docker run -d --name container-test -p 8081:8080 tomcat
The Tomcat web page is not accessible; it says:
This site can’t be reached 13.49.148.112:8081 took too long to respond.
But when done this way, it works fine:
docker run -d --name container-test2 -p 8080:8080 tomcat
I opened ALL ALL ALL in AWS security groups.
netstat shows that the ports are listening correctly.
ACLs are at the default rule 100, allowing everything.
I also ran nmap and found that the port is filtered:
$nmap -p8081 172.217.27.174
PORT STATE SERVICE
8081/tcp filtered blackice-icecap
I tried to add a rule to iptables, but no luck:
iptables -I INPUT 3 -s 0.0.0.0/0 -d 0.0.0.0/0 -p tcp --dport 8081 -m state --state NEW -j ACCEPT
What can be done?
UPDATE:
I spent two good days trying to solve the issue with Amazon Linux 2, with no success at all, then switched to Ubuntu 22.04 and it's working. The same setup also works on a different AMI in the Mumbai region,
so there is a high chance the image is faulty in the Stockholm region specifically.
It could be one of these (a few quick local checks are sketched after this list):
Check the port mappings of the container in your task definition.
Check the entries of the NACL (access control list) of your subnet (check if it's public).
Check if you allowed the traffic in the security group for your IP or 0.0.0.0/0.
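Before digging into AWS networking, it can also help to confirm on the instance itself that the mapping and listener are in place; a hedged sketch using the container name from the question:
docker port container-test            # should show 8080/tcp -> 0.0.0.0:8081
sudo ss -tlnp | grep 8081             # confirm something is listening on 8081
curl -m 5 http://localhost:8081/      # Tomcat should answer locally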
I configured a Compute Engine instance with only an internal IP (10.X.X.10). I am able to SSH into it via gcloud with IAP tunneling, and I can access and copy files to storage via Private Google Access; the VPC was set up with no conflicting IP ranges:
gcloud compute ssh --zone "us-central1-c" "vm_name" --tunnel-through-iap --project "projectXXX"
Now I want to open Jupyter notebook without creating an external IP in the VM.
Identity-Aware Proxy (IAP) is working well, and so is Private Google Access. After that I enabled a NAT gateway, which generated an external IP (35.X.X.155).
I configured Jupyter by running jupyter notebook --generate-config and set up a password ("sha").
Now I run Jupyter by typing this in the gcloud SSH session:
python /usr/local/bin/jupyter-notebook --ip=0.0.0.0 --port=8080 --no-browser &
Then I replace http://instance-XXX/?token=abcd
with http://35.X.X.155/?token=abcd
But the external IP is not accessible, not even in the browser, neither over HTTP nor HTTPS. Note that I'm not considering a Network Load Balancer, because it's not necessary.
Ping 35.X.X.155 works perfectly
I also tried jupyter notebook --gateway-url=http://NAT-gateway:8888,
without success.
Look at this as an alternative to a bastion host (a VM with an external IP).
Any ideas on how to solve this issue ?
UPDATE: Looks like I have to find a way to SSH into the NAT Gateway.
What you are trying to do can be accomplished using IAP for TCP forwarding, and there is no need to use NAT at all in this scenario. Here are the steps to follow:
Ensure you have ports 22 and 8080 allowed in the project's firewall:
gcloud compute firewall-rules list
NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED
allow-8080-ingress-from-iap default INGRESS 1000 tcp:8080 False
allow-ssh-ingress-from-iap default INGRESS 1000 tcp:22 False
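If rules like these don't exist yet, a hedged sketch of creating them (the rule names mirror the listing above; 35.235.240.0/20 is the IAP TCP forwarding source range documented by Google):
gcloud compute firewall-rules create allow-ssh-ingress-from-iap \
  --direction=INGRESS --action=ALLOW --rules=tcp:22 --source-ranges=35.235.240.0/20
gcloud compute firewall-rules create allow-8080-ingress-from-iap \
  --direction=INGRESS --action=ALLOW --rules=tcp:8080 --source-ranges=35.235.240.0/20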
On your endpoint's gcloud CLI, log in to GCP and set the project to where the instance is running:
gcloud config set project $GCP_PROJECT_NAME
Check if you already have SSH keys generated in your system:
ls -1 ~/.ssh/*
#=>
/. . ./id_rsa
/. . ./id_rsa.pub
If you don't have any, you can generate them with the command: ssh-keygen -t rsa -f ~/.ssh/id_rsa -C id_rsa
Add the SSH keys to your project's metadata:
gcloud compute project-info add-metadata \
--metadata ssh-keys="$(gcloud compute project-info describe \
--format="value(commonInstanceMetadata.items.filter(key:ssh-keys).firstof(value))")
$(whoami):$(cat ~/.ssh/id_rsa.pub)"
#=>
Updated [https://www.googleapis.com/compute/v1/projects/$GCP_PROJECT_NAME].
Assign the iap.tunnelResourceAccessor role to the user:
gcloud projects add-iam-policy-binding $GCP_PROJECT_NAME \
--member=user:$USER_ID \
--role=roles/iap.tunnelResourceAccessor
Start an IAP tunnel pointing to your instance:port and bind it to your desired localhost port (in this case, 9000):
gcloud compute start-iap-tunnel $INSTANCE_NAME 8080 \
--local-host-port=localhost:9000
Testing if tunnel connection works.
Listening on port [9000].
At this point, you should be able to access your Jupyter Notebook at http://127.0.0.1:9000?token=abcd.
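A quick way to confirm that Jupyter is answering through the tunnel (the expected codes are an assumption: Jupyter typically returns 200, or a 302 redirect to its login page, when no token is supplied):
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:9000/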
Note: start-iap-tunnel is not a one-time command; it must be issued and kept running every time you want to access your Jupyter Notebook implementation.
I am unable to connect to my EC2 instance via its public DNS in a browser, even though the security groups "default" and "launch-wizard-1" have port 80 open for inbound and outbound traffic.
It may be important to note that I have a Docker image running in the instance, one I launched with:
docker run -d -p 80:80 elasticsearch
I'm under the impression this forwards port 80 of the container to port 80 of the EC2 instance, correct?
The problem was that Elasticsearch serves HTTP on port 9200.
So the correct command was:
docker run -d -p 80:9200 elasticsearch
The command was run under root.
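To verify the mapping from outside, a hedged check (the public DNS placeholder is an assumption); with the corrected mapping, Elasticsearch should answer on port 80 with its JSON banner (cluster name and version):
curl http://<ec2-public-dns>/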
I cannot connect to an AWS Redshift cluster, but I am able to connect with exactly the same configuration when I'm using a different Wi-Fi network. Here are some details:
I use a Mac with SQL Workbench/J and the AWS Redshift driver.
The error I'm getting:
[Amazon] (500150) Error setting/closing connection: Operation timed out.
Using Wireshark, I see an outbound TCP request with no answer.
When I use my smartphone as a hotspot (instead of my home Wi-Fi), the same connection works fine.
Here are my security group details:
Inbound: Redshift TCP 5439 0.0.0.0/0
Outbound: All traffic All All 0.0.0.0/0
Also, I tested this on two different AWS accounts; same problem on both.
Any idea would be of great help.
Found an answer here:
http://docs.aws.amazon.com/redshift/latest/mgmt/connecting-firewall-guidance.html
Idle connections are terminated by an intermediate network component (e.g. a firewall). To solve (on a Mac):
sudo sysctl net.inet.tcp.keepintvl=20000
sudo sysctl net.inet.tcp.keepidle=20000
sudo sysctl net.inet.tcp.keepinit=20000
sudo sysctl net.inet.tcp.always_keepalive=1
If this works, add the following to /etc/sysctl.conf to persist:
net.inet.tcp.keepidle=20000
net.inet.tcp.keepintvl=20000
net.inet.tcp.keepinit=20000
net.inet.tcp.always_keepalive=1
And after a restart, to test:
sysctl net.inet.tcp.keepidle
sysctl net.inet.tcp.keepintvl
sysctl net.inet.tcp.keepinit
sysctl net.inet.tcp.always_keepalive
The linked AWS page also describes how to change DSN timeout settings.