I am trying to open Jupyter Lab after installing a Google Deep Learning VM.
This is the code I am running in the SSH terminal in the browser:
export PROJECT_ID="name"
export ZONE="us-west1-b"
export INSTANCE_NAME="tensorflow-1-vm"
gcloud compute ssh --project $PROJECT_ID --zone $ZONE \
$INSTANCE_NAME -- -L 8080:localhost:8080
I always get the same error and cannot access http://localhost:8080/:
bind: Address already in use
channel_setup_fwd_listener_tcpip: cannot listen to port: 8080
Could not request local forwarding.
Could you please tell me what I am doing wrong? Thank you!
Make sure your instance has firewall rules configured to allow HTTP/HTTPS traffic, and that the instance has a public IP.
Check this out: https://cloud.google.com/ai-platform/notebooks/docs/ssh-access
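If those rules are missing, a minimal sketch of creating one with gcloud (the rule name and the default network are assumptions; adjust to your setup):

gcloud compute firewall-rules create allow-http-https \
    --network=default --direction=INGRESS --allow=tcp:80,tcp:443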
I configured a Compute Engine instance with only an internal IP (10.X.X.10). I am able to SSH into it via gcloud with IAP tunneling, and I can access and copy files to storage via Private Google Access; the VPC was set up with no conflicting IP ranges:
gcloud compute ssh --zone "us-central1-c" "vm_name" --tunnel-through-iap --project "projectXXX"
Now I want to open Jupyter notebook without creating an external IP in the VM.
Identity-Aware Proxy (IAP) is working well, and so is Private Google Access. After that I enabled a NAT gateway, which generated an external IP (35.X.X.155).
I configured Jupyter by running jupyter notebook --generate-config and set up a password (stored as a SHA hash).
Now I run Jupyter by typing this in the gcloud SSH session:
python /usr/local/bin/jupyter-notebook --ip=0.0.0.0 --port=8080 --no-browser &
Then I replace http://instance-XXX/?token=abcd
with http://35.X.X.155/?token=abcd.
But the external IP is not accessible, not even in the browser, neither over HTTP nor HTTPS. Note that I'm not considering Network Load Balancing, because it's not necessary.
Ping 35.X.X.155 works perfectly.
I also tried jupyter notebook --gateway-url=http://NAT-gateway:8888, without success.
Look at this as an alternative to a bastion host (a VM with an external IP).
Any ideas on how to solve this issue?
UPDATE: Looks like I have to find a way to SSH into the NAT Gateway.
What you are trying to do can be accomplished using IAP for TCP forwarding, and there is no need to use NAT at all in this scenario. Here are the steps to follow:
Ensure you have ports 22 and 8080 allowed in the project's firewall:
gcloud compute firewall-rules list
NAME                         NETWORK  DIRECTION  PRIORITY  ALLOW     DENY  DISABLED
allow-8080-ingress-from-iap  default  INGRESS    1000      tcp:8080        False
allow-ssh-ingress-from-iap   default  INGRESS    1000      tcp:22          False
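If either rule is missing, a minimal sketch of creating it (the rule names mirror the listing above; 35.235.240.0/20 is Google's documented source range for IAP TCP forwarding):

gcloud compute firewall-rules create allow-ssh-ingress-from-iap \
    --direction=INGRESS --allow=tcp:22 \
    --source-ranges=35.235.240.0/20

gcloud compute firewall-rules create allow-8080-ingress-from-iap \
    --direction=INGRESS --allow=tcp:8080 \
    --source-ranges=35.235.240.0/20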
On your endpoint's gcloud CLI, log in to GCP and set the project to where the instance is running:
gcloud config set project $GCP_PROJECT_NAME
Check if you already have SSH keys generated in your system:
ls -1 ~/.ssh/*
#=>
/. . ./id_rsa
/. . ./id_rsa.pub
If you don't have any, you can generate them with the command: ssh-keygen -t rsa -f ~/.ssh/id_rsa -C id_rsa
Add the SSH keys to your project's metadata:
gcloud compute project-info add-metadata \
--metadata ssh-keys="$(gcloud compute project-info describe \
--format="value(commonInstanceMetadata.items.filter(key:ssh-keys).firstof(value))")
$(whoami):$(cat ~/.ssh/id_rsa.pub)"
#=>
Updated [https://www.googleapis.com/compute/v1/projects/$GCP_PROJECT_NAME].
Assign the iap.tunnelResourceAccessor role to the user:
gcloud projects add-iam-policy-binding $GCP_PROJECT_NAME \
--member=user:$USER_ID \
--role=roles/iap.tunnelResourceAccessor
Start an IAP tunnel pointing to your instance:port and bind it to your desired localhost port (in this case, 9000):
gcloud compute start-iap-tunnel $INSTANCE_NAME 8080 \
--local-host-port=localhost:9000
Testing if tunnel connection works.
Listening on port [9000].
At this point, you should be able to access your Jupyter Notebook at http://127.0.0.1:9000?token=abcd.
Note: The start-iap-tunnel command is not a one-time command; it must be kept running for as long as you want to access your Jupyter Notebook.
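If you'd rather keep the tunnel running in the background, a minimal sketch (nohup and the log-file name are just one way to do it; any job-control approach works):

nohup gcloud compute start-iap-tunnel $INSTANCE_NAME 8080 \
    --local-host-port=localhost:9000 > iap-tunnel.log 2>&1 &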
"Connection Failed
You cannot connect to the VM instance because of an unexpected error. Wait a few moments and then try again."
When I try to copy a file from one virtual machine to another using scp, the SSH client loses its connection. The command that I run is the following:
gcloud compute scp --recurse file_name account@instance_name:~/folder --zone zone_name --project project_name
What can be the reason of it?
Make sure that you have opened port 22, which is the port that allows communication over SSH. If you are not sure, you can create a rule by going to VPC network -> Firewall rules and allowing port 22.
Here is an article that can help with allowing SSH connections.
This other article can help you troubleshoot SSH.
You can also run netstat -tuplen to check which ports you have open; make sure port 22 is listening.
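For a quick check on the VM itself (the grep pattern just narrows the output to port 22):

sudo netstat -tuplen | grep ':22'

If sshd is healthy, you should see a line in the LISTEN state for port 22.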
GitLab version: 13.8.1-ee (installed with Helm)
GKE version: 1.16.15-gke.6000
I installed GitLab and GitLab Runner on GKE, in a private cluster.
I also have the nginx-ingress-controller for firewall rules, following the docs:
https://gitlab.com/gitlab-org/charts/gitlab/blob/70f31743e1ff37bb00298cd6d0b69a0e8e035c33/charts/nginx/index.md
nginx-ingress:
  controller:
    scope:
      enabled: true
      namespace: default
    service:
      loadBalancerSourceRanges:
        ["IP", "ADDRESSES"]
With this setting, the gitlab-runner pod gets this error:
couldn't execute POST against https://gitlab.my-domain.com/api/v4/runners: Post https://gitlab.my-domain.com/api/v4/runners: dial tcp [my-domain's-IP]: i/o timeout
The issue is the same as this one:
Gitlab Runner can't access Gitlab self-hosted instance
But I have already set up Cloud NAT and Cloud Router, and also added the Cloud NAT IP address to loadBalancerSourceRanges in GitLab's values.yaml.
To check whether Cloud NAT worked, I exec'd into the pod and checked the IP:
$ kubectl exec -it gitlab-gitlab-runner-xxxxxxxx /bin/sh
wget -qO- httpbin.org/ip
and it showed the IP address of Cloud NAT.
So the request to
https://gitlab.my-domain.com/api/v4/runners
must be using the Cloud NAT IP as its source IP.
What can I do to solve this?
It worked when I added the Kubernetes pods' internal IP range to loadBalancerSourceRanges. Both stable/nginx and https://kubernetes.github.io/ingress-nginx worked.
gitlab-runner calls https://my-domain/api/v4/runners. I thought the request would go through the public network, so I added only the Cloud NAT IP, but apparently it does not.
Still, it's a little bit weird.
The first time, I set 0.0.0.0/0 in loadBalancerSourceRanges and added only the Cloud NAT IP in the firewall, and https://my-domain/api/v4/runners worked.
So loadBalancerSourceRanges may be used in two places: one is the firewall rule we can see on GCP; the other is hidden.
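For reference, a minimal sketch of the values.yaml change that ended up working (both entries are placeholders; use your actual Cloud NAT IP and your cluster's pod CIDR):

nginx-ingress:
  controller:
    service:
      loadBalancerSourceRanges:
        - "CLOUD_NAT_IP/32"   # the NAT IP I had already added
        - "POD_CIDR"          # the pod-internal range that fixed it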
I'm new to DevOps. I want to install Jenkins on AWS EC2 with Docker.
I have installed Jenkins with this command:
docker run -p 8080:8080 -p 50000:50000 -d -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts
On the AWS security group, I have enabled ports 8080 and 50000. I also enabled port 22 for SSH, 27017 for Mongo, and 3000 for Node.
I can see the Jenkins container when I run docker ps. However, when I open https://xxxx.us-east-2.compute.amazonaws.com:8080, the Jenkins setup page does not appear, and the browser displays the error ERR_SSL_PROTOCOL_ERROR.
Does someone know what's wrong here? Should I install Nginx as well? I haven't installed it yet.
The error is due to the fact that you are using https:
https://xxxx.us-east-2.compute.amazonaws.com:8080
From your description it does not seem that you've set up any type of SSL connection to your instance, so you should connect using http only:
http://xxxx.us-east-2.compute.amazonaws.com:8080
But this is not good practice, as you communicate using plain text. A common solution is to access your Jenkins web UI through an SSH tunnel. This way the connection is encrypted and you don't have to expose any Jenkins port in your security groups.
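A minimal sketch of such a tunnel (the key path, user, and hostname are placeholders for your instance's values):

ssh -i ~/.ssh/my-key.pem -N -L 8080:localhost:8080 ec2-user@xxxx.us-east-2.compute.amazonaws.com

While the tunnel is open, Jenkins is reachable at http://localhost:8080 on your machine, and port 8080 can be closed in the security group.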
What's the best way to access Memorystore from local machines during development? Is there something like Cloud SQL Proxy that I can use to set up a tunnel?
You can spin up a Compute Engine instance and use port forwarding to connect to your Redis machine.
For example, if your Redis machine has the internal IP address 10.0.0.3, you'd do:
gcloud compute instances create redis-forwarder --machine-type=f1-micro
gcloud compute ssh redis-forwarder -- -N -L 6379:10.0.0.3:6379
As long as you keep the SSH tunnel open, you can connect to localhost:6379.
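A quick way to verify the tunnel from your local machine (assuming redis-cli is installed):

redis-cli -h localhost -p 6379 ping
#=>
PONG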
Update: this is now officially documented:
https://cloud.google.com/memorystore/docs/redis/connecting-redis-instance#connecting_from_a_local_machine_with_port_forwarding
I created a VM on Google Cloud:
gcloud compute instances create redis-forwarder --machine-type=f1-micro
then SSHed into it and installed HAProxy:
sudo su
apt-get install haproxy
then updated the config file
/etc/haproxy/haproxy.cfg
....existing file contents
frontend redis_frontend
    bind *:6379
    mode tcp
    option tcplog
    timeout client 1m
    default_backend redis_backend

backend redis_backend
    mode tcp
    option tcplog
    option log-health-checks
    option redispatch
    log global
    balance roundrobin
    timeout connect 10s
    timeout server 1m
    server redis_server [MEMORYSTORE IP]:6379 check
and restarted HAProxy:
/etc/init.d/haproxy restart
I was then able to connect to Memorystore from my local machine for development.
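As a quick check from the local machine (the VM's external IP is a placeholder):

redis-cli -h [VM EXTERNAL IP] -p 6379 ping
#=>
PONG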
You can spin up a Compute Engine instance and set up HAProxy using the haproxy Docker image; HAProxy will then forward your TCP requests to Memorystore.
For example, I want to access the Memorystore instance with IP 10.0.0.12, so I added the following HAProxy config:
frontend redis_frontend
    bind *:6379
    mode tcp
    option tcplog
    timeout client 1m
    default_backend redis_backend

backend redis_backend
    mode tcp
    option tcplog
    option log-health-checks
    option redispatch
    log global
    balance roundrobin
    timeout connect 10s
    timeout server 1m
    server redis_server 10.0.0.12:6379 check
So now you can access Memorystore from your local machine using the following command:
redis-cli -h <your-haproxy-public-ipaddress> -p 6379
Note: replace <your-haproxy-public-ipaddress> with your actual HAProxy IP address.
Hope that helps you solve your problem.
This post builds on earlier ones and should help you bypass firewall issues.
Create a virtual machine in the same region (and zone, to be safe) as your Memorystore instance. On this machine:
Add a network tag with which we will create a firewall rule to allow traffic on port 6379
Add an external IP with which you will access this VM
SSH into this machine and install HAProxy:
sudo su
apt-get install haproxy
Add the following below the existing config in the /etc/haproxy/haproxy.cfg file:
frontend redis_frontend
    bind *:6379
    mode tcp
    option tcplog
    timeout client 1m
    default_backend redis_backend

backend redis_backend
    mode tcp
    option tcplog
    option log-health-checks
    option redispatch
    log global
    balance roundrobin
    timeout connect 10s
    timeout server 1m
    server redis_server [MEMORYSTORE IP]:6379 check
Restart HAProxy:
/etc/init.d/haproxy restart
Now create a firewall rule that allows traffic on port 6379 on the VM (a sketch follows this list). Ensure:
It has the same target tag as the network tag we created on the VM.
It allows traffic on port 6379 for the TCP protocol.
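A minimal sketch of that rule with gcloud (the rule name and the tag redis-forwarder are placeholders; match the tag to your VM's network tag):

gcloud compute firewall-rules create allow-redis-6379 \
    --allow tcp:6379 --target-tags redis-forwarder \
    --source-ranges 0.0.0.0/0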
Now you should be able to connect remotely like so:
redis-cli -h [VM IP] -p 6379
Memorystore does not allow connections from local machines, and other routes such as Compute Engine or GAE are expensive, especially if your project is small or still in development. I suggest you create a Cloud Function to run your Memorystore commands; it's a serverless service, which means a lower fee to execute. I wrote a small tool for this, and the result is similar to running them on a local machine. You can check whether it helps you.
As @Christiaan answered above, it almost worked for me, but I needed to check a few other things to make it work well.
Firstly, in my case, Redis is running in a specific network other than the default network, so I had to create the jump box inside the same network (let's call it my-network).
Secondly, I needed to apply a firewall rule to open port 22 in that network.
Putting all the commands I needed together, it looks like this:
gcloud compute firewall-rules create default-allow-ssh --project=my-project --network my-network --allow tcp:22 --source-ranges 0.0.0.0/0
gcloud compute instances create jump-box --machine-type=f1-micro --project my-project --zone europe-west1-b --network my-network
gcloud compute ssh jump-box --project my-project --zone europe-west1-b -- -N -L 6379:10.177.174.179:6379
Then I have access to Redis locally on port 6379.