I deployed an application on Google Cloud (GKE). In order to access its UI, I set up port forwarding (port 9090). When I use the Cloud Shell web preview, I can access the UI. However, when I try to open localhost:9090 in my local browser, I cannot access it. Do you know why I cannot access it from my browser? Is that normal?
Thank you!
Answer provided in the comments by a community member.
Do you know why I cannot access from my browser, is it normal?
Cloud Shell is where you're running kubectl port-forward. Port forwarding only applies to the host on which the command is run unless you have a chain of port-forwarding commands. If you want to access the UI from your local host, then you will need to run the kubectl port-forward on your local host too.
So how can I run the kubectl port-forward command on my local host for the application that I deployed to the cloud? Should I install the Google Cloud CLI on my local machine?
I assumed (!) that you're running kubectl port-forward on Cloud Shell. If that's correct, then you need to install kubectl on your local machine to run it there. Because of the way that GKE authenticates, it may also be prudent to install gcloud on your local machine. You can then use gcloud container clusters get-credentials ... to create a local Kubernetes (GKE) config file on your local machine that is then used by kubectl commands.
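A minimal sketch of that flow, assuming the cluster name, zone, project, and service name below are replaced with your own:
gcloud container clusters get-credentials my-cluster --zone us-central1-a --project my-project
kubectl port-forward service/my-ui-service 9090:9090
With the second command running on your local machine, localhost:9090 in your local browser should reach the UI.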
Related
I am trying to test an outbound connection from within an Amazon Linux 2 container that is running in Kubernetes. I have a service set up, and I am able to telnet to that service through a VPN. But I want to test a connection coming out from that container. Is there a way this can be done?
I have tried ping, etc., but the commands all say "command not found".
Is there any command I can run that can test an outbound connection?
Please provide more context. What exact image are you running? When debugging connectivity of Kubernetes pods and services, you can exec into the pod with
kubectl exec -it <pod_name> -n <namespace> -- <bash|ash|sh>
Once you gain access to the pod and have a shell inside, you can update the package index with the image's package manager (apt, yum, depending on the distro).
After upgrading, you can install curl and try to curl an external site.
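For an Amazon Linux 2 based image, a rough sketch might be (the pod name and namespace are placeholders, and it assumes yum is available in the image):
kubectl exec -it <pod_name> -n <namespace> -- sh
and then, inside the pod:
yum install -y curl
curl -v https://example.com
If the curl succeeds, outbound connectivity from the container works.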
Is it possible to open a browser inside GCP Cloud Shell, or alternatively is it possible to have desktop access to Cloud Shell through the browser? I am trying to run Istio inside Cloud Shell, which I have done successfully. However, to view the sample application I need to open a browser inside Cloud Shell itself. I am not sure how to do it from the browser on my laptop and what URI I should use.
Yes.
It depends on your "client" for Cloud Shell.
If you're using the browser, there's a menu option that permits publishing a Cloud Shell port (this used to be limited to a small set of ports, e.g. 8080, but I think it's now broader).
If you're using gcloud, you can use the following command to port-forward the Cloud Shell instance's CLDS_PORT to your host's HOST_PORT:
gcloud cloud-shell ssh --ssh-flag="-L [HOST_PORT]:localhost:[CLDS_PORT]"
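For example, assuming the app inside Cloud Shell listens on port 9090 and you want it on port 9090 locally (both port numbers are illustrative):
gcloud cloud-shell ssh --ssh-flag="-L 9090:localhost:9090"
While that session stays open, localhost:9090 on your laptop reaches the Cloud Shell port.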
I am trying to run my Flask app on a GCP instance. However, the app is served on that instance's localhost. I want to access that instance's localhost.
I saw a couple of videos and articles, but almost all were about deploying an app on GCP. Is there no simple way to just forward whatever is served on the VM instance's localhost to my PC browser, so that if I submit some information in the app, it goes to the VM instance and the result comes back to my local machine's browser via the VM instance's localhost?
You can use Local Port Forwarding when you ssh into the target instance hosted in GCP.
Local port forwarding lets you connect from your local machine to another server. To use local port forwarding, you need to know your destination server, source port and target port.
You should already know your destination server. The target port must be the one on which your flask app is listening. The source port can be any port that is not in use on your local computer.
Assuming the Flask app is listening on port 8080 on the GCP instance and you want to make the app available on your local computer on port 9876, SSH into your GCP instance using the following command:
ssh -L 9876:127.0.0.1:8080 <username>@<gcpInstanceIP>
The same result can be achieved using gcloud compute ssh if you don't have the SSH key on the target instance.
The -- argument must be specified between gcloud specific args on the left and SSH_ARGS on the right:
gcloud compute ssh <gcp-instance-name> --zone=<instance-zone> -- -L <source-port>:localhost:<target-port>
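A filled-in sketch, where the instance name, zone, and ports are placeholders:
gcloud compute ssh my-instance --zone=us-central1-a -- -L 9876:localhost:8080
Then browse to localhost:9876 on your local machine.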
You can also use the Google Cloud Shell:
Activate Cloud Shell, located at the top-right corner of the GCP web interface
SSH into your instance with local port forwarding:
gcloud compute ssh <gcp-instance-name> --zone=<instance-zone> -- -L 8080:localhost:<target-port>
Click Web Preview in the Google Cloud Shell, then Preview on port 8080.
GCP has finally released managed Jupyter notebooks. I would like to be able to interact with the notebook locally by connecting to it, i.e. use PyCharm to connect to the externally configured Jupyter notebook server by passing its URL and token parameter.
Question also applies to AWS Sagemaker notebooks.
AWS does not natively support SSH-ing into SageMaker notebook instances, but nothing really prevents you from setting up SSH yourself.
The only problem is that these instances do not get a public IP address, which means you have to either create a reverse proxy (with ngrok, for example) or connect to it via a bastion box.
Steps to make the ngrok solution work:
download ngrok with curl https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip > ngrok.zip
unzip ngrok.zip
create a free ngrok account to get permissions for TCP tunnels
run ./ngrok authtoken <your_token> with the token from your account
start it with ./ngrok tcp 22 > ngrok.log & (the & puts it in the background)
the log file will contain the URL, so you know where to connect
create the ~/.ssh/authorized_keys file (on SageMaker) and paste in your public key (likely ~/.ssh/id_rsa.pub from your computer)
ssh by calling ssh -p <port_from_ngrok_logfile> ec2-user@0.tcp.ngrok.com (or whatever host they assign to you; it's going to be in ngrok.log)
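Condensed, the steps above might look like this (the token is a placeholder, and the grep pattern is an assumption about ngrok's log format):
curl https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip > ngrok.zip
unzip ngrok.zip
./ngrok authtoken <your_token>
./ngrok tcp 22 > ngrok.log &
grep -o 'tcp://[^ ]*' ngrok.log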
If you want to automate it, I suggest using lifecycle configuration scripts.
Another good trick is wrapping the downloading, unzipping, authenticating, and starting of ngrok into a small script in /usr/bin so you can just call it from the SageMaker console if it dies.
It's a little bit too long to explain completely how to automate it with lifecycle scripts, but I've written a detailed guide on https://biasandvariance.com/sagemaker-ssh-setup/.
On AWS, you can use AWS Glue to create a development endpoint, and then create the SageMaker notebook from there. A development endpoint lets you connect to your Python or Scala Spark REPL via SSH, and it also allows you to tunnel the connection and access it from any other tool, including PyCharm.
For PyCharm professional we have even tighter integration, allowing you to SFTP files and debug remotely.
And if you need to install any dependencies on the notebook, apart from doing it directly on the notebook, you can always choose New > Terminal and you will have a connection to that machine directly from your Jupyter environment, where you can install anything you want.
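If you go the Glue route, the endpoint can also be created from the AWS CLI; a minimal sketch, where the endpoint name, role ARN, and key file are placeholders:
aws glue create-dev-endpoint --endpoint-name my-dev-endpoint --role-arn arn:aws:iam::123456789012:role/my-glue-role --public-key file://my-public-key.pub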
There is a way to SSH into a SageMaker notebook instance without using a third-party reverse proxy like ngrok, setting up an EC2 bastion, or using AWS Systems Manager. Here is how you can do it.
Prerequisites
Use your own VPC and not the VPC managed by AWS/Sagemaker for the notebook instance
Configure an ingress rule in the security group of your notebook instance to allow SSH traffic (port 22 over TCP)
How to do it
Create a lifecycle script configuration that is executed when the instance starts
Add the following snippet inside the lifecycle script:
INSTANCE_IP=$(/sbin/ifconfig eth2 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}')
echo "SSH into the instance using : ssh ec2-user#$INSTANCE_IP" > ~ec2-user/SageMaker/ssh-instructions.txt
Add your public SSH key inside /home/ec2-user/.ssh/authorized_keys, either manually with the terminal of the JupyterLab UI, or inside the lifecycle script above
When your users open the Jupyter interface, they will find the ssh-instructions.txt file, which gives the host and command to use: ssh ec2-user@<INSTANCE_IP>
If you want to SSH from a local environment, you'll probably need to connect to your VPN that routes your traffic inside your VPC.
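Putting those pieces together, an on-start lifecycle script could look roughly like this (the public key value is a placeholder for your own):
#!/bin/bash
INSTANCE_IP=$(/sbin/ifconfig eth2 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}')
echo "SSH into the instance using : ssh ec2-user@$INSTANCE_IP" > ~ec2-user/SageMaker/ssh-instructions.txt
echo "ssh-rsa AAAA... you@your-laptop" >> /home/ec2-user/.ssh/authorized_keys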
GCP's AI Platform Notebooks automatically creates a persistent URL which you can use to access your notebook. Is that what you were looking for?
Try using CreatePresignedNotebookInstanceUrl to access your notebook instance using a URL.
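That API is also available through the AWS CLI; a minimal sketch, where the instance name is a placeholder:
aws sagemaker create-presigned-notebook-instance-url --notebook-instance-name my-notebook
The response contains an AuthorizedUrl you can open in a browser.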
I have a containerized app running on a VM. It consists of two docker containers. The first contains the WebSphere Liberty server and the web app. The second contains PostgreSQL and the app's DB.
On my local VM, I just use docker run to start the two containers and then I use docker attach to attach to the web server container so I can edit the server.xml file to specify the public host IP for the DB and then start the web server in the container. The app runs fine.
Now I'm trying to deploy the app on Google Cloud Platform.
I set up my gcloud configuration (project, compute/zone).
I created a cluster.
I created a JSON pod config file which specifies both containers.
I created the pod.
I opened the firewall for the port specified in the pod config file.
At this point:
I look at the pod (gcloud preview container kubectl get pods), and it shows both containers are running.
I SSH to the cluster (gcloud compute ssh xxx-mycluster-node-1) and issue sudo docker ps and it shows the database container running, but not the web server container. With sudo docker ps -l I can see the web server container that is not running, but it keeps trying to start and exiting every 10 seconds or so.
So now I need to update the server.xml and start the Liberty server, but I have no idea how to do that in this realm. Can I attach to the web server container like I do in my local VM? Any help would be greatly appreciated. Thanks.
Yes, you can attach to a container in a pod.
Using Kubernetes 1.0, issue the following commands.
First:
kubectl get po to get the POD name
kubectl describe po POD-NAME to find the container name
Then:
kubectl exec -it POD-NAME -c CONTAINER-NAME bash (assuming the container has bash)
It's similar to docker exec -it CONTAINER-NAME WHATEVER_LOCAL_COMMAND
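For the scenario in the question, a rough end-to-end sketch might look like this (the pod name and container name are placeholders, and the server.xml path is an assumption based on the default layout of the official WebSphere Liberty image):
kubectl get po
kubectl describe po my-pod
kubectl exec -it my-pod -c liberty bash
vi /opt/ibm/wlp/usr/servers/defaultServer/server.xml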
On the machine itself, you can see crash-looping containers via:
docker ps -a
and then, using a container ID from that output:
docker logs CONTAINER-ID
You can also use kubectl get pods -o yaml to get details like the restart count, which will confirm that the container is crash-looping.
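For example, to check the restart count of a single pod (the pod name is a placeholder):
kubectl get pods my-pod -o yaml | grep restartCount
A steadily increasing count confirms the crash loop.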