Dask Hub/JupyterHub - Cannot start Python kernel

I deployed Dask Hub (Dask Gateway + JupyterHub) on Google Kubernetes Engine using Helm. I am having trouble fetching the Python kernel when I start up my Jupyter notebook instance. The error occurs when I am on my company's VPN, but not when I am off it. My guess is that this is firewall related, but I don't know enough about the internals of the JupyterHub kernel machinery to understand why it's being blocked. Could someone please shed some light on this?
I can't see anything wrong when inspecting the logs of the Jupyter pod:
From the browser's developer console, here is the error:
Update:
I added the following to the JupyterHub config:
jupyterhub:
  hub:
    extraConfig: |
      c.JupyterHub.hub_connect_ip = '0.0.0.0'
      c.JupyterHub.hub_bind_url = 'http://127.0.0.1:8000'
  singleuser:
    extraEnv:
      DASK_GATEWAY__CLUSTER__OPTIONS__IMAGE: '{JUPYTER_IMAGE_SPEC}'

This definitely has something to do with the routing of your VPN. I don't know which spawner you are using, but here are some possible solutions:
Check whether you have the correct settings for the following configuration options. hub_connect_ip is important for the internal workings of JupyterHub; bind_url is important for external traffic.
c.JupyterHub.hub_connect_ip = '0.0.0.0'
c.JupyterHub.bind_url = 'http://127.0.0.1:8000'
Switch protocols for your VPN, if that is possible at all, e.g. from UDP to TCP.
Enforce an SSL connection for JupyterHub; your company's VPN could be blocking non-secure connections. Read the JupyterHub documentation on enabling SSL. Alternatively, you could go for GKE managed certificates; more information can be found here.
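As a minimal sketch of the SSL option, assuming the Dask Hub chart forwards these values to the Zero to JupyterHub subchart (so its proxy.https settings apply); the hostname, contact e-mail, and release/chart names below are placeholders, not taken from the original setup:

# Write a values overlay enabling Let's Encrypt TLS on the proxy.
cat > https-values.yaml <<'EOF'
jupyterhub:
  proxy:
    https:
      enabled: true
      hosts:
        - hub.example.com              # placeholder; must resolve to the proxy's public IP
      letsencrypt:
        contactEmail: you@example.com  # placeholder
EOF

# Apply it on top of the existing configuration (release/chart names assumed).
helm upgrade dhub dask/daskhub --reuse-values -f https-values.yaml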

Related

Unable to open Public IPv4 DNS in AWS EC2 - Linux instance

I have a Spring Boot project that I want to host on an AWS EC2 instance. I was able to create its image using GitHub, Jenkins and Docker, and I was able to successfully pull and run this image in the Linux console of my AWS EC2 instance.
According to the tutorial I was following, I should now have been able to open the project using the public IPv4 DNS, but the response I got was that it refused to connect.
I know that this usually has to do with inbound rules, so I added a rule to allow all traffic, but it didn't help.
For anyone who wants to know:
Git-hub repository: https://github.com/SalahuddinShayan/telecom
Docker-Hub repository: https://hub.docker.com/repository/docker/salahuddinshayan/telecom
Command I used to run the image in AWS:
docker run -p 8081:8081 --name final-app --link docker-mysql:mysql salahuddinshayan/telecom
Security Groups:
Networking Details:
Here is the Error:
I am completely stumped by it. Does anyone have an idea on what to do to fix this?
Please check if your client is calling the right protocol, e.g. http vs https.
You are transmitting on port 8081, and http://3.110.29.193:8081/ works fine from the EC2 side. A 404 status is returned, which is in the client-error class, not the server-error class.
It means that no firewall is blocking the traffic, and a process (your app) was found listening on the IP:port you expect. The problem is that the process it encountered (your app) responds only with a Whitelabel Error Page, the generic Spring Boot error page displayed when no custom error page is present. So the issue is with the Spring app itself, not with EC2 or the connection. In other words: the traffic can reach your Spring app, but your Spring app has nothing to say in response.
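You can see the distinction yourself with curl against the address from the post:

# If a firewall were blocking the port, curl would fail at the TCP level:
#   curl: (7) Failed to connect to 3.110.29.193 port 8081: Connection refused
# Here the server answers, so inspect the status line and body instead:
curl -i http://3.110.29.193:8081/
# -> HTTP/1.1 404 plus the Whitelabel Error Page body: the app is reachable
#    but has no handler mapped for "/".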
As a side note: after deploying your app, I would advise refining the inbound traffic rules to allow only the traffic you actually want. There is no need to allow all traffic on all ports.
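For example, a sketch with the AWS CLI; the security-group ID is a placeholder, and the allow-all rule is assumed to be the one added while debugging:

# Remove the blanket allow-all rule (placeholder group ID).
aws ec2 revoke-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --ip-permissions 'IpProtocol=-1,IpRanges=[{CidrIp=0.0.0.0/0}]'

# Allow only the application port.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 8081 --cidr 0.0.0.0/0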

Google Cloud SSH won't connect

It seems I've run through everything and it will not connect.
I've verified my firewall is not blocking anything, I reset my SSH keys, I've set my user roles, and I've tried resolving it through Cloud Shell.
Code: 4003
Reason: failed to connect to backend
You may be able to connect without using the Cloud Identity-Aware Proxy.
Anyone have any ideas? I just need this to work.
VM instance: OpenLiteSpeed WordPress
Zone: us-east4-b
Machine type: n1-standard-1
CPU platform: Intel Broadwell
It seems you have already tried to troubleshoot the issue. I had a similar issue, so I just want to know the following:
What is your role in the project (owner/editor)?
Does the VM instance have an external IP or not?
Also note: if the firewall were blocking the connection, it would give a timeout error, and this is not a timeout error.
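Since the error mentions the Cloud Identity-Aware Proxy, it is also worth confirming that your VPC allows ingress from IAP's TCP-forwarding source range on port 22. A sketch with gcloud; the rule name and network are placeholders:

# Allow IAP's documented TCP-forwarding range to reach SSH on your VMs.
gcloud compute firewall-rules create allow-iap-ssh \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:22 \
    --source-ranges=35.235.240.0/20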

Unable to access Flask application deployed in GCP Compute Engine

I deployed a Flask application on a GCP Compute Engine instance. It is exposed on port 5000. When I curl from the VM itself, curl "localhost:5000/health", I get the response "service up". But when I try accessing it through the public IP, I cannot reach it. I have created a network firewall rule allowing both HTTP and HTTPS traffic, for all ports and for all IPs (0.0.0.0/0).
Please let me know if I am missing anything here.
Posting this answer based on the solution that was provided by #Rakesh.
The issue was resolved by changing localhost in the Flask code to 0.0.0.0, so the app listens on all network interfaces instead of only the loopback interface and becomes reachable via the VM's external IP.
The final configuration looks as follows:
app.run(host='0.0.0.0', debug=True, port=5000)
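To verify from outside the VM, a quick check; EXTERNAL_IP is a placeholder for the instance's external address:

# Run from your local machine, not the VM; should now return "service up".
curl http://EXTERNAL_IP:5000/health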

istio default installation - traffic blocked?

I'm quite new to Istio.
I just installed Istio on a k8s cluster in GCP. I have two services in my private cluster; one of them needs to talk to a Redis Memorystore (over an internal private IP, 10.x.x.x).
I'm seeing errors trying to connect to Redis. What am I missing in my Istio configuration?
Update: I have found that the Redis error is misleading. The real issue seems to be something else; see one of my comments below. I don't understand what that error means.
Some additional background: this is for a Tyk installation. The issue seems to be communication between the Tyk Dashboard and Tyk Gateway pods. I'm seeing the SSL error (see comments below) when trying to connect from the Gateway to the Dashboard (Dashboard to Gateway is fine). The error goes away if I rebuild everything without Istio. I must be doing something really silly. :( Both pods are in the same cluster and the same namespace.
I managed to fix the issue. Redis wasn't the problem; communication from the Tyk Gateway to the Tyk Dashboard was failing. The gateway talks to the dashboard to register its presence. The connection logs showed what looked like a TLS-origination issue with the Istio Envoy proxy while it routed the traffic. I configured a DestinationRule that explicitly turned off mTLS for the dashboard, and the problem went away.
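A minimal sketch of such a DestinationRule; the name, namespace and host below are placeholders for the dashboard Service, not the exact values from this setup:

# Disable (m)TLS origination towards the dashboard service.
kubectl apply -f - <<'EOF'
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: tyk-dashboard-disable-mtls        # placeholder name
  namespace: tyk                          # placeholder namespace
spec:
  host: dashboard.tyk.svc.cluster.local   # placeholder; point at your dashboard Service
  trafficPolicy:
    tls:
      mode: DISABLE                       # do not originate (m)TLS to this host
EOF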

"This connection is not secure" issue using Datalab on a Dataproc cluster

I successfully installed Datalab on a Dataproc cluster following the instructions in this tutorial, but when I try to use Datalab in Google Chrome, it shows "This connection is not secure".
When I change the notebook URL from http://cluster-name-m:8080 to https://cluster-name-m:8080, the page doesn't load.
Can someone please help me to solve this issue :'(
If you follow the Cluster web interfaces documentation, it has you:
Create an SSH tunnel
Set up a SOCKS proxy to use that tunnel
Not open random ports that have no security
I'd recommend you follow those directions, as they will provide access to the web interfaces while encrypting all traffic inside the SSH tunnel. The browser will still indicate that the connection does not use SSL, but the mechanism moving data from your browser to the cluster (SSH) encrypts everything in the tunnel. The browser simply does not know about this, so it shows a warning.
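As a sketch of that documented flow; the cluster name and zone are placeholders, and the Chrome path assumes Linux:

# 1) Open an SSH tunnel to the cluster's master node with a SOCKS proxy
#    listening on local port 1080.
gcloud compute ssh cluster-name-m --zone=us-central1-a -- -D 1080 -N

# 2) In a second terminal, start a browser that routes its traffic through
#    the tunnel, then browse to http://cluster-name-m:8080 as before.
/usr/bin/google-chrome \
    --proxy-server="socks5://localhost:1080" \
    --user-data-dir=/tmp/cluster-name-m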