Accessing localhost of GCP instance from local machine - google-cloud-platform

I am trying to run my Flask app on a GCP instance. However, the app is served on that instance's localhost, and I want to access it from my own machine.
I saw a couple of videos and articles, but almost all were about deploying an app on GCP. Is there no simple way to just forward whatever is published on the VM instance's localhost to my PC's browser, so that if I submit some information in the app, it goes to the VM instance and the result comes back to my local machine's browser?

You can use Local Port Forwarding when you ssh into the target instance hosted in GCP.
Local port forwarding lets you connect from your local machine to another server. To use it, you need to know your destination server, source port and target port.
You should already know your destination server. The target port must be the one on which your Flask app is listening. The source port can be any port that is not in use on your local computer.
Assuming the Flask app is listening on port 8080 on the GCP instance and you want to make it available on port 9876 on your local computer, ssh into your GCP instance using the following command:
ssh -L 9876:127.0.0.1:8080 <username>@<gcpInstanceIP>
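To see why the tunnel is needed, here is a minimal stand-in for the Flask app using only the Python standard library. It binds to the loopback interface, which is exactly what makes the app unreachable from outside the VM (the answer assumes port 8080; this sketch lets the OS pick a free port so it runs anywhere):

```python
# Minimal stand-in for the Flask app, bound to the loopback interface only.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello from the VM\n")

    def log_message(self, *args):  # keep the demo quiet
        pass

# Binding to 127.0.0.1 means only processes on this machine can connect --
# which is exactly the situation the SSH tunnel works around.
server = HTTPServer(("127.0.0.1", 0), Handler)
port = server.server_port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Reachable from the VM itself; from another machine it would not be.
body = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()
server.shutdown()
```

With the `ssh -L` tunnel in place, requests to `localhost:9876` on your laptop are carried over SSH and delivered to this loopback-only listener on the VM.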
The same result can be achieved using gcloud compute ssh if you don't have an SSH key for the target instance.
The -- argument must be specified between gcloud-specific args on the left and SSH_ARGS on the right:
gcloud compute ssh <gcp-instance-name> --zone=<instance-zone> -- -L <source-port>:localhost:<target-port>
You can also use the Google Cloud Shell:
Activate Cloud Shell located at the top-right corner in the GCP Web Interface
ssh into your instance with Local Port Forwarding
gcloud compute ssh <gcp-instance-name> --zone=<instance-zone> -- -L 8080:localhost:<target-port>
Click Web Preview in the Google Cloud Shell, then Preview on port 8080.

Related

Configuring local laptop as puppet server and aws ec2 instance as puppet agent

I am trying to configure a Puppet server and agent, making my local laptop with Ubuntu 18.04 the Puppet server and an AWS EC2 instance the Puppet agent. When trying to do so, I am facing issues with adding hostnames to the /etc/hosts file, whether to use the public or private IP address, and how to do the final configuration to make this work.
I have used the public IP and public DNS of both systems in the /etc/hosts file, but when running puppet agent --test from the agent I get the errors temporary failure in name resolution and connecting to https://puppet:8140 failed. I am using this for a project and my setup needs to remain like this.
The connection is initiated from the Puppet agent to the PE server, so the agent is going to be looking for your laptop. Even if you have the details of your laptop in the hosts file, it probably has no route back to your laptop across the internet, as the IP of your laptop was probably assigned by your router at home.
Why not build your Puppet master on an EC2 instance and keep it all on the same network? Edit code on your laptop, push to GitHub/GitLab, and then deploy the code from there to your PE server using Code Manager.
Alternatively, you may be able to use a VPN to get your laptop onto the AWS VPC directly, in which case it'll appear as just another node on the network and everything should work.
The problem here is that the Puppet server needs a public IP, or an IP in the same network as your EC2 instance, to which your Puppet agent can connect. However, there's one solution without using a VPN, though it can't be permanent: you can tunnel your local port to the EC2 instance.
ssh -i <pemfile-location> -R 8140:localhost:8140 username@ec2_ip
This tunnels port 8140 on your EC2 instance to port 8140 on your localhost.
Then inside your ec2 instance you can modify your /etc/hosts file to add this:
127.0.0.1 puppet
Now run the Puppet agent on your EC2 instance and everything should work as expected. Also note that if you close the ssh connection created above, the ssh tunnel will stop working.
If you want to keep the ssh tunnel open a bit more reliably then this answer might be helpful: https://superuser.com/questions/37738/how-to-reliably-keep-an-ssh-tunnel-open
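One common approach from that thread is autossh, which restarts the tunnel if the connection drops. A sketch, reusing the placeholder host and key from above (autossh must be installed on your laptop; the -o values are illustrative):

```shell
# -M 0 disables autossh's extra monitoring port and relies on SSH's own
# keep-alives to detect a dead connection and restart the tunnel.
autossh -M 0 -N \
    -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
    -i <pemfile-location> \
    -R 8140:localhost:8140 username@ec2_ip
```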

Accessing a dev server when doing remote / cloud development

I'm attempting to find a completely remote / cloud-based development workflow.
I've created an AWS free-tier EC2 instance, and on that box I've been developing a Gatsby site (the framework doesn't matter; the solution I'm looking for should be framework agnostic). Since the code is on another box, I can't run the dev server and then hit localhost from the local computer as I would normally.
So,
What do I need to do so that I can run gatsby develop and hit my dev server that's hosted on the ec2 box?
How do I provide public access to that endpoint?
Is it possible to provide temporary access so that when I log off of the box, it's no longer accessible?
Is there some mechanism I can put into place so that I'm the only one that can hit that endpoint?
Are there other features that I should be taking advantage of to secure that endpoint?
Thanks.
I can't run the dev server and then from the local computer hit localhost as I would normally
You can. You can use ssh to tunnel your remote port to your localhost, and access the server from your localhost.
What do I need to do so that I can run gatsby develop and hit my dev server that's hosted on the ec2 box?
ssh into the dev server, run gatsby develop, and either access it on localhost through an ssh tunnel or make it public to access through its public IP address.
Use sshfs to mount a development folder from the dev server onto your localhost.
Alternatively, you can set up a vncserver on the dev server, tunnel the VNC connection using ssh, and access the dev server through a remote desktop. Something lightweight would be good, e.g. Fluxbox as a desktop environment for VNC.
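The sshfs option, for instance, might look like this (hypothetical hostname and paths; sshfs must be installed on your local machine):

```shell
# Mount the project directory from the dev server into a local folder,
# edit with local tools, then unmount when done.
mkdir -p ~/dev-mount
sshfs ec2-user@dev-server:/home/ec2-user/project ~/dev-mount
# ... edit files locally; changes land on the dev server over SFTP ...
fusermount -u ~/dev-mount   # on macOS: umount ~/dev-mount
```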
Is it possible to provide temporary access so that when I log off of the box, it's no longer accessible?
Yes, through an ssh tunnel. When you close the tunnel, the access is finished.
Is there some mechanism I can put into place so that I'm the only one that can hit that endpoint?
An ssh tunnel, along with a security group that allows ssh from your IP address only.
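As an illustration, such a security group rule could be added with the AWS CLI (the group ID sg-0123456789abcdef0 is a placeholder; checkip.amazonaws.com is one way to discover your current public IP):

```shell
# Look up your current public IP, then allow SSH (port 22) from it only.
MY_IP=$(curl -s https://checkip.amazonaws.com)
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 \
    --cidr "${MY_IP}/32"
```

If your home IP changes, you would need to revoke the old rule and add a new one.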
Are there other features that I should be taking advantage to secure that endpoint?
Security groups and ssh tunneling would be primary choices to ensure secure access to the dev server.
You can also make the endpoint public, but set security group of your dev server to allow internet access only from your IP.
You could also put the dev server in a private subnet for full separation from the internet. Use a bastion host to access it, or set up a double ssh tunnel to your localhost.
Another way is to do all development on localhost, push code to CodeCommit, and have CodePipeline manage deployment of your code to your dev server using CodeDeploy.
You can also partially eliminate ssh by using SSM Session Manager.
Hope this helps.

Viewing Cloud Compute Engine Application in Web Browser

I have a Dash application that I can run locally and view in my browser. I have moved it to Google Cloud Compute Engine and the app runs, but I can't see it in my browser at the 127.0.0.1 address where it's running.
I have tried to allow http and https traffic to the virtual machine using
gcloud compute firewall-rules create FIREWALL_RULE --allow tcp:80,tcp:443 in the console without any luck. How can I view it in my browser?
You were able to reach http://127.0.0.1 and/or https://127.0.0.1 when you ran it locally because you ran your web browser on the same computer. You can find more information here:
The local loopback mechanism may be used to run a network service on a host without requiring a physical network interface, or without making the service accessible from the networks the computer may be connected to. For example, a locally installed website may be accessed from a Web browser by the URL http://localhost to display its home page.
The name localhost normally resolves to the IPv4 loopback address 127.0.0.1, and to the IPv6 loopback address ::1.
As a result, IP 127.0.0.1 on your VM instance can be accessed only from the VM instance itself.
To check your application on IP 127.0.0.1 you can use the curl command from the command line of your VM instance:
instance:~$ curl -I http://127.0.0.1
instance:~$ curl -I https://127.0.0.1
To allow access to your application via ports 80/443 you should go to Compute Engine -> VM instances -> click on NAME_OF_YOUR_VM_INSTANCE -> click on EDIT -> go to Firewalls, select Allow HTTP traffic and Allow HTTPS traffic -> click Save. Have a look at the documentation Firewall rules overview and Configuring network tags to find more details.
To access your application from a web browser you should use the external IP address, which you can find at Compute Engine -> VM instances -> look for NAME_OF_YOUR_VM_INSTANCE and External IP:
http://EXTERNAL_IP_OF_YOUR_VM_INSTANCE
https://EXTERNAL_IP_OF_YOUR_VM_INSTANCE

Is it possible to connect to Cloud SQL Proxy via Host Compute Engine VM's Internal or External IP?

I am testing the following configuration.
Cloud SQL (testsql-1) in Region X Zone A
A Compute Engine VM (TestVM-1) in the same Region X Zone A. OS is CentOS 7.
The Compute Engine VM is running the Cloud SQL Proxy on a non-default port (9090).
With the above configuration I am able to log on to testsql-1 from TestVM-1 with the command below:
`mysql -h 127.0.0.1 --port 9090 -u testuser -D testDB -p`
However I am not able use the internal IP of TestVM-1 in the above command. It gives an error.
Another observation: I am able to telnet 127.0.0.1 9090, but when I try telnet <VM-Internal-IP> 9090 it returns a connection refused error.
Does anyone know if this is expected behaviour? If this is expected, why is it so?
The Cloud SQL Proxy listens on 127.0.0.1 by default; that is where it accepts connections.
To configure another IP address, you have to set it in the instances parameter:
./cloud_sql_proxy -instances=<myCloudSQLproject:myCloudSQLzone:mycloudsqlinstance>=tcp:<IP_Address>:<PORT>
Something like this:
./cloud_sql_proxy -instances=project_xxx:us-central1:database_yyy=tcp:10.203.23.12:9090
This configuration allows connecting to this cloud proxy from other hosts as well.
The reason you can connect to 127.0.0.1, but not via the VM's private IP address, is that the Proxy is NOT listening on the private IP address.
The Cloud SQL Proxy listens on the loopback adapter's internal address which is 127.0.0.1. This address only exists inside the computer.
You're able to connect from your VM to Cloud SQL because you're using the proxy. If you would like to connect to your Cloud SQL instance directly, you have to whitelist the IP address of your VM in Cloud SQL's Connections tab; please refer to this documentation.
This is expected behavior. Private IPs are only accessible from a Virtual Private Cloud (VPC). In order for a resource (such as a GCE instance) to connect, it must also be on that VPC.
See this page for instructions on how to add a GCE instance to a VPC, and see this page for more on the environment requirements for Private IP.

Open Remote Server Local URL on my own System (Dynamic Mode)

So I have access to an AWS server (let's say IP: 132.31.55.178).
I am starting a job on this server remotely, which creates a local URL (http://0.0.0.0:3000/import) on the SERVER. After starting the job, I should go to this local URL, import a model there and run the main job using the UI provided at that local URL.
Is there any way that I can see this URL on my system and do what is needed?
I did use WinSCP, but when opening a file it moves it to my temp folder, so whatever I do there does not reflect on the server in real time.
Any idea how to fix this?
0.0.0.0 is not a "local" address. This address is typically used to bind a service to all IPv4 interfaces. If you are accessing something locally you are probably using the 127.0.0.1 "localhost" address.
If you have something running on an EC2 server, bound to all IPv4 interfaces, then you would access it like <ec2-public-address>:<port>. Given your example of address 132.31.55.178 running on port 3000 you would access this service at http://132.31.55.178:3000/import. Note that you would need to open port 3000 in the AWS Security Group assigned to the EC2 instance before you would be able to access that service.
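A quick way to see the difference between the two bind addresses is a small sketch using only the Python standard library (port numbers here are arbitrary; the OS picks free ones):

```python
import socket

def listen(bind_addr):
    """Open a TCP listener on bind_addr; port 0 lets the OS pick a free port."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((bind_addr, 0))
    s.listen(1)
    return s, s.getsockname()[1]

# 0.0.0.0 = every IPv4 interface on the machine; 127.0.0.1 = loopback only.
all_ifaces, port_a = listen("0.0.0.0")
loop_only, port_b = listen("127.0.0.1")

reachable = []
for port in (port_a, port_b):
    try:
        socket.create_connection(("127.0.0.1", port), timeout=1).close()
        reachable.append(True)
    except OSError:
        reachable.append(False)

# Both listeners answer on loopback, but only the 0.0.0.0 one would also
# answer on the EC2 instance's public address (once the security group
# opens the port).
all_ifaces.close()
loop_only.close()
```

This is why the job's URL shows 0.0.0.0: the service accepts connections on any interface, so it is reachable via the instance's public IP once the security group allows it.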
Alternatively, a more secure method would be to use SSH tunneling (which you have tagged your question with, but not mentioned in your question at all). With SSH tunneling you could bind port 3000 on your local computer to port 3000 on the remote EC2 server. Then you could access the service from your local computer by loading http://localhost:3000/import. The SSH command to establish this tunnel would be something like:
ssh user@ec2-server-address -i ssh-key-location -L 3000:localhost:3000 -N