I have a cloud instance which I would like to restrict access to. I'm wondering what's the right way to do it.
The setup:
1. I've set up a Google Compute Engine instance, and it has an external IP a.b.c.d
2. I would like everyone accessing a.b.c.d to be automatically redirected to Google authentication, and, if their account meets the policy, to be able to proceed
Can anyone suggest a proper way of doing that? Not by adding code to the application running on a.b.c.d, but by configuring the cloud instance.
Look into using IAP (Identity-Aware Proxy)... it can now be used to shield SSH requests as well. I've not done what you're trying, but I think it is what you're looking for.
Related
I'm not sure of the right terms to frame this question, but basically I have a downloaded UI tool that runs on 0.0.0.0:5000 on my AWS EC2 instance, and the instance has a public IP address associated with it. So right now anyone in the world can access this tool by going to {ec2_public_ip}:5000.
I want to run some kind of script, or add security group inbound rules, that will require authorization before letting someone view the page. The application running on port 5000 is a downloaded tool, not my own code, so it wouldn't be possible to add authentication to the tool itself (it's KafkaMagic, FYI).
The one security measure I have managed so far is allowing only specific IPs to make TCP connections to port 5000, which is a good start but not enough, as there is no guarantee that someone on that IP is authorized to view the tool. Is it possible to require an IAM role to access the IP? I do have a separate API with a login endpoint that could be useful if it were possible to run a script before forwarding the request; is that a possible/viable solution? I'm not sure what best practice is in this case; there might be a third option I have not considered.
ADD-ON EDIT
Additionally, I am using EC2 Instance Connect, and if it is possible to require an active SSH connection before accessing the EC2 instance's IP, that would be a good solution as well.
EDIT FOLLOWING INITIAL DISCUSSION
Another approach that would work for me is a small app running on a different port that could leverage our existing UI to log a user in. If a user authenticated through this app, would it then be possible to show them the UI from port 5000? In this case KafkaMagic would be on a private IP, and there would be a different IP that the user would go through before seeing the tool.
In short, the answer is no. If you want authorization (I think you mean authentication) for access to an application running on the server, you need tools that run on the server. If your tool offers such a capability, use it. It looks like Kafka Magic does: https://www.kafkamagic.com/faq/#how-to-authenticate-kafka-client-by-consumer-group-id
But you can't use external tools, like AWS, to perform such authentication. A security group is like a firewall: it either allows or blocks access to the port.
You can easily create a script that uses the AWS SDK, or even just executes the AWS CLI, to view/add/remove an IP address in a security group. How you execute that script depends on your audience and what language you use.
For a small number of trusted users, you could issue each of them an IAM user and API key with a policy that allows them to manage a single dynamic security group. Then provide a script they can run (or a shortcut they can click) that gets their current gateway IP and adds/removes it from the security group.
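As a rough illustration of the SDK approach, here is a minimal Python/boto3 sketch that authorizes the caller's current public IP for port 5000; the security group ID is a placeholder, and removal works the same way via revoke_security_group_ingress:

```python
# Minimal sketch: add the caller's current public IP to a dynamic security
# group. Assumes boto3 credentials are already configured (IAM user keys or
# an instance role). SECURITY_GROUP_ID is a placeholder.
import urllib.request

import boto3

SECURITY_GROUP_ID = "sg-0123456789abcdef0"  # hypothetical dynamic SG
PORT = 5000

def current_ip() -> str:
    # Ask a public "what is my IP" endpoint for the caller's gateway address.
    return urllib.request.urlopen("https://checkip.amazonaws.com").read().decode().strip()

def authorize(ip: str) -> None:
    ec2 = boto3.client("ec2")
    ec2.authorize_security_group_ingress(
        GroupId=SECURITY_GROUP_ID,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": PORT,
            "ToPort": PORT,
            "IpRanges": [{"CidrIp": f"{ip}/32", "Description": "dynamic user access"}],
        }],
    )

if __name__ == "__main__":
    authorize(current_ip())
```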
If you want to allow users via a website, a simple script behind some existing authentication is also possible with the SDK/CLI approach (depending on the available server-side scripting).
If users have SSH access, you could authorise the IP by calling the script/CLI from .bashrc or some other startup script.
In any case, the IAM policy that grants permission to modify the SG should be as restrictive as possible (basically, don't use any wildcards in the policy). You can add additional conditions, such as the source IP/range (i.e. within your VPC) or requiring MFA to be active for the user, to make this more secure (these can be handled in either case via the script). If you're running on EC2, I'd suggest looking at IAM instance roles as an easy way to give your server access to credentials for your script (but you can create a user, deploy the key/secret to the server, and manage it manually if you want).
For safety, I would also suggest creating a dedicated security group for dynamically managed access, alongside the existing SGs required for internal operation. It would also be a good idea to implement a Lambda function on a schedule to flush the dynamic SG (even if you script de-authorising an IP, it might not happen, so it's good to clean up safely/automatically).
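A hedged sketch of such a scheduled clean-up Lambda (Python/boto3; the security group ID is a placeholder, and the function would be triggered by an EventBridge schedule rule):

```python
# Minimal sketch of a scheduled Lambda handler that flushes every ingress
# rule from the dynamically managed security group. DYNAMIC_SG_ID is a
# placeholder; trigger the function from an EventBridge schedule rule.
import boto3

DYNAMIC_SG_ID = "sg-0123456789abcdef0"  # hypothetical dynamic SG

def handler(event, context):
    ec2 = boto3.client("ec2")
    group = ec2.describe_security_groups(GroupIds=[DYNAMIC_SG_ID])["SecurityGroups"][0]
    rules = group["IpPermissions"]
    if rules:
        # Revoke everything currently attached to the dynamic group.
        ec2.revoke_security_group_ingress(GroupId=DYNAMIC_SG_ID, IpPermissions=rules)
    return {"revoked_rule_sets": len(rules)}
```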
Let's assume I run a Google Cloud Run service.
Let's also assume someone really wants to harm you, and finds out all the API routes or is able to send a lot of POST requests by spamming the site.
There is an email notification, which will pop up at certain limits you set up beforehand.
Is there also a way to automatically cut off the Cloud Run service, or take it temporarily offline? I couldn't find any good resource or solution for this.
There are several solutions for removing a Cloud Run service from traffic, in addition to the authentication solution proposed by Dondi:
Delete the Cloud Run service. It might seem like overkill, but because the service is stateless, you will lose nothing (except the revision history).
If you have your Cloud Run service behind a Load Balancer:
You can remove the serverless NEG that routes the traffic to it.
You can add a Cloud Armor policy that filters the originator IP to exclude it from the traffic.
You can set the ingress to internal, or to internal and Cloud Load Balancing (see the sketch after this list).
You can deploy a dummy revision (a hello-world container, for example) and route 100% of the traffic to it (traffic splitting feature).
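As an illustration of the ingress option, here is a hedged Python sketch assuming the google-cloud-run client library (run_v2); the project, region and service names are placeholders:

```python
# Minimal sketch, assuming the google-cloud-run client library (run_v2):
# flip an existing service's ingress to "internal only" so public traffic
# stops reaching it. Project/region/service names below are placeholders.
from google.cloud import run_v2

def set_ingress_internal(project: str, region: str, service_name: str) -> None:
    client = run_v2.ServicesClient()
    name = f"projects/{project}/locations/{region}/services/{service_name}"
    service = client.get_service(name=name)
    # After this update, only internal/VPC traffic is accepted.
    service.ingress = run_v2.IngressTraffic.INGRESS_TRAFFIC_INTERNAL_ONLY
    client.update_service(service=service).result()

set_ingress_internal("my-project", "europe-west1", "my-service")  # placeholders
```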
You can't really "turn off" a Cloud Run service, as it's fully managed by Google. A Cloud Run instance automatically scales down to zero if there are no requests, but the service will continue to serve traffic when requests arrive.
To emulate what you want to do, make sure that your service requires authentication, then revoke access for the offending user (or for all users). As mentioned in the docs:
Cloud Run (fully managed) does not offer a direct way to make a service stop serving traffic, but you can achieve a similar result by revoking the permission to invoke the service to identities that are invoking the service. Notably, if your service is "public", remove allUsers from the Cloud Run Invoker role (roles/run.invoker).
Update: Access to a resource is managed through an IAM policy. In order to control access programmatically, you have to get the IAM policy first, then revoke the role from a user or a service account. Here's the documentation that gives an overview.
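A hedged sketch of that flow in Python, assuming the google-cloud-run client library (run_v2); the project, region and service names are placeholders:

```python
# Minimal sketch, assuming the google-cloud-run client library (run_v2):
# fetch the service's IAM policy and strip allUsers from roles/run.invoker,
# so a previously public service stops accepting unauthenticated requests.
from google.cloud import run_v2

def revoke_public_invoker(project: str, region: str, service_name: str) -> None:
    client = run_v2.ServicesClient()
    resource = f"projects/{project}/locations/{region}/services/{service_name}"
    policy = client.get_iam_policy(request={"resource": resource})
    for binding in policy.bindings:
        if binding.role == "roles/run.invoker" and "allUsers" in binding.members:
            binding.members.remove("allUsers")
    client.set_iam_policy(request={"resource": resource, "policy": policy})

revoke_public_invoker("my-project", "us-central1", "my-service")  # placeholders
```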
I am working on a project where, when a user clicks a link/button that says "Access VM" on a webpage, it should internally spin up a Linux-based VM (using GCP, AWS or Azure) and provide the VM terminal in a new browser tab for the user to play around in.
How can I achieve this using GCP/AWS/Azure? Which type of VM should I create so that the user can access the VM terminal over a browser without using an SSH client?
I tried creating a VM on Azure and explored the Bastion option, but a Bastion session must always be initiated from within the Azure portal.
Do we have any other option within GCP, AWS or Azure to achieve this?
I am looking for something similar to the Katacoda website.
There's no built-in feature in GCP that makes such a thing possible. There is an "SSH" button in the VM list, but you have to be able to view the list and also have permission to connect to the instance, and that requires actually logging into GCP, which I believe is not what you want.
You could try to build a solution where, after clicking a "Connect" button on your website, a series of commands is sent to GCP's API to create and connect to a new instance. It's possible, but rather complicated.
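Just to illustrate the "create" half of that idea, here is a hedged Python sketch assuming the google-cloud-compute client library (compute_v1); all names, zones and images are placeholders, and wiring a browser terminal up to the new VM would still be a separate problem:

```python
# Rough sketch, assuming the google-cloud-compute client library (compute_v1):
# a backend handler could create a throwaway VM like this when the user clicks
# "Connect". Project, zone, machine type and image below are placeholders.
from google.cloud import compute_v1

def create_sandbox_vm(project: str, zone: str, name: str) -> None:
    instance = compute_v1.Instance(
        name=name,
        machine_type=f"zones/{zone}/machineTypes/e2-small",
        disks=[
            compute_v1.AttachedDisk(
                boot=True,
                auto_delete=True,
                initialize_params=compute_v1.AttachedDiskInitializeParams(
                    source_image="projects/debian-cloud/global/images/family/debian-12",
                ),
            )
        ],
        network_interfaces=[compute_v1.NetworkInterface(network="global/networks/default")],
    )
    # insert() returns a long-running operation; wait until the VM exists.
    operation = compute_v1.InstancesClient().insert(
        project=project, zone=zone, instance_resource=instance
    )
    operation.result()

create_sandbox_vm("my-project", "us-central1-a", "sandbox-vm-1")  # placeholders
```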
Have a look at the documentation on how to connect to a VM using the browser; maybe it will give you some ideas.
Ultimately, you can use many other third-party tools, but you still need to provide an address and credentials; additionally, you rely on a service that you don't control, so you have to take security (and reliability) into consideration.
Finally, you may also consider going through the general information on how to connect to GCP instances.
I have Owner access to a GCP project. We need to access a Notebook instance which does not have an external IP, so it's using IAP tunneling. I am able to access it, but my team members, to whom I have given the IAP-Secured Tunnel User role and who have Notebook Viewer access, are NOT able to access it and get this error in the PuTTY terminal:
No supported authentication methods available (server sent: publickey).
As per the Google documentation, a firewall rule should also be set for IAP, which is NOT set for this instance. But if that's the issue, how am I able to access it while others are not? Is there some other role that needs to be added?
I had a similar issue before, which I fixed from the IAM permissions screen: click the "Add member" button.
Can you try adding them there, and then, if you are using the CLI, ask them to update the SDK before trying the command?
Hope this helps.
I have two Docker images, A and B, running on Google Cloud Run. A needs a small memory footprint and slow scaling (it is the front end), and B needs a high memory footprint and heavy scaling under load (it is the backend).
I have made A public (allUsers can reach port 80), and B private (I didn't check the checkbox).
Since a Cloud Run service doesn't have a static IP but a dynamic URL, how can I make A "speak" to B (over HTTP) while keeping B inaccessible from the wild?
Right now, the only workaround I have found is to open HTTP to allUsers for both, use a subdomain name for B (like b.my.app), and call "http://b.my.app" from A.
This is a very bad solution, since B can be reached from outside Google's network.
Since service B is private (requires authentication), service A will need to include an HTTP Authorization header in requests to service B.
The header looks like this:
Authorization: Bearer <replace_with_token>
The token is an OAuth 2.0 Identity Token (not an Access Token). The IAM member email address for the User Credentials or Service Account is added to service B with the role roles/run.invoker.
You will still need to call the endpoint URL (xxx.y.run.app) of service B. That does not change unless you also implement Custom Domains.
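Purely as an illustration (not taken from the linked articles), here is a minimal Python sketch of service A fetching an identity token for its runtime service account and calling service B; the service B URL is a placeholder:

```python
# Minimal sketch of service A calling the private service B with an identity
# token. On Cloud Run, fetch_id_token() obtains the token from the metadata
# server for service A's runtime service account. SERVICE_B_URL is a placeholder.
import requests
import google.auth.transport.requests
from google.oauth2 import id_token

SERVICE_B_URL = "https://b-xxxxxxxxxx-uc.a.run.app"  # placeholder endpoint of B

def call_service_b(path: str = "/") -> requests.Response:
    auth_request = google.auth.transport.requests.Request()
    # The audience must be service B's own run.app URL.
    token = id_token.fetch_id_token(auth_request, SERVICE_B_URL)
    return requests.get(
        SERVICE_B_URL + path,
        headers={"Authorization": f"Bearer {token}"},
    )
```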
A nice feature of Cloud Run is that when authentication is required, the Cloud Run proxy handles this for you. The proxy sits in front of Cloud Run and blocks all unauthorized requests. Your instance is never launched, so there is no billing time while hackers try to get through.
In one of the articles on my website, I show how to generate the Identity Token in Go (link). In this article (link), which is part of a three-part series, I do the same using cURL. There are numerous articles on the Internet that explain this as well. In another article, I explain how Cloud Run Identity works (link) and how Cloud Run Identity Based Access Control works (link).
Review the --service-account option which allows you to set the service account to use for identity (link).
Cloud Run Authentication documentation (link).