In a project we have started using GKE to host some services.
Those services should be accessible to all team members, but not to anyone else in the world.
Our team works from home, so we cannot restrict access by IP address or anything like that.
What is the best way to make sure only team members can access the service?
I tried to set up IAP. That works, but it is a lot of setup per service, and I did not find a way to allow a "technical user", e.g. letting sonar-scanner reach SonarQube.
Maybe another option would be setting up a dedicated nginx-ingress controller that I can secure using BasicAuth or client certificates. But it feels like my situation is quite common and I am missing something that already exists. Any hints?
The current challenge with IAP is that I have services like SonarQube that offer both a web interface and an API. Using a browser to access the web interface works fine, but it is not clear to me how to configure, for example, sonar-scanner to access the IAP-protected API.
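For reference, this is roughly the kind of token exchange I assume a technical user would have to perform against IAP (just a sketch; the service-account key file, the IAP OAuth client ID and the SonarQube URL below are placeholders):

```python
# Sketch: letting a "technical user" (service account) call an IAP-protected API.
# Assumes a service-account JSON key and the OAuth client ID of the IAP backend;
# all values below are placeholders.
from google.oauth2 import service_account
from google.auth.transport.requests import AuthorizedSession

IAP_CLIENT_ID = "1234567890-xxxx.apps.googleusercontent.com"  # placeholder
KEY_FILE = "sonar-scanner-sa.json"                            # placeholder

credentials = service_account.IDTokenCredentials.from_service_account_file(
    KEY_FILE, target_audience=IAP_CLIENT_ID
)

# AuthorizedSession attaches the OIDC token as "Authorization: Bearer <token>".
session = AuthorizedSession(credentials)
response = session.get("https://sonarqube.example.com/api/system/status")
print(response.status_code, response.text)
```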
The second issue with IAP is that it requires quite a bit of GKE-specific boilerplate per service (FrontendConfig/BackendConfig/annotations/etc.).
I would really like to shift that kind of configuration away from the services (i.e. the developers) to the cluster/ingress controller (i.e. the cluster admin).
Related
I'm building a simple analytics service that needs to work for multiple countries. It's likely that someone from a restricted jurisdiction (e.g. Iran) will hit the endpoint. I am not offering any service that would fall under sanctions-related restrictions, but it seems that Cloud Run endpoints do not allow traffic from places like Iran. I tried various configurations (adding a domain mapping, an external HTTPS LB, calling from Firebase, etc.) and it doesn't work.
Is there a way to let read-only traffic through from these territories? Or is there another Google product that would allow this? It seems like the Google Maps prohibited territory list applies to some services, but not others (e.g. Firebase doesn't have this issue).
You should serve traffic through a load balancer with a Cloud Armor policy attached. Cloud Armor provides a feature for filtering traffic based on location.
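As a rough sketch (the project ID, policy name and priority below are placeholders, and the policy is assumed to already be attached to your external HTTPS load balancer), a location-based rule can be added with the Compute client library; Cloud Armor rules match on CEL expressions such as origin.region_code:

```python
# Sketch: add a Cloud Armor rule that matches requests from a given country.
# Project ID, policy name and priority are placeholders; the policy is assumed
# to already exist and be attached to the external HTTPS load balancer.
from google.cloud import compute_v1

project = "my-project"            # placeholder
policy_name = "my-edge-policy"    # placeholder

rule = compute_v1.SecurityPolicyRule(
    priority=1000,
    action="allow",  # or "deny(403)" to block instead
    description="Allow read-only traffic from Iran",
    match=compute_v1.SecurityPolicyRuleMatcher(
        expr=compute_v1.Expr(expression="origin.region_code == 'IR'")
    ),
)

client = compute_v1.SecurityPoliciesClient()
operation = client.add_rule(
    project=project,
    security_policy=policy_name,
    security_policy_rule_resource=rule,
)
operation.result()  # wait for the operation to finish (recent client versions)
```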
I'm not sure of the right terms to frame this question, but basically I have a downloaded UI tool that runs on 0.0.0.0:5000 on my AWS EC2 instance, and the instance has a public IP address associated with it. So right now everyone in the world can access this tool by going to {ec2_public_ip}:5000.
I want to run some kind of script or add security group inbound rules that require authorization before letting someone view the page. The application running on port 5000 is a downloaded tool, not my own code, so it wouldn't be possible to add authentication to the tool itself (it's KafkaMagic, FYI).
The one security measure I have managed so far is to allow only specific IPs to make TCP connections to port 5000, which is a good start but not enough, as there is no guarantee that someone on that IP is authorized to view the tool. Is it possible to require an IAM role to access the IP? I do have a separate API with a login endpoint that could be useful if it were possible to run a script before forwarding the request; is that a possible/viable solution? I'm not sure what best practice is in this case; there might be a third option I haven't considered.
ADD-ON EDIT
Additionally, I am using EC2 Instance Connect, and if it is possible to require an active SSH connection before accessing the EC2 instance's IP, that would be a good solution as well.
EDIT FOLLOWING INITIAL DISCUSSION
Another approach that would work for me is a small app running on a different port that could leverage our existing UI to log a user in. If a user authenticated through this app, would it then be possible to display the UI from port 5000 to them? In this case KafkaMagic would be on a private IP, and there would be a different IP the user goes through before seeing the tool.
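In other words, something roughly like the sketch below, assuming KafkaMagic were bound to 127.0.0.1:5000 only, and with the check against our existing login API stubbed out:

```python
# Rough sketch of the "small app on a different port" idea: a reverse proxy that
# only forwards to the tool after a successful login. Assumes KafkaMagic is bound
# to 127.0.0.1:5000 (not the public interface); the login check is a stub.
import requests
from flask import Flask, Response, redirect, request, session

app = Flask(__name__)
app.secret_key = "change-me"          # placeholder
UPSTREAM = "http://127.0.0.1:5000"    # KafkaMagic, reachable only from localhost

def is_authenticated() -> bool:
    return session.get("user") is not None  # stub: call the existing login API here

@app.route("/login", methods=["POST"])
def login():
    # Stub: validate request.form["user"] / ["password"] against the existing API.
    session["user"] = request.form.get("user")
    return redirect("/")

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def proxy(path):
    if not is_authenticated():
        return Response("Login required", status=401)
    upstream = requests.request(
        method=request.method,
        url=f"{UPSTREAM}/{path}",
        headers={k: v for k, v in request.headers if k.lower() != "host"},
        params=request.args,
        data=request.get_data(),
    )
    return Response(upstream.content, status=upstream.status_code,
                    content_type=upstream.headers.get("Content-Type"))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```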
In short, the answer is no. If you want authorization (I think you mean authentication) to access an application running on the server, you need tools that run on the server. If your tool offers such a capability, use it. It looks like Kafka Magic does: https://www.kafkamagic.com/faq/#how-to-authenticate-kafka-client-by-consumer-group-id
But you can't use external tools, like AWS, to perform such authentication. A security group is like a firewall: it either allows or blocks access to the port.
You can easily create a script that uses the AWS SDK, or even just executes the AWS CLI, to view/add/remove an IP address in a security group. How you execute that script depends on your audience and what language you use.
For a small number of trusted users, you could issue each of them an IAM user and API key with a policy that allows them to manage a single dynamic security group. Then provide a script they can run (or a shortcut they can click) that gets their current gateway IP and adds/removes it from the security group.
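A sketch of such a script (the security group ID is a placeholder; it looks up the caller's current public IP and opens port 5000 for it):

```python
# Sketch: self-service script that authorises the caller's current public IP
# for port 5000 in a dedicated, dynamically managed security group.
# The security group ID is a placeholder.
import boto3
import requests

GROUP_ID = "sg-0123456789abcdef0"  # placeholder: the dynamic security group
PORT = 5000

my_ip = requests.get("https://checkip.amazonaws.com", timeout=10).text.strip()

ec2 = boto3.client("ec2")
ec2.authorize_security_group_ingress(
    GroupId=GROUP_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": PORT,
        "ToPort": PORT,
        "IpRanges": [{"CidrIp": f"{my_ip}/32", "Description": "temporary user access"}],
    }],
)
print(f"Authorised {my_ip}/32 on port {PORT}")
```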
If you want to allow users via a website, a simple script behind some existing authentication is also possible with the SDK/CLI approach (depending on the server-side scripting available).
If users have SSH access, you could authorise the IP by calling the script/CLI from .bashrc or some other startup script.
In any case, the IAM policy that grants permission to modify the SG should be as restrictive as possible (basically, don't use any *'s in the policy). You can add extra conditions, like the source IP/range (i.e. in your VPC) or requiring MFA to be active for the user, to make this more secure (either can be handled via the script). If you're running on EC2, I'd suggest looking at IAM instance roles as an easy way to give your server access to credentials for your script (but you can create a user and deploy the key/secret to the server and manage it manually if you want).
For safety, I would also suggest creating a dedicated security group for dynamically managed access, alongside the existing SGs required for internal operation. It would also be a good idea to implement a Lambda function on a schedule to flush the dynamic SG (even if you script de-authorising an IP, it might not always happen, so it's good to clean up safely/automatically).
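For the scheduled clean-up, a sketch of a Lambda handler that flushes every ingress rule from the dynamic group (the group ID is again a placeholder; trigger it e.g. nightly from an EventBridge rule) could look like this:

```python
# Sketch: scheduled Lambda that removes every ingress rule from the dynamically
# managed security group, so temporary access always expires.
# The group ID is a placeholder.
import boto3

GROUP_ID = "sg-0123456789abcdef0"  # placeholder

def lambda_handler(event, context):
    ec2 = boto3.client("ec2")
    group = ec2.describe_security_groups(GroupIds=[GROUP_ID])["SecurityGroups"][0]
    permissions = group.get("IpPermissions", [])
    if permissions:
        ec2.revoke_security_group_ingress(GroupId=GROUP_ID, IpPermissions=permissions)
    return {"revoked_rules": len(permissions)}
```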
I have two Docker images, A and B, running on Google Cloud Run. A needs a small memory footprint and slow scaling (it is the frontend) and B needs a high memory footprint and heavy scaling under load (it is the backend).
I have made A public (allUsers can reach it on :80), and B private (I didn't check the checkbox).
Since Cloud Run services don't have a static IP but a dynamic URL, how can I make A "speak" to B (over HTTP) while keeping B inaccessible from the wild?
Right now, the only workaround I have found is to open the HTTP port to allUsers for both, use a subdomain name for B (like b.my.app), and call "http://b.my.app" from A.
This is a very bad solution, since B can be reached from outside Google's network.
Since service B is private (requires authentication), service A will need to include an HTTP Authorization header in requests to service B.
The header looks like this:
Authorization: Bearer <replace_with_token>
The token is an OAuth 2.0 Identity Token (not an Access Token). The IAM member email address (of the user credentials or service account) is added to service B with the role roles/run.invoker.
You will still need to call the endpoint URL (xxx.y.run.app) of service B. That does not change unless you also implement Custom Domains.
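Here is a minimal sketch of what service A does (the service B URL and path are placeholders; the google-auth library fetches the identity token from the metadata server when running on Cloud Run):

```python
# Sketch: service A calling the private service B with an OIDC identity token.
# The service B URL and path are placeholders; service A's runtime service
# account must have roles/run.invoker on service B.
import google.auth.transport.requests
import google.oauth2.id_token
import requests

SERVICE_B_URL = "https://service-b-xxxxxxxx-uc.a.run.app"  # placeholder

auth_request = google.auth.transport.requests.Request()
# On Cloud Run this fetches the token from the metadata server for the
# service's runtime service account; the audience must be service B's URL.
token = google.oauth2.id_token.fetch_id_token(auth_request, SERVICE_B_URL)

response = requests.get(
    f"{SERVICE_B_URL}/api/some-endpoint",  # placeholder path
    headers={"Authorization": f"Bearer {token}"},
)
print(response.status_code)
```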
A nice feature of Cloud Run is that, when authentication is required, the Cloud Run proxy handles this for you. The proxy sits in front of Cloud Run and blocks all unauthorized requests. Your instance is never launched, so there is no billed time while hackers try to get through.
In one of the articles on my website, I show how to generate the Identity Token in Go (link). In another article I show the same using curl (link); that one is a three-part series. There are numerous articles on the Internet that explain this as well. In further articles, I explain how Cloud Run identity works (link) and how Cloud Run identity-based access control works (link).
Review the --service-account option which allows you to set the service account to use for identity (link).
Cloud Run Authentication documentation (link).
I am new to AWS EC2. I want to set up a website only for my family members.
It will contain some content that is not necessarily private, but it would be more appropriate if only family members could access it.
Restricting by IP address wouldn't work here, as we may be on the go and using other Wi-Fi.
I'm considering using the MAC address as the screening basis.
Is such access restriction allowed in EC2? Thanks.
Restricting using MAC addresses won't work: the devices will reach EC2 over a public network, and the MAC address changes at every hop. I assume you would be interested in setting up a remote VPN (e.g. L2TP)? EC2 can be used as a VPN server, with access allowed only from certain clients. If not, try to set up a login-based page and create accounts for your family members.
Here is a free open-source tool to achieve this:
https://www.digitalocean.com/community/tutorials/how-to-sync-and-share-your-files-with-seafile-on-ubuntu-18-04
Cognito is designed for such things; you can manage user accounts there. You can add an Application Load Balancer in front of your EC2 instance, which will forward to Cognito authentication, but this is a somewhat expensive solution for "family usage".
If there's no very sensitive data on this website, you can just use BasicAuth, which will prompt for a username and password on entry, or you can add a standard login page to your website.
Last, but not least, is Lambda with API Gateway (the free tier allows a lot of requests at no cost). This is a more programmatic solution, but it's up to you which one to choose.
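If you go the Lambda + API Gateway route, a sketch of a handler that enforces BasicAuth before serving the page could look like the following (credentials are hard-coded placeholders purely for illustration; in practice keep them in Secrets Manager or SSM):

```python
# Sketch: Lambda behind API Gateway that enforces HTTP Basic Auth before serving
# family-only content. Username/password are placeholders; store real secrets in
# Secrets Manager or SSM Parameter Store rather than in code.
import base64

EXPECTED_USER = "family"       # placeholder
EXPECTED_PASSWORD = "secret"   # placeholder

def lambda_handler(event, context):
    headers = event.get("headers") or {}
    # Header casing differs between API Gateway payload versions, so check both.
    auth_header = headers.get("authorization") or headers.get("Authorization") or ""
    if auth_header.startswith("Basic "):
        decoded = base64.b64decode(auth_header[len("Basic "):]).decode("utf-8")
        user, _, password = decoded.partition(":")
        if user == EXPECTED_USER and password == EXPECTED_PASSWORD:
            return {
                "statusCode": 200,
                "headers": {"Content-Type": "text/html"},
                "body": "<h1>Welcome, family!</h1>",
            }
    return {
        "statusCode": 401,
        "headers": {"WWW-Authenticate": 'Basic realm="family site"'},
        "body": "Unauthorized",
    }
```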
I'm building a mobile app that needs a backend that I've chosen to host using Amazon Web Services.
Their mobile SDKs provide APIs to work directly with DynamoDB (making my app a thick client), including user authentication/authorization with their IAM service (which is what I'm going to use to track users). This makes it easy to say "user X wants their information. Here's their temporary access key. Oh, here's the information you requested."
However, if I used RDS as a backend database, I'd have to create web services (in PHP, Java, etc.) that my app can talk to. Then I'd also have to implement the authentication/authorization myself within my web service (which I feel could get very messy). I'd also have to host the web service on an EC2 instance in addition to having the RDS instance, so my costs would increase.
The latter seems like it would be a lot of work, something which I could avoid by using DynamoDB (and its API) as my backend.
Am I correct in my reasoning here? Or is there an easy way to authenticate/authorize a PHP web service with an AWS RDS database?
I ask because I've only ever worked with relational databases before, so there would be a learning curve to get the NoSQL DB running. Though hypothetically my plan is to eventually switch to a NoSQL DB at some point in the future anyway, due to my app's increasing demands.
Side note: I already have my database designed in MySQL.
There is no way to use IAM directly with RDS, because fine-grained access control over RDS tables is not available. Moreover, IAM policies cannot be enforced dynamically there (i.e. with an Identity Pool).
RDS gives you a database you connect to directly with the engine's own protocol, so it is not exposed as a SaaS-style endpoint. DynamoDB is a REST service presented as a distributed key-value store and exposes endpoints to clients (the AWS SDK is just a wrapper around them).
DynamoDB was born as a distributed service and can guarantee fine-grained control over data access, which makes direct, concurrent access from many clients feasible.
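To illustrate the fine-grained control DynamoDB allows, a policy like the following sketch (table ARN, account ID and role name are placeholders) restricts each Cognito-authenticated user to items whose partition key equals their own identity ID:

```python
# Sketch: per-user, fine-grained DynamoDB access via an IAM policy attached to the
# Cognito identity pool's authenticated role. Table ARN, account ID and role name
# are placeholders. dynamodb:LeadingKeys limits each user to items whose partition
# key equals their Cognito identity ID.
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:PutItem"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/UserData",
        "Condition": {
            "ForAllValues:StringEquals": {
                "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"]
            }
        },
    }],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="Cognito_AppAuth_Role",   # placeholder: the identity pool's auth role
    PolicyName="DynamoDBPerUserAccess",
    PolicyDocument=json.dumps(policy),
)
```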