I have a host project with 2 VPCs, both of them shared with a service project that has no VPCs. In the console everything works great, but I want to create automation for that. I am not able to list the VPCs in the service project. I am trying to use
https://www.googleapis.com/compute/v1/projects/{project}/aggregated/subnetworks/listUsable
from the documentation:
Retrieves an aggregated list of all usable subnetworks in the project. The list contains all of the subnetworks in the project and the subnetworks that were shared by a Shared VPC host project.
but I am getting an empty result set.
What am I missing?
You have to be relatively careful with the permissions and what user you authenticate as. You will only be able to see subnetworks where the calling user has the appropriate compute.subnetworks.* permissions.
If you're looking at the Cloud Console, you will be acting with your Google Account, which most likely has the Owner role or at least roles/compute.networkUser access.
Depending on how you authenticate your API calls, you are most likely using a service account. Ensure that this service account has the required roles as well.
For further debugging, you can also try using the gcloud CLI tool. It has a handy option, --log-http, that will show you all HTTP calls made. This is often a great help when piecing together functionality in external code.
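If it helps, here is a minimal sketch of calling that endpoint from Node.js with the googleapis client while authenticating explicitly as a service account, so there is no doubt about which identity the request is made with. The key file path, project ID and scope below are placeholders/assumptions:

// Minimal sketch (googleapis Node.js client) – key file and project ID are placeholders.
const { google } = require('googleapis');

async function listUsableSubnets() {
  // Authenticate explicitly as the service account you expect your automation to use.
  const auth = new google.auth.GoogleAuth({
    keyFile: '/path/to/service-account-key.json',
    scopes: ['https://www.googleapis.com/auth/compute.readonly'],
  });
  const compute = google.compute({ version: 'v1', auth });

  // listUsable returns the project's own subnets plus those shared by a Shared VPC
  // host project – but only the ones this caller has permission to use.
  const res = await compute.subnetworks.listUsable({ project: 'my-service-project' });
  console.log(res.data.items || []);
}

listUsableSubnets().catch(console.error);

If this returns results while your own automation does not, the difference is almost certainly in the credentials or roles being used.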
I have looked at how the GCP Console does it:
1. It queries to see whether there is a host project.
2. If there is a host project, it sends a query to the host project to list the subnets (sketched below).
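For completeness, a rough sketch of that two-step flow with the googleapis Node.js client; the project ID and scope are placeholders, and error handling is omitted:

// Rough sketch – first find the Shared VPC host project, then list its subnetworks.
const { google } = require('googleapis');

async function listSharedSubnets(serviceProjectId) {
  const auth = new google.auth.GoogleAuth({
    scopes: ['https://www.googleapis.com/auth/compute.readonly'],
  });
  const compute = google.compute({ version: 'v1', auth });

  // 1. Ask which Shared VPC host project (if any) this service project is attached to.
  const host = await compute.projects.getXpnHost({ project: serviceProjectId });
  const hostProject = host.data && host.data.name;
  if (!hostProject) {
    console.log('No Shared VPC host project attached.');
    return;
  }

  // 2. List the subnetworks of the host project, grouped by region.
  const subnets = await compute.subnetworks.aggregatedList({ project: hostProject });
  console.log(JSON.stringify(subnets.data.items, null, 2));
}

listSharedSubnets('my-service-project').catch(console.error);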
Related
I currently have the following structure:

Project 1
  Cloud SQL Instance
Project 2
  Cloud Run Instance
  Service account
I would like the Project 2 Cloud Run instance to access the Project 1 Cloud SQL instance.
To do this I...
1. Add the Project 2 service account to Project 1 and give it permissions.
2. Go into Cloud SQL, set up the user, and connect to the DB to set up permissions and roles.
3. Try to access the Cloud SQL instance from the Cloud Run instance using the Cloud SQL Auth Proxy (see the sketch after this list).
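For reference, this is roughly what step 3 looks like on my side. Once the Cloud SQL connection is attached to the Cloud Run service (e.g. gcloud run deploy ... --add-cloudsql-instances=project-1:region:instance), Cloud Run exposes the instance as a Unix socket under /cloudsql (it uses the Auth Proxy under the hood), so I don't run the proxy myself. A minimal sketch, assuming PostgreSQL and the pg library, with all names as placeholders:

// Minimal sketch (Node.js + pg) – connection name, user and database are placeholders.
const { Pool } = require('pg');

const pool = new Pool({
  // Cloud Run mounts the attached Cloud SQL instance as a Unix socket under /cloudsql.
  host: '/cloudsql/project-1:us-central1:my-sql-instance',
  user: 'app-user',
  password: process.env.DB_PASSWORD,
  database: 'app-db',
});

async function main() {
  const result = await pool.query('SELECT NOW()');
  console.log(result.rows[0]);
}

main().catch(console.error);

For this to work across projects, the Project 2 service account also needs the Cloud SQL Client role on Project 1, which is what step 1 is about.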
But I see posts like this that suggest I should be using a VPC...
Accessing Cloud SQL from another GCP project
But I would really like to avoid managing 2 VPCs for this. Is there a way to do it without VPCs? And based on the post, is the best option to peer the 2 VPCs?
Posting this answer for the awareness of other users, summarizing what we have discussed under this question. In this scenario, as discussed, a firewall rule is needed to allow access from the public server.
I would also like to share this link on how you can enable a public IP address on Cloud SQL.
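If you prefer to script that instead of using the console, a rough sketch of the same change with the Cloud SQL Admin API through the googleapis Node.js client follows; the project, instance name and the authorized address are placeholders:

// Rough sketch (Cloud SQL Admin API) – project, instance and address are placeholders.
const { google } = require('googleapis');

async function enablePublicIp() {
  const auth = new google.auth.GoogleAuth({
    scopes: ['https://www.googleapis.com/auth/sqlservice.admin'],
  });
  const sqladmin = google.sqladmin({ version: 'v1beta4', auth });

  await sqladmin.instances.patch({
    project: 'project-1',
    instance: 'my-sql-instance',
    requestBody: {
      settings: {
        ipConfiguration: {
          ipv4Enabled: true, // assign a public IPv4 address to the instance
          // The "firewall" on the Cloud SQL side: only this range may connect over the public IP.
          authorizedNetworks: [{ name: 'allowed-client', value: '203.0.113.4/32' }],
        },
      },
    },
  });
}

enablePublicIp().catch(console.error);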
I'm trying to create a GKE cluster through Terraform and facing an issue with service accounts. In our enterprise, the service accounts to be used by Terraform are created in a project named svc-accnts, which resides in a folder named prod.
I'm trying to create the GKE cluster in a different folder, Dev, in a project named apigw. Through Terraform, when I use a service account with the necessary permissions that resides in the project apigw, it works fine.
But when I try to use a service account with the same permissions that resides in a different folder, I get this error:
Error: googleapi: Error 403: Kubernetes Engine API has not been used in project 8075178406 before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/container.googleapis.com/overview?project=8075178406 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.
where 8075178406 is the project number of svc-accnts.
Why does it try to enable the API in svc-accnts when the GKE cluster is created in apigw? Are service accounts not meant to be used across folders?
Thanks.
The error you provide is not about permissions of the service account. Maybe you did not change the project in the provider? Remember, you can have multiple providers of the same type (google) that point to different projects. Sharing a code example would provide more information.
See:
https://medium.com/scalereal/how-to-use-multiple-aws-providers-in-a-terraform-project-672da074c3eb (this is for AWS, but same idea)
https://www.terraform.io/language/providers/configuration
Looks like this is a known issue and happens through gcloud cli as well.
https://issuetracker.google.com/180053712
The workaround is to enable the Kubernetes Engine API on that project (svc-accnts), and then it works fine. I was hesitant to do that, as I thought it might create the resources in that project.
Not sure what the right terms were to start this question, but basically I have a downloaded UI tool that runs on 0.0.0.0:5000 on my AWS EC2 instance, and my EC2 instance has a public IP address associated with it. So right now everyone in the world can access this tool by going to {ec2_public_ip}:5000.
I want to run some kind of script or add security group inbound rules that will require authorization prior to letting someone view the page. The application running on port 5000 is a downloaded tool, not my own code, so it wouldn't be possible to add authentication to the tool itself (it's KafkaMagic, FYI).
The one security measure I was able to take so far was to only allow specific IPs to make TCP connections to port 5000, which is a good start but not enough, as there is no guarantee someone on that IP is authorized to view the tool. Is it possible to require an IAM role to access the IP? I do have a separate API with a login endpoint that could be useful if it was possible to run a script before forwarding the request; is that a possible/viable solution? Not sure what best practice is in this case; there might be a third option I have not considered.
ADD-ON EDIT
Additionally, I am using EC2 Instance Connect, and if it is possible to require an active SSH connection before accessing the EC2 instance's IP, that would be a good solution as well.
EDIT FOLLOWING INITIAL DISCUSSION
Another approach that would work for me is if I had a small app running on a different port that could leverage our existing UI to log a user in. If a user authenticated through this app, would it be possible to then display the UI from port 5000 to them? In this case KafkaMagic would be on a private IP, and there would be a different IP that the user would go through before seeing the tool.
In short, the answer is no. If you want authorization (I think you mean authentication) to access an application running on the server, you need tools that run on the server. If your tool offers such a capability, use it. It looks like Kafka Magic has such a capability: https://www.kafkamagic.com/faq/#how-to-authenticate-kafka-client-by-consumer-group-id
But you can't use external tools, like AWS, to perform such authentication. A security group is like a firewall: it either allows or blocks access to the port.
You can easily create a script that uses the AWS SDK, or even just executes the AWS CLI, to view/add/remove an IP address in a security group. How you execute that script depends on your audience and what language you use.
For a small number of trusted users, you could issue them an IAM user and API key with a policy that allows them to manage a single dynamic security group. Then provide a script they can run (or a shortcut to click) that gets the current gateway IP and adds/removes it from the security group.
If you want to allow users via a website, a simple script behind some existing authentication is also possible with the SDK/CLI approach (depending on the server-side scripting available).
If users have SSH access, you could authorise the IP by calling the script/CLI from .bashrc or some other startup script.
In any case, the IAM policy that grants permissions to modify the SG should be as restrictive as possible (basically, don't use any *'s in the policy). You can add additional conditions like the source IP/range (i.e. in your VPC) or that MFA must be active for the user, etc., to make this more secure (this can be handled in either case via the script). If you're running on EC2, I'd suggest looking at IAM instance roles as an easy way to give your server access to credentials for your script (but you can create a user, deploy the key/secret to the server, and manage it manually if you want).
For safety, I would also suggest creating a dedicated security group for dynamically managed access, alongside the existing SGs required for internal operation. It would be a good idea to implement a Lambda function on a schedule to flush the dynamic SG (even if you script de-authorising an IP, it might not happen, so it's good to clean up safely/automatically).
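As a rough sketch of the script idea above, using the AWS SDK for JavaScript v3: it looks up the caller's current public IP and authorizes it on port 5000 in the dedicated security group. The group ID and region are placeholders, and credentials come from the usual SDK chain (instance role, profile, user key, etc.):

// Rough sketch – authorize the caller's current IP on port 5000 in a dedicated SG.
const { EC2Client, AuthorizeSecurityGroupIngressCommand } = require('@aws-sdk/client-ec2');
const https = require('https');

// Look up the caller's current public IP (any "what is my IP" service works).
function currentIp() {
  return new Promise((resolve, reject) => {
    https.get('https://checkip.amazonaws.com', (res) => {
      let body = '';
      res.on('data', (chunk) => (body += chunk));
      res.on('end', () => resolve(body.trim()));
    }).on('error', reject);
  });
}

async function allowMyIp() {
  const ip = await currentIp();
  const ec2 = new EC2Client({ region: 'us-east-1' });
  await ec2.send(new AuthorizeSecurityGroupIngressCommand({
    GroupId: 'sg-0123456789abcdef0',
    IpPermissions: [{
      IpProtocol: 'tcp',
      FromPort: 5000,
      ToPort: 5000,
      IpRanges: [{ CidrIp: `${ip}/32`, Description: 'temporary user access' }],
    }],
  }));
  console.log(`Authorized ${ip}/32 on port 5000`);
}

allowMyIp().catch(console.error);

A matching de-authorize step would use RevokeSecurityGroupIngressCommand with the same parameters, or you can leave the cleanup to the scheduled flush mentioned above.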
So I am trying to deploy a Cloud Function to shut down all VMs in different projects on GCP.
I have added the functionality to a single project using this guide: https://cloud.google.com/scheduler/docs/start-and-stop-compute-engine-instances-on-a-schedule
It shuts down / starts VMs with the correct tag.
Now I want to extend this to all VMs across projects, so I was thinking I need another service account that I can add under the Cloud Function.
I have gotten a service account from our cloud admin that has access to all projects, added it under IAM, and given it the Owner role. But the issue is that I cannot assign the service account to the function.
Is there something I am missing? Or is there an easier way of doing what I am trying to accomplish?
The easiest way is to give the service account used by that Cloud Function access to the other projects. You just need to go to your other projects, add this service account in the IAM section, and give it the permissions it needs, for example the Compute Admin role (roles/compute.admin) in this case.
Note that by default, Cloud Functions uses the App Engine default service account, which may not be convenient for you since the App Engine app in your Cloud Function's project would also be granted the compute.admin role in the other projects.
I'd recommend to create a dedicated service account for this use case (in the same project than your Function) and assign it to the Function and then add it as member of the other projects.
Then, in your Cloud Function, you'll need to run your code for each project you'd like to act upon. You can create a separate client object for each, specifying the project ID as a constructor option, like so:
const Compute = require('@google-cloud/compute');
// One client per project, scoped via the constructor option:
const compute = new Compute({
  projectId: 'your-project-id'
});
So far, you only loop through the VMs in the project the Function runs in.
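A rough sketch of what that per-project loop could look like with the same legacy @google-cloud/compute client used in the guide; the project IDs and the label filter are placeholders:

// Rough sketch – stop labelled VMs in several projects (legacy @google-cloud/compute client).
const Compute = require('@google-cloud/compute');

const projectIds = ['project-a', 'project-b', 'project-c'];

async function stopLabelledVms() {
  for (const projectId of projectIds) {
    // One client per project; the Function's service account needs compute
    // permissions in each of these projects.
    const compute = new Compute({ projectId });
    const [vms] = await compute.getVMs({ filter: 'labels.autostop=true' });
    await Promise.all(vms.map((vm) => vm.stop()));
    console.log(`Stopped ${vms.length} VMs in ${projectId}`);
  }
}

stopLabelledVms().catch(console.error);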
Another option would be to have such a function defined in each project you'd like to act upon. You'd have a "master" function that you'd call; it would act on the VMs in its own project and call the functions in the other projects to act on theirs.
On the instance details page of my Google Cloud SQL instance, I see that there's a "Service account" card on the dashboard with a value whose domain includes speckle-umbrella. This account doesn't show up in the IAM settings or in any service account lists. Regarding its purpose, the most I've been able to find is this question, but it seems to only deal with granting the account privileges. A couple of questions:
What is this account for?
Why is this account not enumerated with the rest of the service accounts?
Google Cloud SQL is a managed service and the instances you create and use actually run in a Google-owned project. The service account you mentioned belongs to that project and is used to perform operations in that project.
The relationship between both projects is clearer with some specific use cases. For example, in the other SO question you linked about exporting data from Cloud SQL to Google Cloud Storage, you need to grant access to your bucket to the service account in question, since this SA will be used to authenticate the request to GCS.
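As an illustration, granting that service account write access to an export bucket could look roughly like this with the Node.js Storage client; the bucket name and the service account e-mail are placeholders (the real e-mail is the one shown on the instance's "Service account" card):

// Rough sketch (@google-cloud/storage) – bucket name and service account e-mail are placeholders.
const { Storage } = require('@google-cloud/storage');

async function grantExportAccess() {
  const bucket = new Storage().bucket('my-sql-export-bucket');
  const [policy] = await bucket.iam.getPolicy({ requestedPolicyVersion: 3 });
  // objectAdmin lets the instance's service account write the export file into the bucket.
  policy.bindings.push({
    role: 'roles/storage.objectAdmin',
    members: ['serviceAccount:xxxxxxxx@speckle-umbrella-xx.iam.gserviceaccount.com'],
  });
  await bucket.iam.setPolicy(policy);
}

grantExportAccess().catch(console.error);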
Another example is when you create a Cloud SQL instance with a private IP. The connection through the internal IP is made possible by peering the network in your project with the network in the speckle-umbrella project where your Cloud SQL instance resides. You're then able to see that peering in your Developer Console, showing the "speckle-umbrella" project as the peered project ID.