Is there a way to share a specific subnet of a Shared VPC to a project?
Right now, when I share the subnets of a Shared VPC, I can only specify which subnets to share and which projects to share with; all of the shared subnets then show up in every service project.
I would like to share shared-subnet-1 with project-1 and shared-subnet-2 with project-2, but I don't want shared-subnet-1 to show up in project-2 and vice versa.
Shared VPC uses Identity and Access Management (IAM) roles for delegated administration. With project-level and subnet-level permissions, a Shared VPC Admin can grant permission to use either the whole host project or just specific subnets; for details, check this GCP documentation. Based on this, more granular IAM role bindings make it possible to grant access to specific resources only.
Detailed directions on how to modify the configuration of an existing host project can be found in this article (step 7, "VPC network sharing mode" section). The same GCP article also describes how to define an IAM role for:
Service Project Admins for all subnets
Service Project Admins for some subnets
Service Accounts as Service Project Admins
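As a sketch of the subnet-level option, the Compute Network User role can be bound on a single subnet rather than the whole host project (all project, subnet, and user names below are placeholders):

```shell
# Run as a Shared VPC Admin. Grants this user permission to use only
# shared-subnet-1 in the host project, not the other shared subnets.
gcloud compute networks subnets add-iam-policy-binding shared-subnet-1 \
    --project=host-project \
    --region=us-central1 \
    --member="user:project1-admin@example.com" \
    --role="roles/compute.networkUser"
```

Note that subnet-level IAM controls who can *use* a subnet; it does not necessarily hide the other subnets from listing in the service project.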
Background:
I have a Shared VPC project called SharedVPC with a network network01 and a subnet serverless-subnet01: 10.200.12.0/28
The Shared VPC Project shares its networks and subnets with another project project1
Nothing else is using serverless-subnet01
All resources in both projects are in us-central1
I have the owner role in both projects
vpcaccess.googleapis.com is enabled in project1
The issue:
I want to create a Serverless VPC Access Connector in project1 using network01 and serverless-subnet01, but when trying to follow the documentation to create a connector, the following error occurs after clicking "create" with us-central1 as the region, network01 as the network, and serverless-subnet01 as the subnet:
Operation failed: VPC Access did not have permission to resolve the subnet or the provided subnet does not exist.
I have attempted to apply the troubleshooting steps in the documentation, with the following results:
There is no account with a name like service-PROJECT_NUMBER@gcp-sa-vpcaccess.iam.gserviceaccount.com, or with the role roles/vpcaccess.serviceAgent, in either project1 or SharedVPC. Edit: there is an account in SharedVPC named service-SHAREDVPC_PROJECT_NUMBER@gcp-sa-vpcaccess.iam.gserviceaccount.com, but it is only visible through gcloud commands, and adding the appropriate roles to it does not fix the issue.
No network overlaps with serverless-subnet01.
There are no firewall rules with a priority over 1000 that deny ingress.
The solution: there was a VPC Access service account for project1, but it was only visible through gcloud commands rather than the console. This account needs the roles/vpcaccess.serviceAgent role in the Shared VPC host project in order to access the subnet.
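A minimal sketch of that fix, assuming the project IDs and numbers are placeholders you substitute with your own:

```shell
# Grant project1's VPC Access service agent (which may not appear in the
# console's IAM page) the serviceAgent role on the Shared VPC host project.
gcloud projects add-iam-policy-binding shared-vpc-project-id \
    --member="serviceAccount:service-PROJECT1_NUMBER@gcp-sa-vpcaccess.iam.gserviceaccount.com" \
    --role="roles/vpcaccess.serviceAgent"
```

PROJECT1_NUMBER is the *numeric* ID of the service project, which you can look up with `gcloud projects describe project1 --format='value(projectNumber)'`.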
I am writing a Terraform file in GCP to create a Shared VPC, GKE cluster, and Compute Engine instance in the service project of the Shared VPC.
I am facing an error for GKE: a 403 permission error mentioning service.hostagent, even though the account has the required permissions.
I am also using a service account key, and I am not sure whether this is the correct approach: I created a service account in the host project, added that service account ID to the IAM policy of the service project, and am using the host project's service account key. Is that the right approach?
Thanks.
When creating a Shared VPC, sharing a subnet from the host project to a service project makes it usable by the members who have been granted access in the service project.
From the error message, it looks like IAM permissions are missing. When creating a Shared VPC with GKE, make sure that you have the following permissions:
To create a Shared VPC, the Shared VPC Admin role is required (which you seemingly already have).
To share your subnets, you need to give users the Compute Network User role.
While creating GKE configuration, make sure to enable Google Kubernetes Engine API in all projects. Enabling the API in a project creates a GKE service account for the project.
When attaching a service project, enabling Kubernetes Engine access grants the service project's GKE service account the permissions to perform network management operations in the host project.
Each service project's GKE service account must have a binding for the Host Service Agent User role on the host project. This role is specifically used for Shared VPC clusters and includes the following permissions:
a) compute.firewalls.get
b) container.hostServiceAgent.*
For additional information, you can see here.
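The binding described above can be sketched with gcloud (project IDs are placeholders; the GKE service account follows the documented service-PROJECT_NUMBER@container-engine-robot.iam.gserviceaccount.com pattern):

```shell
# Look up the service project's numeric ID, then grant its GKE service
# account the Host Service Agent User role on the host project.
SERVICE_PROJECT_NUMBER=$(gcloud projects describe service-project-id \
    --format='value(projectNumber)')

gcloud projects add-iam-policy-binding host-project-id \
    --member="serviceAccount:service-${SERVICE_PROJECT_NUMBER}@container-engine-robot.iam.gserviceaccount.com" \
    --role="roles/container.hostServiceAgentUser"
```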
How to enable VPC access for AWS CodeBuild/Code Pipeline?
I am working on a Neptune database, and it requires VPC access. While building code inside AWS CodeBuild, my tests are failing because CodeBuild is not able to access the Neptune database. How can I configure the pipeline to allow CodeBuild to access the VPC?
This AWS documentation guide will help you configure your CodeBuild project with your VPC.
But I am sure you must have gone through it already. Please share the error as well.
Link
Select Environment in your CodeBuild project settings; in the advanced settings section you can select the VPC, subnet, and security group for your project.
For Subnets, choose a private subnet that has routes to your database. If internet access is required, a NAT gateway must be attached in the route table of the private subnet; for internet access, CodeBuild only works with NAT, not with a public subnet.
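The same configuration can be applied from the CLI; here is a sketch, where the project name and all resource IDs are placeholders:

```shell
# Attach an existing CodeBuild project to a VPC, a private subnet with a
# route to the database, and a security group allowed by Neptune.
aws codebuild update-project \
    --name my-neptune-tests \
    --vpc-config vpcId=vpc-0123456789abcdef0,subnets=subnet-0aaa1111bbb222ccc,securityGroupIds=sg-0ddd3333eee444fff
```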
Be sure you have enabled AWS IAM authentication in your Neptune database config. You then need to allow the role you are running CodeBuild under to access that Neptune database. Assuming it is an IAM error, please post more information if this is not the case. You will need to ensure the role you run as has the correct permissions to query Neptune.
There are detailed documents here on how to do this.
You can assign a managed policy to your role; the following are available:
NeptuneReadOnlyAccess
NeptuneFullAccess
NeptuneConsoleFullAccess <-- not really applicable to a CI process.
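Attaching one of these managed policies is a one-liner; the role name below is a placeholder for your CodeBuild service role:

```shell
# Give the CodeBuild service role read-only access to Neptune via the
# AWS-managed policy (use NeptuneFullAccess if your tests also write).
aws iam attach-role-policy \
    --role-name codebuild-neptune-service-role \
    --policy-arn arn:aws:iam::aws:policy/NeptuneReadOnlyAccess
```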
I have three separate GCP accounts: one for productA, one for productB, and one for devops monitoring. Each account currently has 1 project (more to be added in the future) which has multiple VMs. I want to monitor the VMs (for productA/project and productB/project) from the devops GCP account so I can consolidate the monitoring. The monitoring products are Prometheus, Grafana, and Graylog (not GCP).
I am not using organisations at the moment (I don't use G Suite or Cloud Identity).
Do I need VPC networking peering or shared VPC?
Any advice or recommendations on how to do this would be much appreciated.
Shared VPC allows an organization to connect resources from multiple projects to a common Virtual Private Cloud (VPC) network. If your resources are already in different projects but using the same VPC, the Shared VPC concept is already in place.
However, in your case it seems like your resources are using different VPCs, each specific to its own project. Here, VPC Network Peering can be used, regardless of whether the networks belong to the same project or the same organization. It is even possible to set up VPC Network Peering between two Shared VPC networks. Here is an example of a VPC Network Peering setup.
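A minimal peering sketch between the devops network and one product network (all project and network names are placeholders); note that peering must be created from both sides before it becomes active:

```shell
# From the devops project, peer its VPC with productA's VPC.
gcloud compute networks peerings create devops-to-producta \
    --project=devops-project --network=devops-vpc \
    --peer-project=producta-project --peer-network=producta-vpc

# The matching peering from productA's side completes the connection.
gcloud compute networks peerings create producta-to-devops \
    --project=producta-project --network=producta-vpc \
    --peer-project=devops-project --peer-network=devops-vpc
```

You would repeat the same pair of commands for productB, and make sure the subnet CIDR ranges in the peered networks do not overlap.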
I have my RDS instances in one AWS account and have set up my application on a Kubernetes cluster in another account. I need the application to talk to the RDS instances in the other account. I chose a VPC endpoint (PrivateLink) to achieve this, so that the RDS data is safe and secure. Is it possible to have PrivateLink established between multiple AWS accounts? Both accounts are under the same AWS organization.
Is it possible to have PrivateLink established between multiple AWS accounts?
Yes. The AWS documentation explains that a service consumer can be a different account:
Grant permissions to specific service consumers (AWS accounts, IAM users, and IAM roles) to create a connection to your endpoint service.
Setting up permissions for other accounts to your Private Link service is explained in:
Adding and removing permissions for your endpoint service
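As a sketch of that step, the account hosting the endpoint service allowlists the consumer account (the service ID and account ID below are placeholders):

```shell
# In the RDS-side account: permit the Kubernetes account to create
# interface endpoints that connect to this endpoint service.
aws ec2 modify-vpc-endpoint-service-permissions \
    --service-id vpce-svc-0123456789abcdef0 \
    --add-allowed-principals "arn:aws:iam::123456789012:root"
```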
I think the better architecture would be to use VPC Peering to connect the VPC with the database to the VPC with the Kubernetes cluster.
The data remains "safe and secure" because it stays within the two VPCs.
No Network Load Balancer would be required.
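The peering alternative can also be sketched from the CLI (VPC and account IDs are placeholders); the request is made from one account and accepted from the other:

```shell
# From the Kubernetes account: request peering with the database VPC.
aws ec2 create-vpc-peering-connection \
    --vpc-id vpc-0aaa1111bbb222ccc \
    --peer-owner-id 123456789012 \
    --peer-vpc-id vpc-0ddd3333eee444fff

# From the RDS account: accept it, using the pcx- ID the request returned.
aws ec2 accept-vpc-peering-connection \
    --vpc-peering-connection-id pcx-0123456789abcdef0
```

After acceptance, each side still needs route-table entries for the other VPC's CIDR and security-group rules permitting the database port.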