How to get Google Cloud Build working inside VPC Perimeter? - google-cloud-platform

I have a question that is confusing me a little. I have a project locked down at the org level through a perimeter fence. This is to whitelist IP ranges that can access a Cloud Storage bucket, as the user has no ability to authenticate through service accounts or APIs and requires streaming of data.
This is fine and working; however, I am confused about how to open up access to serverless environments inside GCP as well. The issue in question is Cloud Build. Since introducing the perimeter I can no longer run Cloud Build due to a violation of VPC Service Controls. Can anyone point me in the direction of how to enable this, since whitelisting the entire Cloud Build IP range is obviously not an option?

You want to create a Perimeter Bridge between the resources that you want to be able to access each other. You can do this in the console or using gcloud as noted in the docs that I linked.
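If it helps, here is a minimal sketch of what creating a bridge-type perimeter could look like programmatically, using the Access Context Manager REST API through google-api-python-client; the access policy number and the two project numbers are placeholders, and the same thing can be done from the console or gcloud as mentioned above.

```python
# Hedged sketch: create a bridge-type service perimeter with the Access
# Context Manager REST API via google-api-python-client. The access policy
# number and both project numbers below are placeholders.
from googleapiclient import discovery

acm = discovery.build("accesscontextmanager", "v1")

policy = "accessPolicies/0123456789"  # your organization's access policy

bridge = {
    "name": f"{policy}/servicePerimeters/bridge_to_build",
    "title": "bridge_to_build",
    "perimeterType": "PERIMETER_TYPE_BRIDGE",
    "status": {
        "resources": [
            "projects/111111111111",  # locked-down project holding the bucket
            "projects/222222222222",  # project where Cloud Build runs
        ]
    },
}

# Returns a long-running operation describing the perimeter creation.
operation = acm.accessPolicies().servicePerimeters().create(
    parent=policy, body=bridge
).execute()
print(operation)
```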

The official documentation mentions that if you use VPC Service Controls, some services are not supported, Cloud Build among them; that is why the problem started right after you deployed the perimeter.

Hi all, so the answer is this.
What you want to do is set up one project that is locked down by the VPC perimeter and has no APIs enabled; that project holds the IP-whitelisted storage bucket. Then you create a second project that is inside a VPC but does not have the Cloud Storage API (etc.) disabled. From there you can read directly from the IP-whitelisted Cloud Storage bucket in the other project.
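As a rough sketch, the read from the second project might look like this with the google-cloud-storage client; the project, bucket, and object names are all placeholders:

```python
# Minimal sketch, assuming the second project has the Cloud Storage API
# enabled and the caller's IP is on the bucket's whitelist. All names below
# are placeholders.
from google.cloud import storage

# The client runs under the second (non-restricted) project.
client = storage.Client(project="ingest-project")

# The bucket itself lives in the locked-down project; it is addressed by name.
bucket = client.bucket("ip-whitelisted-bucket")
blob = bucket.blob("exports/data.csv")
blob.download_to_filename("/tmp/data.csv")
```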
Hope this makes sense; I wanted to share this back for the awesome folks above who put me on the right track.
Thanks again

Cloud Build is now supported by VPC Service Controls; see VPC Service Controls supported products and limitations.

Related

I can't find and disable AWS resources

My free AWS tier is going to expire in 8 days. I removed every EC2 resource and Elastic IP associated with it, because that is what I recall initializing and experimenting with. I deleted all the roles I created because, as I understand it, roles permit AWS to perform actions for AWS services. And yet, when I go to the billing page, it shows I have these three services in current usage.
(Screenshot of the billing page: https://i.stack.imgur.com/RvKZc.png)
I used the script as recommended by AWS documentation to check for all instances and it shows "no resources found".
Link for script: https://docs.aws.amazon.com/systems-manager-automation-runbooks/latest/userguide/automation-awssupport-listec2resources.html
I tried searching for each service using the dashboard and didn't get anywhere. I found an S3 bucket that I don't remember creating, but I deleted it anyway, and I still get the same output.
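For completeness, a quick way to double-check from code that nothing is left running is a boto3 sketch along these lines (the region is a placeholder; repeat it for every region you used):

```python
# Hedged sketch: list EC2 instances and Elastic IPs in one region with boto3.
# The region name is a placeholder; run it against every region you touched.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Any instance not in the 'terminated' state can still generate charges.
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])

# Elastic IPs are billed while they sit unattached.
for address in ec2.describe_addresses()["Addresses"]:
    attached = "attached" if "InstanceId" in address else "unattached"
    print(address.get("PublicIp"), attached)
```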
Any help is much appreciated.
OK, I was able to get in touch with AWS support via live chat, and they informed me that those services in my billing were usage generated before the services were terminated. AWS support was much faster than I expected.

Is there any way to safely upgrade a Google Cloud Platform VPC?

Is there a practice for updating a Google Cloud Platform VPC?
There is no procedure for updating a VPC in Google Cloud. The Google Cloud Platform VPC is updated automatically by Google, and you can check the release notes here. The effects on your project are also covered on the release notes page.
If you are referring to updating a specific feature of your VPC, like adding a new region, please elaborate.

How do I ensure that my customer does not misuse my solution offered via Google Cloud Marketplace

I am planning to deploy a VM solution on Google Cloud Marketplace. I have two concerns about this.
I have read this document and it mentions that Google Cloud images can be exported to Cloud Storage. Can this method be used to run the image in a local environment?
The VM solution I am providing contains an application binary. If a customer copies this binary to some other machine, they will be able to run it. How do I prevent that from happening?
Forgive me if my questions are absurd. I am not able to find answers to these questions anywhere.
The method described in the link you provided states that Cloud Storage is its only possible destination. I have looked into it, and so far I haven't found another way to do this through the console. I have also looked into the gcloud command to see if it could once again come to the rescue; so far no luck. Though I am not sure if this would suit your needs, you can download objects from buckets in Cloud Storage. So in a way you can get the image (if it's an object) downloaded to your local environment.
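As an illustration, downloading such an object to a local machine with the google-cloud-storage library could look roughly like this (project, bucket, and object names are placeholders):

```python
# Sketch of pulling an exported image archive out of Cloud Storage to a
# local machine; all names below are placeholders.
from google.cloud import storage

client = storage.Client(project="marketplace-project")
bucket = client.bucket("image-export-bucket")

# Exported Compute Engine images land in the bucket as .tar.gz objects.
blob = bucket.blob("exports/my-solution-image.tar.gz")
blob.download_to_filename("my-solution-image.tar.gz")
```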
As for your second question, about preventing someone from copying your application binary, what you are asking about is copy protection. This is where I quote something from the GCP console itself:
Cloud Storage IDs
Project members can access Cloud Storage data according to their project roles. To modify other permissions, use these group IDs to identify these roles.
Which may or may not suit your needs (as it pertains to Cloud Storage and not on-premises).

Website with Google cloud compute

Total NOOB question. I want to set up a website on Google Cloud's compute platform with:
static IP/IP range (external API requirement)
simple front-end
average to low traffic, with a maximum of a few thousand requests a day
separate database instance.
I went through the documentation of the services offered by Google and Amazon. I am not fully sure what the best way to go about it is. I understand that there is no right answer.
A viable solution is:
Spin up an n1-standard instance on GCP (I prefer to use Debian).
Get a static IP, which is free as long as you don't leave it dangling unattached (see the sketch after this list).
Depending upon your DB type, choose Cloud SQL for structured data or Cloud Datastore for unstructured data.
Nginx is a viable option for the web server. Get started here.
The rest is up to you. What kind of stack are you using to build your app? How are you going to deploy your code to the instance? You might later want to use Docker and Kubernetes (k8s) to get flexibility between cloud providers and scaling needs.
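As a sketch of the static IP step, reserving a regional external address through the Compute Engine API with google-api-python-client might look like this (project, region, and address name are placeholders; gcloud or the console do the same job):

```python
# Hedged sketch: reserve a static external IP address with the Compute
# Engine API via google-api-python-client. Project, region, and name are
# placeholders.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

operation = compute.addresses().insert(
    project="my-website-project",   # placeholder project ID
    region="us-central1",           # placeholder region
    body={"name": "website-ip"},    # placeholder address name
).execute()
print(operation)

# Once reserved, look up the literal IP to point your DNS / external API at.
address = compute.addresses().get(
    project="my-website-project", region="us-central1", address="website-ip"
).execute()
print(address["address"])
```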
The easiest way of creating the website you want would be Google App Engine with Datastore as the DB. However, it doesn't support static IPs; this is due to a design choice. Is that requirement absolutely mandatory?
App Engine does not currently provide a way to map static IP addresses to an application. In order to optimize the network path between an end user and an App Engine application, end users on different ISPs or geographic locations might use different IP addresses to access the same App Engine application. DNS might return different IP addresses to access App Engine over time or from different network locations.

Are there any cloud libraries that takes EC2 inputs?

This might seem like a somewhat strange question, but are there any cloud libraries (jclouds, libcloud, etc.) that can take EC2 commands and port them to other clouds?
The idea is basically to enable a company with a native EC2 integration to move to a different cloud provider without having to rewrite the provisioning code.
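Apache Libcloud is probably the closest fit here: it doesn't translate raw EC2 API calls, but it exposes one provider-neutral compute API, so the same provisioning code can target EC2 or another cloud by swapping the driver. A rough sketch (credentials and region are placeholders):

```python
# Illustrative sketch of Apache Libcloud's provider-neutral compute API.
# Credentials and region are placeholders; swapping Provider.EC2 for another
# provider keeps the list_nodes()/create_node() calls unchanged.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

Driver = get_driver(Provider.EC2)
conn = Driver("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", region="us-east-1")

for node in conn.list_nodes():
    print(node.id, node.name, node.state)
```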
Not exactly what you're looking for, but you can use a service such as Ravello, which supports multiple public clouds for deployment.
The user interacts with the Ravello API/UI, and Ravello handles the interaction with the various cloud APIs.