Google Cloud Build on private VPC network - google-cloud-platform

I have a Google Cloud Build trigger that builds my image on Google Cloud. I also have a VPC network hosting some resources that need to be reachable while the images are being built.
While building the image, my Docker script needs to access a web server, but the Cloud Build network does not appear to be my private VPC network, so the script cannot reach the resources it needs during the build.
Is it possible to run the build inside the VPC network? If yes, how?

It WILL be possible. Today the feature is open to Alpha testers, and it should be released in beta soon (I expect within 2 months).
Last week the gcloud SDK received an update that allows you to create a worker pool. In effect, you create a pool of VMs in your own project, and those VMs are directly connected to your VPC.
I don't know the pricing model, but I think you will pay for the worker pool at standard VM prices, so it won't be as cheap as Cloud Build. And there seems to be no plan for a connector (peering? VPC connector?) between your VPC and the current managed version of Cloud Build.
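For illustration, here is roughly what creating a pool looked like with the alpha command at the time; flag names were still in flux in the alpha, and the pool, project, and network names are placeholders:

gcloud alpha builds worker-pools create my-private-pool \
    --regions=us-central1 \
    --peered-network=projects/MY_PROJECT/global/networks/my-vpc

Check gcloud alpha builds worker-pools create --help in your SDK version for the exact syntax.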

Related

Deploy AWS Amplify Web App to EC2 (Not Lambda)

I recently realised that the Next.js project I deployed on AWS Amplify uses Lambda, but I need to deploy it on EC2 instead. Is this possible at all?
I'm new to this whole thing, so excuse the ignorance, but for certain reasons I need to use EC2.
Is that possible?
Thanks
AWS EC2 is a service that provides all the compute, storage, and networking needs you may have for any application you want to develop. From its site:
Amazon EC2 offers the broadest and deepest compute platform with a choice of processor, storage, networking, operating system, and purchase model.
Source
Basically, you can create any number of virtual machines, connected among themselves and to the Internet however you like; and use any data persistence strategy.
There are many things to unpack when using EC2, but to start, I would suggest that you learn how to set up an EC2 instance using the default VPC that comes with your account. Be sure to configure the instance to have a public IP so you can access it through the Internet. Once inside, you can deploy your application however you like and access it through your public IP.
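As a hedged starting point, launching such an instance from the CLI looks roughly like this; the AMI, key pair, security group, and subnet IDs are placeholders you would replace with your own:

# Launch a small instance in the default VPC with a public IP (placeholder IDs)
aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type t3.micro \
    --key-name my-key-pair \
    --security-group-ids sg-0123456789abcdef0 \
    --subnet-id subnet-0123456789abcdef0 \
    --associate-public-ip-address

Make sure the security group allows inbound SSH (port 22) and whatever port your application serves on.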
Before moving on, try to decide why you need your app to run on EC2. Lambda is a FaaS (Function as a Service) product, meaning that all of the infrastructure is managed by the service provider. On the other hand, EC2 is an IaaS (Infrastructure as a Service) product, which means that you have to handle most of the infrastructure yourself.

Not able to use Serverless VPC Access for Cloud Run and Compute Engine

I have a Cloud Run service and a Compute Engine VM instance, both are in europe-north1 region.
I would like to connect Cloud Run to Compute Engine VM Instance's internal IP address. For that I tried to create a 'Serverless VPC Access'. When I see the supported regions, there are europe-west[1-3] but not europe-north... And the documentation says that:
In the Region field, select a region for your connector. This must match the region of your serverless service
Does this mean that I cannot use Serverless VPC Access if my services are in europe-north1?
Nevertheless, I created the connector in europe-west3, thinking that it is the closest one, with the suggested IP range 10.8.0.0/28. However, when I go to Cloud Run > service > Edit & Deploy New Revision > Connections tab, I don't see the VPC connector listed in the dropdown box. It has already been 30 minutes since I created the connector. Does it take more time to appear?
europe-north1 isn't a supported region for the serverless VPC connector.
If you created a Serverless VPC Access connector in europe-west3, it is immediately available for Cloud Run (or other services). If you don't see it, I think it's because your Cloud Run service isn't in the same region: only compliant serverless VPC connectors are shown (and available).
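For example, assuming the Cloud Run service were created in (or moved to) europe-west3, a matching connector could be created like this, reusing the suggested range from the question; the connector and network names are placeholders:

gcloud compute networks vpc-access connectors create my-connector \
    --region=europe-west3 \
    --network=default \
    --range=10.8.0.0/28

Once the connector and the service share a region, the connector should appear in the Connections tab dropdown.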

how to create table automatically in aws aurora serverless with serverless framework

I'm trying to create tables automatically with npm migrate whenever we deploy changes with the Serverless Framework. It worked fine when I used a regular Aurora database, but since I moved to Aurora Serverless RDS (Sydney region) it doesn't work at all, because Aurora Serverless runs inside a VPC, so a Lambda function that needs to access it must be in the same VPC.
PS: we're using GitHub Actions as the pipeline to deploy everything to Lambda.
Please let me know how to solve that issue, thanks.
There are only two basic ways that you can approach this: open a tunnel into the VPC or run your updates inside the VPC. Here are some of the approaches to each that I've used in the past:
Tunnel into the VPC:
VPN, such as OpenVPN.
Relatively easy to set up, but designed to connect two networks together and represents an always-on charge for the server. Would work well if you're running the migrations from, say, your corporate network, but not something that you want to try to configure for GitHub Actions (or any third-party build tool).
Bastion host
This is an EC2 instance that runs in a public subnet and exposes SSH to the world. You make an SSH connection to the bastion and then tunnel whatever protocol you want underneath. Typically run as an "always on" instance, but you can start and stop it programmatically.
I think this would add a lot of complexity to your build. Assuming that you just want to run on demand, you'd need a script that would start the instance and wait for it to be ready to accept connections. You would probably also want to adjust the security group ingress rules to only allow traffic from your build machine (whose IP is likely to change for each build). Then you'd have to open the tunnel, by running ssh in the background, and close it again after the build is done.
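As a sketch of what the tunneling step might look like (the bastion address and Aurora endpoint are placeholders), the build would run something like:

# Background tunnel: local port 3306 forwards to the Aurora endpoint
# through the bastion; -f backgrounds ssh, -N skips the remote shell
ssh -f -N -L 3306:my-cluster.cluster-abc123.ap-southeast-2.rds.amazonaws.com:3306 ec2-user@BASTION_PUBLIC_IP

The migration would then target 127.0.0.1:3306 as if the database were local, and the build script would kill the ssh process when it finishes.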
Running the migration inside the VPC:
The simplest approach (imo) is to move your build inside the VPC using CodeBuild. If you do this you'll need a NAT so that the build can talk to the outside world. It's also not as easy to configure CodeBuild to talk to GitHub as it should be (there's one manual step where you need to provide an access token).
If you're doing a containerized deployment with ECS, then I recommend packaging your migrations in a container and deploying it onto the same cluster that runs the application. Then you'd trigger the run with aws ecs run-task, sketched after this list (I assume there's something similar for EKS, but I haven't used it).
If you aren't already working with ECS/EKS, then you can implement the same idea with AWS Batch.
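For the run-task trigger mentioned above, a minimal sketch might look like the following; the cluster name, task definition, subnet, and security group IDs are all placeholders for whatever your stack actually defines:

# Run the migrations container as a one-off Fargate task on the
# cluster's own subnets, so it can reach Aurora without tunneling
aws ecs run-task \
    --cluster my-app-cluster \
    --task-definition db-migrations \
    --launch-type FARGATE \
    --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0]}'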
Here is an example of how you could approach database schema migration using Amazon API Gateway, AWS Lambda, Amazon Aurora Serverless (MySQL), and the Python CDK.

Can not connect between Cloud Run and Compute engine using Internal IP

I have a service that runs on Cloud Run, and MySQL and MongoDB databases on Compute Engine. Currently I'm using a public IP to connect between them; I want to use an internal IP to improve performance, but I can't find a solution for this. Please give me some ideas, thanks.
Now it is supported. You can use a Serverless VPC Access connector (Beta):
This feature is in a pre-release state and might change or have limited support. For more information, see the product launch stages.
This page shows how to use Serverless VPC Access to connect a Cloud Run (fully managed) service directly to your VPC network, allowing access to Compute Engine VM instances, Memorystore instances, and any other resources with an internal IP address.
To use Serverless VPC Access in a Cloud Run (fully managed) service, you first need to create a Serverless VPC Access connector to handle communication to your VPC network. After you create the connector, you set your Cloud Run (fully managed) service configuration to use that connector.
Here is how to create one: Creating a Serverless VPC Access connector, and here is an overview: Serverless VPC Access example
According to the official documentation, Connecting to instances using advanced methods:
If you have an isolated instance that doesn't have an external IP address (such as an instance that is intentionally isolated from external networks), you can still connect to it by using its internal IP address on a Google Cloud Virtual Private Cloud (VPC) network.
However, if you check the services not yet supported for Cloud Run, you will find:
Virtual Private Cloud: Cloud Run (fully managed) cannot connect to VPC networks.
Source: Services not yet supported
You can now do that by running this command upon deployment:
gcloud run deploy SERVICE --image gcr.io/PROJECT_ID/IMAGE --vpc-connector CONNECTOR_NAME
If you already have a Cloud Run deployment, you can update it by running the command:
gcloud run services update SERVICE --vpc-connector CONNECTOR_NAME
More information about that here
Connecting from Cloud Run (fully managed) to VPC private addresses is not yet supported.
This feature is in development and is called Serverless VPC Access. You can read more here.
If you have a Compute Engine instance running in the same VPC with a public IP address, you can create an SSH tunnel to connect to private IP addresses through the public instance. This requires creating the tunnel in your own code, which is easy to do.
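For testing from a local machine, the same idea looks like this from the command line; inside the service itself you would establish an equivalent tunnel with an SSH library. The instance name, zone, and the database's internal IP are placeholders:

# Forward local port 3306 to the database's internal IP through the
# public instance; everything after -- is passed to the ssh binary
gcloud compute ssh tunnel-instance --zone=us-central1-a -- -N -L 3306:10.128.0.5:3306

The client would then connect to 127.0.0.1:3306.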

AWS & Azure Hybrid Cloud Setup - is this configuration at all possible (Azure Load Balancer -> AWS VM)?

We have all of our cloud assets currently inside Azure, which includes a Service Fabric Cluster containing many applications and services which communicate with Azure VM's through Azure Load Balancers. The VM's have both public and private IP's, and the Load Balancers' frontend IP configurations point to the private IP's of the VM's.
What I need to do is move my VM's to AWS. Service Fabric has to stay put on Azure though. I don't know if this is possible or not. The Service Fabric services communicate with the Azure VM's through the Load Balancers using the VM's private IP addresses. So the only way I could see achieving this is either:
Keep the load balancers in Azure and direct the traffic from them to AWS VM's.
Point Azure Service Fabric to AWS load balancers.
I don't know if either of the above is technologically possible.
For #1, if I used Azure's load balancing, I believe the load balancer front-end IP config would have to use the public IP of the AWS VM, right? Is that not less secure? If I set it up to go through a VPN (if that's even possible), is that as secure as using internal private IPs as in the current load balancer config?
For #2, again, not sure if this is technologically achievable - can we even have Service Fabric Services "talk" to AWS load balancers? If so, what is the most secure way to achieve this?
I'm not new to the cloud engineering game, but very new to the idea of using two cloud services as a hybrid solution. Any thoughts would be appreciated.
As far as I know, creating a multi-region / multi-datacenter cluster in Service Fabric is possible.
Here is a brief list of requirements to get an initial sense of how this would work, and here is a sample, not approved by Microsoft, with a cross-region Service Fabric cluster configuration (I know these are different regions within Azure, not different cloud providers, but the sample is useful to see how some of the pieces are configured).
Hope this helps.
Based on the details provided in the comments of your own question:
SF is cloud agnostic; you could deploy your entire cluster without any dependencies on Azure at all.
The cluster you see in your Azure portal is just an Azure resource screen used to describe the details of your cluster.
You are better off creating the entire cluster in AWS than taking the requested approach, because in the end the only thing left in Azure would be this Azure resource screen.
Extending Oleg's answer ("creating a multi-region / multi-datacenter cluster in Service Fabric is possible"), I would add that it is also possible to create an Azure-agnostic cluster that you can host on AWS, Google Cloud, or on premises.
The one detail that is not entirely clear is that any option not hosted in Azure requires an extra level of management, because you have to handle the resources (VMs, load balancers, autoscaling, OS updates, and so on) yourself to keep the cluster updated and running.
Also, multi-region and multi-zone clusters were left aside in the SF roadmap for a long time because they are very complex to do, which is why they avoid recommending them; but it is possible.
If you want to go for AWS approach, I guide you to this tutorial: Create AWS infrastructure to host a Service Fabric cluster
This is the first of a 4 part tutorial with guidance on how you can Setup a SF Cluster on AWS infrastructure.
Regarding the other resources hosted on Azure, you could still access them from AWS without any problems.