How to access GCP resources from a private server - google-cloud-platform

My application is deployed on our private server, and I want to use some services from GCP, like Cloud Storage buckets and Secret Manager.
Suppose my application is deployed on an internal server and uses GCP services. Is that possible, or should we also deploy our app to GCP? My application is in JSP.
How can I do this, and what is the best practice?

You have more than one option. You can use Cloud VPN, as it securely connects your peer network to your Virtual Private Cloud (VPC) network through an IPsec VPN connection. Follow GCP's official documentation to set it up.
Another option is Google Cloud Hybrid Connectivity, specifically Cloud Interconnect, which allows you to connect your infrastructure to Google Cloud. Visit the following link for best practices and the setup guide.
Finally, see the following thread for more references on your connectivity requirement.
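Note that Cloud VPN and Cloud Interconnect address the private network path; the GCP client libraries themselves authenticate with credentials and can call Cloud Storage and Secret Manager over their public HTTPS endpoints as well. A minimal sketch in Python (the same pattern applies with the Java client libraries a JSP app would use), assuming a service account key file with the appropriate IAM roles; the key path, bucket, project, and secret names below are placeholders:

    # Minimal sketch: calling Cloud Storage and Secret Manager from a server
    # outside GCP, authenticated with a downloaded service account key file.
    # The key path, bucket, object, project, and secret names are
    # placeholders, not from the original question.
    from google.cloud import secretmanager, storage

    KEY_FILE = "/etc/app/sa-key.json"  # service account key; keep it locked down

    # Cloud Storage: read an object from a bucket.
    storage_client = storage.Client.from_service_account_json(KEY_FILE)
    blob = storage_client.bucket("my-bucket").blob("config/app.properties")
    print(blob.download_as_text())

    # Secret Manager: read the latest version of a secret.
    secret_client = secretmanager.SecretManagerServiceClient.from_service_account_json(KEY_FILE)
    name = "projects/my-project/secrets/my-secret/versions/latest"
    response = secret_client.access_secret_version(request={"name": name})
    print(response.payload.data.decode("UTF-8"))

Cloud VPN or Interconnect becomes relevant when you want that API traffic to stay off the public internet, for example via Private Google Access over the tunnel.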

Related

AWS to Azure connectivity: How to set up site-to-site connectivity?

I have a requirement to connect AWS & Azure sites - say, I have a VM in each of the AWS & Azure sites, and I should be able to communicate between the VMs.
How do I set up the VPN connection between AWS & Azure? Any reference article would be helpful.
There are a few options to achieve this in general:
1. In theory, you can cross-connect the two instances directly if they are publicly reachable, you only need to link the two of them, and nothing else will rely on that VPN tunnel; in general, however, this is a bad approach.
2. You can bring up firewall or router instances capable of handling IPsec tunnels. Such instances are available in the AWS Marketplace and its Azure analogue.
3. The recommended option would be to use the AWS Site-to-Site VPN service and its counterpart in Azure (but this depends on the use case).
4. Use a hybrid of the two: an AWS service to a firewall instance in Azure, or vice versa.
This blog post should get you started in case you choose option 3.
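If you go with option 3, the AWS half can be scripted with boto3. A rough sketch, assuming the Azure VPN gateway already exists; its public IP, the VPC ID, the ASN, and the CIDR below are placeholders:

    # Sketch of the AWS side of option 3 (AWS Site-to-Site VPN) using boto3.
    # All IPs, IDs, and CIDRs are placeholders, not from the original post.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Represent the Azure VPN gateway as a "customer gateway" on the AWS side.
    cgw = ec2.create_customer_gateway(
        BgpAsn=65000,             # required; only used if you route with BGP
        PublicIp="203.0.113.10",  # placeholder: Azure VPN gateway public IP
        Type="ipsec.1",
    )["CustomerGateway"]

    # Create a virtual private gateway and attach it to the VPC.
    vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
    ec2.attach_vpn_gateway(VpcId="vpc-0123456789abcdef0", VpnGatewayId=vgw["VpnGatewayId"])

    # Create the VPN connection whose tunnels terminate at the Azure gateway.
    vpn = ec2.create_vpn_connection(
        CustomerGatewayId=cgw["CustomerGatewayId"],
        Type="ipsec.1",
        VpnGatewayId=vgw["VpnGatewayId"],
        Options={"StaticRoutesOnly": True},  # static routing; use BGP for dynamic
    )["VpnConnection"]

    # Route the Azure VNet's address space through the tunnel.
    ec2.create_vpn_connection_route(
        VpnConnectionId=vpn["VpnConnectionId"],
        DestinationCidrBlock="10.1.0.0/16",  # placeholder: Azure VNet CIDR
    )

After creation, AWS generates the tunnel configuration (including the pre-shared keys) that you then mirror on the Azure side.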

GCP: connect to Memorystore from a Cloud Run Django app?

I'd like to add a cache to my Django app hosted on Cloud Run.
From the official Django docs, we can connect Django to a memory-based cache. Since I'm using Cloud Run, that memory gets wiped.
Memorystore seems good for this purpose, but there are only tutorials for Flask and Redis.
How could I achieve this?
Or should I just use database caching?
Connect the Redis instance to the Cloud Run service using the steps in the documentation.
To connect from Cloud Run (fully managed) to Memorystore you need to use the mechanism called "Serverless VPC Access" or a "VPC Connector".
First, you have to create a Serverless VPC Access connector, and then configure Cloud Run to use this connector.
See Connecting to a VPC network for more information.
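Once the connector is attached (created with gcloud compute networks vpc-access connectors create and wired to the service via the --vpc-connector flag of gcloud run deploy), the Memorystore instance is reachable from Cloud Run at its private IP. A minimal Django settings sketch, assuming Django 4.0+ (which ships a built-in Redis cache backend; older versions can use the django-redis package) and a placeholder Memorystore IP:

    # settings.py sketch: pointing Django's cache at a Memorystore (Redis)
    # instance reachable through the Serverless VPC Access connector.
    # The host/port values are placeholders; requires the redis-py package.
    import os

    REDIS_HOST = os.environ.get("REDIS_HOST", "10.0.0.3")  # Memorystore private IP
    REDIS_PORT = os.environ.get("REDIS_PORT", "6379")

    CACHES = {
        "default": {
            "BACKEND": "django.core.cache.backends.redis.RedisCache",
            "LOCATION": f"redis://{REDIS_HOST}:{REDIS_PORT}",
        }
    }

Reading the host and port from environment variables keeps the settings file identical across environments; you set them on the Cloud Run service itself.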
Alternatives to using this include:
Use Cloud Run for Anthos, where GKE provides the capability to connect to Memorystore if the cluster is configured for it.
Stay within fully managed serverless but use a GA version of the Serverless VPC Access feature by using App Engine with Memorystore.
See this answer to connect to Memorystore from Cloud Run using an SSH tunnel via GCE.

Best way to establish a private secure connection between an Azure App Service and an AWS ECS service

I have an app service (Rest API) in Azure and I am planning on hosting another service that has to be integrated with the Azure app service. Could someone please let me know the preferred way(s) to make sure the communication is on a private secure channel?
According to the official Azure docs, you have three options. I can say that the VPN option is one of the easiest, but you can have problems like limited throughput, unpredictable routing via the public internet, and the cost of AWS and Azure data transfer fees.
To better understand which option to use, you can check this flow chart:
Option 1: Connect Azure ExpressRoute and the other cloud provider's equivalent private connection. The customer manages routing.
Option 2: Connect ExpressRoute and the other cloud provider's equivalent private connection. A cloud exchange provider handles routing.
Option 3: Use Site-to-Site VPN over the internet. For more information, see Connect on-premises networks to Azure by using Site-to-Site VPN gateways.
Options 1 and 2 are the best choices if you want to avoid using the public internet, require an SLA, want predictable throughput, or need to handle large data transfer volumes. If you haven't already implemented ExpressRoute, consider whether customer-managed routing or a cloud exchange provider fits your case.
On the AWS side, you will be able to configure your VPC; to understand how to do this, check here.
For more information about these three options, check here.
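For completeness, the Azure half of option 3 can be scripted as well. A rough sketch with the azure-mgmt-network SDK, assuming a resource group and a virtual network gateway already exist; every name, IP, CIDR, and the shared key below are placeholders:

    # Sketch of the Azure side of a Site-to-Site VPN (option 3) using the
    # azure-mgmt-network SDK. All names, IPs, CIDRs, and the shared key are
    # placeholders, not from the original post.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.network import NetworkManagementClient

    client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Represent the AWS VPN endpoint as a "local network gateway" in Azure.
    lng = client.local_network_gateways.begin_create_or_update(
        "my-rg", "aws-gateway",
        {
            "location": "eastus",
            "gateway_ip_address": "198.51.100.7",  # placeholder: AWS tunnel public IP
            "local_network_address_space": {"address_prefixes": ["10.2.0.0/16"]},
        },
    ).result()

    # Create the IPsec connection from the existing Azure VPN gateway.
    vng_id = (
        "/subscriptions/<subscription-id>/resourceGroups/my-rg/providers/"
        "Microsoft.Network/virtualNetworkGateways/my-vpn-gateway"
    )
    client.virtual_network_gateway_connections.begin_create_or_update(
        "my-rg", "azure-to-aws",
        {
            "location": "eastus",
            "connection_type": "IPsec",
            "shared_key": "<pre-shared-key>",  # must match the AWS tunnel config
            "virtual_network_gateway1": {"id": vng_id},
            "local_network_gateway2": {"id": lng.id},
        },
    ).result()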

What is the GCP equivalent of AWS Client VPN Endpoint

We are moving from AWS to GCP. I used a Client VPN Endpoint in AWS to get into the VPC network there. What is the alternative in GCP that I can quickly set up to get my laptop into the VPC network? If there is no exact alternative, what's the closest one? Please provide instructions to set it up.
AWS Client VPN is a managed client-based VPN service that enables you to securely access your AWS resources and resources in your on-premises network. With Client VPN, you can access your resources from any location using an OpenVPN-based VPN client.
Currently there is no managed product available on GCP that allows VPN connections from multiple clients to directly access resources within a VPC, as Cloud VPN only supports site-to-site connectivity; however, there is an existing feature request for this.
As an alternative, a Compute Engine instance can be used with an OpenVPN server manually installed and configured following the OpenVPN documentation; however, this would be a self-managed solution (a sketch of creating such an instance follows).
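As a rough sketch of that alternative, the VM that hosts the OpenVPN server can be created with the google-cloud-compute client; the project, zone, machine type, and image below are assumptions, and installing and configuring OpenVPN on the instance still follows the OpenVPN documentation:

    # Sketch: create the Compute Engine VM that will host the self-managed
    # OpenVPN server. Project, zone, machine type, and image are placeholders.
    from google.cloud import compute_v1

    PROJECT, ZONE = "my-project", "us-central1-a"  # placeholders

    boot_disk = compute_v1.AttachedDisk(
        boot=True,
        auto_delete=True,
        initialize_params=compute_v1.AttachedDiskInitializeParams(
            source_image="projects/debian-cloud/global/images/family/debian-12",
        ),
    )

    # One NIC on the VPC, plus an external IP so VPN clients can reach the VM.
    nic = compute_v1.NetworkInterface(
        network="global/networks/default",
        access_configs=[compute_v1.AccessConfig(type_="ONE_TO_ONE_NAT", name="External NAT")],
    )

    instance = compute_v1.Instance(
        name="openvpn-gateway",
        machine_type=f"zones/{ZONE}/machineTypes/e2-small",
        disks=[boot_disk],
        network_interfaces=[nic],
        can_ip_forward=True,  # lets the VM route VPN client traffic into the VPC
    )

    operation = compute_v1.InstancesClient().insert(
        project=PROJECT, zone=ZONE, instance_resource=instance
    )
    operation.result()  # block until the instance is created

You would also need a firewall rule opening the OpenVPN port (UDP 1194 by default) to your clients, plus VPC routes if clients must reach subnets beyond the VM's own network.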

AWS & Azure Hybrid Cloud Setup - is this configuration at all possible (Azure Load Balancer -> AWS VM)?

We have all of our cloud assets currently inside Azure, including a Service Fabric cluster containing many applications and services that communicate with Azure VMs through Azure Load Balancers. The VMs have both public and private IPs, and the Load Balancers' frontend IP configurations point to the private IPs of the VMs.
What I need to do is move my VMs to AWS. Service Fabric has to stay put on Azure, though. I don't know if this is possible or not. The Service Fabric services communicate with the Azure VMs through the Load Balancers using the VMs' private IP addresses. So the only way I could see achieving this is either:
Keep the load balancers in Azure and direct the traffic from them to AWS VM's.
Point Azure Service Fabric to AWS load balancers.
I don't know if either of the above is technologically possible.
For #1, if I used Azure's load balancing, I believe the load balancer frontend IP config would have to use the public IP of the AWS VM, right? Is that not less secure? If I set it up to go through a VPN (if that's even possible), is that as secure as using internal private IPs as in the current load balancer config?
For #2, again, I'm not sure if this is technologically achievable - can we even have Service Fabric services "talk" to AWS load balancers? If so, what is the most secure way to achieve this?
I'm not new to the cloud engineering game, but very new to the idea of using two cloud services as a hybrid solution. Any thoughts would be appreciated.
As far as I know, creating a multi-region / multi-datacenter cluster in Service Fabric is possible.
Here is a brief list of requirements to get an initial sense of how this would work, and here is a sample (not approved by Microsoft) of a cross-region Service Fabric cluster configuration. (I know these are different regions in Azure, not different cloud providers, but this sample can be of use to see how some of the things are configured.)
Hope this helps.
Based on the details provided in the comments on your own question:
SF is cloud-agnostic; you could deploy your entire cluster without any dependency on Azure at all.
The cluster you see in your Azure portal is just an Azure resource screen used to describe the details of your cluster.
You are better off creating the entire cluster in AWS than taking the requested approach, because in the end, the only thing left in Azure would be this Azure resource screen.
Extending Oleg's answer ("creating a multi-region / multi-datacenter cluster in Service Fabric is possible"), I would add that it is also possible to create an Azure-agnostic cluster that you can host on AWS, Google Cloud, or on-premises.
The one detail that is not made clear is that any option not hosted in Azure requires an extra level of management, because you have to handle the resources yourself (VMs, load balancers, autoscaling, OS updates, and so on) to keep the cluster updated and running.
Also, multi-region and multi-zone clusters were left aside for a long time on the SF roadmap because they are very complex to do, which is why they avoid recommending them, but it is possible.
If you want to go with the AWS approach, I point you to this tutorial: Create AWS infrastructure to host a Service Fabric cluster
This is the first of a four-part tutorial with guidance on how you can set up an SF cluster on AWS infrastructure.
Regarding the other resources hosted on Azure, you could still access them from AWS without any problems.