Django on GAE and AWS RDS PostgreSQL

Is it possible to use an AWS RDS PostgreSQL database with a Django app hosted on Google App Engine standard (Python 3)?
Currently, any code that tries to connect to the RDS database hangs, but I can connect to the RDS database from my machine.

I would point out that the AWS RDS documentation mentions that you can allow access to a specific subnet (VPC) marked as “publicly accessible” for an IP range, and you can get the App Engine ranges by performing nslookups as shown here. Please keep in mind that the IP ranges for the App Engine services change on a regular basis.
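As a rough sketch of that lookup in Python, assuming the dnspython package and the _cloud-netblocks record name that the linked nslookup instructions use (the record name is my assumption and may change over time):

# Hypothetical sketch (pip install dnspython): read the TXT/SPF record
# that lists Google's netblocks, per the nslookup approach referenced above.
import dns.resolver

answers = dns.resolver.resolve("_cloud-netblocks.googleusercontent.com", "TXT")
for rdata in answers:
    # Each record contains ip4:/ip6: entries plus include: pointers to
    # further _cloud-netblocksN records, resolved the same way.
    print(rdata.to_text())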
If you are not using VPCs, then you should take a look at this AWS doc.
I would also suggest testing connectivity to the GCP resources by using Compute Engine virtual machines; this can also be done using App Engine flexible, since you are able to SSH into the underlying GCE VM instance.
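Once the RDS security group allows the relevant ranges, the Django side is an ordinary PostgreSQL configuration. A minimal sketch, in which the endpoint, credentials, and database name are placeholders rather than values from the question:

import os

# settings.py - all names below are placeholders.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",
        "USER": "myuser",
        "PASSWORD": os.environ.get("DB_PASSWORD", ""),
        "HOST": "mydb.xxxxxxxx.us-east-1.rds.amazonaws.com",  # placeholder RDS endpoint
        "PORT": "5432",
        # The connection crosses the public internet between GCP and AWS,
        # so requiring TLS is sensible; RDS PostgreSQL supports it.
        "OPTIONS": {"sslmode": "require"},
    }
}

If connections still hang with a configuration like this, the cause is usually the security group rules or the “publicly accessible” flag on the RDS instance rather than the Django settings.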

Related

GCP: connect to Memorystore from Cloud Run Django app?

I'd like to add a cache to my Django app hosted on Cloud Run.
Per the official Django docs, Django can be connected to a memory-based cache. But since I'm using Cloud Run, instance memory gets wiped whenever an instance is recycled.
Memorystore seems good for this purpose, but there's only a tutorial for Flask and Redis.
How could I achieve this?
Or should I just use database caching?
Connect the Redis instance to the Cloud Run service using the steps in the documentation.
To connect from Cloud Run (fully managed) to Memorystore you need to use the mechanism called "Serverless VPC Access" or a "VPC Connector".
First, you have to create a Serverless VPC Access connector and then configure Cloud Run to use this connector.
See Connecting to a VPC network for more information.
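On the Django side, once the service is attached to a connector, the cache configuration just points at the instance's internal IP. A minimal sketch, assuming Django 4.0+ (which ships a built-in Redis cache backend; older versions can use the django-redis package) and a placeholder Memorystore IP:

# settings.py - 10.0.0.3 is a placeholder for your Memorystore instance's
# internal IP, reachable only through the Serverless VPC Access connector.
# Requires the redis-py client (pip install redis).
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": "redis://10.0.0.3:6379",
    }
}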
Alternatives to using this include:
- Use Cloud Run for Anthos, where GKE provides the capability to connect to Memorystore if the cluster is configured for it.
- Stay within fully managed serverless but use a GA version of the Serverless VPC Access feature by using App Engine with Memorystore.
- See this answer to connect to Memorystore from Cloud Run using an SSH tunnel via GCE.

Cannot connect between Cloud Run and Compute Engine using internal IP

I have a service that runs on Cloud Run, and MySQL and MongoDB databases on Compute Engine. Currently I'm using the public IPs to connect between them. I want to use internal IPs to improve performance, but I can't find a solution for this problem. Please share some ideas. Thanks.
This is now supported. You can use a VPC network connector (Beta):
This feature is in a pre-release state and might change or have limited support. For more information, see the product launch stages.
This page shows how to use Serverless VPC Access to connect a Cloud Run (fully managed) service directly to your VPC network, allowing access to Compute Engine VM instances, Memorystore instances, and any other resources with an internal IP address.
To use Serverless VPC Access in a Cloud Run (fully managed) service, you first need to create a Serverless VPC Access connector to handle communication to your VPC network. After you create the connector, you set your Cloud Run (fully managed) service configuration to use that connector.
Here is how to create one: Creating a Serverless VPC Access connector, and here is an overview of it: Serverless VPC Access example
According to the official documentation, Connecting to instances using advanced methods:
If you have an isolated instance that doesn't have an external IP address (such as an instance that is intentionally isolated from external networks), you can still connect to it by using its internal IP address on a Google Cloud Virtual Private Cloud (VPC) network.
However, if you check Services not yet supported for Cloud Run, you will find:
Virtual Private Cloud: Cloud Run (fully managed) cannot connect to a VPC network.
You can now do that by running this command upon deployment:
gcloud run deploy SERVICE --image gcr.io/PROJECT_ID/IMAGE --vpc-connector CONNECTOR_NAME
If you already have a Cloud Run deployment, you can update it by running the command:
gcloud run services update SERVICE --vpc-connector CONNECTOR_NAME
More information about that here
Connecting from Cloud Run (fully managed) to VPC private addresses is not yet supported.
This feature is in development and is called Serverless VPC Access. You can read more here.
If you have a Compute Engine instance running in the same VPC with a public IP address, you can create an SSH tunnel to connect to private IP addresses through the public instance. This requires creating the tunnel in your own code, which is easy to do.
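As a hypothetical sketch of such a tunnel in Python, using the sshtunnel and pymysql packages; the bastion address, key path, internal IP, and credentials below are all placeholders:

from sshtunnel import SSHTunnelForwarder  # pip install sshtunnel
import pymysql                            # pip install pymysql

# All values below are placeholders, not from the question.
with SSHTunnelForwarder(
    ("BASTION_PUBLIC_IP", 22),                 # GCE VM that has a public IP
    ssh_username="tunnel-user",
    ssh_pkey="/secrets/id_rsa",
    remote_bind_address=("10.128.0.5", 3306),  # database VM's internal IP
) as tunnel:
    # MySQL traffic goes to a local port that the tunnel forwards
    # through the bastion host into the VPC.
    conn = pymysql.connect(
        host="127.0.0.1",
        port=tunnel.local_bind_port,
        user="app",
        password="CHANGE_ME",
        database="appdb",
    )
    conn.close()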

cannot connect to Redis Instance in GCP

I created a Redis instance on GCP, but I am not able to access it.
This is similar to this one, but the proposed solution isn't working for me:
Unable to telnet to GCP MemoryStore
I have tried to telnet to it. I am in the same project and region, but apparently I need to be in the same network, since it's a private IP. But what if you want to connect using Cloud Shell? Also, how would an application running on my local machine access it?
I also included a firewall rule to make sure incoming connections are allowed.
To connect a client to a Cloud Memorystore for Redis instance, the client and the instance must be located in the same region, in the same project, and in the same VPC network. Please check the “Networking” document, where you'll find information on basic network settings, limited and unsupported networks, network peering, and the IP address range.
You can connect to Redis from different GCP products like a Compute Engine VM, a Google Kubernetes Engine cluster, or a Google Kubernetes Engine pod, but you can't connect directly from Cloud Shell or from your local machine, since they are not in your VPC network.
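As a quick connectivity check you could run from a Compute Engine VM in the same VPC network (a sketch; 10.0.0.3 is a placeholder for the instance's private IP):

import redis  # pip install redis

r = redis.Redis(host="10.0.0.3", port=6379)  # placeholder private IP
print(r.ping())  # True if the VM can reach the Memorystore instance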
It may also have to do with a missing peering connection to your network. Check in your console at https://console.cloud.google.com/networking/peering/ to see if the peering is set up properly.
If you're using Terraform, you can use the following docs: https://www.terraform.io/docs/providers/google/r/redis_instance.html

Possible to attach Elastic IP to sagemaker notebook instance?

I want to connect to a database running in a different cloud provider, and it is exposed publicly.
I need to connect to that database from a SageMaker notebook instance.
But the public IP of the SageMaker notebook instance needs to be whitelisted on the other side.
Is it possible to attach an Elastic IP to a SageMaker notebook instance? I don't see any option to attach an EIP to a SageMaker notebook instance.
No, it is not possible to assign a SageMaker notebook an Elastic IP, which is a disappointment. This missing feature makes the SageMaker product a lot more difficult to use with many sources of data, limiting its utility.
Official Amazon Answer
From the AWS SageMaker product forums on Dec 12, 2019: Possible to attach Elastic IP to sagemaker notebook instance?
Question> Is it possible to attach elastic ip to sagemaker notebook instance?
Answer> We are always re-evaluating our backlog of features based on customer requests, so we appreciate the feedback on this feature.
You might want to start a new thread or chime in on that one if you want them to add this feature.
Possible Solutions
A general strategy for using a particular IP to access a resource is to set up a proxy machine, authorize its IP, and use it as a proxy to access your service. How hard this is depends on what you are doing: for S3 it doesn't seem possible, but for web-based requests this shouldn't be too hard (a sketch follows below). For AWS services you can use a proxy.
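A minimal sketch of that web-request case, assuming a proxy VM you control; the proxy address and target URL are placeholders:

import requests

# Placeholder proxy endpoint: a VM whose public IP the database side
# has whitelisted, running e.g. an HTTP proxy on port 3128.
proxies = {
    "http": "http://PROXY_PUBLIC_IP:3128",
    "https": "http://PROXY_PUBLIC_IP:3128",
}
resp = requests.get(
    "https://database-api.example.com/health",  # placeholder target
    proxies=proxies,
    timeout=10,
)
print(resp.status_code)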
Personally, I am trying to access Algoseek's Requester Pays S3 buckets directly from SageMaker notebooks, and this isn't possible. I looked at setting up a proxy but couldn't figure out how. Instead, I will copy the S3 data into our own S3 bucket each time they add a day.
In my case, I have whitelisted the NAT Gateway's IP in the external database.
EDIT: This works only for private subnets.

AWS & Azure Hybrid Cloud Setup - is this configuration at all possible (Azure Load Balancer -> AWS VM)?

We have all of our cloud assets currently inside Azure, which includes a Service Fabric cluster containing many applications and services that communicate with Azure VMs through Azure Load Balancers. The VMs have both public and private IPs, and the Load Balancers' frontend IP configurations point to the private IPs of the VMs.
What I need to do is move my VMs to AWS. Service Fabric has to stay put on Azure, though. I don't know if this is possible or not. The Service Fabric services communicate with the Azure VMs through the Load Balancers using the VMs' private IP addresses. So the only way I could see achieving this is either:
1. Keep the load balancers in Azure and direct the traffic from them to AWS VMs.
2. Point Azure Service Fabric to AWS load balancers.
I don't know if either of the above are technologically possible.
For #1, if I used Azure's load balancing, I believe the load balancer frontend IP config would have to use the public IP of the AWS VM, right? Is that not less secure? If I set it up to go through a VPN (if even possible), is that as secure as using internal private IPs as in the current load balancer config?
For #2, again, not sure if this is technologically achievable - can we even have Service Fabric Services "talk" to AWS load balancers? If so, what is the most secure way to achieve this?
I'm not new to the cloud engineering game, but very new to the idea of using two cloud services as a hybrid solution. Any thoughts would be appreciated.
As far as I know, creating a multi-region / multi-datacenter cluster in Service Fabric is possible.
Here is a brief list of requirements to give you an initial idea of how this would work, and here is a sample (not approved by Microsoft) of a cross-region Service Fabric cluster configuration. (I know these are different regions in Azure, not different cloud providers, but the sample can be of use to see how some of the things are configured.)
Hope this helps.
Based on the details provided in the comments of your own question:
SF is cloud agnostic; you could deploy your entire cluster without any dependencies on Azure at all.
The cluster you see in your Azure portal is just an Azure resource screen used to describe the details of your cluster.
You are better off creating the entire cluster in AWS than taking the requested approach, because in the end the only thing left in Azure would be this Azure resource screen.
Extending Oleg's answer, "creating a multi-region / multi-datacenter cluster in Service Fabric is possible." I would add that it is also possible to create an Azure-agnostic cluster that you can host on AWS, Google Cloud, or on premises.
The one detail that is not made very clear is that any option not hosted in Azure requires an extra level of management, because you have to manage the resources (VMs, load balancers, autoscaling, OS updates, and so on) yourself to keep the cluster updated and running.
Also, multi-region and multi-zone clusters were left aside for a long time in the SF roadmap because they are very complex to do; this is why they avoid recommending them, but it is possible.
If you want to go with the AWS approach, I'd point you to this tutorial: Create AWS infrastructure to host a Service Fabric cluster
This is the first of a 4-part tutorial with guidance on how you can set up an SF cluster on AWS infrastructure.
Regarding the other resources hosted on Azure, you could still access them from AWS without any problems.