I have a server running at home, but I don't have a fixed IP address. So I use DDNS to update my domain's DNS records when the IP changes, and it is working fine. My problem comes when trying to access a MySQL instance, because it currently uses a VPC, so I need to manually add the new IP as an Authorized Network. I wonder if it is possible to do that with a REST API call; that way I could add a crontab entry on my server to check for changes every n minutes and update the Authorized Networks.
I read the Google documentation, but as I understand it (I am not a native English speaker), it is only possible from an authorized network. Can somebody give me a clue?
Thanks in advance.
Take a look at installing and using the Cloud SQL Auth Proxy on your local server. This removes the need to keep updating Authorized Networks when your IP changes.
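A minimal sketch of running it (this assumes a Linux server, the v1 proxy binary, and a placeholder instance connection name):

# Download and run the Cloud SQL Auth Proxy, listening on local TCP port 3306
wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
chmod +x cloud_sql_proxy
./cloud_sql_proxy -instances=myproject:us-central1:myinstance=tcp:3306

Your application then connects to 127.0.0.1:3306 as if MySQL were running locally.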
I wonder if it is possible to do that with a REST API call; that way I could add a crontab entry on my server to check for changes every n minutes and update the Authorized Networks.
Google Cloud provides the Cloud SQL Admin API. To modify the authorized networks, use the instances.patch API.
Google Cloud SQL Method: instances.patch
Modify this data structure to change the authorized networks:
Google Cloud SQL IP Configuration
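As an illustration, a hedged sketch of the raw REST call (PROJECT, INSTANCE, and the CIDR are placeholders; note that the patch replaces the entire authorizedNetworks list):

# PATCH only the ipConfiguration; authenticates with a gcloud access token
curl -X PATCH \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://sqladmin.googleapis.com/v1/projects/PROJECT/instances/INSTANCE" \
  -d '{"settings": {"ipConfiguration": {"authorizedNetworks": [{"value": "x.x.x.x/32"}]}}}'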
You might find it easier to use the CLI to modify the authorized networks:
gcloud sql instances patch <INSTANCENAME> --authorized-networks=x.x.x.x/32
gcloud sql instances patch
I do not recommend updating the authorized networks when it is not required. Use an external service to fetch your public IP and compare it with the last saved value; only update Cloud SQL if your public IP address has changed.
Here are some common public services for determining your public IP address. Note that you should select one at random, as these services can rate-limit you. Some of the endpoints require query parameters to return only your IP address and not a web page; consult their documentation. A cron-able sketch follows the list.
https://checkip.amazonaws.com/
https://ifconfig.me/
https://icanhazip.com/
https://ipecho.net/plain
https://api.ipify.org
https://ipinfo.io/ip
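A minimal sketch of such a script (assuming the gcloud CLI is installed and authenticated on the server; the instance name and state-file path are placeholders):

#!/usr/bin/env bash
# Update Cloud SQL authorized networks only when the public IP has changed
set -euo pipefail

INSTANCE="my-instance"              # placeholder instance name
STATE_FILE="$HOME/.last_public_ip"  # where the last seen IP is cached

# Pick one IP service at random to avoid rate limits
SERVICES=(https://checkip.amazonaws.com https://icanhazip.com https://api.ipify.org)
CURRENT_IP="$(curl -fsS "${SERVICES[RANDOM % ${#SERVICES[@]}]}" | tr -d '[:space:]')"

LAST_IP="$(cat "$STATE_FILE" 2>/dev/null || true)"

if [[ -n "$CURRENT_IP" && "$CURRENT_IP" != "$LAST_IP" ]]; then
    # Caution: this replaces the entire authorized-networks list
    gcloud sql instances patch "$INSTANCE" \
        --authorized-networks="${CURRENT_IP}/32" --quiet
    echo "$CURRENT_IP" > "$STATE_FILE"
fi

Schedule it in crontab, e.g. */5 * * * * /path/to/update_authorized_networks.sh to check every 5 minutes.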
Note: I recommend that you use the Google Cloud SQL Auth Proxy. This provides several benefits including network traffic encryption. The auth proxy does not require that you whitelist your network.
Refer to my other answer for more details
Related
I have an application deployed on Cloud Run and I want to apply IP filtering to it.
Can you please suggest the cheapest solution for both public and internal Cloud Run applications?
For public access, you can use an HTTPS Load Balancer with your Cloud Run service exposed as a serverless NEG backend service, and add a Cloud Armor policy to filter the incoming IPs (a sketch with gcloud follows these options).
There is no built-in solution for internal IPs.
You can also implement the IP-filtering check directly in your Cloud Run service by reading the request headers, especially the X-Forwarded-For field.
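A minimal sketch of the Cloud Armor option (the policy name, backend service name, and allowed CIDR are placeholder assumptions):

# Create a policy that allows one CIDR and denies everything else
gcloud compute security-policies create allow-list-policy

gcloud compute security-policies rules create 1000 \
    --security-policy=allow-list-policy \
    --src-ip-ranges=203.0.113.0/24 --action=allow

# The default rule (priority 2147483647) catches all other traffic
gcloud compute security-policies rules update 2147483647 \
    --security-policy=allow-list-policy --action=deny-403

# Attach the policy to the load balancer's serverless NEG backend service
gcloud compute backend-services update my-serverless-backend \
    --security-policy=allow-list-policy --global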
Finally, filtering on IP is not a good idea. Google says: don't trust the network. That's why it's not so easy to implement what you want to achieve; it's not a suitable design.
Base your security on identity and the OAuth2 protocol instead of on IPs.
Based on my current understanding, when I enable a service connection to my Cloud SQL instance in one of my revisions, the path /cloudsql/[instance name]/.s.PGSQL.5432 becomes populated. This is a UNIX socket connection.
Unfortunately, a 3rd party application I'm using doesn't support UNIX socket connections and as such I'm required to connect via TCP.
Does the Google Cloud SQL Proxy also offer a way to connect to Cloud SQL via something like localhost:5432, or an equivalent? Some of the documentation I'm reading suggests that I have to do elaborate networking configuration with private IPs just to enable TCP-based Cloud SQL for my Cloud Run revisions, but I feel like the proxy is already capable of giving me a TCP connection instead of a UNIX socket.
What is the right and most minimal way forward here, obviously assuming I do not have the ability to modify the code I'm running?
I've also cross posted this question to the Google Cloud SQL Proxy repo.
The most secure and easiest way is to use the private IP. It's not that long or hard; there are 3 steps:
Create a serverless VPC connector. Create it in the same region as your Cloud Run service, and note the VPC network that it uses (by default, "default").
Add the serverless VPC connector to your Cloud Run service. Route only the private IPs through this connector.
Add a private connection to your Cloud SQL database. Attach it to the same VPC network as your serverless VPC connector.
The Cloud configuration is now done. Next, get the private IP of your Cloud SQL instance and add it to the parameters of your Cloud Run service so that it opens a connection to that IP. A sketch of these steps with gcloud follows.
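A minimal sketch of the three steps (the connector, service, instance, region, and IP range names are placeholder assumptions):

# 1. Create the serverless VPC connector in the service's region
gcloud compute networks vpc-access connectors create my-connector \
    --region=us-central1 --network=default --range=10.8.0.0/28

# 2. Attach it to the Cloud Run service, routing only private ranges
gcloud run services update my-service --region=us-central1 \
    --vpc-connector=my-connector --vpc-egress=private-ranges-only

# 3. Give the Cloud SQL instance a private IP on the same network
#    (assumes a private services access peering already exists on the VPC)
gcloud sql instances patch my-instance --network=default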
I have set up a standard Debian Linux VM via Compute Engine on GCP. The VM does not have an external IP address. I can connect to it via SSH by using the browser. I allowed incoming SSH (port 22) traffic and all outgoing traffic. I have tested BigQuery by executing queries via the browser interface, and it works. I have configured BigQuery to be enabled for the VM via Settings -> Cloud API access scopes. Now I would like to do something as simple as the following:
bq show bigquery-public-data:samples.shakespeare
But nothing happens. I tried the following to get more info:
bq --apilog=stdout show bigquery-public-data:samples.shakespeare
Output is the following:
I0106 15:29:47.271125 140258687915840 bigquery_client.py:1205] Requesting discovery document from https://www.googleapis.com/discovery/v1/apis/bigquery/v2/rest
I0106 15:29:47.271456 140258687915840 transport.py:158] Attempting refresh to obtain initial access_token
Nothing more happens. Any ideas what the issue could be?
After reading the documentation, it seems to me that the connection via the bq command-line tool should work by itself.
First, why it doesn't work: the bq CLI is only a wrapper that calls the BigQuery API at https://bigquery.googleapis.com. That domain name is public, so the Compute Engine VM tries to resolve and reach it over the public internet. But the VM doesn't have a public IP and can't go out to the internet (the remote server wouldn't know how to route the answer back, because the VM is not reachable!).
Then, how to solve it. There are 2 solutions:
You can set up Cloud NAT for your Compute Engine VM and thus grant it a shared public IP that is used only to initiate outgoing traffic.
You can use a lesser-known trick: activate Private Google Access on your subnet. To do this, note the subnet of your Compute Engine VM, then go to VPC networks and select that subnet. Edit it and turn Private Google Access on. A sketch of both options with gcloud follows.
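A sketch of both options (the router, NAT, subnet, and region names are placeholder assumptions):

# Option 1: Cloud NAT for outbound-only internet access
gcloud compute routers create my-router --network=default --region=us-central1
gcloud compute routers nats create my-nat --router=my-router --region=us-central1 \
    --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges

# Option 2: Private Google Access on the VM's subnet
gcloud compute networks subnets update default \
    --region=us-central1 --enable-private-ip-google-access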
On AWS, I know how to set up a web server with inbound rules allowing HTTP and HTTPS, and a database security group that only accepts connections from the web server. The issue is that I need to create a front end to manage the databases without using Internet access; this will be internal only, which precludes the use of a public IP / public DNS. Does anyone know how I would do this?
To further elaborate, some of our AWS accounts are for internal use only; we can log in to the console, use Cygwin to SSH in, see what's there, etc. But these accounts are for development purposes, and in a large enterprise such as this one, they are not allowed an IGW. So no inbound Internet access is allowed. How do I create an app (e.g., a phpMyAdmin type) in which our manager can easily view and edit the data in the database, given the restriction that this must be done without inbound Internet access?
Host your database on RDS inside a VPC and create a VPN connection between your client network and your VPC.
Host your database on one EC2 instance and upload your front end there as well. The database will be running locally on the EC2 instance, so you can connect the front end to it directly. Since the database won't have a public DNS name and runs locally, you can access it only over SSH and through the front-end script.
You can check this official documentation from AWS: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.WorkingWithRDSInstanceinaVPC.html
For the front-end script you can use https://www.adminer.org/, which is a single-file database management system. Upload that one file and use it to connect to the database running locally on the EC2 instance, as sketched below.
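A minimal sketch (assuming PHP is installed on the instance; the port and user names are placeholders):

# Fetch the single Adminer file and serve it on localhost only
wget https://www.adminer.org/latest.php -O adminer.php
php -S 127.0.0.1:8080 adminer.php

# Reach it from a workstation through an SSH tunnel, e.g.:
# ssh -L 8080:127.0.0.1:8080 ec2-user@<instance>

Binding to 127.0.0.1 and tunneling over SSH keeps the front end off the public network entirely.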
A Django app of mine (with a PostgreSQL backend) is hosted on two separate Ubuntu VMs. I use Azure as my infrastructure provider, and the VMs are classic. Both are part of the same resource group and map to the same DNS as well (i.e., they both live on xyz.cloudapp.net). Currently, I have the following database URL defined in my app's settings.py:
DATABASE_URL = 'postgres://username:password@public_ip_address:5432/dbname'
The DB port 5432 is publicly open, and I'm assuming the above DB URL implies the web app is connecting to the DB as if it were on a remote machine. If so, that's not best practice: it has security repercussions, not to mention it adds anywhere from 20-30 milliseconds to a hundred milliseconds of latency to each query.
My question is, how does one set up such a Django+postgres configuration on Azure so that the database is only exposed on the private network? I want to keep the two-VM setup intact. An illustrative example would be nice; I'm guessing I'll have to replace the public IP address in my settings.py with a private IP? I can see a private IP address listed under Virtual machines (classic) > VMname > Settings > IP addresses in the Azure portal. Is this the one to use? If so, it's dynamically assigned, so wouldn't it change after a while? Looking forward to guidance on this.
In Classic (ASM) mode, the Cloud Service is the network security boundary, and Endpoints with ACLs are used to restrict access from the outside Internet.
A simple solution to secure access would be:
Ensure that the DB port (5432) is removed from the cloud service endpoints (to avoid exposing it to the entire Internet).
Get a static private IP address for the DB server.
Use the private IP address of the DB server in the connection string.
Keep the servers in the same Cloud Service.
You can find detailed instructions here:
https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-static-private-ip-classic-pportal/
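Once the static private IP is assigned, a quick way to verify connectivity from the web server VM before changing settings.py (the 10.0.0.5 address, user, and database name are placeholders):

# Run from the web server; requires the postgresql client package
psql -h 10.0.0.5 -p 5432 -U username -d dbname -c 'SELECT 1;'

If that succeeds, point DATABASE_URL at the private IP instead of the public one.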
This should work. But for future implementations, I would recommend the more modern Azure Resource Manager (ARM) model, where you can benefit from many nice new features, including virtual networks (VNets) with more fine-grained security.