Confusion Around Creating a VPC Access Connector - google-cloud-platform

I am trying to set up Serverless VPC Access.
Serverless VPC Access enables you to connect from your Cloud Functions directly to Compute Engine VM instances, Memorystore instances, Cloud SQL instances, ...
Sounds great. But the documentation is not very beginner-friendly. Step 2 is to create a connector, about which I have a couple of questions:
In the Network field, select the VPC network to connect to.
My dropdown here contains only "Default". Is this normal? What should I expect to see here?
In the IP range field, enter an unused CIDR /28 IP range. Addresses in this range are used as source addresses for traffic sent through the connector. This IP range must not overlap with any existing IP address reservations in your VPC network.
I don't know what to do here. I tried using the information in the linked document to, first, enter an IP from the region I had selected and, second, enter an IP from outside that region. Both attempts resulted in connectors that were created with the error: "Connector is in a bad state, manual deletion is recommended".
The documentation continues with a couple of troubleshooting steps if the creation fails:
Specify an IP range that does not overlap with any existing IP address reservations in the VPC network.
I don't know what this means. Maybe, if I have other connectors, I should make sure the IP range for the new one doesn't overlap with theirs? That's just a guess, but in any case I have none.
Grant your project permission to use Compute Engine VM images from the project with ID serverless-vpc-access-images. See Setting image access constraints for information on how to update your organization policy accordingly.
This leads me to another document about updating my organization's "Image Policy". This one has me so out of my depth, I don't even think I should be here.
This has all started with just wanting to connect to a SQL Server instance from Firebase. Creating the VPC connector seems like a good step, but I've just fallen at every hurdle. Can a cloud-dweller please help me with a few of these points of confusion?

I think you've already resolved the issue, but I will write an answer to summarize all the steps for future reference.
1. Create a Serverless VPC Access connector
I think the best reference is to follow the steps in this doc. In step 7, it says the following:
In the IP range field, enter an unreserved CIDR /28 IP range.
You can use, for example, 10.8.0.0/28 or even 10.64.0.0/28, on the condition that the range is not in use by any other network. You can check which ranges are in use by going to VPC Network > VPC networks. In the Network field you will only have the "default" option, so that's okay.
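If you prefer the command line, creating the connector looks roughly like this (the connector name and region below are placeholders I picked, and depending on your gcloud version the command may require the beta component):
# creates a Serverless VPC Access connector on the default network
$ gcloud compute networks vpc-access connectors create my-connector \
    --region us-central1 \
    --network default \
    --range 10.8.0.0/28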
This can take a few minutes, so in the meantime you can create your SQL Server/MySQL/PostgreSQL instance.
2. Creating a CloudSQL instance
Create your desired instance (MySQL/PostgreSQL/SQL Server); in your case it will be a SQL Server instance. Also check these steps to configure the Private IP for your instance at creation time, or, if you have already created the instance, check this. Take note of the Private IP, as you will use it later.
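In case you want to script this step, a rough sketch with gcloud (the instance name, password and region are placeholders, and the private-IP flags were still in beta at the time of writing):
# creates a SQL Server instance attached to the default network with a private IP only
$ gcloud beta sql instances create my-sqlserver-instance \
    --database-version=SQLSERVER_2017_STANDARD \
    --root-password=change-me \
    --region=us-central1 \
    --network=default \
    --no-assign-ip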
3. Create a Cloud Function
Before creating your Cloud Function, you have to grant the Cloud Functions service account permission to use the VPC connector. Please follow these steps.
Then follow these steps to configure the connector of your function to use the VPC. In step 5 it says the following:
In the VPC connector field, enter the fully-qualified name of your connector in the following format:
projects/PROJECT_ID/locations/REGION/connectors/CONNECTOR_NAME
It is not necessary to enter your connector in this format; there is already a dropdown list where you can choose it. Finally, deploy your function.
I wrote a little function to test the connection. I would have preferred to use Python, but it needs more system dependencies than Node.js.
index.js:
var express = require('express');
var app = express();
var sql = require("mssql");

exports.helloWorld = (req, res) => {
  var config = {
    user: 'sqlserver',
    password: 'password',
    server: 'Your.SQL.Private.IP', // the private IP of your Cloud SQL instance
    database: 'dbname'
  };
  // connect to your database
  sql.connect(config, function (err) {
    if (err) {
      console.log(err);
      return res.status(500).send(err.message);
    }
    // create a Request object
    var request = new sql.Request();
    // query the database and get the records
    request.query('select * from a_table', function (err, recordset) {
      if (err) {
        console.log(err);
        return res.status(500).send(err.message);
      }
      // send the records as the response
      res.send(recordset);
    });
  });
};
package.json:
{
  "name": "sample-http",
  "version": "0.0.1",
  "dependencies": {
    "express": "4.17.1",
    "mssql": "6.0.1"
  }
}
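If you deploy from the CLI instead of the console, the connector is attached with the --vpc-connector flag (the runtime and region here are assumptions; the connector name is whatever you created in step 1):
# deploys the HTTP function and routes its egress through the VPC connector
$ gcloud functions deploy helloWorld \
    --runtime nodejs10 \
    --trigger-http \
    --region us-central1 \
    --vpc-connector my-connector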
And that's all! :D
It's important to mention that this procedure is mostly relevant for connecting Cloud Functions to SQL Server, since there is already an easier way to connect Cloud Functions to PostgreSQL and MySQL.

I discovered that there is a hard limit on how many IP addresses you can use for such connectors. You can increase the quota or switch to another region.
The hard limit on IP addresses is imposed by a quota on the free tier: https://console.cloud.google.com/iam-admin/quotas.
When you are not on the free tier, you can request a quota increase.

Related

aws redshift connection timeout

I get a timeout when trying to connect to my newly set up amazon redshift database.
I tried telnet:
telnet redshift-cluster-1.foobar.us-east-1.redshift.amazonaws.com 5439
With the same result.
I set the database configuration to "Publicly accessible".
Note that I am just experimenting. I have set up aws services for fun before, but don't have much knowledge of the network and security setup. So I expect it to be a simple mistake I make.
I want to keep it simple, so my goal is just to connect to the database from a local SQL client and I don't care about anything else at this stage :)
It would be great if you could give me some pointers for me to understand what the problem could be and what I should try next.
I had to add a new inbound rule to the security group and set the source to "Anywhere-IPv4" or "My IP". The default inbound rule has the security group itself as its source, which probably means the cluster is only reachable from within the VPC; at least it is not reachable from the outside.
I set the protocol to TCP and the type to Redshift, which seemed like the sensible choice for my use case.
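A sample rule, written as an AWS CLI call instead of a console screenshot (the security group ID and source IP are placeholders):
# allow inbound Redshift traffic (TCP 5439) from a single IP
$ aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 5439 \
    --cidr 203.0.113.25/32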
Another option you can try is to connect through the Redshift Data API. To connect to this database using the Java client, you need these values:
private static final String database = "dev";
private static final String dbUser ="awsuser";
private static final String clusterId = "redshift-cluster-1";
I have never experienced an issue when using the RedshiftDataClient.
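For reference, roughly the same call can be made from the AWS CLI via the Redshift Data API; it goes through the AWS endpoint rather than a direct connection, so no inbound security-group rule is needed:
# runs a test query through the Data API using the values above
$ aws redshift-data execute-statement \
    --cluster-identifier redshift-cluster-1 \
    --database dev \
    --db-user awsuser \
    --sql "select 1"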

AWS ECS container networking, communication between services and within containers

I am having trouble understanding the point of AWS's implementation of service discovery in ECS when using bridge mode, and in general a path forward for (relatively basic) container networking, despite the numerous AWS blog posts on the subject.
Service discovery, as I understand it, is about making dynamically created containers (in tasks) reachable, so that, similar to Docker user-defined networks, I can access different tasks on a cluster with a predefined canonical hostname.namespace, within a VPC.
I've made sure in the VPC that:
DNS hostnames: Enabled
DNS resolution: Enabled
When service discovery is configured while using bridge mode:
1. it still tacks a dynamic portion onto the name that I did not specify:
{
    "Name": "my-service.my-namespace.",
    "Type": "SRV",
    "SetIdentifier": "4b46cb82ba434dasdb163c1f06ca5c083",
    "MultiValueAnswer": true,
    "TTL": 60,
    "ResourceRecords": [
        {
            "Value": "1 1 27017 4b46cb82ba434dasdb163c1f06ca5c083.my-service.my-namespace."
        }
    ],
    "HealthCheckId": "862bd287-2b41-43ac-8442-a3d27042482b"
},
So I need to manually look up the record each time a service is created or updated. I cannot dig my-service.my-namespace, for example; that record does not exist.
And:
2. every time the service is updated, the record is regenerated...
To get here I need to do:
$ aws servicediscovery list-namespaces
$ aws route53 list-resource-record-sets --hosted-zone-id $ZONE_ID --region us-east-1
My application currently accesses task hosts via injected environment variables, but if the record refreshes on every service update, this is a non-starter. All the documentation/forums I've come across seem to say either to build some kind of dynamic SRV lookup workaround (which seems hackish?) or to just switch to awsvpc mode - but then why is this service discovery option available at all under bridge/host mode?
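(The "dynamic SRV lookup workaround" would presumably mean something like this from a host inside the VPC, querying the SRV record that was registered above, which is exactly the kind of extra indirection I'd rather avoid:
# look up the SRV record that service discovery registered
$ dig +short SRV my-service.my-namespace
)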
Clearly I'm missing something fundamental.
In addition, I'm using dynamic port mapping. If I don't, things like rolling updates fail with "port already in use" errors. Similarly, attempting to run a new instance of a task via scheduling produces the same error.
I can connect from within a given Docker container in a task using the internal private DNS of the instance, i.e. ip-172-31-52-141.ec2.internal, but here I'm outside of the VPC (?), i.e. I now need to specify the dynamically mapped port. So this is a non-starter as well.
All of this sits behind a public ALB (for dynamic port resolution etc), and this has been working fine, requests from outside AWS resolve correctly to the target groups / targeted services.
If I switch to awsvpc mode, and enable service discovery, I can have multiple tasks/services communicate privately.
However, what if I want multiple services to communicate, but additionally a single service/task houses multiple Docker containers (e.g. a localized Redis cache)? I cannot specify the 'link' for these containers without the network mode being 'bridge' again.
Here's the TL;DR question:
I have 2 tasks and a service associated with each task. There may be multiple instances of each task, therefore ports need to be dynamic. In each task I have 2 containers.
What is the general approach here for allowing different services to communicate via a predefined host.namespace dns resolution, and have the containers inside each task communicate with each other?
Apologies for the long post, but as a novice to ECS/AWS, I'm really struggling here ;)
Any feedback or advice is really appreciated.

private IP address range for GCP Cloud SQL is ignored

I've been trying to set up Google Cloud SQL with a private IP connection, where
the IP range it's bound to is manually allocated, and have not succeeded. I
don't know if this is a bug in the implementation because it's still in beta, if
there's something missing from the docs, or if I'm just doing something wrong.
(A command-line session is at the bottom, for a quick summary of what I'm
seeing.)
Initially, I set it up to automatically allocate the IP range. It all worked
just fine, except that it chose 172.17.0.0/24, which is one of the networks
managed by docker on my GCE instance, so I couldn't connect from there (but
could on another machine without docker). So then I tried going down the manual
allocation route.
First, I tore down all the associated network objects that had been created on
my behalf. There were two VPC Peerings, cloudsql-postgres-googleapis-com and
servicenetworking-googleapis-com, which I deleted, and then I confirmed that
the routing entry associated with them disappeared as well.
Then, I followed the directions at https://cloud.google.com/vpc/docs/configure-private-services-access#allocating-range, creating 10.10.0.0/16, because I wanted it in my default network, which is
auto mode, so I'm limited to the low half (which is currently clear).
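For reference, the allocation step in that doc corresponds to roughly this gcloud command (the range name here is a placeholder; I'm including it only to show which parameters are involved):
# allocates a /16 range for private services access on the default network
$ gcloud compute addresses create my-allocated-range \
    --global \
    --purpose=VPC_PEERING \
    --addresses=10.10.0.0 \
    --prefix-length=16 \
    --network=default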
At that point, I went back to the Cloud SQL instance creation page, since it
should be doing the rest for me. I checked the "Private IP" box, and chose the
default network.
I wasn't taking notes at the time, so my recollection may be flawed,
particularly since my experience in later attempts was consistently different,
but what I remember seeing was that below the network choice dropdown, it said
"This instance will use the existing managed service connection". I assumed
that meant it would use the address range I'd created, and went forward with the
instance creation, but the instance landed on the 172.17.0.0/24 network again.
Back around the third time, where that message was before, it had a choice box
listing my address range. Again, my recollection was poor, so I don't know if I
either saw or clicked on the "Connect" button, but the end result was the same.
On the fourth attempt, I did notice the "Connect" button, and made sure to click
it, and wait for it to say it succeeded. Which it did, sort of: it replaced the
dropdown and buttons with the same message I'd seen before about using the
existing connection. And again, the instance was created on the wrong network.
I tried a fifth time, this time having created a new address range with a new
name -- google-managed-services-default -- which was the name that the
automatic allocation had given it back when I first started (and what the
private services access docs suggest). But even with that name, and explicitly
choosing it, I still ended up with the instance on the wrong network.
Indeed, I now see that after I click "Connect", I can go check the routes and
see that the route that was created is to 172.17.0.0/24.
The same thing seems to happen if I do everything from the command-line:
$ gcloud beta compute addresses list
NAME ADDRESS/RANGE TYPE PURPOSE NETWORK REGION SUBNET STATUS
google-managed-services-default 10.11.0.0/16 INTERNAL VPC_PEERING default RESERVED
$ gcloud beta services vpc-peerings connect \
--service=servicenetworking.googleapis.com \
--ranges=google-managed-services-default \
--network=default \
--project=...
$ gcloud beta services vpc-peerings list --network=default
---
network: projects/.../global/networks/default
peering: servicenetworking-googleapis-com
reservedPeeringRanges:
- google-managed-services-default
---
network: projects/.../global/networks/default
peering: cloudsql-postgres-googleapis-com
reservedPeeringRanges:
- google-managed-services-default
$ gcloud beta compute routes list
NAME NETWORK DEST_RANGE NEXT_HOP PRIORITY
peering-route-ad7b64a0841426ea default 172.17.0.0/24 cloudsql-postgres-googleapis-com 1000
So now I'm not sure what else to try. Is there some state I didn't think to clear? How is the route supposed to be connected to the address range? Why is it creating two peerings when I only asked for one? If I were to create a route manually to the right address range, I presume that wouldn't work, because the Postgres endpoint would still be at the wrong address.
(Yes, I could reconfigure docker, but I'd rather not.)
I found here https://cloud.google.com/sql/docs/mysql/private-ip that this seems to be the correct behaviour:
After you have established a private services access connection, and created a Cloud SQL instance with private IP configured for that connection, the corresponding (internal) subnet and range used by the Cloud SQL service cannot be modified or deleted. This is true even if you delete the peering and your IP range. After the internal configuration is established, any Cloud SQL instance created in that same region and configured for private IP uses the original internal configuration.
There turned out to be a bug somewhere in the service machinery of Google Cloud, which is now fixed. For reference, see the conversation at https://issuetracker.google.com/issues/118849070.

Redshift and without VPC

Please leave a comment if this line of questioning is no longer appropriate for Stack Overflow and I will close it / find a different forum. It looks like there are similar questions posted, so I am going to post, but I do realize this is an evolving community.
I am following this tutorial on launching a Redshift sample cluster, so I can evaluate the product for usage: http://docs.aws.amazon.com/redshift/latest/gsg/rs-gsg-launch-sample-cluster.html
I am at step 3 - launch the cluster. The tutorial suggests I do not need a VPC established: "If you do not have a VPC, you will use the EC2-Classic platform to launch your cluster. Your screen will look similar to the following".
My screen doesn't look exactly like that (perhaps there has been drift between the console and the tutorial). The major difference is that the screen I see does not present a dropdown for "Cluster Parameter Group", and where the VPC is selected it says "Not in VPC", with the error message "You must select a VPC. If you do not have one, please create one using the VPC console." next to it.
And I guess another problem is that the error message at the bottom of the screen reads "There was a problem fetching information required to launch: Not Authorized", not allowing me to continue (which is expected).
Do I need to set up a VPC? As I understand it, that isn't available on the free tier, therefore...
You require a VPC.
If no VPC appears in the list, go to the VPC console for that region and select Actions -> VPCs -> Create Default VPC.
If that option is not available, it is because your account is enabled for EC2-Classic in that region. In this case, return to the VPC Dashboard and Start VPC Wizard to create a VPC that matches your requirements (eg the second option that creates a public & private subnet).
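If you prefer the CLI, the same thing can be done with the following (assuming the account/region supports default VPCs):
# creates a default VPC in the currently configured region
$ aws ec2 create-default-vpc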
No, you cannot create the VPC. You need to check the following details:
1) Make sure you provided all the necessary information during signup, and complete your AWS registration.
2) Check your email to see if you have received any requests for additional information. If you have, please respond to those emails with the information requested.
3) Verify your credit/debit card information is correct.

AWS Vpn routing to multiple subnets

We have a VPN setup with two static routes
10.254.18.0/24
10.254.19.0/24
We have a problem: from AWS we can only ever communicate with one of the above blocks at a time. Sometimes it is .18 and at other times it is .19 - I cannot figure out what the trigger is.
I never have any problem communicating from either of my local subnets out to AWS at the same time.
Kinda stuck here. Any suggestions?
What have we tried? Well, the 'firewall' guys said they don't see anything being blocked. But I read another post here that stated the same thing, and the problem still ended up being the firewall.
Throughout the course of playing with this, the "good" subnet has flipped 3 times. Meaning:
Right now I can talk to .19 but not .18
10 min ago I could talk to .18 but not .19
It just keeps flipping.
We've been able to get this resolved. We changed the static routes configured in AWS from:
10.254.18.0/24
10.254.19.0/24
To use instead:
10.254.18.0/23
This will encompass all the addresses we need and has resolved the issue. Here was Amazon's response:
Hello,
Thank you for contacting AWS support. I can understand you have issues
with reaching your two subnets: 10.254.18.0/24 and 10.254.19.0/24 at
the same time from AWS.
I am pretty sure I know why this is happening. On AWS, we can accept
only one SA (security association) pair. On your firewall, the
"firewall" guys must have configured policy based VPN. In policy/ACL
based VPN, if you create following policys for eg: 1) source
10.254.18.0/24 and destination "VPC CIDR" 2) source 10.254.19.0/24 and destination "VPC CIDR"
OR 1) source "10.254.18.0/24, 10.254.19.0/24" and destination "VPC CIDR"
In both cases, you will form 2 SA pairs, as there are two different
sources mentioned in the policy/ACL. You just have to use the source as
"ANY" or "10.254.0.0/16" or "10.254.0.0/25", etc. We would prefer if
you can use source as "ANY" then micro-manage the traffic using
VPN-filters if you are using Cisco ASA device. How to use VPN-filters
is given in the configuration file for CISCO ASA. If you are using
some other device then you will have to find a solution accordingly.
If your device supports route-based VPN then I would advise you to
configure route-based VPN. Route-based VPNs always create only one SA
pair.
Once you find a solution to create only one ACL/Policy on your
firewall, you will be able to reach both the networks at the same
time. I can see multiple SA formation on your VPN. This is the reason
why you cannot reach both the subnets at the same time.
If you have any additional questions feel free to update the case and
we will respond to them.
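For anyone making the same change from the CLI, replacing the two /24 static routes with the single /23 looks roughly like this (the VPN connection ID is a placeholder):
# remove the two original /24 static routes
$ aws ec2 delete-vpn-connection-route --vpn-connection-id vpn-0123456789abcdef0 \
    --destination-cidr-block 10.254.18.0/24
$ aws ec2 delete-vpn-connection-route --vpn-connection-id vpn-0123456789abcdef0 \
    --destination-cidr-block 10.254.19.0/24
# add the single /23 that covers both subnets
$ aws ec2 create-vpn-connection-route --vpn-connection-id vpn-0123456789abcdef0 \
    --destination-cidr-block 10.254.18.0/23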