I get a timeout when trying to connect to my newly set up Amazon Redshift database.
I tried telnet:
telnet redshift-cluster-1.foobar.us-east-1.redshift.amazonaws.com 5439
With the same result.
I set the database configuration to "Publicly accessible".
Note that I am just experimenting. I have set up AWS services for fun before, but I don't have much knowledge of the network and security setup, so I expect this to be a simple mistake on my part.
I want to keep it simple, so my goal is just to connect to the database from a local SQL client and I don't care about anything else at this stage :)
It would be great if you could give me some pointers for me to understand what the problem could be and what I should try next.
I had to add a new inbound rule to the security group and set the source to "Anywhere-IPv4" or "My IP". The default inbound rule has the security group itself as its source, which probably means the cluster is only reachable from within the VPC; at any rate, it is not reachable from the outside.
I set the type to Redshift (protocol TCP, port 5439), which seemed like the sensible choice for my use case.
See the picture for a sample configuration.
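In case the picture doesn't come through, roughly the same rule can also be added from the AWS CLI; the security group ID and source address below are placeholders for your own values:
# allow inbound Redshift traffic (TCP 5439) from a single source IP
# use 0.0.0.0/0 instead of the /32 address for "Anywhere-IPv4"
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 5439 \
    --cidr 203.0.113.25/32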
Another option you can try is to connect to it through the Redshift Data API. To connect to this database using a Java client, you need these values:
private static final String database = "dev";
private static final String dbUser ="awsuser";
private static final String clusterId = "redshift-cluster-1";
I have never experienced an issue when using the RedshiftDataClient.
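The Data API goes over HTTPS rather than through port 5439, so it is not affected by the security group rule discussed above. As a rough sketch of the same idea from the AWS CLI, using the values listed here (the SQL statement is only an illustration):
aws redshift-data execute-statement \
    --cluster-identifier redshift-cluster-1 \
    --database dev \
    --db-user awsuser \
    --sql "SELECT 1;"
# execute-statement returns a statement Id; fetch the rows once it has finished:
aws redshift-data get-statement-result --id <statement-id>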
I'm interested in better securing the RDS instances that my Lambdas access with Jets. I watched the introductory tutorial provided for Jets, and two things stuck out to me:
A database connection string with a plaintext password is used as an environment variable in Lambda to get an RDS connection.
The lambdas are not launched in a VPC alongside the RDS instance by default; this means that the RDS instance is publicly accessible.
Both of these things seem like bad practices. What's the best way to mitigate these issues with Jets? I haven't found anything in the docs.
I'm imagining the best way to go would be to remove public accessibility from RDS and allow my Lambda to connect to the VPC the RDS instance is in via its role. Then I think I could remove the DB URL string from the environment variable. Does this sound like the best approach, and, if so, how can I do this with Jets?
Associated video: https://www.youtube.com/watch?time_continue=59&v=yJIZFc9TZJo&feature=emb_title
Documentation website: https://community.rubyonjets.com/
I am trying to set up Serverless VPC Access:
Serverless VPC Access enables you to connect from your Cloud Functions directly to Compute Engine VM instances, Memorystore instances, Cloud SQL instances,
Sounds great. But the documentation is not super friendly to a beginner. Step 2 is to create a connector, about which I have a couple of questions:
In the Network field, select the VPC network to connect to.
My dropdown here contains only "Default". Is this normal? What should I expect to see here?
In the IP range field, enter an unused CIDR /28 IP range. Addresses in this range are used as source addresses for traffic sent through the connector. This IP range must not overlap with any existing IP address reservations in your VPC network.
I don't know what to do here. Using the information in the linked document, I tried first entering an IP from the region I had selected, and second an IP from outside that region. Both resulted in connectors that were created with the error "Connector is in a bad state, manual deletion is recommended".
The documentation continues with a couple of troubleshooting steps if the creation fails:
Specify an IP range that does not overlap with any existing IP address reservations in the VPC network.
I don't know what this means. Maybe it means that if I have other connectors, I should make sure the IP range for the new one doesn't overlap with theirs? That's just a guess, but anyway I have none.
Grant your project permission to use Compute Engine VM images from the project with ID serverless-vpc-access-images. See Setting image access constraints for information on how to update your organization policy accordingly.
This leads me to another document about updating my organization's "Image Policy". This one has me so out of my depth, I don't even think I should be here.
This has all started with just wanting to connect to a SQL Server instance from Firebase. Creating the VPC connector seems like a good step, but I've just fallen at every hurdle. Can a cloud-dweller please help me with a few of these points of confusion?
I think you've already resolved the issue, but I will write an answer to summarize all the steps for future reference.
1. Create a Serverless VPC Access connector
I think the best reference is to follow the steps in this doc. In step 7, it says the following:
In the IP range field, enter an unreserved CIDR /28 IP range.
You can use, for example, 10.8.0.0/28 or even 10.64.0.0/28, on the condition that the range is not in use by any other network. You can check which ranges are in use under VPC Network > VPC networks. In the Network field you will only have the "default" option, and that's okay.
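If you prefer the command line, the equivalent call looks roughly like this; the connector name and region are placeholders (depending on your gcloud version the command may live under gcloud beta):
gcloud compute networks vpc-access connectors create my-connector \
    --region us-central1 \
    --network default \
    --range 10.8.0.0/28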
This can take a few minutes, so in the meantime you can create your SQL Server/MySQL/PostgreSQL instance.
2. Create a Cloud SQL instance
Create your desired instance (MySQL/PostgreSQL/SQL Server); in your case it will be a SQL Server instance. Also check these steps to configure the private IP for your instance at creation time, or, if you have already created an instance, check this. Take note of the private IP, as you will use it later.
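As a rough sketch from the command line, a SQL Server instance with a private IP on the default network can be created like this; the instance name, size, version, region, and password here are placeholders:
# --network plus --no-assign-ip gives the instance a private IP only
gcloud beta sql instances create my-sqlserver \
    --database-version=SQLSERVER_2017_STANDARD \
    --cpu=2 \
    --memory=8GiB \
    --region=us-central1 \
    --root-password=CHANGE_ME \
    --network=default \
    --no-assign-ip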
3. Create a Cloud Function
Before creating your Cloud Function, you have to grant permission to the CF service account to use the VPC. Please follow these steps.
Then follow these steps to configure the connector of your function to use the VPC. In step 5 it says the following:
In the VPC connector field, enter the fully-qualified name of your connector in the following format:
projects/PROJECT_ID/locations/REGION/connectors/CONNECTOR_NAME
It is not necessary to enter your connector in this format; there is already a list where you can choose it. Finally, deploy your function.
I wrote a little function to test the connection. I would have preferred to use Python, but it needs more system dependencies than Node.js.
index.js:
// express is listed in package.json but is not strictly needed for an HTTP Cloud Function
var express = require('express');
var app = express();
var sql = require("mssql");

exports.helloWorld = (req, res) => {
  // connection settings; use the Cloud SQL instance's private IP as the server
  var config = {
    user: 'sqlserver',
    password: 'password',
    server: 'Your.SQL.Private.IP',
    database: 'dbname'
  };
  // connect to your database
  sql.connect(config, function (err) {
    if (err) console.log(err);
    // create Request object
    var request = new sql.Request();
    // query the database and get the records
    request.query('select * from a_table', function (err, recordset) {
      if (err) console.log(err);
      // send records as the response
      res.send(recordset);
    });
  });
};
package.json:
{
  "name": "sample-http",
  "version": "0.0.1",
  "dependencies": {
    "express": "4.17.1",
    "mssql": "6.0.1"
  }
}
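If you prefer deploying from the command line instead of the console, then from the directory containing index.js and package.json the command looks roughly like this; the runtime, region, and connector name are placeholders:
gcloud functions deploy helloWorld \
    --runtime nodejs10 \
    --trigger-http \
    --region us-central1 \
    --vpc-connector my-connector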
And that's all! :D
It's important to mention that this procedure is mostly about connecting Cloud Functions to SQL Server, as there is already an easier way to connect Cloud Functions to PostgreSQL and MySQL.
I discovered that there is a hard limit on how many IPs you can use for such connectors. You can increase the quota or switch to another region.
The hard limit on IPs is imposed by a quota on the free tier: https://console.cloud.google.com/iam-admin/quotas.
When not on the free tier, you can request a quota increase.
I've been trying to set up Google Cloud SQL with a private IP connection, where
the IP range it's bound to is manually allocated, and have not succeeded. I
don't know if this is a bug in the implementation because it's still in beta, if
there's something missing from the docs, or if I'm just doing something wrong.
(A command-line session is at the bottom, for a quick summary of what I'm
seeing.)
Initially, I set it up to automatically allocate the IP range. It all worked
just fine, except that it chose 172.17.0.0/24, which is one of the networks
managed by docker on my GCE instance, so I couldn't connect from there (but
could on another machine without docker). So then I tried going down the manual
allocation route.
First, I tore down all the associated network objects that had been created on
my behalf. There were two VPC Peerings, cloudsql-postgres-googleapis-com and
servicenetworking-googleapis-com, which I deleted, and then I confirmed that
the routing entry associated with them disappeared as well.
Then, I followed the directions at https://cloud.google.com/vpc/docs/configure-private-services-access#allocating-range, creating 10.10.0.0/16, because I wanted it in my default network, which is
auto mode, so I'm limited to the low half (which is currently clear).
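For reference, the allocation step in that document boils down to roughly this command (the range name is just whatever you choose to call the reservation):
gcloud compute addresses create my-reserved-range \
    --global \
    --purpose=VPC_PEERING \
    --addresses=10.10.0.0 \
    --prefix-length=16 \
    --network=default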
At that point, I went back to the Cloud SQL instance creation page, since it
should be doing the rest for me. I checked the "Private IP" box, and chose the
default network.
I wasn't taking notes at the time, so my recollection may be flawed,
particularly since my experience in later attempts was consistently different,
but what I remember seeing was that below the network choice dropdown, it said
"This instance will use the existing managed service connection". I assumed
that meant it would use the address range I'd created, and went forward with the
instance creation, but the instance landed on the 172.17.0.0/24 network again.
Back around the third time, where that message was before, it had a choice box
listing my address range. Again, my recollection was poor, so I don't know if I
either saw or clicked on the "Connect" button, but the end result was the same.
On the fourth attempt, I did notice the "Connect" button, and made sure to click
it, and wait for it to say it succeeded. Which it did, sort of: it replaced the
dropdown and buttons with the same message I'd seen before about using the
existing connection. And again, the instance was created on the wrong network.
I tried a fifth time, this time having created a new address range with a new
name -- google-managed-services-default -- which was the name that the
automatic allocation had given it back when I first started (and what the
private services access docs suggest). But even with that name, and explicitly
choosing it, I still ended up with the instance on the wrong network.
Indeed, I now see that after I click "Connect", I can go check the routes and
see that the route that was created is to 172.17.0.0/24.
The same thing seems to happen if I do everything from the command-line:
$ gcloud beta compute addresses list
NAME ADDRESS/RANGE TYPE PURPOSE NETWORK REGION SUBNET STATUS
google-managed-services-default 10.11.0.0/16 INTERNAL VPC_PEERING default RESERVED
$ gcloud beta services vpc-peerings connect \
--service=servicenetworking.googleapis.com \
--ranges=google-managed-services-default \
--network=default \
--project=...
$ gcloud beta services vpc-peerings list --network=default
---
network: projects/.../global/networks/default
peering: servicenetworking-googleapis-com
reservedPeeringRanges:
- google-managed-services-default
---
network: projects/.../global/networks/default
peering: cloudsql-postgres-googleapis-com
reservedPeeringRanges:
- google-managed-services-default
$ gcloud beta compute routes list
NAME NETWORK DEST_RANGE NEXT_HOP PRIORITY
peering-route-ad7b64a0841426ea default 172.17.0.0/24 cloudsql-postgres-googleapis-com 1000
So now I'm not sure what else to try. Is there some state I didn't think to clear? How is the route supposed to be connected to the address range? Why is it creating two peerings when I only asked for one? If I were to create a route manually to the right address range, I presume that wouldn't work, because the Postgres endpoint would still be at the wrong address.
(Yes, I could reconfigure docker, but I'd rather not.)
I found here https://cloud.google.com/sql/docs/mysql/private-ip that this seems to be the correct behaviour:
After you have established a private services access connection, and created a Cloud SQL instance with private IP configured for that connection, the corresponding (internal) subnet and range used by the Cloud SQL service cannot be modified or deleted. This is true even if you delete the peering and your IP range. After the internal configuration is established, any Cloud SQL instance created in that same region and configured for private IP uses the original internal configuration.
There turned out to be a bug somewhere in the service machinery of Google Cloud, which is now fixed. For reference, see the conversation at https://issuetracker.google.com/issues/118849070.
On my AWS MariaDB RDS instance I get lots of connections ... and then the database is not accessible anymore.
In this case there are only 77 connections, but often there are more than 150...
I run a microservice setup with 4 microservices, each of them a Play Framework application written in Scala.
Thanks
They are indeed connections. To double-check this, log in to the RDS instance and run this:
SHOW PROCESSLIST;
What could be happening is that your applications are not closing their connections after using them.
Also check this answer, https://stackoverflow.com/a/4284212/1135424, which could help you set better values for the timeout variables:
set global wait_timeout=3;
set global interactive_timeout=3;
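Before changing anything, it can help to look at the current values alongside the open connections. A minimal sketch, assuming the mysql client is installed locally and using a placeholder endpoint and user:
mysql -h your-instance.xxxxxxxx.us-east-1.rds.amazonaws.com -u admin -p \
    -e "SHOW VARIABLES LIKE '%timeout%'; SHOW VARIABLES LIKE 'max_connections'; SHOW PROCESSLIST;"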
We have a VPN setup with two static routes
10.254.18.0/24
10.254.19.0/24
We have a problem: from AWS we can only ever communicate with one of the above blocks at a time. Sometimes it is .18 and at other times it is .19 - I cannot figure out what the trigger is.
I never have any problem communicating from either of my local subnets out to AWS at the same time.
Kinda stuck here. Any suggestions?
What have we tried? Well, the "firewall" guys said they don't see anything being blocked, but I read another post here that described the same symptom and the problem still ended up being the firewall.
Over the course of playing with this, the "good" subnet has flipped 3 times. Meaning:
Right now I can talk to .19 but not .18
10 min ago I could talk to .18 but not .19
It just keeps flipping.
We've been able to get this resolved. We changed the static routes configured in AWS from:
10.254.18.0/24
10.254.19.0/24
To use instead:
10.254.18.0/23
A /23 covers 10.254.18.0 through 10.254.19.255, so this single route encompasses all the addresses we need, and it resolved the issue. Here was Amazon's response:
Hello,
Thank you for contacting AWS support. I can understand you have issues
with reaching your two subnets: 10.254.18.0/24 and 10.254.19.0/24 at
the same time from AWS.
I am pretty sure I know why this is happening. On AWS, we can accept
only one SA (security association) pair. On your firewall, the
"firewall" guys must have configured policy based VPN. In policy/ACL
based VPN, if you create following policys for eg: 1) source
10.254.18.0/24 and destination "VPC CIDR" 2) source 10.254.19.0/24 and destination "VPC CIDR"
OR 1) source "10.254.18.0/24, 10.254.19.0/24" and destination "VPC CIDR"
In both the cases, you will form 2 SA pairs as we have two different
source mentioned in the policy/ACL. You just have to use source as
"ANY" or "10.254.0.0/16" or "10.254.0.0/25", etc. We would prefer if
you can use source as "ANY" then micro-manage the traffic using
VPN-filters if you are using Cisco ASA device. How to use VPN-filters
is given in the configuration file for CISCO ASA. If you are using
some other device then you will have to find a solution accordingly.
If your device supports route based VPN then I would advice you to
configure route based VPN. Route based VPNs always create only one SA
pair.
Once you find a solution to create only one ACL/policy on your
firewall, you will be able to reach both networks at the same time. I
can see multiple SAs forming on your VPN. This is the reason why you
cannot reach both subnets at the same time.
If you have any additional questions feel free to update the case and
we will respond to them.