AWS lambda pymssql error connecting to local database - amazon-web-services

I am creating an Alexa skill to get some information from my local database on request. I have SQL Server 2008 running, and I can use the pymssql library to pull data when I run my code locally. However, after deploying it to AWS, I see the following error:
File "pymssql.pyx", line 644, in pymssql.connect (pymssql.c:10892)
InterfaceError: Connection to the database failed for an unknown reason.
Here is a screen capture of my AWS package, which I created with the help of Amazon EC2:
The relevant code to connect to the SQL Server database is below:
import pymssql

def getDB_response():
    session_attributes = {}
    card_title = "farzi"
    server = 'LT-CMU352CCM5'
    user = 'ss'
    passwd = 'abc#1234'
    DB = 'Market_DB'
    port = '1433'
    conn = pymssql.connect(server, port, user, passwd, DB)
    cursor = conn.cursor()
    cursor.execute('select @@servername')
    row = cursor.fetchone()
Does anyone know if there is anything else needed for my AWS function to talk to my local DB?

Since you're using your own laptop as a server, you can't use the server name like that. "LT-CMU352CCM5" is resolvable from your local network (that's why the connection works when you run the function locally), but not from the internet. The database has to be reachable from the internet: you'd have to provide your public IP and configure your router to forward traffic from the internet (for port 1433) to your laptop (look up port forwarding for your specific router model). You also have to set your firewall to allow incoming traffic, and configure SQL Server as well (turn on the TCP/IP and Named Pipes protocols in SQL Server Configuration Manager).
Since you're not using RDS, you don't have to put the Lambda function into a VPC or security group (if you needed to, you'd do it by going to the Lambda section of your AWS account, choosing the appropriate Lambda function, and in the Network section for that function choosing the appropriate VPC, subnet and security group).
One issue with this setup is that your public IP address will change from time to time, so you'd have to update the Lambda function every time that happens. I would urge you to reconsider using a local laptop as the database server for something that is going to be used publicly.
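For illustration, here is a minimal sketch of what the connection could look like once port forwarding is in place. The public IP is a placeholder (it would be your router's public address, not anything from the question), and pymssql's keyword arguments are used so the port is passed explicitly:
import pymssql

# Hypothetical values - replace the public IP, credentials and database name
# with your own once port 1433 is forwarded to the laptop.
conn = pymssql.connect(
    server='203.0.113.10',   # public IP of your network, not the machine name
    port='1433',             # the port your router forwards to the laptop
    user='ss',
    password='abc#1234',
    database='Market_DB',
)
cursor = conn.cursor()
cursor.execute('SELECT @@SERVERNAME')
print(cursor.fetchone())
conn.close()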

Related

How to connect to Google Cloud SQL with enforced SSL from within App Engine app

I would like to have SSL enforced when connecting to my Google Cloud SQL (Postgres) instance, but I can't find a way to provide a certificate to my Python app deployed to App Engine - psycopg2 requires the certificate file to have the proper permissions (<cert> has group or world access; permissions should be u=rw (0600) or less), and since there's no access to chmod on App Engine, I don't know how to change them.
Any ideas?
To provide a context, the goal is primarily to enforce secure access to the DB outside of GCP (from my workstation). The problem with App Engine apps is a side-effect of this change. But maybe there's a different and better way to achieve the same?
Thanks.
Google Cloud App Engine uses the Cloud SQL proxy for any connections made through the built-in /cloudsql Unix socket directory. These connections are automatically wrapped in an SSL layer until they reach your Cloud SQL instance.
You don't need to worry about using certificates, as the proxy automates this process for you.
This is a bit tricky because there is a chicken-and-egg problem here:
The Postgres driver wants a file (because it's a wrapper around a C library that reads the file)
And you cannot set the correct file permissions inside App Engine on boot.
There are several cases (and solutions) for this problem:
AppEngine Classic environment
Workaround - write the cert file yourself in the /tmp ramdisk
Use a UNIX socket to connect to a Cloud SQL instance with public IP address (no SSL).
Or use a VPC and private VPC access for a cloud SQL instance with private IP address and use the workaround.
AppEngine Flex Environment
Include the cert file in your docker image
Workaround: Write the cert file yourself in /tmp
The easiest way to work around this problem is to write the cert file yourself into /tmp - this is a ramdisk in the App Engine environment.
Something like this would do the trick:
import os
import psycopg2

certfile_path = '/tmp/certfile'  # /tmp/ is a ramdisk
certfile_content = os.environ.get('CERTFILE')
if certfile_content is not None:
    with open(certfile_path, 'w+') as fp:
        fp.write(certfile_content)
    os.chmod(certfile_path, 0o600)
    # Now the certfile is "safe" to use, we can pass it in our connection string:
    connection_string = f"postgresql+psycopg2://host/database?sslmode=require&sslcert={certfile_path}"
else:
    connection_string = 'postgresql+psycopg2://host/database'
    # or just throw an exception, log the problem, and exit.
(See Postgres connection string reference for SSL options)
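If you connect with psycopg2 directly rather than through an SQLAlchemy URL, the same idea can be expressed with libpq keyword arguments. This is only a sketch with placeholder host/user/database values; depending on which certificate you store in CERTFILE, the right parameter may be sslrootcert (server CA) rather than sslcert (client certificate):
import psycopg2

# Placeholder connection values - reuses certfile_path written to /tmp by the snippet above.
conn = psycopg2.connect(
    host='your-db-host',
    dbname='database',
    user='db_user',
    password='db_password',
    sslmode='require',
    sslcert=certfile_path,
)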
Long term solution
The point of using SSL is to have encrypted traffic between you and the database, and to avoid man-in-the-middle attacks.
We have some options here:
Option 1: When you're inside App Engine AND your cloud SQL instance has a public IP address.
In this case, you can connect to a Postgres instance with a Unix domain socket. Google uses a proxy listening on that socket and will take care of encryption and securing the endpoints (btw - SSL is disabled).
In this case, you'll need to add SQL Client permissions to the service account you use AND add the UNIX socket directory in your YAML file (see the Google documentation).
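As a rough sketch of what Option 1 looks like in code (the instance connection name and credentials below are placeholders, not values from the question), the driver is simply pointed at the /cloudsql socket directory, and no SSL options are needed because the proxy behind the socket handles encryption:
import os
import psycopg2

# Hypothetical instance connection name in the form project:region:instance.
instance_connection_name = 'my-project:europe-west1:my-instance'
conn = psycopg2.connect(
    host=f'/cloudsql/{instance_connection_name}',  # Unix socket directory
    dbname='database',
    user='db_user',
    password=os.environ.get('DB_PASSWORD'),
)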
Option 2: When you're inside App Engine Classic AND your cloud SQL instance has a PRIVATE IP address.
If your cloud SQL instance has a private IP, this can be a bit "simpler".
We need to enable Serverless VPC Access - this connects App Engine to the VPC where we have resources on private IP addresses (for public IPs it is not needed, because public IP addresses don't overlap) AND the Cloud SQL instance must be connected to the VPC.
Now you can connect to Postgres as usual. The traffic inside the VPC network is encrypted. If you have just this Cloud SQL instance in the VPC, there is no need to use SSL (nobody can put a sniffer on the network or do a MITM attack).
But if you still want/need to use SSL, you have to use the workaround described before.
Option 3: Use AppEngine FLEX environment.
The flex environment is 'just' a docker image.
You'll need to build a custom docker image for the Python runtime that includes the certfile.
Remember the Flex environment is not included in the free tier.

Solving connectivity issues to AWS with MariaDB on RDS from local machine

I currently develop a small Java web application with the following stack: Java 8, Spring Boot, Hibernate, MariaDB, Docker, AWS (RDS, Fargate, etc.). I use AWS to deploy and run my application. My Java web application runs inside a Docker container managed by AWS Fargate; this application communicates with Amazon RDS (a MariaDB instance) via injected secrets and doesn't need to go through the public internet for this communication (it uses the VPC instead). My recent problems began after I rolled out a software update that required me to make some manual database changes with MySQL Workbench, and I could not do so because of local connectivity problems.
Therefore my biggest problem right now is connectivity to the database from my local machine - I simply can't connect to the RDS instance via MySQL Workbench or even from within the IDE (it used to work before without such problems). MySQL Workbench gave me the following error message as a hint:
After checking the hints given by MySQL Workbench, I've also verified that:
I use valid database credentials, URL and port (the app in Fargate has the same secrets injected)
Public accessibility flag on RDS is (temporarily) set to "yes"
the database security group allows MySQL/Aurora connections from my IP address range (I've also tested the 0.0.0.0/0 range without any luck)
Therefore my question is: what else should I check to find out the reason for my connectivity failure?
After I changed my laptop's network by switching to mobile internet, the connectivity problem was solved - therefore I suspect that my laptop was not able to establish the socket connection from the previous network (possibly the communication port or DNS was blocked).
So don't forget to also check network connectivity by establishing a socket connection, as described in this answer.
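A quick way to run that check from Python is a plain TCP connection attempt; the endpoint below is a placeholder for your actual RDS hostname and port:
import socket

# Placeholder endpoint - use your RDS hostname and the MariaDB/MySQL port.
host = 'mydb.xxxxxxxx.eu-central-1.rds.amazonaws.com'
port = 3306
try:
    with socket.create_connection((host, port), timeout=5):
        print('TCP connection succeeded - the endpoint is reachable')
except OSError as err:
    print(f'TCP connection failed: {err} - check DNS, firewall or security groups')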

How to set up a front end for AWS DBs without using the Internet

On AWS, I know how to set up a web server with inbound rules allowing HTTP and HTTPS, and a database security group that only accepts connections from the web server. The issue is that I need to create a front end to manage the databases without using internet access - this will be internal only, which precludes the use of a public IP / public DNS. Does anyone know how I would do this?
To further elaborate, some of our AWS accounts are for internal use only - we can log in to the console, use CygWin to SSH in, see what's there, etc. But these accounts are for development purposes, and in a large enterprise such as this one, these are not allowed an IGW. So - no inbound Internet access is allowed. How do I create an app (e.g., phpMyAdmin type) in which our manager can easily view and edit the data in the database given the restriction that this must be done without inbound Internet access?
Host your database on RDS inside a VPC and create a VPN connection between your client network and your VPC.
Host your database on an EC2 instance and also upload your front end there. The database will be running locally on the EC2 instance, and you can connect the front end to it. Since the database won't have a public DNS name and only runs locally, you can access it only via SSH and the front-end script.
You can check this official documentation from AWS: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.WorkingWithRDSInstanceinaVPC.html
For the front-end script you can use https://www.adminer.org/, which is a single-file database management system. You upload that one file and use it to connect to the database running locally on the EC2 instance.

Jasperreports server works fine with actual EC2, but not from an instance taken from the same AMI

I have an instance, EC2-1, which has JasperReports Server installed on it, and I can easily access it through http://IP_ADDRESS1:8081/jasperserver.
Now I have taken an image of EC2-1. Once the AMI was available, I launched a new instance, EC2-2. As usual, I logged in to the instance using SSH and was able to run the script ./ctrlscript.sh start to access the application. But when I try to log in to http://IP_ADDRESS2:8081/jasperserver and run a report, I get the error below in jasperserver.log and am unable to get the report:
300 ERROR WebServiceConnector,pool-4-thread-1:139 - Communication error java.net.ConnectException: Connection timed out
320 ERROR AsyncJasperPrintAccessor,pool-4-thread-1:321 - Error during report execution
Can anyone give me some clarification on my understanding of **EC2 vs AMI**? As per my understanding, EC2-1 and EC2-2 should be the same, but in that case why am I not able to run the reports on EC2-2 when I am still able to run them on EC2-1?
Also, please guide me if I am missing something here. Thank you all.
You are correct that a new EC2 instance that was launched from an AMI taken of the original EC2 instance should include any configuration changes you made on your source instance.
From your description, it sounds like everything about the new EC2 instance is good: you can SSH into it, you can start JasperReports Server, and you can log into the web interface. The problems only begin when you try to run a report - an important detail, because running a report has an external dependency: the report's data source.
Test Connection To Data Sources
In JasperReports Server Web UI, find your Data Source and go to its edit page to test the connection. You should be able to find it at the bottom of the edit page for most data sources. For example, in the JDBC UI:
Try to test Jasper's connection to the data source from this page on your new instance.
Verify Networking Rules
This reads to me like a networking error, specifically between the new EC2 JasperReports instance and the report's data source. There is likely a networking rule external to this EC2 instance that existed for your original instance, but wasn't updated for the new instance. For example, if you had a security group that allowed inbound traffic from the original instance's CIDR to the data source, and it wasn't updated for the new instance's CIDR, you would see these sort of timeouts when JasperReports Server attempts to connect to the data source.
If testing the connection to the data source above failed, check external networking rules on resources such as security groups or VPC network ACLs, and verify that all rules for your original EC2 instance have been updated to also be valid for your new EC2 instance.
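If you want to review those rules programmatically, a small boto3 sketch like the one below can list the inbound rules of the data source's security group (the group ID is a placeholder and AWS credentials are assumed to be configured):
import boto3

# Hypothetical security group ID of the data source - replace with your own.
ec2 = boto3.client('ec2')
response = ec2.describe_security_groups(GroupIds=['sg-0123456789abcdef0'])
for group in response['SecurityGroups']:
    for rule in group['IpPermissions']:
        print(rule.get('FromPort'), rule.get('ToPort'),
              [r['CidrIp'] for r in rule.get('IpRanges', [])],
              [p['GroupId'] for p in rule.get('UserIdGroupPairs', [])])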

AWS Data Pipeline Cannot Connect with RDS Mysql (connection time out)

I am stuck on making an AWS Data Pipeline that takes data from RDS MySQL to S3.
I have tried the templates but failed a lot. Then I made this self-configured pipeline, but still no success. Can anyone point out the problem by looking at the architecture?
Here are the RDS MySQL details -> NOTE <- the username in the picture is different because I am using a separate user; the username in the picture is administrator.
This is the Data Pipeline architecture
Below are the settings of the first block, i.e. Configuration
Below are the settings of the RDS MySQL database
Below are the settings of the EC2 machine
Below are the settings of the SQL Data node - which I guess gets data from RDS
Below are the settings of the Copy Activity
Below are the settings of the S3 Data Node - which I guess puts data on S3
Here is the ERROR LOG
I read that it could be an error due to VPC (Virtual Private Cloud) permissions, but I am not sure how to add these settings, as the server is a production server and I am afraid to perform this test. Can anyone provide a solid solution please?
As previously mentioned, your ec2 instance is not able to contact the Database endpoint. Please use the link to configure the security groups correctly http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.Scenarios.html
To test this, spin up an EC2 instance in the subnet and telnet to the database endpoint to ensure the connection is fine. You can then resume the activation of your pipeline.
Commands
sudo yum install telnet
telnet hostname port