How do we connect AWS Lightsail to ElastiCache? I've set up VPC peering, but that's as far as I get. Does anyone know of a step-by-step guide that shows how to do it?
[edit]
We have an application that uses Memcached to persist some data between nodes. The installation currently uses four servers (two application nodes, one RDBMS, and one Memcached server) running on DigitalOcean. Our intention is to install the nodes on Lightsail servers and use AWS services such as RDS and ElastiCache directly from the Lightsail servers.
There are no problems connecting to an RDS MySQL instance from the Lightsail servers, but a timeout occurs when trying to connect to the Memcached service. I know it's a security issue; I just can't find any doc that makes that connection work.
Related
I have a redis instance on AWS that I want to connect using Redis Desktop Manager from my local machine
I am able to ssh into my EC2 instance and then run redis-cli -h host and connect to it.
But the same is not possible from my local machine.
I am sure there must be a way to monitor my Redis using the GUI. If I can connect to the EC2 instance using the pem file, and from inside it I can connect to Redis, there must be a way to combine both and connect to the Redis instance locally via my EC2 instance? Any ideas?
By design, an AWS ElastiCache cluster is deployed for use only within AWS. From the docs:
Elasticache is a service designed to be used internally to your VPC. External access is discouraged due to the latency of Internet traffic and security concerns. However, if external access to Elasticache is required for test or development purposes, it can be done through a VPN.
Thus, it can't be accessed directly from outside of your VPC. For that, you need to set up a VPN between your local home/work network and your VPC, or, what is often easier for testing and development, establish an SSH tunnel.
For the SSH tunnel you will need a public proxy/bastion EC2 instance through which the tunnel will be established. There are a number of tutorials on how to do this for different AWS services. The general procedure is the same, whether it is ES, EC, Aurora Serverless, or RDS Proxy. Some examples:
SSH Tunnels (How to Access AWS RDS Locally Without Exposing it to Internet)
How can I use an SSH tunnel to access Kibana from outside of a VPC with Amazon Cognito authentication?
As @Marcin mentioned, AWS recommends only using ElastiCache within your VPC for latency reasons, but you've got to develop on it somehow... (Please be sure to read @Marcin's answer.)
AWS is a huge mystery, and it's hard to find beginner-to-intermediate resources, so I'll expand upon @Marcin's answer a little for those who might stumble across this.
It's pretty simple to set up what's often referred to as a "jump box" to connect to all sorts of AWS resources - this is just any EC2 instance within the same VPC (network) as the resource you're trying to connect to, in this case the ElastiCache Redis cluster. (If you're running into trouble, just spin up a new instance; a t4g.nano or something similarly small works just fine.)
You'll want to make sure you're in the directory with your key; then you should be able to run the following command to forward whatever local port you'd like to the remote Redis cluster:
ssh -i ${your_ssh_key_name.pem} ${accessible_ec2_host} -L ${port_to_use_locally}:${inaccessible_redis_or_other_host}:${inaccessible_redis_port}
Then you can use localhost and ${port_to_use_locally} to connect to Redis.
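Once the tunnel is up, any Redis client pointed at localhost and your chosen local port should work. As a quick sanity check, here is a minimal Python sketch (the host and port values are assumptions; substitute whatever local port you picked for the tunnel) that speaks the raw RESP protocol, so no client library is needed:

```python
import socket

def redis_ping(host: str, port: int) -> str:
    """Send a PING to Redis over a raw socket and return the reply."""
    with socket.create_connection((host, port), timeout=5) as sock:
        # RESP accepts inline commands terminated by CRLF
        sock.sendall(b"PING\r\n")
        reply = sock.recv(64).decode()
    # a healthy Redis answers "+PONG\r\n"; strip the RESP "+" prefix
    return reply.strip().lstrip("+")

# e.g. redis_ping("127.0.0.1", 6380) after
# ssh -i key.pem ec2-host -L 6380:my-cluster.cache.amazonaws.com:6379
```

If this returns PONG, the tunnel is working and the problem is elsewhere (client config, DNS, etc.).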
The company I'm working for recently decided to deploy a new application with Docker Swarm on AWS using EC2 instances. We set up a cluster of three EC2 instances as nodes (one manager, two workers) and we use docker stack deploy to deploy the services.
The problem is that one of the services, a Django app, runs into a timeout when trying to connect to the Postgres database that runs on RDS in the same VPC - but ONLY when the service publishes a port.
A service that doesn't publish any port can connect to the DB just fine.
The RDS endpoint gets resolved to the proper IP, so it shouldn't be a DNS issue and the containers can connect to the internet. The services are also able to talk to each other on the different nodes.
There also shouldn't be a problem with the security group definition of the db, because the EC2 instances themselves can get a connection to the DB.
Further, the services can connect to things that are running on other instances within the VPC.
It seems that it has something to do with swarm (and overlay networks) as running the app inside a normal container with a bridge network doesn't cause any problems.
Stack doesn't seem to be the problem, because even when creating the services manually, the issue still persists.
We are using Docker CE version 19.03.8 on Ubuntu 18.04.4 LTS, with compose file format version 3.
The problem occurs when your Swarm subnet conflicts with the subnets in your VPC. You must change your Swarm subnet to a different, non-overlapping CIDR.
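One quick way to check for such a collision before redeploying is to compare your VPC subnet CIDRs against the Swarm address pool (Docker's default pool is 10.0.0.0/8, which collides with the common 10.0.0.0/16 VPC layout). The CIDR values below are assumptions for illustration, not taken from the question:

```python
from ipaddress import IPv4Network

def overlapping(vpc_cidrs, swarm_pool_cidr):
    """Return the VPC subnets that collide with the Swarm address pool."""
    pool = IPv4Network(swarm_pool_cidr)
    return [cidr for cidr in vpc_cidrs if pool.overlaps(IPv4Network(cidr))]

# Hypothetical values: a VPC carved out of 10.0.0.0/16 versus Docker's
# default Swarm address pool of 10.0.0.0/8 - these overlap.
conflicts = overlapping(["10.0.0.0/16", "172.31.0.0/16"], "10.0.0.0/8")
```

If a conflict shows up, you can re-initialize the Swarm with a non-overlapping pool (e.g. docker swarm init --default-addr-pool 192.168.0.0/16) or give each overlay network an explicit non-conflicting subnet with docker network create --driver overlay --subnet ... .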
I have an EC2 instance, and two separate databases, one SQL Server instance and one MySQL instance, both within AWS RDS.
So when Amazon refers to client applications needing new certificates, does it only mean when I am connecting to those databases on AWS via clients on my PC (e.g. SQL Server Management Studio, MySQL Connector)?
Do I have to do anything to my asp.net and php web applications running on EC2, which connect to the AWS RDS instances?
Thanks for any clarification.
Mark
The EC2 instances are clients of the database. If you are currently performing SSL Certificate validation on the EC2 instances when connecting to the RDS instances, then you need to update the certificates. If you are not currently performing SSL Certificate validation, then you don't need to do anything, except maybe go ahead and update your RDS instances with the new certificates so Amazon stops emailing and calling you about it.
There is no need to update anything in code, as the SSL handshake is managed by the driver.
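For drivers that do verify the server certificate, the change amounts to pointing the TLS layer at the bundle downloaded from AWS. A hedged Python sketch of the idea (the bundle path is an assumption; most database drivers expose this as an ssl/ssl_ca connection option rather than a raw context):

```python
import ssl

def rds_ssl_context(ca_bundle=None):
    """Build a TLS context that validates the RDS server certificate.

    ca_bundle: path to the CA bundle downloaded from AWS (e.g. a
    hypothetical /etc/ssl/rds-combined-ca-bundle.pem). With None, the
    system trust store is used, which does NOT include the RDS CAs.
    """
    ctx = ssl.create_default_context(cafile=ca_bundle)
    ctx.check_hostname = True            # match the cert against the RDS endpoint
    ctx.verify_mode = ssl.CERT_REQUIRED  # fail the handshake on an untrusted cert
    return ctx
```

If your connection strings never enable verification in the first place, swapping the server-side certificate is transparent to the application, which is exactly the point of the answer above.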
Not a duplicate of AWS Aurora MySQL serverless: how to connect from MySQL Workbench.
Aurora Serverless doesn't support public connections yet.
I used Cloud9 to create an EC2 instance in the same VPC as the database. I then connected to the database from the Cloud9 terminal.
My (GraphQL Prisma) service that I'm attempting to host (on Zeit Now) only takes a HOST and a PASSWORD for configuration.
How can I make the EC2 instance act as a proxy that I can treat exactly like a database endpoint? Can tunneling fully do that, and I'm just bad at it?
I think this blog may help you. The idea is to set up port forwarding from ec2-dns:3306 to aurora-serverless-cluster-dns:3306.
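If you'd rather not rely on ssh -L, the same idea can be sketched as a tiny TCP relay running on the EC2 instance; your service then treats the instance itself as the database HOST. This is an illustrative sketch, not production code, and the Aurora endpoint in the comment is a made-up placeholder:

```python
import socket
import threading

def pipe(src, dst):
    """Copy bytes from src to dst until src closes, then close dst."""
    try:
        while chunk := src.recv(4096):
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        dst.close()

def forward(listen_port, target_host, target_port):
    """Accept connections on listen_port and relay each one to the target."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", listen_port))
    server.listen(5)
    while True:
        client, _ = server.accept()
        upstream = socket.create_connection((target_host, target_port))
        # one thread per direction so traffic flows both ways concurrently
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

# e.g. forward(3306, "my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com", 3306)
```

Note the database credentials still travel over this extra hop, so lock the EC2 instance's security group down to your own addresses; something like socat TCP-LISTEN:3306,fork TCP:aurora-endpoint:3306 achieves the same relay without custom code.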
I have a project in AWS as an AMI image, and it depends on Elasticsearch. I have Elasticsearch on my local system. Is there any way to connect the AMI instance with the Elasticsearch on my local system?
To clarify - are you asking how to connect your self-hosted/on-premise Elasticsearch to an AWS EC2 instance?
If so, you can create a VPN connection between your on-premise network and your AWS VPC using the strategies outlined in the AWS documentation.
You can also route the traffic via the Internet if you'd prefer, which, depending on how your network looks, may involve setting up some NAT port forwards.