I've been tasked with installing a new SSL certificate on a website; the site is hosted on AWS EC2.
I've discovered that I need the key pair in order to connect to the server instance, but the client no longer has contact with the former webmaster.
I don't have much familiarity with AWS, so I'm somewhat at a loss as to how to proceed. I'm guessing I would need the old key pair to access the server instance and install the certificate?
I see there's also the Certificate Manager section in AWS, but I don't currently see a certificate in there. Will requesting one there attach it to the website, or do I need to access the server instance and install it there?
There is a documented process for replacing the SSH keys on an EC2 instance. However, it requires some downtime and must not be run on an instance-store-backed instance. If you're new to AWS you might not be able to determine whether that's the case, so it would be risky.
Instead, I think your best option is to bring up an Elastic Load Balancer to be the new front-end for the application: clients will connect to it, and it will in turn connect to the application instance. You can attach an ACM cert to the ELB, and shifting traffic should be a matter of changing the DNS entry (but, of course, test it out first!).
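To make that concrete, here's a minimal boto3 sketch of the setup: request an ACM certificate, create a target group pointing at the existing instance, and put an internet-facing Application Load Balancer with an HTTPS listener in front of it. Every ID, ARN, and domain name below is a placeholder, and the certificate has to finish DNS validation before the listener will accept it.

```python
# Hypothetical sketch: put an Application Load Balancer with an ACM cert
# in front of the existing instance. All IDs and names are placeholders.
import boto3

elbv2 = boto3.client("elbv2")
acm = boto3.client("acm")

# Request a certificate for the site's domain (must complete DNS validation
# before it can be attached to a listener).
cert = acm.request_certificate(
    DomainName="www.example.com",
    ValidationMethod="DNS",
)

# Target group that forwards to the existing instance over plain HTTP.
tg = elbv2.create_target_group(
    Name="legacy-site-tg",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "i-0123456789abcdef0", "Port": 80}],
)

# Internet-facing ALB in at least two public subnets.
lb = elbv2.create_load_balancer(
    Name="legacy-site-alb",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
    SecurityGroups=["sg-0123456789abcdef0"],
    Scheme="internet-facing",
    Type="application",
)
lb_arn = lb["LoadBalancers"][0]["LoadBalancerArn"]

# HTTPS listener that terminates TLS with the ACM cert and forwards to the target group.
elbv2.create_listener(
    LoadBalancerArn=lb_arn,
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": cert["CertificateArn"]}],
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```

Once traffic is confirmed to flow through the ALB, the DNS change is just a CNAME or alias record pointing at the load balancer's DNS name.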
Moving forward, you should redeploy the application to a new EC2 instance and then point the ELB at that instance. This may be easier said than done, because the old instance was probably configured by hand. With luck you have the site in source control and can do deploys in a test environment.
If not, and you're running on Linux, you'll need to snapshot the live instance's root volume, create a volume from that snapshot, and attach it to a different instance to learn how the site is configured. Start with the EC2 EBS docs and try it out in a test environment before touching production.
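If you end up going that route, the rough shape of it with boto3 looks like this: snapshot the root volume, create a fresh volume from the snapshot, and attach it to a second "inspection" instance where you can mount it and poke around. The instance IDs, availability zone, and device name are all placeholders, and the lookup assumes the first block-device mapping is the root volume.

```python
# Hypothetical sketch: snapshot the live instance's root volume and attach a
# copy to a second "inspection" instance. IDs and the AZ are placeholders.
import boto3

ec2 = boto3.client("ec2")

# Find the root volume of the live instance (assumes the first mapping is root).
live = ec2.describe_instances(InstanceIds=["i-0live000000000000"])
root_vol = live["Reservations"][0]["Instances"][0]["BlockDeviceMappings"][0]["Ebs"]["VolumeId"]

# Snapshot it. The snapshot is crash-consistent; stop the instance first for a clean copy.
snap = ec2.create_snapshot(VolumeId=root_vol, Description="inspect legacy config")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# Create a new volume from the snapshot in the same AZ as the inspection instance...
vol = ec2.create_volume(SnapshotId=snap["SnapshotId"], AvailabilityZone="us-east-1a")
ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])

# ...and attach it as a secondary device, then mount it from the OS to browse the files.
ec2.attach_volume(
    VolumeId=vol["VolumeId"],
    InstanceId="i-0inspect0000000000",
    Device="/dev/sdf",
)
```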
I'm not sure if there's any good way to recover the content from a Windows EC2 instance. And if you're not comfortable with doing ops, you should find someone who is.
I have an EC2 instance which hosts a Windows service, a .NET API, and a simple .NET website. There's also the added complication of a Route 53 endpoint pointing to it and an HTTPS cert allocated via AWS Certificate Manager. Yes, it's a lot of apps on a single instance, and I will look at separating them later. I got a message from AWS saying that, due to the underlying infrastructure becoming unstable, they'll need to terminate the instance in a week.
A lot of options come to mind, none of which I've tried before or know much about. These include spinning up another instance and backing up and restoring this instance onto the new one, or using AWS Elastic Beanstalk or similar to automate the infrastructure setup and code deployment. Which of these (or other) options is most feasible and quickest to get working, and where should I start looking?
If it's just the instance, I'd go for an EBS snapshot and then restore the EC2 instance from it. Finally, swap the IP in Route 53 (there's a rough sketch of these steps below).
It's a relatively quick and straightforward process that's well documented by AWS, and there are loads of how-tos on the web too.
Here's where to start:
Create Amazon EBS Snapshot
and here's how to restore it.
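For reference, here's roughly what the whole backup-restore-and-swap flow looks like with boto3. It's a sketch only: the instance IDs, hosted zone ID, record name, and instance type are placeholders, and using create_image (which captures all attached volumes as an AMI) is just one way to do the snapshot-and-restore step.

```python
# Hypothetical sketch: image the doomed instance, launch a replacement, and
# repoint the Route 53 record. Every ID and name below is a placeholder.
import boto3

ec2 = boto3.client("ec2")
route53 = boto3.client("route53")

# Create an AMI of the existing instance (reboots it unless NoReboot=True).
image = ec2.create_image(InstanceId="i-0old0000000000000", Name="legacy-app-backup")
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

# Launch a replacement instance from that image.
new = ec2.run_instances(
    ImageId=image["ImageId"],
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    SubnetId="subnet-aaaa1111",
)
new_id = new["Instances"][0]["InstanceId"]
ec2.get_waiter("instance_running").wait(InstanceIds=[new_id])

# Assumes the subnet auto-assigns a public IP.
new_ip = ec2.describe_instances(InstanceIds=[new_id])["Reservations"][0]["Instances"][0]["PublicIpAddress"]

# Point the existing Route 53 A record at the new instance's public IP.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "TTL": 60,
                "ResourceRecords": [{"Value": new_ip}],
            },
        }]
    },
)
```

An Elastic IP attached to the new instance would save you the DNS change on any future replacement.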
On the other hand, you could go for a .NET app on Elastic Beanstalk, but that requires a bit more work to set up the environment and prepare the app for deployment.
More on how to create and deploy .NET on Elastic Beanstalk.
I'm a newer AWS user and today I got stuck while working on a sample project. I successfully created a Docker container that runs a simple R script that connects to my AWS RDS MySQL database and creates & writes some basic files to it. I built a public ECR repository, pushed my Docker image there, and built an ECS cluster & task, choosing Fargate and using the container image from my repository. My task ran and I could see the R code being executed when I went through the logs, but it was never able to connect to the SQL database and exited afterwards.
I've had to whitelist my own IP address in the security group for the RDS database so that I can connect to it, so I'm aware I probably have to do that for my ECS task to establish that connection too. But won't that IP address constantly change, since I won't have a static IP for the Fargate server that executes my task? I'm trying to stay on the free tier, so I'm not sure I want to set up an Elastic IP address for this.
These two articles seem close to, if not the same as, the issue I'm having, but I can't figure out a solution. I haven't found any other info.
https://aws.amazon.com/premiumsupport/knowledge-center/ecs-fargate-task-database-connection/
https://aws.amazon.com/premiumsupport/knowledge-center/ecs-fargate-static-elastic-ip-address/
The end goal is to get this sample project running on a fixed schedule, and then to run actual scripts there to help automate things and make my life easier, so this sample project is a first step towards that. Any help or info on the questions I'm having would be appreciated!
Yes, your task is ephemeral (whether you launch it manually or as part of an ECS service), and its private/public IP address may change over time if it gets replaced. The way to make the connectivity rules stick is to assign a security group to the task (outbound to everything, and inbound only on whatever specific port you actually need, if any) and assign another security group to the RDS database that allows inbound access on port 3306 from the task's security group. This is the trick: the security group never changes, and you are telling RDS to allow ALL traffic coming from that SG. I see the first article you posted doesn't cover this part (it should).
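In code, that rule is a single ingress authorization on the RDS security group that references the task's security group instead of a CIDR block. A minimal boto3 sketch, with both group IDs as placeholders:

```python
# Hypothetical sketch: allow the ECS task's security group into the RDS
# security group on MySQL's port 3306. Both group IDs are placeholders.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0rds0000000000000",  # security group attached to the RDS instance
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        # Reference the task's SG instead of an IP, so the rule survives task replacement.
        "UserIdGroupPairs": [{"GroupId": "sg-0task000000000000"}],
    }],
)
```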
I asked this on Server Fault, but evidently it was too basic for them.
I have read through a ton of documents on the Google Cloud Platform, but most of it is over my head; I am a developer and not a network person. I think what I am trying to do is pretty basic, but I can't find step-by-step instructions anywhere on how to accomplish the process. Google's documentation seems to assume a good deal of networking knowledge.
I have:
-created a "managed instance group" with autoscaling turned on
-RDP'd into the server and installed the required software
-uploaded all the code to run a site
-set up DNS to point to that site
-tested, and everything seems to work just as I would expect
I need to set up a load balancer and change the DNS to point to that instead of the server.
My web app doesn't have a back end per se, as it is entirely API-driven, so I'm not sure what to do with the "backend configuration" part of setting up the load balancer service.
I have an SSL cert on the server but don't know how to move it to the load balancer.
When the autoscaling kicks in, will all the software and code from the current server be used, or is there another step I need to do to make this happen? If I update code on the server via RDP, will the new autoscale-created instances be aware of it?
Can anyone explain these steps, or point me to a place NOT written for a sysadmin, so I can try to understand them myself?
Here I am sharing a short YouTube video (less than 5 minutes) with step-by-step instructions on how to quickly configure a load balancer in Google Cloud Platform with backend services.
I would also like to mention that SSL terminates at the load balancer. Here is the public documentation on Creating and Using SSL Certificates in load balancing.
Finally, you want to make sure that all the software and configuration you want on each instance is in place before you create the managed instance group; otherwise, the changes you make on one server will not be reflected on the others.
To do this, configure your server with all the necessary software and settings. Once the server is in the correct state, create an image from it. You can then use this image to create an instance template, which you will use for the managed instance group.
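If you'd rather script it than click through the console, here's a rough sketch of those two steps using the Compute Engine API via google-api-python-client. The project, zone, and resource names are placeholders, and both calls return long-running operations that you'd normally wait on before moving to the next step.

```python
# Hypothetical sketch: image the configured server's boot disk, then build an
# instance template from that image for the managed instance group to use.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")  # uses application default credentials
project = "my-project"
zone = "us-central1-a"

# 1. Create a custom image from the configured server's boot disk.
compute.images().insert(
    project=project,
    body={
        "name": "web-server-image-v1",
        "sourceDisk": f"projects/{project}/zones/{zone}/disks/web-server-disk",
    },
).execute()

# 2. Create an instance template that boots from that image; every instance the
#    managed instance group autoscales up will start from this template.
compute.instanceTemplates().insert(
    project=project,
    body={
        "name": "web-server-template-v1",
        "properties": {
            "machineType": "e2-medium",
            "disks": [{
                "boot": True,
                "autoDelete": True,
                "initializeParams": {
                    "sourceImage": f"projects/{project}/global/images/web-server-image-v1"
                },
            }],
            "networkInterfaces": [{
                "network": "global/networks/default",
                "accessConfigs": [{"type": "ONE_TO_ONE_NAT", "name": "External NAT"}],
            }],
        },
    },
).execute()
```

This also answers the "if I update code via RDP" question: changes made on one running instance don't propagate. You'd bake a new image and template version and roll the group onto it.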
I am setting up a Tomcat application in EC2. For reliability, I am running two or more instances. If one server goes down, my users should be redirected to the other instance. This suggests that session state should be kept in an external source, or mirrored between the servers.
AWS offers a hosted service, ElastiCache, which seems like it would work well. I even found a nice library, memcached-session-manager. However, I soon ran into some issues.
Unless someone can convince me otherwise, I need the session states to be encrypted in transit. Otherwise someone could intercept the network traffic and pretend to be someone else on my site. I don't see any built-in Amazon method to keep traffic off the internet. (Is peering available here?)
The library mentioned earlier does have Redis support with SSL, but it does not support a Redis cluster. Someone put in a pull request for this, but it has not been merged, and the library is a complex build. I may talk myself into living without the cluster, but that puts us back at a single point of failure.
Tomcat is running on EC2 in your VPC, and ElastiCache is in your VPC. Your AWS VPC is an isolated network: nobody can intercept the traffic between the EC2 and ElastiCache servers unless your VPC network becomes compromised in some way.
If you want to use Redis instead, with SSL connections, then I believe at this time you would need a Tomcat Session Manager implementation that uses Jedis. This one uses Jedis, but you would need to upgrade the version of Jedis it uses in order to use SSL connections.
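As a quick sanity check that in-transit encryption is actually working before you wire up Tomcat, you can connect to the Redis endpoint over TLS from any EC2 instance in the same VPC with redis-py. This is only a connectivity test, not part of the session-manager setup; the endpoint and auth token below are placeholders.

```python
# Hypothetical sketch: verify that an ElastiCache Redis endpoint with
# in-transit encryption accepts TLS connections from inside the VPC.
import redis

r = redis.Redis(
    host="my-redis.xxxxxx.use1.cache.amazonaws.com",  # placeholder ElastiCache endpoint
    port=6379,
    ssl=True,                    # requires in-transit encryption enabled on the cluster
    password="my-auth-token",    # only if Redis AUTH is enabled
)
print(r.ping())  # True if the TLS handshake and AUTH succeed
```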
I am trying to set up an Amazon server to host a dynamic website I'm currently creating. I bought the domain on GoDaddy.com, and I believe that what I've done so far has linked the domain to my Amazon account.
I followed this tutorial : http://www.mycowsworld.com/blog/2013/07/29/setting-up-a-godaddy-domain-name-with-amazon-web-services/
In short, this walked me through setting up Amazon S3 (Simple Storage Service) and Amazon Route 53. I then configured the DNS servers, and my website now loads properly on the domain.
I'm not sure on the next step from here, but I would like to set up:
-A database server
-Anything else that might be necessary to run a dynamic website.
I am very new to hosting websites, and semi-new to web development in general, so the more in depth the better.
Thanks a lot
You have two options on AWS: run an EC2 server and set up your application yourself, or continue to use AWS managed services like S3.
Flask apps can be hosted on Elastic Beanstalk, and your database can be hosted on RDS (Relational Database Service). The two can then be integrated.
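As a sketch of what that looks like: Elastic Beanstalk's Python platform by default expects a file called application.py exposing a WSGI callable named `application`. If you attach the RDS instance through the Beanstalk console it injects the RDS_* environment variables used below; otherwise set them yourself as environment properties. The query is just a placeholder to prove the app and database are talking.

```python
# application.py - minimal Flask app for Elastic Beanstalk, reading RDS
# connection details from environment variables (placeholders).
import os

import pymysql
from flask import Flask, jsonify

application = Flask(__name__)  # EB's Python platform looks for `application`


def get_connection():
    # RDS_* variables are set by EB for an attached RDS instance, or by you.
    return pymysql.connect(
        host=os.environ["RDS_HOSTNAME"],
        user=os.environ["RDS_USERNAME"],
        password=os.environ["RDS_PASSWORD"],
        database=os.environ["RDS_DB_NAME"],
    )


@application.route("/")
def index():
    conn = get_connection()
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT NOW()")  # placeholder query to prove connectivity
            (now,) = cur.fetchone()
    finally:
        conn.close()
    return jsonify(server_time=str(now))


if __name__ == "__main__":
    application.run(debug=True)  # local testing only; EB runs it under WSGI
```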
Otherwise, spin up your own t2.micro instance in EC2. Log in via SSH and set up the database server and application like you have locally. This server could also host the (currently S3-hosted) static files.
I have no idea what your requirements are; personally I would start with setting up the EC2 instance and go from there, as integrating AWS services without knowing what you need is probably not the easiest first step.
Heroku might be another option. They host their services on AWS and give you an end-to-end solution for deploying and running your Python code without getting your hands dirty setting up servers.