AWS Migration - Hardcoded IP addresses

We are looking to migrate to AWS at the start of the new year.
One of the issues we have is that some of the applications we will be migrating have been configured with hardcoded IP addresses (DB hostnames).
We will be using ELBs to fully utilise the elasticity and dynamic nature of AWS for our infrastructure. With this in mind, those IP addresses that were static before will now be dynamic (i.e. frequently assigned new IPs).
What is the best approach to solving these hardcoded values?
In particular, IP addresses? I appreciate that usernames, passwords, etc. can be placed into a single config file and read with an INI parser.
I think one solution could be:
1) Make an AWS API call to query the current IP address of the host, then use that value.
Appreciate any help with this!

You should avoid hardcoding IP addresses and use the hostname of the referenced resource instead. With either RDS or a self-hosted DB running on EC2, you can use DNS to resolve the IP from the hostname at run time.
Assuming you are using CodeDeploy to provision the software, you can use CodeDeploy lifecycle event hooks to configure the application after the software has been installed. An AfterInstall hook could be configured to retrieve your application parameters and make them available to the application before it starts.
Regarding the storage of application configuration data, consider using AWS Systems Manager Parameter Store. Using it as a secure and durable source of application configuration data, you can retrieve the DB host address and other application parameters at provisioning time, using the CodeDeploy features mentioned above.
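As a rough sketch of that approach, an AfterInstall hook could run a small script like the one below to pull the DB hostname (not its IP) and credentials out of Parameter Store and write them into the application's config file. The parameter names, region and file path are made-up examples, not values from the question:

```python
# after_install.py - hypothetical CodeDeploy AfterInstall hook script
import configparser
import boto3

ssm = boto3.client("ssm", region_name="eu-west-1")  # assumed region

def get_param(name: str) -> str:
    """Fetch one parameter, decrypting it if it is stored as a SecureString."""
    resp = ssm.get_parameter(Name=name, WithDecryption=True)
    return resp["Parameter"]["Value"]

config = configparser.ConfigParser()
config["database"] = {
    # Store the DB endpoint *hostname* in Parameter Store, never a raw IP,
    # so that DNS resolves the current address at run time.
    "host": get_param("/myapp/db/host"),
    "username": get_param("/myapp/db/username"),
    "password": get_param("/myapp/db/password"),
}

with open("/opt/myapp/app.ini", "w") as fh:  # assumed config location
    config.write(fh)
```

The application then reads the INI file at start-up like any other config, so no AWS-specific code needs to live in the application itself.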

Related

Best way to handle EC2 instance forced termination

I have an EC2 instance which hosts a Windows service, a .NET API and a simple .NET website. There's also the added complication of a Route 53 endpoint pointing to it and an HTTPS cert allocated via AWS Certificate Manager. Yes, it's a lot of apps on a single instance and I will look at separating them later. I got a message from AWS saying that due to the underlying infrastructure becoming unstable, they'll need to terminate the instance in a week.
Lots of options come to mind, none of which I've tried before or know much about. These options include spinning up another instance and backing up and restoring this instance onto the new one, OR using AWS Elastic Beanstalk or something to automate the infrastructure setup and code deployment. Which of these (or another) options is most feasible and quick to get working, and where should I start looking?
If it's just the instance, I'd go for an EBS snapshot and then restore the EC2 instance from it. Finally, swap the IP in Route 53.
It's a relatively quick and rather straightforward process that's well documented by AWS, and there are loads of how-tos on the web too.
Here's where to start:
Create Amazon EBS Snapshot
and here's how to restore it.
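If you prefer to script the backup and restore rather than click through the console, a minimal boto3 sketch might look like the following. It works at the AMI level (which bundles the EBS snapshots of all attached volumes) rather than handling raw snapshots; the instance ID, region, instance type, key pair and security group are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # assumed region

OLD_INSTANCE = "i-0123456789abcdef0"  # the instance AWS is about to terminate (placeholder)

# Create an AMI of the running instance; this snapshots its EBS volumes.
image_id = ec2.create_image(
    InstanceId=OLD_INSTANCE,
    Name="pre-termination-backup",
    NoReboot=True,  # set to False if a reboot is acceptable for a more consistent image
)["ImageId"]
ec2.get_waiter("image_available").wait(ImageIds=[image_id])

# Launch the replacement instance from that image.
new_instance = ec2.run_instances(
    ImageId=image_id,
    InstanceType="t3.medium",                    # match or adjust the old instance type
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",                        # placeholder key pair
    SecurityGroupIds=["sg-0123456789abcdef0"],   # placeholder security group
)["Instances"][0]

print(new_instance["InstanceId"], new_instance.get("PublicDnsName"))
```

Once the new instance is up and the apps are confirmed working, swap the Route 53 record over to it as described above.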
On the other hand, you could go for a .NET app on Elastic Beanstalk, but that requires a bit more work to set up the environment and prepare the app for deployment.
More on how to create and deploy .NET on Elastic Beanstalk.

SSL Install on AWS

I've been tasked with getting a new SSL certificate installed on a website; the site is hosted on an AWS EC2 instance.
I've discovered that I need the key pair in order to connect to the server instance; however, the client no longer has contact with the former webmaster.
I don't have much familiarity with AWS, so I'm somewhat at a loss as to how to proceed. I'm guessing I would need the old key pair to access the server instance and install the SSL certificate?
I see there's also the Certificate Manager section in AWS, but I don't currently see a certificate in there. Will installing one here attach it to the website, or do I need to access the server instance and install it there?
There is a documented process for updating the SSH keys on an EC2 instance. However, this requires some downtime and must not be run on an instance-store-backed instance. If you're new to AWS you might not be able to determine whether that is the case, so this would be risky.
Instead, I think your best option is to bring up an Elastic Load Balancer to be the new front-end for the application: clients will connect to it, and it will in turn connect to the application instance. You can attach an ACM cert to the ELB, and shifting traffic should be a matter of changing the DNS entry (but, of course, test it out first!).
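To illustrate that setup, here is a hedged boto3 sketch that stands up an Application Load Balancer (rather than a Classic ELB), registers the existing instance behind it, and terminates HTTPS with an ACM certificate. The subnet, security group, VPC, instance and certificate identifiers are all placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="eu-west-1")  # assumed region

# Create the load balancer in front of the existing application instance.
lb = elbv2.create_load_balancer(
    Name="site-front-end",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],   # placeholder subnets
    SecurityGroups=["sg-0123456789abcdef0"],          # placeholder security group
    Scheme="internet-facing",
    Type="application",
)["LoadBalancers"][0]

tg = elbv2.create_target_group(
    Name="site-instances",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",                    # placeholder VPC
    TargetType="instance",
)["TargetGroups"][0]

elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": "i-0123456789abcdef0"}],          # the existing application instance
)

# The HTTPS listener terminates TLS with the ACM certificate and forwards plain HTTP inside.
elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancerArn"],
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:eu-west-1:123456789012:certificate/EXAMPLE"}],
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)
```

After the listener is in place, point the site's DNS record at the load balancer's DNS name and verify everything works before retiring the old record.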
Moving forward, you should redeploy the application to a new EC2 instance, and then point the ELB at this instance. This may be easier said than done, because the old instance is probably manually configured. With luck you have the site in source control, and can do deploys in a test environment.
If not, and you're running on Linux, you'll need to snapshot the live instance's volume, create a new volume from the snapshot, and attach it to a different instance to learn how the site is configured. Start with the EC2 EBS docs and try it out in a test environment before touching production.
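A hedged sketch of that inspection workflow with boto3, where the volume ID, availability zone and the ID of the instance used for inspection are all made up:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # assumed region

# Snapshot the live instance's root volume...
snap_id = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="copy of live web server root volume",
)["SnapshotId"]
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap_id])

# ...create a new volume from the snapshot in the same AZ as the inspection instance...
vol_id = ec2.create_volume(SnapshotId=snap_id, AvailabilityZone="eu-west-1a")["VolumeId"]
ec2.get_waiter("volume_available").wait(VolumeIds=[vol_id])

# ...and attach it to a separate instance, where it can be mounted read-only and examined.
ec2.attach_volume(VolumeId=vol_id, InstanceId="i-0fedcba9876543210", Device="/dev/sdf")
```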
I'm not sure if there's any good way to recover the content from a Windows EC2 instance. And if you're not comfortable with doing ops, you should find someone who is.

Which AWS services to pick for the right architecture?

AWS seems a little daunting with too many overlapping services so I'm looking for some advice and direction.
We have a mobile app for which we've developed a sync server (i.e. users will sign up and sync data kept on AWS). Currently we've set up an EC2 instance with a web server, Django endpoints and a Postgres server. However, we need the following:
Ensure the service is available from different regions of the world for faster access
If that requires putting the postgres server outside of the EC2, what service do we need and how would replication work?
We will have larger file attachments stored on S3 separately, but need to do this securely and encrypt the files
Eventually we will host a web-app (i.e. an Angular 2 app) that would connect to the same database.
We also would need to do all this in the most economical way and then scale up as the load increases.
Any guidance would be appreciated; I'm struggling with the terminology at the moment. We also set up an Amazon SSL certificate, however that requires an Elastic Load Balancer and we only have one EC2 instance. What do we do to get this all working securely?
Based on the information provided, I would recommend starting with AWS Elastic Beanstalk, which will manage auto scaling and load balancing while providing you with a DNS URL for external domain mapping.
To ensure that the service is available from different regions for faster access, you can cache the static Angular app using CloudFront. You can then attach the SSL certificate to CloudFront instead of the ELB. If you plan to create multiple environments for different regions, you can use Route 53 for geo-based routing.
To take the Postgres server outside EC2, you can use Amazon RDS. It supports synchronous replication with failover for Multi-AZ deployments, and Postgres on RDS also supports cross-region replication if you plan to set up deployment environments in multiple regions. You can also create read replicas, which are replicated asynchronously, to improve read performance.
You can encrypt the files in S3 using AES-256, with keys from KMS or supplied by your client. I would also recommend using signed URLs with CloudFront in front of S3 to serve these files, so that clients can access them securely and directly, improving performance by taking advantage of distributed caching.
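To make that concrete, here is a hedged boto3 sketch that uploads an attachment with SSE-KMS server-side encryption and then generates a CloudFront signed URL for it. The bucket name, KMS key alias, CloudFront domain, key-pair ID and private key file are all placeholders, and signed URLs additionally require a CloudFront key pair (trusted signer) to be configured on the distribution:

```python
import datetime
import boto3
from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

s3 = boto3.client("s3")

# Upload the attachment encrypted with a KMS key (placeholder bucket and key alias).
with open("report.pdf", "rb") as fh:
    s3.put_object(
        Bucket="myapp-attachments",
        Key="uploads/report.pdf",
        Body=fh,
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId="alias/myapp-files",
    )

def rsa_signer(message: bytes) -> bytes:
    """Sign the CloudFront policy with the private key of the CloudFront key pair."""
    with open("cloudfront_private_key.pem", "rb") as key_file:
        key = serialization.load_pem_private_key(key_file.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())  # CloudFront requires RSA-SHA1

signer = CloudFrontSigner("APKAEXAMPLEKEYID", rsa_signer)  # placeholder key-pair ID
signed_url = signer.generate_presigned_url(
    "https://d1234example.cloudfront.net/uploads/report.pdf",
    date_less_than=datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(hours=1),
)
print(signed_url)
```

Clients receive the time-limited signed URL and fetch the file from the nearest CloudFront edge, while the bucket itself stays private.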
You can host the Angular app in S3 and cache it using CloudFront for faster access. Another option is to cache only the static asset paths in CloudFront, so that subsequent requests for static assets are served from CloudFront.
FAQs from Amazon
Who should use AWS Elastic Beanstalk?
Those who want to deploy and manage their applications within minutes in the AWS Cloud. You don't need experience with cloud computing to get started. AWS Elastic Beanstalk supports Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker web applications.
Your current environment isn't scalable (either in response to load or to another region). If you need scalability, it should be rearranged. It is difficult to give you details because the required environment depends on the application's architecture, but here are some suggestions:
DB: for better stability, a Multi-AZ RDS setup for the DB is recommended. The benefit is that RDS is a fully managed service, so you don't need to worry about replication, maintenance, etc.
Web/app servers: you can deploy a copy in any region you want and connect to the same DB.
S3: you can enable cross-region replication as well as encryption, but make sure it is used wisely (e.g. files are served to the client from the bucket in the closest region).
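As a rough sketch of that S3 setup with boto3 (bucket names and the replication role ARN are placeholders, both buckets must already exist in different regions, and replicating KMS-encrypted objects needs extra configuration not shown here):

```python
import boto3

s3 = boto3.client("s3")

SOURCE = "myapp-attachments-eu"   # placeholder source bucket
DEST = "myapp-attachments-us"     # placeholder destination bucket (in another region)
ROLE_ARN = "arn:aws:iam::123456789012:role/s3-replication-role"  # placeholder role

# Both buckets must have versioning enabled before replication can be configured.
for bucket in (SOURCE, DEST):
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

# Default encryption on the source bucket, so new objects are encrypted with SSE-KMS.
s3.put_bucket_encryption(
    Bucket=SOURCE,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}]
    },
)

# Replicate everything in the source bucket to the destination bucket.
s3.put_bucket_replication(
    Bucket=SOURCE,
    ReplicationConfiguration={
        "Role": ROLE_ARN,
        "Rules": [{
            "ID": "replicate-all",
            "Prefix": "",
            "Status": "Enabled",
            "Destination": {"Bucket": f"arn:aws:s3:::{DEST}"},
        }],
    },
)
```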
You can set up your own SSL certificate on the server, and that does not require an ELB. However, you can also use an ELB with just one web node.
I do NOT suggest using Beanstalk: although it really does make the first steps easier, you may have trouble trying to configure something non-standard later on (unless you're already very familiar with Elastic Beanstalk, of course).
To add efficiency you may want to add a CDN (either AWS's or another vendor's).
Make sure your environment configuration is really secure. You may need someone on your team who is familiar with AWS, because every one of these topics could be turned into a separate article.

AWS Best practice - When the external IP address changes on stop/start

Here's what's bothering me. Is there a better way than sending emails to devs that the IP address for their dev server has changed after the instance is stopped and started?
I was thinking of a single small instance with an Elastic IP that the devs can log in to from a terminal, and then SSH again to the internal IP address of the dev server. Is that effective?
Does it mean that the devs need to be informed of the change every time?
To clarify what you mean by "there's a new public DNS for the server" (following the comments): you are referring to the AWS domain name in the format "ec2-54-222-213-143.eu-west-1.compute.amazonaws.com", and you are asking how these name/address changes can be managed.
Generally speaking, for fixing these kinds of problems there are a couple of things to be aware of.
Firstly, if it is the public IP address that is changing, use an Elastic IP instead of an ephemeral public IP address. An Elastic IP stays the same and can be transferred from an old instance to a new one. See http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html for the differences between Elastic IPs and normal public IP addresses on AWS.
Secondly, if you are concerned about maintaining the DNS records that map the IP addresses to domain names, it is possible to automate the updates to AWS Route 53. I have used the AWS CLI command "route53 change-resource-record-sets" for this, and also CloudFormation.
Automating actions to run on instance start-up does take a little research into the available APIs and hooks; for example, see this answer for a simple use of cloud-init user data.
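Putting those two pieces together, a start-up script (run from cloud-init user data, for example) could look up the instance's current public DNS name from the metadata service and UPSERT a friendly record in Route 53, so the devs never have to be emailed a new address. The hosted zone ID and record name below are placeholders, and the metadata call uses IMDSv1 for brevity (IMDSv2 requires fetching a session token first):

```python
import urllib.request
import boto3

# Ask the instance metadata service for this instance's current public DNS name.
public_dns = urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/public-hostname", timeout=2
).read().decode()

route53 = boto3.client("route53")
route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",               # placeholder hosted zone
    ChangeBatch={
        "Comment": "Update dev server record after stop/start",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "dev1.example.com.",      # placeholder record name
                "Type": "CNAME",
                "TTL": 60,
                "ResourceRecords": [{"Value": public_dns}],
            },
        }],
    },
)
```

The devs then always connect to the friendly name, regardless of which public address or hostname the instance was given on its latest start.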

Having specific website access on a specific EC2 instance under an ELB

I wanted to know if there is an option in Amazon Web Services to have two EC2 instances running and for me, as a developer, to have direct access to the one of my choice when both servers serve under the same domain.
By access, I mean regular access to the website via a web browser (e.g. www.domain.com/some-post/).
I want my site to stay up and live. I currently have a single EC2 server that serves www.domain.com. If I add another server via an Elastic Load Balancer, I don't have control over which server the load balancer sends me to.
I have a WordPress site whose theme, plugins and core files I want to upgrade, so I want only myself to have access to that server while I test it out. I could spin up a server and test it on a public IP (I did that, and it doesn't work as expected), so I need to run it under the original address to make sure that if it runs OK like that, it will run OK live.
The only way I thought of doing it is to create an image of the server, create an EC2 instance from it, use a different domain name, restrict access to the server to my IP address, change the domain name in the DB, and then, after everything works, change the domain back to the original and make the Elastic IP point to the new server.
No, you can't achieve this behaviour with an ELB. It would totally defeat the purpose of an ELB, whose job is to evenly distribute traffic amongst the instances associated with it.
By the sounds of it, you're looking for a testing stage that you can use to test out new updates etc. without damaging the live site.
You could always set up a DNS name under your domain for your testing stage, e.g. "alpha.mysite.com".
It's quite common practice to use environment variables for use cases like this. You might have an environment variable set on each machine, e.g. stage=prod in production and stage=test on your testing stage. Then in your code you can read this environment variable and do something different depending on which stage the code is running on, for example use the production or development database.
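For instance, a minimal sketch in Python, where the variable name STAGE and the hostnames are made up for illustration:

```python
import os

# Hypothetical variable name; set per instance (e.g. STAGE=prod or STAGE=test).
STAGE = os.environ.get("STAGE", "test")

# Made-up endpoints: pick the database host based on the stage this code runs on.
DB_HOSTS = {
    "prod": "prod-db.example.internal",
    "test": "test-db.example.internal",
}
DB_HOST = DB_HOSTS.get(STAGE, DB_HOSTS["test"])
```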
It might be an idea to start using CodeDeploy for pushing your code. That way, deployment hooks can set up your environment on each instance (install dependencies, load the code, start the application, etc.), and then, using the environment variables already on the instances being deployed to, your code will do the correct thing.
I suppose you could put the test stage on a different port on your prod machines and that way you could use the same domain, but this would be a really bad idea. I think to get a safe, fault tolerant and scalable solution, you're going to need an additional DNS name. And you most certainly shouldn't use the same ELB. If you want to test load balancing for your test application, you should use an additional ELB.
In fact, some people even go to the lengths of using different AWS accounts to manage their test environments.
You might also be interested in CodePipeline to help you with this.
If I understand correctly, you run multiple instances behind a single ELB and want to be able to access one of the instances to test upgrades. I assume that, while performing and testing the upgrade, you don't want other users to access that instance.
I can think of a few ways to accomplish this. Here are two practical ones:
1. Remove the instance from the load balancer using the AWS console or CLI. No requests to the ELB will go to this instance.
Access the instance you want to upgrade directly on its own address. For this, the security group on the instance must be configured to allow HTTP connections from the outside. You could allow access only from your own IP and the load balancer, for example.
2. Create another ELB for test purposes. Make sure that the instance you're upgrading only responds to the test ELB, not to the production ELB. Two ways to accomplish this: either remove it from the production ELB manually, or make the ELB health check on the instance fail. (In the latter case, you would need different health checks for the test and production ELBs.)
My advice: when cost is an issue, go for option 1. When the additional cost of an extra ELB is not an issue, go for option 2: manually remove the instance from the production ELB while upgrading, and re-attach it when done and tested.
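If you go with option 1, the removal and re-attachment can be scripted instead of done in the console. A minimal boto3 sketch against a Classic ELB, where the load balancer name, region and instance ID are placeholders (for an Application Load Balancer you would use the elbv2 client and deregister_targets instead):

```python
import boto3

elb = boto3.client("elb", region_name="eu-west-1")  # assumed region; Classic ELB API

LOAD_BALANCER = "my-prod-elb"          # placeholder load balancer name
INSTANCE_ID = "i-0123456789abcdef0"    # placeholder instance ID

# Take the instance out of rotation before upgrading it (option 1 above).
elb.deregister_instances_from_load_balancer(
    LoadBalancerName=LOAD_BALANCER,
    Instances=[{"InstanceId": INSTANCE_ID}],
)

# ...perform and test the upgrade, then put the instance back into rotation:
elb.register_instances_with_load_balancer(
    LoadBalancerName=LOAD_BALANCER,
    Instances=[{"InstanceId": INSTANCE_ID}],
)
```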
Update (I realised I didn't answer your question completely): for this to work without changing the domain in your database, you would need to point the machine you're testing from to the right host.
Two options:
1. When going for the direct HTTP connection to the instance, make sure that the instance has an external IP. Put the domain in your hosts file and point it to that IP.
2. When going for an extra test ELB, either point the domain in your hosts file to one of the ELB's IPs, or run a local DNS server that has a record for the domain with a CNAME to the ELB hostname.
Although verifying the correct upgrade of a single production node is a valid use case, in this case you're probably better off creating a separate test environment on a different domain.
This way, you can test all changes/upgrades in isolation.
For best results, you would need to periodically transfer the database from production to the test environment. You could write a database script that automatically changes the domain in the database, so you can (partially or fully) automate the production-to-test database restore process.
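As an example of such a script, and assuming the WordPress/MySQL setup from the question above (the connection details and the default wp_ table prefix are assumptions), something like the following could rewrite the domain after each restore. It does not touch PHP-serialized values, so wp-cli's search-replace command is the safer tool if plugins store URLs in serialized options:

```python
import pymysql

OLD_URL = "https://www.domain.com"    # production domain (scheme assumed)
NEW_URL = "https://alpha.mysite.com"  # placeholder test-stage domain

conn = pymysql.connect(
    host="test-db.example.internal",  # placeholder connection details
    user="wordpress",
    password="change-me",
    database="wordpress",
)
try:
    with conn.cursor() as cur:
        # Core site URL settings.
        cur.execute(
            "UPDATE wp_options SET option_value = %s "
            "WHERE option_name IN ('siteurl', 'home')",
            (NEW_URL,),
        )
        # Links embedded in post GUIDs and content.
        cur.execute("UPDATE wp_posts SET guid = REPLACE(guid, %s, %s)", (OLD_URL, NEW_URL))
        cur.execute(
            "UPDATE wp_posts SET post_content = REPLACE(post_content, %s, %s)",
            (OLD_URL, NEW_URL),
        )
    conn.commit()
finally:
    conn.close()
```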