I wanted to know whether there is an option in Amazon Web Services to have two EC2 instances running and, as a developer, to have direct access to the one of my choice when both servers serve the same domain.
By access, I mean regular access to the website via a web browser (e.g. www.domain.com/some-post/)
I want my site to stay up and live. I currently have a single EC2 server that serves www.domain.com. If I add another server via an Elastic Load Balancer, I don't have control over which server the load balancer sends me to.
I have a WordPress site whose theme, plugins and core files I want to upgrade, so I want only me to have access to that server to test it out. I could spin up a server and test it on a public IP; I did that, and it didn't work as expected, so I need to run it under the original address to make sure that if it runs OK like that, it will run OK live.
The only way I thought of doing this is to create an image of the server, launch a new EC2 instance from it, use a different domain name, restrict access to the server to my IP address, change the domain name in the DB, and then, after everything works, change the domain back to the original and point the Elastic IP at the new server.
No, you can't achieve this behavior with an ELB. It would totally defeat the purpose of an ELB, whose job is to distribute traffic evenly amongst the instances associated with it.
By the sounds of it, you're looking for a testing stage that you can use to test out new updates etc without damaging the live site.
You could always set up a DNS name for your testing stage, e.g. "alpha.mysite.com".
It's quite common practice to use environment variables for use cases like this. You might have an environment variable set on each machine, e.g. stage=prod in production and stage=test on your testing stage. Then in your code, you can read this environment variable and do something different depending on which stage the code is running on, for example use the production or development database.
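A minimal sketch of that idea in Python (the variable name "stage" and the host values are placeholders, not part of your actual setup):

```python
import os

# Hypothetical per-stage configuration; substitute your real hosts.
DB_HOSTS = {
    "prod": "prod-db.example.com",
    "test": "test-db.example.com",
}

def db_host_for_stage(environ=os.environ):
    """Pick the database host based on the 'stage' environment variable,
    defaulting to the test database as the safe choice."""
    stage = environ.get("stage", "test")
    return DB_HOSTS[stage]
```

The same code then runs unchanged on both stages; only the environment variable set on each instance differs.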
It might be an idea to start using CodeDeploy for pushing your code. This way, you can have deployment hooks set up your environment on each instance: install dependencies, load the code, start the application, and so on. Then, using the environment variables already on the instances being deployed to, your code will do the correct thing.
I suppose you could put the test stage on a different port on your prod machines and that way you could use the same domain, but this would be a really bad idea. I think to get a safe, fault tolerant and scalable solution, you're going to need an additional DNS name. And you most certainly shouldn't use the same ELB. If you want to test load balancing for your test application, you should use an additional ELB.
In fact, some people even go to the lengths of using separate AWS accounts for managing test environments.
You might also be interested in CodePipeline to help you with this.
If I understand correctly, you run multiple instances behind a single ELB and want to be able to access one of the instances to test upgrades. I assume that, while performing and testing the upgrade, you don't want other users to access that instance.
I can think of a few ways to accomplish this. Here are two practical ones:
1. Remove the instance from the load balancer using the AWS console or CLI. No requests to the ELB will go to this instance.
Access the instance you want to upgrade directly on its own address. For this, the security group on the instance must be configured to allow HTTP connections from the outside. You could allow access only from your own IP and the load balancer, for example.
2. Create another ELB for test purposes. Make sure that the instance you're upgrading only responds to the test ELB, not to the production ELB. There are two ways to accomplish this: either remove it from the production ELB manually, or make the ELB health check on the instance fail. (In the latter case, you would need different health checks for the test and production ELBs.)
My advice: when costs are an issue, go for option 1. When the additional cost of an extra ELB is not an issue, go for option 2: manually remove the instance from the production ELB while upgrading, and re-attach it when done and tested.
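Removing and re-attaching the instance can be done from the CLI as well as the console. A sketch with the classic ELB commands (the load balancer name and instance ID are placeholders):

```shell
# Take the instance out of the production ELB before upgrading.
aws elb deregister-instances-from-load-balancer \
    --load-balancer-name my-prod-elb \
    --instances i-0123456789abcdef0

# ...perform the upgrade and test it directly against the instance...

# Put the instance back once you're happy with the result.
aws elb register-instances-with-load-balancer \
    --load-balancer-name my-prod-elb \
    --instances i-0123456789abcdef0
```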
Update (I realized I didn't answer your question completely): for this to work without changing the domain in your database, you need to point the machine you're testing from at the right host.
Two options:
1. When going for the direct HTTP connection to the instance, make sure the instance has an external IP. Put the domain in your hosts file and point it at that IP.
2. When going for an extra test ELB, either point the domain in your hosts file at one of the ELB's IPs, or run a local DNS server that has a record for the domain with a CNAME to the ELB hostname.
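For the hosts-file approach, the entry on your own machine (not on the server) would look something like this; the IP is a placeholder for your instance's public address:

```
# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows):
# point the production domain at the test instance so only you see it.
203.0.113.10    www.domain.com
```

Note that for an ELB this is fragile, since ELB IPs can change over time; the local-DNS-with-CNAME option avoids that.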
Although verifying the correct upgrade of a single production node is a valid use case, in this case you're probably better off creating a separate test environment on a different domain.
This way, you can test all changes/upgrades in isolation.
For best results, you would need to periodically transfer the database from production to the test environment. You could write a database script that automatically changes the domain in the database, so you can (partially or fully) automate the production-to-test database restore process.
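A minimal sketch of such a script in Python, operating on a SQL dump (names are placeholders):

```python
def rewrite_domain(dump_sql, prod_domain, test_domain):
    """Replace the production domain with the test domain in a SQL dump.

    Caveat for WordPress: PHP-serialized option values embed string
    lengths, so a plain text replace can corrupt them when the domains
    differ in length; a serialization-aware tool such as
    `wp search-replace` is safer in that case.
    """
    return dump_sql.replace(prod_domain, test_domain)
```

You would run this over the dump after exporting from production and before importing into the test database.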
I am working with an application deployed on AWS EC2. One head instance, www.me.com, provides login and admin such that a user can spawn an instance of the application. A new AWS EC2 instance is started to run the application, then the user's browser is redirected to that instance (with a long URL ending in amazonaws.com), and the application is available until the user closes it (then the EC2 instance is stopped).
We now wish to move the application to use SSL. We can get a certificate from some CA, and tie it to *.me.com. The question is, how do we use the same certificate to secure the application instances?
One idea is to use Elastic IP - we reserve N IPs, and tie N sub-domains (foo1.me.com, foo2, ...) to these. Then we start each application instance associated with one of these, and direct the user to the associated sub-domain. Certificate is valid, all is well. I think this works?
Trouble is, the application should scale to 1000's of simultaneous users, but may well spend the majority of time bouncing along around zero users, so we'll pay significant penalty costs for reserving the unused IPs, and besides, we might exceed N and have to deny access.
Simpler, perhaps, would be to route access to the application through the head server, www.me.com, using either "a.me.com" or "www.me.com/a". The former, I think, doesn't work, because it would need DNS records to be updated before it is useful to the user, which cannot happen quickly enough to offer to a user on the fly. The latter might work, but I don't know web infrastructure well enough to imagine how to engineer it. And it's not just port 80: we are also providing services on other ports. So we'd need something like:
www.me.com/a:80 <--> foo.amazonaws.com:80
www.me.com/a:8001 <--> foo.amazonaws.com:8001
www.me.com/a:8002 <--> foo.amazonaws.com:8002
...
It seems to me there are two options: either a head server that handles all traffic (under me.com, and thus under the certificate) and somehow hands it off to the application instances, or some method of allowing users to connect directly to the application instances, but in such a way that we can reasonably manage securing these connections with one (or a small number of) certificates.
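The head-server option is essentially a TLS-terminating reverse proxy. A rough sketch of what that could look like with nginx, purely as an illustration (all names, paths, and ports are placeholders, and a real deployment would need the per-user mappings generated dynamically rather than written by hand):

```nginx
server {
    listen 443 ssl;
    server_name www.me.com;

    # The single *.me.com certificate lives only on the head server.
    ssl_certificate     /etc/ssl/certs/me.com.crt;
    ssl_certificate_key /etc/ssl/private/me.com.key;

    # User "a" was redirected here; forward to their application instance.
    location /a/ {
        proxy_pass http://foo.amazonaws.com:80/;
    }
}

# Each extra service port needs its own TLS listener, e.g.:
server {
    listen 8001 ssl;
    server_name www.me.com;
    ssl_certificate     /etc/ssl/certs/me.com.crt;
    ssl_certificate_key /etc/ssl/private/me.com.key;
    location /a/ {
        proxy_pass http://foo.amazonaws.com:8001/;
    }
}
```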
Can anyone suggest the right way to do this? I'm assuming it's not Route 53, since, again, that's a DNS thing, with DNS lag. Unless I've misunderstood.
Thanks.
We currently have a site that is on AWS Elastic Beanstalk as a Single Instance environment in production, but we would like to change that to Load Balanced.
The site currently uses .ebextensions to run a few things, and also to set up SSL from Let's Encrypt. Apart from that, it's a pretty bog-standard Magento site.
So, what would be the best way to switch without causing any downtime, and keeping the site available with HTTPS?
I am assuming it isn't as simple as changing it from Single Instance to Load Balanced under Configuration -> Modify capacity, and setting the instance range to, say, 1-5?
Yes, it really is that easy. However, there will be a small amount of downtime.
Create an SSL certificate in Certificate Manager. This will be used for the load balancer.
Then switch the configuration in the AWS Console to add load balancing, and set up the Auto Scaling launch configuration.
The reconfiguration will take about two minutes (in my experience).
I've been tasked with getting a new SSL installed on a website, the site is hosted on AWS EC2.
I've discovered that I need the key pair in order to connect to the server instance; however, the client doesn't have contact with the former web master.
I don't have much familiarity with AWS, so I'm somewhat at a loss as to how to proceed. I'm guessing I would need the old key pair to access the server instance and install the SSL certificate?
I see there's also the Certificate Manager section in AWS, but I don't currently see an SSL certificate in there. Will installing it there attach it to the website, or do I need to access the server instance and install it there?
There is a documented process for updating the SSH keys on an EC2 instance. However, it requires some downtime, and it must not be run on an instance-store-backed instance. If you're new to AWS, you might not be able to determine whether that's the case, so it would be risky.
Instead, I think your best option is to bring up an Elastic Load Balancer to be the new front-end for the application: clients will connect to it, and it will in turn connect to the application instance. You can attach an ACM cert to the ELB, and shifting traffic should be a matter of changing the DNS entry (but, of course, test it out first!).
Moving forward, you should redeploy the application to a new EC2 instance, and then point the ELB at this instance. This may be easier said than done, because the old instance is probably manually configured. With luck you have the site in source control, and can do deploys in a test environment.
If not, and you're running on Linux, you'll need to make a snapshot of the live instance and attach it to a different instance to learn how it's configured. Start with the EC2 EBS docs and try it out in a test environment before touching production.
I'm not sure if there's any good way to recover the content from a Windows EC2 instance. And if you're not comfortable with doing ops, you should find someone who is.
We are looking to migrate to AWS in the start of the new year.
One of the issues we have is that some of the applications we will be migrating have been configured with hardcoded IP addresses as DB hostnames.
We will be using ELBs to take full advantage of the elasticity and dynamic nature of AWS for our infrastructure. With that in mind, the IP addresses that were static before will now be dynamic (frequently assigned new IPs).
What is the best approach to solving these hardcoded values?
In particular, IP addresses? I appreciate that usernames, passwords, etc. can be placed into a single config file and read with an INI parser.
I think one solution could be:
1) Make an AWS API call to query the host's current IP address, and use the returned value.
Appreciate any help with this!
You should avoid hardcoding IP addresses; use the hostname of the referenced resource instead. With either RDS or a self-hosted DB running on EC2, you can use DNS to resolve the IP by hostname at run time.
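In most cases your DB client library does this resolution for you when you give it a hostname, but as a sketch, resolving a hostname at run time in Python looks like this:

```python
import socket

def resolve_db_host(hostname):
    """Resolve a hostname (e.g. an RDS endpoint) to an IPv4 address at
    run time, instead of baking an IP address into application config."""
    return socket.gethostbyname(hostname)
```

Configuration then stores only the stable hostname, and the dynamic IP is looked up fresh whenever the application connects.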
Assuming you are using CodeDeploy to provision the software, you can use the CodeDeploy lifecycle event hooks to configure the application after the software has been installed. An AfterInstall hook could be configured to retrieve your application parameters and make them available to the application before it starts.
Regarding the storage of application configuration data, consider using the AWS Systems Manager Parameter Store. Using it as a secure and durable source of application configuration data, you can retrieve the DB host address and other application parameters at provision time, using the CodeDeploy features mentioned above.
AWS says that when we deploy multiple applications of the same type, it is better to deploy them on the same server instances.
I am not sure if this is a best practice for deployment.
Is there any further references for that?
http://docs.aws.amazon.com/opsworks/latest/userguide/workingapps-multiple.html
Running Multiple Applications on the Same Application Server
If you have multiple applications of the same type, it is sometimes more cost-effective to run them on the same application server instances.
To run multiple applications on the same server
Add an app to the stack for each application.
Obtain a separate subdomain for each app and map the subdomains to the application server's or load balancer's IP address.
Edit each app's configuration to specify the appropriate subdomain.
For more information on how to perform these tasks, see Using Custom Domains.
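At the web-server level, the subdomain mapping in those steps amounts to name-based virtual hosts. A sketch of the idea with nginx (names and paths are placeholders; on OpsWorks the layer's built-in recipes would normally generate this configuration for you):

```nginx
# Two apps of the same type sharing one application server instance,
# separated by subdomain.
server {
    listen 80;
    server_name app1.example.com;
    root /srv/app1/public;
}

server {
    listen 80;
    server_name app2.example.com;
    root /srv/app2/public;
}
```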
They don't actually say "it's better"; they say it is more cost-efficient. If the instance you are running on has excess capacity, and money is a concern, why not use it?
If money is no object, by all means run just a single app on each instance. It gives you a bit more flexibility and a cleaner separation between the duties of each server: if you had just a single app on a single instance and you didn't need the app anymore, you could terminate the instance.