Accessing AWS website to change one .html file - Django

Some time ago (years) I deployed a website made with Django on AWS. Now I'm trying to make some changes to just one .html file, but I don't remember how to connect to my instance anymore. I tried looking at the AWS documentation, but there have been so many changes, and they have added so many new components, that I don't know where to start, and I don't remember exactly how it worked either.
Unfortunately, all the private keys, passwords, and that kind of stuff are on an old computer that I no longer have. The only things I have are the username and password for my AWS console, and a Mac.
I would appreciate it if you could point me to where to start!
Thank you!

You have a few options. Personally, I would find the EC2 instance your site is hosted on and create an image of it. This can be done by selecting your instance in the EC2 console and choosing to create an image, which produces an Amazon Machine Image (AMI). You can then launch a new EC2 instance from this AMI and specify a new PEM key. Once the instance is launched, you can connect to it and edit the .html file.
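If you prefer the command line to the console, a rough sketch of that workflow with the AWS CLI (all IDs and names below are placeholders):
# Create an AMI from the existing instance (note: it may reboot unless you pass --no-reboot)
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "django-site-backup"
# Create a fresh key pair and save the private key locally
aws ec2 create-key-pair --key-name new-django-key --query 'KeyMaterial' --output text > new-django-key.pem
chmod 400 new-django-key.pem
# Once the AMI is available, launch a new instance from it using the new key pair
# (use the ImageId returned by create-image)
aws ec2 run-instances --image-id ami-0abcdef1234567890 --instance-type t2.micro --key-name new-django-key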

You can create an AMI (Amazon Machine Image) of any running EC2 server (which is where your application likely lives).
After creating the AMI, you can spin up a new EC2 instance from it. The new instance will be a clone of the old one, but you can specify (and download) a new key pair.
From there you can log in via SSH, and I'd recommend putting things in source control afterwards.
A lot of the rest depends on your DNS settings: if you're using an Elastic IP, you can simply reassociate it with the new machine. If you're using an Elastic Load Balancer, you can deregister the old instance and register the new one.
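If you do use an Elastic IP, moving it over is a single call once the new instance is up, and then you can SSH in with the new key (IDs, IP, and login user below are placeholders; the login user depends on the AMI, e.g. ec2-user or ubuntu):
aws ec2 associate-address --allocation-id eipalloc-0123456789abcdef0 --instance-id i-0fedcba9876543210
ssh -i my-new-key.pem ubuntu@203.0.113.10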

Related

How to add some new code to an existing EC2 instance

Bear with me, what I am requesting may be impossible. I am an AWS noob.
So I am going to describe to you the situation I am in...
I am doing a freelance gig and was essentially handed the keys to AWS. That is, I was handed the root user login credentials for the AWS account that powers this website.
Now there are 3 EC2 instances. One of the instances is a Linux box that, from what I am being told, is running a Django Python backend.
My new "service" if you will must exist within this instance.
How do I introduce new source code into this instance? Is there a way to pull down the existing source code that lives within it?
I am not being helped by any existing/previous developers, so I have just been handed the AWS credentials and have no idea where to start.
Is this even possible? That is, is it possible to pull the source code from an EC2 instance and/or modify the code? How do I do this?
EC2 instances are just virtual machines, so you can use SSH to log in and SCP/SFTP to move files to and from them. You can also use the AWS CLI tools to copy stuff from S3. Dealer's choice...
Now, to get into this instance: if you look in the web console you can find its IP address(es), its security groups (firewall rules), and the key pair name. Hopefully they gave you the keys; you need them to SSH in.
You'll also want to check to make sure there's a security group applied that has SSH open. Hopefully only to your IP :)
If you don't have the keys you'll have to create an AMI image of the instance so you can create a new one with a key pair you do have.
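Assuming you do have the key file, a minimal sketch of getting in and pulling the existing source down (IP, user, key path, and remote path are all placeholders; the login user depends on the AMI, e.g. ec2-user or ubuntu):
ssh -i ~/keys/project.pem ubuntu@203.0.113.10
# Or copy the whole application directory down to your machine for inspection / source control
scp -i ~/keys/project.pem -r ubuntu@203.0.113.10:/home/ubuntu/app ./app-backup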
Amazon has a set of tools for you in the AWS CodeSuite family.
The tool used for deploying code is AWS CodeDeploy. With this service you install an agent onto your host; then, when triggered, it pulls down an artifact of your code base and installs it on the matching hosts. You can even specify additional commands through its hook system.
You probably also want to trigger this to happen, maybe even automatically? CodeDeploy can be orchestrated using the CodePipeline tool.
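As a rough illustration only (this assumes an application and deployment group already exist in CodeDeploy; the names and the S3 artifact location are placeholders):
aws deploy create-deployment \
  --application-name my-app \
  --deployment-group-name my-deployment-group \
  --s3-location bucket=my-artifact-bucket,key=releases/app-v2.zip,bundleType=zip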

Deploying to EC2 instances behind a load balancer; PHPStorm + GitHub

I know this has been partially answered in a bunch of places, but the answers are so... all over the map, dated, and not well explained. I'm looking for the best practice as of February 2016.
The setup:
A PHP-based RESTful application service that lives in an EC2 instance. The EC2 instance uses S3 for uploaded user data (image files), and RDS MySQL for its DB (these two points aren't particularly important).
We develop in PHPStorm, and our source control is GitHub. When we deploy, we just use PHPStorm's built-in SFTP deployment to upload files directly to the EC2 instance (we have one instance for our Staging environment, and another for our Production environment). I deploy to Staging very often. Could be 20 times a day. I just click on a file in PHPStorm and say 'deploy to Staging', which does the SFTP transfer. Or, I might just click on the entire project and click 'deploy to Staging' - certain folders and files are excluded from the upload, which is part of PHPStorm's deployment configuration.
Recently, I put our EC2 instance behind a Load Balancer. I did this so that I can take advantage of Amazon's free SSL offering via the Certificate Manager, which does not support individual EC2 instances.
So, right now, there's a Load Balancer with only a single EC2 instance behind it. I maintain an Elastic IP pointing to the EC2 instance so that I can access it directly (see my current deployment method above).
Question:
I have not yet had the guts to create additional (clone) EC2 instances behind my Load Balancer, because I'm not sure how I should be deploying to them. A few ideas came to mind, but they're all pretty hacky.
Given the scenario above, what is currently the smoothest and best way to A) quickly deploy a codebase to a set of EC2 instances behind a Load Balancer, and B) actually 'clone' my current EC2 instance to create additional instances?
I haven't been able to really paint a clear picture of the above in my head yet, despite the fact that I've gone over a few (highly technical) suggestions.
Thanks!
You need to treat your EC2 instance as 100% dispensable, meaning that it can be terminated at any time and you should not care. A replacement EC2 instance would start and take over the work.
There are 3 ways to accomplish this:
Method 1: Each deployment creates a new AMI image.
When you deploy your app, you deploy it to a worker EC2 instance whose sole purpose is the "setup" of your app. Once the new version is deployed, you create a fresh AMI image from that EC2 instance and update your Auto Scaling launch configuration with the new AMI image. The old EC2 instances are terminated and replaced by instances running the new code.
New EC2 instances have the recent code already on them so they're ready to be added to the load balancer.
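A hedged sketch of the launch-configuration update with the CLI (names, AMI ID, and instance type are placeholders; launch templates are the newer alternative to launch configurations):
aws autoscaling create-launch-configuration \
  --launch-configuration-name web-v42 \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --key-name my-key
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --launch-configuration-name web-v42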
Method 2: Each deployment is done to off-instance storage (like Amazon S3).
The EC2 instances will download the recent code from Amazon S3 and install it on boot.
So to put the new code into action, you terminate the old instances, and new ones are launched to replace them, which come up running the new code.
This could be done in a rolling-update fashion, or as a blue/green deployment.
Method 3: Similar to method 2, but this time the instances have some smarts and can be signaled to download and install the code.
This way, you don't need to terminate instances: the existing instances are told to update from S3 and they do so on their own.
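For methods 2 and 3, the boot-time pull can be as simple as a user data script along these lines (the bucket and web root are placeholders, and the instance needs an IAM role that can read the bucket):
#!/bin/bash
# Assumes the AMI already has the web server installed and an instance
# profile allowing s3:GetObject on the artifact bucket (placeholder names).
aws s3 sync s3://my-deploy-bucket/current/ /var/www/html/
# For method 3, re-running this same sync when signaled updates the code in place.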
Some tools that may help include:
Chef
Ansible
CloudFormation
Update:
Methods 2 & 3 both start with a "basic" AMI which is configured to pull the webpage assets from S3. This AMI is not changed from version-to-version of your website.
For example, the AMI can have Apache and PHP already installed and on boot it pulls the .php website assets from S3 and puts them in /var/www/html.
CloudFormation works well for this. In addition, for method 3, you can use cfn-hup to wait for update signals. When signaled, it'll pull updated assets from S3.
Another possibility is Elastic Beanstalk, which can manage all of this for you.
Update:
To have your AMI image pull from Git, try the following:
Set up an EC2 instance with everything installed that you need for your web app.
Install Git and set up a local repo that is ready for a git pull.
Shut down the instance and create an AMI from it.
When you deploy, you do the following:
Git push to GitHub
Launch a new EC2 instance, based on your AMI image.
As part of the User Data (specified during the EC2 instance launch), specify something like the following:
#!/bin/sh
# Runs on first boot: refresh the repo that was baked into the AMI
cd /git/app
git pull
# Copy files from the repo to the web folder (example path)
cp -R /git/app/. /var/www/html/
cd /var/www/html
composer install
When done like this, that user data acts as a script which will run on first boot.

SSH from AWS Elastic Beanstalk

I have an application that runs on AWS Elastic Beanstalk, and one requirement is to connect to another server using SSH. I could log in as root on the server and generate a key pair that I can use, but this would not scale (we have auto-scaling enabled).
Is there a way to generate and replicate a key pair across the instances that are running?
Edit - I feel the need to provide a better description of my problem.
When I launched the Beanstalk instance I selected the previously created key pair, but looking at the EC2 documentation here, it states the following:
Amazon EC2 stores the public key only, and you store the private key.
This seems to work OK, as I am able to SSH into the EC2 instance. We have another service running on a DigitalOcean-hosted machine, and we need to SSH from the EC2 instance to the DigitalOcean instance.
Important: the DigitalOcean instance only allows key-based authentication (user/password authentication is not allowed).
When I log into the EC2 machine I can see that in the .ssh folder I only have the authorized_keys file, which would make sense given the documentation paragraph above.
Is there a way to get a public key that I could use to log into the DigitalOcean instance from the EC2 instance?
If I understand you correctly, you need the Beanstalk application to SSH in to another server?
Every EC2 instance gets launched with a designated keypair. You have the option of either creating a new keypair or using a keypair already set (i.e. the keypair created by the Beanstalk application for the first instance).
If you keep the private key on the Beanstalk instance and launch the other instance(s) using that same key pair, the application can use the private key to SSH in, and you can scale the instances without having to go into each one and create new key pairs.
That said, I believe the documentation advises against keeping the private key on the instance, so perhaps consider launching the non-Beanstalk instances with a configuration script that creates a customized user, perhaps using a key and password, and pre-configuring the application with that information. You can keep that information as environment variables within Beanstalk itself, similar to how you would keep RDS credentials.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives. You can also pass this data into the launch wizard as plain text, as a file (this is useful for launching instances via the command line tools), or as base64-encoded text (for API calls).
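As a rough illustration of the configuration-script idea above, user data for the non-Beanstalk instance could create a dedicated user and authorize a key for it (the user name and key below are placeholders, not anything from your actual setup):
#!/bin/bash
# Create a dedicated login user and authorize a public key for it
useradd -m deployuser
mkdir -p /home/deployuser/.ssh
echo "ssh-rsa AAAA...replace-with-your-public-key... deploy@example" >> /home/deployuser/.ssh/authorized_keys
chmod 700 /home/deployuser/.ssh
chmod 600 /home/deployuser/.ssh/authorized_keys
chown -R deployuser:deployuser /home/deployuser/.ssh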
EDIT 1
In order to SSH from computer A to computer B, computer A needs to have the private key in the .ssh directory and computer B needs to have the public key appended to the authorized_keys file in the .ssh directory, so that's perhaps why you don't see either key in the Elastic Beanstalk EC2 instance.
Since you have the public key within the authorized_keys file, you can simply replicate it to the DigitalOcean instance (once it's on the server, do cat public_key >> authorized_keys), and since you're able to SSH in to the Elastic Beanstalk instance, you can simply take the private key from your computer and put it in the .ssh directory of the Beanstalk instance. That way, the DigitalOcean instance will have the public key appended to its authorized_keys file, and the Beanstalk instance will have the private key it needs to log in.
That said, this is probably the most insecure way of doing this...I would prefer you generate a new key and use that to be able to SSH from the Beanstalk instance to the DigitalOcean instance.
Note, this is not the same as creating a new IAM user; EC2 key pairs are managed separately from IAM credentials.
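A minimal sketch of that "generate a new key" approach, run on the EC2/Beanstalk instance (file name, user, and host are placeholders):
# Generate a dedicated key pair on the EC2 instance (no passphrase here, for automation)
ssh-keygen -t rsa -b 4096 -f ~/.ssh/do_deploy_key -N ""
cat ~/.ssh/do_deploy_key.pub
# Append that public key to ~/.ssh/authorized_keys on the DigitalOcean machine
# (from whatever machine already has access to it), then from the EC2 instance:
ssh -i ~/.ssh/do_deploy_key deploy@digitalocean-host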
EDIT 2
I guess it will be difficult for an EC2 instance to automatically obtain the private key upon being automatically launched, so the way I see it, you have three options:
1) EC2 instances can be (auto)launched with a custom "user data" script, which I referenced above. In that script, you can include the actual private key data (pretty bad idea IMO) OR have it obtain the private key from somewhere (e.g. SCP with username/password into some machine and download it). Again, all pretty bad ideas.
2) Embed the private key within your Beanstalk application. Not knowing what language your application is written in, it's difficult to determine how bad / good of an idea this is. If it's in Java, private / sensitive keys get embedded all the time, so I don't see why this would be any different. This seems to me to be a fine idea, iff this is an application developed specifically for this use case and will never be used anywhere else. I'd hate to see a developer accidentally deploy this app somewhere else and now that private key is potentially compromised.
3) You could create an AMI of the EC2 instance with the key embedded in it. Then simply instruct the autoscaling to launch a new instance of that AMI and voila, you will have the key in the .ssh directory. I tend to like this idea the best as it uses AWS resources for what they're intended, and I would think makes the key a bit more 'secure' (outside of compromising the EC2 instance itself, it will be much more difficult for anyone to access the key). This wouldn't add any additional scalability over option #2 as you can scale / deploy a Beanstalk application just as much as you can an AMI image. That preference is up to you.
NOTE, this of course says nothing about scaling the DO machine, assuming that's even a requirement.

Scale Amazon EC2 instance

I have an Amazon EC2 instance with one data volume mapped to it.
This instance runs my HTTP server and has my server code.
Now I have to scale my app by creating a new instance and load balancing.
But if I create a new instance by cloning the existing instance, how can the code and HTTP vhost file be kept in sync?
I cloned the instance the first time using a snapshot.
But I want it so that when I upload my code to one instance, it syncs with the other instance as well.
How can I achieve this? Do I need to configure rsync from one instance to the other?
"Baking" custom AMIs is a very simple way to do this. Start a new instance from your AMI (start with a snapshot of your current instance), make changes on it like update application/configurations/system, test, create new AMI from it, start new instances from that new AMI, test them and then swap old instance in the ELB with the new ones.
There are also many tools you can use to automate your application deployment, like Puppet, Chef, or one of Amazon's offerings: CodeDeploy, OpsWorks, Elastic Beanstalk. I recommend you eventually use one such tool.
From your description you cloned your first web server (www1) to make a second web server (www2).
Now when you make code edits you want the code to be in sync between the two webservers.
Rsync can help to make that easy.
From the 2nd web server (www2), run:
rsync -chavzP --stats username@IPorNAMEofwww1:/path/to/copy/on/www1 /path/to/putfiles/on/www2
Once you get that working from the command line, add it to a cron job so it syncs on a schedule (e.g. hourly). It will only sync the changes, not every file.
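For example, a crontab entry on www2 along these lines would run the sync at the top of every hour (same placeholder user, host, and paths as above; the -P progress flag is dropped since nobody is watching the output):
# crontab -e on www2: pull changes from www1 every hour and log the result
0 * * * * rsync -chavz --stats username@IPorNAMEofwww1:/path/to/copy/on/www1 /path/to/putfiles/on/www2 >> "$HOME/www-sync.log" 2>&1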

Amazon AWS - How to copy and replace instance?

Pretty new to Amazon AWS. I've inherited the setup from another dev, and I've never used AWS before. I need some help with copying and replacing instances. Pretty much there's a dev and a production instance. I need to back up the dev instance and then replace the dev instance with the current production instance.
What I've done so far:
Created an image of the test instance, so it's currently under AMIs.
What I need:
Instructions on how to replace the current dev instance with the production one.
Additional Info:
Both instances have different Public DNS and Public IP.
Anyone got any quick instructions on how to accomplish this?
Thanks!
The 'quick' instructions are to simply create an AMI of the production instance (shut it down first), then use that saved AMI as the basis for creating a new instance. You'll end up with 3 instances, and you can delete your old dev instance once you confirm the new copy of production came up as you wanted. You don't actually 'replace' the dev instance (you create the new one and delete the old).
Lots of links to be found for this, but here is one:
http://docs.aws.amazon.com/AWSToolkitVS/latest/UserGuide/tkv-create-ami-from-instance.html
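If you'd rather script it than click through the console, a hedged sketch of the same flow (instance and image IDs are placeholders; use the ImageId returned by create-image when launching):
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "production-copy"
aws ec2 run-instances --image-id ami-0abcdef1234567890 --instance-type t2.micro --key-name my-key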