Using Amazon AWS as a development server - amazon-web-services

I'm still cheap.
I have a software development environment which is a bog-standard Ubuntu 11.04 plus a pile of updates from Canonical. I would like to set it up such that I can use an Amazon EC2 instance for the 2 hours per week when I need to do full system testing on a server "in the wild".
Is there a way to set up an Amazon EC2 server image (Ubuntu 11.04) so that whenever I fire it up, it starts, automatically downloads code updates (or conversely accepts git push updates), and then has me ready to fire up an instance of the application server? Is it also possible to tie that server to a URL (e.g. ec2.1.mydomain.com) so that I can hit my web app with a browser?
Furthermore, is there a way that I can run a command line utility to fire up my instance when I'm ready to test, and then to shut it down when I'm done? Using this model, I would be able to allocate one or more development servers to each developer and only pay for them when they are being used.

Yes, yes, and more yes. Here are some good things to google/hunt down on SO and SF (a rough command-line sketch follows the list):
--EC2 command line tools
--making your own AMIs from running instances (to save tedious and time-consuming startup gumf)
--Route 53 APIs for doing DNS magic
--Ubuntu cloud-init for startup scripts
--32-bit micro instances are your friend for dev work, as they fall in the free usage bracket
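As a rough sketch of the start/stop and DNS part, here is what it looks like with today's unified aws CLI (at the time of this question you would use the older ec2-api-tools and the Route 53 API directly, but the operations are the same). The instance ID, hosted zone ID and record name below are placeholders:

#!/bin/bash
# Start a stopped dev instance, point a DNS name at it, and stop it again later.
INSTANCE_ID="i-0123456789abcdef0"   # placeholder
ZONE_ID="Z0000000000000"            # placeholder Route 53 hosted zone
NAME="ec2.1.mydomain.com"

aws ec2 start-instances --instance-ids "$INSTANCE_ID"
aws ec2 wait instance-running --instance-ids "$INSTANCE_ID"

# Grab the freshly assigned public IP and upsert the A record
IP=$(aws ec2 describe-instances --instance-ids "$INSTANCE_ID" \
      --query 'Reservations[0].Instances[0].PublicIpAddress' --output text)
aws route53 change-resource-record-sets --hosted-zone-id "$ZONE_ID" \
  --change-batch "{\"Changes\":[{\"Action\":\"UPSERT\",\"ResourceRecordSet\":{\"Name\":\"$NAME\",\"Type\":\"A\",\"TTL\":60,\"ResourceRecords\":[{\"Value\":\"$IP\"}]}}]}"

# ...run your tests, then shut it down so you stop paying for it...
aws ec2 stop-instances --instance-ids "$INSTANCE_ID"

Pulling code on boot fits the cloud-init item above: a user-data/startup script that does a git pull (or accepts a git push) before starting your application server.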

All of what James said is good. If you're looking for something requiring less technical know-how and research, I'd also consider:
juju (sudo apt-get install -y juju) lets you start up a series of instances. The basic tutorial is here: https://juju.ubuntu.com/docs/user-tutorial.html
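For a flavour of it, the flow from that tutorial boils down to a handful of commands (the services deployed here are just examples):

juju bootstrap                      # spin up the environment's bootstrap node on EC2
juju deploy mysql
juju deploy wordpress
juju add-relation wordpress mysql   # wire the two services together
juju expose wordpress               # open it to the outside world
juju status                         # shows public addresses once everything is up
juju destroy-environment            # tear it all down when you're done paying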

Related

How to get notified when droplet reboots and when droplet finishes boot?

I found this answer https://stackoverflow.com/a/35456310/80353 and it recommends either the API or user_data, which is actually cloud-init underneath.
I can think of several ways to possibly get notified that a server is up:
detect droplet status via API
I notice that the status never changes during reboot so I guess this is out.
using DigitalOcean native monitoring agent
The monitoring agent seems to cover only resource utilisation; there is no alert when the server is rebooted or finishes booting up.
using cloud-init
The answer https://stackoverflow.com/a/35456310/80353 I mentioned earlier uses wget to send signals out. I could possibly run wget every time the droplet finishes booting by using bootcmd in cloud-init, but not for reboots.
There's also the issue of how to ensure that the wget request from the right DigitalOcean droplet correctly identifies itself to my server.
Any advice on how to accomplish getting notifications at my server whenever a droplet reboots or finishes booting up?
cloud-init
bootcmd actually runs on every boot. Check out the module frequency key in the docs.
Another module you might consider for this is phone_home.
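On the identification problem: a script dropped into cloud-init's per-boot directory also runs on every boot, and it can ask the droplet's metadata service who it is before phoning out. A minimal sketch, assuming a hypothetical notification endpoint on your server:

#!/bin/sh
# /var/lib/cloud/scripts/per-boot/notify.sh  (make it executable)
# Run by cloud-init's scripts-per-boot module on every boot.
# The metadata URL is DigitalOcean's; the hooks.example.com endpoint is made up.
DROPLET_ID=$(curl -s http://169.254.169.254/metadata/v1/id)
curl -s -X POST "https://hooks.example.com/booted" -d "droplet_id=${DROPLET_ID}"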
Systemd
Since the OP is looking for notifications on shutdown/reboot as well, cloud-init is probably not the best single solution, since it primarily handles boot/init. Assuming systemd:
This post discusses setting up a service to run on shutdown.
This post discusses setting up a service to run on startup.
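As a rough sketch of that pattern, one oneshot unit with RemainAfterExit can cover both directions: ExecStart fires once the network is up at boot, and ExecStop fires during shutdown while the network is still up (the notification URL is hypothetical):

sudo tee /etc/systemd/system/boot-notify.service >/dev/null <<'EOF'
[Unit]
Description=Notify my server on boot and shutdown
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
# %H expands to this machine's hostname
ExecStart=/usr/bin/curl -s -d "event=boot&host=%H" https://hooks.example.com/notify
ExecStop=/usr/bin/curl -s -d "event=shutdown&host=%H" https://hooks.example.com/notify

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now boot-notify.service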

Speed up Chromedriver/Selenium in AWS EC2 instance

Hi, I've developed a bot that automates the shopping process on a specific website. When testing it on my Mac it works perfectly and can place an order quite fast. I have tried to run the script on an AWS EC2 instance using the free-tier t2.micro with an Ubuntu instance.
The script runs fine and all the packages work, but I've noticed that the time it takes to open Chrome in headless mode and finish the process is 5-6 times longer than when I run it on my local MacBook. I've tried all the suggested chromedriver options to do with the proxy server, but my EC2 instance still isn't fast enough.
Is it the small t2.micro free tier that's slowing me down, or should I be using a different instance other than Ubuntu if I want to speed up my Selenium script?
You're using an incredibly small machine, which is going to be much slower than the powerful machine you're running locally.
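To put a number on it: a t2.micro is a burstable instance with a small CPU baseline, and once its CPU credits are spent it gets throttled hard. If you want to confirm that's what you're hitting, you can check the credit balance in CloudWatch; a sketch with a placeholder instance ID and region (GNU date syntax):

aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name CPUCreditBalance \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --period 300 --statistics Average \
  --region us-east-1

If the balance sits near zero while your script runs, a larger or non-burstable instance type will help far more than switching the operating system.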

Work(flow) Setup: Remote Debian VM (in office), ssh, web development

Normally I've developed locally (on my own machine) and pushed to wherever things needed to go via mapped drives, ftp, github, etc. I have done a bit of work with vagrant/virtualbox (but again, locally) with a shared/mirrored folder.
I am now in a situation where everyone here has access to their own dev box (a VM on the network). I see some people working in Vim directly via SSH, I believe, but I'm not there yet. So I'm left with the question: what's the best way for someone (more of a front-end guy) to approach this?
I have heard of doing an SSH mount from my workstation... if that's a viable thing. I'm curious what everyone's take on this kind of environment is and (perhaps) any best practices. Tips, links, and reading are highly welcome and appreciated, too... any pointing in a good direction would be wonderful.
Thank you.
The best answer depends on which resources of the networked VMs you want to capitalize on. If you just want the storage space, then share the VM's drives, mount them locally, treat them as local, end of story. If you want to run all the processing on the remote machine and connect from a thin client, you have a couple of options, but they all take the same form: connect to the machine and edit the files on the remote machine. Depending on your OS, you will have different options available.
If the remote machine doesn't have a graphical environment installed, you are stuck with either mounting the remote share locally (so you can use whatever editor you want) or SSHing to the remote machine and using a command-line editor (vim, nano, emacs).
If there is a graphical client installed you have more options:
Remote in to the server using any visual viewer (mstsc on Windows; VNC is an option), and then use any remotely installed editor of your choice.
Remote in using ssh -X, and then run the remotely installed editor. Assuming you have an X server locally (if you are running Linux you already do), the GUI part of the application runs on the client side of the SSH tunnel, while the process runs on the server. This is probably the best option.
So:
Make sure the remote server has desktop client software installed (GTK, KDE, GNOME, almost any Windows OS, etc.)
Install a GUI editor of your choice on that server (Sublime Text, Geany, ...)
ssh -X to that server
Run subl, geany, or whatever to start the application (a short sketch follows).
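A minimal sketch of those steps, assuming Geany as the editor and a placeholder hostname (-C just compresses the forwarded X traffic, which helps over slower links):

# On the remote Debian VM (once):
sudo apt-get install -y geany

# From your workstation (needs a local X server; Linux desktops already have one):
ssh -XC you@dev-vm.example.com geany ~/projects/site/index.html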
SSH mounting would indeed allow you to use all of the files on the VM as if they were stored in your local machine, letting you edit and update files without having to manually copy them every time you perform changes. You will run into a speed bump though, since files changed will have to be synchronized/copied to your remote machine every time and that takes a couple of seconds. Check this post by DigitalOcean, they explain how to get the SSH mount working.
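If you go the SSH-mount route, the setup itself is short. A sketch with placeholder host and paths (the DigitalOcean post covers the same steps in more detail):

# On your workstation:
sudo apt-get install -y sshfs
mkdir -p ~/devbox-src
sshfs you@dev-vm.example.com:/var/www/project ~/devbox-src -o reconnect,ServerAliveInterval=15
# ...edit everything under ~/devbox-src with any local editor...
fusermount -u ~/devbox-src    # unmount when you're done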
A better option, IMHO, is to use an IDE on your local machine that can push changes to a server after saving, or manually when you choose. This would allow you to develop faster using your local resources (local web server), since no files would have to be copied over the network to the remote VM, and it would still let you test on that remote VM when needed by uploading the files when you are ready to test in that environment.
PS: Exporting visual apps or environments from the remote machine to your local one can be slow (depending on your network and the load on the VM host running your machine). If you still like that approach, you could also set up access to that VM over something more standard and lightweight, like RDP for GNU/Linux (xrdp).

Can't rerun meteor leaderboard on AWS EC2 t1.micro instance after failing keepalive

I'm unable to run a Meteor leaderboard demo after a failed keepalive error on an AWS EC2 t1.micro instance. If I start from a freshly booted Amazon Machine Image (AMI) I'm able to run the leaderboard demo at localhost:3000 from Firefox when I'm connected with a VNC client (TightVNC Viewer). It runs very, very slowly, but it runs.
If I fail to interact with it soon enough, however, I get these messages:
I2051-00:03:03.173(0)?Failed to receive keepalive! Exiting.
=> Exited with code:1
=> Meteor server restarted
From that point forward everything on that instance runs at a glacial pace. Switching back to the Firefox window takes 3 minutes. When I try to connect to localhost:3000 in Firefox I usually get a message about a script no longer running, and eventually the terminal window adds this to what I wrote above:
I2051-00:06:02.443(0)?Failed to receive keepalive! Exiting.
=> Exited with code:1
=> Meteor server restarted
I2051-00:08:17.227(0)?Failed to receive keepalive! Exiting.
=> Exited with code:1
=> Your application is crashing. Waiting for file change.
Can anyone translate for me what is happening?
I'm wondering whether the t1.micro instance I'm running is just too underpowered, or whether Meteor isn't shutting down properly, thereby leaving an instance of MongoDB running while trying to launch another.
I'm using Amazon Machine Image ubuntu-precise-12.04-amd64-server-20130411.1 (ami-70f96e40), which says this about its configuration:
Size: t1.micro
ECUs: up to 2
vCPUs: 1
Memory (GiB): 0.613
Instance Storage (GiB): EBS only
EBS-Optimized Available: -
Network Performance: Very Low
Micro instances
Micro instances are a low-cost instance option, providing a small amount of CPU resources. They are suited for lower throughput applications, and websites that require additional compute cycles periodically, but are not appropriate for applications that require sustained CPU performance. Popular uses for micro instances include low traffic websites or blogs, small administrative applications, bastion hosts, and free trials to explore EC2 functionality.
If my guess is right, can anyone suggest an AMI suitable for Meteor development?
Thanks
Check this answer.
Try removing the autopublish package: meteor remove autopublish
How are you running the app on EC2? I have been able to run apps on a micro instance, so I don't see why this should be an issue.
If you are running it by using 'meteor' as you would locally, that's probably the issue. You get way better performance when running it as a Node app; this typically isn't an issue when developing locally, but may be too much for an EC2 micro.
What you want to do is run 'meteor bundle example.tgz', upload that to the server, and run it as a Node app.
Here is a guide that I remember using a while ago to get it done on EC2:
http://julien-c.fr/2012/10/meteor-amazon-ec2/
You shouldn't need to use VNC either; you can access it from your own computer in a browser using the public address your instance gets assigned.
If you get a Node fibers error message, which is pretty common, cd into bundle/program/server, run 'npm uninstall fibers' and then 'npm install fibers'.
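Putting that together, the whole flow looks roughly like this. The hostname and app name are placeholders, and the exact bundle paths can differ between Meteor versions, so treat it as a sketch of the guide above rather than a recipe:

# On your development machine:
meteor bundle example.tgz
scp example.tgz ubuntu@your-ec2-host:

# On the EC2 instance (needs node and a running MongoDB):
tar -xzf example.tgz
(cd bundle/program/server && npm uninstall fibers && npm install fibers)

# Tell the bundled app where Mongo lives and what URL it is served from, then run it:
export MONGO_URL='mongodb://localhost:27017/example'
export ROOT_URL='http://your-ec2-host'
export PORT=3000
node bundle/main.js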

Vagrant "Waiting for VM to boot. This can take a few minutes" is slow

I'm working on Chef recipes, and often need to test the full run-through with a clean box by destroying a VM and bringing it back up. However, this means I get this message in Vagrant/VirtualBox:
Waiting for VM to boot. This can take a few minutes.
very often. What are some steps I can take to make the boot faster?
I am aware this is an "opinion" question and would welcome some suggestions to make this more acceptable, besides breaking it into a bunch of small questions like "Will switching to an SSD make my VirtualBox boot faster? Will reducing the number of forwarded ports make my VirtualBox boot faster", etc.
I would go for using LXC containers instead of VirtualBox. That gives you a much faster feedback cycle.
Here is a nice introduction to the vagrant-lxc provider.
You could set up a VirtualBox VM for Vagrant / Chef development with LXC containers (e.g. like this dev-box). Then take this sample-cookbook and run either the ChefSpec unit tests via rake test or the kitchen-ci integration tests via rake integration. You will see that it's much faster with LXC than it is with VirtualBox (or any other full virtualization hypervisor).
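Trying it out is just a plugin install away (this assumes a Linux host, or a dev box like the one above, with LXC available):

vagrant plugin install vagrant-lxc
vagrant up --provider=lxc    # instead of the default VirtualBox provider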
Apart from that:
yes, SSDs help a lot :-)
use vagrant-cachier which speeds up loads of other things via caching
use a recent Vagrant version which uses Ruby 2.0+ (much faster than 1.9.3)
don't always run a full integration test, some things can be caught via unit tests / chefspec already
use SSH connection sharing and persistent connections (see the sketch after this list)
etc...
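A quick sketch of the caching and SSH tips (the Host pattern assumes Vagrant's usual forwarded SSH on 127.0.0.1; adjust to your setup):

# Cache apt/gem/chef downloads across destroy/up cycles
vagrant plugin install vagrant-cachier

# SSH connection sharing / persistent connections on the host machine
cat >> ~/.ssh/config <<'EOF'
Host 127.0.0.1
  ControlMaster auto
  ControlPath ~/.ssh/cm-%r@%h-%p
  ControlPersist 10m
EOF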
As another alternative, you could also use chef-runner, which explicitly tries to solve the fast feedback problem.