Hi, I've developed a bot that automates the shopping process on a specific website. When I test it on my Mac it works perfectly and can place an order quite fast. I have tried to run the script on an AWS EC2 instance, using the free-tier t2.micro with an Ubuntu image.
The script runs fine and all the packages work, but I've noticed that the time it takes to open Chrome in headless mode and finish the process is 5-6 times longer than when I run it on my local MacBook. I've tried all the suggested ChromeDriver options to do with the proxy server, but my EC2 instance still isn't fast enough.
Is it the small free-tier t2.micro that's slowing me down, or should I be using a different image than Ubuntu if I want to speed up my Selenium script?
You're using an incredibly small machine (a t2.micro has one burstable vCPU and 1 GB of RAM), which is going to be much slower than the powerful machine you're running locally. The OS image isn't the problem; the instance size is.
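That said, these are the Chrome options most commonly recommended for memory-constrained Linux hosts (a hedged sketch written against Selenium 4's Python bindings; the URL is a placeholder, not your site):

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

opts = Options()
opts.add_argument("--headless=new")           # plain "--headless" on older Chrome builds
opts.add_argument("--no-sandbox")             # often needed when running as root on EC2
opts.add_argument("--disable-dev-shm-usage")  # /dev/shm is tiny on micro instances; spill to /tmp instead
opts.add_argument("--disable-gpu")
opts.add_argument("--window-size=1920,1080")

driver = webdriver.Chrome(options=opts)
driver.get("https://example.com")  # placeholder for the shop URL
print(driver.title)
driver.quit()

Even with all of these, one burstable vCPU and 1 GB of RAM is not much for Chrome, so a larger instance type will usually buy you more than any driver flag.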
I am running into some strange problems with my AWS Ubuntu 16.04 instance running Hadoop 2.9.2.
I have just successfully installed and configured Hadoop to run in pseudo-distributed mode. Everything seems to be fine: when I start HDFS and YARN I don't get any errors. But as soon as I try to do even something as simple as list the contents of the root HDFS directory, or create a new directory, the whole instance becomes extremely slow. I wait for about 10 minutes and it never produces a directory listing, so I hit Ctrl+C and it takes another 5 minutes to kill the process. Then I try to stop both HDFS and YARN; that succeeds but also takes a long time. Even after HDFS and YARN have been stopped, the instance is still barely responsive. At that point all I can do to make it function normally again is go to the AWS console and restart it.
Does anyone have any idea what I might've screwed up? (I am pretty sure it's something I did. It usually is :-))
Thank you.
Well, I think I figured out what was wrong, and the answer is trivial. Basically, my EC2 instance doesn't have enough RAM. It's a basic free-tier-eligible instance, and by default it comes with only 1 GB of RAM. Hilarious. Totally useless.
But I learned something useful anyway. One other thing I had to do to make my Hadoop installation work (I was getting a "connection refused" error, but I did make it work) was to change the line in core-site.xml that says
<value>hdfs://localhost:9000</value>
to
<value>hdfs://ec2-XXX-XXX-XXX-XXX.compute-1.amazonaws.com:9000</value>
(replace the XXXs in the above with your instance's public IP address, keeping the dashes; the whole value is the instance's public DNS name)
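For context, that value lives in the fs.defaultFS property (the Hadoop 2.x name; older docs call it fs.default.name), so the relevant block of core-site.xml looks something like this:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ec2-XXX-XXX-XXX-XXX.compute-1.amazonaws.com:9000</value>
  </property>
</configuration>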
So I have been working on a Django application, and everything seems to work fine in my development environment but not on a cloud instance.
The application's server must do two things in parallel. I initially used a subprocess to execute some task while the main process keeps collecting data.
Everything works as expected offline but when I deploy the application onto AWS it just does one thing at a time.
I made sure I deployed it on an instance with multiple cores. I even went back to change the subprocess task and spawned a new thread instead but that did not seem to work either.
Also, I'm not sure if this could be the issue, but running the subtask involves running a shell command that starts up a Docker container.
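Roughly, the pattern looks like this (a simplified sketch; the image name and the data-collection stub are placeholders):

import subprocess
import threading
import time

def run_subtask():
    # The shell command that starts up the Docker container (placeholder image name).
    subprocess.run(["docker", "run", "--rm", "my-task-image"], check=False)

def collect_data():
    time.sleep(1)  # placeholder for the real data-collection work

# Originally a subprocess.Popen, later a spawned thread; either way the
# main process should keep collecting data while the subtask runs.
threading.Thread(target=run_subtask, daemon=True).start()

while True:
    collect_data()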
Has anyone run into a similar issue, or got any suggestions about what could be causing this behavior?
Thanks.
I'm unable to run a Meteor leaderboard demo after a "failed keepalive" error on an AWS EC2 t1.micro instance. If I start from a freshly booted Amazon Machine Image (AMI) I'm able to run the leaderboard demo at localhost:3000 from Firefox when I'm connected with a VNC client (TightVNC Viewer). It runs very, very slowly, but it runs.
If I fail to interact with it soon enough, however, I get these messages:
I2051-00:03:03.173(0)?Failed to receive keepalive! Exiting.
=> Exited with code:1
=> Meteor server restarted
From that point forward everything on that instance runs at a glacial pace. Switching back to the Firefox window takes 3 minutes. When I try to connect to localhost:3000 in Firefox I usually get a message about a script no longer running, and eventually the terminal window adds this to what I wrote above:
I2051-00:06:02.443(0)?Failed to receive keepalive! Exiting.
=> Exited with code:1
=> Meteor server restarted
I2051-00:08:17.227(0)?Failed to receive keepalive! Exiting.
=> Exited with code:1
=> Your application is crashing. Waiting for file change.
Can anyone translate for me what is happening?
I'm wondering whether the t1.micro instance I'm running is just too underpowered, or whether it's failing to shut down Meteor properly, thereby leaving an instance of MongoDB running while it tries to launch another.
I'm using Amazon Machine Image ubuntu-precise-12.04-amd64-server-20130411.1 (ami-70f96e40), which says this about its configuration:
Size: t1.micro
ECUs: up to 2
vCPUs: 1
Memory (GiB): 0.613
Instance Storage (GiB): EBS only
EBS-Optimized Available: -
Network Performance: Very Low
Micro instances
Micro instances are a low-cost instance option, providing a small amount of CPU resources. They are suited for lower throughput applications, and websites that require additional compute cycles periodically, but are not appropriate for applications that require sustained CPU performance. Popular uses for micro instances include low traffic websites or blogs, small administrative applications, bastion hosts, and free trials to explore EC2 functionality.
If my guess is right, can anyone suggest an AMI suitable for Meteor development?
Thanks
Try removing autopublish: meteor remove autopublish
How are you running the app on EC2? I have been able to run apps on a micro instance, so I don't see why this should be an issue.
If you are running it with 'meteor' as you would locally, that's probably the issue. You get way better performance running it as a Node app; that typically isn't a problem when developing locally, but it may be too much for an EC2 micro.
What you want to do is run 'meteor bundle example.tgz', upload that to the server, and run it as a Node app.
Here is a guide that I remember using a while ago to get it done on ec2:
http://julien-c.fr/2012/10/meteor-amazon-ec2/
You shouldn't need to use VNC either; you can access it from your own computer in a browser using the public address your instance gets assigned.
If you get a Node fibers error message (which is pretty common), cd into bundle/program/server, run 'npm uninstall fibers', and then 'npm install fibers'.
I'm working on Chef recipes, and often need to test the full run-through with a clean box by destroying a VM and bringing it back up. However, this means I get this message in Vagrant/VirtualBox:
Waiting for VM to boot. This can take a few minutes.
very often. What are some steps I can take to make the boot faster?
I am aware this is an "opinion" question and would welcome some suggestions to make this more acceptable, besides breaking it into a bunch of small questions like "Will switching to an SSD make my VirtualBox boot faster? Will reducing the number of forwarded ports make my VirtualBox boot faster", etc.
I would go for using LXC containers instead of VirtualBox. That gives you a much faster feedback cycle.
Here is a nice introduction to the vagrant-lxc provider.
You could set up a VirtualBox VM for Vagrant / Chef development with LXC containers (e.g. like this dev-box). Then take this sample-cookbook and run either the ChefSpec unit tests via rake test or the kitchen-ci integration tests via rake integration. You will see that it's much faster with LXC than it is with VirtualBox (or any other full virtualization hypervisor).
Apart from that:
yes, SSDs help a lot :-)
use vagrant-cachier which speeds up loads of other things via caching
use a recent Vagrant version which uses Ruby 2.0+ (much faster than 1.9.3)
don't always run a full integration test; some things can be caught via unit tests / ChefSpec already
use SSH connection sharing and persistent connections
etc...
As another alternative you could also use chef-runner, which explicitly tries to solve the fast-feedback problem.
I'm still cheap.
I have a software development environment which is a bog-standard Ubuntu 11.04 plus a pile of updates from Canonical. I would like to set it up such that I can use an Amazon EC2 instance for the 2 hours per week when I need to do full system testing on a server "in the wild".
Is there a way to set up an Amazon EC2 server image (Ubuntu 11.04) so that whenever I fire it up, it starts, automatically downloads code updates (or conversely accepts git push updates), and then has me ready to fire up an instance of the application server? Is it also possible to tie that server to a URL (e.g. ec2.1.mydomain.com) so that I can hit my web app with a browser?
Furthermore, is there a way that I can run a command line utility to fire up my instance when I'm ready to test, and then to shut it down when I'm done? Using this model, I would be able to allocate one or more development servers to each developer and only pay for them when they are being used.
Yes, yes and more yes. Here are some good things to google/hunt down on SO and SF:
--EC2 command-line tools (see the start/stop sketch below),
--making your own AMIs from running instances (to save tedious and time-consuming startup gumf),
--Route 53 APIs for doing DNS magic,
--Ubuntu cloud-init for startup scripts,
--32-bit micro instances are your friend for dev work as they fall in the free usage bracket
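To make the start/stop part concrete, here's a minimal sketch of such a command-line utility using boto3, the AWS SDK for Python (the instance ID and region are placeholders, and it assumes your AWS credentials are already configured):

import sys
import boto3

INSTANCE_ID = "i-0123456789abcdef0"  # placeholder: your dev server's instance ID
ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

def start_dev_server():
    ec2.start_instances(InstanceIds=[INSTANCE_ID])
    # Block until the instance is actually running before you start testing.
    ec2.get_waiter("instance_running").wait(InstanceIds=[INSTANCE_ID])

def stop_dev_server():
    ec2.stop_instances(InstanceIds=[INSTANCE_ID])

if __name__ == "__main__":
    {"start": start_dev_server, "stop": stop_dev_server}[sys.argv[1]]()

Run it as 'python devserver.py start' before a test session and 'python devserver.py stop' when you're done, so you only pay for the hours you actually use.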
All of what James said is good. If you're looking for something requiring less technical know-how and research, I'd also consider:
juju (sudo apt-get install -y juju). This lets you start up a series of instances. Basic tutorial is here: https://juju.ubuntu.com/docs/user-tutorial.html