I recently spun up an EC2 instance from the AWS "Amazon Linux 2 AMI with .NET Core 2.2" image and don't see a web server running (maybe I'm just missing something here).
Does this mean that, despite the AMI having .NET Core, I still need to install Apache and do the whole Kestrel reverse-proxy setup in httpd.conf, as described here:
https://gooroo.io/GoorooThink/Article/17422/Deploy-ASPNET-Core-Application-On-EC2-Amazon-Linux-Instance/32558#.XZ1NGi-ZPUI
Just needed to do a gut check and make sure there wasn't an easier way before I make a mistake.
Thank you everyone!
Hi, I've developed a bot that automates the shopping process on a specific website. When testing it on my Mac it works perfectly and can place an order quite fast. I have tried to run the script on an AWS EC2 instance, using the free-tier t2.micro with an Ubuntu image.
The script runs fine and all the packages work, but I've noticed that the time it takes to open Chrome in headless mode and finish the process is 5-6 times longer than when I run it on my local MacBook. I've tried all the suggested chromedriver options related to the proxy server, but my EC2 instance still isn't fast enough.
Is it the small free-tier t2.micro that's slowing me down, or should I be using something other than Ubuntu if I want to speed up my Selenium script?
You're using an incredibly small machine (a t2.micro has a single vCPU and 1 GB of RAM), which is going to be much slower than the powerful machine you're running on locally.
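For what it's worth, here is a minimal Python Selenium sketch with the headless Chrome flags that are commonly recommended on small Linux instances; the specific flags and the placeholder URL are assumptions, not taken from the original script:

    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options

    # Flags commonly recommended for headless Chrome on a small Linux box; they
    # mostly prevent sandbox/shared-memory crashes rather than add raw speed.
    opts = Options()
    opts.add_argument("--headless=new")           # plain "--headless" on older Chrome
    opts.add_argument("--no-sandbox")
    opts.add_argument("--disable-dev-shm-usage")  # /dev/shm is small on many EC2 setups
    opts.add_argument("--disable-gpu")
    opts.add_argument("--window-size=1920,1080")

    driver = webdriver.Chrome(options=opts)
    try:
        driver.get("https://example.com")         # placeholder URL
        print(driver.title)
    finally:
        driver.quit()

Even with those flags, the bottleneck is usually the instance itself, so a larger instance type will likely buy far more speed than further chromedriver tuning.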
We are migrating to / experimenting with AWS. We have chosen Lightsail, as our needs are pretty simple and it seems like a great, simple, affordable option. That said, we have hit an early roadblock: I cannot figure out how to set up SFTP (or alternatively FTPS) to transfer files to the server.
FWIW, I am a total AWS newbie. I have searched fairly extensively, and there are troves of information on how to do this on Lightsail w/ Linux, but nothing on Windows.
On our existing infrastructure we simply set up a third-party SSH server (Bitvise, FYI) and opened port 22 for it (IP-restricted, etc.). We can then connect with our SFTP client of choice (whether that is FileZilla or our IDEs, etc.). However, the same approach did not work on our Lightsail instance, and I have no idea why.
Does anyone have any idea how to do this? Any assistance is hugely appreciated. Thanks!
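One thing worth checking, as a guess (the question doesn't say whether this was done): Lightsail instances have their own firewall in the Lightsail console, separate from the Windows firewall, so port 22 has to be opened there as well. A minimal boto3 sketch, using a hypothetical instance name and placeholder CIDR:

    import boto3

    # Lightsail has an instance-level firewall separate from the OS firewall;
    # open TCP 22 there so the Bitvise SSH/SFTP server is reachable.
    lightsail = boto3.client("lightsail", region_name="us-east-1")  # assumed region

    lightsail.open_instance_public_ports(
        instanceName="my-windows-instance",  # hypothetical instance name
        portInfo={
            "fromPort": 22,
            "toPort": 22,
            "protocol": "tcp",
            # Optionally restrict access to your office IP range:
            "cidrs": ["203.0.113.0/24"],     # placeholder CIDR
        },
    )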
I am currently running JMeter on 5 local VMs, one acting as the master and 4 as slaves. I want to move them to Amazon servers. Can anyone suggest a step-by-step configuration method? I searched the internet and couldn't find documentation with full clarity. Or can anyone share a link to good documentation on this?
JMeter version: 3.2
My requirements are:
1 master and 4 slaves.
The master should have a Linux GUI, because I need the JMeter GUI to run the test; we are analyzing the data in real time as it runs.
First of all, double-check that you searched well enough: there is the JMeter ec2 Script project, which automates the installation and configuration of JMeter remote engines.
In general, the process doesn't differ from configuring JMeter in distributed mode locally; Amazon EC2 instances are basically the same machines as local ones and require the same configuration steps. Just make sure to open the following ports:
1099
the port you define as server.rmi.localport
the ports you define as client.rmi.localport
This has to be done both in the Linux firewall and in the AWS security groups (see the sketch below).
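If you manage the security group with the Python SDK (boto3), opening those ports might look roughly like this; the security group ID, the CIDR, and the concrete 4000/4001 values for server.rmi.localport / client.rmi.localport are assumptions:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

    # 1099 is the default RMI registry port; 4000/4001 stand in for whatever you
    # configured as server.rmi.localport and client.rmi.localport.
    jmeter_ports = [1099, 4000, 4001]

    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",  # hypothetical security group ID
        IpPermissions=[
            {
                "IpProtocol": "tcp",
                "FromPort": port,
                "ToPort": port,
                # Restrict to the subnet the master and slaves live in.
                "IpRanges": [{"CidrIp": "10.0.0.0/16"}],  # placeholder CIDR
            }
            for port in jmeter_ports
        ],
    )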
Check out the following material:
Remote Testing
JMeter Distributed Testing Step-by-step
JMeter Distributed Testing with Docker
Load Testing with Jmeter and Amazon EC2
Can someone help me understand the basics of spawning EC2 instances and deploying AMIs, and how to configure them properly?
Current situation:
In my company we have 1 server and a few clients that run calculations and return the results when they are done. The system is written in Python, but sometimes we run out of machine power, so I am considering supplementing the clients with additional EC2 clients on demand. The clients connect to the server via an internal IP that is set in a config file.
Question I:
Am I right in assuming that I just create an AMI in which our Python client is set to autostart, and that once the instance is started it connects to the server's public IP and picks up new tasks? Is that the entire magic, or am I missing some really useful features in this concept?
Question II:
When spawning a new instance, can I start it with updated configuration or metadata, or do I have to update my AMI every time I make a small change?
If you want to stick with just plain spawning of EC2 instances, here are the answers to your questions:
Question I - This is one of the valid approaches, and yes, if your Python client is configured properly, it will 'just work'.
Question II - Yes, you can achieve that, and it is very well explained here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html. Alternatively, you can store your configuration somewhere else and have the instance fetch it as it starts.
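As a rough illustration of that user-data approach with the Python SDK (boto3); the AMI ID, key pair, instance type, service name, and paths are placeholders, not anything from your setup:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

    # Plain-text user data; boto3 base64-encodes it for you. It runs once at first
    # boot, so per-launch configuration can live here instead of in the AMI.
    user_data = """#!/bin/bash
    echo "SERVER_IP=203.0.113.10" > /etc/myclient/config   # placeholder path and IP
    systemctl start myclient                                # hypothetical service name
    """

    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # your client AMI (placeholder ID)
        InstanceType="t3.small",          # assumed instance type
        MinCount=1,
        MaxCount=1,
        KeyName="my-keypair",             # placeholder key pair name
        UserData=user_data,
    )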
I'm still cheap.
I have a software development environment which is a bog-standard Ubuntu 11.04 plus a pile of updates from Canonical. I would like to set it up such that I can use an Amazon EC2 instance for the 2 hours per week when I need to do full system testing on a server "in the wild".
Is there a way to set up an Amazon EC2 server image (Ubuntu 11.04) so that whenever I fire it up, it starts, automatically downloads code updates (or, conversely, accepts git push updates), and then leaves me ready to start an instance of the application server? Is it also possible to tie that server to a URL (e.g. ec2.1.mydomain.com) so that I can hit my web app with a browser?
Furthermore, is there a way I can run a command-line utility to fire up my instance when I'm ready to test and shut it down when I'm done? Using this model, I would be able to allocate one or more development servers to each developer and only pay for them when they are being used.
Yes, yes, and more yes. Here are some good things to Google / hunt down on SO and SF:
- the EC2 command-line tools
- making your own AMIs from running instances (to save tedious and time-consuming startup gumf)
- the Route 53 APIs for doing DNS magic (see the sketch after this list)
- Ubuntu cloud-init for startup scripts
- 32-bit micro instances are your friend for dev work, as they fall in the free usage bracket
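A rough sketch of the "fire it up, point DNS at it, shut it down when done" piece, using the current Python SDK (boto3) rather than the old ec2 command-line tools; the instance ID, hosted zone ID, and hostname are placeholders, and cloud-init on the image would handle pulling code updates at boot:

    import boto3

    # Placeholders: substitute your own instance ID, hosted zone, and hostname.
    INSTANCE_ID = "i-0123456789abcdef0"
    HOSTED_ZONE_ID = "Z0123456789ABCDEF"
    HOSTNAME = "ec2.1.mydomain.com"

    ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region
    route53 = boto3.client("route53")

    def start_dev_server():
        """Start the dev instance, wait for it, and point the DNS name at it."""
        ec2.start_instances(InstanceIds=[INSTANCE_ID])
        ec2.get_waiter("instance_running").wait(InstanceIds=[INSTANCE_ID])

        reservations = ec2.describe_instances(InstanceIds=[INSTANCE_ID])["Reservations"]
        public_ip = reservations[0]["Instances"][0]["PublicIpAddress"]

        # UPSERT an A record so the browser can reach the box at a stable name.
        route53.change_resource_record_sets(
            HostedZoneId=HOSTED_ZONE_ID,
            ChangeBatch={
                "Changes": [{
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": HOSTNAME,
                        "Type": "A",
                        "TTL": 60,
                        "ResourceRecords": [{"Value": public_ip}],
                    },
                }]
            },
        )
        return public_ip

    def stop_dev_server():
        """Stop the instance so it stops costing money."""
        ec2.stop_instances(InstanceIds=[INSTANCE_ID])

    if __name__ == "__main__":
        print("Dev server up at", start_dev_server())

Run start_dev_server() when you want to test and stop_dev_server() when you're done, so each developer's box only costs money while it's actually in use.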
All of what James said is good. If you're looking for something requiring less technical know-how and research, I'd also consider:
juju (sudo apt-get install -y juju). This lets you start up a series of instances. Basic tutorial is here: https://juju.ubuntu.com/docs/user-tutorial.html