I am currently running JMeter on 5 local VMs, one acting as master and 4 as slaves. I want to move them to Amazon servers. Can anyone suggest a step-by-step configuration method? I searched the internet and couldn't find documentation with full clarity. Alternatively, can anyone share a link to good documentation on this?
JMeter version: 3.2
My requirements are:
1 master and 4 slaves.
The master should have a Linux GUI because I need the JMeter GUI to run the test, since we are analyzing the data in real time as it runs.
First of all, double-check that you have searched thoroughly enough: there is the JMeter ec2 Script project, which automates the installation and configuration of JMeter remote engines.
In general, the process doesn't differ from configuring JMeter in distributed mode locally; Amazon EC2 instances are essentially the same machines as local ones and require the same configuration steps. Just make sure to open the following ports (see the sketch below the list):
1099 (the default RMI registry port)
the port you define as server.rmi.localport
the ports you define as client.rmi.localport
This has to be done both in the Linux firewall and in the AWS Security Groups.
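As a minimal sketch, assuming 4 slaves, the AWS CLI, and a security group named jmeter-sg (host names, port numbers, and the group name are placeholders, not part of the original answer), pinning and opening the RMI ports might look like this:
# On each slave: start JMeter in server mode with a fixed RMI port,
# so the firewall rules can be static.
jmeter-server -Jserver.rmi.localport=4000
# On the master: list the slaves and pin the client-side RMI port.
jmeter -Jremote_hosts=slave1,slave2,slave3,slave4 -Jclient.rmi.localport=4001
# Open the ports in the Security Group (restrict the CIDR to your own
# machines rather than 0.0.0.0/0 for anything non-throwaway).
aws ec2 authorize-security-group-ingress --group-name jmeter-sg \
    --protocol tcp --port 1099 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name jmeter-sg \
    --protocol tcp --port 4000-4001 --cidr 0.0.0.0/0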
Check out the following material:
Remote Testing
JMeter Distributed Testing Step-by-step
JMeter Distributed Testing with Docker
Load Testing with Jmeter and Amazon EC2
I'm exploring GCP and I love the way it lets a developer play with such costly infrastructure. I have learnt a lot so far; I'm no longer a beginner, but I have this case for which I'm unable to find docs or examples, or I might be thinking in the wrong direction.
I want to build an auto-scaling hosting solution where users can:
Create Account
Create multiple websites [these websites are basically templates where the user can define certain fields and the website is rendered in a specific manner; users are not allowed to upload files, only to make some data entries]
Connect a domain to a website [by putting an 'A' record DNS entry on their domain]
After that, an SSL certificate is provisioned automatically by the platform and the website is up and running [somewhat like Firebase]
I could easily create such a project on one server with the following configuration [skipping simple steps like user auth etc.]:
I use Ubuntu 16.04 as my machine type with 4GB RAM and a 10GB persistent disk
Then I install nvm [a package to manage Node.js versions]
After that I install a specific version of Node.js using nvm
I have written a simple JavaScript package in which I use an Express server to respond to client requests with some HTML
For managing SSL I use Let's Encrypt's certbot package
I use pm2 to run the JavaScript file as a service in the background
After accomplishing this I could see that everything works the way I want it to.
Then I started exploring GCP's load balancers. There I learnt about the layer 4 and layer 7 LBs, and I implemented some hello-world tests [using startup scripts] in all possible configurations, like:
layer 7 HTTP
layer 7 HTTPS
layer 4 internal TCP
layer 4 internal SSL
Here is the main problem I am facing:
I can't find a way to dynamically allocate an SSL certificate to an incoming request at the load balancer.
In my case requests might come from any domain, so the GCP load balancer must have some sort of configuration to provision an SSL certificate for a specific domain [I have read that it can allocate SSL certificates for up to 100 domains, but how could I automate that?]. Alternatively, could there be a way for requests to be redirected rather than proxied [the LB generating a new request to the internal servers], so that the internal servers can handle SSL management themselves?
I might be wrong somewhere in my understanding of the concepts. Please help me solve the problem; I want to build a Firebase Hosting clone on my own. Any kind of response is welcome 🙏
One way to do it would be to update your JS script to generate a Google-managed certificate for each new domain via gcloud:
gcloud compute ssl-certificates create CERTIFICATE_NAME \
--description=DESCRIPTION \
--domains=DOMAIN_LIST \
--global
and then apply it to the load balancer:
gcloud compute target-https-proxies update TARGET_PROXY_NAME \
--ssl-certificates SSL_CERTIFICATE_LIST \
--global-ssl-certificates \
--global
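Putting the two together, a minimal shell sketch of a per-domain provisioning helper might look like the following (the proxy name my-https-proxy is an assumption, and note that the update command replaces the certificate list, which is why the current list is read first):
#!/bin/bash
DOMAIN="$1"
CERT_NAME="cert-${DOMAIN//./-}"   # e.g. cert-example-com
PROXY="my-https-proxy"            # assumed proxy name

gcloud compute ssl-certificates create "$CERT_NAME" \
    --domains="$DOMAIN" --global

# Collect the certificates currently attached to the proxy and
# append the new one, since the update replaces the whole list.
CURRENT=$(gcloud compute target-https-proxies describe "$PROXY" \
    --global --format="value(sslCertificates)" | tr ';' ',')

gcloud compute target-https-proxies update "$PROXY" \
    --ssl-certificates="$CURRENT,$CERT_NAME" \
    --global-ssl-certificates --global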
Please be aware that it may take anywhere from 5 to 20 minutes for the Load Balancer to start using new certificates.
You can find more information here.
My REST application is being developed with Python and Flask; I am also using Rasa Core and Rasa NLU. Currently everything runs on a single local development server. I would like to know the ideal recommendations for production.
A scenario that I imagined: handle all REST calls and the database structure on one server, keep Rasa Core together with a "micro" Python application on another server, and Rasa NLU on a third server.
But the question is: every user request would end up passing through the 3 servers in cascade, so I think all servers are subject to the same request bottleneck.
And what would be the ideal setup: 1 server with everything, or 3 servers? (on AWS)
To be the most scalable, you can use a containerized solution with load balancing:
1. Rasa NLU has a public Docker container (or you could create your own). Use Docker & Kubernetes to scale the NLU out to however large a base you need.
2. Create separate Docker containers for your Rasa Core, connecting to the NLU load balancer for NLU translation. Use a load balancer here too if you need to.
3. Do the same for your REST application, connecting to the load balancer from #2.
This solution would allow you to scale your NLU and core separately however you need to, as well as your REST application if you need to do that separately; a sketch follows.
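As a rough sketch of step 1, assuming the public rasa/rasa_nlu image and its default port 5000 (verify both against the current Rasa docs), running and scaling the NLU on Kubernetes could look like:
# Try the public Rasa NLU image locally first (image name assumed).
docker run -p 5000:5000 rasa/rasa_nlu:latest
# On Kubernetes: create a deployment, expose it behind a load-balanced
# service, and scale it out as load grows.
kubectl create deployment rasa-nlu --image=rasa/rasa_nlu:latest
kubectl expose deployment rasa-nlu --type=LoadBalancer --port=5000
kubectl scale deployment rasa-nlu --replicas=4
Rasa Core and the REST application would each get an analogous deployment and service, pointed at the rasa-nlu service address.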
If you are interested, I wrote a tutorial on this here:
I have HDFS-HA (NameNode high availability) set up in my Hadoop cluster (using Apache Ambari).
Now I have a scenario in which my ambari-server machine (which also hosts one NameNode, the active/primary one) went offline, so my other NameNode (standby) became active, but after some time it went offline too for some reason. The services were offline, I mean; I was unable to do any operation. What if I have to start manually the services that are normally started using Ambari?
I mean using the command line or something.
Services can be started from the command line, but typically they should not be in an Ambari environment. This is because Ambari does more than just start the service when you issue a start/restart command: it also makes sure the most up-to-date configuration is written to each node, along with various other housekeeping tasks.
You can look at the logs in Ambari when you start/restart a service to see exactly what Ambari does with respect to writing the configuration, the other housekeeping, and the exact command used to start/restart the given service.
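For illustration only, on an HDP-style install the command Ambari runs for a NameNode is roughly of this shape (paths and the service user vary by version; take the exact command from the Ambari operation log as described above):
# Assumed HDP layout; run on the NameNode host.
sudo -u hdfs /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh \
    --config /etc/hadoop/conf start namenode
# And if ambari-server itself is reachable, the supported route is its
# REST API rather than a shell on the node:
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
    -d '{"RequestInfo":{"context":"Start HDFS"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
    http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/services/HDFS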
So there is a set of applications that positions itself as a distributed cluster OS, called DCOS.
It has MPI and Spark running on top of it.
I am a developer, and I have a set of distributed services connected via sockets or ZeroMQ.
How can I port my existing services to DCOS?
Meaning, use its communication facilities instead of sockets/ZeroMQ.
Is there any API / documentation on how not just to run on it, but to develop for it?
There are a number of ways to get your application to run on DCOS (and/or Mesos).
First, for legacy applications you can use the Marathon framework, which you can view as kind of the init system of DCOS/Mesos; a minimal example follows below.
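As an illustration, a Marathon app definition for one of your existing services (the id, command, and resource numbers here are made-up placeholders) can be posted straight to the Marathon REST API:
# Marathon assigns $PORT0 at scheduling time; the service should bind to it.
curl -X POST http://MARATHON_HOST:8080/v2/apps \
    -H 'Content-Type: application/json' \
    -d '{
          "id": "/my-zmq-service",
          "cmd": "python service.py --port $PORT0",
          "cpus": 0.5,
          "mem": 256,
          "instances": 3,
          "portDefinitions": [{"port": 0, "protocol": "tcp"}]
        }'
Instances then typically find each other through Mesos-DNS rather than hard-coded socket addresses.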
If you need more elaborate applications and want to really program against the APIs, you would write a Mesos framework: see the framework development guide for more details.
For deeper integration of your framework into DCOS, for example using the package repository / command-line install option, check out/contact Mesosphere for more details.
Hope this helps!
Joerg
I'm still cheap.
I have a software development environment which is a bog-standard Ubuntu 11.04 plus a pile of updates from Canonical. I would like to set things up so that I can use an Amazon EC2 instance for the 2 hours per week when I need to do full system testing on a server "in the wild".
Is there a way to set up an Amazon EC2 server image (Ubuntu 11.04) so that whenever I fire it up, it starts, automatically downloads code updates (or conversely accepts git push updates), and is then ready for me to fire up an instance of the application server? Is it also possible to tie that server to a URL (e.g. ec2.1.mydomain.com) so that I can hit my web app with a browser?
Furthermore, is there a way I can run a command-line utility to fire up my instance when I'm ready to test, and then shut it down when I'm done? Using this model, I could allocate one or more development servers to each developer and only pay for them when they are being used.
Yes, yes and more yes. Here are some good things to google/hunt down on SO and SF (a sketch of the start/stop cycle follows the list):
--EC2 command line tools,
--making your own AMIs from running instances (to save tedious and time-consuming startup gumf),
--the Route 53 API for doing DNS magic,
--Ubuntu cloud-init for startup scripts,
--32-bit micro instances are your friend for dev work, as they fall in the free usage bracket
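A rough sketch of that cycle with today's AWS CLI (the instance ID, hosted zone ID, and domain are placeholders; the era-appropriate ec2-api-tools have equivalent commands):
# Start the dev instance and wait until it is running.
aws ec2 start-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-running --instance-ids i-0123456789abcdef0
# Grab its current public DNS name.
HOST=$(aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
    --query 'Reservations[0].Instances[0].PublicDnsName' --output text)
# Point ec2.1.mydomain.com at it via Route 53.
aws route53 change-resource-record-sets --hosted-zone-id Z123EXAMPLE \
    --change-batch "{\"Changes\":[{\"Action\":\"UPSERT\",\"ResourceRecordSet\":{\"Name\":\"ec2.1.mydomain.com\",\"Type\":\"CNAME\",\"TTL\":60,\"ResourceRecords\":[{\"Value\":\"$HOST\"}]}}]}"
# Stop paying for it when testing is done.
aws ec2 stop-instances --instance-ids i-0123456789abcdef0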
All of what James said is good. If you're looking for something requiring less technical know-how and research, I'd also consider:
juju (sudo apt-get install -y juju). This lets you start up a series of instances; see the sketch below. A basic tutorial is here: https://juju.ubuntu.com/docs/user-tutorial.html
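For a feel of the flow, a sketch using juju's stock example charms (the charm names are just illustrations):
# One-time: configure your AWS credentials, then bootstrap a control
# instance in EC2.
juju bootstrap
# Deploy example charms and expose one to the outside world.
juju deploy mysql
juju deploy wordpress
juju add-relation wordpress mysql
juju expose wordpress
# Tear everything down when you are done paying for it.
juju destroy-environment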