Stress testing of ParseServer app - ionic2

I have an app with ParseServer back-end and Ionic2 front-end. I need to simulate multiple users to stress test the back-end.
What load testing tools would you recommend to use for such setup?
Thanks.

You need to split your process into 2 phases:
Server-side testing. You need to load test your back-end to make sure it can handle the anticipated number of users. In practice, any tool capable of sending HTTP requests will do; the most popular free and open-source load testing solutions are JMeter, Grinder, Gatling, and Tsung. All of them come with record-and-replay functionality, so you can build your test by simply interacting with your mobile application while using the load testing tool as a proxy (a quick hand-rolled sketch is also shown after this list). See the Open Source Load Testing Tools: Which One Should You Use? article for the main features highlighted and compared.
Client-side testing. Even if your server responds quickly, handles enormous load, scales well, etc., your application's user experience may still suffer, because client-side performance matters too. You can go for Chrome DevTools Remote Debugging and/or Intel XDK, which is capable of profiling existing applications.
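For the server side, if you want a quick smoke test before setting up one of the tools above, a minimal hand-rolled Node.js sketch like the one below fires concurrent requests at a Parse Server REST endpoint and reports the average response time. The URL, application id, and class name are placeholders for your own deployment, and this is no substitute for a proper JMeter/Gatling test plan with ramp-up and think times:

// Fire N concurrent GET requests at a Parse Server REST endpoint.
const https = require('https');

const USERS = 50; // simulated concurrent users
const URL = 'https://your-parse-server.example.com/parse/classes/GameScore'; // placeholder
const HEADERS = { 'X-Parse-Application-Id': 'YOUR_APP_ID' };                 // placeholder

function hit() {
  return new Promise((resolve, reject) => {
    const start = Date.now();
    https.get(URL, { headers: HEADERS }, (res) => {
      res.resume(); // drain the body so the socket is released
      res.on('end', () => resolve(Date.now() - start));
    }).on('error', reject);
  });
}

Promise.all(Array.from({ length: USERS }, hit)).then((times) => {
  const avg = times.reduce((a, b) => a + b, 0) / times.length;
  console.log(`${USERS} requests finished, average ${avg.toFixed(0)} ms`);
});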

You can try using ZebraTester and record the script for this test. The trial version allows you to have up to 20 Virtual Users and these can run multiple loops depending on the length of your test. The same tool can record the script as well as run the load test from your local machine.

I use JMeter to test Parse Server: http://jmeter.apache.org/
It is a free tool; you can install it on your computer and start testing.

Related

Can I launch protractor mobile test on the AWS Device Farm

Can I run my e2e test developed using Protractor on the AWS device farm?
I want to complete mobile testing of my project using the AWS Device Farm, but I do not really understand whether I can do that or not. I found three threads about it on the AWS forum, but they are quite old, from 2018.
First forum discussion
Second forum discussion
Third forum discussion
Maybe something changed?
I have Protractor e2e tests written for the desktop browser and want to use them for the mobile browser too.
I will answer this for both mobile browsers and desktop testing.
Mobile Browsers
AWS Device Farm has 2 execution modes: Standard Mode and Custom Mode.
Standard mode gives you granular, per-test reporting, which is useful if you don't generate a report for your tests locally. It splits up the artifacts for each test.
Custom mode gives you execution behavior and results as close as possible to what you would get locally. It does not give you the granular reporting, which is fine for most customers, as you already get reports locally and those will be available on Device Farm as well. Custom mode is the recommended option, as it is the most up to date and adds support for the latest frameworks, unless of course you absolutely need granular reporting.
Protractor on Device Farm
It is not officially supported today.
However, Device Farm supports Appium Node.js in custom mode. You get a YAML file in which you can run shell commands on the host machine where the tests will be executed. So in the case of Protractor you could select this test type (Appium Node.js), install the missing dependencies needed for the tests, start your server, and run your tests.
Points to evaluate: since Device Farm takes your tests as input, you will have to upload a zip file of your tests. I would highly recommend checking the instructions for Node.js tests and following them. Alternatively, you can also download your tests on the fly using the YAML file.
Desktop Browsers
Device Farm has a Selenium grid that you can connect to from your local machine and run your tests against. Chrome and Firefox run on the Windows platform; Safari is not supported today. If you use a Selenium grid on your local machine for your tests, then you most likely should be able to run the same tests using the Selenium grid on Device Farm, pending validation of course.
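As a rough illustration of that flow with the Node.js AWS SDK and selenium-webdriver (the project ARN and region are placeholders, and you should check the Device Farm desktop browser testing documentation for the current API details):

// Hedged sketch: request a short-lived Selenium hub URL from Device Farm,
// then point an ordinary WebDriver test at it instead of a local driver.
const AWS = require('aws-sdk');
const { Builder } = require('selenium-webdriver');

(async () => {
  const devicefarm = new AWS.DeviceFarm({ region: 'us-west-2' });
  const { url } = await devicefarm.createTestGridUrl({
    projectArn: 'arn:aws:devicefarm:us-west-2:123456789012:testgrid-project:EXAMPLE', // placeholder
    expiresInSeconds: 300
  }).promise();

  const driver = await new Builder()
    .usingServer(url)       // remote Device Farm grid instead of a local driver
    .forBrowser('chrome')
    .build();
  try {
    await driver.get('https://example.com');
    console.log(await driver.getTitle());
  } finally {
    await driver.quit();
  }
})();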
If you need more help on any of these items feel free to reach out to aws-devicefarm-support@amazon.com and I can help you further.
You can test in Chrome with an emulated mobile mode.
Add "mobileEmulation" to the chromeOptions in a new protractor.conf-mobile.js:
chromeOptions: {
  args: ['--disable-infobars', '--headless', '--disable-gpu', '--window-size=1920,1080'],
  mobileEmulation: { deviceName: 'Galaxy S5' }
}

Is there any way to test Highcharts with a large data set using Jest and React?

I am trying to write tests for Highcharts.
But how do I test the chart with a larger data set, to check performance and verify that Highcharts works when the data set is large?
Libraries: highcharts-react-official, Jest, React
I usually use Chrome DevTools / Lighthouse for benchmarking JavaScript / browser performance, and both can be automated using a Node.js module:
For Chrome DevTools see:
https://developers.google.com/web/tools/chrome-devtools/rendering-tools
For Lighthouse see: https://developers.google.com/web/updates/2018/05/lighthouse#javascript_boot-up_time_is_high
PS: Depending on the browser and device you are testing on, you'll see different performance. So it's best to use a controlled system in a dedicated VM (rather than a local machine where other processes, software and tabs might affect performance at any given time).
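For example, a minimal sketch of scripting Lighthouse from Node.js (assuming the lighthouse and chrome-launcher packages are installed; the URL is a placeholder for a page that renders your chart):

// Hedged sketch: launch headless Chrome, run a Lighthouse performance audit,
// and print the score plus the JavaScript boot-up time audit.
const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

(async () => {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const result = await lighthouse('https://example.com/my-chart-page', { // placeholder URL
    port: chrome.port,
    onlyCategories: ['performance']
  });

  console.log('Performance score:', result.lhr.categories.performance.score);
  console.log('JS boot-up time:', result.lhr.audits['bootup-time'].displayValue);

  await chrome.kill();
})();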
Highcharts React Wrapper contains tests for: testing environment, chart rendering and passing down container props.
API: https://github.com/highcharts/highcharts-react#tests
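If you specifically want to exercise the chart with a large data set inside Jest, a minimal sketch could look like the following. Assumptions: the usual babel-jest + jsdom setup for React apps, @testing-library/react installed, an arbitrary 50,000-point synthetic series, and a rough time budget you would need to tune for your CI hardware; real performance profiling is still better done in a browser as described above.

// Hedged sketch: render a large line series and assert the chart appears.
import React from 'react';
import { render } from '@testing-library/react';
import Highcharts from 'highcharts';
import HighchartsReact from 'highcharts-react-official';

test('renders a line chart with a large data set within a rough time budget', () => {
  // Synthetic series with 50,000 [x, y] points
  const data = Array.from({ length: 50000 }, (_, i) => [i, Math.sin(i / 100)]);
  const options = {
    series: [{ type: 'line', data }],
    plotOptions: { series: { animation: false } } // keep the run deterministic
  };

  const start = Date.now();
  const { container } = render(
    <HighchartsReact highcharts={Highcharts} options={options} />
  );
  const elapsed = Date.now() - start;

  // The chart container should have been rendered into the DOM
  expect(container.querySelector('.highcharts-container')).not.toBeNull();
  // Very rough guard against performance regressions
  expect(elapsed).toBeLessThan(5000);
});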

Run automation on a headless browser in an EC2 instance with Amazon Linux

I have an automation framework that uses static HTML pages within its project directory to perform certain AWS operations such as DynamoDB scans and AWS Lambda executions. Due to a performance bottleneck in an API component the tests depend on, we are trying to move the framework to an EC2 instance with Amazon Linux and run the tests from there.
Since we have methods in the TestNG class that use Selenium WebDriver to spin up a browser and open the static page in order to perform the required AWS operations, I am pretty sure these tests are going to run into issues.
There are two potential approaches I see for solving this issue:
Implement AWSUtil classes and use the necessary AWS clients to replace the web-dependent logic (will require some effort and re-engineering)
Use a headless Chrome browser (or any compatible one) to run the web-dependent steps.
I am pretty sure that number 1 can be easily achieved, just a matter of time and effort. However, I would love to know if there is an easy way of accomplishing #2, since this would not require any code rewrite.
We had the same issue and have been successful with Puppeteer:
https://github.com/GoogleChrome/puppeteer
If you don't want to install the latest version of Node, you can dockerize your tests.
Puppeteer can run headless or with a visible browser.
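For example, a minimal sketch (the file path is a placeholder for one of the static HTML pages the framework uses):

// Hedged sketch: open one of the framework's static pages in headless Chromium.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();            // headless by default
  const page = await browser.newPage();
  await page.goto('file:///path/to/static-page.html'); // placeholder path
  console.log(await page.title());
  await browser.close();
})();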
Hope it helps.
There is no need to change anything in your tests, just the setup and the execution. The tests can run headlessly on a Continuous Integration (CI) server. There is no out-of-the-box setup, since there is no display output for the browser to launch in; however, with Xvfb you can launch the browser virtually. Straight from the docs:
Xvfb (short for X virtual framebuffer) is an in-memory display server for UNIX-like operating systems (e.g., Linux). It enables you to run graphical applications without a display.
Depending on whether you want to keep Xvfb running in the background until the process is killed, there are two options:
# start a virtual display :99 and keep it running in the background
Xvfb :99 &
# make the browser render to the virtual display
export DISPLAY=:99
run-your-tests-here
or
xvfb-run run-your-tests-here
Here is a Linux tutorial. I am using this for my Docker-based Jenkins setup and it works like a charm, every time.

Deploy a local webservice on many machines - is it the right strategy?

I was wondering about the best way to deliver private web service instances to lots of users, so that each user would always be able to connect to their own offline version of the service, just like running a web service from Visual Studio while debugging. I was struggling to set this up in VS2013 even with the many online tutorials, but I am not sure if it's not working because it was never supposed to work this way.
I have provided this in-depth explanation of my issue as I am not sure I am going about this in the right way and would appreciate feedback:
Background:
I have a web service to interface with an engine. This deals with the front-end and builds a set of commands for how to make a CAD model. These commands are for controlling the 3rd party CAD software's API. Therefore the engine can be seen to have two main functions -
Build the CAD's API instructions, which can be saved for later
Execution, where it catches the instance of the CAD software running on the same computer and it builds the model.
The second part is restricted from the general public. Only our in-house users should be able to use it. However, they want to have an otherwise identical front-end and user experience.
The problem is, if they connect to the same engine as the public, which lives on our main server, then the engine will be looking for an instance of the CAD package on the same machine as itself - i.e. the server, as stressed in the emboldened point above. What should happen is the engine finds the CAD instance running on the machine that the controlling UI is based on and uses that as its target. I have spoken to the CAD API support team and they say they do not know how to do that.
And so we get to my solution of providing an offline, stand-alone copy of my web service on each of the employees' computers. This means the front-end will check at the start of the session whether a localhost connection is available. If not, it will use the main address, which takes it to my server. Otherwise it uses the local engine, which will perform the default behavior of looking for a CAD package on the same machine as itself. Because it is locally installed, that is now the right machine and it will find the user's CAD instance successfully.
Final points:
The engine cannot be accessed by the UI directly, as I am using Unity3D for the front-end and there are .NET compatibility issues.
I need a completely self-contained version of the software in the future anyway, so eventually I have to deal with having the engine accessed locally.
I ended up using IIS Express. I got the users to install this and then call a batch-file installer I made, which sets up the config file and moves my web project to the correct directory.

What GUI tool can I use for building applications that interact with multiple APIs?

My company uses a lot of different web services on a daily basis. I find that I repeat the same steps over and over again, day after day.
For example, when I start a new project, I perform the following actions:
Create a new client & project in Liquid Planner.
Create a new client in Freshbooks
Create a project in Github or Codebasehq
Add the developers who are going to be working on this project to Codebasehq or Github
Create tasks in the ticketing system on Codebasehq and tasks in Liquid Planner
This is just when starting new projects. When I have to track tasks, it gets even trickier because I have to monitor tasks in 2 different systems.
So my question is, is there a tool that I can use to create a web service that will automate some of these interactions? Ideally, it would be something that would allow me to graphically work with the web service API and produce an executable that I can run on a server.
I don't want to build it from scratch. I know, I can do it with Python or RoR, but I don't want to get that low level.
I would like to add my sources and pass data around from one service to another. What could I use? Any suggestions?
Progress DataXtend Semantic Integrator lets you build WebServices through an Eclipse based GUI.
It is a commercial product, and I happen to work for the company that makes it. In some respects I think it might be overkill for you, as it's really an enterprise-level data mapping tool for mapping disparate data sources (web services, databases, XML files, COBOL) to a common model, as opposed to a simple web services builder, and it doesn't really support your GitHub bits any more than normal Eclipse plugins would.
That said, I do believe there are Mantis plugins for github to do task tracking, and I know there's a git plugin for Eclipse that works really well (jgit).
Couldn't you simply use Selenium to execute some of these tasks for you? Basically, as long as you can do something from the browser, Selenium will also be able to do it. Selenium comes with a language called "selenese", so you can even use it to programmatically create an "API" out of your tasks.
I know this is a different approach from what you're originally looking for, but I've been using Selenium for a number of tasks, and found it's even good for executing Ant tasks or unit tests.
Hope this helps you
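As a rough illustration, here is a minimal sketch using the Node selenium-webdriver bindings rather than Selenese; the URL and element selectors are hypothetical placeholders for whatever web UI you are automating:

// Hedged sketch: drive a browser through a "create project" form.
const { Builder, By, until } = require('selenium-webdriver');

(async () => {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://example.com/projects/new');               // placeholder URL
    await driver.findElement(By.name('project_name')).sendKeys('New client project');
    await driver.findElement(By.css('button[type="submit"]')).click();
    await driver.wait(until.titleContains('Project'), 10000);           // crude success check
  } finally {
    await driver.quit();
  }
})();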
What about Apache Camel?
Camel lets you create the Enterprise Integration Patterns to implement routing and mediation rules in either a Java based Domain Specific Language (or Fluent API), via Spring based Xml Configuration files or via the Scala DSL. This means you get smart completion of routing rules in your IDE whether in your Java, Scala or XML editor.
Apache Camel uses URIs so that it can easily work directly with any kind of Transport or messaging model such as HTTP, ActiveMQ, JMS, JBI, SCA, MINA or CXF Bus API together with working with pluggable Data Format options.