I have written a simple test for my website. The test simply searches for a word on my search page and waits for the results.
What I need is to run the same test 40 times simultaneously to mimic a situation where 40 users are searching for the same word at the same time.
Basically I want to know how to run them simultaneously, not in a queue.
Thanks.
What you probably need is Selenium RC and Selenium Grid, as Selenium IDE is quite limited for automated testing. RC allows you to run remote Selenium tests (though RC can run locally too) and Grid simplifies access to all the running RCs.
You need 40 clients at once. If you are using selenium-rc you can start several clients simultaneously by configuring them to run on different ports. After that you have to start your test 40 times at once; that is the tricky part, and it depends on what framework you are using to launch the tests.
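If your test can be launched from the command line, one framework-agnostic way to fire all 40 at once is a small launcher script. A minimal Node.js sketch, assuming a hypothetical run-search-test command and one Selenium RC instance listening per port starting at 5555:

// parallel-launch.js: spawn the same test 40 times simultaneously (sketch)
const { spawn } = require('child_process');

const RUNS = 40;
const BASE_PORT = 5555; // hypothetical: one Selenium RC instance per port

for (let i = 0; i < RUNS; i++) {
  // give each test its own RC port so the sessions do not queue
  const child = spawn('run-search-test', ['--port', String(BASE_PORT + i)], { stdio: 'inherit' });
  child.on('exit', (code) => console.log(`run ${i} exited with code ${code}`));
}

Each spawned process runs independently, so the 40 searches genuinely overlap rather than run one after another.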
I would suggest JMeter for load-test situations like this. It is quite easy to set up and you can configure how many simulated users you want on your website at once. JMeter works fine for manual tests as well as automated ones.
What you are trying to do is really a load/stress test, where a number of users try to perform a certain action simultaneously, and that is exactly what JMeter is built for. Have a look at JMeter.
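For scale, once a test plan has been recorded in the JMeter GUI and its thread group set to 40 concurrent users, JMeter can replay it unattended in non-GUI mode; a sketch, with the file names as placeholders:

jmeter -n -t search-test.jmx -l results.jtl

Here -n runs non-GUI, -t names the test plan, and -l writes the per-request results for later analysis.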
I have been rushing to get my app, a mixture of a nodejs server driving an sqlite database and a litElement based client side providing the UI, into a usable state as a beta release. I achieved that a couple of days ago and now I am (belatedly, I know) thinking about how to put together a test framework. However I am really struggling to understand how best to test the client side. I think it's because I am having difficulty understanding conceptually what the two main choices of framework are. Before I go into more detail, let me explain the structure of the app in top-level terms.
At the project root level there are three main directories: node_modules, which comprises all the modules I've pulled in (including lit-element and web-components-loader, which are client side elements, but see below); server, which contains all the code for the server side of my application; and client, which consists of all the code for the client side of my app. I run rollup ONLY at module install time to "package" lit-element, the directives I use and the web-component-loader, effectively treeshaking them and copying them to client/libs. As a result of this my client is coded to assume the modules are in the libs directory AND I DO NOT NEED OR HAVE any build stage.
I guess the root of the client is index.html, which pulls in service-worker.js and main-app.js. main-app is the root of a tree of lit-element based components that make up the entire client app. Nginx is the web server for all the static files in the client, but it also acts as a proxy to pass any urls that start with /api to a standard node http web server (not even express, although I do use the router, body-parser and final-handler modules), and these get passed to various api handlers, each of which is a separate javascript file, although these can "require" a few common modules that I have written as well as those in the node_modules directory.
I plan on using jest as a test environment. For my server I think it is easy. For each api handler I want to test I can build a test script that "requires" the javascript file I want to test. I am in two minds about whether to use a sqlite database for testing or mock something - I am leaning towards the former as I am using better-sqlite3 and it is totally synchronous and very fast. I already have scripts to create empty databases, so I have no worry about test isolation.
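For what it's worth, a server-side test in that style can stay very small. A minimal sketch, assuming a hypothetical handler module server/api/search.js that exports a function taking a better-sqlite3 handle and a search term:

// server/test/search.test.js: sketch only; handler name and schema are assumptions
const Database = require('better-sqlite3');
const search = require('../api/search');

test('search returns matching rows', () => {
  const db = new Database(':memory:'); // fresh in-memory db per test, so isolation is free
  db.exec('CREATE TABLE items (name TEXT)');
  db.prepare('INSERT INTO items (name) VALUES (?)').run('widget');
  expect(search(db, 'widget')).toHaveLength(1);
});

Because better-sqlite3 is synchronous, there are no async hooks to manage in the test body.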
Client testing is where I get confused. I "think" that, in essence, jest can run tests the same way as for the server, one element at a time. BUT these elements and my test scripts are going to need a "web platform" set of APIs, not least of which is the entire shadow dom and custom elements stuff that lit-element uses. This is where, I think, puppeteer or electron come in, with their associated jest plugins which can put these platform APIs into the test environment. But, and this is the essence of my confusion, puppeteer instructions all start with something like
const browser = await puppeteer.launch({headless: true});
const page = await browser.newPage();
await page.goto(SOME URL);
What is this URL? Do I also have to run a server? I cannot relate this snippet to running a test controlled by jest. All the examples seem to use webpack and typescript, neither of which I know anything about.
The other module I have seen mentioned is electron, in particular in this article, which everything else seems to point to (or repeat the same text):
https://www.ninkovic.dev/blog/2020/testing-web-components-with-jest-and-lit-element
From the code snippets in this article it "seems" like it might be what I want, BUT ...
I cannot find very many references to electron other than its own web site, which tells you to use electron as a tool to build cross-platform applications but nowhere explains what it actually is; it assumes you already know. I don't want a UI for my unit testing, I want it to be headless like puppeteer.
Hence my confusion, and why I am unsure how to achieve what I want. Can someone give me some pointers as to:
How I can set up puppeteer to run headless tests without needing a server, OR
What exactly electron is, and whether I can use it to
a) run my tests, and
b) provide me with tools to examine the dom elements I have created (to check I have created the right ones),
and how it differs from puppeteer: can I use it to conduct headless tests of my client?
UPDATE
I've done some more digging and am beginning to understand the differences. Let me summarise what I think I have found.
Puppeteer is great for end-to-end testing of your site. You run the tests by launching the bundled Chromium at the home page of your site (or, more likely, a development test site) and programmatically pretend to be a user who can click on buttons etc. You can use various methods, including functions such as document.querySelector(), to check your UI has behaved how you think, or you can take screenshots and compare them with standardised versions. I could possibly use it for unit tests, but I would have to run a server, create a test fixture html page for every test and navigate to it. jest-puppeteer is a package with some of that built in.
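One detail worth knowing for the unit-test case: puppeteer does not actually force you to run a server. page.setContent() accepts an HTML string, and page.goto() also accepts file:// URLs, so fixtures can be loaded without one. A rough jest sketch (the element name and script path are made up, and bare module imports inside the loaded script may still need resolving):

// sketch: puppeteer-driven unit test with no web server
const puppeteer = require('puppeteer');

let browser, page;
beforeAll(async () => {
  browser = await puppeteer.launch({ headless: true });
  page = await browser.newPage();
});
afterAll(() => browser.close());

test('custom element attaches a shadow root', async () => {
  await page.setContent('<main-app></main-app>'); // inline fixture instead of a URL
  await page.addScriptTag({ path: 'client/main-app.js', type: 'module' });
  const hasShadow = await page.evaluate(
    () => Boolean(document.querySelector('main-app').shadowRoot)
  );
  expect(hasShadow).toBe(true);
});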
Electron is a platform for building apps. The article I was referencing was using jest-electron, a test runner built with electron. So worrying about electron itself is a red herring; I should be looking at jest-electron.
My main concern right now, I think, is that I need different jest configurations for my three scenarios:
unit tests on the server
unit tests on the client
end to end testing of the complete app.
Given I have only one package.json file and one set of node_modules, I need to figure out a way to have three different jest config files.
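One way to do that without juggling --config flags is jest's projects option, which lets a single repository carry several self-contained configurations. A sketch, with the directory layout and the jest-puppeteer preset as assumptions:

// jest.config.js: one repo, three test configurations (sketch)
module.exports = {
  projects: [
    {
      displayName: 'server',
      testEnvironment: 'node',
      testMatch: ['<rootDir>/server/**/*.test.js'],
    },
    {
      displayName: 'client',
      preset: 'jest-puppeteer', // assumption: client unit tests driven through puppeteer
      testMatch: ['<rootDir>/client/**/*.test.js'],
    },
    {
      displayName: 'e2e',
      preset: 'jest-puppeteer',
      testMatch: ['<rootDir>/e2e/**/*.test.js'],
    },
  ],
};

Running jest then executes all three suites in one go; recent jest versions can also run a single slice with --selectProjects server.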
I have an automation framework that uses static html pages within its project directory to perform certain AWS operations, such as DynamoDB scans and AWS Lambda executions. Due to a performance bottleneck in an API component the tests depend on, we are trying to move the framework to an EC2 instance running Amazon Linux and run the tests from there.
Since we have methods in the TestNG classes that use Selenium WebDriver to spin up a browser and open the static pages in order to perform the required AWS operations, I am pretty sure this test is going to run into issues.
There are two potential approaches I see for solving this issue:
Implement AWSUtil classes and use the necessary AWS clients to replace the web-dependent logic (will require some effort and re-engineering)
Use a headless chrome browser (or any compatible one) in order to run the web-dependent steps
I am pretty sure that number 1 can be achieved, it is just a matter of time and effort. However, I would love to know if there is an easy way of accomplishing #2, since this would not require any code rewrite.
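For reference, if a tiny setup change were acceptable, switching an existing ChromeDriver to headless mode is usually only a few lines; a sketch, not tied to any particular framework:

// sketch: swap this in wherever the framework currently creates a ChromeDriver
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

public class HeadlessDriverFactory {
    public static WebDriver create() {
        ChromeOptions options = new ChromeOptions();
        options.addArguments("--headless", "--disable-gpu"); // headless Chrome is supported from Chrome 59 on Linux
        return new ChromeDriver(options);
    }
}

Everything after driver creation, including loading the static pages, stays untouched.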
We had the same issue and were successful with puppeteer:
https://github.com/GoogleChrome/puppeteer
If you don't want to install the latest version of node, you can dockerize your tests.
Puppeteer can run headless or with a full browser.
Hope it helps.
There is no need to change anything in your tests, just the setup and the execution. The tests can run headlessly on a Continuous Integration (CI) server. This does not work out of the box, since there is no display output for the browser to launch in; however, with Xvfb you can launch the browser virtually. Straight from the docs:
Xvfb (short for X virtual framebuffer) is an in-memory display server for UNIX-like operating system (e.g., Linux). It enables you to run graphical applications without a display
Depending on whether you want to keep Xvfb running in the background until the process is killed, there are two options:
Xvfb :99 &
export DISPLAY=:99
run-your-tests-here
or
xvfb-run run-your-tests-here
Here is a Linux tutorial. I am using this for my Docker-based Jenkins setup and it works like a charm, every time.
I have an app with a ParseServer back-end and an Ionic2 front-end. I need to simulate multiple users to stress-test the back-end.
What load testing tools would you recommend for such a setup?
Thanks.
You need to split your process into 2 phases:
Server-side testing. You need to load test your backend to ensure that it can handle the anticipated number of users. In fact any tool capable of sending HTTP requests will fit; the most popular free and open source load testing solutions are JMeter, Grinder, Gatling, and Tsung. All of them come with record-and-replay functionality, so you will be able to build your test by just interacting with your mobile application while using the load testing tool as a proxy. See the Open Source Load Testing Tools: Which One Should You Use? article for the main features highlighted and compared.
Client-side testing. Even if your server responds very fast, handles enormous loads, is able to scale, etc., your application's user experience may still be poor, as client-side performance also matters. You can go for Chrome Dev Tools Remote Debugging and/or Intel XDK, which is capable of profiling existing applications.
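Before committing to a full tool, you can sanity-check concurrency yourself with a few lines of Node.js (18+, for the built-in fetch). The endpoint, header value and user count below are placeholders:

// smoke-load.js: crude concurrency probe, not a substitute for JMeter/Gatling
const USERS = 50; // simulated users (placeholder)
const URL = 'https://example.com/parse/classes/Item'; // placeholder Parse endpoint

async function oneUser(i) {
  const start = Date.now();
  const res = await fetch(URL, {
    headers: { 'X-Parse-Application-Id': 'YOUR_APP_ID' }, // placeholder credentials
  });
  console.log(`user ${i}: HTTP ${res.status} in ${Date.now() - start} ms`);
}

// fire every simulated user at once and wait for all the responses
Promise.all(Array.from({ length: USERS }, (_, i) => oneUser(i)))
  .then(() => console.log('done'));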
You can try using ZebraTester and record the script for this test. The trial version allows you to have up to 20 Virtual Users and these can run multiple loops depending on the length of your test. The same tool can record the script as well as run the load test from your local machine.
I use JMeter (http://jmeter.apache.org/) to test Parse Server. It is a free tool; you can install it on your computer and then start testing.
So after much hunting I failed to find a continuous testing tool for IntelliJ 14.
I stumbled across a post that describes using Eclipse and Ant in order to simulate this: on save, Ant runs the tests for anything that was modified.
I've tried to replicate this but, alas, I've never used Ant before and am finding it extremely difficult. I've set up and configured a generic Ant build file in IntelliJ but simply cannot figure out how to achieve my task.
Any help or pointers in the right direction are very much appreciated. I've searched, but only found information that needs to be decrypted first.
Eclipse has the builder feature: you create an AntBuilder for your project; see also https://stackoverflow.com/a/15075732/130683.
IntelliJ has a trigger feature that might serve the purpose.
Also, Infinitest, which provides a continuous testing plugin for Eclipse and IntelliJ, might be helpful.
Ant is a build tool. IntelliJ can do builds for you as well, but then you need IntelliJ to do them, which means you can't build and distribute your application without IntelliJ.
Ant uses a dependency matrix for building. This is sometimes difficult for developers to understand, but it basically means that you define the steps and how the steps depend upon each other, then let the build tool figure out exactly how to do its job. Ant is to Java what Make is to C and C++ applications.
Ant uses targets, which are the steps you specify to be done. For example, you might have a target called package that will build your jar or war. That target might depend upon another target called compile to compile the code. That target in turn might depend upon a code generation phase (for example, if you had WSDL files).
Each target is a set of tasks. For example, the compile target is likely to have the <javac> task in it. It might also need the <mkdir> task to create the work directories where your class files are stored.
There are plenty of books on Ant, and there's a tutorial on the Ant Website. You didn't explain the issues you were having, so it's hard to be more specific than this.
Ant can also run your unit tests. There's a <junit> task which can run the tests; you specify whether you want to run almost all of your tests via the <batchtest> sub-element, or a specific test driver via the <test> element.
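To make that concrete, a skeleton test target might look like the following; the property names and directory layout are invented for illustration:

<target name="test" depends="compile">
    <mkdir dir="${reports.dir}"/>
    <junit printsummary="yes" haltonfailure="yes">
        <classpath refid="test.classpath"/>
        <formatter type="xml"/>
        <!-- run every class whose name ends in Test -->
        <batchtest todir="${reports.dir}">
            <fileset dir="${test.src.dir}" includes="**/*Test.java"/>
        </batchtest>
    </junit>
</target>

The XML formatter output is also exactly what Jenkins consumes to draw its test result trends.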
Once you get an Ant script that can build and run your tests outside of IntelliJ, you can now get a Continuous Integration tool like Jenkins. A continuous integration tool watches your repository for changes, and if a change occurs, will then build your application. It's a great way to catch errors early on.
What does this have to do with Continuous Testing? Well, if your Ant script is able to run unit tests, the Continuous Integration engine can not only build your app but also run the unit tests with each and every change that occurs.
Jenkins is nice because it's very simple to use. You download a jenkins.war and you can launch the Jenkins webpage via the java -jar jenkins.war command. This brings up a web server on port 8080 on your machine. Obviously, Jenkins can be configured to run on different ports and under Tomcat if you so desire. It can integrate with Windows Active Directory, LDAP, and many other user verification systems.
Jenkins will show you charts and graphs of your tests, let you know which tests failed or passed, and will notify you of any problems via email, tweets, IM, Jabber, and even Facebook posts. People have even setup a traffic light in their offices that turns red when builds or tests fail.
Take it one step at a time. Get a good book on Ant. Read the tutorial on the Ant website. Then try to get a working Ant script just to build your app. If you are having specific issues, you can ask for help.
Once you have the build going, extend the script to run your unit tests. Once that is done, download Jenkins and try to get that up and running.
Let me begin by saying I'm a ColdFusion newbie.
I'm trying to research whether it's possible to do the following, and what would be the best approach to achieve it.
Whenever a developer checks code into SVN, I would like to get all the new changes/files and do an automatic build to check whether the code can be deployed successfully to the production server. I guess there are two parts to it: first, syntax checking, and second, integration testing (whether functionality is working as expected). For the latter part some unit testing tools would have to be used.
Can someone comment on their experience doing something similar for ColdFusion?
Sorry for being a bit vague... I know it's a very open-ended question, but any feedback would be appreciated.
Thanks
There's a project called "Cloudy With A Chance of Tests" that purports to do what you require. In particular it brings together a number of other CFML code analysis projects (VarScope & QueryParam) to check code, as well as unit testing. I am not currently using it myself but did have a look at it some time ago (more than 12 months) and it appeared to be quite good.
https://github.com/mhenke/Cloudy-With-A-Chance-Of-Tests
Personally I run MXUnit tests in Jenkins using the instructions from the MXUnit site - available here:
http://wiki.mxunit.org/display/default/Continuous+Integration+--+Running+tests+with+Jenkins
Essentially this is set up as an ant task in Jenkins, which executes the MXUnit tests and reports back the results.
We're not doing fully continuous integration, but we have a process which automates some of the drudgery of our builds:
replace the site's application.cf(m|c) with one that tells users that the app is being deployed (we had QA staff raising defects that were due to re-deployments)
read a database manifest XML which lists all the SQL scripts that make up the current release. We concatenate the scripts into a single upgrade script, suitable for shipping
execute the SQL script against the server's DB, noting any errors. The concatenation process also adds a line of SQL after each imported script that writes to a runlog table, so we can see what ran, how long it took and which build it was associated with. If you're looking to replicate this step, take a look at Liquibase
deploy the latest code
make an http call to a ?reset=true type URL to tell the app to re-initialize
execute any tests
The build is requested manually through the build servers we have; you click a button, make tea, and it's done.
We've just extended the above to cope with multiple servers in a cluster and it ticks along nicely. I think the above suggestion of using the Jenkins SVN plugin to automate the process sounds like the way to go.