I have an automation framework that uses static HTML pages within its project directory to perform certain AWS operations, such as DynamoDB scans and AWS Lambda executions. Due to a performance bottleneck in an API component the tests depend on, we are trying to move the framework to an EC2 instance running Amazon Linux and run the tests from there.
Since we have methods in the TestNG class that actually use Selenium WebDriver to spin up a browser and open the static page in order to perform the required AWS operations, I am pretty sure this test is going to run into issues.
There are two potential approaches I see for solving this issue:
Implement AWSUtil classes and use the necessary AWS clients to replace the web-dependent logic (will require some effort and re-engineering).
Use a headless Chrome browser (or any compatible one) to run the web-dependent steps.
I am pretty sure that option 1 can be achieved; it is just a matter of time and effort. However, I would love to know if there is an easy way of accomplishing option 2, since that would not require any code rewrite.
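For reference, I assume option 2 would boil down to something like the following in our existing Java setup (a sketch only, untested; it assumes Chrome and chromedriver are installed on the EC2 instance):

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

public class HeadlessDriverFactory {
    public static WebDriver create() {
        ChromeOptions options = new ChromeOptions();
        options.addArguments("--headless");              // run without a display
        options.addArguments("--no-sandbox");            // often needed when running as root on EC2
        options.addArguments("--disable-gpu");
        options.addArguments("--window-size=1920,1080");
        return new ChromeDriver(options);                // drop-in replacement for the current driver
    }
}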
We ran into the same issue and were successful with Puppeteer:
https://github.com/GoogleChrome/puppeteer
If you don't want to install the latest version of node, you can dockerize your tests.
Puppeteer can run headless or with a full browser.
Hope it helps.
There is no need to change anything in your tests, just the setup and the execution. The tests can run headlessly on a Continuous Integration (CI) server. There is no out-of-the-box setup, since there is no display output for the browser to launch in. However, with Xvfb you can launch the browser on a virtual display. Straight from the docs:
Xvfb (short for X virtual framebuffer) is an in-memory display server for UNIX-like operating systems (e.g., Linux). It enables you to run graphical applications without a display.
Depending on whether you want to keep Xvfb running in the background until the process is killed, there are two options:
Xvfb :99 &
export DISPLAY=:99
run-your-tests-here
or
xvfb-run run-your-tests-here
Here is a Linux tutorial. I am using this for my Docker-based Jenkins setup and it works like a charm, every time.
Can I run my e2e tests developed using Protractor on AWS Device Farm?
I want to do the mobile testing of my project using AWS Device Farm, but I do not really understand whether I can do that or not. I found three threads about this on the AWS forum, but they are old, from 2018:
First forum discussion
Second forum discussion
Third forum discussion
Maybe something has changed since then?
I have Protractor e2e tests written for the desktop browser and want to use them for the mobile browser too.
I will answer this for both mobile browsers and desktop testing.
Mobile Browsers
AWS Device Farm has two execution modes: Standard Mode and Custom Mode.
Standard mode gives you granular reporting, which is useful if you don't generate a report for your tests locally. It splits up the artifacts for each test.
Custom mode gives you execution state and results as close as possible to what you would get locally. It does not give you the granular reporting, which is fine for most users, as you already get reports locally and those will be available on Device Farm as well. Customers are recommended to use custom mode, as it is the mode that is kept most up to date and adds support for the latest frameworks, unless of course they absolutely need granular reporting.
Protractor on Device Farm
It is not officially supported today.
However, Device Farm supports Appium Node.js in custom mode. You get a yaml file in which you can run shell commands on the host machine where the tests will be executed. So in the case of Protractor, you could select this test type (Appium Node.js), install the missing dependencies needed for the tests, start your server, and run your tests.
The points to evaluate: since Device Farm takes your tests as input, you will have to upload a zip file of your tests. I would highly recommend checking the instructions for Node.js tests and following those. Alternatively, you can also download your tests on the fly using the yaml file.
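For illustration, the custom-mode yaml file is roughly of this shape (a sketch only; the exact schema is documented by Device Farm, and the install/run commands here are assumptions for a Protractor setup):

version: 0.1
phases:
  install:
    commands:
      - npm install -g protractor        # assumption: pull in whatever your tests need
  test:
    commands:
      - cd $DEVICEFARM_TEST_PACKAGE_PATH
      - protractor protractor.conf.js    # hypothetical config name
artifacts:
  - $DEVICEFARM_LOG_DIR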
Desktop Browsers
Device Farm has a Selenium grid that you can connect to from your local machine and run your tests against. Chrome and Firefox run on the Windows platform; Safari is not supported today. If you use a Selenium grid on your local machine for your tests, then you should most likely be able to run the same tests using the Selenium grid on Device Farm. Of course, pending validation.
If you need more help on any of these items feel free to reach out to aws-devicefarm-support@amazon.com and I can help you further.
You can test in Chrome with an emulated mobile mode.
You can add "mobileEmulation" in a new protractor.conf-mobile.js:
capabilities: {
  chromeOptions: {
    args: ['--disable-infobars', '--headless', '--disable-gpu', '--window-size=1920,1080'],
    mobileEmulation: { deviceName: 'Galaxy S5' }
  }
}
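Then run the mobile pass against that config:

protractor protractor.conf-mobile.js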
I want to set up a cloud development environment for my personal use.
Requirements:
1. Have a cloud web server (basically any Linux system) serving my Elixir (backend language) app.
2. Connect Sublime Text / Atom to this server (via SFTP, maybe), make code changes, and save. Automatic compilation and other tasks will be taken care of by mix or a task runner.
3. Connectivity to this setup from multiple devices.
Reasons for this setup:
I want to be able to develop from anywhere (office, home, etc.): just configure the IDE and continue working from where I last left off, on any device.
Better productivity and less setup required.
It should be secure as well.
Current solution I have:
Set up a Linux instance with an SFTP server enabled.
Created projects under the root of the SFTP directory.
Ran task runners in those projects to auto-compile, serve, and handle other tasks.
Connected Sublime Text to this SFTP server and started working. On save, it uploads the file to the server.
I can connect another laptop to this server and start working from the last saved state.
This setup has worked fine so far, but if there is a better way to do this, I would love to know.
Since you are using git, you don't need a separate cloud server for syncing your dev environments. The easiest way to meet your needs would be to create a branch in git called workinprogress (for example), and then push and pull to it from your various locations. When you have something you want to publish to the main branch you can do an interactive rebase before merging, which enables you to rewrite the history of your workinprogress branch, squashing and rewording the commit messages as much as you like. Then once you have everything you want on the main branch, you either delete workinprogress and start a new one, or just git checkout workinprogress && git reset --hard master.
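Concretely, the day-to-day loop looks like this (using the branch names from above):

git checkout -b workinprogress        # create the branch once
git push -u origin workinprogress     # publish it so every machine can pull it
# ...commit and push from one location, pull from the next...
git rebase -i master                  # squash/reword before merging
git checkout master && git merge workinprogress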
If you still want to have your Elixir app on a live server somewhere, then you can just pull from Github on that server and get latest updates for the app.
I work from various places too and use this workflow. No problems so far.
I have an app with ParseServer back-end and Ionic2 front-end. I need to simulate multiple users to stress test the back-end.
What load testing tools would you recommend for such a setup?
Thanks.
You need to split your process into two phases:
Server-side testing. You need to load test your backend to ensure that it can handle the anticipated number of users. In fact, any tool capable of sending HTTP requests will fit; the most popular free and open-source load testing solutions are JMeter, Grinder, Gatling, and Tsung. All of them come with record-and-replay functionality, so you will be able to build your test by simply interacting with your mobile application while using the load testing tool as a proxy. See the Open Source Load Testing Tools: Which One Should You Use? article for the main features highlighted and compared. A command-line example of an unattended run follows after this list.
Client-side testing. Even if your server responds very fast, handles enormous load, is able to scale, etc., your application's user experience may still suffer, as client-side performance matters too. You can go for Chrome DevTools Remote Debugging and/or Intel XDK, which is capable of profiling existing applications.
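For the server-side phase, a plan recorded as above can then be run unattended, which is handy on a CI box; with JMeter, for example (file names are placeholders):

jmeter -n -t stress-test.jmx -l results.jtl    # -n non-GUI mode, -t test plan, -l results log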
You can try using ZebraTester and record the script for this test. The trial version allows up to 20 virtual users, and these can run multiple loops depending on the length of your test. The same tool can both record the script and run the load test from your local machine.
I use JMeter (http://jmeter.apache.org/) to test Parse Server. It is a free tool; you can install it on your computer and then start testing.
I was wondering about the best way to deliver private web service instances to many users, so that a user would always be able to connect to their own offline version of a service, just like running a web service from Visual Studio while debugging. I was struggling to set this up in VS2013 even with the many online tutorials, but I am not sure whether it isn't working because it was never supposed to work this way.
I have provided this in-depth explanation of my issue, as I am not sure I am going about this in the right way and would appreciate feedback:
Background:
I have a web service that interfaces with an engine. It deals with the front-end and builds a set of commands for how to make a CAD model. These commands control a third-party CAD package's API. The engine can therefore be seen to have two main functions:
Build the CAD API instructions, which can be saved for later.
Execution, where it catches the instance of the CAD software running on the same computer and builds the model.
The second function is restricted from the general public. Only our in-house users should be able to use it. However, they want an otherwise identical front-end and user experience.
The problem is that if they connect to the same engine as the public, which lives on our main server, then the engine will look for an instance of the CAD package on the same machine as itself, i.e. the server, as stressed in the point above. What should happen is that the engine finds the CAD instance running on the machine the controlling UI is on and uses that as its target. I have spoken to the CAD API support team and they say they do not know how to do that.
And so we get to my solution of installing an offline, standalone copy of my web service on each employee's computer. The front-end will check at the start of the session whether a localhost connection is available. If not, it will use the main address, which takes it to my server. Otherwise it uses the local engine, which performs the default behavior of looking for a CAD package on the same machine as itself. Because the engine is locally installed, that is now the right machine, and it will find the user's CAD instance successfully.
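The localhost check itself would be simple; a sketch of the idea (illustrated in Java here, with a made-up port and paths; the Unity front-end would do the equivalent in C#):

import java.net.HttpURLConnection;
import java.net.URL;

public class EngineLocator {
    // Probe the local engine first; fall back to the main server if nothing answers.
    public static String engineBaseUrl() {
        String local = "http://localhost:8080/engine";   // hypothetical local install
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(local).openConnection();
            conn.setConnectTimeout(500);                 // fail fast if nothing is listening
            conn.connect();
            conn.disconnect();
            return local;
        } catch (Exception e) {
            return "http://mainserver.example.com/engine";   // hypothetical public endpoint
        }
    }
}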
Final points:
The engine cannot be accessed by the UI directly, as I am using Unity3D for the front-end and there are .NET compatibility issues.
I need a completely self-contained version of the software in the future anyway, so eventually I have to deal with having the engine accessed locally.
I ended up using IIS Express. I got the users to install it and then call a batch file installer I made, which sets up the config file and moves my web project to the correct directory.
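From there IIS Express can serve the project directly; for example (path and port are just examples):

iisexpress /path:C:\projects\myservice /port:8080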
I'm using TeamCity 7 as a CI server, and I have to test several web application plugins, mainly written in PHP. I'm familiar with ANT and *Unit, but I have an issue to solve: to properly test a plugin, my idea is the following:
Cleanup the testing environment.
Install a clean copy of the Web Application which will host the plugin.
Install the plugin.
Enable the plugin.
Run the tests.
Obviously, running the tests in an installed environment is the easy part. Most tests are fired by directly calling the plugin classes' methods, yet the framework must be configured, even with minimal settings, to allow calling its bootstrap file and performing its initialization. I tried running the tests in an environment I had prepared manually, and they ran as expected.
The issue is now automating the installation of the standard Web Application and, most importantly, its configuration. The basic steps are the following:
Unzip framework somewhere (done).
Create a Database (done).
Create a Database User and assign it proper privileges (done).
Run the Web Application's setup.
The tricky part is that not all Web Applications implement a command line interface, such as drush for Drupal, hence I came up with two possible ways to complete the installation:
Simulate manual installation via CURL
Take note of the installation steps and the forms that need to be filled.
POST data to each of the forms using CURL.
I tried this method, still manually, with acceptable results. The Web Application gets installed as expected and can be used.
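One step of that simulation looks roughly like this (URL and form fields are hypothetical; the real ones have to be read off the install pages):

curl -c cookies.txt -b cookies.txt \
     -d "db_host=localhost" -d "db_name=myapp" -d "db_user=myapp" \
     "http://localhost/myapp/install.php?step=2"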
However, this requires a web server where the application can run. As far as I know, TeamCity Agents work in their own randomly named directories, and anything "installed" in them cannot be accessed via HTTP requests.
Backup/Restore
Manually pre-install a Web Application and configure its basic settings.
Zip the Application's directory and a backup of its database.
Before running the tests, unzip the directory in Agent's working directory.
Restore the backed-up database. The application will now be "configured".
This method is a bit "rough", but it doesn't require a running web server. Although the Web Application won't be able to serve HTTP requests, that doesn't necessarily matter, as the tests will be run against the plugin's classes.
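Steps 3 and 4 would boil down to a couple of commands in the build script (paths and credentials are placeholders, assuming a MySQL database):

unzip webapp-preconfigured.zip -d "$AGENT_WORK_DIR"
mysql -u testuser -ptestpass myapp_test < webapp-db-backup.sql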
This method has two major drawbacks, though:
Tests involving interaction with the Web Application (e.g. hooks, event handlers, and so on) can't be run.
Since the Web Application and its database are pre-configured, their parameters will be the same at every run. Therefore, it wouldn't be possible to run two Agents at the same time, for example to test two different plugins.
I'm now wondering if there's a better solution, as both the above look less than optimal to me.
Please note that, although I'm using TeamCity, the CI Server itself should not be a big deal, as I'm running everything with ANT. Therefore, any suggestion, even related to another CI Server, will be welcome (I know Hudson, CruiseControl, and BuildMaster; I can adapt a concept easily). Thanks.