I am working on an automation project that includes some XML-RPC calls, since my setup is based on a client-server model.
I want end-to-end code coverage details for the complete project while running the automation tests.
I am wondering if there is any way to determine code coverage across XML-RPC calls. I have tried working with coverage.py, but it works fine for local code only; it seems it cannot go beyond the RPC boundary.
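One thing worth knowing: coverage.py only measures the process it runs in, so to see coverage "beyond the RPC" the server process itself must also be started under coverage (e.g. `coverage run --parallel-mode server.py` on the server side, then `coverage combine` afterwards to merge both data files). A minimal stdlib-only sketch of the client/server pair this applies to (port and function are placeholders, not from the original question):

```python
# coverage.py measures only its own process, so in practice you would run:
#   coverage run --parallel-mode server.py      (server side)
#   coverage run --parallel-mode run_tests.py   (test/client side)
#   coverage combine && coverage report         (merge both data files)
#
# Minimal XML-RPC pair to illustrate the two processes involved.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def add(a, b):
    # Server-side code we want counted in the coverage report.
    return a + b

# Bind to an ephemeral port so the example never conflicts with a real service.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(add)
port = server.server_address[1]

# In real life this loop runs in a separate server process started under coverage;
# here a background thread stands in for it.
threading.Thread(target=server.serve_forever, daemon=True).start()

client = ServerProxy(f"http://127.0.0.1:{port}")
result = client.add(2, 3)
server.shutdown()
print(result)  # -> 5
```

The key point is the `--parallel-mode` flag: each process writes its own `.coverage.*` data file, and `coverage combine` merges them into one end-to-end report.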
I'm trying to create a simple program to have an actual stoplight show red or green based on whether or not my integration tests pass in Jenkins.
Red - No
Green - Yes
I realize this is very vague, but any sort of tutorial that hooks up a physical relay to a light from a Raspberry Pi, driven by a web-hosted variable, should be enough to get me going.
I used this API when I did mine.
Mine was super simple/simplistic. I wrote bash scripts using that API's GPIO CLI tools; they just turn a pin on or off, so I have one script to turn the pin on and one to turn it off.
I used a web server and ran them as CGI scripts. So essentially the RPi runs a web server: you hit one page to run the "on" bash script, and you hit another page to run the "off" bash script.
Here's my project on github. It's a mix of my playing around and the actual code I used to get it to run.
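If you'd rather poll Jenkins from the Pi instead of having Jenkins hit CGI pages, the same idea fits in a few lines of Python: fetch the job status from Jenkins' JSON API and drive the pin accordingly. The Jenkins URL and the `gpio write` command below are assumptions based on the CLI approach above; adapt them to your own setup:

```python
# Sketch: poll Jenkins' JSON API and decide which colour the stoplight
# should show. URL, pin number, and gpio CLI invocation are hypothetical.
import json
import subprocess
from urllib.request import urlopen

JENKINS_URL = "http://jenkins.example.com/job/integration/lastBuild/api/json"

def light_for(build_json: str) -> str:
    """Map a Jenkins lastBuild JSON payload to a light colour."""
    result = json.loads(build_json).get("result")
    return "green" if result == "SUCCESS" else "red"

def set_light(colour: str) -> None:
    # Shell out to the same GPIO CLI the bash scripts use:
    # pin high for green, low for red (pin 0 is a placeholder).
    subprocess.run(["gpio", "write", "0", "1" if colour == "green" else "0"])

# Usage on the Pi (needs network access to Jenkins, so left commented here):
#   with urlopen(JENKINS_URL) as resp:
#       set_light(light_for(resp.read().decode()))
```

Run it from cron every minute or so and the light tracks the last build without any webhook plumbing.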
I recently added support for subscriptions via WebSockets to my React+Redux+ApolloGraphQL application. Since doing that I cannot run my unit tests anymore.
The problem I come up with is:
Unable to find native implementation, or alternative implementation for WebSocket!
It seems there is no WebSocket implementation available in the Node environment where I run my tests.
There are a couple of proposed solutions in this bug report that involve using the 'ws' library for Node, but I cannot make them work: either I cannot load the library properly, or I can, but then the application doesn't work.
I was thinking of a way to load the library depending on the environment I'm on but:
I don't know how to do that
I don't know if it's the right approach.
Any suggestions?
How are you running your tests? In my case, I'm using Jest, so I've put this code inside my setupTests file (which is executed before all of your tests run):
import WebSocket from 'ws';
Object.assign(global, {
WebSocket,
});
This is necessary because, for tests that aren't running in a browser, there is no native WebSocket implementation.
Hope you can help with my problem. As of now, I can only test transmissions via SoapUI with single requests, and the thing is I have a lot of data to transmit, so it would be too problematic and laborious to do the tests one by one.
Is there a way in SoapUI to perform multiple requests at once? I appreciate your feedback and input as always. Thanks!
I would recommend using Ready! API / SoapUI NG Pro with a DataSource test step and a DataSource Loop. The Data Driven Sample Project (available on the starter page of Ready! API) contains a minimal working sample to get you up and running.
Disclaimer: I'm working for the company developing SoapUI and might be biased on the greatness of the Pro version.
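If the Pro version isn't an option, the same data-driven loop is easy to script outside SoapUI: read rows from a data file and POST one SOAP envelope per row. This is a sketch only; the endpoint URL, SOAPAction header, and field names below are invented for illustration:

```python
# Data-driven SOAP requests without SoapUI: one envelope per CSV row.
# Endpoint, action, and element names are hypothetical placeholders.
import csv
import io
from urllib.request import Request, urlopen

ENVELOPE = """<?xml version="1.0"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <TransmitRecord>
      <Id>{id}</Id>
      <Amount>{amount}</Amount>
    </TransmitRecord>
  </soapenv:Body>
</soapenv:Envelope>"""

def build_requests(csv_text: str) -> list:
    """Build one SOAP envelope per CSV row (columns: id, amount)."""
    return [ENVELOPE.format(**row) for row in csv.DictReader(io.StringIO(csv_text))]

def send(envelope: str) -> None:
    req = Request(
        "http://example.com/service",               # hypothetical endpoint
        data=envelope.encode(),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "TransmitRecord"},    # hypothetical action
    )
    urlopen(req)  # add error handling / retries as your service requires

envelopes = build_requests("id,amount\n1,10.00\n2,25.50\n")
# for env in envelopes: send(env)   # uncomment against a real endpoint
```

You lose SoapUI's assertions and reporting this way, but for bulk transmission of a large data set a plain loop is often all that's needed.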
I am currently working on a web app, deployed as a .war file on the production environment. The server is Tomcat 7.
Since the code is written in Java, there are occasional log statements on the server side.
If I am given an issue to resolve, I do not have a duplicate data set (or a subset of the data) like production has, so the problem I face is replicating the scenario when testing.
Since this is the production environment, attaching a remote debugger would halt the functionality while I step through breakpoints, so I am not able to debug remotely.
So, currently, the only visibility I have into the behavior of the system is the code base and the log statements in the code.
How do you suggest I debug the server-side code without restarting the application?
Any insight in this matter would be appreciated.
Thank you.
As you cannot attach a remote debugger in production, there is no way to debug your code other than reading the logs and matching them against line numbers in your code.
1) You could ask for the sample scenarios which caused the issue and try to reproduce it in your development environment with a debugger attached.
2) You could fine-tune the log level to DEBUG to capture more useful logs.
3) Check the various logs: application log, server log, access log, etc.
4) You could ask for the test data which caused the issue and try the same in the dev environment.
Ideally you need a mock production environment with a minimum of mock data, so that you can attach your debugger and step into the code.
Otherwise reproducing production issues is time-consuming, and sometimes impossible.
As the headline says, how would you test a client/server application, written in C/C++, that talks through a protocol over a network? I'm a bit confused about how to do this. I have thought about mocking, but I have never tried it, so I don't know if it is the best way.
How should I do this? I have written many unit tests, but never tried to test something that interacts over a network.
I use the command pattern in the unit test driver (client) to send test commands to the server. The advantage of this is that the test is coded in one place.
Example for testing a request timeout:
The client sends a sleep command to the server and then the request. The request times out and the test case passes.
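The same scheme can be sketched compactly (here in Python rather than C/C++, and with an invented line-based wire protocol): the driver sends a test command that makes the server delay, then the real request, and asserts that the client-side timeout fires.

```python
# Command-pattern test driver sketch: a "SLEEP" test command delays the
# server's reply so the subsequent request times out. Protocol is invented.
import socket
import threading
import time

def server(listener: socket.socket) -> None:
    conn, _ = listener.accept()
    delay = 0.0
    with conn:
        for line in conn.makefile():
            cmd, _, arg = line.strip().partition(" ")
            if cmd == "SLEEP":          # test command: delay the next reply
                delay = float(arg)
            elif cmd == "ECHO":         # ordinary request
                time.sleep(delay)
                conn.sendall((arg + "\n").encode())

listener = socket.create_server(("127.0.0.1", 0))
threading.Thread(target=server, args=(listener,), daemon=True).start()

# Client with a 0.2s timeout: the SLEEP command forces a 1s server delay,
# so the ECHO request must time out.
client = socket.create_connection(listener.getsockname(), timeout=0.2)
client.sendall(b"SLEEP 1\nECHO hello\n")
try:
    client.recv(1024)
    timed_out = False
except socket.timeout:
    timed_out = True
print(timed_out)  # -> True
```

The advantage carries over from the description above: the failure injection ("SLEEP") lives in the test script itself, not in conditionally compiled server code.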
Typically you'll want to use mocking to verify that each side reacts as required to messages from the other side (e.g., that when the client receives a response from the server that it processes that response correctly).
To test the network functionality itself, you can test both running on the same machine, and you can run one (or both) inside a virtual machine. If you have two network adapters, you can even dedicate each to a virtual machine so the network traffic actually goes out one, to a switch/router, and comes back in the other (particularly useful when/if you want to capture and verify packets).
I have some client/server code that I unit test through the loopback address. I use some mocks when I have to test error conditions. So, I test with the real code when I can and the mocks when I need to trigger very specific conditions.
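To make the "real code over loopback, mocks for error conditions" split concrete, here is a sketch (in Python rather than C/C++; `fetch` is a stand-in for a real client routine, not code from the original answer). The loopback path exercises the genuine socket code; the mock triggers a connection reset that loopback rarely produces on demand:

```python
# Loopback for the happy path, a mock for the error path.
import socket
from unittest import mock

def fetch(sock) -> bytes:
    """Toy client routine: return received data, or b"" if the peer resets."""
    try:
        return sock.recv(1024)
    except ConnectionResetError:
        return b""

# Real code path over a local socket pair.
a, b = socket.socketpair()
b.sendall(b"ok")
assert fetch(a) == b"ok"
a.close(); b.close()

# Mocked path: force the specific error condition we want to verify.
broken = mock.Mock(spec=socket.socket)
broken.recv.side_effect = ConnectionResetError
result = fetch(broken)
print(result)  # -> b''
```

The same division works in C/C++ with an injectable socket interface: link the real implementation for loopback tests and a fake that returns `ECONNRESET` for the error-condition tests.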