Is there a way to run a test (or an entire test suite) inside an iframe?
Specifically, I have to test some JS functions that check whether they are executed inside a cross-origin frame or not.
The folder structure of my project is
base_dir
|__folder1
|__main.py
|__main_test.py
|__folder2
.
.
I have unit tests written in main_test.py. main.py uses a Pub/Sub client, which is defined at the top of the file as a module-level object. For testing I need to mock this object.
I used the mock library to mock the client and it works fine locally (the client is successfully mocked). But when I run the same tests in GitHub Actions, they fail. I am using nosetests to run the tests in GitHub Actions; it loads all the files together, which results in the Pub/Sub client being created.
I tried moving the import statements into the test itself, but that didn't help.
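For context, a minimal sketch of the kind of module-level mock described above; the module path (folder1.main), the attribute name (pubsub_client), and the function publish_message are assumptions, not names from the actual project:

import unittest
from unittest import mock

class MainTest(unittest.TestCase):
    # Replace the module-level client for the duration of the test so that
    # nothing in folder1.main talks to the real Pub/Sub service.
    @mock.patch("folder1.main.pubsub_client")
    def test_publish(self, mock_client):
        from folder1 import main
        main.publish_message("hello")             # hypothetical function under test
        mock_client.publish.assert_called_once()  # the mocked client received the call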
I have an Azure Function which is triggered by an Azure Service Bus Queue.
The function is below.
How can this Run method be unit tested?
And how can an integration test be done, starting from the AddContact trigger, checking the logic in the method and verifying the data sent to a blob via the output binding?
public static class AddContactFunction
{
    [FunctionName("AddContactFunction")]
    public static void Run([ServiceBusTrigger("AddContact", Connection = "AddContactFunctionConnectionString")] string myQueueItem, ILogger log)
    {
        log.LogInformation($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
    }
}
I had the exact same doubts.
Adding unit tests is not too complicated; at the end of the day it's a function, so all we have to do is call the Azure Function with the correct string for the parameter string myQueueItem.
Adding integration tests needs some additional groundwork. In the GitHub project, the author uses the TestFunctionHost class from the Azure/azure-functions-host project.
I tried following this strategy, but the amount of code needed to set all this up is uncomfortably high for my liking. Not much of it is well documented, and some of it requires developers to use the Azure App Service MyGet feed.
I wanted a simpler approach, and thankfully I found one.
Azure Functions is built on top of the Azure WebJobs SDK and leverages its JobHost class to run. So in our integration tests, all we need to do is set up this host and tell it where to look for the Azure Functions to load and run.
IHost host = new HostBuilder()
    .ConfigureWebJobs()
    .ConfigureDefaultTestHost<CLASS_CONTAINING_THE_AZURE_FUNCTIONS>(webjobsBuilder => {
        webjobsBuilder.AddAzureStorage();
        webjobsBuilder.AddServiceBus();
    })
    .ConfigureServices(services => {
        services.AddSingleton<INameResolver>(resolver);
    })
    .Build();

using (host) {
    await host.StartAsync();
    // ..
}
...
Once this is done, we can send messages to Service Bus and our Azure Functions will get triggered. One can even set breakpoints in the functions under test and debug issues!
I have blogged about the whole process here, and I have also created a GitHub repository at this link to showcase test-driven development with Azure Functions.
How can this Run method be unit tested?
The method is a public static method. You can unit test it by invoking the static method AddContactFunction.Run(/* parameters */). You will not need a Service Bus namespace, or a message for that matter, as your function expects to receive a string from the SDK, which you can provide yourself to verify that the logic works as expected.
And how can an integration test be done, starting from the AddContact trigger, checking the logic in the method and verifying the data sent to a blob via the output binding?
This would be a much more sophisticated scenario. It would require running the Functions runtime and generating a real Service Bus message to trigger the function, as well as validating that the blob was written. There is no integration/end-to-end testing framework shipped with Functions, so you'd need to come up with something custom. Azure Functions Core Tools could be helpful for that.
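Purely as an illustration of what such a custom check could look like, here is a rough Python sketch. It assumes the function is actually running (locally via Core Tools or deployed), that it writes to a hypothetical "contacts" output container, and that the queue name and connection strings shown are placeholders:

import time
from azure.servicebus import ServiceBusClient, ServiceBusMessage
from azure.storage.blob import BlobServiceClient

SERVICEBUS_CONN = "<service-bus-connection-string>"   # placeholder
STORAGE_CONN = "<storage-connection-string>"          # placeholder

def test_add_contact_end_to_end():
    # 1. Drop a message on the queue that triggers the function.
    with ServiceBusClient.from_connection_string(SERVICEBUS_CONN) as sb:
        with sb.get_queue_sender(queue_name="AddContact") as sender:
            sender.send_messages(ServiceBusMessage('{"name": "test"}'))

    # 2. Poll the assumed output container until the expected blob shows up.
    blobs = BlobServiceClient.from_connection_string(STORAGE_CONN)
    container = blobs.get_container_client("contacts")
    for _ in range(30):
        names = [b.name for b in container.list_blobs()]
        if any("test" in n for n in names):
            return  # blob was written, so the function ran end to end
        time.sleep(2)
    raise AssertionError("Function did not write the expected blob")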
I am using Django 1.8 and I have a management command that geocodes some items in my database, which requires an internet connection.
I have written a test for this management command. However, the test runs the script, so it also requires an internet connection.
After pushing the test to GitHub, my CI is broken, because Travis doesn't have an outside internet connection so it fails on this test.
I want to keep this test, and I'd like to continue to include it in python manage.py test when run locally.
However, is there a way I can explicitly tell Travis not to bother with this particular test?
Alternatively, is there some other clean way that I can keep this test as part of my main test suite, but stop it breaking Travis?
Maybe you could decorate your test with @unittest.skipIf(condition, reason) to check for the presence of a Travis-CI-specific environment variable and skip the test accordingly. For example:
import os
import unittest
...
@unittest.skipIf("TRAVIS" in os.environ and os.environ["TRAVIS"] == "true", "Skipping this test on Travis CI.")
def test_example(self):
    ...
If the external resource is an HTTP endpoint, you should consider using vcrpy to record and replay the HTTP requests/responses.
This way you can continue running the same test suite in different environments. It'll also speed this test up.
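For example, a minimal sketch with vcrpy (the cassette path and the management command name are made up here):

import vcr
from django.core.management import call_command
from django.test import TestCase

class GeocodeCommandTest(TestCase):
    # The first run records the real HTTP traffic into the cassette file;
    # later runs (including on CI) replay it without a network connection.
    @vcr.use_cassette("fixtures/vcr/geocode.yaml")
    def test_geocode_command(self):
        call_command("geocode_items")  # hypothetical command name
        # ...assertions on the geocoded items would go here...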
I have a Portal application running on one port, http://localhost:10039. I am trying to unit test individual Ember.js applications, which are loaded into the Portal app via portlets.
What I'd like to be able to do is have those QUnit tests run against the full application, which is running on that other port I mentioned. However, Karma seems to not be fond of running the test suite on a port that isn't the same one on which the application is running.
For example:
test('Page loads in browser', function() {
    visit('/login').then(function() {
        ok(exists('#login-form'), 'Page loaded successfully');
    });
});
... launches Karma successfully on port 9876, but yields...
Page loaded successfully# 42 ms
Expected: true
Result: false
Diff: true false
Source:
at http://localhost:9876/absolute/Users/me/Sites/app/node_modules/qunitjs/qunit/qunit.js:1933:13
at http://localhost:9876/base/tests/unit-tests.js:9:8
at isolate (http://localhost:9876/base/bower_components/ember/ember.js:36720:15)
at http://localhost:9876/base/bower_components/ember/ember.js:36703:16
at tryCatch (http://localhost:9876/base/bower_components/ember/ember.js:45817:16)
at invokeCallback (http://localhost:9876/base/bower_components/ember/ember.js:45829:17)
Is it possible to run my test suite on, say, http://localhost:9876, and have it run its tests against another website/port http://localhost:10039?
The closest I could come to an answer was Karma proxies, though the proxy seems to have no effect. Karma is still running its tests against links relative to its own port 9876.
I would like to add that I am open to other testing frameworks (Jasmine, Mocha, etc.) if this can only be done elsewhere.
Thanks!
Karma is intended for running unit tests, so the code will be loaded in the Karma client (localhost:9876) and the test cases executed there.
If you are planning to run end-to-end tests against your portal application, you could look into alternatives like Selenium. In fact, your test above (checking that the page loads successfully) is a good fit for Selenium.
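For illustration, a rough Selenium equivalent of that page-load check using the Python bindings; the URL and the #login-form ID come from the question, the rest is an assumption:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def test_login_page_loads():
    driver = webdriver.Chrome()  # any WebDriver would do
    try:
        # Hit the real portal on its own port, not the test runner's port.
        driver.get("http://localhost:10039/login")
        # Wait until the login form shows up: the same check the QUnit
        # test above performs with exists('#login-form').
        WebDriverWait(driver, 10).until(
            EC.presence_of_element_located((By.ID, "login-form"))
        )
    finally:
        driver.quit()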
I have a long-running unit test job in Hudson. If some tests fail, I want to run them first, rather than waiting for the other tests to run before them (to see whether I have fixed them or not).
Is it possible to setup this in Hudson?
Thanks.
I had the same issue before; here is my solution.
1. Write a standalone program that runs a list of unit test cases. (In my case, I wrote a Java main class to run JUnit manually.)
2. Create a job that can be started with "Trigger builds remotely" and that accepts the list of tests from the URL.
3. Use Selenium to grab the failed tests from Hudson's "Test Result" page.
4. Use Selenium to trigger the job via "Trigger builds remotely" with the failure list (see the sketch below).
By the way, you can also send a mail with the result when the rerun fails, so you only need to check your mail to see whether a failure is real.
Note that Selenium is not necessary if you have another way to do these steps.
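Since Selenium is not strictly required for the trigger step, here is a minimal Python sketch of it using plain HTTP. The Hudson URL, job name, token, and parameter name are all placeholders, and it assumes a parameterized job on a Hudson/Jenkins version that supports the buildWithParameters endpoint:

import requests

HUDSON_URL = "http://hudson.example.com"   # placeholder
JOB_NAME = "rerun-failed-tests"            # hypothetical parameterized job
TOKEN = "my-remote-trigger-token"          # the job's "Trigger builds remotely" token

def rerun_failed_tests(failed_tests):
    # Start the parameterized job remotely, passing the failed test names
    # as a single comma-separated parameter.
    response = requests.post(
        f"{HUDSON_URL}/job/{JOB_NAME}/buildWithParameters",
        params={"token": TOKEN, "FAILED_TESTS": ",".join(failed_tests)},
    )
    response.raise_for_status()

rerun_failed_tests(["com.example.FooTest", "com.example.BarTest"])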
I don't think it's possible in Hudson, but if you're using Eclipse (sorry, I'm assuming you're using Java), you can run the tests and then re-run them using 'Rerun Test - Failures First'.