I have a Rails app which has an RSpec feature spec with Selenium that always passes locally and periodically fails on Travis. It fails on click_link("my link") with a Net::ReadTimeout error. The stack trace isn't all that helpful, and it'd be nice if there were a way to tail the log (tail -f log/test.log) to see if that's helpful, or at least to view the log output. Is this possible using Travis CI? I'm already waiting for AJAX to finish, which suggests something external, so ultimately I'm trying to find out which request it's getting hung up on.
I believe you can cat the logs to the console as the last step of your test task, or you can use Travis' artifacts addon to get them uploaded to S3; see http://docs.travis-ci.com/user/uploading-artifacts/
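For example, a minimal .travis.yml sketch (the log path assumes a standard Rails layout, and the artifacts addon additionally needs S3 credentials configured):

    # Print the test log to the console whenever the build fails.
    after_failure:
      - cat log/test.log

    # Or upload the log to S3 via the artifacts addon.
    addons:
      artifacts:
        paths:
          - log/test.log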
I have just tried to start my amplify mock service and received the following error:
InternalFailure: The request processing has failed because of an unknown error, exception or failure.
This previously worked a few hours ago, with no resets or other changes since.
To work around it, I have had some success with removing amplify completely and running amplify init & amplify add api, but that means I lose my local data each time, and the error has recurred randomly multiple times in the last few hours.
Here is the full log while the error is taking place:
hutber#hutber:/var/www/unsal.co.uk$ amplify mock
GraphQL schema compiled successfully.
Edit your schema at /var/www/unsal.co.uk/amplify/backend/api/unsalcouk/schema.graphql or place .graphql files in a directory at /var/www/unsal.co.uk/amplify/backend/api/unsalcouk/schema
Failed to start API Mock endpoint InternalFailure
The problem probably comes from the SQLite file used for the mock (a lock issue, I guess). Delete the your_db_file.db file in the mock-data/dynamodb folder and execute amplify mock api again. The file recreates itself correctly. This avoids resetting the whole amplify project.
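A minimal sketch of the workaround (the .db filename is a placeholder; use whatever file sits in your mock-data/dynamodb folder):

    # Remove the mock DynamoDB SQLite file, then restart the mock API.
    rm amplify/mock-data/dynamodb/your_db_file.db
    amplify mock api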
I'm running on Windows 10 with the latest version, 7.21.1. I imported the example collection featured in the documentation: https://learning.postman.com/docs/postman/collection-runs/building-workflows/. I run Request 1 in the collection through the Runner, and Request 4 does not trigger as it seems it should. It is set up in Tests and seems correct based on the documentation. I first tried this with my own collection, and when it was not working I tried this sample and realized there was something else amiss.
Any assistance would be helpful! I can provide more information if needed.
This is what the example collection has in Request 1 under Tests: postman.setNextRequest('Request 4');
Thanks!
I figured this out. I didn't understand how the operation worked or the flow of the Runner. Every request that could be called next needs to be selected in the Runner. The operation works like a goto: after jumping to the target request, all requests following it also run in order. Closing this out.
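For reference, a small sketch of the Tests scripts involved (request names follow the sample collection):

    // In Request 1's Tests tab: jump straight to Request 4.
    // Request 4, and anything meant to run after it, must be selected in the Runner.
    postman.setNextRequest('Request 4');

    // In the final request's Tests tab you can end the run explicitly:
    // postman.setNextRequest(null);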
I am trying to run my Postman collection with the Runner option. While running the collection I get a "Data unavailable" error message and my script stops right there.
Can anyone please guide me on this?
This is an ongoing issue on Postman's end. As a workaround, you can delete your history and then run your collection again.
I am using Django 1.8 and I have a management command that geocodes some items in my database, which requires an internet connection.
I have written a test for this management command. However, the test runs the script, so it also requires an internet connection.
After pushing the test to GitHub, my CI is broken, because Travis doesn't have an outside internet connection so it fails on this test.
I want to keep this test, and I'd like to continue to include it in python manage.py test when run locally.
However, is there a way I can explicitly tell Travis not to bother with this particular test?
Alternatively, is there some other clean way that I can keep this test as part of my main test suite, but stop it breaking Travis?
Maybe you could decorate your test with @unittest.skipIf(condition, reason), testing for the presence of a Travis-CI-specific environment variable to decide whether to skip it (Travis CI sets TRAVIS=true in its build environment). For example:
import os
import unittest
...
# Skip this test when running under Travis CI, which sets TRAVIS=true.
@unittest.skipIf("TRAVIS" in os.environ and os.environ["TRAVIS"] == "true", "Skipping this test on Travis CI.")
def test_example(self):
    ...
If the external resource is an HTTP endpoint, you should consider using vcrpy to record and replay the HTTP requests/responses.
That way you can continue running the same test suite in different environments, and it will also speed this test up.
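A minimal sketch (the cassette path, test name, and management command name here are hypothetical):

    import vcr
    from django.core.management import call_command

    @vcr.use_cassette("fixtures/cassettes/geocode.yaml")
    def test_geocode_command(self):
        # The first local run records the real HTTP traffic into the cassette;
        # later runs (including on Travis) replay it without any network access.
        call_command("geocode_items")  # hypothetical command name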
I get the following error: Service Invocation Exception. I am working with IBM InfoSphere DataStage and QualityStage Designer version 8.7, using a server job in which I have a sequential file stage, a web service stage, and another sequential file stage.
Any idea what could be the reason for this error?
Make sure you have chosen the proper DataStage job type and that the stage that operates on the web service is configured properly.
You should also check the DataStage logs to get more information about the root cause of the error.