How to fail the seed job when using the Jenkins Job DSL plug-in - jenkins-job-dsl

I'm using the Jenkins Job DSL plug-in and configured a seed job that is a parameterized build. I would like to fail the build if someone forgets to fill in one of the required parameters. I have the following at the top of my DSL script:
def expectedParams = [
    'BRANCH_NAME',
    'FALLBACK_BRANCH',
    'FOLDER_NAME',
    'FOLDER_DISPLAYNAME',
    'MAIL_TO'
];
boolean envChecksPass = true;
expectedParams.each {
    if (! binding.variables.get(it)?.trim()) {
        println "This script expects the $it environment variable to be set."
        envChecksPass = false;
    }
}
if (! envChecksPass) {
    // TODO: SET THE JOB STATUS TO FAILED
    return false;
}
How do I complete the TODO bit? Obviously I can throw an Exception here, but it seems a bit ugly. What is the preferred/best-practice way?

Throwing an exception is currently the preferred way. If you throw a javaposse.jobdsl.dsl.DslException, the stack trace will be suppressed and only the message will be shown.
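For example, a minimal sketch of the check above that throws a DslException listing the offending parameters (it reuses the expectedParams list from the question; the message text is illustrative):

import javaposse.jobdsl.dsl.DslException

def missing = expectedParams.findAll { !binding.variables.get(it)?.trim() }
if (missing) {
    // Fails the seed job; only the message is printed, the stack trace is suppressed.
    throw new DslException("This script expects the following parameters to be set: ${missing.join(', ')}")
}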

Another way could be to exit with a non-zero code, which will mark the job result as a FAILURE:
if (! envChecksPass) {
    // TODO: SET THE JOB STATUS TO FAILED
    exit 1;
}

In Jenkins Pipelines it is better to use the 'error' step:
if (! binding.variables.get(it)?.trim()) {
    error "This script expects the $it environment variable to be set."
}
You will see the following output in the log (without a stack trace):
ERROR: This script expects the USER environment variable to be set.
Finished: FAILURE

Related

Apache Beam writing status information after BQ writes are done within the dataflow

I am struggling to find a good solution for writing the status of BQ writes right after they are done.
Each Dataflow job has to process one file, and if no errors occurred, the status should be written to Firestore.
I have code that looks like this:
PCollection<TableRow> failedInserts = results.getFailedInserts();
failedInserts
    .apply("Set Global Window",
        Window.<TableRow>into(new GlobalWindows()))
    .apply("Count failures", Count.globally())
    .apply(ParDo.of(new DoFn<Long, ReportStatusInfo>() {
        @ProcessElement
        public void processElement(final ProcessContext c) throws IOException {
            Long errorNumbers = c.element();
            if (errorNumbers > 1) {
                // set status to failed
            } else if (errorNumbers == 0) {
                // set status to ok
            }
            insert();
        }
    }))
It does not seem to work correctly, as I have the impression that it does not wait for the whole BQ writing process to finish.
Any other ideas on how to solve my problem in Dataflow, or why the above does not work?
The getFailedInserts method is only supported when using streaming inserts, as opposed to file loads. In that mode, your code will do what you want.
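A minimal sketch of the write configuration that enables this, assuming a PCollection<TableRow> named rows and an illustrative table spec (the retry policy shown is just one common choice):

import com.google.api.services.bigquery.model.TableRow;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.InsertRetryPolicy;
import org.apache.beam.sdk.io.gcp.bigquery.WriteResult;
import org.apache.beam.sdk.values.PCollection;

WriteResult results = rows.apply("Write to BQ",
    BigQueryIO.writeTableRows()
        .to("my-project:my_dataset.my_table")                    // hypothetical table spec
        .withMethod(BigQueryIO.Write.Method.STREAMING_INSERTS)   // required for getFailedInserts()
        .withFailedInsertRetryPolicy(InsertRetryPolicy.retryTransientErrors()));

// Only populated when the streaming-inserts method is used; with file loads it stays empty.
PCollection<TableRow> failedInserts = results.getFailedInserts();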

Informatica workflow fails on failure of command task with UNCHECKED fail parent if task fails property

I have a command task to call a batch file which returns 1 if File.Ok does not exist and 0 if File.Ok exists in a particular location. Following this command task I have 2 links:
link 1: $commandtask.status = succeeded
link 2: $commandtask.status = failed
After each of these links there are several session and other tasks.
PROBLEM: Whenever File.Ok is not found, Link 2 is executed, followed by the tasks/sessions of this branch (as desired and expected), but after executing all remaining items the workflow is marked as failed.
Note: I have not checked the 'Fail Parent if task fails' property anywhere.
You might have checked the "Fail parent if task does not run" property in some task. If a task with this property checked does not run, it fails the workflow.

Robot Framework Multiple Statements in If Condition

I am new to Robot Framework and am trying to figure out how to have multiple statements associated with an If condition.
The basic pre-code counts entries in an array WORDS, and assigns the value to length.
${length}=    Get Length    ${WORDS}
Next I want to do a simple check for the size of the array (should have 10 items).
${status}=    Set Variable If    ${length} == 10    TRUE    FALSE
Now I want to log the result and pass or fail the test case.
Run Keyword If    $status == "TRUE"
...    Log    ${status}
...    ELSE
...    Log    ${status}
Run Keyword If    $status != "TRUE"
...    FAIL    Values do not match
I want to combine the ELSE + logging the status + failing the test case, but cannot seem to figure it out. In C programming, I would simply brace the statements between { } and be OK.
In Robot Framework, I have tried 'Run keywords' but with no luck.
...    ELSE    Run keywords
...    Log    ${status}
...    FAIL    Values Do Not Match
I hope there is a way to accomplish this. Appreciate any input.
Using "Run keywords" is the correct solution. However, in order to run multiple keywords you must separate keywords with a literal AND:
...    ELSE    run keywords
...    log    ${status}
...    AND    FAIL    Values Do Not Match
You can create a keyword that wraps the Log and Fail keywords and takes the status as an argument, then just use that keyword after the ELSE statement (a sketch follows below).
Or use Run Keywords as you did, but add an AND before FAIL (read more here: http://robotframework.org/robotframework/latest/libraries/BuiltIn.html#Run%20Keywords).
Or consider adding ${status} to the Fail message: Fail    Values Do Not Match. Status: ${status}
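For the first option, a user keyword along these lines could work (the keyword name Log Status And Fail and its message are only illustrative), called after ELSE in place of the separate Log and FAIL lines:

*** Keywords ***
Log Status And Fail
    [Arguments]    ${status}
    Log    ${status}
    Fail    Values Do Not Match. Status: ${status}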
Set Test Variable    ${temp}    rxu
Run Keyword If    '${temp}'=='rxu'
...    Run Keywords
...    Log To Console    this is one
...    AND    Log To Console    This is two
...    ELSE    Run Keyword    Log To Console    another block

Rolling back a deletion to handle a server error in Ember.js

We have an Ember.js application which uses Ember Data. We are trying to do the following:
Delete a record.
If there is a server error (due to the fact that the application can have a "locked" state where records cannot be deleted), roll the record back to its previous state, prompt the user to unlock the app, and continue.
If there is no server error, continue as normal.
We have found that this does not work:
object.destroyRecord().then ->
  # handle success
, (reason) ->
  object.rollback()
  # prompt for the unlock
In both cases, we see an error that looks like:
Error: Assertion Failed: calling set on destroyed object
But it isn't clear how to remove the isDestroyed state once it has been set.
In general, it seems that, in either case, once we call destroyRecord, there is no way to roll the record back to its pre-deleted state, even if there is a server error.
Try deleteRecord, followed by save. The docs explicitly state that this allows you to roll back on error.
object.deleteRecord()
object.save().then( ->
  # handle success
, (reason) ->
  object.rollback()
)
I've found that you need to put the rollback call in the becameError() function.
// Overwrite the default destroyRecord
destroyRecord: function () {
    this.deleteRecord();
    this.save().then(
        function () {
            // Success
        },
        function () {
            // Failure
        }
    );
},
becameError: function (item) {
    this.rollback();
}
The item will disappear from views until the server returns the error and then magically reappear.

Unit-testing a simple usage of RACSignal with RACSubject

(I may be using this in a totally incorrect manner, so feel free to challenge the premise of this post.)
I have a small RACTest app (sound familiar?) that I'm trying to unit test. I'd like to test MPSTicker, one of the most ReactiveCocoa-based components. It has a signal that sends a value once per second that accumulates, iff an accumulation flag is set to YES. I added an initializer to take a custom signal for its incrementing signal, rather than being only timer-based.
I wanted to unit test a couple of behaviours of MPSTicker:
Verify that its accumulation signal increments properly (i.e. monotonically increases) when accumulation is enabled and the input incrementing signal sends a new value.
Verify that it sends the same value (and not an incremented value) when the input signal sends a value.
I've added a test that uses the built-in timer to test the first increment, and it works as I expected (though I'm seeking advice on improving the goofy RACSequence initialization I did to get a signal with the @(1) value I wanted.)
I've had a very difficult time figuring out what input signal I can provide to MPSTicker that I can manually send values to. I'm envisioning a test like:
<set up ticker>
<send a tick value>
<verify accumulated value is 1>
<send another value>
<verify accumulated value is 2>
I tried using a RACSubject so I can use sendNext: to push in values as I see fit, but it's not working like I expect. Here's two broken tests:
- (void)testManualTimerTheFirst
{
    // Create a custom tick with one value to send.
    RACSubject *controlledSignal = [RACSubject subject];
    MPSTicker *ticker = [[MPSTicker alloc] initWithTickSource:controlledSignal];
    [ticker.accumulateSignal subscribeNext:^(id x) {
        NSLog(@"%s value is %@", __func__, x);
    }];
    [controlledSignal sendNext:@(2)];
}
- (void)testManualTimerTheSecond
{
    // Create a custom tick with one value to send.
    RACSubject *controlledSignal = [RACSubject subject];
    MPSTicker *ticker = [[MPSTicker alloc] initWithTickSource:controlledSignal];
    BOOL success = NO;
    NSError *error = nil;
    id value = [ticker.accumulateSignal asynchronousFirstOrDefault:nil success:&success error:&error];
    if (!success) {
        XCTAssertTrue(success, @"Signal failed to return a value. Error: %@", error);
    } else {
        XCTAssertNotNil(value, @"Signal returned a nil value.");
        XCTAssertEqualObjects(@(1), value, @"Signal returned an unexpected value.");
    }
    // Send a value.
    [controlledSignal sendNext:@(1)];
}
In testManualTimerTheFirst, I never see any value from controlledSignal's sendNext: come through to my subscribeNext: block.
In testManualTimerTheSecond, I tried using the asynchronousFirstOrDefault: call to get the first value from the signal, then manually sent a value on my subject, but the value didn't come through, and the test failed when asynchronousFirstOrDefault: timed out.
What am I missing here?
This may not answer your question exactly, but it may give you insights on how to effectively test your signals. I've used 2 approaches myself so far:
XCTestCase and TRVSMonitor
TRVSMonitor is a small utility which will pause the current thread for you while you run your assertions. For example:
TRVSMonitor *monitor = [TRVSMonitor monitor];
[[[self.service searchPodcastsWithTerm:@"security now"] collect] subscribeNext:^(NSArray *results) {
    XCTAssertTrue([results count] > 0, @"Results count should be > 0");
    [monitor signal];
} error:^(NSError *error) {
    XCTFail(@"%@", error);
    [monitor signal];
}];
[monitor wait];
As you can see, I'm telling the monitor to wait right after I subscribe and signal it to stop waiting at the end of subscribeNext and error blocks to make it continue executing (so other tests can run too). This approach has the benefit of not relying on a static timeout, so your code can run as long as it needs to.
Using CocoaPods, you can easily add TRVSMonitor to your project:
pod "TRVSMonitor", "~> 0.0.3"
Specta & Expecta
Specta is a BDD/TDD (behavior driven/test driven) test framework. Expecta is a framework which provides more convenient assertion matchers. It has built-in support for async tests. It enables you to write more descriptive tests with ReactiveCocoa, like so:
it(#"should return a valid image, with cache state 'new'", ^AsyncBlock {
[[cache imageForURL:[NSURL URLWithString:SECURITY_NOW_ARTWORK_URL]] subscribeNext:^(UIImage *image) {
expect(image).notTo.beNil();
expect(image.cacheState).to.equal(JPImageCacheStateNew);
} error:^(NSError *error) {
XCTFail(#"%#", error);
} completed:^{
done();
}];
});
Note the use of ^AsyncBlock {. Using simply ^ { would imply a synchronous test.
Here you call the done() function to signal the end of an asynchronous test. I believe Specta uses a 10 second timeout internally.
Using CocoaPods, you can easily add Expecta & Specta:
pod "Expecta", "~> 0.2.3"
pod "Specta", "~> 0.2.1"
See this question: https://stackoverflow.com/a/19127547/420594
The XCAsyncTestCase has some extra functionality to allow for asynchronous test cases.
Also, I haven't looked at it in depth yet, but could ReactiveCocoaTests be of some interest to you? At a glance, they appear to be using Expecta.