I'm trying to get a Groovy script that runs as a post-build step in a Jenkins job to access an injected variable, but it keeps coming back null.
I've kept the job as simple as possible so there are only 2 real bits of configuration to consider.
Here's how the property is injected. I could use other methods but this is intended for a more complicated job that reads in external properties.
This is the Groovy script I have so far. It will do something else with the value once it gets it.
This is the logging from running the job.
I'm not a Groovy expert, and although I've searched and tried a number of things, I've had no success.
Of course having posted a question I then got the answer myself...
New script:
New logging:
I was able to set the values of the variables in Jenkins and access them in Slack notifications using the below script in the Groovy Postbuild plugin:
import hudson.model.*
import hudson.EnvVars

manager.listener.logger.println("====== Search Log ======")

def matcher = manager.getLogMatcher(".*Tests Summary(.*)\$")
if (matcher?.matches()) {
    // manager.addShortText(matcher.group(1), "grey", "white", "0px", "white")
    manager.listener.logger.println("matcher-0 === " + matcher.group(0))
    manager.listener.logger.println("matcher-1 === " + matcher.group(1))
    def summary = matcher.group(1)

    // Pulls the value that sits between two labels in the summary line,
    // e.g. the text between "Total:" and "Passed:"; a null `to` takes
    // everything after `from`.
    def extract = { String from, String to ->
        to ? summary.split(from)[1].split(to)[0] : summary.split(from)[1]
    }

    // Contributes a variable to the build environment so later steps
    // (e.g. Slack notifications) can read it.
    def index = 0
    def export = { String name, String value ->
        manager.build.environments.add(index++, Environment.create(new EnvVars([(name): value])))
    }

    def totalTests = extract('Total:', 'Passed:')
    manager.listener.logger.println("extracted total_tests_count : ${totalTests}")
    export('TOTAL_TESTS_COUNT', totalTests)

    def totalPassed = extract('Passed:', 'Failed:')
    manager.listener.logger.println("extracted total_pass_count : ${totalPassed}")
    export('TOTAL_PASS_COUNT', totalPassed)

    def totalFailed = extract('Failed:', 'Skipped:')
    manager.listener.logger.println("extracted total_failed_count : ${totalFailed}")
    export('TOTAL_FAILED_COUNT', totalFailed)

    def totalSkipped = extract('Skipped:', null)
    manager.listener.logger.println("extracted total_skipped_count : ${totalSkipped}")
    export('TOTAL_SKIPPED_COUNT', totalSkipped)
}
In Jenkins, I was able to access them using a custom message:
Please check the below URL for the test automation output for the PR build.
Total Cases: $TOTAL_TESTS_COUNT, Passed: $TOTAL_PASS_COUNT, Failed: $TOTAL_FAILED_COUNT, Skipped: $TOTAL_SKIPPED_COUNT.
I have a module to which I need to pass a set of values via variables.tf. Currently the variables are grouped by the suffix on the name, i.e. -dev, -stg, etc.
The module itself doesn't care which set it gets, but I must decide somewhere, so I pass the suffix at Terraform invocation time or in a .tfvars file.
How can I get the following code to work, or how else should I do it?
module "alb" {
  ...
  # these work but are ugly and inflexible
  # connect_alb_client_id = var.connect_alb_client_id-dev
  # connect_alb_client_id = var.connect_alb_client_id-stg

  # this doesn't work
  connect_alb_client_id = "${var.connect_alb_client_id}${var.suffix}"
  # and neither does this
  connect_alb_client_id = "${var.connect_alb_client_id${var.suffix}}"

  # so what is the correct syntax or alternative way of doing it
  ...
}
variable "suffix" {
  type    = string
  default = "-dev"
  # default = "-stg"
}
variable "connect_alb_client_id-dev" {
  type    = string
  default = "abcdef"
}

variable "connect_alb_client_id-stg" {
  type    = string
  default = "ghijkl"
}
Your connect_alb_client_id should be a map with keys of dev, stg and so on:
variable "connect_alb_client_id" {
  default = {
    dev = "abcdef"
    stg = "ghijkl"
  }
}
then:
module "alb" {
  connect_alb_client_id = var.connect_alb_client_id[var.suffix]
}
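Note that var.suffix must then hold the bare map key, without the leading dash. A sketch of the adjusted variable (values illustrative):

```hcl
variable "suffix" {
  type    = string
  default = "dev" # was "-dev"; the map keys carry no dash
}
```

You can then pick the environment at invocation time, e.g. terraform apply -var='suffix=stg'.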
I am using OpenDaylight and trying to replace the default distributed database with Apache Ignite.
I am using the jar built from the source code here:
https://github.com/Romeh/akka-persistance-ignite
However, the class IgniteWriteJournal does not seem to load, which I have checked by putting some print statements in its constructor.
Is there an issue with the .conf file?
The following is a portion of the akka.conf file I am using in OpenDaylight.
odl-cluster-data {
  akka {
    remote {
      artery {
        enabled = off
        canonical.hostname = "10.145.59.38"
        canonical.port = 2550
      }
      netty.tcp {
        hostname = "10.145.59.38"
        port = 2550
      }
      # when under load we might trip a false positive on the failure detector
      # transport-failure-detector {
      #   heartbeat-interval = 4 s
      #   acceptable-heartbeat-pause = 16s
      # }
    }

    cluster {
      # Remove ".tcp" when using artery.
      seed-nodes = ["akka.tcp://opendaylight-cluster-data@10.145.59.38:2550"]
      roles = ["member-1"]
    }

    extensions = ["akka.persistence.ignite.extension.IgniteExtensionProvider"]

    akka.persistence.journal.plugin = "akka.persistence.journal.ignite"
    akka.persistence.snapshot-store.plugin = "akka.persistence.snapshot.ignite"

    persistence {
      # Ignite journal plugin
      journal {
        ignite {
          # Class name of the plugin
          class = "akka.persistence.ignite.journal.IgniteWriteJournal"
          cache-prefix = "akka-journal"
          // Should be based on the data grid topology
          cache-backups = 1
          // if ignite is already started in a separate standalone grid where journal cache is already created
          cachesAlreadyCreated = false
        }
      }
      # Ignite snapshot plugin
      snapshot {
        ignite {
          # Class name of the plugin
          class = "akka.persistence.ignite.snapshot.IgniteSnapshotStore"
          cache-prefix = "akka-snapshot"
          // Should be based on the data grid topology
          cache-backups = 1
          // if ignite is already started in a separate standalone grid where snapshot cache is already created
          cachesAlreadyCreated = false
        }
      }
    }
  }

  ignite {
    // to start a client or server node to connect to the Ignite data cluster
    isClientNode = false
    // for testing ONLY we use localhost
    // used for grid cluster connectivity
    tcpDiscoveryAddresses = "localhost"
    metricsLogFrequency = 0
    // thread pools used by Ignite, should be based on target machine specs
    queryThreadPoolSize = 4
    dataStreamerThreadPoolSize = 1
    managementThreadPoolSize = 2
    publicThreadPoolSize = 4
    systemThreadPoolSize = 2
    rebalanceThreadPoolSize = 1
    asyncCallbackPoolSize = 4
    peerClassLoadingEnabled = false
    // to enable or disable durable memory persistence
    enableFilePersistence = true
    // used for grid cluster connectivity, change it to suit your configuration
    igniteConnectorPort = 11211
    // used for grid cluster connectivity, change it to suit your configuration
    igniteServerPortRange = "47500..47509"
    // durable memory persistence storage file system path, change it to suit your configuration
    ignitePersistenceFilePath = "./data"
  }
}
I assume you modified configuration/initial/akka.conf. First, those sections need to be inside the odl-cluster-data section (I can't tell from just your snippet). Also, the two plugin settings already sit inside the akka block, so the akka. prefix repeats; they should read:
persistence.journal.plugin = "akka.persistence.journal.ignite"
persistence.snapshot-store.plugin = "akka.persistence.snapshot.ignite"
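The reason is that a dotted key inside a HOCON block is relative to that block, so the akka. prefix ends up doubled. A minimal sketch of the pitfall:

```hocon
akka {
  # resolves to akka.akka.persistence.journal.plugin  (wrong)
  akka.persistence.journal.plugin = "akka.persistence.journal.ignite"

  # resolves to akka.persistence.journal.plugin  (right)
  persistence.journal.plugin = "akka.persistence.journal.ignite"
}
```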
I have this piece of code in a controller:
def update = {
    Map model = [:]
    model.foo = params.foo
    model.bar = params.bar
    def result = ""
    MyObject obj = MyObject.findWhere(bar: model.bar, foo: model.foo)
    MyObjectService.updateObj(model, obj)
    result = true
    render result as JSON
}
And this simple unit test:
def 'controller update'() {
    given:
    controller.params.foo = foo
    controller.params.bar = bar
    MyObject obj = new MyObject(bar: bar, foo: foo)
    mockDomain(MyObject, [obj])

    when:
    controller.update()

    then:
    1 * MyObject.findWhere(bar: bar, foo: foo) >> obj
    1 * MyObjectService.updateObj(model, obj)

    and:
    def model = JSON.parse(controller.response.contentAsString)
    model == true

    where:
    foo = "0"
    bar = "1"
}
Now this is failing, and it is telling me "no static method findWhere is applicable..." for those arguments. MyObject is just a GORM class, and when I run the application everything seems to work fine, but the test is failing.
My logic is this:
I want to count how many times the findWhere and updateObj methods are called, and I am also mocking their responses. So findWhere will return the object I already mocked and pass it on to the service.
Any ideas why this is failing ?
For mocking static methods you should use Spock's global Groovy mocks (GroovyMock/GroovyStub/GroovySpy with global: true), which were introduced in v0.7.
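A minimal sketch of what that looks like (assuming Spock 0.7+; note that only global Groovy mocks intercept static methods, and only on Groovy classes such as GORM domain classes):

```groovy
def 'controller update'() {
    given:
    // global: true makes the spy intercept every call on MyObject for the
    // duration of this feature method, including the static findWhere
    MyObject obj = new MyObject(bar: "1", foo: "0")
    GroovySpy(MyObject, global: true)

    when:
    controller.update()

    then:
    // one interaction both stubs the return value and verifies the call count
    1 * MyObject.findWhere(bar: "1", foo: "0") >> obj
}
```

This is what the original then: block was attempting; without the global spy declared in given:, the interaction on the static method cannot be matched.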
Hi, I am trying to get coverage for a class that has a few @future methods (for web service callouts) and a few concrete static methods, but after calling a future method I am unable to call the other methods. Can you please tell me how to get coverage for the future methods and web service calls?
My class structure:
public with sharing class AccountSynchController {
    @future (callout=true)
    public static void importAccount(Set<Id> ids) {
    }

    @future (callout=true)
    public static void importContact(Set<Id> ids) {
    }
}
[future methods are called from trigger]
Test Class Code :
VAT__c testVat = new VAT__c();
testVat.Code__c = '3';
insert testVat;
Account testAccount = new Account();
testAccount.Name = 'Test Account';
testAccount.VATSales__c = testVat.Id;
testAccount.VATPurchase__c = testVat.Id;
testAccount.KvK_Nummer__c = '12312312';
testAccount.PhoneExt__c = '12312312';
testAccount.Website = '12312312';
testAccount.BillingPostalCode = '12312312';
testAccount.BillingCity = '12312312';
testAccount.Fax = '12312312';
testAccount.Phone = '12312312';
testAccount.BillingStreet = '12312312';
testAccount.BTW_Nummer__c = '12312312';
testAccount.BillingCountry = '12312312';
testAccount.BillingState = '12312312';
testAccount.BTW_Nummer__c = '12312312';
testAccount.E_mail__c = 'test@gmail.com';
testAccount.Taal__c = 'NL';
testAccount.SalesPaymentConditionCode__c = '15';
testAccount.Code__c = '102';
testAccount.fromExact__c = false;
testAccount.Exact_Id__c = '123123';
insert testAccount;
Contact testContact = new Contact();
testContact.AccountId = testAccount.Id;
testContact.Birthdate = system.today();
testContact.Conact_Exact_Number__c = '12312312312';
testContact.Email = 'test@gmail.com';
testContact.FirstName = 'first';
testContact.Title_Code__c = 'Mr.';
testContact.Geslacht__c = 'M';
testContact.Initials__c = 'I';
testContact.Language_Code__c = 'NL';
testContact.LastName = 'last';
testContact.MiddleName__c = 'middle';
testContact.Phone = '12321312312';
testContact.fromExact__c = false;
insert testContact;
Thanks..
Begin your unit test by calling Test.startTest(), then run your test inserts, and finish by calling Test.stopTest(). Calling Test.stopTest() ensures that your @future method has fired. After that you can make your assertions to validate the trigger's actions.
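A skeleton following that pattern (class and object names taken from the question; the trigger is assumed to enqueue importAccount, and the assertion is illustrative):

```apex
@isTest
private class AccountSynchControllerTest {
    @isTest
    static void importAccountFiresFromTrigger() {
        Account testAccount = new Account(Name = 'Test Account');

        Test.startTest();
        insert testAccount;   // fires the trigger, which enqueues the @future call
        Test.stopTest();      // forces queued @future methods to complete

        // assert whatever importAccount is expected to have changed
        System.assertNotEquals(null, testAccount.Id);
    }
}
```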
Extending Adam's answer: to test the callouts you will need either to make use of the Test.isRunningTest() method, to give yourself a chance to emulate returning the data from your web service - it's not the best approach, but it's the commonly accepted way.
The other option is to use some mocking and injection, but this isn't as straightforward as it should be, so most people go for the first option.
Since it is impossible to make a web service callout directly from your test class, you have to write an additional mock class. Its purpose is to generate a fake response.
// This causes a fake response to be generated
Test.setMock(WebServiceMock.class, new WebServiceMockImpl());
where WebServiceMock is the system interface and WebServiceMockImpl is the mock class mentioned above.
After that you are able to invoke the true webservice call-out method in your test class.
Check the following link for further information.
http://www.salesforce.com/us/developer/docs/apexcode/Content/apex_callouts_wsdl2apex_testing.htm
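For reference, a sketch of such a mock (the DocSample.EchoStringResponse_element type is the example from the linked docs; substitute the classes generated from your own WSDL):

```apex
@isTest
global class WebServiceMockImpl implements WebServiceMock {
    global void doInvoke(
            Object stub, Object request, Map<String, Object> response,
            String endpoint, String soapAction, String requestName,
            String responseNS, String responseName, String responseType) {
        // build the fake response the real callout would have returned
        DocSample.EchoStringResponse_element respElement =
            new DocSample.EchoStringResponse_element();
        respElement.EchoStringResult = 'Mock response';
        response.put('response_x', respElement);
    }
}
```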
I'm trying to use the following code to create a new SalesInvoice based on an existing SalesOrder:
SalesInvoice invoice = new SalesInvoice();
invoice.DocumentTypeKey = new SalesDocumentTypeKey { Type = SalesDocumentType.Invoice };
invoice.CustomerKey = originalOrder.CustomerKey;
invoice.BatchKey = originalOrder.BatchKey;
invoice.Terms = new SalesTerms
{
    DiscountTakenAmount = new MoneyAmount { Value = 0, Currency = "USD", DecimalDigits = 2 },
    DiscountAvailableAmount = new MoneyAmount { Value = 0, Currency = "USD", DecimalDigits = 0 }
};
invoice.OriginalSalesDocumentKey = originalOrder.Key;

List<SalesInvoiceLine> lineList = new List<SalesInvoiceLine>();
for (int i = 0; i < originalOrder.Lines.Length; i++)
{
    SalesInvoiceLine line = new SalesInvoiceLine();
    line.ItemKey = originalOrder.Lines[i].ItemKey;
    line.Key = new SalesLineKey { LineSequenceNumber = originalOrder.Lines[i].Key.LineSequenceNumber };

    SalesLineLot lot = new SalesLineLot();
    lot.LotNumber = originalOrder.Lines[i].Lots[0].LotNumber;
    lot.Quantity = new Quantity { Value = 2200 };
    lot.Key = new SalesLineLotKey { SequenceNumber = originalOrder.Lines[i].Lots[0].Key.SequenceNumber };

    line.Lots = new SalesLineLot[] { lot };
    line.Quantity = new Quantity { Value = 2200 };
    lineList.Add(line);
}
invoice.Lines = lineList.ToArray();

DynamicsWS.CreateSalesInvoice(invoice, DynamicsContext, DynamicsWS.GetPolicyByOperation("CreateSalesInvoice", DynamicsContext));
When executed, I receive the following error:
SQL Server Exception: Operation expects a parameter which was not supplied.
And the more detailed exception from the Exception Console in Dynamics:
Procedure or function 'taSopLotAuto' expects parameter '@I_vLNITMSEQ',
which was not supplied.
After a considerable amount of digging through Google, I discovered a few things.
'taSopLotAuto' is an eConnect procedure within the Sales Order Processing component that attempts to automatically fill lots. I do not want the lots automatically filled, which is why I try to fill them manually in the code. I've also modified the CreateSalesInvoice policy from Automatic lot fulfillment to Manual lot fulfillment for the GP web services user, but that didn't change which eConnect procedure was called.
'@I_vLNITMSEQ' refers to the LineSequenceNumber. The LineSequenceNumber and SequenceNumber (of the lot itself) must match; in my case they are both the default: 16384. Not only is this parameter set in the code above, but it also appears in the SOAP message that the server attempted to process - hardly "not supplied."
I can create an invoice sans line items without a hitch, but if I add line items it fails. I do not understand why I am receiving an error for a missing parameter that is clearly present.
Any ideas on how to successfully create a SalesInvoice through Dynamics GP 10.0 Web Services?
Maybe you missed adding the line key to the lot:
lot.Key = new SalesLineKey();
lot.Key.SalesDocumentKey = new SalesDocumentKey();
lot.Key.SalesDocumentKey.Id = seq.ToString();