ignoreCommitterStrategy is not working in multibranch job-dsl generator job - jenkins-job-dsl

I'm trying to implement the ignoreCommitterStrategy approach via a multibranch generator job (i.e., the job-dsl way), since we have too many existing multibranch pipeline jobs, and I'm trying to set the ignore-committer strategy inside the branchSources block of the DSL.
After running the seed job (i.e., the multibranch generator job) I can see the Ignore Committer Strategy updated in the existing multibranch pipeline jobs, but the ignored author is still not added. That means that, at the moment, inside each multibranch pipeline job's config I have to manually click the Add button and add the ignored-author list, which is a bit painful as we have many jobs.
buildStrategies {
    ignoreCommitterStrategy {
        ignoredAuthors("sathya@xyz.com")
        allowBuildIfNotExcludedAuthor(false)
    }
}
Note: I even tried with "au.com.versent.jenkins.plugins.ignoreCommitterStrategy"
I'm expecting the ignored author to be added to the existing multibranch pipeline jobs upon execution of the multibranch generator job.

Currently the build strategy cannot be set by using the Dynamic DSL because of JENKINS-26535, but a fix is currently in review and will hopefully be released soon.
The correct syntax would be:
multibranchPipelineJob('example') {
    branchSources {
        branchSource {
            buildStrategies {
                ignoreCommitterStrategy {
                    ignoredAuthors('test@acme.org')
                    allowBuildIfNotExcludedAuthor(true)
                }
            }
        }
    }
}
Until the problem has been fixed, you can use a Configure Block to set the necessary options.
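For reference, a rough sketch of such a configure block (hedged: the XML element path and the strategy class name are assumptions based on the au.com.versent.jenkins.plugins.ignoreCommitterStrategy package mentioned above; verify both against the config.xml of a manually configured job before relying on this):
multibranchPipelineJob('example') {
    configure { node ->
        // Assumed XML path; check a manually configured job's config.xml to confirm.
        def strategies = node / sources / data / 'jenkins.branch.BranchSource' / buildStrategies
        strategies << 'au.com.versent.jenkins.plugins.ignorecommitterstrategy.IgnoreCommitterStrategy' {
            ignoredAuthors('test@acme.org')
            allowBuildIfNotExcludedAuthor(true)
        }
    }
}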

Is there any way to monitor Azure Synapse Pipelines execution?

In my project, I need to show how a pipeline is progressing on a custom web portal built in PHP. Is there any way, in any language such as C# or Java, through which I can list pipelines and monitor their progress, or even log it into Application Insights?
Are you labelling your queries with the OPTION (LABEL='MY LABEL') syntax?
This makes it easy to monitor the progress of your pipeline by querying sys.dm_pdw_exec_requests to pick out individual queries (see the last paragraph under the linked heading), and if you use a naming convention like 'pipeline_query' you can probably achieve what you want.
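For example, a hedged sketch (the table name and label prefix are illustrative; sys.dm_pdw_exec_requests applies to dedicated SQL pools):
-- Tag the query the pipeline runs with a recognisable label.
SELECT COUNT(*) FROM dbo.FactSales OPTION (LABEL = 'pipeline_query: daily_load');

-- Then monitor progress by filtering on that label.
SELECT request_id, status, total_elapsed_time
FROM sys.dm_pdw_exec_requests
WHERE [label] LIKE 'pipeline_query%';
Alternatively, you can poll the run status from C# with the Synapse SDK: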
try
{
    PipelineRunClient pipelineRunClient = new(new Uri(_Settings.SynapseExtractEndpoint), new DefaultAzureCredential());
    PipelineRun run = await pipelineRunClient.GetPipelineRunAsync(runId);
    // Poll every 30 seconds until the run leaves the InProgress/Queued states.
    while (run.Status == "InProgress" || run.Status == "Queued")
    {
        _Logger.LogInformation($"!!Pipeline {run.PipelineName} {runId} Status: {run.Status}");
        await Task.Delay(30000);
        run = await pipelineRunClient.GetPipelineRunAsync(runId);
    }
    _Logger.LogInformation($"!!Pipeline {run.PipelineName} {runId} Status: {run.Status} Runtime: {run.DurationInMs} Message: {run.Message}");
}
catch (Exception ex)
{
    // Surface monitoring failures instead of silently swallowing them.
    _Logger.LogError(ex, $"Failed to monitor pipeline run {runId}");
}

How to deploy large nodejs package to AWS Lambda?

I am trying to deploy a simple script to AWS Lambda that would generate critical css for a website. Running this serverless seems to make sense (but I cannot find any working examples).
The problem is the package size. I am trying to use https://github.com/pocketjoso/penthouse. When I simply npm install penthouse, the package size is suddenly over 300 MB. The size limit on Lambda is only 250 MB, so it will not upload.
Is there any way to solve this? Perhaps download penthouse on the fly? If so, is there any example?
Performance is not so critical in this case as it would be called only a few times a day by an automated process.
Looking at the bundle size of the package (https://bundlephobia.com/result?p=penthouse), it doesn't appear that your issue is primarily with the penthouse package. Although I cannot say for certain, I think it's mainly down to the size of your other dependencies.
Nevertheless, seeing as this isn't a critical system and will be accessed only a few times a day by an automated process, you can reduce the size of your node_modules folder by loading libraries from a CDN instead.
There are a number of services that allow you to do this; I have primarily used UNPKG and jsDelivr in the past, as they appear to be reliable with minimal-to-no downtime.
Your question doesn't say exactly which technology you're using or how far you can go to achieve your desired result, but there are a few options you can choose from:
Utilise webpack's externals configuration (https://webpack.js.org/configuration/externals/); see the sketch after this list.
Use a CDN library loader such as import-cdn-js (https://www.npmjs.com/package/import-cdn-js) or from-cdn (https://www.npmjs.com/package/from-cdn).
loadjs is another option: https://github.com/muicss/loadjs
scriptjs: https://www.npmjs.com/package/scriptjs
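As a rough sketch of the webpack externals approach (hedged: the entry path and the commonjs target are assumptions; the point is that penthouse is left out of the bundle and resolved at runtime):
// webpack.config.js
module.exports = {
    target: 'node',             // bundle for the Node.js runtime used by Lambda
    entry: './src/handler.js',  // assumed entry point
    externals: {
        // keep require('penthouse') out of the bundle
        penthouse: 'commonjs penthouse',
    },
};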
I don't know much about penthouse, but with scriptjs I assume you can achieve something like this:
var penthouseScript = require("scriptjs");
var fs = require("fs");

penthouseScript("https://cdn.jsdelivr.net/npm/penthouse@2.2.2/lib/index.min.js", () => {
    // penthouse-related code
    penthouse({
        url: 'http://google.com',
        cssString: 'body { color: red }'
    })
    .then(criticalCss => {
        // use the critical css
        fs.writeFileSync('outfile.css', criticalCss);
    });
});

Azure Queue Storage Triggered-Webjob - Dequeue Multiple Messages at a time

I have an Azure Queue Storage-Triggered Webjob. The process my webjob performs is to index the data into Azure Search. Best practice for Azure Search is to index multiple items together instead of one at a time, for performance reasons (indexing can take some time to complete).
For this reason, I would like for my webjob to dequeue multiple messages together so I can loop through, process them, and then index them all together into Azure Search.
However, I can't figure out how to get my webjob to dequeue more than one message at a time. How can this be accomplished?
According to your description, I suggest you try Microsoft.Azure.WebJobs.Extensions.GroupQueueTrigger to achieve your requirement.
This extension enables you to trigger functions and receive a group of messages instead of a single message as with [QueueTrigger].
For more details, refer to the code sample and article below.
Install:
Install-Package Microsoft.Azure.WebJobs.Extensions.GroupQueueTrigger
Program.cs:
static void Main()
{
    var config = new JobHostConfiguration
    {
        StorageConnectionString = "...",
        DashboardConnectionString = "...."
    };
    config.UseGroupQueueTriggers();
    var host = new JobHost(config);
    host.RunAndBlock();
}
Function.cs:
// Receive 10 messages at a time
public static void MyFunction([GroupQueueTrigger("queue3", 10)] List<string> messages)
{
    foreach (var item in messages)
    {
        Console.WriteLine(item);
    }
}
How would I get that changed to a GroupQueueTrigger? Is it an easy change?
In my opinion, it is an easy change.
You can follow the steps below:
1. Install the package Microsoft.Azure.WebJobs.Extensions.GroupQueueTrigger from the NuGet Package Manager.
2. Change the Program.cs file to enable UseGroupQueueTriggers.
3. Change the webjob's functions according to your old triggered function. Note: the group queue message trigger must bind to a list. As my code sample shows:
public static void MyFunction([GroupQueueTrigger("queue3", 10)] List<string> messages)
This function will get 10 messages from "queue3" at a time, so in this function you can loop over the list of messages, process them, and then index them all together into Azure Search, as sketched after these steps.
4. Publish your webjobs to Azure Web Apps.
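A minimal sketch of step 3 (hedged: MyDocument, _searchClient, and the JSON message shape are assumptions, and it uses the classic Microsoft.Azure.Search SDK with Newtonsoft.Json to push one batch per trigger invocation):
// Receives up to 10 messages and indexes them into Azure Search as a single batch.
public static void IndexMessages([GroupQueueTrigger("queue3", 10)] List<string> messages)
{
    // Each queue message is assumed to be a JSON-serialized MyDocument.
    var actions = messages
        .Select(m => IndexAction.MergeOrUpload(JsonConvert.DeserializeObject<MyDocument>(m)))
        .ToList();

    // One round trip to the search index instead of one per message.
    _searchClient.Documents.Index(IndexBatch.New(actions));
}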

Sitecore: Session Started Pipeline

I need to detect when a new session is started and I'm unable to locate a pipeline or processor for it. Ironically, I've found the sessionEnd pipeline!
The only other way I can seem to make it work is using Global.aspx to hook the session start event.
You can add your own processor to the httpRequestBegin pipeline with the following code:
if (Session["exist"] == null)
{
    // your code that should be executed on session start
    Session["exist"] = true;
}
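For context, a minimal processor sketch (hedged: the class name is illustrative, the processor must still be registered under httpRequestBegin in the Sitecore config, and the session can be null for requests without session state, hence the guard):
using System.Web;
using Sitecore.Pipelines.HttpRequest;

public class SessionStartProcessor : HttpRequestProcessor
{
    public override void Process(HttpRequestArgs args)
    {
        var session = HttpContext.Current == null ? null : HttpContext.Current.Session;
        if (session == null || session["exist"] != null)
            return;

        // Code that should run once per session goes here.
        session["exist"] = true;
    }
}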
You might find what you're looking for here
Depending on what you're looking for maybe the startTracking pipeline is what you want.

How to get all goals triggered during Sitecore session in commitDataSet Analytics pipeline?

I have an Analytics pipeline processor added just before the standard one in the commitDataSet section, to delete duplicate triggered page events before everything is submitted to the database, so I can have unique triggered events; there seems to be a bug on Android/iOS devices that triggers several events within a few seconds' interval.
In this custom pipeline I need to get the list of all goals/events the current user has triggered in his session, so I can compare them with the values in the dataset obtained from the args parameter and delete the ones already triggered.
The args.DataSet.Tables["PageEvents"] only returns the set about to be submitted to the database, and that doesn't help since it changes each time this pipeline runs. I also tried Sitecore.Analytics.Tracker.Visitor.DataSet, but I get a null value for these properties.
Does anyone know a way to get a list of all the goals the user has triggered so far in his session without requesting it directly from the database?
Some code:
public class CommitUniqueAnalytics : CommitDataSetProcessor
{
    public override void Process(CommitDataSetArgs args)
    {
        Assert.ArgumentNotNull(args, "args");
        var table = args.DataSet.Tables["PageEvents"];
        if (table != null)
        {
            // Sitecore.Analytics.Tracker.Visitor.DataSet.PageEvents - this list is always empty
            ...........
        }
    }
}
I had a similar question.
In Sitecore 7.5 I found that this worked:
Tracker.Current.Session.Interaction.Pages.SelectMany(x => x.PageEvents)
However, I'm a little worried that this will be inefficient if the Pages collection is very large.
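Building on that, a hedged sketch that keeps only the goals (it assumes the Sitecore 7.5+ xDB API, where PageEventData exposes IsGoal, plus usings for System.Linq and Sitecore.Analytics):
// Collect only the goals triggered so far in the current session.
var goalsSoFar = Tracker.Current.Session.Interaction.Pages
    .SelectMany(page => page.PageEvents)
    .Where(pageEvent => pageEvent.IsGoal)
    .ToList();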