I created a GoCD pipeline.
Material type: GitHub
Poll for changes: true
The default polling interval is 1 minute.
How can I change the polling interval to 5 minutes for this pipeline only?
According to the configuration reference, there is no option to configure polling per git repository.
If your network topology allows it, you could also disable polling entirely and set up a webhook in GitHub that notifies GoCD of new commits.
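If a webhook is awkward to set up, another option in the same spirit is to turn off polling and kick the pipeline yourself from whatever already knows a commit happened (a post-receive hook, your own webhook receiver, etc.) via GoCD's pipeline-schedule API. A rough sketch, not from the original answer; the server URL, pipeline name, and credentials are placeholders, and the Accept header version depends on your GoCD release:

```python
# Sketch: trigger a GoCD pipeline explicitly instead of relying on polling.
# Server URL, pipeline name, and credentials below are placeholders; the
# Accept header version and auth scheme depend on your GoCD setup.
import requests

GOCD_SERVER = "https://gocd.example.com"   # hypothetical server URL
PIPELINE = "my-pipeline"                   # hypothetical pipeline name

def trigger_pipeline():
    resp = requests.post(
        f"{GOCD_SERVER}/go/api/pipelines/{PIPELINE}/schedule",
        headers={
            "Accept": "application/vnd.go.cd.v1+json",
            "X-GoCD-Confirm": "true",
        },
        auth=("ci-user", "ci-password"),   # or an API token, depending on setup
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.text)

if __name__ == "__main__":
    trigger_pipeline()
```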
We have an existing workflow where we need to increase the timeout for an activity (start-to-close) to enable some urgent processing. Do we need to bump the activity version?
It depends on how you specify the timeout. If you specify it at registration, then you have to change the version, as registration is immutable. If you specify it as part of the invocation, then there is no need for a new version.
If you are using the AWS Flow Framework for Java, use ActivitySchedulingOptions to pass the timeout. Here is the relevant documentation.
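For illustration only, the same per-invocation override exists in the raw SWF API (shown here via boto3, not the Flow Framework the answer refers to): the decider passes startToCloseTimeout when scheduling the activity task. Domain, activity type, IDs, and timeout values are placeholders:

```python
# Sketch only: per-invocation timeout override via the raw SWF API (boto3),
# as an analogue of ActivitySchedulingOptions in the Java Flow Framework.
# Activity type, IDs, and timeout values are placeholders.
import boto3

swf = boto3.client("swf", region_name="us-east-1")

def schedule_activity(task_token: str):
    swf.respond_decision_task_completed(
        taskToken=task_token,
        decisions=[{
            "decisionType": "ScheduleActivityTask",
            "scheduleActivityTaskDecisionAttributes": {
                "activityType": {"name": "UrgentProcessing", "version": "1.0"},
                "activityId": "urgent-processing-1",
                # Overrides the default registered on the activity type,
                # so no version bump is needed.
                "startToCloseTimeout": "3600",   # seconds, passed as a string
                "taskList": {"name": "default"},
            },
        }],
    )
```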
I have an AWS pipeline, which:
1) In the first stage, gets template.yaml and builds an EC2 Windows instance via a script.
Note that when this machine boots up, its user data starts a script that downloads requirements (git etc.), pulls the code, sets up IIS, and does various other things.
This happens once the CloudFormation part has completed, and it takes about another 5 minutes.
2) In the second part of the pipeline, I then want to run external tests on this machine, maybe using BlazeMeter.
The problem is that between stages 1 and 2 I need to wait for the website to come up on the box, so I need to wait at least 5 minutes. I could add a manual approval stage, but this seems cumbersome.
Does anyone have a way to add this timed wait, or a pipeline step that checks the site is up?
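For what it's worth, the "check the site is up" idea could be a small polling script run as a step between the two stages (for example in a test action or a Lambda). A sketch only; the URL, overall timeout, and poll interval below are placeholder values:

```python
# Sketch: block until the freshly provisioned site answers, or give up.
# URL, overall timeout, and poll interval are placeholder values.
import time
import urllib.error
import urllib.request

def wait_for_site(url: str, timeout_s: int = 600, interval_s: int = 15) -> bool:
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # site not ready yet (DNS, connection refused, 5xx, ...)
        time.sleep(interval_s)
    return False

if __name__ == "__main__":
    ok = wait_for_site("http://ec2-xx-xx-xx-xx.compute.amazonaws.com/")
    raise SystemExit(0 if ok else 1)
```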
Say I have this:
Step 1: An Azure WebJob triggered by a timer; this job creates 1000 messages and puts them in a queue.
Step 2: Another Azure WebJob triggered by the above message queue; this WebJob processes those messages.
Step 3: The final WebJob should only be triggered when all messages have been processed by step 2.
It looks like Azure Storage queues don't support ordering and the only way is to use Service Bus. I am wondering, is that really the only way?
What I am thinking of is this kind of process:
Put all these messages into an Azure table, with a GUID as the key and a status of 0.
After step 2 finishes processing a message, change that message's status to 1 (i.e. finished), and trigger step 3 once every message is done.
Will it work? Or maybe there are some NuGet packages that I can use to achieve what I want?
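To make the table idea concrete, here is a rough sketch of what step 2's bookkeeping could look like with the azure-data-tables package; the connection string, table name, entity shape, and the way step 3 gets triggered are all assumptions:

```python
# Rough sketch of the status-tracking idea from the question, using the
# azure-data-tables package. Connection string, table name, entity shape,
# and how step 3 is kicked off are assumptions, not from the original post.
from azure.data.tables import TableClient, UpdateMode

CONN_STR = "<storage-connection-string>"   # placeholder
TABLE = "jobstatus"                        # placeholder table name

def mark_done_and_maybe_trigger_step3(batch_id: str, message_id: str) -> None:
    table = TableClient.from_connection_string(CONN_STR, table_name=TABLE)

    # Step 2 worker: flip this message's status from 0 to 1.
    table.update_entity(
        {"PartitionKey": batch_id, "RowKey": message_id, "Status": 1},
        mode=UpdateMode.MERGE,
    )

    # If no entity in the batch is still at Status 0, everything is processed.
    remaining = list(table.query_entities(
        f"PartitionKey eq '{batch_id}' and Status eq 0"
    ))
    if not remaining:
        trigger_step3(batch_id)

def trigger_step3(batch_id: str) -> None:
    # Placeholder: e.g. drop a message on the queue that triggers the final WebJob.
    print(f"batch {batch_id} complete, triggering step 3")
```

One thing to watch: two workers finishing at almost the same time could both see zero remaining entities, so either make that check atomic or make step 3 tolerant of being triggered more than once.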
I think the simplest way is a combination of Azure Logic Apps and Azure Functions.
A Logic App is an automated, scalable workflow that you can trigger with a timer, an HTTP request, and so on. An Azure Function is a serverless compute service that enables you to run code on demand without having to explicitly provision or manage infrastructure.
A Logic App can call Functions as workflow steps, and using a Function is much like using a WebJob, so I think you could create a Logic App with three Functions and they will run one after another.
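To make the "a Function is used much like a WebJob" point concrete, a queue-triggered Azure Function in Python looks roughly like this (v1 programming model; it assumes a function.json with a queueTrigger binding named "msg", and the queue name and processing logic are placeholders):

```python
# Sketch of a queue-triggered Azure Function (Python v1 programming model).
# Assumes a function.json with a queueTrigger binding named "msg" pointing at
# a queue such as "step2-input"; queue name and processing are placeholders.
import logging

import azure.functions as func

def main(msg: func.QueueMessage) -> None:
    body = msg.get_body().decode("utf-8")
    logging.info("processing message: %s", body)
    # ... do the step-2 work for this message here ...
```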
As for the WebJobs: yes, the QueueTrigger doesn't guarantee ordering. The Service Bus you mentioned does meet some of your requirements with its FIFO feature, but you need to make sure step 3 is only triggered after step 1 has run, because the queue is also empty before step 1 has created any messages.
Hope this answer helps you.
I am using config updates and Cloud Functions for communication between a mobile application and an ESP32 device, following the example here, but when I send config update messages frequently, some of them do not go through; say out of 5, only 3 config update messages arrive. I have two questions:
1) How frequently can we send config updates to avoid missing some of them?
2) Is there any alternative way to communicate between Cloud Functions and the IoT device?
According to the docs: [IoT docs]
Configuration updates are limited to 1 update per second, per device. However, for best results, device configuration should be updated much less often — at most, once every 10 seconds.
The update rate is calculated as the time between the most recent server acknowledgment and the next update request.
If your operations are mostly configuration updates, I cannot think of an alternative that would perform better.
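As an illustration of staying under that limit, the sender can space its updates out itself. A sketch with the google-cloud-iot client, where the project, region, registry, and device IDs are placeholders and the 10-second spacing follows the quoted guidance:

```python
# Sketch: space out Cloud IoT Core config updates so they stay well under the
# documented 1-update-per-second limit (the docs suggest about 1 per 10 seconds).
# Project, region, registry, and device IDs are placeholders.
import time

from google.cloud import iot_v1

client = iot_v1.DeviceManagerClient()
device_path = client.device_path("my-project", "us-central1",
                                 "my-registry", "my-device")

MIN_INTERVAL_S = 10.0
_last_sent = 0.0

def send_config(payload: bytes) -> None:
    """Send a config update, sleeping if the previous one was too recent."""
    global _last_sent
    wait = MIN_INTERVAL_S - (time.monotonic() - _last_sent)
    if wait > 0:
        time.sleep(wait)
    client.modify_cloud_to_device_config(
        request={"name": device_path, "binary_data": payload}
    )
    _last_sent = time.monotonic()
```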
Tools:
Jenkins version 1.506
GitHub
GitHub SQS Plugin 1.4
Jenkins is configured to consume the messages and GitHub to send them over Amazon SQS (the access key, secret key, and queue name are set). The modules are also configured with "Build when a message is published to an SQS Queue".
The messages are sent by GitHub and consumed by Jenkins as expected. I can see SQS activity in Jenkins (see below), but for some reason Jenkins does not trigger the build.
I wonder what we are missing.
Last SQS Activity
Started on Mar 20, 2013 3:03:49 AM
Using strategy: Default
[poll] Last Build : #16
[poll] Last Built Revision: Revision 408d9c4d6412e44737b62f25e9c36fc8b3b074ca (origin/maple-sprint-4)
Fetching changes from the remote Git repositories
Fetching upstream changes from origin
Polling for changes in
Done. Took 1.3 sec
Changes found
I had to enable "Poll SCM" and set the schedule to "* * * * *" (every minute); that did the trick!