Jenkins is not starting the build upon SQS message arrival

Tools:
Jenkins version 1.506
GitHub
GitHub SQS Plugin 1.4
Jenkins is configured to consume the messages and GitHub to send them over Amazon SQS (the access key, secret key, and queue name are set). The jobs are also configured with "Build when a message is published to an SQS Queue".
The messages are sent by GitHub and consumed by Jenkins as expected. I can see SQS activity in Jenkins (see below), but for some reason Jenkins does not trigger the build.
What are we missing?
Last SQS Activity
Started on Mar 20, 2013 3:03:49 AM
Using strategy: Default
[poll] Last Build : #16
[poll] Last Built Revision: Revision 408d9c4d6412e44737b62f25e9c36fc8b3b074ca (origin/maple-sprint-4)
Fetching changes from the remote Git repositories
Fetching upstream changes from origin
Polling for changes in
Done. Took 1.3 sec
Changes found

I had to set "Poll SCM" with the schedule "* * * * *"; that did the trick!

How to use librdkafka to change the retention time for a running Kafka topic

We can use the following command on the Kafka machine to update the retention time for a running topic:
bin/kafka-topics.sh --zookeeper <kafka_ip> --alter --topic <target_topic> --config retention.ms=86400000
But I don't want to log in to the Kafka machine and run the command.
I just want to use C or C++ to change the retention time for a running topic from the remote producer machine.
The question is: is there an API in librdkafka to update the retention time for a running topic?
Note: we can already produce and consume Kafka data with C/C++.
Use rd_kafka_AlterConfigs() and pass it a TOPIC resource containing all of the topic's current configuration as well as your updated retention.ms setting.
https://docs.confluent.io/platform/current/clients/librdkafka/rdkafka_8h.html#ade8d161dfb86a94179d286f36ec5b28e
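A minimal C sketch of that flow, assuming an existing rd_kafka_t handle; the DescribeConfigs round-trip that should supply the topic's current settings is elided:

#include <stdio.h>
#include <librdkafka/rdkafka.h>

/* Sketch: set retention.ms on an existing topic via the Admin API. */
static void set_retention(rd_kafka_t *rk, const char *topic) {
    rd_kafka_queue_t *queue = rd_kafka_queue_new(rk);
    rd_kafka_ConfigResource_t *res =
        rd_kafka_ConfigResource_new(RD_KAFKA_RESOURCE_TOPIC, topic);

    /* NOTE: AlterConfigs replaces the whole config set, so a real program
     * should first DescribeConfigs and re-submit every current value
     * alongside the one being changed. */
    rd_kafka_ConfigResource_set_config(res, "retention.ms", "86400000");

    rd_kafka_AlterConfigs(rk, &res, 1, NULL, queue);

    /* Wait for the result event and report any top-level error. */
    rd_kafka_event_t *ev = rd_kafka_queue_poll(queue, 10 * 1000);
    if (ev) {
        if (rd_kafka_event_error(ev))
            fprintf(stderr, "AlterConfigs failed: %s\n",
                    rd_kafka_event_error_string(ev));
        rd_kafka_event_destroy(ev);
    }
    rd_kafka_ConfigResource_destroy(res);
    rd_kafka_queue_destroy(queue);
}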
Is there also a way to do this with the C++ API?
When the topic configuration is set via RdKafka::Topic::create, why doesn't the conf object support 'retention.ms'? According to https://docs.confluent.io/platform/current/installation/configuration/topic-configs.html#retention.ms, 'retention.ms' is a topic configuration.

How to remove a Cloud Function (slack-notification) from a specific Cloud Build trigger job

I have a Cloud Function configured for Slack notifications, and I also have two Cloud Build triggers against the same repository (one for push activity from Git and another for pull-request activity from any branch).
Now, whenever any job is triggered, that Cloud Function pushes a message to Slack via Pub/Sub. But I only want notifications for pass/fail events from the push-activity trigger, not from the pull-request trigger. How do I remove the Cloud Function for the pull-request trigger only? Can I configure it like this?
Thanks in advance. :)
To send the Slack notification only according to the build status, you can detect the type of the event and decide whether to send the notification.
In the event.data variable that you receive in the subscribe method, you can read the status field to check the status of the build, so you can send the Slack notification only on SUCCESS or FAILURE status.
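A rough sketch of that check, assuming a Node.js function subscribed to the cloud-builds Pub/Sub topic (the base64-encoded build JSON in event.data follows the pattern from Google's Slack-notifier tutorial):

// Hypothetical Pub/Sub-triggered Cloud Function.
module.exports.subscribe = (event) => {
  // The build resource arrives base64-encoded in event.data.
  const build = JSON.parse(Buffer.from(event.data, 'base64').toString());

  // Only notify on the terminal states we care about.
  if (build.status !== 'SUCCESS' && build.status !== 'FAILURE') {
    return;
  }

  // ...format and post the Slack message here...
};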
I do this by adding a substitution variable, either in the build trigger config (UI) or in cloudbuild.yaml itself:
substitutions:
  _DISABLE_SLACK: 'true'
Then in your slack webhook code:
// don't send slack messages if _DISABLE_SLACK is set in the cloudbuild.yaml file -- any value is accepted
if (build && build.substitutions && build.substitutions['_DISABLE_SLACK']) {
  return;
}
PS: I also have another method for when you want to disable Slack messages for a build BUT you still want errors... I can post that if you're interested :)
(Basically: if build.status is not one of 'WORKING', 'QUEUED', 'CANCELLED', and build.buildTriggerId is in a known array, then we continue with the Slack message, overriding _DISABLE_SLACK.)
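That override logic as a sketch in the same webhook code; the trigger ID array is a hypothetical placeholder you would fill in yourself:

// Hypothetical: always report terminal states for selected triggers,
// even when _DISABLE_SLACK is set.
const ALWAYS_NOTIFY_TRIGGERS = ['my-push-trigger-id']; // placeholder IDs
const PENDING_STATES = ['WORKING', 'QUEUED', 'CANCELLED'];

const overrideDisable =
  !PENDING_STATES.includes(build.status) &&
  ALWAYS_NOTIFY_TRIGGERS.includes(build.buildTriggerId);

if (build && build.substitutions &&
    build.substitutions['_DISABLE_SLACK'] && !overrideDisable) {
  return;
}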

Greengrass_HelloWorld lambda doesn't publish to Amazon IoT console

I have been following the documentation step by step and didn't hit any errors. I configured, deployed, and subscribed to the hello/world topic just as the documentation describes. However, when I got to the testing step here: https://docs.aws.amazon.com/greengrass/latest/developerguide/lambda-check.html
No messages show up on the IoT console (subscription view, hello/world)! I am using the Greengrass Core daemon, which runs on my Ubuntu machine; it is active and listens on port 8000. I don't think there is anything wrong with my local device, because the group was deployed successfully and I can see the communication going both ways in Wireshark.
I have these logs on my machine: /home/##/Desktop/greengrass/ggc/var/log/system/runtime.log:
[2019-09-28T06:57:42.492-07:00][INFO]-===========================================
[2019-09-28T06:57:42.492-07:00][INFO]-Greengrass Version: 1.9.3-RC3
[2019-09-28T06:57:42.492-07:00][INFO]-Greengrass Root: /home/##/Desktop/greengrass
[2019-09-28T06:57:42.492-07:00][INFO]-Greengrass Write Directory: /home/##/Desktop/greengrass/ggc
[2019-09-28T06:57:42.492-07:00][INFO]-Group File Directory: /home/##/Desktop/greengrass/ggc/deployment/group
[2019-09-28T06:57:42.492-07:00][INFO]-Default Lambda UID: 122
[2019-09-28T06:57:42.492-07:00][INFO]-Default Lambda GID: 127
[2019-09-28T06:57:42.492-07:00][INFO]-===========================================
[2019-09-28T06:57:42.492-07:00][INFO]-The current core is using the AWS IoT certificates with fingerprint. {"fingerprint": "90##4d"}
[2019-09-28T06:57:42.492-07:00][INFO]-Will persist worker process info. {"dir": "/home/##/Desktop/greengrass/ggc/ggc/core/var/worker/processes"}
[2019-09-28T06:57:42.493-07:00][INFO]-Will persist worker process info. {"dir": "/home/##/Desktop/greengrass/ggc/ggc/core/var/worker/processes"}
[2019-09-28T06:57:42.494-07:00][INFO]-No proxy URL found.
[2019-09-28T06:57:42.495-07:00][INFO]-Started Deployment Agent to listen for updates.
[2019-09-28T06:57:42.495-07:00][INFO]-Connecting with MQTT. {"endpoint": "a6##ws-ats.iot.us-east-2.amazonaws.com:8883", "clientId": "simulators_gg_Core"}
[2019-09-28T06:57:42.497-07:00][INFO]-The current core is using the AWS IoT certificates with fingerprint. {"fingerprint": "90##4d"}
[2019-09-28T06:57:42.685-07:00][INFO]-MQTT connection successful. {"attemptId": "GVko", "clientId": "simulators_gg_Core"}
[2019-09-28T06:57:42.685-07:00][INFO]-MQTT connection established. {"endpoint": "a6##ws-ats.iot.us-east-2.amazonaws.com:8883", "clientId": "simulators_gg_Core"}
[2019-09-28T06:57:42.685-07:00][INFO]-MQTT connection connected. Start subscribing. {"clientId": "simulators_gg_Core"}
[2019-09-28T06:57:42.685-07:00][INFO]-Deployment agent connected to cloud.
[2019-09-28T06:57:42.685-07:00][INFO]-Start subscribing. {"numOfTopics": 2, "clientId": "simulators_gg_Core"}
[2019-09-28T06:57:42.685-07:00][INFO]-Trying to subscribe to topic $aws/things/simulators_gg_Core-gda/shadow/update/delta
[2019-09-28T06:57:42.727-07:00][INFO]-Trying to subscribe to topic $aws/things/simulators_gg_Core-gda/shadow/get/accepted
[2019-09-28T06:57:42.814-07:00][INFO]-All topics subscribed. {"clientId": "simulators_gg_Core"}
[2019-09-28T06:58:57.888-07:00][INFO]-Daemon received signal: terminated.
[2019-09-28T06:58:57.888-07:00][INFO]-Shutting down daemon.
[2019-09-28T06:58:57.888-07:00][INFO]-Stopping all workers.
[2019-09-28T06:58:57.888-07:00][INFO]-Lifecycle manager is stopped.
[2019-09-28T06:58:57.888-07:00][INFO]-IPC server stopped.
/home/##/Desktop/greengrass/ggc/var/log/system/localwatch/localwatch.log:
[2019-09-28T06:57:42.491-07:00][DEBUG]-will keep the log files for the following lambdas {"readingPath": "/home/##/Desktop/greengrass/ggc/var/log/user", "lambdas": "map[]"}
[2019-09-28T06:57:42.492-07:00][WARN]-failed to list the user log directory {"path": "/home/##/Desktop/greengrass/ggc/var/log/user"}
Thanks in advance.
I had a similar issue on another platform (Jetson Nano). I could not get a response after going through the AWS instructions for setting up a simple Lambda using IoT Greengrass. In my search for answers I discovered that AWS has a qualification test suite for any device you connect.
It goes through an automated process of deploying and testing a Lambda function (as well as other functionality), reports results for each step, and the docs provide troubleshooting info for failures.
By going through those tests I was able to narrow down the issues with my setup, installation, and configuration. The testing docs give pointers for troubleshooting test results. Here is a link to the test: https://docs.aws.amazon.com/greengrass/latest/developerguide/device-tester-for-greengrass-ug.html
If you follow the 'Next Topic' links, it will take you through the complete test. Let me warn you that it's extensive and will take some time, but for me it gave a lot of detailed insight that a hello world does not.

How to modify the poll time for materials in a GoCD pipeline?

I created a GoCD pipeline.
Material type: GitHub
Poll for changes: true
The default polling time is 1 minute.
I want to change the poll time to 5 minutes for this pipeline only.
According to the configuration reference, there is no option to configure the polling interval per Git repository.
If your network topology allows it, you could also disable polling entirely and set up a webhook in GitHub that notifies GoCD of new commits.
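For the webhook route, a sketch of what the material change might look like in the GoCD config XML; autoUpdate="false" (per the GoCD material config reference) turns polling off for that material, and the pipeline name and repository URL here are placeholders:

<pipeline name="my-pipeline">
  <materials>
    <!-- autoUpdate="false" disables polling; new commits are then picked up
         only when a webhook or a manual trigger notifies the server -->
    <git url="https://github.com/example/repo.git" autoUpdate="false" />
  </materials>
  <!-- stages elided -->
</pipeline>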

AWS SQS - Receive message from CRON

I have an AWS Elastic Beanstalk instance configured as a worker environment. It has a cron job that runs every x minutes. In the worker configuration I have the path to a PHP file that runs when the cron fires. From the SQS dashboard I can manually send this worker's queue a message and set an actual message body, for example "hello".
My question is: how can the PHP file access the SQS message's message attributes?
The obvious answer is to use the AWS\SQSClient; however, the only way to read a message is to first receive one. The problem here is that the message has already been retrieved by the Elastic Beanstalk worker code. So how can I now read its attributes?
EDIT
Just to add more clarity to what I am describing, I'm going to give a detailed write-up of my steps to reproduce this.
I log into Elastic Beanstalk and create a new environment in my application.
I select 'create new worker'.
I configure a PHP instance.
I upload a new source bundle for my environment.
The source zip contains two files: cron.yaml and someJob.php.
See the file contents below.
I continue through setup until I get to the "Worker Details" section. Here I set the following:
Worker Queue - Autogenerated queue
HTTP Path - /someJob.php
MIME Type - default
HTTP Connections - 10
Visibility Timeout - 300
I let the environment build
During the build, an SQS queue and a dead-letter queue are automatically generated.
Once finished, the environment sits there until the first cron job time is hit.
A message is somehow sent to the autogenerated SQS queue.
someJob.php runs.
The message apparently gets deleted.
Cron:
version: 1
cron:
  - name: "Worker"
    url: "someJob.php"
    schedule: "0 * * * *"
PHP:
pseudo
<?send me an email, update the db, whatever?>
// NOTE: I don't even connect to the AWS library or perform ANY SQS actions for this to work
Now my question is: if I go to the autogenerated SQS queue, I can select it, go to Queue Actions, choose Send a Message, and then send an actual message string such as "Hello".
Can I access the message value "Hello" even though my PHP wasn't responsible for pulling the message from SQS? Obviously I would need to call the AWS library and the associated SQS commands, but the only command I can use is receiveMessage, which I assume would pull a new message instead of the information from the currently received "Hello" message.
Note that sending the "Hello" message will also cause someJob.php to run.
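For what it's worth, the Beanstalk worker daemon (sqsd) delivers each queue message to the configured HTTP path as a plain POST: the message body becomes the request body, and custom message attributes arrive as X-Aws-Sqsd-Attr-* headers. A minimal sketch of someJob.php reading them without any SQS API calls (the attribute name MyAttr is a placeholder):

<?php
// someJob.php -- invoked by the Beanstalk worker daemon for every message.

// The SQS message body ("Hello") is the raw POST body.
$messageBody = file_get_contents('php://input');

// A custom message attribute named "MyAttr" would arrive as the
// X-Aws-Sqsd-Attr-MyAttr header, which PHP exposes like this:
$myAttr = isset($_SERVER['HTTP_X_AWS_SQSD_ATTR_MYATTR'])
    ? $_SERVER['HTTP_X_AWS_SQSD_ATTR_MYATTR']
    : null;

// ...send me an email, update the db, whatever...

// Returning 200 tells the daemon the message was handled (it then deletes it).
http_response_code(200);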