I'd like to shelve old builds in all of my jobs, for example build numbers 1-10.
I'm wondering if there is a way to do that from the Jenkins UI using a single command.
First of all, in order to make changes to jobs in bulk, I would use the Configuration Slicing plugin.
You can get it from here: https://wiki.jenkins-ci.org/display/JENKINS/Configuration+Slicing+Plugin
Also, do you want to delete your builds or archive them? For deleting, I would use log rotation, either by date or by number of builds. In the job's configure section, click on Discard old builds and you will see the options.
Finally, you can always use the ArtifactDeployer plugin and some of the examples from it.
Link here: https://wiki.jenkins-ci.org/display/JENKINS/ArtifactDeployer+Plugin
Link on how to use the Jenkins CLI: https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+CLI
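For example, the CLI has a delete-builds command; something along these lines (the URL and job name are placeholders for your own) should remove builds 1-10 of a single job:
java -jar jenkins-cli.jar -s http://your-jenkins:8080/ delete-builds MyJob 1-10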
EDIT 1
Regarding the comments below where you ask about "shelving jobs":
I think the term you are looking for here is "archive" rather than "shelve" - shelving is very much a Visual Studio/TFS concept - so I am not personally aware of anything that does shelving per se.
As for Groovy scripts, I believe you are now asking a different question, so it should be raised separately as its own question - but as an introduction to Groovy you can use the following link:
http://groovy.codehaus.org/
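That said, the kind of thing people run from the Script Console (Manage Jenkins > Script Console) for this looks roughly like the sketch below; it is an untested sketch and assumes you really do want to delete builds numbered 1-10 in every job:

// Rough sketch: delete builds numbered 1 through 10 in every job.
Jenkins.instance.getAllItems(hudson.model.Job.class).each { job ->
    job.builds.findAll { it.number <= 10 }.each { it.delete() }
}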
I'm using CODEBUILD_BUILD_NUMBER in AWS CodeBuild to append the build number to the artifacts deployed from the build. After every major version release, we need to reset the build number.
For example, after v2.0.0-401, if we want to start building v3.0.0-1, we are not finding a way to reset the build numbers on the same CodeBuild project.
Any help is appreciated.
not finding a way to reset the build numbers on the same CodeBuild project.
That's because you can't reset it; it's managed by AWS.
You can set up a new build project to start counting from zero if you want, or use a different way of tagging your builds that isn't based on CODEBUILD_BUILD_NUMBER.
Yes, this functionality is not available out of the box, but it would definitely be useful.
For now, my recommendation would be to keep a CUSTOM_BUILD_NUMBER in the SSM Parameter Store. CodeBuild has native integration with Parameter Store, which provides an easy way to look up the value:
https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec.env.parameter-store
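For example, a buildspec could pull the value in along these lines (the parameter name /myapp/custom-build-number is just a placeholder), and you would reset or bump the parameter in SSM whenever you roll the major version:

env:
  parameter-store:
    CUSTOM_BUILD_NUMBER: /myapp/custom-build-number
phases:
  build:
    commands:
      - echo "Building v3.0.0-$CUSTOM_BUILD_NUMBER"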
Every time I roll the version number, I just track it in my buildspec instead of having to store it anywhere else:
- $buildPrefix = "2.0.0."
- $resetBuildNumber = 8 # set this to the build number prior to the buildPrefix version update
- $currentBuild = "$buildPrefix" + ($env:CODEBUILD_BUILD_NUMBER - $resetBuildNumber)
I'm upgrading a Google Cloud Dataflow job from Dataflow Java SDK 1.8 to version 2.4 and then trying to update the existing running job on Google Cloud using the --update and --transformNameMapping arguments, but I can't figure out how to write the transformNameMapping properly so that the upgrade succeeds and passes the compatibility check.
My code fails at the compatibility check with the error:
Workflow failed. Causes: The new job is not compatible with 2018-04-06_13_48_04-12999941762965935736. The original job has not been aborted., The new job is missing steps BigQueryIO.writeTableRows/BigQueryIO.StreamWithDeDup/Reshuffle/GroupByKey, PubsubIO.readStrings. If these steps have been renamed or deleted, please specify them with the update command.
The dataflow transform names for the existing, currently running job are:
PubsubIO.Read
ParDo(ExtractJsonPath) - A custom function we wrote
ParDo(AddMetadata) - Another custom function we wrote
BigQueryIO.Write
In my new code that uses the 2.4 SDK, I've changed the 1st and 4th transforms/functions because some libraries were renamed and some of the old SDK's functions were deprecated in the new version.
You can see the specific transform code below:
The 1.8 SDK version:
PCollection<String> streamData =
    pipeline
        .apply(PubsubIO.Read
            .timestampLabel(PUBSUB_TIMESTAMP_LABEL_KEY)
            //.subscription(options.getPubsubSubscription())
            .topic(options.getPubsubTopic()));

streamData
    .apply(ParDo.of(new ExtractJsonPathFn(pathInfos)))
    .apply(ParDo.of(new AddMetadataFn()))
    .apply(BigQueryIO.Write
        .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
        .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND)
        .to(tableRef));
The 2.4 SDK version I rewrote:
PCollection<String> streamData =
    pipeline
        .apply("PubsubIO.readStrings", PubsubIO.readStrings()
            .withTimestampAttribute(PUBSUB_TIMESTAMP_LABEL_KEY)
            //.subscription(options.getPubsubSubscription())
            .fromTopic(options.getPubsubTopic()));

streamData
    .apply(ParDo.of(new ExtractJsonPathFn(pathInfos)))
    .apply(ParDo.of(new AddMetadataFn()))
    .apply("BigQueryIO.writeTableRows", BigQueryIO.writeTableRows()
        .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
        .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND)
        .to(tableRef));
So it seems to me like PubsubIO.Read should map to PubsubIO.readStrings and BigQueryIO.Write should map to BigQueryIO.writeTableRows. But I could be misunderstanding how this works.
I've been trying a wide variety of things. I tried to give the two transforms that I'm failing to remap defined names, as they formerly were not explicitly named, so I updated my applies to .apply("PubsubIO.readStrings", ...) and .apply("BigQueryIO.writeTableRows", ...) and then set my transformNameMapping argument to:
--transformNameMapping={\"BigQueryIO.Write\":\"BigQueryIO.writeTableRows\",\"PubsubIO.Read\":\"PubsubIO.readStrings\"}
or
--transformNameMapping={\"BigQueryIO.Write/BigQueryIO.StreamWithDeDup/Reshuffle/GroupByKey\":\"BigQueryIO.writeTableRows/BigQueryIO.StreamWithDeDup/Reshuffle/GroupByKey\",\"PubsubIO.Read\":\"PubsubIO.readStrings\"}
or even trying to remap all the internal transforms inside the composite transform
--transformNameMapping={\"BigQueryIO.Write/BigQueryIO.StreamWithDeDup/Reshuffle/GroupByKey\":\"BigQueryIO.writeTableRows/BigQueryIO.StreamWithDeDup/Reshuffle/GroupByKey\",\"BigQueryIO.Write/BigQueryIO.StreamWithDeDup/Reshuffle\":\"BigQueryIO.writeTableRows/BigQueryIO.StreamWithDeDup/Reshuffle\",\"BigQueryIO.Write/BigQueryIO.StreamWithDeDup\":\"BigQueryIO.writeTableRows/BigQueryIO.StreamWithDeDup\",\"BigQueryIO.Write\":\"BigQueryIO.writeTableRows\",\"PubsubIO.Read\":\"PubsubIO.readStrings\"}
but I seem to get the exact same error no matter what:
The new job is missing steps BigQueryIO.writeTableRows/BigQueryIO.StreamWithDeDup/Reshuffle/GroupByKey, PubsubIO.readStrings.
Am I doing something seriously wrong? Is anybody who has written a transform mapping before willing to share the format they used? I can't find any examples online at all besides the main Google documentation on updating Dataflow jobs, which doesn't cover anything but the simplest case, --transformNameMapping={"oldTransform1":"newTransform1","oldTransform2":"newTransform2",...}, and doesn't make the example very concrete.
It turns out there was additional information in the logs on the Dataflow job details page of the Google Cloud web console that I was missing. I needed to change the log level filter from Info to show all log levels, and then I found several step-mapping messages, for example (although there were far more):
2018-04-16 (13:56:28) Mapping original step BigQueryIO.Write/BigQueryIO.StreamWithDeDup/Reshuffle/GroupByKey to write/StreamingInserts/StreamingWriteTables/Reshuffle/GroupByKey in the new graph.
2018-04-16 (13:56:28) Mapping original step PubsubIO.Read to PubsubIO.Read/PubsubUnboundedSource in the new graph.
Instead of trying to map PubsubIO.Read to PubsubIO.readStrings, I needed to map to the steps mentioned in that additional logging. In this case I got past my errors by mapping PubsubIO.Read to PubsubIO.Read/PubsubUnboundedSource and BigQueryIO.Write/BigQueryIO.StreamWithDeDup to BigQueryIO.Write/StreamingInserts/StreamingWriteTables. So try mapping your old steps to the ones mentioned in the full logs just before the job failure message.
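Based on those mappings, the argument that got me past the missing-step errors looked roughly like this (your step names may well differ):
--transformNameMapping={\"PubsubIO.Read\":\"PubsubIO.Read/PubsubUnboundedSource\",\"BigQueryIO.Write/BigQueryIO.StreamWithDeDup\":\"BigQueryIO.Write/StreamingInserts/StreamingWriteTables\"}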
Unfortunately, I'm now working through a failure of the compatibility check due to a change in the coder used between the old code and the new code, but my missing-step errors are solved.
Long story short: I'm familiar with Base SAS 9, and I'm now using Enterprise Guide (7.1) due to a new role at another company. The transition is painful, but the one thing that bothers me the most is the log.
As I am sure most know, it is rewritten/refreshed for every piece of code you execute.
Surely there must be an option to maintain a "running log" for the SAS code you are running/building (not necessarily for the whole project, just for the program node within the project).
Can this be done?
Any assistance is greatly appreciated. I've searched for references, but found none addressing this subject specifically.
Yes - from SAS's support pages:
You'll notice that a separate log node is generated for each code node. By turning on Project Logging, you can easily tell Enterprise Guide that you'd like a single SAS log to be generated for all of the tasks and code nodes in your project. This single Project Log will be created in addition to the individual logs created for each task or code node.
Helpful Hint: If Project Logging is turned on, the log represents a running log of the entire project. To turn on Project Logging, select Project Log in the context menu of the Process Flow, and then select Turn On.
I've been following the advice in this question.
How to add a WiX custom action that happens only on uninstall (via MSI)?
I have an executable running as a custom action after InstallFinalize, which I intend to use to purge all my files and folders. I was just going to write some standard deletion logic, but I'm stuck on the point Rob Mensching made that the Windows Installer should handle this, in case someone bails midway through an uninstall.
"create a CustomAction that adds temporary rows to the RemoveFiles table"
I'm looking for some more information on this. I'm not really sure how to achieve it in C++, and my searching hasn't turned up a whole lot.
http://msdn.microsoft.com/en-us/library/windows/desktop/aa371201(v=vs.85).aspx
Thanks
Neil
EDIT: I've marked the answer below because the question was specifically about how to add rows to the RemoveFile table in C++; however, I'm inclined to agree that the better solution is to use the RemoveFolderEx functionality in WiX, even though it is currently in beta (3.6, I think).
Roughly, you will have to use the following functions in this order:
MsiGetActiveDatabase - to get a database handle from the install handle you receive inside your custom action function
MsiDatabaseOpenView - to open a view on that database with your SQL query
MsiCreateRecord - to create a record holding the values for the new row
MsiRecord* - the set of functions to fill in the record's fields
MsiViewExecute - to execute the view with that record, inserting the new row into whatever table you please
MsiCloseHandle - for the view handle from MsiDatabaseOpenView, the record handle from MsiCreateRecord, and the database handle
Everything is explained in detail over at MSDN. However, pay special attention to the section "Functions Not for Use in Custom Actions".
The documentation of MsiViewExecute also explains how the SQL queries should look. To get a feel for them you may want to use one of the .vbs scripts that are part of the Windows Installer SDK.
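To make that concrete, here is a rough C++ sketch of those steps for the RemoveFile table. It assumes an immediate custom action (deferred actions cannot access the session database), the names PurgeLogs, MainComponent and INSTALLFOLDER are placeholders for your own file key, component and directory property, and the action should be scheduled before the standard RemoveFiles action so the temporary row is picked up:

#include <windows.h>
#include <msi.h>
#include <msiquery.h>
// link with msi.lib

// Adds a temporary row to the RemoveFile table so Windows Installer deletes
// *.log files from INSTALLFOLDER when MainComponent is removed.
extern "C" UINT __stdcall ScheduleFilePurge(MSIHANDLE hInstall)
{
    MSIHANDLE hDb = MsiGetActiveDatabase(hInstall);
    MSIHANDLE hView = NULL;
    MSIHANDLE hRec = NULL;

    UINT er = MsiDatabaseOpenView(hDb,
        TEXT("INSERT INTO `RemoveFile` (`FileKey`,`Component_`,`FileName`,`DirProperty`,`InstallMode`) ")
        TEXT("VALUES (?, ?, ?, ?, ?) TEMPORARY"), &hView);
    if (er == ERROR_SUCCESS)
    {
        hRec = MsiCreateRecord(5);
        MsiRecordSetString(hRec, 1, TEXT("PurgeLogs"));     // FileKey: any unique key
        MsiRecordSetString(hRec, 2, TEXT("MainComponent")); // Component_ whose removal triggers the delete
        MsiRecordSetString(hRec, 3, TEXT("*.log"));         // FileName: wildcard of files to remove
        MsiRecordSetString(hRec, 4, TEXT("INSTALLFOLDER")); // DirProperty: property naming the folder
        MsiRecordSetInteger(hRec, 5, 2);                    // InstallMode 2 = remove on uninstall

        er = MsiViewExecute(hView, hRec);                   // runs the INSERT with the record's values
    }

    if (hRec)  MsiCloseHandle(hRec);
    if (hView) MsiCloseHandle(hView);
    if (hDb)   MsiCloseHandle(hDb);
    return (er == ERROR_SUCCESS) ? ERROR_SUCCESS : ERROR_INSTALL_FAILURE;
}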
If you use WiX to create your installation package, consider using the RemoveFolderEx element. It does what you want, and you don't have to write the code yourself.
Read Tactical directory nukes for an example of how to use it.
If you still want to implement it yourself, you can take inspiration from this blog post; it contains the code for doing this in VBScript.
Hi all,
I'm using Kettle 4.0.1 community version. I'm comfortable with Spoon, but for running jobs I need to use Pan and Carte. My problem is that, other than spoon.bat, neither pan.bat nor carte.bat opens, and I'm unable to run kitchen.bat either. Can someone suggest the best solution?
First of all, in order to run jobs you will need to use kitchen.bat (unless you want to execute them remotely). Kitchen, Pan and Carte are command-line tools, so you will need to specify your parameters on the command line as well.
For example, say you want to run a file called job.kjb located in C:/jobs/ with minimal log level. You would execute kitchen.bat from the command line as follows:
kitchen.bat -file=C:/jobs/job.kjb -level=minimal
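Similarly, Pan runs a single transformation and Carte starts a slave server for remote execution. The invocations look roughly like this (the file path and the host/port are just example values):
pan.bat -file=C:/transformations/transform.ktr -level=minimal
carte.bat 127.0.0.1 8081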
Please also see more information here:
http://wiki.pentaho.com/display/EAI/Kitchen+User+Documentation