Why are Build Artifacts called "drop" by default?
What does the word "drop" mean, literally?
Is it a Microsoft thing or DevOps in general?
What does the word "drop" mean, literally?
Yes, you can understand it literally, in the sense of a water drop.
The Publish Build Artifacts task is used in a build pipeline to publish build artifacts to Azure Pipelines, which is a service hosted in the cloud. When the artifact is downloaded from the cloud to a local machine, it falls like a raindrop, so it is named drop by default.
Is it a Microsoft thing or DevOps in general?
This default comes from Azure DevOps (a Microsoft thing).
Hope this helps.
The way the docs read today, it looks like artifactName is optional (which it is) and that the default value is drop (which it is not). The default name seems to be constructed from the stage and job that called the task: underscores are removed and the two names are concatenated with a period, so a stage named Build_Stage with a job named Build_Job would yield something like BuildStage.BuildJob.
The docs need to be updated to indicate the correct default name.
Specify the name of the artifact that you want to create. It can be whatever you want. For example:
- task: PublishBuildArtifacts@1
  inputs:
    pathToPublish: '$(Build.ArtifactStagingDirectory)'
    artifactName: drop1

- task: PublishBuildArtifacts@1
  inputs:
    pathToPublish: '$(Build.ArtifactStagingDirectory)'
    artifactName: drop2
In an AWS CodeBuild build project I am trying to set a FILE_PATH filter under "Don't start a build under these conditions," but I can't get it to work.
How would I go about setting a regex that the AWS build project will recognize?
Info:
Source Provider: GitHub
Event type: Push
Commits to this file should not trigger a build: README.md
I don't believe Amazon follows the traditional regex syntax, but here is a picture of what I currently have (I based it on the documentation, where they use ^buildspec.*). Also, this is just an example I want to get working before I move on; the real requirement is to list multiple files to be ignored by my build project.
P.S. I tried reading this post but still could not figure it out: Specify file_path while triggering AWS codebuild
This might be a very specific question, but I will try anyway.
I want to explicitly set the Stage column in Model registry for a given Model Version:
This picture comes from the documentation and it gets set only when you run the example SageMaker Projects MLOps Templates they provide. When I create the Model Package (i.e. Model Version) manually, the column remains empty. How do I set it? What API do I call?
Additionally, the documentation on browsing the model version history has the following sentence:
How do we send that exact event ("Deployed to stage XYZ") manually?
I already thoroughly went over all the files SageMaker MLOps Project generates (CodeBuild Builds, CodePipeline, CloudFormation, various .py files, SageMaker Pipeline) but could not find any direct and explicit call for that event.
I think it may be somehow connected to the Tag sagemaker:deployment-stage but I've already set it on Endpoint, EndpointConfiguration and Model, with no success. I also tried to blindly call the UpdateModelPackage API and set Stage in CustomerMetadataProperties. Again - no luck.
The only thing I get in that Activity tab is that given Model Version is deployed to Inference endpoint:
You can set the status with the ModelApprovalStatus parameter in the create_model_package API or the update_model_package API.
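For illustration, here is a minimal boto3 sketch of that call; the ARN, description, and metadata key/value below are placeholders, not values from the question:

import boto3

sm = boto3.client("sagemaker")

# Placeholder ARN for an existing model package (model version).
model_package_arn = "arn:aws:sagemaker:us-east-1:111122223333:model-package/my-group/1"

# Set the approval status (and, optionally, free-form metadata) on that version.
sm.update_model_package(
    ModelPackageArn=model_package_arn,
    ModelApprovalStatus="Approved",  # or "Rejected" / "PendingManualApproval"
    ApprovalDescription="Promoted after evaluation",  # placeholder text
    CustomerMetadataProperties={"stage": "prod"},  # example key/value, not a built-in field
)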
A model package state change should create an event in EventBridge (like many other SageMaker events, see https://docs.aws.amazon.com/sagemaker/latest/dg/automating-sagemaker-with-eventbridge.html#eventbridge-model-package), which enables you to run the automation of your choice.
In the default SageMaker Pipelines project template, you can see the proposed EventBridge-driven logic in the CodePipeline pipeline created for deployment: at the top it shows "Trigger - CloudWatchEvent".
You don't see the event source as code in the git repository, because in that demo template the status change is expected to be done in the Studio model registry UI.
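As a rough sketch of that wiring (not the template's actual code), an EventBridge rule matching model package state changes could be created like this with boto3; the rule name and the ModelApprovalStatus filter are assumptions for the example:

import json
import boto3

events = boto3.client("events")

# Hypothetical rule that fires when a model package's approval status changes to Approved.
events.put_rule(
    Name="model-package-state-change",
    EventPattern=json.dumps({
        "source": ["aws.sagemaker"],
        "detail-type": ["SageMaker Model Package State Change"],
        "detail": {"ModelApprovalStatus": ["Approved"]},
    }),
)

# A target (the deployment CodePipeline, a Lambda, etc.) would then be attached
# with events.put_targets(...) to run the automation of your choice.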
Those EventBridge events emitted by the Model Registry can also be seen in a few blogs:
Taming Machine Learning on AWS with MLOps: A Reference Architecture
Patterns for multi-account, hub-and-spoke Amazon SageMaker model registry
Build MLOps workflows with Amazon SageMaker projects, GitLab, and GitLab pipelines
I was having the exact same issue: I wanted to change the model stage but could not find where it was being done in the sample code AWS provides.
After some research and digging into the sample code, I realized that it is done during the CloudFormation execution. First they add the tag
'sagemaker:deployment-stage': stage_config['Parameters']['StageName']
and then the CloudFormation execution (the cfnUpdate call) updates the stage and deploys.
I couldn't find another way to change the state with a call to update_model_package or other methods.
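For context, here is a stripped-down boto3 sketch of what that cfnUpdate step boils down to, passing the stage through as a stack tag; the stack name, parameter, and stage value are placeholders, and the real template code does considerably more:

import boto3

cfn = boto3.client("cloudformation")

stage_name = "prod"  # placeholder; the sample reads this from stage_config["Parameters"]["StageName"]

cfn.update_stack(
    StackName="sagemaker-deploy-prod",  # placeholder stack name
    UsePreviousTemplate=True,           # reuse the already-deployed endpoint template
    Parameters=[{"ParameterKey": "StageName", "ParameterValue": stage_name}],
    Tags=[{"Key": "sagemaker:deployment-stage", "Value": stage_name}],
    Capabilities=["CAPABILITY_NAMED_IAM"],
)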
I am using the AWS CLI task to deploy a Lambda layer. The build pipeline upstream looks like this:
It zips up the code, publishes the artifact and then downloads that artifact.
Now in the release pipeline I'm deploying that artifact via an AWS CLI command. The release pipeline looks like this:
I'm trying to figure out a way to dynamically get the current working directory so I don't need to hardcode it. In the options and parameters section you can see I'm trying to use $(Pipeline.Workspace) but it doesn't resolve correctly.
Is this possible?
Correct me if I am wrong, but it looks like you are running this in Azure Releases, not Pipelines?
If that is the case, I think the variable you are looking for is $(Release.PrimaryArtifactSourceAlias).
See the section of the document that talks about release specific variables: https://learn.microsoft.com/en-us/azure/devops/pipelines/release/variables?view=azure-devops&tabs=batch#default-variables---release
Yes. This is completely achievable.
From your screenshot, you are using the Release Pipeline to deploy the Artifacts.
In your situation, $(Pipeline.Workspace) can only be used in a Build Pipeline.
Refer to this doc: Classic release and artifacts variables
You can use the variable $(System.ArtifactsDirectory) or $(System.DefaultWorkingDirectory); in a release, the downloaded artifact typically ends up under a path like $(System.ArtifactsDirectory)/<artifact alias>/drop.
The directory to which artifacts are downloaded during deployment of a release. The directory is cleared before every deployment if it requires artifacts to be downloaded to the agent. Same as Agent.ReleaseDirectory and System.DefaultWorkingDirectory.
Is there a way to set a build's priority in a YAML-based pipeline? There seem to be references to build priority in the Azure DevOps API, but nothing on how to do this via YAML. I thought there might be some docs in the Triggers section, but no.
We need this because we have some fast-building NuGet packages, but these get starved by slow-building pipelines, making turnaround time for packages painful.
The closest thing I could come up with as a workaround is agent demands in the YAML
demands:
- Agent.ComputerName -equals XYZ
to separate build pipelines, but this is a bit of a hack and doesn't use agents efficiently.
A way to set this in UI would be acceptable, but I couldn't seem to find anything.
Recently Azure DevOps introduced the ability to manually specify that a build/release runs next.
This manifests as a Run next button (image source).
So while you can't say "this pipeline always takes priority" yet, you can manually force a specific run to the front of the queue.
If you need a specific pipeline to always take priority, then you likely want to setup a separate agent pool just for those pipelines, or use demands as Leo Liu mentioned.
Setting build priority in yaml or UI
I'm afraid this feature is not yet supported in Azure DevOps at this moment.
There is a popular user voice request about it; you can upvote it and check the feedback on that ticket.
Currently, as a workaround, just like you did, set demands in the build definitions to force builds onto specific agents.
Hope this helps.
I installed the Promoted Builds Plugin for Jenkins and now I'm having trouble promoting a build from an existing job. Here is the scenario:
There is an existing Nightly Build job that runs every night running all the tests and metrics needed;
There is an existing Deploy Build that accepts a parameter ${BUILD_NUMBER} and deploys the build that has the corresponding ${BUILD_NUMBER} from the Nightly Build
Say the [Nightly Build] ran and successfully built the artifact #39
Now I can just run the [Deploy Build] passing in #39 as a parameter
The artifacts from [Nightly Build] #39 are going to be deployed
So far so good. Now is the part where I want to add the Build Promotions...
Is there a way to promote the Nightly Build #39 (notice that it was already built before) from the Deploy Build? Or maybe even from somewhere else? Quite frankly, I'm kind of lost here :(
I don't see a clear upstream/downstream relationship between them, because it is not a case of "always run this build and then the other": the [Deploy Build] is only executed sometimes, and not always right after the [Nightly Build].
Update as of version 2.23 of Parameterized Trigger Plugin:
With version 2.23+ the behavior changed (thanks AbhijeetKamble for pointing that out). Any parameter passed via the Predefined Parameters section of the calling (build) job has to exist in the called (deploy) job. Furthermore, the restrictions of the called job's parameters apply, so if the called job's parameter is a Choice, it has to have all possible values (from promotions) pre-populated, or you can just use the Text parameter type.
Solution
Yes, I have the exact same setup: a build job (based on SVN commits) and a manually executed deploy job. When the user selects any build from the build job (including older builds), they can then go to the Promotion Status link and execute various deploy promotions, for example Deploy to DEV, Deploy to QA, etc.
Here is how to set up the promotion on the build job:
You will need these plugins: Parameterized Trigger Plugin, Promoted Builds Plugin
You will also need to set up the default Archive the artifacts post-build action on this build job.
Check mark Promote builds when
Define Name "Deploy to DEV"
Under Criteria check mark Only when manually approved
Under Actions use Trigger/call builds on other projects
In Projects to build enter the name to your deploy job here
Check mark Block until the triggered projects finish their builds
Mark this build as failure if the triggered build is worse or equal to: FAILURE (adjust according to statuses of your deploy job)
Predefined parameters (Code A)
Code A:
Server=IP_of_my_dev_server
Job=$PROMOTED_JOB_NAME
BuildSelection=<SpecificBuildSelector><buildNumber>$PROMOTED_NUMBER</buildNumber></SpecificBuildSelector>
Above, in the Predefined parameters section, the names to the left of = are the parameters defined in your deploy job, and to the right of = are the values that will be assigned to those parameters when this promotion executes. It defines three parameters: Server, Job and BuildSelection.
The parameter Server= is my own, as my deploy job can deploy to multiple servers. However if your deploy job is hardcoded to always deploy to a specific location, you won't need that.
The Job= parameter is required, but the name of the parameter depends on what you've set up in your deploy job (I will explain that configuration below). The value $PROMOTED_JOB_NAME has to remain as is. This is an environment variable that the promotion process is aware of, and it refers back to the name of your build job (the one where the promotion process is configured).
The BuildSelection= parameter is required. This whole line has to remain as is. The value passed is $PROMOTED_NUMBER, which once again the promotion is aware of. In your example, it would be #39.
The Block until the triggered projects finish their builds check mark will make the promotion process wait until the deploy job finishes. If it is not checked, the promotion process will trigger the deployment job and quit with success. Waiting for the deploy job to finish has the benefit that if the deploy job fails, the promotion star will be marked as failed too.
(One little note here: the promotion star will appear successful while the deploy job is running. If there is a deploy failure, it will only change to failure after the deploy job finishes. Logical... but it can be a bit confusing if you look at the promotion star before the deployment has completed.)
Here is how to set up the deploy job:
You will need the Copy Artifacts plugin
Under This build is parameterized
Configure a parameter of type Choice (or Text) with the name Server (this name has to match the configuration in the promotion's Predefined Parameters in the previous section)
Choices: enter the list of possible server IPs that would be used by the promotion's Predefined Parameters in the previous section (see the update note below)
Configure a parameter of type Choice (or Text) with the name Job (this name has to match the configuration in the promotion's Predefined Parameters in the previous section)
Choices: enter the name of your build job as the default. This is only needed if you trigger the deploy job manually. When the deploy job is triggered from a promotion, the promotion will supply the value (the Job= from the Predefined parameters that we configured). Also, if no value is passed from the promotion's Predefined parameters, the first choice value will be used. If you have a 1-to-1 relationship between the build and deploy jobs, you can omit the Job= parameter in the promotion's configuration.
Update: since version 2.23 of Parameterized Trigger, the available choices in the deploy job configuration have to have all possible values coming from the promotion's predefined parameters. If you don't want that limit, use "Text" instead of "Choice"
Configure a parameter of type Build selector for Copy Artifact with name: BuildSelection
Default Selector: Latest successful build
Under Build steps
Configure Copy artifacts from another project
In Project name enter ${Job}
At Which build choose Specified by a build parameter
In Parameter Name enter BuildSelection (without ${...}!)
Configure the rest accordingly for your artifacts that will be copied from build job to deploy job's workspace
Use the copied artifacts inside the deploy job as you need in order to deploy
So now, with the above deploy job, you can run it manually and select which build number from the build job you want to deploy (last build, last successful, by build number, etc.). You probably already have it configured very similarly. The promotion on the build job will basically execute the same thing and supply the build number, based on which promotion was executed.
Let me know if you got any issues with the instructions.
The marked answer is a great explanation for the question, but I would like to suggest a solution for those looking for "how-to-promote-a-specific-build-number-from-another-job-in-jenkins".
We can use a generalized solution to force a promotion using curl and the REST API. You can execute curl from shell or Groovy scripts.
Shell solution using curl:
user_name="jenkins_user"
user_token="token"
promotion_name="Test_Promote"
jenkins_url="http://build-server.com"
JOB_NAME="job_name"
JOB_NO="job-no"
url="--silent -u $user_name:$user_token $jenkins_url/job/$JOB_NAME/$JOB_NO/promotion/forcePromotion?name=$promotion_name"
curl $url
Groovy solution:
user_name="jenkins_user"
user_token="token"
promotion_name="Test_Promote"
jenkins_url="http://build-server.com"
JOB_NAME="job_name"
JOB_NO="job-no"
def response = "curl -u $user_name:$user_token $jenkins_url/job/$JOB_NAME/$JOB_NO/promotion/forcePromotion?name=$promotion_name".execute().text
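If you would rather script this from Python than shell or Groovy, a rough equivalent of the same REST call (using the requests library and the same placeholder values as above) could look like this:

import requests

user_name = "jenkins_user"
user_token = "token"
jenkins_url = "http://build-server.com"
job_name = "job_name"
build_no = "job-no"
promotion_name = "Test_Promote"

# Call the same forcePromotion endpoint used in the curl examples above.
response = requests.get(
    f"{jenkins_url}/job/{job_name}/{build_no}/promotion/forcePromotion",
    params={"name": promotion_name},
    auth=(user_name, user_token),
)
print(response.status_code)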
How to generate a Jenkins user token: https://jenkins.io/blog/2018/07/02/new-api-token-system/