Run a "PUT" request using Rest-Api on Azure DevOps - postman

I'm trying to run a PUT request using Postman to change the retention rules of a specific build definition, in Azure DevOps, and change the daysToKeep value.
But I keep getting the error:
"The request specifies pipeline ID 1722 but the supplied pipeline has ID 0."
Any idea where I'm going wrong?

In order to change/update any parameter on the build definition, first run a GET request.
The resulting JSON output should be used as the body for the PUT request.
Use this body and change/update the relevant parameter(s) you need.
It is very important to increase the "revision" parameter, located in the root of the JSON output, by 1 (for example, if the current value is 97, for the next run it should be 98).
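For reference, here is a minimal sketch of these steps using Python's requests library (the organization, project, definition id, PAT and the retentionRules path are placeholders/assumptions, not taken from the original question):

import requests

org, project, definition_id = "my-org", "my-project", 1722  # placeholders
url = f"https://dev.azure.com/{org}/{project}/_apis/build/definitions/{definition_id}?api-version=6.0"
auth = ("", "<personal-access-token>")  # PAT passed as the basic-auth password

# 1. GET the current definition and use it as the PUT body.
definition = requests.get(url, auth=auth).json()

# 2. Change/update the parameter(s) you need, e.g. the retention daysToKeep.
definition["retentionRules"][0]["daysToKeep"] = 30

# 3. Increase the revision by 1, as described above.
definition["revision"] += 1

# 4. PUT the full JSON back to the same endpoint.
response = requests.put(url, json=definition, auth=auth)
print(response.status_code)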
Special thanks to Tinxuanna for directing me to this solution!

@ShaiO Please take a look at this link: List pipelines (Azure DevOps). It may help you because it gets a list of the pipelines.
Then you can create a pipeline following the documentation.
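For example, a minimal sketch of listing pipelines with Python and a personal access token (the organization and project names are placeholders):

import requests

url = "https://dev.azure.com/my-org/my-project/_apis/pipelines?api-version=6.0-preview.1"
resp = requests.get(url, auth=("", "<personal-access-token>"))  # PAT as the basic-auth password
for pipeline in resp.json().get("value", []):
    print(pipeline["id"], pipeline["name"])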

Related

AWS SFN JSONpath works in dataflow but fails on deployment

I have the following JSONPath in my task Parameters section:
"foo.$": "$.MapResult[].Payload[].data"
I tested it in the AWS console data flow simulator and it worked fine, returning the list of values for the "data" key from the Payload list as expected, but when I tried to deploy it I got:
The value for the field 'foo.$' must be a valid JSONPath or a valid intrinsic function call (at /States/...-Task/Parameters)
OK, I sorted it out. It seems like a bug in Step Functions, probably, but anyway: despite the fact that it worked in the data flow simulator and is a valid JSONPath, to make it work in the state machine you need to add a wildcard inside the brackets, so it should look like this:
"foo.$": "$.MapResult[*].Payload[*].data"

How to edit an already deployed pipeline in Data Fusion?

I am trying to edit a pipeline that is already deployed. I understand that we can duplicate the pipeline and rename it, but how can I make a change in the existing pipeline? Renaming would require a change in the production scheduling jobs as well.
There is one way, through the HTTP calls executor.
Open https://<cdf instance url ..datafusion.googleusercontent.com>/cdap/httpexecutor
Select PUT (to change the pipeline code) from the drop-down and give the path
namespaces/<namespaces_name>/apps/<pipeline_name>
Go to the body part and paste the new pipeline code (export the code of the updated pipeline, i.e. JSON formatted).
Click on SEND and the response should come back as "Deploy Complete" with status code 200.
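If you prefer to script it instead of using the httpexecutor UI, a rough sketch of the same PUT against the CDAP REST API with Python is shown below (the instance URL, namespace, pipeline name and token handling are placeholders and may differ for your instance):

import json
import requests

base = "https://<cdf-instance-url>.datafusion.googleusercontent.com/api"  # placeholder instance endpoint
path = "/v3/namespaces/<namespaces_name>/apps/<pipeline_name>"
headers = {"Authorization": "Bearer <access-token>", "Content-Type": "application/json"}

# Body: the exported JSON of the updated pipeline.
with open("updated_pipeline.json") as f:
    pipeline_spec = json.load(f)

resp = requests.put(base + path, headers=headers, json=pipeline_spec)
print(resp.status_code, resp.text)  # expect 200 / "Deploy Complete"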

Please check if pipelines with the same name were previously submitted to a different endpoint

I'm getting the below error whenever I try to run a pipeline job using a Vertex AI managed Jupyter notebook.
Here I make sure that I'm creating a unique pipeline name every time by appending a timestamp to the pipeline name string, e.g. my display name will be like AutoML-Pipeline-DS-v4-1637251623, but I'm still getting errors like "Please check if pipelines with the same name were previously submitted to a different endpoint".
Here I'm using google-cloud-aiplatform==1.4.3 to run the pipeline job. Also, I'm following this example from GCP.
com.google.cloud.ai.platform.common.errors.AiPlatformException: code=INVALID_ARGUMENT, message=User-specified resource ID must match the regular expression '[a-z0-9][a-z0-9-]{0,127}', cause=null; Failed to update context (id = projects/xxxx/locations/us-central1/metadataStores/default/contexts/AutoML-Pipeline-DS-v4-1637251623). Please check if pipelines with the same name were previously submitted to a different endpoint. If so, one may submit the current pipeline with a different name to avoid reusing the existing MLMD Context from the other endpoint.; Failed to update pipeline and run contexts: project_number=xxxx, job_id=xxxx.; Failed to handle the job: {project_number = xxxx, job_id = xxxx}
Please check the regex: the user-specified resource ID must match '[a-z0-9][a-z0-9-]{0,127}', i.e. only lowercase letters, digits and hyphens. The uppercase letters in AutoML-Pipeline-DS-v4-1637251623 are what fail the check, so the name should look like automl-pipeline-ds-v4-1637251623.
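As a minimal sketch of that fix (the project, bucket and template path below are placeholders), the timestamped name works once it is all lowercase with hyphens:

import time
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Lowercase letters, digits and hyphens only, so it matches '[a-z0-9][a-z0-9-]{0,127}'.
job_id = f"automl-pipeline-ds-v4-{int(time.time())}"

job = aiplatform.PipelineJob(
    display_name=job_id,
    template_path="pipeline.json",
    pipeline_root="gs://my-bucket/pipeline-root",
    job_id=job_id,
)
job.run()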

Issues in Extracting data from Big Query from second time using Dataflow [ apache beam ]

I have a requirement to extract data from BigQuery table using Dataflow and write to GCS bucket.
The Dataflow job is built using Apache Beam (Java). It extracts from BigQuery and writes to GCS perfectly the first time.
But when a second Dataflow job is spun up to extract data from the same table after the first pipeline executes successfully, it does not extract any data from BigQuery. The only error I can see in the Stackdriver log is:
"Request failed with code 409, performed 0 retries due to IOExceptions, performed 0 retries due to unsuccessful status codes, HTTP framework says request can be retried, (caller responsible for retrying): https://www.googleapis.com/bigquery/v2/projects/dataflow-begining/jobs"
The sample code I have used for extraction is:
pipeline.apply("Extract from BQ", BigQueryIO.readTableRows().fromQuery("SELECT * from bq_test.employee"))
Any help is appreciated
I have seen this happen previously when using templates. As per the docs here, in the "Usage with templates" section:
When using read() or readTableRows() in a template, it's required to specify BigQueryIO.Read.withTemplateCompatibility(). Specifying this in a non-template pipeline is not recommended because it has somewhat lower performance.
and in the withTemplateCompatibility section:
Use new template-compatible source implementation. This implementation is compatible with repeated template invocations.
If so, you should be using:
pipeline.apply("Extract from BQ", BigQueryIO
.readTableRows()
.withTemplateCompatibility()
.fromQuery("SELECT * from bq_test.employee"))

Failed to get the current sub/segment from the context and addAnnotation of NULL issue in NodeJS

I am getting "Failed to get the current sub/segment from the context" with node lambda. AFter adding environment variable as suggested in another post, I am getting addAnnotation of NULL. Because of this my test are getting failed.
Is there any workaround to make this pass? Help is much appreciated.
Which other post are you referring to?
Can you post the code to reproduce?
When in Lambda, the X-Ray SDK does not have access to the segment. The segment is created by Lambda and sent independently. In order for the X-Ray service to reconstruct the full segment/subsegment structure, the SDK picks up an environment variable set by Lambda and creates a "facade segment". This represents a placeholder to build off of and cannot be modified.
Typically we advise to create a new subsegment and add the annotation there.
Keep in mind, this will only work within the handler.
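The question is about the Node.js SDK, but as a rough illustration of the same pattern, here is what it looks like with the Python aws_xray_sdk inside a Lambda handler (the subsegment name and annotation values are just examples):

from aws_xray_sdk.core import xray_recorder

def handler(event, context):
    # Annotations cannot go on the facade segment, so open a subsegment first.
    subsegment = xray_recorder.begin_subsegment("handler-annotations")
    xray_recorder.put_annotation("order_id", "12345")  # attached to the open subsegment
    xray_recorder.end_subsegment()
    return {"status": "ok"}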