I am attempting to export a build definition from one project to another. The projects are in different collections. I have collection admin, so permissions should be fine, but just to be sure I also granted myself build and project admin.
I exported the build as JSON using the VSTS UI in the source project, then imported it into the target project. All the tasks are present, but the parameters are grayed out, and I cannot enable/disable tasks. There are some parameters that need to be filled in, such as the build agent, and I was able to select the appropriate agent. At this point the UI is not indicating any outstanding items I would need to address before saving, yet the Save, Discard, and Queue options are all grayed out.
I can add a new phase, but I can't add any tasks to that phase. I also tried bringing up the YAML and compared it to the YAML in the source project; there were no differences.
Why can't I save my imported build definition?
The import succeeded after replacing all instances of the project and collection IDs in the JSON I was attempting to import. At the time of this post, those changes must be completed manually.
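A rough sketch of that manual replacement in PowerShell (the GUIDs are placeholders for the source and target project IDs; repeat the same replace for the collection IDs):
$sourceProjectId = "00000000-0000-0000-0000-000000000000"   # placeholder
$targetProjectId = "11111111-1111-1111-1111-111111111111"   # placeholder
(Get-Content -Path .\exported-build.json -Raw) -replace $sourceProjectId, $targetProjectId |
    Set-Content -Path .\exported-build.json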
UPDATE:
I tried just removing the offending properties rather than replacing them and that worked. I created a simple script to clear out those properties:
Param(
    [parameter(Mandatory=$true)]
    [alias("p")]
    $path
)

# Properties that prevent the exported definition from importing into another project/collection
$removeProperties = @("triggers","metrics","_links","authoredBy","queue","project")

$json = Get-Content -Path $path -Raw | ConvertFrom-Json
foreach ($property in $removeProperties) {
    $json.PSObject.Properties.Remove($property)
}
$json | ConvertTo-Json -Depth 100 | Set-Content -Path $path -Force
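For reference, a hypothetical invocation, assuming the script above is saved as Remove-BuildDefinitionProperties.ps1 next to the exported definition:
.\Remove-BuildDefinitionProperties.ps1 -path .\exported-build.json
After cleaning the file, re-import it through the same UI import dialog.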
I'm running an AWS CLI command to export all the subnets, then running the command below in PowerShell. When doing this, a couple of fields (Tags, CIDR blocks) come out as just "System.Object[]" in the CSV. If I could get them to expand or 'be selected', that would be great. Specifically, I only really want the tag "Name", but I'll take whatever is easier.
Get-Content -Path C:\Working\subnets.json |
ConvertFrom-Json |
Select-Object -expand * |
ConvertTo-Csv -NoTypeInformation |
Set-Content C:\Working\subnets12.csv
The JSON was created from: aws ec2 describe-subnets --output=json
Then I copied and pasted everything inside the { } into a text file.
I assume there is a better way to create the JSON file from the bash prompt, and it might even be possible to convert to CSV from bash instead of PowerShell.
I was expecting to get a CSV that had all the details from the original JSON file.
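One possible way to flatten those fields in PowerShell, as a sketch: it assumes subnets.json holds the unmodified output of aws ec2 describe-subnets (which wraps everything in a top-level Subnets array), and it pulls the Name tag and the IPv6 CIDR associations out with calculated properties.
# Redirect the CLI output straight to a file instead of copy/pasting:
#   aws ec2 describe-subnets --output json > C:\Working\subnets.json

$subnets = (Get-Content -Path C:\Working\subnets.json -Raw | ConvertFrom-Json).Subnets

$subnets |
    Select-Object SubnetId, VpcId, CidrBlock, AvailabilityZone,
        @{ Name = 'Name'; Expression = { ($_.Tags | Where-Object Key -eq 'Name').Value } },
        @{ Name = 'Ipv6Cidrs'; Expression = { $_.Ipv6CidrBlockAssociationSet.Ipv6CidrBlock -join ';' } } |
    Export-Csv -Path C:\Working\subnets12.csv -NoTypeInformation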
Every time I download a "power portal" with the pac CLI command:
pac paportal download -id <guid> --path ./ --overwrite true
Many of the files seem to be regenerated with new short GUIDs on the end, even though they haven't changed, and the sitesettings.yml file gets re-ordered so it shows a bunch of changes.
After making one change to a site setting, I ended up with 134 changes.
Can this be avoided? It makes it frustrating to track actual changes in source control.
If you have multiple records with the same name, the short GUID is appended because files/folders cannot have the same name. If you avoid creating records with exactly the same names (whether active or inactive), you should not face this issue.
Is there any command to list all the GCP project quotas in a single Excel/CSV file with only one header row at the top? I tried applying a FOR loop over quota management, but the output includes the header again for every project that gets appended.
gcloud compute project-info describe --flatten=quotas --format='csv(quotas.metric,quotas.limit,quotas.usage)' works for a single project, but I need this for all projects at the org level and folder level in a single Excel file.
I crafted this bash script, which iterates over all the projects visible to the account used with gcloud; feel free to modify it according to your use case:
#!/bin/bash
# unique header
echo "ProjectId,Metric,Quota,Usage"

gcloud projects list --format="csv[no-heading](projectId,name)" |\
while IFS="," read -r ID NAME
do
    RESULT=$(\
        gcloud compute project-info describe --project ${ID} \
            --flatten=quotas \
            --format="csv[no-heading](quotas.metric,quotas.limit,quotas.usage)")
    # Prefix ${ID} to each line in the result
    for LINE in ${RESULT}
    do
        echo ${ID},${LINE}
    done
done
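To get everything into a single CSV that opens in Excel, redirect the script's output to a file (assuming it is saved as quotas.sh):
bash quotas.sh > all-project-quotas.csv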
It is important that the authenticated account has the Viewer role (roles/viewer) on all of the projects involved, and the Compute Engine API must be enabled in those projects.
Having said that, you can create a service account and grant it access at the organization or folder level in order to get all the necessary information.
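For example, granting a service account the Viewer role at the organization level and enabling the API in one project could look like the following (the organization ID, project ID, and service account address are placeholders):
gcloud organizations add-iam-policy-binding 123456789012 --member="serviceAccount:quota-reader@my-project.iam.gserviceaccount.com" --role="roles/viewer"
gcloud services enable compute.googleapis.com --project=my-project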
I want to write an automated job that goes through the files stored on my EC2 instance's storage and checks their last modified date. If a file has not been modified for more than (x) days, it should automatically get archived to my S3.
Also, I don't want to convert the files to zip files for now.
What I don't understand is how to specify the path on the EC2 instance's storage, and how to express the condition on the last modified date.
aws s3 sync your-new-dir-name s3://your-s3-bucket-name/folder-name
Please correct me if I understand this wrong: your requirement is to archive the older files.
So you need a script that checks the modified time, and if a file has not been modified for X days, you want to free up space by archiving it to S3 storage; you don't wish to keep the file locally.
Is that correct?
Here is some advice:
1. Please provide OS information; this would help us suggest either a shell script or a PowerShell script.
Here is a PowerShell script:
$fileList = Get-ChildItem "c:\pathtofolder" -File
foreach ($file in $fileList) {
    $file | Select-Object -Property FullName, LastWriteTime |
        Export-Csv 'C:\fileAndDate.csv' -NoTypeInformation -Append
}
Then use aws s3 cp to copy the files to your S3 bucket.
You can do the same with a shell script.
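Putting those two pieces together, here is a minimal PowerShell sketch; the folder path, bucket name, and 30-day cutoff are placeholders, and aws s3 mv removes the local copy after a successful upload, which matches not keeping the file locally:
$sourceDir = "C:\data\files"                      # placeholder: folder to scan
$bucket    = "s3://your-s3-bucket-name/archive"   # placeholder: destination bucket/prefix
$cutoff    = (Get-Date).AddDays(-30)              # the "(x) days" threshold

Get-ChildItem -Path $sourceDir -File -Recurse |
    Where-Object { $_.LastWriteTime -lt $cutoff } |
    ForEach-Object {
        # upload the file, then delete the local copy once the upload succeeds
        aws s3 mv $_.FullName "$bucket/$($_.Name)"
    }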
Using aws s3 sync is a great way to back up files to S3. You could use a command like:
aws s3 sync /home/ec2-user/ s3://my-bucket/ec2-backup/
The first parameter (/home/ec2-user/) is where you specify the source of the files. I recommend only backing up user-created files, not the whole operating system.
There is no capability for specifying a number of days; I suggest you just copy all files.
You might choose to activate Versioning to keep copies of all versions of files in S3. This way, if a file gets overwritten you can still go back to a prior version. (Storage charges will apply for all versions kept in S3.)
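If you go that route, versioning can be enabled with a single CLI call (the bucket name is a placeholder):
aws s3api put-bucket-versioning --bucket my-bucket --versioning-configuration Status=Enabled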
We're using TeamCity 7 and wondered if it's possible to have a step run only if a previous one has failed. The options in the build step configuration give you the choice to execute a step only if all previous steps were successful, even if a step failed, or always.
Is there a means to execute a step only if a previous one failed?
There's no way to set up a step to execute only if a previous one failed.
The closest I've seen is to set up a second build with a "Finish Build" trigger, which always executes after your first build finishes (regardless of success or failure).
Then, in that second build, you can use the TeamCity REST API to determine whether the last execution of the first build was successful. If it wasn't, you can do whatever it is you want to do.
As a workaround, it is possible to set a variable via a command-line step that only runs on success, and check that variable later.
echo "##teamcity[setParameter name='env.BUILD_STATUS' value='SUCCESS']"
This can then be queried inside a PowerShell step that is set to run even if a previous step fails.
if ($env:BUILD_STATUS -ne "SUCCESS") {
    # a previous step failed; put your failure-handling logic here
}
I was surprised that TeamCity still does not support this out of the box in 2021, but the API gives you a lot of useful features and you can do it.
As a solution, you can write a bash script that calls the TeamCity API:
set up an API key in My Settings & Tools => Access Tokens
create an env variable with the API token
create a step in your configuration with Execute step: Even if some of the previous steps failed
build your own container with jq, or use any existing container with jq support
place this bash script in that step:
#!/bin/bash
set -e -x

declare api_response=$(curl -v -H "Authorization: Bearer %env.teamcity_internal_api_key%" %teamcity.serverUrl%/app/rest/latest/builds?locator=buildType:%system.teamcity.buildType.id%,running:any,canceled:all,count:2\&fields=build\(id,status\))
declare current_status=`echo ${api_response} | jq -r '.build[0].status'`
declare prev_status=`echo ${api_response} | jq -r '.build[1].status'`

if [ "$current_status" != "$prev_status" ]; then
    : # your code here
fi
Some explanation of the code above: the API call fetches the last 2 builds of the current buildType, i.e. the current build and the previous one. The script assigns their statuses to variables and compares them in the if statement. If you need to run some code only when the current build failed, use:
if [ "$current_status" = "FAILURE" ]; then
write your code here
fi
Another solution is webhooks.
This plugin can send a webhook to a URL when a build fails, too.
On the webhook side, you can handle some actions, for example sending a notification.