Google compute instance available information - google-cloud-platform

Is there a gcloud or Python API equivalent to the information Google supplies in the OS patch management GUI? The GUI lists compute instances with package updates available.
I can use gcloud compute os-config vulnerability-reports describe to view CVE reports, but it does not tell me whether a patch is available.
Any suggestions on how to obtain this information?

I think the method projects.locations.instances.inventories.list will provide you with all the required information.
As explained in the documentation, this method lists inventory data for all VM instances in the specified zone. After specifying the parent parameter, the output was the following:
{
"inventories": [
{
"osInfo": {
"longName": "Deb*** ******** ** (*****)",
"shortName": "*******",
"version": "***",
"architecture": "*******",
"kernelVersion": "* *** ***** *.**.***-* (****-**-**)",
"kernelRelease": "*.**.*-**-*****-****",
"osconfigAgentVersion": "**********",
"hostname": "***************"
},
"name": "***/****8**8***/l********s/*s-**as***-*/i******/2*******98**4/*****",
"updateTime": "****-**-*****:**:**.******"
}
],
"nextPageToken": "************=="
}
As explained in this document, the view parameter should be set to FULL; otherwise it defaults to BASIC. If FULL is not used, the method won't return the available packages and the ones already installed, as seen in this example:
{
"osInfo": {
"longName": "Deb*** ******** ** (*****)",
"shortName": "*********",
"version": "****",
"architecture": "*********",
"kernelVersion": "#********* (*********)",
"kernelRelease": "*********",
"osconfigAgentVersion": "*********",
"hostname": "*********"
},
"items": {
"availablePackage-google-clo*****************-********* ********": {
"id": "****availablePackage-goo************************************",
"originType": "*********",
"type": "*********",
"availablePackage": {
"aptPackage": {
"architecture": "*********",
"version": "*********",
"packageName": "***************************"
…
…
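If you want to script this, the same projects.locations.instances.inventories.list call can be made from Python. Below is a minimal sketch using google-auth against the v1 REST endpoint; the zone is a placeholder, and the loop simply picks out the availablePackage items that the FULL view returns, which correspond to the pending package updates shown in the patch management GUI.
import google.auth
from google.auth.transport.requests import AuthorizedSession

# Assumption: application-default credentials are configured; the zone below is a placeholder.
credentials, project = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"])
session = AuthorizedSession(credentials)

zone = "us-central1-a"
parent = f"projects/{project}/locations/{zone}/instances/-"
url = f"https://osconfig.googleapis.com/v1/{parent}/inventories"

resp = session.get(url, params={"view": "FULL", "pageSize": 100})
resp.raise_for_status()
for inventory in resp.json().get("inventories", []):
    # availablePackage items are the package updates reported as pending for the instance.
    for key, item in inventory.get("items", {}).items():
        if "availablePackage" in item:
            print(inventory["name"], key)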

Related

Where/how do I define a NotificationConfig in an AWS SSM Automation document?

Say I have an SSM document like the one below, and I want to be alerted when a run fails or doesn't finish for whatever reason:
{
"description": "Restores specified pg_dump backup to specified RDS/DB.",
"mainSteps": [
{
"action": "aws:runCommand",
"description": "Restores specified pg_dump backup to specified RDS/DB.",
"inputs": {
"DocumentName": "AWS-RunShellScript",
"Parameters": {
"commands": [
"blahblahblah"
],
"executionTimeout": "1800"
},
"Targets": [
{
"Key": "InstanceIds",
"Values": [
"i-xxxxxxxx"
]
}
]
},
"name": "DBRestorer",
"nextStep": "RunQueries"
},
The Terraform docs show that RunCommand documents should support a NotificationConfig where I can pass in my SNS topic ARN and declare which state transitions should trigger a message: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_maintenance_window_task#notification_config
However, I can't find any Amazon docs that actually show a notification configuration used in the document itself (not just on the maintenance window, which I have set up as automation, so it doesn't support it at the window level). So I'm not sure whether it belongs as a sub-parameter, or whether to define it with camel case or dash separation.
Try this:
{
"description": "Restores specified pg_dump backup to specified RDS/DB.",
"mainSteps": [
{
"action": "aws:runCommand",
"description": "Restores specified pg_dump backup to specified RDS/DB.",
"inputs": {
"DocumentName": "AWS-RunShellScript",
"NotificationConfig": {
"NotificationArn": "<<Replace this with a SNS Topic Arn>>",
"NotificationEvents": ["All"],
"NotificationType": "Invocation"
},
"ServiceRoleArn": "<<Replace this with an IAM role Arn that has access to SNS>>",
"Parameters": {
"commands": [
"blahblahblah"
],
"executionTimeout": "1800"
},
"Targets": [
{
"Key": "InstanceIds",
"Values": [
"i-xxxxxxxx"
]
}
]
},
"name": "DBRestorer",
"nextStep": "RunQueries"
},
...
]
}
Related documentation:
https://docs.aws.amazon.com/systems-manager/latest/userguide/automation-action-runcommand.html
https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_NotificationConfig.html#systemsmanager-Type-NotificationConfig-NotificationType
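The same NotificationConfig shape is what Run Command accepts when invoked directly, so you can sanity-check the SNS topic and service role with boto3 before baking them into the document. A minimal sketch; the instance ID, topic ARN, and role ARN are placeholders, not values from the question.
import boto3

ssm = boto3.client("ssm")

# Placeholders: replace the instance ID, SNS topic ARN, and IAM role ARN with your own.
response = ssm.send_command(
    DocumentName="AWS-RunShellScript",
    Targets=[{"Key": "InstanceIds", "Values": ["i-xxxxxxxx"]}],
    Parameters={"commands": ["blahblahblah"], "executionTimeout": ["1800"]},
    ServiceRoleArn="arn:aws:iam::123456789012:role/SnsNotificationRole",
    NotificationConfig={
        "NotificationArn": "arn:aws:sns:us-east-1:123456789012:my-topic",
        "NotificationEvents": ["All"],
        "NotificationType": "Invocation",
    },
)
print(response["Command"]["CommandId"])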

In Azure DevOps, is it possible to enumerate children pipeline build artifacts recursively with API?

In Azure DevOps, I want to get a list of recursive artifact elements from a pipeline build. It would be nice if I didn't have to download the whole artifact root object. Does anyone know how to do this with the current API?
The portal already supports this feature in the pipeline artifacts view. You can open and browse child artifacts, with the ability to download. The API, however, does not seem to support this use case.
Current API
https://learn.microsoft.com/en-us/rest/api/azure/devops/build/Artifacts/List?view=azure-devops-rest-6.0#buildartifact
I was able to find a request for the feature, but I'm not sure if it will be implemented soon.
https://developercommunity.visualstudio.com/idea/1300697/api-list-artifacts-enumerate-recursively-same-as-w.html
Has anyone else been able to work around this?
This is not documented, but you can use the same API call that Azure DevOps itself makes. It would be:
POST https://dev.azure.com/{org}/_apis/Contribution/HierarchyQuery?api-version=5.0-preview
Minimal JSON payload:
{
"contributionIds": [
"ms.vss-build-web.run-artifacts-data-provider"
],
"dataProviderContext": {
"properties": {
"artifactId": 111, //obtain this from https://dev.azure.com/{org}/{proj}/_apis/build/builds/####/artifacts
"buildId": 1234,
"sourcePage": {
"routeValues": {
"project": "[ADOProjectNameHere]"
}
}
}
}
}
In my case it was:
https://dev.azure.com/thecodemanual/_apis/Contribution/HierarchyQuery/project/4fa6b279-3db9-4cb0-aab8-e06c2ad550b2?api-version=5.0-preview.1
With a payload similar to this one:
{
"contributionIds": [
"ms.vss-build-web.run-artifacts-data-provider"
],
"dataProviderContext": {
"properties": {
"artifactId": 1158,
"buildId": 7875,
"sourcePage": {
"url": "https://dev.azure.com/thecodemanual/DevOps%20Manual/_build/results?buildId=7875&view=artifacts&pathAsName=false&type=publishedArtifacts",
"routeId": "ms.vss-build-web.ci-results-hub-route",
"routeValues": {
"project": "DevOps Manual",
"viewname": "build-results",
"controller": "ContributedPage",
"action": "Execute",
"serviceHost": "be1a2b52-5ed1-4713-8508-ed226307f634 (thecodemanual)"
}
}
}
}
}
You would get a response like this:
{
"dataProviderSharedData": {},
"dataProviders": {
"ms.vss-web.component-data": {},
"ms.vss-web.shared-data": null,
"ms.vss-build-web.run-artifacts-data-provider": {
"buildId": 7875,
"buildNumber": "20201114.2",
"definitionId": 72,
"definitionName": "kmadof.hadar",
"items": [
{
"artifactId": 1158,
"name": "/hadar.zip",
"sourcePath": "/hadar.zip",
"size": 1330975,
"type": "file",
"items": null
},
{
"artifactId": 1158,
"name": "/scripts",
"sourcePath": "/scripts",
"size": 843,
"type": "directory",
"items": [
{
"artifactId": 1158,
"name": "/scripts/check-hadar-settings.ps1",
"sourcePath": "/scripts/check-hadar-settings.ps1",
"size": 336,
"type": "file",
"items": null
},
{
"artifactId": 1158,
"name": "/scripts/check-webapp-settings.ps1",
"sourcePath": "/scripts/check-webapp-settings.ps1",
"size": 507,
"type": "file",
"items": null
}
]
}
]
}
}
}
You need to use a fully scoped Personal Access Token (PAT) to authorize your request.
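If you want to script the same call, a minimal Python sketch with a PAT looks like this; the organization, project name, build ID, and artifact ID are placeholders you would substitute with your own values.
import requests

# Placeholders: organization, PAT, project name, build ID, and artifact ID are assumptions.
org = "your-org"
pat = "your-personal-access-token"
url = f"https://dev.azure.com/{org}/_apis/Contribution/HierarchyQuery?api-version=5.0-preview"

payload = {
    "contributionIds": ["ms.vss-build-web.run-artifacts-data-provider"],
    "dataProviderContext": {
        "properties": {
            "artifactId": 1158,
            "buildId": 7875,
            "sourcePage": {"routeValues": {"project": "YourProjectName"}},
        }
    },
}

resp = requests.post(url, json=payload, auth=("", pat))
resp.raise_for_status()
provider = resp.json()["dataProviders"]["ms.vss-build-web.run-artifacts-data-provider"]
for item in provider["items"]:
    # Directories carry a nested "items" list; files have "items": null.
    print(item["type"], item["sourcePath"], item["size"])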
You can try the steps below:
Execute the "Artifacts - Get Artifact" endpoint of the Artifacts API. From the response body, you can see the value of "downloadUrl", like this:
https://artprodcus3.artifacts.visualstudio.com/{organization_ID}/{project_ID}/_apis/artifact/{object_ID}/content?format=zip
This URL is used to download (GET) the whole artifact as a ZIP file.
If you want to download only a specific sub-folder or file in the artifact, you can do the following.
To download a specific sub-folder in the artifact, execute the following endpoint:
GET https://artprodcus3.artifacts.visualstudio.com/{organization_ID}/{project_ID}/_apis/artifact/{object_ID}/content?format=zip&subPath={/path/to/the/folder}
For example:
GET https://artprodcus3.artifacts.visualstudio.com/{organization_ID}/{project_ID}/_apis/artifact/{object_ID}/content?format=zip&subPath=/ef-tools
This will download the folder "ef-tools" and its content as a ZIP file from your artifact "drop".
To download a specific file in the artifact, execute the following endpoint:
GET https://artprodcus3.artifacts.visualstudio.com/{organization_ID}/{project_ID}/_apis/artifact/{object_ID}/content?format=file&subPath={/path/to/the/file}
For example:
GET https://artprodcus3.artifacts.visualstudio.com/{organization_ID}/{project_ID}/_apis/artifact/{object_ID}/content?format=file&subPath=/ef-tools/migrate.exe
This will download the file "ef-tools/migrate.exe" from your artifact "drop".
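Putting the two requests together in Python: fetch downloadUrl from "Artifacts - Get Artifact", then append the subPath query. A minimal sketch; the organization, project, PAT, build ID, artifact name, and sub-path are placeholders.
import requests

# Placeholders: organization, project, PAT, build ID, artifact name, and sub-path are assumptions.
org, project, pat = "your-org", "your-project", "your-personal-access-token"
build_id, artifact_name = 1234, "drop"

# "Artifacts - Get Artifact" returns resource.downloadUrl (the ...?format=zip URL shown above).
meta_url = (f"https://dev.azure.com/{org}/{project}/_apis/build/builds/"
            f"{build_id}/artifacts?artifactName={artifact_name}&api-version=6.0")
download_url = requests.get(meta_url, auth=("", pat)).json()["resource"]["downloadUrl"]

# Narrow the download to one sub-folder by adding the subPath parameter.
folder_url = download_url + "&subPath=/ef-tools"
with requests.get(folder_url, auth=("", pat), stream=True) as r:
    r.raise_for_status()
    with open("ef-tools.zip", "wb") as f:
        for chunk in r.iter_content(chunk_size=8192):
            f.write(chunk)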

EMR cluster created with CloudFormation not shown

I have added an EMR cluster to a stack. After updating the stack successfully (CloudFormation), I can see the master and slave nodes in the EC2 console and I can SSH into the master node. But the AWS console does not show the new cluster. Even aws emr list-clusters doesn't show the cluster. I have triple-checked the region and I am certain I'm looking at the right one.
Relevant CloudFormation JSON:
"Spark01EmrCluster": {
"Type": "AWS::EMR::Cluster",
"Properties": {
"Name": "Spark01EmrCluster",
"Applications": [
{
"Name": "Spark"
},
{
"Name": "Ganglia"
},
{
"Name": "Zeppelin"
}
],
"Instances": {
"Ec2KeyName": {"Ref": "KeyName"},
"Ec2SubnetId": {"Ref": "PublicSubnetId"},
"MasterInstanceGroup": {
"InstanceCount": 1,
"InstanceType": "m4.large",
"Name": "Master"
},
"CoreInstanceGroup": {
"InstanceCount": 1,
"InstanceType": "m4.large",
"Name": "Core"
}
},
"Configurations": [
{
"Classification": "spark-env",
"Configurations": [
{
"Classification": "export",
"ConfigurationProperties": {
"PYSPARK_PYTHON": "/usr/bin/python3"
}
}
]
}
],
"BootstrapActions": [
{
"Name": "InstallPipPackages",
"ScriptBootstrapAction": {
"Path": "[S3 PATH]"
}
}
],
"JobFlowRole": {"Ref": "Spark01InstanceProfile"},
"ServiceRole": "MyStackEmrDefaultRole",
"ReleaseLabel": "emr-5.13.0"
}
}
The reason was the missing VisibleToAllUsers property, which defaults to false. Since I'm using AWS Vault (i.e. authenticating via the STS AssumeRole API), I'm effectively a different user every time, so I couldn't see the cluster. I couldn't update the stack to add VisibleToAllUsers either, as I was getting "Job flow ID does not exist".
The solution was to log in as the root user and fix things from there (I had to delete the cluster manually, but removing it from the stack template JSON and updating the stack would probably have worked if I hadn't messed things up already).
I then added the cluster back to the template (with VisibleToAllUsers set to true) and updated the stack as usual (AWS Vault).
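If you run into this again, the visibility flag can be checked without the console: list_clusters won't show a cluster that isn't visible to the calling principal (which is consistent with aws emr list-clusters coming back empty here), but describe_cluster works by cluster ID and reports the flag directly. A minimal boto3 sketch; the region and cluster ID are placeholders.
import boto3

# Placeholders: region and cluster ID are assumptions; take the ID from the CloudFormation resource.
emr = boto3.client("emr", region_name="us-east-1")
cluster = emr.describe_cluster(ClusterId="j-XXXXXXXXXXXXX")["Cluster"]
print(cluster["Name"], cluster["Status"]["State"], cluster.get("VisibleToAllUsers"))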

Create tagged VM instance snapshots

I'm trying to create snapshots of my VM instance on Google Cloud Platform with a custom tag, but it's currently not working as expected. I'm sending the following POST request body to the API, referring to this documentation (Google docs):
{
"name":"<SnapshotName>",
"labels": {
"<LabelKey>":"<LabelValue>"
}
}
This gives me a 200 OK response, but no label appears:
{
"kind": "compute#operation",
"id": "<id>",
"name": "<name>",
"zone": "<Zone Link>",
"operationType": "createSnapshot",
"targetLink": "<Target Link>",
"targetId": "<Target ID>",
"status": "PENDING",
"user": "<User>",
"progress": 0,
"insertTime": "<Time>",
"selfLink": "<Self Link>"
}
Additionally, I tried the syntax described in the "Labeling Resources" documentation (Google Labeling Resources):
{
"name":"<SnapshotName>",
"labels": [{
"<Key>":"<LabelKey>",
"<Value>":"<LabelValue>"
}]
}
This gave me the same result.
In the web interface it's possible to create snapshots and label them manually, but I would like to create them with a custom label via the API.
Am I doing something wrong, or is it just broken?
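For reference, this is what the documented request above looks like when issued from Python with google-auth; a minimal sketch for reproducing the behaviour, where the zone, disk, snapshot name, and label are placeholder assumptions.
import google.auth
from google.auth.transport.requests import AuthorizedSession

# Placeholders: zone, disk, snapshot name, and label are assumptions used only to reproduce the call.
credentials, project = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"])
session = AuthorizedSession(credentials)

zone, disk = "us-central1-a", "example-disk"
url = (f"https://compute.googleapis.com/compute/v1/projects/{project}"
       f"/zones/{zone}/disks/{disk}/createSnapshot")
body = {"name": "example-snapshot", "labels": {"environment": "test"}}

resp = session.post(url, json=body)
resp.raise_for_status()
# The call returns a zone operation (status PENDING), matching the response shown above.
print(resp.json().get("operationType"), resp.json().get("status"))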

List PowerBI workspace collection keys from arm template

When using ARM templates to deploy various Azure components, you can use certain functions. One of them is called listKeys, and you can use it to return, through the output, the keys that were created during the deployment, for example when deploying a storage account.
Is there a way to get the keys when deploying a Power BI workspace collection?
According to the link you mentioned, if we want to use the listKeys function, we need to know the resource name and API version.
From the Azure PowerBI workspace collection Get Access Keys API, we can get the resource name
Microsoft.PowerBI/workspaceCollections/{workspaceCollectionName} and the API version "2016-01-29".
So please try the following code; it works correctly for me:
"outputs": {
"exampleOutput": {
"value": "[listKeys(resourceId('Microsoft.PowerBI/workspaceCollections', parameters('workspaceCollections_tompowerBItest')), '2016-01-29')]",
"type": "object"
}
Check the created PowerBI service in the Azure portal.
The whole ARM template I used:
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"workspaceCollections_tompowerBItest": {
"defaultValue": "tomjustforbitest",
"type": "string"
}
},
"variables": {},
"resources": [
{
"type": "Microsoft.PowerBI/workspaceCollections",
"sku": {
"name": "S1",
"tier": "Standard"
},
"tags": {},
"name": "[parameters('workspaceCollections_tompowerBItest')]",
"apiVersion": "2016-01-29",
"location": "South Central US"
}
],
"outputs": {
"exampleOutput": {
"value": "[listKeys(resourceId('Microsoft.PowerBI/workspaceCollections', parameters('workspaceCollections_tompowerBItest')), '2016-01-29')]",
"type": "object"
}
}
}
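Once the deployment completes, the keys returned by listKeys end up in the deployment outputs, which you can also read programmatically instead of from the portal. A minimal sketch with the Azure SDK for Python (azure-identity and azure-mgmt-resource); the subscription ID, resource group, and deployment name are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Placeholders: subscription ID, resource group, and deployment name are assumptions.
client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")
deployment = client.deployments.get("<resource-group>", "<deployment-name>")

# exampleOutput holds the object returned by listKeys() in the template's outputs section.
print(deployment.properties.outputs["exampleOutput"]["value"])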