I'm trying to create snapshots of my VM instance on Google Cloud Platform with a custom label, but it's not working as expected. I'm sending the following POST request body to the API, following this documentation: Google docs
{
"name":"<SnapshotName>",
"labels": {
"<LabelKey>":"<LabelValue>"
}
}
This gives me a 200 OK response, but no label appears.
{
"kind": "compute#operation",
"id": "<id>",
"name": "<name>",
"zone": "<Zone Link>",
"operationType": "createSnapshot",
"targetLink": "<Target Link>",
"targetId": "<Target ID>",
"status": "PENDING",
"user": "<User>",
"progress": 0,
"insertTime": "<Time>",
"selfLink": "<Self Link>"
}
Additionally, I tried the syntax described in the "Labeling Resources" documentation: Google Labeling Resources
{
"name":"<SnapshotName>",
"labels": [{
"<Key>":"<LabelKey>"
"<Value>":"<LabelValue>"
}]
}
This gave me the same result.
In the web interface it's possible to create a snapshot and label it manually, but I would like to create snapshots with a custom label via the API.
Am I doing something wrong, or is it just broken?
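One workaround that has been reported for labels being dropped at creation time is to set them in a second step: create the snapshot, wait for the operation to finish, then call the snapshots.setLabels method with the current labelFingerprint returned by snapshots.get. The sketch below only builds the URL and body for that call; the project, snapshot name, and fingerprint are placeholders, and the actual HTTP call and authentication are omitted.

```python
# Sketch of the two-step workaround: create the snapshot first, then call
# snapshots.setLabels with the fingerprint obtained from snapshots.get.
# All names here are placeholders, not real resources.

BASE = "https://compute.googleapis.com/compute/v1"

def set_labels_request(project, snapshot, labels, fingerprint):
    """Build the URL and POST body for the snapshots.setLabels call."""
    url = f"{BASE}/projects/{project}/global/snapshots/{snapshot}/setLabels"
    body = {"labels": labels, "labelFingerprint": fingerprint}
    return url, body

url, body = set_labels_request("my-project", "my-snapshot",
                               {"env": "test"}, "42WmSpB8rSM=")
print(url)
print(body)
```

Note that setLabels rejects the request if the labelFingerprint is stale, so the fingerprint must come from a get call made after the snapshot reaches READY.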
I'm trying to follow this tutorial about setting up a schedule to trigger a pipeline using a Cloud Function and Cloud Scheduler. I've followed the tutorial to the letter, to my knowledge. I made sure the pipeline runs without errors and set up the Cloud Function. For setting up the job:
I set the frequency to 0 9 * * 1
Set the URL to https://us-central1-[redacted].cloudfunctions.net/hello-world-scheduled-pipeline-function
For the Body section, following the guide I set
{
"pipeline_spec_uri": "gs://[redacted]/test/tab_classif_pipeline_test.json",
"parameter_values": {
"greet_name": "test"
}
}
Added an OIDC token, set it to a service account that can invoke Cloud Functions (and has the other job-scheduler permissions), and left all other fields at their defaults.
After trying to run it manually, I get the following error, which I cannot for the life of me interpret:
{
"insertId": "j5h0k0flkeq6j",
"jsonPayload": {
"url": "https://us-central1-[redacted].cloudfunctions.net/hello-world-scheduled-pipeline-function",
"jobName": "projects/[redacted]/locations/us-central1/jobs/hello-world-cloud-scheduler",
"targetType": "HTTP",
"status": "INTERNAL",
"#type": "type.googleapis.com/google.cloud.scheduler.logging.AttemptFinished"
},
"httpRequest": {
"status": 500
},
"resource": {
"type": "cloud_scheduler_job",
"labels": {
"job_id": "hello-world-cloud-scheduler",
"project_id": "[redacted]",
"location": "us-central1"
}
},
"timestamp": "2022-04-13T15:09:04.120977064Z",
"severity": "ERROR",
"logName": "projects/[redacted]/logs/cloudscheduler.googleapis.com%2Fexecutions",
"receiveTimestamp": "2022-04-13T15:09:04.120977064Z"
}
To my knowledge I've followed this guide exactly, so why isn't it working? Is there something wrong with how I set things up in the Body section? What could it be?
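The status INTERNAL with an HTTP 500 in the Scheduler log means the Cloud Function itself returned an error, so the function's own logs are the place to look next. One way to rule out a body mismatch locally is to validate the payload the same way a function following this tutorial would read it. The field names below simply mirror the body shown above; whether the tutorial's function expects exactly these keys is an assumption.

```python
# The 500 comes from the function, not from Cloud Scheduler. A quick local
# check: validate the Scheduler body against the keys the function reads.
# Key names mirror the body shown in the question (an assumption about the
# tutorial's function, not a documented contract).

def validate_payload(payload):
    """Return the keys the scheduled-pipeline function expects but are missing."""
    required = ("pipeline_spec_uri", "parameter_values")
    return [k for k in required if k not in payload]

body = {
    "pipeline_spec_uri": "gs://bucket/test/tab_classif_pipeline_test.json",
    "parameter_values": {"greet_name": "test"},
}
print(validate_payload(body))  # [] means the body has the expected shape
```

If the body checks out, the next suspects are the function's runtime errors (visible in Cloud Logging under the function's resource) rather than the Scheduler configuration.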
I need to list all the compute instance snapshots successfully created in a project (only for compute instance types), along with the compute engine names.
I am using this API: https://compute.googleapis.com/compute/v1/projects/my-project/global/snapshots
It lists the snapshots and I get a response like this:
"items": [
{
"id": "36734343434334343",
"creationTimestamp": "2020-09-16T11:38:54.780-07:00",
"name": "backup-data1-us-central1-c-3234234324-202009161",
"status": "READY",
"sourceDisk": "https://www.googleapis.com/compute/v1/projects/my-project/zones/us-central1-c/disks/backup-data1",
"sourceDiskId": "323434232434970709",
"diskSizeGb": "10",
"storageBytes": "452416",
"storageBytesStatus": "UP_TO_DATE",
"selfLink": "https://www.googleapis.com/compute/v1/projects/my-project/global/snapshots/amtest-backup-data1-us-central1-c-3234234324-202009161",
"labelFingerprint": "23WmSpBrSM=",
"storageLocations": [
"us-central1"
],
"autoCreated": true,
"downloadBytes": "456717",
"kind": "compute#snapshot"
},
{
"id": "343486082509657007",
"creationTimestamp": "2020-09-17T11:38:56.840-07:00",
"name": "backup-data1-us-central1-c-3234234324-202009161",
"status": "READY",
"sourceDisk": "https://www.googleapis.com/compute/v1/projects/my-project/zones/us-central1-c/disks/backup-data1",
"sourceDiskId": "323434232434970709",
"diskSizeGb": "10",
"storageBytes": "0",
"storageBytesStatus": "UP_TO_DATE",
"selfLink": "https://www.googleapis.com/compute/v1/projects/my-project/global/snapshots/amtest-backup-data1-us-central1-c-20200917183856-n2ipabzb",
"labelFingerprint": "23WmSpB8rSM=",
"storageLocations": [
"us-central1"
],
"autoCreated": true,
"downloadBytes": "456717",
"kind": "compute#snapshot"
}
]
From this information, I need to find out which VM is associated with each snapshot. How can I find the Compute Engine instance for which a snapshot was created? Is there any REST API for finding the instance from the snapshot?
There is a little misunderstanding here: you snapshot a disk, not a VM. Indeed, you can detach the disk and attach it to another VM. You can also set the disk to multi-reader mode and attach it to several VMs.
So the question is slightly off. What you can do is list, among all your VMs, the disks attached to them, and then check whether a snapshot exists for each of those disks.
I created an AWS Lambda application API using the AWS Toolkit for .NET Core 3.1. It has two GET endpoints that expect JSON in the request body and return JSON as output. It does not require a database connection or any other AWS resources. Locally everything works fine and all tests pass. I published the app to my AWS account using the AWS Toolkit, which runs the CloudFormation template; again no problems, everything passes. This creates my AWS Lambda API app with my API endpoint. However, when I try to use it, I get "403 Forbidden" errors:
Another thing I noticed is that the default API Gateway endpoint type is Edge; I'm unsure whether that's causing the problem. I would like to set it to Private in the CloudFormation stack from the .NET Core level. I assume it is something to change here:
{
"AWSTemplateFormatVersion": "2010-09-09",
"Transform": "AWS::Serverless-2016-10-31",
"Description": "An AWS Serverless Application that uses the ASP.NET Core framework running in Amazon Lambda.",
"Resources": {
"AspNetCoreFunction": {
"Type": "AWS::Serverless::Function",
"Properties": {
"Handler": "AES.Protocol::AES.Protocol.LambdaEntryPoint::FunctionHandlerAsync",
"Runtime": "dotnetcore3.1",
"CodeUri": "",
"MemorySize": 256,
"Timeout": 30,
"Role": null,
"Policies": [
"AWSLambdaFullAccess"
],
"Events": {
"ProxyResource": {
"Type": "Api",
"Properties": {
"Path": "/{proxy+}",
"Method": "ANY"
}
},
"RootResource": {
"Type": "Api",
"Properties": {
"Path": "/",
"Method": "ANY"
}
}
}
}
}
},
"Outputs": {
"ApiURL": {
"Description": "API endpoint URL for Prod environment",
"Value": {
"Fn::Sub": "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/"
}
}
}
}
The previous question related to this API might be helpful.
I managed to find a solution. It seems CloudFront does not support GET requests with a body, so changing the GET requests to POST fixed the problem.
Can I see GCP billing by instance name? (Not by type)
I am trying to filter GCP billing by instance name; is it possible? I only managed to filter by GCP Compute Engine and instance type (n1-standard), etc.
I am trying to programmatically match the machineType associated with a GCP compute instance to the corresponding billing SKU, but am unable to find a key for direct association. For example, here is the response from the machineType API:
{
"kind": "compute#machineType",
"name": "n1-standard-32",
"description": "32 vCPUs, 120 GB RAM",
"guestCpus": 32,
"memoryMb": 122880,
"imageSpaceGb": 0,
"maximumPersistentDisks": 128,
"maximumPersistentDisksSizeGb": "65536",
"zone": "us-east1-b",
"isSharedCpu": false
}
And here is the corresponding SKU from the cloudbilling APIs:
"name": "services/XXXX/skus/XXXX",
"skuId": "XXXX",
"description": "Standard Intel N1 32 VCPU running in Americas",
"category": {
"serviceDisplayName": "Compute Engine",
"resourceFamily": "Compute",
"resourceGroup": "N1Standard",
"usageType": "OnDemand"
},
"serviceRegions": [
"us-central1",
"us-east1",
"us-west1"
],
"pricingInfo": [
{
"summary": "",
"pricingExpression": {
"usageUnit": "h",
"usageUnitDescription": "hour",
"baseUnit": "s",
"baseUnitDescription": "second",
"baseUnitConversionFactor": 3600,
"displayQuantity": 1,
"tieredRates": [
{
"startUsageAmount": 0,
"unitPrice": {
"currencyCode": "USD",
"units": "1",
"nanos": 520000000
}
}
]
},
"currencyConversionRate": 1,
"effectiveTime": "2018-02-22T12:00:16.647Z"
}
],
"serviceProviderName": "Google"
There doesn't seem to be a field with value n1-standard-32 in the billing SKU. How do we tie these two together as this page seems to do: https://cloud.google.com/compute/pricing?
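As far as I know there is indeed no shared key between the two responses, so tools that join them rely on heuristics: derive the machine family from the machineType name (n1-standard maps to resourceGroup N1Standard), pull the vCPU count out of the SKU description, and intersect the zone's region with serviceRegions. The description format is an observed convention, not a documented contract, so this sketch is fragile by design.

```python
import re

# Heuristic join between a machineType and a billing SKU: there is no shared
# key, so match on (1) machine family vs. category.resourceGroup, (2) vCPU
# count parsed out of the SKU description, and (3) region vs. serviceRegions.
# The "<N> VCPU" description format is an assumption, not a documented field.

def matches(machine_type, zone, sku):
    family, _, cpus = machine_type.rpartition("-")   # "n1-standard", "32"
    region = zone.rsplit("-", 1)[0]                  # "us-east1-b" -> "us-east1"
    group = family.replace("-", "").lower()          # "n1standard"
    m = re.search(r"(\d+)\s+VCPU", sku["description"], re.I)
    return (sku["category"]["resourceGroup"].lower() == group
            and m is not None and m.group(1) == cpus
            and region in sku["serviceRegions"])

sku = {"description": "Standard Intel N1 32 VCPU running in Americas",
       "category": {"resourceGroup": "N1Standard"},
       "serviceRegions": ["us-central1", "us-east1", "us-west1"]}
print(matches("n1-standard-32", "us-east1-b", sku))  # True
```

Custom machine types and SKUs priced per vCPU/GB rather than per machine shape would need extra handling; the pricing page presumably maintains this mapping by hand.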
You can create labels and add them to your instances in order to get a breakdown of the charges per instance. The label needs to be added to each instance; once added, you will be able to see the charges per instance in the billing reports by grouping them by label.
Creating and managing labels can be found here
You can use the Resource Manager API and perform a request such as
POST https://cloudresourcemanager.googleapis.com/v1beta1/projects
{
"labels": {
"color": "red"
},
"name": "myproject",
"projectId": "our-project-123"
}
You can also add and edit labels on your Compute Engine instances using gcloud commands.
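A detail worth knowing if you drive this via the REST API instead of gcloud: `gcloud compute instances add-labels` merges new labels into the existing ones, but the underlying instances.setLabels call replaces the whole label map. A raw REST caller therefore has to fetch the current labels, merge, and send the result. A minimal sketch of that merge step (the HTTP plumbing and labelFingerprint handling are omitted):

```python
# instances.setLabels replaces the entire label map, so "adding" a label via
# REST means: instances.get -> merge locally -> setLabels with the merged map
# (plus the current labelFingerprint). Sketch of the merge step only.

def add_labels(existing, new):
    """Merge new labels into the current map before calling setLabels."""
    merged = dict(existing)
    merged.update(new)
    return merged

current = {"env": "prod"}                     # as returned by instances.get
print(add_labels(current, {"team": "web"}))   # {'env': 'prod', 'team': 'web'}
```

Sending only the new label without merging would silently wipe the instance's existing labels, which then also breaks the per-label billing breakdown described above.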
When using ARM templates to deploy various Azure components you can use some functions. One of them is called listkeys and you can use it to return through the output the keys that were created during the deployment, for example when deploying a storage account.
Is there a way to get the keys when deploying a Power BI workspace collection?
According to the link you mentioned, if we want to use the listKeys function, we need to know the resource name and API version.
From the Azure Power BI workspace collection "get access keys" API, we can get the resource name
Microsoft.PowerBI/workspaceCollections/{workspaceCollectionName} and API version "2016-01-29".
So please try the following code; it works correctly for me.
"outputs": {
"exampleOutput": {
"value": "[listKeys(resourceId('Microsoft.PowerBI/workspaceCollections', parameters('workspaceCollections_tompowerBItest')), '2016-01-29')]",
"type": "object"
}
}
Check the created PowerBI Service from Azure portal
Whole ARM template I used:
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"workspaceCollections_tompowerBItest": {
"defaultValue": "tomjustforbitest",
"type": "string"
}
},
"variables": {},
"resources": [
{
"type": "Microsoft.PowerBI/workspaceCollections",
"sku": {
"name": "S1",
"tier": "Standard"
},
"tags": {},
"name": "[parameters('workspaceCollections_tompowerBItest')]",
"apiVersion": "2016-01-29",
"location": "South Central US"
}
],
"outputs": {
"exampleOutput": {
"value": "[listKeys(resourceId('Microsoft.PowerBI/workspaceCollections', parameters('workspaceCollections_tompowerBItest')), '2016-01-29')]",
"type": "object"
}
}
}