My code:
resources:
- name: snapshot-4
  type: compute.v1.disk
  properties:
    zone: asia-south1-a
    Kind: compute#snapshot
    sourceDisk: https://www.googleapis.com/compute/v1/projects/project-id/zones/asia-south1-a/disks/disk1
But it creates a disk; I want a snapshot of disk1.
Persistent Disk snapshots can currently only be created via the API, either through REST or the client libraries. At this time there is no way to create a PD snapshot from a Deployment Manager YAML. However, I can recommend that you create a feature request in the Google Cloud Platform issue tracker so your request can be reviewed.
I am experimenting with different containers for training and inference, based on a tutorial in the AWS SageMaker documentation. I'm using the deep learning containers provided by AWS here: https://github.com/aws/deep-learning-containers/blob/master/available_images.md, such as 763104351884.dkr.ecr.us-east-1.amazonaws.com/huggingface-pytorch-training:1.10.2-transformers4.17.0-gpu-py38-cu113-ubuntu20.04, to create models.
Model:
  Type: "AWS::SageMaker::Model"
  Properties:
    PrimaryContainer:
      Image: '763104351884.dkr.ecr.us-east-1.amazonaws.com/huggingface-pytorch-training:1.10.2-transformers4.17.0-gpu-py38-cu113-ubuntu20.04'
    ExecutionRoleArn: !GetAtt ExecutionRole.Arn
I am trying to see if I can download this image locally via docker pull, but what account do I need in order to download it?
I have an AWS account; would I be able to download from 763104351884.dkr.ecr.us-east-1.amazonaws.com with my free account?
How do I deploy a web page architecture from a GCP Cloud Deployment Manager YAML that includes static files in a storage bucket and a load balancer with a backend bucket connected to that storage?
We need the load balancer so we can connect it to GCP's Cloud CDN.
I think you need to create the resources based on Google's API in the Deployment Manager YAML script.
As I understand it, you need to connect a load balancer to a backend bucket, and connect the latter to a storage bucket. I will assume the bucket creation is not necessary.
So the resources you need are compute.beta.backendBucket and compute.v1.urlMap. The YAML file will look kind of like this:
resources:
- type: compute.beta.backendBucket
  name: backendbucket-test
  properties:
    bucketName: already-created-bucket
- type: compute.v1.urlMap
  name: urlmap-test
  properties:
    defaultService: $(ref.backendbucket-test.selfLink)
    hostRules:
    - hosts: ["*"]
      pathMatcher: "allpaths"
    pathMatchers:
    - name: "allpaths"
      defaultService: $(ref.backendbucket-test.selfLink)
      pathRules:
      - service: $(ref.backendbucket-test.selfLink)
        paths: ["/*"]
Note that the names are completely up to you. Also note the ref (reference) entries, which link the backendBucket created in the first step to the urlMap in the second.
It's worth mentioning that you will probably need more resources for a complete solution (specifically the frontend part of the load balancer), as sketched below.
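For reference, here is a minimal sketch of the frontend resources you could append to the same resources list; the resource names are just examples, and Cloud CDN itself can be turned on for the backend bucket through its enableCdn property:

- type: compute.v1.targetHttpProxy
  name: httpproxy-test
  properties:
    # Route incoming HTTP requests through the URL map defined above
    urlMap: $(ref.urlmap-test.selfLink)
- type: compute.v1.globalForwardingRule
  name: forwardingrule-test
  properties:
    # Public entry point of the load balancer on port 80
    target: $(ref.httpproxy-test.selfLink)
    IPProtocol: TCP
    portRange: "80"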
Hope it can help in some way,
Cheers!
You can follow this guide from Google on how to create a Load Balancer to serve static content from a bucket. Note that the bucket and its content must already exist; the content will not be created by DM.
Follow the gcloud steps, not the console steps. For each step, find the correct API call and create a separate resource in your deployment manager config for each step.
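For instance, the guide's "reserve an external IP address" step would become its own resource in the config, along these lines (the resource name here is just a placeholder):

resources:
- type: compute.v1.globalAddress
  name: website-ip
  properties:
    description: External IP reserved for the load balancer frontend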
I have a CloudFormation template that creates my RDS cluster using Aurora Serverless. I want the cluster to be created with the Data API enabled.
The option exists on the web console:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html
But I can't find it in the CloudFormation documentation.
How can I turn this option on from the template?
Set the EnableHttpEndpoint property to true, e.g.:
AWSTemplateFormatVersion: '2010-09-09'
Description: Aurora PostgreSQL Serverless Cluster
Resources:
  ServerlessWithDataAPI:
    Type: AWS::RDS::DBCluster
    Properties:
      Engine: aurora-postgresql
      EngineMode: serverless
      EnableHttpEndpoint: true
      ScalingConfiguration:
        ...
You can enable the Data API from CloudFormation by creating a custom resource backed by a Lambda function and enabling it through any of the available SDKs.
I use boto3 (Python), so the Lambda would contain code similar to this:
import boto3

client = boto3.client('rds')

# Enable the Data API (HTTP endpoint) on the target Aurora Serverless cluster
response = client.modify_db_cluster(
    DBClusterIdentifier='my-cluster-identifier',  # placeholder: use your real cluster ID
    EnableHttpEndpoint=True
)
Obviously, you need to handle the different custom resource request types and return success or failure from the Lambda. But to answer your question, this is the best possible way to set up the Data API via CloudFormation for now, IMHO.
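In the template itself, the custom resource that invokes that Lambda could look roughly like this sketch, declared under Resources alongside the cluster (the resource and function names here are assumptions, not from the original answer):

EnableDataApi:
  Type: Custom::EnableDataApi
  Properties:
    # ServiceToken tells CloudFormation which Lambda backs this custom resource
    ServiceToken: !GetAtt EnableDataApiFunction.Arn
    # Passed through in the event payload so the Lambda knows which cluster to modify
    ClusterIdentifier: !Ref ServerlessWithDataAPI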
For more information about the function (Boto3):
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/rds.html#RDS.Client.modify_db_cluster
Enabling the Data API is currently only possible in the web console. This feature is still in beta so things like CloudFormation support and availability outside of us-east-1 are still pending, and using the Data API in production should be done with caution as it may still change.
I'm trying to create a GKE regional cluster (a beta feature) with GCP Deployment Manager, but I get an error. Is there any way to use GKE beta features (including regional clusters) with Deployment Manager?
ERROR: (gcloud.beta.deployment-manager.deployments.create) Error in
Operation [operation-1525054837836-56b077fdf48e0-a571296c-604523fb]:
errors:
- code: RESOURCE_ERROR
location: /deployments/test-cluster1/resources/source-cluster
message: '{"ResourceType":"container.v1.cluster","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"v1 API cannot be used to access GKE regional clusters. See https://cloud.google.com/kubernetes-engine/docs/reference/api-organization#beta for more information.","status":"INVALID_ARGUMENT","statusMessage":"Bad Request","requestPath":"https://container.googleapis.com/v1/projects/project_id/zones/us-central1/clusters","httpMethod":"POST"}}'
The error message links to this GCP help page:
https://cloud.google.com/kubernetes-engine/docs/reference/api-organization#beta
I configured things as described there, but the error still appears.
My Deployment Manager YAML file looks like this:
resources:
- name: source-cluster
  type: container.v1.cluster
  properties:
    zone: us-central1
    cluster:
      name: source
      initialNodeCount: 3
Yet a zonal cluster works completely fine (the configuration below deploys without problems), so I think the error is related to the use of the container v1beta API in the deployment-manager commands.
resources:
- name: source-cluster
  type: container.v1.cluster
  properties:
    zone: us-central1-b
    cluster:
      name: source
      initialNodeCount: 3
Thanks.
The error message you are receiving appears to be related to the fact that you are attempting to use a beta feature while specifying the Deployment Manager resource as using the v1 API (i.e. container.v1.cluster). This means there's an inconsistency between the beta resource you are trying to create and the resource type you specified.
I've had a look into this and discovered that the ability to add regional clusters via Deployment Manager is a very recent addition to Google Cloud Platform as detailed in this public feature request which has only recently been implemented.
It seems you would need to specify the API type as 'gcp-types/container-v1beta1:projects.locations.clusters' for this to work, and rather than using the 'zone' or 'region' key in the YAML, you would instead use a parent property that includes the location.
So your YAML would look something like this (replace PROJECT_ID with your own).
resources:
- type: gcp-types/container-v1beta1:projects.locations.clusters
  name: source-cluster
  properties:
    parent: projects/PROJECT_ID/locations/us-central1
    cluster:
      name: source
      initialNodeCount: 3
I am creating an AWS EMR cluster running Spark using a Cloud Formation template. I am using Cloud Formation because that's how we create reproducible environments for our applications.
When I create the cluster from the web dashboard, one of the options is to add a Key Pair. This is necessary in order to access the nodes of the cluster via SSH. http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/EMR_CreateJobFlow.html
I can't see how to do the same when using Cloud Formation templates.
The template structure (see below) doesn't have the same attribute.
Type: "AWS::EMR::Cluster"
Properties:
AdditionalInfo: JSON object
Applications:
- Applications
BootstrapActions:
- Bootstrap Actions
Configurations:
- Configurations
Instances:
JobFlowInstancesConfig
JobFlowRole: String
LogUri: String
Name: String
ReleaseLabel: String
ServiceRole: String
Tags:
- Resource Tag
VisibleToAllUsers: Boolean
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-emr-cluster.html#d0e76479
I had a look at the JobFlowRole attribute, which is a reference to an instance profile (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-instanceprofile.html). Again, no sign of the KeyName.
Has anyone solved this problem before?
Thanks,
Marco
I solved this problem. I was just confused by the lack of naming consistency in Cloud Formation templates.
What is generally referred to as KeyName becomes Ec2KeyName under the JobFlowInstancesConfig.
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-emr-cluster-jobflowinstancesconfig.html#cfn-emr-cluster-jobflowinstancesconfig-ec2keyname
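To make that concrete, here is a minimal sketch of how the key pair slots into the template; the cluster name, release label, instance types, and key pair name are placeholders, not values from the question:

SparkCluster:
  Type: "AWS::EMR::Cluster"
  Properties:
    Name: spark-cluster
    ReleaseLabel: emr-5.29.0
    JobFlowRole: EMR_EC2_DefaultRole
    ServiceRole: EMR_DefaultRole
    Instances:
      # Ec2KeyName is the template equivalent of the console's "EC2 key pair" option
      Ec2KeyName: my-key-pair
      MasterInstanceGroup:
        InstanceCount: 1
        InstanceType: m4.large
      CoreInstanceGroup:
        InstanceCount: 2
        InstanceType: m4.large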