GKE Creation from Cloud Deployment Manager - google-cloud-platform

Waiting for create [operation-1544424409972-57ca55456bd22-84bb0f13-64975fdc]...failed.
ERROR: (gcloud.deployment-manager.deployments.create) Error in Operation [operation-1544424409972-57ca55456bd22-84bb0f13-64975fdc]: errors:
- code: CONDITION_NOT_MET
  location: /deployments/infrastructure/resources/practice-gke-clusters->$.properties->$.cluster.name
  message: |-
    InputMapping for field [cluster.name] for method [create] could not be set from input, mapping was: [$.ifNull($.resource.properties.cluster.name, $.resource.name)
    ], and evaluation context was:
    {
      "deployment" : {
        "id" : 4291795636642362677,
        "name" : "infrastructure"
      },
      "intent" : "CREATE",
      "matches" : [ ],
      "project" : "resources-practice",
      "requestId" : "",
      "resource" : {
        "name" : "practice-gke-clusters",
        "properties" : {
          "initialNodeCount" : 1,
          "location" : "asia-east2-a",
          "loggingService" : "logging.googleapis.com",
          "monitoringService" : "monitoring.googleapis.com",
          "network" : "$(ref.practice-gke-network.selfLink)",
          "subnetwork" : "$(ref.practice-gke-network-subnet-1.selfLink)"
        },
        "self" : { }
      }
    }
I always hit this error when I try to create a GKE cluster from Deployment Manager with the Jinja template below:
resources:
- name: practice-gke-clusters
  type: container.v1.cluster
  properties:
    network: $(ref.practice-gke-network.selfLink)
    subnetwork: $(ref.practice-gke-network-subnet-1.selfLink)
    initialNodeCount: 1
    loggingService: logging.googleapis.com
    monitoringService: monitoring.googleapis.com
    location: asia-east2-a

You are missing the cluster block under properties:
properties:
  cluster:
    name: practice-gke-clusters
    initialNodeCount: 3
    nodeConfig:
      oauthScopes:
      - https://www.googleapis.com/auth/compute
      - https://www.googleapis.com/auth/devstorage.read_only
      - https://www.googleapis.com/auth/logging.write
      - https://www.googleapis.com/auth/monitoring
Modify the initialNodeCount and oauthScopes as required.
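For reference, here is a rough sketch of the question's template with the missing cluster block folded in; the zone key (replacing the original location property) and the placement of the network references inside cluster are assumptions, so verify them against the container.v1.cluster schema:
resources:
- name: practice-gke-clusters
  type: container.v1.cluster
  properties:
    zone: asia-east2-a
    cluster:
      name: practice-gke-clusters
      initialNodeCount: 1
      loggingService: logging.googleapis.com
      monitoringService: monitoring.googleapis.com
      network: $(ref.practice-gke-network.selfLink)
      subnetwork: $(ref.practice-gke-network-subnet-1.selfLink)
      nodeConfig:
        oauthScopes:
        - https://www.googleapis.com/auth/compute
        - https://www.googleapis.com/auth/devstorage.read_only
        - https://www.googleapis.com/auth/logging.write
        - https://www.googleapis.com/auth/monitoring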

Related

How to start %index% with 1 in OS::Heat::ResourceGroup

I use %index% in OS::Heat::ResourceGroup and it starts with a 0 value.
I want it to start with the value of 1. How to do it?
resources:
  server_group:
    type: OS::Heat::ResourceGroup
    properties:
      count: 3
      resource_def:
        type: OS::Nova::Server
        properties:
          name: { get_param: [ server_names, '%index%' ] }
I tried with '%index% + 1' but it did not work:
Error return: invalid literal for int() with base 10: '0 + 1'

How to test a user has a managed policy using CDK assertions?

I've defined a user and a managed policy in CDK v2, similar to:
const policy = new iam.ManagedPolicy(this, `s3access`, {
  statements: [
    new iam.PolicyStatement({
      effect: iam.Effect.ALLOW,
      actions: ['s3:PutObject', 's3:GetObject'],
      resources: ['*']
    })
  ]
})
const someUser = new iam.User(this, 'some-user', { managedPolicies: [policy] });
I want to test that the user has the managed policy applied to it using CDK test assertions; however, I'm struggling to figure out how to do so with the existing test constructs:
template.hasResourceProperties('AWS::IAM::ManagedPolicy', {
  PolicyDocument: Match.objectLike({
    Statement: [
      {
        Action: ['s3:PutObject', 's3:GetObject'],
        Effect: 'Allow',
        Resource: [
          '*'
        ]
      },
    ]
  })
})
...matches the managed policy, but doesn't test that the user has the managed policy applied.
What is the pattern / best practice for doing this?
You need to match the User's Managed Policy Arn as it appears in the template:
"Type": "AWS::IAM::User",
"Properties": {
"ManagedPolicyArns": [
{
"Ref": "s3access10922181"
}
]
The trick is to get the {"Ref": "s3access10922181"} reference to the policy. Here are two equivalent approaches:
Approach 1: stack.node.tryFindChild
const managedPolicyChild = stack.node.tryFindChild('s3access') as iam.ManagedPolicy | undefined;
if (!managedPolicyChild) throw new Error('Expected a defined ManagedPolicy');
const policyArnRef = stack.resolve(managedPolicyChild.managedPolicyArn);
template.hasResourceProperties('AWS::IAM::User', {
  ManagedPolicyArns: Match.arrayWith([policyArnRef]),
});
Approach 2: template.findResources
const managedPolicyResources = template.findResources('AWS::IAM::ManagedPolicy');
const managedPolicyLogicalId = Object.keys(managedPolicyResources).find((k) => k.startsWith('s3access'));
if (!managedPolicyLogicalId) throw new Error('Expected to find a ManagedPolicy Id');
template.hasResourceProperties('AWS::IAM::User', {
  ManagedPolicyArns: Match.arrayWith([{ Ref: managedPolicyLogicalId }]),
});
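Both approaches assume the usual assertions setup, with template built from the stack under test; a minimal sketch (the MyStack class and its import path are hypothetical):
import { App } from 'aws-cdk-lib';
import { Match, Template } from 'aws-cdk-lib/assertions';
import * as iam from 'aws-cdk-lib/aws-iam';
import { MyStack } from '../lib/my-stack'; // hypothetical stack that defines the user and policy

const app = new App();
const stack = new MyStack(app, 'TestStack');
const template = Template.fromStack(stack);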

Not able to connect to Snowflake from an EMR cluster using PySpark via the Airflow EMR operator

I am trying to connect to Snowflake from an EMR cluster launched by the Airflow EMR operator, but I'm getting the following error:
py4j.protocol.Py4JJavaError: An error occurred while calling
o147.load. : java.lang.ClassNotFoundException: Failed to find data
source: net.snowflake.spark.snowflake. Please find packages at
http://spark.apache.org/third-party-projects.html
These are the steps I am adding to my EmrAddStepsOperator to run the script load_updates.py, and I am specifying my Snowflake packages in the "Args":
STEPS = [
    {
        "Name" : "convo_facts",
        "ActionOnFailure" : "TERMINATE_CLUSTER",
        "HadoopJarStep" : {
            "Jar" : "command-runner.jar",
            "Args" : ["spark-submit", "s3://dev-data-lake/spark_files/cf/load_updates.py", \
                      "--packages net.snowflake:snowflake-jdbc:3.8.0,net.snowflake:spark-snowflake_2.11:2.4.14-spark_2.4", \
                      "INPUT=s3://dev-data-lake/table_exports/public/", \
                      "OUTPUT=s3://dev-data-lake/emr_output/cf/"]
        }
    }
]
JOB_FLOW_OVERRIDES = {
    'Name' : 'cftest',
    'LogUri' : 's3://dev-data-lake/emr_logs/cf/log.txt',
    'ReleaseLabel' : 'emr-5.32.0',
    'Instances' : {
        'InstanceGroups' : [
            {
                'Name' : 'Master nodes',
                'Market' : 'ON_DEMAND',
                'InstanceRole' : 'MASTER',
                'InstanceType' : 'r6g.4xlarge',
                'InstanceCount' : 1,
            },
            {
                'Name' : 'Slave nodes',
                'Market' : 'ON_DEMAND',
                'InstanceRole' : 'CORE',
                'InstanceType' : 'r6g.4xlarge',
                'InstanceCount' : 3,
            }
        ],
        'KeepJobFlowAliveWhenNoSteps' : True,
        'TerminationProtected' : False
    },
    'Applications' : [{
        'Name' : 'Spark'
    }],
    'JobFlowRole' : 'EMR_EC2_DefaultRole',
    'ServiceRole' : 'EMR_DefaultRole'
}
And this is how I am adding Snowflake credentials in my load_updates.py script to extract data into a PySpark DataFrame:
# Set options below
sfOptions = {
    "sfURL" : "xxxx.us-east-1.snowflakecomputing.com",
    "sfUser" : "user",
    "sfPassword" : "xxxx",
    "sfDatabase" : "",
    "sfSchema" : "PUBLIC",
    "sfWarehouse" : ""
}
SNOWFLAKE_SOURCE_NAME = "net.snowflake.spark.snowflake"
query_sql = """select * from cf"""
messages_new = spark.read.format(SNOWFLAKE_SOURCE_NAME) \
    .options(**sfOptions) \
    .option("query", query_sql) \
    .load()
Not sure if I am missing something here or where I am going wrong.
The --packages option should be placed before s3://.../load_updates.py in the spark-submit command. Otherwise, it will be treated as an application argument.
Try with this :
STEPS = [
    {
        "Name": "convo_facts",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": [
                "spark-submit",
                "--packages",
                "net.snowflake:snowflake-jdbc:3.8.0,net.snowflake:spark-snowflake_2.11:2.4.14-spark_2.4",
                "s3://dev-data-lake/spark_files/cf/load_updates.py",
                "INPUT=s3://dev-data-lake/table_exports/public/",
                "OUTPUT=s3://dev-data-lake/emr_output/cf/"
            ]
        }
    }
]
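Everything that follows the application script in spark-submit is handed to the script itself as arguments, which is why the misplaced --packages flag ended up as an application argument instead of being handled by Spark. Inside load_updates.py, the INPUT=/OUTPUT= arguments can then be picked up with something like this sketch (the parse_kv_args helper is hypothetical, not part of the original script):
import sys

def parse_kv_args(argv):
    """Turn arguments of the form KEY=value into a dict."""
    args = {}
    for arg in argv:
        if "=" in arg:
            key, value = arg.split("=", 1)
            args[key] = value
    return args

job_args = parse_kv_args(sys.argv[1:])
input_path = job_args["INPUT"]    # s3://dev-data-lake/table_exports/public/
output_path = job_args["OUTPUT"]  # s3://dev-data-lake/emr_output/cf/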

How to form key using query arguments in Dynamodb resolver request mapping?

Schema snippet:
type Query {
  getPCSData(country: String, year: String, payCycle: String): PCSDataOutput
}
DynamoDB resolver:
PCSDataResolver:
  Type: AWS::AppSync::Resolver
  DependsOn: PCSGraphQLSchema
  Properties:
    ApiId:
      Fn::GetAtt: [PCSGraphQLApi, ApiId]
    TypeName: Query
    FieldName: getPCSData
    DataSourceName:
      Fn::GetAtt: [PCSGraphQLDDBDataSource, Name]
    RequestMappingTemplate: |
      {
        "version": "2017-02-28",
        "operation": "GetItem",
        "key": {
          "Country_Year_PayCycle": ?????????
        }
      }
    ResponseMappingTemplate: "$util.toJson($ctx.result)"
Here, I am looking to form the key Country_Year_PayCycle using all three arguments passed from the query, something like this: Country_Year_PayCycle = country + year + payCycle.
Is it possible to concatenate the query arguments to form the key?
This is how I resolved it:
RequestMappingTemplate: |
  ## Set up some space to keep track of things we're updating
  #set($concat = "_")
  #set($country = $ctx.args.country)
  #set($year = $ctx.args.year)
  #set($payCycle = $ctx.args.payCycle)
  #set($pk = "$country$concat$year$concat$payCycle")
  {
    "version": "2017-02-28",
    "operation": "GetItem",
    "key": {
      "Country_Year_PayCycle": $util.dynamodb.toDynamoDBJson($pk)
    }
  }
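Dropping that mapping template back into the resolver resource from the question, the complete resource would look roughly like this (unchanged apart from the filled-in RequestMappingTemplate):
PCSDataResolver:
  Type: AWS::AppSync::Resolver
  DependsOn: PCSGraphQLSchema
  Properties:
    ApiId:
      Fn::GetAtt: [PCSGraphQLApi, ApiId]
    TypeName: Query
    FieldName: getPCSData
    DataSourceName:
      Fn::GetAtt: [PCSGraphQLDDBDataSource, Name]
    RequestMappingTemplate: |
      #set($concat = "_")
      #set($country = $ctx.args.country)
      #set($year = $ctx.args.year)
      #set($payCycle = $ctx.args.payCycle)
      #set($pk = "$country$concat$year$concat$payCycle")
      {
        "version": "2017-02-28",
        "operation": "GetItem",
        "key": {
          "Country_Year_PayCycle": $util.dynamodb.toDynamoDBJson($pk)
        }
      }
    ResponseMappingTemplate: "$util.toJson($ctx.result)"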

Fn::Sub expression does not resolve to a string

I have created a parameter:
Parameters:
  ..
  list:
    Description: "Provide a list .."
    Type: CommaDelimitedList
    Default: "test1, test2"
Now I want to reference this list (which will resolve to "test1", "test2", ..) from a file in my CloudFormation template, which looks like this:
configure_xx:
  files:
    /etc/file.conf:
      content: !Sub |
        input {
          logs {
            log_group => [ "${list}" ]
            access_key_id => "${logstashUserKey}"
            secret_access_key => "${logstashUserKey.SecretAccessKey}"
            region => "eu-west-1"
          }
        }
How can I make this work for the list parameter? (The keys work.)
error: Fn::Sub expression does not resolve to a string
Just switch the parameter type to String:
Parameters:
  ..
  list:
    Description: "Provide a list .."
    Type: String
    Default: "test1, test2"
If, for some reason, you have no control over this parameter type, you could use Fn::Join to transform the list into a string. For example:
configure_xx:
  files:
    /etc/file.conf:
      content:
        Fn::Sub:
          - |-
            input {
              logs {
                log_group => [ "${joinedlist}" ]
                access_key_id => "${logstashUserKey}"
                secret_access_key => "${logstashUserKey.SecretAccessKey}"
                region => "eu-west-1"
              }
            }
          - joinedlist:
              Fn::Join:
                - ', '
                - !Ref list
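With the default value above, the inner Fn::Join rebuilds the single string test1, test2 from the CommaDelimitedList, so the rendered /etc/file.conf would look roughly like this (the key values are placeholders for whatever the logstashUserKey references resolve to):
input {
  logs {
    log_group => [ "test1, test2" ]
    access_key_id => "<logstashUserKey>"
    secret_access_key => "<logstashUserKey.SecretAccessKey>"
    region => "eu-west-1"
  }
}
If each list entry should instead be quoted individually in the rendered file (i.e. [ "test1", "test2" ]), using '", "' as the Fn::Join delimiter would produce that, since the surrounding quotes already come from the template.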