CloudFormation stack is not updating from Service Catalog - amazon-web-services

I am updating a provisioned Service Catalog product from a Lambda function. This works fine for many products, but for one provisioned product the update call succeeds while the underlying CloudFormation stack is never updated by Service Catalog.
Here is the code of the Lambda function:
import boto3

def lambda_handler(event, context):
    client = boto3.client('servicecatalog')
    # Parameters with UsePreviousValue=True keep their current values;
    # the others are set explicitly
    response = client.update_provisioned_product(
        AcceptLanguage='en',
        ProvisionedProductId='pp-3mio2kzru2yc2',
        ProductId='prod-zpvv57zereqfu',
        ProvisioningArtifactId='pa-k3cx2pkgge4ce',
        ProvisioningParameters=[
            {
                'Key': 'ScheduledScalingInDesiredInstances',
                'Value': '0',
                'UsePreviousValue': False
            },
            {
                'Key': 'ScheduledScalingInMaxInstances',
                'Value': '0',
                'UsePreviousValue': False
            },
            {
                'Key': 'ScheduledScalingInMinInstances',
                'Value': '0',
                'UsePreviousValue': False
            },
            {
                'Key': 'ScheduledScalingInCron',
                'Value': 'cron(42 19 * * ? *)',
                'UsePreviousValue': False
            },
            {
                'Key': 'EnvironmentName',
                'UsePreviousValue': True
            },
            {
                'Key': 'ClusterName',
                'UsePreviousValue': True
            },
        ]
    )
    return response
The Lambda function has the required permissions: the same role is used in another Lambda function that successfully updates a CloudFormation stack via a provisioned Service Catalog product.
What could be the reason?

The issue is resolved. It was a parameter mismatch: I was updating only a few parameters while a dependent parameter needed to be updated as well.
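When the update call succeeds but the stack never changes, the failure reason is usually recorded on the Service Catalog record rather than raised by the API call itself. A minimal sketch of surfacing it with describe_record; the record ID comes from the update response, and the value below is illustrative:
import boto3

client = boto3.client('servicecatalog')

# The record ID is returned by update_provisioned_product under
# response['RecordDetail']['RecordId'] (the value here is illustrative)
record = client.describe_record(Id='rec-examplerecordid')

detail = record['RecordDetail']
print(detail['Status'])  # e.g. IN_PROGRESS, SUCCEEDED, FAILED
# RecordErrors carries the reason when Service Catalog or CloudFormation
# rejects the update (for example, a parameter mismatch)
for error in detail.get('RecordErrors', []):
    print(error['Code'], error['Description'])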

Related

How can I specify routing policy via the boto3 change_resource_record_sets function?

When I create a record in my hosted zone via the AWS Web Console, I can select the Routing Policy as "Simple".
When I try to create the same record programmatically via boto3, I seem to have no option to set a Routing Policy, and it is "Latency" by default.
What am I missing?
import uuid
import boto3

r53 = boto3.client('route53')

# hz_id, root_domain, region and s3_hz_id are defined elsewhere
r53.change_resource_record_sets(
    HostedZoneId=hz_id,
    ChangeBatch={
        'Changes': [{
            'Action': 'UPSERT',
            'ResourceRecordSet': {
                'Name': root_domain,
                'Type': 'A',
                'Region': region,
                'AliasTarget': {
                    'DNSName': f's3-website.{region}.amazonaws.com',
                    'EvaluateTargetHealth': False,
                    'HostedZoneId': s3_hz_id,
                },
                'SetIdentifier': str(uuid.uuid4())
            }
        }]
    }
)
Removing Region and SetIdentifier works for me here. The Route 53 API has no explicit routing-policy field: a record set with none of the policy attributes (Region, Weight, Failover, geolocation) is a simple record, while setting Region together with a SetIdentifier marks it as a latency-based record.
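For reference, a minimal sketch of the same UPSERT as a simple alias record, with Region and SetIdentifier removed; hz_id, root_domain, region and s3_hz_id are assumed to be defined as in the question:
import boto3

r53 = boto3.client('route53')

# With no routing-policy attributes this becomes a simple record
r53.change_resource_record_sets(
    HostedZoneId=hz_id,
    ChangeBatch={
        'Changes': [{
            'Action': 'UPSERT',
            'ResourceRecordSet': {
                'Name': root_domain,
                'Type': 'A',
                'AliasTarget': {
                    'DNSName': f's3-website.{region}.amazonaws.com',
                    'EvaluateTargetHealth': False,
                    'HostedZoneId': s3_hz_id,
                },
            }
        }]
    }
)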

How to tag AWS resources with a predefined set of tags using a resource name prefix

With minimal experience in Python, I am looking for ideas on building a Lambda that tags AWS resources, especially EC2 instances and their volumes, based on a prefix in the instance Name tag.
Examples:
dev-cassandra-01
dev-cassandra-02
I would like to set up auto-tagging with a predefined set of tags for the prefix "dev-cassandra", applied to any new instances created with this prefix.
Similarly, a different set of tags would go to different application instances.
This can be handled by the provisioning tool for regular instances, but not for the existing ASG instances.
You can apply tags at instance creation using the resource method create_instances and its TagSpecifications argument, or edit the tags of a specific instance with the client method create_tags; for the latter you need the instance ID first.
The prefix logic can be added to the Python script as desired (a sketch follows the examples below).
Here are examples of both methods:
create_instances:
import boto3

# Build a session with AWS credentials
s = boto3.Session(
    region_name=<region_name>,
    aws_access_key_id=<aws_access_key>,
    aws_secret_access_key=<aws_secret_access_key>,
)
ec2 = s.resource('ec2')

# Some of these options are optional
instance = ec2.create_instances(
    ImageId=<ami_id>,
    MinCount=1,
    MaxCount=1,
    InstanceType=<instance_type>,
    KeyName=<instance_key_pair>,
    IamInstanceProfile={
        'Name': <instance_profile>
    },
    SecurityGroupIds=[
        <instance_security_group>,
    ],
    TagSpecifications=[
        {
            'ResourceType': 'instance',
            'Tags': [
                {
                    'Key': <key_name>,
                    'Value': <value_data>,
                },
            ]
        },
    ],
)
create_tags:
# Get a client object from the resource object
ec2_client = ec2.meta.client

ec2_client.create_tags(
    Resources=[
        <instance_id>,
    ],
    Tags=[
        {
            'Key': 'Name',
            'Value': <dev-cassandra><_your_name>
        },
        {
            'Key': <key_name>,
            'Value': <value_data>,
        },
    ]
)
It is worth mentioning that the instance "name" is itself just a tag with the key Name. For example:
{
    'Key': 'Name',
    'Value': <dev-cassandra><_your_name>
}
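A minimal sketch of the prefix logic itself, assuming the "dev-cassandra" prefix from the question and an illustrative predefined tag set; it finds matching instances by their Name tag and tags both the instances and their attached volumes:
import boto3

ec2_client = boto3.client('ec2')

PREFIX = 'dev-cassandra'
PREDEFINED_TAGS = [  # illustrative tag set
    {'Key': 'team', 'Value': 'data'},
    {'Key': 'env', 'Value': 'dev'},
]

# Find instances whose Name tag starts with the prefix
reservations = ec2_client.describe_instances(
    Filters=[{'Name': 'tag:Name', 'Values': [PREFIX + '*']}]
)['Reservations']

for reservation in reservations:
    for instance in reservation['Instances']:
        # Tag the instance and every attached EBS volume
        resource_ids = [instance['InstanceId']] + [
            bdm['Ebs']['VolumeId']
            for bdm in instance.get('BlockDeviceMappings', [])
            if 'Ebs' in bdm
        ]
        ec2_client.create_tags(Resources=resource_ids, Tags=PREDEFINED_TAGS)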

Compute environments are not displayed in AWS console

Compute environments created via boto3 are not displayed in the AWS console. I can see them in the batch_client.describe_compute_environments() call response:
{
    'computeEnvironmentName': 'name',
    'computeEnvironmentArn': 'arn:aws:batch:us-east-1:<ID>:compute-environment/ml-retraining-compute-env-second',
    'ecsClusterArn': 'arn:aws:ecs:us-east-1:<ID>:cluster/ml-retraining-compute-env-second_Batch_b18fcd09-8d7e-351b-bc0f-13ffa83a6b15',
    'type': 'MANAGED',
    'state': 'ENABLED',
    'status': 'INVALID',
    'statusReason': "CLIENT_ERROR - The security group 'sg-2436d85c' does not exist",
    'computeResources': {
        'type': 'EC2',
        'minvCpus': 0,
        'maxvCpus': 512,
        'desiredvCpus': 24,
        'instanceTypes': [
            'optimal'
        ],
        'subnets': [
            'subnet-fa22de86'
        ],
        'securityGroupIds': [
            'sg-2436d85c'
        ],
        'instanceRole': 'arn:aws:iam::<ID>:instance-profile/ecsInstanceRole',
        'tags': {
            'component': 'ukai-training-pipeline',
            'product': 'Cormorant',
            'jira_project_team': 'CORPRJ',
            'business_unit': 'Threat Systems Products',
            'created_by': 'ml-pipeline'
        }
    },
    'serviceRole': 'arn:aws:iam::<ID>:role/AWSBatchServiceRole'
}
but the Compute Environments table on the Batch page of the AWS console UI does not show anything; the table is empty. When I try to create a compute environment with the same name again via boto3, I get this response:
ERROR - Error setting compute environment: An error occurred
(ClientException) when calling the CreateComputeEnvironment operation: Object already exists.
Based on the comments, the issue was that a different region was selected in the console. The solution was to switch the console to the region in which the compute environment was actually created.
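To avoid this kind of mismatch, it helps to pin the region explicitly when creating the client and confirm it matches the region selected in the console; a minimal sketch, with the region taken from the ARNs in the question:
import boto3

# The region must match the one selected in the console's region picker
batch_client = boto3.client('batch', region_name='us-east-1')

response = batch_client.describe_compute_environments()
for env in response['computeEnvironments']:
    print(env['computeEnvironmentName'], env['status'], env.get('statusReason'))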

How to check Spark step status programmatically (submitted on EMR cluster)?

I created a simple Step Function as follows:
Start -> Start EMR cluster & submit job -> End
I want a mechanism to identify whether my Spark step completed successfully or not.
I am able to start an EMR cluster and attach a Spark step to it, which completes successfully and terminates the cluster.
I followed the steps in this link:
Creating AWS EMR cluster with spark step using lambda function fails with "Local file does not exist"
Now I am looking to get the status: the job poller will tell me whether the EMR cluster was created successfully or not, and I am looking for ways to find out the Spark job status.
import boto3

def lambda_handler(event, context):
    conn = boto3.client("emr")
    # run_job_flow returns a dict; the cluster ID itself is under 'JobFlowId'
    cluster_id = conn.run_job_flow(
        Name='xyz',
        ServiceRole='xyz',
        JobFlowRole='asd',
        VisibleToAllUsers=True,
        LogUri='<location>',
        ReleaseLabel='emr-5.16.0',
        Instances={
            'Ec2SubnetId': 'xyz',
            'InstanceGroups': [
                {
                    'Name': 'Master',
                    'Market': 'ON_DEMAND',
                    'InstanceRole': 'MASTER',
                    'InstanceType': 'm4.xlarge',
                    'InstanceCount': 1,
                }
            ],
            'KeepJobFlowAliveWhenNoSteps': False,
            'TerminationProtected': False,
        },
        Applications=[
            {'Name': 'Spark'},
            {'Name': 'Hadoop'}
        ],
        Steps=[{
            'Name': "mystep",
            'ActionOnFailure': 'TERMINATE_CLUSTER',
            'HadoopJarStep': {
                'Jar': 'jar',
                'Args': [
                    <insert args>, jar, mainclass
                ]
            }
        }]
    )
    return cluster_id
You can use the CLI or the SDK to list all steps for the cluster and then describe a particular step to get its status.
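A minimal sketch with boto3; the cluster ID would come from the run_job_flow response (the value here is illustrative):
import boto3

emr = boto3.client('emr')
cluster_id = 'j-XXXXXXXXXXXXX'  # illustrative; use response['JobFlowId']

# List the steps on the cluster, then fetch each step's status
for step in emr.list_steps(ClusterId=cluster_id)['Steps']:
    status = emr.describe_step(
        ClusterId=cluster_id, StepId=step['Id']
    )['Step']['Status']
    # State is one of PENDING, RUNNING, COMPLETED, CANCELLED, FAILED, INTERRUPTED
    print(step['Name'], status['State'])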

How to execute spark submit on amazon EMR from Lambda function?

I want to execute a spark-submit job on an AWS EMR cluster based on a file-upload event on S3. I am using an AWS Lambda function to capture the event, but I have no idea how to submit a spark-submit job to the EMR cluster from the Lambda function.
Most of the answers I found talk about adding a step to the EMR cluster, but I do not know whether an added step can fire "spark-submit --with args".
You can; I had to do the same thing last week!
Using boto3 for Python (other languages definitely have a similar solution), you can either start a cluster with the step already defined, or attach a step to an already-running cluster.
Defining the cluster with the step
import boto3

def lambda_handler(event, context):
    conn = boto3.client("emr")
    cluster_id = conn.run_job_flow(
        Name='ClusterName',
        ServiceRole='EMR_DefaultRole',
        JobFlowRole='EMR_EC2_DefaultRole',
        VisibleToAllUsers=True,
        LogUri='s3n://some-log-uri/elasticmapreduce/',
        ReleaseLabel='emr-5.8.0',
        Instances={
            'InstanceGroups': [
                {
                    'Name': 'Master nodes',
                    'Market': 'ON_DEMAND',
                    'InstanceRole': 'MASTER',
                    'InstanceType': 'm3.xlarge',
                    'InstanceCount': 1,
                },
                {
                    'Name': 'Slave nodes',
                    'Market': 'ON_DEMAND',
                    'InstanceRole': 'CORE',
                    'InstanceType': 'm3.xlarge',
                    'InstanceCount': 2,
                }
            ],
            'Ec2KeyName': 'key-name',
            'KeepJobFlowAliveWhenNoSteps': False,
            'TerminationProtected': False
        },
        Applications=[{
            'Name': 'Spark'
        }],
        Configurations=[{
            "Classification": "spark-env",
            "Properties": {},
            "Configurations": [{
                "Classification": "export",
                "Properties": {
                    "PYSPARK_PYTHON": "python35",
                    "PYSPARK_DRIVER_PYTHON": "python35"
                }
            }]
        }],
        BootstrapActions=[{
            'Name': 'Install',
            'ScriptBootstrapAction': {
                'Path': 's3://path/to/bootstrap.script'
            }
        }],
        Steps=[{
            'Name': 'StepName',
            'ActionOnFailure': 'TERMINATE_CLUSTER',
            'HadoopJarStep': {
                # script-runner executes spark-submit on the master node
                'Jar': 's3n://elasticmapreduce/libs/script-runner/script-runner.jar',
                'Args': [
                    "/usr/bin/spark-submit", "--deploy-mode", "cluster",
                    's3://path/to/code.file', '-i', 'input_arg',
                    '-o', 'output_arg'
                ]
            }
        }],
    )
    return "Started cluster {}".format(cluster_id)
Attaching a step to an already running cluster
As per here
import sys
import time
import boto3

def lambda_handler(event, context):
    conn = boto3.client("emr")
    # chooses the first cluster which is Running or Waiting
    # possibly can also choose by name or already have the cluster id
    clusters = conn.list_clusters()
    # choose the correct cluster
    clusters = [c["Id"] for c in clusters["Clusters"]
                if c["Status"]["State"] in ["RUNNING", "WAITING"]]
    if not clusters:
        sys.stderr.write("No valid clusters\n")
        sys.exit()
    # take the first relevant cluster
    cluster_id = clusters[0]
    # code location on your emr master node
    CODE_DIR = "/home/hadoop/code/"
    # spark configuration example
    step_args = ["/usr/bin/spark-submit", "--conf", "your-configuration",
                 CODE_DIR + "your_file.py", '--your-parameters', 'parameters']
    step = {
        "Name": "what_you_do-" + time.strftime("%Y%m%d-%H:%M"),
        'ActionOnFailure': 'CONTINUE',
        'HadoopJarStep': {
            'Jar': 's3n://elasticmapreduce/libs/script-runner/script-runner.jar',
            'Args': step_args
        }
    }
    action = conn.add_job_flow_steps(JobFlowId=cluster_id, Steps=[step])
    return "Added step: %s" % (action)
AWS Lambda function Python code if you want to execute a Spark jar via spark-submit; this posts the job to the Livy REST endpoint (port 8998) on the EMR master node:
# botocore.vendored.requests is deprecated/removed in newer botocore;
# a plain `import requests` is the modern equivalent
from botocore.vendored import requests
import json

def lambda_handler(event, context):
    headers = {"content-type": "application/json"}
    # Livy REST endpoint on the EMR master node
    url = 'http://ip-address.ec2.internal:8998/batches'
    payload = {
        # 'file' is the main application jar; additional jars go in 'jars'
        'file': 's3://Bucket/Orchestration/SparkCode.jar',
        'jars': [
            's3://Bucket/Orchestration/RedshiftJDBC41.jar',
            's3://Bucket/Orchestration/mysql-connector-java-8.0.12.jar'
        ],
        'className': 'Main Class Name',
        'args': [event.get('rootPath')]
    }
    res = requests.post(url, data=json.dumps(payload), headers=headers, verify=False)
    json_data = json.loads(res.text)
    return json_data.get('id')
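To check on the submitted batch afterwards, Livy exposes the batch state at /batches/{id}/state; a minimal sketch, assuming the same endpoint and the id returned by the POST above:
from botocore.vendored import requests  # or plain `import requests`
import json

batch_id = 42  # illustrative: the 'id' returned when the batch was created
url = 'http://ip-address.ec2.internal:8998/batches/{}/state'.format(batch_id)

res = requests.get(url, headers={"content-type": "application/json"})
# state is e.g. starting, running, success or dead
print(json.loads(res.text).get('state'))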