I'm trying to list EC2 instances in Account B from a Lambda function in Account A. Not sure what I'm missing.
Account A: the Lambda code runs here.
Account B: the EC2 instances run here.
The AssumeRole call below prints an access key and session token, but the describe call does not return any results.
The IAM role in Account B has the AmazonEC2ReadOnlyAccess policy attached, and its trust relationship allows arn:aws:iam::ACCOUNT_A:role/role-name_ACCOUNT_A.
This is the code:
import json
import boto3
from collections import OrderedDict
from pprint import pprint
import time
from time import sleep
from datetime import date
import datetime

def lambda_handler(event, context):
    # Assume a role in the other account
    sts_connection = boto3.client('sts')
    acct_b = sts_connection.assume_role(
        RoleArn="arn:aws:iam::ACCOUNT_B:role/role_name_account_B",
        RoleSessionName="cross_acct_lambda"
    )
    ACCESS_KEY = acct_b['Credentials']['AccessKeyId']
    SECRET_KEY = acct_b['Credentials']['SecretAccessKey']
    SESSION_TOKEN = acct_b['Credentials']['SessionToken']

    # Create an EC2 client using the assumed-role credentials
    ec2 = boto3.client(
        "ec2",
        aws_access_key_id=ACCESS_KEY,
        aws_secret_access_key=SECRET_KEY,
        aws_session_token=SESSION_TOKEN,
    )
    status = ec2.describe_instance_status()
    pprint(status)
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
Result:
Response:
{
"statusCode": 200,
"body": "\"Hello from Lambda!\""
}
Request ID:
"ZZZZZZZZZZZZZZZZZZZZ"
Function logs:
START RequestId: ZZZZZZZZZZZZZZZZZZ Version: $LATEST
{'InstanceStatuses': [],
Thanks.
Once I added the region I could see the results. Thanks, John Rotenstein and Jarmod, for your guidance.
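For reference, this is roughly what the working client creation looks like once a region is supplied (the region value below is a placeholder for wherever Account B's instances run):

    # Same assumed-role credentials as above, with an explicit region added
    ec2 = boto3.client(
        "ec2",
        region_name="us-east-1",  # placeholder: use the region of Account B's instances
        aws_access_key_id=ACCESS_KEY,
        aws_secret_access_key=SECRET_KEY,
        aws_session_token=SESSION_TOKEN,
    )
    # Note: describe_instance_status only reports running instances unless
    # IncludeAllInstances=True is passed.
    status = ec2.describe_instance_status()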
I'm not too sure how the formatting works when using json and boto3 in the same file. The function works as it should, but I don't know how to get a response back from the API without an internal server error.
I don't know whether it's a permissions issue or whether the code is wrong.
import boto3
import json

def lambda_handler(event, context):
    client = boto3.resource('dynamodb')
    table = client.Table('Visit_Count')
    input = {'Visits': 1}
    table.put_item(Item=input)
    return {
        'statusCode': 200
        body: json.dumps("Hello, World!")
    }
Instead of body it should be 'body' (and the return dict is also missing a comma after 'statusCode': 200). Other than that, you should check the CloudWatch logs for any further Lambda errors.
Anon and Marcin were right; I just tried it and it worked.
Your Lambda execution role also needs the dynamodb:PutItem permission (a sketch of such a policy follows the corrected code below).
import boto3
import json

def lambda_handler(event, context):
    client = boto3.resource('dynamodb')
    table = client.Table('Visit_Count')
    input = {'Visits': 1}
    table.put_item(Item=input)
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
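For reference, a minimal sketch of granting that permission to the execution role with boto3; the role name, policy name, region, account ID, and table ARN below are placeholders, not values from the question:

import json
import boto3

iam = boto3.client('iam')

# Placeholder policy: allow PutItem on the Visit_Count table only
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "dynamodb:PutItem",
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Visit_Count"
    }]
}

iam.put_role_policy(
    RoleName="my-lambda-execution-role",   # placeholder
    PolicyName="AllowPutItemVisitCount",   # placeholder
    PolicyDocument=json.dumps(policy_document)
)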
Currently I have several hundred AWS IAM Roles with inline policies.
I would like to somehow convert these inline policies to managed policies.
While AWS Documentation has a way to do this via the Console, this will be very time consuming.
Does anyone know of a way, or have a script, to do this via boto3 or the AWS CLI, or can someone direct me to a method for doing this programmatically?
Thanks in advance
The boto3 code will look like this. In this code, the inline policies embedded in the specified IAM user are copied to customer managed policies.
Note that the delete part is commented out.
import json
import boto3

user_name = 'xxxxxxx'

client = boto3.client("iam")
response = client.list_user_policies(UserName=user_name)

for policy_name in response["PolicyNames"]:
    response = client.get_user_policy(UserName=user_name, PolicyName=policy_name)
    policy_document = json.dumps(response["PolicyDocument"])
    response = client.create_policy(
        PolicyName=policy_name, PolicyDocument=policy_document
    )
    # response = client.delete_user_policy(
    #     UserName=user_name,
    #     PolicyName=policy_name
    # )
Updated:
For IAM roles, the above code works after changing User to Role and user to role (case sensitive).
Also, if you want to run it for multiple roles, use list_roles to get each role_name:
response = client.list_roles()
for i in response['Roles']:
    role_name = i['RoleName']
    # print(role_name)
Building on @shimo's snippet, the following works, with error handling added, the newly created managed policy attached to the IAM role, and the original inline policy deleted; a sketch for looping over every role follows the code:
import json
import boto3
from botocore.exceptions import ClientError

role_name = 'xxxxxxxx'
account_id = '123456789'

client = boto3.client("iam")
resource = boto3.resource('iam')

response = client.list_role_policies(RoleName=role_name)

for policy_name in response["PolicyNames"]:
    response = client.get_role_policy(RoleName=role_name, PolicyName=policy_name)
    policy_document = json.dumps(response["PolicyDocument"])
    print(policy_document)
    try:
        response = client.create_policy(
            PolicyName=policy_name, PolicyDocument=policy_document
        )
        print(policy_name + ' policy created')
    except ClientError as error:
        if error.response['Error']['Code'] == 'EntityAlreadyExists':
            print(policy_name + ' policy already exists')
        else:
            print("Unexpected error: %s" % error)

    policy_arn = f'arn:aws:iam::{account_id}:policy/{policy_name}'
    role = resource.Role(role_name)
    role.attach_policy(PolicyArn=policy_arn)

    response = client.delete_role_policy(
        RoleName=role_name,
        PolicyName=policy_name
    )
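To run this across every role in the account (the question mentions several hundred), a rough sketch using a paginator could look like the following; migrate_role is a hypothetical helper standing in for the per-role loop body above:

import boto3

client = boto3.client("iam")

def migrate_role(role_name):
    # Hypothetical helper: wrap the per-role steps shown above
    # (list_role_policies -> create_policy -> attach_policy -> delete_role_policy).
    ...

paginator = client.get_paginator('list_roles')
for page in paginator.paginate():
    for role in page['Roles']:
        # Service-linked roles cannot have customer managed policies attached, so skip them
        if role['Path'].startswith('/aws-service-role/'):
            continue
        migrate_role(role['RoleName'])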
I am aware of the HTTP Data Collector API that can be used to push data into Azure Log Analytics; my question here is about getting AWS CloudWatch data into Azure. We have an Azure-hosted application and external AWS-hosted serverless Lambda functions, and we want to import the logs of those 13 serverless functions into Azure. I know from the documentation that there is a Python function that can be used as an AWS Lambda function, and the Python example is in the MSFT documentation. What I am failing to understand is what JSON format the AWS collector needs to produce so it can send the logs to Azure Log Analytics. Any examples of this? Any help on how it can be done? I have also come across this blog, but it is Splunk-specific: https://www.splunk.com/blog/2017/02/03/how-to-easily-stream-aws-cloudwatch-logs-to-splunk.html
Never mind, I was able to dig a little deeper and found that in AWS I can stream the logs from one Lambda to another Lambda function through a subscription. Once that was set up, all I did was consume the stream, build the JSON on the fly, and send it to Azure Log Analytics. In case you or anyone else is interested, the code is below (the subscription-filter setup is sketched after it):
import json
import datetime
import hashlib
import hmac
import base64
import boto3
import gzip
from botocore.vendored import requests
from datetime import datetime

# Update the customer ID to your Log Analytics workspace ID
customer_id = "XXXXXXXYYYYYYYYYYYYZZZZZZZZZZ"

# For the shared key, use either the primary or the secondary Connected Sources client authentication key
shared_key = "XXXXXXXXXXXXXXXXXXXXXXXXXX"

# The log type is the name of the event that is being submitted
log_type = 'AWSLambdafuncLogReal'

json_data = [{
    "slot_ID": 12345,
    "ID": "5cdad72f-c848-4df0-8aaa-ffe033e75d57",
    "availability_Value": 100,
    "performance_Value": 6.954,
    "measurement_Name": "last_one_hour",
    "duration": 3600,
    "warning_Threshold": 0,
    "critical_Threshold": 0,
    "IsActive": "true"
},
{
    "slot_ID": 67890,
    "ID": "b6bee458-fb65-492e-996d-61c4d7fbb942",
    "availability_Value": 100,
    "performance_Value": 3.379,
    "measurement_Name": "last_one_hour",
    "duration": 3600,
    "warning_Threshold": 0,
    "critical_Threshold": 0,
    "IsActive": "false"
}]
# body = json.dumps(json_data)

#####################
######Functions######
#####################

# Build the API signature
def build_signature(customer_id, shared_key, date, content_length, method, content_type, resource):
    x_headers = 'x-ms-date:' + date
    string_to_hash = method + "\n" + str(content_length) + "\n" + content_type + "\n" + x_headers + "\n" + resource
    bytes_to_hash = bytes(string_to_hash, encoding="utf-8")
    decoded_key = base64.b64decode(shared_key)
    encoded_hash = base64.b64encode(
        hmac.new(decoded_key, bytes_to_hash, digestmod=hashlib.sha256).digest()).decode()
    authorization = "SharedKey {}:{}".format(customer_id, encoded_hash)
    return authorization

# Build and send a request to the POST API
def post_data(customer_id, shared_key, body, log_type):
    method = 'POST'
    content_type = 'application/json'
    resource = '/api/logs'
    rfc1123date = datetime.utcnow().strftime('%a, %d %b %Y %H:%M:%S GMT')
    print(rfc1123date)
    content_length = len(body)
    signature = build_signature(customer_id, shared_key, rfc1123date, content_length, method, content_type, resource)
    uri = 'https://' + customer_id + '.ods.opinsights.azure.com' + resource + '?api-version=2016-04-01'

    headers = {
        'content-type': content_type,
        'Authorization': signature,
        'Log-Type': log_type,
        'x-ms-date': rfc1123date
    }

    response = requests.post(uri, data=body, headers=headers)
    if (response.status_code >= 200 and response.status_code <= 299):
        print("Accepted")
    else:
        print("Response code: {}".format(response.status_code))
        print(response.text)

def lambda_handler(event, context):
    cloudwatch_event = event["awslogs"]["data"]
    decode_base64 = base64.b64decode(cloudwatch_event)
    decompress_data = gzip.decompress(decode_base64)
    log_data = json.loads(decompress_data)
    print(log_data)
    awslogdata = json.dumps(log_data)
    post_data(customer_id, shared_key, awslogdata, log_type)
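For completeness, the CloudWatch Logs subscription that feeds this handler can also be created with boto3. A rough sketch, assuming placeholder names for the source log group, the forwarding Lambda, the account, and the region:

import boto3

logs = boto3.client('logs')
lam = boto3.client('lambda')

# Placeholders: source log group and destination (forwarding) Lambda ARN
log_group = '/aws/lambda/my-source-function'
destination_arn = 'arn:aws:lambda:us-east-1:123456789012:function:cloudwatch-to-azure'

# Allow CloudWatch Logs to invoke the forwarding Lambda
lam.add_permission(
    FunctionName=destination_arn,
    StatementId='cloudwatch-logs-invoke',
    Action='lambda:InvokeFunction',
    Principal='logs.amazonaws.com',
    SourceArn='arn:aws:logs:us-east-1:123456789012:log-group:{}:*'.format(log_group)
)

# Stream everything from the log group to the forwarding Lambda
logs.put_subscription_filter(
    logGroupName=log_group,
    filterName='to-azure-log-analytics',
    filterPattern='',          # empty pattern forwards all events
    destinationArn=destination_arn
)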
I have a simple AWS CodePipeline with the standard "Source" -> "Build" -> "Deploy" pipeline stages that work fine and I am trying to add my own custom final pipeline stage that is a single AWS Lambda Function. The problem is my last, custom Lambda function runs multiple times and after a very long time, errors with the following message:
Please see the attached screenshot for the whole pipeline:
When the pipeline reaches this final step, it spins for a very long time with the "Blue ( In-Progress )" status before showing an error as shown here:
Here is my Lambda Function code:
from __future__ import print_function
import hashlib
import time
import os
import boto3
import json
from botocore.exceptions import ClientError

def lambda_handler(event, context):
    # Test
    AWS_ACCESS_KEY = 'ASDF1234'
    AWS_SECRET_KEY = 'ASDF1234'
    SQS_TESTING_OUTPUT_STATUS_QUEUE_NAME = 'TestingOutputQueue'

    # Get the CodePipeline client
    code_pipeline = boto3.client('codepipeline')

    # Get the job_id
    for key, value in event.items():
        print(key, value)
    job_id = event['CodePipeline.job']['id']
    DATA = json.dumps(event)

    # Create a connection to the SQS notification service
    sqs_resource_connection = boto3.resource(
        'sqs',
        aws_access_key_id=AWS_ACCESS_KEY,
        aws_secret_access_key=AWS_SECRET_KEY,
        region_name='us-west-2'
    )

    # Get the queue handle
    print("Waiting for notification from AWS ...")
    queue = sqs_resource_connection.get_queue_by_name(QueueName=SQS_TESTING_OUTPUT_STATUS_QUEUE_NAME)
    messageContent = ""
    cnt = 1

    # Replace sender@example.com with your "From" address.
    # This address must be verified with Amazon SES.
    SENDER = 'ME'
    # Replace recipient@example.com with a "To" address. If your account
    # is still in the sandbox, this address must be verified.
    RECIPIENTS = ['YOU']
    # If necessary, replace us-west-2 with the AWS Region you're using for Amazon SES.
    AWS_REGION = "us-east-1"
    # The subject line for the email.
    SUBJECT = "Test Case Results"
    # The email body for recipients with non-HTML email clients.
    BODY_TEXT = ("Test Case Results Were ...")
    # The HTML body of the email.
    BODY_HTML = """<html>
<head></head>
<body>
<h1>Amazon SES Test (SDK for Python)</h1>
<p>%s</p>
</body>
</html>
""" % (DATA)
    # The character encoding for the email.
    CHARSET = "UTF-8"

    # Create a new SES client and specify a region.
    client = boto3.client('ses', region_name=AWS_REGION)

    # Try to send the email.
    try:
        # Provide the contents of the email.
        response = client.send_email(
            Destination={
                'ToAddresses': RECIPIENTS,
            },
            Message={
                'Body': {
                    'Html': {
                        'Charset': CHARSET,
                        'Data': BODY_HTML,
                    },
                    'Text': {
                        'Charset': CHARSET,
                        'Data': BODY_TEXT,
                    },
                },
                'Subject': {
                    'Charset': CHARSET,
                    'Data': SUBJECT,
                },
            },
            Source=SENDER,
            # If you are not using a configuration set, comment or delete the
            # following line
            # ConfigurationSetName=CONFIGURATION_SET,
        )
    # Display an error if something goes wrong.
    except ClientError as e:
        code_pipeline.put_third_party_job_failure_result(jobId=job_id, failureDetails={'message': message, 'type': 'JobFailed'})
        code_pipeline.put_job_failure_result(jobId=job_id, failureDetails={'message': message, 'type': 'JobFailed'})
        print(e.response['Error']['Message'])
    else:
        code_pipeline.put_third_party_job_success_result(jobId=job_id)
        code_pipeline.put_job_success_result(jobId=job_id)
        print("Email sent! Message ID:")
        print(response['MessageId'])

    print('Function complete.')
    return "Complete."
How can I get the Lambda to fire once and return so the pipeline can complete properly?
You are missing an important integration between your Lambda Function and the CodePipeline service.
You MUST notify CodePipeline about the result of your custom step, whether it succeeded or not - see my examples below.
Reporting success:
function reportSuccess(job_id) {
    var codepipeline = new AWS.CodePipeline();
    var params = {
        jobId: job_id,
    };
    return codepipeline.putJobSuccessResult(params).promise();
}
Reporting failure:
function reportFailure(job_id, invoke_id, message) {
    var codepipeline = new AWS.CodePipeline();
    var params = {
        failureDetails: {
            message: message,
            type: 'JobFailed',
            externalExecutionId: invoke_id,
        },
        jobId: job_id,
    };
    return codepipeline.putJobFailureResult(params).promise();
}
The integration was designed this way because you may want to integrate with an external job worker, in which case your Lambda starts that worker (for example, an approval process) and the worker then takes control and decides whether the whole step succeeded or failed.
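Since the question's Lambda is written in Python, the equivalent calls with boto3 would look roughly like this (a sketch; for a Lambda invoke action the regular put_job_* calls are the ones needed, not the third-party variants):

import boto3

codepipeline = boto3.client('codepipeline')

def report_success(job_id):
    # Tell CodePipeline the custom action finished successfully
    codepipeline.put_job_success_result(jobId=job_id)

def report_failure(job_id, invoke_id, message):
    # Tell CodePipeline the custom action failed
    codepipeline.put_job_failure_result(
        jobId=job_id,
        failureDetails={
            'message': message,
            'type': 'JobFailed',
            'externalExecutionId': invoke_id,
        }
    )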
I have a root AWS account linked with three other sub AWS accounts. In my root account I created a Lambda function to get billing metrics from CloudWatch using the Python SDK and its APIs, and it works. I am using an IAM user's access key and secret key that has billing access and full admin access. But when I copied the Lambda code into a sub account's Lambda function, it doesn't retrieve any data. I can't understand why it's not working in the sub account.
import boto3
from datetime import datetime, timedelta

def get_metrics(event, context):
    ACCESS_KEY = 'accesskey'
    SECRET_KEY = 'secretkey'
    client = boto3.client('cloudwatch', aws_access_key_id=ACCESS_KEY, aws_secret_access_key=SECRET_KEY)
    response = client.get_metric_statistics(
        Namespace='AWS/Billing',
        MetricName='EstimatedCharges',
        Dimensions=[
            {
                'Name': 'LinkedAccount',
                'Value': '12 digit account number'
            },
            {
                'Name': 'Currency',
                'Value': 'USD'
            },
        ],
        StartTime='2017, 12, 19',
        EndTime='2017, 12, 21',
        Period=86400,
        Statistics=[
            'Maximum',
        ],
    )
    print(response)