I am new to the Serverless Framework and I want to get an instance's status, so I used boto3's describe_instance_status(), but I keep getting an error that I am not authorized to perform this operation, although I have administrator access to all AWS services. Do I need to change or add something to be recognized?
Here is my code:
import json
import boto3
import logging
import sys
#setup simple logging for INFO
logger = logging.getLogger()
logger.setLevel(logging.INFO)
from botocore.exceptions import ClientError
def instance_status(event, context):
"""Take an instance Id and return its status"""
#print "ttot"
body = {}
status_code = 200
client = boto3.client('ec2')
response = client.describe_instance_status(InstanceIds=['i-070ad071'])
return response
And here is my serverless.yml file:
service: ec2
provider:
name: aws
runtime: python2.7
timeout: 30
memorySize: 128
stage: dev
region: us-east-1
iamRoleStatements:
- Effect: "Allow"
Action:
- "ec2:DescribeInstanceStatus"
Resource: "*"
functions:
instance_status:
handler: handler.instance_status
description: Status ec2 instances
events:
- http:
path: ''
method: get
And here is the error message I am getting:
"errorType": "ClientError", "errorMessage": "An error occurred
(UnauthorizedOperation) when calling the DescribeInstanceStatus
operation: You are not authorized to perform this operation."
...i have administrator access to all aws services...
Take note that the Lambda function is NOT running under your user account. You're supposed to define its role and permissions in your YAML.
In the provider section in your serverless.yaml, add the following:
iamRoleStatements:
- Effect: Allow
Action:
- ec2:DescribeInstanceStatus
Resource: <insert your resource here>
Reference: https://serverless.com/framework/docs/providers/aws/guide/iam/
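To see which identity the function actually runs under (and confirm it is the role generated from iamRoleStatements rather than your admin user), here is a minimal sketch you could drop into the handler; it assumes only boto3, which the Lambda runtime already provides:
import boto3

def whoami(event, context):
    # Returns the ARN of the assumed role the Lambda executes as;
    # this is the Serverless-generated role, not your IAM user.
    identity = boto3.client('sts').get_caller_identity()
    return {
        'statusCode': 200,
        'body': identity['Arn']
    }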
You are not authorized to perform this operation
This means you do not have permission to perform the action client.describe_instance_status.
There are a few ways to give your function the right permissions:
Use an IAM role: create an IAM role with the permissions you need, then assign it to the Lambda function in its configuration. The Lambda will then automatically receive rotated temporary credentials to perform the actions.
Use an access key/secret key: create an access key and secret key with the permissions you need, set them in your YAML file, and configure boto3 in your Lambda function to use them when performing the action (see the sketch below).
Read more here: http://docs.aws.amazon.com/lambda/latest/dg/intro-permission-model.html
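A minimal sketch of the second option, assuming the key pair is supplied to the function through environment variables (the variable names here are only illustrative; Lambda reserves the standard AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY names for the execution role's own credentials):
import os
import boto3

# Build an EC2 client from explicitly supplied credentials instead of
# the Lambda execution role; the env var names are just an example.
client = boto3.client(
    'ec2',
    region_name='us-east-1',
    aws_access_key_id=os.environ['MY_ACCESS_KEY_ID'],
    aws_secret_access_key=os.environ['MY_SECRET_ACCESS_KEY'],
)

response = client.describe_instance_status(InstanceIds=['i-070ad071'])
The first option (letting the function's IAM role supply temporary credentials) is generally preferable to shipping long-lived keys with the function.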
I'm using Deployment Manager to set the IAM policy of an existing Pub/Sub topic. I don't want to acquire it, and I can't create it with Deployment Manager (because it already exists), so I want to set a policy on an existing resource.
I can do this with buckets, but the docs are confusing and I can't find the right method for topics.
I want to do this (resource-level bindings) for a topic instead of a bucket:
resources:
- name: mybucket
action: gcp-types/storage-v1:storage.buckets.setIamPolicy
properties:
bucket: mybucket
bindings:
- role: roles/storage.admin
members:
- "serviceAccount:sdfsfds#sdfsdf.com"
I can only find gcp-types/pubsub-v1:projects.topics.setIamPolicy, which seems like it's at the project level? What is the right API for setting an IAM policy on a specific topic?
The Google APIs seem inconsistent here. Are these two methods equivalent? The docs are confusing:
https://cloud.google.com/storage/docs/json_api/v1/buckets/setIamPolicy
https://cloud.google.com/pubsub/docs/reference/rest/v1/projects.topics/setIamPolicy
I attempted this but am getting an error:
- name: mytopic
action: gcp-types/pubsub-v1:pubsub.projects.topics.setIamPolicy
properties:
resource: mytopic
bindings:
- role: roles/pubsub.admin
members:
- "serviceAccount:ssdfsdf#sdfsdf.com"
Getting error:
message: '{"ResourceType":"gcp-types/pubsub-v1:pubsub.projects.topics.setIamPolicy","ResourceErrorCode":"400","ResourceErrorMessage":{"code":400,"message":"Invalid
JSON payload received. Unknown name \"bindings\": Cannot find field.","status":"INVALID_ARGUMENT","details":[{"#type":"type.googleapis.com/google.rpc.BadRequest","fieldViolations":[{"description":"Invalid
JSON payload received. Unknown name \"bindings\": Cannot find field."}]}],"statusMessage":"Bad
Request","requestPath":"https://pubsub.googleapis.com/v1/projects/myproject/topics/mytopic:setIamPolicy","httpMethod":"POST"}}
When I tried projects.topics.setIamPolicy I got:
- code: COLLECTION_NOT_FOUND
message: Collection 'projects.topics.setIamPolicy' not found in discovery doc 'https://pubsub.googleapis.com/$discovery/rest?version=v1'
The pubsub-v1:projects.topics.setIamPolicy method works at the topic level, while https://iam.googleapis.com/v1/{resource=projects/*/serviceAccounts/*}:setIamPolicy sets the policy for Pub/Sub or other resources at the project level.
You get those errors because you are granting Pub/Sub Admin, which is a role at the project level. Example roles you can provide are:
roles/viewer
roles/editor
roles/owner
I understand that you are trying to deploy a topic with an IAM policy that allows only one service account access to it. You have to use a YAML file and a Python file if that is the environment you are using.
In the Python file you set the IAM policy for the topic with the set_iam_policy method, which takes two arguments, the topic path and the policy:
from google.cloud import pubsub_v1

client = pubsub_v1.PublisherClient()
topic_path = client.topic_path(project, topic_name)
policy = client.get_iam_policy(topic_path)
# Add all users as viewers.
policy.bindings.add(
role='roles/pubsub.viewer',
members=['allUsers'])
# Add a group as a publisher.
policy.bindings.add(
role='roles/pubsub.publisher',
members=['group:cloud-logs#google.com'])
# Set the policy
policy = client.set_iam_policy(topic_path, policy)
print('IAM policy for topic {} set: {}'.format(
topic_name, policy))
For deployment manager:
imports:
- path: templates/pubsub/pubsub.py
name: pubsub.py
resources:
- name: test-pubsub
type: pubsub.py
properties:
topic: test-topic
accessControl:
- role: roles/pubsub.subscriber
members:
- user:demo#user.com
subscriptions:
- name: first-subscription
accessControl:
- role: roles/pubsub.subscriber
members:
- user:demo#user.com
- name: second-subscription
ackDeadlineSeconds: 15
I've been following the serverless tutorial at https://serverless-stack.com/chapters/configure-cognito-user-pool-in-serverless.html
I've got the following serverless YAML snippet:
Resources:
CognitoUserPool:
Type: AWS::Cognito::UserPool
Properties:
# Generate a name based on the stage
UserPoolName: ${self:custom.stage}-moochless-user-pool
# Set email as an alias
UsernameAttributes:
- email
AutoVerifiedAttributes:
- email
CognitoUserPoolClient:
Type: AWS::Cognito::UserPoolClient
Properties:
# Generate an app client name based on the stage
ClientName: ${self:custom.stage}-user-pool-client
UserPoolId:
Ref: CognitoUserPool
ExplicitAuthFlows:
- ADMIN_NO_SRP_AUTH
# >>>>> HOW DO I GET THIS VALUE IN OUTPUT <<<<<
GenerateSecret: true
# Print out the Id of the User Pool that is created
Outputs:
UserPoolId:
Value:
Ref: CognitoUserPool
UserPoolClientId:
Value:
Ref: CognitoUserPoolClient
#UserPoolSecret:
# WHAT GOES HERE?
I'm exporting all my other config variables to a json file (to be consumed by a mobile app, so I need the secret key).
How do I get the secret key generated to appear in my output list?
The ideal way to retrieve the secret key is to use "CognitoUserPoolClient.ClientSecret" in your cloudformation template.
UserPoolClientIdSecret:
Value:
!GetAtt CognitoUserPoolClient.ClientSecret
But it is not supported, as explained here, and fails with an error message instead.
You can run the CLI command below to retrieve the secret key as a workaround:
aws cognito-idp describe-user-pool-client --user-pool-id "us-west-XXXXXX" --region us-west-2 --client-id "XXXXXXXXXXXXX" --query 'UserPoolClient.ClientSecret' --output text
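If you would rather fetch it from code (for example in a small script that writes the JSON config the question mentions), a hedged boto3 equivalent of that CLI call, with placeholder ids:
import boto3

cognito = boto3.client('cognito-idp', region_name='us-west-2')

# Same call as the CLI workaround above; the ids are placeholders.
resp = cognito.describe_user_pool_client(
    UserPoolId='us-west-XXXXXX',
    ClientId='XXXXXXXXXXXXX',
)
print(resp['UserPoolClient']['ClientSecret'])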
As Prabhakar Reddy points out, currently you can't get the Cognito client secret using !GetAtt in your CloudFormation template. However, there is a way to avoid the manual step of using the AWS command line to get the secret. The AWS Command Runner utility for CloudFormation allows you to run AWS CLI commands from your CloudFormation templates, so you can run the CLI command to get the secret in the CloudFormation template and then use the output of the command elsewhere in your template using !GetAtt. Basically CommandRunner spins up an EC2 instance and runs the command you specify and saves the output of the command to a file on the instance while the CloudFormation template is running so that it can be retrieved later using !GetAtt. Note that CommandRunner is a special custom CloudFormation type that needs to be installed for the AWS account as a separate step. Below is an example CloudFormation template that will get a Cognito client secret and save it to AWS Secrets manager.
Resources:
CommandRunnerRole:
Type: AWS::IAM::Role
Properties:
# the AssumeRolePolicyDocument specifies which services can assume this role, for CommandRunner this needs to be ec2
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Principal:
Service:
- ec2.amazonaws.com
Action: 'sts:AssumeRole'
Path: /
Policies:
- PolicyName: CommandRunnerPolicy
PolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Action:
- 'logs:CancelUploadArchive'
- 'logs:GetBranch'
- 'logs:GetCommit'
- 'cognito-idp:*'
Resource: '*'
CommandRunnerInstanceProfile:
Type: AWS::IAM::InstanceProfile
Properties:
Roles:
- !Ref CommandRunnerRole
GetCognitoClientSecretCommand:
Type: AWSUtility::CloudFormation::CommandRunner
Properties:
Command: aws cognito-idp describe-user-pool-client --user-pool-id <user_pool_id> --region us-east-2 --client-id <client_id> --query UserPoolClient.ClientSecret --output text > /command-output.txt
Role: !Ref CommandRunnerInstanceProfile
InstanceType: "t2.nano"
LogGroup: command-runner-logs
CognitoClientSecret:
Type: AWS::SecretsManager::Secret
DependsOn: GetCognitoClientSecretCommand
Properties:
Name: "command-runner-secret"
SecretString: !GetAtt GetCognitoClientSecretCommand.Output
Note that you will need to replace <user_pool_id> and <client_id> with your user pool id and client id. A complete CloudFormation template would likely create the Cognito User Pool and User Pool Client, and the user pool and client id values could be retrieved from those resources using !Ref as part of a !Join statement that builds the entire command, e.g.:
Command: !Join [' ', ['aws cognito-idp describe-user-pool-client --user-pool-id', !Ref CognitoUserPool, '--region', !Ref AWS::Region, '--client-id', !Ref CognitoUserPoolClient, '--query UserPoolClient.ClientSecret --output text > /command-output.txt']]
One final note, depending on your operating system, the installation/registration of CommandRunner may fail trying to create the S3 bucket it needs. This is because it tries to generate a bucket name using uuidgen and will fail if uuidgen isn't installed. I have opened an issue on the CommandRunner GitHub repo for this. Until the issue is resolved, you can get around this by modifying the /scripts/register.sh script to use a static bucket name.
As it is still not possible to get the secret of a Cognito User Pool Client using !GetAtt in a CloudFormation template, I was looking for an alternative solution without manual steps, so the infrastructure can be deployed automatically.
I like clav's solution but it requires the Command Runner to be installed first.
So, what I did in the end was using a Lambda-backed custom resource. I wrote it in JavaScript but you can also write it in Python.
Here is an overview of the steps you need to follow:
Create IAM Policy and add it to the Lambda function execution role.
Add creation of In-Line Lambda function to CloudFormation Template.
Add creation of Lambda-backed custom resource to CloudFormation Template.
Get the output from the custom resource via !GetAtt
And here are the details:
Create IAM Policy and add it to the Lambda function execution role.
# IAM: Policy to describe user pool clients of Cognito user pools
CognitoDescribeUserPoolClientsPolicy:
Type: AWS::IAM::ManagedPolicy
Properties:
Description: 'Allows describing Cognito user pool clients.'
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: Allow
Action:
- 'cognito-idp:DescribeUserPoolClient'
Resource:
- !Sub 'arn:aws:cognito-idp:${AWS::Region}:${AWS::AccountId}:userpool/*'
If necessary only allow it for certain resources.
Add creation of In-Line Lambda function to CloudFormation Template.
# Lambda: Function to get the secret of a Cognito User Pool Client
LambdaFunctionGetCognitoUserPoolClientSecret:
Type: AWS::Lambda::Function
Properties:
FunctionName: 'GetCognitoUserPoolClientSecret'
Description: 'Lambda function to get the secret of a Cognito User Pool Client.'
Handler: index.lambda_handler
Role: !Ref LambdaFunctionExecutionRoleArn
Runtime: nodejs14.x
Timeout: '30'
Code:
ZipFile: |
// Import required modules
const response = require('cfn-response');
const { CognitoIdentityServiceProvider } = require('aws-sdk');
// FUNCTION: Lambda Handler
exports.lambda_handler = function(event, context) {
console.log("Request received:\n" + JSON.stringify(event));
// Read data from input parameters
let userPoolId = event.ResourceProperties.UserPoolId;
let userPoolClientId = event.ResourceProperties.UserPoolClientId;
// Set physical ID
let physicalId = `${userPoolId}-${userPoolClientId}-secret`;
let errorMessage = `Error at getting secret from cognito user pool client:`;
try {
let requestType = event.RequestType;
if(requestType === 'Create') {
console.log(`Request is of type '${requestType}'. Get secret from cognito user pool client.`);
// Get secret from cognito user pool client
let cognitoIdp = new CognitoIdentityServiceProvider();
cognitoIdp.describeUserPoolClient({
UserPoolId: userPoolId,
ClientId: userPoolClientId
}).promise()
.then(result => {
let secret = result.UserPoolClient.ClientSecret;
response.send(event, context, response.SUCCESS, {Status: response.SUCCESS, Error: 'No Error', Secret: secret}, physicalId);
}).catch(error => {
// Error
console.log(`${errorMessage}:${error}`);
response.send(event, context, response.FAILED, {Status: response.FAILED, Error: error}, physicalId);
});
} else {
console.log(`Request is of type '${requestType}'. Not doing anything.`);
response.send(event, context, response.SUCCESS, {Status: response.SUCCESS, Error: 'No Error'}, physicalId);
}
} catch (error){
// Error
console.log(`${errorMessage}:${error}`);
response.send(event, context, response.FAILED, {Status: response.FAILED, Error: error}, physicalId);
}
};
Make sure you pass the right Lambda Execution Role to the parameter Role. It should contain the policy created in step 1.
Add creation of Lambda-backed custom resource to CloudFormation Template.
# Custom: Cognito user pool client secret
UserPoolClientSecret:
  Type: Custom::UserPoolClientSecret
  Properties:
    ServiceToken: !GetAtt LambdaFunctionGetCognitoUserPoolClientSecret.Arn
    UserPoolId: !Ref UserPool
    UserPoolClientId: !Ref UserPoolClient
Make sure you pass the ARN of the Lambda function created in step 2 as ServiceToken. Also make sure you pass in the right values for the parameters UserPoolId and UserPoolClientId; they should come from the Cognito User Pool and the Cognito User Pool Client.
Get the output from the custom resource via !GetAtt
!GetAtt UserPoolClientSecret.Secret
You can do this anywhere you want.
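Since the answer notes the function could also be written in Python, here is a rough Python sketch of the same Lambda-backed custom resource handler; it is untested and assumes the cfnresponse module that AWS makes available to inline (ZipFile) Lambda code:
import boto3
import cfnresponse

def lambda_handler(event, context):
    # Properties passed in by the Custom::UserPoolClientSecret resource
    user_pool_id = event['ResourceProperties']['UserPoolId']
    user_pool_client_id = event['ResourceProperties']['UserPoolClientId']
    physical_id = '{}-{}-secret'.format(user_pool_id, user_pool_client_id)
    try:
        if event['RequestType'] == 'Create':
            cognito = boto3.client('cognito-idp')
            result = cognito.describe_user_pool_client(
                UserPoolId=user_pool_id,
                ClientId=user_pool_client_id,
            )
            secret = result['UserPoolClient']['ClientSecret']
            cfnresponse.send(event, context, cfnresponse.SUCCESS,
                             {'Secret': secret}, physical_id)
        else:
            # Nothing to fetch on Update/Delete
            cfnresponse.send(event, context, cfnresponse.SUCCESS, {}, physical_id)
    except Exception as error:
        print('Error getting secret from Cognito user pool client: {}'.format(error))
        cfnresponse.send(event, context, cfnresponse.FAILED,
                         {'Error': str(error)}, physical_id)
The Secret attribute can then be read with !GetAtt UserPoolClientSecret.Secret exactly as above.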
I need to scan a DynamoDB table but I keep getting this error:
"errorMessage": "An error occurred (AccessDeniedException) when
calling the Scan operation: User:
arn:aws:sts::747857903140:assumed-role/test_role/TestFunction is not
authorized to perform: dynamodb:Scan on resource:
arn:aws:dynamodb:us-east-1:747857903140:table/HelpBot"
This is my Lambda code (index.py):
import json
import boto3
client = boto3.resource('dynamodb')
table = client.Table('HelpBot')
def handler(event, context):
table.scan()
return {
"statusCode": 200,
"body": json.dumps('Hello from Lambda!')
}
This is my SAM template (template.yml):
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Resources:
MyFunction:
Type: 'AWS::Serverless::Function'
Properties:
Handler: index.handler
Runtime: python3.6
Policies:
Version: '2012-10-17'
Statement:
- Effect: Allow
Action:
- dynamodb:Scan
Resource: arn:aws:dynamodb:us-east-1:747857903140:table/HelpBot
Does your Lambda role have the DynamoDB policies applied?
Go to IAM, then go to Policies.
Choose the DynamoDB policy (try full access first, then go back and restrict your permissions).
From Policy Actions, select Attach, and attach it to the role that is used by your Lambda.
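If you would rather do the attachment from code than through the console, a hedged boto3 sketch (the role name is taken from the error message, and the broad managed policy is just a starting point to scope down later):
import boto3

iam = boto3.client('iam')

# Attach the AmazonDynamoDBFullAccess managed policy to the Lambda's
# execution role. Replace 'test_role' with the role your function uses.
iam.attach_role_policy(
    RoleName='test_role',
    PolicyArn='arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess',
)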
Try configuring the boto client to use your IAM role directly in the lambda function.
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-api.html
import json
import boto3
client = boto3.resource(
'dynamodb',
aws_access_key_id = ACCESS_KEY,
aws_secret_access_key = SECERT_KEY
)
table = client.Table('HelpBot')
def handler(event, context):
table.scan()
return {
"statusCode": 200,
"body": json.dumps('Hello from Lambda!')
}
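The linked page describes assuming a role with STS rather than embedding long-lived keys, so a closer sketch of that approach might look like the following (the role ARN is a placeholder; the role must allow dynamodb:Scan and trust the Lambda's execution role):
import boto3

# Assume a role that has dynamodb:Scan on the table, then build the
# DynamoDB resource from its temporary credentials.
sts = boto3.client('sts')
assumed = sts.assume_role(
    RoleArn='arn:aws:iam::747857903140:role/role-with-dynamodb-scan',
    RoleSessionName='scan-helpbot',
)
creds = assumed['Credentials']

dynamodb = boto3.resource(
    'dynamodb',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)
print(dynamodb.Table('HelpBot').scan())
That said, the simpler fix is usually to grant dynamodb:Scan to the function's own execution role, as in the other answers.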
Make sure you are not applying the default (autogenerated) role to your Lambda. You need to create a role with Lambda basic execution permissions and attach it when you create the Lambda, or afterwards in the configuration pane: choose Permissions on the left and change it.
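If you want to switch the function's role from code rather than the console, a hedged boto3 sketch (the role ARN is a placeholder for the role you created):
import boto3

lambda_client = boto3.client('lambda')

# Point the function at a role that has basic execution permissions
# plus whatever the function itself needs (here: dynamodb:Scan).
lambda_client.update_function_configuration(
    FunctionName='TestFunction',
    Role='arn:aws:iam::747857903140:role/my-lambda-execution-role',
)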
I can't seem to get this to work. I created 2 Lambdas via Cloud9 (C9). I'm using boto3 to invoke one Lambda from another. Everything seems to work just fine via Cloud9, but when I publish and try to access it via API Gateway I keep getting "Endpoint request timed out" errors.
I know it can't be a timeout issue because I've set up my YAML files to allow enough time to execute, and the Lambdas right now are really simple (they only return a string).
Here are my current YAML files. I'm wondering if maybe there are some permissions I need to include for API Gateway in the second one.
Lambda1
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: An AWS Serverless Specification template describing your function.
Resources:
api:
Type: 'AWS::Serverless::Function'
Properties:
Description: ''
Handler: api/lambda_function.lambda_handler
MemorySize: 256
Role: 'arn:aws:iam::820788395625:role/service-role/api_int-role'
Runtime: python3.6
Timeout: 30
VpcConfig:
SecurityGroupIds:
- ...
SubnetIds:
- ...
Policies: AWSLambdaFullAccess
Lambda2
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: An AWS Serverless Specification template describing your function.
Resources:
api:
Type: 'AWS::Serverless::Function'
Properties:
Description: ''
Handler: api/lambda_function.lambda_handler
MemorySize: 512
Role: 'arn:aws:iam::820788395625:role/service-role/api_int-role'
Runtime: python3.6
Timeout: 15
VpcConfig:
SecurityGroupIds:
- ...
SubnetIds:
- ...
I just set up an API Gateway endpoint directly to Lambda2 and it returned no problem. So...
API Gateway -> Lambda 2 (works)
API Gateway -> Lambda 1 -> Lambda 2 (does not work)
So for some reason when I want to call Lambda 2 via Lambda 1 over API Gateway it doesn't work.
Here is the code that is calling the 2nd Lambda
import json
import boto3
def lambda_handler(event, context):
print('call boto3 client')
lambda_client = boto3.client('lambda', region_name='us-east-1')
print('boto3 client called')
print('invoke lambda')
env_response = lambda_client.invoke(
FunctionName='cloud9-apiAlpha-api-TBSOYXLVBCLX',
InvocationType='RequestResponse',
Payload=json.dumps(event)
)
print('lambda invoked')
print('env_response')
print(env_response)
print(env_response['Payload'])
print(env_response['Payload'].read())
return {
'statusCode': 200,
'headers': {
'Content-Type': 'application/json',
'Access-Control-Allow-Methods': 'POST,GET,OPTIONS,PUT,DELETE',
'Access-Control-Allow-Origin': '*'
},
'body': 'HELLO WORLD!',
'isBase64Encoded': False
}
Now when I look at the logs it gets to print('invoke lambda') but then stops and times out.
1. Invoking a Lambda from another Lambda can't be done without some configuration. In your .yml file, permission to invoke another Lambda must be specified. This can be accomplished by adding an iamRoleStatements section under the provider property,
or
by adding the managed policy AWSLambdaRole to the existing role attached to Lambda function_1.
provider:
name: aws
runtime: <runtime goes here> # e.g. python3.6 or nodejs6.10
iamRoleStatements:
- Effect: Allow
Action:
- lambda:InvokeFunction
Resource: "*"
or add/attach this AWSLambdaRole policy to the existing role attached to your Lambda function_1.
2. The invocation code in Lambda function_1 is attached below.
import json
import logging
import boto3

LOGGER = logging.getLogger()
LAMBDA_CLIENT = None

def lambda_handler(event, context):
    global LAMBDA_CLIENT
    if not LAMBDA_CLIENT:
        LAMBDA_CLIENT = boto3.client('lambda')
    try:
        encoded_payload = json.dumps({'message': 'this is an invocation call from lambda_1'}).encode('utf-8')
        invoke_resp = LAMBDA_CLIENT.invoke(
            FunctionName='function_2',
            InvocationType='RequestResponse',
            Payload=encoded_payload)
        status_code = invoke_resp['StatusCode']
        if status_code != 200:
            LOGGER.error('unexpected status code from invoke: %s', status_code)
        payload = invoke_resp['Payload'].read()
        resp = json.loads(payload)
        print(resp)
    except Exception:
        LOGGER.exception('failed to invoke function_2')
If you are using InvocationType=RequestResponse, then you can return a response from function_2.
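For completeness, a minimal sketch of what function_2 could return so that the json.loads on the payload read above has something to parse (the body here is just an example):
import json

def lambda_handler(event, context):
    # Whatever is returned here comes back to function_1 in
    # invoke_resp['Payload'] when InvocationType is RequestResponse.
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from function_2!')
    }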
I finally found the solution. The answer to my particular problem was that Lambda 1 and Lambda 2 were running inside a VPC and therefore had no internet connection. Once I removed the VPC configuration from Lambda 1, the invocation of Lambda 2 worked without any problems.
Just wanted to share in case I can save anyone else a week's worth of debugging LOL.
When I configure DNS query logging with Route53, I can create a resource policy for Route53 to log to my log group. I can confirm this policy with the CLI command aws logs describe-resource-policies and see something like:
{
"resourcePolicies": [
{
"policyName": "test-logging-policy",
"policyDocument": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"route53.amazonaws.com\"},\"Action\":[\"logs:CreateLogStream\",\"logs:PutLogEvents\"],\"Resource\":\"arn:aws:logs:us-east-1:xxxxxx:log-group:test-route53*\"}]}",
"lastUpdatedTime": 1520865407511
}
]
}
The CLI also has put-resource-policy to create one of these. I also see that Terraform has a resource aws_cloudwatch_log_resource_policy which does the same.
So the question: How do I do this with CloudFormation???
You can't use the CloudWatch console to create or edit a resource policy. You must use the CloudWatch API, one of the AWS SDKs, or the
AWS CLI.
There is no CloudFormation support for creating a resource policy right now, but you can create a custom Lambda resource to do this.
https://gist.github.com/sudharsans/cf9c52d7c78a81818a4a47872982bd76
CloudFormation Custom resource:
AddResourcePolicy:
Type: Custom::AddResourcePolicy
Version: '1.0'
Properties:
ServiceToken: arn:aws:lambda:us-east-1:872673965194:function:test-lambda-deploy-Lambda-15R963QKCI80A
CloudWatchLogsLogGroupArn: !GetAtt LogGroup.Arn
PolicyName: "testpolicy"
lambda:
import cfnresponse
import boto3

client = boto3.client('logs')

def PutPolicy(arn, policyname):
    # Creates/updates the CloudWatch Logs resource policy
    response = client.put_resource_policy(
        policyName=policyname,
        policyDocument="....",
    )
    return

def handler(event, context):
    ......
    if event['RequestType'] == "Delete":
        DeletePolicy(PolicyName)
    if event['RequestType'] == "Create":
        PutPolicy(CloudWatchLogsLogGroupArn, PolicyName)
        responseData['Data'] = "SUCCESS"
        status = cfnresponse.SUCCESS
    .....
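For reference, a hedged standalone sketch of what the put_resource_policy call could look like outside the custom resource, reusing the policy document shape shown in the question (the account id in the log group ARN is a placeholder):
import json
import boto3

logs = boto3.client('logs')

# Allow Route53 to create log streams and put log events into the
# test-route53* log groups, mirroring the policy from the question.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "route53.amazonaws.com"},
        "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
        "Resource": "arn:aws:logs:us-east-1:xxxxxx:log-group:test-route53*"
    }]
}

logs.put_resource_policy(
    policyName="test-logging-policy",
    policyDocument=json.dumps(policy_document),
)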
Four years later, this still doesn't seem to work through CloudFormation, although there is apparently support for it included now.