Very simple question: how do I list AWS Lambda applications using the CLI?
I'm following:
https://awscli.amazonaws.com/v2/documentation/api/2.1.29/reference/deploy/list-applications.html#examples
To get information about applications
The following list-applications example displays information about all applications that are associated with the user's AWS account.
aws deploy list-applications
And this is what I get:
$ aws deploy list-applications
applications: []
However, I have many AWS Lambda applications. How do I list them using the CLI?
UPDATE:
I'm aware of the aws lambda list-functions command; however, it is not the functions but the applications that I need to list (the functions and the applications are named differently).
You are looking for:
aws lambda list-functions
This command will list all the details of your Lambda functions; to list only the FunctionNames or FunctionArns, use:
aws lambda list-functions --query 'Functions[].FunctionName'
aws lambda list-functions --query 'Functions[].FunctionArn'
You can also filter by region; for example:
aws lambda list-functions --region eu-west-2
aws deploy list-applications is used to list the applications registered with AWS CodeDeploy, not your Lambda applications.
Edit
it is not the functions but the applications that I need to list.
This is not possible; there is no command to list Lambda applications, because:
An AWS Lambda application is a combination of Lambda functions, event
sources, and other resources that work together to perform tasks. You
can use AWS CloudFormation and other tools to collect your
application's components into a single package that can be deployed
and managed as one resource.
AWS Lambda applications
In fact, you can get the name of your application from CloudFormation:
aws cloudformation list-stacks
but whether that is a good idea, I don't know.
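If you go that route, a minimal sketch that lists only the stack names (assuming you only care about stacks that were created or updated successfully):
aws cloudformation list-stacks \
    --stack-status-filter CREATE_COMPLETE UPDATE_COMPLETE \
    --query 'StackSummaries[].StackName' \
    --output text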
You should use aws lambda list-functions to list all functions.
aws deploy list-applications lists all CodeDeploy applications.
All aws-cli calls have the format aws <service> <operation>, e.g.
aws lambda list-functions - AWS Lambda
aws deploy list-applications - AWS CodeDeploy
aws s3 ls - S3
You can view all available services here - https://awscli.amazonaws.com/v2/documentation/api/2.1.29/reference/index.html#available-services
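You can also discover the operations a service supports from the CLI itself, for example:
aws lambda help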
Related
I'm new to Cloud Custodian and have a few doubts specific to using it for AWS.
I ran the following policy (no filters or actions present) so that I could see all the options available as keys in value-type filters:
policies:
  - name: CheckPublicECRRepo
    resource: ecr
The output was a detailed list of all the AWS ECR private repositories in my account, which is exactly the same as running aws ecr describe-repositories --region <region>.
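For reference, I run the policy with the custodian CLI roughly like this (policy.yml is the file above; out is an arbitrary output directory):
custodian run -s out policy.yml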
So,
How do AWS CLI command responses relate to those from running Cloud Custodian commands? Are they both calling the same APIs? If yes, which API is being called here, exactly?
How can I write a Cloud Custodian policy to detect AWS ECR public repositories? I'm getting the desired output by running this AWS CLI command: aws ecr-public describe-repositories --region us-east-1.
The ecr-public resource does not seem to be supported yet, so I would either submit a feature request here or try to code the missing feature and contribute it.
I have an ECS task running on AWS Fargate. I generate some files in the container and need to upload them to an S3 bucket.
Can I do this by installing the AWS CLI in the container?
I'm not sure about the following:
Do I need to use some REST API (like the Python boto3 library), or can I use the AWS console?
How should I authenticate the requests (IAM and AWS Secrets Manager?)
Do I need to use some REST API (like the Python boto3 library), or can I use the AWS console?
Are you asking how to install the AWS CLI into the Docker container running in ECS? You would need to update your Docker image to include the AWS CLI and then redeploy the container to ECS. The AWS API, Boto3, or the AWS console are not going to help with that task.
How should I authenticate the requests (IAM and AWS Secrets Manager?)
By assigning an IAM role to the ECS task.
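Putting the two together: bake the CLI into the image, let the task role provide credentials, and copy the file. A minimal sketch (bucket name and paths are made up; a Debian-based image is assumed):
# In the Dockerfile, install the CLI once:
#   RUN apt-get update && apt-get install -y awscli
# Inside the running container the task role supplies credentials
# automatically, so a plain copy is enough:
aws s3 cp /tmp/report.txt s3://my-example-bucket/report.txt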
As the supported regions for AWS services and their resources are not always the same, I want to fetch the supported regions for resources programmatically. Is there a command to do so?
I can find them for a service, but not for a resource. For example:
aws ssm get-parameters-by-path --path /aws/service/global-infrastructure/services/ec2/regions --output json
Reference: New – Query for AWS Regions, Endpoints, and More Using AWS Systems Manager Parameter Store
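To pull out just the region codes from that call, a query can be added; a minimal sketch:
aws ssm get-parameters-by-path \
    --path /aws/service/global-infrastructure/services/ec2/regions \
    --query 'Parameters[].Value' \
    --output text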
I have created an API Gateway in AWS using the UI. I want to automate this process and write a shell script that creates the API Gateway the same way I have configured it in AWS.
As already suggested, I also recommend using better-suited tools such as CloudFormation to manage the infrastructure.
If you really want to use a shell script, you can use AWS CLI commands in it.
Create the API:
aws apigateway create-rest-api --name my-api
Get the root resource:
API=bs8fqo6bp0   # the rest-api-id returned by create-rest-api
aws apigateway get-resources --rest-api-id $API
Create a resource:
aws apigateway create-resource --rest-api-id $API --path-part test \
    --parent-id e8kitthgdb   # the root resource id from get-resources
Here is an example from the AWS docs: https://docs.aws.amazon.com/lambda/latest/dg/with-on-demand-https-example.html
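From there, the usual next steps are to add a method and an integration to the new resource and deploy the API. A rough sketch, where the resource id and stage name are placeholders:
RESOURCE=abc123   # hypothetical id returned by create-resource
aws apigateway put-method --rest-api-id $API --resource-id $RESOURCE \
    --http-method GET --authorization-type NONE
aws apigateway put-integration --rest-api-id $API --resource-id $RESOURCE \
    --http-method GET --type MOCK
aws apigateway create-deployment --rest-api-id $API --stage-name test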
Code: (entrypoint.sh)
printenv
# Fetch temporary credentials from the ECS task metadata endpoint
CREDENTIALS=$(curl -s "http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI")
# Use jq -r for raw output; without -r the values keep their JSON quotes,
# which makes the exported credentials invalid
ACCESS_KEY_ID=$(echo "$CREDENTIALS" | jq -r .AccessKeyId)
SECRET_ACCESS_KEY=$(echo "$CREDENTIALS" | jq -r .SecretAccessKey)
TOKEN=$(echo "$CREDENTIALS" | jq -r .Token)
export AWS_ACCESS_KEY_ID=$ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=$SECRET_ACCESS_KEY
export AWS_SESSION_TOKEN=$TOKEN
aws s3 cp s3://BUCKET/file.txt /PATH/file.txt
Problem:
I'm trying to fetch files from AWS S3 to ECS, inspired by the AWS documentation (but I'm fetching from S3 directly, not through a VPC endpoint).
I have configured the bucket policy and the role policy (the role is passed in the task definition as taskRoleArn and executionRoleArn).
Locally, when I fetch with the AWS CLI using the temporary credentials (which I logged in ECS with the printenv command in the entrypoint script), everything works fine and I can save files on my PC.
On ECS I get this error:
fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden
Where can I find a solution? Has anyone had a similar problem?
First, if you are working inside AWS, it is strongly recommended to use an ECS service role, ECS task role, or EC2 instance role; you do not need to fetch credentials from the metadata endpoint yourself.
It seems that either the current role does not have permission to S3, or the entrypoint is not exporting the environment variables properly.
If your task already has a role assigned, you do not need to export the access key; just call aws s3 cp s3://BUCKET/file.txt /PATH/file.txt and it should work.
IAM Roles for Tasks
With IAM roles for Amazon ECS tasks, you can specify an IAM role that
can be used by the containers in a task. Applications must sign their
AWS API requests with AWS credentials, and this feature provides a
strategy for managing credentials for your applications to use,
similar to the way that Amazon EC2 instance profiles provide
credentials to EC2 instances. Instead of creating and distributing
your AWS credentials to the containers or using the EC2 instance’s
role, you can associate an IAM role with an ECS task definition or
RunTask API operation.
So when you assign a role to the ECS task or service, your entrypoint can be as simple as:
printenv
aws s3 cp s3://BUCKET/file.txt /PATH/file.txt
Also, your export will not work as you are expecting; the best way to pass environment variables to the container is through the task definition, not export.
I suggest assigning a role to the ECS task; then it should work as you expect.
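For example, a minimal sketch of granting the task role read access to S3 (the role name is hypothetical; AmazonS3ReadOnlyAccess is an AWS managed policy):
aws iam attach-role-policy \
    --role-name my-ecs-task-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
Then point taskRoleArn in your task definition at that role.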