How do I do a one-time check if a lambda function exists via the CLI? I saw this function-exists option - https://docs.aws.amazon.com/cli/latest/reference/lambda/wait/function-exists.html
But it polls every second and returns a failure after 20 failed checks. I only want to check once and fail if it isn't found. Is there a way to do that?
You can check the exit code of get-function in bash. If the function does not exist, it exits with code 255; on success it exits with 0.
e.g.
aws lambda get-function --function-name my_lambda
echo $?
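If you only care about which branch to take, you can test the exit status directly in an if, without echoing $?. Below is a minimal sketch of the pattern; the check_exists helper name is invented, and true/false stand in for the aws call so it runs without credentials (in real use the probe would be aws lambda get-function --function-name "$1"):

```shell
# Exit-code gate: run the probe command quietly, branch on its status.
check_exists() {
    # In real use: aws lambda get-function --function-name "$1" > /dev/null 2>&1
    "$@" > /dev/null 2>&1
}

if check_exists true;  then echo "exists";  else echo "missing"; fi
if check_exists false; then echo "exists";  else echo "missing"; fi
```

The same pattern works for any CLI probe that signals existence through its exit code.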
And you can use it like this (paste into your terminal):
function does_lambda_exist() {
    aws lambda get-function --function-name "$1" > /dev/null 2>&1
    if [ 0 -eq $? ]; then
        echo "Lambda '$1' exists"
    else
        echo "Lambda '$1' does not exist"
    fi
}
does_lambda_exist my_lambda_fn_name
Related
How do I check if a named profile exists before I attempt to use it?
The AWS CLI throws an ugly error if I attempt to use a non-existent profile, so I'd like to do something like this:
$(awsConfigurationExists "${profile_name}") && aws iam list-users --profile "${profile_name}" || echo "can't do it!"
Method 1 - Check entries in the .aws/config file
function awsConfigurationExists() {
    local profile_name="${1}"
    local profile_name_check=$(grep "\[profile ${profile_name}]" "$HOME/.aws/config")
    if [ -z "${profile_name_check}" ]; then
        return 1
    else
        return 0
    fi
}
Method 2 - Check the result of aws configure list; see aws-cli issue #819
function awsConfigurationExists() {
    local profile_name="${1}"
    local profile_status=$( (aws configure --profile "${profile_name}" list) 2>&1)
    if [[ $profile_status = *'could not be found'* ]]; then
        return 1
    else
        return 0
    fi
}
Usage:
$(awsConfigurationExists "my-aws-profile") && echo "does exist" || echo "does not exist"
or
if $(awsConfigurationExists "my-aws-profile"); then
    echo "does exist"
else
    echo "does not exist"
fi
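Method 1 can be tried safely against a throwaway config file before pointing it at ~/.aws/config. This sketch is illustrative only (the profile_in_config helper and the sample profiles are invented); it just shows the grep check behaving as expected on the "[profile NAME]" section format:

```shell
# Build a throwaway config with two profiles, then probe it.
cfg=$(mktemp)
printf '[profile dev]\n[profile prod]\n' > "$cfg"

profile_in_config() {
    # $1 = config file, $2 = profile name
    grep -q "^\[profile ${2}]$" "$1"
}

profile_in_config "$cfg" dev && echo "dev exists"
profile_in_config "$cfg" qa  || echo "qa missing"
rm -f "$cfg"
```

Anchoring the pattern with ^ and $ avoids false positives from profile names that are prefixes of one another (e.g. dev vs dev2).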
I was stuck with the same problem and the proposed answer did not work for me.
Here is my solution with aws-cli/2.8.5 Python/3.9.11 Darwin/21.6.0 exe/x86_64 prompt/off:
export AWS_PROFILE=localstack
aws configure list-profiles | grep -q "${AWS_PROFILE}"
if [ $? -eq 0 ]; then
    echo "AWS Profile [$AWS_PROFILE] already exists"
else
    echo "Creating AWS Profile [$AWS_PROFILE]"
    aws configure --profile "$AWS_PROFILE" set aws_access_key_id test
    aws configure --profile "$AWS_PROFILE" set aws_secret_access_key test
fi
I can get the details with
$ aws lambda get-function --function-name random_number
{
    "Configuration": {
        "FunctionName": "random_number",
        "FunctionArn": "arn:aws:lambda:us-east-2:193693970645:function:random_number",
        "Runtime": "ruby2.5",
        "Role": "arn:aws:iam::193693970645:role/service-role/random_number-role-8cy8a1a7",
        ...
But how can I get just a couple of fields, like the function name?
I tried:
$ aws lambda get-function --function-name random_number --query "Configuration[*].[FunctionName]"
but I get null
Your overall approach is correct, you just need to adjust the query:
$ aws lambda get-function --function-name random_number \
--query "Configuration.FunctionName" --output text
I also added a parameter to convert the result to text, which makes processing a bit easier.
Here is a simple awk (GNU awk) script that does the trick: extract the value of quoted field #3, only for lines matching /FunctionName/.
awk 'BEGIN {FPAT="\"[^\"]+"}/FunctionName/{print substr($3,2)}'
Piped with your initial command:
$ aws lambda get-function --function-name random_number | awk 'BEGIN {FPAT="\"[^\"]+"}/FunctionName/{print substr($3,2)}'
One way to achieve that is by using jq. Note that for jq to work, the output must be JSON.
From the docs:
jq is like sed for JSON data - you can use it to slice and filter and
map and transform structured data with the same ease that sed, awk,
grep and friends let you play with text.
Usage example:
aws lambda get-function --function-name test --output json | jq -r '.Configuration.FunctionName'
Use get-function-configuration as in the following:
aws lambda get-function-configuration --function-name MyFunction --query "[FunctionName]"
I have a Jenkins pipeline where I need to check if an AWS Lambda function exists. The check returns exit code 0 if the function exists in AWS, and 255 if it doesn't, but the script exits as soon as it hits the 255.
I had seen code on here about ways to check if a function exists, and ended up pursuing the technique of checking the exit codes. However, I need a way to catch this exit code without actually exiting the Jenkins pipeline script.
For test purposes, testJenkins already exists in AWS Lambda, while testJenkins2 does not exist.
pipeline {
    agent any
    environment {
        PATH = "/var/lib/jenkins/aws:$PATH"
    }
    stages {
        stage('Checkout code') {
            steps {
                checkout scm
            }
        }
        stage('Zip up Lambda') {
            steps {
                dir('lambda/testJenkins') {
                    sh 'zip testJenkins.zip testJenkins.py'
                }
            }
        }
        stage('Upload to AWS') {
            steps {
                dir('lambda/testJenkins') {
                    sh '''function does_lambda_exist() {
                        aws lambda get-function --function-name $1 > /dev/null 2>&1
                        if [ 0 -eq $? ]; then
                            echo "Lambda '$1' exists"
                        else
                            echo "Lambda '$1' does not exist"
                        fi
                    } && does_lambda_exist testJenkins && does_lambda_exist testJenkins2'''
                    sh 'aws lambda create-function --zip-file fileb://testJenkins.zip --function-name testJenkins --runtime python3.7 --role <role> --handler testJenkins.lambda_handler'
                }
            }
        }
    }
}
It should continue after the first test, but unfortunately exits right after receiving the exit code:
+ does_lambda_exist testJenkins
+ aws lambda get-function --function-name testJenkins
+ '[' 0 -eq 0 ']'
+ echo 'Lambda '\''testJenkins'\'' exists'
Lambda 'testJenkins' exists
+ does_lambda_exist testJenkins2
+ aws lambda get-function --function-name testJenkins2
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 255
Finished: FAILURE
Any help or better methods would be greatly appreciated; I'm new to Jenkins pipelines.
Your first sh step fails because one command in the && chain fails. You don't need && here; the commands in the first sh step can be separated by normal line breaks. Also, add a shebang at the start of the script. Example:
#!/bin/bash
function does_lambda_exist() {
    aws lambda get-function --function-name "$1" > /dev/null 2>&1
    if [ 0 -eq $? ]; then
        echo "Lambda '$1' exists"
    else
        echo "Lambda '$1' does not exist"
    fi
}
does_lambda_exist testJenkins
does_lambda_exist testJenkins2
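The difference between the && chain and separate lines is easy to demonstrate locally. Without set -e, a failing command on its own line just sets $? and execution continues, while inside an && chain it short-circuits everything after it (the demo runs in a subordinate sh so the deliberate failures are contained):

```shell
# Run the demo in a child shell so the deliberate failures stay contained.
sh -c '
    false && echo "this is never printed"   # && stops at the failure
    false                                   # plain line: just sets $?
    echo "still running (last exit code: $?)"
'
```

This is exactly why the Jenkins step above died: sh treats the whole && chain's non-zero exit as the step's exit code.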
What is the purpose of “&&” in a shell command?
I'm trying to paginate over EC2 Reserved Instance offerings, but can't seem to paginate via the CLI (see below).
% aws ec2 describe-reserved-instances-offerings --max-results 20
{
    "NextToken": "someToken",
    "ReservedInstancesOfferings": [
        {
            ...
        }
    ]
}
% aws ec2 describe-reserved-instances-offerings --max-results 20 --starting-token someToken
Parameter validation failed:
Unknown parameter in input: "PaginationConfig", must be one of: DryRun, ReservedInstancesOfferingIds, InstanceType, AvailabilityZone, ProductDescription, Filters, InstanceTenancy, OfferingType, NextToken, MaxResults, IncludeMarketplace, MinDuration, MaxDuration, MaxInstanceCount
The documentation found in [1] says to use start-token. How am I supposed to do this?
[1] http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-reserved-instances-offerings.html
With deference to marjamis's 2017 solution, which must have worked on a prior CLI version, here is a working approach for paginating from AWS in bash, tested on a Mac laptop with aws-cli/2.1.2:
# The scope of this example requires that credentials are already available or
# are passed in with the AWS CLI command.
# The parsing example uses jq, available from https://stedolan.github.io/jq/
# The below command is the one being executed and should be adapted appropriately.
# Note that the max items may need adjusting depending on how many results are returned.
aws_command="aws emr list-instances --max-items 333 --cluster-id $active_cluster"
unset NEXT_TOKEN
function parse_output() {
    if [ ! -z "$cli_output" ]; then
        # The output parsing below also needs to be adapted as needed.
        echo "$cli_output" | jq -r '.Instances[] | "\(.Ec2InstanceId)"' >> listOfinstances.txt
        NEXT_TOKEN=$(echo "$cli_output" | jq -r ".NextToken")
    fi
}
# The command is run and output parsed in the below statements.
cli_output=$($aws_command)
parse_output
# The below while loop runs until either the command errors due to throttling or
# comes back with a pagination token. In the case of being throttled / throwing
# an error, it sleeps for three seconds and then tries again.
while [ "$NEXT_TOKEN" != "null" ]; do
    if [ "$NEXT_TOKEN" == "null" ] || [ -z "$NEXT_TOKEN" ]; then
        echo "now running: $aws_command"
        sleep 3
        cli_output=$($aws_command)
        parse_output
    else
        echo "now paginating: $aws_command --starting-token $NEXT_TOKEN"
        sleep 3
        cli_output=$($aws_command --starting-token $NEXT_TOKEN)
        parse_output
    fi
done # pagination loop
Looks like some busted documentation.
If you run the following, this works:
aws ec2 describe-reserved-instances-offerings --max-results 20 --next-token someToken
Translating the error message: it expected NextToken, which is represented as --next-token on the CLI.
If you continue to read the reference documentation that you provided, you will learn that:
--starting-token (string)
A token to specify where to start paginating. This is the NextToken from a previously truncated response.
Moreover:
--max-items (integer)
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.
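The NextToken handshake described above can be sketched without any AWS calls. Here a fake_page function stands in for the paginated API (its name and output format are invented for the demo); the loop keeps requesting pages until the token comes back null, which mirrors passing --starting-token on every request after the first:

```shell
# Stand-in for a paginated API: emits one page plus the next token.
fake_page() {
    case "$1" in
        "") echo "page1 token=t2" ;;
        t2) echo "page2 token=t3" ;;
        t3) echo "page3 token=null" ;;
    esac
}

token=""
while : ; do
    out=$(fake_page "$token")
    echo "${out% token=*}"        # the page payload
    token="${out##*token=}"       # the pagination token
    [ "$token" = "null" ] && break
done
```

In the real loop you would replace fake_page with the aws command, adding --starting-token "$token" on every request after the first, as the bash script earlier in the thread does.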
I have managed to push my application logs to AWS Cloudwatch by using the AWS CloudWatch log agent. But the CloudWatch web console does not seem to provide a button to allow you to download/export the log data from it.
Any idea how I can achieve this goal?
The latest AWS CLI includes CloudWatch Logs commands that allow you to download the logs as JSON, a text file, or any other output supported by the AWS CLI.
For example, to get the first 1MB (up to 10,000 log entries) from stream a in group A into a text file, run:
aws logs get-log-events \
--log-group-name A --log-stream-name a \
--output text > a.log
The command is currently limited to a response size of at most 1MB (up to 10,000 records per request); if you have more, you need to implement your own page-stepping mechanism using the --next-token parameter. I expect that in the future the CLI will also allow a full dump in a single command.
Update
Here's a small Bash script to list events from all streams in a specific group, since a specified time:
#!/bin/bash
function dumpstreams() {
    aws $AWSARGS logs describe-log-streams \
        --order-by LastEventTime --log-group-name $LOGGROUP \
        --output text | while read -a st; do
            [ "${st[4]}" -lt "$starttime" ] && continue
            stname="${st[1]}"
            echo ${stname##*:}
        done | while read stream; do
            aws $AWSARGS logs get-log-events \
                --start-from-head --start-time $starttime \
                --log-group-name $LOGGROUP --log-stream-name $stream --output text
        done
}
AWSARGS="--profile myprofile --region us-east-1"
LOGGROUP="some-log-group"
TAIL=
starttime=$(date --date "-1 week" +%s)000
nexttime=$(date +%s)000
dumpstreams
if [ -n "$TAIL" ]; then
    while true; do
        starttime=$nexttime
        nexttime=$(date +%s)000
        sleep 1
        dumpstreams
    done
fi
For that last part, if you set TAIL, the script will continue to fetch log events and will report newer events as they come in (with some expected delay).
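The moving-window logic behind TAIL can be seen in isolation. In this sketch, fake_dump is an invented stand-in for dumpstreams and the timestamps are plain integers rather than epoch milliseconds; each pass dumps one window and then slides it forward, exactly as the tail loop above does:

```shell
# Each pass dumps the window [starttime, nexttime), then slides it forward.
fake_dump() { echo "events from $starttime to $nexttime"; }

starttime=100
nexttime=200
fake_dump                      # initial window

for pass in 1 2; do            # stands in for the `while true` tail loop
    starttime=$nexttime        # new window starts where the old one ended
    nexttime=$((nexttime + 100))
    fake_dump
done
```

Because each window starts where the previous one ended, events are neither skipped nor fetched twice (modulo delivery delay on the AWS side).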
There is also a python project called awslogs that allows you to get the logs: https://github.com/jorgebastida/awslogs
There are things like:
list log groups:
$ awslogs groups
list streams for given log group:
$ awslogs streams /var/log/syslog
get the log records from all streams:
$ awslogs get /var/log/syslog
get the log records from a specific stream:
$ awslogs get /var/log/syslog stream_A
and much more (filtering by time period, watching log streams, ...).
I think this tool might help you do what you want.
It seems AWS has added the ability to export an entire log group to S3.
You'll need to set up permissions on the S3 bucket to allow CloudWatch to write to it, by adding the following to your bucket policy (replace the region with your region and the bucket name with your bucket name):
{
    "Effect": "Allow",
    "Principal": {
        "Service": "logs.us-east-1.amazonaws.com"
    },
    "Action": "s3:GetBucketAcl",
    "Resource": "arn:aws:s3:::tsf-log-data"
},
{
    "Effect": "Allow",
    "Principal": {
        "Service": "logs.us-east-1.amazonaws.com"
    },
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::tsf-log-data/*",
    "Condition": {
        "StringEquals": {
            "s3:x-amz-acl": "bucket-owner-full-control"
        }
    }
}
Details can be found in Step 2 of this AWS doc
The other answers were not useful for me with AWS Lambda logs, since Lambda creates many log streams and I just wanted to dump everything from the last week. I finally found that the following command was what I needed:
aws logs tail --since 1w LOG_GROUP_NAME > output.log
Note that LOG_GROUP_NAME is the lambda function path (e.g. /aws/lambda/FUNCTION_NAME) and you can replace the --since argument with a variety of times (1w = 1 week, 5m = 5 minutes, etc.).
I would add this one-liner to get all logs for a stream:
aws logs get-log-events --log-group-name my-log-group --log-stream-name my-log-stream | grep '"message":' | awk -F '"' '{ print $(NF-1) }' > my-log-group_my-log-stream.txt
Or in a slightly more readable format:
aws logs get-log-events \
    --log-group-name my-log-group \
    --log-stream-name my-log-stream \
    | grep '"message":' \
    | awk -F '"' '{ print $(NF-1) }' \
    > my-log-group_my-log-stream.txt
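You can sanity-check the grep | awk pair without calling AWS by feeding it a line shaped like the CLI's JSON output (the sample line below is invented). awk splits on double quotes, so $(NF-1) is the last quoted value on the line, which for this output shape is the message body:

```shell
# A line shaped like the CLI's JSON output for one event (sample data).
sample='{ "timestamp": 1, "message": "hello world", }'

echo "$sample" \
    | grep '"message":' \
    | awk -F '"' '{ print $(NF-1) }'
```

Note this relies on the message itself containing no double quotes; for messages with embedded quotes, jq is the safer extractor.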
And you can make a handy script out of it that is admittedly less powerful than @Guss's, but simple enough. I saved it as getLogs.sh and invoke it with: ./getLogs.sh log-group log-stream
#!/bin/bash
if [[ "${#}" != 2 ]]
then
    echo "This script requires two arguments!"
    echo
    echo "Usage:"
    echo "${0} <log-group-name> <log-stream-name>"
    echo
    echo "Example:"
    echo "${0} my-log-group my-log-stream"
    exit 1
fi
OUTPUT_FILE="${1}_${2}.log"
aws logs get-log-events \
    --log-group-name "${1}" \
    --log-stream-name "${2}" \
    | grep '"message":' \
    | awk -F '"' '{ print $(NF-1) }' \
    > "${OUTPUT_FILE}"
echo "Logs stored in ${OUTPUT_FILE}"
Apparently there isn't an out-of-the-box way to download CloudWatch Logs from the AWS Console. Perhaps you can write a script to fetch them using the SDK / API.
The good thing about CloudWatch Logs is that you can retain the logs indefinitely (Never Expire), unlike CloudWatch, which keeps the data for just 14 days. That means you can run the script monthly or quarterly rather than on-demand.
More information about the CloudWatchLogs API,
http://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/Welcome.html
http://awsdocs.s3.amazonaws.com/cloudwatchlogs/latest/cwl-api.pdf
You can now perform exports via the CloudWatch Management Console with the new CloudWatch Logs Insights page. Full documentation here: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_ExportQueryResults.html. I had already started ingesting my Apache logs into CloudWatch as JSON, so YMMV if you haven't set it up in advance.
Add Query to Dashboard or Export Query Results
After you run a query, you can add the query to a CloudWatch
dashboard, or copy the results to the clipboard.
Queries added to dashboards automatically re-run every time you load
the dashboard and every time that the dashboard refreshes. These
queries count toward your limit of four concurrent CloudWatch Logs
Insights queries.
To add query results to a dashboard:
1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
2. In the navigation pane, choose Insights.
3. Choose one or more log groups and run a query.
4. Choose Add to dashboard.
5. Select the dashboard, or choose Create new to create a new dashboard for the query results.
6. Choose Add to dashboard.
To copy query results to the clipboard:
1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
2. In the navigation pane, choose Insights.
3. Choose one or more log groups and run a query.
4. Choose Actions, Copy query results.
Inspired by saputkin, I have created a Python script that downloads all the logs for a log group in a given time period.
The script itself: https://github.com/slavogri/aws-logs-downloader.git
In case there are multiple log streams for that period, multiple files will be created. Downloaded files will be stored in the current directory, named after the log streams that have log events in the given time period. (If the group name contains forward slashes, they will be replaced by underscores. Each file will be overwritten if it already exists.)
Prerequisite: you need to be logged in to your AWS profile. The script calls the AWS command line APIs on your behalf: "aws logs describe-log-streams" and "aws logs get-log-events".
Usage example: python aws-logs-downloader -g /ecs/my-cluster-test-my-app -t "2021-09-04 05:59:50 +00:00" -i 60
optional arguments:
-h, --help show this help message and exit
-v, --version show program's version number and exit
-g , --log-group (required) Log group name for which the log stream events needs to be downloaded
-t , --end-time (default: now) End date and time of the downloaded logs in format: %Y-%m-%d %H:%M:%S %z (example: 2021-09-04 05:59:50 +00:00)
-i , --interval (default: 30) Time period in minutes before the end-time. This will be used to calculate the time since which the logs will be downloaded.
-p , --profile (default: dev) The aws profile that is logged in, and on behalf of which the logs will be downloaded.
-r , --region (default: eu-central-1) The aws region from which the logs will be downloaded.
Please let me know if it was useful to you. :)
After I did it I learned that there is another option using Boto3: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/logs.html#CloudWatchLogs.Client.get_log_events
Still, the command line API seems like a good option to me.
export LOGGROUPNAME=[SOME_LOG_GROUP_NAME]; for LOGSTREAM in `aws --output text logs describe-log-streams --log-group-name ${LOGGROUPNAME} |awk '{print $7}'`; do aws --output text logs get-log-events --log-group-name ${LOGGROUPNAME} --log-stream-name ${LOGSTREAM} >> ${LOGGROUPNAME}_output.txt; done
Adapted @Guss's answer for macOS. As I am not really a bash guy, I had to use Python to convert dates to a human-readable form.
runawslog -1w gets the last week, and so on:
runawslog() { sh awslogs.sh $1 | grep "EVENTS" | python parselogline.py; }
awslogs.sh:
#!/bin/bash
#set -x
function dumpstreams() {
    aws $AWSARGS logs describe-log-streams \
        --order-by LastEventTime --log-group-name $LOGGROUP \
        --output text | while read -a st; do
            [ "${st[4]}" -lt "$starttime" ] && continue
            stname="${st[1]}"
            echo ${stname##*:}
        done | while read stream; do
            aws $AWSARGS logs get-log-events \
                --start-from-head --start-time $starttime \
                --log-group-name $LOGGROUP --log-stream-name $stream --output text
        done
}
AWSARGS=""
#AWSARGS="--profile myprofile --region us-east-1"
LOGGROUP="/aws/lambda/StockTrackFunc"
TAIL=
FROMDAT=$1
starttime=$(date -v ${FROMDAT} +%s)000
nexttime=$(date +%s)000
dumpstreams
if [ -n "$TAIL" ]; then
    while true; do
        starttime=$nexttime
        nexttime=$(date +%s)000
        sleep 1
        dumpstreams
    done
fi
parselogline.py:
import sys
import datetime

dat = sys.stdin.read()
for k in dat.split('\n'):
    d = k.split('\t')
    if len(d) < 3:
        continue
    d[2] = '\t'.join(d[2:])
    print(str(datetime.datetime.fromtimestamp(int(d[1]) / 1000)) + '\t' + d[2])
I had a similar use case where I had to download all the streams for a given log group. See if this script helps.
#!/bin/bash
if [[ "${#}" != 1 ]]
then
    echo "This script requires one argument!"
    echo
    echo "Usage:"
    echo "${0} <log-group-name>"
    exit 1
fi
streams=$(aws logs describe-log-streams --log-group-name "${1}")
for stream in $(jq '.logStreams | keys | .[]' <<< "$streams"); do
    record=$(jq -r ".logStreams[$stream]" <<< "$streams")
    streamName=$(jq -r ".logStreamName" <<< "$record")
    echo "Downloading ${streamName}"
    aws logs get-log-events --log-group-name "${1}" --log-stream-name "$streamName" --output json > "${stream}.log"
    echo "Completed download: ${streamName}"
done
You have to pass the log group name as an argument.
E.g.: bash <name_of_the_bash_file>.sh <group_name>
I found the AWS documentation to be complete and accurate: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/S3ExportTasks.html
It lays out the steps for exporting logs from CloudWatch to S3.