Error in Zabbix item when monitoring SSL on an AWS ALB - amazon-web-services

I need to use Zabbix to monitor the number of SSL certificates on an ALB in AWS.
Zabbix can't process the item:
UserParameter=ssl.count,aws elbv2 describe-listener-certificates --listener-arn arn:aws:elasticloadbalancing:eu-central-*******:******:1dcfc52e --profile ******** | jq '.Certificates | .[].CertificateArn' | wc -l
Zabbix writes:
Value of type "string" is not suitable for value type "Numeric (unsigned)". Value "The config profile (****) could not be found0"
If I change the Type of information to Text, then everything works except for the triggers, which should return how many SSL certificates are on the ALB.
The output from the zabbix_agentd -t ssl.count command is ssl.count [t|26], everything works from the console, and all credentials are fine.
The ALB is declared in Zabbix in the macro {$HOST.NAME}: main-devs-*******.eu-central-1.elb.amazonaws.com
Has anyone come across this? What could it be?

The problem is that you are not redirecting standard error, and so your output is not a number. Since it's not a number, it is not valid for a Numeric item.
Example:
$ ls -l temp.sh nonexisting.sh
ls: cannot access nonexisting.sh: No such file or directory
-rw-r--r-- 1 myuser domain users 14918 Oct 15 2019 temp.sh
$ ls -l temp.sh nonexisting.sh | wc -l
ls: cannot access nonexisting.sh: No such file or directory
1
$ ls -l temp.sh nonexisting.sh 2>/dev/null | wc -l
1
With your code:
UserParameter=ssl.count,aws elbv2 describe-listener-certificates --listener-arn arn:aws:elasticloadbalancing:eu-central-*******:******:1dcfc52e --profile ******** 2>/dev/null | jq '.Certificates | .[].CertificateArn' | wc -l
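Once the agent configuration is reloaded, you can verify from the Zabbix server side that the item now returns a plain number. This is a hedged check, assuming zabbix_get is installed; the agent host name is a placeholder and the 26 is the certificate count reported in the question:
$ zabbix_get -s agent-host -k ssl.count
26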

Related

My aws cli command returns an error message "The specified key does not exist."

Here is the command I used; it works fine when there is no space in the S3 URL:
aws s3 ls <s3://bucket/folder/> --recursive | awk '{print $4}' | awk "NR >= 2" | xargs -I %%% aws s3api restore-object --bucket <bucket> --restore-request Days=3,GlacierJobParameters={"Tier"="Bulk"} --key %%%
But if there is a space in the S3 URL, like in the picture I attached, it returns an error message. I don't know what the problem is; how do I fix it?
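One likely cause (an assumption, not something confirmed in this thread) is that awk '{print $4}' keeps only the part of the key before the first space, so the truncated key no longer matches any object. A hedged sketch that preserves spaces by listing the keys with s3api and jq instead of parsing aws s3 ls output (bucket and prefix names are placeholders):
aws s3api list-objects --bucket my-bucket --prefix "folder/" |
  jq -r '.Contents[].Key' |
  while IFS= read -r key; do
    # quote the key so embedded spaces survive
    aws s3api restore-object --bucket my-bucket --key "$key" \
      --restore-request 'Days=3,GlacierJobParameters={Tier=Bulk}'
  done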

AWS IAM - How to show describe policy statements using the CLI?

How can I use the AWS CLI to show an IAM policy's full body including the Effect, Action and Resource statements?
"aws iam list-policies" command lists all the policies but not the actual JSON E,A,R statements contained within the policy.
I could use the "aws iam get-policy-version" command but this does not show the policy name in its output. When I am running this command via a script to obtain information for dozens of policies, there is no way to know which policy the output will belong to.
Is there another way of doing this?
The only way to do this, as you've said, is the following (a rough sketch follows the list):
Get all IAM policies via the list-policies verb.
Loop over the output, taking the "Arn" and "DefaultVersionId".
Pass these into the get-policy-version verb.
Map the PolicyName from the iteration to the PolicyVersion.Document value in the second request.
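Putting those steps together, a minimal sketch of the loop (the --query expressions and --output text parsing are illustrative assumptions, not part of the answers below):
aws iam list-policies --query 'Policies[].[Arn,DefaultVersionId,PolicyName]' --output text |
  while read -r arn version name; do
    echo "== $name =="
    # fetch the full Effect/Action/Resource document for this policy
    aws iam get-policy-version --policy-arn "$arn" --version-id "$version" \
      --query 'PolicyVersion.Document'
  done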
A slight modification to #uberhumus's suggestion to reduce the number of policies that will be extracted: use the --scope Local qualifier in the query to limit it. Otherwise it will spit out hundreds of policies in the account. Limiting the scope to Local will only list policies which are user-provisioned in the account. Here's the modified version:
RAW_POLICIES=$(aws iam list-policies --scope Local --query Policies[].[Arn,PolicyName,DefaultVersionId])
POLICIES=$(echo $RAW_POLICIES | tr -d " " | sed 's/\],/\]\n/g')
for POLICY in $POLICIES
do echo $POLICY | cut -d '"' -f 4
echo -e "---------------\n"
aws iam get-policy-version --version-id $(echo $POLICY | cut -d '"' -f 6) --policy-arn $(echo $POLICY | cut -d '"' -f 2)
echo -e "\n-----------------\n"
done
As mokugo-devops said in his answer, and as you stated in your question, you can only use "get-policy-version" to get the proper JSON. Here is how I would do it:
RAW_POLICIES=$(aws iam list-policies --query Policies[].[Arn,PolicyName,DefaultVersionId])
POLICIES=$(echo $RAW_POLICIES | tr -d " " | sed 's/\],/\]\n/g')
for POLICY in $POLICIES
do echo $POLICY | cut -d '"' -f 4
echo -e "---------------\n"
aws iam get-policy-version --version-id $(echo $POLICY | cut -d '"' -f 6) --policy-arn $(echo $POLICY | cut -d '"' -f 2)
echo -e "\n-----------------\n"
done
Now a bit of explanation about the script:
RAW_POLICIES will get you a giant list of arrays, each containing the name of the policy (as requested) plus the policy ARN and default version ID (as needed). It will, however, contain spaces that make iterating over it directly in bash less comfortable (though not impossible for the sufficiently stubborn).
To make the upcoming loop easier, the second line, which defines the POLICIES variable, strips the spaces and then uses sed to insert the newlines we will need.
This leaves very little to do in the actual loop: it just prints the policy name, some pretty separator lines, and invokes the command you predicted would be the one used, get-policy-version.

How to collect a grep result and use it in an AWS configset

In my AWS CloudFormation (cfn) configset I have a command to set an environment key to the name of the user group Apache belongs to, since it might be apache or www-data depending on the distro.
Something like this:
Metadata:
  AWS::CloudFormation::Init:
    configSets:
      joomla:
        - "set_permissions"
        - "and_some_more..."
    configure_cfn:
      files:
        /etc/cfn/hooks.d/cfn-auto-reloader.conf:
          content: !Sub |
            [cfn-auto-reloader-hook]
            triggers=post.update
            path=Resources.EC2.Metadata.AWS::CloudFormation::Init
            action=/opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource EC2 --configsets joomla --region ${AWS::Region}
          mode: "000400"
          owner: root
          group: root
    .....
    set_permissions:
      commands:
        01_01_get_WebServerGroup:
          env:
            #webserver group might be apache or www-data depending on the distro
            WebServerGp:
              command: "ps -ef | egrep '(httpd|apache2|apache)' | grep -v `whoami` | grep -v root | head -n1 | awk '{print $1}'"
However, when I launch this stack, the configsets process halts at this point and I get an error in the cfn_init.log that looks like this:
File "/usr/lib/python2.7/dist-packages/cfnbootstrap/command_tool.py", line 80, in apply
    raise ToolError(u"%s does not specify the 'command' attribute, which is required" % name)
ToolError: 01_01_get_WebServerGroup does not specify the 'command' attribute, which is required
Is this the preferred method to catch and use a grep result in a configset command? Is there a better way? What can I do to address the error thrown in the cfn_init.log?
OK, I guess I can create parameter and mapping elements to capture the distro type on launch and then set the webserver group accordingly but I am really trying to understand how to set the env: key to a response from the cli.
The problem with your code is this line: WebServerGp.
The command key must be at the same level as env, directly under the command name, which in your case is 01_01_get_WebServerGroup. So it has to be like this:
commands:
  01_01_get_WebServerGroup:
    env: ..
    command: ..
If you want to use the result of the grep, you can put it in a variable and use it later.
You can also specify more than one command under that command key, separating them with \n.
Please check the code below:
command: "result=$(ps -ef | grep ...)\n echo $result\n ..."
If you have a really long command, you can use Fn::Join to build the value of command.
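As a hedged sketch of how that advice might look applied to the original snippet (the pipeline comes from the question; writing the result to /tmp/webserver_group is an illustrative choice, not part of the answer):
set_permissions:
  commands:
    01_01_get_WebServerGroup:
      command: "WebServerGp=$(ps -ef | egrep '(httpd|apache2|apache)' | grep -v `whoami` | grep -v root | head -n1 | awk '{print $1}')\n echo $WebServerGp > /tmp/webserver_group"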

How to retrieve the most recent file in cloud storage bucket?

Is this something that can be done with gsutil?
https://cloud.google.com/storage/docs/gsutil/commands/ls does not seem to mention any sorting functionality - only filtering by a date - which wouldn't work for my use case.
Hello, this still doesn't seem to exist, but there is a solution in another post.
The command used is this one:
gsutil ls -l gs://[bucket-name]/ | sort -k 2
As it lets you sort by date, you can get the most recent object in the bucket by extracting the right line with another pipe if you need:
gsutil ls -l gs://<bucket-name> | sort -k 2 | tail -n 2 | head -1 | cut -d ' ' -f 7
The tail -n 2 | head -1 is presumably there to skip the TOTAL summary line that gsutil ls -l appends. It will not work well if there are fewer than two objects in the bucket, though.
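A hedged variant that tolerates a single object, assuming the summary line always starts with "TOTAL" (the bucket name is a placeholder); awk '{print $NF}' keeps only the object URL in the last field:
gsutil ls -l gs://my-bucket | grep -v '^TOTAL' | sort -k 2 | tail -n 1 | awk '{print $NF}'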
By using gsutil from a host machine this will populate the response array:
response=(`gsutil ls -l gs://some-bucket-name|sort -k 2|tail -2|head -1`)
Or by gsutil from docker container:
response=(`docker run --name some-container-name --rm --volumes-from gcloud-config -it google/cloud-sdk:latest gsutil ls -l gs://some-bucket-name|sort -k 2|tail -2|head -1`)
Afterwards, to get the whole response, run:
echo ${response[@]}
will print for example:
33 2021-08-11T09:24:55Z gs://some-bucket-name/filename-37.txt
Or to get separate info from the response, (e.g. filename)
echo ${response[2]}
will print the filename only
gs://some-bucket-name/filename-37.txt
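Or, to strip the bucket prefix as well (an extra step, not part of the original answer):
basename "${response[2]}"
will print just
filename-37.txt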
For my use case, I wanted to find the most recent directory in my bucket. I number them in ascending order (with leading zeros), so all I need to get the most recent one is this:
gsutil ls -l gs://[bucket-name] | sort | tail -n 1 | cut -d '/' -f 4
list the directory
sort alphabetically (probably unnecessary)
take the last line
tokenise it with "/" delimiter
get the 4th token, which is the directory name

looking for s3cmd download command for a certain date

I am trying to figure out what the s3cmd command would be to download files from a bucket by date. For example, I have a bucket named "test", and in that bucket there are different files from different dates. I am trying to get the files that were uploaded yesterday. What would the command be?
There is no single command that will allow you to do that. You have to write a script, something like the one below, or use an SDK that allows you to do this. The sample script below gets the S3 files that are older than a given age (30 days in the example).
#!/bin/bash
# Usage: ./getOld "bucketname" "30 days"
s3cmd ls s3://$1 | while read -r line; do
    # first two fields of each listing line are the object's date and time
    createDate=`echo $line | awk '{print $1" "$2}'`
    createDate=`date -d"$createDate" +%s`
    olderThan=`date -d"-$2" +%s`
    if [[ $createDate -lt $olderThan ]]
    then
        # fourth field is the s3:// URL of the object
        fileName=`echo $line | awk '{print $4}'`
        echo $fileName
        if [[ $fileName != "" ]]
        then
            s3cmd get "$fileName"
        fi
    fi
done
I like s3cmd, but to work with a single-line command I prefer the JSON output of the AWS CLI and the jq JSON processor.
The command will look like
aws s3api list-objects --bucket "yourbucket" |\
jq '.Contents[] | select(.LastModified | startswith("yourdate")).Key' --raw-output |\
xargs -I {} aws s3 cp s3://yourbucket/{} .
Basically, what the script does:
list all objects from the given bucket
(the interesting part) jq parses the Contents array and selects the elements whose LastModified value starts with your pattern (which you will need to change), gets the Key of each S3 object, and --raw-output strips the quotes from the value
pass the result to an aws copy command to download each file from S3
If you want to automate it a bit further, you can get yesterday's date from the command line.
For macOS:
$ export YESTERDAY=`date -v-1d +%F`
$ aws s3api list-objects --bucket "ariba-install" |\
jq '.Contents[] | select(.LastModified | startswith('\"$YESTERDAY\"')).Key' --raw-output |\
xargs -I {} aws s3 cp s3://ariba-install/{} .
For Linux (or any other OS whose date command is the GNU flavor):
$ export YESTERDAY=`date -d "1 day ago" '+%Y-%m-%d' `
$ aws s3api list-objects --bucket "ariba-install" |\
jq '.Contents[] | select(.LastModified | startswith('\"$YESTERDAY\"')).Key' --raw-output |\
xargs -I {} aws s3 cp s3://ariba-install/{} .
Now you get the idea; change the YESTERDAY variable if you want to filter on a different kind of date.
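For example (a hedged illustration, not from the original answer), because LastModified is an ISO-8601 timestamp, matching a whole month only needs a coarser prefix; the LASTMONTH variable and GNU date usage are illustrative:
$ export LASTMONTH=`date -d "1 month ago" '+%Y-%m'`
$ aws s3api list-objects --bucket "ariba-install" |\
jq '.Contents[] | select(.LastModified | startswith('\"$LASTMONTH\"')).Key' --raw-output |\
xargs -I {} aws s3 cp s3://ariba-install/{} .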