Has anybody been able to set up auto-complete for the AWS CLI with the fish shell? The AWS documentation only offers guides for bash, tcsh, and zsh.
Bash exports the variables COMP_LINE and COMP_POINT, which are used by the aws_completer script provided by Amazon. Is there any equivalent for fish? I'm new to the fish shell and I'm giving it a try.
Building upon David Roussel's answers I cooked up the following:
function __fish_complete_aws
    env COMP_LINE=(commandline -pc) aws_completer | tr -d ' '
end
complete -c aws -f -a "(__fish_complete_aws)"
Put this in a file $HOME/.config/fish/completions/aws.fish so fish can autoload it when necessary.
aws_completer appends a space after every option it prints, and fish escapes that as \, so trimming the space gets rid of the trailing backslashes.
Now we can test the completion with the following:
> complete -C'aws co'
codebuild
codecommit
codepipeline
codestar
cognito-identity
cognito-idp
cognito-sync
comprehend
comprehendmedical
connect
configure
configservice
Using commandline -c helps if you move the cursor back, since it cuts the command line at the cursor, so aws_completer can offer the right completions.
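If you prefer a fish builtin over tr, string trim (available in fish 2.3+) can strip that trailing space as well; a variant sketch of the same function:
function __fish_complete_aws
    env COMP_LINE=(commandline -pc) aws_completer | string trim
end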
I also want to get this to work, and I've made some progress, but it's not perfect.
First I took some advice from here, which helped me see how to emulate the bash environment variables that aws_completer expects.
Putting it together I get this:
complete -c aws -f -a '(begin; set -lx COMP_SHELL fish; set -lx COMP_LINE (commandline); /usr/local/bin/aws_completer; end)'
That mostly works, but I get spurious extra backslashes, so if I try to complete "aws ec2 describe-instances --" I get:
dave@retino ~> aws ec2 describe-instances --
--ca-bundle\ --color\ --filters\ --no-dry-run\ --output\ --region\
--cli-connect-timeout\ --debug\ --generate-cli-skeleton --no-paginate\ --page-size\ --starting-token\
--cli-input-json\ --dry-run\ --instance-ids\ --no-sign-request\ --profile\ --version\
--cli-read-timeout\ --endpoint-url\ --max-items\ --no-verify-ssl\ --query\
It looks to me like there is a trailing whitespace character, so I tried to remove it using sed:
complete -c aws -f -a '(begin; set -lx COMP_SHELL fish; set -lx COMP_LINE (commandline); echo (/usr/local/bin/aws_completer | sed -e \'s/[ ]*//\') ; end)'
But this doesn't seem to help. It seems that fish expects a different output format than bash for its completer. And indeed the fish documentation for the complete builtin does say that it expects a space-separated list.
So I tried joining the lines with xargs:
complete -c aws -f -a '(begin; set -lx COMP_SHELL fish; set -lx COMP_LINE (commandline); echo (/usr/local/bin/aws_completer | sed -e \'s/[ ]*//\') | xargs echo ; end)'
But this doesn't work either; I just get one completion.
This is annoying, I'm so close, but it doesn't work!
While this doesn't directly answer the question about using fish, I intend to provide an answer that helps in the broader context of auto-completion and the shell.
Amazon has launched a new CLI-based tool forked from the AWS CLI.
aws-shell is a command-line shell program that provides convenience and productivity features to help both new and advanced users of the AWS Command Line Interface. Key features include the following.
Fuzzy auto-completion
Commands (e.g. ec2, describe-instances, sqs, create-queue)
Options (e.g. --instance-ids, --queue-url)
Resource identifiers (e.g. Amazon EC2 instance IDs, Amazon SQS queue URLs, Amazon SNS topic names)
Dynamic in-line documentation
Documentation for commands and options are displayed as you type
Execution of OS shell commands
Use common OS commands such as cat, ls, and cp and pipe inputs and outputs without leaving the shell
Export executed commands to a text editor
To find out more, check out the related blog post on the AWS Command Line Interface blog.
Add this line to your .config/fish/config.fish:
complete --command aws --no-files --arguments '(begin; set --local --export COMP_SHELL fish; set --local --export COMP_LINE (commandline); aws_completer | sed \'s/ $//\'; end)'
In case you want to make sure that aws-cli is installed:
test -x (which aws_completer); and complete --command aws --no-files --arguments '(begin; set --local --export COMP_SHELL fish; set --local --export COMP_LINE (commandline); aws_completer | sed \'s/ $//\'; end)'
All credit belongs to this issue thread and a comment by an awesome SO contributor, @scooter-dangle.
It's actually possible to map bash's completion to fish's.
See the npm completions.
However, it's probably still better to write a real fish script (it's not hard!).
The command I use in my virtualenv/bin/activate is this:
complete -C aws_completer aws
Looks like aws-cli has fish support too. There is a bundled installer provided with aws-cli that might be worth checking out: activate.fish. I found it in the same bin directory as the aws command.
For example:
ubuntu@ip-xxx-xx-x-xx:/data/src$ tail -n1 ~/venv/bin/activate
complete -C aws_completer aws
ubuntu@ip-xxx-xx-x-xx:/data/src$ source ~/venv/bin/activate
(venv) ubuntu@ip-xxx-xx-x-xx:/data/src$ aws s3 <- hitting TAB here
cp ls mb mv presign rb rm sync website
(venv) ubuntu@ip-xxx-xx-x-xx:/data/src$ aws s3
I've used jq many times to parse and pick values from JSON returned by the AWS CLI, e.g. for ec2 describe-instances etc.
Now I'm using the dockerized version of AWS CLI v2 to get a list of CloudWatch log groups:
$ alias aws='docker run --rm -it -v ~/.aws:/root/.aws -e AWS_PROFILE -e AWS_REGION amazon/aws-cli'
$ aws logs describe-log-groups
{
    "logGroups": [
        {
            ...
        }
    ]
}
This looks like proper JSON; however, when piping it to jq, I get:
parse error: Invalid numeric literal at line 1, column 2
Looking at the JSON returned from aws in a binary editor, or using jq's inputs feature, I see that it contains a lot of control codes:
[
"\u001b[?1h\u001b=\r{\u001b[m\r",
" \"logGroups\": [\u001b[m\r",
" {\u001b[m\r",
" \"logGroupName\": \"/aws/lambda/...
...
It seems to me that using the AWS CLI through Docker is what causes this, because it does not happen with the old-fashioned AWS CLI v1 installed using pip. But I guess the key difference could also be v2 vs v1 rather than the Docker environment (I never tried installing v2 natively).
These control codes, e.g. \u001b[m, look like ANSI codes that control formatting such as bold, colors, etc. But AFAIK the AWS CLI does not use colored/ANSI output. Why are they included in the returned JSON? Is there a simple tool to strip them away so that I can continue using the dockerized AWS CLI v2 and pipe the output to jq? I found other answers using complex sed patterns, and I thought to myself that there must be a simpler way to do this.
Edit: here's a minimal example that shows the control codes using xxd. I deliberately listed log groups with a mismatching filter to get an empty array:
$ aws logs describe-log-groups --log-group-name-prefix FOO > foo.txt
$ xxd foo.txt
00000000: 1b5b 3f31 681b 3d0d 7b1b 5b6d 0d0a 2020 .[?1h.=.{.[m..
00000010: 2020 226c 6f67 4772 6f75 7073 223a 205b "logGroups": [
00000020: 5d1b 5b6d 0d0a 7d1b 5b6d 0d0a 0d1b 5b4b ].[m..}.[m....[K
00000030: 1b5b 3f31 6c1b 3e .[?1l.>
$ cat foo.txt | jq
parse error: Invalid numeric literal at line 1, column 2
$
Same thing displayed with xxd when using non-dockerized AWS CLI v1:
00000000: 7b0a 2020 2020 226c 6f67 4772 6f75 7073 {. "logGroups
00000010: 223a 205b 5d0a 7d0a ": [].}.
The aws CLI is feeding its output to less because a) you've allocated a pseudo-tty with the -it flags, b) as far as the process is concerned, it's outputting directly to the tty and not to a pipe, and c) you haven't told it to do anything else instead. The correct fix is to remove -it: it's only there for when you need to provide interactive input, and if you're piping the output to jq then you don't need or want interactive input. However, if you're trying to set up an alias or function that behaves seamlessly in place of aws, you need to decide whether or not to pass -it based on whether you want interactive input. You could try this:
function daws() {
    local usetty
    # pass -it only when stdout is a terminal
    if [ -t 1 ]; then
        usetty=-it
    else
        usetty=
    fi
    docker run --rm $usetty -v ~/.aws:/root/.aws -e AWS_PROFILE -e AWS_REGION amazon/aws-cli "$@"
}
And in one line:
function daws() { local usetty; if [ -t 1 ]; then usetty=-it; else usetty=; fi; docker run --rm $usetty -v ~/.aws:/root/.aws -e AWS_PROFILE -e AWS_REGION amazon/aws-cli "$@"; }
Alternatively, you can pass --no-cli-pager or set the environment variable AWS_PAGER to cat. I tried both, and while they didn't result in any errors, there was still some odd, messy output where newlines weren't applying a carriage return for part of the output.
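If you do want to keep -it for interactive use, one more variant worth trying (an untested sketch; setting AWS_PAGER to an empty string disables the CLI v2 pager) is to pass the variable into the container, though the pseudo-tty may still mangle newlines as described above:
$ alias aws='docker run --rm -it -v ~/.aws:/root/.aws -e AWS_PROFILE -e AWS_REGION -e AWS_PAGER= amazon/aws-cli'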
I'm trying to implement a pipeline that packages and copies Python code to S3 using GitLab CI.
Here is the job that is causing the problem:
package:
  stage: package
  image: python:3.8
  script:
    - apt-get update && apt-get install -y zip unzip jq
    - pip3 install awscli
    - aws s3 ls
    - ./cicd/scripts/copy_zip_to_s3.sh
  only:
    refs:
      - developer
I want to mention that in the before_script section of .gitlab-ci.yml, I've already exported the AWS credentials (AWS_SECRET_ACCESS_KEY, AWS_ACCESS_KEY_ID, etc.) from GitLab environment variables.
I've checked my credentials thousands of times and they are totally correct. I also want to mention that the same script works perfectly for another project under the same group in GitLab.
Here is the error:
$ aws s3 ls
An HTTP Client raised an unhandled exception: Invalid header value b'AWS4-HMAC-SHA256 Credential=AKIAZXXXXXXXXXX\n/2020XX2/us-east-1/sts/aws4_request, SignedHeaders=content-type;host;x-amz-date, Signature=ab53XX6eb72XXXXXX2152e4XXXX93b104XXXXXXX363b1da6f9XXXXX'
ERROR: Job failed: exit code 1
./cicd/scripts/copy_zip_to_s3.sh does the packaging and the copy; the same error occurs when executing it, which is why I've added the simple aws s3 ls command above, to show that even a simple 'ls' is not working.
Any solutions, please? Thank you all in advance.
This was because of an additional newline appended to the AWS access key variable.
Thanks to @jordanm
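A quick way to guard against stray newlines in .gitlab-ci.yml (a sketch; it assumes the credentials live in the usual AWS_* variables) is to strip them in before_script:
before_script:
  - export AWS_ACCESS_KEY_ID=$(echo "$AWS_ACCESS_KEY_ID" | tr -d '\r\n')
  - export AWS_SECRET_ACCESS_KEY=$(echo "$AWS_SECRET_ACCESS_KEY" | tr -d '\r\n')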
I had a similar issue when running a bash script on Cygwin in Windows. The fix was removing the \r\n from the end of the values I was putting into environment variables.
Here's my whole script if anyone is interested. It assumes a new AWS role, sets those creds into environment variables, then opens a new bash shell which will respect those set variables.
#!/bin/bash
hash aws 2>/dev/null
if [ $? -ne 0 ]; then
    echo >&2 "'aws' command line tool required, but not installed. Aborting."
    exit 1
fi
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
unset AWS_SESSION_TOKEN
ROLEID=123918273981723
TARGETARN="arn:aws:iam::${ROLEID}:role/OrganizationAccountAccessRole"
COMMAND="aws sts assume-role --role-arn $TARGETARN --role-session-name damien_was_here"
RESULT=$($COMMAND)
#the calls to tr -d \r\n are the important part with regards to this question.
AccessKeyId=$(echo -n "$RESULT" | jq -r '.Credentials.AccessKeyId' | tr -d '\r\n')
SecretAccessKey=$(echo -n "$RESULT" | jq -r '.Credentials.SecretAccessKey' | tr -d '\r\n')
SessionToken=$(echo -n "$RESULT" | jq -r '.Credentials.SessionToken' | tr -d '\r\n')
export AWS_ACCESS_KEY_ID=$AccessKeyId
export AWS_SECRET_ACCESS_KEY=$SecretAccessKey
export AWS_SESSION_TOKEN=$SessionToken
echo Running a new bash shell with the environment variable set.
bash
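Once the new shell starts, you can verify that the assumed-role credentials are in effect with a standard STS call:
aws sts get-caller-identity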
I am setting up a web app through CodePipeline. My CloudFormation script creates an EC2 instance. In that EC2 instance's user data, I have written logic that fetches the code from S3, copies it onto the instance, and starts the server. The web app is in the Python Pyramid framework.
CodePipeline is connected to GitHub. It creates a zip file and uploads it to the S3 bucket. (That is all in a buildspec.yml file.)
When I change the user data script and run CodePipeline, it works fine.
But when I change some web app (my code base) file and re-run CodePipeline, that change is not reflected.
This is for an Ubuntu EC2 instance.
#cloud-boothook
#!/bin/bash -xe
echo "hello "
exec > /etc/setup_log.txt 2> /etc/setup_err.txt
sleep 5s
echo "User_Data starts"
rm -rf /home/ubuntu/c
mkdir /home/ubuntu/c
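# find the key of the most recently uploaded artifact under the pipeline prefix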
key=`aws s3 ls s3://bucket-name/pipeline-name/MyApp/ --recursive | sort | tail -n 1 | awk '{print $4}'`
aws s3 cp s3://bucket-name/$key /home/ubuntu/c/
cd /home/ubuntu/c
zipname="$(cut -d'/' -f3 <<<"$key")"
echo $zipname
mv /home/ubuntu/c/$zipname /home/ubuntu/c/c.zip
unzip -o /home/ubuntu/c/c.zip -d /home/ubuntu/c/
echo $?
python3 -m venv venv
venv/bin/pip3 install -e .
rm -rf c.zip
aws configure set default.region us-east-1
venv/bin/pserve development.ini http_port=5000 &
The expected result is that every time I run CodePipeline, the user data script will execute.
Any suggestions, or any other approach, would be appreciated.
The user data script gets executed exactly once, upon instance creation. If you want to periodically synchronize your code changes to the instance, you should think about setting up a cron job in your user data script (a sketch follows) or use a service like AWS CodeDeploy to deploy new versions (this is the preferred approach).
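A minimal sketch of the cron idea, reusing the bucket layout from the question (paths are illustrative, and /etc/cron.d entries need a user field, hence the root):
# appended to the user data script: re-sync the application code every 5 minutes
echo '*/5 * * * * root aws s3 sync s3://bucket-name/pipeline-name/MyApp /home/ubuntu/c' > /etc/cron.d/app-sync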
CodePipeline uses a different S3 object for each pipeline execution artifact, so you can't hardcode a reference to it. You could publish the artifact to a fixed location instead. You might also want to consider using CodeDeploy to deploy the latest version of your application.
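If you go the fixed-location route, a post_build step in buildspec.yml along these lines would do it (bucket and key names are placeholders); the user data script can then always fetch the same key instead of sorting the bucket listing:
phases:
  post_build:
    commands:
      - aws s3 cp MyApp.zip s3://bucket-name/MyApp/latest.zip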
I can list out the buckets folders using:
aws s3 ls s3://bucket/ --recursive --human-readable --summarize
But then I also get ALL the contents of the folders. I just want a list like:
/folder1 10GB
/folder2 6GB
This way I know where to focus and dive deeper. Is this possible because I can't find it anywhere?
What you are looking for is not available currently, though there is a feature request against the aws-cli project titled "aws s3 ls" should have a summary-only option.
The comments also include some ideas for how you could use Bash scripting to produce what you are looking for.
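One such sketch (it assumes you only care about top-level prefixes; it is also slow on large buckets, since every object under each prefix gets listed):
#!/bin/bash
bucket=s3://bucket
# list the top-level prefixes, then let --summarize total up each one
for prefix in $(aws s3 ls "$bucket/" | awk '$1 == "PRE" {print $2}'); do
    size=$(aws s3 ls "$bucket/$prefix" --recursive --summarize --human-readable | grep 'Total Size' | awk -F': ' '{print $2}')
    echo "/$prefix $size"
done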
UPDATE: fixed it. It's much simpler now and should do exactly what you want.
I've been trying to write a script to do this, but no promises on when or if it will ever do what you want. You can check it here: https://github.com/thisaaronm/aws-s3-size (feel free to submit PRs too!)
Right now it's not much more than just running the command you're running now, but I'm working through child directories in my dev branch.
And it's written pretty ugly right now.
A new solution in case you run across this like I did:
s3cmd ls "s3://<bucket>/<prefix>" | awk '{print $2}' | xargs -n 1 -P 20 -I{} s3cmd du -H {}
This lists the entries under the prefix, pulls out the second column (the URI, for directory entries), and runs up to 20 parallel s3cmd du calls to total each one up.
Currently my team is using Jenkins to manage our CI/CD workflow. As our infrastructure is entirely in AWS I have been looking into migrating to AWS CodePipeline/CodeBuild to manage this.
Currently, we version our artifacts as <major>.<minor>.<patch>-<jenkins build #>, e.g. 1.1.1-987. However, CodeBuild doesn't seem to have any concept of a build number. As artifacts are stored in S3 like <bucket>/<version>/<artifact>, I would really hate to lose this versioning approach.
CodeBuild does provide a few env variables that i can see here: http://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref.html#build-env-ref-env-vars
But from what is available it seems silly to try to use the build ID or anything else.
Is there anything readily available from CodeBuild that could support an incrementing build number? Or is there an AWS-recommended approach to semantic versioning? Searching this topic returns remarkably few results.
Any help or suggestions are greatly appreciated.
The suggestion to use date wasn't really going to work for our use case. We ended up creating a base version in SSM and creating a script that runs within the buildspec that grabs, increments, and updates the version back to SSM. It's easy enough to do:
Create a String/SecureString in SSM named [NAME]; for example, let's say "BUILD_VERSION". The value should be in [MAJOR.MINOR.PATCH] or [MAJOR.PATCH] format.
Create a shell script. The one below should be taken as a basic template, you will have to modify it to your needs:
#!/bin/bash
if [ "$1" = 'next' ]; then
    # read the current version from SSM (the sed calls extract the Value field from the JSON response)
    version=$(aws ssm get-parameter --name "BUILD_VERSION" --region 'us-east-1' --with-decryption | sed -n -e 's/.*Value\"[^\"]*//p' | sed -n -e 's/[\"\,]//gp')
    majorminor=$(printf $version | grep -o ^[0-9]*\\.[0-9]*\. | tr -d '\n')
    patch=$(printf $version | grep -o [0-9]*$ | tr -d '\n')
    # bump the patch component and write the new version back
    patch=$(($patch+1))
    silent=$(aws ssm put-parameter --name "BUILD_VERSION" --value "$majorminor$patch" --type "SecureString" --overwrite)
    echo "$majorminor$patch"
fi
Call the versioning script from within buildspec and use the output however you need.
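For example, in buildspec.yml (the script path and variable name are assumptions):
phases:
  build:
    commands:
      - export BUILD_VERSION=$(./scripts/version.sh next)
      - echo "Building version $BUILD_VERSION"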
It may be late to post this answer; however, since this feature is not yet released by AWS, it may help a few people in a similar boat.
We used Jenkins build numbers for versioning and were migrating to CodeBuild/CodePipeline. The CodeBuild ID did not work for us, as it was very random.
So, in the interim, we create our own build number in the buildspec file:
BUILD_NUMBER=$(date +%y%m%d%H%M%S)
This way at least we are able to look at the id and know when it was deployed and have some consistency in the numbering.
So in your case, it would be 1.1.1-181120193918 instead of 1.1.1-987.
Hope this helps.
CodeBuild supports semantic versioning.
In the configuration for the CodeBuild project you need to enable semantic versioning (or set overrideArtifactName via the CLI/API).
Then in your buildspec.yml file specify a name using the Shell command language:
artifacts:
  files:
    - '**/*'
  name: myname-$(date +%Y-%m-%d)
Caveat: I have tried lots of variations of this and cannot get it to work.