When running the following command on kube-master (CoreOS):
export KUBERNETES_PROVIDER=aws; wget -q -O - https://get.k8s.io | bash
I get the following error:
Can't find aws in PATH, please fix and retry.
I have already set PATH. Can anyone tell which 'aws' it is searching for? Is it the aws directory in the kubernetes repo, i.e. kubernetes/cluster/aws?
Follow the AWS CLI installation guide and then ensure your PATH is set correctly.
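For reference, a minimal sketch of getting the CLI onto PATH, assuming a Linux machine and a pip-based install (the install method and paths are my assumption, not from the question):
# Install the AWS CLI for the current user (one common approach)
pip install --user awscli
# Put the user-local bin directory on PATH so the kube-up script can find "aws"
export PATH="$HOME/.local/bin:$PATH"
# Verify the binary now resolves
which aws
aws --version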
Yes, you are right.
If you set "aws" as KUBERNETES_PROVIDER, Kubernetes will use scripts that reside in kubernetes/cluster/aws. If no KUBERNETES_PROVIDER is set, I believe the default is to rely on the gcloud CLI tool.
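For context, the cluster scripts pick the provider directory from that variable, roughly like this (paraphrased, so treat the exact file names as an approximation rather than the real source):
# If KUBERNETES_PROVIDER is unset, the scripts fall back to gce;
# otherwise the matching directory under kubernetes/cluster/ is sourced
KUBERNETES_PROVIDER="${KUBERNETES_PROVIDER:-gce}"
source "cluster/${KUBERNETES_PROVIDER}/util.sh"   # e.g. cluster/aws/util.sh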
If you are using Ubuntu, run the command below; it will resolve your issue.
apt-get install awscli
I just installed the AWS CLI on Ubuntu, following the official installation guide, on an Azure VM.
When I run any command from the command line, the result is a Python object representation instead of text or regular output:
$ aws s3 ls
<botocore.awsrequest.AWSRequest object at 0x7f412f3573a0>
I searched everywhere but I can't find any hint.
I already reinstalled the AWS CLI and also tried using the output flag, but nothing changes.
Any suggestions?
This took me a while to figure out as well. For some reason this only affected our CI/CD jobs, but using the exact same container image and env vars locally worked fine.
Turns out, the issue stems from not providing a region.
You can fix this by specifying the region explicitly in the command:
aws s3 ls --region us-west-2
Or by providing the region with the available AWS env vars:
export AWS_REGION="us-west-2"
# or
export AWS_DEFAULT_REGION="us-west-2"
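If you'd rather not export env vars in every job, the region can also be persisted in the CLI config (an extra option on my part, not from the sources below):
# Write the region into ~/.aws/config so every subsequent call picks it up
aws configure set region us-west-2
# Confirm what was written
cat ~/.aws/config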
Some related sources that helped me figure this out:
https://github.com/jwalton/gh-ecr-login/issues/3
aws s3 ls gives error botocore.utils.BadIMDSRequestError: <botocore.awsrequest.AWSRequest object at 0x7f3f6cb44d00>
Well, I don't know how I didn't try this before, but installing awscli with apt fixed the issue:
sudo apt-get install awscli
In the Build Step, I've added Send files or execute commands over SSH -> SSH Publishers -> Exec command. I'm trying to run an aws command to copy a file from EC2 to S3. The same command runs fine when I execute it in the terminal, but via Jenkins it simply returns:
bash: aws: command not found
The command is
cd ~/.local/bin/ && aws s3 cp /home/ec2-user/lambda_test/lambda_function.zip s3://temp-airflow-us/lambda_function.zip
Based on the comments, the solution was to use the following command:
cd ~/.local/bin/ && ./aws s3 cp /home/ec2-user/lambda_test/lambda_function.zip s3://temp-airflow-us/lambda_function.zip
since aws is not available in the PATH environment variable.
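An alternative to cd-ing into the install directory (my own variation, assuming the CLI really does live in ~/.local/bin) is to put that directory on PATH for the duration of the exec command:
# Prepend the user-local install dir so a bare "aws" resolves in the non-login shell
export PATH="$HOME/.local/bin:$PATH"
aws s3 cp /home/ec2-user/lambda_test/lambda_function.zip s3://temp-airflow-us/lambda_function.zip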
command not found indicates that the aws utility is not on $PATH for the jenkins user.
To confirm, sudo su -l jenkins and then issue the command which aws - this will most likely return no results.
You have two options (see the sketch after this list):
use the full path (likely /usr/local/bin/aws)
add /usr/local/bin to the jenkins user's $PATH
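A quick sketch of both, assuming the binary really is at /usr/local/bin/aws (confirm with which aws as an interactive user first):
# Confirm what the jenkins user can actually see
sudo su -l jenkins -c 'which aws'    # most likely prints nothing

# Option 1: call the binary by its full path in the Jenkins build step
/usr/local/bin/aws s3 cp /home/ec2-user/lambda_test/lambda_function.zip s3://temp-airflow-us/lambda_function.zip

# Option 2: extend PATH in the build step before calling aws
export PATH="/usr/local/bin:$PATH"
aws s3 cp /home/ec2-user/lambda_test/lambda_function.zip s3://temp-airflow-us/lambda_function.zip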
I need my Makefile to work in both Linux and Windows so the accepted answer is not an option for me.
I diagnosed the problem by adding the following to the top of my build script:
whoami
which aws
env|grep PATH
This returned:
root
which: no aws in (/sbin:/bin:/usr/sbin:/usr/bin)
PATH=/sbin:/bin:/usr/sbin:/usr/bin
Bizarrely, the path does not include /usr/local/bin, even though the interactive shell on the Jenkins host includes it. The fix is simple enough: create a symlink on the Jenkins host:
ln -s /usr/local/bin/aws /bin/aws
Now the aws command can be found by scripts running in Jenkins (in /bin).
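To double-check from inside a build, the original diagnostics should now resolve aws through /bin (adjust the paths if your install differs):
# /bin is on the restricted PATH the Jenkins-spawned shell uses,
# so the symlink makes the command resolvable there
ls -l /bin/aws
which aws
aws --version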
I have in Airflow a BashOperator that executes an aws s3 ls s3:/path command. It works on the console but it doesn't work in Airflow; the error message is: command not found. The AWS CLI is correctly installed; the following URL explains how the PATH was set:
How do I resolve the "-bash: aws: command not found" awscli error? And this explains how it was installed: Is it possible to install aws-cli package without root permission?
aws --version
aws-cli/1.18.209 Python/3.8.5 Linux/5.4.0-58-generic botocore/1.19.49
I don't know what I am doing wrong. Please help. How can I make this command (and any other aws command) work in Airflow?
Thanks in advance.
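One thing worth checking (not a confirmed fix, and the bucket path below is only a placeholder): Airflow workers often run with a minimal, non-login PATH, so the pip --user install location from the linked question may not be on it. Spelling the path out inside the bash command rules that out:
# Prepend the user-local install dir inside the BashOperator's bash command itself
export PATH="$HOME/.local/bin:$PATH"
aws s3 ls s3://your-bucket/path    # placeholder path for illustration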
I am playing with Apache Spark on AWS EMR and am trying to set the cluster to use python3.
I use this as the last command in a bootstrap script:
sudo sed -i -e '$a\export PYSPARK_PYTHON=/usr/bin/python3' /etc/spark/conf/spark-env.sh
When I use it, the cluster crashes during bootstrap with the following error:
sed: can't read /etc/spark/conf/spark-env.sh: No such file or directory
How should I set it to use python3 properly?
This is not a duplicate of that question: my issue is that the cluster is not finding the spark-env.sh file while bootstrapping, while the other question addresses the system not finding python3.
In the end I did not use that script, but used the EMR configuration file that is available at the creation stage. It gave me the proper configuration via spark_submit (in the AWS GUI). If you need it to be available for pyspark scripts in a more programmatic way, you can use os.environ to set the pyspark Python version in the Python script.
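For reference, the configuration-file route looks roughly like this; the spark-env/export classification is the standard pattern, while the cluster options in the create-cluster call are placeholders to adapt:
# configurations.json tells EMR to export PYSPARK_PYTHON for Spark at cluster
# creation, instead of editing spark-env.sh from a bootstrap action
cat > configurations.json <<'EOF'
[
  {
    "Classification": "spark-env",
    "Configurations": [
      {
        "Classification": "export",
        "Properties": { "PYSPARK_PYTHON": "/usr/bin/python3" }
      }
    ]
  }
]
EOF

# Placeholder create-cluster call; fill in your own release label, instances, roles, etc.
aws emr create-cluster --name "spark-py3" --release-label emr-5.30.0 \
  --applications Name=Spark --use-default-roles \
  --instance-type m5.xlarge --instance-count 3 \
  --configurations file://configurations.json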
So I am using CloudFormation for my AWS setup. I am trying to run Composer, but no matter what command I put in my userdata section I always get an error. This is the error:
php /usr/local/bin/composer.phar create-project composer/satis /var/www/satis --stability=dev
[RuntimeException]
The HOME or COMPOSER_HOME environment variable must be set for composer to run correctly
This is my code within the userdata section:
"#composer\n",
"curl -sS https://getcomposer.org/installer | php\n",
"mv composer.phar /usr/local/bin/composer.phar\n",
"#satis\n",
"php /usr/local/bin/composer.phar create-project composer/satis /var/www/satis --stability=dev\n",
Does anyone have any ideas why this might not work and what I should be doing?
Composer is looking for the location of the .composer directory. Export the HOME or COMPOSER_HOME env variable, e.g. HOME=/root php /usr/local/bin/composer.phar create-project composer/satis /var/www/satis --stability=dev, and it will work fine.
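In the userdata itself that amounts to something like this (a sketch; keep your own paths):
# HOME is not set yet at this point in cloud-init, so give Composer one explicitly
export HOME=/root
php /usr/local/bin/composer.phar create-project composer/satis /var/www/satis --stability=dev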
I had a similar issue with Amazon Linux AMI 2: the log showed "All settings correct for using Composer. The HOME or COMPOSER_HOME environment variable must be set for composer to run correctly", but Composer was not installed at all. Below is the way to fix it. Might be helpful to somebody rather than wasting 2-3 hours!
sudo curl -sS https://getcomposer.org/installer | sudo php
mv composer.phar /usr/bin/composer
chmod +x /usr/bin/composer
export COMPOSER_HOME=/root
Agree with Ntwobike's answer.
When launching AWS EC2 instances I was installing Composer by running an Ansible playbook during the user data script run. (The user data script is called by cloud-init during the instance build process.)
For some reason at this point in the build the $HOME environment variable is not set. So I needed to add 'export HOME=/root' - e.g.
# These need to be set to enable the composer installer to run. It is probably due to an issue
# with the $HOME variable not yet being set at this point in the instance creation.
export HOME=/root
ansible-playbook --extra-vars "target=localhost" playbooks/debian-9/drush.yml