I am trying to deploy a series of AWS Step Functions through a setup.sh file.
I have successfully tested the step functions in a test environment, and there are no issues in the source code.
This is the deployment command:
./setup.sh <data dictionary command> <step function name>
The output looks like this:
*** Step Function Json Uploading to AWS ***
TENANT : <Tenant Name>
EX_AWS_REGION : eu-west-2
EX_AWS_ACCT_ALIAS : <Environment>
File Name : <Step Function File Path>
/path/step_functions
error: unknown command '.Account'
Exception ignored in: <_io.TextIOWrapper name='<stdout>' mode='w' encoding='cp1252'>
OSError: [Errno 22] Invalid argument
/directory_path/
In setup.sh, .Account is used as follows:
dummy=`aws sts get-caller-identity | jq .Account`
jq is installed globally, and there are no issues in setup.sh itself.
It is a jq installation issue. Download and install jq with the following steps.
Open Git Bash with administrator privileges. (On Linux-based systems, run with sudo privileges.)
Run the following command: curl -L -o /usr/bin/jq.exe https://github.com/stedolan/jq/releases/latest/download/jq-win64.exe
Replace the link with the appropriate binary for Linux-based systems.
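A minimal sketch of the install plus a verification step, assuming a 64-bit Windows host with Git Bash (the Linux URL below is the project's published linux64 binary; adjust for your architecture):

# Windows (Git Bash, elevated): put jq where the shell can find it
curl -L -o /usr/bin/jq.exe https://github.com/stedolan/jq/releases/latest/download/jq-win64.exe

# Linux equivalent (run with sudo):
# curl -L -o /usr/local/bin/jq https://github.com/stedolan/jq/releases/latest/download/jq-linux64
# chmod +x /usr/local/bin/jq

# Verify the install, then the exact pipeline setup.sh runs
jq --version
aws sts get-caller-identity | jq .Account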
$ minikube kubectl create -f hello-app-deployment.yaml
Error: unknown shorthand flag: 'f' in -f
See 'minikube kubectl --help' for usage.
where hello-app-deployment.yaml is the deployment manifest file saved in the working directory.
I tried saving the same manifest file in my home directory, but I encounter the same error.
Are there any minikube or kubectl libraries missing?
I would suggest installing the kubectl CLI and trying this command:
kubectl apply -f ~/<filepath>
You can download the tool from the official website:
https://kubernetes.io/docs/tasks/tools/
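A minimal sketch of that install on a Linux host, using the commands from the Kubernetes docs linked above:

# Download the latest stable kubectl and install it on PATH
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# Apply the manifest
kubectl apply -f ~/hello-app-deployment.yaml

# Note: minikube's bundled kubectl also works, but its flags must come after "--"
# minikube kubectl -- apply -f hello-app-deployment.yaml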
This is an app.py example of how my stacks are defined (with some information changed, as you can imagine):
Stack1(app, "Stack1", env=cdk.Environment(account='123456789', region='eu-west-1'))
In my Azure pipeline, I'm trying to do a cdk deploy:
- task: AWSShellScript@1
  inputs:
    awsCredentials: 'Service_connection_name'
    regionName: 'eu-west-1'
    scriptType: 'inline'
    inlineScript: |
      sudo bash -c "cdk deploy '*' -v --ci --require-approval-never"
  displayName: "Deploying CDK stacks"
but I am getting errors. I have the service connection to AWS configured, but the first error was:
[Stack_Name] failed: Error: Need to perform AWS calls for account [Account_number], but no credentials have been configured
Stack_Name and Account_number have been redacted.
After this error, I decided to add a step to my pipeline and manually create the .aws/config and .aws/credentials files:
- script: |
    echo "Preparing for CDK"
    echo "Creating directory"
    sudo bash -c "mkdir -p ~/.aws"
    echo "Writing to files"
    sudo bash -c "echo -e '[default]\nregion = $AWS_REGION\noutput = json' > ~/.aws/config"
    sudo bash -c "echo -e '[default]\naws_access_key_id = $AWS_ACCESS_KEY_ID\naws_secret_access_key = $AWS_SECRET_ACCESS_KEY' > ~/.aws/credentials"
  displayName: "Setting up files for CDK"
After this, I believed the credentials issue would be fixed, but it still failed. The verbose option revealed the following amongst the output:
Setting "CDK_DEFAULT_REGION" environment variable to
So instead of setting the region to "eu-west-1", it is being set to nothing.
I imagine I'm missing something, so please educate me and help me get this working.
This happens because you're launching a separate shell instance with sudo bash, and it doesn't inherit the credential environment variables that the AWSShellScript task populates.
To fix the credentials issue, replace the inline script with just:
cdk deploy '*' -v --ci --require-approval never
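A sketch of the corrected task, assuming the same service connection and region as in the question (note that the CDK flag is --require-approval never, with a space, not --require-approval-never):

- task: AWSShellScript@1
  inputs:
    awsCredentials: 'Service_connection_name'
    regionName: 'eu-west-1'
    scriptType: 'inline'
    inlineScript: |
      # No sudo: the task's AWS credential environment variables stay visible to cdk
      cdk deploy '*' -v --ci --require-approval never
  displayName: "Deploying CDK stacks"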
When deploying an Elastic Beanstalk application, one of my hooks fails with "permission denied". I get the following in /var/log/eb-engine.log:
[INFO] Running platform hook: .platform/hooks/predeploy/collectstatic.sh
[ERROR] An error occurred during execution of command [app-deploy] - [RunAppDeployPreDeployHooks]. Stop running the command. Error: Command .platform/hooks/predeploy/predeploy.sh failed with error fork/exec .platform/hooks/predeploy/predeploy.sh: permission denied
How do I fix this?
According to the docs, Platform hooks need to be executable. Of note, this means they need to be executable according to git, because that's what Elastic Beanstalk uses to deploy.
You can check if they are executable via git ls-files -s .platform; you should see 100755 before any shell files in the output of this command. If you see 100644 before any of your shell files, run git add --chmod=+x -- .platform/*/*/*.sh to make them executable.
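For example, a quick check-and-fix from the repository root (the glob below assumes the hooks live at the depth shown in the log, e.g. .platform/hooks/predeploy/predeploy.sh):

# Show index mode bits for everything under .platform (100755 = executable)
git ls-files -s .platform

# Mark the hooks executable in the index, then commit and redeploy
git add --chmod=+x -- .platform/*/*/*.sh
git commit -m "Make platform hooks executable"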
Alternatively, create a file under the .ebextensions folder, named so it sorts in the right order, e.g. 001_chmod.config:
# This command finds all the files within the hooks folder with extension .sh and makes them executable.
container_commands:
  01_chmod1:
    command: find .platform/hooks/ -type f -iname "*.sh" -exec chmod +x {} \;
Source: https://www.barot.us/running-sidekiq-on-amazon-linux-2/
In the build step, I've added "Send files or execute command over SSH" -> SSH Publishers -> Exec command, and I'm trying to run an aws command to copy a file from EC2 to S3. The same command runs fine when I execute it in a terminal, but via Jenkins it simply returns:
bash: aws: command not found
The command is
cd ~/.local/bin/ && aws s3 cp /home/ec2-user/lambda_test/lambda_function.zip s3://temp-airflow-us/lambda_function.zip
Based on the comments, the solution was to use the following command:
cd ~/.local/bin/ && ./aws s3 cp /home/ec2-user/lambda_test/lambda_function.zip s3://temp-airflow-us/lambda_function.zip
since aws is not available in the PATH environment variable.
command not found indicates that the aws utility is not on $PATH for the jenkins user.
To confirm, run sudo su -l jenkins and then issue which aws - this will most likely return no results.
You have two options (a sketch of both follows the list):
use the full path (likely /usr/local/bin/aws)
add /usr/local/bin to the jenkins user's $PATH
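A minimal sketch of both options inside the Exec command, assuming the binary is at /usr/local/bin/aws (confirm with which aws in an interactive shell; the question's own output suggests ~/.local/bin/ instead):

# Option 1: call the binary by its full path
/usr/local/bin/aws s3 cp /home/ec2-user/lambda_test/lambda_function.zip s3://temp-airflow-us/lambda_function.zip

# Option 2: extend PATH for this session, then call aws normally
export PATH="$PATH:/usr/local/bin"
aws s3 cp /home/ec2-user/lambda_test/lambda_function.zip s3://temp-airflow-us/lambda_function.zip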
I need my Makefile to work in both Linux and Windows, so the accepted answer is not an option for me.
I diagnosed the problem by adding the following to the top of my build script:
whoami
which aws
env|grep PATH
This returned:
root
which: no aws in (/sbin:/bin:/usr/sbin:/usr/bin)
PATH=/sbin:/bin:/usr/sbin:/usr/bin
Bizarrely, the path does not include /usr/local/bin, even though the interactive shell on the Jenkins host includes it. The fix is simple enough: create a symlink on the Jenkins host:
ln -s /usr/local/bin/aws /bin/aws
Now the aws command can be found by scripts running in Jenkins (in /bin).
I am running the following script (I intentionally hid the keys, of course). It is basically a copy-paste from the readme.md.
Environment details:
- Windows 10
- running the script in a Git Bash environment
- Docker version 18.03.1-ce
docker container run \
    --env AWS_ACCESS_KEY_ID=aaaaaaa \
    --env AWS_SECRET_ACCESS_KEY=bbbbbbb \
    -v $PWD:/data \
    garland/aws-cli-docker \
    aws s3 sync . s3://www.typing-coacher.net
I am getting the following error:
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: Mount denied:
The source path "C:/projects/docker;C"
doesn't exist and is not known to Docker.
See 'C:\Program Files\Docker\Docker\Resources\bin\docker.exe run --help'.
The folder path that actually exists is C:/projects/docker.
Your Git Bash environment will evaluate $PWD to /c/projects/docker instead of C:\projects\docker, and the Docker daemon will not be able to find that path.
Workarounds:
Use the Windows shell or PowerShell.
Use an absolute path instead of $PWD.
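A sketch of the second workaround in Git Bash, reusing the command from the question (MSYS_NO_PATHCONV is a Git for Windows setting that disables the shell's POSIX-to-Windows path rewriting; treat both variants as assumptions to verify in your environment):

# Pass the Windows path explicitly instead of $PWD
docker container run \
    --env AWS_ACCESS_KEY_ID=aaaaaaa \
    --env AWS_SECRET_ACCESS_KEY=bbbbbbb \
    -v "C:/projects/docker:/data" \
    garland/aws-cli-docker \
    aws s3 sync . s3://www.typing-coacher.net

# Or disable Git Bash's path conversion for a single command and keep $PWD
# MSYS_NO_PATHCONV=1 docker container run ... -v "$PWD:/data" ...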