I want to set up automatic checking of my code with cpplint in GitHub Actions.
I install it in my workflow file like this:
- name: Install cpplint
working-directory: ${{runner.workspace}}/uast
shell: bash
run: |
pip install wheel
pip install cpplint
After this code block I try to run cpplint:
- name: cpplint
working-directory: ${{runner.workspace}}/uast
shell: bash
run: cpplint --recursive --exclude=source/catch.hpp --filter=-legal/copyright,-build/include_subdir source/*
But after a successful installation in the first block, I get "line 1: cpplint: command not found" in the second one.
Please try python -m cpplint:
- name: cpplint
working-directory: ${{runner.workspace}}/uast
shell: bash
run: python -m cpplint --recursive --exclude=source/catch.hpp --filter=-legal/copyright,-build/include_subdir source/*
Modules installed via pip are not always placed on the PATH as system-level commands, so invoking them with python -m works regardless of where pip put the entry-point script.
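A quick way to see why python -m sidesteps the PATH issue is to invoke the module through the interpreter itself. A minimal sketch, assuming only that pip is available to the running Python:

```python
import subprocess
import sys

# sys.executable is the interpreter that ran pip, so "-m" finds the module
# in its site-packages regardless of whether a wrapper script is on PATH.
result = subprocess.run(
    [sys.executable, "-m", "pip", "--version"],
    capture_output=True, text=True, check=True,
)
version_line = result.stdout.strip()
print(version_line)
```

The same pattern applies to cpplint: python -m cpplint resolves the module through the interpreter rather than through the shell's PATH.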
I am trying to run an Ansible playbook via Jenkins Groovy scripts, but I keep getting the error "boto3 is required", even though I have already installed boto3:
pip list | grep boto
boto3 1.20.3
botocore 1.23.3
I have inventory as:
[localhost]
localhost ansible_connection=local ansible_python_interpreter=/usr/local/bin/python
Python:
which python
/usr/bin/python
pip:
which pip
/home/john/.local/bin/pip
boto:
find $HOME/.local -name 'boto3' -type d
/home/john/.local/lib/python3.6/site-packages/boto3
versions:
pip --version
pip 21.3.1 from /home/john/.local/lib/python3.6/site-packages/pip (python 3.6)
python --version
Python 3.6.9
Ansible:
which ansible
/usr/bin/ansible
Playbook in sh file:
ansible-playbook -c local \
-e ansible_python_interpreter=$(which python) \
-i localhost, \
-e env="'${ENV}'" \
-e image="'${IMAGE_NAME}'" \
-e version="'${BUILD_NUMBER}'" \
infra/test.ansible.yaml
What else did I miss in the configuration?
Finally, after days of struggle, I fixed my problem using the steps below:
Created a virtual environment for Python, boto, and Ansible
Edited the Ansible inventory file to point the interpreter to python instead of /usr/bin/python
sudo pip install virtualenv
sudo pip install boto botocore
source ansible_vEnv/bin/activate
Set the following in ansible inventory:
[localhost]
localhost ansible_python_interpreter=python
Gave the playbook command as ansible-playbook -c local dir/test.yaml
Note: make sure you use boto in the YAML file and not boto3:
- hosts: localhost
gather_facts: no
tasks:
- name:
pip:
name: boto # here
state: present
Pointing the interpreter to python will actually pick up the Python from our isolated environment, i.e. the virtual environment we created above.
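A small check (an illustrative sketch, not part of the original answer) confirms whether the interpreter Ansible picks up is the one inside a virtual environment:

```python
import sys

# Inside a virtualenv, sys.prefix points at the environment while
# sys.base_prefix still points at the system Python installation.
in_venv = sys.prefix != sys.base_prefix
print("interpreter:", sys.executable)
print("virtualenv active:", in_venv)
```

Running this through the ansible_python_interpreter you configured is a quick way to verify that boto3 will be importable from the same environment.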
Also, I installed Ansible as root using:
sudo su -
Uninstalled the existing ansible that was installed using apt-get
Installed ansible using pip install ansible
Set the path in Jenkins as /usr/bin for ansible plugin in global tool configuration
I'm trying to create a trigger that tests a function before deploying it as a Cloud Function. So far I have managed to install requirements.txt and execute pytest, but I get the following error:
/usr/local/lib/python3.7/site-packages/ghostscript/__init__.py:35: in <module>
from . import _gsprint as gs
/usr/local/lib/python3.7/site-packages/ghostscript/_gsprint.py:515: in <module>
raise RuntimeError('Can not find Ghostscript library (libgs)')
E RuntimeError: Can not find Ghostscript library (libgs)
I have ghostscript in my requirements.txt file:
[...]
ghostscript==0.6
[...]
pytest==6.0.1
pytest-mock==3.3.1
Here is my deploy.yaml
steps:
- name: 'docker.io/library/python:3.7'
id: Test
entrypoint: /bin/sh
dir: 'My_Project/'
args:
- -c
- 'pip install -r requirements.txt && pytest pytest/test_mainpytest.py -v'
From the traceback, I understand that I don't have Ghostscript installed in the Cloud Build environment, which is true.
Is there a way to install Ghostscript in a step of my deploy.yaml?
Edit 1:
So I tried to install Ghostscript using commands in a step; I tried apt-get gs and apt-get ghostscript, but unfortunately it didn't work.
The real problem is that you are missing a C library; the Python package itself is installed by pip. You should install that library with your system package manager. This is an example for Debian/Ubuntu-based containers:
- name: 'gcr.io/cloud-builders/gcloud'
entrypoint: 'bash'
args:
- '-c'
- |
apt update
apt install ghostscript -y
pip install -r requirements.txt
pytest pytest/test_mainpytest.py -v
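The import error comes from the ghostscript package's lookup of the native shared library. A sketch of the same check, using only the standard library, shows whether libgs is visible before pytest even runs:

```python
import ctypes.util

# The pip package "ghostscript" binds to the native library at import time;
# find_library performs a similar search, so None here predicts the
# "Can not find Ghostscript library (libgs)" RuntimeError.
libgs = ctypes.util.find_library("gs")
print("libgs found:", libgs)
```

Dropping a check like this into the build step makes it obvious whether the apt install actually provided the library.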
I have a Dockerfile as follow:
FROM centos
RUN mkdir work
RUN yum install -y python3 java-1.8.0-openjdk java-1.8.0-openjdk-devel tar git wget zip
RUN pip install pandas
RUN pip install boto3
RUN pip install pynt
WORKDIR ./work
CMD ["bash"]
where I am installing some basic dependencies.
Now when I run
docker run imagename
it does nothing but when I run
docker run -it imageName
I land in the bash shell. But I want to get into the bash shell as soon as I trigger the run command, without any extra parameters.
I am using this Docker container in AWS CodeBuild, where I can't specify parameters like -it, but I want to execute my code inside the Docker container itself.
Is it possible to modify CMD/ENTRYPOINT in such a way that when running the docker image I land right inside the container?
I checked your container; it will not even build due to missing pip. So I modified it a bit so that it at least builds:
FROM centos
RUN mkdir glue
RUN yum install -y python3 java-1.8.0-openjdk java-1.8.0-openjdk-devel tar git wget zip python3-pip
RUN pip3 install pandas
RUN pip3 install boto3
RUN pip3 install pynt
WORKDIR ./glue
Build it using, e.g.:
docker build . -t glue
Then you can run commands in it using, for example, the following syntax:
docker run --rm glue bash -c "mkdir a; ls -a; pwd"
I use --rm as I don't want to keep the container.
Hope this helps.
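For the original goal of running code with a plain docker run, a default command can be baked into the image. A hedged sketch (job.py is a hypothetical script, not from the original Dockerfile):

```dockerfile
FROM centos
RUN yum install -y python3 python3-pip
RUN pip3 install pandas boto3 pynt
WORKDIR /work
COPY job.py .
# CMD gives the container a default command, so a plain
# `docker run imageName` executes the job; a command appended
# after the image name still overrides it.
CMD ["python3", "job.py"]
```

This is the CMD/ENTRYPOINT mechanism the question asks about: without -it, the container simply runs its default command and exits.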
We cannot log in to the Docker container directly.
If you want to run specific commands when the container starts in detached mode, you can specify them in the CMD or ENTRYPOINT instruction of the Dockerfile.
If you want to get into the shell directly, you can run
docker run -it imageName
or
docker run imageName bash -c "ls -ltr;pwd"
and it will return the output.
If you have triggered the run command without the -it parameters, you can get into the running container using:
docker exec -it containerName bash
and you will land in the shell.
Now, if you are using AWS CodeBuild custom images and are wondering how commands can be submitted to the container, put your commands into the build_spec.yaml file under the pre_build, build, or post_build phase, and they will be run inside the Docker container.
build_spec.yml:
version: 0.2
phases:
pre_build:
commands:
- pip install boto3 #or any prebuild configuration
build:
commands:
- spark-submit job.py
post_build:
commands:
- rm -rf /tmp/*
More about build_spec files is in the AWS CodeBuild documentation.
I have installed sam using the following guide:
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install-linux.html
I can run the following:
sam build
But not:
sudo sam build
which gives me: sudo: sam: command not found
Searching further, I found that I need to pass a PATH to sudo as follows:
sudo env "PATH=/home/linuxbrew/.linuxbrew/bin/sam" sam
Is the above correct? I haven't run this command and I'm not sure if it is proper.
This is what I have run:
test -d ~/.linuxbrew && eval $(~/.linuxbrew/bin/brew shellenv)
test -d /home/linuxbrew/.linuxbrew && eval $(/home/linuxbrew/.linuxbrew/bin/brew shellenv)
test -r ~/.bash_profile && echo "eval \$($(brew --prefix)/bin/brew shellenv)" >>~/.bash_profile
echo "eval \$($(brew --prefix)/bin/brew shellenv)" >>~/.profile
You can try this:
In a normal terminal (normal user):
which sam
This will give you the location where sam is installed, let's say /somewhere/bin/sam.
Then try:
sudo /somewhere/bin/sam build
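The root cause is that sudo typically resets PATH, so binaries installed under the Homebrew prefix disappear. The same PATH search can be inspected from Python; in this sketch, the standard "env" binary stands in for sam, since sam is not assumed to be installed:

```python
import shutil

# shutil.which performs the same PATH search as the shell's `which`.
# Under sudo, PATH is usually replaced by a restricted secure_path,
# which is why `sam` is found as a normal user but not under sudo.
path = shutil.which("env")  # "env" stands in for "sam" in this sketch
print(path)
```

Passing the full binary path to sudo, as in the answer above, bypasses the PATH search entirely.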
If you followed the tutorial about the Linux + Brew + SAM install, maybe you forgot to run the command:
brew install aws-sam-cli
Or just make an alias for the command:
nano ~/.bashrc
add row at the end
alias sam='/home/linuxbrew/.linuxbrew/bin/sam'
Save. Restart terminal.
pip3 install aws-sam-cli
This worked for me.
Run the command below after following the official AWS SAM CLI installation tutorial:
$ brew install aws-sam-cli
==> Installing aws-sam-cli from aws/tap
==> Downloading https://github.com/aws/aws-sam-...
...
/home/linuxbrew/.linuxbrew/Cellar/aws-sam-cli/1.13.2: 3,899 files, 91MB
At the end it will show where it was installed; for me it is the path:
/home/linuxbrew/.linuxbrew/Cellar/aws-sam-cli/1.13.2/libexec/bin/sam
Then make a symbolic link:
$ ln -s /home/linuxbrew/.linuxbrew/Cellar/aws-sam-cli/1.13.2/libexec/bin/sam /home/linuxbrew/.linuxbrew/bin/sam
now you will be able to call sam easily
$ sam --version
SAM CLI, version 1.13.2
There's a GitLab.com update rolling out today, and I'm seeing issues connecting to a particular AWS region with Ansible: us-gov-west-1.
This is odd, since in my CI job I'm able to use the AWS CLI just fine:
CI build step:
$ aws ec2 describe-instances
Output (truncated):
{
"Reservations": [
{
"Instances": [
{
"Monitoring": {
"State": "disabled"
},
"PublicDnsName": "ec2-...
The very next build step is as follows; notice that it fails to connect to the region:
CI build step:
$ ansible-playbook -vvv -i inventory/ec2.py -e ansible_ssh_private_key_file=aws-keypairs/gitlab_keypair.pem playbooks/deploy.yml
Output (truncated):
Using /builds/me/my-project/ansible.cfg as config file
ERROR! Attempted to execute "inventory/ec2.py" as inventory script: Inventory script (inventory/ec2.py) had an execution error: region name: us-gov-west-1 likely not supported, or AWS is down. connection to region failed.
ERROR: Job failed: exit code 1
Is anyone else seeing this?
It was working this morning. Any idea why this might be failing now?
I wrote a small Python script to dive deeper into boto. When I googled how to list the regions, I was reminded of the differences between boto 2 and boto 3. Then I reviewed the mechanism I was using to install boto. It looks like the boto installation was the problem.
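The region check can be sketched with boto3's session API (an assumption here: boto3 is available; us-gov-west-1 lives in the aws-us-gov partition rather than the default aws one):

```python
try:
    import boto3
    # GovCloud regions belong to the "aws-us-gov" partition; older boto 2
    # releases shipped a fixed region list, which is why an apt-installed
    # boto could report the region as unsupported.
    regions = boto3.session.Session().get_available_regions(
        "ec2", partition_name="aws-us-gov"
    )
except ImportError:  # boto3 may not be installed in this environment
    regions = None
print(regions)
```

A check like this, run inside the CI image, quickly shows whether the installed boto knows about the GovCloud partition at all.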
Here's the buggy version of my .gitlab-ci.yml file:
image: ansible/ansible:ubuntu1604
test_aws:
stage: deploy
before_script:
- apt-get update
- apt-get -y install python
- apt-get -y install python-boto python-pip
- pip install awscli
script:
- 'aws ec2 describe-instances'
deploy_app:
stage: deploy
before_script:
- apt-get update
- apt-get -y install python
- apt-get -y install python-boto python-pip
- pip install awscli
script:
- 'chmod 400 aws-keypairs/gitlab_keypair.pem'
- 'ansible-playbook -vvv -i inventory/ec2.py -e ansible_ssh_private_key_file=aws-keypairs/gitlab_keypair.pem playbooks/deploy.yml'
And here's the fixed version:
image: ansible/ansible:ubuntu1604
all_in_one:
stage: deploy
before_script:
- rm -rf /var/lib/apt/lists/*
- apt-get update
- apt-get -y install python python-pip
- pip install boto==2.48.0
- pip install awscli
- pip install ansible==2.2.2.0
script:
- 'chmod 400 aws-keypairs/gitlab_keypair.pem'
- 'aws ec2 describe-instances'
- 'python ./boto_debug.py'
- 'ansible-playbook -vvv -i inventory/ec2.py -e ansible_ssh_private_key_file=aws-keypairs/gitlab_keypair.pem playbooks/deploy.yml'
Notice that I switched from using apt-get install to using pip install. Hopefully others will come across this post in the future and avoid installing boto with apt-get -y install python-boto!