rsync command not found while rsync already installed - amazon-web-services

I'm new to AWS and pipelines in general.
I was trying to deploy a Next.js app to an AWS EC2 instance (Ubuntu) using a Bitbucket pipeline. However, the pipeline failed with bash: rsync: command not found.
pipeline failed due to rsync command not found
I've googled for a few days and tried some solutions, yet none of them fixed my pipeline. I've checked rsync on my remote machine and it's already installed; I've also tried re-installing it a few times.
rsync is already installed on the remote machine
I've also re-checked my repository SSH keys, both the private key (the one from the .pem file) and the public key (the one in ~/.ssh/authorized_keys on the remote machine), so I am 100% sure there is no mistake in this part.
As for the host address, I'm using the Public IPv4 address from my AWS instance.
For the repository variables, I've checked them against the pipeline file and am sure there is nothing wrong with them either.
my bitbucket repository variables
here is my pipeline file
Regarding the note about --rsync-path: I've tried using --rsync-path=/usr/bin/rsync, but nothing changed.
I would really appreciate it if someone could help me understand why this is happening and how to fix it.

I've found out that my pipeline file was the cause: I had separated apt-get update and apt-get install -y rsync (a "get updates" step) from the "deploy newest version" step. Each Bitbucket Pipelines step runs in a fresh container, so rsync installed in an earlier step is no longer there when the deploy step runs.
So the solution is just to merge the "get updates" step into the "deploy" step:
- apt-get update -y # update apt
- apt-get install -y rsync # install rsync
- apt-cache search rsync # check if apt can find rsync
- rsync -avzrut --delete-delay --exclude='.git' . $SSH_USER@$SSH_HOST_DEV:$SSH_PATH # sync to the remote (user@host:path)
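For context, a minimal bitbucket-pipelines.yml sketch with everything in a single step could look like the following; the image, branch, and step names here are illustrative assumptions, not copied from my actual file (only the script commands above are):
image: node:16                      # assumed build image
pipelines:
  branches:
    develop:                        # assumed deployment branch
      - step:
          name: Deploy newest version
          script:
            - apt-get update -y                # update apt inside this step's container
            - apt-get install -y rsync         # rsync must be installed in the same step that uses it
            - rsync -avzrut --delete-delay --exclude='.git' . $SSH_USER@$SSH_HOST_DEV:$SSH_PATH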

Related

How do I permanently install an apt package in Google Cloud Shell?

I tried to install a package with apt-get in Cloud Shell once, but the next day it was gone. I saw another Stack Overflow question here, but it was out of date (I think). Please help.
As you can see from the link @DazWilkin has provided, the only directory where Cloud Shell persists your files is $HOME. Anything installed with apt will not persist when the instance provisioned for Cloud Shell shuts down.
There's a solution for this problem. The script $HOME/.customize_environment runs every time you boot up Cloud Shell. It runs as root, so there you can run apt to install the packages you want.
Example, as per the docs:
#!/bin/sh
apt-get update
apt-get -y install erlang
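As a minimal sketch, the file can be created from a regular Cloud Shell session (the erlang package is just the example from the docs above); since it lives under $HOME it survives across sessions:
cat <<'EOF' > $HOME/.customize_environment
#!/bin/sh
apt-get update
apt-get -y install erlang
EOF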
Update: There seems to be an issue where .customize_environment is not working. It's been confirmed by a Google Engineer and it's currently being fixed.

Add centos repository to Amazon Linux instance

I'm trying to add the following yum repository to my AWS instance:
https://centos.pkgs.org/7/centos-x86_64/
The issue is that there is no repodata/ directory with the required metadata at this source. How can I add this repository without getting the following error?
yum-config-manager --add-repo https://centos.pkgs.org/7/centos-x86_64/
yum install -y katello
https://centos.pkgs.org/7/centos-x86_64/repodata/repomd.xml: [Errno 12] Timeout on
https://centos.pkgs.org/7/centos-x86_64/repodata/repomd.xml: (28, 'Connection timed out after 5001 milliseconds')
I'm currently missing multiple dependencies like python-rhsm and selinux-policy which both only exist on the centos.pkgs repository.
Your URL is wrong, the following worked for me:
RUN curl http://mirror.centos.org/centos/7/os/x86_64/RPM-GPG-KEY-CentOS-7 -o RPM-GPG-KEY-CentOS-7
RUN rpm --import RPM-GPG-KEY-CentOS-7
RUN yum-config-manager --add-repo='http://mirror.centos.org/centos/7/os/x86_64/'
Note that a lot of their packages will conflict with Amazon's pre-installed packages.
One workaround I've found is to not add that repo, and instead use yum install <direct_rpm_link> for all of my installations. I had to manually resolve some dependencies myself by adding more .rpm links, but at least it worked in the end.
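As a hedged sketch of that workaround (the package paths below are placeholders, not verified links), yum can install straight from RPM URLs, and several URLs can be passed at once when dependencies have to be chased down by hand:
# install directly from RPM URLs instead of adding the whole repo
sudo yum install -y \
    http://mirror.centos.org/centos/7/os/x86_64/Packages/<package>.rpm \
    http://mirror.centos.org/centos/7/os/x86_64/Packages/<dependency>.rpm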

How to add unavailable packages to EC2 instance?

This might be a really silly question, but I'm trying to train this model: https://github.com/Rayhane-mamah/Tacotron-2 on an AWS instance. I'm using an AWS Educate account, so I was unable to launch an EC2 instance with a Deep Learning AMI; instead, I launched a regular Amazon Linux 2 AMI.
As per the repo's machine setup instructions, I installed python3 and pip and tensorflow onto the instance. However, I am unable to run the command:
sudo yum install -y libasound-dev portaudio19-dev libportaudio2 libportaudiocpp0 ffmpeg libav-tools
(the repo lists the command with apt-get instead of yum)
When I run that command, most of the packages are unavailable. The output I get is:
No package libasound-dev available.
No package portaudio19-dev available.
No package libportaudio2 available.
No package libportaudiocpp0 available.
No package ffmpeg available.
No package libav-tools available.
How can I install these packages onto my EC2 instance? Thanks.
EDIT: I see now my issue is that EC2's Amazon Linux 2 AMI is based on CentOS. I would have to manually install each of these packages (I think). It might be easier to launch an Ubuntu server, or Amazon Linux 1, and use the Dockerfile included in the repo.
You can use a CloudFormation template to install the packages inside EC2. That way, whenever the EC2 instance comes up, it will come up with all the packages.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/deploying.applications.html
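The linked guide does this with CloudFormation helper scripts; a simpler sketch of the same idea is to put the installs into the instance's user data, which runs as root on first boot. The package names below are assumptions (the Debian names from the question have to be mapped to their RPM equivalents, and some may need extra repositories such as EPEL):
#!/bin/bash
# EC2 user data: runs once, as root, when the instance is first launched
yum update -y
yum install -y alsa-lib-devel portaudio portaudio-devel   # assumed RPM equivalents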

Missing AWS Dependency

I had a case where I needed to configure an AWS setup similar to the architecture described in this article, but the article is old, and when I followed the steps I couldn't get past the step at which I run the script "vip_monitor.sh".
To be specific, at step 5, running the script gave me the following error:
Can't open /etc/profile.d/aws-apitools-common.sh
That shell script doesn't exist anywhere on the machine. How can I solve this issue?
Thanks in advance
You will have to set up the API tools manually.
Ubuntu makes their own AMIs for Amazon, and they don't build the API tools into the images.
You can use official ubuntu documentation to fix these:
Install ec2 api tools
sudo apt-add-repository ppa:awstools-dev/awstools
sudo apt-get update
sudo apt-get install ec2-api-tools
Actually, I installed the ec2-api-tools as J.Parashar instructed, and when I ran the script vip_monitor.sh it gave me the same error. So I just took the missing aws-apitools-common.sh script from an Amazon Linux instance, pasted it at the path /etc/profile.d/, changed the script's mode to executable with chmod +x aws-apitools-common.sh, and ran vip_monitor.sh.
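A rough sketch of that copy (the hostname and key path are placeholders, not from the original answer):
# on the Ubuntu instance: pull the script from an Amazon Linux box, then install it
scp -i my-key.pem ec2-user@<amazon-linux-host>:/etc/profile.d/aws-apitools-common.sh /tmp/
sudo cp /tmp/aws-apitools-common.sh /etc/profile.d/
sudo chmod +x /etc/profile.d/aws-apitools-common.sh
bash ./vip_monitor.sh   # re-run the monitoring script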
If you get the error 'Unexpected operator', run the script with bash ./vip_monitor.sh.

Docker build has no network, but docker run has

If I want to build my Dockerfile, it can't connect to the network or at least DNS:
Sending build context to Docker daemon 15.95 MB
Sending build context to Docker daemon
Step 0 : FROM ruby
---> eeb85dfaa855
Step 1 : RUN apt-get update -qq && apt-get install -y build-essential libpq-dev
---> Running in ec8cbd41bcff
W: Failed to fetch http://httpredir.debian.org/debian/dists/jessie/InRelease
W: Failed to fetch http://httpredir.debian.org/debian/dists/jessie-updates/InRelease
W: Failed to fetch http://security.debian.org/dists/jessie/updates/InRelease
W: Failed to fetch http://httpredir.debian.org/debian/dists/jessie/Release.gpg Could not resolve 'httpredir.debian.org'
W: Failed to fetch http://httpredir.debian.org/debian/dists/jessie-updates/Release.gpg Could not resolve 'httpredir.debian.org'
W: Failed to fetch http://security.debian.org/dists/jessie/updates/Release.gpg Could not resolve 'security.debian.org'
W: Some index files failed to download. They have been ignored, or old ones used instead.
Reading package lists...
Building dependency tree...
Reading state information...
E: Unable to locate package build-essential
INFO[0001] The command "/bin/sh -c apt-get update -qq && apt-get install -y build-essential libpq-dev" returned a non-zero code: 100
But if I run exactly the same command via docker run it works:
docker run --name="test" ruby /bin/sh -c 'apt-get update -qq && apt-get install -y build-essential libpq-dev'
Does anybody have an idea why docker build does not work? I have tried all the DNS-related tips on Stack Overflow, like starting Docker with --dns 8.8.8.8, etc.
Thanks in advance
Check what networks are available on your host with the below command:
docker network ls
then pick one that you know is working; the host network could be a good candidate.
Now, assuming you are in the directory where your Dockerfile is, build your image appending the --network flag and replacing <image-name> with yours:
docker build . -t <image-name> --no-cache --network=host
Docker definitely seems to have some network issues. I managed to fix this problem with
systemctl restart docker
... which is basically just the unix-level 'restart-the-daemon' command in Debian 8.
I had a similar problem, but as I was running Amazon Linux I had no systemctl. I solved it using:
sudo service docker restart
My docker build also failed while trying to run apt-get upgrade with the exact same errors. I was using docker-machine on Mac OSX and a simple docker-machine restart default solved this issue. No idea what initially caused this, though.
Another case of the above reported behaviour - this time building a docker image from Jenkins:
[...]
Step 3 : RUN apt-get update && apt-get install -y curl libapache2-mod-proxy-html
---> Running in ea7aca5dea9b
Err http://security.debian.org jessie/updates InRelease
Err http://security.debian.org jessie/updates Release.gpg
Could not resolve 'security.debian.org'
Err http://httpredir.debian.org jessie InRelease
[...]
In my case it turned out that DNS wasn't reachable from within the container, but it was from the Docker host!? (The container's resolver configuration was okay!)
After restarting the Docker machine (a complete reboot; a 'docker.service restart' didn't do the trick) it's been working again.
So some activity of mine (or of a colleague) must have broken the Docker networking. Maybe some firewalld modification?
I'm still investigating, as I'm not sure which activity may have corrupted the Docker networking.
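As a quick diagnostic (a sketch, not part of the original answer), you can compare name resolution on the host with resolution inside a throwaway container to confirm whether you're hitting the same container-side DNS problem:
# on the Docker host: should succeed if host DNS is fine
getent hosts httpredir.debian.org
# inside a minimal container: fails when container-side DNS is broken
docker run --rm busybox nslookup httpredir.debian.org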
I have the exact same issue with a Raspberry Pi.
Starting/stopping the service did not help, but re-installing the package (dpkg -i docker-hypriot_1.10.3-1_armhf.deb && service docker start in my case) immediately solved the situation: apt-get update now manages to resolve and reach the servers.
There must be some one-shot actions in the installation process...
Also faced the same issue today. My workaround was to restart the docker-machine. In my case, it's on VirtualBox.
Once you power it off and then restart the machine, http://security.debian.org seems to resolve again.
Hope this helps.
A couple of suggestions, not sure if they will work or not: can you change ...apt-get install -y... to ...apt-get install -yqq...?
Also, has the image that you're trying to build from changed?