I'm attempting to install an up-to-date version of ffmpeg on an Elastic Beanstalk instance on Amazon's servers. I've created my config file and added these container_commands:
container_commands:
  01-ffmpeg:
    command: wget -O/usr/local/bin/ffmpeg http://ffmpeg.gusari.org/static/64bit/ffmpeg.static.64bit.2014-03-05.tar.gz
    leader_only: false
  02-ffmpeg:
    command: tar -xzf /usr/local/bin/ffmpeg
    leader_only: false
  03-ffmpeg:
    command: ln -s /usr/local/bin/ffmpeg /usr/bin/ffmpeg
    leader_only: false
Commands 01 and 03 seem to work perfectly, but 02 doesn't, so ffmpeg never gets unpacked. Any ideas what the issue might be?
Thanks,
Helen
A kind person at Amazon helped me out and sent me this config file that works; hopefully other people will find it useful:
# .ebextensions/packages.config
packages:
  yum:
    ImageMagick: []
    ImageMagick-devel: []
commands:
  01-wget:
    command: "wget -O /tmp/ffmpeg.tar.xz https://johnvansickle.com/ffmpeg/releases/ffmpeg-release-amd64-static.tar.xz"
  02-mkdir:
    command: "if [ ! -d /opt/ffmpeg ] ; then mkdir -p /opt/ffmpeg; fi"
  03-tar:
    command: "tar xvf /tmp/ffmpeg.tar.xz -C /opt/ffmpeg"
  04-ln:
    command: "if [[ ! -f /usr/bin/ffmpeg ]] ; then ln -sf /opt/ffmpeg/ffmpeg-4.2.2-amd64-static/ffmpeg /usr/bin/ffmpeg; fi"
  05-ln:
    command: "if [[ ! -f /usr/bin/ffprobe ]] ; then ln -sf /opt/ffmpeg/ffmpeg-4.2.2-amd64-static/ffprobe /usr/bin/ffprobe; fi"
  06-pecl:
    command: "if [ `pecl list | grep imagick` ] ; then pecl install -f imagick; fi"
Edit:
The above code works for me today, 2020-01-03, in the Elastic Beanstalk environment "Python 3.6 running on 64bit Amazon Linux/2.9.17".
https://johnvansickle.com/ffmpeg/ is linked from the official ffmpeg site.
(The former static build from Gusari no longer seems to be available.)
Warning:
The above will always download the latest release when you deploy. You're also depending on johnvansickle's site being online (to deploy), and his URL not changing. Two alternative approaches would be:
Download the .tar.xz file to your own CDN, and let your deployment download from your own site. (That way, if John's site has a moment of downtime while you're deploying, you're unaffected. And you won't be surprised by the ffmpeg version changing without you realising.)
Specify a version number like https://johnvansickle.com/ffmpeg/old-releases/ffmpeg-4.2.2-amd64-static.tar.xz.
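For the second approach, pinning is just a matter of swapping the URL in the 01-wget command (a sketch; check that the old-releases file you pick actually exists before relying on it):

commands:
  01-wget:
    # pin a specific release so deployments are reproducible
    command: "wget -O /tmp/ffmpeg.tar.xz https://johnvansickle.com/ffmpeg/old-releases/ffmpeg-4.2.2-amd64-static.tar.xz"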
You can use a static ffmpeg build (such as the John Van Sickle build linked below) and the sources syntax to automagically download and extract the binaries from a static-build tarball into /usr/local/bin. Here's an extremely simple example that has worked for me:
sources:
  /usr/local/bin: https://johnvansickle.com/ffmpeg/releases/ffmpeg-release-64bit-static.tar.xz
The version is not specified in the first command ("01-wget ...") above; however, it is specified when linking the files. Since that was published, the release has changed from "ffmpeg-3.3.1-64bit-static" to "ffmpeg-3.3.3-64bit-static". There are two ways to fix this:
specify the version for wget, or
strip the containing directory on unpacking:
03-tar:
  command: "tar xvf /tmp/ffmpeg.tar.xz -C /opt/ffmpeg --strip 1"
Here is the full script:
packages:
  yum:
    ImageMagick: []
    ImageMagick-devel: []
commands:
  01-wget:
    command: "wget -O /tmp/ffmpeg.tar.xz https://johnvansickle.com/ffmpeg/releases/ffmpeg-release-64bit-static.tar.xz"
  02-mkdir:
    command: "if [ ! -d /opt/ffmpeg ] ; then mkdir -p /opt/ffmpeg; fi"
  03-tar:
    command: "tar xvf /tmp/ffmpeg.tar.xz -C /opt/ffmpeg --strip 1"
  04-ln:
    command: "if [[ ! -f /usr/bin/ffmpeg ]] ; then ln -s /opt/ffmpeg/ffmpeg /usr/bin/ffmpeg; fi"
  05-ln:
    command: "if [[ ! -f /usr/bin/ffprobe ]] ; then ln -s /opt/ffmpeg/ffprobe /usr/bin/ffprobe; fi"
  06-pecl:
    command: "if [ `pecl list | grep imagick` ] ; then pecl install -f imagick; fi"
Add the following to your .ebextensions/packages.config:
packages:
  yum:
    ImageMagick: []
sources:
  /usr/local/bin: http://ffmpeg.org/releases/ffmpeg-4.1.tar.gz
Check cloud-init logs for messages. On a Linux instance, that would be:
grep "03-ffmpeg" /var/log/eb-cfn-init.log
Also, you can log to another file to make errors easier to find:
command: ln -s /usr/local/bin/ffmpeg /usr/bin/ffmpeg >> /var/log/my-init.log
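Note that >> captures only stdout; if you also want the command's error output in that file, redirect stderr as well (a small, standard-shell tweak):

command: ln -s /usr/local/bin/ffmpeg /usr/bin/ffmpeg >> /var/log/my-init.log 2>&1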
Untested, but shouldn't it be
tar xzf /usr/local/bin/ffmpeg
without a minus?
In case there are others out there who prefer compiling from source, here are the steps I took to do so (this worked for me with a Java application built with Maven).
Inside your project root directory, create a ".ebextensions" folder containing a file with a name of your choice, but it must have the ".config" extension.
Specify the following as the contents of the .ebextensions/yourfilename.config file:
packages:
  yum:
    autoconf: []
    automake: []
    cmake: []
    freetype-devel: []
    gcc: []
    gcc-c++: []
    git: []
    libtool: []
    make: []
    nasm: []
    pkgconfig: []
    zlib-devel: []
    ImageMagick: []
    ImageMagick-devel: []
commands:
  01-mkdir:
    command: |
      if [ ! -d /opt/bin ] ; then mkdir -p /opt/bin; fi
      if [ ! -d /opt/ffmpeg ] ; then mkdir -p /opt/ffmpeg; fi
      if [ ! -d /opt/ffmpeg/ffmpeg-5.1-build ] ; then mkdir -p /opt/ffmpeg/ffmpeg-5.1-build; fi
  02-wget:
    command: |
      if [ ! -d /opt/ffmpeg/ffmpeg-5.1 ] ; then
        if [ ! -f /tmp/ffmpeg-5.1.tar.gz ] ; then wget -O /tmp/ffmpeg-5.1.tar.gz https://ffmpeg.org/releases/ffmpeg-5.1.tar.gz; fi
        tar xvf /tmp/ffmpeg-5.1.tar.gz -C /opt/ffmpeg
      fi
  03-configure:
    cwd: /opt/ffmpeg/ffmpeg-5.1
    command: |
      if [[ ! -f /opt/bin/ffmpeg ]] ; then
        PKG_CONFIG_PATH="/opt/ffmpeg/ffmpeg-5.1-build/lib/pkgconfig" \
        ./configure \
          --prefix="/opt/ffmpeg/ffmpeg-5.1-build" \
          --pkg-config-flags="--static" \
          --bindir="/opt/ffmpeg/ffmpeg-5.1-build/bin" \
          --enable-gpl \
          --enable-libx264
      fi
  04-make:
    cwd: /opt/ffmpeg/ffmpeg-5.1
    command: |
      if [[ ! -f /opt/bin/ffmpeg ]] ; then
        make && make install
      fi
  05-link:
    command: if [[ ! -f /usr/bin/ffmpeg ]] ; then ln -sf /opt/ffmpeg/ffmpeg-5.1-build/bin/ffmpeg /usr/bin/ffmpeg; fi
Here is a summary of the steps that will be executed when your update is deployed to Elastic Beanstalk:
Packages will be configured on your instance; e.g. if your instance doesn't have cmake, it will be installed as per the "packages" section of the .ebextensions/yourfilename.config file.
Once all dependencies are resolved, the commands section will execute (I've discovered that the commands don't depend on one another, e.g. if step 1 fails, step 2 will still be executed).
Command 01: Creates the directories where your files will be unarchived to (amongst other things).
Command 02: Downloads the tar from the release website, places it into your instance's temp folder, then unarchives it into your instance's /opt/ffmpeg/ folder. It's worth mentioning that the archive contains a folder "ffmpeg-5.1", so when it's unarchived you'll have a directory "/opt/ffmpeg/ffmpeg-5.1". This entire step won't execute if the folder "/opt/ffmpeg/ffmpeg-5.1" already exists, i.e. the step was most likely executed during a previous deploy/update to your instance.
Commands 03 & 04: Configure and build ffmpeg. You can check out the ffmpeg documentation for different configure options as per your requirements.
Command 05: Creates a symlink from the installation directory to your /usr/bin folder. This is required because the /opt directory is not managed by the ec2-user (it requires root access), and therefore, when you run ffmpeg from /opt or its subdirectories, your Java application may throw Access Denied or other permission-related errors.
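If your application also needs ffprobe, the same pattern as Command 05 can be extended with another link step (a sketch; it assumes ffprobe was built and installed alongside ffmpeg, which is the default for this configure/make setup):

  06-link-ffprobe:
    command: if [[ ! -f /usr/bin/ffprobe ]] ; then ln -sf /opt/ffmpeg/ffmpeg-5.1-build/bin/ffprobe /usr/bin/ffprobe; fi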
Related
I'm working on a web app on AWS CodePipeline. One of my backend pipeline's stages includes a docker build command, and the Dockerfile includes these commands:
RUN curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.37.2/install.sh | bash
RUN /bin/bash -c ". ~/.nvm/nvm.sh && \
nvm install $NODE_VERSION && nvm use $NODE_VERSION && \
npm install -g aws-cdk cdk-assume-role-credential-plugin#1.1.1 && \
nvm alias default node && nvm cache clear"
RUN echo export PATH="\
/root/.nvm/versions/node/${NODE_VERSION}/bin:\
$(python3.8 -m site --user-base)/bin:\
$(python3 -m site --user-base)/bin:\
$PATH" >> ~/.bashrc && \
echo "nvm use ${NODE_VERSION} 1> /dev/null" >> ~/.bashrc
RUN /bin/bash -c ". ~/.nvm/nvm.sh && cdk --version"
ENTRYPOINT [ "/bin/bash", "-c", ". ~/.nvm/nvm.sh && uvicorn cdkproxymain:app --host 0.0.0.0 --port 8080" ]
The problem is that I'm running the build in a VPC without an internet gateway (client's policy), so the curl command fails. I have tried to install nvm locally by copying the nvm folder into my src directory, but I lack the skills to script this.
Any advice is welcome. Thank you so much!
I have a Django application which is deployed to Amazon Elastic Beanstalk. I have to install Anaconda in order to install the pythonocc-core package. I have created a .config file in the .ebextensions folder and added the Anaconda path in my wsgi.py file as shown below, and I have deployed it successfully.
.config file:
commands:
  00_download_conda:
    command: 'wget https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh'
    test: test ! -d /anaconda
  01_install_conda:
    command: 'bash Anaconda3-2020.02-Linux-x86_64.sh -b -f -p /anaconda'
    test: test ! -d /anaconda
  02_create_home:
    command: 'mkdir -p /home/wsgi'
  03_conda_activate_installation:
    command: 'source ~/.bashrc'
wsgi.py:
sys.path.append('/anaconda/lib/python3.7/site-packages')
However, when I add the 04_conda_install_pythonocc command below to this .config file, I get a command failed error.
04_conda_install_pythonocc:
  command: 'conda install -c dlr-sc pythonocc-core=7.4.0'
I SSHed into the instance to check. I saw that the /anaconda folder exists, but when I ran conda --version, I got the error -bash: conda: command not found.
Afterwards, I thought there might be a problem with the PATH, so I edited the .config file as follows and deployed it successfully.
commands:
  00_download_conda:
    command: 'wget https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh'
    test: test ! -d /anaconda
  01_install_conda:
    command: 'bash Anaconda3-2020.02-Linux-x86_64.sh -b -f -p /anaconda'
    test: test ! -d /anaconda
  02_create_home:
    command: 'mkdir -p /home/wsgi'
  03_add_path:
    command: 'export PATH=$PATH:$HOME/anaconda/bin'
  04_conda_activate_installation:
    command: 'source ~/.bashrc'
But when I add the conda_install_pythonocc command again to this edited version of the .config file, it fails again with the same command failed error.
Run manually, all the commands work, but they don't work in my .config file.
How can I fix this issue and install the package with conda?
I tried to replicate the issue on my sandbox account, and I successfully installed conda using the following (simplified) config file on 64bit Amazon Linux 2 v3.0.3 running Python 3.7:
.ebextensions/60_anaconda.config
commands:
  00_download_conda:
    command: 'wget https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh'
  01_install_conda:
    command: 'bash Anaconda3-2020.02-Linux-x86_64.sh -b -f -p /anaconda'
  05_conda_install:
    command: '/anaconda/bin/conda install -y -c dlr-sc pythonocc-core=7.4.0'
Note the use of the absolute path /anaconda/bin/conda and of -y so that conda does not ask for manual confirmation. I only verified the installation procedure, not how to use it afterwards (e.g. not how to use it in a Python application), so you will probably need to adjust it to your needs.
The EB log file showing successful installation is also provided for your reference (shortened for simplicity):
/var/log/cfn-init-cmd.log
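If you want the deployment itself to confirm the package actually landed, a follow-up command can be appended to the same config (a hedged sketch; it simply greps conda's package list and fails the deploy if nothing matches):

  06_verify_install:
    command: '/anaconda/bin/conda list | grep pythonocc'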
I need to have LibreOffice installed on my web server. Since I'm using autoscaling with AWS Elastic Beanstalk, I need to install it on deployment. To do so, I am using .ebextensions files, but can't get it to work. This is my config file in .ebextensions folder:
commands:
  01-download-libreoffice:
    command: wget http://download.documentfoundation.org/libreoffice/stable/6.0.2/rpm/x86_64/LibreOffice_6.0.2_Linux_x86-64_rpm.tar.gz
  02-untar:
    command: sudo tar -xvf LibreOffice_6.0.2_Linux_x86-64_rpm.tar.gz
  03-install:
    command: |
      if [ ${APP_ENV} == "production" ]; then
        cd LibreOffice_6.0.2.1_Linux_x86-64_rpm/RPMS
        sudo yum localinstall *.rpm
      fi
  04-symlink:
    command: sudo ln -fs /opt/libreoffice6.0/program/soffice /usr/bin/soffice
I tried to run these commands myself on my EC2 instance, one after another as the root user, and everything worked. The only thing I might suspect: when I run the localinstall command, I need to confirm (there is a [y/n] prompt) to start the installation.
If this were the problem, I think I would still find the zipped LibreOffice file on my server, or at least the untarred LibreOffice files, but I can't find anything when I SSH into the EC2 instance after deployment.
There is no error message on deployment. Also, I can see that other .ebextensions scripts are running fine since some processes are running as asked in these scripts.
Any idea where the problem could be?
If it can be of any help, here is how I manage to install LibreOffice on my EC2 instances on deployment. This will install LibreOffice 5.4 in /opt/libreoffice5.4.
The following code is placed in this file: .ebextensions/01-libreoffice-setup.config
packages:
  yum:
    libXinerama.x86_64: []
    cups-libs: []
    dbus-glib: []
commands:
  01-download-libreoffice:
    command: wget http://download.documentfoundation.org/libreoffice/stable/5.4.6/rpm/x86_64/LibreOffice_5.4.6_Linux_x86-64_rpm.tar.gz
    cwd: /tmp
    test: "[ ! -f /tmp/LibreOffice_5.4.6_Linux_x86-64_rpm.tar.gz ]"
  02-untar:
    command: sudo tar -xvf LibreOffice_5.4.6_Linux_x86-64_rpm.tar.gz
    cwd: /tmp
    test: "[ ! -d /tmp/LibreOffice_5.4.6.2_Linux_x86-64_rpm ]"
  03-install:
    command: sudo yum localinstall *.rpm -y
    cwd: /tmp/LibreOffice_5.4.6.2_Linux_x86-64_rpm/RPMS
    test: "[ ! -d /opt/libreoffice5.4 ]"
I'm following this tutorial on how to use Travis CI with Google Cloud for Continuous Deployments:
https://cloud.google.com/solutions/continuous-delivery-with-travis-ci
When Travis builds, it tells me that the gcloud command is not found. Here's my .travis file:
sudo: false
language: python
cache:
  directories:
  - "$HOME/google-cloud-sdk/"
env:
- GAE_PYTHONPATH=${HOME}/.cache/google_appengine PATH=$PATH:${HOME}/google-cloud-sdk/bin
  PYTHONPATH=${PYTHONPATH}:${GAE_PYTHONPATH} CLOUDSDK_CORE_DISABLE_PROMPTS=1
before_install:
- openssl aes-256-cbc -K $encrypted_404aa45a170f_key -iv $encrypted_404aa45a170f_iv
  -in credentials.tar.gz.enc -out credentials.tar.gz -d
- if [ ! -d "${GAE_PYTHONPATH}" ]; then python scripts/fetch_gae_sdk.py $(dirname "${GAE_PYTHONPATH}"); fi
- if [ ! -d ${HOME}/google-cloud-sdk ]; then curl https://sdk.cloud.google.com | bash; fi
- tar -xzf credentials.tar.gz
- mkdir -p lib
- gcloud auth activate-service-account --key-file client-secret.json
install:
- gcloud config set project continuous-deployment-192112
- gcloud -q components update gae-python
- pip install -r requirements.txt -t lib/
script:
- python test_main.py
- gcloud -q preview app deploy app.yaml --promote
- python e2e_test.py
This is the same file provided by the example repository from the tutorial. The line that fails is:
- gcloud auth activate-service-account --key-file client-secret.json
This happens even though the script has already checked for the SDK and installed it if it isn't there.
I've already tried adding - source ~/.bash_profile after the install, but this doesn't work.
Am I missing a command somewhere?
I ran into the same issue and this has worked for me:
- if [ ! -d "$HOME/google-cloud-sdk" ]; then
export CLOUD_SDK_REPO="cloud-sdk-$(lsb_release -c -s)";
echo "deb http://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list;
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - ;
sudo apt-get update && sudo apt-get install google-cloud-sdk;
fi
The only issue, however, is that since it needs sudo, the build will run on GCE, which is much slower than EC2:
https://docs.travis-ci.com/user/reference/overview/#Virtualisation-Environment-vs-Operating-System
Updated:
This is the best solution -
How to install Google Cloud SDK on Travis?
I'm currently installing goczmq (https://github.com/zeromq/goczmq) on a golang:1.6.2-alpine docker container, as follows:
wget https://download.libsodium.org/libsodium/releases/libsodium-1.0.10.tar.gz
wget https://download.libsodium.org/libsodium/releases/libsodium-1.0.10.tar.gz.sig
wget https://download.libsodium.org/jedi.gpg.asc
gpg --import jedi.gpg.asc
gpg --verify libsodium-1.0.10.tar.gz.sig libsodium-1.0.10.tar.gz
tar zxvf libsodium-1.0.10.tar.gz
cd libsodium-1.0.10
./configure; make check
sudo make install
sudo ldconfig
The process fails on ldconfig. There seems to be an ldconfig command, but I don't think it is actually functional. Any insights? Thank you in advance.
Alpine's version of ldconfig requires you to specify the target folder or library as an argument. Note that alpine has no /etc/ld.so.conf file, nor does it recognize one if you create it.
Example with no target path:
$ docker run -ti alpine sh -c "ldconfig; echo \$?"
1
Example with target path:
$ docker run -ti alpine sh -c "ldconfig /; echo \$?"
0
However, even with that there are frequently linking errors. Others suggest:
Manual symbolic links
Installing glibc into your container.
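Applied to the libsodium build from the question, this means giving ldconfig the install directory explicitly (a sketch; it assumes the default /usr/local prefix and that you are root inside the container, so sudo isn't needed):

./configure && make check
make install
ldconfig /usr/local/lib    # Alpine's ldconfig wants an explicit directory argument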