AWS Elastic Beanstalk: installing LibreOffice on deployment

I need to have LibreOffice installed on my web server. Since I'm using autoscaling with AWS Elastic Beanstalk, I need to install it on deployment. To do so, I am using .ebextensions files, but I can't get it to work. This is my config file in the .ebextensions folder:
commands:
  01-download-libreoffice:
    command: wget http://download.documentfoundation.org/libreoffice/stable/6.0.2/rpm/x86_64/LibreOffice_6.0.2_Linux_x86-64_rpm.tar.gz
  02-untar:
    command: sudo tar -xvf LibreOffice_6.0.2_Linux_x86-64_rpm.tar.gz
  03-install:
    command: |
      if [ ${APP_ENV} == "production" ]; then
        cd LibreOffice_6.0.2.1_Linux_x86-64_rpm/RPMS
        sudo yum localinstall *.rpm
      fi
  04-symlink:
    command: sudo ln -fs /opt/libreoffice6.0/program/soffice /usr/bin/soffice
I tried running these commands myself on my EC2 instance, one after another as the root user, and everything worked. The only thing I suspect: when I run the localinstall command, I have to confirm a [y/n] prompt to start the installation.
If that were the problem, though, I would still expect to find the zipped LibreOffice archive on the server, or at least the untarred files, but I can't find anything when I SSH into the EC2 instance after deployment.
There is no error message on deployment. Also, I can see that other .ebextensions scripts run fine, since the processes they are supposed to start are running.
Any idea where the problem could be?
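For reference, that [y/n] prompt is a plausible culprit: .ebextensions commands run without a TTY, so an interactive prompt can stall or abort the step. A non-interactive variant of step 03 would pass -y to yum (a sketch of the fix, not a confirmed diagnosis):
  03-install:
    command: |
      if [ ${APP_ENV} == "production" ]; then
        cd LibreOffice_6.0.2.1_Linux_x86-64_rpm/RPMS
        sudo yum localinstall -y *.rpm
      fi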

If it can be of any help, here is how I manage to install LibreOffice on my EC2 instances on deployment. This installs LibreOffice 5.4 in /opt/libreoffice5.4.
The following code is placed in this file: .ebextensions/01-libreoffice-setup.config
packages:
  yum:
    libXinerama.x86_64: []
    cups-libs: []
    dbus-glib: []
commands:
  01-download-libreoffice:
    command: wget http://download.documentfoundation.org/libreoffice/stable/5.4.6/rpm/x86_64/LibreOffice_5.4.6_Linux_x86-64_rpm.tar.gz
    cwd: /tmp
    test: "[ ! -f /tmp/LibreOffice_5.4.6_Linux_x86-64_rpm.tar.gz ]"
  02-untar:
    command: sudo tar -xvf LibreOffice_5.4.6_Linux_x86-64_rpm.tar.gz
    cwd: /tmp
    test: "[ ! -d /tmp/LibreOffice_5.4.6.2_Linux_x86-64_rpm ]"
  03-install:
    command: sudo yum localinstall *.rpm -y
    cwd: /tmp/LibreOffice_5.4.6.2_Linux_x86-64_rpm/RPMS
    test: "[ ! -d /opt/libreoffice5.4 ]"
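If you also want soffice on the PATH, as in step 04 of the question, a matching symlink step can be appended (a sketch assuming the 5.4 install location above):
  04-symlink:
    command: sudo ln -fs /opt/libreoffice5.4/program/soffice /usr/bin/soffice
    test: "[ ! -L /usr/bin/soffice ]"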

Related

Files downloaded with user-data deleted?

I have a user-data bootstrap script that creates a folder called content in the root directory and downloads files from an S3 bucket.
#!/bin/bash
# install updates and docker
sudo yum update -y
sudo yum search docker
sudo yum install docker -y
# let ec2-user run docker without sudo
sudo usermod -a -G docker ec2-user
id ec2-user
newgrp docker
# install docker-compose via pip
sudo yum install python3-pip -y
sudo pip3 install docker-compose
# enable and start the docker daemon
sudo systemctl enable docker.service
sudo systemctl start docker.service
export PATH=$PATH:/usr/local/bin
# create the content folder and start the reverse proxy
mkdir content
docker network create web_todos
docker run -d -p 80:80 --name nginx-proxy --network=web_todos -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
# pull config, database backup, and media from S3
aws s3 cp s3://jv-pocho/docker-compose.yaml .
aws s3 cp s3://jv-pocho/backup.sql .
aws s3 cp s3://jv-pocho/dns-updater.sh .
aws s3 sync s3://jv-pocho/images/ ./content/images
aws s3 sync s3://jv-pocho/themes/ ./content/themes
# bring up the stack and restore the database
docker-compose up -d
sleep 30
docker exec -i db_jv sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' < backup.sql
rm backup.sql
chmod +x dns-updater.sh
This bootstrap works OK: it creates the folder and downloads the files (it has permission to download them), e.g.:
download: s3://jv-pocho/dns-updater.sh to ./dns-updater.sh
[ 92.739262] cloud-init[3203]: Completed 32.0 KiB/727.2 KiB (273.1 KiB/s) with 25 file(s) remaining
So it's copying all the files correctly. The thing is that when I enter the instance via SSH, I don't have any files inside:
[ec2-user@ip-x-x-x-x ~]$ ls
[ec2-user@ip-x-x-x-x ~]$ ls -l
total 0
All commands worked as expected; the yum installs, Python, Docker, etc. all succeeded, but there are no files.
Are files deleted after the bootstrap script runs?
Thanks!
Try copying them to a specific path, then look there, because as written we don't know which directory the script runs from.
Use the following form to copy to a specific path:
aws s3 cp s3://Bucket-name/Object /Path
Alternatively, run the pwd command and print the result with echo so that you know the script's working directory.
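For example (a sketch reusing the bucket paths from the question; user data runs as root, so relative paths do not resolve to /home/ec2-user):
# copy to absolute paths instead of the script's working directory
aws s3 cp s3://jv-pocho/docker-compose.yaml /home/ec2-user/docker-compose.yaml
aws s3 sync s3://jv-pocho/images/ /home/ec2-user/content/images
# print where the script actually ran from
echo "user data ran from: $(pwd)"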

Docker container not updating media folder from AWS S3 bucket

I'm having trouble getting a docker container to update the images (like .png's) on my local system.
The Docker container has a script in the Dockerfile that copies the images into a folder in the container. Then that folder gets copied into a new directory that is set up as a shared volume. This process is split between the Dockerfile for this container and a "command" entry in the docker-compose.yaml.
Everything seems to run fine; following the output of the copy command looks right. I don't get any errors, and once the command stops running the container stops.
I've tried destroying the container and image completely and recreating it, but I still see the old images in the application. I'm guessing that it's not overwriting the existing images, but I don't know why.
Dockerfile:
FROM ubuntu:18.04
# Install packages
RUN apt-get update
RUN apt-get install -y apt-utils
RUN apt-get install -y curl unzip
# Install AWS CLI
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
RUN unzip awscliv2.zip
RUN ./aws/install
# Load AWS Access Keys
COPY ./config /root/.aws/
COPY ./credentials /root/.aws/
RUN chmod 600 /root/.aws/config /root/.aws/credentials
# Download media
RUN mkdir -p /var/media
RUN aws s3 cp --recursive s3://images/ /var/media
Snippet from the docker-compose.yaml:
version: '3.1'
services:
  client:
    [[other stuff here]]
  media:
    image: [[our own image stored with AWS]]
    command: 'cp -R -v -f /var/media/images/ /var/media-access/'
    volumes:
      - ./src/media:/var/media-access
  [[other stuff here]]
Any advice is appreciated!
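One thing worth checking (an assumption, not something confirmed in the thread): ./src/media is a bind mount, so files already present on the host persist across image rebuilds, and cp -f only overwrites the files it copies; it never deletes anything. Clearing the host-side folder before recreating the media service would show whether stale host files are the problem:
# remove previously copied images from the host side of the bind mount
rm -rf ./src/media/images
# recreate the media container so its cp command repopulates the folder
docker-compose up -d --force-recreate media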

Installing Anaconda on Amazon Elastic Beanstalk to use in Django application

I have a Django application deployed to Amazon Elastic Beanstalk. I have to install Anaconda in order to install the pythonocc-core package. I created a .config file in the .ebextensions folder, added the Anaconda path to my wsgi.py as below, and deployed successfully.
.config file:
commands:
  00_download_conda:
    command: 'wget https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh'
    test: test ! -d /anaconda
  01_install_conda:
    command: 'bash Anaconda3-2020.02-Linux-x86_64.sh -b -f -p /anaconda'
    test: test ! -d /anaconda
  02_create_home:
    command: 'mkdir -p /home/wsgi'
  03_conda_activate_installation:
    command: 'source ~/.bashrc'
wsgi.py:
sys.path.append('/anaconda/lib/python3.7/site-packages')
However, when I append the 04_conda_install_pythonocc command below to this .config file, I get a command failed error.
  04_conda_install_pythonocc:
    command: 'conda install -c dlr-sc pythonocc-core=7.4.0'
I SSHed into the instance to check. I saw that the /anaconda folder had been created, but when I ran conda --version I got the error -bash: conda: command not found.
Afterwards, I thought there might be a problem with the PATH, so I edited the .config file as follows and deployed it successfully:
commands:
  00_download_conda:
    command: 'wget https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh'
    test: test ! -d /anaconda
  01_install_conda:
    command: 'bash Anaconda3-2020.02-Linux-x86_64.sh -b -f -p /anaconda'
    test: test ! -d /anaconda
  02_create_home:
    command: 'mkdir -p /home/wsgi'
  03_add_path:
    command: 'export PATH=$PATH:$HOME/anaconda/bin'
  04_conda_activate_installation:
    command: 'source ~/.bashrc'
But when I add the conda_install_pythonocc command again at the end of this edited .config file, it fails again with command failed.
Run manually, all the commands work, but they don't work in my .config file.
How can I fix this issue and install the package with conda?
I tried to replicate the issue on my sandbox account, and I successfully installed conda using the following (simplified) config file on 64bit Amazon Linux 2 v3.0.3 running Python 3.7:
.ebextensions/60_anaconda.config
commands:
  00_download_conda:
    command: 'wget https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh'
  01_install_conda:
    command: 'bash Anaconda3-2020.02-Linux-x86_64.sh -b -f -p /anaconda'
  05_conda_install:
    command: '/anaconda/bin/conda install -y -c dlr-sc pythonocc-core=7.4.0'
Note the use of the absolute path /anaconda/bin/conda, and of -y so conda does not ask for manual confirmation. Each command in .ebextensions runs in its own shell, which is why the export and source steps in the earlier attempts did not make conda visible to later commands. I only verified the installation procedure, not how to use it afterwards (e.g. from the Python application), so you will probably need to adjust it to your needs.
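A minimal illustration of that shell isolation (hypothetical step names):
commands:
  01_set_path:
    command: 'export PATH=/anaconda/bin:$PATH'  # affects only this command's own shell
  02_use_conda:
    command: 'conda --version'                  # runs in a fresh shell: command not found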
The EB log file showing successful installation is also provided for your reference (shortened for simplicity):
/var/log/cfn-init-cmd.log

How can I use docker volume with ubuntu image to access a downloaded file from AWS S3?

I want to copy a file from AWS S3 to a local directory through a docker container.
This copying command is easy without Docker; I can see the file downloaded in the current directory.
But with Docker, the problem is that I don't even know how to access the file.
Here is my Dockerfile:
FROM ubuntu
WORKDIR "/Users/ezzeldin/s3docker-test"
RUN apt-get update
RUN apt-get install -y awscli
ENV AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
ENV AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
CMD [ "aws", "s3", "cp", "s3://ezz-test/s3-test.py", "." ]
The current working folder where I should see the downloaded file is s3docker-test/. After building the Dockerfile, this is what I'm doing to mount a volume myvol to the local directory:
docker run -d --name devtest3 -v $PWD:/var/lib/docker/volumes/myvol/_data ubuntu
So after running the image I get this:
download: s3://ezz-test/s3-test.py to ./s3-test.py
which shows that the file s3-test.py is already downloaded, but when I run ls in the interactive terminal I can't see it. So how can I access that file?
Looks like you are overriding the container's folder with your empty folder when you run -v $PWD:/var/lib/docker/volumes/myvol/_data.
Try to simply copy the files from container to host fs by running:
docker cp \
<containerId>:/Users/ezzeldin/s3docker-test/s3-test.py \
/host/path/target/s3-test.py
You can run this command even on a stopped container. But first you will have to run the container without the folder override:
docker run -d --name devtest3 ubuntu
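As an alternative sketch (an assumption on my part, not part of the original answer): bind-mount the host directory over the image's WORKDIR, so the aws s3 cp in CMD writes straight to the host. Here s3docker-test-image is a placeholder for however you tagged the build:
docker run --rm \
  -v "$PWD":/Users/ezzeldin/s3docker-test \
  s3docker-test-image
# the download now lands in the mounted host directory
ls s3-test.py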

Deploying a Geodjango Application on AWS Elastic Beanstalk

I'm trying to deploy a geodjango application on AWS Elastic Beanstalk. The configuration is 64bit Amazon Linux 2017.09 v2.6.6 running Python 3.6. I am getting this error when trying to deploy:
Error: Package: gdal-java-1.9.2-8.rhel6.x86_64 (pgdg93)
       Requires: libpoppler.so.5()(64bit)
How do I install the required package? I read through Setting up Django with GeoDjango Support in AWS Beanstalk or EC2 Instance, but I am still having problems. My .ebextensions config currently looks like:
commands:
  01_yum_update:
    command: sudo yum -y update
  02_epel_repo:
    command: sudo yum-config-manager -y --enable epel
  03_install_gdal_packages:
    command: sudo yum -y install gdal gdal-devel
packages:
  yum:
    git: []
    postgresql95-devel: []
    gettext: []
    libjpeg-turbo-devel: []
    libffi-devel: []
I'm going to answer my own question for the sake of my future projects and anyone else trying to get started with GeoDjango. (Updated as of July 2020.)
Create an .ebextensions config file to install GDAL on the EC2 instance at deployment:
01_gdal.config
commands:
  01_install_gdal:
    test: "[ ! -d /usr/local/gdal ]"
    command: "/tmp/gdal_install.sh"
files:
  "/tmp/gdal_install.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      sudo yum-config-manager --enable epel
      sudo yum -y install make automake gcc gcc-c++ libcurl-devel proj-devel geos-devel
      # Geos
      cd /
      sudo mkdir -p /usr/local/geos
      cd usr/local/geos
      sudo wget -O geos-3.7.2.tar.bz2 http://download.osgeo.org/geos/geos-3.7.2.tar.bz2
      sudo tar -xvf geos-3.7.2.tar.bz2
      cd geos-3.7.2
      sudo ./configure
      sudo make
      sudo make install
      sudo ldconfig
      # Proj4
      cd /
      sudo mkdir -p /usr/local/proj
      cd usr/local/proj
      sudo wget -O proj-5.2.0.tar.gz http://download.osgeo.org/proj/proj-5.2.0.tar.gz
      sudo wget -O proj-datumgrid-1.8.tar.gz http://download.osgeo.org/proj/proj-datumgrid-1.8.tar.gz
      sudo tar xvf proj-5.2.0.tar.gz
      sudo tar xvf proj-datumgrid-1.8.tar.gz
      cd proj-5.2.0
      sudo ./configure
      sudo make
      sudo make install
      sudo ldconfig
      # GDAL
      cd /
      sudo mkdir -p /usr/local/gdal
      cd usr/local/gdal
      sudo wget -O gdal-2.4.4.tar.gz http://download.osgeo.org/gdal/2.4.4/gdal-2.4.4.tar.gz
      sudo tar xvf gdal-2.4.4.tar.gz
      cd gdal-2.4.4
      sudo ./configure
      sudo make
      sudo make install
      sudo ldconfig
As shown, the script first checks whether GDAL already exists using the test option. It then downloads the GEOS, Proj, and GDAL sources and installs them under /usr/local. At the time of writing, GeoDjango (Django 3.0) supports up to GEOS 3.7, Proj 5.2 (which also requires the proj-datumgrid files; current Proj releases no longer need them), and GDAL 2.4. Warning: this installation process can take a long time. Also, I am not a Linux professional, so some of those commands may be redundant, but it works.
Lastly, I add the following two environment variables to my Elastic Beanstalk configuration:
LD_LIBRARY_PATH: /usr/local/lib:$LD_LIBRARY_PATH
PROJ_LIB: /usr/local/proj
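If you would rather keep these in version control than set them in the console, the standard .ebextensions namespace for environment properties can carry them (a sketch; note that EB environment properties are plain strings, so a shell-style $LD_LIBRARY_PATH reference would not be expanded):
option_settings:
  aws:elasticbeanstalk:application:environment:
    LD_LIBRARY_PATH: /usr/local/lib
    PROJ_LIB: /usr/local/proj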
If you still have trouble, I recommend checking the logs and ssh-ing into the EC2 instance to verify that the installation took place. Original credit to this post.