We are using Alpine for the Docker container that runs our Watir tests. I would like to use a newer version, such as chromium 75.0.3770.8 and chromium-chromedriver 75.0.3770.8, but the latest version packaged for Alpine is 73.
I have searched and searched and cannot find what I need.
My last attempt is below; it still uses chrome=73.0.3683.103 and chromedriver=73.0.3683.103 when you run the tests in the container.
FROM ruby:alpine3.10
LABEL maintainer=""
RUN apk add --update alpine-sdk
RUN echo #edge http://nl.alpinelinux.org/alpine/edge/main >> /etc/apk/repositories
RUN apk add --no-cache \
chromium>75.0.3770.8 \
chromium-chromedriver>75.0.3770.8
RUN echo "gem: --no-ri --no-rdoc" > ~/.gemrc \
&& gem install bundler
ADD ./Gemfile /Gemfile
ADD ./Gemfile.lock /Gemfile.lock
RUN bundle install
ADD ./spec /spec
CMD ["rspec"]
Has anyone tried to do the same, and how did you accomplish it?
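Two details in the Dockerfile above keep it pinned to 73: the # in the echo line starts a shell comment, so nothing is actually appended to /etc/apk/repositories, and the unquoted > in the apk line is taken by the shell as output redirection rather than a version constraint. A minimal sketch of a fix, assuming the 75.x builds have actually landed in the edge branch (untested):
FROM ruby:alpine3.10
# append the edge repositories; no leading '#', and quote the URLs
RUN echo "http://nl.alpinelinux.org/alpine/edge/main" >> /etc/apk/repositories \
&& echo "http://nl.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories
# quote the constraint so the shell doesn't treat '>' as a redirect
RUN apk add --no-cache "chromium>75" "chromium-chromedriver>75"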
Related
I have an Alpine-based image where I would like to install the unified CloudWatch agent from the packages listed in this DownloadLinks page. I chose https://s3.amazonaws.com/amazoncloudwatch-agent/amazon_linux/amd64/latest/amazon-cloudwatch-agent.rpm as my package. Inside my Dockerfile I download and install the package like this:
RUN curl -sS -o /tmp/amazon-cloudwatch-agent.rpm https://s3.amazonaws.com/amazoncloudwatch-agent/amazon_linux/amd64/latest/amazon-cloudwatch-agent.rpm && \
apk add --no-cache /tmp/amazon-cloudwatch-agent.rpm
The above line downloads the rpm into /tmp, but as soon as it tries to run apk, it throws the error below.
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/community/x86_64/APKINDEX.tar.gz
ERROR: unsatisfiable constraints:
/tmp/amazon-cloudwatch-agent.rpm (missing):
required by:
world[/tmp/amazon-cloudwatch-agent.rpm]
So, what is the best way to install an rpm in Alpine, or should I choose a different package?
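apk cannot install rpm packages at all; an rpm is only meaningful to rpm-based distributions, which is why apk treats the path as an unknown package name. One workaround sketch, assuming the rpm and cpio packages are available in your configured Alpine repositories, is to extract the rpm's contents rather than install it. Note the caveat that the agent's binaries are built against glibc, so they may still not run on musl-based Alpine without a compatibility layer:
RUN apk add --no-cache curl rpm cpio \
&& curl -sS -o /tmp/amazon-cloudwatch-agent.rpm https://s3.amazonaws.com/amazoncloudwatch-agent/amazon_linux/amd64/latest/amazon-cloudwatch-agent.rpm \
&& cd / \
&& rpm2cpio /tmp/amazon-cloudwatch-agent.rpm | cpio -idm \
&& rm /tmp/amazon-cloudwatch-agent.rpm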
FROM node:12-alpine
RUN mkdir /project-api
WORKDIR /project-api
RUN apk add --update-cache python
ENV PYTHON=/usr/local/bin/
COPY ./package.json .
RUN npm cache clean --force
RUN rm -rf ~/.npm
RUN rm -rf node_modules
RUN rm -f package-lock.json
RUN npm install
EXPOSE 3000
I was trying to create a Node container for my project, but npm install throws an error on the bcrypt package. I tried installing Python in the image file, but it still shows the error. I'm attaching the error screen.
The bcrypt npm package depends on non-JavaScript code. This means it needs to be built for the specific architecture it's being run on. The initial "WARNING: Tried to download" indicates a pre-built artifact wasn't available, so it's falling back to building from source.
The specific error I see is Error: not found: make, which indicates make isn't installed on the image you're building on (node:12-alpine). Either install it in a prior step in your Dockerfile, or switch to a base image that has it pre-installed (node:12 might).
The bcrypt package has more specific instructions at https://github.com/kelektiv/node.bcrypt.js/wiki/Installation-Instructions#alpine-linux-based-images.
You need the following packages:
build-base
python
apk --no-cache add --virtual builds-deps build-base python
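Folding that into the Dockerfile from the question, a sketch might look like the following; builds-deps is just the name the wiki gives the virtual package so the toolchain can be removed once the native build is done:
FROM node:12-alpine
WORKDIR /project-api
COPY ./package.json .
# install the build toolchain bcrypt needs, compile, then drop the toolchain
RUN apk --no-cache add --virtual builds-deps build-base python \
&& npm install \
&& apk del builds-deps
EXPOSE 3000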
I am trying to build a docker image using Visual Studio Code following this tutorial "https://code.visualstudio.com/docs/python/tutorial-deploy-containers".
I created a Django app with a connection to an MS SQL Server on Azure, using the pyodbc package.
During the build of the Docker image I receive the following error messages:
unable to execute 'gcc': No such file or directory
error: command 'gcc' failed with exit status 1
----------------------------------------
Failed building wheel for pyodbc
and
unable to execute 'gcc': No such file or directory
error: command 'gcc' failed with exit status 1
----------------------------------------
Failed building wheel for typed-ast
I read solutions for Linux systems where one should install python-dev, but since I am working on a Windows machine, that is not a solution for me.
Then I read that on Windows all the needed files are in the 'include' directory of the Python installation. But in a venv installation this directory is empty... so I created a directory junction to the original 'include'. The error still exists.
My docker file is included below.
# Python support can be specified down to the minor or micro version
# (e.g. 3.6 or 3.6.3).
# OS Support also exists for jessie & stretch (slim and full).
# See https://hub.docker.com/r/library/python/ for all supported Python
# tags from Docker Hub.
FROM tiangolo/uwsgi-nginx:python3.6-alpine3.7
# Indicate where uwsgi.ini lives
ENV UWSGI_INI uwsgi.ini
# Tell nginx where static files live (as typically collected using Django's
# collectstatic command).
ENV STATIC_URL /app/static_collected
# Copy the app files to a folder and run it from there
WORKDIR /app
ADD . /app
# Make app folder writable for the sake of db.sqlite3, and make that file also writable.
# RUN chmod g+w /app
# RUN chmod g+w /app/db.sqlite3
# If you prefer miniconda:
#FROM continuumio/miniconda3
LABEL Name=hello_django Version=0.0.1
EXPOSE 8000
# Using pip:
RUN python3 -m pip install -r requirements.txt
CMD ["python3", "-m", "hello_django"]
# Using pipenv:
#RUN python3 -m pip install pipenv
#RUN pipenv install --ignore-pipfile
#CMD ["pipenv", "run", "python3", "-m", "hello_django"]
# Using miniconda (make sure to replace 'myenv' w/ your environment name):
#RUN conda env create -f environment.yml
#CMD /bin/bash -c "source activate myenv && python3 -m hello_django"
I could use some help in building the image without the errors.
Based on the answer of 2ps I added these lines almost at the top of the Dockerfile:
FROM tiangolo/uwsgi-nginx:python3.6-alpine3.7
RUN apk update \
&& apk add apk add gcc libc-dev g++ \
&& apk add libffi-dev libxml2 libffi-dev \
&& apk add unixodbc-dev mariadb-dev python3-dev
and received a new error...
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
v3.7.1-98-g2f2e944c59 [http://dl-cdn.alpinelinux.org/alpine/v3.7/main]
v3.7.1-105-g7db92f4321 [http://dl-cdn.alpinelinux.org/alpine/v3.7/community]
OK: 9053 distinct packages available
ERROR: unsatisfiable constraints:
add (missing):
required by: world[add]
apk (missing):
required by: world[apk]
The command '/bin/sh -c apk update && apk add apk add gcc libc-dev g++ && apk add libffi-dev libxml2 libffi-dev && apk add unixodbc-dev mariadb-dev python3-dev' returned a non-zero code: 2
Found out that adding
RUN echo "ipv6" >> /etc/modules
helped with the errors above. Taken from: https://github.com/gliderlabs/docker-alpine/issues/55
The app now works, except that the intended connection to the MS SQL database still does not work.
Error at /
('01000', "[01000] [unixODBC][Driver Manager]Can't open lib 'ODBC Driver 13 for SQL Server' : file not found (0) (SQLDriverConnect)")
I think I should get my hands dirty with some Docker documentation.
I gave up on the Alpine solution and switched to Debian:
FROM python:3.7
# needed files for pyodbc
RUN apt-get update
RUN apt-get install gcc libc-dev g++ libffi-dev libxml2 libffi-dev unixodbc-dev -y
# MS SQL driver 17 for debian
RUN apt-get install -y apt-transport-https \
&& curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - \
&& curl https://packages.microsoft.com/config/debian/9/prod.list > /etc/apt/sources.list.d/mssql-release.list \
&& apt-get update \
&& ACCEPT_EULA=Y apt-get install -y msodbcsql17
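To confirm the driver name before wiring it into the Django settings, unixODBC's odbcinst can list what is registered (a quick sanity check, assuming odbcinst is on the PATH). msodbcsql17 registers itself as "ODBC Driver 17 for SQL Server", which is the name the connection string must use in place of driver 13:
RUN odbcinst -q -d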
You'll need to use apk to install gcc and other native dependencies needed to build your pip dependencies. For the ones that you listed (typed-ast and pyodbc), I think they would be:
RUN apk update \
&& apk add gcc libc-dev g++ \
&& apk add libffi-dev libxml2 \
&& apk add unixodbc-dev mariadb-dev python3-dev
I'm trying to build a Docker container that has much of the tooling required for an Aurelia-based javascript project. I have Docker community edition version 17.03.1-ce-mac5 (16048) and the following Dockerfile:
FROM ubuntu:yakkety
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update -q && apt-get install -qy \
apt-utils \
git \
chromium-browser \
xvfb \
nodejs \
npm
RUN ln -s /usr/bin/nodejs /usr/bin/node
RUN npm install -g aurelia-cli
WORKDIR /workdir
EXPOSE 9000
I run the command docker build -t maldrake/aurelia:v1 . to build the container, followed by
docker run --rm -it -P -v `pwd`:/workdir maldrake/aurelia:v1 /bin/bash
to shell into the docker container.
At that point, when I run au, I expect to see the usual output of the Aurelia CLI tool when run with no arguments: a list of the available commands and their functions. Instead, I'm getting the following output:
root@11f3d17edfd5:/workdir# au
/usr/local/lib/node_modules/aurelia-cli/lib/resolve/index.js:1
(function (exports, require, module, __filename, __dirname) { let core = require('./lib/core');
^^^
SyntaxError: Block-scoped declarations (let, const, function, class) not yet supported outside strict mode
at exports.runInThisContext (vm.js:53:16)
at Module._compile (module.js:374:25)
at Object.Module._extensions..js (module.js:417:10)
at Module.load (module.js:344:32)
at Function.Module._load (module.js:301:12)
at Module.require (module.js:354:17)
at require (internal/module.js:12:17)
at Object.<anonymous> (/usr/local/lib/node_modules/aurelia-cli/bin/aurelia-cli.js:3:17)
at Module._compile (module.js:410:26)
at Object.Module._extensions..js (module.js:417:10)
root@11f3d17edfd5:/workdir#
The JavaScript error itself is easy enough to understand, and if I go look at that index.js file, it's true that there's no 'use strict' at the top. I can add it to that individual file, which gets the Aurelia CLI running far enough to create a new project, but when I try to use other CLI commands, like au test, I run into the same error with other files that don't have 'use strict' at the top. How should I configure my environment differently so that the CLI just works upon installation?
Node and npm versions are as follows:
root@11f3d17edfd5:/workdir# node -v
v4.2.6
root@11f3d17edfd5:/workdir# npm -v
3.5.2
I've searched via Stack Overflow and, more generally, with Google and haven't found a description of the same problem. That may well mean that I'm missing something facepalmingly obvious.
I opened this question using Aurelia CLI version 0.27.0. The issue was fixed in the just-released version 0.28.0.
(edit) Well, partly fixed. I still wasn't able to get a build command to execute without a similar error. However, it seems that the underlying problem is an incompatibility with node 4.x. So, changing the build to pull from the node PPA (version 6.10.x) solved the problem in its entirety:
FROM ubuntu:yakkety
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update -q && apt-get install -qy \
apt-utils \
chromium-browser \
curl \
git \
xvfb
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash -
RUN apt-get install -qy \
nodejs
RUN npm install -g aurelia-cli
WORKDIR /workdir
EXPOSE 9000
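A quick way to verify the fix inside a container built from this image (a sketch of the expected behaviour, not captured output):
node -v   # should now report a 6.10.x version, per the NodeSource setup above
au        # should print the CLI command list instead of the SyntaxError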
I'm using azk and my system depends on extra packages. I'd be able to install them with the following (since I'm using an Ubuntu-based image):
apt-get -yq update && apt-get install -y libqtwebkit-dev qt4-qmake
Can I add these steps to provision? In the Azkfile.js, it would look like:
// ...
provision: [
"apt-get -yq update",
"apt-get install -y libqtwebkit-dev qt4-qmake",
"bundle install --path /azk/bundler",
"bundle exec rake db:create",
"bundle exec rake db:migrate",
]
Or is it better to create a new Docker image?
Provision steps are run in a separate container, so any data generated inside it is lost after the provision step unless you persist it. That's why you probably have bundle folders as persistent folders.
Given that, you should use a Dockerfile in this case. It'll look like this:
# or the image you were using previously
FROM azukiapp/ruby:2.2.2
RUN apt-get -yq update && \
apt-get install -y libqtwebkit-dev qt4-qmake && \
apt-get clean -qq && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* # Keeping the image as small as possible
After that, you should edit your Azkfile.js and replace the image property of your main system to use the created Dockerfile (you can check azk docs here):
image: { dockerfile: './PATH_TO_DOCKERFILE' },
Finally, when you run azk start, azk will build this Dockerfile and use it with all your dependencies installed.
Tip: If you want to force azk to rebuild your Dockerfile, just pass the -B flag to azk start.
As it looks like you're using a Debian-based Linux distribution, you could create your own Debian virtual package (see https://wiki.debian.org/Packaging and https://www.debian.org/doc/manuals/debian-faq/ch-pkg_basics.en.html#s-virtual) that lists all the packages it depends on. If you do just that one thing, you can dpkg -i (or apt-get install, if you host a custom Debian repository yourself) your custom package and it will install all the dependencies you need via apt.
You can then move on to learning about postinst and prerm scripts in Debian packages (https://www.debian.org/doc/manuals/debian-faq/ch-pkg_basics.en.html#s-maintscripts). This will allow you to run commands like bundle and gem as the last step of the package installation and the first step of package removal.
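For illustration, the equivs tool is the usual shortcut for building such a dependency-only package without maintaining a full debian/ source tree. This is a hypothetical sketch; the package name matches the example in the list below and the dependency list comes from the question:
apt-get install -y equivs
cat > custom-deps.cfg <<'EOF'
Section: misc
Priority: optional
Standards-Version: 3.9.2
Package: custom-dependencies-diego
Version: 1.0
Depends: libqtwebkit-dev, qt4-qmake
Description: meta-package pulling in the project's build dependencies
EOF
equivs-build custom-deps.cfg
# dpkg -i alone does not resolve dependencies; apt-get -f finishes the job
dpkg -i custom-dependencies-diego_1.0_all.deb || apt-get install -fy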
There are a few advantages to doing it this way:
1. If you host a package repository somewhere, you can use a pull method of dependency installation in a dynamic scaling environment by simply having the host run apt-get update && apt-get install custom-dependencies-diego
2. Versioning your dependency list - Using dpkg -l you can tell what version everything is on a given host, including the version of your dependency virtual package.
3. With prerm scripts, you can ensure that removing your virtual package also removes the changes your installation scripts made, so you can get a host back to a "clean" state.
The disadvantage of doing it this way is that it's Debian/apt-specific. If you wanted to deploy to Slackware or RHEL you'd have to change things a bit. Changing to a new distro wouldn't be particularly hard, but it's definitely not as portable as using Bash, for example.