FROM node:12-alpine
RUN mkdir /project-api
WORKDIR /project-api
RUN apk add --update-cache python
ENV PYTHON=/usr/local/bin/
COPY ./package.json .
RUN npm cache clean --force
RUN rm -rf ~/.npm
RUN rm -rf node_modules
RUN rm -f package-lock.json
RUN npm install
EXPOSE 3000
I was trying to create a Node container for my project, but it throws an error during npm install (the bcrypt package). I tried installing Python in the image file, but it still shows the error. I'm attaching the error screenshot.
The bcrypt npm package depends on non-JavaScript code. This means it needs to be built for the specific architecture it's being run on. The initial "WARNING: Tried to download" indicates a pre-built artifact wasn't available, so it falls back to building from source.
The specific error I see is Error: not found: make, which indicates make isn't installed on the image you're building on (node:12-alpine). Either install it in a prior step in your Dockerfile, or switch to a base image that has it pre-installed (node:12 might).
The bcrypt package has more specific instructions at https://github.com/kelektiv/node.bcrypt.js/wiki/Installation-Instructions#alpine-linux-based-images.
You need the following packages:
build-base
python
apk --no-cache add --virtual builds-deps build-base python
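Applied to the Dockerfile in the question, that means installing those packages in a step before npm install. A minimal sketch (the cache-clean and rm steps from the question are omitted since a fresh image doesn't need them, and ENV PYTHON is usually unnecessary once apk has put python on the PATH):
FROM node:12-alpine
WORKDIR /project-api
# Toolchain needed to compile bcrypt's native addon (build-base provides make and g++)
RUN apk --no-cache add --virtual builds-deps build-base python
COPY ./package.json .
RUN npm install
# Optional: drop the build toolchain once the native modules are compiled
# RUN apk del builds-deps
EXPOSE 3000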
I am writing an AWS Lambda function in .NET Core 3.1. I am using the Aspose.Slides library in the AWS Lambda function. I am publishing the AWS Lambda function as a Docker image on AWS. The Lambda function gets published successfully, but when I test the Lambda it gives me the following error:
Aspose.Slides.PptxReadException: The type initializer for 'Gdip' threw an exception.
---> System.TypeInitializationException: The type initializer for 'Gdip' threw an exception.
---> System.DllNotFoundException: Unable to load shared library 'libgdiplus' or one of its dependencies. In order to help diagnose loading problems, consider setting the LD_DEBUG environment variable: liblibgdiplus: cannot open shared object file: No such file or directory
at System.Drawing.SafeNativeMethods.Gdip.GdiplusStartup(IntPtr& token, StartupInput& input, StartupOutput& output)
at System.Drawing.SafeNativeMethods.Gdip..cctor()
Even though I am installing the libgdiplus package from the Dockerfile, I am still getting the above error.
The Dockerfile is:
FROM public.ecr.aws/lambda/dotnet:core3.1 AS base
FROM mcr.microsoft.com/dotnet/sdk:3.1 as build
WORKDIR /src
COPY ["Lambda.PowerPointProcessor.csproj", "base/"]
RUN dotnet restore "base/Lambda.PowerPointProcessor.csproj"
WORKDIR "/src"
COPY . .
RUN apt-get update && apt-get install -y libc6-dev
RUN apt-get update && apt-get install -y libgdiplus
RUN dotnet build "Lambda.PowerPointProcessor.csproj" --configuration Release --output /app/build
FROM build AS publish
RUN dotnet publish "Lambda.PowerPointProcessor.csproj" \
--configuration Release \
--runtime linux-x64 \
--self-contained false \
--output /app/publish \
-p:PublishReadyToRun=true
FROM base AS final
WORKDIR /var/task
COPY --from=publish /app/publish .
CMD ["Lambda.PowerPointProcessor::Lambda.PowerPointProcessor.Function::FunctionHandler"]
Any help would be much appreciated.
FROM public.ecr.aws/lambda/dotnet:core3.1
WORKDIR /var/task
COPY "bin/Release/netcoreapp3.1/linux-x64" .
RUN yum install -y amazon-linux-extras
RUN amazon-linux-extras install epel -y
RUN yum install -y libgdiplus
CMD ["Lambda.PowerPointProcessor::Lambda.PowerPointProcessor.Function::FunctionHandler"]
This Dockerfile resolved the issue for me; the function is working fine now. The key difference is that libgdiplus is installed into the image that Lambda actually runs (the one based on public.ecr.aws/lambda/dotnet), rather than only into the SDK build stage.
I've built a Node.js project, which I need to run on a custom Docker image.
This is my Dockerfile:
FROM public.ecr.aws/lambda/nodejs:14-x86_64
# Create app directory
WORKDIR /usr/src/
RUN yum update && yum install -y git openssh-client vim python py-pip pip jq
RUN yum update && yum install -y automake autoconf libtool dpkg pkgconfig nasm libpng cmake
RUN pip install awscli
# RUN apk --purge -v del py-pip
# RUN rm /var/cache/apk/*
RUN npm install -g yarn
RUN yarn install --frozen-lockfile
# Bundle app source
COPY . .
RUN yarn build
ENTRYPOINT ["npx", "aws-lambda-ric"]
CMD [ "src/executionHandler.runner" ]
But when I call docker run <imagename>
I get the following errors:
tar: curl-7.78.0/tests/data/test1131: Cannot open: No such file or directory
tar: curl-7.78.0: Cannot mkdir: Permission denied
tar: curl-7.78.0/tests/data/test971: Cannot open: No such file or directory
tar: curl-7.78.0: Cannot mkdir: Permission denied
tar: Exiting with failure status due to previous errors
./scripts/preinstall.sh: line 28: cd: curl-7.78.0: No such file or directory
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! aws-lambda-ric@2.0.0 preinstall: `./scripts/preinstall.sh`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the aws-lambda-ric@2.0.0 preinstall script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2021-10-13T09_11_31_626Z-debug.log
Install for [ 'aws-lambda-ric@latest' ] failed with code 1
The base image I use was taken from the official AWS images repository.
How can I resolve this permissions issue?
This isn't really a permissions issue.
According to https://gallery.ecr.aws/lambda/nodejs, the base image you're using (public.ecr.aws/lambda/nodejs) should come with the entire runtime pre-installed. I think the issue is that your entrypoint uses npx, which is a tool for running local npm packages, while the base image only has the package installed globally. If npx can't find the package in the local package.json, it tries to install it. That is both unnecessary, since it's already installed globally, and not possible in the stripped-down public image into which you have installed some of the prerequisites like cmake, autoconf, etc., but not libcurl.
I suspect
ENTRYPOINT ["aws-lambda-ric"]
without npx and without all the extraneous development packages will work fine with this image.
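A minimal sketch of what that trimmed Dockerfile might look like (assumptions: the globally installed runtime interface client is on the PATH as suggested above, and COPY . . is moved before yarn install so that package.json and yarn.lock exist when dependencies are installed):
FROM public.ecr.aws/lambda/nodejs:14-x86_64
WORKDIR /usr/src/
# Bundle app source first so yarn can see package.json and yarn.lock
COPY . .
RUN npm install -g yarn && yarn install --frozen-lockfile && yarn build
# Rely on the runtime interface client shipped with the base image
ENTRYPOINT ["aws-lambda-ric"]
CMD [ "src/executionHandler.runner" ]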
I am trying to set up CI for my cmake/C++ project hosted on a private repository on GitHub.
The project depends on lots of third-party libraries that should be git-cloned and built. The latter takes a while, hence I created a Docker image with all dependencies installed and hosted it on Docker Hub. (Ideally, I would like the Docker image to be private also, but if it is not possible, I can make it public.)
I want to achieve the following:
On pull requests to the master branch, the application is automatically built in the Docker container (because all dependencies are there), all unit tests (gtest) are run and, if everything is alright, the branch is merged to master.
Ideally, I would like to see the logs and statistics generated by gcovr/lcov.
OS: Ubuntu 18.04
I wonder if this is even achievable, as I have been searching for 2 days with no luck and a billion possible readings.
My 2 cents (more a comment) on controlled builds using Docker.
As for automatic merging, I don't know; I would be against it, since code review can't be replaced by CI alone, IMHO...
Take a look at https://github.com/Mizux/cmake-cpp
Introduction
I use a Makefile for orchestration (docker commands can be way too long ;)) and Docker for isolated builds on various distros (a minimal sketch of such a Makefile is shown at the end of this introduction).
pro:
Be able to test locally (just need a GNU/Linux distro with Docker & Make)
Can migrate easily to various CI runner providers (Travis CI, GitHub workflows, gitlab-runner, Bitbucket?)
Contributors can test locally before sending a PR
cons:
Less coupled to GitHub -> more complex to maintain.
More difficult to have a cache between workflow runs.
note: the Dockerfiles are stored in the repository in ci/docker, i.e. I rebuild the images in the first steps, but you should be able to replace this step with a simple docker load if your image is located on Docker Hub (not tested)
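As a rough illustration of that Makefile orchestration, here is a minimal sketch (the target names are hypothetical; the actual ci/Makefile in the linked repository is more elaborate):
# Minimal orchestration sketch: one phony target per Docker build stage
.PHONY: env devel build
env:
	docker build --target=env --tag project:env -f ci/docker/ubuntu/Dockerfile .
devel:
	docker build --target=devel --tag project:devel -f ci/docker/ubuntu/Dockerfile .
build:
	docker build --target=build --tag project:build -f ci/docker/ubuntu/Dockerfile .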
Setup
Dockerfile
I split my Dockerfile in several stages (mostly for debug).
note: you can replace ubuntu:rolling with your own image...
ci/docker/ubuntu/Dockerfile:
# Create a virtual environment with all tools installed
# ref: https://hub.docker.com/_/ubuntu
FROM ubuntu:rolling AS env
# Install system build dependencies
# note: here we use the CMake package provided by Ubuntu
# see: https://repology.org/project/cmake/versions
ENV PATH=/usr/local/bin:$PATH
RUN apt-get update -q && \
apt-get install -yq git build-essential cmake && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
CMD [ "/bin/sh" ]
# Add the library src to our build env
FROM env AS devel
# Create lib directory
WORKDIR /home/lib
# Bundle lib source
COPY . .
# Build in an other stage
FROM devel AS build
# CMake configure
RUN cmake -H. -Bbuild
# CMake build
RUN cmake --build build --target all
# CMake install
RUN cmake --build build --target install
# Create an install image to check cmake install config
FROM env AS install
# Copy lib from build to install
COPY --from=build /usr/local /usr/local/
# Copy sample
WORKDIR /home/sample
COPY ci/sample .
Runner jobs
GitHub Actions runners have Docker installed.
note: you can have one badge per yml file, e.g. you could use one file per distro to get one badge per distro, or one file for Release and one file for Debug...
.github/workflows/docker.yml:
name: C++ CI
on: [push, pull_request]
jobs:
  build-docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build env image
        run: docker build --target=env --tag project:env -f ci/docker/ubuntu/Dockerfile .
      - name: Build devel image
        run: docker build --target=devel --tag project:devel -f ci/docker/ubuntu/Dockerfile .
      - name: Build build image
        run: docker build --target=build --tag project:build -f ci/docker/ubuntu/Dockerfile .
For testing you can add another stage or run the tests using the project:build image:
docker run --rm --init -t --name test project:build cmake --build build --target test
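A rough sketch of what such an extra stage could look like (it assumes the project registers its gtest binaries with enable_testing()/add_test, so that the CMake test target exists):
# Run the unit tests in a dedicated stage
FROM build AS test
RUN cmake --build build --target test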
Annexes
Faster build
You can add a .dockerignore file to exclude files unneeded by the build (e.g. licence files, doc, the local build dir if testing locally...) to reduce the Docker context and the size of the COPY . . layer.
.dockerignore:
# Project Files unneeded by docker
ci/cache
ci/docker
ci/Makefile
.git
.gitignore
.github
.dockerignore
.travis.yml
.appveyor.yml
.clang-format
AUTHORS
CONTRIBUTING.md
CONTRIBUTHORS
INSTALL
LICENSE
README.md
doc
# Native CMake build
build/
# Editor directories and files
*.user
*.swp
Custom CMake version install
You can use the following instead of apt-get install -y cmake.
It can take time since you rebuild CMake from source...
# Install CMake 3.16.4
RUN wget "https://cmake.org/files/v3.16/cmake-3.16.4.tar.gz" \
&& tar xzf cmake-3.16.4.tar.gz \
&& rm cmake-3.16.4.tar.gz \
&& cd cmake-3.16.4 \
&& ./bootstrap --prefix=/usr/local/ \
&& make \
&& make install \
&& cd .. \
&& rm -rf cmake-3.16.4
so you can use the prebuilt version instead, using:
# Install CMake 3.16.4
RUN wget "https://cmake.org/files/v3.16/cmake-3.16.4-Linux-x86_64.sh" \
&& chmod a+x cmake-3.16.4-Linux-x86_64.sh \
&& ./cmake-3.16.4-Linux-x86_64.sh --prefix=/usr/local/ --skip-license \
&& rm cmake-3.16.4-Linux-x86_64.sh
I was able to run the C++ program and build & test it using GitLab CI with the help of the gcc Docker image. But now I want to compile the program in Docker using cmake instead of g++. How do I change the '.gitlab-ci.yml' file to support cmake?
Current file: .gitlab-ci.yml
image: gcc
before_script:
  - apt-get install --yes cmake libmatio-dev libblas-dev libsqlite3-dev libcurl4-openssl-dev
  - apt-get install --yes libarchive-dev liblzma-dev
build:
  script:
    - ./runner.sh
    - ./bin/hello
./runner.sh
cmake -H. -Bbuild
cmake --build build -- -j3
I think you need to add apt-get update in order to get cmake to install:
image: gcc
before_script:
  - apt-get update --yes
  - apt-get install --yes cmake
build:
  script:
    - ./runner.sh
    - ./bin/hello
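If you still need the other development packages from your original before_script, the same fix applies; just run the update first and keep the full install list (a sketch reusing the packages from the question):
before_script:
  - apt-get update --yes
  - apt-get install --yes cmake libmatio-dev libblas-dev libsqlite3-dev libcurl4-openssl-dev
  - apt-get install --yes libarchive-dev liblzma-dev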
In general, you can figure stuff out by jumping into the docker image to debug (in your case the image is the debian-based gcc:latest):
sudo docker run -it --rm gcc
If you had run your original apt-get install command inside the gcc container, you would have seen the following error message, which you could then have googled to figure out that apt-get update was needed:
sudo docker run -it --rm gcc apt-get install --yes cmake
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package cmake is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Package 'cmake' has no installation candidate
As this blog post mentions, you can do a test run locally by downloading the gitlab-runner executable:
gitlab-runner exec docker build
Running the gitlab-runner locally will have GitLab clone your repo and run through all the steps in the .gitlab-ci.yml, and you can see the output and debug locally rather quickly.
I have a server and want to deploy my Yesod applications without installing GHC and Cabal there. I am not sure if this is possible: a teacher told me that I must first compile Keter on my machine and, after that, put the keter executable on the server, though I am not sure how to do that.
To build Keter, first you'll need to clone the sources from its GitHub repository. Then you'll need to set up a Haskell build environment and use cabal build or cabal install to build the sources. Personally, I use a Docker container derived from an image based on the following Dockerfile:
FROM haskell:7.10.2
RUN apt-get update && apt-get install -y \
git
RUN mkdir /src
RUN cd src && \
git clone https://github.com/snoyberg/keter && \
cd keter && \
git checkout e8b5a3fd5e14dfca466f8acff2a02f0415fceeb0
WORKDIR /src/keter
RUN cabal update
RUN cabal install keter
ENTRYPOINT /bin/bash
This is an image containing the Keter sources checked out at a specific revision with the minimum GHC toolchain required to build it all. The cabal command lines pull down all the project's dependencies and compile the whole thing. Once this has completed, you can grab the keter executable from ~/.cabal/bin/keter.
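For example (the image and container names below are made up for illustration), you could build the image and copy the compiled binary out of a container like this:
# Build the image from the Dockerfile above, create a container from it,
# and copy the keter binary to the host
docker build -t keter-build .
docker create --name keter-tmp keter-build
docker cp keter-tmp:/root/.cabal/bin/keter ./keter
docker rm keter-tmp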
Even if you choose not to use Docker, this file should give you a rough idea how to set up your environment.
Now that you have Keter compiled, you can run it inside another Docker container. Here's a rough idea what the Dockerfile for the corresponding image might look like:
FROM debian
RUN apt-get update && apt-get install -y \
libgmp-dev \
nano \
postgresql
COPY keter /opt/keter/bin/
COPY keter-config.yaml /opt/keter/etc/
EXPOSE 80
CMD ["/opt/keter/bin/keter", "/opt/keter/etc/keter-config.yaml"]
This will take a base Debian image and install a minimal set of packages on top of it. It then copies the keter executable and configuration file into the image. If you then run a container from the resulting image, it will start the keter executable.
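Building and running that image would then look roughly like this (the image name and port mapping are illustrative; the published port should match whatever your keter-config.yaml listens on):
# Build the runtime image (keter and keter-config.yaml must sit next to this Dockerfile)
docker build -t keter-runtime .
# Start Keter, publishing port 80 from the container
docker run -d -p 80:80 keter-runtime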
Fair warning: This whole process is fairly involved. I'm still working on tweaking the exact details myself. Good luck!