Error Running Aurelia CLI in a Docker Container - dockerfile

I'm trying to build a Docker image that has much of the tooling required for an Aurelia-based JavaScript project. I have Docker Community Edition version 17.03.1-ce-mac5 (16048) and the following Dockerfile:
FROM ubuntu:yakkety
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update -q && apt-get install -qy \
    apt-utils \
    git \
    chromium-browser \
    xvfb \
    nodejs \
    npm
RUN ln -s /usr/bin/nodejs /usr/bin/node
RUN npm install -g aurelia-cli
WORKDIR /workdir
EXPOSE 9000
I run the command docker build -t maldrake/aurelia:v1 . to build the image, followed by
docker run --rm -it -P -v `pwd`:/workdir maldrake/aurelia:v1 /bin/bash
to shell into the docker container.
At that point, when I run au, I expect to see the usual output of the Aurelia CLI tool when run with no arguments, a list of the available commands and their function. Instead, I'm getting the following output:
root@11f3d17edfd5:/workdir# au
/usr/local/lib/node_modules/aurelia-cli/lib/resolve/index.js:1
(function (exports, require, module, __filename, __dirname) { let core = require('./lib/core');
^^^
SyntaxError: Block-scoped declarations (let, const, function, class) not yet supported outside strict mode
at exports.runInThisContext (vm.js:53:16)
at Module._compile (module.js:374:25)
at Object.Module._extensions..js (module.js:417:10)
at Module.load (module.js:344:32)
at Function.Module._load (module.js:301:12)
at Module.require (module.js:354:17)
at require (internal/module.js:12:17)
at Object.<anonymous> (/usr/local/lib/node_modules/aurelia-cli/bin/aurelia-cli.js:3:17)
at Module._compile (module.js:410:26)
at Object.Module._extensions..js (module.js:417:10)
root@11f3d17edfd5:/workdir#
The JavaScript error itself is easy enough to understand, and if I go look at that index.js file, it's true that there's no 'use strict' at the top. I can add it to that individual file, which gets the Aurelia CLI running far enough to create a new project, but when I try other CLI commands, like au test, I run into the same error in other files that don't have 'use strict' at the top. How should I configure my environment differently so that the CLI just works upon installation?
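For reference, a minimal sketch of that per-file workaround (the path is the one from the stack trace above; it's only a stop-gap, since every file the CLI loads would need the same treatment):
# Prepend 'use strict'; to the file the stack trace points at (GNU sed, as shipped with Ubuntu)
sed -i "1s|^|'use strict';\n|" /usr/local/lib/node_modules/aurelia-cli/lib/resolve/index.js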
Node and npm versions are as follows:
root@11f3d17edfd5:/workdir# node -v
v4.2.6
root@11f3d17edfd5:/workdir# npm -v
3.5.2
I've searched via Stack Overflow and, more generally, with Google and haven't found a description of the same problem. That may well mean that I'm missing something facepalmingly obvious.

I opened this question using Aurelia CLI version 0.27.0. The issue was fixed in the just released version 0.28.0.
(edit) Well, partly fixed. I still wasn't able to get a build command to execute without a similar error. However, it seems that the underlying problem is an incompatibility with Node 4.x. So, changing the build to pull Node.js from the NodeSource repository (version 6.10.x) solved the problem in its entirety:
FROM ubuntu:yakkety
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update -q && apt-get install -qy \
    apt-utils \
    chromium-browser \
    curl \
    git \
    xvfb
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash -
RUN apt-get install -qy \
    nodejs
RUN npm install -g aurelia-cli
WORKDIR /workdir
EXPOSE 9000
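A quick sanity check after this change (a sketch; the tag mirrors the build command from the question, and the expected output is an assumption based on the NodeSource 6.x setup above):
docker build -t maldrake/aurelia:v1 .
docker run --rm maldrake/aurelia:v1 node -v   # should now report a 6.x version rather than v4.2.6
docker run --rm maldrake/aurelia:v1 au        # should print the CLI's command list instead of the SyntaxError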

Related

Unable to load shared library 'libgdiplus' or one of its dependencies while running lambda function

I am writing an AWS Lambda function in .NET Core 3.1 that uses the Aspose.Slides library. I am publishing the Lambda function as a Docker image on AWS. The function publishes successfully, but when I test it, I get the following error:
Aspose.Slides.PptxReadException: The type initializer for 'Gdip' threw an exception.
---> System.TypeInitializationException: The type initializer for 'Gdip' threw an exception.
---> System.DllNotFoundException: Unable to load shared library 'libgdiplus' or one of its dependencies. In order to help diagnose loading problems, consider setting the LD_DEBUG environment variable: liblibgdiplus: cannot open shared object file: No such file or directory
at System.Drawing.SafeNativeMethods.Gdip.GdiplusStartup(IntPtr& token, StartupInput& input, StartupOutput& output)
at System.Drawing.SafeNativeMethods.Gdip..cctor()
Even though I am installing the libgdiplus package in the Dockerfile, I am still getting the above error.
The Dockerfile is:
FROM public.ecr.aws/lambda/dotnet:core3.1 AS base
FROM mcr.microsoft.com/dotnet/sdk:3.1 as build
WORKDIR /src
COPY ["Lambda.PowerPointProcessor.csproj", "base/"]
RUN dotnet restore "base/Lambda.PowerPointProcessor.csproj"
WORKDIR "/src"
COPY . .
RUN apt-get update && apt-get install -y libc6-dev
RUN apt-get update && apt-get install -y libgdiplus
RUN dotnet build "Lambda.PowerPointProcessor.csproj" --configuration Release --output /app/build
FROM build AS publish
RUN dotnet publish "Lambda.PowerPointProcessor.csproj" \
    --configuration Release \
    --runtime linux-x64 \
    --self-contained false \
    --output /app/publish \
    -p:PublishReadyToRun=true
FROM base AS final
WORKDIR /var/task
COPY --from=publish /app/publish .
CMD ["Lambda.PowerPointProcessor::Lambda.PowerPointProcessor.Function::FunctionHandler"]
Any help would be much appreciated.
FROM public.ecr.aws/lambda/dotnet:core3.1
WORKDIR /var/task
COPY "bin/Release/netcoreapp3.1/linux-x64" .
RUN yum install -y amazon-linux-extras
RUN amazon-linux-extras install epel -y
RUN yum install -y libgdiplus
CMD ["Lambda.PowerPointProcessor::Lambda.PowerPointProcessor.Function::FunctionHandler"]
This Dockerfile resolved the issue for me. It works because libgdiplus is installed in the image that actually runs the function: in the original multi-stage build, libgdiplus was installed in the Debian-based build stage, but the final stage starts FROM the Lambda base image and only copies the published output, so the library never makes it into the runtime image.
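A quick way to confirm the native library actually ends up in the final image before pushing it (a sketch; the image tag is illustrative):
docker build -t lambda-pptx .
docker run --rm --entrypoint /bin/sh lambda-pptx -c "ldconfig -p | grep -i gdiplus"   # should list libgdiplus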

Deploying Rails 6.0 to AWS EB, Webpacker requires Node.js version error

I am attempting to deploy a Rails app, recently updated from Rails 5.2 to 6, to AWS Elastic Beanstalk. We had someone else working on this, but with the pandemic he had to step away, so our site is in limbo and I have not been able to update it. I have searched many different variations of my problem, but no solutions have worked yet.
The app was working on EB with Rails 5.2, and I have it running in 6.0 locally. When I run eb deploy, I get this error:
MacBook-Pro:app $ eb deploy
Starting environment deployment via CodeCommit
--- Waiting for Application Versions to be pre-processed ---
Finished processing application version app-0e294-200420_110159
2020-04-21 00:22:24 INFO Environment update is starting.
2020-04-21 00:23:07 INFO Deploying new version to instance(s).
2020-04-21 00:27:59 ERROR [Instance: i-0e613ac1fe175f3f6] Command failed on instance. Return code: 1 Output: (TRUNCATED)...-- : Writing /var/app/ondeck/public/assets/application-06fe3df6175ba0def3d0e732489f883d0c09de.css.gz
Webpacker requires Node.js ">=10.13.0" and you are using v6.17.1
Please upgrade Node.js https://nodejs.org/en/download/
Exiting!.
Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/11_asset_compilation.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
2020-04-21 00:27:59 INFO Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
2020-04-21 00:28:00 ERROR Unsuccessful command execution on instance id(s) 'i-0e613ac1fe175f3f6'. Aborting the operation.
2020-04-21 00:28:00 ERROR Failed to deploy application.
ERROR: ServiceError - Failed to deploy application.
It was giving me a Bundler error before this, which I was able to fix by adding a file to .ebextensions that installs the correct version of Bundler. I figured the solution to this would be similar.
This post was close to my problem:
Deploy rails react app with webpacker gem on AWS elastic beanstalk
So, based on the selected answer there, I added this file to my .ebextensions:
01_update_note.config
commands:
  01_download_nodejs:
    command: curl --silent --location https://rpm.nodesource.com/setup_10.x | sudo bash -
  02_install_nodejs:
    command: yum -y install nodejs
However, it did not appear to do anything; I still get the same error. I tried a couple of variations of the file based on a few other blog posts about the issue, but the error remains. Is anyone able to point me in the right direction or offer any insight into the problem? I apologize for not being very familiar with AWS or EB yet, but I will do my best to answer additional questions.
Maybe it is caused by yarn install running later.
I tried the following scripts, removed yarn install, set RAILS_SKIP_ASSET_COMPILATION=false, and it worked for me.
commands:
  01_install_yarn:
    command: "sudo wget https://dl.yarnpkg.com/rpm/yarn.repo -O /etc/yum.repos.d/yarn.repo && curl --silent --location https://rpm.nodesource.com/setup_12.x | sudo bash - && sudo yum install yarn -y"
  02_download_nodejs:
    command: curl --silent --location https://rpm.nodesource.com/setup_12.x | sudo bash -
  03_install_nodejs:
    command: yum -y install nodejs
  04_install_packages:
    command: sudo yum install -y yarn
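If it helps, the environment variable mentioned above can also be set with the same EB CLI used in the question (a sketch; whether false is the right value for your app is carried over from the answer, not verified here):
eb setenv RAILS_SKIP_ASSET_COMPILATION=false
eb deploy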
This is how I did it on Amazon Linux 2:
Create this file in .platform/hooks/prebuild/yarn_config.sh:
#!/usr/bin/env bash
curl --silent --location https://rpm.nodesource.com/setup_16.x | sudo bash -
sudo yum -y install nodejs
sudo wget https://dl.yarnpkg.com/rpm/yarn.repo -O /etc/yum.repos.d/yarn.repo
sudo yum -y install yarn
yarn install
Give it the right permission: chmod +x .platform/hooks/prebuild/yarn_config.sh
And the error is gone, while your assets still compile (unlike with the accepted answer).

Docker throws error while running npm install for node application

FROM node:12-alpine
RUN mkdir /project-api
WORKDIR /project-api
RUN apk add --update-cache python
ENV PYTHON=/usr/local/bin/
COPY ./package.json .
RUN npm cache clean --force
RUN rm -rf ~/.npm
RUN rm -rf node_modules
RUN rm -f package-lock.json
RUN npm install
EXPOSE 3000
I was trying to create a Node container for my project, but it throws an error during npm install (for the bcrypt package). I tried installing Python in the image, but it still shows the error. I'm attaching the error screen.
The bcrypt npm package depends on non-javascript code. This means it needs to be built for the specific architecture it's being run on. The initial "WARNING: Tried to download" indicates a pre-built artifact wasn't available, so it's falling back to building from source.
The specific error I see is Error: not found: make, which indicates make isn't installed on the image you're building on (node:12-alpine). Either install it in a prior step in your Dockerfile, or switch to a base image that has it pre-installed (node:12 might).
The bcrypt package has more specific instructions at https://github.com/kelektiv/node.bcrypt.js/wiki/Installation-Instructions#alpine-linux-based-images.
You need the following packages:
build-base
python
apk --no-cache add --virtual builds-deps build-base python
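To make that concrete, here is a sketch of where that line could slot into the Dockerfile from the question (untested; the commented-out apk del step is an assumption, only useful if nothing else needs the toolchain after the build):
FROM node:12-alpine
WORKDIR /project-api
# Toolchain needed to compile bcrypt from source when no prebuilt binary is available
RUN apk --no-cache add --virtual builds-deps build-base python
COPY ./package.json .
RUN npm install
# RUN apk del builds-deps   # optionally drop the toolchain once native modules are built
EXPOSE 3000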

How to use latest version of chromium driver with alpine

We are using Alpine for the Docker container that runs our Watir tests. I would like to use a newer version, like chromium 75.0.3770.8 and chromium-chromedriver 75.0.3770.8, but the newest version Alpine ships is 73.
I have searched and searched and cannot find what I need.
My last attempt is below; it still uses chrome=73.0.3683.103 and chromedriver=73.0.3683.103 when you run the tests in the container.
FROM ruby:alpine3.10
LABEL maintainer=""
RUN apk add --update alpine-sdk
RUN echo @edge http://nl.alpinelinux.org/alpine/edge/main >> /etc/apk/repositories
RUN apk add --no-cache \
    chromium>75.0.3770.8 \
    chromium-chromedriver>75.0.3770.8
RUN echo "gem: --no-ri --no-rdoc" > ~/.gemrc \
    && gem install bundler
ADD ./Gemfile /Gemfile
ADD ./Gemfile.lock /Gemfile.lock
RUN bundle install
ADD ./spec /spec
CMD ["rspec"]
Has anyone tried to do the same, and how did you accomplish it?
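Not a verified answer, but two things in that Dockerfile are commonly adjusted (a sketch): RUN invokes /bin/sh -c, so the unquoted > in chromium>75.0.3770.8 is treated as a shell redirect rather than a version constraint, and apk can be pointed at the edge repositories explicitly instead of editing /etc/apk/repositories:
RUN apk add --no-cache \
      --repository http://nl.alpinelinux.org/alpine/edge/main \
      --repository http://nl.alpinelinux.org/alpine/edge/community \
      "chromium>75" "chromium-chromedriver>75"
Mixing edge packages into a 3.10-based image can pull in newer shared libraries, so treat this as a starting point rather than a drop-in fix.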

How to manage system dependencies when using azk?

I'm using azk, and my system depends on extra packages. Since I'm using an Ubuntu-based image, I'd be able to install them using:
apt-get -yq update && apt-get install -y libqtwebkit-dev qt4-qmake
Can I add this steps to provision? In the Azkfile.js, it would look like:
// ...
provision: [
  "apt-get -yq update",
  "apt-get install -y libqtwebkit-dev qt4-qmake",
  "bundle install --path /azk/bundler",
  "bundle exec rake db:create",
  "bundle exec rake db:migrate",
]
Or is it better to create a new Docker image?
Provision steps are run in a separate container, so any data generated inside it is lost after the provision step unless you persist it. That's why you probably have bundler folders as persistent folders.
Given that, you should use a Dockerfile in this case. It'll look like this:
# Or the image you were using previously
FROM azukiapp/ruby:2.2.2
RUN apt-get -yq update && \
    apt-get install -y libqtwebkit-dev qt4-qmake && \
    apt-get clean -qq && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*  # Keeping the image as small as possible
After that, you should edit your Azkfile.js and replace the image property of your main system to use the created Dockerfile (you can check azk docs here):
image: { dockerfile: './PATH_TO_DOCKERFILE' },
Finally, when you run azk start, azk will build this Dockerfile and use it with all your dependencies installed.
Tip: If you want to force azk to rebuild your Dockerfile, just pass -B flag to azk start.
As it looks like you're using a Debian-based Linux distribution, you could create (https://wiki.debian.org/Packaging) your own Debian virtual package (https://www.debian.org/doc/manuals/debian-faq/ch-pkg_basics.en.html#s-virtual) that lists all the packages it depends on. If you just do that one thing, you can dpkg -i (or apt-get install if you host a custom debian repository yourself) your custom package and it will install all the dependencies you need via apt.
You can then move on to learning about postinst and prerm scripts in Debian packages (https://www.debian.org/doc/manuals/debian-faq/ch-pkg_basics.en.html#s-maintscripts). This will allow you to run commands like bundle and gem as the last step of the package installation and the first step of package removal.
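As a rough sketch of the virtual/meta-package idea (the equivs tooling, file name, and version here are assumptions for illustration; the package name reuses the example from the list below):
apt-get install -y equivs
cat > custom-dependencies-diego.ctl <<'EOF'
Package: custom-dependencies-diego
Depends: libqtwebkit-dev, qt4-qmake
Description: Meta-package pulling in this app's system dependencies
EOF
equivs-build custom-dependencies-diego.ctl
dpkg -i custom-dependencies-diego_1.0_all.deb || apt-get -f install -y   # resolve the declared Depends via apt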
There are a few advantages to doing it this way:
1. If you host a package repository somewhere you can use a pull method of dependency installation in a dynamic scaling environment by simply having the host apt-get update && apt-get install custom-dependencies-diego
2. Versioning your dependency list - Using dpkg -l you can tell what version everything is on a given host, including the version of your dependency virtual package.
3. With prerm scripts, you can ensure that removing your virtual package will also remove the changes your installation scripts made, so you can get a host back to a "clean" state.
The disadvantage of doing it this way is that it's Debian/apt specific. If you wanted to deploy to Slackware or RHEL, you'd have to change things a bit. Changing to a new distro wouldn't be particularly hard, but it's definitely not as portable as using Bash, for example.