How to deploy the unified CloudWatch agent RPM in an Alpine-based image

I have an Alpine-based image in which I would like to install the unified CloudWatch agent, picking from the packages listed on the DownloadLinks page. I chose https://s3.amazonaws.com/amazoncloudwatch-agent/amazon_linux/amd64/latest/amazon-cloudwatch-agent.rpm as my package. In my Dockerfile I download and install it like this:
RUN curl -sS -o /tmp/amazon-cloudwatch-agent.rpm https://s3.amazonaws.com/amazoncloudwatch-agent/amazon_linux/amd64/latest/amazon-cloudwatch-agent.rpm && \
    apk add --no-cache /tmp/amazon-cloudwatch-agent.rpm
The line above downloads the RPM into /tmp, but as soon as apk runs it throws the error below.
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/community/x86_64/APKINDEX.tar.gz
ERROR: unsatisfiable constraints:
/tmp/amazon-cloudwatch-agent.rpm (missing):
required by:
world[/tmp/amazon-cloudwatch-agent.rpm]
So, what is the best way to install an RPM on Alpine, or should I choose a different package?
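For context: apk installs .apk packages only, so it treats the .rpm path as a package name it cannot resolve, which is what the "unsatisfiable constraints" error above means. One workaround, sketched here under the assumption that the rpm and cpio packages exist in your configured Alpine repositories, is to unpack the RPM payload manually rather than installing it; note that the agent binaries are built against glibc, so they may still fail to run on musl-based Alpine without a compatibility layer.
# Sketch only: unpack the RPM payload instead of installing it with apk.
# Assumes the rpm (providing rpm2cpio) and cpio packages are available in your repositories.
RUN apk add --no-cache rpm cpio curl && \
    curl -sS -o /tmp/amazon-cloudwatch-agent.rpm https://s3.amazonaws.com/amazoncloudwatch-agent/amazon_linux/amd64/latest/amazon-cloudwatch-agent.rpm && \
    cd / && rpm2cpio /tmp/amazon-cloudwatch-agent.rpm | cpio -idm && \
    rm /tmp/amazon-cloudwatch-agent.rpm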

Related

How to install bazel4.2 on openSUSE

It seems openSUSE provides a bazel4.2 package; unfortunately it is an experimental/community package and I don't know how to enable this feed in my openSUSE-based Dockerfile.
Dockerfile:
FROM opensuse/tumbleweed
RUN zypper update -y \
    && zypper install -y bazel4.2
RUN bazel --version
Observed
docker build --tag=plop .
...
Retrieving repository 'openSUSE-Tumbleweed-Non-Oss' metadata [..done]
Building repository 'openSUSE-Tumbleweed-Non-Oss' cache [....done]
Retrieving repository 'openSUSE-Tumbleweed-Oss' metadata [......done]
Building repository 'openSUSE-Tumbleweed-Oss' cache [....done]
Retrieving repository 'openSUSE-Tumbleweed-Update' metadata [.done]
Building repository 'openSUSE-Tumbleweed-Update' cache [....done]
Loading repository data...
Reading installed packages...
Nothing to do.
Loading repository data...
Reading installed packages...
No provider of 'bazel4.2' found.
'bazel4.2' not found in package names. Trying capabilities.
Expected
Bazel4.2 is retrieved and installed from a community/experimental repository.
ref: https://software.opensuse.org/package/bazel4.2
After some trial and error, the following worked.
Dockerfile:
FROM opensuse/tumbleweed
RUN zypper update -y
# https://en.opensuse.org/SDB:Add_package_repositories
RUN zypper ar -Gf https://download.opensuse.org/repositories/devel:tools:building/openSUSE_Factory/devel:tools:building.repo
# https://software.opensuse.org/package/bazel4.2
RUN zypper install -y bazel4.2
RUN bazel --version
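For reference on the flags: in the zypper ar call, -G disables GPG checks for the added repository and -f enables autorefresh. If you would rather keep signature verification, drop -G and import the repository's signing key instead.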

Unable to load shared library 'libgdiplus' or one of its dependencies while running a Lambda function

I am writing an AWS Lambda function in .NET Core 3.1 that uses the Aspose.Slides library. I publish the Lambda function as a Docker image on AWS. It publishes successfully, but when I test the Lambda it gives me the following error:
Aspose.Slides.PptxReadException: The type initializer for 'Gdip' threw an exception.
---> System.TypeInitializationException: The type initializer for 'Gdip' threw an exception.
---> System.DllNotFoundException: Unable to load shared library 'libgdiplus' or one of its dependencies. In order to help diagnose loading problems, consider setting the LD_DEBUG environment variable: liblibgdiplus: cannot open shared object file: No such file or directory
at System.Drawing.SafeNativeMethods.Gdip.GdiplusStartup(IntPtr& token, StartupInput& input, StartupOutput& output)
at System.Drawing.SafeNativeMethods.Gdip..cctor()
Even though I am installing the libgdiplus package from the Dockerfile, I still get the above error.
The Dockerfile is:
FROM public.ecr.aws/lambda/dotnet:core3.1 AS base
FROM mcr.microsoft.com/dotnet/sdk:3.1 as build
WORKDIR /src
COPY ["Lambda.PowerPointProcessor.csproj", "base/"]
RUN dotnet restore "base/Lambda.PowerPointProcessor.csproj"
WORKDIR "/src"
COPY . .
RUN apt-get update && apt-get install -y libc6-dev
RUN apt-get update && apt-get install -y libgdiplus
RUN dotnet build "Lambda.PowerPointProcessor.csproj" --configuration Release --output /app/build
FROM build AS publish
RUN dotnet publish "Lambda.PowerPointProcessor.csproj" \
    --configuration Release \
    --runtime linux-x64 \
    --self-contained false \
    --output /app/publish \
    -p:PublishReadyToRun=true
FROM base AS final
WORKDIR /var/task
COPY --from=publish /app/publish .
CMD ["Lambda.PowerPointProcessor::Lambda.PowerPointProcessor.Function::FunctionHandler"]
Any help would be much appreciated.
FROM public.ecr.aws/lambda/dotnet:core3.1
WORKDIR /var/task
COPY "bin/Release/netcoreapp3.1/linux-x64" .
RUN yum install -y amazon-linux-extras
RUN amazon-linux-extras install epel -y
RUN yum install -y libgdiplus
CMD ["Lambda.PowerPointProcessor::Lambda.PowerPointProcessor.Function::FunctionHandler"]
This Dockerfile resolved the issue for me; it's working fine. The key difference is that libgdiplus is now installed in the final Lambda runtime image, which is Amazon Linux based (hence yum and the EPEL repository), rather than only in the intermediate Debian build stage, which never ends up in the final image.

Docker throws an error while running npm install for a Node application

FROM node:12-alpine
RUN mkdir /project-api
WORKDIR /project-api
RUN apk add --update-cache python
ENV PYTHON=/usr/local/bin/
COPY ./package.json .
RUN npm cache clean --force
RUN rm -rf ~/.npm
RUN rm -rf node_modules
RUN rm -f package-lock.json
RUN npm install
EXPOSE 3000
I was trying to create a Node container for my project, but npm install throws an error while building the bcrypt package. I tried installing Python in the image, but it still shows the error. I'm attaching the error screen.
The bcrypt npm package depends on non-JavaScript code. This means it needs to be built for the specific architecture it's being run on. The initial "WARNING: Tried to download" indicates a pre-built artifact wasn't available, so it's falling back to building from source.
The specific error I see is Error: not found: make, which indicates make isn't installed on the image you're building on (node:12-alpine). Either install it in a prior step of your Dockerfile, or switch to a base image that has it pre-installed (node:12 might).
The bcrypt package has more specific instructions at https://github.com/kelektiv/node.bcrypt.js/wiki/Installation-Instructions#alpine-linux-based-images.
You need the following packages:
build-base
python
apk --no-cache add --virtual builds-deps build-base python
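For completeness, here is a sketch of how that fix could slot into the Dockerfile from the question; the --virtual group lets you delete the toolchain again after npm install so the final image stays small. Package names follow the wiki snippet above and may differ on newer Alpine releases (e.g. python2/python3).
FROM node:12-alpine
WORKDIR /project-api
COPY ./package.json .
# Toolchain needed to compile bcrypt from source, grouped so it can be removed afterwards
RUN apk --no-cache add --virtual build-deps build-base python && \
    npm install && \
    apk del build-deps
EXPOSE 3000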

GitLab CI pipeline takes too long to build every time

I am using Docker and GitLab CI for deploying my app on AWS and I would like to improve my pipeline build time. The problem is that it takes a lot of time to download the libraries every time I build a new image. Here is my 'before_script' job:
before_script:
  - which apk
  - apk add --no-cache curl jq python python-dev python3-dev gcc py-pip docker openrc git libc-dev libffi-dev openssl-dev nodejs yarn make
  - pip install awscli
  - pip install 'docker-compose<=1.23.2'
I think it should be possible to store the libraries in a cache for future reuse, but I can't figure out how that works. Thanks!
Yes, it is possible to use the cache in some cases. But in this scenario I think it is better to build a Docker image with all your dependencies built in, and then use that new image (which already has all the dependencies) for deploying.
In the GitLab CI pipeline you can set the image for each stage; you would configure the stages to use the new image, as sketched below.
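As an illustration (the registry path, tag, and deploy command are hypothetical), the dependencies from before_script could be baked into a custom CI image once:
# Dockerfile for a custom CI image (sketch); docker:19.03 is Alpine-based, so apk works as above
FROM docker:19.03
RUN apk add --no-cache curl jq python python-dev python3-dev gcc py-pip openrc git libc-dev libffi-dev openssl-dev nodejs yarn make && \
    pip install awscli 'docker-compose<=1.23.2'
Pushed to a registry, that image then replaces the per-job installs in .gitlab-ci.yml:
deploy:
  image: registry.example.com/my-group/ci-image:latest  # hypothetical path
  script:
    - docker-compose up -d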

How to manage system dependencies when using azk?

I'm using azk and my system depends on extra packages. Since I'm using an Ubuntu-based image, I can install them using:
apt-get -yq update && apt-get install -y libqtwebkit-dev qt4-qmake
Can I add this steps to provision? In the Azkfile.js, it would look like:
// ...
provision: [
  "apt-get -yq update",
  "apt-get install -y libqtwebkit-dev qt4-qmake",
  "bundle install --path /azk/bundler",
  "bundle exec rake db:create",
  "bundle exec rake db:migrate",
]
Or it's better to create a new Docker image?
Provision steps are run in a separate container, so all the data generated inside it is lost after the provision step unless you persist it. That's why you probably have bundle folders as persistent folders.
Given that, you should use a Dockerfile in this case. It'll look like this:
# Use the image you were using previously as the base
FROM azukiapp/ruby:2.2.2
RUN apt-get -yq update && \
    apt-get install -y libqtwebkit-dev qt4-qmake && \
    apt-get clean -qq && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*  # keeping the image as small as possible
After that, you should edit your Azkfile.js and replace the image property of your main system to use the created Dockerfile (see the azk docs for details):
image: { dockerfile: './PATH_TO_DOCKERFILE' },
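In context, the relevant fragment of the Azkfile.js might look like this (the system name and path are illustrative):
systems({
  myapp: {
    // build the local Dockerfile instead of pulling a stock image
    image: { dockerfile: "./Dockerfile" },
    provision: [
      "bundle install --path /azk/bundler",
      "bundle exec rake db:create",
      "bundle exec rake db:migrate",
    ],
  },
});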
Finally, when you run azk start, azk will build this Dockerfile and use it with all your dependencies installed.
Tip: If you want to force azk to rebuild your Dockerfile, just pass -B flag to azk start.
As it looks like you're using a Debian-based Linux distribution, you could create (https://wiki.debian.org/Packaging) your own Debian virtual package (https://www.debian.org/doc/manuals/debian-faq/ch-pkg_basics.en.html#s-virtual) that lists all the packages it depends on. If you do just that one thing, you can dpkg -i (or apt-get install, if you host a custom Debian repository yourself) your custom package and it will install all the dependencies you need via apt; a sketch follows below.
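One lightweight way to build such a virtual/meta package is the equivs tool (apt-get install equivs); the control file below is a sketch, with the package name and dependency list taken from this thread for illustration:
# control file for equivs-build (sketch)
Section: misc
Priority: optional
Standards-Version: 3.9.2
Package: custom-dependencies-diego
Version: 1.0
Depends: libqtwebkit-dev, qt4-qmake
Description: Meta-package pulling in this project's system dependencies
Running equivs-build on that file produces a .deb; installing it with apt (e.g. apt-get install ./custom-dependencies-diego_1.0_all.deb) resolves the Depends line automatically, whereas plain dpkg -i needs a follow-up apt-get -f install.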
You can then move on to learning about postinst and prerm scripts in Debian packages (https://www.debian.org/doc/manuals/debian-faq/ch-pkg_basics.en.html#s-maintscripts). This will allow you to run commands like bundle and gem as the last step of the package installation and the first step of package removal.
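As a rough illustration (paths hypothetical), a debian/postinst maintainer script that runs bundler as the last step of installation could look like this:
#!/bin/sh
# postinst (sketch): dpkg runs this after the package is unpacked
set -e
case "$1" in
  configure)
    # last step of installation: fetch the app's gems
    cd /opt/myapp && bundle install --path vendor/bundle
    ;;
esac
exit 0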
There are a few advantages to doing it this way:
1. If you host a package repository somewhere you can use a pull method of dependency installation in a dynamic scaling environment by simply having the host apt-get update && apt-get install custom-dependencies-diego
2. Versioning your dependency list - Using dpkg -l you can tell what version everything is on a given host, including the version of your dependency virtual package.
3. With prerm scripts, you can ensure that removing your virtual package also removes the changes your installation scripts made, so you can get a host back to a "clean" state.
The disadvantage of doing it this way is that it's Debian/apt specific. If you wanted to deploy to Slackware or RHEL you'd have to change things a bit. Changing to a new distro wouldn't be particularly hard, but it's definitely not as portable as using Bash, for example.