[AWS Lambda]: How to fix "version GLIBC 2.27 Not found" - amazon-web-services

I would like to deploy and test my Lambda function, but every time I try to do that I get the following error message:
2019-11-11 13:25:33 Mounting /tmp/tmphebm3s_4 as /var/task:ro,delegated inside runtime container
/var/task/bin/inference: /lib64/libm.so.6: version `GLIBC_2.27' not found (required by /opt/lib/libopencv_dnn.so.4.1)
/var/task/bin/inference: /lib64/libm.so.6: version `GLIBC_2.27' not found (required by /opt/lib/libopencv_video.so.4.1)
/var/task/bin/inference: /lib64/libm.so.6: version `GLIBC_2.27' not found (required by /opt/lib/libopencv_objdetect.so.4.1)
/var/task/bin/inference: /lib64/libm.so.6: version `GLIBC_2.27' not found (required by /opt/lib/libopencv_features2d.so.4.1)
/var/task/bin/inference: /lib64/libm.so.6: version `GLIBC_2.27' not found (required by /opt/lib/libopencv_imgproc.so.4.1)
/var/task/bin/inference: /lib64/libm.so.6: version `GLIBC_2.27' not found (required by /opt/lib/libopencv_core.so.4.1)
/var/task/bin/inference: /lib64/libm.so.6: version `GLIBC_2.27' not found (required by /opt/lib/libinference_engine.so)
/var/task/bin/inference: /lib64/libdbus-1.so.3: no version information available (required by /opt/lib/libatk-bridge-2.0.so.0)
^C/var/task/bin/inference: /lib64/libdbus-1.so.3: no version information available (required by /opt/lib/libatspi.so.0)
Makefile:85: recipe for target 'run-inference' failed
Note that inference is the name of my Lambda function's binary.
I found this link: https://aws.amazon.com/premiumsupport/knowledge-center/lambda-linux-binary-package/ which describes using an Amazon Linux box to create a deployment package that matches the Lambda function execution environment.
My plan was to copy my code to an EC2 instance and build it against the GLIBC version installed there. I think this would fix the issue mentioned above.
The problem is: once I SSH into the EC2 instance, how do I copy my code to it and then build it? I am not a Linux expert, so this is somewhat confusing to me.
Thanks in advance!

I have just answered a similar question that addresses the issue you are having, which was the same issue I ran into earlier today. Please look at:
How can I use environmental variables on AWS Lambda?
In addition to looking there, please note that you will have to pack a layer into your AWS Lambda function that ships the correct lib files -- "libm.so.6" is one example -- in the lib folder of your layer. After that, you will need to set the environment variable, as explained in the link above, so that the correct lib files from your layer are used at runtime and your code runs successfully.
To get the correct lib file, I would suggest searching further and also trying to run your code in conda. My project was developed in a conda environment, and when I translated it into a virtualenv so that I could package it into a layer and upload it to AWS Lambda, I noticed I was getting that error too. I then grabbed the correct lib file from either the lib folder of my conda environment or the lib folder of the conda installation directory (I don't remember which), and placed it in the lib folder of my layer package. After that, I still had to set the environment variable so that those specific lib files would be loaded and linked to the Python runtime.
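As a rough sketch of that layer workflow (the layer name is a placeholder, the function name is taken from the question, and the source path of libm.so.6 is an assumption -- it depends on where your conda environment keeps it):
mkdir -p layer/lib
cp /path/to/conda/env/lib/libm.so.6 layer/lib/   # assumed source location
cd layer && zip -r ../glibc-layer.zip lib && cd ..
aws lambda publish-layer-version --layer-name glibc-libs --zip-file fileb://glibc-layer.zip
aws lambda update-function-configuration --function-name inference --layers <layer-version-arn-from-the-publish-step>
aws lambda update-function-configuration --function-name inference --environment "Variables={LD_LIBRARY_PATH=/opt/lib}"
The layer contents are extracted under /opt at runtime, which is why the environment variable points at /opt/lib.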

The problem here lies in an incompatibility between the OpenCV build you are trying to use and Amazon Linux, the OS that Lambda runs on. Basically, you are trying to use OpenCV compiled for a different system, and it can't run there.
To solve this, you need to build OpenCV for Amazon Linux and for the version of the programming language that you use. Here is a repository for Python 3.7 that I used. Please note that it will not run correctly unless you comment out all of the Python 3.8 installation steps and add a RUN pip3.7 install --upgrade pip line to the Dockerfile before RUN pip3.7 install -r requirements.txt <...>.
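Roughly, the relevant part of the Dockerfile would end up looking like this (the commented-out 3.8 step is a guess at what that repository contains; the two pip3.7 lines come from the note above):
# RUN yum install -y python38 python38-devel    (comment out the Python 3.8 installation steps -- hypothetical line)
RUN pip3.7 install --upgrade pip
RUN pip3.7 install -r requirements.txt <...>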

I was getting this error while building a Rust binary to deploy to AWS Lambda.
I solved it by using cross as suggested in this GitHub comment.
I performed these two steps to solve the issue:
Install cross: cargo install cross
Compile the project using cross: cross build --release --target x86_64-unknown-linux-gnu
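For context, cross runs the build inside a Docker container whose toolchain typically targets an older glibc than your host, which is usually why the resulting binary loads fine on Amazon Linux. If your crate has no C dependencies that require glibc, an alternative (not from the original answer) is to target musl and produce a mostly statically linked binary:
cross build --release --target x86_64-unknown-linux-musl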

Related

Adding binaries to other people's conan recipes

I'm using the following conan packages
gtest/1.8.0#bincrafters/stable
boost/1.66.0#conan/stable
log4cplus/2.0.2#bincrafters/stable
and clang (version at least 6.0).
While the first two packages have binaries for clang 6.0, log4cplus doesn't (the latest is clang 3.9). I don't like the idea of having to build this package by hand on each workstation.
How can I upload a locally built binary for clang 6.0?
conan upload looks promising; however, it suggests that it will be a NEW package. Second question - wouldn't I interfere with the package author in any way?
I do recommend opening an issue for Bincrafters, requesting clang 6.0 support: https://github.com/bincrafters/community/issues/
Adding a new package configuration is just one line in the Travis recipe.
How can I upload a locally built binary for clang 6.0?
You could use JFrog Artifactory; there is a Community Edition with Conan support. Also, you could create a "mirror" of your packages locally with Artifactory instead of downloading from Bintray:
https://docs.conan.io/en/latest/uploading_packages/artifactory_ce.html
However, Conan walks your remote list in order: if your Conan client finds log4cplus first in the Bincrafters remote but the correct binary is only available in your local repository, Conan will ignore your local remote and show an error about a missing binary package for log4cplus. Thus, in your case, you will need to copy ALL binaries to your local repository.
Regards!
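As a rough sketch of that workflow with a Conan 1.x client (the remote name and Artifactory URL are placeholders for your own instance):
conan install log4cplus/2.0.2@bincrafters/stable --build=log4cplus -s compiler=clang -s compiler.version=6.0
conan remote add myartifactory http://<your-artifactory>/api/conan/conan-local
conan upload log4cplus/2.0.2@bincrafters/stable -r myartifactory --all
The --all flag uploads the recipe together with every binary package in your local cache, which matches the "copy ALL binaries" advice above.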
You will find the conan packages installed on your Linux system under ~/.conan/data/<package_name>/<version>/<user>/<channel>. There is a package folder inside it. If you want to manually add binaries to an existing package, you can place the binary in the bin folder of the corresponding package.
Alternatively, you can look at the conan recipe in the export folder to see where the package gets its binaries from, and add your binary at that path.
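For instance, for the log4cplus reference above the cache layout looks roughly like this (the package_id is a hash that depends on your settings, so it is left as a placeholder):
~/.conan/data/log4cplus/2.0.2/bincrafters/stable/export/          (the recipe)
~/.conan/data/log4cplus/2.0.2/bincrafters/stable/package/<package_id>/bin/   (binaries for one configuration)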

How to install Rasa Stack in Windows 10?

Maybe someone else has asked the same question too, but this one is difficult. I have tried everything. Where I am stuck is installing the dependencies. Some of them are old and not easily available, but I managed to install them.
The problem lies here: some dependencies need to be built from their source code. I have already installed the Visual C++ Build Tools and MSMPI. I also installed HDF5 for h5py, but it doesn't let me build old versions of h5py. So I tried installing the latest version of h5py, but I am still stuck on errors like "file not found". Some of the files the build process cannot find are "h5py/h5f.pyx", "mpi_c", and "mpi.h". Solving the error for one missing file leads to another, and so on.
After trying hard to solve these errors and installing one package after another for the same task, I am worn out. One thing I did find is that the "mpi_c" file was replaced by another file in newer versions of mpi4py, but my dependencies rely on the older version. I tried installing an older version of mpi4py, but HDF5 won't let me and gives other errors. In the end I gave up, with my whole day wasted.
So can someone here please provide a step-by-step guide for installing Rasa Stack on a Windows machine?
Windows 10 with Python 3.7. Let me know if I need to downgrade Python as well. This was my first time building a Python project from source on Windows. Thanks!
Please try the below steps to install Rasa:
Install Conda
Create a virtual environment:
conda create -n myenv python=3.5
Activate the virtual environment:
conda activate myenv
Install the Rasa packages:
pip install rasa_nlu rasa_core
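To quickly confirm that the packages landed in that environment (a sanity check, not part of the original answer):
pip show rasa_nlu rasa_core
python -c "import rasa_nlu, rasa_core"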

How to manually set which version of 'libstdc++.so.6' is used instead of the latest one?

I got an error on my server.
version `GLIBCXX_3.4.21' not found
After some investigation I found that the 'libstdc++.so.6' version used when building the app on my local computer is much newer than the one on the server. So I get that error because that version is not available on the server. From what I read, I could fix this by upgrading 'libstdc++.so.6' on the server to the latest one, but I can't do that because of restricted access.
Is there any way to downgrade, or to make my local build use an older version by default?
When linking your application, specify -Wl,-rpath=$ORIGIN to make it search for shared libraries in the folder where the executable is. Then copy libstdc++.so.6 and the other application dependencies (find them with ldd) into your application folder and distribute that folder. See man ld.so, the section about $ORIGIN.
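A minimal sketch of that approach (the file names are hypothetical; note that $ORIGIN must be protected from shell expansion):
g++ main.cpp -o myapp -Wl,-rpath,'$ORIGIN'
ldd myapp                                        # list the shared libraries the binary needs
cp "$(g++ -print-file-name=libstdc++.so.6)" .    # copy libstdc++ next to the binary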

How to find the correct version of PyBindGen for Python Bindings

Currently I am working with the ns-3 simulator and trying to enable the PyViz visualizer. According to the docs, I have installed the three dependencies:
py27-pygtk
py27-pygoocanvas
py27-pygraphviz
Now, in order to use this, I still need to enable the Python bindings, so I ran /usr/bin/python2.7 ./waf configure to check what is needed to enable them. The result shows:
Python Bindings : not enabled (PyBindGen version not correct and newer version could not be retrieved)
So I checked the docs and installed PyBindGen (version 0.18.0). The output shows:
Installed /Library/Python/2.7/site-packages/PyBindGen-0.18.0-py2.7.egg
Processing dependencies for PyBindGen==0.18.0
Finished processing dependencies for PyBindGen==0.18.0
After I re-ran the configuration check, the result still showed PyBindGen version not correct and newer version could not be retrieved.
So I presume this is because I installed the wrong version of PyBindGen? If so, how can I get a suitable version for enabling the Python bindings?
I would appreciate it if someone could help me figure this out. Many thanks.
S.
According to the Google Group, here is the resolution (tested, it worked). Follow these instructions:
hg clone http://code.nsnam.org/ns-3-allinone
cd ns-3-allinone && ./download.py
This will solve the Python bindings problem.
Update: after downloading this version of ns-3 and solving the Python bindings problem, there will be another problem after running
./waf configure
it will show the result like this:
PyViz visualizer: not enabled (Missing python modules: gtk, goocanvas, pygraphviz)
This is even though I have installed all three dependencies. After some research I found another question post where someone guessed that:
" Waf found the standard Python here (/usr/bin/python is the Apple path), and you installed the python libraries using MacPorts.
Most probably you'll need to configure Python to point to the MacPort-based Python, or it will not see what you installed."
So, according to "How to: Macports select python", here is the solution:
port select --list python
sudo port select --set python python27
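To check that the switch took effect (a quick sanity check, not part of the original post):
port select --show python
which python && python --version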
Hope this helps anyone who comes along afterwards.
S.

Ubuntu OpenCV not compiling

I'm trying to compile OpenCV 3.2 with the contrib modules using the following commands:
1.
cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=OFF -DCMAKE_INSTALL_PREFIX=/usr/local/ -DOPENCV_EXTRA_MODULES_PATH=/home/matteo/Desktop/Xilinx/OpenCV/source/opencv_contrib/modules/ /home/matteo/Desktop/Xilinx/OpenCV/source/opencv-3.2.0/
2.
make -j7 # runs 7 jobs in parallel
3.
sudo make install
Can you explain why I get
...
-- VTK is not found. Please set -DVTK_DIR in CMake to VTK build directory, or to VTK install subdirectory with VTKConfig.cmake file
-- Caffe: NO
-- Protobuf: NO
-- Glog: NO
-- Downloading ...
CMake Error at cmake/OpenCVUtils.cmake:1043 (file):
file DOWNLOAD cannot open file for write.
Call Stack (most recent call first):
../opencv_contrib/modules/dnn/cmake/OpenCVFindLibProtobuf.cmake:32 (ocv_download)
../opencv_contrib/modules/dnn/CMakeLists.txt:5 (include)
CMake Error at cmake/OpenCVUtils.cmake:1047 (message):
Failed to download . Status=
Call Stack (most recent call first):
../opencv_contrib/modules/dnn/cmake/OpenCVFindLibProtobuf.cmake:32 (ocv_download)
../opencv_contrib/modules/dnn/CMakeLists.txt:5 (include)
-- Configuring incomplete, errors occurred!
I'm working with Ubuntu 16.04. I already had OpenCV on the system: maybe I uninstalled it the wrong way? I remember compiling OpenCV 3.2 with the same commands used above.
You must have matching versions of opencv_contrib and opencv itself.
On the OpenCV GitHub, go to the OpenCV releases and download 3.2.0 (it should be the same as the master branch).
Now go to https://github.com/opencv/opencv_contrib/releases and download 3.2.0. Then you will have both versions matching.
After that, all the cmake commands found in the README.md of the opencv_contrib master branch should work fine.
I get the same error, that exact error, around protobuf. There's another error in the xfeatures2d module too, if you delete the dnn modules (so they don't get configured/built). My problem is, I need the "non-free" xfeatures2d module. :(
The problem appears to be in opencv_contrib, in the dnn and xfeatures2d modules, but I'm not sure how to fix it. The call to ocv_download seems to receive empty inputs, even though the dnn and xfeatures2d cmake files are passing in arguments. I am not even a novice with cmake, so I'm not sure how to troubleshoot further.
I get this error both on Mac configuring for Xcode and on Windows configuring for Visual Studio, using the latest version of cmake-gui, 3.8.0-rc3.
EDIT: I think I've found the issue, though. I opened an issue in the opencv_contrib GitHub. There is a call to ocv_download in the dnn and xfeatures2d cmake files that uses FILENAME as the first parameter, but it should be using PACKAGE instead. When I changed the parameter to PACKAGE, CMake successfully configured opencv with the opencv_contrib modules.
Hope this helps! :)
You might not be using the same version of opencv and opencv_contrib:
https://github.com/opencv/opencv_contrib/archive/<version>.zip
https://github.com/opencv/opencv/archive/<version>.zip
like master or 3.2.0
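For example, to fetch matching 3.2.0 archives of both repositories using that URL pattern:
wget https://github.com/opencv/opencv/archive/3.2.0.zip -O opencv-3.2.0.zip
wget https://github.com/opencv/opencv_contrib/archive/3.2.0.zip -O opencv_contrib-3.2.0.zip
unzip opencv-3.2.0.zip && unzip opencv_contrib-3.2.0.zip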
SHORT
You need to have the same version of opencv and opencv_contrib (.../opencv_contrib/modules/... belongs to an independent repo).
Use either the same release or the latest commit in BOTH repositories.
Check which version you have and move the other one to match. In your case, I guess you have to change the version of opencv_contrib: move to the release with git, or download it from GitHub.
git checkout <number_opencv_version i.e. 3.2.0>
LONG
Like Ken Lee, I guess that you do not have the same version in both repositories.
As Matt mentioned in the issue he opened, there is a problem with the call to ocv_download because the contrib version is not the one that matches your opencv sources, so it fails because the parameter is not the expected one.
It happened to me when I was using opencv 3.1.0 and the latest version of opencv_contrib. You could change the cmake files one by one, but it is easier to check out the correct version in each repo.
There is a permissions conflict in your build folder (it may result from your previous sudo make install). I don't remember how I fixed it, but you can try recursively chown'ing both the source and build folders (or chmod them to 777).
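For example, a hedged sketch of that fix, using the source path from the question and a placeholder for your build folder:
sudo chown -R "$USER":"$USER" /home/matteo/Desktop/Xilinx/OpenCV/source/
sudo chown -R "$USER":"$USER" <your_build_folder>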