I am trying to run geoipupdate in Alpine Linux as mentioned here.
Installing on Linux via the tarball
Download and extract the appropriate tarball for your system. You will end up with a directory named something like geoipupdate_4.0.0_linux_amd64 depending on the version and architecture.
Copy geoipupdate to where you want it to live. To install it to /usr/local/bin/geoipupdate, run the equivalent of sudo cp geoipupdate_4.0.0_linux_amd64/geoipupdate /usr/local/bin.
geoipupdate looks for the config file /usr/local/etc/GeoIP.conf by default.
I copied geoipupdate to /usr/local/bin and the config file to /usr/local/etc, but when I run geoipupdate, it says command not found.
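That is, I ran roughly the equivalent of (version number taken from the docs above; mine may differ):
tar -xzf geoipupdate_4.0.0_linux_amd64.tar.gz
sudo cp geoipupdate_4.0.0_linux_amd64/geoipupdate /usr/local/bin
sudo cp GeoIP.conf /usr/local/etc/   # my config file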
I am not sure where I am going wrong, or if this is not supposed to work in Alpine Linux this way. Has anyone faced the same issue?
I am trying to install the Lustre client on Ubuntu 20.04 nodes I have in GCP. I'm using Linux kernel version 5.15.0-1021-gcp.
I'm trying to install the client with the following code:
cd /home/apps/
mkdir lustre
git clone git://git.whamcloud.com/fs/lustre-release.git
cd lustre-release
git checkout 2.15.0
sh autogen.sh
./configure --prefix=/home/apps/lustre --disable-server --enable-client ## doesn't run! Fails at ./configure with error message "error: Run make config in /lib/modules/5.15.0-1021-gcp/build"
make debs
The configure step fails with an error about running make config in /lib/modules/5.15.0-1021-gcp/build. I tried running make config in that directory, but it asked me to input some configuration values I was unsure of.
I also tried downloading the deb package of the client software at
https://downloads.whamcloud.com/public/lustre/lustre-2.15.0/ubuntu2004/client/lustre-client-modules-5.4.0-96-generic_2.15.0-1_amd64.deb. However, this is built for the wrong Linux kernel, and I'm not sure what environment variables need to be set for this package.
Anyone know how to install the client modules for Lustre on Ubuntu?
You need to have the kernel sources or kernel-devel package that exactly match the kernel that you are installing on. This should also include the .config file that describes all of the options used when building your kernel.
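For example, on Ubuntu the headers package for the running kernel usually provides this (a sketch; package availability depends on your kernel flavour):
sudo apt-get install linux-headers-$(uname -r)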
Alternately, you could try a pre-built package, but it isn't clear if this will install on your kernel or not.
https://build.whamcloud.com/job/lustre-b2_15/40/arch=x86_64,build_type=client,distro=ubuntu2204,ib_stack=inkernel/
I'm attempting to deploy a very basic trading system to AWS using serverless (following along with this link), but I have a bit of a problem.
Prior to running the deployment command, I'm supposed to run
pip3 install -r requirements.txt -t . --system
but I am getting an error message saying 'no such option: --system'
Initially, I just tried to install the packages without the --system option, but I think that's causing the cron Lambda(??) function to fail when I execute it manually through the Serverless console, because it's not finding the requisite modules.
I'm assuming it's because they aren't being installed properly, so my question is: how should I install them so this doesn't happen?
Running
pip3 install -r requirements.txt
alone (while in the trading system directory) does not suffice.
So, what should I do?
The original author was working on an older Debian-derived system; you aren't. You can safely omit this option if it's not supported.
I don't have an authoritative link available, although this came up in a Google search. But here's my summary:
On older Debian-derived systems (e.g., Ubuntu 18.04), the --user flag was enabled by default and overrode the -t flag, so all packages would be installed in $HOME/.local. The --system flag was nominally intended to allow installation in the system package directory, but in practice it was needed to make -t work.
This is fixed on Debian-derived systems that default to Python 3 (e.g., Ubuntu 20.04).
It was never an issue on non-Debian systems (e.g., EC2 Linux).
Since you don't seem to be familiar with pip, the -r argument tells it to use a file containing dependencies, and the -t argument tells it to install those dependencies in the current directory (not a great habit, but I don't want to describe virtual environments).
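In your case that means dropping --system but keeping -t, roughly:
pip3 install -r requirements.txt -t .
With the dependencies unpacked into the project directory, the serverless packaging step can bundle them alongside your handler, so the Lambda can find the modules at runtime.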
I am trying to install the package "data.table" (and "aws.s3") via RStudio Server on an Amazon Linux instance, following this guide:
http://stanke.co/category/r/
Unfortunately, I get the following error message. I really don't know what else to do.
Can anybody help? I installed devtools, and I am able to install other packages such as xml2 and dplyr.
I had the same issue on AWS and have already fixed it.
You first need to install gcc64 and the OpenMP shared support library:
sudo yum install gcc64
sudo yum install libgomp
Then, under your user home, create a .R folder with a Makevars file in it containing the following (this tells R which compiler to use):
CC = /usr/bin/gcc64
CXX = /usr/bin/g++
SHLIB_OPENMP_CFLAGS = -fopenmp
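If it helps, you can create that file straight from the shell (a quick sketch; adjust the paths if your compilers live elsewhere):
mkdir -p ~/.R
cat > ~/.R/Makevars <<'EOF'
CC = /usr/bin/gcc64
CXX = /usr/bin/g++
SHLIB_OPENMP_CFLAGS = -fopenmp
EOF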
I hope it works for you as well.
You need to install dmlc-core.
This link will provide more information:
A common bricks library for building scalable and portable distributed machine learning
Based on https://github.com/RcppCore/RcppArmadillo/issues/200, I think this is due to a g++ compatibility issue. It might also explain why, when I installed devtools, it kept giving me [-Wdeprecated-declarations] warnings.
So run:
sudo yum remove gcc72-c++.x86_64 libgcc72.x86_64
sudo yum install R-devel
Then you should be able to run the installation command.
I am installing the AWS CLI on a Mac. Previously I installed Anaconda to manage my Python versions, so I installed Python using conda. Now I want to install the AWS CLI.
By using pip:
pip3 install awscli --upgrade --user
The installation was successful. However, when I run
aws --version
it told me the aws command was not found.
I then tried to add it to the command-line path, but I could not find where it was installed.
When I run
which python
It gave me
/anaconda/bin/python
People say this might not be the real folder, and indeed I could not find the AWS CLI under it either.
I then run
ls -al /anaconda/bin/python
It gives
lrwxr-xr-x 1 mac staff 9 Aug 15 20:14 /anaconda/bin/python -> python3.6
I don't understand the path at all.
How could I find where my aws cli installed?
I ran into the same issue and eventually found the awscli command in ~/.local/bin. Just add /Users/<username>/.local/bin to your $PATH.
You can do this by editing ~/.bash_profile, which probably already has these lines in it:
# added by Anaconda3 4.4.0 installer
export PATH="/Users/<username>/anaconda/bin:$PATH"
You could add another copy of this line with the Anaconda path replaced by the new one, but I just updated the existing line, since the two are related:
# added by Anaconda3 4.4.0 installer
export PATH="/Users/<username>/.local/bin:/Users/<username>/anaconda/bin:$PATH"
I solved the problem by using conda to install awscli.
conda install -c conda-forge awscli
worked so far. It seems that pip install does not work well with a conda-installed Python... Is this conclusion true?
If it installs and then says "command not found", it probably just means that the executable it installed is not referenced in the operating system's PATH environment variable.
Here is how to add the downloaded executable to PATH: https://docs.aws.amazon.com/cli/latest/userguide/cli-install-macos.html#awscli-install-osx-path
Here is the AWS docs to troubleshoot the issue: https://docs.aws.amazon.com/cli/latest/userguide/troubleshooting.html
I encountered an identical situation.
I solved this by adding the location of the awscli command to the file...
/etc/paths
The location to my awscli command was where others had found it...
~/.local/bin
From my home directory in Mac OS X Terminal, I entered a quick nano command to edit the /etc/paths file...
sudo nano /etc/paths
# For those who don't know...
# sudo is to get admin access.
# nano is a quick and dirty file editor.
# /etc/paths is the file you want to edit.
I entered my password, then I just added the awscli command location at the end of the file...
/Users/UpAndAtThem/.local/bin
Yours might be...
/Users/your_username_here/.local/bin
Still in the nano editor, to exit and save: hit Control+X > hit Y > hit Enter.
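If you'd rather not use an editor at all, appending the line with tee does the same thing (swap in your own username):
echo "/Users/your_username_here/.local/bin" | sudo tee -a /etc/paths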
Here's a quick video...
https://youtu.be/htb_HTwtgmk
Good luck!
I am trying to reproduce my development environment in a Docker image. I am able to get simple dependencies met, such as Python plus a couple of standard packages, largely through the builds from Docker Hub. But when it comes to installing xgboost or pandas, I am having great difficulty.
After looking into the error messages, it looked like I had the wrong version of g++ installed. The build had 4.7, but xgboost requires 4.9+. As I tried to update g++, I kept running into a wall: I couldn't update g++ because I needed another package (apt-add-repository), but to install that package I needed yet another (apt-utils), and so on.
Does anyone have any general advice on setting up a Docker image, or on this specific problem of upgrading g++?
Here is the Docker file:
FROM continuumio/anaconda
MAINTAINER maintainer
RUN apt-get install -y g++-4.9
One test would be to start from a gcc:4.9 image (which uses wheezy) and try to add what the anaconda Dockerfile does.
That way, you start from an image with the right gcc.
You first need to make sure your source list is up to date. The RUN line in the Dockerfile should be:
RUN apt-get update && apt-get install -y g++
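Putting it together, a minimal sketch of the Dockerfile could look like this (which g++ version apt gives you depends on the Debian release behind the base image, so the exact version is an assumption):
FROM continuumio/anaconda
# Refresh the package index and install g++ in the same layer,
# so the install never runs against a stale or empty apt cache.
RUN apt-get update && \
    apt-get install -y --no-install-recommends g++ && \
    rm -rf /var/lib/apt/lists/*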