'VBoxManage guestcontrol' to run shell script on guest - virtualbox

I have a VirtualBox VM that runs a server which can be accessed via localhost and forwarded ports.
I need to run some shell scripts and implement some business logic based on the results.
I tried the following command as an example:
VBoxManage guestcontrol <UUID> exec --image /bin/sh --username <su username> --password <su password> --wait-exit --wait-stdout --wait-stderr -- "[ -d /<server_folder>/ ] && echo "OK" || echo "Server is not installed""
but I'm getting the error:
/bin/sh: [ -d <server_folder> ] && echo : No such file or directory
What is wrong with the syntax above?

First, make sure that VBoxManage.exe is in your PATH!
Second, you have to be careful with your quoting. You used:
"[ -d /<server_folder>/ ] && echo "OK" || echo "Server is not installed""
You have to use single quotes for the outermost quotation:
'[ -d /<server_folder>/ ] && echo "OK" || echo "Server is not installed"'
Finally, you have to add -c in front of your arguments (so that the guest runs /bin/sh -c '...').
The complete command:
VBoxManage guestcontrol <UUID> exec --image /bin/sh --username <su username> --password <su password> --wait-exit --wait-stdout --wait-stderr -- -c '[ -d /<server_folder>/ ] && echo "OK" || echo "Server is not installed"'
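Since the goal is to implement business logic based on the result, here is a minimal sketch of how the guest output could be used on the host side (an illustration only; it assumes a POSIX shell on the host such as Git Bash or WSL, and <UUID>, the credentials and <server_folder> are placeholders exactly as in the question):
# Capture the guest's stdout; --wait-stdout makes VBoxManage forward it.
out=$(VBoxManage guestcontrol <UUID> exec --image /bin/sh \
    --username <su username> --password <su password> \
    --wait-exit --wait-stdout -- -c \
    '[ -d /<server_folder>/ ] && echo "OK" || echo "Server is not installed"')
# Branch on what the guest printed.
case "$out" in
    *OK*) echo "Server folder found, continuing..." ;;
    *)    echo "Server is not installed, aborting."; exit 1 ;;
esac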

Related

EBS FFMPEG Not Installing from .ebextensions

I am trying to install FFMPEG on EBS. I have the following in the directory:
rootfolder/.ebextensions/packages.config
With the following info inside the file:
packages:
  yum:
    ImageMagick: []
    ImageMagick-devel: []
commands:
  01-wget:
    command: "wget -O /tmp/ffmpeg.tar.xz https://johnvansickle.com/ffmpeg/releases/ffmpeg-release-i686-static.tar.xz"
  02-mkdir:
    command: "if [ ! -d /opt/ffmpeg ] ; then mkdir -p /opt/ffmpeg; fi"
  03-tar:
    command: "tar xvf /tmp/ffmpeg.tar.xz -C /opt/ffmpeg"
  04-ln:
    command: "if [[ ! -f /usr/bin/ffmpeg ]] ; then ln -sf /opt/ffmpeg/ffmpeg-3.4-64bit-static/ffmpeg /usr/bin/ffmpeg; fi"
  05-ln:
    command: "if [[ ! -f /usr/bin/ffprobe ]] ; then ln -sf /opt/ffmpeg/ffmpeg-3.4-64bit-static/ffprobe /usr/bin/ffprobe; fi"
  06-pecl:
    command: "if [ `pecl list | grep imagick` ] ; then pecl install -f imagick; fi"
But when I SSH into my instance, the ffmpeg command is not installed. Any ideas as to why?
It does not work because you are linking the wrong version.
You are trying to link:
/opt/ffmpeg/ffmpeg-3.4-64bit-static/ffmpeg
However, it should be:
/opt/ffmpeg/ffmpeg-4.3.1-i686-static/ffmpeg
Same for ffprobe. It should be:
/opt/ffmpeg/ffmpeg-4.3.1-i686-static/ffprobe
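If you would rather not hard-code the version at all, here is a sketch of a version-agnostic variant of the 04-ln/05-ln commands (it assumes exactly one ffmpeg-*-static directory gets extracted under /opt/ffmpeg):
# Resolve whatever ffmpeg-*-static directory the tarball actually extracted,
# so the links keep working when the release version changes.
FFMPEG_DIR=$(find /opt/ffmpeg -maxdepth 1 -type d -name 'ffmpeg-*-static' | head -n 1)
ln -sf "$FFMPEG_DIR/ffmpeg" /usr/bin/ffmpeg
ln -sf "$FFMPEG_DIR/ffprobe" /usr/bin/ffprobe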

Errors adding Environment Variables to NodeJS Elastic Beanstalk

My configuration worked up until yesterday. I have added the nginx NodeJS https redirect extension from AWS. Now, when I try to add a new Environment Variable through the Elastic Beanstalk configuration, I get this error:
[Instance: i-0364b59cca36774a0] Command failed on instance. Return code: 137 Output: + rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf + service nginx stop Stopping nginx: /sbin/service: line 66: 27395 Killed env -i PATH="$PATH" TERM="$TERM" "${SERVICEDIR}/${SERVICE}" ${OPTIONS}. Hook /opt/elasticbeanstalk/hooks/configdeploy/post/99_kill_default_nginx.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
When I look at the eb-activity.log, I see this error:
[2018-02-18T17:24:58.762Z] INFO [13848] - [Configuration update 1.0.61#112/ConfigDeployStage1/ConfigDeployPostHook/99_kill_default_nginx.sh] : Starting activity...
[2018-02-18T17:24:58.939Z] INFO [13848] - [Configuration update 1.0.61#112/ConfigDeployStage1/ConfigDeployPostHook/99_kill_default_nginx.sh] : Activity execution failed, because: + rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
+ service nginx stop
Stopping nginx: /sbin/service: line 66: 14258 Killed env -i PATH="$PATH" TERM="$TERM" "${SERVICEDIR}/${SERVICE}" ${OPTIONS} (ElasticBeanstalk::ExternalInvocationError)
caused by: + rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
+ service nginx stop
Stopping nginx: /sbin/service: line 66: 14258 Killed env -i PATH="$PATH" TERM="$TERM" "${SERVICEDIR}/${SERVICE}" ${OPTIONS} (Executor::NonZeroExitStatus)
What am I doing wrong? And what has changed recently, since this worked fine when I changed an Environment Variable a couple of months ago?
I had this problem as well and Amazon acknowledged the error in the documentation. This is a working restart script that you can use in your .ebextensions config file.
/opt/elasticbeanstalk/hooks/configdeploy/post/99_kill_default_nginx.sh:
  mode: "000755"
  owner: root
  group: root
  content: |
    #!/bin/bash -xe
    rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
    status=`/sbin/status nginx`
    if [[ $status = *"start/running"* ]]; then
      echo "stopping nginx..."
      stop nginx
      echo "starting nginx..."
      start nginx
    else
      echo "nginx is not running... starting it..."
      start nginx
    fi
service nginx stop exits with status 137 (Killed).
Your script starts with: #!/bin/bash -xe
The -e option makes the script exit immediately whenever a command exits with a non-zero status.
If you want to continue the execution, you need to catch the exit status (137).
/opt/elasticbeanstalk/hooks/configdeploy/post/99_kill_default_nginx.sh:
  mode: "000755"
  owner: root
  group: root
  content: |
    #!/bin/bash -xe
    rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
    status=`/sbin/status nginx`
    if [[ $status = *"start/running"* ]]; then
      set +e
      service nginx stop
      exitStatus=$?
      if [ $exitStatus -ne 0 ] && [ $exitStatus -ne 137 ]
      then
        exit $exitStatus
      fi
      set -e
    fi
    service nginx start
The order of events looks like this to me:
Create a post-deploy hook to delete /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
Run a container command to delete /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
Run the post-deploy hook, which tries to delete /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
So it doesn't seem surprising to me that the post-deploy script fails as the file you are trying to delete probably doesn't exist.
I would try one of two things:
Move the deletion of the temporary conf file from the container command to the 99_kill_default_nginx.sh script, then remove the whole container command section.
Remove the line rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf from the 99_kill_default_nginx.sh script.
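A minimal sketch of the second option, i.e. the hook from the answer above with the rm line dropped and everything else unchanged:
#!/bin/bash -xe
# The container command keeps sole responsibility for deleting the conf file.
status=`/sbin/status nginx`
if [[ $status = *"start/running"* ]]; then
    echo "stopping nginx..."
    stop nginx
    echo "starting nginx..."
    start nginx
else
    echo "nginx is not running... starting it..."
    start nginx
fi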
/sbin/status nginx seems not to work anymore. I updated the script to use service nginx status:
/opt/elasticbeanstalk/hooks/configdeploy/post/99_kill_default_nginx.sh:
  mode: "000755"
  owner: root
  group: root
  content: |
    #!/bin/bash -xe
    rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
    status=$(service nginx status)
    if [[ "$status" =~ "running" ]]; then
      echo "stopping nginx..."
      stop nginx
      echo "starting nginx..."
      start nginx
    else
      echo "nginx is not running... starting it..."
      start nginx
    fi
And the faulty script is STILL in Amazon's docs... I wonder when they are going to fix it. It's been long enough already.

Elastic-Beanstalk: How to find the S3 Bucket From the EB Instance

I'm interested in creating a backup of the current application as an application version. To do that, I'd like to track down the S3 bucket where the application versions are saved, so I can save a new file to that location.
Does anyone know how this is done?
To do this:
1. Get the instance ID.
2. Get the environment ID.
3. Get the app name.
4. Get the app versions.
The bucket name is in the App Versions response.
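For illustration, step 4 with the AWS CLI looks roughly like this; the bucket name sits in the SourceBundle of each version (<app name> and <region> are placeholders, and it assumes the instance can reach the Elastic Beanstalk API):
aws elasticbeanstalk describe-application-versions \
    --application-name <app name> --region <region> \
    --query 'ApplicationVersions[0].SourceBundle.S3Bucket' --output text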
Based on the above, here is my entire backup script. It will figure out steps 1-4 above, then zip up the current app directory, send it to S3, and create a new app version based on it.
#!/bin/bash
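# The script below calls a die() helper that it never defines;
# a minimal definition (my assumption) so the || die error paths actually work:
die() { echo "$*" >&2; exit 1; }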
EC2_INSTANCE_ID="`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id || die \"wget instance-id has failed: $?\"`"
test -n "$EC2_INSTANCE_ID" || die 'cannot obtain instance-id'
EC2_AVAIL_ZONE="`wget -q -O - http://169.254.169.254/latest/meta-data/placement/availability-zone || die \"wget availability-zone has failed: $?\"`"
test -n "$EC2_AVAIL_ZONE" || die 'cannot obtain availability-zone'
EC2_REGION="`echo \"$EC2_AVAIL_ZONE\" | sed -e 's:\([0-9][0-9]*\)[a-z]*\$:\\1:'`"
EB_ENV_ID="`aws ec2 describe-instances --instance-ids $EC2_INSTANCE_ID --region $EC2_REGION | grep -B 1 \"elasticbeanstalk:environment-id\" | grep -Po '\"Value\":\\s+\"[^\"]+\"' | cut -d':' -f 2 | grep -Po '[^\"]+'`"
EB_APP_NAME="`aws elasticbeanstalk describe-environments --environment-id $EB_ENV_ID --region $EC2_REGION | grep 'ApplicationName' | cut -d':' -f 2 | grep -Po '[^\\",]+'`"
EB_BUCKET_NAME="`aws elasticbeanstalk describe-application-versions --application-name $EB_APP_NAME --region $EC2_REGION | grep -m 1 'S3Bucket' | cut -d':' -f 2 | grep -Po '[^\", ]+'`"
DATE=$(date -d "today" +"%Y%m%d%H%M")
FILENAME=application.$DATE.zip
cd /var/app/current
sudo -u webapp zip -r -J /tmp/$FILENAME *
sudo mv /tmp/$FILENAME ~/.
sudo chmod 755 ~/$FILENAME
cd ~
aws s3 cp $FILENAME s3://$EB_BUCKET_NAME/
rm -f ~/$FILENAME
aws elasticbeanstalk create-application-version --application-name $EB_APP_NAME --version-label "$DATE-$FILENAME" --description "Auto-Save $DATE" --source-bundle S3Bucket="$EB_BUCKET_NAME",S3Key="$FILENAME" --no-auto-create-application --region $EC2_REGION
tl;dr: most probably you want
jq -r '..|.url?' /etc/elasticbeanstalk/metadata-cache | grep -oP '//\K(.*)(?=\.s3)'
Detailed version:
Add this to any of your .ebextensions config files:
packages:
  yum:
    jq: []
(assuming you are using Amazon Linux or CentOS)
and run
export BUCKET="$(jq -r '..|.url?' /etc/elasticbeanstalk/metadata-cache | grep -oP '//\K(.*)(?=\.s3)' || sudo !!)"
echo bucketname: $BUCKET
Then the full S3 path is accessible via:
bash -c 'source /etc/elasticbeanstalk/.aws-eb-stack.properties; echo s3://${BUCKET} -r ${region:-eu-east-3}'
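Putting the two pieces together, a sketch that prints a full s3:// prefix (it assumes jq is installed as above and that both files referenced in this answer exist on the instance):
# Extract the bucket name from the cached Elastic Beanstalk metadata.
BUCKET="$(jq -r '..|.url?' /etc/elasticbeanstalk/metadata-cache | grep -oP '//\K(.*)(?=\.s3)')"
# The properties file defines $region for this environment.
source /etc/elasticbeanstalk/.aws-eb-stack.properties
echo "s3://${BUCKET} (region: ${region:-unknown})"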

AWS CLI commands on JENKINS

I'm trying to write a script that, with the help of Jenkins, will look at the updated files in git, download them, and encrypt them using AWS KMS. I have a working script that does it all, and the file is downloaded to the Jenkins workspace on the local server. But my problem is encrypting this file in the Jenkins workspace. Basically, when I encrypt files on my local computer, I use the command:
aws kms encrypt --key-id xxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxx --plaintext fileb://file.json --output text --query CiphertextBlob | base64 --decode > Encrypted-data.json
and all is OK, but if I try to do it with Jenkins I get an error saying the aws command is not found.
Does somebody know how to solve this problem and make the aws CLI run through Jenkins?
Here is my working code which breaks down on the last line:
bom_sniffer() {
    head -c3 "$1" | LC_ALL=C grep -qP '\xef\xbb\xbf';
    if [ $? -eq 0 ]
    then
        echo "BOM SNIFFER DETECTED BOM CHARACTER IN FILE \"$1\""
        exit 1
    fi
}

check_rc() {
    # exit if passed in value is not = 0
    # $1 = return code
    # $2 = command / label
    if [ $1 -ne 0 ]
    then
        echo "$2 command failed"
        exit 1
    fi
}
# finding files that differ from this commit and master
echo 'git fetch'
check_rc $? 'echo git fetch'
git fetch
check_rc $? 'git fetch'
echo 'git diff --name-only origin/master'
check_rc $? 'echo git diff'
diff_files=`git diff --name-only $GIT_PREVIOUS_COMMIT $GIT_COMMIT | xargs`
check_rc $? 'git diff'
for x in ${diff_files}
do
    echo "${x}"
    cat ${x}
    bom_sniffer "${x}"
    check_rc $? "BOM character detected in ${x},"
    aws configure kms encrypt --key-id xxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxx --plaintext fileb://${x} --output text --query CiphertextBlob | base64 --decode > Encrypted-data.json
done
After discussion with you, this is how the issue was resolved:
First, corrected the command by removing configure from it.
Installed the awscli for the jenkins user:
pip install awscli --user
Used the absolute path of aws in your script.
For example, if it's in ~/.local/bin/, use ~/.local/bin/aws kms encrypt --key-id xxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxx --plaintext fileb://${x} --output text --query CiphertextBlob | base64 --decode > Encrypted-data.json in your script.
Or add the location of aws to your PATH.
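For example, a sketch of the PATH variant at the top of the Jenkins shell step (it assumes awscli was installed with pip install awscli --user for the jenkins user, so the binary lands in ~/.local/bin):
# Make the user-local pip install location visible to the Jenkins shell step.
export PATH="$HOME/.local/bin:$PATH"
# Now the original command works unchanged (note: no "configure"):
aws kms encrypt --key-id xxxxxxx-xxx-xxxx-xxxx-xxxxxxxxxx --plaintext fileb://${x} --output text --query CiphertextBlob | base64 --decode > Encrypted-data.json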

How to install nginx 1.9.15 on amazon linux disto

I'm trying to install the latest version of nginx (>= 1.9.5) on a fresh Amazon Linux instance to make use of HTTP/2. I followed the instructions that are described here -> http://nginx.org/en/linux_packages.html
I created a repo file /etc/yum.repos.d/nginx.repo with this content:
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/mainline/centos/7/$basearch/
gpgcheck=0
enabled=1
If I run yum update and yum install nginx I get this:
nginx x86_64 1:1.8.1-1.26.amzn1 amzn-main 557 k
It seems that it still fetches from the amzn-main repo. How do I install a newer version of nginx?
-- edit --
I added "priority=10" to the nginx.repo file and now I can install 1.9.15 with yum install nginx with this result:
Loaded plugins: priorities, update-motd, upgrade-helper
Resolving Dependencies
--> Running transaction check
---> Package nginx.x86_64 1:1.9.15-1.el7.ngx will be installed
--> Processing Dependency: systemd for package: 1:nginx-1.9.15-1.el7.ngx.x86_64
--> Processing Dependency: libpcre.so.1()(64bit) for package: 1:nginx-1.9.15-1.el7.ngx.x86_64
--> Finished Dependency Resolution
Error: Package: 1:nginx-1.9.15-1.el7.ngx.x86_64 (nginx)
Requires: libpcre.so.1()(64bit)
Error: Package: 1:nginx-1.9.15-1.el7.ngx.x86_64 (nginx)
Requires: systemd
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
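For reference, the full repo file after that edit (the same content as above, with only the priority line added):
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/mainline/centos/7/$basearch/
gpgcheck=0
enabled=1
priority=10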
If you're using Amazon Linux 2, you have to install nginx from the AWS "Extras Repository". To see a list of the available packages:
# View list of packages to install
amazon-linux-extras list
You'll see a list similar to:
0 ansible2 disabled [ =2.4.2 ]
1 emacs disabled [ =25.3 ]
2 memcached1.5 disabled [ =1.5.1 ]
3 nginx1.12 disabled [ =1.12.2 ]
4 postgresql9.6 disabled [ =9.6.6 ]
5 python3 disabled [ =3.6.2 ]
6 redis4.0 disabled [ =4.0.5 ]
7 R3.4 disabled [ =3.4.3 ]
8 rust1 disabled [ =1.22.1 ]
9 vim disabled [ =8.0 ]
10 golang1.9 disabled [ =1.9.2 ]
11 ruby2.4 disabled [ =2.4.2 ]
12 nano disabled [ =2.9.1 ]
13 php7.2 disabled [ =7.2.0 ]
14 lamp-mariadb10.2-php7.2 disabled [ =10.2.10_7.2.0 ]
Use the amazon-linux-extras install command to install it, like:
sudo amazon-linux-extras install nginx1.12
More details are here: https://aws.amazon.com/amazon-linux-2/faqs/.
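After installing, you can verify the version and make sure nginx comes up on boot (a sketch; systemctl is assumed, since Amazon Linux 2 is systemd-based):
nginx -v                       # should report something like nginx/1.12.2
sudo systemctl start nginx     # start it now
sudo systemctl enable nginx    # start it automatically on boot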
At the time of writing, the latest version of nginx available from the AWS yum repo is 1.8.
The best thing to do for now is to build any newer version from source.
The AWS Linux AMI already has the necessary build tools.
For example, for nginx 1.10 (I've assumed you're logged in as the regular ec2-user; anything needing superuser rights is preceded with sudo):
cd /tmp #so we can clean-up easily
wget http://nginx.org/download/nginx-1.10.0.tar.gz
tar zxvf nginx-1.10.0.tar.gz && rm -f nginx-1.10.0.tar.gz
cd nginx-1.10.0
sudo yum install pcre-devel openssl-devel #required libs, not installed by default
./configure \
--prefix=/etc/nginx \
--conf-path=/etc/nginx/nginx.conf \
--pid-path=/var/run/nginx.pid \
--lock-path=/var/run/nginx.lock \
--with-http_ssl_module \
--with-http_v2_module \
--user=nginx \
--group=nginx
make
sudo make install
sudo groupadd nginx
sudo useradd -M -G nginx nginx
cd /tmp && rm -rf nginx-1.10.0 #we are still inside the source dir, so step out before removing it
You'll then want a service file, so that you can start/stop nginx, and load it on boot.
Here's one that matches the above config. Put it in /etc/rc.d/init.d/nginx:
#!/bin/sh
#
# nginx - this script starts and stops the nginx daemon
#
# chkconfig: - 85 15
# description: NGINX is an HTTP(S) server, HTTP(S) reverse \
# proxy and IMAP/POP3 proxy server
# processname: nginx
# config: /etc/nginx/nginx.conf
# config: /etc/sysconfig/nginx
# pidfile: /var/run/nginx.pid
# Source function library.
. /etc/rc.d/init.d/functions
# Source networking configuration.
. /etc/sysconfig/network
# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0
nginx="/etc/nginx/sbin/nginx"
prog=$(basename $nginx)
NGINX_CONF_FILE="/etc/nginx/nginx.conf"
[ -f /etc/sysconfig/nginx ] && . /etc/sysconfig/nginx
lockfile=/var/run/nginx.lock
make_dirs() {
    # make required directories
    user=`$nginx -V 2>&1 | grep "configure arguments:" | sed 's/[^*]*--user=\([^ ]*\).*/\1/g' -`
    if [ -z "`grep $user /etc/passwd`" ]; then
        useradd -M -s /bin/nologin $user
    fi
    options=`$nginx -V 2>&1 | grep 'configure arguments:'`
    for opt in $options; do
        if [ `echo $opt | grep '.*-temp-path'` ]; then
            value=`echo $opt | cut -d "=" -f 2`
            if [ ! -d "$value" ]; then
                # echo "creating" $value
                mkdir -p $value && chown -R $user $value
            fi
        fi
    done
}

start() {
    [ -x $nginx ] || exit 5
    [ -f $NGINX_CONF_FILE ] || exit 6
    make_dirs
    echo -n $"Starting $prog: "
    daemon $nginx -c $NGINX_CONF_FILE
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $prog: "
    killproc $prog -QUIT
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {
    configtest || return $?
    stop
    sleep 1
    start
}

reload() {
    configtest || return $?
    echo -n $"Reloading $prog: "
    killproc $nginx -HUP
    RETVAL=$?
    echo
}

force_reload() {
    restart
}

configtest() {
    $nginx -t -c $NGINX_CONF_FILE
}

rh_status() {
    status $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart|configtest)
        $1
        ;;
    reload)
        rh_status_q || exit 7
        $1
        ;;
    force-reload)
        force_reload
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 0
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
        exit 2
esac
Set the service file to be executable:
sudo chmod 755 /etc/rc.d/init.d/nginx
Now you can start it with:
sudo service nginx start
To load it automatically on boot:
sudo chkconfig nginx on
Finally, don't forget to edit /etc/nginx/nginx.conf to match your requirements and run sudo service nginx reload to refresh the changes.
Note: there is no 1.10 where you're looking. You can see the list here:
http://nginx.org/packages/mainline/centos/7/x86_64/RPMS/
After you run yum update, use yum search nginx to see the different versions available and choose a specific one:
yum search nginx
On CentOS 6 this gives:
nginx.x86_64 : A high performance web server and reverse proxy server
nginx16.x86_64 : A high performance web server and reverse proxy server
nginx18.x86_64 : A high performance web server and reverse proxy server
I have two versions to choose from, 1.6 and 1.8.
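To pick one of them explicitly, install that package by name (a sketch, using the package names from the yum search output above):
sudo yum install -y nginx18    # installs the 1.8.x package line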
You're getting that error because those nginx RPMs are built for RHEL 7, not Amazon Linux. Amazon Linux is a weird hybrid of RHEL 6, RHEL 7, and Fedora. You should contact Amazon and ask them to create a proper nginx19 RPM specifically built for their distro.