SSH execute remote command infacmd.sh is failing - amazon-web-services

sshpass -p "xxx" ssh -t -t abc#usllpz107.net.com 'cd /opt/tools/informatica/ids/Informatica/10.2.0/isp/bin;infacmd.sh oie importObjects -dn Domain_IDS_Dev -un abc -pd "xxx" -rs MRS_IDS_DEV -sdn LDAP_NP -fp /opt/tools/informatica/ids/Informatica/10.2.0/tomcat/bin/source/mapping_import.xml -cp /opt/tools/informatica/ids/Informatica/10.2.0/tomcat/bin/source/import_control_file.xml'| tee -a logfile.log
I am running the above command from a container in a buildspec, and I have also tested it on an EC2 instance. The command fails with the error: sh: infacmd.sh: command not found
But when I run only sshpass -p "xxx" ssh -t -t abc@usllpz107.net.com and then execute the rest of the command manually on the EC2 instance, it works.

Make sure the file exists at that path.
Make sure you have permission to read it.
Make sure the file is executable, or change the command to invoke it through a shell:
; /bin/bash infacmd.sh ...
Note also that a non-interactive shell normally does not have . in its PATH, so even after the cd the script must be called as ./infacmd.sh or by its full path.
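A sketch of the corrected invocation, calling the script by its absolute path so it does not depend on PATH (remaining arguments elided; see the full command above):
sshpass -p "xxx" ssh -t -t abc@usllpz107.net.com '/opt/tools/informatica/ids/Informatica/10.2.0/isp/bin/infacmd.sh oie importObjects ...' | tee -a logfile.log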

Related

Where do I put `.aws/credentials` for Docker awslogs log-driver (and avoid NoCredentialProviders)?

The Docker awslogs documentation states:
the default AWS shared credentials file (~/.aws/credentials of the root user)
Yet if I copy my AWS credentials file there:
sudo bash -c 'mkdir -p $HOME/.aws; cp .aws/credentials $HOME/.aws/credentials'
... and then try to use the driver:
docker run --log-driver=awslogs --log-opt awslogs-group=neiltest-deleteme --rm hello-world
The result is still the dreaded error:
docker: Error response from daemon: failed to initialize logging driver: failed to create Cloudwatch log stream: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors.
Where does this file really need to go? Is it because the Docker daemon isn't running as root but rather some other user and, if so, how do I determine that user?
NOTE: I can work around this on systems using systemd by setting environment variables. But this doesn't work on Google CloudShell where the Docker daemon has been started by some other method.
Ah ha! I figured it out and tested this on Debian Linux (on my Chromebook w/ Linux VM and Google CloudShell):
The .aws folder must be in the filesystem root (/.aws), not in the root user's $HOME folder!
Based on that I was able to successfully run the following:
pushd $HOME; sudo bash -c 'mkdir -p /.aws; cp .aws/* /.aws/'; popd
docker run --log-driver=awslogs --log-opt awslogs-region=us-east-1 --log-opt awslogs-group=neiltest-deleteme --rm hello-world
I initially figured this all out by looking at the Docker daemon's process information:
DOCKERD_PID=$(ps -A | grep dockerd | grep -Eo '[0-9]+' | head -n 1)
sudo cat /proc/$DOCKERD_PID/environ
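Since /proc/<pid>/environ is NUL-separated, the output is easier to read when piped through tr, a small addition to the snippet above:
sudo cat /proc/$DOCKERD_PID/environ | tr '\0' '\n'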
The confusing bit is that Docker's documentation here is wrong:
the default AWS shared credentials file (~/.aws/credentials of the root user)
The true location is /.aws/credentials. I believe this is because the daemon starts before $HOME is actually defined since it's not running as a user process. So starting a shell as root will tell you a different story for tilde or $HOME:
sudo sh -c 'cd ~/; echo $PWD'
That outputs /root but using /root/.aws/credentials does not work!
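For systemd-based hosts, the workaround alluded to in the question's NOTE is typically a drop-in unit file that hands the credentials to the daemon as environment variables. A minimal sketch, with an illustrative file name and redacted values:
# /etc/systemd/system/docker.service.d/aws-credentials.conf
[Service]
Environment="AWS_ACCESS_KEY_ID=..."
Environment="AWS_SECRET_ACCESS_KEY=..."
Then reload and restart the daemon: sudo systemctl daemon-reload && sudo systemctl restart docker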

Sagemaker lifecycle config: could not find conda environment conda_python3

The script below should run a notebook called prepTimePreProcessing whenever an AWS notebook instance starts running.
However, I am getting a "could not find conda environment conda_python3" error from the lifecycle config file.
set -e
ENVIRONMENT=python3
NOTEBOOK_FILE="/home/ec2-user/SageMaker/prepTimePreProcessing.ipynb"
echo "Activating conda env"
source /home/ec2-user/anaconda3/bin/activate "$ENVIRONMENT"
echo "Starting notebook"
nohup jupyter nbconvert --to notebook --inplace --ExecutePreprocessor.timeout=600 --ExecutePreprocessor.kernel_name=python3 --execute "$NOTEBOOK_FILE" &
Any help would be appreciated.
Assuming there are no environment problems, if you open a terminal on the instance in use and run:
conda env list
the result should also contain this line:
python3 /home/ec2-user/anaconda3/envs/python3
After that, you can create a .sh script inside /home/ec2-user/SageMaker containing all the code to run. This way it also becomes versionable, since it lives as a persisted file in the instance space rather than inside an external configuration.
The on-start.sh/on-create.sh file (from this point on I will simply call it script.sh) then becomes trivial:
# PARAMETERS
ENVIRONMENT=python3
# conda env
source /home/ec2-user/anaconda3/bin/activate "$ENVIRONMENT";
echo "'$ENVIRONMENT' env activated"
In the lifecycle config, on the other hand, just write a few lines to invoke the previously created script.sh:
#!/bin/bash
set -e
SETUP_FILE=/home/ec2-user/SageMaker/script.sh
echo "Run setup script"
sh "$SETUP_FILE"
echo "Setup completed!"
Extra
If you want to add a safety check so that the .sh file is read correctly regardless of its line endings, I would also add a conversion:
#!/bin/bash
set -e
SETUP_FILE=/home/ec2-user/SageMaker/script.sh
# convert script to unix format
echo "Converting setup script into unix format"
sudo yum -y install dos2unix > /dev/null 2>&1
dos2unix "$SETUP_FILE" > /dev/null 2>&1
echo "Run setup script"
sh "$SETUP_FILE"
echo "Setup completed!"

ENTRYPOINT just refuses to exec or even shell run

This is my third day of tearing my hair out since the weekend, and I just cannot get ENTRYPOINT to work via GitLab Runner 13.3.1. This is for something that previously worked with a simple ENTRYPOINT ["/bin/bash"], but that was using local Docker Desktop with docker run followed by docker exec commands, which worked like a cinch. Essentially, at the end of it all I previously got a WAR file built.
Currently I build my container in GitLab Runner 13.3.1, push it to an S3 bucket, and then use the image localhost:500/my-recently-builtcontainer and try to do whatever it is I want with the container. But I cannot even get ENTRYPOINT to work, in its exec form or its shell form; at least in the shell form I get to see something. In the exec form it just gave opaque "OCI runtime create failed" errors, so I shifted to the shell form just to see where I could get to.
I keep getting
sh: 1: sh: echo HOME=/home/nonroot-user params=#$ pwd=/ whoami=nonroot-user script=sh ENTRYPOINT reached which_sh=/bin/sh which_bash=/bin/bash PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin; ls -alrth /bin/bash; ls -alrth /bin/sh; /usr/local/bin/entrypoint.sh ;: not found
In my Dockerfile I distinctly have
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN bash -c "ls -larth /usr/local/bin/entrypoint.sh"
ENTRYPOINT "echo HOME=${HOME} params=#$ pwd=`pwd` whoami=`whoami` script=${0} ENTRYPOINT reached which_sh=`which sh` which_bash=`which bash` PATH=${PATH}; ls -alrth `which bash`; ls -alrth `which sh`; /usr/local/bin/lse-entrypoint.sh ;"
The output after I build the container in GitLab is below (I made sure anyone has rights to see this file and use it, just so that I can proceed with my work):
-rwxrwxrwx 1 root root 512 Apr 11 17:40 /usr/local/bin/entrypoint.sh
So I know it is there, and the permission bits show that anybody can read and execute it, so I am perplexed as to why it is reported as not found:
/usr/local/bin/entrypoint.sh ;: not found
entrypoint.sh is ...
#!/bin/sh
export PATH=$PATH:/usr/local/bin/
clear
echo Script is $0
echo numOfArgs is $#
echo paramsPassed is $@
echo whoami is `whoami`
bash --version
echo "About to exec ....."
exec "$#"
It does not even reach inside this entrypoint.sh file.
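(Note that the Dockerfile's ENTRYPOINT calls /usr/local/bin/lse-entrypoint.sh, while the COPY installs, and the error message references, /usr/local/bin/entrypoint.sh; that mismatch alone would produce a not-found error.) For comparison, the conventional exec-form pattern is a minimal sketch like the following, assuming entrypoint.sh is the script shown above:
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
# whatever CMD (or docker run arguments) supplies arrives in the script as "$@"
CMD ["bash"]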

Echo command in Dockerfile

echo "Hi There - Welcome to Docker POC">C:/Users/abc/xyz/POC/poc.html
The above echo command works from Windows PowerShell, but the same does not work when it is included in a Dockerfile as given below.
RUN echo "Hi There - Welcome to Docker POC">C:/Users/abc/xyz/POC/poc.html
The error is: The system cannot find the path specified.
Please help.
This is because the path you give at the end of the RUN command in the Dockerfile refers to a location inside the container, not on your Windows host.
You probably want to run the command in a Docker container. If so, run:
docker run --rm -v C:/Users/abc/xyz/POC/:/POC busybox sh -c 'echo "Hi There - Welcome to Docker POC" > /POC/poc.html'
And you will see the poc.html file in C:/Users/abc/xyz/POC/.
Tell me if I misunderstood your request.
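If the goal is instead to create the file inside the image at build time, the RUN path has to exist in the image's own filesystem. A minimal Dockerfile sketch, assuming a Linux base image:
FROM busybox
# /POC is a directory inside the image, not on the Windows host
RUN mkdir -p /POC && echo "Hi There - Welcome to Docker POC" > /POC/poc.html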

How to launch more than one instance using knife ec2

How do I launch more than one instance using knife ec2? Also, is there any need for a delay between launching the instances?
While launching multiple instances using knife ec2, can we attach different roles to different instances?
Honestly, when it comes to knife ec2 or any of the cloud providers, I use a wrapper bash+tmux script around it.
#!/bin/bash
tmux new-session -s build -n build -d "echo 'start'"
tmux new-window -t build -n backend
tmux send-keys -t build:backend "knife ec2 server create --server-name backend -N backend -E playpen -f 5 -I 9aa3b52b-1471-413f-8b2b-0fbc756491b4 -r 'role[base], recipe[ops::mysql_db_setup], ' -d ubuntu10.04-v4 --private-network" Enter
tmux new-window -t build -n web01
tmux send-keys -t build:web01 "knife ec2 server create --server-name web01 -N web01 -E playpen -f 5 -I 9aa3b52b-1471-413f-8b2b-0fbc756491b4 -r 'role[base],role[web]' -d ubuntu10.04-v4 --private-network" Enter
tmux new-window -t build -n web02
tmux send-keys -t build:web02 "knife ec2 server create --server-name web02 -N web02 -E playpen -f 5 -I 9aa3b52b-1471-413f-8b2b-0fbc756491b4 -r 'role[base],role[web]' -d ubuntu10.04-v4 --private-network" Enter
tmux new-window -t build -n background01
tmux send-keys -t build:background01 "knife ec2 server create --server-name background01 -N background01 -E playpen -f 2 -I 9aa3b52b-1471-413f-8b2b-0fbc756491b4 -r 'role[base],role[background]' -d ubuntu10.04-v4 --private-network" Enter
tmux attach-session -t build
tmux select-window -t build
Or at least something to that effect.
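Because each tmux window runs its own knife ec2 call, the instances are created concurrently, so no artificial delay is needed, and the per-instance run list (the -r flag) is what attaches different roles to different instances. If you don't need tmux, a plain sequential loop is a minimal alternative; a sketch reusing the flags from the commands above (flavor flag omitted for brevity; name/role pairs illustrative):
#!/bin/bash
for spec in "web01:role[web]" "web02:role[web]" "background01:role[background]"; do
  name="${spec%%:*}"   # node name
  role="${spec##*:}"   # extra role for this node
  knife ec2 server create --server-name "$name" -N "$name" -E playpen \
    -I 9aa3b52b-1471-413f-8b2b-0fbc756491b4 \
    -r "role[base],${role}" -d ubuntu10.04-v4 --private-network
done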