I've got a problem with Ansible: I can't ping the server on localhost. I created the hosts file, and this is its content:
testserver ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200 \
ansible_ssh_user=vagrant \
ansible_ssh_private_key_file=.vagrant/machines/default/virtualbox/private_key
I'm using Fedora, and I virtualized Debian with VirtualBox 4.3.
This is what happens in the shell:
[andrea@andrea ~]$ ansible testserver -i /home/andrea/playbooks/hosts -m ping -vvvv
<127.0.0.1> ESTABLISH CONNECTION FOR USER: andrea
<127.0.0.1> REMOTE_MODULE ping
<127.0.0.1> EXEC ssh -C -tt -vvv -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/home/andrea/.ansible/cp/ansible-ssh-%h-%p-%r" -o Port=2200 -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 127.0.0.1 /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1445357197.49-202989636750564 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1445357197.49-202989636750564 && echo $HOME/.ansible/tmp/ansible-tmp-1445357197.49-202989636750564'
testserver | FAILED => SSH Error: Permission denied (publickey,password).
while connecting to 127.0.0.1:2200
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
I think you are not logging in with the correct user/key combination.
Try the following:
ansible -vvvv testserver -i /home/andrea/playbooks/hosts -m ping --private-key=.vagrant/machines/default/virtualbox/private_key -u vagrant
I added:
the -u option to specify the user (according to the line you posted it should be vagrant)
the --private-key option to tell Ansible where to find your SSH private key file
By the way, if you want to log in you should use:
ssh -i .vagrant/machines/default/virtualbox/private_key vagrant@127.0.0.1 -p 2200
(Sven forgot to tell you to use the proper user, so you probably were trying to log in using the "andrea" user)
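As an aside, the command-line flags should not be needed once the inventory itself is fixed: as far as I know, Ansible's INI inventory is parsed line by line and does not honor backslash continuations like the ones in your hosts file, so the user/key variables were probably never applied at all. A minimal sketch of the same inventory with everything for the host on a single line (same variable names you already use):

# hosts -- one line per host, no backslash continuations
testserver ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200 ansible_ssh_user=vagrant ansible_ssh_private_key_file=.vagrant/machines/default/virtualbox/private_key

With that in place, a plain ansible testserver -i hosts -m ping should pick up the vagrant user and the key by itself.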
I am facing this error when I run any kubectl command, but only intermittently: one time the command gives the correct output, and the next time it shows the error below.
[root@ip-10-0-3-103 ec2-user]# kubectl get ns
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
[root@ip-10-0-3-103 ec2-user]# kubectl get ns
NAME              STATUS   AGE
default           Active   4d1h
kube-node-lease   Active   4d1h
kube-public       Active   4d1h
kube-system       Active   4d1h
migration         Active   3d19h
[root@ip-10-0-3-103 ec2-user]#
This is the issue: the very same command behaves in two different ways. I tried the relevant answers on Stack Overflow itself, but they did not work:
$ sudo kubeadm reset
$ sudo swapoff -a
$ sudo kubeadm init --pod-network-cidr=10.244.10.0/16 --kubernetes-version "1.18.3"
$ sudo rm -rf $HOME/.kube
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ sudo systemctl enable docker.service
$ sudo service kubelet restart
$ kubectl get nodes
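An x509 error that comes and goes like this often means kubectl is alternating between two sources of trust, for example a stale kubeconfig selected via KUBECONFIG, or a load balancer rotating between API servers whose certificates were signed by different CAs. A hedged diagnostic sketch using only standard kubectl and openssl commands (API_SERVER is a placeholder; substitute the address your own kubeconfig reports):

# Which kubeconfig is actually in effect? A stray KUBECONFIG can shadow ~/.kube/config.
echo "$KUBECONFIG"
kubectl config view --minify

# Compare the certificate the API server presents against the CA kubectl trusts.
openssl s_client -connect API_SERVER:6443 </dev/null 2>/dev/null | openssl x509 -noout -issuer -subject

If the issuer changes between runs, the problem is on the server side, not in your local config.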
I'm trying to deploy a Django app using Docker and GitLab CI/CD. I have a running instance on AWS and have also created a Postgres database for it. The deployment script shows the following error:
Login Succeeded
$ mkdir -p ~/.ssh
$ echo "$PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa
$ chmod 700 ~/.ssh/id_rsa
$ eval "$(ssh-agent -s)"
Agent pid 28
$ ssh-add ~/.ssh/id_rsa
Error loading key "/root/.ssh/id_rsa": invalid format
Running after_script
00:02
Uploading artifacts for failed job
00:01
ERROR: Job failed: exit code 1
How can I fix this issue?
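One frequent cause of ssh-add's "invalid format" is the private key losing its trailing newline (or gaining Windows line endings) when pasted into a CI/CD variable. A hedged sketch of the same step that guarantees the final newline and skips the intermediate file entirely (printf, tr, and ssh-add - are all standard; $PRIVATE_KEY is your existing variable):

# Strip carriage returns, force a trailing newline, and feed the key
# to the agent on stdin instead of writing ~/.ssh/id_rsa first.
eval "$(ssh-agent -s)"
printf '%s\n' "$PRIVATE_KEY" | tr -d '\r' | ssh-add -

If ssh-add accepts the key this way, the variable's contents are fine and only the file-writing step was mangling it.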
sshpass -p "xxx" ssh -t -t abc@usllpz107.net.com 'cd /opt/tools/informatica/ids/Informatica/10.2.0/isp/bin;infacmd.sh oie importObjects -dn Domain_IDS_Dev -un abc -pd "xxx" -rs MRS_IDS_DEV -sdn LDAP_NP -fp /opt/tools/informatica/ids/Informatica/10.2.0/tomcat/bin/source/mapping_import.xml -cp /opt/tools/informatica/ids/Informatica/10.2.0/tomcat/bin/source/import_control_file.xml'| tee -a logfile.log
I am running the above command from a container in Buildspec, and I also tested it on an EC2 instance; the command fails with the error: sh: infacmd.sh: command not found.
But when I connect with just sshpass -p "xxx" ssh -t -t abc@usllpz107.net.com and execute the other command manually on the EC2 instance, it works.
Make sure the file exists at that path.
Make sure you have access to the file.
Make sure the file is executable, or change the command to invoke it through a shell explicitly, e.g. ; /bin/bash infacmd.sh ... (see the sketch below).
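The usual culprit with cd somedir;somescript.sh over non-interactive SSH is that the remote shell's PATH does not include the current directory, while an interactive login often adds it through profile scripts, which is why your manual run works. A hedged rewrite that avoids relying on PATH at all by calling the script through its absolute path (same host and paths as above; the Informatica flags are shortened to ... here for readability):

sshpass -p "xxx" ssh -t -t abc@usllpz107.net.com '/opt/tools/informatica/ids/Informatica/10.2.0/isp/bin/infacmd.sh oie importObjects ...' | tee -a logfile.log

Calling it as ./infacmd.sh right after the cd would work for the same reason.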
I have a question about a Dockerfile with a CMD command. I am trying to set up a server that needs to run two commands in the Docker container at startup. I am able to run either one service or the other just fine on its own, but if I script it to run both services at the same time, it fails. I have tried all sorts of variations of nohup, &, and Linux task backgrounding, but I haven't been able to solve it.
Here is my project where I am trying to achieve this:
https://djangofan.github.io/mountebank-with-ui-node/
#entryPoint.sh
#!/bin/bash
nohup /bin/bash -c "http-server -p 80 /ui" &
nohup /bin/bash -c "mb --port 2525 --configfile /mb/imposters.ejs --allowInjection" &
jobs -l
It displays this output, but the ports are not listening:
djangofan@MACPRO ~/workspace/mountebank-container (master)*$ ./run-container.sh
f5c50afd848e46df93989fcc471b4c0c163d2f5ad845a889013d59d951170878
f5c50afd848e46df93989fcc471b4c0c163d2f5ad845a889013d59d951170878 djangofan/mountebank-example "/bin/bash -c /scripts/entryPoint.sh" Less than a second ago Up Less than a second 0.0.0.0:2525->2525/tcp, 0.0.0.0:4546->4546/tcp, 0.0.0.0:5555->5555/tcp, 2424/tcp, 0.0.0.0:9000->9000/tcp, 0.0.0.0:2424->80/tcp nervous_lalande
[1]- 5 Running nohup /bin/bash -c "http-server -p 80 /ui" &
[2]+ 6 Running nohup /bin/bash -c "mb --port 2525 --configfile /mb/imposters.ejs --allowInjection" &
And here is my Dockerfile:
FROM node:8-alpine
ENV MOUNTEBANK_VERSION=1.14.0
RUN apk add --no-cache bash gawk sed grep bc coreutils
RUN npm install -g http-server
RUN npm install -g mountebank#${MOUNTEBANK_VERSION} --production
EXPOSE 2525 2424 4546 5555 9000
ADD imposters /mb/
ADD ui /ui/
ADD *.sh /scripts/
# these work when run one at a time
#CMD ["http-server", "-p", "80", "/ui"]
#CMD ["mb", "--port", "2525", "--configfile", "/mb/imposters.ejs", "--allowInjection"]
# this doesn't work yet
CMD ["/bin/bash", "-c", "/scripts/entryPoint.sh"]
One process inside the Docker container has to run in the foreground, not in background mode, because a Docker container keeps running only as long as its main process is running.
The /scripts/entryPoint.sh should be:
#!/bin/bash
nohup /bin/bash -c "http-server -p 80 /ui" &
nohup /bin/bash -c "mb --port 2525 --configfile /mb/imposters.ejs --allowInjection"
Everything else is fine in your Dockerfile.
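If you would rather keep the symmetry and still have the container exit as soon as either service dies, a hedged variant of the same entrypoint using bash's wait -n (available in bash 4.3+, which the apk-installed bash on this image should satisfy; same two commands as above):

#!/bin/bash
# Start both services in the background...
http-server -p 80 /ui &
mb --port 2525 --configfile /mb/imposters.ejs --allowInjection &
# ...then block until the first of them exits, so the container stops with it.
wait -n

The foreground process here is bash itself, parked on wait -n, which satisfies Docker's one-foreground-process requirement.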
I am trying to run an Ansible playbook that provisions EC2 instances in AWS using Jenkins.
My Jenkins application is installed on an EC2 that has required roles to provision instances, and my JENKINS_USER is ec2-user.
I am able to execute the playbook manually when logged in as ec2-user. However, when I try to execute the exact same Ansible command from Jenkins, it stalls indefinitely.
Building in workspace /var/lib/jenkins/workspace/Provision-AWS-Environment-dev
[Provision-AWS-Environment-dev] $ /bin/ansible-playbook /home/ec2-user/efx-devops-jenkins/aws/awsprovision.yml -i /home/ec2-user/efx-devops-jenkins/aws/inventories/dev/hosts -s -f 5 -vvv
Using /etc/ansible/ansible.cfg as config file
PLAYBOOK: awsprovision.yml *****************************************************
2 plays in /home/ec2-user/efx-devops-jenkins/aws/awsprovision.yml
PLAY [awsmaster] ***************************************************************
TASK [provision : Provison "3" ec2 instances in "ap-southeast-2"] **************
task path: /home/ec2-user/efx-devops-jenkins/aws/roles/provision/tasks/main.yml:5
Using module file /usr/lib/python2.7/site-packages/ansible/modules/core/cloud/amazon/ec2.py
<10.39.144.187> ESTABLISH LOCAL CONNECTION FOR USER: ec2-user
<10.39.144.187> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo ~/.ansible/tmp/ansible-tmp-1489656061.65-268771004227615 `" && echo ansible-tmp-1489656061.65-268771004227615="` echo ~/.ansible/tmp/ansible-tmp-1489656061.65-268771004227615 `" ) && sleep 0'
<10.39.144.187> PUT /tmp/tmpvvKnfU TO /home/ec2-user/.ansible/tmp/ansible-tmp-1489656061.65-268771004227615/ec2.py
<10.39.144.187> EXEC /bin/sh -c 'chmod u+x /home/ec2-user/.ansible/tmp/ansible-tmp-1489656061.65-268771004227615/ /home/ec2-user/.ansible/tmp/ansible-tmp-1489656061.65-268771004227615/ec2.py && sleep 0'
<10.39.144.187> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-uatxqcnoparsvzhjhxvlccmbjwaxjqaz; /usr/bin/python /home/ec2-user/.ansible/tmp/ansible-tmp-1489656061.65-268771004227615/ec2.py; rm -rf "/home/ec2-user/.ansible/tmp/ansible-tmp-1489656061.65-268771004227615/" > /dev/null 2>&1'"'"' && sleep 0'
Can anyone identify why I am not able to execute the playbook using Jenkins?
The issue was that the Jenkins master node (where the Ansible playbook was being executed) was missing some environment variables (configured under Manage Jenkins > Manage Nodes > Configure Master). Below is the list of variables I added to the Jenkins master node.
Name: http_proxy
Value: http://proxy.com:123
Name: PATH
Value: /usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/ec2-user/.local/bin:/home/ec2-user/bin
Name: SUDO_COMMAND
Value: /bin/su ec2-user
Name: SUDO_USER
Value: svc_ansible_lab
Once I added the above variables, I was able to execute the Ansible Playbooks with no issues.
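To confirm that the variables actually reach the build environment, one quick check is an "Execute shell" build step that dumps the environment before the Ansible call; a minimal sketch using only standard shell commands:

#!/bin/bash
# Print the environment Jenkins actually hands the job, then confirm
# that ansible-playbook resolves on the configured PATH.
env | sort
which ansible-playbook

If PATH or the proxy variables are missing from that output, the node-level configuration is not being applied to the job.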