Jenkins can't copy files to Windows remote host during build

I have a Jenkins server on OS X 10.7 that polls a Subversion server, builds the code, and packages the app. The last step I need to complete is deploying the app on a remote host, which is a Windows share. Note that my domain account has write access to the target folder and the volume is mounted. I've tried using a shell script build step:
sudo cp "path/to/app" "/Volumes/path/to/target"
However, I get a "no tty" response. I was able to run this command successfully in Terminal, but not as a build step in Jenkins.
Does this have something to do with the user Jenkins was started as? As a side note, the default user.name is jenkins and my JENKINS_HOME resides in /Users/Shared/Jenkins. I would appreciate any help on how to achieve this.

Your immediate problem seems to be that you are running Jenkins in the background and sudo wants to prompt for a password. Run Jenkins in the foreground with $ java -jar jenkins.war.
However, this most probably won't solve your problem, as you'll be asked to enter the password when the command runs, from the terminal you started Jenkins from (presumably not what you want). You need to find a way to copy your files without needing root permissions. In general, it is not a good idea to rely on administrative permissions in your builds (there are exceptions, but your case is not one of them).
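For example, if the Jenkins user itself can mount the share, the copy needs no sudo at all. A minimal sketch of such a build step (the server, share, and account names are placeholders, and mount_smbfs will read the password from ~/.nsmbrc or prompt for it):
# Mount the share as the Jenkins user, copy the app, then unmount
MOUNT_POINT="$WORKSPACE/deploy_mount"
mkdir -p "$MOUNT_POINT"
mount_smbfs "//DOMAIN;builduser@fileserver/share" "$MOUNT_POINT"
cp -R "path/to/app" "$MOUNT_POINT/"
umount "$MOUNT_POINT"
Alternatively, if the volume stays mounted under /Volumes and the Jenkins user has write access to it, a plain cp to that path (without sudo) should be enough.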

AWS Batch Failing to launch Dockerfile - standard_init_linux.go:219: exec user process caused: exec format error

I am attempting to use AWS Batch to launch a Linux container which will, in essence, perform the fetch-and-go example included with AWS (download a shell script from S3 and run it).
Does AWS Batch work at all for anyone?
The AWS fetch_and_go example always fails, even after following someone else's guide online that mimicked the AWS example.
I have tried creating Dockerfiles for amazonlinux:latest and ubuntu:20.04 with numerous RUN and CMD variations.
The scripts always seem to fail with the error:
standard_init_linux.go:219: exec user process caused: exec format error
At first I thought this was related to access rights within the amazonlinux image, so I have played with chmod 777, chmod -x, etc. on the sh file.
The final nail in the coffin: my current Dockerfile is literally:
FROM ubuntu:20.04
I launch this using AWS Batch, with no command or parameters passed through, and it still fails with the same error. This almost hints to me that there is either a setup issue with my AWS Batch (I'm using the default wizard settings, except changing to an a1.medium instance) or that AWS Batch has some major issues.
Has anyone had any success with AWS Batch launching their own Dockerfiles? Could you share your examples and/or setup parameters?
Thank you in advance.
A1 instances are ARM-based, first-generation Graviton CPUs. It is highly likely the image you are trying to run expects an x86 CPU (Intel or AMD). Any instance class with a "g" in it ("c6g" or "m6g") is Graviton2, which is also ARM-based and will not work for the default examples.
You can test whether a specific container will run by launching an A1 instance yourself and running the container (after installing docker). My guess is that you will get the same error. Running on Intel or AMD instances should work.
To leverage Batch with ARM, your containerized application will need to work on ARM. If you point me to the exact example, I can give more details on how to adjust it to run on A1 or Graviton2 instances.
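As a quick sanity check, you can compare the CPU architecture of the instance with the architecture the image was built for; a sketch (the image name is a placeholder):
# On the instance: arm64/aarch64 on A1/Graviton, x86_64 on Intel/AMD
uname -m
# Architecture the image was built for, e.g. amd64 vs arm64
docker image inspect --format '{{.Architecture}}' my-batch-job:latest
If the two don't match, you will see exactly the "exec format error" above.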
I had the same issue, and it was because I built the image locally on my M1 Mac.
Try adding --platform linux/amd64 to your docker build command before pushing if this is your case.
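A sketch of what that looks like, assuming a hypothetical image name my-batch-job and an ECR repository to push to:
# Build an x86_64 image even when building on an Apple Silicon (ARM) machine
docker build --platform linux/amd64 -t my-batch-job:latest .
docker tag my-batch-job:latest <account-id>.dkr.ecr.<region>.amazonaws.com/my-batch-job:latest
docker push <account-id>.dkr.ecr.<region>.amazonaws.com/my-batch-job:latest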
In addition to the other answer, you can create multi-arch images yourself, which will provide the correct architecture.
https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/
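For example, with docker buildx you can build and push one image that contains both architectures (a sketch; the repository name is a placeholder):
# Create and use a buildx builder once, then build for both amd64 and arm64
docker buildx create --use
docker buildx build --platform linux/amd64,linux/arm64 -t myrepo/my-batch-job:latest --push .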

Unable to bring up docker project

I'm following this Docker tutorial, which creates a simple Docker-managed Django site, and when I try to run docker-compose up to launch my docker project, I get the ambiguous error:
ERROR: Couldn't connect to Docker daemon at http+docker://localunixsocket - is it running?
The error suggests that the Docker daemon isn't running, but service docker status shows the Docker daemon is running.
If instead I run sudo docker-compose up, then it succeeds, but it chowns a lot of my local development files to the root user, which is easy enough to fix, but annoying.
Why does Docker require root access just to start a local Django development server? How do I fix this?
My versions:
Docker version 18.06.1-ce, build e68fc7a
docker-compose version 1.11.1, build 7c5d5e4
Ubuntu 16.04.5 LTS
If you can run any Docker command at all, you can trivially root the host:
docker run --rm -v /:/host busybox \
  cat /host/etc/shadow
Additionally, Docker containers frequently run as root within their own container space, which means that whatever parts of the host filesystem you choose to expose into them, they can make arbitrary changes as arbitrary user IDs. You can use a docker run -u option to pick a different user ID, but you can pick any user ID, even one that belongs to another user on a shared system.
It is very reasonable to use sudo as a way to get root privileges for things that need it, and this is a typical out-of-the-box Docker configuration.
At the end of the day, the only real gate on this is the Unix permissions on the file /var/run/docker.sock. This is often mode 0660, owned by a dedicated docker group. If you don't mind your normal user being able to read and write arbitrary host files without much control at all, you can add yourself to that group. That's frequently appropriate for something like a developer laptop, but on anything like a production system it deserves some real consideration of its security implications.
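If you decide that trade-off is acceptable, for example on a single-user development machine, a typical sketch on Ubuntu looks like this:
# Add your user to the docker group and pick up the new group membership
sudo usermod -aG docker "$USER"
newgrp docker            # or log out and log back in
docker ps                # should now work without sudo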

Is there any way to use runasoriginaluser in UninstallRun in Inno Setup?

I have written an application using Docker Toolbox and an Inno Setup script for installing the application on Windows 10.
I want to remove the Docker Toolbox VM when I uninstall my program. However, the VM cannot be removed completely by the following Inno Setup script:
[Setup]
PrivilegesRequired=none
[UninstallRun]
Filename: "{cmd}"; Parameters: "/C ""docker-machine rm -y myDocker"""
The command "docker-machine rm -y myDocker" always work whenever runs in my user cmd, but not works in inno setup uninstallrun.
And I checked and found out that docker-toolbox is based on VirtualBox. VirtualBox uses a per-user environment. Becoming root (or any other user) does not give you access or more powers to the original user's VMs. ALWAYS perform VM operations as the user that actually created the VMs. Hence, I have to run a command as a original user in uninstallrun, but I cannot find a way to do so.
Looking forward for a help and support, I have spent lots of times in this problem.
Is there any way to use runasoriginaluser in uninstallrun in inno setup?
You should not modify a specific user profile from an (un)installer that runs with Administrator privileges (installs software for all users).
See Installing application for currently logged in user from Inno Setup installer running as Administrator.
The runasoriginaluser flag is not supported in the UninstallRun section, probably because it would not be of any use there anyway. What the flag does in the Run section is execute the program with the privileges with which the installer was originally started. But the uninstaller (for an installer elevated to Administrator privileges) is executed with Administrator privileges straight away when launched from the Control Panel/Settings app.

Hyperledger: get "/bin/bash: ./scripts/script.sh: No such file or directory" when running "./byfn -m up"

I'm new to Hyperledger and am studying it by following the tutorials on http://hyperledger-fabric.readthedocs.io. I am trying to build the first network using "first-network" in the fabric-samples. ./byfn -m generate is OK, but after typing ./byfn -m up, I get the error
/bin/bash: ./scripts/script.sh: No such file or directory
and the process hangs.
What is going wrong?
PS: The OS is Windows 10.
Check to see if you have a local firewall enabled. Depending on your Docker configuration, a firewall may prohibit the Docker daemon from accessing shared drives as specified in the Docker setup (Windows).
Restart the Docker daemon after applying local firewall changes.
I was facing the same issue and was able to resolve it.
The shared network drive needs to be working for any directory on the local machine to be visible from the container.
Docker, for example, has a "Shared Drives" setting (usually C:\) under which all your byfn.sh paths must be present. The second condition is that you need to run the byfn.sh script as the same user who authenticated the shared drives to the containers. A password change in your Windows environment could break the already existing shared drives, hence causing problems when starting the containers.
Follow these steps:
1. In your Docker terminal, check the $HOME path by typing echo $HOME.
2. Make sure that your fabric-samples folder is located under the path given by $HOME.
3. Follow the steps for generating your first network.
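As a quick sanity check of those steps (the paths below are placeholders for a typical Docker Toolbox setup):
# In the Docker Toolbox terminal
echo $HOME                                                  # e.g. /c/Users/<username>
ls "$HOME/fabric-samples/first-network/scripts/script.sh"   # should exist under the shared drive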
Or try the solution below:
1. Go to Docker's settings.
2. Click on "Reset credentials".
3. Check whether the shared drives include the required drives.
4. If not, include them, apply the changes, and restart Docker and the bash session where you were trying to start your network.
I know the question is old, but I faced a similar issue, so I did the following:
./byfn.sh -m generate
./byfn.sh -m up
I was missing .sh in both commands.

Understanding Fabric

I've just stumbled upon Fabric and the documentation doesn't really make it obvious how it works.
My educated guess is that you need to install it on both the client side and the server side. The Python code is stored on the client side and transferred through Fabric's wire protocol when the command is run. The server accepts connections via the OpenSSH daemon, using the ~/.ssh/authorized_keys file for the current user (or a special user, or one specified in the host name given to the fab command).
Is any of this correct? If not, how does it work?
From the docs:
Fabric is a Python (2.5 or higher) library and command-line tool for streamlining the use of SSH for application deployment or systems administration tasks.
It provides a basic suite of operations for executing local or remote shell commands (normally or via sudo) and uploading/downloading files, as well as auxiliary functionality such as prompting the running user for input, or aborting execution.
So it's just like ssh'ing into a box and running the commands you've put into run()/sudo().
There is no transfer of code, so you only need to have ssh running on the remote machine and have some sort of shell (bash is assumed by default).
If you want remote access to a Python interpreter, you're looking more at something like execnet.
If you want more information on how execution on the remote machine(s) work look to this section of the docs.
Most of what you are saying is correct, except that the fabfile.py file only has to be stored on your client. An SSH server like OpenSSH needs to be installed on your server, and an SSH client needs to be installed on your client.
Fabric then logs into one or more servers in turn and executes the shell commands defined in fabfile.py. If you are in the same directory as fabfile.py, you can run fab --list to see a list of available commands and then fab [COMMAND_NAME] to execute one.
Your key does not need to be added to ~/.ssh/authorized_keys on the server, but if it is, you don't have to type the password every time you want to execute a command.
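For example, a minimal sketch of typical usage from the client side (assuming a fabfile.py in the current directory that defines a task named deploy):
fab --list                                # show the tasks defined in fabfile.py
fab -H user@server1,user@server2 deploy   # SSH into each host and run the task's commands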