How to execute a docker run command from a C++ program? - c++

I want to execute "docker run -it Image_name" from a C++ program. Is there any way to achieve this?

Try it the same way you would run any other shell command from C++, using the system() function:
system("docker run -it Image_name");

I can think of two ways you could achieve this.
For a quick-and-dirty approach, you can run shell commands directly from your C++ code. There are a few ways to do this, but the system() function is the easiest if you just want to run the command:
#include <cstdlib>

int main() {
    return std::system("docker run -it Image_name");
}
Bear in mind you will need to make sure the docker executable is in your PATH environment variable. You will also need to consider which operating systems you want to support; a system call on Linux might not behave the same as on Windows. It can be tricky to get system calls right.
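If you also need to capture the command's output rather than just its exit status, popen() from <cstdio> is one option. The sketch below is only illustrative: "Image_name" is the placeholder from the question, -it is dropped because the output is piped rather than attached to a terminal, and on Windows the function is spelled _popen.

// Run a docker command and collect everything it prints to stdout.
#include <cstdio>
#include <iostream>
#include <string>

int main() {
    FILE* pipe = popen("docker run --rm Image_name", "r");
    if (!pipe) {
        std::cerr << "failed to launch docker\n";
        return 1;
    }

    std::string output;
    char buffer[256];
    while (fgets(buffer, sizeof(buffer), pipe) != nullptr) {
        output += buffer;
    }

    int status = pclose(pipe);  // holds the command's exit status
    std::cout << output;
    return status == 0 ? 0 : 1;
}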
Another method is to use the Docker Engine's API directly; the docker CLI itself is just a client for this API. You could connect to the API yourself and make the same calls the docker run -it Image_name command would. The Engine API is documented here: https://docs.docker.com/engine/api/v1.24/ . In API terms, docker run roughly corresponds to creating a container and then starting it.
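If you take the API route, one way to do it from C++ is with libcurl over the daemon's Unix socket. The sketch below is an assumption-laden illustration, not a drop-in solution: it assumes libcurl is available (link with -lcurl), that the daemon listens on /var/run/docker.sock, and it skips proper JSON parsing. It creates a container from the image and starts it, which is approximately what docker run does without the interactive -it part.

// Create and start a container through the Docker Engine API using libcurl.
#include <curl/curl.h>
#include <iostream>
#include <string>

// libcurl write callback: append the response body to a std::string.
static size_t collect(char* data, size_t size, size_t nmemb, void* userp) {
    static_cast<std::string*>(userp)->append(data, size * nmemb);
    return size * nmemb;
}

// POST a JSON body to the Engine API over the Unix socket and return the response.
static std::string post(const std::string& url, const std::string& body) {
    std::string response;
    CURL* curl = curl_easy_init();
    if (!curl) return response;

    curl_slist* headers = curl_slist_append(nullptr, "Content-Type: application/json");
    curl_easy_setopt(curl, CURLOPT_UNIX_SOCKET_PATH, "/var/run/docker.sock");
    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);
    curl_easy_perform(curl);

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    return response;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);

    // POST /containers/create returns JSON such as {"Id":"<64 hex chars>", ...}.
    std::string created = post("http://localhost/v1.24/containers/create",
                               R"({"Image": "Image_name"})");

    // Crude Id extraction; a real program would use a JSON library.
    std::string::size_type pos = created.find("\"Id\":\"");
    if (pos == std::string::npos) {
        std::cerr << "create failed: " << created << "\n";
        return 1;
    }
    std::string id = created.substr(pos + 6, 64);

    // POST /containers/{id}/start actually runs the container.
    post("http://localhost/v1.24/containers/" + id + "/start", "");
    std::cout << "started container " << id << "\n";

    curl_global_cleanup();
    return 0;
}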
The shell command will be the easiest approach. The Engine API approach would take more effort up front, but will result in cleaner, more robust code. The correct approach will depend on your situation.

Related

How to see which files a binary in a lambda container is trying to open

A question about lambda container images and how to debug them.
The problem: We need to run a legacy binary to process data in a serverless fashion. It is run in a container image on Lambda with a wrapper written in Node.js. However, it is crashing with EACCES (errno -13), meaning it cannot access something. Everything works locally.
My first thought was to strace it, but the Lambda environment doesn't grant the necessary permission.
Does anyone have a suggestion on how to better replicate the Lambda environment locally, or know of something similar to strace that would help debug which file the legacy binary is trying to access on Lambda?
We have tried debugging it by running the container locally. We are simulating the lambda environment as much as possible.
Image is built from public.ecr.aws/lambda/nodejs:14-x86_64
And for running locally: docker run --read-only --rm --tmpfs /tmp -p 9000:8080 test-image
I.e., only /tmp is writable and the rest of the file system is read-only, like on Lambda.
The legacy binary does need to write to /tmp but, from what I can see of the source code, it doesn't try to write anywhere else. However, the source is very legacy.
Everything works correctly when run locally, but it fails when run on Lambda.
From running strace against the locally run image, all files it tries to access (for read or read-write) should already be accessible. Only files under /tmp are opened for writing.
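One strace-like workaround that sometimes helps when you can't run strace itself is to preload a tiny shared library that logs failed open() calls. This is only a sketch built on several assumptions: the legacy binary must be dynamically linked against libc, LD_PRELOAD must be settable in the Lambda runtime (e.g. via an environment variable), the file name trace_open.so is made up, and calls that go through openat() or other entry points will be missed.

// trace_open.cpp: log every open() that fails, as a lightweight strace substitute.
// Build: g++ -shared -fPIC -o trace_open.so trace_open.cpp -ldl
// Use:   LD_PRELOAD=/tmp/trace_open.so <legacy binary> ...
#include <dlfcn.h>
#include <fcntl.h>
#include <cerrno>
#include <cstdarg>
#include <cstdio>

extern "C" int open(const char* path, int flags, ...) {
    using open_fn = int (*)(const char*, int, ...);
    static open_fn real_open =
        reinterpret_cast<open_fn>(dlsym(RTLD_NEXT, "open"));

    // open() only carries a mode argument when O_CREAT is set.
    mode_t mode = 0;
    if (flags & O_CREAT) {
        va_list args;
        va_start(args, flags);
        mode = static_cast<mode_t>(va_arg(args, unsigned int));
        va_end(args);
    }

    int fd = real_open(path, flags, mode);
    if (fd < 0) {
        // EACCES (errno 13) failures will show up here with the offending path.
        fprintf(stderr, "open(\"%s\", 0x%x) failed: errno=%d\n", path, flags, errno);
    }
    return fd;
}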

Run command conditionally while deploying with amazon ECS

I have a Django app deployed on ECS. I need to run fixtures, a management command that has to be run inside the container, on my first deployment.
I want to be able to run fixtures conditionally, not only on the first deployment. One approach I was considering is to maintain a variable in my environment and have entrypoint.sh run the fixtures accordingly.
Is this a good way to go about it? Also, what are some other standard ways to do the same?
Let me know if I have missed some details you might need to understand my problem.
You probably need to handle it in your entrypoint.sh script. As far as my experience goes, you won't be able to run commands conditionally on ECS without such a script.

Does Google Life Sciences API support dependencies and parallel commands?

From what I can tell from the docs, the Google Cloud Life Sciences API (v2beta) allows you to define a "pipeline" with multiple commands in it, but these run sequentially.
Am I correct in thinking there is no way to have some commands run in parallel, and for a group of commands to be dependent on others (that is, to not start running until their predecessors have finished)?
You are correct that you cannot run commands in parallel, or in such a way that the process is dependent upon the completion of some other process.
When you run commands using the commands[] field, it is exactly the same as passing the CMD parameter to a Docker container (because that is exactly what it does). The commands[] field overrides the CMD arguments passed to the Docker container at runtime. If the container uses an ENTRYPOINT, then the commands[] field overrides the ENTRYPOINT argument values for the container.
You can review the official documentation here:
Method: projects.locations.pipelines.run
gcloud command-line tool examples
gcloud beta lifesciences

How to make Cygwin the default shell for Jenkins?

I'm trying to come up with a sensible solution for a build written using SCons, which relies on quite a lot of applications being accessible in a Unix-like way, using Unix-like paths and so on. However, when I try to use the SCons plugin or the Git plugin in Jenkins, it invokes the plugins using something like cmd /c git.exe - and this will certainly fail, because Git was installed using Cygwin and is only known inside the Cygwin shell, not in CMD. But even if I could make git and the rest available to cmd.exe, other problems arise: the Cygwin version of Git expects paths to have forward slashes and treats backward slashes as escape characters. Idiotic Windows file-system related issues kick in too (I can't give Jenkins permissions to delete my own files!).
So, is there a way to somehow make Jenkins only use Cygwin shell, and never cmd.exe? Or should I be prepared to run some Linux in a VM to have this handled?
You could configure Jenkins to execute a Cygwin shell with a specific command, as follows:
c:\cygwin\bin\mintty --hold always --exec /cygdrive/c/path/to/bash/script.sh
where script.sh executes all the commands needed for the Jenkins job.
Just for the record here's what I ended up doing:
Added a user SYSTEM to Cygwin, mkpasswd -u SYSTEM
Edited /etc/passwd by adding the newly created user's home directory to the record. Now it looks something like the below:
SYSTEM:*:18:544:,S-1-5-18:/home/SYSTEM:
Copied my own user's configuration settings such as .netrc, .ssh and so on into the SYSTEM home. Then, from Windows Explorer, through an array of popups, I transferred ownership of all of these files to the SYSTEM user. One by one! I love Microsoft!
In Jenkins I now run a wrapper for my build that sets some other environment variables etc. by calling c:\cygwin\bin\bash --login -i /path/to/script/script
Gave it up because of other difficulties in configuration and made the Jenkins service run under my user rather than SYSTEM. Here's a blog post on how to do it: http://antagonisticpleiotropy.blogspot.co.il/2012/08/running-jenkins-in-windows-with-regular.html but, basically, you need to open Windows services, find the Jenkins service, open its properties, go to the "Log On" tab and change the user to your own account.
One way to do this is to start your "execute shell" build steps with
#!c:\cygwin\bin\bash --login
The trick is of course that it resets your current directory so you need to
cd `cygpath $WORKSPACE`
to get back to the workspace.
Adding to thon56's good answer: this is helpful: "set -ex"
#!c:\cygwin\bin\bash --login
cd `cygpath $WORKSPACE`
set -ex
Details:
-e to exit on error. This is important if you want your jobs to fail on error.
-x to echo each command to the screen, if desired.
You can also use #!c:\cygwin\bin\bash --login -ex, but that echoes a lot of login steps that you most likely don't care to see.

Building project from cron task

When I build the project from the terminal using the 'xcodebuild' command it succeeds, but when I try to run the same script from a cron task I receive the error
"Code Sign error: The identity '****' doesn't match any valid certificate/private key pair in the default keychain"
I think the problem is in the settings and permissions of the crontab utility; it seems crontab does not see my keychain.
Can anyone provide a terminal command to make my keychain visible to crontab?
I encountered a similar issue when trying to build nightly via cron. The only resolution I found was to create a plist in /Library/LaunchDaemons/ and load it via launchctl. The key necessary is "SessionCreate", otherwise you will quickly run into problems similar to what was encountered with cron -- namely that your user login.keychain is not available to the process. "SessionCreate" is similar to "su -l" in that (as far as I understand) it simulates a login, and thus the default keychains you expect will be available; otherwise, you are stuck with only the System keychain despite the task running as your user.
I found the answers (though not the top answer currently) here useful in troubleshooting this issue: Missing certificates and keys in the keychain while using Jenkins/Hudson as Continuous Integration for iOS and Mac development
Which account do you execute your cron job with? That is most probably the problem!
You can add
echo `whoami`
at the beginning of your script to see with which user the script is launched.
Also, when a Bash script is launched from cron, it doesn't use the same environment variables (it runs as a non-login shell) as when you launch it as a user.
When the script launches from cron, it doesn't load your $HOME/.profile (or .bash_profile). Anything you run from cron has to be 100% self-sufficient in terms of its environment. I'd suggest you make yourself a file called something like "set_build_env.sh". It should contain everything from your .profile that you need to build, such as $PATH, $HOME, $CLASSPATH etc. Then in your build script, load set_build_env.sh using the dot notation or the source command, as ericc said. You should also remove the build-specific lines from your .profile and then source set_build_env.sh from there too, so there is only one place to maintain. Example:
source /home/dmitry/set_build_env.sh #absolute path
. /home/dmitry/set_build_env.sh #dot-space notation same as "source"