Convert Docker image into binary on Mac OS X - c++

Suppose I have a Docker image that runs a fun, but horrendously-complicated-to-compile-on-Mac-OSX, audio editing application that renders audio in realtime.
I have a Docker setup that runs this decently with a Linux image, but the process of getting audio piped to the host system is neither reliable nor performant. I'd really like to just run it directly on the host OS.
I'd like to run this desktop application nicely on Mac, but not have dependencies on Docker, Wine, etc.
Is there a way to make a single binary out of a Docker image that runs natively on Mac OSX?
I assume it would have to depend on some dynamic libs that Docker has, but that's fine.

No, this isn't possible.
Consider that even the "native" Docker Desktop for Mac application actually works with a hidden virtual machine. (Compare the outputs of uname -a and docker run --rm busybox uname -a, for example.) That's why you're able to run Linux containers on a MacOS host. But that also means that, if you were able to package up a Linux container into something that could run directly, it'd have to bring the rest of the Linux VM with it.
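You can see this for yourself; the first command queries the macOS kernel and the second queries the kernel the container actually runs on:
host$ uname -a                          # reports a Darwin (macOS) kernel
host$ docker run --rm busybox uname -a  # reports a Linux kernel: the hidden VM's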
There are tools like Packer that can help with the process of building a VM, but you'll hit the same issues you're already encountering on connecting the VM's display/audio to the host's. Simplifying or automating the native-application build process is probably a better time investment.

Related

deploying to ubuntu instance on aws from windows

I want to deploy a Python project to an Ubuntu instance on AWS from a Windows operating system, but all the tutorials I have encountered use either Ubuntu or Mac as their development/local machine.
Is the deployment from Windows the same, i.e. after creating the instance, would everything I then do from the local Windows system be running inside the Ubuntu instance?
Is there any tutorial which can help me achieve my objective?
Note: I am deploying directly, without git.
Any help would be appreciated.
To transfer files to an Ubuntu instance you can use SSH. From Windows you can download an SSH client such as MobaXterm (https://mobaxterm.mobatek.net/), or install Windows Subsystem for Linux (https://learn.microsoft.com/en-us/windows/wsl/install-win10) and then use scp to copy files (https://www.computerhope.com/unix/scp.htm). Both options require that you have the .pem security file for your instance.
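For example, a transfer might look like this (the key name, file name, and instance address are placeholders for your own values; ubuntu is the default user on Ubuntu AMIs):
scp -i my-key.pem project.zip ubuntu@ec2-12-34-56-78.compute-1.amazonaws.com:/home/ubuntu/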
Your best bet is going to be installing WSL (Windows Subsystem for Linux) and using that to run bash commands. This will make your life a whole lot easier, as you won't have to look for Windows-specific tutorials and can follow Ubuntu/Linux tutorials instead.
What is WSL? It is essentially a Linux VM built into Windows. It will provide you with a terminal running Ubuntu or pretty much any other Linux distro you could want.
How to install WSL
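On a recent Windows 10 or Windows 11 build, a single command from an elevated PowerShell prompt is enough (on older builds, follow the guide linked above):
wsl --install -d Ubuntu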

Can a Docker remote host keep its files synced with your local machine? Django project that needs auto-reloading

I'm considering the purchase of one of the new M1 MacBooks. Docker Desktop is apparently unworkable under their Rosetta 2 engine, and all of my development efforts rely on Docker Desktop and a local development environment that auto-reloads when files are changed.
I haven't done much with Docker remote hosts, but I see that this could be a stop-gap solution until Docker rewrites its engine. Google is failing me... can you keep files on your local machine synced up with your Docker remote host?
No, Docker doesn't do this. Instead, Docker packages your application code into an image; that image can be transferred to a repository (with Docker Hub being the most prominent option), and then run on the remote system, without necessarily needing to have the application code or the interpreter directly installed there. Beyond the image system, Docker has no direct ability to transfer or mount files from one system to another (you could do something like create an NFS-backed named volume, but you would need to run the NFS server yourself).
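As a sketch of that NFS-backed volume idea (the server address, exported path, and names here are hypothetical, and you would have to run and secure the NFS server yourself):
docker volume create --driver local \
    --opt type=nfs \
    --opt o=addr=192.168.1.50,rw \
    --opt device=:/srv/code \
    code-volume
docker run -v code-volume:/app my-image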
For day-to-day development, using your language's native isolation system often works better than trying to simulate a local development environment with Docker. For Python, consider using a tool like Pipenv (which manages a Pipfile) to create a virtual environment. Python is reasonably platform-independent, so you shouldn't notice any difference between Apple silicon and Intel.
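A minimal sketch with Pipenv (assuming a Django project; the package name is just an example):
pip install pipenv
pipenv install django        # records the dependency in a Pipfile
pipenv shell                 # activate the virtual environment
python manage.py runserver   # Django's dev server auto-reloads on file changes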
Don't even consider using the Docker remote API. If you don't secure it perfectly, it's trivial to use it to root the host (and there are many instances of this in the wild). Even when it is configured correctly, you can't use it to mount files from your local system (a docker run -v bind-mount option is always interpreted relative to the Docker host it runs on). If you need to work directly on the remote host for whatever reason, use an ordinary ssh connection.
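For example (the user and host names are placeholders):
ssh user@remote-host                            # work on the host directly
DOCKER_HOST=ssh://user@remote-host docker ps    # or drive the remote daemon over ssh (Docker 18.09+), without exposing the TCP API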

VS Code integration with C++ development-environment inside Docker

I would like to run VSCode on my host machine, but (using its features / extensions) fire up tools from within the dev-env living inside my Docker container.
I have set up a docker image as a development environment for C++. Let's call it dev-env.
It is Linux-based and contains the required libraries, cross-compilation toolchains and various tools we use for building and testing our software (cmake, ninja, cppcheck, clang-tidy, etc.).
I have a Git repository on the host machine, which I mount inside the Docker container.
So my usual workflow would be to run docker:
host$ docker run -v /path/to/my/codebase/on/host:/path/inside/docker -h dev-env --rm -it image_name bash
docker# mkdir -p build && cd build
docker# cmake ..
etc...
And as such, I can build, test and run my tools inside my unified development environment inside the docker.
Now, the goal is to take it out of the terminal to the world of IDE.
I happen to use VS Code.
On host machine, I open my codebase folder in VSCode. Since it's mapped inside the docker, any changes I make locally will be available inside dev-env as well.
But if I now run anything from VSCode (CMake configure, build, etc.) it will call the tools from my host machine, which of course will not work, and is not what I want.
With tasks defined in tasks.json I could probably manage with having them run something like docker exec CONTAINER my_command
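For instance, a task could wrap a command along these lines (the container name dev-env and the paths are assumptions based on the setup above; the container would also need to be kept running, e.g. started with --name dev-env and without --rm):
docker exec dev-env bash -c 'cd /path/inside/docker && mkdir -p build && cd build && cmake .. && ninja'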
It gets more complicated with extensions:
What I would like is to have e.g. the VSCode CMake Tools extension configured in such a way that when I run CMake Configure (in a VSCode running on my host machine), it will actually run the cmake commands from within the Docker container, using the cmake installed inside Docker, not the one from my host machine.
Temporary solution: Forwarding display through X / VNC
That is: installing VSCode inside the Docker container, running an X/VNC server inside it, exposing the port, and connecting to it from the host machine.
Yes, it is possible, I have it running here. But it has many limitations and problems, of which the most painful is the lag/delay.
This is a bad solution in general, so I would strongly push for avoiding it.
Another solution that I can think about:
VSCode instance running as a server inside the docker.
VSCode instance on your host connecting to the server instance.
You do all the work inside your host VSCode, but anytime you run a command, it is executed by a server instance, which runs everything inside Docker.
I guess this would require support from VSCode (or maybe an extension).
The VSCode Live Share extension is not made exactly for that, but its functionality might do the job. I have not tested it yet.

Visual Studio 2017 remote code synchronization

I've been developing a C++ project on a remote Linux server these days; however, I'd like to do all the coding on my Windows machine using VS2017. So I need some kind of synchronization tool such that whenever I save a file in VS2017, the changes are synchronized to the Linux server immediately. Is there any tool or VS2017 extension that can help me?
I don't want to use git as it may cause a lot of meaningless commits.
Several ideas:
Cygwin. Compile your code in the emulated Linux/Unix environment for local testing and use Visual Studio as your IDE. Do final testing on the Linux box less frequently. Can be combined with any of the ideas below.
Git, but with a different branch for commits. Do a squashed merge for all meaningful commits or pull requests to master (a sketch follows this list).
Samba. Mount your Linux file system on your Windows PC or vice versa. Copy files between Windows and Linux as if it were a network drive.
Local VM. Run Linux in a local Virtual Machine with VMWare or VirtualBox. Drag and drop files between Windows host and Linux guest OS using the host/guest extensions stuff. Then you can dink around with deploying to the real Linux machine later.
Personally, for my open source projects where I'm too lazy to boot into Linux locally to test code before deploying to AWS, I basically do some combination of the above.
And #5 of course is: Dropbox. :( I use OneDrive and a Python script on Linux to pull down files.
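Here is the sketch promised in idea 2 (the branch name and commit message are just examples):
git checkout -b wip-sync        # commit freely here while iterating
git checkout master
git merge --squash wip-sync     # stage all the changes as one pending commit
git commit -m "One meaningful commit for the whole change"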

Using Rsync and the delta-transfer algorithm in an application

Is it possible to use a server running an Rsync daemon to update program files on the client machine? Is there a library to use as a connector when developing the Rsync client?
I would not do that if the program files you update on the client machine are frequently or continuously running, in particular if they are server or daemon programs.
On Linux distributions, the package manager (e.g. dpkg on Debian) does a good job of upgrading programs (even daemons). Can't you use it?
Writing over a running program's binary is error-prone.
There is the librsync library on Linux.
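Its companion command-line tool, rdiff, illustrates the three-step delta transfer the library implements (file names are placeholders):
rdiff signature old-file old.sig           # receiver summarizes what it already has
rdiff delta old.sig new-file file.delta    # sender computes the difference
rdiff patch old-file file.delta new-file   # receiver reconstructs the new file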