VS Code integration with C++ development environment inside Docker

I would like to run VSCode on my host machine, but, using its features/extensions, fire up tools from within the dev-env living inside my Docker container.
I have set up a Docker image as a development environment for C++. Let's call it dev-env.
It is Linux-based and contains the required libraries, cross-compilation toolchains and various tools we use for building and testing our software (cmake, ninja, cppcheck, clang-tidy, etc.).
I have a Git repository on the host machine, which I mount inside the Docker container.
So my usual workflow would be to run docker:
host$ docker run -v path/to/my/codebase/on/host:path/inside/docker -h dev-env --rm -it image_name bash
docker# cd build; cmake ..
etc...
This way I can build, test and run my tools inside my unified development environment in the container.
Now the goal is to take this out of the terminal and into the world of IDEs.
I happen to use VS Code.
On the host machine, I open my codebase folder in VSCode. Since it's mapped inside the container, any changes I make locally will be available inside dev-env as well.
But if I now run anything from VSCode (CMake configure, build, etc.), it will of course call the tools from my host machine, which will not work and is not what I want.
With tasks defined in tasks.json I could probably manage by having them run something like docker exec CONTAINER my_command.
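A minimal sketch of that idea, assuming the dev-env container is already running and reachable as CONTAINER, and reusing the placeholder path from above:

{
    "version": "2.0.0",
    "tasks": [
        {
            // runs the same configure step as the manual workflow, but inside the container
            "label": "CMake configure (dev-env)",
            "type": "shell",
            "command": "docker exec CONTAINER bash -c 'cd path/inside/docker/build && cmake ..'",
            "problemMatcher": []
        }
    ]
}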
It gets more complicated with extensions:
What I would like is to have, e.g., the VSCode CMake Tools extension configured in such a way that when I run CMake Configure (in VSCode running on my host machine), it actually runs the cmake commands inside the Docker container, using the cmake installed there, not the one on my host machine.
Temporary solution: Forwarding display through X / VNC
That is, installing VSCode inside the container, running an X/VNC server inside it, exposing the port, and connecting to it from the host machine.
Yes, it is possible, I have it running here. But it has many limitations and problems, of which the most painful is the lag/delay.
This is a bad solution in general, so I would strongly push for avoiding it.
Another solution I can think of:
VSCode instance running as a server inside the docker.
VSCode instance on your host connecting to the server instance.
You do all the work inside your host VSCode, but anytime you run a command, it is executed by a server instance, which runs everything inside Docker.
I guess this would require support from VSCode (or maybe an extension).
The VSCode Live Share extension is not made exactly for that, but its functionality might do the job. I have not tested it yet.

Related

Should I move windows project folder into WSL?

I'm trying to set up a work environment on a new machine and I am a bit confused about how best to proceed.
I've set up a new Windows machine and have WSL2 set up; I plan on using that with VS Code for my development environment.
I have a previous Django project, stored in a folder on a thumb drive, that I want to continue working on.
Do I move the [Windows] project folder into the Linux file system and everything is magically ready to go?
Will my previous virtual environment in the existing folder still work, or do I need to start a new one?
Is it better to just start a new folder via the Linux terminal and pull the project from GitHub?
I haven't installed pip, Python, or Django on the Windows OR Linux side just yet either.
Any other things to look out for while setting this up would be really appreciated. I'm trying to avoid headaches later by getting it all set up correctly now!
I would pull it from GitHub, and make sure you have the correct settings for line endings, since they differ between Windows and Linux. Just let Git manage these, though:
https://docs.github.com/en/get-started/getting-started-with-git/configuring-git-to-handle-line-endings
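For example, the setting that page recommends on the Linux/WSL side (a sketch; committing a .gitattributes file containing * text=auto to the repo is the more robust, per-project option):

git config --global core.autocrlf input   # commit LF, don't rewrite line endings on checkout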
Some other suggestions:
Use a version manager in Linux to manage your Python versions, something like pyenv or asdf. It will make life easier (see the sketch after these suggestions).
Make sure to always create a virtual environment for everything and don't pip install anything into your main Python. (I use direnv for virtual-env management.)
The single exception to the previous suggestion is pipx, which I do install in the main Python and then use to install CLI tools like black, isort, pip-tools, etc.
Configure VS Code to use the pipx-installed versions of black, flake8, etc. for linting purposes.
If you're using Docker, enable the WSL integration for your WSL flavour (probably Ubuntu). Note that Docker Desktop needs to be started before your WSL session.
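A rough command-line sketch of those suggestions (version numbers, paths and package names are only examples):

# pyenv: install and pin a Python version for the project
pyenv install 3.11.9
pyenv local 3.11.9

# per-project virtual environment, so nothing is pip-installed into the main Python
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt    # or however the project declares its dependencies

# pipx for standalone CLI tools, kept out of project environments
pip install --user pipx
pipx install black
pipx install flake8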

Can a Docker remote host keep its files synced with your local machine? Django project that needs auto-reloading

I'm considering the purchase of one of the new MacBook M1s. Docker Desktop is apparently unworkable with their Rosetta 2 engine, and all of my development efforts rely on Docker Desktop and a local development environment that auto-reloads when files are changed.
I haven't done much with Docker remote hosts, but I see that this could be a stop-gap solution until Docker rewrites its engine. Google is failing me... can you keep files on your local machine synced up with your Docker remote host?
No, Docker doesn't do this. Instead, Docker packages your application code into an image; that image can be transferred to a repository (with Docker Hub being the most prominent option), and then run on the remote system, without necessarily needing to have the application code or the interpreter directly installed there. Beyond the image system, Docker has no direct ability to transfer or mount files from one system to another (you could do something like create an NFS-backed named volume, but you would need to run the NFS server yourself).
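For illustration, that flow usually looks roughly like this (the image name, registry and port are made up):

# on your local machine: build the image and push it to a registry such as Docker Hub
docker build -t myname/myapp:latest .
docker push myname/myapp:latest

# on the remote host, over an ordinary ssh session: pull the image and run it
docker pull myname/myapp:latest
docker run -d -p 8000:8000 myname/myapp:latest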
For day-to-day development, using your language's native isolation system will often work better than trying to simulate a local development environment using Docker. For Python, consider using a tool like Pipenv (with its Pipfile) to create a virtual environment. Python is reasonably platform-independent, so you shouldn't notice any trouble on Apple silicon vs. Intel.
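A minimal sketch of that approach (the commands are Pipenv's; manage.py is the standard Django entry point and is assumed to sit at the project root):

pip install --user pipenv                 # install the tool itself
pipenv install --dev                      # create the virtualenv and install from the Pipfile
pipenv run python manage.py runserver     # Django dev server with its native auto-reload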
Don't even consider using the Docker remote API. If you don't configure it perfectly, it's trivial to use it to root the host (and there are many instances of this in the wild). Even if it is configured correctly, you can't use it to mount files from your local system (a docker run -v bind-mount option is always interpreted relative to the Docker host it runs on). If you need to work directly on the remote host for whatever reason, use an ordinary ssh connection.

Unable to bring up docker project

I'm following this Docker tutorial, which creates a simple Docker-managed Django site, and when I try to run docker-compose up to launch my docker project, I get the ambiguous error:
ERROR: Couldn't connect to Docker daemon at http+docker://localunixsocket - is it running?
The error suggests that the Docker daemon isn't running, but service docker status shows the Docker daemon is running.
If instead I run sudo docker-compose up, then it succeeds, but it chowns a lot of my local development files to the root user, which is easy enough to fix, but annoying.
Why does Docker require root access just to start a local Django development server? How do I fix this?
My versions:
Docker version 18.06.1-ce, build e68fc7a
docker-compose version 1.11.1, build 7c5d5e4
Ubuntu 16.04.5 LTS
If you can run any Docker command at all, you can trivially root the host:
docker run --rm -v /:/host busybox \
cat /host/etc/shadow
Additionally, Docker containers frequently run as root within their own container space, which means that whatever parts of the host filesystem you choose to expose into them, they can make arbitrary changes as arbitrary user IDs. You can use a docker run -u option to pick a different user ID, but you can pick any user ID, even one that belongs to another user on a shared system.
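A small illustration of why -u is not a security boundary here (the UID and paths are arbitrary examples):

# nothing stops you from picking a UID that belongs to some other user on the host
docker run --rm -u 1234 -v /tmp:/host-tmp busybox touch /host-tmp/owned-by-uid-1234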
It is very reasonable to use sudo as a way to get root privileges for things that need it, and this is a typical out-of-the-box Docker configuration.
At the end of the day the only real gate on this is the Unix permissions on the file /var/run/docker.sock. This is often mode 0660, owned by a dedicated docker group. If you don't mind your normal user being able to read and write arbitrary host files without much control at all, you can add yourself to that group. That's frequently appropriate for something like a developer laptop; but on anything like a production system it deserves some real consideration of its security implications.
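If you decide that trade-off is acceptable on your developer laptop, the usual setup is a single command (a sketch; log out and back in afterwards for the group change to take effect):

sudo usermod -aG docker $USER   # add your user to the stock docker group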

Run docker from within toolbox

I would like to use Google Container OS as my cloud development environment. How would I run the docker command from the toolbox? Do I need to add the docker.sock as a bind mount? I need to be able to run docker (and docker-compose) to run my development environment.
Google Container OS images come with docker already installed and configured, so you will be able to use the docker command from the command line without any prior configuration if you create a virtual machine from one of these images, and SSH into the machine.
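For example (the instance and zone names are made up):

gcloud compute ssh my-cos-instance --zone=us-central1-a   # SSH into the VM
docker ps                                                 # docker is already available here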
As for docker-compose, this doesn't come pre-installed. However, you can install it, along with other relevant tools/programs you require, by making use of the toolbox you mentioned, which provides a shell (including a package manager) in a Debian chroot-like environment where you automatically gain root privileges.
You can install docker-compose by following these steps:
1) If you haven't already, enter the toolbox environment by running /usr/bin/toolbox
2) Check the latest version of docker-compose here.
3) Run the following to retrieve and install docker-compose on the machine (substitute the latest version number you found in step 2 for the one shown):
curl -L https://github.com/docker/compose/releases/download/1.18.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
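The downloaded file also needs to be marked executable before it will run, which the compose install instructions include as well:

chmod +x /usr/local/bin/docker-compose
docker-compose --version   # quick sanity check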
You've probably found at this point that although you can now run the freshly installed docker-compose command within the toolbox, you can't run the docker command. This is because, by default, the toolbox environment doesn't have access to all paths within the rootfs, and the available filesystem doesn't correspond between the two environments.
It may be possible to remedy this by exiting the toolbox shell and then editing the /etc/default/toolbox file, which allows you to configure the toolbox settings. This would let you provide access to the docker binary from the standard environment by following these steps:
1) Ensure you are no longer in the toolbox shell, then run the command which docker. You will see something similar to /usr/bin/docker.
2) Open the file /etc/default/toolbox
3) The TOOLBOX_BIND line specifies the paths from the rootfs to be made available inside the toolbox environment. To make docker available there, you could try adding an entry to the TOOLBOX_BIND section, for example --bind=/usr/bin/docker:/usr/bin/docker.
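Illustratively, the resulting line might look something like the following; the stock contents of TOOLBOX_BIND vary between releases, so treat this as a sketch rather than the exact file:

TOOLBOX_BIND="--bind=/var/run/docker.sock:/var/run/docker.sock --bind=/usr/bin/docker:/usr/bin/docker"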
However, I've found that even though it's possible to edit /etc/default/toolbox to make the docker binary available in the toolbox environment, running certain docker commands there still generates additional errors, because the docker version that comes pre-installed on the machine is configured to use particular configuration files and directories. Although it may be possible to make all of the required locations accessible from within the toolbox in the same way, it may be simpler to install docker within the toolbox by following the instructions for installing Docker on Debian found here.
You would then be able to issue both the docker and docker-compose commands from within the toolbox.
Alternatively, it's possible to simply install docker and docker-compose on a standard VM (i.e. without necessarily using a Google Container OS machine type), although the suitability of this depends on your use case.

How to develop remotely in PyCharm?

I have a lab system (with a hardware piece attached to it) which has some python test scripts. The test script sends commands to the attached hardware and receives response.
I don't want to work on the lab computer all the time. Currently, I'm using SSH from my local machine to the lab computer and using the shell to modify the scripts, run the commands, etc. Using nano is cumbersome, especially while debugging. I want to use an IDE (PyCharm) on my local machine in order to edit and run the scripts on the remote server. PyCharm has remote interpreters, which use the remote Python, but I want to be able to access and modify the scripts too, just as I do over SSH in the terminal.
How can I do that?
PyCharm (Professional Edition only) is also capable of Deployments. You can upload/download files via SFTP directly within PyCharm and run your scripts remotely.
You can visit the following pages for further instructions on how to set everything up:
Setting up a deployment
Configuring a remote interpreter
Yes, PyCharm Professional Edition can do this. Since PyCharm 2018.1, setting up a remote interpreter also automatically sets up deployment. If you have automatic deployments configured (Tools | Deployment | Automatic Deployment), all changes will automatically be uploaded to your SSH box.
See here for a tutorial on configuring an SSH box in PyCharm Professional Edition: https://blog.jetbrains.com/pycharm/2018/04/running-flask-with-an-ssh-remote-python-interpreter/