There is a remote server with a GitLab runner and Docker. I need to build a C++/Qt project on it, but I have to use custom Qt libraries (built with the deprecated WebKit module), which currently live on my local PC. How can I create a Docker image containing these specific libraries? Is it OK to use the COPY command for this purpose?
Yes, you can certainly use COPY from your local machine.
However, I would also make the custom Qt libraries available online, e.g. on GitHub, so that the image can be built from anywhere without first having to prepare every local machine on which it might be built.
That way you can clone the repository (and the appropriate branch) during the build instead of relying on COPY in your Dockerfile.
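If you do go the COPY route, a Dockerfile along these lines would work. This is a minimal sketch; the base image, directory names, and install prefix are assumptions, not known details of your project:

```dockerfile
# Sketch: bake locally built Qt libraries into a build image.
# "qt-custom/" is a hypothetical directory next to the Dockerfile
# containing your custom Qt build (with the deprecated WebKit).
FROM ubuntu:20.04

# Toolchain for building the C++/Qt project
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential cmake \
    && rm -rf /var/lib/apt/lists/*

# COPY can only reach files inside the build context, so the
# libraries must sit next to (or below) the Dockerfile
COPY qt-custom/ /opt/qt-custom/

# Make the custom Qt visible to the build tools and the linker
ENV PATH=/opt/qt-custom/bin:$PATH \
    LD_LIBRARY_PATH=/opt/qt-custom/lib
```

Note that COPY reads from the build context, so you would run docker build on the machine that has the libraries (or ship them to the runner first).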
When inside the RapidsAI docker image with examples, how does one recompile the C++ code after modifying it? I've tried running the build scripts from a terminal session inside Jupyter, but it cannot find CMake.
In order to be able to recompile the C++ code in a Docker container, you need to use the RAPIDS Docker + Dev Env container provided on https://rapids.ai/start.html.
The RAPIDS Docker + Examples container installs the RAPIDS libraries using conda install and does not contain the C++ source code or CMake.
If you would like to continue to use the RAPIDS Docker + Examples container, then I would suggest the following (see the sketch after this list):
1. First, uninstall the existing library that you want to modify from the container.
2. Then pull the source code of the desired library and make the required modifications.
3. Once the above steps are done, follow the steps provided in the library's GitHub repo to build it from source.
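In shell terms, the workflow would look roughly like this. This is only a sketch: cudf is used as a stand-in for whichever library you are actually modifying, and the real build steps come from that library's repo:

```shell
# Inside the RAPIDS Docker + Examples container (sketch only;
# "cudf" is a placeholder for the library you are modifying)

# 1. Uninstall the conda-installed version of the library
conda remove cudf

# 2. Pull the source code and make the required modifications
git clone https://github.com/rapidsai/cudf.git
cd cudf
# ... edit the C++ sources ...

# 3. Build from source following the repo's own instructions
#    (many RAPIDS repos ship a build.sh wrapper for this)
./build.sh
```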
I'm considering the purchase of one of the new MacBook M1s. Docker Desktop is apparently unworkable with their Rosetta 2 engine, and all of my development efforts rely on Docker Desktop and a local development environment that auto-reloads when files are changed.
I haven't done much with Docker remote hosts, but I see that this could be a stop-gap solution until Docker rewrites its engine. Google is failing me... can you keep files on your local machine synced up with your Docker remote host?
No, Docker doesn't do this. Instead, Docker packages your application code into an image; that image can be transferred to a repository (with Docker Hub being the most prominent option), and then run on the remote system, without necessarily needing to have the application code or the interpreter directly installed there. Beyond the image system, Docker has no direct ability to transfer or mount files from one system to another (you could do something like create an NFS-backed named volume, but you would need to run the NFS server yourself).
For day-to-day development, using your language's native isolation system often works better than trying to simulate a local development environment using Docker. For Python, consider using a tool like Pipenv (driven by a Pipfile) to create a virtual environment. Python is reasonably platform-independent, so you shouldn't notice any trouble moving between Apple silicon and Intel.
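For example, a minimal Pipenv workflow (the project directory is a placeholder):

```shell
# Create and enter a per-project virtual environment with Pipenv
pip install pipenv
cd myproject        # hypothetical project directory with a Pipfile
pipenv install      # install dependencies from the Pipfile
pipenv shell        # activate the virtual environment
```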
Don't even consider using the Docker remote API. If you don't configure it perfectly, it's trivial to use it to root the host (and there are many instances of this in the wild). Even when it is configured correctly, you can't use it to mount files from your local system (a docker run -v bind-mount option is always interpreted relative to the Docker host it runs on). If you need to work directly on the remote host for whatever reason, use an ordinary ssh connection.
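To illustrate the bind-mount point (the host name, paths, and image are hypothetical):

```shell
# Even when you reach a remote daemon over plain ssh tunnelling,
# the -v source path is resolved on the REMOTE host:
docker -H ssh://user@remote-host run -v /data:/data myimage
# /data must exist on remote-host; there is no way to point it
# at files on your local machine.
```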
As announced, the Swisscom Logstash buildpack is no longer supported.
The proposed solution is to push the default Docker image instead.
I am trying to figure out a way to attach the curator configuration without "baking" it into the Docker image. Any ideas?
Thanks.
There are two articles in the support forum that discuss some aspects of your question here:
https://docs.developer.swisscom.com/service-offerings/logstash-docker.html
https://docs.developer.swisscom.com/service-offerings/kibana-docker.html
They do in fact recommend:
If you wish to use configuration files instead, you can fork the official Docker image and ADD your configuration files in your own Dockerfile.
I assume that is exactly what you did not want to do, but you can pass in most of the config via environment variables as far as I understand.
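For completeness, the fork-and-ADD approach would look something like this; the base image tag and the destination path are assumptions, not Swisscom specifics:

```dockerfile
# Sketch of forking the official image and baking in config
FROM docker.elastic.co/logstash/logstash:7.17.0

# ADD the curator configuration at a path of your choosing
# (this path is an assumption for illustration)
ADD curator.yml /usr/share/curator/curator.yml
```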
If you are ok with creating a separate Docker image, you could also host the config somewhere (let's say on S3) and then dynamically retrieve it on start-up of your Docker container.
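A sketch of that start-up fetch; the bucket name, paths, and the use of the AWS CLI are all assumptions:

```shell
#!/bin/sh
# Hypothetical entrypoint: pull the curator config when the
# container starts, so it never has to be baked into the image
aws s3 cp s3://my-config-bucket/curator.yml /usr/share/curator/curator.yml

# Hand off to the image's normal command
exec "$@"
```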
You could also build the config setup into your deployment process. Although I haven't tried this with the Docker buildpack, you can "stack" multiple buildpacks in Cloud Foundry and pre-load your configuration files onto the virtual server as part of an initial buildpack step. There is more information on how to do that here: https://docs.cloudfoundry.org/buildpacks/use-multiple-buildpacks.html
I am new to Docker and am experimenting with developing a Django App on Docker.
I have followed the example in this link here:
Currently I am developing my app and have made changes to various files within the web directory. For now, in order to test my changes, I have had to remove all my running containers, stop my docker machine, start my docker machine, attach the docker machine, and run docker-compose up. This is a time-consuming, unproductive process, especially if I need to keep testing after small changes.
My question is: if I make changes to the image (changes in the web directory), how can I update my container to reflect those changes, or should I be doing things differently?
How do other people develop using Docker? What are your best practices?
You could use volumes to map a host directory into the container's web directory. Any changes in the host directory will be reflected immediately, without restarting the container. See the post below and the sketch that follows.
How to make a docker container directory accessible from host?
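For example, in docker-compose.yml (the service name and paths are assumptions based on the question):

```yaml
# Sketch: bind-mount the host's web directory into the container
# so code changes show up without rebuilding the image
version: "2"
services:
  web:
    build: ./web
    volumes:
      - ./web:/usr/src/app   # host directory : container directory
    ports:
      - "8000:8000"
```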
You can use docker-compose up --build to rebuild the image and container after making changes; it will automatically rebuild and restart any changed containers. There shouldn't be any reason to stop docker-machine. If you are using a Mac or Windows PC, you can try the new beta app, which is a bit easier to use than prior versions.
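For example:

```shell
# Rebuild the image and recreate any changed containers in one step
docker-compose up --build
```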
Also see: https://docs.docker.com/compose/reference/
As for best practices, this probably isn't really the right forum unless you have a more specific question.
I'm attempting to deploy a Django app via Docker, first locally and then to a cloud server. I could not find an answer to my initial question before I attempt this: if I run docker-machine create, I'm guessing this should be run from within my virtualenv, right?
This would then grab all of my specific app dependencies and begin to build certificates to put in the container? If not, please explain otherwise.
Yes, you are correct.
I will try to help based on my experience, if you want to deploy Django apps via Docker.
First you need to set up Docker Machine on your local machine; please see the installation instructions. By default the virtualbox driver is used (--driver virtualbox), and the machine is conventionally named default.
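For example, with the standard virtualbox driver:

```shell
# Create a local Docker host VM named "default"
docker-machine create --driver virtualbox default

# Point your shell's docker client at the new machine
eval "$(docker-machine env default)"
```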
List the specific dependency images that your app needs, e.g. nginx, postgres, uwsgi. If you need to fetch an image and then modify it, you can use a Dockerfile (that is the best practice).
I suggest you use docker-compose; it really makes a project much easier to manage. You define all the images you need for your app in the docker-compose file. Please read this reference.
After you have finished developing your app and want to deploy it to a production server (cloud), you just need to copy your whole project over and run your docker-compose setup. All image dependencies will be pulled automatically in the cloud.
As a reference, you can see this project (an open-source project that I developed). In that project I use a Makefile to manage the docker-compose commands, which makes them easy to manage.
An example of a Dockerfile:
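This is a minimal sketch rather than the actual project file; the base image, paths, and the uWSGI command (assumed to be installed via requirements.txt) are assumptions:

```dockerfile
# Sketch of a Django app image; versions and paths are assumptions
FROM python:3.10-slim

WORKDIR /usr/src/app

# Install Python dependencies first to benefit from layer caching
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code
COPY . .

# Serve the Django project with uWSGI; "myproject" is a placeholder
CMD ["uwsgi", "--http", ":8000", "--module", "myproject.wsgi"]
```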
An example of a docker-compose.yml:
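Again a minimal sketch; the service names, images, and placeholder credential are assumptions:

```yaml
# Sketch: Django app behind nginx with a postgres database
version: "2"
services:
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: example   # placeholder credential
  web:
    build: .
    depends_on:
      - db
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
    depends_on:
      - web
```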
An example of a Makefile:
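And a Makefile in the same spirit, wrapping the docker-compose commands (the target names are assumptions):

```makefile
# Sketch: thin make targets around docker-compose
up:
	docker-compose up -d

build:
	docker-compose build

logs:
	docker-compose logs -f

down:
	docker-compose down
```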
Hope this helps.