Is Docker best just for prod environments?

I just decided to jump into using Docker to test out building a microservice application on AWS Fargate.
My question really comes from hearing about many development teams using Docker to stop people saying "works on my machine" when committing code. While I can see how that problem gets solved, I still do not see how Docker images can actually be used in a development environment.
The workflow for anything other than production baffles me. An example of my thinking:
A team of 10 devs all use Docker, and each pulls the image, source code included, from the repo into their own container. If each of them has an individual version of the image, then any edits they make to that image are their own, and when they push back to the repo none of those edits can be merged (on top of which, editing the source code inside an image is not easy either).
I am thinking of it the same way as Git/GitHub, where code is pushed to a branch and then merged to master to create a finished product.
I guess pulling the code from the GitHub master branch and building the Docker image from it is the way it is meant to be used, but again, that points back to my original assumption that Docker is for production environments rather than development.
Is Docker used in development mainly so a dev can test a feature in the same container every other dev on the team is using, so that environments match across the team?
I just really do not understand the workflow of development environments with Docker.

I'd highlight three cases where I've found Docker particularly useful, prior to a production deploy:
Docker is really useful for installing local dependencies. If your application needs a database, docker run postgres with appropriate options. Need a clean start? Delete the container. Running two microservices that need separate databases? Start two containers. The second microservice is maintained by another team? Run it in a container too.
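For instance, a minimal sketch of the "two microservices, two databases" case (the container names, ports, and password here are illustrative):

    # Two throwaway Postgres instances, one per microservice
    docker run -d --name orders-db  -p 5432:5432 -e POSTGRES_PASSWORD=dev postgres:15
    docker run -d --name billing-db -p 5433:5432 -e POSTGRES_PASSWORD=dev postgres:15

    # Need a clean start? Remove a container (and its data) and rerun the line above
    docker rm -f orders-db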
Docker is useful for capturing the build environment in the CI system. Jenkins, for example, can run build steps inside a container, bind-mounting the current work tree in, so it's useful to build an image that just contains build-time dependencies (which can be updated independently of the CI system itself).
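As a hedged sketch of that setup (the base image and build command are assumptions, not anything Jenkins prescribes), the build image contains only the toolchain:

    # Dockerfile.build -- build-time dependencies only, no application code
    FROM maven:3-eclipse-temurin-17

and the CI step runs the build inside it, bind-mounting the work tree:

    docker build -t build-env -f Dockerfile.build .
    docker run --rm -v "$PWD":/src -w /src build-env mvn package

Updating the toolchain then just means rebuilding build-env, independently of the CI system itself.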
If you're running Docker in production, you can test the exact thing you're about to run. You're guaranteed the install environment will be the same in the QA and prod environments, because it's encapsulated inside the same Docker image. A developer can debug problems against the production-installed code without actually being in production.
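Concretely, a developer can pull the exact image that is in production and poke at it locally (the registry path and tag here are hypothetical):

    # Open a shell in the very artifact that is running in production
    # (assumes the image contains /bin/sh)
    docker run -it --rm --entrypoint /bin/sh registry.example.com/myapp:1.4.2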
In the basic scenario you describe, an important detail to note is that you never "edit an image"; you always docker build a new image from its Dockerfile and other source code. In compiled languages (C++, Go, Java, Rust, Haskell) the source code won't be in the image. Even if you're "using Docker in development" the actual source code will be in some other system (frequently Git), and typically you will have a CI system that builds "official" images from that source code.
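For a compiled language, that flow might look like this sketch (the jar path and tag scheme are assumptions):

    # Dockerfile -- only the built artifact goes in, never the source tree
    FROM eclipse-temurin:17-jre
    COPY target/app.jar /app.jar
    CMD ["java", "-jar", "/app.jar"]

Each source change produces a new image rather than editing the old one, e.g. docker build -t myapp:$(git rev-parse --short HEAD) ., typically run by the CI system.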
Where I see Docker proposed for day-to-day development, it's either because the language ecosystem in use makes it hard to have multiple versions concurrently installed, or to avoid installing software on the host system. You need specific tooling support to "develop inside a container", and if developers choose their own IDEs, this support is not universal. Conversely, between OS package managers (APT, Homebrew) and interpreter version managers (rbenv, nvm), it's usually straightforward to install a couple of things on the host. If your application isn't that sensitive to, say, the specific version of Node, it's probably easier to use whichever version is already installed on your host than to try to insert Docker into the process.

Related

What is the benefit of using Docker for Django and Channels?

I'm developing a Django web app with Channels. While following this tutorial, I'm required to install Docker.
I'm working in WSL on Windows 10 Home, so installing Docker is really painful.
I've only just discovered Docker and I'm a little confused about it. I understand it is a tool that facilitates deploying a web app to a web host later on, but I'm not sure.
Could you give me your advice? Is it really important to use Docker for my project?
Would I have less pain if I developed on Ubuntu?
Thank you,
The following are my own considerations, not pretending to be an exhaustive Docker review.
Moving to Docker would give you the following advantages:
Easy deployment - you don't need to supply manuals on how to install your app and its dependencies and link them together, only how to install Docker (which, on Windows, hurts :)
Isolation - your services get an isolated network and do not bother the host
Easy upgrades - just push a new image and that's it
Decomposition - with docker-compose and other tools you will be able to split your application into services and maintain them separately (see the sketch after this list)
Scaling - with proper design, tools like Kubernetes (k8s) will let you scale the app easily by adding replicas of your services
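As a rough illustration of the decomposition point, a docker-compose file for a Django/Channels project might look like this (the service names, image tags, and module path are assumptions):

    # docker-compose.yml
    version: "3.8"
    services:
      web:
        build: .
        command: daphne -b 0.0.0.0 -p 8000 myproject.asgi:application
        ports:
          - "8000:8000"
        depends_on:
          - redis
      redis:
        image: redis:7   # Channels typically uses Redis as its channel layer

A single docker-compose up then starts both services together.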
On the other hand, on Windows Docker creates additional overhead, unlike Linux, where it is implemented directly on top of the Linux kernel; you also need Windows 10 Professional to enjoy Docker proper rather than Docker Toolbox.
Windows is also not as good at automated package management: installing software on Windows often cannot be done as simply as apt-get install whatever, so you lose another Docker benefit - easy system preparation via a Dockerfile.
If you plan to stay only on Windows, then based on my own experience I would probably not recommend moving to Docker, because I personally found it difficult to use without VirtualBox/Ubuntu.

Webpack: Should I build bundle on production server or build it locally and then upload?

I am deploying a React app to AWS Elastic Beanstalk. I bundle the app using webpack. However, I'm slightly confused about best practices for the production build process. Should I build the app locally (with NODE_ENV=production) using webpack, and then just upload the resulting bundle.js file, along with all node_modules, to the Elastic Beanstalk instance? Or should I upload all the source files and run webpack on the actual AWS server during deployment?
You should never build for production locally (unless you're the only developer).
Ideally, you have a build process, triggered manually or automatically by a git commit, that builds your project for production for you.
By using a centralized build process, you can then be sure that all your builds are built the same way (e.g. same node version, same npm or yarn version).
Neither approach is really good, to be honest. Building locally is not the best way to produce anything you want in production: you might have packages installed locally that affect what you're building, and the same applies to the OS you're doing it on.
The same applies to building during deployment. As the name implies, a deployment should just deploy: place your built application on the server so it can serve as intended.
That's where CI/CD comes in. Those kinds of solutions guarantee that each build is done with the same steps and on the same stack. You want no difference between builds, because that lets you assume that any bug or deviation from the design comes from the code, not from the environment it was built in.
Assuming that you're the only developer here (since you're asking about this), CI/CD might be definite overkill, so just create a shell script with the steps and use Docker as the build environment, so it stays the same between builds (see the sketch below). That's the closest you can get to the CI/CD option without the hassle.
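Such a script could be as small as this sketch (the image tag and build commands are assumptions; adjust them to your project):

    #!/bin/sh
    # build.sh -- repeatable production build inside a pinned Node container
    set -e
    docker run --rm \
      -v "$PWD":/app -w /app \
      -e NODE_ENV=production \
      node:18 \
      sh -c "npm ci && npx webpack --mode production"
    # The bundle lands wherever webpack.config.js puts it (commonly ./dist),
    # ready to upload to Elastic Beanstalk.

Because the Node version is pinned in the image tag, every build runs on the same stack regardless of what is installed on the host.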

Update Docker container after making changes to image

I am new to Docker and am experimenting with developing a Django App on Docker.
I have followed the example in this link here:
Currently I am developing my app and have made changes to various files within the web directory. So far, in order to test my changes, I have had to remove all my running containers, stop my Docker machine, start my Docker machine, attach to it, and run docker-compose up. This is a time-consuming process and is unproductive, especially if I need to keep testing after small changes.
My question is: if I make changes to the image (changes in the web directory), how can I update my container to reflect those changes, or should I be doing things differently?
How do other people develop using Docker? What are your best practices?
You could use volumes to map a host directory onto the container's web directory. Any changes in the host directory will be reflected immediately, without restarting the container. See the post below.
How to make a docker container directory accessible from the host?
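In docker-compose terms, the mapping might look like this fragment (the service name and paths are assumptions based on a typical Django project layout):

    # docker-compose.yml (fragment)
    services:
      web:
        build: .
        volumes:
          - ./web:/code/web   # edits on the host show up in the container instantly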
You can use docker-compose up --build to rebuild the image and container after making changes. It will automatically rebuild and restart any changed containers. There shouldn't be any reason to stop docker machine. If you are using a Mac or Windows PC, you can try the new beta app, which is a bit easier to use than prior versions.
Also see: https://docs.docker.com/compose/reference/
As for best practices, this probably isn't really the right forum unless you have a more specific question.

Development workflow for a Clojure webapp with Docker

I'm trying to get started with Docker for developing a web application with Clojure and am unsure which way to go. From what I've read so far, and also looking at the official Docker Clojure repo, there are basically two possible ways:
call lein ring server (interactively or as a CMD in a Dockerfile) or
use a Dockerfile to compile your application into an uberjar and use java -jar as the CMD on the resulting jar file.
The former seems problematic to me in the sense that the dev environment is not as close as possible to the production environment, given that we're probably using a :dev Leiningen profile that adds things one would strictly not want in production (providing as few tools and as little "information", i.e. code, as possible on an exposed production server is always a good idea). The latter, however, seems to have the exact opposite problem: now every change basically requires a rebuild of the image (think edit-compile-run cycle), so you would lose lein ring's nice recompile-on-modification functionality.
How are people using this combination in practice?
PS: I'm aware that there might be some other modes of operation in practice (e.g. using Immutant or Tomcat as the deployment target or using a CI server like Hudson etc.). I'm asking about the most basic setup first here.
My team and I have opted to optimize rapid feedback while developing and minimize the number of moving parts in our deploys. As a result we've opted to use lein ring server in development and opt to ship an uberjar for our deployment. I've done this with code running in docker containers and without them.
I wouldn't want to go back to a development workflow that didn't let me see the results of changing code as quickly as possible. In my mind, the rapid feedback far outweighs the risk of running the services slightly differently between my local machine and production.
Also, nothing stops me from changing a couple lines of code and then starting up a local service that is running much closer to my production setup (either running a built docker image or building an uberjar locally).
There's nothing stopping you from running in production mode with Leiningen. Just use:
lein with-profile production ring server
I've used both approaches successfully, although we've settled on the uberjar approach because it gives faster startup times.
I use the second option, java -jar ..., to deploy my web application to production (not using Docker yet). This creates an edit-compile-run cycle, as you said, but I don't recompile for every change; only when I'm ready to release do I create the uberjar. Of course, CI is always recommended.
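For reference, the uberjar option can be done in Docker as a multi-stage build along these lines (the image tags and jar path are assumptions; the path depends on your :target-path):

    # Build stage: compile the uberjar with Leiningen
    FROM clojure:lein AS build
    WORKDIR /app
    COPY . .
    RUN lein uberjar

    # Runtime stage: only the JRE and the jar, no source code or build tools
    FROM eclipse-temurin:17-jre
    COPY --from=build /app/target/uberjar/*-standalone.jar /app.jar
    CMD ["java", "-jar", "/app.jar"]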

Django testing environment

I have deployed several Django-driven sites, mostly "concept" stuff; nothing serious. Now I'm ready to deploy a real-deal site (for my brother's medical practice), and would like to ensure that I'm doing it correctly.
My central concern is the testing environment. I had been managing it by maintaining two separate folders with different Mercurial copies of a site, updating the development branch, merging it into the release branch, and then uploading to the server (WebFaction).
How do you manage testing environment for your Django projects?
All development is done on my local machine. I use virtualenv (and virtualenvwrapper) for the multiple projects. With virtualenv, you can have several versions of the same software without having to 'break' other code that may depend on a certain version. I use pip for downloading the proper libraries/applications into these separate environments. For each project (and therefore environment), I have a mercurial repository. If the new development passes all unit tests and works as expected, I send it up to the VCS. Once in the VCS, the code gets reviewed by colleagues.
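The day-to-day commands for that setup look roughly like this (the environment and package names are illustrative):

    # One isolated environment per project, courtesy of virtualenvwrapper
    mkvirtualenv practice-site          # creates and activates the environment
    pip install "Django==1.8" psycopg2  # pin the versions this project needs
    pip freeze > requirements.txt       # record them for the other environments
    workon practice-site                # switch back to this project later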