I created a Docker image with Ubuntu 14.04 and compiled FFMPEG to stream a video asset to a DASH endpoint. On the same image I can run the media analysis script, which basically uses FFMPEG and other tools to analyse a video asset. Now I want to add a Django app so that assets can be both loaded into the streaming pipeline and run through the media analysis. What would you suggest is the best approach? Have two Docker images, one with the compiled FFMPEG and the streaming pipeline and another with Django, and share the code between the two? Or keep a single Docker image and run the FFMPEG streaming pipeline, the media analysis and Django from there?
I am open to suggestions…
Possible duplicate of https://serverfault.com/questions/706736/sharing-code-base-between-docker-containers
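For reference, a minimal docker-compose sketch of the two-image approach, with the streaming/analysis container and the Django container sharing assets through a common volume. The image names, service names and paths below are assumptions, not taken from the question:

# docker-compose.yml (sketch; image names, service names and paths are assumptions)
version: "3.8"

services:
  streaming:
    image: my-ffmpeg-streaming:latest   # Ubuntu 14.04 image with compiled FFMPEG
    volumes:
      - shared_assets:/data/assets      # assets visible to the streaming pipeline and analysis script

  web:
    image: my-django-app:latest         # Django app that loads assets and triggers analysis
    ports:
      - "8000:8000"
    volumes:
      - shared_assets:/data/assets      # same volume, so both containers see the same files

volumes:
  shared_assets:

The single-image approach also works and is simpler to start with; splitting mainly keeps the heavyweight FFMPEG build out of the Django image and lets each part be rebuilt independently.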
Hi there, I am trying to deploy a project in Docker with Django and Django REST Framework. Following the documentation I was able to deploy it, and it works fine with media files: I can upload media files (images) and the correct file is shown. The problem is that for some changes I need to rebuild the container with new code, and after redeploying it can no longer find the previously uploaded media files (images). I badly need help with this.
This is my docker-compose file and Dockerfile (posted as screenshots).
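A common cause is that the uploaded media lives inside the container's filesystem, so it disappears when the container is rebuilt. One way to keep it across rebuilds is to mount a named volume at the MEDIA_ROOT path. A minimal docker-compose sketch, where the service name web and the path /app/media are assumptions:

# docker-compose.yml (sketch; service name and paths are assumptions)
version: "3.8"

services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    volumes:
      # A named volume survives image rebuilds and container recreation,
      # so uploaded media is not lost when you redeploy new code.
      - media_data:/app/media

volumes:
  media_data: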
I want to share the project with multiple clients so that they can use it on their own local networks without doing a lot of work. So packaging the Django project together with secure server software, so that it can easily be run on any machine, would be nice.
Docker might be your best bet. You will need to create a Docker image, and on any machine you want to run the project on, have the Docker client run this image. After installing the Docker client, running it can be a single command line.
An example:
https://docs.docker.com/samples/django/
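As a rough sketch of what that sample boils down to, a minimal Dockerfile for a Django project might look like the following; the Python version, paths and port are assumptions:

# Dockerfile (sketch; Python version, paths and port are assumptions)
FROM python:3.10-slim

ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1

WORKDIR /app

# Install Python dependencies first so Docker can cache this layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the Django project
COPY . .

EXPOSE 8000

# For a client-facing deployment you would typically run a WSGI server
# such as gunicorn behind TLS instead of runserver.
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]

Clients then only need Docker installed and a single docker run (or docker compose up) command to start the project.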
I built a Django app for speech recognition; the app uses the user's microphone to record audio and then converts it to text. It works well locally, but when I try to deploy it to Heroku I get an error that Pyaudio cannot be installed and
command 'gcc' failed with exit status 1.
I am using Python 3.6 and Windows 7. How can I deploy this application to Heroku?
the app uses the user's microphone to record audio then convert it to text
This won't work on Heroku even if you manage to install Pyaudio.
Python code runs on the server, not in the browser. If you try to record audio using Pyaudio it will try to record audio in some data centre somewhere on Amazon Web Services. This appears to work locally because in development your server and client are running on the same machine.
If you want to record audio from your users you'll need to do it in JavaScript.
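For illustration only, a browser-side sketch using the MediaRecorder API; the upload endpoint /api/transcribe/ and the form field name are assumptions, and the Django view that receives the file is not shown:

// Browser-side recording sketch (runs in the user's browser, not on the server).
// The endpoint "/api/transcribe/" and field name "audio" are assumptions.
async function recordAndUpload(durationMs = 5000) {
  // Ask the user for microphone access
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);
  const chunks = [];

  recorder.ondataavailable = (event) => chunks.push(event.data);
  const stopped = new Promise((resolve) => { recorder.onstop = resolve; });

  recorder.start();
  setTimeout(() => recorder.stop(), durationMs);
  await stopped;

  // Send the recording to the Django backend for speech-to-text
  const blob = new Blob(chunks, { type: recorder.mimeType });
  const form = new FormData();
  form.append("audio", blob, "recording.webm");
  await fetch("/api/transcribe/", { method: "POST", body: form });
}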
We tried the default AWS CodeBuild image to build .NET Core apps and it worked fine.
Now we also need to build Docker images, but the default image does not have Docker installed.
AWS has the option to run the builder image in privileged mode so you can run Docker-in-Docker operations.
I would like to know if there is an image I can use that has both .NET Core and Docker installed, so I can build the code, and then the image.
Thanks!!
You'll need to create your own Docker image and provide that to CodeBuild (as part of the project environment configuration).
You can find CodeBuild's vended Docker images here for reference: https://github.com/aws/aws-codebuild-docker-images
You need to create a Docker image which has both the Docker daemon and .NET Core in the same image. Refer to this sample for how to start the Docker daemon before starting builds in your custom Docker images: http://docs.aws.amazon.com/codebuild/latest/userguide/sample-docker-custom-image.html
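As a rough illustration, such a custom image could start from the .NET Core SDK image and add Docker on top; the base image tag and the docker.io package choice below are assumptions and should be adjusted to your target versions:

# Custom CodeBuild image sketch: .NET Core SDK plus Docker in one image.
# Base image tag and package choice are assumptions; adjust to your versions.
FROM mcr.microsoft.com/dotnet/core/sdk:2.1

# Install the Docker engine and CLI so "docker build" can run inside CodeBuild
RUN apt-get update \
    && apt-get install -y --no-install-recommends docker.io \
    && rm -rf /var/lib/apt/lists/*

The CodeBuild project still needs privileged mode enabled, and the Docker daemon has to be started (for example in the install phase of buildspec.yml) before any docker commands run, as the linked sample describes.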
I'm in the process of moving some on-premise apps to Azure and struggling with one aspect: GhostScript. We use GhostScript to convert PDFs to multi-page TIFFs. At present this is deployed in an Azure VM, but it seems like a WebApp and WebJob would be a better fit from a management point of view. In all of my testing I've been unable to get a job to run the GhostScript exe.
Has anyone been able to run GhostScript or any third party exe in a WebJob?
I have tried packaging the GhostScript exe, lib and dll into a ZIP file, unzipping it to Path.GetTempPath(), and then using a new System.Diagnostics.Process to run the exe with the required parameters. This didn't work; the process refused to start, exiting with code -1073741819.
Any help or suggestions would be appreciated.
We got it to work here:
Converting PDFs to Multipage Tiff files Using Azure WebJobs. The key was putting the Ghostscript assemblies in the root of the project and setting "Copy always". This is what allows them to be pushed to the Azure server, and to end up in the correct place, when you publish the project.
Also, we needed to download the file to be processed by Ghostscript to the local Azure WebJob temp directory. This path is discovered using the following code:
Environment.GetEnvironmentVariable("WEBJOBS_PATH");
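Putting those pieces together, a rough C# sketch of invoking the Ghostscript exe from a WebJob; the executable name, output device and arguments are assumptions and will depend on the Ghostscript build you deploy:

using System;
using System.Diagnostics;
using System.IO;

// Sketch only: executable name, Ghostscript arguments and paths are assumptions.
class GhostscriptRunner
{
    static void ConvertPdfToTiff(string inputPdf, string outputTiff)
    {
        // Local temp area available to the WebJob; files to be processed
        // are downloaded here first.
        string tempDir = Environment.GetEnvironmentVariable("WEBJOBS_PATH") ?? Path.GetTempPath();

        // The Ghostscript exe deployed with the project ("Copy always"),
        // ending up next to the WebJob binaries after publishing.
        string gsExe = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "gswin64c.exe");

        var startInfo = new ProcessStartInfo
        {
            FileName = gsExe,
            Arguments = $"-dNOPAUSE -dBATCH -sDEVICE=tiff24nc -r300 " +
                        $"-sOutputFile=\"{outputTiff}\" \"{inputPdf}\"",
            UseShellExecute = false,
            CreateNoWindow = true,
            WorkingDirectory = tempDir
        };

        using (var process = Process.Start(startInfo))
        {
            process.WaitForExit();
            if (process.ExitCode != 0)
                throw new InvalidOperationException($"Ghostscript exited with code {process.ExitCode}");
        }
    }
}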