I have created a neural network regression model and I want to deploy it on AWS.
I am using TensorFlow Serving and have gotten as far as saving the model.
Now I am trying to deploy it in a container with Docker on Windows 10 Home.
I have tried to follow multiple tutorials, but when it comes to this command, no matter what I do, it just doesn't work for me:
docker run -t --rm -p 8501:8501 -v "$TESTDATA/saved_model_half_plus_two_cpu:/models/half_plus_two" -e MODEL_NAME=half_plus_two tensorflow/serving
Every time I change something, I get a different error. I am totally at a loss. Please direct me to some tutorials that are simple but complete for novices like me. I have already read the TensorFlow documentation, but the errors persist.
Any help would be greatly appreciated, since I have been stuck for about a month now.
The easiest tutorial I found was https://www.tensorflow.org/tfx/serving/docker#serving_example
Also, Docker Toolbox has trouble with mounts, as you have to manually specify the path. If you can afford it, upgrade to Windows 10 Pro, which gets you Docker Desktop and simplifies things considerably.
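If you are stuck on Docker Toolbox for now, here is a minimal sketch of the same command with an explicit Windows-style mount. It assumes your saved model lives under C:\Users\you\models (a hypothetical path); Docker Toolbox's VirtualBox VM only shares C:\Users by default, exposed inside the VM as /c/Users:

# run TensorFlow Serving with an explicit host path instead of $TESTDATA,
# which does not expand in cmd.exe/PowerShell the way it does in bash
docker run -t --rm -p 8501:8501 \
    -v "/c/Users/you/models/saved_model_half_plus_two_cpu:/models/half_plus_two" \
    -e MODEL_NAME=half_plus_two \
    tensorflow/serving

Note that with Docker Toolbox the service is reachable on the VM's IP (typically 192.168.99.100), not localhost.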
I am attempting to use AWS Batch to launch a Linux server that will, in essence, run the fetch-and-run example included with AWS (download a shell script from S3 and run it).
Does AWS Batch work at all for anyone?
The AWS fetch-and-run example always fails, even when I followed someone else's guide online that mimicked the AWS example.
I have tried creating Dockerfiles for amazonlinux:latest and ubuntu:20.04 with numerous RUN and CMD instructions.
The scripts always seem to fail with the error:
standard_init_linux.go:219: exec user process caused: exec format error
At first I thought this was related to access rights within amazonlinux, so I have played with chmod 777, chmod +x, etc. on the shell script.
The final nail in the coffin: my current Dockerfile is literally just:
FROM ubuntu:20.04
I launch this using AWS Batch, with no command or parameters passed through, and it still fails with the same error code. This almost hints to me that there is either a setup issue with my AWS Batch (I'm using the default wizard settings, except changing to an a1.medium instance) or that AWS Batch has some major issues.
Has anyone had any success with AWS Batch launching their own Dockerfiles? Could you share your examples and/or setup parameters?
Thank you in advance.
A1 instances are ARM-based, using the first-generation Graviton CPU. It is highly likely the image you are trying to run expects an x86 CPU (Intel or AMD). Instance families ending in "g" (such as c6g or m6g) are Graviton2, which is also ARM-based and will not work with the default examples.
You can test whether a specific container will run by launching an A1 instance yourself, installing Docker, and running the container there. My guess is that you will get the same error. Running on Intel or AMD instances should work.
To use Batch with ARM, your containerized application will need to be built for ARM. If you point me to the exact example, I can give more details on how to adjust it to run on A1 or Graviton2 instances.
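As a quick sanity check before handing anything to Batch, you can ask Docker which architecture an image was built for (a sketch; the image name is illustrative):

# prints e.g. "amd64" or "arm64"; an amd64 image on an A1 instance fails
# with exactly the exec format error shown above
docker image inspect --format '{{.Architecture}}' my-batch-job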
I had the same issue, and it was because I built the image locally on my M1 Mac.
Try adding --platform linux/amd64 to your docker build command before pushing, if this is your case.
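For example (a sketch; the image name and ECR registry URL are hypothetical):

# force an amd64 build even on an ARM host such as an M1 Mac, then push
docker build --platform linux/amd64 -t my-batch-job .
docker tag my-batch-job 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-batch-job:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-batch-job:latest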
In addition to the other comment: you can build multi-arch images yourself, which will provide the correct architecture for whichever instance type pulls them.
https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/
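A minimal sketch of that with docker buildx (the image name is hypothetical; multi-arch manifests are assembled as they are pushed, so you need a registry you can push to):

# create and select a builder, then build for both architectures in one go
docker buildx create --use
docker buildx build --platform linux/amd64,linux/arm64 \
    -t myregistry/my-batch-job --push .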
I am a newbie to Docker and AWS. I wanted a container running an image with Maven and Java. I referred to https://github.com/carlossg/docker-maven/blob/master/jdk-8/Dockerfile and was able to create a Dockerfile for it. Through the terminal, I could see the new container was created, with an image of Java and Maven. Up to this point it is simple, though it took me a while to figure out.
(1) I think this is the way you can always get a Maven-plus-Java image, and there is no other way with less code or fewer files. Is that right? This is just for my information. The real question is the next one.
(2) If I get an image with the AWS CLI, then once the container starts I can log in to AWS using my credentials. I know how to do it from the terminal; not a big deal. But if I want a CI/CD pipeline, where do I provide the docker build -t <imageName> . command and the command to start the container? Right now I use the macOS terminal, but I'm not sure how that plays out in CI/CD. I did research here but found nothing conclusive. Does it go inside a .yml file?
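Generally yes: most CI systems have a step in the pipeline's .yml whose body is just shell commands. A minimal sketch, not tied to any one CI system (the image name maven-java-app is hypothetical):

# these lines live in a script/steps section of the CI pipeline's .yml
docker build -t maven-java-app .
docker run -d --name app maven-java-app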
(3) How do I pass the AWS credentials while building the Docker image? I do not want to put them in the Dockerfile. How do you all do it so it's safe?
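One common pattern (a sketch, not the only safe option) is to keep credentials out of the image and the build entirely, and inject them only when the container runs:

# forward the host's AWS credentials as environment variables at run time;
# nothing is baked into the image layers or the Dockerfile
docker run --rm \
    -e AWS_ACCESS_KEY_ID \
    -e AWS_SECRET_ACCESS_KEY \
    amazon/aws-cli s3 ls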
I'm developing a Django web app with Channels. The tutorial I'm following requires installing Docker.
I'm working in WSL on Windows 10 Home, and so installing Docker is really painful.
I've only just discovered Docker, and I'm a little confused about it. I understand it is a tool that facilitates deploying a web app to a web host later, but I'm not sure.
Could you give me your advice? Is it really important to use Docker for my project?
Would I have less pain if I developed on Ubuntu instead?
Thank you,
The following are my own considerations, not pretending to be an exhaustive Docker review.
Moving to Docker would give you the following advantages:
Easy deployment - you don't need to supply manuals on how to install your app and its dependencies and link them together, only how to install Docker (which, for Windows, hurts :)
Isolation - your services get an isolated network and do not bother the host
Easy upgrades - just push a new image and that's it
Decomposition - with docker-compose and other tools you can split your application into services and maintain them separately (see the sketch after this list)
Scaling - with proper design, tools like k8s will let you easily scale the app by adding replicas of your services
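To make the decomposition point concrete, here is a minimal sketch of a two-service docker-compose setup for a Django app, written as a shell snippet that creates the compose file (the service names and the postgres password are illustrative):

# define a web service built from the local Dockerfile plus a database,
# then start both with one command
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: example
EOF
docker-compose up -d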
On the other hand, on Windows Docker creates additional overhead, unlike Linux where it is implemented on top of the Linux kernel. You also need Windows 10 Professional to enjoy Docker Desktop rather than Docker Toolbox.
Also, Windows is not so good at automated package management; installing software on Windows often cannot be done as simply as apt-get install whatever, so you lose another Docker benefit: easy system preparation via a Dockerfile.
If you plan to stay only on Windows, based on my own experience I would probably not recommend moving to Docker, because I personally found it difficult to use without VirtualBox/Ubuntu.
Hello, and thank you for your time.
I have a big Django project that I'm developing in PyCharm on Windows. I now need task queues, so I want to add Celery to it. The main problem is that Celery dropped support for Windows as of v4.0. So my questions are:
1) How can I use RabbitMQ/Celery on Windows?
2) I found some old answers suggesting an old version that still supports Windows, but maybe there is some way with VirtualBox or other tools to launch it on Windows?
You can use Docker to run your Celery; there is an official image on Docker Hub: https://hub.docker.com/_/celery/
It will take some time to learn how to use Docker, but it's definitely worth it.
You could use Vagrant instead, but in that case you will have to spend some time configuring your Celery environment; at least I didn't find an existing Celery Vagrant box you could use in your project.
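A minimal sketch of the Docker route (assumptions: RabbitMQ as the broker and a Celery app importable as proj from the current directory; the names are illustrative):

# start the broker, then run a worker in a Linux container against your code
docker run -d --name rabbit -p 5672:5672 rabbitmq:3
docker run --rm -it --link rabbit:rabbit -v "$PWD:/app" -w /app python:3.8 \
    sh -c "pip install celery && celery -A proj worker --loglevel=info"

Your Celery config would then point at amqp://guest@rabbit// as the broker URL.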
I'm attempting to deploy a Django app via Docker, first locally and then to a cloud server. I could not find an answer to my initial question before attempting this: if I run docker-machine create, should this be run from within my virtualenv?
Would it then grab all of my app's specific dependencies and build certificates to put in the container? If not, please explain otherwise.
Yes, you are correct.
I will try to help based on my experience, if you want to deploy Django apps via Docker.
First you need to set up Docker Machine on your local machine; please see the instructions. By default the driver used is virtualbox, and the created machine is named default.
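A minimal sketch of that first step (default is Docker Machine's conventional machine name):

# create a local VM-backed Docker host and point your shell's docker CLI at it
docker-machine create --driver virtualbox default
eval "$(docker-machine env default)"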
List the specific dependency images your app needs, e.g. nginx, postgres, uwsgi. If you need to fetch an image and then modify it, use a Dockerfile (that is the best practice).
I suggest you use docker-compose; it really makes a project much easier to manage. You define all the images your app needs in the docker-compose file. Please read this reference.
After you have finished developing your app and want to deploy it to a production server (cloud), you just need to copy your whole project over and run docker-compose there; all the dependency images will be pulled automatically in the cloud.
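In shell terms, that deploy step can be as simple as this sketch (the host and path are hypothetical):

# copy the project to the server and start everything; compose pulls any
# images referenced in docker-compose.yml that are not present yet
scp -r ./myproject user@myserver:/srv/myproject
ssh user@myserver "cd /srv/myproject && docker-compose up -d"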
As a reference, you can see this project (an open source project that I developed). In that project, I use a Makefile to manage the docker-compose commands, which makes them easy to run.
An example Dockerfile
An example docker-compose.yml
An example Makefile
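For the Makefile idea, a minimal sketch (the targets are illustrative; note that Makefile recipe lines must be indented with tabs):

# write a tiny Makefile that wraps the most common compose commands
cat > Makefile <<'EOF'
up:
	docker-compose up -d
down:
	docker-compose down
logs:
	docker-compose logs -f
EOF
make up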
Hope this will help you.