AWS credentials in Dockerfile while building image - amazon-web-services

I am a newbie to Docker and AWS. I wanted a container running an image with Maven and Java. I referred to https://github.com/carlossg/docker-maven/blob/master/jdk-8/Dockerfile and was able to create a Dockerfile for the same. Through the terminal, I can see that a new container is created from the Java and Maven image. Up to this point it is simple, although it took me a while to figure out.
(1) I think this is the way you always get a Maven plus Java image, and there is no other way that needs significantly fewer files or less coding. Is that right? This is just for my information. The real question is the next one.
(2) If I get an image with the AWS CLI, I can log in to AWS with my credentials once the container starts. I know how to do that from the terminal; not a big deal. But if I want a CI/CD pipeline, where do I provide the command docker build -t <imageName> . and the command to start the container? Right now I use the macOS terminal, but I am not sure how this plays out in CI/CD. I did research here but found nothing conclusive. Does it go inside a .yml file?
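Typically yes: most CI systems read their build steps from a YAML file kept in the repository. As a minimal sketch (assuming an AWS CodeBuild-style buildspec.yml; the image name my-image is a placeholder), the docker commands simply become pipeline steps:

version: 0.2
phases:
  build:
    commands:
      # Build the image from the Dockerfile in the repository root
      - docker build -t my-image .
      # Start (or push) the container as a later step of the same pipeline
      - docker run -d my-image

Other CI tools (GitLab CI, GitHub Actions, Jenkins) follow the same idea: docker build and docker run live in the pipeline definition, not in the Dockerfile.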
(3) How do I pass the AWS credentials while building the Docker image? I do not want to put them into the Dockerfile. How do you do it so that it is safe?
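One common pattern (a sketch of the idea, not the only way): keep the credentials out of the image entirely and inject them only when the container runs, either as environment variables or by mounting your local AWS config read-only.

# Inject credentials as environment variables at run time (values are placeholders)
docker run -e AWS_ACCESS_KEY_ID=AKIA... -e AWS_SECRET_ACCESS_KEY=... my-image

# Or mount your existing ~/.aws directory read-only into the container
docker run -v ~/.aws:/root/.aws:ro my-image

If the credentials are truly needed at build time, newer Docker versions support BuildKit secrets (docker build --secret ...), which keep the values out of the image layers; on AWS infrastructure an IAM role attached to the build host is usually preferable to static keys.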

Related

How to add a config file while pushing the default Docker image to Cloud Foundry

As announced, the Swisscom logstash buildpack is no longer supported.
The proposed solution is to push the default docker image.
I am trying to figure out a way to attach the curator configuration without "baking" it into the Docker image. Any ideas?
thanks
There are two articles in the support forum that discuss some aspects of your question here:
https://docs.developer.swisscom.com/service-offerings/logstash-docker.html
https://docs.developer.swisscom.com/service-offerings/kibana-docker.html
They do in fact recommend:
If you wish to use configuration files instead, you can fork the official Docker image and ADD your configuration files in your own Dockerfile.
I assume that is exactly what you did not want to do, but you can pass in most of the config via environment variables as far as I understand.
If you are OK with creating a separate Docker image, you could also host the config somewhere (say, on S3) and then retrieve it dynamically on start-up of your Docker container.
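As a rough sketch of that start-up fetch (the bucket name, object key, and config path are placeholders, and it assumes the AWS CLI is available inside the image), a small entrypoint script could do the download before handing off to the main process:

#!/bin/sh
# entrypoint.sh - pull the curator config from S3 before starting the main process
set -e
aws s3 cp s3://my-config-bucket/curator.yml /etc/curator/curator.yml
# Hand off to whatever command the image would normally run
exec "$@"

The forked Dockerfile would COPY this script in and set it as the ENTRYPOINT, so the configuration itself never has to be baked into the image.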
You could also build the config setup into your deployment setup. Although I haven't tried this with the Docker buildpack, you can "stack" multiple buildpacks in Cloud Foundry and pre-load your configuration files onto the virtual server as part of an initial buildpack step. There is more information on how to do that here: https://docs.cloudfoundry.org/buildpacks/use-multiple-buildpacks.html

Is there a way to push changes to AWS Beanstalk instead of uploading an entire zip file on each deploy?

I'm migrating a Play! application from Heroku to AWS Elastic Beanstalk.
Heroku is really straightforward when it comes to deploying: just push changes to a remote git repository on Heroku and the build happens on the server side.
This is very convenient because it is not necessary to upload the whole project (including all libraries!) for each tiny change.
Basically, for each change we are generating a huge 140 MB zipped Docker bundle that takes at least 10 minutes to upload.
Surely there must be a better way, but a long search on Google only returned options to automate the file generation with scripts and tools like Jenkins, which does not solve the problem, it just automates it.
Does anyone have a better solution?
You can set up an AWS CodeCommit repository and use it as a remote for your local git repository. Next, you can set up AWS CodePipeline to build your application and deploy it to Elastic Beanstalk whenever there is a new commit to the AWS CodeCommit repository.
This way you don't have to upload everything every time. Whenever you do git push, only the changed files are uploaded to the AWS CodeCommit repository, and then AWS CodePipeline takes care of building your application and deploying it to Elastic Beanstalk.
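For illustration (the repository name and region are placeholders), wiring the local repository to CodeCommit is just a matter of adding a second remote:

# Add the CodeCommit repository as a remote and push only the deltas
git remote add aws https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-play-app
git push aws master

CodePipeline is then configured with that repository as its source stage, so every push triggers the build and the Elastic Beanstalk deployment.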
So I got curious about this question too and had a conversation with an AWS specialist about the different options here. Each option has its downsides, though.
The first option is to bake your application code into an AMI and carry out deployments using that baked AMI. More on that here.
You have to test this approach before adopting it. The downsides are that you would have to maintain the AMI regularly, and you might miss out on critical patches from Beanstalk since the AMI is locked down.
A good read on this topic
The next approach would be to move away from Beanstalk and use CloudFormation, where you can just upload your application folder to S3. Your CloudFormation template has to take care of spinning up all the required resources, and by using AWS::CloudFormation::Init and cfn signals it is possible to install and set up software. Changes within the resource Metadata can be detected with the proper CloudFormation signal, and you can also run user-specified actions when a change is detected in the template specification.
(AWS::CloudFormation::Init)
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-helper-scripts-reference.html (a set of helper scripts that can be used with CloudFormation)
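As a rough sketch of that pattern (the logical resource name, AMI ID, file paths, and region are placeholders), the instance metadata declares packages and config files that cfn-init applies on boot and cfn-signal reports back on:

AppServer:
  Type: AWS::EC2::Instance
  Metadata:
    AWS::CloudFormation::Init:
      config:
        packages:
          yum:
            java-1.8.0-openjdk: []
        files:
          /opt/app/application.conf:
            source: https://my-bucket.s3.amazonaws.com/application.conf
            mode: "000644"
  Properties:
    ImageId: ami-12345678          # placeholder AMI
    InstanceType: t2.micro
    UserData: !Base64 |
      #!/bin/bash
      /opt/aws/bin/cfn-init -s my-stack -r AppServer --region us-east-1
      /opt/aws/bin/cfn-signal -e $? --stack my-stack --resource AppServer --region us-east-1

cfn-hup can additionally watch the Metadata section and re-run the configured actions when it changes, which is what allows small configuration updates without re-uploading a whole new bundle.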
Although these are not exactly a solution to what you asked for, they can be good alternatives. At least I have made sure you are not missing out on any of the options available around Beanstalk.
Also, one piece of advice I got from them was to consider splitting the application up into multiple components and sub-components. This would reduce your application size considerably.
Hope this helped.
Short answer: No.
Long Answer: I ended up packaging the app with activator and not using Docker.
Create a folder named "dist" in the root of the project.
Include a file named Procfile with the following line:
web: ./bin/YOUR_APP_NAME -Dhttp.port=5000 -Dconfig.file=conf/application.conf
Make sure to replace YOUR_APP_NAME with the name of your app as configured in build.sbt.
Package the Play app with the following command:
activator clean dist
That will generate a zip file inside target/universal/ folder in the project.
Deploy that zip file to AWS Elastic Beanstalk.
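If you prefer to script that last step instead of using the console (the bucket, application, and environment names below are placeholders), the AWS CLI can register and roll out the new version:

# Upload the bundle and register it as a new application version
aws s3 cp target/universal/my-app-1.0.zip s3://my-deploy-bucket/my-app-1.0.zip
aws elasticbeanstalk create-application-version \
    --application-name my-app \
    --version-label v1.0 \
    --source-bundle S3Bucket=my-deploy-bucket,S3Key=my-app-1.0.zip

# Point the environment at the new version
aws elasticbeanstalk update-environment \
    --environment-name my-app-env \
    --version-label v1.0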

How could I deploy my Cloud Code to AWS Elastic Beanstalk? (Parse Server)

I am struggling with how to upload the Cloud Code files that I had on Parse.com to my Parse Server hosted on AWS EB.
So far I have:
A Parse Server hosted on AWS EB. To host it on AWS I used the Orange "Deploy" button, which basically makes everything easier; you don't have to install the Parse Server locally and upload it to AWS later.
An iOS app written in Objective-C, connected to the Parse Server and working perfectly.
Parse Dashboard running locally on my Mac, connected to the Parse Server on AWS.
The only thing I still need is to upload all my Cloud Code files to the Parse Server. How can I do this? I have researched a lot on Google, Stack Overflow, etc. without success. There is some information, but it is unclear. Thanks in advance.
Finally, and thanks to Ran Hassid, I now have a fully functional Parse Server on AWS with Cloud Code. For those who are in the same situation I was in, here is the answer to my question:
1. Go to this link and follow all the steps. (By the time I asked the question, the information provided by this AWS link wasn't as clear as it is now; they have since improved the explanations and the info.)
2. After you finish all the steps from the link, you will have a Parse Server working on AWS.
3. Now the Cloud Code part. Create a folder on your Mac or PC wherever you like, say on the desktop, and call it "Parse Server AWS" (you can call it whatever you want).
4. Install the EB CLI, the command line interface used from Terminal (on Mac) or the equivalent on Windows to work with the Parse Server you just set up on AWS (similar to Cloud Code with the Parse CLI). The easy way to install it is by running this command:
brew install awsebcli
5. Now open Terminal on Mac (or the equivalent on Windows) and go to the folder you created in step 3.
6. Run the next command. It will ask you to select the location of your Parse Server and then its name:
eb init
7. Now run this command. It will download all the files of your Parse Server from AWS into the folder you are in:
eb labs download
8. Finally, you will have a folder called "cloud" where you can put all your Cloud Code files.
9. When you finish, just run the command:
eb deploy
10. Now you have your Parse Server with all your Cloud Code files working on AWS.
Whenever you need to make a change to your Cloud Code files, just change the local files inside the folder created in step 3 and run the command from step 9 again, exactly as you used to do with the parse deploy command.
Hopefully this information will help many people as it helped me.
Happy coding!
parse-server Cloud Code is a bit different from Parse.com Cloud Code. On Parse.com we used the Parse CLI to modify and deploy our Cloud Code (parse deploy ...). With parse-server, your Cloud Code lives under the following path of your Parse project: ./cloud/main.js. So your Cloud Code entry point is the main.js file, which by default is located under the cloud folder of your Parse project. If you really want to, you can change this path, but to keep it simple use the default location.
Now about deployment: with parse-server you need to redeploy your Parse Server whenever you make a modification to your Cloud Code. Another option is to edit your Cloud Code remotely, but from my point of view it is better to redeploy it.

Deploy Django app with Docker

I'm attempting to deploy a Django app via docker, first locally, and then to a cloud server. I could not find an answer to my initial question before I attempt this: if I run docker-machine create, I'm guessing this should be run from within my virtualenv, right?
Would this then grab all of my app's specific dependencies and begin to build certificates to put in the container? If not, please explain what happens instead.
Yes you are correct.
I will try to help you based on my experience deploying Django apps via Docker.
First you need to set up Docker Machine on your local machine; please see the instructions. The default driver is VirtualBox, i.e. docker-machine create --driver virtualbox default.
List the specific image dependencies of your app, e.g. nginx, postgres, uwsgi. If you need to fetch an image and then modify it, use a Dockerfile (that is the best practice).
I suggest you use docker-compose. It really makes a project much easier to manage. You define all the images your app needs in a docker-compose file; please read this reference.
After you finish developing your app and want to deploy it to a production server (cloud), you just need to copy your whole project over and run docker-compose. All image dependencies will be pulled automatically in the cloud.
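As a minimal sketch of that kind of docker-compose file (the service names, image tags, and command are placeholders for whatever your app actually uses):

version: "3"
services:
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: example   # placeholder password
  web:
    build: .                       # Dockerfile for the Django app in the repo root
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    depends_on:
      - db

Running docker-compose up on the production host then builds or pulls the images and starts both containers, which is exactly the "copy the project and run docker-compose" step described above.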
As a reference, you can look at this project (an open source project that I developed). In that project I use a Makefile to manage the docker-compose commands, which makes things easy to manage.
An example of dockerfile
An example of docker-compose.yml
An example of Makefile
Hope this will help you.

Getting Started with EC2 Container Registry

This is giving me a headache.
Here's what I've done so far
Created an EC2 Virtual Server Instance, and it's running
Installed the AWS CLI
Installed Docker on my EC2 Virtual Server after I SSH'd into it
So looking at the docs it tells you how to build an image. Now comes my confusion.
Question 1: So am I right in assuming that one basically has the option to either a) build an image off your host or b) pull an image created by others from Docker Hub?
Question 2: If I'm right about Question #1, then what am I building an image off of if I am not pulling one from Docker Hub, following the AWS docs here?
Question 3: Then I see a whole different route I can take, using Docker Compose; would I use that instead of all of the above? This is so confusing.
EC2 Container Registry – Now Generally Available
So again, here it tells you to install Docker on the host, then immediately jumps into "create an image". Create an image off of what, that host's OS? I don't get it. I guess that's what it means, or I can pull an image from Docker Hub and not go this route?
Same here, it's talking about creating a Docker image; off of what, the host?
Or maybe I'm not understanding what "image" means, but I assume that going this route, instead of pulling a Docker image from Docker Hub, I'm creating an image off my EC2 virtual instance?
A1: No. You can't build an image off your host.
You can create a new image according to your requirements, choosing the operating system (Ubuntu, Fedora), stack (LAMP, LEMP) and many other things.
Or you can pull an image that is pre-configured with all the packages, like a WordPress stack image, a Magento stack image, or a Bitnami image, which you can pull from Docker Hub.
A2: As mentioned earlier, you can build an image of any operating system you want (Ubuntu, Fedora, Debian), but not off the host.
You just need to pull an image from Docker Hub, e.g. docker pull ubuntu will pull a minimal image of Ubuntu 14.04. And if you need a specific version of Ubuntu,
like Ubuntu 12.04, then e.g. docker pull ubuntu:12.04 will pull a minimal image of Ubuntu 12.04.
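To make the "build a new image, but not off the host" idea concrete, here is a sketch of a Dockerfile that starts from a pulled base image and layers extra packages on top (the package and paths are just an illustration):

# Start from a minimal Ubuntu base pulled from Docker Hub, not from the host OS
FROM ubuntu:14.04
# Add whatever your application needs on top of the base image
RUN apt-get update && apt-get install -y nginx
COPY ./site /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]

Building it with docker build -t my-image . produces a new image whose contents come entirely from the base image plus these instructions; the host machine only runs the build.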
A3: Docker Compose is a tool for defining and running multi-container Docker applications. docker-compose uses a compose file
in which you configure your application's services.
And finally, Amazon EC2 Container Registry is a slightly different thing. The idea is the same as with Docker, but Amazon provides
it as part of the EC2 Container Service, with many other features that Docker doesn't have right now.
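As a sketch of the basic ECR workflow (the account ID, region, and repository name are placeholders), you authenticate Docker against the registry, tag the locally built image with the ECR repository URI, and push it:

# Log Docker in to your ECR registry (AWS CLI v2 syntax)
aws ecr get-login-password --region us-east-1 | \
    docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Tag the locally built image and push it to the registry
docker tag my-image:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest

From there, ECS (or any other host with access to the registry) can pull and run that image.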
Hope it helps :-)