I used this https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook?hl=de to build a guestbook with Google Kubernetes Engine.
I applied it and everything works.
Now I want to change the index.html for a better look.
How can I upload or update the changed file?
I tried to apply the frontend service again with
kubectl apply -f frontend-service.yaml
but it did not work.
You will have to rebuild the container images if you make changes to the source code; re-applying the Service has no effect, because index.html is baked into the frontend image, not the Service. I suggest you (see the sketch after this list):
Download Docker and run docker build to rebuild the images locally.
Push the images to your own Artifact Registry (AR), following this guide: https://cloud.google.com/artifact-registry/docs/docker/pushing-and-pulling
Update the YAML files to point to your own AR.
Redeploy the application to GKE.
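A minimal sketch of those steps, assuming a hypothetical project my-project, region us-central1, repository guestbook-repo, and a frontend Deployment whose container is named frontend (check the actual names in your frontend-deployment.yaml):

```
# Rebuild the frontend image from the directory containing its Dockerfile
docker build -t us-central1-docker.pkg.dev/my-project/guestbook-repo/frontend:v2 .

# Push it to your Artifact Registry repository
docker push us-central1-docker.pkg.dev/my-project/guestbook-repo/frontend:v2

# Point the Deployment at the new image and let GKE roll it out
kubectl set image deployment/frontend \
  frontend=us-central1-docker.pkg.dev/my-project/guestbook-repo/frontend:v2
```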
Dear network, please take a look at the following questions.
ENVIRONMENT:
AWS Elastic Beanstalk (EBS) (Docker running on 64bit Amazon Linux 2/3.4.12)
SITUATION:
I am running AWS EBS on the Docker platform and using a Dockerfile to run the container. After deployment I need to replace the nginx.conf file with my updated nginx.conf.
ACTIONS:
I tried using ADD https://my-scripts.s3.amazonaws.com/nginx.conf /etc/nginx in the Dockerfile, but after deployment the file does not replace the original one.
RESULT:
The reason seems to be that nginx is installed after the Dockerfile is processed, so to get a custom nginx.conf I probably need to use one of the AWS platform custom hooks.
QUESTIONS:
Am I headed in the right direction with custom hooks, or are there better ways to solve this?
How can platform custom hooks be used in this case to build on EBS? Should I just add hooks under /opt/elasticbeanstalk/hooks/postdeploy, e.g. ADD s3://my_custom_hook_script /opt/elasticbeanstalk/hooks/postdeploy?
As a solution, could I just make a war/zip archive containing the Dockerfile and .platform/nginx/nginx.conf and deploy the war/zip instead of the Dockerfile only? That replaced /etc/nginx/nginx.conf without needing to restart nginx. Would this be a meaningful solution?
NOTE:
When I was building an AWS EBS deployment from a war file for another service, I simply added .platform/nginx/nginx.conf under the project root, and after building the war I kept that folder at the same level as the generated project jar; this replaced the default nginx.conf with my custom one without restarting the nginx service.
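For reference, the bundle layout described above would look roughly like this (a sketch, assuming the source bundle is deployed as a zip; file names are illustrative):

```
deploy.zip
├── Dockerfile
└── .platform/
    └── nginx/
        └── nginx.conf   # replaces the default proxy config on Amazon Linux 2
```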
Thanks in advance
As announced, the Swisscom Logstash buildpack is no longer supported.
The proposed solution is to push the default Docker image.
I am trying to figure out a way to attach the Curator configuration without "baking" it into the Docker image. Any ideas?
Thanks
There are two articles in the support forum that discuss some aspects of your question:
https://docs.developer.swisscom.com/service-offerings/logstash-docker.html
https://docs.developer.swisscom.com/service-offerings/kibana-docker.html
They do in fact recommend:
"If you wish to use configuration files instead, you can fork the official Docker image and ADD your configuration files in your own Dockerfile."
I assume that is exactly what you did not want to do, but you can pass in most of the config via environment variables as far as I understand.
If you are ok with creating a separate Docker image, you could also host the config somewhere (let's say on S3) and then dynamically retrieve it on start-up of your Docker container.
You could also build the config setup into your deployment: although I haven't tried this with the Docker buildpack, you can "stack" multiple buildpacks in Cloud Foundry and pre-load your configuration files onto the virtual server as part of an initial buildpack step. There is more information on how to do that here: https://docs.cloudfoundry.org/buildpacks/use-multiple-buildpacks.html
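A minimal sketch of the retrieve-on-start-up idea mentioned above (the CONFIG_URL variable and the target path are hypothetical; adjust them for your image):

```
#!/bin/sh
# docker-entrypoint.sh: fetch the Curator config before starting the main process.
# CONFIG_URL is a hypothetical environment variable you would set on the app,
# e.g. pointing at an S3 object you control.
curl -fsSL "$CONFIG_URL" -o /etc/curator/actions.yml

# Hand off to the image's original command.
exec "$@"
```

In your own Dockerfile you would COPY this script in and set it as the ENTRYPOINT, keeping the original command as CMD, so the config is pulled fresh on every container start instead of being baked into the image.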
I have a Django web application's code on GitHub. From time to time I make necessary updates and arrangements in the repository. Each time, I have to pull the project, make adjustments to the Docker setup, and run it on my machine.
Is there a way to keep Docker in sync with the code in my GitHub repository? When I make a change on GitHub, I want Docker to pull it automatically and run the project without interruption.
You can configure Git and Docker using hooks in Jenkins.
For example:
Whenever you push changes to Git, a webhook triggers a Jenkins job; Jenkins pulls the changes, builds a new Docker image, and pushes the image to your Docker registry.
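A sketch of the shell steps such a Jenkins job might run after the webhook fires (the registry and image names are hypothetical; $BUILD_NUMBER is Jenkins's built-in build counter):

```
# Build a fresh image from the source Jenkins just pulled
docker build -t registry.example.com/myapp:$BUILD_NUMBER .

# Push it to the registry
docker push registry.example.com/myapp:$BUILD_NUMBER

# Replace the running container with one using the new image
docker rm -f myapp || true
docker run -d --name myapp -p 8000:8000 registry.example.com/myapp:$BUILD_NUMBER
```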
Hi, I am new to Google Cloud Platform. I want to build a Java application using Google Cloud Build without Docker containers, have the built application tested, and have the artifact saved in a bucket. Can anyone help me with this?
Cloud Build is conceptually a pipeline mechanism that takes a set of files as input (commonly from a source repo) and applies a number of processing steps to them, including steps that produce output: file(s) | step-1 | step-2 | ... | step-n.
Most of the examples show Cloud Build producing Docker images, but this underplays the many things it can do.
Importantly, each of the processors (steps) must be a Docker container, but the input and output need not be Docker images.
You can use the javac, mvn, or gradle steps to compile your code and then use the gsutil step to copy the war or jar to Google Cloud Storage (a sketch follows the links below):
https://github.com/GoogleCloudPlatform/cloud-builders/tree/master/javac
https://github.com/GoogleCloudPlatform/cloud-builders/tree/master/mvn
https://github.com/GoogleCloudPlatform/cloud-builders/tree/master/gradle
https://github.com/GoogleCloudPlatform/cloud-builders/tree/master/gsutil
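A minimal cloudbuild.yaml sketch using the mvn and gsutil builders linked above (the bucket name is hypothetical; mvn package also runs your tests by default):

```
steps:
  # Compile, test, and package the application
  - name: 'gcr.io/cloud-builders/mvn'
    args: ['package']

  # Copy the resulting jar to a Cloud Storage bucket
  - name: 'gcr.io/cloud-builders/gsutil'
    args: ['cp', 'target/*.jar', 'gs://my-artifact-bucket/']
```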
Since you mentioned building without a Docker container, I assume you do not want to deploy your application as a Docker image. You can deploy your app to Google App Engine standard. For how to deploy to App Engine, refer to this documentation: https://cloud.google.com/build/docs/deploying-builds/deploy-appengine
To run the application on App Engine, you create an app.yaml in your project, then put these lines inside it:

```
runtime: java11
entrypoint: java -Xmx64m -jar {your application artifact in jar file}
```
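If you drive the deployment from Cloud Build as the linked documentation describes, the deploy step would look roughly like this (a sketch; it assumes app.yaml sits at the repo root and the Cloud Build service account has App Engine permissions):

```
steps:
  # Deploy the packaged app to App Engine
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['app', 'deploy']
```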
I'm attempting to deploy a Django app via Docker, first locally and then to a cloud server. I could not find an answer to my initial question before attempting this: if I run docker-machine create, should it be run from within my virtualenv?
Would it then grab all of my app-specific dependencies and begin building certificates to put in the container? If not, please explain.
Yes you are correct.
I will try to help you from my experience deploying Django apps via Docker.
First you need to set up Docker Machine on your local machine; please see the installation instructions. By default the driver used is VirtualBox (--driver virtualbox) and the machine is named default.
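For example, to create that default machine with the VirtualBox driver:

```
docker-machine create --driver virtualbox default
```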
List the specific dependency images your app needs, e.g. nginx, postgres, uwsgi. If you need to fetch an image and then modify it, use a Dockerfile (that is the best practice for you).
I suggest you use docker-compose; it really makes a project much easier to manage. You define all the images your app needs in a docker-compose file (a sketch follows below); please read this reference.
After you finish developing your app and want to deploy it to a production server (cloud), you just need to copy your whole project there and run docker-compose; all image dependencies will be pulled automatically in the cloud.
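A minimal docker-compose.yml sketch for a Django stack like the one described (service names, ports, and the password are illustrative):

```
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # placeholder; use a secret in production

  web:
    build: .                       # built from your project's Dockerfile
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    depends_on:
      - db
```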
As a reference, you can look at this project (an open source project that I developed). In that project I use a Makefile to manage the docker-compose commands, which makes them easy to work with.
An example of a Dockerfile
An example of a docker-compose.yml
An example of a Makefile
Hope this helps.