Can anyone please share a Dockerfile or a repository that builds an image with the Cloud Storage plugin already installed, i.e. the Nexus plugin for writing data into Cloud Storage?
You can pull the image using Docker:
docker pull gcr.io/google.com/cloudsdktool/cloud-sdk:latest
Check this GitHub repo to find more information.
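If the goal is to write to Cloud Storage, note that this image ships with the full Cloud SDK, so gsutil is available out of the box. A quick check (assuming Docker is installed locally):

    # Verify the Cloud Storage tooling bundled in the image
    docker run --rm gcr.io/google.com/cloudsdktool/cloud-sdk:latest gsutil version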
When I use the Visual Studio Code extension "Cloud Code" to deploy a Cloud Run service, it seems to store the image contents in Cloud Storage (via Container Registry).
Can I make it store the image in Google Cloud Artifact Registry instead?
I just tried the scenario and it worked for me! Following these steps should get you going.
Create an Artifact Registry repo at https://console.cloud.google.com/artifacts and set up Docker auth on your client so that gcloud authenticates to the repo. You can find detailed steps to do this here.
When deploying to Cloud Run in Cloud Code, you'll find that it defaults to a Container Registry repo as the "Container image URL", but you can easily use an Artifact Registry repo here instead: paste the repo name you created in the previous step and append an image name.
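A minimal sketch of the client-side setup, assuming a repository named my-repo in us-central1 (both names are illustrative):

    # Configure Docker to authenticate to the regional Artifact Registry host
    gcloud auth configure-docker us-central1-docker.pkg.dev

    # "Container image URL" format to paste into Cloud Code:
    #   us-central1-docker.pkg.dev/PROJECT_ID/my-repo/my-image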
I wanted to set up CI/CD for a project on GitHub using GitHub Actions. I used this tutorial:
https://www.blog.labouardy.com/elastic-beanstalk-docker-tips/
But I still do not understand how Elastic Beanstalk will pull the Docker image from Docker Hub.
How does this happen?
And why do we need a Dockerrun.aws.json file, and how is it used?
There are different approaches that can be followed. The blogger chose the Dockerrun.aws.json + Dockerfile + zip file approach: every time CircleCI builds, it uploads a zip file containing the Dockerrun.aws.json. (The Dockerfile is not really needed in the zip in this case, since the image is built remotely, and neither is the rest of the application, since nothing is volume-mapped into the container.)
CircleCI executes the following steps:
Build the image
Push the image
Send the zip file to AWS Elastic Beanstalk
AWS Elastic Beanstalk will then simply follow the configuration inside the Dockerrun.aws.json and update the environment to the image tagged ${CIRCLE_SHA1}.
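For reference, a minimal single-container Dockerrun.aws.json (version 1) looks roughly like the sketch below; the image name and port are illustrative, and the CI step substitutes the actual commit SHA for ${CIRCLE_SHA1} before zipping and uploading:

    # Write a minimal v1 Dockerrun.aws.json (the quoted heredoc keeps
    # ${CIRCLE_SHA1} literal so the CI step can substitute it later)
    cat > Dockerrun.aws.json <<'EOF'
    {
      "AWSEBDockerrunVersion": "1",
      "Image": {
        "Name": "myuser/myapp:${CIRCLE_SHA1}",
        "Update": "true"
      },
      "Ports": [
        { "ContainerPort": "8080" }
      ]
    }
    EOF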
Is the Dockerrun.aws.json necessary? No, you can also use a docker-compose.yml file.
I suggest you check the AWS documentation on this topic.
EDIT: IMHO it's better to use docker-compose.yml, since it lets you start the containers locally and make sure they're OK before updating the application remotely.
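For example, a minimal docker-compose.yml along these lines (image name and port mapping are illustrative) can be started locally before being shipped to Elastic Beanstalk:

    # Write a minimal compose file for local testing
    cat > docker-compose.yml <<'EOF'
    version: "3"
    services:
      web:
        image: myuser/myapp:latest
        ports:
          - "80:8080"
    EOF

    # Run locally and make sure the app responds before deploying
    docker-compose up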
As another member said, you could follow the AWS documentation, which has complete steps.
I have a web application tar file and have created a Docker image for it. I will be using a private Docker registry (for security reasons). I have written Helm charts to use the image in Kubernetes (kept in a private Helm repo). So if anyone wants to install the app from the Docker image on the EKS feature of AWS, what would be the best way to package my app and give it to them?
The basic requirement is that it shouldn't be available to everyone for installation; only the ones I approve can install it.
Thanks in advance.
You can push it to their private container registry. If they are using AWS, you can use ECR. You can find more information on how to push the image here.
Basically, they would need to create an IAM user/role for you to be able to push to their AWS account.
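A rough sketch of what that push looks like once the IAM access is in place (account ID, region, and repository name are all illustrative):

    # Authenticate Docker against their ECR registry
    aws ecr get-login-password --region us-east-1 | \
      docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

    # Tag the local image for ECR and push it
    docker tag myapp:1.0 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:1.0
    docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:1.0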
I have created a local image based on CentOS with a specific application running on it. Running the application locally on my machine works fine. I want to take this image and upload it to AWS (preferably to an Elastic Beanstalk instance, but I can work with other types).
I cannot upload the image to an image repository like Docker Hub or similar. The application is from a third party; they gave us special permission to build the image and run it on AWS, but we cannot place the image in a repository.
Is there a way to just export my Docker image (I have the saved TAR file), upload it to AWS, and have it used directly? I've searched and searched and found nothing showing how to do that; everything I have found indicates you have to have the image in a repository, or upload the code to AWS and have it build the image.
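For context, the TAR file was produced with the standard export command and can be restored on any Docker host (names are illustrative):

    # Export the local image to a TAR archive
    docker save -o myapp.tar myapp:latest

    # Restore it on another Docker host
    docker load -i myapp.tar

The open question is whether AWS can consume such an archive directly, without a registry.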
Thank you
For a project, we are trying to extend Google Cloud Datalab and deploy the modified version to the Google Cloud Platform. As I understand it, the deployment process normally consists of the following steps:
Build the Docker image
Push it to the Container Registry
Use the container parameter with the Google Cloud deployer to specify the correct Docker image, as explained here.
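The first two steps, sketched against our own registry (project ID and tag are placeholders, as in the rest of this question):

    # Build the modified Datalab image
    docker build -t gcr.io/<project_id>/datalab:<tag> .

    # Push it to our own Container Registry (assumes the Docker client is
    # already authenticated to gcr.io, e.g. via gcloud's credential helper)
    docker push gcr.io/<project_id>/datalab:<tag>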
Since the default container registry, i.e. gcr.io/cloud_datalab/datalab:<tag>, is off-limits to non-Datalab contributors, we pushed the Docker image to our own container registry, i.e. gcr.io/<project_id>/datalab:<tag>.
However, the Google Cloud deployer only pulls directly from gcr.io/cloud_datalab/datalab:<tag> (with the tag specified by the container parameter) and does not seem to allow specifying the source container registry. The deployer does not appear to be open source, leaving us with no way to deploy our image to Google Cloud.
We have looked into creating a custom deployment similar to the example listed here, but this never starts Datalab, so we suspect the startup script is more complicated.
Question: How can we deploy a Datalab image from our own container registry to Google Cloud?
Many thanks in advance.
The deployment parameters can be guessed, but it is easier to get the Google Cloud Datalab deployment script by SSHing into the temporary compute node responsible for deployment and browsing the /datalab folder. This contains a runtime configuration file for use with the App Engine flexible environment. Using this configuration file, the gcloud preview app deploy command (which accepts an --image parameter for Docker images) will deploy this to App Engine correctly.
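Putting that together, the final deploy looks roughly like the following sketch; the configuration file name is an assumption, and the --image flag usage follows the description above:

    # Deploy the custom image using the runtime configuration found
    # under /datalab on the deployment node (file name assumed)
    gcloud preview app deploy app.yaml \
      --project <project_id> \
      --image gcr.io/<project_id>/datalab:<tag>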