I am quite new to DevOps and I am trying to use a custom Docker image that I have pushed to Docker Hub.
In my app.yaml I have replaced runtime: python with runtime: solalsab/clarins. Is this approach correct? Secondly, I get the following error message:
Value 'solalsab/clarins' for runtime does not match expression '^(?:((gs://[a-z0-9\-\._/]+)|([a-z][a-z0-9\-\.]{0,29})))$'
In the app.yaml it should be runtime: custom and env: flex.
The image should be defined in the Dockerfile: FROM solalsab/clarins
Check this Custom Runtimes Quickstart.
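As a minimal sketch (your image name solalsab/clarins is carried over from the question; everything else is illustrative), the two files would look something like this:

app.yaml:

runtime: custom
env: flex

Dockerfile:

# Base the runtime on your custom image from Docker Hub
FROM solalsab/clarins
# App Engine flexible expects the container to serve on port 8080
EXPOSE 8080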
I have a Django application running locally and I've set up the project on CircleCI with Python and Postgres images.
If I understand correctly what is happening, CircleCI uses those images to spin up containers and test my application code against a database.
Then I'm using the heroku/deploy-via-git job to deploy it to Heroku once the tests pass.
Now I think Heroku also uses some images to run the application.
I would like to get the image Heroku uses to run my site, so I can run it locally on another machine.
So: pull the image, push it to Docker Hub, and finally pull it down to my computer so that I only have to run docker compose up.
Here is my CircleCI configuration file:
version: 2.1
docker-auth: &docker-auth
  auth:
    username: $DOCKERHUB_USERNAME
    password: $DOCKERHUB_PASSWORD
orbs:
  python: circleci/python@1.5.0
  heroku: circleci/heroku@0.0.10
jobs:
  build-and-test:
    docker:
      - image: cimg/python:3.10.2
      - image: cimg/postgres:14.1
        environment:
          POSTGRES_USER: theophile
    steps:
      - checkout
      - run:
          command: pip install -r requirements.txt
          name: Install Deps
      - run:
          name: Run MIGRATE
          command: python manage.py migrate
      - run:
          name: Run loaddata from Json
          command: python manage.py loaddata datadump.json
      - run:
          name: Run tests
          command: pytest
workflows:
  heroku_deploy:
    jobs:
      - build-and-test
      - heroku/deploy-via-git:
          requires:
            - build-and-test
I don't know if this is possible; if not, what would be the best way to proceed? (I assume there are a lot of possibilities.)
I was considering building an image from my local directory with docker compose up, then using that image directly on CircleCI; that way I would be able to use the same image on another computer. But building images inside images with CircleCI seems really messy and I'm not sure how I should proceed.
I've tried to pull images from Heroku, but it seems I can only pull the code or get/modify the database; I can't get the image builds themselves.
I hope this question is relevant and clear, as the CircleCI and Heroku documentation doesn't seem clear to me, and this is my first post on Stack Overflow!
Thanks in advance.
Heroku's platform is proprietary, so we can't be sure how it works internally.
We know that their stacks are based on Ubuntu LTS releases, and we know that they use open-source buildpacks to compile application slugs from source code, but details about the underlying infrastructure are murky. They certainly don't provide base images like heroku/python:3.11.0 for you to download.
If you want to use the same image locally, on CircleCI, and Heroku, a better option would be to start deploying with Heroku's Container Registry instead of Git. This allows you to build an image locally, push it into the container registry, and release it as the next version of your application.
I suggest you read the entire documentation page linked above, but the short version is:
Log into the container registry using the Heroku CLI:
heroku container:login
Assuming you already have a Dockerfile for your application, build and push an image:
heroku container:push web
In this case we are building from the Dockerfile and pushing the resulting image to be used as the web process.
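If you don't have a Dockerfile yet, a minimal sketch for a Django app might look like this (mysite.wsgi is a placeholder for your WSGI module, and gunicorn must be listed in requirements.txt; Heroku injects $PORT at runtime):

FROM python:3.10-slim
WORKDIR /app
# Copy and install dependencies first to benefit from Docker layer caching
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# Shell form so that $PORT is expanded by the shell at container start
CMD gunicorn mysite.wsgi --bind 0.0.0.0:$PORT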
Release your application:
heroku container:release web
That's a basic Docker deployment from your local machine, and even if that's not your final plan I suggest you start by getting that working.
From there, you have options. One option would be to move this flow to CircleCI—continue to build images there, but have CircleCI push the resulting container to Heroku's Container Registry.
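A sketch of what such a job could look like under jobs: in your existing config, assuming the circleci/heroku orb provides an install command and that HEROKU_API_KEY and HEROKU_APP_NAME are set as project environment variables (the job name and the app variable are mine, not from your config):

  build-and-push:
    docker:
      - image: cimg/python:3.10.2
    steps:
      - checkout
      # Docker builds on CircleCI need a remote Docker engine
      - setup_remote_docker
      # Install the Heroku CLI; it authenticates via HEROKU_API_KEY
      - heroku/install
      - run:
          name: Build and push image to Heroku Container Registry
          command: |
            heroku container:login
            heroku container:push web --app $HEROKU_APP_NAME
      - run:
          name: Release the new image
          command: heroku container:release web --app $HEROKU_APP_NAME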
Another option might be as you suggest in your question: to build images locally and use them with both CircleCI and Heroku.
How do I use a custom builder image in Cloud Build that is stored in a repository in Artifact Registry (instead of Container Registry)?
I have set up a pipeline in Cloud Build where some Python code is executed using the official Python images. As I want to cache my Python dependencies, I wanted to create a custom Cloud Builder as shown in the official documentation here.
GCP clearly indicates that you should switch to Artifact Registry, as it will replace Container Registry. Consequently, I have pushed my Docker image to Artifact Registry. I also gave my Cloud Build service account read permissions on Artifact Registry.
Using the image in a Cloud Build step like this
steps:
- name: 'europe-west3-docker.pkg.dev/xxxx/yyyy:latest'
  id: install_dependencies
  entrypoint: pip
  args: ["install", "-r", "requirements.txt", "--user"]
throws the following error
Step #0 - "install_dependencies": Pulling image: europe-west3-docker.pkg.dev/xxxx/yyyy:latest
Step #0 - "install_dependencies": Error response from daemon: manifest for europe-west3-docker.pkg.dev/xxxx/yyyy:latest not found: manifest unknown: Requested entity was not found.
"xxxx" is the repository name and "yyyy" the name of my image. The tag "latest" exists.
I can pull the image locally and access the repository.
I could not find any documentation on how to integrate these images from Artifact Registry. There is only this official guide, where the image is built using the Docker image from Container Registry; however, that approach is not future-proof.
It looks like you need to add your Project ID to your image name.
You can use the "$PROJECT_ID" Cloud Build default substitution variable.
So your updated image name would look something like this:
steps:
- name: 'europe-west3-docker.pkg.dev/$PROJECT_ID/xxxx/yyyy:latest'
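Putting it together with the rest of the step from your question (xxxx and yyyy kept as placeholders for your repository and image names), the full step would be:

steps:
- name: 'europe-west3-docker.pkg.dev/$PROJECT_ID/xxxx/yyyy:latest'
  id: install_dependencies
  entrypoint: pip
  args: ["install", "-r", "requirements.txt", "--user"]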
For more details about substituting variable values in Cloud Build see:
https://cloud.google.com/build/docs/configuring-builds/substitute-variable-values
I'm deploying to ECS with the Docker Compose API; however, I'm somewhat confused about environment variables.
Right now my docker-compose.yml looks like this:
version: "3.8"
services:
  simple-http:
    image: "${IMAGE}"
    secrets:
      - message
secrets:
  message:
    name: "arn:aws:ssm:<AWS_REGION>:<AWS_ACCOUNT_ID>:parameter/test-env"
    external: true
Now in my Container Definitions I get a Simplehttp_Secrets_InitContainer that references this secret as message with the correct ARN, but there is no variable named message inside my running container.
I'm a little confused, as I thought this was the correct way of passing environment variables such as DB passwords, AWS credentials, and so forth.
In the docs we see:
services:
  test:
    image: "image"
    environment:
      - "FOO=BAR"
But is this the right and secure way of doing this? Am I missing something?
I haven't played much with secrets in this ECS/Docker integration, but there are a couple of things that don't add up between your understanding and the docs. First, the integration appears to work with Secrets Manager, not SSM. Second, according to the docs the content won't be available as an environment variable but rather as a flat file at runtime, at /run/secrets/message in your example.
Check out this page for the fine details: https://docs.docker.com/cloud/ecs-integration/#secrets
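For illustration, here is how the secrets section might look when pointed at Secrets Manager instead of SSM (the standard Secrets Manager ARN shape; the region, account, and test-env placeholders are carried over from your example, and real Secrets Manager ARNs also carry a random suffix):

secrets:
  message:
    name: "arn:aws:secretsmanager:<AWS_REGION>:<AWS_ACCOUNT_ID>:secret:test-env"
    external: true

The application would then read the value from the flat file inside the running container, e.g.:

cat /run/secrets/message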
Summary
I tried to use the Docker image node:16.13.0-alpine in CodeBuild.
However, the build failed with the following error:
BUILD_CONTAINER_UNABLE_TO_PULL_IMAGE: Unable to pull customer's container image.
Asm fetching username: AuthorizationData is malformed, empty field
I want to know how to resolve this error so that the build passes.
What I've tried
I set up the environment as follows:
In the Registry credentials section, I added the Secrets Manager ARN for my Docker credentials.
Code
Here is the buildspec.yml used for testing:
version: 0.2
phases:
  build:
    commands:
      - echo this is test.
My registry URL was wrong. After correcting it, the build with node:14.16.0-stretch succeeded.
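For what it's worth, the Secrets Manager secret that CodeBuild expects for private registry credentials is a JSON object with username and password keys, along these lines (the placeholder values are mine):

{
  "username": "<dockerhub-username>",
  "password": "<dockerhub-password-or-access-token>"
}

If that shape or the registry URL doesn't match what CodeBuild expects, an "AuthorizationData is malformed" error like the one above is a typical symptom.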
I am building a Django-based application on App Engine. I have created a Postgres Cloud SQL instance, and I created a cloudbuild.yaml file with a Cloud Build trigger.
django = v2.2
psycopg2 = v2.8.4
GAE runtime: python37
The cloudbuild.yaml:
steps:
- name: 'python:3.7'
  entrypoint: python3
  args: ['-m', 'pip', 'install', '-t', '.', '-r', 'requirements.txt']
- name: 'python:3.7'
  entrypoint: python3
  args: ['./manage.py', 'migrate', '--noinput']
- name: 'python:3.7'
  entrypoint: python3
  args: ['./manage.py', 'collectstatic', '--noinput']
- name: "gcr.io/cloud-builders/gcloud"
  args: ["app", "deploy"]
timeout: "3000s"
The deployment goes fine and the app can connect to the database. But when I try to load a page I get the following error:
"...import psycopg2 as Database File "/srv/psycopg2/__init__.py", line 50, in from psycopg2._psycopg import ( # noqa ImportError: libpython3.7m.so.1.0: cannot open shared object file: No such file or directory"
Another interesting thing: if I deploy my app with gcloud app deploy (not through Cloud Build), everything is alright; I don't get the error above and my app can communicate with the database.
I am pretty new to gcloud, so maybe I missed something basic here.
But my questions are:
- What is missing from my cloudbuild.yaml to make it work?
- Am I pip-installing my dependencies to the correct place?
- From the perspective of this error, what is the difference between the Cloud Build based deployment and the manual one?
From what I see you're using Cloud Build to run gcloud app deploy.
This command commits your code and configuration files to App Engine. As explained here, App Engine runs in a Google-managed environment that automatically handles the installation of the dependencies specified in the requirements.txt file and executes the entrypoint you defined in your app.yaml. This has the benefit of not having to manually trigger the installation of dependencies. The first two steps of your cloudbuild.yaml are not affecting App Engine's runtime, since its configuration is managed by the aforementioned files once they're deployed.
The purpose of Cloud Build is to import source code from a variety of repositories and build binaries or images according to your specifications. It could be used to build Docker images and push them to a repository, download a file to be included in a Docker build, or package a Go binary and upload it to Cloud Storage. Furthermore, the gcloud builder is aimed at running gcloud commands through a build pipeline, for example to create account permissions or configure firewall rules when these are required steps for another operation to succeed.
Since you're not automating a build pipeline but trying to deploy an App Engine application, Cloud Build is not the product you should be using. The way to go when deploying to App Engine is to simply run the gcloud app deploy command and let Google's environment take care of the rest for you.
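For contrast, a minimal sketch of the two files that gcloud app deploy relies on in this setup (the mysite.wsgi entrypoint module and the gunicorn dependency are placeholders; the Django and psycopg2 versions are the ones from the question):

app.yaml:

runtime: python37
entrypoint: gunicorn -b :$PORT mysite.wsgi

requirements.txt:

Django==2.2
psycopg2==2.8.4
gunicorn

App Engine standard installs everything in requirements.txt itself at deploy time, which is why vendoring packages with pip install -t . is unnecessary and can pull in binaries built against libraries the runtime doesn't ship.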
Isn't this Quickstart describing exactly what the OP was trying to do?
https://cloud.google.com/source-repositories/docs/quickstart-triggering-builds-with-source-repositories
I myself was hoping to automate deployment of a Django webapp to an App Engine standard instance.