repository docker.wso2.com/wso2is not found - wso2

When I try to use WSO2 docker it gives an error.
Please see the following commands and responses:
[oracle@ol75new ~]$ docker login docker.wso2.com
Username: baterdene.m@gmail.com
Password:
Login Succeeded
[oracle#ol75new ~]$ docker run -it -p 9443:9443 docker.wso2.com/wso2is
Unable to find image 'docker.wso2.com/wso2is:latest' locally
docker: Error response from daemon: repository docker.wso2.com/wso2is not found: does not exist or no pull access.
See 'docker run --help'.

In order to pull the Docker image from docker.wso2.com, you need to have a valid subscription. These Docker images will be available on Docker Hub soon. For the moment, you can build the Docker image yourself from https://github.com/wso2/docker-is
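A rough sketch of building and running the image from that repository (the directory containing the Dockerfile is an assumption; check the repository README for the exact layout and any required product distribution downloads):
git clone https://github.com/wso2/docker-is.git
cd docker-is/dockerfiles/ubuntu/is    # directory is an assumption; see the repo README
docker build -t wso2is .
docker run -it -p 9443:9443 wso2is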

Related

Cloud Run / Cloud Code deployment error in IntelliJ

I'm trying to follow the Getting Started instructions for deploying a Cloud Run service with Cloud Code in IntelliJ (deploying the HelloWorld Flask app container with Cloud Run: Deploy), but I'm getting the following error. Any idea why this might be happening?
It worked initially, i.e. it deployed the app to a Cloud Run service using the same steps, and then started throwing this error after a week or so when trying to redeploy; there was no change in project settings.
The IntelliJ and Docker versions are the latest.
I authenticated to the Google Cloud project with gcloud auth login --update-adc.
The local run works fine (Cloud Run: Run Locally),
but running Cloud Run: Deploy throws this "code 89" error:
Preparing Google Cloud SDK (this may take several minutes for first time setup)...
Creating skaffold file: /var/.../skaffold8013155926954225609.tmp
Configuring image push settings in /var/.../skaffold8013155926954225609.tmp
../Library/Application Support/cloud-code/bin/versions/../
skaffold build --filename /var/.../skaffold8013155926954225609.tmp --tag latest --skip-tests=true
invalid skaffold config: getting minikube env:
running [/Users/USER/Library/Application Support/google-cloud-tools-java/managed-cloud-sdk/LATEST/google-cloud-sdk/bin/
minikube docker-env --shell none -p minikube --user=skaffold]
- stdout: "false exit code 89"
- stderr: ""
- cause: exit status 89
Failed to build and push Cloud Run container image.
Please ensure your builder settings are correct, network is available, you are logged in to a valid GCP project, and try again.
Edit: I see that minikube error code 89 is ExGuestUnavailable, an error code specific to the guest host; it's still unclear what might be causing this.
Looks like an issue with skaffold attempting to communicate with minikube (which could be used for building images as well). Please try cleaning up minikube:
minikube stop
minikube delete --all --purge
and try again.
OK, I still don't know why it fails to deploy to Cloud Run from IntelliJ, but I got it to deploy from the command line:
cd my-flask-app
#step 1: build container image from Dockerfile and submit to container registry
gcloud builds submit --tag gcr.io/GCP_PROJECT_ID/my-flask-app
#step 2: deploy the image on cloud run (reference)
gcloud run deploy --image gcr.io/GCP_PROJECT_ID/my-flask-app
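To verify the deployment afterwards, listing the services should show the new service and its URL (this check is my addition, not part of the original answer; depending on your gcloud configuration you may also need to pass --region and --platform):
gcloud run services list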
references:
https://cloud.google.com/build/docs/building/build-containers
https://cloud.google.com/container-registry/docs/quickstart
Edit: the answer above did the trick: minikube delete --all --purge

docker with mysql not working on new ec2 arm based instance - no matching manifest

[nir ~]$ docker run --name mysql -d mysql:5.7
Unable to find image 'mysql:5.7' locally
5.7: Pulling from library/mysql
docker: no matching manifest for linux/arm64/v8 in the manifest list entries.
See 'docker run --help'.
[nir ~]$ docker run --name mysql -d mysql:8
Unable to find image 'mysql:8' locally
8: Pulling from library/mysql
docker: no matching manifest for linux/arm64/v8 in the manifest list entries.
See 'docker run --help'.
We upgraded to a newer, ARM-based instance type in AWS (m6g).
Looking here, it looks like there is nothing I can do except revert to an Intel-based instance. Is that so?
Based on the comments:
The mysql Docker image does not support linux/arm64/v8. However, mariadb does support linux/arm64/v8. Thus, if possible, changing from mysql to mariadb could solve the problem.
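For example, a minimal sketch of a drop-in replacement for the failing command (the tag and root password are placeholders; the official mariadb image also honours the MYSQL_ROOT_PASSWORD compatibility variable):
docker run --name mariadb -d -e MYSQL_ROOT_PASSWORD=my-secret-pw mariadb:10.5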

AWS EB docker-compose deployment from private registry access forbidden

I'm trying to get docker-compose deployment to AWS Elastic Beanstalk working, in which the docker images are pulled from a private registry hosted by GitLab.
The strange thing is that the initial deployment works perfectly; it pulls the image from the private registry and starts the containers using docker-compose, and the webpage (served by Django) is accessible through the host.
Deploying a new version using the same docker-compose and the same docker image will result in an error while pulling the docker image:
2021/03/16 09:28:34.957094 [ERROR] An error occurred during execution of command [app-deploy] - [Run Docker Container]. Stop running the command. Error: failed to run docker containers: Command /bin/sh -c docker-compose up -d failed with error exit status 1. Stderr:Building with native build. Learn about native build in Compose here: https://docs.docker.com/go/compose-native-build/
Creating network "current_default" with the default driver
Pulling redis (redis:alpine)...
Pulling mysql (mysql:5.7)...
Pulling project.dockertest(registry.gitlab.com/company/spikes/dockertest:latest)...
Get https://registry.gitlab.com/v2/company/spikes/dockertest/manifests/latest: denied: access forbidden
2021/03/16 09:28:34.957104 [INFO] Executing cleanup logic
Setup
AWS Elastic Beanstalk 64bit Amazon Linux 2/3.2
GitLab registry credentials are stored within an S3 bucket, with the filename .dockercfg, and have the following content:
{
  "auths": {
    "registry.gitlab.com": {
      "auth": "base64 encoded username:personal_access_token"
    }
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/18.03.1-ce (linux)"
  }
}
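For reference, the auth value is simply the base64 encoding of username:personal_access_token, which can be produced with something like this (the values here are placeholders):
echo -n 'gitlab-username:personal_access_token' | base64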
The repository contains a v3 Dockerrun.aws.json file to refer to the credential file in S3:
{
  "AWSEBDockerrunVersion": "3",
  "Authentication": {
    "bucket": "gitlab-dockercfg",
    "key": ".dockercfg"
  }
}
Reproduce
Set up a docker-compose.yml that uses a service with a private Docker image (which can be pulled with the credentials set up in the dockercfg within S3); a minimal sketch of such a compose file follows after these steps.
Create a new application that uses the Docker platform.
eb init testapplication --platform=docker --region=eu-west-1
Note: region must be the same as the S3 bucket containing the dockercfg.
Initial deployment (this will succeed)
eb create testapplication-test --branch_default --cname testapplication-test --elb-type=application --instance-types=t2.micro --min-instances=1 --max-instances=4
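As mentioned in the first step above, a minimal docker-compose.yml for this setup could look roughly like the sketch below (service names are inferred from the pull log further down; the MySQL root password and the web port mapping are assumptions):
version: "3"
services:
  redis:
    image: redis:alpine
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example    # placeholder
  dockertest:
    image: registry.gitlab.com/company/spikes/dockertest:latest
    ports:
      - "80:8000"    # port mapping is an assumption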
The initial deployment shows that the image is available and can be started:
2021/03/16 08:58:07.533988 [INFO] save docker tag command: docker tag 5812dfe24a4f redis:alpine
2021/03/16 08:58:07.533993 [INFO] save docker tag command: docker tag f8fcde8b9ae2 mysql:5.7
2021/03/16 08:58:07.533998 [INFO] save docker tag command: docker tag 1dd9b65d6a9f registry.gitlab.com/company/spikes/dockertest:latest
2021/03/16 08:58:07.534010 [INFO] Running command /bin/sh -c docker rm `docker ps -aq`
Without changing anything in the local repository or the remote Docker image on the private registry, let's do a redeployment, which will trigger the error:
eb deploy testapplication-test
This will fail with the following output:
...
2021-03-16 10:02:28 INFO Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
2021-03-16 10:02:29 ERROR Unsuccessful command execution on instance id(s) 'i-0dc445d118ac14b80'. Aborting the operation.
2021-03-16 10:02:29 ERROR Failed to deploy application.
ERROR: ServiceError - Failed to deploy application.
And logs of the instance show (/var/log/eb-engine.log):
Pulling redis (redis:alpine)...
Pulling mysql (mysql:5.7)...
Pulling project.dockertest (registry.gitlab.com/company/spikes/dockertest:latest)...
Get https://registry.gitlab.com/v2/company/spikes/dockertest/manifests/latest: denied: access forbidden
2021/03/16 10:02:25.902479 [INFO] Executing cleanup logic
Steps I've tried to debug or solve the issue:
Rename dockercfg to .dockercfg on S3 (mentioned somewhere on the internet as a possible solution)
Use the 'old' Docker config format instead of the one generated by Docker 1.7+. But later on I figured out that Amazon Linux 2 instances are compatible with the new format together with Dockerrun v3
Having an incorrectly formatted dockercfg on S3 causes a deployment error about the misformatted file (so it actually does something with the dockercfg from S3)
Documentation
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/single-container-docker-configuration.html
I'm out of debug options, and I've no idea where to look any further to debug this problem. Perhaps someone can see what is going wrong here?
First of all, the issue described above is a bug confirmed by Amazon. To get the deployment working on our side, we contacted Amazon support.
They have a fix in place which should be released this month, so keep an eye on the changelog of the Elastic Beanstalk platform: https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/relnotes.html
Although the upcoming release should have the fix, there is a workaround available to get the docker-compose deployment working.
Elastic Beanstalk allows hooks to be executed during the deployment, which can be used to fetch the .dockercfg from an S3 bucket and authenticate against the private registry.
To do so, create the following file and directories from the root of the project:
File location: .platform/hooks/predeploy/docker_login
#!/bin/bash
aws s3 cp s3://{{bucket_name_to_use}}/.dockercfg ~/.docker/config.json
Important: Add execution rights to this file (for example: chmod +x .platform/hooks/predeploy/docker_login)
To support instance configuration changes, please symlink the hooks directory to confighooks:
ln -s .platform/hooks/ .platform/confighooks/
Updating the configuration requires the .dockercfg credentials to be fetched too.
This should enable continuous deployments to the same EB instance without the authentication errors, because the hook is executed before the Docker images are pulled.
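After these steps, the relevant part of the project root looks roughly like this:
.platform/
    confighooks/        (symlink to hooks/, created above)
    hooks/
        predeploy/
            docker_login    (the executable script above)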
Some background:
The Docker daemon reads credentials from ~/.docker/config.json by default on traditional Linux systems. On the initial deploy this file exists on the Elastic Beanstalk instance. On the next deployment this file is removed, and unfortunately the .dockercfg is not refetched, therefore the Docker daemon does not have the correct credentials to authenticate with.
I was dealing with the same errors while trying to pull images from a privately hosted GitLab instance. I was able to resolve them by including the email address associated with the token used in the auth field of the .dockercfg file.
The following file format worked for me:
"registry.gitlab.com" {
"auth": "base64 encoded username:personal_access_token",
"email": "email for personal access token"
}
In my case I used a Project Access Token, which has an e-mail address associated with it once it is created.
The file format in the Elastic Beanstalk documentation for the authentication file indicates that this is the required file format, though the Docker versions it says this format is required for are almost certainly outdated, since we are running Docker ^19.

Docker executable not found in PATH when using AWS Batch/ECS

I am trying to run a simple Dockerized Python script with AWS Batch.
Is there a problem with my Docker image?
I have built the Docker image locally and it runs fine. I pushed the image to an AWS repository, and pulling this remote image to my local machine also runs correctly.
Problem
I have set up my compute environment, job queue, and job definition, but I get this error:
CannotStartContainerError: Error response from daemon:
OCI runtime create failed: container_linux.go:370:
starting container process caused:
exec: "docker": executable file not found in $PATH: unknown
when I run
["docker","run","-t","111111111111.dkr.ecr.us-region-X.amazonaws.com/myimage:latest","python3","hello_world.py","--MSG","ok"]
Is Docker installed?
I am using the ECS_AL2 image type. When I start an EC2 instance with this AMI and SSH into it, I can see that Docker is already installed; docker run works fine, for instance.
Is there a (generic) problem with my compute environment, job queue, or job definition?
When I instead try to run the command echo hello, it works fine.
Appreciate any advice/help you can provide.
UPDATE - ANSWER
@samtoddler helped me to realize that I only needed
["python3","hello_world.py","--MSG","ok"]
in the Command statement
This error
CannotStartContainerError: Error response from daemon:
means it is coming from the Docker daemon, so Docker is doing its job.
It seems like you have some trouble with your Docker image: how it is packaged and how you are trying to pass all those vars.
Please check the Docker Image CMD documentation on how to use ENTRYPOINT and CMD.
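For illustration, a minimal sketch of a Dockerfile using ENTRYPOINT and CMD for this kind of job (the base image and working directory are assumptions; hello_world.py and --MSG come from the question):
FROM python:3.9-slim
WORKDIR /app
COPY hello_world.py .
# Bake the interpreter and script into the image as the entrypoint...
ENTRYPOINT ["python3", "hello_world.py"]
# ...so the Batch job's Command only has to supply (or override) the arguments
CMD ["--MSG", "ok"]
With an image built like this, the job definition's Command could shrink further to just ["--MSG","ok"], since the entrypoint already names the interpreter and script.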
There is some explanation in this question docker-oci-runtime-create-failed-container-linux-go349-starting-container-pro

Unable to Push to Google Container Registry (access denied)

When I tried to push a container image to the Container Registry, it gave me the following error,
denied: Token exchange failed for project 'my-proj-123'. Caller does not have permission 'storage.buckets.create'. To configure permissions, follow instructions at: https://cloud.google.com/container-registry/docs/access-control
I had to follow the Bucket Name Verification process to be able to create the artifacts.my-proj-123.appspot.com bucket. Now when I try to push the Docker image, it no longer complains about the storage.buckets.create permission but only gives:
denied: Access denied.
I don't know which user I need to give access to. I gave Storage Admin access to the Compute Engine default service account to no avail. How can I fix it?
I was able to push a Docker image to Container Registry from a Container Optimized OS.
If you are having permission problems, I recommend giving the Compute Engine default service account at least project editor permissions, just for testing purposes. Even if you only target Cloud Storage, other parts of the process may need more permissions. Once you finish testing, you can create a new service account with fewer permissions and fine-tune it for your needs.
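For example, a sketch of granting such a role to the Compute Engine default service account with gcloud (my-proj-123 is the project from the question; PROJECT_NUMBER is a placeholder for your numeric project number, and roles/editor is only meant for the testing phase described above):
gcloud projects add-iam-policy-binding my-proj-123 \
    --member="serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
    --role="roles/editor"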
Also, there is an alternative to gcloud for authentication. You can try it by following these steps:
First try to download docker-credential-gcr with:
VERSION=1.5.0
OS=linux # or "darwin" for OSX, "windows" for Windows.
ARCH=amd64 # or "386" for 32-bit OSs
curl -fsSL "https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v${VERSION}/docker-credential-gcr_${OS}_${ARCH}-${VERSION}.tar.gz" \
| tar xz --to-stdout ./docker-credential-gcr \
> /usr/bin/docker-credential-gcr && chmod +x /usr/bin/docker-credential-gcr
After that execute docker-credential-gcr configure-docker
Download the Compute Engine default service account json key.
Execute cat [your_service_account_credentials.json] | docker login -u _json_key --password-stdin https://[HOSTNAME]
I hit a similar issue while I was trying to upload a Docker image to GCR from Container-Optimized OS. I ran the following sequence of commands:
Created a service account and assigned Storage Admin privileges.
Downloaded the JSON key
Executed docker-credential-gcr configure-docker
Logged in with docker command - docker login -u _json_key -p "$(cat ./mygcrserviceaccount.JSON)" https://gcr.io
Tried pushing the image to GCR - docker push gcr.io/project-id/imagename:tage01
It failed with the following error:
denied: Token exchange failed for project 'project-id'. Caller does not have permission 'storage.buckets.create'. To configure permissions, follow instructions at: https://cloud.google.com/container-registry/docs/access-control
I tried giving every possible permission to my service account through IAM roles, but it would fail with the same error.
After reading this issue, I made the following changes:
Removed the docker config directory rm -rf ~/.docker
Executed docker-credential-gcr configure-docker
Stored the path to the JSON key in a variable named GOOGLE_APPLICATION_CREDENTIALS
GOOGLE_APPLICATION_CREDENTIALS=/path/to/mygcrserviceaccount.JSON
Logged in with docker command - docker login -u _json_key -p "$(cat ${GOOGLE_APPLICATION_CREDENTIALS})" https://gcr.io
Executed docker push command - docker push gcr.io/project-id/imagename:tage01
Voila, it worked like a charm!