I'm trying to build a Docker container and push it to an ECR repository. Everything works fine locally, but once moved to AWS I'm getting this error:
dpkg-deb: error: 'docker-ce_20.10.3_3-0_ubuntu-bionic_amd64.deb' is not a Debian format archive
dpkg: error processing archive docker-ce_20.10.3_3-0_ubuntu-bionic_amd64.deb (--install):
dpkg-deb --control subprocess returned error exit status 2
from the following commands in the Dockerfile:
COPY docker-assets/docker-ce_20.10.3_3-0_ubuntu-bionic_amd64.deb /home/folder/
RUN dpkg -i docker-ce_20.10.3_3-0_ubuntu-bionic_amd64.deb
Has anyone hit the same issue, or can anyone help me out?
I worked around this by downloading the package from a URL instead of copying it into the image.
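For anyone hitting the same thing, the workaround looks roughly like this in the Dockerfile (a sketch; the exact download.docker.com path and filename may need adjusting):
# fetch the package at build time from the Docker download archive instead of COPYing the local file
ADD https://download.docker.com/linux/ubuntu/dists/bionic/pool/stable/amd64/docker-ce_20.10.3~3-0~ubuntu-bionic_amd64.deb /home/folder/docker-ce.deb
RUN dpkg -i /home/folder/docker-ce.deb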
I am trying to make an automatic migration of workloads between two node pools in a GKE cluster. I am running Terraform in a GitLab pipeline. When the new node pool is created, a local-exec provisioner runs, and I want to cordon and drain the old nodes so that the pods are rescheduled on the new pool. I am using the registry.gitlab.com/gitlab-org/terraform-images/releases/1.1:v0.43.0 image for my GitLab jobs. Also, python3 is installed with apk add, as is the gcloud CLI: I download the tar archive and use the gcloud binary from the google-cloud-sdk/bin directory, roughly as shown below.
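The CI install step looks roughly like this (a sketch; the CLI version and exact download URL here are placeholders):
# install python3 (needed by gcloud) and curl on the Alpine-based image
apk add --no-cache python3 curl
# download and unpack the gcloud CLI tarball, then use the binaries from google-cloud-sdk/bin
curl -fsSLo google-cloud-cli.tar.gz "https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-cli-<version>-linux-x86_64.tar.gz"
tar -xzf google-cloud-cli.tar.gz
./google-cloud-sdk/bin/gcloud --version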
I am able to use commands like ./google-cloud-sdk/bin/gcloud auth activate-service-account --key-file=<key here>.
The problem is that I am not able to use kubectl against my cluster.
Although I have installed the gke-gcloud-auth-plugin with ./google-cloud-sdk/bin/gcloud components install gke-gcloud-auth-plugin --quiet, once in the CI job and a second time in the local-exec script in the HCL code, I get the following errors:
module.create_gke_app_cluster.null_resource.node_pool_provisioner (local-exec): E0112 16:52:04.854219 259 memcache.go:238] couldn't get current server API group list: Get "https://<IP>/api?timeout=32s": getting credentials: exec: executable <hidden>/google-cloud-sdk/bin/gke-gcloud-auth-plugin failed with exit code 1
module.create_gke_app_cluster.null_resource.node_pool_provisioner (local-exec): Unable to connect to the server: getting credentials: exec: executable <hidden>/google-cloud-sdk/bin/gke-gcloud-auth-plugin failed with exit code 1
When I check the version of the plugin with gke-gcloud-auth-plugin --version, I get the following error:
/bin/sh: eval: line 253: gke-gcloud-auth-plugin: not found
This seems to mean that the plugin cannot be found.
The image I am using is based on Alpine, for which, unfortunately, there is no way to install the plugin via the package manager.
Edit: gcloud components list shows gke-gcloud-auth-plugin as installed too.
The solution was to use the google/cloud-sdk image, install Terraform in it, and use that image for the job in question.
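In rough outline, the job now does something like this inside google/cloud-sdk (names and the key-file variable are placeholders; depending on the image variant, kubectl and the gke-gcloud-auth-plugin may still need to be added as components):
# authenticate and fetch cluster credentials; the auth plugin now resolves from the image's PATH
gcloud auth activate-service-account --key-file="$SERVICE_ACCOUNT_KEY_FILE"
gcloud container clusters get-credentials <cluster-name> --region <region> --project <project-id>
# cordon and drain the old pool's nodes so the pods get rescheduled on the new pool
kubectl cordon <old-node-name>
kubectl drain <old-node-name> --ignore-daemonsets --delete-emptydir-data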
I am trying to connect Kafka (MSK) in AWS to Elasticsearch in AWS. I set it up but am currently getting an error. Here are the steps:
sudo apt-get update
sudo apt-get install -y openjdk-8-jdk
wget -O confluent-5.2.0-2.11.tar.gz "https://packages.confluent.io/archive/5.2/confluent-5.2.0-2.11.tar.gz?_ga=2.30447679.1453070970.1611201478-474568264.1611201478"
tar -xf confluent-5.2.0-2.11.tar.gz
confluent-hub install confluentinc/kafka-connect-elasticsearch:11.0.0
export PATH=/home/ubuntu/confluent-5.2.0/bin:${PATH};
I then updated the connect-standalone.properties config file:
bootstrap.servers=b-1.xx.xx.c8.kafka.us-east-1.amazonaws.com:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
offset.storage.file.filename=/tmp/connect.offsets
plugin.path=share/java,/home/ubuntu/confluent-5.2.0/share/confluent-hub-components
Then I created another config file for the sink connector.
name=elasticsearch-sink
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=1
topics=sampleTopic
topic.index.map=logs:logs_index
connection.url=https://xxxx.us-east-1.es.amazonaws.com:443
type.name=log
key.ignore=true
schema.ignore=true
Then I run the connect-standalone command to start the connector.
bin/connect-standalone etc/kafka/connect-standalone.properties etc/kafka/elasticsearch-connect.properties
It runs but eventually throws an error, and I cannot figure out why. Below is the error:
ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:119)
java.lang.NoClassDefFoundError: org/apache/kafka/common/config/ConfigDef$CaseInsensitiveValidString
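I wonder whether this could be a version mismatch between the connector (11.0.0) and the Kafka libraries bundled with Confluent 5.2; this is how both can be inspected (a sketch, assuming the default paths from the steps above):
# Kafka client jars shipped with the Confluent 5.2 platform
ls /home/ubuntu/confluent-5.2.0/share/java/kafka/kafka-clients-*.jar
# jars pulled in by the connector installed via confluent-hub
ls /home/ubuntu/confluent-5.2.0/share/confluent-hub-components/confluentinc-kafka-connect-elasticsearch/lib/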
Any help or input would be great, thank you.
I am trying to deploy my web app, built with Flask in Python, to Elastic Beanstalk. This is the first time I have used this service, and I am uploading it from the AWS console. However, the log file shows errors with the requirements.txt file, which I created on my local computer by typing "pip freeze > requirements.txt". This produced a 360-line requirements file (isn't that too much?), and the log keeps showing errors like this one:
--------------------------------------------------------
2020/11/10 09:22:02.505005 [ERROR] An error occurred during execution of command [app-deploy] - [InstallDependency]. Stop running the command. Error: fail to install dependencies with requirements.txt file with error Command /bin/sh -c /var/app/venv/staging-LQM1lest/bin/pip install -r requirements.txt failed with error exit status 1. Stderr:ERROR: Could not find a version that satisfies the requirement anaconda-client==1.7.2 (from -r requirements.txt (line 5)) (from versions: 1.1.1, 1.2.2)
ERROR: No matching distribution found for anaconda-client==1.7.2 (from -r requirements.txt (line 5))
2020/11/10 09:22:02.505022 [INFO] Executing cleanup logic
2020/11/10 09:22:02.505119 [INFO] CommandService Response: {"status":"FAILURE","api_version":"1.0","results":[{"status":"FAILURE","msg":"Engine execution has encountered an error.","returncode":1,"events":[{"msg":"Instance deployment failed to install application dependencies. The deployment failed.","timestamp":1605000122,"severity":"ERROR"},{"msg":"Instance deployment failed. For details, see 'eb-engine.log'.","timestamp":1605000122,"severity":"ERROR"}]}]}
---------------------------------------------------------
I deleted the entry "anaconda-client==1.7.2" and it still does not work. The same problem occurs with anaconda-navigator==1.9.12, anaconda-project==0.8.3, Automat==20.2.0... I erased them all, but there is always another failing requirement.
I guess the requirements.txt file is just wrong... any ideas to solve the problem? Did I create the requirements.txt correctly? Might it be some kind of problem with the environments?
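For instance, would it make more sense to regenerate the file from a clean virtual environment that only contains the packages the app actually imports? Something like this (the package names are just placeholders):
# start from an empty virtual environment
python3 -m venv venv
source venv/bin/activate
# install only what the app really needs (placeholders)
pip install flask gunicorn
# freeze now lists only those packages and their dependencies
pip freeze > requirements.txt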
Thanks a lot.
I'm following this tutorial to install WordPress on AWS EC2 Ubuntu:
https://www.digitalocean.com/community/tutorials/how-to-install-wordpress-on-ubuntu-20-04-with-a-lamp-stack
When I run this part:
curl -O https://wordpress.org/latest.tar.gz
tar xzvf latest.tar.gz
I get this error:
gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
I tried every combination of tar flags (xzvf and so on) and nothing seems to work.
I have added the download step above.
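Is there a quick way to check whether the downloaded file is actually a gzip archive or, say, an HTML error page? Something along these lines (a sketch):
file latest.tar.gz                             # should report "gzip compressed data"
head -c 200 latest.tar.gz                      # readable HTML here would mean the download itself failed
curl -LO https://wordpress.org/latest.tar.gz   # -L follows redirects, in case the URL redirects
tar xzvf latest.tar.gz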
I appreciate the help.
Has anyone encountered a failed deployment when deploying a Docker app to AWS EB?
Here's a piece of the log:
time="2016-09-20T09:36:42.802106539Z" level=error msg="Handler for DELETE /v1.23/containers/c7bc72d9ccec returned error: You cannot remove a running container c7bc72d9ccec6557ddca8e90c7c77b350cb0c80be9a90921478adccd70a2b97a. Stop the container before attempting removal or use -f"
time="2016-09-20T09:36:42.924322201Z" level=error msg="Handler for DELETE /v1.23/images/9daab71ad3c0 returned error: conflict: unable to delete 9daab71ad3c0 (cannot be forced) - image is being used by running container c7bc72d9ccec"
time="2016-09-20T09:36:42.924865908Z" level=error msg="Handler for DELETE /v1.23/images/dbcc41959b55 returned error: conflict: unable to delete dbcc41959b55 (cannot be forced) - image has dependent child images"
The first deployment of the environment works well. However, every time I deploy a new version of the app, it fails.
Running on 64bit Amazon Linux 2016.03 v2.1.6 | Docker 1.11.2
My Dockerfile is rather simple:
# Use Node 6.5.0
FROM node:6.5.0
# Create working directory
WORKDIR /app
ADD . /app
# Install dependencies
RUN npm install
# Expose port 3000
EXPOSE 3000
# Start app
CMD ["node", "server.js"]
It turns out that npm install was probably taking too long to run: once I put node_modules into the zip and removed npm install from the Dockerfile, deployment now takes 3-5 minutes.
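Roughly what the adjusted Dockerfile looks like with node_modules shipped inside the zip (a sketch):
# Use Node 6.5.0
FROM node:6.5.0
# Create working directory
WORKDIR /app
# node_modules is already bundled in the uploaded zip, so there is no npm install step here
ADD . /app
# Expose port 3000
EXPOSE 3000
# Start app
CMD ["node", "server.js"]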