Error: file '/home/user/values.yaml' seems to be a YAML file, but expected a gzipped archive

I am trying to install Kube Prometheus Stack using helm.
I have already set up ingress, so it needs to run behind a proxy.
For that I have updated the values of the chart using the command below.
helm show values prometheus-community/kube-prometheus-stack > values.yaml
I followed this doc and changed the configuration:
[server]
domain = example.com
Now I am trying to install using the command below.
helm install monitoring ./values.yaml -n monitoring
I have already created the monitoring namespace.
I get the error below when running the command above.
Error: file '/home/user/values.yaml' seems to be a YAML file, but expected a gzipped archive

You are passing your values.yaml file where Helm expects a chart; a values file is supplied separately with --values/-f. Your helm command should be something like this:
$ helm install <release-name> <registry-name>/<chart-name> --values ./values.yaml -n monitoring
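Applied to this chart, that would look like the following, assuming the repository was added under the alias prometheus-community (use whatever alias you registered with helm repo add):
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
$ helm install monitoring prometheus-community/kube-prometheus-stack --values ./values.yaml -n monitoring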

Related

Kubectl against GKE Cluster through Terraform's local-exec?

I am trying to make an automatic migration of workloads between two node pools in a GKE cluster. I am running Terraform in a GitLab pipeline. When the new node pool is created, the local-exec runs, and I want to cordon and drain the old node so that the pods are rescheduled on the new one. I am using the registry.gitlab.com/gitlab-org/terraform-images/releases/1.1:v0.43.0 image for my GitLab jobs. Also, python3 is installed with apk add, as is the gcloud CLI: downloading the tar and using the gcloud binary executable from the google-cloud-sdk/bin directory.
I am able to use commands like ./google-cloud-sdk/bin/gcloud auth activate-service-account --key-file=<key here>.
The problem is that I am not able to use kubectl against my cluster.
Although I have installed the gke-gcloud-auth-plugin with ./google-cloud-sdk/bin/gcloud components install gke-gcloud-auth-plugin --quiet, once in the CI job and a second time in the local-exec script in the HCL code, I get the following errors:
module.create_gke_app_cluster.null_resource.node_pool_provisioner (local-exec): E0112 16:52:04.854219 259 memcache.go:238] couldn't get current server API group list: Get "https://<IP>/api?timeout=32s": getting credentials: exec: executable <hidden>/google-cloud-sdk/bin/gke-gcloud-auth-plugin failed with exit code 1
module.create_gke_app_cluster.null_resource.node_pool_provisioner (local-exec): Unable to connect to the server: getting credentials: exec: executable <hidden>/google-cloud-sdk/bin/gke-gcloud-auth-plugin failed with exit code 1
When I check the version of the plugin with gke-gcloud-auth-plugin --version, I get the following error:
/bin/sh: eval: line 253: gke-gcloud-auth-plugin: not found
Which clearly means that the plugin is not installed.
The image that I am using is based on Alpine, for which there is unfortunately no way to install the plugin via the package manager.
Edit: gcloud components list shows gke-gcloud-auth-plugin as installed too.
The solution was to use the google/cloud-sdk image, install Terraform in it, and use that image for the job in question.
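For reference, a minimal sketch of such a job (the job name, Terraform version, and script steps are placeholders, and the plugin package name assumes the Debian-based image's preconfigured Google Cloud apt repo):

provision:
  image: google/cloud-sdk:slim
  before_script:
    # kubectl and the auth plugin install via apt in this Debian-based image
    - apt-get update && apt-get install -y curl unzip kubectl google-cloud-cli-gke-gcloud-auth-plugin
    # Terraform from the official release zip; 1.5.7 is only an example version
    - curl -sLo terraform.zip https://releases.hashicorp.com/terraform/1.5.7/terraform_1.5.7_linux_amd64.zip
    - unzip terraform.zip -d /usr/local/bin
  script:
    - terraform init
    - terraform apply -auto-approve

With gcloud, kubectl, and gke-gcloud-auth-plugin all installed from the same place, the credential plugin is found on the PATH and kubectl can reach the cluster.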

Helm external public chart

A basic question to which I can't seem to find a concrete answer: how should I keep track of external public charts?
Let's say I want to make use of the Kubernetes SIGs AWS Load Balancer Controller:
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/deploy/installation/
I could "imperatively" do the following:
Add the Helm repo:
helm repo add eks https://aws.github.io/eks-charts
Get the chart's input values
helm show values eks/aws-load-balancer-controller > values.yml
Update the clusterName, then install the helm chart (if not using IAM roles for service accounts):
helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system -f values.yml --dry-run
And it will work. But assuming that I have the following directory structure for my Kubernetes IaC:
▾ kubernetes/
▸ apps/
▾ base/
▾ alb_controller/
README.md
values.yml
▸ daemonsets/
I'll end up only with a values.yml and the README.md explaining what I did and which external chart I used.
What would be the best way to handle that type of dependencies?
I'm not 100% sure what you're trying to do, but it sounds like you're trying to utilise another external chart as part of your own chart, in which case Chart Dependencies are probably what you're looking for.
Using Chart Dependencies, you can tell Helm that you want to install additional charts as part of your chart, and Helm will automatically do that whenever you install or upgrade your chart, as sketched below.
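A minimal sketch of a parent chart's Chart.yaml declaring the controller as a dependency (the parent chart name and the pinned version are illustrative only):

apiVersion: v2
name: my-platform                  # hypothetical parent chart
version: 0.1.0
dependencies:
  - name: aws-load-balancer-controller
    version: 1.4.1                 # example; pin the version you actually tested
    repository: https://aws.github.io/eks-charts

Running helm dependency update pulls the packaged chart into the parent's charts/ directory, and installing the parent chart installs the controller with it; overrides from your values.yml go under an aws-load-balancer-controller: key in the parent chart's values.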

Using GCR As Helm Repository

Is it possible to use Google Container Registry as a Helm repo?
I had success pushing charts to GCR; however, when I try to add the repo using helm, I get an error:
Error: looks like "https://gcr.io/********" is not a valid chart repository or cannot be reached: error converting YAML to JSON: yaml: mapping values are not allowed in this context
Is that to be expected? I am running:
helm repo add reponame https://gcr.io/***** --username user-name --password *****
I tried this myself, but it did not work. Instead, I found this plugin:
https://github.com/hayorov/helm-gcs
This allows you to use GCS as a helm repository. Quick setup/usage:
$ helm plugin install https://github.com/hayorov/helm-gcs.git
# Init a new repository
$ helm gcs init gs://bucket/path
# Add your repository to Helm
$ helm repo add repo-name gs://bucket/path
# Push a chart to your repository
$ helm gcs push chart.tar.gz repo-name
# Update Helm cache
$ helm repo update
# Fetch the chart
$ helm fetch repo-name/chart
# Remove the chart
$ helm gcs rm chart repo-name
You can use ChartMuseum. With ChartMuseum you can configure any remote backend (GCS, S3, etc.); ChartMuseum and your backend combined act as a chart repository with all the operations supported.
Also, ChartMuseum is maintained under the Helm project, so it is actively updated and properly maintained.
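A quick sketch of pointing ChartMuseum at a GCS bucket and registering it as a repo (the bucket name, port, and repo alias are placeholders; ChartMuseum picks up GCS credentials from GOOGLE_APPLICATION_CREDENTIALS):
$ export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
$ chartmuseum --port=8080 --storage=google --storage-google-bucket=my-charts-bucket
# In another shell, add it as a repository
$ helm repo add my-charts http://localhost:8080
$ helm repo update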

AWS Elastic Beanstalk commands return no output

I am very new to Amazon Web Services and have been trying a learn-by-doing approach with them.
In summary, I was trying to set up Git with the Elastic Beanstalk command line interface for my web app. However, I wanted to use my SSH key pair to authenticate, and in my naivety and ignorance I supplied the SSH key files where the credentials (aws-access-id, secret) were expected, and now I can't get it to work. More specifics are stated below.
I have my project directory with Git set up so that it works. I then open the Git Bash window (MINGW64; I am on Windows 10) and attempt to set up eb.
$ eb init
It then tells me that my credentials are not set up and asks me for the aws-access-id and the secret. I had just set up the SSH key pair and tried to enter those files; what's the harm in trying? EB failure, it turns out. Now, the instances still seem to run fine, judging by their status on the AWS console website. However, whatever I type into the bash:
$ eb init
$ eb status
$ eb deploy
$
There is no output, not even an error. It just silently returns and awaits a new command from me.
When using the --debug option with these commands, a long list of operations is returned, ending with
botocore.parsers.ResponseParserError: Unable to parse response (no element found: line 1, column 0), invalid XML received:
b''
I thought I would be able to log out or do something of the like, so that I could enter the proper credentials that I messed up at the beginning. I restarted the web app from the AWS web interface and restarted my PC. No success.
Thanks in advance.
EDIT:
I also tried reinstalling awscli and awsebcli:
pip uninstall awsebcli
pip uninstall awscli
pip install awscli
pip install awsebcli --upgrade --user
The problem persists, but now there is one line of output (previously seen only with the --debug option):
$ eb init
ERROR: ResponseParserError - Unable to parse response (no element found: line 1, column 0), invalid XML received:
b''
$
It sounds like you have replaced your AWS credentials in the ~/.aws/credentials and/or ~/.aws/config file(s) with your SSH key. You can fix these files manually, or run aws configure if you have the AWS CLI installed.
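For reference, a valid ~/.aws/credentials file looks like this (the values shown are AWS's documented example placeholders, not real keys):

[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFJ/K7MDENG/bPxRfiCYEXAMPLEKEY

Running aws configure prompts for both values (plus a default region and output format) and rewrites these files for you; after that, eb init should prompt normally again.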

Kubernetes on AWS

When running the following command on kube-master (CoreOS):
export KUBERNETES_PROVIDER=aws; wget -q -O - https://get.k8s.io | bash
I get the following error:
Can't find aws in PATH, please fix and retry.
I have already set PATH. Can anyone tell me which 'aws' it is searching for? Is it the aws directory in the kubernetes repo directory, i.e. kubernetes/cluster/aws?
Follow the AWS CLI installation guide and then ensure your PATH is set correctly.
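Judging by the other answers here, the aws it cannot find is the AWS CLI binary. For example, on Linux the AWS CLI v2 can be installed like this (commands per the AWS docs; adjust the architecture in the URL if needed):
$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o awscliv2.zip
$ unzip awscliv2.zip
$ sudo ./aws/install
$ aws --version   # confirms the binary is reachable on your PATH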
Yes, you are right.
If you set "aws" as KUBERNETES_PROVIDER, Kubernetes will use the scripts that reside in kubernetes/cluster/aws. If no KUBERNETES_PROVIDER is set, I believe the default is to rely on the gcloud CLI tool.
If you are using Ubuntu, run the command below; it will resolve your issue.
apt-get install awscli