Can I patch a string value in a kubernetes configmap? - kubectl

Trying to patch a string value in the data section of a kubernetes config map but running into an error.
kubectl patch configmap cm-example -n example-ns -p '{"data":{"application.yml":{"keycloak":{"auth-server-url":"https://server-url.domain.com/auth/"}}}}'
Getting the below error
The request is invalid: patch: Invalid value: "map[data:map[application.yml:map[keycloak:map[auth-server-url:https://server-url.domain.com/auth/]]]]": unrecognized type: string

I had the same issue when I tried to run a patch on a configmap that holds a file which is supposed to be YAML.
The thing is: deployments, pods, jobs - those are YAML or JSON. BUT a file inside a configmap is just a string. The patch has no structure to merge into, so you have to send the entire string, which is not very useful. Another option is to use sed, but it does not give the same experience as patching a pod, deployment and so on - resources that really are YAML or JSON.
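To illustrate, here is a minimal sketch (names taken from the question) of building a patch payload where the whole file is one string; the nested keycloak mapping in the question's command is exactly what triggers the "unrecognized type: string" error:

```python
import json

# A ConfigMap "data" value is always a plain string, so the whole file
# content has to be sent as one string, not as a nested mapping.
new_yaml = "keycloak:\n  auth-server-url: https://server-url.domain.com/auth/\n"

patch = {"data": {"application.yml": new_yaml}}
print(json.dumps(patch))
# The printed JSON can then be passed to:
#   kubectl patch configmap cm-example -n example-ns -p '<that json>'
```

The downside, as noted above, is that you must regenerate and resend the entire file content on every change.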

Related

AWS: ERROR: Pre-processing of application version xxx has failed and Some application versions failed to process. Unable to continue deployment

Hi, I am trying to deploy a Node application from Cloud9 to ELB, but I keep getting the below error.
Starting environment deployment via CodeCommit
--- Waiting for Application Versions to be pre-processed ---
ERROR: Pre-processing of application version app-491a-200623_151654 has failed.
ERROR: Some application versions failed to process. Unable to continue deployment.
I have attached an image of the IAM roles that I have. Any solutions?
Open the Elastic Beanstalk console, go to both Applications and Environments, and delete them. Then, in your terminal, run
eb init #Follow instructions
eb create --single ##Follow instructions.
This fixes the error, which is caused by application versions stuck in a failed state. If you want to check those, run
aws elasticbeanstalk describe-application-versions
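For instance, the JSON that describe-application-versions prints can be filtered for failed versions. A minimal sketch with a made-up sample payload (the real payload comes from the CLI call above, and the status values here are illustrative):

```python
import json

# Hypothetical sample of what `aws elasticbeanstalk
# describe-application-versions` prints; the real call returns
# one entry per application version.
raw = """
{"ApplicationVersions": [
  {"VersionLabel": "app-491a-200623_151654", "Status": "Failed"},
  {"VersionLabel": "app-ok-200623_120000", "Status": "Processed"}
]}
"""

versions = json.loads(raw)["ApplicationVersions"]
failed = [v["VersionLabel"] for v in versions if v["Status"] == "Failed"]
print(failed)
```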
I was searching for this answer as a result of watching a YouTube tutorial for how to pass the AWS Certified Developer Associate exam. If anyone else gets this error as a result of that tutorial, delete the 002_node_command.config file created in the tutorial and commit that change, as that is causing the error to occur.
A failure in the pre-processing phase may be caused by an invalid manifest, configuration, or .ebextensions file.
If you deploy an (invalid) application version using eb deploy with the preprocess option enabled, the details of the error will not be revealed.
You can remove the --process flag and enable the verbose option to improve the error output.
In my case I deploy using this command:
eb deploy -l "XXX" -p
which can return a failure when I mess around with .ebextensions:
ERROR: Pre-processing of application version xxx has failed.
ERROR: Some application versions failed to process. Unable to continue deployment.
With that result I can't figure out what is wrong,
but deploying without -p (or --process) and adding the -v (verbose) flag:
eb deploy -l "$deployname" -v
returns something more useful:
Uploading: [##################################################] 100% Done...
INFO: Creating AppVersion xxx
ERROR: InvalidParameterValueError - The configuration file .ebextensions/16-my_custom_config_file.config in application version xxx contains invalid YAML or JSON.
YAML exception: Invalid Yaml: while scanning a simple key
in 'reader', line 6, column 1:
(... details of the error ...)
, JSON exception: Invalid JSON: Unexpected character (#) at position 0.. Update the configuration file.
Now I can fix the problem.

Istio manual sidecar injection gives an error

I am trying to manually inject istio sidecar into an existing deployment according to the instructions here:
https://istio.io/docs/setup/kubernetes/additional-setup/sidecar-injection
I am getting the following error, however:
$ istioctl kube-inject -f k8s/prod/deployment.yaml
Error: missing configuration map key "values" in "istio-sidecar-injector"
I get this error even when I try different kinds with different yaml files. Is this a bug, or am I doing something wrong? How can I add "values" to the configuration map?
Check the version of your istioctl binary (istioctl version) against the Istio version installed on your cluster: if they differ, you may get this error message (or a similar one).

Unable to pull docker image into Kubernetes Pod from Google Container Registry

I have read this question and this one, and created my Kubernetes secret for Google Container Registry using a service account JSON key with project owner and viewer permissions. I have also verified that the image does in fact exist in Google Container Registry by going to the console.
I have also read this document.
When I run:
minikube dashboard
And then, from the user interface, I click the "+" symbol and specify the URL of my image like this:
project-123456/bot-image
then click on 'advanced options' and specify the Secret that was imported.
After a few seconds I see this error:
Error: Status 403 trying to pull repository project-123456/bot-image: "Unable to access the repository: project-123456/bot-image; please verify that it exists and you have permission to access it (no valid credential was supplied)."
If I look at what's inside the Secret file (.dockerconfigjson), it's like:
{"https://us.gcr.io": {"email": "admin@domain.com", "auth": "longtexthere"}}
What could be the issue?
The json needs a top-level "auths" key, as explained in:
Creating image pull secret for google container registry that doesn't expire?
So the json should be structured like:
{"auths":{"https://us.gcr.io": {"email": "admin@domain.com", "auth": "longtexthere"}}}
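To see how that structure comes together, here is a minimal sketch of assembling the .dockerconfigjson payload by hand (the service-account key content below is a placeholder; for GCR the username is typically _json_key and the password is the full JSON key file):

```python
import base64
import json

# Placeholder credentials: for GCR, username is "_json_key" and the
# password is the contents of the service-account JSON key file.
username = "_json_key"
password = '{"type": "service_account"}'

# "auth" is base64 of "username:password".
auth = base64.b64encode(f"{username}:{password}".encode()).decode()

dockerconfigjson = json.dumps({
    "auths": {
        "https://us.gcr.io": {
            "email": "admin@domain.com",
            "auth": auth,
        }
    }
})
print(dockerconfigjson)
```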
If you are still having issues, you can alternatively download the latest version of minikube (0.17.1) and run
minikube addons configure registry-creds
following the prompts there to set up the creds,
then run minikube addons enable registry-creds
Now you should be able to pull down pods from GCR using a yaml structured like this:
apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: default
spec:
  containers:
    - image: gcr.io/example-vm/helloworld:latest
      name: foo
EDIT: 6/13/2018 updating the commands to reflect the comment by @Rambatino

Ansible Keepass integration via python script

I am very new to Ansible and would like to test a few things.
I have a couple of Amazon EC2 instances and would like to install different software components on them. I don't want to keep the (plaintext) credentials of the technical users inside Ansible scripts or config files. I know that it is possible to encrypt those files, but I want to try KeePass as a central password management tool. So my installation scripts should read the credentials from a .kdbx (KeePass 2) database file before starting the actual installation.
So far I have written a basic Python script for reading the .kdbx file. The script outputs a JSON object via:
print json.dumps(inventory, sort_keys=False)
The output looks like the following:
{
  "cdc": {
    "cdc_test_server": {
      "cdc_test_user": {
        "username": "cdc_test_user",
        "password": "password"
      }
    }
  }
}
Now I want the Python script to be executed by Ansible and the key/value pairs of the output to be included/registered as Ansible variables. So far my playbook looks as follows:
- hosts: 127.0.0.1
  connection: local
  tasks:
    - name: "Test Playbook Functionality"
      command: python /usr/local/test.py
      register: pass
    - debug: var=pass.stdout
    - name: "Include json user output"
      set_fact: passwords="{{pass.stdout | from_json}}"
    - debug: " {{passwords.cdc.cdc_test_server.cdc_test_user.password}} "
The first debug generates the correct JSON output, but I am not able to include the variables in Ansible so that I can use them via Jinja2 notation. set_fact doesn't throw an exception, but the last debug just returns a "Hello world" message. So my question is: how do I properly include the JSON key/value pairs as Ansible variables via a task?
See Ansible KeePass Lookup Plugin
ansible_user : "{{ lookup('keepass', 'path/to/entry', 'username') }}"
ansible_become_pass: "{{ lookup('keepass', 'path/to/entry', 'password') }}"
You may want to use facts.d and place your Python script there so its output becomes available as a fact.
Or write a simple action plugin that returns a JSON object, to eliminate the need for the stdout -> from_json conversion.
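A facts.d script can be as small as this sketch (the fact structure mirrors the question's output; the file path and name are illustrative):

```python
#!/usr/bin/env python
# Minimal sketch of an Ansible local-facts script: any executable
# *.fact file under /etc/ansible/facts.d/ that prints JSON is exposed
# on the host as ansible_local.<filename>.
import json

facts = {
    "cdc_test_server": {
        "cdc_test_user": {"username": "cdc_test_user", "password": "password"}
    }
}
print(json.dumps(facts))
```

Saved as e.g. /etc/ansible/facts.d/cdc.fact (executable), the password would then be reachable as {{ ansible_local.cdc.cdc_test_server.cdc_test_user.password }} after fact gathering.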
Late to the party, but it seems your use case is primarily covered by keepass-inventory. And it doesn't require any playbook "magic". Disclaimer: I contribute to this non-profit.
export KDB_PATH=example.kdbx
export KDB_PASS=example
ansible all --list-hosts -i keepass-inventory.py

Passing .config file to AWS Elastic Beanstalk

I'm using the Elastic Beanstalk command line tool and the command eb config put. According to the AWS Elastic Beanstalk documentation, you must name your file *.cfg.yml and place it inside the .elasticbeanstalk/saved_configs folder (http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb3-config.html#eb3-configexample). I have done this, and it forces you to write YAML. I don't know much about YAML, but I have tried many different ways of writing my config file and I can't get EB to accept it. Below is what I think is the closest I've come (actual names and URL substituted):
aws:
  elasticbeanstalk:
    create-configuration-template:
    application_name: ProjectName
    template_name: TemplateName
    environment_id: EnvironmentName
    - option_settings:
        option_name: mongodb
        value: "mongoAddress.com"
The error message for the above is:
ERROR: Error parsing configuration file as yaml or json. Yaml error: 'Invalid Yaml: while parsing a block mapping
in "<reader>", line 4, column 5:
application_name: ProjectName
^
expected <block end>, but found BlockEntry
in "<reader>", line 7, column 5:
- option_settings:
^
', Json error: 'Invalid JSON: Unexpected character (a) at position 0.'
and if I take away the "- " before option_settings then I get this error:
ERROR: Invalid Environment Configuration specification. Must specify configuration template version.
Any ideas? I've checked all over the internet and I can't find anything on this.
EDIT: Here's some more on the template from the AWS docs: http://docs.aws.amazon.com/cli/latest/reference/elasticbeanstalk/create-configuration-template.html
Abhishek Singh from AWS got back to me on Twitter and shared this blog post from the AWS website.
https://blogs.aws.amazon.com/application-management/post/Tx1YHAJ5EELY54J/Using-the-Elastic-Beanstalk-EB-CLI-to-create-manage-and-share-environment-config
In summary, he recommends using "eb config save" then modifying the file so that you don't need to worry about format as much. The details of how to do this are in the post above.