Adding toml config file into helm chart - templates

Newbie at Helm here. I'm trying to add a static TOML config file to a Helm chart, but the content of the deployed manifest bothers me. Here's the tree of my chart:
.
├── Chart.yaml
├── telegraf.conf
└── templates
    └── configmap.yaml
configmap.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: telegraf-api
data:
{{ (.Files.Glob "telegraf.conf").AsConfig | indent 4 }}
telegraf.conf
[global_tags]
[agent]
  interval = "10s"
  round_interval = true
  metric_batch_size = 1000
  metric_buffer_limit = 10000
  collection_jitter = "0s"
  flush_interval = "10s"
  flush_jitter = "0s"
  precision = "0s"
  hostname = ""
  omit_hostname = false
[[inputs.cpu]]
  percpu = true
  totalcpu = true
  collect_cpu_time = false
  report_active = false
  core_tags = false
[[inputs.disk]]
  ignore_fs = ["tmpfs", "devtmpfs", "devfs", "iso9660", "overlay", "aufs", "squashfs"]
[[inputs.diskio]]
[[inputs.kernel]]
[[inputs.mem]]
[[inputs.processes]]
[[inputs.swap]]
[[inputs.system]]
I can install the chart without any problems, but the problem occurs when I inspect the deployed manifest (it has a lot of backslashes, like this):
$ helm get manifest telegraf
---
# Source: telegraf/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: telegraf-api
data:
  telegraf.conf: "[global_tags]\n[agent]\n interval = \"10s\"\n round_interval = true\n
    \ metric_batch_size = 1000\n metric_buffer_limit = 10000\n collection_jitter =
    \"0s\"\n flush_interval = \"10s\"\n flush_jitter = \"0s\"\n precision = \"0s\"\n
    \ hostname = \"\"\n omit_hostname = false\n[[inputs.cpu]]\n percpu = true\n totalcpu
    = true\n collect_cpu_time = false\n report_active = false\n core_tags = false\n[[inputs.disk]]\n
    \ ignore_fs = [\"tmpfs\", \"devtmpfs\", \"devfs\", \"iso9660\", \"overlay\", \"aufs\",
    \"squashfs\"]\n[[inputs.diskio]]\n[[inputs.kernel]]\n[[inputs.mem]]\n[[inputs.processes]]\n[[inputs.swap]]\n[[inputs.system]]\n
    \ "
Does anyone have any thoughts on how to deploy it so the config doesn't get messed up?

This should not cause any issues with the configuration. The backslashes are not part of the configuration itself; they are just YAML escaping. helm get manifest renders the file as a double-quoted YAML scalar, so newlines appear as \n and wrapped lines are continued with a leading backslash. When the ConfigMap is mounted into a pod, the actual key-value pairs will be loaded correctly.
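You can verify this by asking the API server for the decoded value (resource and key names taken from the chart above; the dot in the key has to be escaped in jsonpath):
kubectl get configmap telegraf-api -o jsonpath='{.data.telegraf\.conf}'
This prints the original telegraf.conf byte for byte, with real newlines and no backslashes.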

Related

Ansible replace a value in big config file

I have a config file with more than 3000 lines, where I need to change/replace only a few parameters.
Since the config file is huge, I am unable to use the template module.
I need help replacing the parameters below.
gateway-config {
  enable = true
  host-name = "car-cache"
  port = 202
  batch-size = 100
  patterns = ["^((test))"]
  type = LINE
  prefix = "stats."${auth}".service"
}
k9-config {
  enable = true
  send-enable = false
  host-name = ${auth}
  connection-timeout = 120000
  read-timeout = 60000
  proxy = ""
  project = "Networking"
  period = 120
I need to change enable = false to enable = true only in one specific block, but when I use the replace module, every enable = false in the config file gets replaced.
You can actually use the replace module with the after and before parameters:
- name: Replace between the expressions (requires Ansible >= 2.4)
  replace:
    path: /path/to/your/file
    after: 'gateway-config {'
    before: '}'
    regexp: '^(\s*enable = )false$'
    replace: '\g<1>true'
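To confirm that only the intended block would be touched before writing anything, you can run the task in check mode with diff output (the playbook name here is just a placeholder):
ansible-playbook fix-config.yml --check --diff
The --diff output shows the exact line that would change inside gateway-config { ... } and nothing else.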
You can also use the replace module with just the after parameter:
---
- name: Replace variable
  replace:
    path: "/etc/repli.conf"
    after: "hite-config {"
    regexp: "enable = false"
    replace: "enable = true"

What may be the reason for helm_release in a Terraform script being terribly slow?

I have a Terraform script for my AWS EKS cluster with the following pieces in it:
provider "helm" {
alias = "helm"
debug = true
kubernetes {
host = module.eks.endpoint
cluster_ca_certificate = module.eks.ca_certificate
token = data.aws_eks_cluster_auth.cluster.token
load_config_file = false
}
}
and:
resource "helm_release" "prometheus_operator" {
provider = "helm"
depends_on = [
module.eks.aws_eks_auth
]
chart = "stable/prometheus-operator"
name = "prometheus-operator"
values = [
file("staging/prometheus-operator-values.yaml")
]
wait = false
version = "8.12.12"
}
With this setup it takes ~15 minutes to install the required chart with terraform apply, and sometimes it fails (with helm ls showing pending-install status). On the other hand, if I use the following command:
helm install prometheus-operator stable/prometheus-operator -f staging/prometheus-operator-values.yaml --version 8.12.12 --debug
the required chart gets installed in ~3 minutes and never fails. What is the reason for this behavior?
EDIT
Here is a log file from a failed installation. It's quite big, 5.6 MB. What bothers me a bit is located at lines 47725 and 56045.
What's more, helm status prometheus-operator gives valid output (as if it had been installed successfully), however there are no pods defined.
EDIT 2
I've also raised an issue.
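For anyone digging into a similar case: besides the provider's debug = true above, Terraform's own trace can be captured with its standard logging environment variable, e.g.:
TF_LOG=DEBUG terraform apply 2> terraform-debug.log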

list AMIs older than x days/months

Does anyone know if it's possible to retrieve a list of EC2 AMIs older than x months (or days) using the ec2_ami_find module? So far I've got:
- name: ec2 find all
  ec2_ami_find:
    owner: self
    region: us-west-1
    sort: creationDate
    sort_order: descending
  register: ec2_ami

- name: test
  set_fact:
    date: "{{lookup('pipe','date +%Y%m%d%H%M%S -d \"180 day ago\"')}}"
    msg: "{{ ec2_ami | json_query('results[?creationDate<`{{ date }}`]') }}"
However, this doesn't seem to work for me. Whatever I put in the date command (180 days, 1 day, 700 days), it retrieves the exact same list of AMIs.
It has to do with string interpolation when replacing the date variable in the set_fact directive: Jinja2 does not expand a nested {{ date }} inside a string that is already being templated, so the JMESPath query ends up comparing against the literal text {{ date }}. Build the query by concatenating the variable instead. Here is an example; I have also used the ec2_ami_facts module instead of ec2_ami_find, as ec2_ami_find will be deprecated soon.
---
- hosts: localhost
  remote_user: me
  gather_facts: no
  connection: local
  tasks:
    - ec2_ami_facts:
        owner: self
        region: eu-central-1
      register: ec2_ami
    - set_fact:
        filter_date: "{{ lookup('pipe','date \"+%Y-%m-%d\" -d \"180 day ago\"') }}"
    - debug: var=filter_date
    - set_fact:
        filtered_ami: "{{ ec2_ami | json_query(\"images[?creation_date<=`\" + filter_date + \"`]\") }}"
    - shell: echo "{{ filtered_ami | length }} {{ ec2_ami.images | length }}"
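The play targets the implicit localhost, so it can be run without an inventory (the playbook filename is an assumption):
ansible-playbook old-amis.yml
The string comparison in json_query works here because creation_date is an ISO-8601 timestamp, so lexicographic order matches chronological order as long as filter_date uses the same year-month-day layout.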
Please find below a Python script to list the AMIs older than X days. Do not forget to update your AWS credentials profile value if you have multiple accounts, e.g. profiles = ["default", "profile2"].
import boto3
from dateutil.parser import parse
import datetime

retention_day = 30  # add your desired number of days here
profiles = ["default"]  # add all of your AWS credential profiles here

def days_old(date):
    # Age of the AMI in whole days, based on its CreationDate timestamp
    get_date_obj = parse(date)
    date_obj = get_date_obj.replace(tzinfo=None)
    diff = datetime.datetime.now() - date_obj
    return diff.days

def get_ami_snap_list():
    for profile in profiles:
        session = boto3.Session(profile_name=profile)
        ec2 = session.client('ec2')
        Name = ""
        Description = ""
        amis = ec2.describe_images(Owners=['self'])
        for ami in amis['Images']:
            try:
                create_date = ami['CreationDate']
                ami_id = ami['ImageId']
                day_old = days_old(create_date)
                if day_old > retention_day:
                    image = ec2.describe_images(ImageIds=[ami_id])
                    for img in image['Images']:
                        Name = img['Name']
                        Description = img.get('Description', '')  # Description may be absent
                        print(ami_id + ",", Name + ",", Description + ",", profile)
            except Exception:
                print(ami_id + ",", Name + ",", Description + ",", profile)

get_ami_snap_list()
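Saved as, say, list_old_amis.py, the script prints one comma-separated line per matching AMI, so the output can be redirected straight into a CSV file:
python list_old_amis.py > old-amis.csv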

log not generated in Superset 0.24

I have done the following log configuration in my superset_config.py file:
LOG_FORMAT = '%(asctime)s:%(levelname)s:%(name)s:%(message)s'
LOG_LEVEL = 'DEBUG'
ENABLE_TIME_ROTATE = False
TIME_ROTATE_LOG_LEVEL = 'DEBUG'
FILENAME = os.path.join(DATA_DIR, 'log', 'superset.log')
ROLLOVER = 'midnight'
INTERVAL = 1
BACKUP_COUNT = 30
But logs are not generated in my DATA_DIR/log/superset.log file. Is there any configuration missing?
Change ENABLE_TIME_ROTATE = False to ENABLE_TIME_ROTATE = True. Superset only attaches the time-rotating file handler when that flag is enabled, so with it set to False nothing is ever written to FILENAME.
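For reference, the corrected block from the question would then read as follows (a sketch; DATA_DIR is assumed to be defined earlier in superset_config.py, as it is in the stock config):
import os
LOG_FORMAT = '%(asctime)s:%(levelname)s:%(name)s:%(message)s'
LOG_LEVEL = 'DEBUG'
ENABLE_TIME_ROTATE = True  # was False; this flag gates the rotating file handler
TIME_ROTATE_LOG_LEVEL = 'DEBUG'
FILENAME = os.path.join(DATA_DIR, 'log', 'superset.log')
ROLLOVER = 'midnight'
INTERVAL = 1
BACKUP_COUNT = 30
Also make sure the DATA_DIR/log directory exists and is writable by the Superset process.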

s3cmd: obfuscate file names (change to random values) on the Amazon S3 side, keeping the original file names locally

Here is my .s3cfg with GPG encryption passphrase and other security settings. Would you recommend any other security hardening?
[default]
access_key = $USERNAME
access_token =
add_encoding_exts =
add_headers =
bucket_location = eu-central-1
ca_certs_file =
cache_file =
check_ssl_certificate = True
check_ssl_hostname = True
cloudfront_host = cloudfront.amazonaws.com
default_mime_type = binary/octet-stream
delay_updates = False
delete_after = False
delete_after_fetch = False
delete_removed = False
dry_run = False
enable_multipart = True
encoding = UTF-8
encrypt = False
expiry_date =
expiry_days =
expiry_prefix =
follow_symlinks = False
force = False
get_continue = False
gpg_command = /usr/local/bin/gpg
gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_passphrase = $PASSPHRASE
guess_mime_type = True
host_base = s3.amazonaws.com
host_bucket = %(bucket)s.s3.amazonaws.com
human_readable_sizes = False
invalidate_default_index_on_cf = False
invalidate_default_index_root_on_cf = True
invalidate_on_cf = False
kms_key =
limitrate = 0
list_md5 = False
log_target_prefix =
long_listing = False
max_delete = -1
mime_type =
multipart_chunk_size_mb = 15
multipart_max_chunks = 10000
preserve_attrs = True
progress_meter = True
proxy_host =
proxy_port = 0
put_continue = False
recursive = False
recv_chunk = 65536
reduced_redundancy = False
requester_pays = False
restore_days = 1
secret_key = $PASSWORD
send_chunk = 65536
server_side_encryption = False
signature_v2 = False
simpledb_host = sdb.amazonaws.com
skip_existing = False
socket_timeout = 300
stats = False
stop_on_error = False
storage_class =
urlencoding_mode = normal
use_https = True
use_mime_magic = True
verbosity = WARNING
website_endpoint = http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
website_error =
website_index = index.html
I use this command to upload/sync my local folder to Amazon S3.
s3cmd -e -v put --recursive --dry-run /Users/$USERNAME/Downloads/ s3://dgtrtrtgth777
INFO: Compiling list of local files...
INFO: Running stat() and reading/calculating MD5 values on 15957 files, this may take some time...
INFO: [1000/15957]
INFO: [2000/15957]
INFO: [3000/15957]
INFO: [4000/15957]
INFO: [5000/15957]
INFO: [6000/15957]
INFO: [7000/15957]
INFO: [8000/15957]
INFO: [9000/15957]
INFO: [10000/15957]
INFO: [11000/15957]
INFO: [12000/15957]
INFO: [13000/15957]
INFO: [14000/15957]
INFO: [15000/15957]
I tested the encryption with the Transmit GUI S3 client and didn't get plain text files.
But I can still see the original filenames. I would like to change each filename to a random value while keeping the original filename locally (a mapping?). How can I do this?
What are the downsides of doing so if I need to restore the files? I use Amazon S3 only as a backup, in addition to my Time Machine backup.
If you use "random" names, then it isn't really sync anymore.
If your only record of the filename mapping is local, it will be impossible to restore your backup in case of a local failure.
If you don't need all versions of your files, I'd suggest putting everything in a (possibly encrypted) compressed tarball before uploading it.
Otherwise, you will have to write a small script that lists all files and does an individual s3cmd put to a random destination for each one, appending each mapping to a log file, which should be the first thing you s3cmd put to your server. I don't recommend this for something as crucial as storing your backups.
A skeleton showing how this could work:
# Save one upload command per file in backupX.sh, where X is the version number
# (-type f skips directories; srand() seeds awk's RNG so names differ between runs)
find /Users/$USERNAME/Downloads/ -type f | awk 'BEGIN{srand()}{print "s3cmd -e -v put \""$0"\" s3://dgtrshitcrapola/"int(rand()*1000000)}' > backupX.sh
# Upload the mapping file first
s3cmd -e -v put backupX.sh s3://dgtrshitcrapola/
# Upload the actual files
sh backupX.sh
# Add cleanup code here
However, you will need to handle filename collisions, failed uploads, versioning clashes, ... why not use an existing tool that backs up to S3?
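For completeness, the tarball route suggested above could look like this (the archive name is an assumption; -e applies the GPG settings from .s3cfg, and the bucket is the one from the question):
# Bundle everything into one compressed archive; the object name then leaks nothing about the contents
tar czf backup-$(date +%Y%m%d).tar.gz -C /Users/$USERNAME Downloads
# Upload with client-side GPG encryption, as configured in .s3cfg
s3cmd -e -v put backup-$(date +%Y%m%d).tar.gz s3://dgtrtrtgth777/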