I have the following log configuration in my superset_config.py file:
LOG_FORMAT = '%(asctime)s:%(levelname)s:%(name)s:%(message)s'
LOG_LEVEL = 'DEBUG'
ENABLE_TIME_ROTATE = False
TIME_ROTATE_LOG_LEVEL = 'DEBUG'
FILENAME = os.path.join(DATA_DIR, 'log', 'superset.log')
ROLLOVER = 'midnight'
INTERVAL = 1
BACKUP_COUNT = 30
But no logs are generated in my DATA_DIR/log/superset.log file. Is there any configuration missing?
Change ENABLE_TIME_ROTATE = False to ENABLE_TIME_ROTATE = True. The time-rotating file handler that writes to FILENAME is only set up when that flag is enabled, so with it set to False the remaining file-logging settings have no effect.
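With that one change, the rest of your settings can stay as they are. A minimal sketch of the corrected block (assuming DATA_DIR is already defined in your superset_config.py):

import os

LOG_FORMAT = '%(asctime)s:%(levelname)s:%(name)s:%(message)s'
LOG_LEVEL = 'DEBUG'

# The rotating file handler that writes to FILENAME is only attached
# when ENABLE_TIME_ROTATE is True
ENABLE_TIME_ROTATE = True
TIME_ROTATE_LOG_LEVEL = 'DEBUG'
FILENAME = os.path.join(DATA_DIR, 'log', 'superset.log')
ROLLOVER = 'midnight'  # rotate once per day, at midnight
INTERVAL = 1
BACKUP_COUNT = 30      # keep the last 30 rotated log files

Restart Superset after editing the file, and make sure the log directory exists and is writable by the Superset process.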
Newbie at Helm here. I'm trying to add a static .toml config file to a Helm chart, but the content of the deployed manifest bothers me. Here's the tree of my chart:
.
├── Chart.yaml
├── telegraf.conf
└── templates
└── configmap.yaml
configmap.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: telegraf-api
data:
{{ (.Files.Glob "telegraf.conf").AsConfig | indent 4 }}
telegraf.conf
[global_tags]
[agent]
  interval = "10s"
  round_interval = true
  metric_batch_size = 1000
  metric_buffer_limit = 10000
  collection_jitter = "0s"
  flush_interval = "10s"
  flush_jitter = "0s"
  precision = "0s"
  hostname = ""
  omit_hostname = false
[[inputs.cpu]]
  percpu = true
  totalcpu = true
  collect_cpu_time = false
  report_active = false
  core_tags = false
[[inputs.disk]]
  ignore_fs = ["tmpfs", "devtmpfs", "devfs", "iso9660", "overlay", "aufs", "squashfs"]
[[inputs.diskio]]
[[inputs.kernel]]
[[inputs.mem]]
[[inputs.processes]]
[[inputs.swap]]
[[inputs.system]]
I can install the chart without any problems, but the problem occurs when I inspect the deployed manifest (it has a lot of backslashes, like this):
$ helm get manifest telegraf
---
# Source: telegraf/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: telegraf-api
data:
telegraf.conf: "[global_tags]\n[agent]\n interval = \"10s\"\n round_interval = true\n
\ metric_batch_size = 1000\n metric_buffer_limit = 10000\n collection_jitter =
\"0s\"\n flush_interval = \"10s\"\n flush_jitter = \"0s\"\n precision = \"0s\"\n
\ hostname = \"\"\n omit_hostname = false\n[[inputs.cpu]]\n percpu = true\n totalcpu
= true\n collect_cpu_time = false\n report_active = false\n core_tags = false\n[[inputs.disk]]\n
\ ignore_fs = [\"tmpfs\", \"devtmpfs\", \"devfs\", \"iso9660\", \"overlay\", \"aufs\",
\"squashfs\"]\n[[inputs.diskio]]\n[[inputs.kernel]]\n[[inputs.mem]]\n[[inputs.processes]]\n[[inputs.swap]]\n[[inputs.system]]\n
\ "
Does anyone have any thoughts on how to deploy it so the config doesn't get messed up?
This should not cause any issues with the configuration. What you are seeing is just YAML's double-quoted scalar style: the backslashes escape the embedded quotes, and a trailing backslash means the string continues on the next line. When the ConfigMap is mounted into a pod, the telegraf.conf key resolves to the original file content, with the escaping undone.
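If you want to double-check, you can read the key back after deploying (assuming kubectl access to the cluster; the backslash in the jsonpath escapes the dot in the key name):

# Prints the stored telegraf.conf exactly as it will appear inside the pod
kubectl get configmap telegraf-api -o jsonpath='{.data.telegraf\.conf}'

The output should be your original TOML, with none of the backslashes from the manifest view.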
For testing purposes I need to see all the requests that come to my uwsgi application (and the Django app behind it), but I only see 4xx and 5xx responses. Here is my uwsgi.ini config:
[uwsgi]
http-socket = :8080
;chdir = /code/
module = app.wsgi:application
master = true
processes = 2
logto = ./uwsgi.log
logdate = %%d/%%m/%%Y %%H:%%M:%%S
vacuum = true
buffer-size = 65535
stats = 0.0.0.0:1717
stats-http = true
max-requests = 5000
memory-report = true
;touch-reload = /code/config/touch_for_uwsgi_reload
pidfile = /tmp/project-master.pid
enable-threads = true
single-interpreter = true
log-format = [%(ctime)] [%(proto) %(status)] %(method) %(host)%(uri) => %(rsize) bytes in %(msecs) msecs, referer - "%(referer)", user agent - "%(uagent)"
disable-logging = true ; Disable built-in logging
log-4xx = true ; but log 4xx's anyway
log-5xx = true ; and 5xx's
log-3xx = true
log-2xx = true
ignore-sigpipe = true
ignore-write-errors = true
disable-write-exception = true
;chown-socket=www-data:www-data
Django itself produces 2xx logs perfectly in the same env (the uwsgi logs are written to the ./uwsgi.log file, so they are not visible here).
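A hedged guess rather than a confirmed fix: in this config, disable-logging = true turns off per-request logging entirely, and log-4xx / log-5xx then selectively re-enable it for error responses only, which matches exactly what you are seeing (I don't believe log-2xx or log-3xx are recognized uwsgi options, so they are most likely ignored). A minimal change to log every request again:

; log all requests; log-4xx/log-5xx become redundant once this is off
disable-logging = false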
I need a sample configuration for keeping JNDI datasources in WSO2 EI so that those names can be reused in the DB Report mediator.
Thanks,
Ajay Babu Maguluri.
Find the deployment.toml file; it is the single source from which the other config files are templated.
An example configuration creating a JDBC datasource inside deployment.toml with the name jndi/MY_DATA looks like this:
[[datasource]]
id = "MY_DATA" # "WSO2_COORDINATION_DB"
url = "jdbc:mysql://localhost:3306/mydata"
username = "root"
password = "root"
driver = "com.mysql.jdbc.Driver"
Optionally, you can specify other JDBC pool properties just after the [[datasource]] section:
[datasource.pool_options]
maxActive = 10
maxWait = 60000
minIdle = 0
testOnBorrow = true
defaultAutoCommit = true
validationInterval = 30000
testWhileIdle = true
timeBetweenEvictionRunsMillis = 5000
minEvictableIdleTimeMillis = 60000
removeAbandoned = true
logAbandoned = true
removeAbandonedTimeout = 180
validationQuery = "SELECT 1"
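The DB Report mediator can then reference the datasource by name. A sketch (the dsName value assumes the JNDI name above is exposed as-is, and the table and SQL are placeholders):

<dbreport>
    <connection>
        <pool>
            <dsName>jndi/MY_DATA</dsName>
        </pool>
    </connection>
    <statement>
        <sql>INSERT INTO my_table (message_id) VALUES (?)</sql>
        <parameter expression="get-property('MessageID')" type="VARCHAR"/>
    </statement>
</dbreport>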
I have a config file with more than 3,000 lines, where I need to change only a few parameters.
Since the config file is huge, I am unable to use the template module.
I need help replacing the parameters below.
gateway-config {
  enable = true
  host-name = "car-cache"
  port = 202
  batch-size = 100
  patterns = ["^((test))"]
  type = LINE
  prefix = "stats."${auth}".service"
}
k9-config {
  enable = true
  send-enable = false
  host-name = ${auth}
  connection-timeout = 120000
  read-timeout = 60000
  proxy = ""
  project = "Networking"
  period = 120
}
I need to change enable = false to enable = true in only one of these blocks, but when I use the replace module, every occurrence of enable = false in the config file is replaced.
You can actually use the replace module with the after and before parameters:
- name: Replace between the expressions (requires Ansible >= 2.4)
  replace:
    path: /path/to/your/file
    after: 'gateway-config {'
    before: '}'
    regexp: '^(\s*enable = )false$'
    replace: '\g<1>true'
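To preview the change before applying it, a dry run with diff output works (your-playbook.yml is a placeholder for wherever you keep this task):

ansible-playbook your-playbook.yml --check --diff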
Alternatively, you can use the replace module with just the after parameter:
---
- name: Replace variable
  replace:
    path: "/etc/repli.conf"
    after: "hite-config {"
    regexp: "enable = false"
    replace: "enable = true"
Here is my .s3cfg with a GPG encryption passphrase and other security settings. Would you recommend any other security hardening?
[default]
access_key = $USERNAME
access_token =
add_encoding_exts =
add_headers =
bucket_location = eu-central-1
ca_certs_file =
cache_file =
check_ssl_certificate = True
check_ssl_hostname = True
cloudfront_host = cloudfront.amazonaws.com
default_mime_type = binary/octet-stream
delay_updates = False
delete_after = False
delete_after_fetch = False
delete_removed = False
dry_run = False
enable_multipart = True
encoding = UTF-8
encrypt = False
expiry_date =
expiry_days =
expiry_prefix =
follow_symlinks = False
force = False
get_continue = False
gpg_command = /usr/local/bin/gpg
gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_passphrase = $PASSPHRASE
guess_mime_type = True
host_base = s3.amazonaws.com
host_bucket = %(bucket)s.s3.amazonaws.com
human_readable_sizes = False
invalidate_default_index_on_cf = False
invalidate_default_index_root_on_cf = True
invalidate_on_cf = False
kms_key =
limitrate = 0
list_md5 = False
log_target_prefix =
long_listing = False
max_delete = -1
mime_type =
multipart_chunk_size_mb = 15
multipart_max_chunks = 10000
preserve_attrs = True
progress_meter = True
proxy_host =
proxy_port = 0
put_continue = False
recursive = False
recv_chunk = 65536
reduced_redundancy = False
requester_pays = False
restore_days = 1
secret_key = $PASSWORD
send_chunk = 65536
server_side_encryption = False
signature_v2 = False
simpledb_host = sdb.amazonaws.com
skip_existing = False
socket_timeout = 300
stats = False
stop_on_error = False
storage_class =
urlencoding_mode = normal
use_https = True
use_mime_magic = True
verbosity = WARNING
website_endpoint = http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
website_error =
website_index = index.html
I use this command to upload/sync my local folder to Amazon S3.
s3cmd -e -v put --recursive --dry-run /Users/$USERNAME/Downloads/ s3://dgtrtrtgth777
INFO: Compiling list of local files...
INFO: Running stat() and reading/calculating MD5 values on 15957 files, this may take some time...
INFO: [1000/15957]
INFO: [2000/15957]
INFO: [3000/15957]
INFO: [4000/15957]
INFO: [5000/15957]
INFO: [6000/15957]
INFO: [7000/15957]
INFO: [8000/15957]
INFO: [9000/15957]
INFO: [10000/15957]
INFO: [11000/15957]
INFO: [12000/15957]
INFO: [13000/15957]
INFO: [14000/15957]
INFO: [15000/15957]
I tested the encryption with the Transmit GUI S3 client and didn't get plain-text files.
But I can see the original filenames. I want to change each filename to a random value but keep the original filename locally (a mapping?). How can I do this?
What are the downsides of doing so if I need to restore the files? I use Amazon S3 only as a backup, in addition to my Time Machine backup.
If you use "random" names, then it isn't really sync.
If your only record of the filename mapping is local, it will be impossible to restore your backup in case of a local failure.
If you don't need all versions of your files, I'd suggest putting everything in a (possibly encrypted) compressed tarball before uploading it, for example:
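A sketch of that approach (the archive name is arbitrary; gpg -c will prompt for a symmetric passphrase):

# Pack and encrypt the whole folder into a single archive
ARCHIVE="backup-$(date +%F).tar.gz.gpg"
tar czf - "/Users/$USERNAME/Downloads" | gpg -c -o "$ARCHIVE"
# Upload it; only the archive name is visible in the bucket
s3cmd put "$ARCHIVE" s3://dgtrtrtgth777/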
Otherwise, you will have to write a small script that lists all files and does an individual s3cmd put to a random destination for each one, appending each mapping to a log file, which should be the first thing you s3cmd put to your bucket. I don't recommend this for something as crucial as storing your backups.
A skeleton showing how this could work:
# Write one upload command per file into backupX.sh, where X is the version number
find "/Users/$USERNAME/Downloads/" -type f | awk 'BEGIN { srand() } { printf "s3cmd -e -v put \"%s\" s3://dgtrshitcrapola/%d\n", $0, rand() * 1000000 }' > backupX.sh
# Upload the mapping file
s3cmd -e -v put backupX.sh s3://dgtrshitcrapola/
# Upload the actual files
sh backupX.sh
# Add cleanup code here
However, you will need to handle filename collisions, failed uploads, versioning clashes, and so on. Why not use an existing tool that backs up to S3?