Set/Change list properties in application.properties in Akka

I want to use slf4j for logging, based on the Akka logging documentation. According to it, these settings should be changed in application.conf:
akka {
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"
}
I'm using application.properties instead of application.conf:
akka.loggers[0]=akka.event.slf4j.Slf4jLogger
akka.logging-filter=akka.event.slf4j.Slf4jLoggingFilter
But the above configuration does not change the akka.loggers value (it is still the default: akka.event.Logging$DefaultLogger).
Printing the full configuration shows:
"loggers" : [
# reference.conf # jar:file:/home/user/.m2/repository/com/typesafe/akka/akka-actor_2.12/2.5.18/akka-actor_2.12-2.5.18.jar!/reference.conf: 17
"akka.event.Logging$DefaultLogger"
],
# application.properties # file:/home/user/workspace/x-platform/target/test-classes/application.properties
"loggers[0]" : "akka.event.slf4j.Slf4jLogger",
# application.properties # file:/home/user/workspace/x-platform/target/test-classes/application.properties
"logging-filter" : "akka.event.slf4j.Slf4jLoggingFilter",
So my question is: how can I set/change the value of a list property in application.properties?
I'm using akka 2.5.18 with Java.

Have you tried parsing the configuration with ConfigFactory.parseString?
val customConf = ConfigFactory.parseString("""
  akka {
    loggers = ["akka.event.slf4j.Slf4jLogger"]
    logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"
  }
""")
val system = ActorSystem("MySystem", ConfigFactory.load(customConf))
Or combine the custom config with the usual one:
Config myConfig = ConfigFactory.parseString("akka.loggers = [\"akka.event.slf4j.Slf4jLogger\"]");
Config regularConfig = ConfigFactory.load();
Config combined = myConfig.withFallback(regularConfig);
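Since the question mentions Java, here is a minimal Java sketch of the same withFallback approach (the class and actor-system names are placeholders; it assumes akka-slf4j and an SLF4J backend such as Logback are on the classpath):
import akka.actor.ActorSystem;
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

public class LoggingConfigExample {
    public static void main(String[] args) {
        // Note the akka. prefix and the double-quoted HOCON strings.
        Config overrides = ConfigFactory.parseString(
            "akka.loggers = [\"akka.event.slf4j.Slf4jLogger\"]\n"
            + "akka.logging-filter = \"akka.event.slf4j.Slf4jLoggingFilter\"");
        // The overrides take precedence over application.properties / application.conf.
        Config combined = overrides.withFallback(ConfigFactory.load());
        ActorSystem system = ActorSystem.create("MySystem", combined);
        // Should print [akka.event.slf4j.Slf4jLogger]
        System.out.println(system.settings().config().getStringList("akka.loggers"));
        system.terminate();
    }
}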

Related

How to specify the "Update the instance configuration" setting in the Terraform Managed Instance Group module

I am using the Terraform Managed Instance Group template to create a Managed Instance Group. This is what the MIG configuration looks like:
module "mig" {
source = "terraform-google-modules/vm/google//modules/mig"
version = "8.0.0"
project_id = var.project_id
target_size = var.target_number_of_instance
hostname = "mig-sample"
instance_template = module.instance_template.self_link
region = var.region_name
named_ports = [
{
name = "http",
port = 8000
}
]
# update_policy = [{
# type = "PROACTIVE"
# instance_redistribution_type = "PROACTIVE"
# minimal_action = "REPLACE"
# most_disruptive_allowed_action = "REPLACE"
# max_surge_fixed = 0
# max_surge_percent = null
# max_unavailable_percent = null
# max_unavailable_fixed = 4
# min_ready_sec = 50
# replacement_method = "RECREATE"
# }]
}
In the GCP console, one of the options for a Managed Instance Group is the VM instance lifecycle, which has two options:
Keep the same instance configuration
Update the instance configuration
When I deploy my Terraform template, it always selects option 1 (Keep the same instance configuration).
How do I tell the Terraform Managed Instance Group template to select option 2 (Update the instance configuration)?
I tried specifying the update_policy attribute, but that did not work. You can see the update_policy config in the commented section above.

Send a cloud-init script to GCP with Terraform

How do I send a cloud-init script to a GCP instance using Terraform?
The documentation is very sparse on this topic.
You need the following:
A cloud-init file (say 'conf.yaml')
#cloud-config
# Create an empty file on the system
write_files:
- path: /root/CLOUD_INIT_WAS_HERE
A cloudinit_config data source
gzip and base64_encode must be set to false (they are true by default).
data "cloudinit_config" "conf" {
gzip = false
base64_encode = false
part {
content_type = "text/cloud-config"
content = file("conf.yaml")
filename = "conf.yaml"
}
}
A metadata section under the google_compute_instance resource
metadata = {
  user-data = "${data.cloudinit_config.conf.rendered}"
}
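For reference, a minimal sketch of how that metadata block fits into a full instance resource (the resource name, machine type, zone, and image below are placeholders, not from the question):
resource "google_compute_instance" "example" {
  name         = "cloud-init-example"   # placeholder
  machine_type = "e2-small"             # placeholder
  zone         = "us-central1-a"        # placeholder

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"  # placeholder image
    }
  }

  network_interface {
    network = "default"
  }

  # Pass the rendered (non-gzipped, non-base64) cloud-init document as user-data.
  metadata = {
    user-data = data.cloudinit_config.conf.rendered
  }
}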

Log multiple uwsgi loggers to stdout

I'm running uwsgi inside a Docker container for a Django application. I want the uwsgi, request, and django logs to be formatted differently, so I created the following configuration in my .ini file.
[uwsgi]
logger = main file:/dev/stdout
logger = django file:/dev/stdout
logger = req file:/dev/stdout
log-format = "method": "%(method)", "uri": "%(uri)", "proto": "%(proto)", "status": %(status), "referer": "%(referer)", "user_agent": "%(uagent)", "remote_addr": "%(addr)", "http_host": "%(host)", "pid": %(pid), "worker_id": %(wid), "core": %(core), "async_switches": %(switches), "io_errors": %(ioerr), "rq_size": %(cl), "rs_time_ms": %(msecs), "rs_size": %(size), "rs_header_size": %(hsize), "rs_header_count": %(headers), "event": "uwsgi_request"
log-route = main ^((?!django).)*$
log-route = django django
log-route = req uwsgi_request
log-encoder = format:django ${msg}
log-encoder = nl:django
log-encoder = json:main {"timestamp": "${strftime:%%Y-%%m-%%dT%%H:%%M:%%S+00:00Z}", "message":"${msg}","severity": "INFO"}
log-encoder = nl:main
log-encoder = format:req {"timestamp": "${strftime:%%Y-%%m-%%dT%%H:%%M:%%S}", ${msg}}
log-encoder = nl:req
The problem is that now my django and req logs don't show up. I'm guessing that's because multiple loggers want to write to /dev/stdout and can't; I confirmed this by turning off some of the log routes and seeing everything work.
How can I (1) write everything to stdout while (2) formatting my logs differently based on a regex?

com.typesafe.config.ConfigException$Missing: No configuration setting found for key 'akka.stream' when running jar file

I'm trying to run an Akka Streams application, but I get an exception when running it on Linux.
When I run it in the debugger on Windows, it works.
I tried both these commands:
java -jar ./myService.jar -Dconfig.resource=/opt/myservice/conf/application.conf
java -jar ./myService.jar -Dconfig.file=/opt/myService/conf/application.conf
But I get the following exception:
No configuration setting found for key 'akka.stream'
My application.conf file:
akka {
  event-handlers = ["akka.event.slf4j.Slf4jEventHandler"]
  loglevel = "DEBUG"
  actor {
    debug {
      # enable function of LoggingReceive, which is to log any received message
      # at DEBUG level
      receive = on
    }
  }
  stream {
    # Default materializer settings
    materializer {
      max-input-buffer-size = 16
      dispatcher = ""
      subscription-timeout {
        mode = cancel
        timeout = 5s
      }
      # Enable additional troubleshooting logging at DEBUG log level
      debug-logging = off
      # Maximum number of elements emitted in batch if downstream signals large demand
      output-burst-limit = 1000
      auto-fusing = on
      # Those stream elements which have explicit buffers (like mapAsync, mapAsyncUnordered,
      # buffer, flatMapMerge, Source.actorRef, Source.queue, etc.) will preallocate a fixed
      # buffer upon stream materialization if the requested buffer size is less than this
      max-fixed-buffer-size = 1000000000
      sync-processing-limit = 1000
      debug {
        fuzzing-mode = off
      }
    }
    blocking-io-dispatcher = "akka.stream.default-blocking-io-dispatcher"
    default-blocking-io-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      throughput = 1
      thread-pool-executor {
        fixed-pool-size = 16
      }
    }
  }
  # configure overrides to ssl-configuration here (to be used by akka-streams,
  # and akka-http – i.e. when serving https connections)
  ssl-config {
    protocol = "TLSv1.2"
  }
}
ssl-config {
  logger = "com.typesafe.sslconfig.akka.util.AkkaLoggerBridge"
}
I've added:
println(system.settings.config)
but I get a result without the stream section.
Can you assist?
The syntax for the java command line is:
java [options] -jar filename [args]
This ordering matters: anything after the -jar filename is passed to your application as a program argument rather than to the JVM, so you must set options like -Dconfig.file before -jar.
So in your case:
java -Dconfig.file=/opt/myService/conf/application.conf -jar ./myService.jar
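To double-check that the file is picked up once the options are ordered correctly, a small standalone sketch (hypothetical class name) that resolves the same key the materializer needs, run with the same -Dconfig.file option:
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

public class ConfigCheck {
    public static void main(String[] args) {
        // ConfigFactory.load() honors -Dconfig.file / -Dconfig.resource;
        // getConfig throws ConfigException.Missing if akka.stream cannot be resolved.
        Config streamConfig = ConfigFactory.load().getConfig("akka.stream");
        System.out.println(streamConfig.root().render());
    }
}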

NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation

I'm trying to set up a simple control+compute deployment on a single Ubuntu node using DevStack. This is the command that fails:
neutron net-create --tenant-id 6fad6bf2ae9c49d3b19958abd59f3ce0 private-net
And the error is:
NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.
Here's my relevant ml2 config:
[ml2]
tenant_network_types = flat
extension_drivers = port_security
type_drivers = flat
mechanism_drivers = openvswitch
[ml2_type_flat]
flat_networks = public-physical-net,private-physical-net,dpdk-physical-net
[ml2_type_vlan]
network_vlan_ranges = private-physical-net
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ml2_type_vxlan]
vni_ranges = 1001:2000
[ml2_type_geneve]
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[agent]
tunnel_types =
root_helper_daemon = sudo /usr/local/bin/neutron-rootwrap-daemon /etc/neutron/rootwrap.conf
root_helper = sudo /usr/local/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
[ovs]
datapath_type = system
bridge_mappings = public:br-ex
This is the OVS configuration:
Bridge br-int
    fail_mode: secure
    Port br-int
        Interface br-int
            type: internal
    Port int-br-ex
        Interface int-br-ex
            type: patch
            options: {peer=phy-br-ex}
Bridge br-ex
    Port br-ex
        Interface br-ex
            type: internal
    Port "p255p1"
        Interface "p255p1"
    Port phy-br-ex
        Interface phy-br-ex
            type: patch
            options: {peer=int-br-ex}
ovs_version: "2.0.2"
The relevant section of local.conf:
# Do not use Nova-Network
disable_service n-net
# Enable Neutron
ENABLED_SERVICES+=,q-svc,q-dhcp,q-meta,q-agt,q-l3
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
## Neutron options
FIXED_RANGE="10.0.123.0/24"
NETWORK_GATEWAY=10.0.123.1 ##MY
FLOATING_RANGE="10.0.0.0/22"
Q_FLOATING_ALLOCATION_POOL=start=10.0.1.167,end=10.0.1.169
PUBLIC_NETWORK_GATEWAY="10.0.0.205"
Q_USE_SECGROUP=True
Q_L3_ENABLED=True
PUBLIC_INTERFACE=p255p1
Q_USE_PROVIDERNET_FOR_PUBLIC=True
OVS_PHYSICAL_BRIDGE=br-ex
PUBLIC_BRIDGE=br-ex
OVS_BRIDGE_MAPPINGS=public-physical-net:br-ex
Q_ML2_PLUGIN_TYPE_DRIVERS=flat
Q_ML2_TENANT_NETWORK_TYPE=flat
ENABLE_TENANT_VLANS=False
ENABLE_TENANT_TUNNELS=False
PUBLIC_PHYSICAL_NETWORK=public-physical-net
PHYSICAL_NETWORK=private-physical-net
PUBLIC_NETWORK_NAME=public-net
PRIVATE_NETWORK_NAME=private-net
Q_ML2_PLUGIN_FLAT_TYPE_OPTIONS="flat_networks=public-physical-net,dpdk-physical-net,private-physical-net" # CH did not exist