How to extract logs from VRLI to Fluentd (VMware)

I'm looking for a way to extract logs from VRLI (vRealize Log Insight) into Fluentd: either Fluentd pulling from VRLI, or VRLI somehow pushing to Fluentd.
How can I do this?

I found that VRLI supports event forwarding with the Ingestion API protocol option.
After defining this event forwarder toward Fluentd, with the configuration below I started receiving the relevant events in my Fluentd instance:
<source>
  @type http
  port 24231
  bind 0.0.0.0
</source>
<match api.v1.messages.ingest.**>
  @type stdout
</match>
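To sanity-check the pipeline end to end, you can emulate what the VRLI event forwarder sends. This is a sketch based on the public Log Insight Ingestion API shape (a POST to /api/v1/messages/ingest/<agent-id> with a JSON "messages" array); the agent id, message text, and host/port pairing with the <source> above are illustrative assumptions, not taken from a real VRLI deployment.

```python
import json
from urllib import request

def build_ingest_request(host, port, agent_id, text):
    """Build a VRLI-Ingestion-API-style event POST aimed at the fluentd
    http source above. The path segments are what end up matched by the
    api.v1.messages.ingest.** pattern."""
    url = f"http://{host}:{port}/api/v1/messages/ingest/{agent_id}"
    body = json.dumps({"messages": [{"text": text, "fields": []}]}).encode()
    return request.Request(url, data=body,
                           headers={"Content-Type": "application/json"})

req = build_ingest_request("localhost", 24231, "test-agent",
                           "hello from a simulated VRLI forwarder")
print(req.full_url)
# request.urlopen(req)  # uncomment with fluentd listening on port 24231
```

If the event shows up on fluentd's stdout, the forwarder configuration on the VRLI side is the only remaining variable.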

Related

Disable/block the logs sent from fluentd sidecar container to Log Explorer in GCP

Hello. In Google Cloud GKE I have 3 containers inside a pod: the first is the application, the second is the istio-proxy sidecar, and the third is the fluentd sidecar. The scenario is simple: I would like to block/stop the logs being sent from the fluentd container to Logs Explorer (the GCP logging console). In the meantime, I would still like fluentd to store the logs inside the pod, so that I can check them manually (e.g. with kubectl exec). Please let me know if this is possible.
If your container is writing its logs to a specific path, or onto a hostPath file system,
you can use fluentd's tail input to parse that specific application's logs from the path where the application container writes them; the details depend on the application and your architecture.
You can also use exclude_path to exclude paths you don't want collected.
Here is one reference example of a fluentd config:
<source>
  @type tail
  path /var/log/app_name/*.log
  exclude_path ["/var/log/istio-*"]
  tag "kubernetes.*"
  refresh_interval 1s
  read_from_head true
  follow_inodes true
  <parse>
    @type json
    time_format %Y-%m-%dT%H:%M:%S.%NZ
    keep_time_key true
  </parse>
</source>
You can also add a filter as required and push the matching records further down the pipeline:
<filter kubernetes.**>
  @type grep
  <regexp>
    key $.kubernetes.namespace_name
    pattern /^my-namespace$/
  </regexp>
  <regexp>
    key $['kubernetes']['labels']['example.com/collect']
    pattern /^yes$/
  </regexp>
</filter>
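As a mental model of what the grep filter above does: each <regexp> rule must match its field, and a record is kept only when all rules match (logical AND). A small Python sketch of that decision, where tuple paths stand in for fluentd's record_accessor syntax ($.kubernetes.namespace_name etc.); the sample record is made up for illustration:

```python
import re

# Mirrors the two <regexp> blocks above: (path into the record, pattern).
RULES = [
    (("kubernetes", "namespace_name"), re.compile(r"^my-namespace$")),
    (("kubernetes", "labels", "example.com/collect"), re.compile(r"^yes$")),
]

def dig(record, path):
    """Walk nested dicts; return None if any key is missing."""
    cur = record
    for key in path:
        if not isinstance(cur, dict) or key not in cur:
            return None
        cur = cur[key]
    return cur

def keep(record):
    # grep with multiple <regexp> blocks uses AND semantics:
    # every rule must match for the record to pass through.
    return all(isinstance(v := dig(record, path), str) and rx.search(v)
               for path, rx in RULES)

rec = {"kubernetes": {"namespace_name": "my-namespace",
                      "labels": {"example.com/collect": "yes"}},
       "log": "hello"}
print(keep(rec))  # a record from another namespace would be dropped
```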

How to disable JSON format and send only the log message to CloudWatch with Fluent Bit?

I am trying to set up FireLens for my Fargate tasks. I would like to send logs to multiple destinations, CloudWatch and Elasticsearch.
But for CloudWatch only, I want to disable the JSON format and send just the log message as it is.
I have the configuration below for the CloudWatch output:
[OUTPUT]
    Name cloudwatch
    Match *
    auto_create_group true
    log_group_name /aws/ecs/containerinsights/$(ecs_cluster)/application
    log_stream_name $(ecs_task_id)
    region eu-west-1
Currently, logs are arriving like this:
{
  "container_id": "1234567890",
  "container_name": "app",
  "log": "2021/08/10 18:42:49 [notice] 1#1: exit",
  "source": "stderr"
}
I want only the line
2021/08/10 18:42:49 [notice] 1#1: exit
to appear in CloudWatch.
I had a similar issue using just CloudWatch, where everything was wrapped in JSON; I imagine it will be the same when using several targets.
The solution was to add the following line to the output section:
log_key log
This tells Fluent Bit to include only the value of the log key when sending to CloudWatch.
The docs have since been updated to include that line by default in this PR.
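To make the effect concrete, here is a tiny Python illustration using the sample record from the question: without log_key, the whole record is serialized as JSON; with log_key log, only the raw value under the log key is emitted.

```python
import json

record = {
    "container_id": "1234567890",
    "container_name": "app",
    "log": "2021/08/10 18:42:49 [notice] 1#1: exit",
    "source": "stderr",
}

# Default behaviour: the whole record is sent as one JSON event.
print(json.dumps(record))

# With `log_key log`: only the raw value of the "log" key is sent.
print(record["log"])
```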

Multi-queue in DPDK testpmd: "max_rx_queue=1" issue

I need help enabling multi-queue (multiple queues) for a port in DPDK testpmd. When I use command-line options like --rxq=2 or --txq=N, it shows max_rx_queue=1 and I am not able to configure more queues. Please help me configure multiple queues.
One can modify KVM-QEMU, either on the command line or in the domain XML, to reflect the desired number of virtio queues.
XML:
<interface type='vhostuser'>
  <mac address='00:00:00:00:00:01'/>
  <source type='unix' path='/usr/local/var/run/openvswitch/dpdkvhostuser0' mode='client'/>
  <model type='virtio'/>
  <driver queues='2'>
    <host mrg_rxbuf='off'/>
  </driver>
</interface>
P.S.: based on the update in the comments, the VM environment is hosted on VirtualBox. Requesting an update on that.
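For the command-line route mentioned above, a rough sketch of the equivalent QEMU flags (the socket path and MAC mirror the XML example; vectors is conventionally 2*queues+2, so 6 for 2 queues; verify against your QEMU version):

```
-chardev socket,id=char0,path=/usr/local/var/run/openvswitch/dpdkvhostuser0
-netdev type=vhost-user,id=net0,chardev=char0,queues=2
-device virtio-net-pci,netdev=net0,mac=00:00:00:00:00:01,mq=on,vectors=6
```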

How to disable "Received gossip status" log in akka?

I use Akka Cluster, and at runtime I receive logs like this:
Received gossip status from [akka.tcp://test#ip:port], chunk [1]
of [1] containing [WeakUpdatesManagerCoordinatorState,
SeqUpdatesManagerCoordinatorState, EventBusMediatorCoordinatorState,
PresenceManagerCoordinatorState, shardakka-kv-MigrationsCoordinatorState,
GroupProcessorCoordinatorState, SessionCoordinatorState, WebrtcCallCoordinatorState,
SocialManagerCoordinatorState, GroupPresenceManagerCoordinatorState,
UserProcessorCoordinatorState]
How can I disable this log? I can't find any configuration to stop it.
Use this line in your logback configuration file:
<logger name="akka.cluster.ddata.Replicator" level="INFO"/>
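For context: the gossip-status message appears to be emitted at DEBUG level by akka.cluster.ddata.Replicator, so raising that logger to INFO silences it while leaving other debug output alone. A minimal logback.xml sketch around that line (the appender and pattern are illustrative, not required):

```xml
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <!-- Silences "Received gossip status ..." (logged at DEBUG) -->
  <logger name="akka.cluster.ddata.Replicator" level="INFO"/>

  <root level="DEBUG">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
```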

WSO2 ESB - nodes not joining the cluster

I configured an ESB cluster following this documentation:
https://docs.wso2.com/display/ESB500/Clustered+Deployment but I still have a problem.
On each ESB node I get this log
WARN - CarbonEventManagementService CEP started with clustering enabled, but SingleNode configuration given.
instead of this one:
INFO - RpcMembershipRequestHandler Received JOIN message from
Is there a way to get more detail in the logs, to figure out what is wrong?
I can post configuration files if someone can help.
Thanks!
If you are using hostnames for localMemberHost and in the <member> section of axis2.xml, change them to use IPs instead.
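For illustration, the relevant part of the clustering section in axis2.xml would then look roughly like this (the IP addresses and ports are placeholders for your own nodes; the point is that localMemberHost and hostName carry IPs rather than DNS names):

```xml
<clustering enable="true">
  <parameter name="localMemberHost">192.168.1.10</parameter>
  <parameter name="localMemberPort">4000</parameter>
  <members>
    <member>
      <hostName>192.168.1.11</hostName>
      <port>4000</port>
    </member>
  </members>
</clustering>
```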