Where can I find request and response logs for Spark?

I have just started using the Spark framework and am experimenting with a local server on macOS.
The documentation says that to enable debug logs I simply need to add a dependency.
I've added the dependency and can observe logs in the console.
The question is: where are the log files located?

If you are following the Spark example here, you are only enabling slf4j-simple logging. By default, this only logs items to the console. You can change this programmatically (Class information here) or by adding a properties file to the classpath, as seen in this discussion. Beyond this you will likely want to implement a logging framework like log4j or logback, as slf4j is designed to act as a facade over an existing logging implementation.
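For instance, slf4j-simple can write to a file instead of the console via a simplelogger.properties file on the classpath. This is only a sketch; the file name spark.log and the debug level are example choices, not defaults:
# src/main/resources/simplelogger.properties
# Send output to a file instead of System.err
org.slf4j.simpleLogger.logFile=spark.log
# Default level for all loggers
org.slf4j.simpleLogger.defaultLogLevel=debug
# Prefix each line with a timestamp
org.slf4j.simpleLogger.showDateTime=true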

Related

Google Cloud Logging assigns ERROR severity to all Python logging.info calls

It's my first time using Google Cloud Platform, so please bear with me!
I've built a scheduled workflow that simply runs a Batch job. The job runs Python code and uses the standard logging library for logging. When the job is executed, I can correctly see all the entries in Cloud Logging, but all the entries have severity ERROR although they're all INFO.
One possible reason I've been thinking about is that I haven't used the setup_logging function as described in the documentation here. The thing is, I didn't want to run the Cloud Logging setup when I run the code locally.
The questions I have are:
why does logging "work" (in the sense that logs end up in Cloud Logging) even if I did not use the setup_logging function? What is its real role?
why do my INFO entries show up with ERROR severity?
if I include that snippet and that snippet solves this issue, should I include an if statement in my code that detects if I am running the code locally and skips that Cloud Logging setup step?
According to the documentation, you have to run a setup step for logs to be sent to Cloud Logging correctly.
This setup then allows you to use the Python standard logging library.
Once installed, this library includes logging handlers to connect
Python's standard logging module to Logging, as well as an API client
library to access Cloud Logging manually.
# Imports the Cloud Logging client library
import google.cloud.logging
# Instantiates a client
client = google.cloud.logging.Client()
# Retrieves a Cloud Logging handler based on the environment
# you're running in and integrates the handler with the
# Python logging module. By default this captures all logs
# at INFO level and higher
client.setup_logging()
Then you can use the Python standard library to add logs to Cloud Logging.
# Imports Python standard library logging
import logging
# The data to log
text = "Hello, world!"
# Emits the data using the standard logging module
logging.warning(text)
why does logging "work" (in the sense that logs end up in Cloud Logging) even if I did not use the setup_logging function? What is its real role?
Without the setup, your records still reach Cloud Logging because the Batch environment captures the job's console output and forwards it, but each line arrives as plain captured text rather than as a structured entry. The real role of setup_logging is to attach a handler that sends every record through the Cloud Logging API with its proper severity and metadata, so it's better to use the setup.
why do my INFO entries show up with ERROR severity?
Same reason as above: Python's standard logging writes to stderr by default, and console output captured from stderr is generally labelled with ERROR severity regardless of each record's own level.
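A quick way to see this locally (plain Python, nothing GCP-specific) is to check which stream the default handler targets:
# Demonstrates that the stdlib logger writes to stderr by default
import logging
import sys

logging.basicConfig(level=logging.INFO)  # attaches a StreamHandler
handler = logging.getLogger().handlers[0]
print(handler.stream is sys.stderr)  # True: INFO records go to stderr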
if I include that snippet and that snippet solves this issue, should I include an if statement in my code that detects if I am running the code locally and skips that Cloud Logging setup step?
I don't think you need an if statement when you run the code locally. In that case, the logs should still be printed to the console even if the setup is present.
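That said, if you do prefer to skip the Cloud setup during local runs, a small guard is enough. This is only a sketch; RUNNING_IN_CLOUD is a made-up environment variable that you would have to set yourself in the Batch job definition:
# Hypothetical flag: set RUNNING_IN_CLOUD=1 in the Batch job's environment
import logging
import os

if os.environ.get("RUNNING_IN_CLOUD") == "1":
    import google.cloud.logging

    # Route standard-library records through the Cloud Logging API
    client = google.cloud.logging.Client()
    client.setup_logging()
else:
    # Plain console logging for local development
    logging.basicConfig(level=logging.INFO)

logging.info("works in both environments")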

GCP: Is there a way to have both the OPS Agent and the Legacy Agent installed?

My current setup uses the legacy logging agent (google-fluentd), so all my logging is configured for fluentd. I am supposed to switch to the Ops Agent, which uses Fluent Bit and therefore a different logging configuration.
Is there any way to have both google-fluentd and the Ops Agent running, or is there an easy way to move the fluentd config to Fluent Bit? Having both would allow a quick switch-over once the configuration has been changed, but at the moment it is not possible as the agents conflict with each other.
Currently the two logging agents can't coexist: the Ops Agent strictly prevents its logging and monitoring agents from starting if the legacy agents are installed.
There is already an ongoing feature request for a workaround similar to your use case. I recommend that you upvote and follow the case below for future updates:
https://issuetracker.google.com/issues/218671982
This GitHub issue is also related to this use case.

Is it possible to configure spdlog from a file like log4j or log4cxx?

I have experience with log4j, and have used a port of it called log4net in C#. In both cases I find it very useful to configure loggers at run time by means of logger config files. For example, you can increase the log level of a particular subsystem without a recompile.
I am searching for a logging framework for C++, and am currently checking log4cxx and spdlog.
I see that log4cxx can read its configuration from an XML file.
Does this ability to configure at run time exist for spdlog?
There is https://github.com/guangie88/spdlog_setup, which lets you configure spdlog from a TOML file at run time.
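As a rough illustration (the sink and logger names below are placeholders, and the exact schema is documented in that project's README), you write a TOML file and load it at startup with spdlog_setup::from_file("log_conf.toml"):
# log_conf.toml
[[sink]]
name = "console"
type = "stdout_sink_mt"

[[sink]]
name = "file"
type = "basic_file_sink_mt"
filename = "app.log"

[[logger]]
name = "root"
sinks = ["console", "file"]
level = "info"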

Is Apache Camel available in AmazonMQ?

Disclaimer: I did post this on Server Fault, first, and the replies there were:
I'm voting to close this question as off-topic because we are not AWS support.
This question does not appear to be about server, networking, or related infrastructure administration within the scope defined in the help center.
I think this is a valid question, and even first-party support can be found on the Stack Exchange network. I think issues/limitations are easier to find on SO than on the multitude of AWS 'documentation'. This is why I'm posting this question on SO.
The issue/question
From what I've found on the AWS documentation and the limited subset of Apache ActiveMQ configuration elements, I haven't found how to use the Camel plugin that is supposed to be built into newer versions of ActiveMQ. I figure this is left out of the AmazonMQ version, or is blocked by the configuration limitations.
This is the list of available configuration elements. The configuration document's root element is <broker>, and it looks like Camel is supposed to be configured as a sibling to that node in a traditional ActiveMQ config file.
Camel is not currently supported running within the Amazon MQ broker itself; however, here is a blog post showing how to use Camel with Amazon MQ:
https://aws.amazon.com/blogs/compute/integrating-amazon-mq-with-other-aws-services-via-apache-camel
The Camel "plugin" is actually simply an imported Spring configuration file that fires up Camel. AmazonMQ does not, as to my understanding, permit imported configuration files hence running an embedded Camel is not possible.

How do I make Topshelf log exceptions to Elmah

Topshelf is a great little library for wrapping up a Windows service. I'm trying to find out if I can configure it to log exceptions to Elmah. From what I can tell, it has an internal handler for unhandled exceptions.
http://docs.topshelf-project.com/en/latest/configuration/logging.html
Would it be a case of writing a new logger and submitting a pull request?
Yes. Take a look at the existing logging implementations in the repository to get started. No one has asked for ELMAH before, but I don't see why you'd be the only person interested.
You can also just create it in a new repository; it will be a new NuGet package anyway.