Spring Boot Logging and Google Cloud Platform Log Viewer

I'm running a Spring Boot application on the Google Cloud Platform and viewing the log files in the Google Cloud Platform Logs Viewer. Before using Spring Boot, when I was using simple servlets, each request would be grouped and all the logging information for that request could be seen by expanding the row. However, when using Spring Boot the requests are no longer grouped and the log entries are just shown line by line. When there are multiple requests, the log entries become very confusing because it isn't possible to view them in a grouped way. I have my logging.properties set up in the same way:
.level = INFO
handlers=java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.level=FINEST
java.util.logging.ConsoleHandler.formatter=java.util.logging.SimpleFormatter
java.util.logging.SimpleFormatter.format = [%1$tc] %4$s: %2$s - %5$s %6$s%n
The Logger is initialised in each class as:
private static final java.util.logging.Logger LOG = java.util.logging.Logger.getLogger(MyClass.class.getName());
And then the logging API is used as:
LOG.info("My Message");
I don't understand why the statements are being logged differently and are no longer grouped, but it must have something to do with the way Spring Boot handles logging?

With its recent runtimes, App Engine has been converging more and more on a container-based approach, more "open", like newer products such as Cloud Run. This changes the way we develop with GAE a little: specific legacy libraries are no longer available (the Search API, for example), and it also changes how logs are managed.
We can reproduce this "nested log" feature with the new Java 11 runtime, but we need to manage it ourselves.
As the official docs mention:
In the Logs Viewer, log entries correlated by the same trace can be viewed in a "parent-child" format.
This means that if we retrieve the trace identifier received in the X-Cloud-Trace-Context HTTP header of our request, we can then use it as the trace identifier attribute when adding a new LogEntry.
This can be done by using the Stackdriver Logging client libraries.
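A minimal sketch of this manual approach with the google-cloud-logging client (assuming a servlet environment; the class and log name below are placeholders, not part of the original answer):

import java.util.Collections;
import javax.servlet.http.HttpServletRequest;
import com.google.cloud.MonitoredResource;
import com.google.cloud.logging.LogEntry;
import com.google.cloud.logging.Logging;
import com.google.cloud.logging.LoggingOptions;
import com.google.cloud.logging.Payload.StringPayload;
import com.google.cloud.logging.Severity;

public class TraceAwareLogger {

    // Writes one log entry correlated to the trace of the current request.
    public static void logForRequest(HttpServletRequest request, String message) {
        LoggingOptions options = LoggingOptions.getDefaultInstance();
        Logging logging = options.getService();

        // Header format is "TRACE_ID/SPAN_ID;o=1"; only the trace id is needed.
        String traceId = request.getHeader("X-Cloud-Trace-Context").split("/")[0];

        LogEntry entry = LogEntry.newBuilder(StringPayload.of(message))
                .setSeverity(Severity.INFO)
                .setLogName("my-app-log") // placeholder log name
                // The fully qualified trace resource ties this entry to the request.
                .setTrace("projects/" + options.getProjectId() + "/traces/" + traceId)
                .setResource(MonitoredResource.newBuilder("global").build())
                .build();

        logging.write(Collections.singletonList(entry));
    }
}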
With Spring GCP
Fortunately, Spring Cloud GCP is there to make our lives easier.
You can find a sample project which implements it. Be careful: it's an App Engine Flexible example, but it will work fine with the Standard runtime.
It uses Logback.
From a working Spring Boot project on GAE Java 11, the steps to follow are:
Add the spring-cloud-gcp-starter-logging dependency:
<!-- Starter for Stackdriver Logging -->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-gcp-starter-logging</artifactId>
    <version>1.2.1.RELEASE</version>
</dependency>
Add a logback-spring.xml file inside the src/main/resources folder:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/cloud/gcp/autoconfigure/logging/logback-appender.xml" />
    <include resource="org/springframework/boot/logging/logback/defaults.xml" />
    <include resource="org/springframework/boot/logging/logback/console-appender.xml" />
    <root level="INFO">
        <!-- If running in GCP, remove the CONSOLE appender otherwise logs will be duplicated. -->
        <appender-ref ref="CONSOLE" />
        <appender-ref ref="STACKDRIVER" />
    </root>
</configuration>
Enable the Spring GCP logging feature inside src/main/resources/application.properties:
spring.cloud.gcp.logging.enabled=true
And use a logger inside your code:
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class DemoApplication {

    private static final Log LOGGER = LogFactory.getLog(DemoApplication.class);

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    // SomeData is a simple DTO from the sample project
    @GetMapping()
    public SomeData get() {
        LOGGER.info("My info message");
        LOGGER.warn("My warning message");
        LOGGER.error("My error message");
        return new SomeData("Hello from Spring Boot!");
    }
}
The result will show up in the Stackdriver Logging viewer, under appengine.googleapis.com/request_log.

Related

Importing secrets in Spring Boot application from AWS Secrets Manager

I stored my MySQL DB credentials in AWS secrets manager using the Credentials for other database option. I want to import these credentials in my application.properties file. Based on a few answers I found in this thread, I did the following:
Added the dependency spring-cloud-starter-aws-secrets-manager-config
Added spring.application.name = <application name> and spring.config.import = aws-secretsmanager: <Secret name> in application.properties
Used the secret keys as placeholders in the following properties:
spring.datasource.url = jdbc:mysql://${host}:3306/db_name
spring.datasource.username=${username}
spring.datasource.password=${password}
I am getting the following error while running the application:
java.lang.IllegalStateException: Unable to load config data from 'aws-secretsmanager:<secret_name>'
Caused by: java.lang.IllegalStateException: File extension is not known to any PropertySourceLoader. If the location is meant to reference a directory, it must end in '/' or File.separator
First, is the process I am following correct? If yes, what is this error about and how do I resolve it?
I found the problem that was causing the error. Apparently I was adding the wrong dependency.
According to the latest docs, the configuration support for using spring.config.import to import AWS secrets has been moved to io.awspring.cloud from org.springframework.cloud. So the updated dependency would be io.awspring.cloud:spring-cloud-starter-aws-secrets-manager-config:2.3.3 and NOT org.springframework.cloud:spring-cloud-starter-aws-secrets-manager-config:2.2.6.
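For Maven, that corresponds to the following dependency (coordinates exactly as above):
<dependency>
    <groupId>io.awspring.cloud</groupId>
    <artifactId>spring-cloud-starter-aws-secrets-manager-config</artifactId>
    <version>2.3.3</version>
</dependency>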
You are trying to use spring.config.import, and support for this was introduced in Spring Cloud AWS 2.3:
https://spring.io/blog/2021/03/17/spring-cloud-aws-2-3-is-now-available
Secrets Manager
Support loading properties through spring.config.import, introduced in Spring Cloud 2020.0. Read more about integrating your Spring Cloud application with the AWS Secrets Manager.
Removed the dependency to auto-configure module #526.
Dropped the dependency to javax.validation:validation-api.
Allow Secrets Manager prefix without “/” in the front #736.
In spring-cloud 2020.0.0 (aka Ilford), the bootstrap phase is no longer enabled by default. In order to enable it you need an additional dependency:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-bootstrap</artifactId>
    <version>{spring-cloud-version}</version>
</dependency>
However, starting with spring-cloud-aws 2.3, you can import the default AWS Secrets Manager keys (spring.config.import=aws-secretsmanager:) or individual keys (spring.config.import=aws-secretsmanager:secret-key;other-secret-key).
https://github.com/spring-cloud/spring-cloud-aws/blob/main/docs/src/main/asciidoc/secrets-manager.adoc
application.yml
spring.config.import: aws-secretsmanager:/secrets/spring-cloud-aws-sample-app
Or try leaving it empty:
spring.config.import=aws-secretsmanager:
In that case, it will use spring.application.name by default.
App:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.ApplicationRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class App {

    private static final Logger LOGGER = LoggerFactory.getLogger(App.class);

    public static void main(String[] args) {
        SpringApplication.run(App.class, args);
    }

    // Resolves the `password` key from the imported secret at startup.
    @Bean
    ApplicationRunner applicationRunner(@Value("${password}") String password) {
        return args -> LOGGER.info("`password` loaded from the AWS Secret Manager: {}", password);
    }
}

Getting error, "Entity doesn't exist in AsyncLocal" when trying to call CreateBatchWrite<T> method of DynamoDBContext object

I have created a DynamoDB table on my dev machine and I'm trying to insert a couple of rows from my .NET Core application using the CreateBatchWrite<T> method of the DynamoDBContext object. I'm able to query the table from the DynamoDB JavaScript Shell window at the localhost:8000/shell URL, and it returns a row count of 0. But when I try to call the CreateBatchWrite<T> method I get the error "Entity doesn't exist in AsyncLocal".
Explanation
When using X-Ray, this happens when there is an attempt to create a SubSegment without a parent Segment. Depending on your setup, when you run a query it might try creating a SubSegment, but it fails because there is no parent Segment.
This is common when running a Lambda function locally, as the Mock Lambda Test Tool will not create a Segment for you like the actual Lambda environment does on AWS. This can happen in other scenarios too.
More details here: https://github.com/aws/aws-xray-sdk-dotnet/issues/125
Solution
The easiest way to solve this is to disable X-Ray locally (as you probably don't want to generate traces locally):
In appsettings.Development.json add this:
"XRay": {
"DisableXRayTracing": "true",
"UseRuntimeErrors": "false",
"CollectSqlQueries": "false"
}
The important bit is setting DisableXRayTracing to true.
Make sure your appsettings.Development.json is set to Copy Always in the properties window. You can do this by including this in your .csproj:
<ItemGroup>
    <None Update="appsettings.Development.json">
        <CopyToOutputDirectory>Always</CopyToOutputDirectory>
    </None>
    <None Update="appsettings.json">
        <CopyToOutputDirectory>Always</CopyToOutputDirectory>
    </None>
</ItemGroup>
If you really want to trace things locally, then make sure you create a parent segment only when running locally (on AWS this would cause problems, as you would have two parent segments: one created manually by you, another created by AWS).
Add this line before any DynamoDB API methods are executed:
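// Log an error instead of throwing when there is no parent segment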
AWSXRayRecorder.Instance.ContextMissingStrategy = ContextMissingStrategy.LOG_ERROR;
You can find more info in GitHub discussion https://github.com/aws/aws-xray-sdk-dotnet/issues/69#issuecomment-482688754
Also, you will need to import these two namespaces:
using Amazon.XRay.Recorder.Core;
using Amazon.XRay.Recorder.Core.Strategies;
If you are tracing requests made with the AWS SDK, the X-Ray SDK attempts to generate a subsegment automatically to represent those requests, such as CreateBatchWrite. However, a subsegment can only be created as the child of an existing segment, so if you have not created a segment beforehand, that "Entity doesn't exist" error will occur.
See these docs for how to create custom segments. Alternatively, if you are developing a web app, the X-Ray SDK can automatically create segments for requests made to your service by adding configuration described in these docs

How to Rebuild Sitecore Index without blowing my log files?

I have a Sitecore + Coveo system. I have automated the index rebuild/refresh using a command, but while performing the rebuild/refresh my log files grow to ~40 GB.
Is there any way to restrict logging during the rebuild/refresh?
You need to set the logging level for your Crawling log. In the web.config file, find the logger called Sitecore.Diagnostics.Crawling and set the log level.
This is mine, set to INFO:
<logger name="Sitecore.Diagnostics.Crawling" additivity="false">
    <level value="INFO" />
    <appender-ref ref="CrawlingLogFileAppender" />
</logger>
That should reduce the amount of logs written. If you want to reduce it even further, you can set it to ERROR or NONE, but I would not recommend NONE.

Reporting Services and custom assembly connecting to third party web service

I have the following problem. I am using Reporting Services 2005 to create some report. I call method from my custom assembly and it works fine when my method is as follows:
public static string TestMethod()
{
    return "test";
}
However, when return "test"; is replaced by code that calls a third-party web service, nothing is returned to my RS report. I can't even log my exception to the EventLog (probably because of security restrictions).
My class is decorated with the following attribute:
[System.Web.AspNetHostingPermission(SecurityAction.Assert, Level = System.Web.AspNetHostingPermissionLevel.Unrestricted)]
Additionally, for the purpose of calling the web service, I've added the following to my custom assembly:
string serviceUri = "http://externallink/Default.asmx";
WebPermission p = new WebPermission(NetworkAccess.Accept, serviceUri);
p.Assert();
This does not help either. The error thrown in my custom assembly is as follows:
Request for the permission of type 'System.Net.WebPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.
Can someone please help me?
EDIT1:
I figured out that Reporting Services uses a custom trust level called RosettaSrv. The custom policy is set in rssrvpolicy.config. When I changed the trust level to Full, everything worked fine. However, I don't want to specify full trust, just the ability to access external web services. How can I do this?
You have to create a specific CodeGroup section in the CAS policy config file for RS (as you mentioned), which tells it to grant that permission to your assembly.
Follow steps found in this MSDN article for details: http://support.microsoft.com/kb/842419
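For illustration, here is a minimal sketch of such a code group (the name, description, and assembly path below are placeholders; the KB article has the authoritative steps). It goes inside the existing CodeGroup hierarchy of rssrvpolicy.config:
<CodeGroup class="UnionCodeGroup"
           version="1"
           PermissionSetName="FullTrust"
           Name="MyCustomAssemblyCodeGroup"
           Description="Grants permissions to my custom report assembly">
    <!-- Placeholder path: point this at your deployed assembly -->
    <IMembershipCondition class="UrlMembershipCondition"
                          version="1"
                          Url="C:\Program Files\Microsoft SQL Server\MSSQL.3\Reporting Services\ReportServer\bin\MyCustomAssembly.dll" />
</CodeGroup>
If you don't want to grant FullTrust, you can instead define a custom NamedPermissionSet that includes System.Net.WebPermission and reference it via PermissionSetName.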

Simple question about Sitecore's publish:end:remote event

Anyone have luck with the publish:end:remote Sitecore event or can shed some light on how it's supposed to work? I simply cannot get it to fire.
From what I understand, it's an event that will trigger after a successful publish to a remote instance of Sitecore. The trouble is, there appears to be no documentation on which server(s) this event is fired on (master or slave) or which server should contain the config setting.
I have the "History Engine" enabled on both of my servers for all databases like so:
<Engines.HistoryEngine.Storage>
    <obj type="Sitecore.Data.$(database).$(database)HistoryStorage, Sitecore.Kernel">
        <param connectionStringName="$(id)" />
    </obj>
</Engines.HistoryEngine.Storage>
As a test, I added a custom class to the publish:end:remote event on both servers. The class simply logs "Hello World" via Log.Info(), but nothing shows up.
I am using Sitecore 6.4.1 (rev. 101221).
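For reference, the handler is wired up in the <events> section of web.config roughly like this (class, assembly, and method names are hypothetical):
<event name="publish:end:remote">
    <handler type="MyCompany.Events.PublishEndRemoteHandler, MyCompany" method="OnPublishEndRemote" />
</event>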
UPDATE 1
I have read the latest Scaling guide and instituted all of the required configuration changes. Both our single Staging/CM server and (2) Prod/CD servers have EnableEventQueues set to true and the ScalabilitySettings.config is in place on all instances. That said, I believe the issue is that Sitecore is storing these queued events in the core database. Our CD servers are isolated from the staging core database and they are only linked to Staging via the "web" database. Should I be storing these queued events in the production 'web' database like so...
/eventing/providers/add[@name="sitecore"]
... and set the following attribute: systemDatabaseName="web"
UPDATE 2
I have set the eventing provider to use the (shared) production 'web' database and I now see event queues pouring into the EventQueue table. There are around 60 records for the "PublishEndRemoteEvent" event in that table at any given time. All of these events have the "InstanceName" set to my Staging instance name. "RaiseLocally" is set to FALSE and "RaiseGlobally" set to TRUE. Oddly, the "Created" date for new events is somehow 7 hours in the future. Our Staging server is located only 3 hours ahead of where I work. I'm thinking this time difference might be the culprit.
Be sure you have the "EnableEventQueues" setting set to true in both web.config files. You'll find it in the /sitecore/settings section of the web.config.
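In config terms, that is the following entry in the <settings> section:
<setting name="EnableEventQueues" value="true" />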
See my post in this thread on the SDN forum for more details:
http://sdn.sitecore.net/forum//ShowPost.aspx?PostID=34284
You may also want to check out the Scaling Guide document on SDN (it was recently updated):
http://sdn.sitecore.net/upload/sitecore6/64/scaling_guide_sc63-64-usletter.pdf
The time you are looking at is stored in UTC. Because of this, you shouldn't have problems even if your servers are situated on different continents.