Sitecore log file breakdown into 50MB chunks

In our production system the log files are pretty big (300MB and more). I need smaller files to do analysis, so I need these Sitecore log files broken down into 50MB chunks. How can I do this?
Sitecore 6.6

Can you try modifying the log4net section in the web.config file? You will have:
<appender name="LogFileAppender" type="log4net.Appender.SitecoreLogFileAppender, Sitecore.Logging">
  <file value="$(dataFolder)/logs/log.{date}.txt"/>
  <appendToFile value="true"/>
  <!-- Add this property: maximumFileSize -->
  <maximumFileSize value="50MB" />
  <layout type="log4net.Layout.PatternLayout">
    <conversionPattern value="%4t %d{ABSOLUTE} %-5p %m%n"/>
  </layout>
</appender>
<appender name="WebDAVLogFileAppender" type="log4net.Appender.SitecoreLogFileAppender, Sitecore.Logging">
  <file value="$(dataFolder)/logs/WebDAV/WebDAV.log.{date}.txt"/>
  <appendToFile value="true"/>
  <layout type="log4net.Layout.PatternLayout">
    <conversionPattern value="%4t %d{ABSOLUTE} %-5p %m%n"/>
  </layout>
</appender>
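Note that this only affects logs written from that point on; the existing 300MB files will still need to be split by other means. Also, if the SitecoreLogFileAppender in your version ignores maximumFileSize, a fallback sketch using the standard log4net RollingFileAppender could look like the following (this assumes RollingFileAppender is available in the Sitecore.Logging assembly, which repackages log4net; the backup count of 10 is an arbitrary example value):
<appender name="LogFileAppender" type="log4net.Appender.RollingFileAppender, Sitecore.Logging">
  <file value="$(dataFolder)/logs/log.txt"/>
  <appendToFile value="true"/>
  <!-- roll by size: start a new file once the current one reaches 50MB -->
  <rollingStyle value="Size"/>
  <maximumFileSize value="50MB"/>
  <!-- keep at most 10 rolled files (hypothetical value, tune as needed) -->
  <maxSizeRollBackups value="10"/>
  <staticLogFileName value="true"/>
  <layout type="log4net.Layout.PatternLayout">
    <conversionPattern value="%4t %d{ABSOLUTE} %-5p %m%n"/>
  </layout>
</appender>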

Related

SAS Parameter logconfigloc and conventional log output

I was given the task of logging user access to datasets in certain libraries.
To solve this I use the SAS Audit logger, which already provides the desired output.
To get this output, I use the start parameter logconfigloc with the following XML file:
<?xml version="1.0" encoding="UTF-8"?>
<logging:configuration xmlns:logging="http://www.sas.com/xml/logging/1.0/">
  <!-- Log file appender with immediate flush set to true -->
  <appender name="AuditLog" class="FileAppender">
    <param name="File" value="logconfig.xml.win.audit.file.xml.log"/>
    <param name="ImmediateFlush" value="true" />
    <filter class="StringMatchFilter">
      <param name="StringToMatch" value="WORK"/>
      <param name="AcceptOnMatch" value="false"/>
    </filter>
    <filter class="StringMatchFilter">
      <param name="StringToMatch" value="Libref"/>
      <param name="AcceptOnMatch" value="true"/>
    </filter>
    <!-- The DenyAllFilter discards all events that did not fulfill the criteria of at least one of the filters above -->
    <filter class="DenyAllFilter">
    </filter>
    <layout>
      <param name="ConversionPattern" value="%d - %u - %m"/>
    </layout>
  </appender>
  <!-- Audit message logger -->
  <logger name="Audit" additivity="false">
    <appender-ref ref="AuditLog"/>
    <level value="trace"/>
  </logger>
  <!-- root logger events not enabled -->
  <root>
  </root>
</logging:configuration>
My problem is that when the logconfigloc parameter is used, the log parameter no longer works, so I get no "conventional" SAS log.
I already tried enabling the root logger, but its output only looks similar to the original log files and has some differences.
Is there an (easy) way to get the "conventional" SAS log in addition to the aforementioned special access logging output?
Kind regards,
MiKe
I found the answer to the question of how to obtain the conventional log.
For this purpose the SAS logger named "App" with message level "info" can be used.
So the following XML does the trick:
<?xml version="1.0" encoding="UTF-8"?>
<logging:configuration xmlns:logging="http://www.sas.com/xml/logging/1.0/">
  <appender name="AppLog" class="FileAppender">
    <param name="File" value="D:\Jobs_MK\SAS_Logging_Facility\Advanced_Logging_Test_with_XML\logconfig_standard_log.log"/>
    <param name="ImmediateFlush" value="true" />
    <layout>
      <param name="ConversionPattern" value="%m"/>
    </layout>
  </appender>
  <!-- Application message logger -->
  <logger name="App" additivity="false">
    <appender-ref ref="AppLog"/>
    <level value="info"/>
  </logger>
  <!-- root logger events not enabled -->
  <root>
  </root>
</logging:configuration>

AWS CloudWatch logging and disabling the log file on the server itself with Spring Boot

In my Spring Boot application I configured logging to write to AWS CloudWatch, but the application also generates a log file on the server itself under /var/log/, and by now that file is larger than 19GB.
How can I disable the log on the server itself and only write logs to CloudWatch?
The following is my current logback-spring.xml configuration. Any ideas will be appreciated. Thanks in advance.
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <include resource="org/springframework/boot/logging/logback/base.xml" />
  <springProperty scope="context" name="ACTIVE_PROFILE" source="spring.profiles.active" />
  <property name="clientPattern" value="payment" />
  <logger name="org.springframework">
    <level value="INFO" />
  </logger>
  <logger name="com.payment">
    <level value="INFO" />
  </logger>
  <logger name="org.springframework.ws.client.MessageTracing.sent">
    <level value="TRACE" />
  </logger>
  <logger name="org.springframework.ws.client.MessageTracing.received">
    <level value="TRACE" />
  </logger>
  <logger name="org.springframework.ws.server.MessageTracing">
    <level value="TRACE" />
  </logger>
  <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <layout class="ch.qos.logback.classic.PatternLayout">
      <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [${HOSTNAME}:%thread] %-5level%replace([${clientPattern}] ){'\[\]\s',''}%logger{50}: %msg%n</pattern>
    </layout>
    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
      <level>TRACE</level>
    </filter>
  </appender>
  <springProfile name="local,dev">
    <root level="INFO">
      <appender-ref ref="CONSOLE" />
    </root>
  </springProfile>
  <springProfile name="prod,uat">
    <timestamp key="date" datePattern="yyyy-MM-dd" />
    <appender name="AWS_SYSTEM_LOGS" class="com.payment.hybrid.log.CloudWatchLogsAppender">
      <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
        <level>TRACE</level>
      </filter>
      <layout>
        <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [${HOSTNAME}:%thread] %-5level%replace([${clientPattern}] ){'\[\]\s',''}%logger{50}: %msg%n</pattern>
      </layout>
      <logGroupName>${ACTIVE_PROFILE}-hybrid-batch</logGroupName>
      <logStreamName>HybridBatchLog-${date}</logStreamName>
      <logRegionName>app-northeast</logRegionName>
    </appender>
    <appender name="ASYNC_AWS_SYSTEM_LOGS" class="ch.qos.logback.classic.AsyncAppender">
      <appender-ref ref="AWS_SYSTEM_LOGS" />
    </appender>
    <root level="INFO">
      <appender-ref ref="ASYNC_AWS_SYSTEM_LOGS" />
      <appender-ref ref="CONSOLE" />
    </root>
  </springProfile>
</configuration>
The most likely fix is to remove this line:
<appender-ref ref="CONSOLE" />
I say "most likely" because this is just writing output to the console. Which means that there's something else that redirects the output to /var/log/whatever, probably in the startup script for your application.
It's also possible that the included default file, org/springframework/boot/logging/logback/base.xml, because this file defines a file appender. I don't know if the explicit <root> definition will completely override or simply update the included default, but unless you know you need the default I'd delete the <include> statement.
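Put together, a sketch of logback-spring.xml for prod/uat with both suspects removed (assuming the posted CloudWatch appender works as-is; the logger elements and appender definitions stay exactly as in your current file):
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <!-- no <include> of base.xml here, so its default file appender is never created -->
  <springProperty scope="context" name="ACTIVE_PROFILE" source="spring.profiles.active" />
  <property name="clientPattern" value="payment" />
  <!-- logger elements and the AWS_SYSTEM_LOGS / ASYNC_AWS_SYSTEM_LOGS appenders as posted above -->
  <springProfile name="prod,uat">
    <root level="INFO">
      <!-- only the CloudWatch appender; no CONSOLE reference -->
      <appender-ref ref="ASYNC_AWS_SYSTEM_LOGS" />
    </root>
  </springProfile>
</configuration>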
If you need to recover space from the existing logfile, you can truncate it:
sudo truncate -s 0 /var/log/WHATEVER
Deleting it is not the correct solution, because the space won't actually be reclaimed until the application closes the file handle (which means restarting your server).
As one of the commenters suggested, you can use logrotate to prevent the on-disk file from getting too large.
But by far the most important thing you should do is read the Logback documentation.

HistoryLevel mismatch on Camunda 7.2

I deployed camunda.war (7.2) into my vanilla Tomcat 7 following the guide on the official web site.
Now when I start Tomcat I get the following error:
GRAVE: Error while closing command context
org.camunda.bpm.engine.ProcessEngineException: historyLevel mismatch: configuration says org.camunda.bpm.engine.impl.history.HistoryLevelAudit#21 and database says 3
In the database (ACT_GE_PROPERTY table) the historyLevel is set to 3.
This is my bpm-platform.xml file:
<?xml version="1.0" encoding="UTF-8"?>
<bpm-platform xmlns="http://www.camunda.org/schema/1.0/BpmPlatform"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="http://www.camunda.org/schema/1.0/BpmPlatform http://www.camunda.org/schema/1.0/BpmPlatform ">
  <job-executor>
    <job-acquisition name="default" />
  </job-executor>
  <process-engine name="default">
    <job-acquisition>default</job-acquisition>
    <configuration>org.camunda.bpm.engine.impl.cfg.StandaloneProcessEngineConfiguration</configuration>
    <datasource>java:jdbc/solve</datasource>
    <properties>
      <property name="history">full</property>
      <property name="databaseSchemaUpdate">true</property>
      <property name="authorizationEnabled">true</property>
      <property name="jobExecutorDeploymentAware">true</property>
    </properties>
    <plugins>
      <!-- plugin enabling Process Application event listener support -->
      <plugin>
        <class>org.camunda.bpm.application.impl.event.ProcessApplicationEventListenerPlugin</class>
      </plugin>
      <!-- plugin enabling integration of camunda Spin -->
      <plugin>
        <class>org.camunda.spin.plugin.impl.SpinProcessEnginePlugin</class>
      </plugin>
      <!-- plugin enabling connect support -->
      <plugin>
        <class>org.camunda.connect.plugin.impl.ConnectProcessEnginePlugin</class>
      </plugin>
    </plugins>
  </process-engine>
</bpm-platform>
EDIT
Processes.xml
<?xml version="1.0" encoding="UTF-8" ?>
<process-application
    xmlns="http://www.camunda.org/schema/1.0/ProcessApplication"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <process-archive name="MyProcess">
    <process-engine>myEngine</process-engine>
    <properties>
      <property name="isDeleteUponUndeploy">false</property>
      <property name="isScanForProcessDefinitions">true</property>
    </properties>
  </process-archive>
</process-application>
applicationContext.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:tx="http://www.springframework.org/schema/tx"
       xmlns:jee="http://www.springframework.org/schema/jee"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
                           http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.1.xsd
                           http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.1.xsd
                           http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee-3.1.xsd">
  <jee:jndi-lookup id="dataSource" jndi-name="java:comp/env/jdbc/my_process" />
  <bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    <property name="dataSource" ref="dataSource" />
  </bean>
  <bean id="processEngineConfiguration" class="org.camunda.bpm.engine.spring.SpringProcessEngineConfiguration">
    <property name="processEngineName" value="myEngine" />
    <property name="dataSource" ref="dataSource" />
    <property name="transactionManager" ref="transactionManager" />
    <property name="databaseSchemaUpdate" value="true" />
    <property name="jobExecutorActivate" value="false" />
    <property name="deploymentResources" value="classpath*:*.bpmn" />
    <property name="processEnginePlugins">
      <list>
        <bean id="spinPlugin" class="org.camunda.spin.plugin.impl.SpinProcessEnginePlugin" />
      </list>
    </property>
  </bean>
  <bean id="processEngine" class="org.camunda.bpm.engine.spring.container.ManagedProcessEngineFactoryBean" destroy-method="destroy">
    <property name="processEngineConfiguration" ref="processEngineConfiguration" />
  </bean>
  <bean id="repositoryService" factory-bean="processEngine" factory-method="getRepositoryService" />
  <bean id="runtimeService" factory-bean="processEngine" factory-method="getRuntimeService" />
  <bean id="taskService" factory-bean="processEngine" factory-method="getTaskService" />
  <bean id="historyService" factory-bean="processEngine" factory-method="getHistoryService" />
  <bean id="managementService" factory-bean="processEngine" factory-method="getManagementService" />
  <bean id="MyProcess" class="org.camunda.bpm.engine.spring.application.SpringServletProcessApplication" />
  <context:annotation-config />
  <bean class="it.processes.Starter" />
  <bean id="cudOnProcessVariableService" class="it.services.CUDonProcessVariableService" />
</beans>
Did you download the 'camunda standalone webapp' WAR for Tomcat from camunda.org/download?
If so, you should adjust the history level inside the applicationContext.xml file by adding <property name="history" value="full"/> to the process engine configuration bean (<bean id="processEngineConfiguration" class="org.camunda.bpm.engine.spring.SpringProcessEngineConfiguration">).
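A sketch of that bean with the property added (assembled from the applicationContext.xml posted above; "full" is chosen to match the 3 stored in ACT_GE_PROPERTY):
<bean id="processEngineConfiguration" class="org.camunda.bpm.engine.spring.SpringProcessEngineConfiguration">
  <property name="processEngineName" value="myEngine" />
  <property name="dataSource" ref="dataSource" />
  <property name="transactionManager" ref="transactionManager" />
  <property name="databaseSchemaUpdate" value="true" />
  <!-- history level "full" corresponds to the value 3 the database reports -->
  <property name="history" value="full" />
  <property name="jobExecutorActivate" value="false" />
  <property name="deploymentResources" value="classpath*:*.bpmn" />
  <!-- processEnginePlugins list unchanged from the original bean -->
</bean>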
Otherwise it is strange that you say you installed camunda.war into a vanilla Tomcat but post your bpm-platform.xml file, which is not used at all when you are not using the shared process engine approach.

Is it possible to have a new file for each new day with log4cxx?

I know about the rollingPolicy parameter in the log4cxx config file, but I can't manage to write a config file that tells the logger to create a new file each day. How can I achieve this?
Yes, using a rolling style of Composite, like this:
<appender name="LogAppender" type="log4net.Appender.RollingFileAppender">
  <file type="log4net.Util.PatternString" value="LogFile.log" />
  <appendToFile value="true" />
  <rollingStyle value="Composite" />
  <datePattern value="yyyyMMdd" />
  <maxSizeRollBackups value="7" />
  <maximumFileSize value="100MB" />
  <layout type="log4net.Layout.PatternLayout">
    <conversionPattern value="%date{ISO8601}: [%2thread] %-5level %logger: '%P{network}.%P{node}' %message%newline" />
  </layout>
</appender>
Ref.:
Short introduction to Apache log4cxx
log4net Config Examples
I think the following appender will do it (I can't test it on this PC):
<!-- The following appender writes to "TimeBasedLog.log"; every night, a few seconds after
     midnight, the old log is renamed with the date appended to the filename and a new
     "TimeBasedLog.log" is created.
     Notice the RollingFileAppender is in the "org.apache.log4j.rolling" namespace. -->
<appender name="MyRollingAppenderDaily" class="org.apache.log4j.rolling.RollingFileAppender">
  <rollingPolicy class="org.apache.log4j.rolling.TimeBasedRollingPolicy">
    <param name="FileNamePattern" value="TimeBasedLog.%d{yyyy-MM-dd}.log"/>
    <param name="activeFileName" value="TimeBasedLog.log"/>
  </rollingPolicy>
  <layout class="org.apache.log4j.PatternLayout">
    <param name="ConversionPattern" value="%d{yyyy-MM-dd HH:mm:ss,SSS} %x [%p] (%F:%L) %m%n"/>
  </layout>
  <param name="file" value="TimeBasedLog.log"/>
  <param name="append" value="true"/>
</appender>
I am wondering if it is possible to combine, inside one appender, both the TimeBasedRollingPolicy and the MaxFileSize/MaxBackupIndex feature?
<param name="MaxFileSize" value="5KB"/>
<param name="MaxBackupIndex" value="5"/>

NAnt build script undefined issue

I have the following code in a NAnt build script:
<project name="fgs">
  <property name="build.dir" value="build"/>
  <property name="build.bin.dir" value="${build.dir}/bin"/>
  <fileset id="provider.1.0-references" basedir="${build.bin.dir}">
    <include name="thenameofadllfile.*"/>
  </fileset>
  <macrodef name="build-dist">
    <attributes>
      <attribute name="version"/>
      <attribute name="service.references"/>
      <attribute name="release.type"/>
    </attributes>
    <sequential>
      <echo message="service.references: ${service.references}" />
      <copy todir="${build.dist.dir}/server/${version}/${release.type}/bin" >
        <fileset refid="#{service.references}" casesensitive="false" />
      </copy>
    </sequential>
  </macrodef>
  <target name="create-dist">
    <server-staging-dist release.type="staging" version="1.0" service.references="service.1.0-references" />
  </target>
However, when I run this code I get: fileset reference '#{service.references}' is not defined.
I have tried changing the dollar sign to the # symbol; I'm not sure what the difference is.
Thanks in advance for any help or advice.
Try <fileset refid="provider.1.0-references" casesensitive="false" />. The only fileset the script defines is provider.1.0-references, so the service.1.0-references id passed in by the target never resolves to anything.
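In other words, a minimal sketch of the copy step with the reference hardcoded to the fileset the script actually declares (names taken from the question):
<copy todir="${build.dist.dir}/server/${version}/${release.type}/bin">
  <!-- refid points at the fileset declared at project level -->
  <fileset refid="provider.1.0-references" casesensitive="false" />
</copy>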