Hello Sitecore masters,
I have an issue with auto-publish and I am not able to figure it out.
I created some items in Sitecore for an external page, and they should have specific publish / unpublish dates.
So each item has publish / unpublish values set, and I have set up the agent, but the publish does not work and I don't know how to debug it. I checked the logs and they say that items were skipped; I assume this refers to my items.
This is how it looks in sitecore.config:
<agent type="Sitecore.Tasks.PublishAgent" method="Run" interval="00:00:10">
<param desc="source database">master</param>
<param desc="target database">web</param>
<param desc="mode (full or smart or incremental)">incremental</param>
<param desc="languages">nb, en</param>
</agent>
I have also changed this:
<scheduling>
<!-- Time between checking for scheduled tasks waiting to execute -->
<frequency>00:00:10</frequency>
The funny / strange thing is that if I publish manually from Sitecore, the unpublish works. The publish doesn't.
Any hint will be appreciated.
Other information:
You should use the Smart publishing mode instead of Incremental. Incremental publishing only publishes the items in the publishing queue, which is the list of items known to have been modified.
Get more info about publishing types here
I'm running a Spring Boot application within the Google Cloud Platform and viewing the log files in the Google Cloud Platform Logs Viewer. Before using Spring Boot, when I was just using simple servlets, the logging entries would be displayed as:
Each request would be grouped, and all the logging information for that request could be seen by expanding the row. However, when using Spring Boot the requests are no longer grouped and the log entries are just shown line by line. When there are multiple requests, the log entries get very confusing as a result, because it isn't possible to view them in a grouped way. I have my logging.properties set up in the same way:
.level = INFO
handlers=java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.level=FINEST
java.util.logging.ConsoleHandler.formatter=java.util.logging.SimpleFormatter
java.util.logging.SimpleFormatter.format = [%1$tc] %4$s: %2$s - %5$s %6$s%n
The Logger is initialised in each class as:
private static final java.util.logging.Logger LOG = java.util.logging.Logger.getLogger(MyClass.class.getName());
And then the logging API is used as:
LOG.info("My Message");
I don't understand why the statements are being logged differently and are no longer grouped, but it must have something to do with the way Spring Boot handles logging?
With recent runtimes, App Engine has been evolving toward a more container-based, more "open" approach, converging with newer products (like Cloud Run, for example).
This changes a little the way we develop with GAE: some legacy libraries aren't available (SearchAPI...), and it also changes how logs are managed.
We can reproduce this "nested log" feature with the new Java 11 runtime, but we need to manage it ourselves.
As official docs mentioned:
In the Logs Viewer, log entries correlated by the same trace can be
viewed in a "parent-child" format.
It means that if we retrieve the trace identifier from the X-Cloud-Trace-Context HTTP header of our request, we can use it as the trace identifier attribute of a new LogEntry.
This can be done by using the Stackdriver Logging client libraries.
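As a minimal sketch of that step (the class and method names here are my own, not from any official sample): the X-Cloud-Trace-Context header has the shape TRACE_ID/SPAN_ID;o=OPTIONS, and the trace id must be reformatted into a projects/PROJECT_ID/traces/TRACE_ID resource name before being set on the log entry:

```java
// Sketch: extract the trace id from the X-Cloud-Trace-Context header and
// build the trace resource name that Stackdriver Logging uses to correlate
// entries. Helper names and the project id are illustrative assumptions.
public class TraceContext {

    // Header format: "TRACE_ID/SPAN_ID;o=OPTIONS" -- we only need TRACE_ID.
    static String traceIdOf(String header) {
        int slash = header.indexOf('/');
        return slash < 0 ? header : header.substring(0, slash);
    }

    // Entries sharing this exact resource name are grouped in the Logs Viewer.
    static String traceResource(String projectId, String header) {
        return "projects/" + projectId + "/traces/" + traceIdOf(header);
    }

    public static void main(String[] args) {
        String header = "105445aa7843bc8bf206b12000100000/1;o=1";
        // With the google-cloud-logging client, this value would then be
        // attached to the LogEntry as its trace attribute before writing.
        System.out.println(traceResource("my-project", header));
    }
}
```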
With Spring GCP
Fortunately, Spring Cloud GCP is there to make our lives easier.
You can find a sample project which implements it. Be careful: it's an App Engine Flexible example, but it will work fine with the Standard runtime.
It uses Logback.
From a working Spring Boot project on GAE Java 11, the steps to follow are:
Add the spring-cloud-gcp-starter-logging dependency:
<!-- Starter for Stackdriver Logging -->
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-gcp-starter-logging</artifactId>
<version>1.2.1.RELEASE</version>
</dependency>
Add a logback-spring.xml inside the src/main/resources folder:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<include resource="org/springframework/cloud/gcp/autoconfigure/logging/logback-appender.xml" />
<include resource="org/springframework/boot/logging/logback/defaults.xml"/>
<include resource="org/springframework/boot/logging/logback/console-appender.xml" />
<root level="INFO">
<!-- If running in GCP, remove the CONSOLE appender otherwise logs will be duplicated. -->
<appender-ref ref="CONSOLE"/>
<appender-ref ref="STACKDRIVER" />
</root>
</configuration>
Enable Spring GCP logging feature, inside src/main/resources/application.properties :
spring.cloud.gcp.logging.enabled=true
And use a LOGGER inside your code:
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class DemoApplication {

    private static final Log LOGGER = LogFactory.getLog(DemoApplication.class);

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

    @GetMapping()
    public SomeData get() {
        LOGGER.info("My info message");
        LOGGER.warn("My warning message");
        LOGGER.error("My error message");
        return new SomeData("Hello from Spring Boot!");
    }
}
The result will appear in the Stackdriver Logging viewer, under appengine.googleapis.com/request_log:
We have a content folder that has been turned into a bucket to manage the sheer number of items it will contain. The bucket items are published via a workflow and the bucket items are left to be published by the scheduled publish that runs periodically.
For the most part, all works well with regards to the bucket item creation and editing process. But the bucket folders sometimes refuse to publish. After inspecting the bucket folders, we found that they have Version 1 created, but when you go to check that version it just lists 'Modified [Not set] by'. This causes the folder items to not be registered in the PublishQueue table, so they don't get published.
The simplest way to fix this is to right-click the folder item, click Rename, and then just click OK on the popup message. This updates the version with the proper modified-by and date values, and the publish process picks it up.
Has anyone come across this issue or has any tips for us to try?
This is a known bug.
1) Place the attached Sitecore.Support.413254.dll file into the Website\bin folder.
2) Back up the "Sitecore.Buckets.config" file from the Website\App_Config\Include folder.
3) In the "Sitecore.Buckets.config" file, change the following processor:
<publish>
<!-- Extending publish pipeline to always add bucket folders to the queue when a bucketed item is being published -->
<processor patch:after="processor[@type='Sitecore.Publishing.Pipelines.Publish.AddItemsToQueue, Sitecore.Kernel']" type="Sitecore.Buckets.Pipelines.Publish.AddBucketFoldersToQueue, Sitecore.Buckets" />
</publish>
with this processor:
<publish>
<!-- Extending publish pipeline to always add bucket folders to the queue when a bucketed item is being published -->
<processor patch:after="processor[@type='Sitecore.Publishing.Pipelines.Publish.AddItemsToQueue, Sitecore.Kernel']" type="Sitecore.Support.Buckets.Pipelines.Publish.AddBucketFoldersToQueue, Sitecore.Support.413254" />
</publish>
Here is the dll:
https://www.dropbox.com/s/thr94mqi8967dab/Sitecore.Support.413254.dll?dl=0
I have an Azure subscription with an India location. When using the Sitecore Azure module, the list of locations for farms does not contain India. Can I use that subscription for a Sitecore Azure deployment using the Sitecore Azure module?
The data centers appear to be defined in the /App_Config/AzureVendors/Microsoft.xml file. You can add a data center node to that XML something like this:
<vendor name="Microsoft">
<datacenters>
...
<datacenter name="West India">
<coordinates left="66%" top="56.5%" />
</datacenter>
</datacenters>
...
</vendor>
Note, however, that according to https://azure.microsoft.com/en-us/regions/#services, the India regions do not support all Cloud Service sizes.
I tested adding a new data center and it appears on the map and the menu seems to work. However, I have not actually tried deploying, so proceed with caution.
We have the Publish Agent set to run every 15 minutes with 'Incremental Publish'. Sitecore client users 'Check In' and 'Approve' an item in Sitecore to queue the item. They can also do a manual publish if required to make something live immediately. We are seeing some issues where some of the items that are checked in and approved through the workflow are not getting picked up by the scheduled publisher. Also, when the user tries to publish from the publish tab the parent publishes but not the child items. The child items have to be published one at a time.
To me the issue seems to be that these approved items are not getting added to the publishing queue. But I am not certain of this.
We installed a module called 'Publishing Status Manager' which basically shows a Sitecore user the various publish operations that are active or in queue. This problem started occurring after that module was installed. I am not sure if that is the cause of this issue though.
I am looking for some suggestions/advice on where to look and how to fix this issue.
Items that are in the final workflow step are always added to the publish queue. I guess your issue revolves around the fact that the items in the workflow are not in the final workflow step. Please ensure that the items actually reach this state.
If you would like to check what's in the publish queue, read this article:
http://briancaos.wordpress.com/2011/06/16/sitecore-publish-queue/
You must use the code described in "THE CURRENT VIEW", as it tells you what will be published the next time an incremental publish is executed.
Also, ensure that the publish agent publishes the current targets and correct languages:
<agent type="Sitecore.Tasks.PublishAgent" method="Run" interval="00:00:00">
<param desc="source database">master</param>
<param desc="target database">web</param>
<param desc="mode (full or smart or incremental)">incremental</param>
<param desc="languages">en, da</param>
</agent>
It was just the module that we installed that overwrote the publishing pipeline.
The publish agent will not pick up queued items if the value of the Publishing.CheckSecurity setting is true in web.config. You can set this value to false, or else create a user, give it proper access rights, and override the agent to switch to that user.
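A minimal sketch of a patch file that would flip this setting, assuming the standard Sitecore App_Config/Include patching mechanism (the file name is my own choice, not a required one):

```xml
<!-- App_Config/Include/DisablePublishSecurity.config (hypothetical file name) -->
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <settings>
      <!-- When true, the publish agent skips items the publishing user cannot read -->
      <setting name="Publishing.CheckSecurity">
        <patch:attribute name="value">false</patch:attribute>
      </setting>
    </settings>
  </sitecore>
</configuration>
```

Note that disabling the security check trades safety for convenience; the alternative of a dedicated publishing user with proper rights keeps the check in place.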
Anyone have luck with the publish:end:remote Sitecore event or can shed some light on how it's supposed to work? I simply cannot get it to fire.
From what I understand, it's an event that will trigger after a successful publish to a remote instance of Sitecore. The trouble is, there appears to be no documentation on which server(s) this event is fired on (master or slave) or which server should contain the config setting.
I have the "History Engine" enabled on both of my servers for all databases like so:
<Engines.HistoryEngine.Storage>
<obj type="Sitecore.Data.$(database).$(database)HistoryStorage, Sitecore.Kernel">
<param connectionStringName="$(id)">
</param>
</obj>
</Engines.HistoryEngine.Storage>
As a test, I added a custom class to the publish:end:remote event on both servers. The class simply logs "Hello World" via Log.Info(), but nothing shows up.
I am using Sitecore 6.4.1 (rev. 101221).
UPDATE 1
I have read the latest Scaling guide and instituted all of the required configuration changes. Both our single Staging/CM server and (2) Prod/CD servers have EnableEventQueues set to true and the ScalabilitySettings.config is in place on all instances. That said, I believe the issue is that Sitecore is storing these queued events in the core database. Our CD servers are isolated from the staging core database and they are only linked to Staging via the "web" database. Should I be storing these queued events in the production 'web' database like so...
/eventing/providers/add[@name="sitecore"]
... and set the following attribute: systemDatabaseName="web"
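As a sketch, that section of web.config would then look something like this (keep the existing type="..." attribute exactly as it already appears in your web.config; the ellipsis below is a placeholder, not a value to copy):

```xml
<!-- web.config: point the eventing provider at the shared 'web' database -->
<eventing defaultProvider="sitecore">
  <providers>
    <clear />
    <!-- only systemDatabaseName changes; all other attributes stay as-is -->
    <add name="sitecore" systemDatabaseName="web" type="..." />
  </providers>
</eventing>
```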
UPDATE 2
I have set the eventing provider to use the (shared) production 'web' database, and I now see event queues pouring into the EventQueue table. There are around 60 records for the "PublishEndRemoteEvent" event in that table at any given time. All of these events have the "InstanceName" set to my Staging instance name. "RaiseLocally" is set to FALSE and "RaiseGlobally" is set to TRUE. Oddly, the "Created" date for new events is somehow 7 hours in the future. Our Staging server is located only 3 hours ahead of where I work. I'm thinking this time difference might be the culprit.
Be sure you have the "EnableEventQueues" setting set to true in both web.config files. You'll find it in the /sitecore/settings section of the web.config.
See my post in this thread on the SDN forum for more details:
http://sdn.sitecore.net/forum//ShowPost.aspx?PostID=34284
You may also want to check out the Scaling Guide document on SDN (it was recently updated):
http://sdn.sitecore.net/upload/sitecore6/64/scaling_guide_sc63-64-usletter.pdf
The time you are looking at is stored in UTC. Because of this, you shouldn't have problems even if your servers are situated on different continents.