Sitecore use of agents and tasks

In Sitecore we have the possibility of agents and tasks, but it is not very clear when to use which. My situation: I want to run an importer (possibly taking half an hour) each night at a specified time. The importer will import data from an external source into Sitecore. Which is better: an agent or a task?

They roughly mean the same thing.
In the web.config you can define scheduled agents under the <scheduling> section; however, some out-of-the-box agents are in the Sitecore.Tasks namespace. So they appear to be one and the same, but really everything is an agent.
In Sitecore itself, under /sitecore/system/tasks you will see definition items for the same thing. These are called "tasks" but in reality, they are just logical definition items that run based on the schedule. In fact, these are just a CMS-friendly way to define what's also in the web.config as agents. There exists a configured agent that processes these from the CMS:
<!-- Agent to process schedules embedded as items in a database -->
<agent type="Sitecore.Tasks.DatabaseAgent" method="Run" interval="00:10:00">
  <param desc="database">master</param>
  <param desc="schedule root">/sitecore/system/tasks/schedules</param>
  <LogActivity>true</LogActivity>
</agent>
<!-- Agent to process tasks from the task database (TaskDatabase) -->
<agent type="Sitecore.Tasks.TaskDatabaseAgent" method="Run" interval="00:10:00" />
So if you want something that can be changed in the CMS, create a task under the system section. If you want something that is for developers only, create a config patch and apply your own custom <agent> on whatever timer you want.
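For the nightly importer in the question, the developer route would be a config patch plus a small agent class. A minimal sketch, assuming a hypothetical MySite.Tasks.NightlyImportAgent; note that agents fire on an interval, not at a time of day, so the agent itself has to check whether the nightly window has arrived:

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <scheduling>
      <!-- Polled hourly; the agent decides whether it is time to import. -->
      <agent type="MySite.Tasks.NightlyImportAgent" method="Run" interval="01:00:00" />
    </scheduling>
  </sitecore>
</configuration>

namespace MySite.Tasks
{
    public class NightlyImportAgent
    {
        // Called by Sitecore's scheduler on every interval tick.
        public void Run()
        {
            // Only run the long import during the 02:00 hour (illustrative window).
            if (System.DateTime.Now.Hour != 2)
                return;
            // ... import data from the external source into Sitecore ...
        }
    }
}

Alternatively, create a schedule item under /sitecore/system/tasks/schedules; its Schedule field uses the pipe-separated format FromDate|ToDate|DaysOfWeekBitmask|Interval (e.g. 20000101|20991231|127|00:30), which the DatabaseAgent shown above evaluates.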

Related

Sitecore 8: how to track Content Editors' activity?

We have a website running Sitecore 8.1 with multiple content editors. Is there any way to log their activity, i.e. a list of the actions they have performed in terms of editing/publishing/unpublishing?
We had a problem last week which I suspect being caused by someone unpublishing the wrong item, but I need to make sure this is the case, or at least I would like this ability in the future.
Do I need to create my own event-triggered logging?
There's nothing fully out of the box to provide those reports in Sitecore. You can take a look at the Sitecore Audit Trail module, which will log all the "editor action" audit information to a separate log4net appender. You can find more information on the module in this blog post, but note that it is only marked as compatible with Sitecore 7.5. It should not be hard to make it work with Sitecore 8.1, though; mainly you would update the appender config, since the log4net config now lives within the <sitecore> node.
Sitecore does log content editor actions out of the box in the normal log files (in Data\logs); the entries start with "AUDIT", so you can find them easily. It logs things such as items being saved, publishes starting, etc. Do a search in the log files to find them.
You can get these saved to a separate log file for easier review: https://sdn.sitecore.net/scrapbook/how%20to%20make%20sitecore%206%20write%20audit%20log%20to%20its%20own%20file.aspx
This still works in Sitecore 8 except the setting is in App_Config/Sitecore.config now.
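What that article does boils down to adding a dedicated log4net appender and routing the audit logger to it. A rough sketch for Sitecore 8, placed inside the <log4net> section of App_Config/Sitecore.config (the file name is illustrative; double-check the logger name against the article for your version):

<appender name="AuditLogFileAppender" type="log4net.Appender.SitecoreLogFileAppender, Sitecore.Logging">
  <file value="$(dataFolder)/logs/audit.log.{date}.txt" />
  <appendToFile value="true" />
  <layout type="log4net.Layout.PatternLayout">
    <conversionPattern value="%4t %d{ABSOLUTE} %-5p %m%n" />
  </layout>
</appender>
<logger name="Sitecore.Diagnostics.Auditing" additivity="false">
  <level value="INFO" />
  <appender-ref ref="AuditLogFileAppender" />
</logger>

With additivity="false", AUDIT entries go only to the new file instead of also landing in the main log.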
You can use Sitecore Advanced System Reporter
Sitecore 6 ships with a very useful function called My Locked Items. At times, though, admin users may want to see all locked items, not just those locked by them. I wrote a little application to do just that. Then I thought of making it more generic, so that one could create other types of reports easily. The result is a little framework which allows you to create many types of reports in very short time. In this module I provide this framework together with many useful example reports, like:
items modified or updated in the last X days
items that have more than X children
items that are publishable but either do not exist or have a different version in the web database
items that are based on a particular template
items with validation errors
which templates have been linked to a workflow
locked items
publishable items with broken links
audit information
errors in the log files
items that have stayed in the same workflow state for more than X days
and more.
You can now also parametrise those reports, save them as links on the desktop, export them, or even create a scheduled task that emails some of them automatically. In addition, you can also apply commands to the items reported.
You can download the module from : https://marketplace.sitecore.net/en/Modules/A/Advanced_System_Reporter.aspx
The module is available for Sitecore 6.4 to 8. I don't know if it has the functionality you are looking for, but you can customize it.
You can check this blog post on how to extend it: http://www.seanholmesby.com/sitecore-auditing-with-the-advanced-system-reporter/
Update
Install the module.
Run the module and choose one of the reports.
Run the report.
View the report, or export it as CSV, Excel or XML.

Undoing A Sitecore Publish

Aside from investing in TDS or restoring the SQL database, is it possible to undo/rollback a Sitecore publish if someone publishes something that shouldn't have been?
I am using Sitecore 8.
If you know which items were published, you could set the version that should not have been published to unpublishable and re-publish the item. That would set it back to the previous version.
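If you would rather flip that in code than through the Publishing restrictions dialog, here is a minimal sketch (the item path is illustrative):

// Mark the latest version of an item as unpublishable, then republish the item.
var master = Sitecore.Data.Database.GetDatabase("master");
var item = master.GetItem("/sitecore/content/Home/MyItem"); // illustrative path
var version = item.Versions.GetLatestVersion();
using (new Sitecore.SecurityModel.SecurityDisabler())
{
    version.Editing.BeginEdit();
    // Equates to ticking the "__Hide version" checkbox on that version.
    version.Fields[Sitecore.FieldIDs.HideVersion].Value = "1";
    version.Editing.EndEdit();
}

After republishing, the last publishable version goes live again.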
You can always check the Sitecore logs for the items that have been published and then republish the versions intended while setting the unwanted versions unpublishable.
There is a setting you can activate in the config that logs every item that's been published; set <traceToLog> to true on the UpdateStatistics processor:
<processor type="Sitecore.Publishing.Pipelines.PublishItem.UpdateStatistics, Sitecore.Kernel" runIfAborted="true">
  <traceToLog>true</traceToLog>
</processor>
Beware it will add lots of information to the logs.
If you need to check back in time without this setting set to true, things get a bit harder. You could interrogate the History and EventQueue tables, as these contain all the items that have changed and therefore contribute towards smart publishes. The logs should give you a view of what type of publishes have been run (smart vs. incremental vs. republish) and where in the tree they were kicked off.
Unfortunately there isn't the concept of a transaction over a publish; as Richard mentioned, you'd need to replay items back over the top, or get a db restored.

Controlling Version of Deployed Camunda BPM

Every time I modify and deploy a process, the version number increases. I understand why it is increasing, but is there a way to force a predefined version so that deployments override only that version? The reason is that even for small bug fixes, I don't want the version to change.
Are you talking about production or development?
In dev, you can configure the processes.xml so all instances and old versions of the process are removed:
<process-archive>
  <properties>
    <property name="isDeleteUponUndeploy">true</property>
  </properties>
</process-archive>
On production, you would not want to delete running or completed instances. You might want to migrate running instances to the next version, but that is not generic, it depends on the process and the changes made. Make sure to read process-versioning-version-migration from the user guide.
A third approach would be to work with calls to services (expressions/delegates/listeners) instead of hard-modelling inside the BPMN. If, for example, you write "${price > 500}" on an exclusive gateway flow, you will have a new process version when you deploy a "fix" with the value "1000". If you design your process application so that it calls "${myPriceCalculator.limitExceeded(price)}" instead, you can deploy a new war, but the process remains untouched.
No, this does not work; you can only deploy a new version and delete the old one.
The Camunda REST API will help you deploy and delete deployment versions; you just have to pass the deployment id.
If you are using a standalone Camunda process engine (server), the REST URL to delete a deployment (via HTTP DELETE) is:
http://localhost:8080/engine-rest/deployment/fa9af59a-382b-11ea-96d8-5edcd02b4f71
or, if your Camunda process engine is embedded in a Spring Boot application, the URL is:
http://localhost:8080/rest/deployment/fa9af59a-382b-11ea-96d8-5edcd02b4f71
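A sketch of making that call from C# with HttpClient (the deployment id is the placeholder from above; cascade=true also removes process instances and history tied to the deployment, so use it deliberately):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class DeleteDeploymentExample
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // cascade=true also deletes running/historic instances of this deployment.
            var url = "http://localhost:8080/engine-rest/deployment/fa9af59a-382b-11ea-96d8-5edcd02b4f71?cascade=true";
            var response = await client.DeleteAsync(url);
            response.EnsureSuccessStatusCode(); // Camunda answers 204 No Content on success
            Console.WriteLine("Deployment deleted.");
        }
    }
}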
Or
You will have a processes.xml file in the resources folder of your application. You can set isDeleteUponUndeploy to true, so on every undeployment of the workflow, your workflow file will get deleted:
<process-archive>
  <properties>
    <property name="isDeleteUponUndeploy">true</property>
  </properties>
</process-archive>
Or
You can delete from the Camunda UI as well; the link is: http://localhost:8080/app/cockpit/default/#/dashboard
Now go to Deployments, select your deployed version, and click on "Delete version".

Fault-free ColdFusion app update process

When updating a ColdFusion website with svn or git, there is a moment where half of the repo is updated and the other half is not, during which a request could occur; that could mean epic fails in some cases.
So it seems like I need a way of pausing requests while svn/git is updating the folder where the website's source resides, after which point I can have an updated version number trigger the app to update itself before responding to any requests.
It's a short amount of time, but could cause many different problems depending on the app.
Does anyone have any useful advice?
For our applications we follow Adam's advice and remove a node from the load balancer; however, for those who only have one server there is an easy solution.
Login to the ColdFusion Administrator
Click "Caching" on the left side bar
Ensure the "Trusted Cache" setting is selected.
Going forward, after you have completed a code checkout you will "Clear Template Cache", which can be done on the "Caching" page, using the CF Admin API, or using the Adobe AIR ColdFusion Server Manager application.
This setting will ensure your new CFML code is not "live" until you clear the template cache after a successful code checkout from your SCM. Additionally, it can bring a performance improvement of as much as 40%, since ColdFusion will no longer check your .cfc/.cfm files for changes. All production servers should run with this setting checked.
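For scripted deployments you can automate the cache clear through the Admin API; a minimal sketch, assuming the standard CFIDE Admin API components and a placeholder admin password:

<cfscript>
    // Log in to the ColdFusion Admin API, then clear the trusted/template cache.
    adminObj = createObject("component", "CFIDE.adminapi.administrator");
    adminObj.login("yourAdminPassword"); // placeholder
    runtimeObj = createObject("component", "CFIDE.adminapi.runtime");
    runtimeObj.clearTrustedCache();
</cfscript>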
Typically this sort of problem is mitigated when you use a cluster (though that's not the primary reason to use one). You drain all connections from one node, remove it from the cluster, update it, put it back into the cluster, remove another, and repeat until all nodes are updated.
You don't have to do them all serially; there are plenty of ways to do it if you have several nodes. But that's the general idea.
If you have control of the web server, then you can re-route public requests to another folder that contains a maintenance message only. Otherwise, you can use onRequestStart to redirect all requests to a maintenance.cfm file.
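A minimal sketch of that onRequestStart approach (the flag file name is illustrative: drop it at the start of the deployment, delete it when finished):

<cffunction name="onRequestStart">
    <cfargument name="targetPage" type="string" required="true" />
    <!--- While the deployment flag exists, send requests to the maintenance page. --->
    <cfif fileExists(expandPath("deploy.flag")) and not findNoCase("maintenance.cfm", arguments.targetPage)>
        <cflocation url="maintenance.cfm" addtoken="false" />
    </cfif>
</cffunction>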
This is just a thought; I don't know if it would work. But what if, at the beginning of your deployment process, you were to replace your Application.cfc with a new one that had this in the onRequestStart() method?
<cffunction name="onRequestStart">
    <cfset sleep(5000) />
</cffunction>
Then when the deployment is done, replace the cfc again with the original.
You might even be able to make it cleaner with a cfinclude.
<cffunction name="onRequestStart">
    <cfinclude template="sleep.cfm" />
</cffunction>
Then you could just replace the sleep.cfm file with an empty file when you don't want the sleep() to happen.

Sync services not actually syncing

I'm attempting to sync a SQL Server CE 3.5 database with a SQL Server 2008 database using MS Sync Services. I am using VS 2008. I created a Local Database Cache, connected it with SQL Server 2008 and picked the tables I wanted to sync. I selected SQL Server Tracking. It modified the database for change tracking and created a local copy (SDF) of the data.
I need two way syncing so I created a partial class for the sync agent and added code into the OnInitialized() to set the SyncDirection for the tables to Bidirectional. I've walked through with the debugger and this code runs.
Then I created another partial class for cache server sync provider and added an event handler into the OnInitialized() to hook into the ApplyChangeFailed event. This code also works OK - my code runs when there is a conflict.
Finally, I manually made some changes to the server data to test syncing. I use this code to fire off a sync:
var agent = new FSEMobileCacheSyncAgent();
var syncStats = agent.Synchronize();
syncStats seems to show the count of the # of changes I made on the server and shows that they were applied. However, when I open the local SDF file none of the changes are there.
I basically followed the instructions I found here:
http://msdn.microsoft.com/en-us/library/cc761546%28SQL.105%29.aspx
and here:
http://keithelder.net/blog/archive/2007/09/23/Sync-Services-for-SQL-Server-Compact-Edition-3.5-in-Visual.aspx
It seems like this should "just work" at this point, but the changes made on the server aren't in the local SDF file. I guess I'm missing something but I'm just not seeing it right now.
I thought this might be because I appeared to be using version 1 of Sync Services so I removed the references to Microsoft.Synchronization.* assemblies, installed the Sync framework 2.0 and added the new version of the assemblies to the project. That hasn't made any difference.
Ideas?
Edit: I wanted to enable tracing to see if I could track this down, but the only way to do that is through a WinForms app, since it requires entries in the app.config file (my original project was a class library). I created a WinForms project and recreated everything, and suddenly everything is working. So apparently this requires a WinForms project for some reason?
This isn't really how I planned on using this - I had hoped to kick off syncing through another non-.NET application and provide the UI there so the experience was a bit more seamless to the end user. If I can't do that, that's OK, but I'd really like to know if/how to make this work as a class library project instead.
You can load a dll's config file this way (in your class' constructor):
AppDomain.CurrentDomain.SetData("APP_CONFIG_FILE",
    System.IO.Path.Combine(Environment.CurrentDirectory, "<dll name>.config"));
System.Configuration.ConfigurationManager.RefreshSection("configuration");
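Putting the pieces together in the class library (the class name and config file name are placeholders):

public class SyncRunner
{
    public SyncRunner()
    {
        // Point the AppDomain at the library's own config before any Sync
        // Services objects are created, so the trace switches below get read.
        AppDomain.CurrentDomain.SetData("APP_CONFIG_FILE",
            System.IO.Path.Combine(Environment.CurrentDirectory, "MySyncLibrary.dll.config")); // placeholder
        System.Configuration.ConfigurationManager.RefreshSection("configuration");
    }

    public void Run()
    {
        var agent = new FSEMobileCacheSyncAgent();
        var syncStats = agent.Synchronize();
    }
}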
The scenario you describe should work. I have a similar app that uses the sync inside a class, and that's OK. You do need to be able to set the trace properties, most easily through an app.config, but that's not your problem... No errors reported by the Synchronize method?
No conflict errors raised?
If you are still having difficulty, you can add the following bits to your application's config file, in the <system.diagnostics> section:
<switches>
  <add name="DefaultSwitch" value="Information" />
  <!-- Sync Tracer Setting 0-off, 1-error, 2-warn, 3-info, 4-verbose. -->
  <add name="SyncTracer" value="4" />
</switches>
<trace autoflush="true">
  <listeners>
    <add name="TestSyncListener" type="System.Diagnostics.TextWriterTraceListener" initializeData="c:\SyncTraceFile.txt" />
  </listeners>
</trace>
Then have a look in the c:\SyncTraceFile.txt for any issues.