In our Sitecore project we have set a manual policy for indexing. We need to copy the indexes after the build to other servers. I know how we can copy the files from one server to another. So my question is:
How can I set a tool to run when the index rebuild is finished? Is this even possible?
We don't want to run the tool manually after each rebuild.
You can call code when certain events have started or finished.
In your case, one option is to hook into the indexing:end event handler and start an xcopy command from there, for instance, or call your tool programmatically.
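A minimal sketch of what that could look like. The indexing:end event name is real Sitecore, but the handler class, assembly name, and paths below are placeholders you would replace with your own:

    <!-- In the <events> section of your Sitecore configuration -->
    <event name="indexing:end">
      <handler type="MyProject.IndexCopyHandler, MyProject" method="OnIndexingEnd" />
    </event>

    // C# handler, assuming the index lives on the local file system
    using System;
    using System.Diagnostics;

    namespace MyProject
    {
        public class IndexCopyHandler
        {
            public void OnIndexingEnd(object sender, EventArgs args)
            {
                // Copy the rebuilt index files to the other server;
                // both paths are placeholders.
                var copy = new ProcessStartInfo
                {
                    FileName = "xcopy",
                    Arguments = @"""C:\Data\indexes"" ""\\otherserver\share\indexes"" /E /Y /I",
                    UseShellExecute = false,
                    CreateNoWindow = true
                };
                Process.Start(copy).WaitForExit();
            }
        }
    }

Depending on your Sitecore version, check whether the event fires per index or per rebuild (and whether you also need indexing:end:remote) before relying on this.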
Is there a reason why you can't keep the indexes up-to-date on the servers themselves?
We are using a Build Pipeline in Azure DevOps to create a Deployment Artifact. Typical steps in such a pipeline are:
Build Solution / Project
Copy DLL output into $(Build.ArtifactStagingDirectory)
Publish Artifact from $(Build.ArtifactStagingDirectory)
I just wonder whether I can rely on Build.ArtifactStagingDirectory being empty at the start of each build, or whether I should clean the folder as a first step to be sure.
In my experience the folder has always been empty, but I am not sure I can rely on that. Is that something specific to the Azure-hosted agents? With custom build agents, do I have to clean up this folder manually? Could old files from the last build remain there? I did not find this information in the documentation.
Thanks.
I think the main idea of the $(Build.ArtifactStagingDirectory) variable is to be a clean area where you can stage the code you're pushing from your repo. As far as I know, the documentation never spells out in one place that this folder is empty at every new build, but there are a few clues:
Microsoft's Build Variables documentation states that Build.StagingDirectory (the same folder as Build.ArtifactStagingDirectory) is always purged before each new build, so you have a fresh start every build.
The same documentation explicitly calls out a few folders and files that are not cleaned on a new build, such as the Build.BinariesDirectory variable.
I've run a few builds and releases pointing at my Web App on Azure, and I have never seen an unwanted file or folder that was not related to my build pipeline.
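If you are on a custom agent and want to be defensive, you can make the clean-up explicit as a first step. A minimal sketch using the built-in DeleteFiles task (the step is illustrative, and should be redundant on hosted agents):

    steps:
    - task: DeleteFiles@1
      displayName: Clean artifact staging directory
      inputs:
        SourceFolder: $(Build.ArtifactStagingDirectory)
        Contents: '**'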
I hope that helps.
I'm looking at the access control example: https://github.com/strongloop/loopback-example-access-control
It says we need to put the sample-models.js file under the server/boot folder. That means the creation process will run again every time I start the application, and of course I get errors on the second run.
Should I add my own mechanism to skip it once it has run, or is there built-in functionality in LoopBack for this?
Boot scripts are for setting up the application, and they run once per application start.
So if a boot script initializes the database, or does any other initialization whose results are persisted, it needs to check first whether that initialization has already been done.
For example, to initialize roles in the database, check whether the desired roles already exist, and create them only if they don't, as in the sketch below.
There is no other built-in functionality in LoopBack for this.
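A minimal sketch of such a boot script (the role name and logging are illustrative); findOrCreate only inserts the role if it does not exist yet, which makes the script safe to run on every start:

    // server/boot/create-roles.js
    module.exports = function (app) {
      var Role = app.models.Role;

      Role.findOrCreate(
        { where: { name: 'admin' } }, // filter to look up an existing role
        { name: 'admin' },            // data to use if it has to be created
        function (err, role, created) {
          if (err) throw err;
          console.log(created ? 'created role:' : 'role already existed:', role.name);
        }
      );
    };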
Hopefully I can find someone here who has experience with Hudson and its features.
I have installed Hudson, and that did not reveal any problems. But now I want to create a new job for a project that I'm developing in C/C++.
In addition, I am working with Subversion (SVN), and this is where I run into the first error: Hudson does not find my SVN repository. It says that I need authentication. As far as I have learned, I can authenticate in Hudson, but that does not work.
Maybe one of you knows how to create such a project.
These are the things the Hudson job should do:
Hudson deletes my project on my (local) computer.
Then Hudson accesses my SVN and checks out the project from there.
Hudson now compiles the whole thing. (Ideally with the C/C++ compiler from Visual Studio 2008.) The compiler then creates a *.exe file.
Now Hudson starts the project based on that *.exe file and runs the program.
Last but not least, whether there was an error or everything went fine, Hudson informs the people working on the project via email.
That would be everything I am hoping to get from Hudson; otherwise the whole thing is not much use to me. I know that I can do all of this via a batch file, but that's not my goal. I want Hudson to automate it, so that I can start my builds/tests daily at midnight.
Do you think my requirements are too high for Hudson?
I would be very grateful for your help, as I have been stuck for days.
Here is a "basic" Hudson job
Create a new free-style software project job.
Configure that job.
(Optional) Configure triggers, such as "timer", "SCM polling", or others.
(Optional) Under the Source Code Management section, select your SCM source and configure your repositories and local workspace.
Under Build section, select Add build step and select:
Execute Shell if on *nix
OR
Execute Windows Batch Command if on Windows
OR
Pick whatever build-step plugin you are using.
(If using either of the "execute" build steps) Write your build/make/compile command as you would from the command line (see the sketch just after this list).
(If using another plugin build step) Configure the plugin options according to your requirements.
(Optional) Archive the artifacts of the build with Archive the artifacts under Post-build Actions
(Optional) Execute other post-build actions
(Optional) Send out an email
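As an illustration of the build step for your situation, an Execute Windows Batch Command body might look something like this. This is a sketch: the solution name is a placeholder, and VS90COMNTOOLS assumes a default Visual Studio 2008 installation:

    REM Hudson runs this from the job's workspace (%WORKSPACE%)
    cd /d "%WORKSPACE%"

    REM Build the Visual Studio 2008 solution from the command line
    "%VS90COMNTOOLS%..\IDE\devenv.com" MySolution.sln /build Release

    REM A non-zero exit code makes Hudson mark the build as failed
    if errorlevel 1 exit /b 1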
Now to address your specific scenario. First things first: your question is too broad and may get locked. Don't get discouraged if that happens; create a separate question for each item individually. I cannot cover all these items in detail, but I will give you an overview.
The SCM part
Based on your previous question, No Credentials to try in Hudson, I am now guessing that you are not providing Hudson with an HTTP URL to your SVN server, but trying to give it your local workspace location... Please do the command-line check that I asked for in that question.
You need to provide it with a proper HTTP server URL. Hudson will check out the project from the SVN URL you provided, into what is called a Workspace. The location of the Workspace can differ based on your Hudson configuration, but it is a folder inside the Hudson installation that is dedicated to the job. It can be referenced from within the job through the %WORKSPACE% environment variable.
There are ways to use a different workspace location, but that is outside the scope of this overview. The whole SCM part is also optional; you could rely on the existing file system, but that is not a good approach and, again, out of scope for this overview.
The Build step
After Hudson has checked out/updated the Workspace from your SVN, the build step comes next. Hudson can do Execute Windows Batch Command by default. It can also Invoke Ant by default. (It can also do Maven, but that is not applicable to your situation.)
To do other types of builds, you need an appropriate build-step plugin. In your particular case, the MSBuild plugin is probably what you want. I've never used MSBuild, so I cannot give you details. Again, if you have a specific question on how to use the MSBuild plugin, you should probably make a separate question with specific issues.
So, using either Execute Windows Batch Command or MSBuild plugin, configure your building step.
Running the exe???
This is very vague. You want to start the .exe, and then what? Will it quit, and do you need an exit code? Do you want to see it on the screen? Again, this is very broad and deserves a separate question (or read existing questions). If you just want to make a call to the .exe, you can configure a second Execute Windows Batch Command step and type there: call path\to\yourfile.exe. But most likely you will not see that on screen. Read my answer here, Open Excel on Jenkins CI, for details on launching an .exe from Hudson/Jenkins so that it is visible on screen.
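For the simple run-it-and-check-the-exit-code case, a second Execute Windows Batch Command step could look like this sketch (the path is a placeholder):

    REM Run the freshly built program from the workspace
    call "%WORKSPACE%\Release\yourfile.exe"

    REM Fail the build if the program reported an error via its exit code
    if errorlevel 1 (
        echo Program failed with error level %ERRORLEVEL%
        exit /b 1
    )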
Email
If you want a simple email, Hudson Post-Build actions has a way to send an email. For better customization options, you would want Email-Ext plugin. Once again, if you need details on how to use the email-ext plugin, create a new question (after searching existing questions first), as this is too much to cover in one question.
Conclusion
Your requirements are not too high, but Hudson is not a magic tool that will do the work for you. You still need to configure every step of it. And unless you have a Maven-based project (which integrates very well with Hudson), a lot of actions will need to be done through Execute Windows Batch Command steps and scripting of your own.
In DDE, if I import a Jar file into an NSF, either using the new Jar design element or via WEB-INF/lib, then as soon as I save an XPage the workspace goes into a constant rebuild: it rebuilds the workspace, stops, rebuilds, stops, and so on.
It will only stop for good if I delete the Jar design element, remove it from the build path, or turn Build Automatically off.
I've tried this with a selection of different Jars on a local database with no network connection and on a server copy, all result in the same constant rebuild.
Referencing an external Jar works fine, but I'd prefer to keep it in the NSF.
I am using DDE 9.0.
I'm guessing it's somehow related to this issue, which describes how Jars in NSFs have to be detached to compile. It's as if this detachment causes an update that makes DDE think it has to rebuild again:
https://stackoverflow.com/search?q=xpages+jar+build
What works for me:
switch off automatic build
import Jar
add Jar to build path
link NSF to onDisk project
Set DDE to monitor changes automatically (in preferences)
switch back on automatic build
Then, when you need to replace the Jar with a newer version, just copy it into the OnDisk project. You need to restart the HTTP preview after replacing the Jar.
Having spent a couple of hours coding an event gateway solution, I discovered that event gateways are not supported by CF Standard Edition. Buggerit! So back to the drawing board.
I can see how I can check the folder's dateLastModified attribute using cfdirectory, so I can run a scheduled task to see when a new file has been uploaded. But what's the best way of storing/comparing the file list so as to get a list of just the files added since the last check?
General hints/links appreciated
Assuming that, for whatever reason, you can't use a gateway, the simplest solution that springs to mind is to move files you've processed to a separate directory. Then your scheduled task only has to deal with the files left in the FTP directory itself.
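Something along these lines; a sketch only, where ftpDir and processedDir are hypothetical path variables you would define yourself:

    <!--- List whatever is currently sitting in the FTP directory --->
    <cfdirectory action="list" directory="#ftpDir#" name="qFiles">

    <cfloop query="qFiles">
        <cfif qFiles.type EQ "File">
            <!--- ... process the file here ... --->
            <!--- Move it aside so the next run only sees new uploads --->
            <cffile action="move"
                    source="#ftpDir#\#qFiles.name#"
                    destination="#processedDir#">
        </cfif>
    </cfloop>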
they are not supported by CF standard edition

Are you still using CF7? Event gateways have been supported by CF Standard Edition since CF8.
As @Henry pointed out, you can use an Event Gateway.
If you decide not to use that approach, I'd suggest a ColdFusion scheduled task. The most foolproof algorithm for that task is to store the results of the last <cfdirectory/> call, either in a persistent scope (application or server) or written out to a database or a file (e.g. as WDDX). The reason to hold on to all this information, rather than just a timestamp, is to handle situations where newly added or changed files do not take on the correct timestamp for whatever reason (a system clock that is off comes to mind).
If you use a database to capture the data, you could use a MINUS query in Oracle or an EXCEPT query in SQL Server to determine what's new. Otherwise you'll need to perform some looping in ColdFusion over the old and new queries to generate the list of new files, along the lines of the sketch below.
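A rough sketch of that looping approach, keeping the previous snapshot in the application scope (variable names are illustrative):

    <cfdirectory action="list" directory="#ftpDir#" name="qCurrent">

    <cflock scope="application" type="exclusive" timeout="10">
        <!--- First run: start with an empty snapshot --->
        <cfif NOT StructKeyExists(application, "knownFiles")>
            <cfset application.knownFiles = StructNew()>
        </cfif>

        <cfset newFiles = ArrayNew(1)>
        <cfloop query="qCurrent">
            <cfif qCurrent.type EQ "File"
                  AND NOT StructKeyExists(application.knownFiles, qCurrent.name)>
                <cfset ArrayAppend(newFiles, qCurrent.name)>
                <cfset application.knownFiles[qCurrent.name] = qCurrent.dateLastModified>
            </cfif>
        </cfloop>
    </cflock>

    <!--- newFiles now holds just the files added since the last check --->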