Siebel repository migration

I am very new to Siebel and I want to perform a repository migration from one environment to another.
The command I am using is something like this on the target server:
./srvrupgwiz /m master_Test2Prod.ucf
So my question is: what happens if the repository migration fails in the middle and is unable to continue?
Will the target environment become corrupted? Is there a way to recover?
I am thinking there must be a way to take a backup of the current repository on the target environment and somehow be able to restore it.
If this is true, then how do I do that?
Thanks

By default, the Siebel Repository you are replacing in the target environment will be renamed to "SS Temp Siebel Repository". You are prompted to supply the name for the newly imported repository (which will default to "Siebel Repository"). When a new repository row is being imported, its ROW_ID value is appended to the end of the name you provided. Once it is successfully committed, that suffixed value is removed. Therefore you can always tell when a repository is only partially imported. If something fails, it's perfectly safe to delete the partial one (or leave it there; the next attempt will result in an entirely new one with yet another ROW_ID value suffixed to the end). You can recover the old one simply by renaming it. You can see the exact steps followed by the Database Configuration utility's Migrate Repository process by looking in the UCF files that drive it (e.g. master_dev2prod.ucf and driver_dev2prod.ucf).

In all fairness, the Siebel version and database system have little influence on the type of solution most will put in place: reversal of the database changes.
Now, Oracle, Microsoft and IBM (the only supported brands) each have their own approaches, and I'm most familiar with Oracle's. Many Oracle implementations support Flashback. This is a rolling log of all changes which allows one to 'travel back in time' by undoing the statements, deletes included. The maximum size of this log is something to pay attention to, as the Siebel repository is quite a large volume of data to import. I'm sure that the Microsoft and IBM systems have similar technologies.
In any case, the old-fashioned export to disk works on all systems.

You can back up the existing repository by going to the Repository object type in the Object Explorer in Siebel Tools and renaming the existing repository.
In case the repository import fails, you just need to rename the backed-up repository back to "Siebel Repository".
Also, use /l log_file_name in the command to capture the logs of the import process.

Your command is fine for a repository migration using an answer file. However, you can split the repository migration into individual commands rather than using the unattended upgrade wizard. One of these commands is (Windows):
%SIEBSRVR_HOME%\bin\repimexp.exe
You can use this executable to import or export repositories. It is often used as a means to back up existing repositories, which tends to be referred to as "exprep". Rather than spending additional time during a release doing a full export from the source and then an import into the target, the export from the source can be done in advance, writing out a .dat file which represents the entire repository. This file can then be read in as part of a repository import, which can save time.
In order to perform an export/backup of your current repository, you can use a command like the one below (Windows):
%SIEBSRVR_HOME%\bin\repimexp.exe /A E /U SADMIN /P PASSWORD /C ENTERPRISE_DATASOURCENAME_DSN /D SIEBEL /R "Siebel Repository" /F c:\my_export.dat /V Y /L c:\my_exprep.log
Once you have the exported .dat file, you can run a repository import referring to this file, rather than a database with your repository inside. You do this the same way using an answer file like in your original command, but the answer file will reference the .dat file. You can step through the Siebel wizard in order to write out this answer file if you are not confident editing it manually.

Related

SAS - How to configure SAS to use resources from a local disk other than local disk C:

Basically, when I sort or join tables in SAS, SAS uses space on local disk C: to process the code, but since I only have 100 GB left on local disk C:, it results in an error whenever SAS runs out of resources.
My question is how to configure or change the settings in SAS to use local disk E: instead, since I have more space there.
I have already looked through the forum but found no similar question.
Please help.
Assuming you are talking about desktop SAS, or a server that you administer, you can control where the work and utility folders are stored in a few ways.
The best way is to use the -work and -utilloc options in your sasv9.cfg file. That file can be in a few places, but often the SAS shortcut you open SAS with specifies it with the -CONFIG option. You can also set the options in that shortcut with the -WORK or -UTILLOC command line options. The article How SAS Finds and Processes Configuration Files can help you decide which sasv9.cfg you want to modify; if you are using a personal copy on your own laptop, you may change the one in the Program Files folder, but if not, or if you don't have administrative rights, there are other places you can put a config file that will override that one.
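For example, a minimal sketch of the relevant config file lines, assuming a hypothetical folder E:\sastemp that already exists on the larger drive:
-WORK "E:\sastemp\work"
-UTILLOC "E:\sastemp\util"
The same two options can equally be appended to the command line in the SAS shortcut.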
A paper that discusses a few of these options is one by Peter Eberhardt and Mengting Wang.
One way is to set up a library named USER for projects that will be time-intensive; this way you get it to be dynamic as needed. When you have a library called USER, that becomes the default workspace instead of WORK. But you need to clean up that library manually; it won't delete data sets automatically when you're done with them.
libname user '/folders/myfolders/demo';
As @Tom indicates, you can also set an option to use a library that already exists, if desired.
options user = myLib;
An advantage of this method over the config file method is that it only applies to the projects where it's needed, rather than to your whole system.
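A minimal sketch of that approach, assuming a hypothetical folder E:\sastemp\demo on the larger drive:
libname user 'E:\sastemp\demo';           /* hypothetical folder on the larger E: drive */
data class;                               /* one-level names now go to USER instead of WORK */
    set sashelp.class;
run;
proc datasets library=user kill nolist;   /* manual cleanup when you are finished */
quit;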

GoldenFiles testing and TFS server workspaces

Our product (C++ windows application, Google Test as testing framework, VS2015 as IDE) has a number of file-based interfaces to external products, i.e., we generate a file which is then imported into an external product. For testing these interfaces, we have chosen a golden file approach:
Invoke the code that produces an interface file and save the resulting file for later reference (this is our golden file - here we assume that the current state of the interface code is correct).
Commit the golden file to the TFS repository.
Make changes to the interface code.
Invoke the code, compare the resulting file with the corresponding golden file.
If the files are equal, the test passes (the change was a refactoring). Otherwise,
Enable the refresh mode, which makes sure that the golden file is overwritten by the file resulting from invoking the interface code.
Invoke the interface code (thus refreshing the golden file).
Investigate the outgoing changes in VS's team explorer. If the changes are as desired by our code changes from step 3, commit code changes and golden file. Otherwise, go back to step 3.
This approach works great for us, but it has one drawback: VS only recognizes that the golden files have changed (and thus allows us to investigate the changes) if we use a local workspace. If we use a server workspace, programmatically remove the read-only flag from the golden files and refresh them as described above, VS still does not recognize that the files have changed.
So my question is: Is there any way to make our golden file testing approach work with server workspaces, e.g. by telling VS that some files have changed?
I can think of two ways.
The first approach is to run tf checkout instead of removing the read-only attribute.
This has an intrinsic risk, as one may inadvertently check in the generated file; this should be prevented by restricting check-in permissions on those files. Also, you may need to run tf undo to clean up the local state.
Another approach would be to map the golden files to a different directory and use a local diff tool instead of relying on Visual Studio's built-in tool. This is less risky than the other solution, but may be cumbersome. Do not forget that you can "clone" a workspace (e.g. Import Visual Studio TFS workspaces).

Copy files from local PC to SAS server in Enterprise Guide

I need to extract a particular sheet from a .xls file on my local machine and get it as a .sas7bdat file on the SAS server on which I work (or the other way round, that is, import it first and then convert it).
The problem is that although this can be done using the Import Wizard, I need to do this using the 'Copy Files Add-in' because it needs to be built as a part of an automated process.
When I tried doing this using the Copy Files add-in, it DID copy the .xls file onto the server according to the log, but the .xls file didn't actually show up in the library and could not be referenced either (or maybe I'm just referencing it wrongly).
This has led me to believe that I need to convert it to a .sas7bdat and then import it.
Is there a way to get past this? Please bear in mind that I am talking about an automated process, so the wizard is useless for me (or is it? I'm not sure)
NOTE : I am extremely sorry that I cannot post the log and screenshots here, because I work as an offshore resource for a very large bank and cannot post anything here. I have, however, tried to make my problem as clear as possible. If any further clarifications are needed, please let me know!
I assume you've read There and Back Again which covers this in some detail.
Ultimately, all you're doing is copying the Excel file as a file onto the SAS server. You're not importing it into a SAS dataset. You would import it by placing the file (either the remote file after the Copy Files add-in, or the local file) in the workflow as an import step.
For example, if you chose /usr/lib/sasdata/myexcel.xls as the remote copy destination, you then need to include that file in your workflow as an import step (you can drag/drop the file and it will automatically create that step for you, with some wizardry).
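If you want the conversion as a code step rather than the wizard-generated import, a minimal sketch might look like the following, assuming the hypothetical remote path /usr/lib/sasdata/myexcel.xls from above, a sheet named Sheet1, and a permanent library MYLIB already assigned on the server:
proc import datafile="/usr/lib/sasdata/myexcel.xls"  /* workbook copied up by the Copy Files add-in */
            out=mylib.myexcel                        /* written as myexcel.sas7bdat in MYLIB */
            dbms=xls                                 /* use dbms=xlsx for .xlsx workbooks */
            replace;
    sheet="Sheet1";                                  /* hypothetical sheet name */
run;
The Copy Files step only moves the workbook; a step like this (or the import step the wizard generates) is what actually creates the .sas7bdat.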

Change stored macro SAS

In SAS, using the SASMSTORE option I can specify a place where the SASMACR catalog will exist, and some macros will reside in this catalog.
At some point I may need to change a macro, and this may happen while the macro, and therefore the catalog, is in use by another user. But then it will be locked and unavailable for modification.
How can I avoid such a situation?
If you're using a SAS Macro catalog as a public catalog that is shared among colleagues, a few options exist.
First, use SVN or a similar source control option so that you and your colleagues each have a local copy of the macro catalog. This is my preferred option. I'd do this, and also probably not use stored compiled macros - I'd just set it up as autocall macros, personally - because that makes it easy to resolve conflicts (as you have separate files for each macro). With stored compiled macro catalogs (SCMs) you won't be able to resolve conflicts, so you'll have to make sure everyone is very well behaved about always downloading the newest copy before making any changes, and discussing any changes so you don't end up with two competing changes made at about the same time. If SCMs are important for your particular use case, you could version control the macro sources that create the SCM and rebuild the SCM yourself every time you refresh your local copy of the sources.
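For reference, a minimal sketch of the autocall setup, assuming a hypothetical folder C:\mymacros holding your local working copy of the macro sources (one .sas file per macro):
options mautosource
        sasautos=('C:\mymacros' sasautos);  /* search the local checkout first, then the default autocall paths */
%report_totals(data=sashelp.class)          /* hypothetical macro; the first call compiles C:\mymacros\report_totals.sas */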
Second, you could and should separate development from production here. Even if you have a shared library located on a shared network folder, you should have a development copy as well that is explicitly not locked by anyone except when developing a new macro for it (or updating a currently used macro). Then make your changes there, and on a consistent schedule push them out once they've been tested and verified (preferably in a test environment, so you have the classic three: dev, test, and prod environments). Something like this:
Changes in Dev are pushed to Test on Wednesdays. Anyone who's got something ready to go by Wednesday 3pm puts it in a folder (the macro source code, that is), and it's compiled into the test SCM automatically.
Test is then verified Thursday and Friday. Anything that is verified in Test by 3pm Friday is pushed to the Prod source code folder at that time, paying attention to any potential conflicts with other new code in Test (nothing's pushed to Prod if something currently in Test but not verified could conflict with it).
Production then is run at 3pm Friday. Everyone has to be out of the SCM by then.
I suggest not using Friday for prod if you have something that runs over the weekend, of course, as it risks you having to fix something over the weekend.
Create two folders, e.g. maclib1 and maclib2, and a dataset which stores the current library number.
When you want to rebuild your library, query the current number, increment (or reset to 1 if it's already 2), assign your macro library path to the corresponding folder, compile your macros, and then update the dataset with the new library number.
When it comes to assigning your library, query the current library number from the dataset, and assign the library path accordingly.
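A rough sketch of that scheme, assuming hypothetical folders C:\maclib1 and C:\maclib2 and a small control data set ctrl.libnum containing a single numeric variable libnum:
/* user sessions: look up the live folder and point SASMSTORE at it */
data _null_;
    set ctrl.libnum;                        /* hypothetical control data set */
    call symputx('curlib', libnum);
run;
libname maclib "C:\maclib&curlib" access=readonly;
options mstored sasmstore=maclib;

/* rebuild (admin only): compile into the other folder, then flip the number */
%let newlib = %eval(3 - &curlib);           /* 1 -> 2, 2 -> 1 */
libname newmac "C:\maclib&newlib";
options mstored sasmstore=newmac;
/* ...recompile the macro sources here, each defined with %macro name / store;... */
data ctrl.libnum;
    libnum = &newlib;
run;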

Assign Global Variable/Argument for Any Build to Use

I have several (15 or so) builds which all reference the same string of text in their respective build process templates. Every 90 days that text expires and needs to be updated in each of the templates. Is there a way to create a central variable or argument that all of the builds can reference?
One solution would be to create an environment variable on your build machine, then reference the variable in all of your builds. When you need to update the value, you only have to set it in one place.
How to: Use Environment Variables in a Build
If you have more than one build machine then it could become too much of a maintenance issue.
Another solution would involve using MSBuild response files. You create a .rsp file that holds the property value, and the value is picked up and set by MSBuild via the command line.
You need to place it somewhere that all your builds can access, then customize your build process template to read from there (build definitions, as you know, do not have a mechanism to share data between definitions).
Some examples would be a file checked into TFS, a file in a known location (file share), web page, web service, etc.
You could even make a custom activity that knows how to read it and outputs the result as an OutArgument (e.g. a custom activity that reads the string from a hardcoded URL).