Calling a stored package from Informatica

I have a requirement to run an existing package after the data load into the package's source tables is complete. I know I can use a Stored Procedure transformation to call the package. However, the package has no input or output parameters defined, and I can't edit it either, as it is used by a lot of other processes.
Is there a way to call the package from a Stored Procedure transformation when it has no input or output parameters defined? Please advise.

Use the Post SQL option. It's available in your session: go to the Mapping tab and check the properties of the applicable target transformation.
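For example, assuming the package lives in an Oracle database and exposes a parameterless procedure (the package and procedure names below are hypothetical), the Post SQL property of the target could contain a single statement:
CALL my_pkg.run_after_load()
A one-line CALL tends to be safer here than an anonymous BEGIN ... END block, because Informatica typically treats the semicolon in Pre/Post SQL as a statement separator.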

Related

SAS - How to configure SAS to use resources from a local disk other than local disk C:

Basically, when I sort or join tables in SAS, SAS uses space on local disk C: to process the code, but since I only have 100GB left on C:, this results in an error whenever SAS runs out of space.
My question is how to configure SAS to use local disk E: instead, since I have more space there.
I have already looked through the forum but found no similar question.
Please help.
Assuming you are talking about desktop SAS, or a server that you administer, you can control where the work and utility folders are stored in a few ways.
The best way is to use the -WORK and -UTILLOC options in your sasv9.cfg file. That file can live in a few places, but often the SAS shortcut you open SAS with specifies it with the -CONFIG option. You can also set the options in that shortcut directly with the -WORK or -UTILLOC command line options. The article How SAS Finds and Processes Configuration Files can help you decide which sasv9.cfg to modify; if you are using a personal copy on your own laptop, you may change the one in the Program Files folder, but if not, or if you don't have administrative rights, there are other places you can put a config file that will override that one.
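For example, the relevant lines in sasv9.cfg (or appended to the shortcut's command line) could look like the following; the E: paths are hypothetical, and the directories must already exist:
-WORK "E:\saswork"
-UTILLOC "E:\sasutil"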
A paper that discusses a few of these options is one by Peter Eberhardt and Mengting Wang.
One way is to set up a library named user for projects that will be space intensive; this keeps the redirection dynamic, used only where needed. When you have a library called user, it becomes the default workspace instead of work. But you need to clean up that library manually; it won't delete data sets automatically when you're done with it.
libname user '/folders/myfolders/demo';
As @Tom indicates, you can also set an option to use a library that already exists, if desired.
options user = myLib;
An advantage of this method over the config-file method is that it applies only to the projects where it's needed, rather than to your whole system.
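Since the user library is not cleaned up automatically, you can clear it yourself when a project finishes; a minimal sketch:
proc datasets library=user kill nolist;
quit;
The KILL option deletes every member of the library, so run it only when you are truly done with the data.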

Siebel repository migration

I'm very new to Siebel and I want to perform a repository migration from one environment to another.
The command I am using is something like this on the target server:
./srvrupgwiz /m master_Test2Prod.ucf
So my question is: what happens if the repository migration fails in the middle and is unable to continue?
Will the target environment become corrupted? Is there a way to recover?
I am thinking there must be a way to take a backup of the current repository on the target environment and somehow restore it.
If this is true, then how do I do that?
Thanks
By default, the Siebel Repository you are replacing in the target environment will be renamed to "SS Temp Siebel Repository". You are prompted to supply the name for the newly imported repository (which defaults to "Siebel Repository"). When a new repository row is being imported, its ROW_ID value is appended to the end of the name you provided. Once the import is successfully committed, that suffix is removed, so you can always tell when a repository is only partially imported. If something fails, it's perfectly safe to delete the partial one (or leave it there; the next attempt will create an entirely new one with yet another ROW_ID value suffixed to the end). You can recover the old repository simply by renaming it. You can see the exact steps followed by the Database Configuration utility's Migrate Repository process by looking in the UCF files that drive it (e.g. master_dev2prod.ucf and driver_dev2prod.ucf).
In all fairness, the Siebel version and database system have little influence on the type of solution most people will put in place: reversal of the database changes.
Now, Oracle, Microsoft and IBM (the only supported brands) each have their own approaches, and I'm most familiar with Oracle's. Many Oracle implementations support Flashback, a rolling log of all changes which allows you to 'travel back in time' by undoing statements; this includes deletes as well. Pay attention to the maximum size of this log, as the Siebel repository is quite a large volume of data to import. I'm sure the Microsoft and IBM systems have similar technologies.
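For illustration, rewinding an Oracle database with Flashback looks roughly like this; it assumes flashback logging was enabled beforehand, is run from SQL*Plus as SYSDBA, and the two-hour window is just a placeholder:
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO TIMESTAMP SYSTIMESTAMP - INTERVAL '2' HOUR;
ALTER DATABASE OPEN RESETLOGS;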
In any case, the old-fashioned export to disk works on all systems.
You can back up the existing repository by going to the Repository object type in the Object Explorer and renaming the existing repository in Siebel Tools.
If the repository import fails, you just need to rename the backed-up repository back to Siebel Repository.
Also use /l log_file_name in the command to capture the logs of the import process.
Your command is fine for a repository migration using an answer file. However, you can split the repository migration into individual commands rather than using the unattended upgrade wizard. One of these commands is (Windows):
%SIEBSRVR_HOME%\bin\repimexp.exe
You can use this executable to import or export repositories. It is often used as a means of backing up existing repositories, a step that tends to be referred to as "exprep". Rather than spending additional time during a release doing a full export from source and then an import into target, the export from source can be done in advance, writing out a .dat file that represents the entire repository. This file can then be read in as part of the repository import, which can save time.
To perform an export/backup of your current repository, you can use a command like the one below (Windows):
%SIEBSRVR_HOME%\bin\repimexp.exe /A E /U SADMIN /P PASSWORD /C ENTERPRISE_DATASOURCENAME_DSN /D SIEBEL /R "Siebel Repository" /F c:\my_export.dat /V Y /L c:\my_exprep.log
Once you have the exported .dat file, you can run a repository import that refers to this file rather than to a database containing your repository. You do this the same way, using an answer file like in your original command, but the answer file will reference the .dat file. You can step through the Siebel wizard to write out this answer file if you are not confident editing it manually.
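If you prefer to run the import directly with repimexp rather than through the wizard, the command mirrors the export above; a sketch reusing the same connection details (verify the flags against your version's documentation):
%SIEBSRVR_HOME%\bin\repimexp.exe /A I /U SADMIN /P PASSWORD /C ENTERPRISE_DATASOURCENAME_DSN /D SIEBEL /R "Siebel Repository" /F c:\my_export.dat /V Y /L c:\my_imprep.log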

SAS workspace on SAS EG

We have a default SAS workspace of x TB. We also have an alternate 10x TB workspace on the same server, at a different folder location.
Can anyone please help me with the syntax that can be used in SAS EG to point to the alternate workspace instead of the default one?
The SAS work directory can be changed for individuals by creating a $HOME/sasv9.cfg file and placing one line in it:
-WORK {full path to the SAS work directory}
If you are running on UNIX, you can change the work directory at invocation:
nohup sas -work /myworkdirectory mypgm.sas &
Are you referring to the SAS WORK library, which is the location where SAS stores temporary data sets?
If so, then it depends. Are you using EG in a client/server setup? In that setup you will have to get your SAS admin to make changes on the server, or in the SAS Metadata, to point the WORK library for all Workspace Servers at the other location that has more available space.
Would you not define SAS libraries out of these workspaces? i.e.
libname mydata '/folders/myfolders/';
This will then assign the library to your active SAS session.
Use this as precode to any manipulation you're doing.
If you have Management Console, or by using PROC METADATA, you can create permanent libraries.
You mentioned workspace, so I assume you need to control the WORK library.
Use the SAS system option:
-work library-specification
In the SAS documentation it states: specifies the libref or physical name of the storage space where all data sets with one-level names are stored. This library must exist. Note that WORK= can only be set at SAS invocation (in the config file or on the command line), not with an OPTIONS statement mid-session; to redirect one-level-named data sets during a session, use the USER= system option instead.
Make sure the file space is "close" to where the processing is done, or file transfer will become a bottleneck.

Export MicroStrategy grid data in text format to an FTP server

Can anybody please let me know whether it is possible to export MicroStrategy grid data in text format to an FTP server (the required access will be provided)? If not directly, can we use some kind of Java coding / web services to achieve this? I don't want the full process, I just want to understand whether this can be achieved or not.
Thanks in advance!
You can retrieve report results (and even build a new report from scratch) via the SDK, and from there you can process the data to your liking, i.e. transform it and upload it to an FTP server.
Possibly easier would be to create a file subscription and store the file to a specific directory, where you automatically pick it up and deliver it to your FTP server.
There might be other solutions as well, but yes is the answer to the yes/no part of your question.

Assign Global Variable/Argument for Any Build to Use

I have several (15 or so) builds which all reference the same string of text in their respective build process templates. Every 90 days that text expires and needs to be updated in each of the templates. Is there a way to create a central variable or argument that all of the builds can reference?
One solution would be to create an environment variable on your build machine and reference it in all of your builds. When you need to update the value, you only have to set it in one place.
How to: Use Environment Variables in a Build
If you have more than one build machine, though, this could become a maintenance issue.
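For example (the variable name is hypothetical): after defining BUILD_SHARED_TOKEN on the build machine, any MSBuild project it runs can read the value like an ordinary property:
<Message Text="Shared token is $(BUILD_SHARED_TOKEN)" Importance="high" />
MSBuild exposes every environment variable as a property, so no extra wiring is needed; you may have to restart the build service for it to pick up a changed value.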
Another solution would involve using MSBuild response files. You create an .rsp file that holds the property value, and the value is picked up and passed to MSBuild via the command line.
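A minimal sketch, with a hypothetical property name. Contents of shared.rsp:
/p:SharedToken=NEW-VALUE-EVERY-90-DAYS
Each build then invokes MSBuild with the response file, and the value is available in the project as $(SharedToken):
msbuild MyBuild.proj @shared.rsp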
You need to place it somewhere all your builds can access, then customize your build process template to read from there (build definitions, as you know, do not have a mechanism for sharing data between definitions).
Some examples would be a file checked into TFS, a file in a known location (file share), web page, web service, etc.
You could even make a custom activity that knows how to read it and outputs the result as an OutArgument (e.g. a custom activity that reads the string from a hardcoded URL).