Informatica PowerExchange source

I am working in a development environment, developing in PowerCenter; my source is PowerExchange. I am going to migrate to another environment (UAT), where the source will change to more accurate data. Can I migrate the data maps?

It depends.
If you have Oracle as the source, there is no need to use the DTLURDMO utility. You can just change the ORACLEID in the dbmover.cfg file.
For example:
ORACLEID=(SCHEMA_PSEUDO_NAME,SID,SID,SID)
Where SCHEMA_PSEUDO_NAME is the instance name you use in the DTLUCBRG utility. This is also the name used in the extraction maps (after the d8 prefix).
If you have other sources, such as DB2, you need to use the DTLURDMO utility.

Because the source schema name is coded into the extraction map and the capture registration, you will need to run the DTLURDMO utility to replace the dev source schema with the source schema used in the target environment.
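As a sketch, DTLURDMO is driven by an input file of control statements along these lines; the statement names and exact syntax vary by PowerExchange version, so treat this as illustrative (schema names are placeholders) and check it against the PowerExchange Utilities Guide:

```text
USER DTLUSR;
PWD dtlpwd;
OUTPUT dtlurdmo.out;
DETAIL;
REPLACE;
XM_COPY;
SELECT SCHEMA=DEVSCHEMA;
MODIFY SCHEMA=UATSCHEMA;
```

The SELECT/MODIFY pair is what swaps the dev schema for the target one as the extraction maps are copied.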

Related

Convert an unknown database file from Windows software into a MySQL database

I have installed a software package on my system and it holds a lot of client data. The files inside the software's DB folder each have a different extension, one per party.
I want to convert these files into a MySQL database.
A sample file from the DB folder can be downloaded from here
I have tried to understand the Firebird service this software uses to connect to these database files.
I want to extract the database and import it into MySQL (phpMyAdmin)
The linked file seems to be a renamed Firebird database with structure version ODS 11.2 which corresponds to Firebird 2.5.x line.
For making a quick peep into the database you can use
IBSurgeon First Aid -- http://ib-aid.com
IB Expert (the Database Explorer feature) -- http://ibexpert.net
The free mode of FirstAID would let you peep into the data, but not extract it, and probably not even scroll all the tables. It would also most probably ignore all database structures that are not tables (UDF functions, procedures, views, auto-computed columns in tables); after all, it is just a low-level format parser, not an SQL engine.
IB Expert has a non-commercial Personal edition, but it probably does not include the Database Explorer; you may try a trial period of the full version instead. However, IB Expert's Database Explorer would probably also only show the basic structures of the database; maybe that would be enough.
Alternatively, you can install Firebird 2.5.8, either a standalone version or perhaps the embedded one (a set of DLLs used in place of the FB server process) if your application can use it, then use any database IDE suite to explore it. The ones most often mentioned for Firebird are IBExpert, FlameRobin, and Firebird Maestro. You would then be able to try different SQL queries, including stored procedures, views, and UDF functions, if any were registered in the database and actually used.
BTW IBExpert comes bundled with FB 2.5 Embedded, which one can use to open the database file.
After you figure out the format, you can either export the required data into some intermediate format like CSV (for example with http://fbutils.sourceforge.net/ ) or use your C++ application (though why would anyone develop a web application in C++) with libraries like IBPP or OLE DB, etc. Maybe it would be better to just keep the Firebird server and the original DB files, and query them from PHP or whatever you write the application in.
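If you go the CSV route, the intermediate file still has to be turned into MySQL statements. A minimal, self-contained sketch in Python (the table and column names are invented for illustration):

```python
import csv
import io

def csv_to_inserts(csv_text, table):
    """Turn CSV text (first row = column names) into MySQL INSERT statements."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    cols = ", ".join("`%s`" % c for c in header)
    stmts = []
    for row in reader:
        # naive quoting: double any embedded single quote
        vals = ", ".join("'%s'" % v.replace("'", "''") for v in row)
        stmts.append("INSERT INTO `%s` (%s) VALUES (%s);" % (table, cols, vals))
    return stmts

sample = "ID,NAME\n1,Acme\n2,O'Brien\n"
for stmt in csv_to_inserts(sample, "party"):
    print(stmt)
```

The naive quoting is fine for a one-off conversion you eyeball afterwards; for anything ongoing, use a proper MySQL client library with parameterized inserts instead.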

Siebel repository migration

Very new to siebel and I want to perform a repository migration from one environment to another.
The command I am using is something like this, on the target server:
./srvrupgwiz /m master_Test2Prod.ucf
So my question is: what happens if the repository migration fails in the middle and cannot continue?
Will the target environment become corrupted? Is there a way to recover?
I am thinking there must be a way to take a backup of the current repository on the target environment and somehow restore it.
If this is true, then how do I do that?
thanks
By default, the Siebel Repository you are replacing in the target environment will be renamed to "SS Temp Siebel Repository". You are prompted to supply the name for the newly imported repository (which will default to "Siebel Repository"). While a new repository row is being imported, its ROW_ID value is appended to the end of the name you provided; once it is successfully committed, that suffix is removed. Therefore you can always tell when a repository is only partially imported.
If something fails, it's perfectly safe to delete the partial one (or leave it there; the next attempt will result in an entirely new one with yet another ROW_ID value suffixed to the end). You can recover the old repository simply by renaming it. You can see the exact steps followed by the Database Configuration utility's Migrate Repository process by looking in the UCF files that drive it (e.g. master_dev2prod.ucf and driver_dev2prod.ucf).
In all fairness, the Siebel version and database system have little influence on the type of solution most sites put in place, which is reversal of the database changes.
Oracle, Microsoft, and IBM (the only supported brands) each have their own approaches, and I'm most familiar with Oracle's. Many Oracle implementations support Flashback: a rolling log of all changes that allows one to 'travel back in time' by undoing statements, including deletes. The maximum size of this log deserves attention, as the Siebel repository is quite a large volume of data to import. I'm sure Microsoft and IBM systems have similar technologies.
In any case the old fashioned export to disk works in all systems.
You can back up the existing repository by going to the Repository object type in the Object Explorer and renaming the existing repository in Siebel Tools.
If the repository import fails, you just need to rename the backed-up repository to "Siebel Repository".
Also, use /l log_file_name in the command to capture the logs of the import process.
Your command is fine for migrating a repository using an answer file. However, you can split the repository migration into individual commands rather than using the unattended upgrade wizard. One of these commands is (Windows):
%SIEBSRVR_HOME%\bin\repimexp.exe
You can use this executable to import or export repositories. It is often used as a means to backup existing repositories, which tends to be referred to as "exprep". Rather than spend additional time during a release doing full export from source then import into target, the export from source can be done in advance writing out to a .dat file which represents the entire repository. This file can then be read in as part of a repository import which can save time.
In order to perform an export/backup of your current repository, you can use a command like the one below (Windows):
%SIEBSRVR_HOME%\bin\repimexp.exe /A E /U SADMIN /P PASSWORD /C ENTERPRISE_DATASOURCENAME_DSN /D SIEBEL /R "Siebel Repository" /F c:\my_export.dat /V Y /L c:\my_exprep.log
Once you have the exported .dat file, you can run a repository import that refers to this file, rather than to a database containing your repository. You do this the same way, using an answer file as in your original command, but the answer file will reference the .dat file. You can step through the Siebel wizard to write out this answer file if you are not confident editing it manually.
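For completeness, the matching import from the .dat file mirrors the export command with the action flag switched to import (/A I). This is a sketch based on the export example above; verify the exact flags against your Siebel version's documentation:

```text
%SIEBSRVR_HOME%\bin\repimexp.exe /A I /U SADMIN /P PASSWORD /C ENTERPRISE_DATASOURCENAME_DSN /D SIEBEL /R "Siebel Repository" /F c:\my_export.dat /L c:\my_imprep.log
```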

SAS workspace in SAS EG

We have a default SAS work area of x TB. We also have an alternate 10x TB work area on the same server, at a different folder location.
Can anyone please help me with the syntax that can be used in SAS EG to point to the alternate work area instead of the default one?
The SAS work directory can be changed for individuals by creating a $HOME/sasv9.cfg file and placing one line in it:
-WORK {full path to the SAS work directory}
If you are running on Unix, you can also change the work directory at invocation: nohup sas -work /myworkdirectory mypgm.sas &
Are you referring to the SAS WORK library, which is where SAS stores temporary data sets?
If so, then it depends. Are you using EG in a client/server setup? In that setup you will have to get your SAS admin to make changes on the server, or in the SAS Metadata, so that the WORK library for all Workspace Servers points to the location with more available space.
Could you not define SAS libraries in these work areas instead?
i.e. libname mydata '/folders/myfolders/'
This will assign the library to your active SAS session.
Use this as precode to any manipulation you're doing.
If you have Management Console, or by using PROC METADATA, you can create permanent libraries.
You mentioned workspace, so I assume you need to control the WORK library.
Use the SAS system option
-work library-specification
Note that WORK= is valid only when SAS starts (on the command line or in a configuration file such as sasv9.cfg); it cannot be changed mid-session with an OPTIONS statement. In the SAS documentation it states: specifies the libref or physical name of the storage space where all data sets with one-level names are stored. This library must exist.
Make sure the file space is "close" to where the processing is done, or file transfer will be a bottleneck.
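If restarting SAS with a different -work path is not an option, a mid-session workaround is to redirect one-level data set names with the USER= system option. A minimal sketch, assuming the larger volume is mounted at /bigdisk/saswork (the path is an assumption):

```sas
/* send one-level (temporary-style) data set names to the bigger area */
libname bigwork '/bigdisk/saswork';
options user=bigwork;

data scratch;   /* SCRATCH is now written to BIGWORK, not WORK */
  x = 1;
run;
```

Note that utility files (e.g. PROC SORT temp space) still go to WORK, so this only helps with data sets you create yourself.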

How to generate a list of all metadata objects deployed in SAS Platform?

I have a production environment server using SAS Platform.
Is there a way to generate a report of a list of all metadata objects deployed in this production environment?
More accurately, is there an easy point-and-click way using one of the SAS Tools (e.g. SAS EG, SAS DI, SAS SMC)? If not I am open to the "right" way of doing it.
You can try the %MDSECDS macro, which ships with SAS Foundation. This macro provides a lot of the information you are looking for.
If you are looking to extract a list of all the objects that can be seen in the folder tree, this macro from the macrocore library will do it:
%mm_tree(outds=allmyobjects)
Note - if you have multiple repositories, you will need to run this for each (and set options metarepository=YOURREPO; each time).
A macro to get the list of repos is available here.
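Putting that together, a minimal sketch, assuming the macrocore macros are already compiled in your session and that your repository is named Foundation:

```sas
/* point at the repository, then list every object in the folder tree */
options metarepository=Foundation;
%mm_tree(outds=allmyobjects)

proc print data=allmyobjects(obs=20);
run;
```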

SAS Folder mapping

I have created a SAS folder say "/Public Development/Area Name/Project Name" under "Folders" tab of SAS Management console.
In SAS EG this folder shows under the "SAS Folders" option. I'm able to save EGP projects and stored processes in this folder, but not SAS code, logs, etc.
I believe it's just a folder at the metadata level, and only items registered in metadata can be saved here.
So what approach should I take to organize my other project items like code, jobs, macros, and reports?
The Enterprise Guide model includes storing your code as part of your EGP project. You put code modules in process flows, and the log and output are stored alongside them (somewhat as if you had run them in batch mode: log, output, and program are effectively grouped as one entity).
Your organization may have specific rules for how code/etc. is stored, such as storing it in a SVN repository or similar, so you should check with your manager or site SAS admin to get a more complete answer that is specific to your site.
I tend to keep metadata folders for storing metadata objects (stored processes, DI jobs, etc.), and I use the OS file system for storing code (.sas files), .log files, .egp projects, and so on. Generally I don't store code inside the EG project; instead, the project just links to code sitting in the OS file system. So basically, I store my code, logs, macros, format catalogs, output reports, etc. the same way as I did when I was using PC SAS.
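As a sketch of that pattern, the project node simply points at code on the file system, and the log can be written next to it, PC-SAS style (the paths are assumptions):

```sas
/* route the log to the project's log folder */
proc printto log='/projects/myproject/logs/build_report.log' new;
run;

/* run code kept on the OS file system, outside the EGP */
%include '/projects/myproject/code/build_report.sas';

/* restore the default log destination */
proc printto;
run;
```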