Informatica data is not loading into Sybase target

I am using Sybase as both my source and my target. It's a direct mapping with a SQL query in the source qualifier that involves some CASE statements. When I run the mapping, the data is read successfully but cannot be written to the target. I am using Sybase 15.4.
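For context, a rough sketch of the kind of source qualifier override described above; the table and column names are made up for illustration, not taken from the actual mapping:

    -- Hypothetical Source Qualifier SQL override with CASE logic
    -- (table/column names are placeholders, not from the real mapping).
    SELECT o.order_id,
           o.customer_id,
           CASE
               WHEN o.status = 'C' THEN 'CLOSED'
               WHEN o.status = 'O' THEN 'OPEN'
               ELSE 'UNKNOWN'
           END AS status_desc
    FROM   orders o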

Related

What does Is Staged mean in Informatica Connection object definition?

I am trying to replicate an Informatica PowerCenter mapping in Informatica Cloud. When looking at the target table properties, I found the attribute "Is Staged" in the target connection object definition.
The property Truncate Target Table can be inferred easily: it means the table is truncated before it is loaded with data. What does the property "Is Staged" mean?
"Is Staged", as the name says, means Informatica will stage the data into a staging-area flat file, then read from that file and load it into the target table. If it's unchecked, data is loaded through a direct target-writing pipeline.
This is done to make sure data is extracted from the source as soon as possible; if the load fails, you can restart and re-load from the staged file.
But this is available only for certain data sources, and you also need to set up a staging directory.

Error: Unable to detect column types for pre-aggregation on empty values in readOnly mode [CubeJs]

Using the MongoDB BI Connector via Atlas as the source database
Using MySQL for pre-aggregations
Currently, it's not possible to use pre-aggregations if the source data is stored in MongoDB. Please see this GitHub issue for details: https://github.com/cube-js/cube.js/issues/2858
If you'd like to use pre-aggregations for performance, syncing data from MongoDB into another data store is the only viable option.

Does full Push Down Optimization allow Database Server to load data to target tables without intervention of Informatica Server?

I want to learn more about pushdown optimization (PDO) in Informatica.
As per my research, there are three kinds of PDO in Informatica:
Source side PDO
Target side PDO
Full PDO
I am curious to know whether full PDO can push the entire Informatica logic to either the source DB or the target DB, or some portion to the source DB and the remainder to the target DB. From a performance perspective it would be good if the entire logic were pushed to the target DB, so that the returned results are loaded into the target tables by the target DB server itself.
I need your help to understand how it works exactly. If PDO at the target is enabled, are the results returned to the Informatica PowerCenter (IPC) server, which is then responsible for writing them to the target table, or are they written to the target tables directly by the target DB server?
PDO is essentially a giant SQL statement built from the Informatica transformations. Informatica creates a big SELECT statement or INSERT/UPDATE statement based on whatever it can and cannot push down, and issues it in the DB rather than processing the data on the server.
When you run a session configured for
Source side PDO - the Integration Service analyzes the mapping from the source to the target, or until it reaches a downstream transformation it cannot push to the source database, and creates a big SELECT statement.
Target side PDO - the Integration Service analyzes the mapping from the target back to the source, or until it reaches an upstream transformation it cannot push to the target database, and creates a big INSERT/UPDATE statement.
Full PDO - the Integration Service analyzes the mapping from the source to the target, or until it reaches a downstream transformation it cannot push to the target database. Works only when source and target are on the same database (see the sketch below).
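For illustration, here is a rough sketch of the kind of statement the Integration Service might generate under full PDO when source and target live in the same database. The table, column, and expression names are assumptions made up for the example, not output from a real session:

    -- Hypothetical full-PDO statement: the whole mapping collapses into one
    -- INSERT ... SELECT issued inside the database (all names are placeholders).
    INSERT INTO tgt_orders (order_id, amount, status_desc)
    SELECT o.order_id,
           o.amount,
           CASE WHEN o.status = 'C' THEN 'CLOSED' ELSE 'OPEN' END
    FROM   src_orders o
    WHERE  o.amount > 0

With source-side or target-side PDO only part of the pipeline becomes SQL (a big SELECT on the source side, or a big INSERT/UPDATE on the target side), and the Integration Service still processes the rows in between.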
There is no single best practice; it all depends on your mapping. If you have simple transformations then target-side PDO is good. If you have multiple active transformations, or a Union, Joiner, or Lookup to another DB, the PDO type should be decided based on their distance from the source/target.
You can go through this for more info - https://docs.informatica.com/data-integration/powercenter/10-4-0/advanced-workflow-guide/pushdown-optimization.html

How to run a simple query using Informatica PowerCenter

I have never used Informatica PowerCenter before and just don't know where to begin. To summarize my goal, I need to run a simple count query against a Teradata database using Informatica PowerCenter. This query needs to be run on a specific day, but doesn't require me to store or manipulate the data returned. Looking at Informatica PowerCenter Designer is a bit daunting to me, as I'm not sure what to look for.
Any help is greatly appreciated in understanding how to set up (if needed):
Sources
Targets
Transformations
Mappings
Is a transformation the only way to query data using PowerCenter? I've looked at a lot of tutorials, but most seem to be oriented toward users who are already familiar with the tool.
You can run a query against a database using Informatica only if you create a mapping, a session, and a workflow to run it. But you cannot see the result unless you store it somewhere, either in a flat file or a table.
Here are the steps to create it anyway.
Import your source table from Teradata in the Source Analyzer.
Create a flat file target or import a relational target in the Target Designer.
Create a mapping m_xyz, drag and drop your source into the mapping.
You will see your source and source qualifier in the mapping. Write your custom query in the source qualifier, say select count(*) as cnt from table (see the sketch after these steps).
Remove all the ports from the SQ except one numeric port from source to SQ and name it cnt; the count from your SELECT will be assigned to this port.
Now drag and drop this port to an expression transformation.
Drag and drop your target into the mapping
Propagate the column from expression to this flat file/relational target.
Create a workflow and a session for this mapping.
In the workflow you can schedule it to run on a specific date.
When you execute this, the count will be loaded into the column of that flat file or table.
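As a concrete sketch of the custom query mentioned in the steps above, the SQL override in the Source Qualifier could look roughly like this; the database and table names are placeholders, not taken from the question:

    -- Hypothetical Source Qualifier override against Teradata
    -- (database/table names are placeholders).
    SELECT COUNT(*) AS cnt
    FROM   my_db.my_table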

Dynamic Mapping changes

When there is any change in the DDL of any table, we have to re-import the source and target definitions and change the mapping. Is there a way to dynamically fetch the DDL of the table and do the data copy using an Informatica mapping?
The ETL uses an abstraction layer, separated from any physical database. It uses Source and Target definitions that indicate what the job should expect to find in the DB it will be connecting to. Keep in mind that the same mapping can be applied to many different source and/or target systems. It's not bound to any of them; it just defines what data to fetch and what to do with it.
In Informatica this is reflected by separating Mappings, that define data flow, and Sessions, which indicate where the logic should be applied.
Imagine you're transferring data from multiple servers. A change applied on one of them should not break the whole data integration. If the changes were reflected dynamically, a column added on one server would make it impossible to read data from the others.
Of course it is perfectly fine to have a requirement such as the one you've mentioned. It's just not something Informatica supports with this approach.
The only workaround is to create your own application that fetches the table definitions, generates the Workflows, and imports them into Informatica prior to execution.
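As a rough illustration of the "fetch table definitions" piece, such an application could read column metadata from the database's catalog views. A minimal sketch, assuming an INFORMATION_SCHEMA-style catalog is available (catalog names differ per DBMS, and the schema/table names are placeholders):

    -- Hypothetical metadata query a custom generator could use to detect
    -- DDL changes (assumes INFORMATION_SCHEMA; adjust for your DBMS).
    SELECT column_name,
           data_type,
           character_maximum_length,
           is_nullable
    FROM   information_schema.columns
    WHERE  table_schema = 'MY_SCHEMA'
      AND  table_name   = 'MY_TABLE'
    ORDER  BY ordinal_position

The application would compare this metadata against the current Source/Target definitions and regenerate and re-import the objects before the workflow runs.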