Hi all,
I have a job scheduled in Tivoli for an Informatica workflow.
I have checked the property to save workflow logs for 5 runs.
The job runs fine through Informatica, but if I try to run it from Tivoli using pmcmd (the call is shown below for reference), it fails to rename the workflow log file.
Please help, I am getting this error:
Cannot rename workflow log file [E:\Informatica\etl_d\WorkflowLogs\wf_T.log.bin] to [E:\Informatica\etl_d\WorkflowLogs\wf_T.log.4.bin]. Please check the Integration Service log for more information.
Disconnecting from Integration Service
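For reference, the Tivoli job invokes the workflow with a pmcmd call roughly like this (a sketch; the service, domain, folder, and credential names are placeholders, not my actual values):
pmcmd startworkflow -sv INT_SERVICE -d DOMAIN_NAME -u USER -p PASSWORD -f FOLDER_NAME -wait wf_T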
Check the log file name in the Workflow Edit options. Possibly you have the same workflow log file name configured for multiple workflows.
HTH
Irfan
I am trying to migrate a table from Teradata to BigQuery using GCP's Data Transfer functionality. I have followed the steps suggested at https://cloud.google.com/bigquery-transfer/docs/teradata-migration.
A detailed description of the steps is below:
The APIs suggested in the above link were enabled.
A pre-existing GCS bucket, BigQuery dataset, and GCP service account were used in this process.
Google Cloud SDK setup was completed on the device.
Google Cloud service account key was set to an environment variable called GOOGLE_APPLICATION_CREDENTIALS.
BigQuery Data Transfer was set up.
Initialized the migration agent
On the command line, a command to run the jar file was issued, with some particular flags e.g.
java -cp C:\migration\tdgssconfig.jar;C:\migration\terajdbc4.jar;C:\migration\mirroring-agent.jar com.google.cloud.bigquery.dms.Agent --initialize
When prompted, the required parameters were entered.
When prompted for a BigQuery Data Transfer Service resource name, it was entered using the data transfer config from GCP.
After all the requested parameters were entered, the migration agent created a configuration file and put it into the local path provided in the parameters.
Run the migration agent
The following command was executed, using the classpath to the JDBC drivers and the path to the configuration file created in the previous initialization step, e.g.
java -cp C:\migration\tdgssconfig.jar;C:\migration\terajdbc4.jar;C:\migration\mirroring-agent.jar com.google.cloud.bigquery.dms.Agent --configuration-file=config.json
At this point, an error was encountered which said "Unable to start tbuild command".
The following steps were taken to try to resolve this error, using the steps given here:
Teradata Parallel Transporter was installed using Teradata Tools and Utilities.
No bin file was found for the installation.
Below is the error message:
Exception in thread "main" com.google.cloud.bigquery.dms.common.exceptions.AgentException: Unable to start tbuild command
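One check that may help (a sketch, assuming a Windows machine with Teradata Tools and Utilities installed; the install path below is only an example, not necessarily the real one): verify that the TPT tbuild executable is on the PATH visible to the migration agent, e.g.
where tbuild
set PATH=%PATH%;C:\Program Files\Teradata\Client\17.10\bin
If "where" cannot locate tbuild, the agent will not be able to start the tbuild command either.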
I couldn't find relevant information in the documentation. I have tried all the options and links in the batch transform pages.
They can be found, but unfortunately not via any links in the Vertex AI console.
Soon after the batch prediction job fails, go to Logging -> Logs Explorer and create a query like this, replacing YOUR_PROJECT with the name of your gcp project:
logName:"projects/YOUR_PROJECT/logs/ml.googleapis.com"
First look for the same error reported by the Batch Prediction page in the Vertex AI console: "Job failed. See logs for full details."
The log line above the "Job Failed" error will likely report the real reason your batch prediction job failed.
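If you prefer the command line, an equivalent query (a sketch; replace YOUR_PROJECT with your project ID) is:
gcloud logging read 'logName:"projects/YOUR_PROJECT/logs/ml.googleapis.com" severity>=ERROR' --project=YOUR_PROJECT --limit=50
This prints the most recent matching entries first, so the underlying failure reason should be near the top.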
I have found that just going to Cloud Logging after the batch prediction job fails and clicking "Run query" shows the error details.
I'm having trouble with a job I've set up on Dataflow.
Here is the context: I created a dataset on BigQuery using the following path
bi-training-gcp:sales.sales_data
In the properties I can see that the data location is "US"
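To double-check that, the dataset location can also be read from the command line (a sketch using the bq CLI):
bq show --format=prettyjson bi-training-gcp:sales
The "location" field in the output confirms "US" here as well.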
Now I want to run a job on Dataflow, so I enter the following command into Cloud Shell:
gcloud dataflow sql query ' SELECT country, DATE_TRUNC(ORDERDATE , MONTH),
sum(sales) FROM bi-training-gcp.sales.sales_data group by 1,2 ' --job-name=dataflow-sql-sales-monthly --region=us-east1 --bigquery-dataset=sales --bigquery-table=monthly_sales
The query is accepted by the console and returns a sort of acknowledgment message.
After that I go to the Dataflow dashboard. I can see a new job queued, but after 5 minutes or so the job fails and I get the following error messages:
Error
2021-09-29T18:06:00.795Z Invalid/unsupported arguments for SQL job launch: Invalid table specification in Data Catalog: Could not resolve table in Data Catalog: bi-training-gcp.sales.sales_data
Error
2021-09-29T18:10:31.592036462Z Error occurred in the launcher container: Template launch failed. See console logs.
My guess is that it cannot find my table, maybe because I specified the wrong location/region. Since my table's location is "US", I thought it would be on a US server (which is why I specified us-east1 as the region), but I tried all US regions with no success...
Does anybody know how I can solve this?
Thank you
This error occurs if the Dataflow service account doesn't have access to the Data Catalog API. To resolve this issue, enable the Data Catalog API in the Google Cloud project that you're using to write and run queries. Alternately, assign the roles/datacatalog.viewer role to the Dataflow service account.
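For example (a sketch; replace PROJECT_ID and the service account email with your own values):
gcloud services enable datacatalog.googleapis.com --project=PROJECT_ID
gcloud projects add-iam-policy-binding PROJECT_ID --member="serviceAccount:SERVICE_ACCOUNT_EMAIL" --role="roles/datacatalog.viewer"
The first command enables the Data Catalog API; the second grants the Data Catalog Viewer role to the service account that runs the Dataflow job.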
I am trying to run an Informatica workflow to check a database table and write the fetched data to a flat file on the server.
source: database table
target: flat file
transformations: none
This workflow runs fine when "run on demand", but I need to run it continuously, so I tried the INFA scheduler to run it every 5 minutes. When the scheduler is enabled, the workflow continuously fails. Kindly help with any ideas to run this on the scheduler.
Oops... sorted, this was my mistake. I had not checked in the target I created for the flat file. My bad. Thanks, all.
Two questions on the WSO2 BAM 2.5 Output Event Adaptor:
1) Why is there no "email" option in the output event adaptor type? As per the documentation, it should be there. Even if I create my own XML file for the email event adaptor and drop it in the required folder, the type "email" is not recognized and BAM shows it as "inactive".
2) Which directory and file does the default logger output event adaptor write the logs to? I have configured it, and I can see that the messages have been generated through Hive scripts and written to the BAMNotifications column family, but I am not able to see the logs in the log files under the repository/logs directory. Please help.
1) This issue occurred because of an OSGi loading issue in the SOAP output adapter (it causes failures in some other output adapters as well). We have fixed that in the next BAM version. For the moment, to overcome this issue, please remove the SOAP output adapter jar (from the plugins directory), restart, and continue.
2) It should go to the wso2carbon.log file. Can you please verify the log4j properties?
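For example, from the BAM home directory (a sketch, assuming the default Carbon layout; the CARBON_LOGFILE appender name is the usual default, but check your own file):
grep -n "rootLogger\|CARBON_LOGFILE" repository/conf/log4j.properties
tail -f repository/logs/wso2carbon.log
The first command shows the root logger level and the file appender settings; the second lets you watch wso2carbon.log while the Hive script runs, to confirm the logger adapter output is arriving there.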