Writing a flat file, permission denied - informatica

I have got this issue:
WRT_8004
Writer initialization failed [Error opening session output file [/*/diff_zipcode1.out] [error=Permission denied]].
Writer terminating.
The user for Informatica has the right to write in this specific folder (I tried a touch there directly and it worked), but I still get this error.
The only way to make this workflow work is to grant write permission to everyone...
So I was wondering whether Informatica uses a different user than the one who launches the Informatica server (such as my Informatica login), and if that is the case, how can I set the permissions correctly so it can write to my folder?
Answer to my situation: I changed the settings of the Informatica user after I had launched the Informatica server, so the modification was not picked up from Informatica's point of view. To fix the problem, I only had to reboot the Informatica server.
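For anyone hitting the same symptom, it can help to confirm which OS account the server is actually running as before touching permissions. A minimal sketch, assuming a Unix host and that the PowerCenter server process is named pmserver (adjust the process name for your install; <output_dir> is a placeholder for the session's real output directory):
ps -ef | grep -i pmserver | grep -v grep                              # shows the OS user that owns the server process
sudo -u <informatica_os_user> touch <output_dir>/diff_zipcode1.out   # test write access as that user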

Informatica will use whichever user is logged in to PowerCenter to create the file.
If you do not want to open up full permissions on your folder, it would be best to add the user to a group and grant write permission to that group only.
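A minimal sketch of the group approach on Linux, assuming a hypothetical group name informatica_grp and the placeholders <informatica_os_user> and <output_dir>:
groupadd informatica_grp                           # create the group (hypothetical name)
usermod -aG informatica_grp <informatica_os_user>  # add the Informatica OS user to it
chgrp informatica_grp <output_dir>                 # hand the folder to the group
chmod 775 <output_dir>                             # rwx for owner and group, r-x for others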

Related

Connect BigQuery as a source to Data Fusion in another GCP project

I am trying to connect BigQuery in ProjectA to Data Fusion in ProjectB, and it is asking me to enter a service key file. I have tried uploading the service key file to Cloud Storage in ProjectB and providing the link, but it is asking me for a local file path.
Can someone help me on this?
Thanks in advance.
Can you try this: grant BigQuery permissions in project A to the two Data Fusion service accounts of project B (a gcloud sketch follows the steps below):
service-<project_number>@gcp-sa-datafusion.iam.gserviceaccount.com
<project_number>-compute@developer.gserviceaccount.com
Steps:
Navigate to the customer project that contains the CDF instance and copy the project number (this is found on the Home Page in the Project Info card)
Navigate to the project that contains the resources you would like to interact with.
In the sidebar, click on ‘IAM & Admin’
Click on ‘Add’ at the top of the page.
Provide the first service account name from the list above, making sure to replace <project_number> with the actual number you obtained in step 1.
Grant the Admin role for the resource you would like to interact with, e.g. BigQuery Admin for reading/writing to BigQuery. For BigQuery, you will also need to grant the BigQuery Data Owner role.
Repeat steps 5 and 6 for the second service account in the list above.
In your pipeline, ensure you define the correct Project Id for the sources/sinks. Using ‘auto-detect’ will default to the customer project that contains the CDF instance.
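A rough gcloud equivalent of the IAM steps above (a sketch only; PROJECT_A is the project that owns the BigQuery data and PROJECT_B_NUMBER is the project number copied in step 1, both placeholders):
# Grant BigQuery roles in project A to the Data Fusion service agent of project B
gcloud projects add-iam-policy-binding PROJECT_A \
  --member="serviceAccount:service-PROJECT_B_NUMBER@gcp-sa-datafusion.iam.gserviceaccount.com" \
  --role="roles/bigquery.admin"
gcloud projects add-iam-policy-binding PROJECT_A \
  --member="serviceAccount:service-PROJECT_B_NUMBER@gcp-sa-datafusion.iam.gserviceaccount.com" \
  --role="roles/bigquery.dataOwner"
# Repeat both bindings for the Compute Engine default service account
gcloud projects add-iam-policy-binding PROJECT_A \
  --member="serviceAccount:PROJECT_B_NUMBER-compute@developer.gserviceaccount.com" \
  --role="roles/bigquery.admin"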
Can you try downloading the service key JSON file locally, i.e. to your local computer? Then put the file in some folder and provide the full path to that service key file in the BigQuery properties.

inheritance of pentaho log kettle.properties in sub transforms

I have set up logging of Pentaho jobs and transformations to a database.
This works fine provided I define the log settings for every job and every transformation in its individual log settings dialogue.
I see that I can configure the kettle.properties file to hold these values.
However, I can't get this to be inherited automatically by a transformation when it is called by a job. I assumed that if it is defined in the properties file it should just be inherited and work.
Any ideas on what I am missing?
Thanks
(MS Windows environment with MS SQL Server; we don't have Pentaho Enterprise.)
You can do it by adding the entries below to the "kettle.properties" file.
# Kettle logging properties
KETTLE_TRANS_LOG_DB=
KETTLE_TRANS_LOG_SCHEMA=
KETTLE_TRANS_LOG_TABLE=etl_trans_log
KETTLE_JOB_LOG_DB=
KETTLE_JOB_LOG_SCHEMA=
KETTLE_JOB_LOG_TABLE=etl_job_log
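For illustration, a filled-in sketch (the connection name etl_log_db and the schema dbo are hypothetical; the *_LOG_DB values must match the name of a database connection that is defined in your jobs and transformations):
# Kettle logging properties (example values)
KETTLE_TRANS_LOG_DB=etl_log_db
KETTLE_TRANS_LOG_SCHEMA=dbo
KETTLE_TRANS_LOG_TABLE=etl_trans_log
KETTLE_JOB_LOG_DB=etl_log_db
KETTLE_JOB_LOG_SCHEMA=dbo
KETTLE_JOB_LOG_TABLE=etl_job_log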
OK, so I have found that provided I set the properties file on the machine and then set up each transformation (by right-clicking and setting each log to use the connection), when I call the job it all logs correctly.
So you need the database connection in all transformations, and you need to set this as the default in the logging tab.
I think this is right anyway, unless someone else has a shortcut.

APEX_ADMINISTRATOR_ROLE in AWS RDS Oracle Instance

I am trying to install APEX on my AWS Oracle 12 RDS instance. In order to achieve this, I am following these instructions: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.Options.APEX.HTML
However, I got stuck at step 7:
Step 7:
You must set a password for the APEX admin user. To do this, use
SQL*Plus to connect to your DB instance as the master user, and then
issue the following commands:
grant APEX_ADMINISTRATOR_ROLE to master;
@/home/apexuser/apex/apxchpwd.sql
Replace master with your master user name. When the apxchpwd.sql script prompts you, type a new admin password.
When I log into my RDS instance with my master user and execute this:
grant APEX_ADMINISTRATOR_ROLE to [mymasteruser];
I received this error:
ERROR at line 1:
ORA-01924: role 'APEX_ADMINISTRATOR_ROLE' not granted or does not exist
Can you please help me to solve this?
Edit 12/09/2017.
Using this post/answer:
https://serverfault.com/questions/276541/how-do-you-recover-you-rds-master-user-username
I understand my master user is the one shown in the following image. As far as I know, on an RDS instance I have no access to the SYS or SYSTEM users, so this is the only user I can use.
Many thanks
Edit 20/09/2017.
I applied Alex's solution, and it works! However, there are some issues to note:
The tutorial has changed; in fact the URL changed, and it is now
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.Options.APEX.html (the final "html" was uppercase before),
but it is not reliable now; there are some points that should be fixed. For example, it now says that RDS supports Oracle APEX version 5.1.2; I tried that version and got this error:
Also, some directories don't match the previous step...
So I used the version that the tutorial originally specified: Oracle APEX version 4.2.6.v1.
I had to execute both statements:
EXEC rdsadmin.rdsadmin_util.grant_apex_admin_role;
grant APEX_ADMINISTRATOR_ROLE to [master];
Then I could execute the apxchpwd.sql script successfully!
But unfortunately, when I accessed my APEX home page and tried to create a new workspace "ws_prueba", I received this error (I'm trying to create it with my APEX admin user):
Any ideas?
Use
EXEC rdsadmin.rdsadmin_util.grant_apex_admin_role;
instead. I have a case open on this with AWS and just asked them to update the documentation page.
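Putting the accepted answer and the OP's edit together, the sequence as the RDS master user looks roughly like this (a sketch; master is a placeholder for your master user name):
-- Grant the APEX admin role via the RDS admin package instead of a plain GRANT
EXEC rdsadmin.rdsadmin_util.grant_apex_admin_role;
-- The OP reports also running the direct grant afterwards
GRANT APEX_ADMINISTRATOR_ROLE TO master;
-- Then set the APEX admin password
@/home/apexuser/apex/apxchpwd.sql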

Migrate ColdFusion scheduled tasks using neo-cron.xml

We currently have two ColdFusion 10 dedicated servers which we are migrating to a single VPS server. We have many scheduled tasks on each. I took each of the neo-cron.xml files, copied the var XML elements from within the struct type='coldfusion.server.ConfigMap' XML element, and pasted them within that element in the neo-cron.xml file on the new server. Afterward I restarted the ColdFusion service, logged into CF Admin, and the tasks all show as expected.
My problem is, when I try to update any of the tasks I get the following error when saving:
An error occured scheduling the task. Unable to store Job :
'SERVERSCHEDULETASK#$%^DEFAULT.job_MAKE CATALOGS (SITE CONTROL)',
because one already exists with this identification
Also, when I try to delete a task, it tells me a task with that name does not exist. So it seems to me that the task information must also be stored elsewhere. Then, when I try to update a task, the record doesn't exist in the secondary location, so it tries to add it as new to the neo-cron.xml file, which causes an error because it already exists; and when I try to delete, it doesn't exist in the secondary location, so it says a task with that name does not exist. That is just a guess though.
Any ideas how I can get this to work without manually re-creating dozens of tasks? From what I've read this should work, but I need to be able to edit the tasks.
Thank you.
After a lot of hair-pulling I was able to figure out the problem. It all boiled down to having parentheses in the scheduled task names. This was causing both the "Unable to store Job : 'SERVERSCHEDULETASK#$%^DEFAULT.job_MAKE CATALOGS (SITE CONTROL)', because one already exists with this identification" error and my inability to delete jobs. I believe it has something to do with encoding the parentheses, because the actual neo-cron.xml name attribute of the var element encodes the name like so:
serverscheduletask#$%^default#$%^MAKE CATALOGS (SITE CONTROL)
Note that this anomaly did not exist on ColdFusion 10, Update 10, but does exist on Update 13. I'm not sure which update broke it, but there you go.
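For context, a heavily simplified sketch of what such an entry looks like in neo-cron.xml, based only on the element names mentioned above (the inner contents are illustrative, not the real schema):
<!-- inside the outer <struct type='coldfusion.server.ConfigMap'> element -->
<var name='serverscheduletask#$%^default#$%^MAKE CATALOGS (SITE CONTROL)'>
  <struct>
    <!-- the individual task settings (URL, interval, start date, etc.) are stored here -->
  </struct>
</var>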
You will have to copy the neo-cron.xml from C:\ColdFusion10\lib of one server to the other. After that, restart the server to make the changes effective. Log in to the CF Admin and check the functionality.
This should work.
Note: please take a backup of the existing neo-cron.xml before making the changes.

Error while deploying SharePoint 2013 timer job: The EXECUTE permission was denied on the object 'proc_putObjectTVP', database 'MSSQL', schema 'dbo'

While trying to create a custom SharePoint timer job at feature activation I got the following error from the log files:
System.Data.SqlClient.SqlException (0x80131904): The EXECUTE permission was denied on the object 'proc_putObjectTVP', database 'MSSQL', schema 'dbo'. at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose) at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady) at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) at System.Data.SqlClient.Sql... 5c6d109c-dbc6-e02e-7ae4-010d7f559e0b
In order to make it work, I located the stored procedure proc_putObjectTVP and granted execute permission to the site app pool user ID. It worked as desired.
My question is:
Is this a bug in SharePoint 2013?
Is this the proper way to do it? (In a production environment I may not be allowed by the server administrator to perform such operations.)
I had a similar error in the event log for the account used for SharePoint 2013 services:
Insufficient SQL database permissions for user 'Name:
XXXXX\SP_Services SID: xxxxxxxxxxxxxxx ImpersonationLevel: None' in
database 'XXXX_Config' on SQL Server instance 'XXXXXXXXX'. Additional
error information from SQL Server is included below.
The EXECUTE permission was denied on the object 'proc_putObjectTVP',
database 'XXXX_Config', schema 'dbo'.
Googling around, lots of blog posts recommend the same approach of applying the required permission to the stored proc. Personally I didn't like this approach; however, I eventually found a TechNet post which grants the required permissions by adding the stored proc to the securables of the WSS_Content_Application_Pools role.
Using SQL Server Management Studio do the following:
Expand Databases then expand the SharePoint_Config Database.
Expand Security -> Roles -> Database Roles
Find WSS_Content_Application_Pools role, right click it, and select Properties
Click on Securables and click Search
Next click Specific objects and click OK
Click Object Types and select Stored Procedures. Click OK
Add the stored procedure 'proc_putObjectTVP' and click OK (if it does not automatically grant it execute permission, tick the "Execute" checkbox and save).
Using this method, any new accounts added to the WSS_Content_Application_Pools role will have the correct rights, preventing the problem from cropping up again. An equivalent T-SQL statement is sketched below.
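For reference, the same change can be made with a single T-SQL statement, run against the config database named in the steps above (a sketch; adjust the database name to your environment):
-- Grant execute on the stored procedure to the role rather than to individual accounts
USE [SharePoint_Config];
GRANT EXECUTE ON OBJECT::dbo.proc_putObjectTVP TO [WSS_Content_Application_Pools];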
The SPDataAccess role in SharePoint_Config was configured to execute proc_putObjectTVP on my install of SharePoint 2013 (which has been a trial by fire in getting used to SQL Server 2012). Anyway, making sure my SharePoint users had that role set seems to have done the trick (and of course brought up more errors to debug, now that more things are successfully starting...).
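If you take that route, a minimal sketch of adding a service account to the role on SQL Server 2012 (the database user DOMAIN\SP_Services is a placeholder and must already exist in the database):
USE [SharePoint_Config];
-- Add the SharePoint service account's database user to the SPDataAccess role
ALTER ROLE [SPDataAccess] ADD MEMBER [DOMAIN\SP_Services];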
SPDataAccess (also written as SP_DATA_ACCESS) has been a useful role to Google for, bringing up tons of good resources and tips to fix one problem or another. I'll be reading blogs all night. I suspect configuring databases is old hat for quite a few SharePoint admins and devs, but it's not as well-explained, particularly as the wizard does so much (and so little) for you.
I signed up for Safari Books just to access http://my.safaribooksonline.com/book/programming/microsoft-sharepoint/9781118655047 and books like it. It's useful to help me "think like SharePoint", though Google has been just as much help. (More, really.)