WSO2 IS - POST_DELETE_USER error while deleting user from IS

We have installed WSO2 AM 2.6.0 with IS 5.7.0 as Key Manager. We deployed AM as an active-active all-in-one instance and IS as KM as active-active too, following the directives in the official documentation.
Based on the documentation, we created the following databases with their respective datasources: regdb (registry), carbondb, userdb (user store), mb-store, apimdb.
The issue we have now is on the IS side. We tried several things to check that everything was working correctly, like creating users, checking registry access, etc. We created a user called "test", changed some properties, and after that we proceeded to delete the user.
When we deleted the user, we got the following popup on the IS console:
Checking the logs, we found the following:
Caused by: org.postgresql.util.PSQLException: ERROR: relation "cm_receipt" does not exist
Position: 135
TID: [-1234] [] [2020-05-11 09:00:30,062] ERROR {org.wso2.carbon.user.mgt.ui.UserAdminClient} - Error when handling event : POST_DELETE_USER
org.wso2.carbon.user.mgt.stub.UserAdminUserAdminException: UserAdminUserAdminException
We checked the database and the user was deleted correctly, and the IS Carbon console is not displaying it any more, so the deletion itself worked. Checking a little bit more, the delete-user process is trying to access the table "cm_receipt" on carbondb, but the table exists on apimdb.
On the Postgres side, we have this log during the delete:
<2020-05-08 11:49:50.452 -03:172.19.35.21(45740):wso2carbon#carbondb:[12476]:>ERROR: relation "cm_receipt" does not exist at character 135
<2020-05-08 11:49:50.452 -03:172.19.35.21(45740):wso2carbon#carbondb:[12476]:>STATEMENT: SELECT R.CONSENT_RECEIPT_ID, R.LANGUAGE, R.PII_PRINCIPAL_ID, R.PRINCIPAL_TENANT_ID, R.STATE,RS.SP_DISPLAY_NAME,RS.SP_DESCRIPTION FROM CM_RECEIPT R INNER JOIN CM_RECEIPT_SP_ASSOC RS ON R.CONSENT_RECEIPT_ID=RS.CONSENT_RECEIPT_ID WHERE PII_PRINCIPAL_ID LIKE $1 AND PRINCIPAL_TENANT_ID =$2 AND SP_NAME LIKE $3 AND STATE LIKE $4 ORDER BY ID ASC LIMIT $5 OFFSET $6
Have you got any idea why this could be happening? Is there a related bug or something?
Thanks!

There could be two reasons for this.
1. You've forgotten to execute the DB script which contains the consent management tables: /wso2is-5.7.0/dbscripts/consent/postgresql.sql.
2. Your wso2is-5.7.0/repository/conf/consent-mgt-config.xml configuration file is referring to the wrong datasource.
Solution
Check which datasource the consent-mgt-config.xml file is referring to. By default it looks like this:
<ConsentManager xmlns="http://wso2.org/carbon/consent/management" xmlns:svns="http://org.wso2.securevault/configuration">
    <DataSource>
        <!-- Include a data source name (jndiConfigName) from the set of data sources defined in master-datasources.xml -->
        <Name>jdbc/WSO2IdentityDB</Name>
    </DataSource>
Here it's jdbc/WSO2IdentityDB. Then go to your wso2is-5.7.0/repository/conf/datasources/master-datasources.xml file and check which database that datasource points to. If the consent tables are not created in that database, you can execute the above-mentioned postgresql.sql script against it. (If you've already created these tables in a different database, you might want to change the datasource defined in the consent-mgt-config.xml file instead.)
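For example, a quick way to confirm where the consent tables actually ended up is to run a check like this against the database that jdbc/WSO2IdentityDB points to (just a sketch; the table names are taken from the error above):
-- Run against the database the consent-mgt datasource points to.
-- No rows returned means the consent schema is missing there and
-- dbscripts/consent/postgresql.sql still needs to be executed.
SELECT table_name
FROM information_schema.tables
WHERE lower(table_name) IN ('cm_receipt', 'cm_receipt_sp_assoc');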
P.S. Never use the -Dsetup argument for automatic execution of database scripts on startup. Always execute the database scripts manually against the database.
P.S. The reason the user deletion still succeeds is that this user-consent removal runs as a POST_DELETE_USER event. A failure in a POST handler won't affect the action itself.

Related

Internal Error in Redmine Initialization phase

I'm trying to set up Redmine with the following products:
redmine-4.0.7
Rails 5.2.4.2
Phusion Passenger 6.0.7
Apache/2.4.6
mysql Ver 14.14
I expected there would be an initialization page; however, I got 'Internal Error' from http://mydomain/redmine/
I can see the following messages in log/production.log:
Completed 500 Internal Server Error in 21ms (ActiveRecord: 1.5ms)
ActiveRecord::StatementInvalid (Mysql2::Error: Can't find file: './redmine/settings.frm' (errno: 13 - Permission denied): SHOW FULL FIELDS FROM `settings`):
It seems I need ./redmine/settings.frm, but it isn't there.
Does anyone know how to put ./redmine/settings.frm in place and what its content should be?
The error is thrown by your database server (i.e. MySQL). It seems that MySQL does not have the required permission to access the files where it stores the table data.
Usually, those files are handled (i.e. created, updated, and eventually deleted) entirely by MySQL, which requires specific access patterns to ensure consistent data. Because of that, you should strongly avoid manually changing any files under the control of MySQL. Instead, you should only use SQL commands to update table structures and table data.
To fix this issue now, you need to fix the permissions of your MySQL data files so that MySQL can properly access them. What exactly is required here unfortunately can't be explained simply, since there can be various causes. If you have just set up your MySQL server, it might be best to start over entirely.

Camunda Rest API: Cannot fetch and lock an External Task for a Tenant

I have a Process Instance that was started by the Tenant 949.
I tried to fetch and lock that Task, as described here: https://docs.camunda.org/manual/7.10/reference/rest/external-task/fetch/
Here is the Body of the Request:
{"workerId":"testUser","maxTasks":1,"usePriority":false,
"topics":[
{"topicName":"archive-document","tenantIdIn":["949"],"lockDuration":10000,"localVariables":true,"deserializeValues":false}
]}
I don't get any Task with it.
The same request works if the Process Instance is started without a Tenant and fetched accordingly.
Am I missing something, or is this a bug in Camunda?
Have you attempted to simply do a query to first retrieve the task? (Rather than attempting to fetch it and lock it?) You could use this endpoint: https://docs.camunda.org/manual/7.10/reference/rest/external-task/get-query/.
You may also want to query the runtime database directly using SQL. Your External Task would be in the ACT_RU_EXT_TASK table and would have a TOPIC_NAME_ defined within it (as well as a TENANT_ID_).
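A hedged sketch of such a query, assuming the default Camunda table and column names (topic and tenant values taken from the request above):
-- Check whether the external task exists for this topic and tenant,
-- and whether it is currently locked by another worker.
SELECT ID_, TOPIC_NAME_, TENANT_ID_, WORKER_ID_, LOCK_EXP_TIME_
FROM ACT_RU_EXT_TASK
WHERE TOPIC_NAME_ = 'archive-document'
  AND TENANT_ID_ = '949';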
The problem was authentication. I used a different user to start the process than to fetch the task, and that user had no rights to fetch the task for this tenant.

PowerBI Embedded: Datasource has no credentials, unable to Patch the gateway

I wanted to test out Power BI Embedded, so I downloaded the sample app that is able to publish a pbix file and to embed it.
So I created the simplest Power BI file one can make, with Azure SQL (using the DirectQuery option) as the underlying data source.
I successfully imported the Power BI file into my workspace collection.
I changed the connection string of my Power BI file successfully.
After that, the code to patch the gateway with the username and password credentials fails.
Then when I tried to view the embedded report I got this error.
I believe the connection string is in the correct format because it was updated successfully. I also tried pointing it to another SQL database, and the error message then shows the other SQL database.
1) I thought this could be because the gateway does not get the credentials that I gave it. Is that correct?
2) Does someone know how I can fix this?
Thanks in advance!
As @Cuong Le stated, this was a Microsoft issue at first.
After the problem was fixed I still received a BadRequest exception. After trying to update the credentials with the PowerBI-CLI, the problem became clearer: I needed to grant access for Azure IP addresses to the relevant SQL database. Once I did that, I was able to update the credentials. Unfortunately the PowerBI API SDK's exception messages are not as good as the PowerBI-CLI messages. I also tried it with the PowerBI API SDK and it worked as well.
The exception message I got was the following:
[ powerbi ] {"error":{"code":"DM_GWPipeline_Gateway_DataSourceAccessError","pbi.error":{"code":"DM_GWPipeline_Gateway_DataSourceAccessError","parameters":{},"details":[{"code":"DM_ErrorDetailNameCode_UnderlyingErrorCode","detail":{"type":1,"value":"-2146232060"}},{"code":"DM_ErrorDetailNameCode_UnderlyingErrorMessage","detail":{"type":1,"value":"Cannot open server 'engiep-dev-weeu-sql' requested by the login. Client with IP address 'xx.xx.xx.213' is not allowed to access the server. To enable access, use the Windows Azure Management Portal or run sp_set_firewall_rule on the master database to create a firewall rule for this IP address or address range. It may take up to five minutes for this change to take effect."}},{"code":"DM_ErrorDetailNameCode_UnderlyingHResult","detail":{"type":1,"value":"-2146232060"}},{"code":"DM_ErrorDetailNameCode_UnderlyingNativeErrorCode","detail":{"type":1,"value":"40615"}}]}}}
The correct connection string format to use is:
Data Source=yourDataSource;Initial Catalog=yourDataBase;User ID=yourUser;Password=yourPass;
(Don't use quotes anywhere.)
I was experiencing the same issue. It is also an open issue on GitHub.
To solve this, I used the PowerBI CLI 1.0.4 from NPM and used the update-connection operation (remember to add -d):
powerbi update-connection -c [workspace name] -k [access key] -w [workspace id] -d [dataset id] -s "Data Source=xxx.database.windows.net;Initial Catalog=xxx;User ID=xxx;Password=xxx"
If it fails, run the update-connection operation again.
The issue happens because sometimes datasource credentials are not carried over to the workspace.
In the case of reports that use DirectQuery, credentials are never brought along with the pbix when the import is done; all private info is stripped out.
Hope this helps!
Thanks

Migrate ColdFusion scheduled tasks using neo-cron.xml

We currently have two ColdFusion 10 dedicated servers which we are migrating to a single VPS server. We have many scheduled tasks on each. I have taken each of the neo-cron.xml files and copied the var XML elements from within the struct type='coldfusion.server.ConfigMap' XML element, and pasted them within that element in the neo-cron.xml file on the new server. Afterward I restarted the ColdFusion service, logged into CF Admin, and the tasks all show as expected.
My problem is, when I try to update any of the tasks I get the following error when saving:
An error occured scheduling the task. Unable to store Job :
'SERVERSCHEDULETASK#$%^DEFAULT.job_MAKE CATALOGS (SITE CONTROL)',
because one already exists with this identification
Also, when I try to delete a task it tells me a task with that name does not exist. So it seems to me that the task information must also be stored elsewhere: when I try to update a task, the record doesn't exist in that secondary location, so ColdFusion tries to add it as a new entry in neo-cron.xml, which fails because it already exists; and when I try to delete, it doesn't exist in the secondary location, so it says a task with that name does not exist. That is just a guess, though.
Any ideas how I can get this to work without manually re-creating dozens of tasks? From what I've read this should work, but I need to be able to edit the tasks.
Thank you.
After a lot of hair-pulling I was able to figure out the problem. It all boiled down to having parentheses in the scheduled task names. This was causing both the "Unable to store Job : 'SERVERSCHEDULETASK#$%^DEFAULT.job_MAKE CATALOGS (SITE CONTROL)', because one already exists with this identification" error and also causing me to be unable to delete jobs. I believe it has something to do with encoding the parentheses because the actual neo-cron.xml name attribute of the var element encodes the name like so:
serverscheduletask#$%^default#$%^MAKE CATALOGS (SITE CONTROL)
Note that this anomaly did not exist on ColdFusion 10, Update 10, but does exist on Update 13. I'm not sure which update broke it, but there you go.
You will have to copy the neo-cron.xml from C:\ColdFusion10\lib of one server to the other. After that, restart the server to make the changes effective, then log into CF Admin and check the functionality.
This should work.
Note: Please take a backup of the existing neo-cron.xml before making the changes.

Error while deploying Sharepoint 2013 timer job :The EXECUTE permission was denied on the object 'proc_putObjectTVP', database 'MSSQL', schema 'dbo'

While trying to create a custom SharePoint timer job at feature activation I got the following error from the log files:
System.Data.SqlClient.SqlException (0x80131904): The EXECUTE permission was denied on the object 'proc_putObjectTVP', database 'MSSQL', schema 'dbo'. at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose) at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady) at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) at System.Data.SqlClient.Sql... 5c6d109c-dbc6-e02e-7ae4-010d7f559e0b
In order to make it work, I located the stored procedure proc_putObjectTVP and granted execute permission to the site app pool user ID. It worked as desired.
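For reference, that grant expressed in T-SQL would look roughly like this (the database name is the one from the error message and the account name is a placeholder; adjust both to your environment):
-- Grant the app pool identity execute rights on the stored procedure.
-- [DOMAIN\AppPoolAccount] is a placeholder for the site's application pool account.
USE [MSSQL];
GRANT EXECUTE ON OBJECT::dbo.proc_putObjectTVP TO [DOMAIN\AppPoolAccount];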
My question is:
Is this a bug in Sharepoint 2013?
Is this the proper way to do it? (In a production environment I may not be allowed by the server administrator to perform such operations.)
I had a similar error in the event log for the account used for SharePoint 2013 services:
Insufficient SQL database permissions for user 'Name: XXXXX\SP_Services SID: xxxxxxxxxxxxxxx ImpersonationLevel: None' in database 'XXXX_Config' on SQL Server instance 'XXXXXXXXX'. Additional error information from SQL Server is included below.
The EXECUTE permission was denied on the object 'proc_putObjectTVP', database 'XXXX_Config', schema 'dbo'.
Googling around, lots of blog posts recommend the same approach of applying the required permission to the stored proc. Personally I didn't like this approach; however, I eventually found this TechNet post, which grants the required permissions by adding the stored proc to the securables of the WSS_Content_Application_Pools role.
Using SQL Server Management Studio, do the following:
1. Expand Databases, then expand the SharePoint_Config database.
2. Expand Security -> Roles -> Database Roles.
3. Find the WSS_Content_Application_Pools role, right-click it, and select Properties.
4. Click Securables and click Search.
5. Next, click Specific objects and click OK.
6. Click Object Types, select Stored Procedures, and click OK.
7. Add the stored procedure 'proc_putObjectTVP' and click OK (if it does not automatically get execute permission, tick the "execute" checkbox and save).
Using this method, any new accounts added to the WSS_Content_Application_Pools role will have the correct rights, preventing the problem from cropping up again.
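For anyone who prefers to script it, the equivalent grant in T-SQL would be roughly the following (assuming the default SharePoint_Config database name):
-- Same change as the Management Studio steps above:
-- grant the database role execute rights on the stored procedure.
USE [SharePoint_Config];
GRANT EXECUTE ON OBJECT::dbo.proc_putObjectTVP TO [WSS_Content_Application_Pools];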
The SPDataAccess role in SharePoint_Config was already configured to execute proc_putObjectTVP in my install of SharePoint 2013 (which has been a trial by fire in getting used to SQL Server 2012). Anyway, making sure my SharePoint users had that role seems to have done the trick (and of course brought up more errors to debug, now that more things are successfully starting...).
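If you go that route, adding an account to the role looks roughly like this (placeholder login name; SQL Server 2012+ syntax):
-- Add the SharePoint service account to the SPDataAccess role.
USE [SharePoint_Config];
ALTER ROLE [SPDataAccess] ADD MEMBER [DOMAIN\SP_Services];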
SPDataAccess (also written as SP_DATA_ACCESS) has been a useful role to Google for, bringing up tons of good resources and tips to fix one problem or another. I'll be reading blogs all night. I suspect configuring databases is old hat for quite a few SharePoint admins and devs, but it's not as well-explained, particularly as the wizard does so much (and so little) for you.
I signed up for Safari Books just to access http://my.safaribooksonline.com/book/programming/microsoft-sharepoint/9781118655047 and books like it. It's useful to help me "think like SharePoint", though Google has been just as much help. (More, really.)