Configure WSO2 Identity Server with Active Directory?

I am trying to stand up WSO2 Identity Server 5.0.0 with Active Directory as the primary user store. I have connectivity, LDAPS, a working database, etc., can log in to the admin console as the admin user I chose, and can successfully search for AD users and see roles.
However, if I try to show the details of a user or change their password, I receive errors. For example, when attempting to change a password, I see in the logs:
TID: [0] [IS] [2016-04-15 16:14:15,135] ERROR {org.wso2.carbon.user.mgt.ui.UserAdminClient} - User testuser does not exisit in the user store {org.wso2.carbon.user.mgt.ui.UserAdminClient}
org.wso2.carbon.user.mgt.stub.UserAdminUserAdminException: UserAdminUserAdminException
but I have looked up this user and clicked the "change password" link in the resulting display, so it was able to at least find that user in the search.
I suspect that the AD roles of the user that I have configured for the LDAP connection are not sufficient to perform these tasks, but am unsure. Am I on the right trail, and if so, what are the requirements for this user? Or, is there something else to troubleshoot related to these errors?
UPDATE 4/18/2016:
OK, when I added the debug logging suggested in an answer (log4j.logger.org.wso2.carbon.user.core=DEBUG), I noticed that the tool was searching for the user by CN and not finding it:
TID: [0] [IS] [2016-04-18 10:19:42,394] DEBUG {org.wso2.carbon.identity.mgt.IdentityMgtEventListener} - Pre update credential by admin is called in IdentityMgtEventListener {org.wso2.carbon.identity.mgt.IdentityMgtEventListener}
TID: [0] [IS] [2016-04-18 10:19:42,394] DEBUG {org.wso2.carbon.identity.mgt.IdentityMgtEventListener} - Updating credentials of user astudent16 by admin with a non-empty password {org.wso2.carbon.identity.mgt.IdentityMgtEventListener}
TID: [0] [IS] [2016-04-18 10:19:42,394] DEBUG {org.wso2.carbon.user.core.ldap.ReadOnlyLDAPUserStoreManager} - Searching for user astudent16 {org.wso2.carbon.user.core.ldap.ReadOnlyLDAPUserStoreManager}
TID: [0] [IS] [2016-04-18 10:19:42,409] DEBUG {org.wso2.carbon.user.core.ldap.ReadOnlyLDAPUserStoreManager} - Searching for user with SearchFilter: (&(objectClass=user)(cn=astudent16)) in SearchBase: {org.wso2.carbon.user.core.ldap.ReadOnlyLDAPUserStoreManager}
TID: [0] [IS] [2016-04-18 10:19:42,472] DEBUG {org.wso2.carbon.user.core.ldap.ReadOnlyLDAPUserStoreManager} - Name in space for astudent16 is null {org.wso2.carbon.user.core.ldap.ReadOnlyLDAPUserStoreManager}
TID: [0] [IS] [2016-04-18 10:19:42,472] DEBUG {org.wso2.carbon.user.core.ldap.ReadOnlyLDAPUserStoreManager} - User: astudent16 exist: false {org.wso2.carbon.user.core.ldap.ReadOnlyLDAPUserStoreManager}
TID: [0] [IS] [2016-04-18 10:19:42,487] ERROR {org.wso2.carbon.user.mgt.ui.UserAdminClient} - User astudent16 does not exisit in the user store {org.wso2.carbon.user.mgt.ui.UserAdminClient}
org.wso2.carbon.user.mgt.stub.UserAdminUserAdminException: UserAdminUserAdminException
I referred back to the documentation, and the vendor documentation suggests that, for Active Directory, the UserNameAttribute in user-mgt.xml be set to CN; we had it set to sAMAccountName.
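For reference, a minimal sketch of the relevant properties in our ActiveDirectoryUserStoreManager section of user-mgt.xml after the change (the search base here is a placeholder from our environment; the filter matches what the debug log shows):
<Property name="UserSearchBase">OU=Accounts,DC=some,DC=org</Property>
<Property name="UserNameAttribute">cn</Property>
<!-- the search filter must use the same attribute as UserNameAttribute -->
<Property name="UserNameSearchFilter">(&amp;(objectClass=user)(cn=?))</Property>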
So, we changed to CN, and now the error is different:
TID: [0] [IS] [2016-04-18 10:30:46,338] DEBUG {org.wso2.carbon.user.core.ldap.ReadOnlyLDAPUserStoreManager} - Searching for user A Student16 {org.wso2.carbon.user.core.ldap.ReadOnlyLDAPUserStoreManager}
TID: [0] [IS] [2016-04-18 10:30:46,354] DEBUG {org.wso2.carbon.user.core.ldap.ReadOnlyLDAPUserStoreManager} - Searching for user with SearchFilter: (&(objectClass=user)(cn=A Student16)) in SearchBase: {org.wso2.carbon.user.core.ldap.ReadOnlyLDAPUserStoreManager}
TID: [0] [IS] [2016-04-18 10:30:46,354] DEBUG {org.wso2.carbon.user.core.ldap.ReadOnlyLDAPUserStoreManager} - Name in space for A Student16 is CN=A Student16,OU=2016,OU=Students,OU=Accounts,DC=some,DC=org {org.wso2.carbon.user.core.ldap.ReadOnlyLDAPUserStoreManager}
TID: [0] [IS] [2016-04-18 10:30:46,354] DEBUG {org.wso2.carbon.user.core.ldap.ReadOnlyLDAPUserStoreManager} - User: A Student16 exist: true {org.wso2.carbon.user.core.ldap.ReadOnlyLDAPUserStoreManager}
TID: [0] [IS] [2016-04-18 10:30:46,463] DEBUG {org.wso2.carbon.user.core.ldap.ActiveDirectoryUserStoreManager} - value after escaping special characters in A Student16 : A Student16 {org.wso2.carbon.user.core.ldap.ActiveDirectoryUserStoreManager}
TID: [0] [IS] [2016-04-18 10:30:46,463] DEBUG {org.wso2.carbon.user.core.ldap.ActiveDirectoryUserStoreManager} - Can not access the directory service for user : A Student16 {org.wso2.carbon.user.core.ldap.ActiveDirectoryUserStoreManager}
javax.naming.NameNotFoundException: [LDAP: error code 32 - 0000208D: NameErr: DSID-03100238, problem 2001 (NO_OBJECT), data 0, best match of:
'DC=some,DC=org'
]; remaining name 'CN=A Student16'
at com.sun.jndi.ldap.LdapCtx.mapErrorCode(LdapCtx.java:3112)

Add the following line to the <IS_HOME>/repository/conf/log4j.properties file, and try the scenario which failed. Also attach the wso2carbon.log file to analyze the issue.
log4j.logger.org.wso2.carbon.user.core=DEBUG
Also attach the user-mgt.xml file.

I think I am having the same problem you are. I was able to get the password recovery to work for a user by changing the UserSearchBase so it fully referenced the OU for the user.
In our case we have users in:
OU=FacStaff,OU=People,DC=SomeCollege,DC=edu
OU=Students,OU=People,DC=SomeCollege,DC=edu
OU=Sysusers,OU=PrivUsers,DC=SomeCollege,DC=edu
By specifying the full OU, the password recovery works for users in that OU, but users in any other OU cannot even log in.
I want to have my UserSearchBase set to DC=SomeCollege,DC=edu so it can search our whole tree, and this does work for authentication, but not anything where it has to write to Active Directory.
I can replicate the error by doing an ldapmodify with an LDIF that leaves out part of the DN for a user, so I suspect the problem lies in the updateCredentialByAdmin function: it appears to be using the CN plus the UserSearchBase where it should be using the full DN.
I have also tried using multiple UserSearchBase entries, separated with the hash character (see the sketch below). Again, it works for authentication, but not for updates.
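For reference, a sketch of the multi-base configuration we tried, using the OUs listed above, in user-mgt.xml property syntax:
<Property name="UserSearchBase">OU=FacStaff,OU=People,DC=SomeCollege,DC=edu#OU=Students,OU=People,DC=SomeCollege,DC=edu#OU=Sysusers,OU=PrivUsers,DC=SomeCollege,DC=edu</Property>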

Related

PutHDFS processor fails to write to Kerberized and TLS/SSL enabled HDFS

I get the error below in my NiFi logs when I try to write a file to HDFS. Writing a file from within the Hadoop cluster works fine, but from NiFi it fails with the following message.
2020-08-10 13:41:59,519 INFO [NiFi Web Server-32056] o.a.n.c.s.StandardProcessScheduler Starting LogMessage[id=b4bc6d2c-0173-1000-0000-00002905a41b]
2020-08-10 13:41:59,519 INFO [NiFi Web Server-32056] o.a.n.controller.StandardProcessorNode Starting LogMessage[id=b4bc6d2c-0173-1000-0000-00002905a41b]
2020-08-10 13:41:59,519 INFO [NiFi Web Server-32056] o.a.n.c.s.StandardProcessScheduler Starting LogMessage[id=b4bd264b-0173-1000-0000-000018f91304]
2020-08-10 13:41:59,519 INFO [NiFi Web Server-32056] o.a.n.controller.StandardProcessorNode Starting LogMessage[id=b4bd264b-0173-1000-0000-000018f91304]
2020-08-10 13:41:59,519 INFO [NiFi Web Server-32056] o.a.n.c.s.StandardProcessScheduler Starting GetFile[id=b4d14ae8-0173-1000-ffff-ffffe680a6a0]
2020-08-10 13:41:59,519 INFO [NiFi Web Server-32056] o.a.n.controller.StandardProcessorNode Starting GetFile[id=b4d14ae8-0173-1000-ffff-ffffe680a6a0]
2020-08-10 13:41:59,519 INFO [NiFi Web Server-32056] o.a.n.c.s.StandardProcessScheduler Starting PutHDFS[id=4d34342b-2901-125d-917f-567e466964c8]
2020-08-10 13:41:59,519 INFO [NiFi Web Server-32056] o.a.n.controller.StandardProcessorNode Starting PutHDFS[id=4d34342b-2901-125d-917f-567e466964c8]
2020-08-10 13:41:59,519 INFO [Timer-Driven Process Thread-6] o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled GetFile[id=b4d14ae8-0173-1000-ffff-ffffe680a6a0] to run with 1 threads
2020-08-10 13:41:59,519 INFO [Timer-Driven Process Thread-2] o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled LogMessage[id=b4bc6d2c-0173-1000-0000-00002905a41b] to run with 1 threads
2020-08-10 13:41:59,519 INFO [Timer-Driven Process Thread-5] o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled LogMessage[id=b4bd264b-0173-1000-0000-000018f91304] to run with 1 threads
2020-08-10 13:41:59,543 INFO [Timer-Driven Process Thread-10] o.a.hadoop.security.UserGroupInformation Login successful for user abc#UX.xyzCORP.NET using keytab file /home/abc/confFiles/abc.keytab
2020-08-10 13:41:59,544 INFO [Timer-Driven Process Thread-10] o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled PutHDFS[id=4d34342b-2901-125d-917f-567e466964c8] to run with 1 threads
2020-08-10 13:41:59,595 INFO [Thread-9481] o.a.h.h.p.d.sasl.SaslDataTransferClient SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2020-08-10 13:41:59,599 INFO [Thread-9481] org.apache.hadoop.hdfs.DataStreamer Exception in createBlockOutputStream blk_1075334640_1594409
java.io.EOFException: null
at java.io.DataInputStream.readByte(DataInputStream.java:267)
at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:308)
at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:329)
at org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier.readFieldsLegacy(BlockTokenIdentifier.java:240)
at org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier.readFields(BlockTokenIdentifier.java:221)
at org.apache.hadoop.security.token.Token.decodeIdentifier(Token.java:200)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:530)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getEncryptedStreams(SaslDataTransferClient.java:342)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:276)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:245)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:203)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:193)
at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1731)
at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1679)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:716)
2020-08-10 13:41:59,599 WARN [Thread-9481] org.apache.hadoop.hdfs.DataStreamer Abandoning BP-1824237254-0.00.64.55-1545405130172:blk_1075334640_1594409
2020-08-10 13:41:59,601 WARN [Thread-9481] org.apache.hadoop.hdfs.DataStreamer Excluding datanode DatanodeInfoWithStorage[0.00.64.57:50010,DS-d6f56418-6e18-4317-a8ec-4a5b15757728,DISK]
2020-08-10 13:41:59,605 INFO [Thread-9481] o.a.h.h.p.d.sasl.SaslDataTransferClient SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2020-08-10 13:41:59,606 INFO [Thread-9481] org.apache.hadoop.hdfs.DataStreamer Exception in createBlockOutputStream blk_1075334641_1594410
java.io.EOFException: null
at java.io.DataInputStream.readByte(DataInputStream.java:267)
at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:308)
at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:329)
at org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier.readFieldsLegacy(BlockTokenIdentifier.java:240)
at org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier.readFields(BlockTokenIdentifier.java:221)
at org.apache.hadoop.security.token.Token.decodeIdentifier(Token.java:200)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:530)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getEncryptedStreams(SaslDataTransferClient.java:342)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:276)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:245)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:203)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:193)
at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1731)
at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1679)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:716)
2020-08-10 13:41:59,606 WARN [Thread-9481] org.apache.hadoop.hdfs.DataStreamer Abandoning BP-1824237254-0.00.64.55-1545405130172:blk_1075334641_1594410
2020-08-10 13:41:59,608 WARN [Thread-9481] org.apache.hadoop.hdfs.DataStreamer Excluding datanode DatanodeInfoWithStorage[0.00.64.56:50010,DS-286b28e8-d035-4b8c-a2dd-aabb08666234,DISK]
2020-08-10 13:41:59,612 INFO [Thread-9481] o.a.h.h.p.d.sasl.SaslDataTransferClient SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2020-08-10 13:41:59,612 INFO [Thread-9481] org.apache.hadoop.hdfs.DataStreamer Exception in createBlockOutputStream blk_1075334642_1594411
java.io.EOFException: null
at java.io.DataInputStream.readByte(DataInputStream.java:267)
at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:308)
at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:329)
at org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier.readFieldsLegacy(BlockTokenIdentifier.java:240)
at org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier.readFields(BlockTokenIdentifier.java:221)
at org.apache.hadoop.security.token.Token.decodeIdentifier(Token.java:200)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:530)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getEncryptedStreams(SaslDataTransferClient.java:342)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:276)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:245)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:203)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:193)
at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1731)
at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1679)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:716)
2020-08-10 13:41:59,612 WARN [Thread-9481] org.apache.hadoop.hdfs.DataStreamer Abandoning BP-1824237254-0.00.64.55-1545405130172:blk_1075334642_1594411
2020-08-10 13:41:59,614 WARN [Thread-9481] org.apache.hadoop.hdfs.DataStreamer Excluding datanode DatanodeInfoWithStorage[0.00.64.58:50010,DS-53536364-33f4-40d6-85c2-508abf7ff023,DISK]
2020-08-10 13:41:59,618 INFO [Thread-9481] o.a.h.h.p.d.sasl.SaslDataTransferClient SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2020-08-10 13:41:59,619 INFO [Thread-9481] org.apache.hadoop.hdfs.DataStreamer Exception in createBlockOutputStream blk_1075334643_1594412
java.io.EOFException: null
at java.io.DataInputStream.readByte(DataInputStream.java:267)
at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:308)
at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:329)
at org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier.readFieldsLegacy(BlockTokenIdentifier.java:240)
at org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier.readFields(BlockTokenIdentifier.java:221)
at org.apache.hadoop.security.token.Token.decodeIdentifier(Token.java:200)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:530)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getEncryptedStreams(SaslDataTransferClient.java:342)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:276)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:245)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:203)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:193)
at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1731)
at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1679)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:716)
2020-08-10 13:41:59,619 WARN [Thread-9481] org.apache.hadoop.hdfs.DataStreamer Abandoning BP-1824237254-0.00.64.55-1545405130172:blk_1075334643_1594412
2020-08-10 13:41:59,621 WARN [Thread-9481] org.apache.hadoop.hdfs.DataStreamer Excluding datanode DatanodeInfoWithStorage[0.00.64.84:50010,DS-abba7d97-925a-4299-af86-b58fef9aaa12,DISK]
2020-08-10 13:41:59,621 WARN [Thread-9481] org.apache.hadoop.hdfs.DataStreamer DataStreamer Exception
java.io.IOException: Unable to create new block.
at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1694)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:716)
2020-08-10 13:41:59,621 WARN [Thread-9481] org.apache.hadoop.hdfs.DataStreamer Could not get block locations. Source file "/user/abc/puthdfs_test/.test.txt" - Aborting...block==null
2020-08-10 13:41:59,626 ERROR [Timer-Driven Process Thread-2] o.apache.nifi.processors.hadoop.PutHDFS PutHDFS[id=4d34342b-2901-125d-917f-567e466964c8] Failed to write to HDFS due to org.apache.nifi.processor.exception.ProcessException: IOException thrown from PutHDFS[id=4d34342b-2901-125d-917f-567e466964c8]: java.io.IOException: Could not get block locations. Source file "/user/abc/puthdfs_test/.test.txt" - Aborting...block==null: org.apache.nifi.processor.exception.ProcessException: IOException thrown from PutHDFS[id=4d34342b-2901-125d-917f-567e466964c8]: java.io.IOException: Could not get block locations. Source file "/user/abc/puthdfs_test/.test.txt" - Aborting...block==null
org.apache.nifi.processor.exception.ProcessException: IOException thrown from PutHDFS[id=4d34342b-2901-125d-917f-567e466964c8]: java.io.IOException: Could not get block locations. Source file "/user/abc/puthdfs_test/.test.txt" - Aborting...block==null
Any help would be highly appreciated. I have ruled out HDFS itself as the cause, since writes from within the cluster work.
Thanks
David

WSO2 APIM doesn't update a published API in a cluster setup

Setup: Two WSO2 APIM instances pointing to the same MySQL database. The two WSO2 instances are behind a LB; the publisher session is sticky, the API session is not. I had published the API "/public/1.0.0/abc", which previously pointed to "/api/app/v1/xyz". I updated the published API to now point to "/api/app/v1/abc".
Issue: After the update, accessing the published API through curl sometimes returns the expected result, but other times it throws a 403 error. It seems the update in one instance did not propagate to the other. The steps below make it work, but they mean downtime, which we are trying to avoid.
The only way to make this work is by doing the following:
1. Shut down the wso2am app on the working instance (Instance1)
2. Update the API again on WSO2 publisher so Instance2 picks up the change
3. Start the Instance1 wso2am app back up
WSO2 Instance1 logs where the call works:
==> /usr/lib64/wso2/wso2am/2.6.0/repository/logs/http_access_.log <==
- <private-subnet-ip> - - [04/Jun/2019:20:02:00 +0000] "GET /services/Version HTTP/1.1" - - "-" "ELB-HealthChecker/2.0"
<my-ip> <private-subnet-ip> - - [04/Jun/2019:20:02:22 +0000] "GET /public/1.0.0/abc HTTP/1.1" - - "-" "curl/7.54.0"
<my-ip> <private-subnet-ip> - - [04/Jun/2019:20:02:22 +0000] "GET /api/app/v1/abc HTTP/1.1" - - "-" "Synapse-PT-HttpComponents-NIO"
- <private-subnet-ip> - [04/Jun/2019:20:02:00 +0000] "- - " 200 - "-" "-"
- <private-subnet-ip> - [04/Jun/2019:20:02:22 +0000] "- - " 200 - "-" "-"
- <private-subnet-ip> - [04/Jun/2019:20:02:22 +0000] "- - " 200 - "-" "-"
WSO2 Instance2 logs where the call doesn't work:
==> /usr/lib64/wso2/wso2am/2.6.0/repository/logs/wso2carbon.log <==
TID: [-1234] [] [2019-06-04 20:01:24,511] WARN {org.apache.synapse.rest.API} - Trying to access API : admin--PublicAPI on restricted transport chanel [https] {org.apache.synapse.rest.API}
==> /usr/lib64/wso2/wso2am/2.6.0/repository/logs/wso2-apigw-errors.log <==
2019-06-04 20:01:24,511 [-] [PassThroughMessageProcessor-123] WARN API Trying to access API : admin--PublicAPI on restricted transport chanel [https]
==> /usr/lib64/wso2/wso2am/2.6.0/repository/logs/wso2carbon.log <==
TID: [-1234] [] [2019-06-04 20:01:24,511] INFO {org.apache.synapse.mediators.builtin.LogMediator} - STATUS = Message dispatched to the main sequence. Invalid URL., RESOURCE = /public/1.0.0/abc {org.apache.synapse.mediators.builtin.LogMediator}
==> /usr/lib64/wso2/wso2am/2.6.0/repository/logs/wso2-apigw-service.log <==
2019-06-04 20:01:24,511 [-] [PassThroughMessageProcessor-123] INFO __SynapseService STATUS = Message dispatched to the main sequence. Invalid URL., RESOURCE = /public/1.0.0/abc
This happens if you haven't shared the API artifacts between the two servers; only one server has the updated artifact. When you publish an API, it creates a Synapse (API) artifact which is stored in the file system of the server, at SERVER_HOME/repository/deployment/server/synapse/default/api. If you are running more than one instance, you need a mechanism to sync those artifacts between the servers. There are several options:
Use a shared file system for the nodes
Rsync option - always point to node 1 and publish the API there; from node 1, rsync the artifacts to node 2. In api-manager.xml, the APIGateway ServerURL should be changed to node 1 on both nodes, as sketched below.
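For illustration, a minimal sketch of the gateway environment in api-manager.xml on both nodes (the node 1 host name is a placeholder; other elements of the environment are omitted):
<APIGateway>
    <Environments>
        <Environment type="hybrid" api-console="true">
            <Name>Production and Sandbox</Name>
            <!-- both nodes publish through node 1, which then rsyncs artifacts to node 2 -->
            <ServerURL>https://node1.example.com:9443/services/</ServerURL>
        </Environment>
    </Environments>
</APIGateway>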

Issues while configuring the API Subscription Workflow with WSO2 BPS

So, I have my WSO2 BPS 3.6.0 configured to support SSL and a custom hostname (mydomain.domain.com:9445), and I'm trying to implement the API Subscription Workflow by following this documentation.
I've performed the following steps:
Set the port offset of WSO2 BPS to 2; it is running fine on port 9445 (see the carbon.xml sketch after this list)
Edited the wsa:Address tag in both SubscriptionService.epr and SubscriptionCallbackService.epr located in API-M_HOME/business-processes/epr, as the BPS server had a customized hostname instead of localhost (not sure if performing this step was right)
Copy-pasted the epr folder from API-M_HOME/business-processes/epr to BPS_HOME/repository/conf/epr
Added the required BPEL package and human task accordingly
Navigated to the Carbon console from APIM and edited workflow-extensions.xml accordingly
Set the TaskCoordinationEnabled tag of b4p-coordination-config.xml (located in BPS_HOME\repository\conf) to true
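For reference, a minimal sketch of the port offset setting from the first step above, in BPS_HOME\repository\conf\carbon.xml (surrounding elements omitted):
<Ports>
    <!-- an offset of 2 moves the default 9443 HTTPS port to 9445 -->
    <Offset>2</Offset>
</Ports>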
Other required configurations to consider:
At API Manager End:
site.json file located at APIM_Home\repository\deployment\server\jaggeryapps\admin\site\conf
{
    "theme": {
        "base": "wso2",
        "subtheme": "modern"
    },
    "context": "/admin",
    "request_url": "READ_FROM_REQUEST",
    "tasksPerPage": 10,
    "allowedPermission": "/permission/admin/manage/apim_admin",
    "workflows": {
        "workFlowServerURL": "https://mydomain.domain.com:9445/services/"
    },
    "ssoConfiguration": {
        "enabled": "false",
        "issuer": "API_WORKFLOW_ADMIN",
        "identityProviderURL": "https://localhost:9443/samlsso",
        "keyStorePassword": "",
        "identityAlias": "",
        "keyStoreName": "",
        "verifyAssertionValidityPeriod": "true",
        "audienceRestrictionsEnabled": "true",
        "responseSigningEnabled": "true",
        "assertionSigningEnabled": "true",
        "assertionEncryptionEnabled": "false",
        "idpInit": "false",
        "idpInitSSOURL": "https://localhost:9443/samlsso?spEntityID=API_WORKFLOW_ADMIN",
        "externalLogoutPage": "https://localhost:9443/samlsso?slo=true"
    },
    "reverseProxy": {
        "enabled": false,
        // values true, false, "auto" - will look for X-Forwarded-* headers
        "host": "sample.proxydomain.com",
        // if the reverse proxy does not have a domain name, use its IP
        "context": ""
        //"regContext": "" // use only if a different path is used for the registry
    }
}
The workflow configuration in api-manager.xml
At BPS end:
carbon.xml
Issue: Whenever a user navigates to the APIM Store and subscribes to any API, the subscription request is listed in the APIM admin console. When I select APPROVE from the provided drop-down list and click the COMPLETE button, the record vanishes. However, this is the error I see in WSO2's CMD windows:
APIM's cmd window
[2017-11-09 00:13:17,022] INFO - TimeoutHandler This engine will expire all callbacks after GLOBAL_TIMEOUT: 120 seconds, irrespective of the timeout action, after the specified or optional timeout
[2017-11-09 00:13:17,164] ERROR - TargetHandler I/O error: Host name verification failed for host : localhost
javax.net.ssl.SSLException: Host name verification failed for host : localhost
at org.apache.synapse.transport.http.conn.ClientSSLSetupHandler.verify(ClientSSLSetupHandler.java:171)
at org.apache.http.nio.reactor.ssl.SSLIOSession.doHandshake(SSLIOSession.java:308)
at org.apache.http.nio.reactor.ssl.SSLIOSession.isAppInputReady(SSLIOSession.java:410)
at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:119)
at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:159)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:338)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:316)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:277)
at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:105)
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:586)
at java.lang.Thread.run(Thread.java:745)
[2017-11-09 00:13:17,188] WARN - EndpointContext Endpoint : AnonymousEndpoint with address https://localhost:9443/store/site/blocks/workflow/workflow-listener/ajax/workflow-listener.jag will be marked SUSPENDED as it failed
[2017-11-09 00:13:17,193] WARN - EndpointContext Suspending endpoint : AnonymousEndpoint with address https://localhost:9443/store/site/blocks/workflow/workflow-listener/ajax/workflow-listener.jag - current suspend duration is : 30000ms - Next retry after : Thu Nov 09 00:13:47 EST 2017
[2017-11-09 00:13:17,201] INFO - LogMediator STATUS = Executing default 'fault' sequence, ERROR_CODE = 101500, ERROR_MESSAGE = Error in Sender
[2017-11-09 00:14:17,238] INFO - SourceHandler Writer null when calling informWriterError
[2017-11-09 00:14:17,238] WARN - SourceHandler Connection time out after request is read: http-incoming-1 Socket Timeout : 60000 Remote Address : /10.10.30.130:49249
[2017-11-09 00:14:24,671] ERROR - AxisEngine The endpoint reference (EPR) for the Operation not found is /services/WorkflowCallbackService and the WSA Action = null. If this EPR was previously reachable, please contact the server administrator.
org.apache.axis2.AxisFault: The endpoint reference (EPR) for the Operation not found is /services/WorkflowCallbackService and the WSA Action = null. If this EPR was previously reachable, please contact the server administrator.
at org.apache.axis2.engine.DispatchPhase.checkPostConditions(DispatchPhase.java:102)
at org.apache.axis2.engine.Phase.invoke(Phase.java:329)
at org.apache.axis2.engine.AxisEngine.invoke(AxisEngine.java:261)
at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:167)
at org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(ServerWorker.java:325)
at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:158)
at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[2017-11-09 00:14:24,673] ERROR - ServerWorker Error processing GET request for : /services/WorkflowCallbackService
org.apache.axis2.AxisFault: The endpoint reference (EPR) for the Operation not found is /services/WorkflowCallbackService and the WSA Action = null. If this EPR was previously reachable, please contact the server administrator.
at org.apache.axis2.engine.DispatchPhase.checkPostConditions(DispatchPhase.java:102)
at org.apache.axis2.engine.Phase.invoke(Phase.java:329)
at org.apache.axis2.engine.AxisEngine.invoke(AxisEngine.java:261)
at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:167)
at org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(ServerWorker.java:325)
at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:158)
at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
BPS's cmd window:
[2017-11-09 00:14:16,738] ERROR {org.wso2.carbon.bpel.core.ode.integration.PartnerService} - Error sending message to Axis2 for ODE mex {PartnerRoleMex#hqejbhcnphrcr2a32g83oh [PID {http://workflow.subscription.apimgt.carbon.wso2.org}SubscriptionApprovalWorkFlowProcess-1] calling org.apache.ode.bpel.epr.WSAEndpoint#705fc38f.resumeEvent(...) Status REQUEST}
org.apache.axis2.AxisFault: Read timed out
at org.apache.axis2.AxisFault.makeFault(AxisFault.java:430)
at org.apache.axis2.transport.http.HTTPSender.sendViaPost(HTTPSender.java:199)
at org.apache.axis2.transport.http.HTTPSender.send(HTTPSender.java:77)
at org.apache.axis2.transport.http.CommonsHTTPTransportSender.writeMessageWithCommons(CommonsHTTPTransportSender.java:451)
at org.apache.axis2.transport.http.CommonsHTTPTransportSender.invoke(CommonsHTTPTransportSender.java:278)
at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:442)
at org.apache.axis2.description.OutOnlyAxisOperationClient.executeImpl(OutOnlyAxisOperation.java:297)
at org.apache.axis2.client.OperationClient.execute(OperationClient.java:149)
at org.wso2.carbon.bpel.core.ode.integration.utils.AxisServiceUtils.invokeService(AxisServiceUtils.java:323)
at org.wso2.carbon.bpel.core.ode.integration.PartnerService.invoke(PartnerService.java:333)
at org.wso2.carbon.bpel.core.ode.integration.BPELMessageExchangeContextImpl.invokePartner(BPELMessageExchangeContextImpl.java:43)
at org.apache.ode.bpel.engine.BpelRuntimeContextImpl.invoke(BpelRuntimeContextImpl.java:897)
at org.apache.ode.bpel.runtime.INVOKE.run(INVOKE.java:130)
at sun.reflect.GeneratedMethodAccessor54.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.apache.ode.jacob.vpu.JacobVPU$JacobThreadImpl.run(JacobVPU.java:451)
at org.apache.ode.jacob.vpu.JacobVPU.execute(JacobVPU.java:139)
at org.apache.ode.bpel.engine.BpelRuntimeContextImpl.execute(BpelRuntimeContextImpl.java:1002)
at org.apache.ode.bpel.engine.PartnerLinkMyRoleImpl.invokeInstance(PartnerLinkMyRoleImpl.java:250)
at org.apache.ode.bpel.engine.BpelProcess$1.invoke(BpelProcess.java:288)
at org.apache.ode.bpel.engine.BpelProcess.invokeProcess(BpelProcess.java:224)
at org.apache.ode.bpel.engine.BpelProcess.invokeProcess(BpelProcess.java:279)
at org.apache.ode.bpel.engine.BpelProcess.handleJobDetails(BpelProcess.java:434)
at org.apache.ode.bpel.engine.BpelEngineImpl.onScheduledJob(BpelEngineImpl.java:558)
at org.apache.ode.bpel.engine.BpelServerImpl.onScheduledJob(BpelServerImpl.java:467)
at org.apache.ode.scheduler.simple.SimpleScheduler$RunJob$1.call(SimpleScheduler.java:633)
at org.apache.ode.scheduler.simple.SimpleScheduler$RunJob$1.call(SimpleScheduler.java:627)
at org.apache.ode.scheduler.simple.SimpleScheduler.execTransaction(SimpleScheduler.java:298)
at org.apache.ode.scheduler.simple.SimpleScheduler.execTransaction(SimpleScheduler.java:253)
at org.apache.ode.scheduler.simple.SimpleScheduler$RunJob.call(SimpleScheduler.java:627)
at org.apache.ode.scheduler.simple.SimpleScheduler$RunJob.call(SimpleScheduler.java:611)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:150)
at java.net.SocketInputStream.read(SocketInputStream.java:121)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
at sun.security.ssl.InputRecord.read(InputRecord.java:503)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:961)
at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:918)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
at org.apache.commons.httpclient.HttpParser.readRawLine(HttpParser.java:78)
at org.apache.commons.httpclient.HttpParser.readLine(HttpParser.java:106)
at org.apache.commons.httpclient.HttpConnection.readLine(HttpConnection.java:1116)
at org.apache.commons.httpclient.MultiThreadedHttpConnectionManager$HttpConnectionAdapter.readLine(MultiThreadedHttpConnectionManager.java:1413)
at org.apache.commons.httpclient.HttpMethodBase.readStatusLine(HttpMethodBase.java:1973)
at org.apache.commons.httpclient.HttpMethodBase.readResponse(HttpMethodBase.java:1735)
at org.apache.commons.httpclient.HttpMethodBase.execute(HttpMethodBase.java:1098)
at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:398)
at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
at org.apache.axis2.transport.http.AbstractHTTPSender.executeMethod(AbstractHTTPSender.java:659)
at org.apache.axis2.transport.http.HTTPSender.sendViaPost(HTTPSender.java:195)
... 34 more
What could be the issue here? Any ideas? Do let me know. Thanks.
Note that the BPS workflow for API STATE CHANGE works just fine with the same configuration.
Note that you are making HTTPS calls to specific domain names:
Host name verification failed for host : localhost at org.apache.synapse.transport.http.conn.ClientSSLSetupHandler.verify(ClientSSLSetupHandler.java:171) at org.apache.http.nio.reactor.ssl.SSLIOSession.doHandshake(SSLIOSession .java:308)
The certificate provided is CN=localhost, so the host verification indeed fails.
What you can do about it:
The simplest way is switching to HTTP when on a secure network (behind a firewall, VPN, ...)
Update the SSL certificates of BPS and APIM to match their hostnames; each also has to trust the other's certificate (or its issuer)
Disable SSL hostname validation in axis2.xml by setting <parameter name="HostnameVerifier">AllowAll</parameter>, as sketched below (I do not recommend it: good for DEV, VERY BAD for PROD)
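For illustration, a minimal sketch of where that parameter goes in the https transport sender of axis2.xml on the APIM side (the sender's other parameters are left unchanged):
<transportSender name="https" class="org.apache.synapse.transport.passthru.PassThroughHttpSSLSender">
    <parameter name="non-blocking" locked="false">true</parameter>
    <!-- DEV only: disables hostname verification for outgoing TLS connections -->
    <parameter name="HostnameVerifier">AllowAll</parameter>
    <!-- keystore/truststore parameters unchanged -->
</transportSender>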

WSO2 API Publisher SAML SSO login fails

I configured WSO2 API Publisher (1.10.0) SAML SSO; however, login fails with the following error:
TID: [-1234] [] [2016-02-10 18:33:16,643] WARN {org.wso2.carbon.identity.sso.saml.processors.SPInitSSOAuthnRequestProcessor} - Destination validation for Authentication Request failed. Received: [null]. Expected one in the list: [https://identity.mydomain.pt:443/samlsso]
File publisher/site/conf/site.json:
"ssoConfiguration" : {
"enabled" : "true",
"issuer" : "apis-publisher",
"identityProviderURL" : "https://identity.mydomain.pt:443/samlsso",
"keyStorePassword" : "wso2carbon",
"identityAlias" : "wso2carbon",
"responseSigningEnabled":"true",
"keyStoreName" :"/home/wso2/wso2am-1.10.0/repository/resources/security/wso2carbon.jks",
//"nameIdPolicy" : "urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified", //If not specified, 'urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified' will be used
},
and the service provider configuration (sso-idp-config.xml):
<!-- API MANAGER PUBLISHER -->
<ServiceProvider>
<Issuer>apis-publisher</Issuer>
<AssertionConsumerServiceURLs>
<AssertionConsumerServiceURL>https://mgt.apis.mydomain.pt:443/publisher/jagg/jaggery_acs.jag</AssertionConsumerServiceURL>
<AssertionConsumerServiceURL>https://mgt.apis.mydomain.pt/publisher/jagg/jaggery_acs.jag</AssertionConsumerServiceURL>
</AssertionConsumerServiceURLs>
<DefaultAssertionConsumerServiceURL>https://mgt.apis.mydomain.pt:443/publisher/jagg/jaggery_acs.jag</DefaultAssertionConsumerServiceURL>
<EnableSingleLogout>true</EnableSingleLogout>
<SLOResponseURL/>
<SLORequestURL/>
<SAMLDefaultSigningAlgorithmURI>http://www.w3.org/2000/09/xmldsig#rsa-sha1</SAMLDefaultSigningAlgorithmURI>
<SAMLDefaultDigestAlgorithmURI>http://www.w3.org/2000/09/xmldsig#sha1</SAMLDefaultDigestAlgorithmURI>
<SignResponse>true</SignResponse>
<ValidateSignatures>true</ValidateSignatures>
<EncryptAssertion>false</EncryptAssertion>
<CertAlias>wso2carbon</CertAlias>
<EnableAttributeProfile>false</EnableAttributeProfile>
<IncludeAttributeByDefault>false</IncludeAttributeByDefault>
<ConsumingServiceIndex/>
<EnableAudienceRestriction>false</EnableAudienceRestriction>
<AudiencesList>
<Audience>apis-publisher</Audience>
</AudiencesList>
<EnableRecipients>false</EnableRecipients>
<RecipientList>
<Recipient/>
</RecipientList>
<EnableIdPInitiatedSSO>false</EnableIdPInitiatedSSO>
<EnableIdPInitSLO>false</EnableIdPInitSLO>
<ReturnToURLList>
<ReturnToURL/>
</ReturnToURLList>
</ServiceProvider>
I did the same configuration for the API Store and login is working.
I solved my problem by turning off signature validation:
<ValidateSignatures>false</ValidateSignatures>

WSO2 ESB HTTPS endpoint, Call mediator - "But a callback is not registered (anymore) to process this response" error

While using the Call mediator and Send mediator (calling an HTTPS endpoint with a UsernameToken security policy at the endpoint), I'm getting the following error:
[2015-01-03 03:38:23,058] DEBUG - SynapseCallbackReceiver Callback removed for request message id : urn:uuid:160c12bd-1286-4ef0-873e-93777380a4a2. Pending callbacks count : 1
[2015-01-03 03:38:23,061] DEBUG - TargetHandler http-outgoing-19: Closed
[2015-01-03 03:38:23,065] DEBUG - TargetHandler http-outgoing-19: Keep-Alive Connection closed
[2015-01-03 03:38:23,064] WARN - SynapseCallbackReceiver Synapse received a response for the request with message Id : urn:uuid:160c12bd-1286-4ef0-873e-93777380a4a2 But a callback is not registered (anymore) to process this response
[2015-01-03 03:38:23,067] DEBUG - LoggingNHttpClientConnection http-outgoing-19: Shutdown connection
[2015-01-03 03:38:51,472] DEBUG - access - 68.232.203.67 - - [03/Jan/2015:03:38:51 +0530] "POST /Service.asmx HTTP/1.1" - - "-" "Synapse-PT-HttpComponents-NIO"
[2015-01-03 03:38:51,476] DEBUG - access - 68.232.203.67 - [03/Jan/2015:03:38:51 +0530] "- - " 200
I can see the expected response on the console (WIRE>>), but I get this error and the response is not being correlated. Please let me know if anyone has faced this issue before and how to resolve it.
Thanks
The reason might be that your service endpoint takes too long to respond. Increase the timeout value in the endpoint configuration and the synapse.global_timeout_interval value in the synapse.properties file.
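For illustration, a minimal sketch of an endpoint-level timeout in Synapse configuration (the endpoint name, URI, and duration are placeholders; synapse.global_timeout_interval in synapse.properties should be raised to at least the same value):
<endpoint name="BackendEP">
    <address uri="https://backend.example.com/Service.asmx">
        <!-- wait up to 120 s for a response before the callback is discarded -->
        <timeout>
            <duration>120000</duration>
            <responseAction>fault</responseAction>
        </timeout>
    </address>
</endpoint>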