Blocking in classloading while querying Hazelcast - classloader

We are using Hazelcast as a distributed cache. After the application runs for some time, we start to see blocking in classloading. The stack trace is as follows:
java.lang.Thread.State: BLOCKED (on object monitor)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:404)
    - locked <0x00002acaac4c4718> (a java.lang.Object)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at com.hazelcast.nio.ClassLoaderUtil.tryLoadClass(ClassLoaderUtil.java:124)
    at com.hazelcast.nio.ClassLoaderUtil.loadClass(ClassLoaderUtil.java:97)
    at com.hazelcast.nio.IOUtil$1.resolveClass(IOUtil.java:113)
    at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1613)
    at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1518)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1774)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
    at com.hazelcast.nio.serialization.DefaultSerializers$ObjectSerializer.read(DefaultSerializers.java:196)
    at com.hazelcast.nio.serialization.StreamSerializerAdapter.toObject(StreamSerializerAdapter.java:65)
    at com.hazelcast.nio.serialization.SerializationServiceImpl.toObject(SerializationServiceImpl.java:260)
    at com.hazelcast.spi.impl.NodeEngineImpl.toObject(NodeEngineImpl.java:186)
    at com.hazelcast.map.impl.AbstractMapServiceContextSupport.toObject(AbstractMapServiceContextSupport.java:42)
    at com.hazelcast.map.impl.DefaultMapServiceContext.toObject(DefaultMapServiceContext.java:28)
    at com.hazelcast.map.impl.proxy.MapProxySupport.toObject(MapProxySupport.java:1038)
    at com.hazelcast.map.impl.proxy.MapProxyImpl.get(MapProxyImpl.java:84)
Hazelcast is loading the class every time it deserializes an object. I am not sure why classloading is required each time.
Can somebody please help?

This is not Hazelcast-specific: whenever you create an instance, you have to ask the classloader for the class, whether you use reflection or a new call. The problem really begins when synchronized classloaders come into play (as in webapps and similar environments). Hazelcast obviously has to deserialize a lot and therefore requests a lot of classes.
The internal deserialization is fairly well optimized by now (by caching the constructor instances, as far as I remember), but Java standard serialization (the one you are using) always asks for the class, and classes aren't yet cached.
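For illustration, here is a minimal, hypothetical JDK-only sketch (not Hazelcast's internals) of what class caching at this layer could look like: an ObjectInputStream subclass that memoizes resolved classes, so the classloader (and its monitor) is only hit once per class name.

```java
import java.io.*;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: memoize resolved classes so repeated deserialization
// of the same types skips ClassLoader.loadClass after the first lookup.
public class Demo {
    static class CachingObjectInputStream extends ObjectInputStream {
        private static final Map<String, Class<?>> CACHE = new ConcurrentHashMap<>();

        CachingObjectInputStream(InputStream in) throws IOException {
            super(in);
        }

        @Override
        protected Class<?> resolveClass(ObjectStreamClass desc)
                throws IOException, ClassNotFoundException {
            Class<?> cached = CACHE.get(desc.getName());
            if (cached != null) {
                return cached; // skip the classloader entirely
            }
            Class<?> loaded = super.resolveClass(desc); // hits the classloader once
            CACHE.put(desc.getName(), loaded);
            return loaded;
        }
    }

    // Serialize and deserialize an object through the caching stream.
    static Object roundTrip(Object value) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(value);
        }
        try (ObjectInputStream ois =
                 new CachingObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip(java.util.Arrays.asList(1, 2, 3)));
    }
}
```

This only demonstrates the caching idea; whether it helps in your deployment depends on which classloader is actually contended.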

Related

Can an actor have multiple addresses?

Say I wish to model a physical individual with an actor. Such an individual has multiple unique aliases, e.g. email address, social security number, passport number, etc.
I want to merge all data associated with any alias.
Example:
Transaction - ID
#1 - A,B
#2 - B,C
#3 - D
If I assign the actor address by ID, I should have only 2 actors: the first with 3 different addresses (A, B, C), containing transactions #1 and #2; the second with address D (but not limited to only D), containing transaction #3.
#1, #2 - A,B,C [Actor 1]
#3 - D [Actor 2]
Additionally, if transaction #4 should arrive with IDs [C,D], I will be left with 1 actor containing all transactions and all aliases (A,B,C,D).
#1,#2,#3,#4 - A,B,C,D [Actor 1]
Can an actor have multiple addresses, or is there an alternative idiomatic pattern to combine actors?
An actor has only one address.
But you can model each alias as an actor which forwards messages to a target.
An example of this would be along the lines of the following (in Scala, with untyped/classic Akka; things like constructor parameters, Props instances, etc. omitted for brevity):
object AliasActor {
  case class AliasFor(ref: ActorRef)
}

class AliasActor extends Actor {
  import AliasActor.AliasFor

  override def receive: Receive = {
    case AliasFor(ref) =>
      // If there's some state associated with this alias that should be forwarded, forward it here
      context.become(aliased(ref))
  }

  def aliased(ref: ActorRef): Receive = {
    case AliasFor(_) =>
      () // Explicitly ignore this message (could log, etc.)
    case msg =>
      ref ! msg
  }
}
In other words, each alias is itself an actor: once it's told which actor it's an alias for, it forwards any message it receives to that actor, making a send to the alias equivalent to a send to its target (at the cost of some indirection).
You may find cluster sharding a better fit than working with actor addresses, even in the single node case.
In general, there is no universal way to combine 2 actors. You have to design their protocol to allow the state of one to be incorporated into the other (or the state of both to be incorporated into a new actor), and then have one forward to the other (or have both forward to the new actor).

Writing flows using accounts in corda

I am using accounts in Corda. New accounts are created successfully in my code, but I am facing difficulties with these two things:
1.) How to check that an account was actually created and is present in the node, i.e. whether we can list all the accounts in Corda.
2.) How to write a responder flow for accounts; my transaction flow is not working properly. Does anything need to change in the classic responder-flow code once we start using the accounts library?
My code is as follows:
@InitiatedBy(Proposal.class)
public static class ProposalAcceptance extends FlowLogic<Void> {
    // private variable
    private final FlowSession counterpartySession;

    // Constructor
    public ProposalAcceptance(FlowSession counterpartySession) {
        this.counterpartySession = counterpartySession;
    }

    @Suspendable
    @Override
    public Void call() throws FlowException {
        SignedTransaction signedTransaction = subFlow(new SignTransactionFlow(counterpartySession) {
            @Suspendable
            @Override
            protected void checkTransaction(SignedTransaction stx) throws FlowException {
                /*
                 * SignTransactionFlow will automatically verify the transaction and its signatures before signing it.
                 * However, just because a transaction is contractually valid doesn't mean we necessarily want to sign.
                 * What if we don't want to deal with the counterparty in question, or the value is too high,
                 * or we're not happy with the transaction's structure? checkTransaction
                 * allows us to define these additional checks. If any of these conditions are not met,
                 * we will not sign the transaction - even if the transaction and its signatures are contractually valid.
                 * ----------
                 * For this hello-world cordapp, we will not implement any additional checks.
                 */
            }
        });
        // Store the transaction in the database.
        subFlow(new ReceiveFinalityFlow(counterpartySession));
        return null;
    }
}
Accounts are internally simply Corda states of type AccountInfo, so you can query the vault to list all the accounts the node knows about using:
run vaultQuery contractStateType: com.r3.corda.lib.accounts.contracts.states.AccountInfo
There isn't anything specific that changes in the responder flow; please make sure you are using the correct sessions in the initiator flow. Take a look at a few of the samples available in the samples repository here: https://github.com/corda/samples-java/tree/master/Accounts
To check that an account was created, you need to write flow tests; a great way to learn is to look at how R3 engineers conduct their tests.
For instance, you can find here the test scenarios for CreateAccount flow.
To get an account, the library has a very useful service KeyManagementBackedAccountService with different methods to get an account (by name, UUID, or PublicKey); have a look here.
Now, regarding requesting the signature of an account, one important thing to understand is that it's not the account that signs the transaction; it's the node that hosts the account that signs on its behalf.
So let's say you have 3 nodes (A, B, and C); A initiates a flow and requests the signature of 10 accounts (5 are hosted on B, and 5 are hosted on C).
After A signs the initial transaction, it creates FlowSessions to collect signatures.
Since it's the host nodes that sign on behalf of accounts, in our example you only need 2 FlowSessions: one with node B (so it signs on behalf of the 5 accounts it hosts) and one with node C (for the other 5 accounts).
In the responder flow, nodes B and C will receive the transaction that was signed by the initiator.
Out of the box, when a node receives a transaction it looks at all of the required signatures, and for each required signature for which it owns the private key, it provides that signature.
Meaning, when node B receives the transaction, it will see 10 required signatures; because it hosts 5 of the accounts (i.e. it owns the private keys for those 5 accounts), it will automatically provide 5 signatures.
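The session arithmetic above can be sketched in plain Java (the node and account names here are invented for illustration; in a real CorDapp you would resolve each account's host node via the accounts library):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

// Hypothetical sketch: given a mapping of required signer accounts to the
// nodes that host them, the number of FlowSessions needed is the number of
// distinct host nodes, not the number of accounts.
public class SessionCount {
    static Set<String> sessionsFor(Map<String, String> accountToHost) {
        // One session per distinct host node.
        return new TreeSet<>(accountToHost.values());
    }

    public static void main(String[] args) {
        Map<String, String> accountToHost = new LinkedHashMap<>();
        for (int i = 1; i <= 5; i++) accountToHost.put("acct" + i, "NodeB");
        for (int i = 6; i <= 10; i++) accountToHost.put("acct" + i, "NodeC");

        Set<String> hosts = sessionsFor(accountToHost);
        // 10 accounts across 2 host nodes -> 2 sessions
        System.out.println(hosts.size() + " sessions: " + hosts);
    }
}
```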

Is there any difference where we specify includes in Rails activerecord

I am surprised by the includes method's behaviour. Let's take an example:
Case 1:
User.includes(:organizations, :category, subscriptions: :plan).coaches
Time taken: 200ms
Case 2:
User.coaches.includes(:organizations, :category, subscriptions: :plan)
Time taken: 20 ms
Both cases were executed without caching.
Does it make a difference where we specify includes?

Encrypt and sign a message using Microsoft Security Support Provider Interface (SSPI)

I want to use Microsoft's Security Support Provider Interface (SSPI) in a Windows Domain environment (with Kerberos) to send encrypted and signed messages between two entities (in C++).
Based on the documentation in MSDN, there are the two functions MakeSignature() and EncryptMessage() [1], but neither the documentation nor the example code [2] explicitly answers the question of how to send data that is both encrypted and signed (i.e. encrypt-then-MAC).
Can anyone confirm that I have to manually invoke EncryptMessage() and MakeSignature() in sequence to get the desired result? Or am I missing something, and EncryptMessage() has a way to directly create a signature of the encrypted data?
[1] MSDN documentation of EncryptMessage() and MakeSignature()
https://msdn.microsoft.com/en-us/library/windows/desktop/aa378736(v=vs.85).aspx
https://msdn.microsoft.com/en-us/library/windows/desktop/aa375378(v=vs.85).aspx
[2] MSDN Example Code
https://msdn.microsoft.com/en-us/library/windows/desktop/aa380531(v=vs.85).aspx
---- Reply to Remus Rusanu's answer 2017-03-09 ---------------------------
Thanks @Remus Rusanu for your answer; I hadn't taken the GSSAPI interoperability document into account yet.
Here it is stated that "GSS_Wrap and GSS_Unwrap are used for both integrity and privacy with the use of privacy controlled by the value of the "conf_flag" argument." and that "The SSPI equivalent to GSS_Wrap is EncryptMessage (Kerberos) for both integrity and privacy".
You said that "EncryptMessage [...] will do the signing too, if the negotiated context requires it." To me this means that at least the following fContextReq flags need to be set for InitializeSecurityContext():
ISC_REQ_CONFIDENTIALITY
ISC_REQ_INTEGRITY
Can you (or somebody else) confirm this?
---- Update 2017-03-16 ----------------------------------------------------------------
After further research I came up with the following insights:
1. The Kerberos-specific EncryptMessage() function does not provide message integrity, regardless of how the security context was initialized.
2. The general EncryptMessage() and DecryptMessage() functions support creating and verifying message integrity only because some SSPs support it - Kerberos does not.
If DecryptMessage checked the message's integrity, there would have to be a corresponding error return code for a modified message. The general DecryptMessage interface lists the error code "SEC_E_MESSAGE_ALTERED", which is described as "The message has been altered. Used with the Digest and Schannel SSPs."
The specific DecryptMessage interfaces for the Digest and Schannel SSPs list SEC_E_MESSAGE_ALTERED - but the Kerberos DecryptMessage does not.
Within the parameter description of the general EncryptMessage documentation, the term 'signature' is used only in connection with the Digest SSP: "When using the Digest SSP, there must be a second buffer of type SECBUFFER_PADDING or SEC_BUFFER_DATA to hold signature information".
3. MakeSignature() does not create a digital signature according to Wikipedia's definition (authenticity + non-repudiation + integrity); it creates a cryptographic checksum to provide message integrity.
The name of the MakeSignature() function suggests that SSPI creates a digital signature (authenticity + non-repudiation + integrity), but the MakeSignature documentation explains that only a cryptographic checksum is created (providing integrity): "The MakeSignature function generates a cryptographic checksum of the message, and also includes sequencing information to prevent message loss or insertion."
The VerifySignature documentation also helps to clarify SSPI's terminology: "Verifies that a message signed by using the MakeSignature function was received in the correct sequence and has not been modified."
From (1) and (2) it follows that one needs to invoke EncryptMessage() and afterwards MakeSignature() (over the ciphertext) to achieve confidentiality and integrity.
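For reference, the encrypt-then-MAC construction itself (independent of SSPI, which is Windows/C++) can be sketched with standard Java crypto. This is purely illustrative of the pattern "encrypt first, then sign the ciphertext" - it says nothing about SSPI's wire format, and the algorithm choices here are arbitrary:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.Mac;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;

public class EncryptThenMac {
    // Returns true iff the MAC verifies over the ciphertext and decryption
    // recovers the original plaintext.
    static boolean demo() throws Exception {
        SecretKey encKey = KeyGenerator.getInstance("AES").generateKey();
        SecretKey macKey = KeyGenerator.getInstance("HmacSHA256").generateKey();
        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv);
        String plaintext = "secret message";

        // 1. Encrypt the plaintext (the "EncryptMessage" step).
        Cipher enc = Cipher.getInstance("AES/CBC/PKCS5Padding");
        enc.init(Cipher.ENCRYPT_MODE, encKey, new IvParameterSpec(iv));
        byte[] ciphertext = enc.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));

        // 2. MAC the ciphertext, not the plaintext (the "MakeSignature" step).
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(macKey);
        byte[] tag = mac.doFinal(ciphertext);

        // Receiver side: verify the MAC first, then decrypt.
        Mac check = Mac.getInstance("HmacSHA256");
        check.init(macKey);
        if (!MessageDigest.isEqual(tag, check.doFinal(ciphertext))) {
            return false; // altered message: refuse to decrypt
        }
        Cipher dec = Cipher.getInstance("AES/CBC/PKCS5Padding");
        dec.init(Cipher.DECRYPT_MODE, encKey, new IvParameterSpec(iv));
        String recovered = new String(dec.doFinal(ciphertext), StandardCharsets.UTF_8);
        return recovered.equals(plaintext);
    }

    public static void main(String[] args) throws Exception {
        System.out.println("encrypt-then-MAC round trip ok: " + demo());
    }
}
```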
Hope that my self-answer will help someone at some point in time ;)
If someone has something to add or correct in my answer, please reply and help to improve the information collected here!
If I remember correctly, you only call EncryptMessage/DecryptMessage, and this will do the signing too if the negotiated context requires it. For example, if you look at SSPI/Kerberos Interoperability with GSSAPI, it states that EncryptMessage pairs with GSS_Wrap and DecryptMessage pairs with GSS_Unwrap, without involving MakeSignature. The example in the link also shows that you must supply 3 SecBuffer structures (SECBUFFER_TOKEN, SECBUFFER_DATA and SECBUFFER_PADDING; the last is, I think, optional) to EncryptMessage, and 2 to DecryptMessage. The two complementary examples at Using SSPI with a Windows Sockets Server and Using SSPI with a Windows Sockets Client give a fully functional message exchange, and you can also see that MakeSignature/VerifySignature are never called; the signature is handled by Encrypt/Decrypt and is placed in the 'security token' header or trailer (where it goes on the wire is not specified by SSPI/SPNego/Kerberos - this is not TLS/Schannel...).
If you want to create a GSS Wrap token with only a signature (not encrypted), pass KERB_WRAP_NO_ENCRYPT as the qop value to EncryptMessage. The signed wrap token includes the payload and the signature.
MakeSignature creates a GSS MIC token - which is only the signature and does not include the payload. You can use this with application protocols that require a detached signature.

Understanding the workflow of the messages in a generic server implementation in Erlang

The following code is from "Programming Erlang, 2nd Edition". It is an example of how to implement a generic server in Erlang.
-module(server1).
-export([start/2, rpc/2]).

start(Name, Mod) ->
    register(Name, spawn(fun() -> loop(Name, Mod, Mod:init()) end)).

rpc(Name, Request) ->
    Name ! {self(), Request},
    receive
        {Name, Response} -> Response
    end.

loop(Name, Mod, State) ->
    receive
        {From, Request} ->
            {Response, State1} = Mod:handle(Request, State),
            From ! {Name, Response},
            loop(Name, Mod, State1)
    end.

-module(name_server).
-export([init/0, add/2, find/1, handle/2]).
-import(server1, [rpc/2]).

%% client routines
add(Name, Place) -> rpc(name_server, {add, Name, Place}).
find(Name) -> rpc(name_server, {find, Name}).

%% callback routines
init() -> dict:new().
handle({add, Name, Place}, Dict) -> {ok, dict:store(Name, Place, Dict)};
handle({find, Name}, Dict) -> {dict:find(Name, Dict), Dict}.

server1:start(name_server, name_server).
name_server:add(joe, "at home").
name_server:find(joe).
I have tried hard to understand the workflow of the messages. Would you please help me understand the message workflow of this server implementation during the execution of the functions server1:start, name_server:add and name_server:find?
This example is an introduction to the behaviour concept used in Erlang. It illustrates how you can build a server in 2 parts:
The first part is the module server1, which contains only generic features that could be used by any server. Its role is to keep some information available (the State variable) and to be ready to answer requests. This is what the gen_server behaviour does, with many more features.
The second part is the module name_server. It describes what a particular server does. It implements the interfaces for the user of the server and the internal functions (callbacks) which describe what to do for each specific user request.
Let's follow the 3 shell commands (see the diagram at the end):
server1:start(name_server, name_server). The user calls the start routine of the generic server, giving 2 pieces of information (here with the same value): the name of the server he wants to start, and the name of the module which contains the callbacks. With this, the generic start routine:
1/ calls back the init routine of name_server to get the server state (Mod:init()). You can see that the generic part does not know which kind of information it will keep; the state is created by name_server:init/0, the first callback function. Here it is an empty dictionary: dict:new().
2/ spawns a new process running the generic server loop with the 3 pieces of information (server name, callback module and initial server state): spawn(fun() -> loop(Name, Mod, Mod:init()) end). The loop itself just starts and waits for a message of the form {From, Request} in the receive block.
3/ registers the new process under the name name_server: register(Name, spawn(fun() -> loop(Name, Mod, Mod:init()) end)).
4/ returns to the shell.
At this point, in parallel with the shell, there is a new living process named name_server running and waiting for a request. Note that generally this step is not done by the user but by the application; that is why there is no interface for it in the callback module, and the start function is called directly on the generic server.
name_server:add(joe, "at home"). The user adds a piece of information to the server by calling the add function of name_server. This interface is there to hide the mechanics of calling the server, and it runs in the client process.
1/ The add function calls the rpc routine of the server with 2 parameters, rpc(name_server, {add, Name, Place}): the server name and the request itself, {add, Name, Place}. The rpc routine is still executed in the client process.
2/ It builds a message for the server made of 2 pieces of information, the pid of the client process (here the shell) and the request itself, then sends it to the named server: Name ! {self(), Request},
3/ The client waits for a response. Remember that we left the server waiting for a message in the loop routine.
4/ The message sent matches the expected format {From, Request} of the server, so the server starts processing the message. First it calls back the name_server module with 2 parameters, the request and the current state: Mod:handle(Request, State). The intent is to have generic server code, so it is not aware of what to do with the requests. In the name_server:handle/2 function, the right operation is performed. Thanks to pattern matching, the clause handle({add, Name, Place}, Dict) -> {ok, dict:store(Name, Place, Dict)}; is chosen, and a new dictionary is created storing the key/value pair Name/Place (here joe/"at home"). The new dict is returned along with the response in a tuple: {ok, NewDict}.
5/ Now the generic server can build the answer and return it to the client, From ! {Name, Response}, then re-enter the loop with the new state, loop(Name, Mod, State1), and wait for the next request.
6/ The client, which was waiting on the receive block, gets the message {Name, Response}, extracts the Response and returns it to the shell; here it is simply ok.
name_server:find(joe). The user wants to get information from the server. The process is exactly the same as before, and that is the point of the generic server: whatever the request is, it does the same job. When you look into the gen_server behaviour, you will see that there are several kinds of accesses to the server, such as call, cast, info... So if we look at the flow of this request:
1/ call rpc with the server name and the request: rpc(name_server, {find, Name}).
2/ send a message to the server with the client pid and the request
3/ wait for the answer
4/ the server receives the message and calls back the name_server module with the request, Mod:handle(Request, State); it gets the response from the clause handle({find, Name}, Dict) -> {dict:find(Name, Dict), Dict}. which returns the result of the dictionary search along with the dictionary itself.
5/ the server builds the answer and sends it to the client, From ! {Name, Response}, and re-enters the loop with the same state, waiting for the next request.
6/ the client, which was waiting on the receive block, gets the message {Name, Response}, extracts the Response and returns it to the shell; now it is the place where joe is: "at home".
The picture below shows the different message exchanges:
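To see the same generic-server/callback split outside Erlang, here is a hypothetical Java sketch (all names invented): a loop thread owns the mailbox and the state, and a pluggable callback transforms (request, state) pairs, just as server1's loop drives name_server:handle/2. It is a sketch of the message flow, not of Erlang's actual process semantics.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class GenericServer {
    // A request message carries a reply queue, like {self(), Request}.
    static class Msg {
        final Object request;
        final BlockingQueue<Object> replyTo = new ArrayBlockingQueue<>(1);
        Msg(Object request) { this.request = request; }
    }

    interface Callback {
        // Returns {response, newState}, like Mod:handle/2.
        Object[] handle(Object request, Object state);
    }

    private final BlockingQueue<Msg> mailbox = new ArrayBlockingQueue<>(16);

    GenericServer(Callback mod, Object initialState) {
        Thread loop = new Thread(() -> {
            Object state = initialState;
            try {
                while (true) {                       // loop(Name, Mod, State)
                    Msg msg = mailbox.take();        // receive {From, Request}
                    Object[] rs = mod.handle(msg.request, state);
                    msg.replyTo.put(rs[0]);          // From ! {Name, Response}
                    state = rs[1];                   // recurse with State1
                }
            } catch (InterruptedException e) { /* stop */ }
        });
        loop.setDaemon(true);
        loop.start();
    }

    Object rpc(Object request) throws InterruptedException {
        Msg msg = new Msg(request);
        mailbox.put(msg);          // Name ! {self(), Request}
        return msg.replyTo.take(); // receive {Name, Response}
    }

    public static void main(String[] args) throws Exception {
        // A name_server-style callback over a map-valued state.
        Callback nameServer = (request, state) -> {
            @SuppressWarnings("unchecked")
            Map<String, String> dict = (Map<String, String>) state;
            String[] req = (String[]) request;
            if (req[0].equals("add")) {
                Map<String, String> next = new HashMap<>(dict);
                next.put(req[1], req[2]);
                return new Object[]{"ok", next};
            }
            return new Object[]{dict.getOrDefault(req[1], "error"), dict};
        };
        GenericServer server = new GenericServer(nameServer, new HashMap<String, String>());
        System.out.println(server.rpc(new String[]{"add", "joe", "at home"}));
        System.out.println(server.rpc(new String[]{"find", "joe"}));
    }
}
```

Note how rpc runs in the client thread while handle runs in the server's loop thread, mirroring steps 1/ through 6/ above.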