XmlSerializer First Chance Exception on every web service call - web-services

I'm hoping one of you can assist me with diagnosing a stack trace from a web service call. I am trying to figure out what is causing high CPU usage on our web services, so I have taken a few memory dumps while the CPU spikes above 80%.
After running through the dump files (and picking off the easy fixes) I am left with one that is particularly interesting.
bcryptPrimitives!accumulate+54
bcryptPrimitives!create_modulus+200
bcryptPrimitives!create_modulus_select_arithmetic+2d
bcryptPrimitives!rsa_import+254
bcryptPrimitives!MSCryptImportKeyPair+132
bcrypt!BCryptImportKeyPair+179
rsaenh!LocalPopulateBCryptPublicKey+206
rsaenh!CPImportKey+346
cryptsp!CryptImportKey+163
clr!StrongNameTokenFromPublicKey+1a5
clr!CAssemblyName::SetProperty+218
clr!BaseAssemblySpec::CreateFusionName+32b
clr!BaseAssemblySpec::GetFileOrDisplayName+4e
clr!AssemblyNameNative::ToString+164
[[HelperMethodFrame_1OBJ] (System.Reflection.AssemblyName.nToString)] System.Reflection.AssemblyName.nToString()
mscorlib_ni!System.Reflection.AssemblyName.get_FullName()+9
RazorEngine.Compilation.CompilerServiceBase.CurrentDomain_AssemblyResolve(System.Object, System.ResolveEventArgs)+124
mscorlib_ni!System.AppDomain.OnAssemblyResolveEvent(System.Reflection.RuntimeAssembly, System.String)+a4
clr!CallDescrWorkerInternal+83
clr!CallDescrWorkerWithHandler+4a
clr!MethodDescCallSite::CallTargetWorker+251
clr!AppDomain::RaiseAssemblyResolveEvent+d6860
[[GCFrame]]
clr!AppDomain::TryResolveAssembly+82
clr!AppDomain::PostBindResolveAssembly+d1
clr!`AppDomain::BindAssemblySpec'::`1'::catch$5+d7
MSVCR120_CLR0400!CallSettingFrame+20
MSVCR120_CLR0400!_CxxCallCatchBlock+f5
ntdll!RcConsolidateFrames+3
clr!AppDomain::BindAssemblySpec+ef7
clr!AssemblySpec::LoadDomainAssembly+1ec
clr!AssemblySpec::LoadAssembly+1b
clr!AssemblyNative::Load+304
[[HelperMethodFrame_PROTECTOBJ] (System.Reflection.RuntimeAssembly._nLoad)] System.Reflection.RuntimeAssembly._nLoad(System.Reflection.AssemblyName, System.String, System.Security.Policy.Evidence, System.Reflection.RuntimeAssembly, System.Threading.StackCrawlMarkByRef, IntPtr, Boolean, Boolean, Boolean)
mscorlib_ni!System.Reflection.RuntimeAssembly.InternalLoadAssemblyName(System.Reflection.AssemblyName, System.Security.Policy.Evidence, System.Reflection.RuntimeAssembly, System.Threading.StackCrawlMark ByRef, IntPtr, Boolean, Boolean, Boolean)+d2
mscorlib_ni!System.Reflection.Assembly.Load(System.Reflection.AssemblyName)+3b
System_Xml_ni!System.Xml.Serialization.TempAssembly.LoadGeneratedAssembly(System.Type, System.String, System.Xml.Serialization.XmlSerializerImplementation ByRef)+1a6
System_Xml_ni!System.Xml.Serialization.XmlSerializer.FromMappings(System.Xml.Serialization.XmlMapping[], System.Type)+59
System_ServiceModel_ni!System.ServiceModel.Description.XmlSerializerOperationBehavior+Reflector+SerializerGenerationContext.GenerateSerializers()+dd
System_ServiceModel_ni!System.ServiceModel.Description.XmlSerializerOperationBehavior+Reflector+SerializerGenerationContext.GetSerializer(Int32)+74
System_ServiceModel_ni!System.ServiceModel.Dispatcher.XmlSerializerOperationFormatter.AddHeadersToMessage(System.ServiceModel.Channels.Message, System.ServiceModel.Description.MessageDescription, System.Object[], Boolean)+be
System_ServiceModel_ni!System.ServiceModel.Dispatcher.OperationFormatter.SerializeRequest(System.ServiceModel.Channels.MessageVersion, System.Object[])+e2
System_ServiceModel_ni!System.ServiceModel.Dispatcher.ProxyOperationRuntime.BeforeRequest(System.ServiceModel.Dispatcher.ProxyRpc ByRef)+1d1
System_ServiceModel_ni!System.ServiceModel.Channels.ServiceChannel.PrepareCall(System.ServiceModel.Dispatcher.ProxyOperationRuntime, Boolean, System.ServiceModel.Dispatcher.ProxyRpc ByRef)+85
System_ServiceModel_ni!System.ServiceModel.Channels.ServiceChannel.Call(System.String, Boolean, System.ServiceModel.Dispatcher.ProxyOperationRuntime, System.Object[], System.Object[], System.TimeSpan)+27f
System_ServiceModel_ni!System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(System.Runtime.Remoting.Messaging.IMethodCallMessage, System.ServiceModel.Dispatcher.ProxyOperationRuntime)+6c
System_ServiceModel_ni!System.ServiceModel.Channels.ServiceChannelProxy.Invoke(System.Runtime.Remoting.Messaging.IMessage)+133
mscorlib_ni!System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(System.Runtime.Remoting.Proxies.MessageData ByRef, Int32)+1f4
clr!CTPMethodTable__CallTargetHelper3+12
clr!CallTargetWorker2+74
clr!CTPMethodTable::OnCall+1fb
clr!TransparentProxyStub_CrossContextPatchLabel+a
[[TPMethodFrame] (x.Server.WebReferences.x.x.Save)] x.Server.WebReferences.xxx.xxxServiceSoap.Save()
Our architecture is simple: we have a desktop client, which communicates with our services via WCF. A few of our service calls then push data through to another system via web services. The stack trace above represents just such a call - from our service to another service.
I am receiving a first chance exception when loading the XmlSerializer (from my research, this is intended behavior: if the pre-generated serializer assembly is not found, the runtime generates one and continues). However, we appear to be receiving this exception on every web service call. I was under the impression that once the assembly had been generated, it would no longer throw a first chance exception.
Is this normal behavior for a web service call?
To me it appears as though we are generating this assembly every time - which in turn raises the AssemblyResolve event - which in turn executes the RazorEngine.Compilation resolve handler... which at this point is completely unnecessary.
Any thoughts and ideas?
Thanks in advance

It appears from your traceback that the method taking all the time is not the one that generates and loads dynamically created XmlSerializer DLLs. Instead, it is the method that tries to load pre-generated XmlSerializer DLLs.
To see this, check the reference source for XmlSerializer.FromMappings:
public static XmlSerializer[] FromMappings(XmlMapping[] mappings, Type type) {
    if (mappings == null || mappings.Length == 0) return new XmlSerializer[0];
    XmlSerializerImplementation contract = null;
    Assembly assembly = type == null ? null : TempAssembly.LoadGeneratedAssembly(type, null, out contract);
    TempAssembly tempAssembly = null;
    if (assembly == null) {
        if (XmlMapping.IsShallow(mappings)) {
            return new XmlSerializer[0];
        }
        else {
            if (type == null) {
                tempAssembly = new TempAssembly(mappings, new Type[] { type }, null, null, null);
                XmlSerializer[] serializers = new XmlSerializer[mappings.Length];
                contract = tempAssembly.Contract;
                for (int i = 0; i < serializers.Length; i++) {
                    serializers[i] = (XmlSerializer)contract.TypedSerializers[mappings[i].Key];
                    serializers[i].SetTempAssembly(tempAssembly, mappings[i]);
                }
                return serializers;
            }
            else {
                // Use XmlSerializer cache when the type is not null.
                return GetSerializersFromCache(mappings, type);
            }
        }
    }
    else {
        XmlSerializer[] serializers = new XmlSerializer[mappings.Length];
        for (int i = 0; i < serializers.Length; i++)
            serializers[i] = (XmlSerializer)contract.TypedSerializers[mappings[i].Key];
        return serializers;
    }
}
Which calls TempAssembly.LoadGeneratedAssembly:
/// <devdoc>
/// <para>
/// Attempts to load pre-generated serialization assembly.
/// </para>
/// </devdoc>
internal static Assembly LoadGeneratedAssembly(Type type, string defaultNamespace, out XmlSerializerImplementation contract)
Loading of run-time-generated XmlSerializer DLLs happens inside GetSerializersFromCache which, as the name suggests, caches them.
If XmlSerializer.FromMappings is consuming substantial CPU time on every call in this specific traceback, perhaps you have some obsolete pre-generated .XmlSerializers.dll files left on your server (on disk or in the GAC) that need to be cleaned out. On every call, .NET could be trying to load them, finding that version numbers or signatures are mismatched, and then throwing an exception, consuming significant CPU time in the process. Or, if the DLLs are not obsolete, perhaps there is an architecture mismatch?
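If you want to test that theory directly, here is a minimal diagnostic sketch (my own, not WCF internals; the contract assembly name is hypothetical) that probes for the pre-generated serializer assembly the way the naming convention dictates - it must be called "<assembly>.XmlSerializers" and match the contract assembly's version and public key token:

using System;
using System.Reflection;

static class SerializerProbe
{
    static void Main()
    {
        // Hypothetical name: substitute the assembly containing your proxy types.
        Assembly contracts = Assembly.Load("MyServer.WebReferences");

        // Pre-generated serializers live in "<name>.XmlSerializers".
        AssemblyName probe = contracts.GetName(true); // copy so we can edit it
        probe.Name += ".XmlSerializers";

        try
        {
            Console.WriteLine("Loaded: " + Assembly.Load(probe).FullName);
        }
        catch (Exception ex)
        {
            // Failing here on every run mirrors the repeated first-chance
            // exception seen in the dump.
            Console.WriteLine("Probe failed: " + ex.Message);
        }
    }
}

If the probe fails with a version or signature mismatch rather than a plain file-not-found, cleaning out or regenerating the stale .XmlSerializers.dll (for instance with sgen.exe) should stop the per-call probing.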

Related

libcurl how do I know a transfer is complete when using multi interface?

I know it's not a problem when using the easy interface, because after curl_easy_perform returns, the transfer is complete. But how do I know that when using the multi interface?
After going through the documentation, this is the only way I have come up with so far:
class CompleteListener {
public:
    virtual void onComplete(CURLcode) = 0;
};

CURLMsg* msg = curl_multi_info_read(...);
void* ptr;
if (msg && msg->msg == CURLMSG_DONE)
{
    curl_easy_getinfo(msg->easy_handle, CURLINFO_PRIVATE, &ptr);
    static_cast<CompleteListener*>(ptr)->onComplete(msg->data.result);
}
The problem with this approach is that now every private pointer must point to an instance of a class derived from CompleteListener. If there is a way to get the pointer stored in CURLOPT_WRITEDATA, I could just store a callback inside CURLOPT_PRIVATE instead.
When you call curl_multi_perform() (or curl_multi_socket_action()) you include a pointer to a counter that returns the number of currently "active" transfers. When that counter gets decreased, or even reaches zero, you know that one or more transfers were completed.
When you call curl_multi_info_read() (perhaps after you called curl_multi_perform()), it can return a pointer to a message from libcurl that tells you the easy handle of a completed transfer (and its return code). If more than one transfer has completed, repeated calls to the function will return more information, until you get NULL back when there's no more info to get.
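For illustration, a minimal sketch of that drain loop (assuming `multi` is your CURLM handle, called right after curl_multi_perform(), and that each easy handle carries a private pointer as in your CompleteListener approach):

#include <curl/curl.h>

/* Drain all completion messages queued on the multi handle. */
static void drainCompleted(CURLM* multi)
{
    int msgsLeft = 0;
    CURLMsg* msg;
    while ((msg = curl_multi_info_read(multi, &msgsLeft)) != NULL)
    {
        if (msg->msg == CURLMSG_DONE)
        {
            CURL* easy = msg->easy_handle;
            CURLcode result = msg->data.result; /* per-transfer return code */

            char* priv = NULL;
            curl_easy_getinfo(easy, CURLINFO_PRIVATE, &priv); /* set via CURLOPT_PRIVATE */
            /* ... notify the owner of this transfer through 'priv' ... */

            /* the returned message becomes invalid once the handle is removed */
            curl_multi_remove_handle(multi, easy);
            curl_easy_cleanup(easy);
            (void)result;
        }
    }
}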
You can find this used in numerous examples hosted on the curl web site, for example the multi-app one.

Invalid address specified to RtlValidateHeap in cross-dll application when using QTcpSocket

Background:
Sorry this is such a complex problem but it is driving me nuts. Finding a solution may help others who need a compartmentalized application.
I have a Qt program that is VERY compartmentalized because it is meant to host plugins and be used in a variety of situations: sometimes as a server, sometimes as a client, sometimes as both. The plugins that are loaded are login-dependent (because the access defined for a user is not necessarily up to the user, and the user's access to data and functionality may be limited).
The application relies on a core DLL library (specific to the application) which is used by the main exe, the client, the server, and all plugin dlls. Client and server functionality are also in separate dlls. I am new to this style of programming so that may be leading to my issue.
My Problem:
I have a class called "BidirectionalTcpConnection", defined in the core DLL, which is to be used by the executable, the client dll, and the server dll. It is a class that keeps track of data passed back and forth over a QTcpSocket. I wrote the class to avoid the same problem I am having now, which originally occurred while using the QTcpSocket.readAll() function. (If I read all but the last byte, and then read the last byte using the QTcpSocket.peek(...) function, it would work fine.)
My new class successfully reads from and writes to the socket without error, but when I try to close or abort the socket (this happened with my earlier workaround too), I get the same error I was getting when I tried to read the last byte: an "Invalid address specified to RtlValidateHeap". Basically, it hits a user breakpoint in dbgheap.c.
My Hypothesis (What I believe is wrong):
The code in dbgheap.c documents that it is checking whether the address is valid and resides on the current heap.
It is possible that the need to compartmentalize my application is leading to this issue. The data being supplied to the socket for sending was originally allocated on the executable's heap, along with the instance of BidirectionalTcpConnection. (I am trying to send the login and receive the permissions for application access.) The socket itself, however, is being allocated on the core DLL's heap (assuming that the dll has a separate heap from the exe for internal data). I tried to avoid this by doing a deep copy of each piece of data to be sent over the socket within the core dll code, but that hasn't solved the problem - presumably because the BidirectionalTcpConnection itself is still being allocated on a separate heap from the socket.
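To make this concrete, here is a minimal illustration (hypothetical module names) of the failure mode I am describing - memory allocated by one module's CRT heap being freed through another's:

// --- core.dll (hypothetical) ---
#include <QByteArray>
extern "C" __declspec(dllexport) QByteArray* makeBuffer()
{
    return new QByteArray("login"); // allocated by core.dll's CRT heap
}

// --- app.exe (hypothetical) ---
#include <QByteArray>
extern "C" __declspec(dllimport) QByteArray* makeBuffer();

int main()
{
    QByteArray* buf = makeBuffer();
    delete buf; // freed by app.exe's CRT heap; if the modules link separate
                // (e.g. static) CRTs, the debug heap reports an invalid address
    return 0;
}

As far as I understand, this only bites when the modules use different CRT instances (e.g. statically linked runtimes); modules sharing one dynamic CRT share one heap.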
My question(s) for anyone who can help:
Is the assumption in my hypothesis correct?
Do I need to allocate the socket and the connection on the same heap? If so, how do I overcome this issue?
Also, if you look at the code: will I need to delete the returned string (which the executable processes) inside the core dll, in order to avoid the same issue?
If you guys need some code... I have supplied what I think is necessary. I can supply more upon request.
Some Code:
For starters, here is some basic code to show the way things are allocated. The login is performed in main before the main interface is shown; w is the instance of the main window class. Here is the code that starts the process leading to the crash:
while (loginFailed)
{
    splash->showLogin();
    while (splash->isWaitingOnLogin())
        a.processEvents();
    QString username(*splash->getUserName());
    QString password(*splash->getPassword());
    // LATER: encrypt login for sending
    loginFailed = w.loginFailed(username, password, a);
}
Here is the code that instantiates the BidirectionalTcpConnection on the executable's heap and sends the login data. This code is inside a few separate private methods of the Qt main window class.
// method A
// processes QString parameters into sendable data...
// then calls method B
// which creates the instance of *BidirectionalTcpConnection*
...
if (getServerAddress() == QString("LOCAL"))
    mTcpConnection = new BidirectionalTcpConnection(getHostAddressIn()->toString(),
        (quint16)ServerPorts::loginRequest, (long)15, this);
else
    mTcpConnection = new BidirectionalTcpConnection(*getServerAddress(),
        (quint16)ServerPorts::loginRequest, (long)15, this);
...
// back to method A...
mTcpConnection->sendBinaryData(*dataStream);
mTcpConnection->flushMessages(); // sends the data across the socket
...
// waits for response and then parses user data when it comes
while (waitForResponse)
{
    if (mTcpConnection->hasBufferedMessages())
    {
        QString* loginXML = mTcpConnection->getNextMessageAsText();
        // parse the xml
        if (parseLogin(*loginXML))
        {
            waitForResponse = false;
        }
        ...
    }
}
...
// calls method that closes the socket which causes crash
mTcpConnection->abortConnection(); // crash occurs inside this method
delete mTcpConnection;
mTcpConnection = NULL;
Here is the relevant BidirectionalTcpConnection code, in order of use. Note that this code is located in the core dll, so presumably it is allocating data on a separate heap...
BidirectionalTcpConnection::BidirectionalTcpConnection(const QString& destination,
        quint16 port, long timeOutInterval, TimeUnit unit, QObject* parent) :
    QObject(parent),
    mSocket(parent),
    ...
{ }

void BidirectionalTcpConnection::sendBinaryData(QByteArray& data)
{
    // notice I try and avoid different heaps where I can by copying the data...
    mOutgoingMessageQueue.enqueue(new QByteArray(data)); // member is of QQueue type
}

QString* BidirectionalTcpConnection::getNextMessageAsText()
// NOTE: somehow I need to delete the returned pointer to prevent memory leak
{
    if (mIncomingMessageQueue.size() == 0)
        return NULL;
    else
    {
        QByteArray* data = mIncomingMessageQueue.dequeue();
        QString* stringData = new QString(*data);
        delete data;
        return stringData;
    }
}

void BidirectionalTcpConnection::abortConnection()
{
    mSocket.abort(); // **THIS CAUSES ERROR/CRASH**
    clearQueues();
    mIsConnected = false;
}

Flex database - HTTP call collision?

I have a Flex application for AIR. I fetch some data from a JSON-RPC web service through the mx.rpc.http.HTTPService class, making all the calls asynchronously. When the results return, I process them and put the data into an SQLite database through flash.data.SQLConnection. This means quite a few updates per web service call, so every callback starts a transaction, does the updates, and then commits.
According to my debug console tracing, I see two kinds of behaviour: either a callback successfully begins a transaction, the transaction event handler is called, all the updates run, the commit happens, and then the next web service call returns; or a callback successfully begins a transaction and, as the next web service call returns (without yet trying to start a new transaction), the previous callback just... ceases to exist, even before the event handler for the beginning of the transaction fires.
Is that a bug in Flex? Or in AIR? Or in ActionScript? Or in the specific components? Do I do something wrong? Is this just my misunderstanding? (I'm just trying my wings in Flex, I don't really know what to expect from the system or how to handle this situation.)
Some code from my database manager class
public function beginTransaction(handler:Function):void {
    // The calls are all fine up to this point
    conn.begin(SQLTransactionLockType.DEFERRED, new Responder(handler, OnError));
    // Begin is always called first. If another web service call doesn't come
    // back up to this point then it won't until I call commit in another
    // function.
    trace("this always runs yet");
    // But if another call comes back just after begin is called then handler
    // won't get called. Even though the previous trace still will.
}
My web service call
public function getWSCall(url:String, method:String, param:Object,
                          handler:Function):void
{
    var http:HTTPService = new HTTPService();
    http.addEventListener(FaultEvent.FAULT, JsonError);
    http.addEventListener(ResultEvent.RESULT, handler);
    http.url = url;
    http.method = "POST";
    http.contentType = "application/json";
    var params:Object = {};
    params.jsonrpc = "2.0";
    params.method = method;
    if (param !== null)
        params.params = param;
    params.id = method;
    var json:String = JSON.stringify(params);
    trace(url + " " + json);
    http.send(json);
}
And an example of how I call it
JsonConnector.instance.getWSCall(WSConstants.GET_DATA_URL,
    WSConstants.GET_DATA_METHOD, param, getDataCompleted);
And in getDataCompleted, after some rearrangement, I call my database manager class, where I finally begin the transaction:
dbConnector.Open(key, opened);

function opened(event:SQLEvent):void
{
    if (event.type == SQLEvent.OPEN) {
        dbConnector.beginTransaction(onBegin);
    }
}

Understanding RProperty IPC communication

I'm studying this source base. Basically, it is an Anim server client for Symbian 3rd edition, for the purpose of grabbing input events in a reliable way without consuming them.
If you look at this line of the server, it is basically setting the RProperty value (apparently to an increasing counter); it seems no actual processing of the input is done.
In this client line, the client is supposed to receive the notification data, but it only calls Attach.
My understanding is that Attach only needs to be called once, but it is not clear in the client what event is triggered every time the server sets the RProperty.
How (and where) is the client supposed to access the RProperty value?
After attaching, the client will somewhere Subscribe to the property, passing a TRequestStatus reference. The server signals the request status via the kernel when the asynchronous event has happened (in your case, when the property was changed). If your example source code is implemented the right way, you will find an active object (AO; a CActive-derived class) hanging around, and the iStatus of this AO will be passed to the RProperty API. In this case the RunL function of the AO will be called when the property has been changed.
It is essential in Symbian to understand the active object framework, and quite few people actually do. Unfortunately I did not find a really good description online (they are explained quite well in the Symbian OS Internals book), but this page at least gives you a quick example.
Example
In the ConstructL of your CMyActive subclass of CActive:
CKeyEventsClient* iClient;
RProperty iProperty;
// ...

void CMyActive::ConstructL()
{
    RProcess myProcess;
    TSecureId propertyCategory = myProcess.SecureId();
    // avoid interference with other properties by defining the category
    // as a secure ID of your process (perhaps it's the only allowed value)
    TUint propertyKey = 1; // whatever you want
    iClient = CKeyEventsClient::NewL(propertyCategory, propertyKey, ...);
    iClient->OpenNotificationPropertyL(&iProperty);
    // ...
    CActiveScheduler::Add(this);
    iProperty.Subscribe(iStatus);
    SetActive();
}
Your RunL will be called when the property has been changed:
void CMyActive::RunL()
{
    if (iStatus.Int() != KErrCancel) User::LeaveIfError(iStatus.Int());
    // forward the error to RunError
    // "To ensure that the subscriber does not miss updates, it should
    // re-issue a subscription request before retrieving the current value
    // and acting on it." (from docs)
    iProperty.Subscribe(iStatus);
    TInt value; // this type is passed to RProperty::Define() in the client
    TInt err = iProperty.Get(value);
    if (err != KErrNotFound) User::LeaveIfError(err);
    SetActive();
}

NHibernate Load vs. Get behavior for testing

In simple tests I can assert whether an object has been persisted by checking whether its Id is no longer at its default value. But if I delete an object and want to check that the object (and perhaps its children) is really no longer in the database, the object Ids will still be at their saved values.
So I need to go to the db, and I would like a helper assertion to make the tests more readable, which is where the question comes in. I like the idea of using Load to save the db call, but I'm wondering if the ensuing exceptions can corrupt the session.
Below is how the two assertions would look, I think. Which would you use?
Cheers,
Berryl
Get
public static void AssertIsTransient<T>(this T instance, ISession session)
    where T : Entity
{
    if (instance.IsTransient()) return;
    var found = session.Get<T>(instance.Id);
    if (found != null) Assert.Fail(string.Format("{0} has persistent id '{1}'", instance, instance.Id));
}
Load
public static void AssertIsTransient<T>(this T instance, ISession session)
    where T : Entity
{
    if (instance.IsTransient()) return;
    try
    {
        var found = session.Load<T>(instance.Id);
        if (found != null) Assert.Fail(string.Format("{0} has persistent id '{1}'", instance, instance.Id));
    }
    catch (GenericADOException)
    {
        // nothing
    }
    catch (ObjectNotFoundException)
    {
        // nothing
    }
}
edit
In either case I would be doing the fetch (Get or Load) in a new session, free of state from the session that did the save or delete.
I am trying to test cascade behavior, NOT to test NHib's ability to delete things, but maybe I am overthinking this one, or there is a simpler way I haven't thought of.
Your code in the 'Load' section will always hit Assert.Fail but never throw an exception, as Load<T> will return a proxy (with the Id property set, or populated from the first-level cache) without hitting the DB - i.e. ISession.Load will only fail if you access a property other than the Id property on a deleted entity.
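To see the difference, a minimal sketch (the entity and id are hypothetical):

// Load returns an uninitialized proxy without hitting the DB:
var ghost = session.Load<Customer>(deletedId);

// Only the first access to a non-Id property hits the DB - and
// throws ObjectNotFoundException for a deleted entity:
var name = ghost.Name;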
As for your 'Get' section - I might be mistaken, but I think that if you delete an entity in a session and later try to use .Get in the same session, you will get the one in the first-level cache - and again not get null back.
See this post for the full explanation about .Load and .Get.
If you really need to see if it is in your DB, use an IStatelessSession - or launch a child ISession (which will have an empty first-level cache).
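For example, a sketch of your Get-based assertion rewritten over a stateless session (assuming the ISessionFactory is available to the helper; everything else as in your code):

public static void AssertIsTransient<T>(this T instance, ISessionFactory sessionFactory)
    where T : Entity
{
    if (instance.IsTransient()) return;

    // A stateless session has no first-level cache and returns no proxies,
    // so Get<T> here reflects what is actually in the database.
    using (IStatelessSession session = sessionFactory.OpenStatelessSession())
    {
        var found = session.Get<T>(instance.Id);
        if (found != null)
            Assert.Fail(string.Format("{0} has persistent id '{1}'", instance, instance.Id));
    }
}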
EDIT: I thought of a bigger problem - your entity will not actually be deleted until the transaction is committed (by default, when the session is flushed) - so unless you manually flush your session (not recommended), it will still be in your DB.
Hope this helps.