Can UIActivityViewController work properly with SwiftUI - seems really buggy

I am really struggling to get a UIActivityViewController to work as a UIViewControllerRepresentable in SwiftUI. When trying to share to WhatsApp, for example, the screen just moves slightly as if some action is going to happen, then it returns to where it was, with the following litany of errors (some timestamps etc. removed for readability):
[core] HOST: Failed to load remote view controller with error:
Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service on pid 2539 named
net.whatsapp.WhatsApp.ShareExtension.viewservice was interrupted, but the message was
sent over an additional proxy and therefore this proxy has become invalid."
[core] Sheet not being presented, calling premature completion
[net.whatsapp.WhatsApp.ShareExtension(2.20.111)] Connection to plugin interrupted while in use.
[net.whatsapp.WhatsApp.ShareExtension(2.20.111)] Connection to plugin invalidated while in use.
Is this just to do with the (seemingly horrendously buggy) behaviour of modals when working with SwiftUI? I just can't seem to solve this ongoing problem.
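For reference, the kind of wrapper being described looks roughly like this - a minimal sketch, not the asker's exact code (names like ActivityView are illustrative):

import SwiftUI
import UIKit

// Minimal share-sheet wrapper around UIActivityViewController
struct ActivityView: UIViewControllerRepresentable {
    let activityItems: [Any]

    func makeUIViewController(context: Context) -> UIActivityViewController {
        UIActivityViewController(activityItems: activityItems,
                                 applicationActivities: nil)
    }

    func updateUIViewController(_ controller: UIActivityViewController,
                                context: Context) {
        // nothing to update after creation
    }
}

// Typically presented from SwiftUI with something like:
// .sheet(isPresented: $isSharing) {
//     ActivityView(activityItems: ["Hello"])
// }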

Related

Akka EventsourcedBehavior JournalFailureException but stack trace doesn't show underlying cause

I have an akka persistence app (EventSourcedBehavior based actors, akka 2.6.13) using akka-persistence-jdbc 3.5.3 for the journal/snapshot plugin, along with a cockroachdb cluster. Things work fine, but recently I've seen a lot of event persist failures, and the error logs do not show any underlying cause - no SQL-level exceptions in the trace at all. At the same time, we usually see errors when actors are being restored, with the journal again throwing JournalFailureExceptions, but no underlying reason.
If I can't see any underlying reasons (the only thing the logs show is "async write timed out after 5.00 s" - is this timeout value configurable?), does this mean something else is causing the issues, unrelated to the journal plugin implementation or database? How can I debug this further? I've examined the message handler in my EventSourcedBehavior that failed when persisting an event to see if it is doing anything weird or blocking, but I can't see anything obviously wrong.
Any ideas welcome.
Thanks
The JournalFailureExceptions likely indicate connectivity or slow responses from the DB. If it's just slowness, scaling out/up the cockroach cluster might help.
"async write timed out after" is from cluster sharding's remember-entities feature (that's the only occurrence in Akka) which also indicates connectivity issues or slow responses from the DB.
There is most likely no problem with your behaviors. It's worth noting that remember-entities (especially in eventsourced mode... ddata mode is a little better in this regard if you're OK with not remembering entities across full-cluster restarts) itself puts a substantial load on persistence and your DB and is consistently (if you have more than a few hundred entities) counterproductive, in my experience. Unless you've actually tried disabling it and seen an actual net negative effect, I suggest disabling remember-entities.
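If you want to try that, it's a one-line change in application.conf. A sketch (the updating-state-timeout key is, as far as I recall, the setting behind the 5-second "async write timed out" limit - verify against the Akka reference.conf for your version):

akka.cluster.sharding {
  # the suggestion above: stop remembering entities entirely
  remember-entities = off

  # or, if you must keep it, ddata mode is lighter on persistence
  # (entities are not remembered across full-cluster restarts)
  # remember-entities-store = ddata

  # believed to be the source of the 5.00 s write timeout; raise it
  # if you only want more headroom rather than disabling the feature
  # updating-state-timeout = 10 s
}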

Asio Bad File Descriptor only on some systems

Recently I wrote a Discord bot in C++ with the sleepy-discord bot library.
Now, the problem here is that when I run the bot, it shows me the following errors:
[2021-05-29 18:30:29] [info] Error getting remote endpoint: asio.system:9 (Bad file descriptor)
[2021-05-29 18:30:29] [error] handle_connect error: Timer Expired
[2021-05-29 18:30:29] [info] asio async_shutdown error: asio.ssl:336462100 (uninitialized)
Now, I searched far and wide for what could trigger this, but the answers always say something like a socket wasn't opened, and so on.
The thing is, it works on a lot of systems, but yesterday I rented a VM (same system as my computer), and it seems to be the only one giving me that issue.
What could be the reason for this?
Edit: I was instructed to show a reproducible example, but I am not sure how I would write a minimal example, so I'm linking the bot in question:
https://github.com/ElandaOfficial/jucedoc
Update:
I tinkered around a bit in the library I am using and was able to increase the Websocketpp log level; thankfully, I got one more line of information out of it:
[2021-05-29 23:49:08] [fail] WebSocket Connection Unknown - "" /?v=8 0 websocketpp.transport:9 Timer Expired
The error triggers when you call s.remote_endpoint() on a socket that is not connected or no longer connected.
It would happen e.g. when you try to print the endpoint from the socket after an IO error. The usual way to work around that is to store a copy of the remote endpoint as soon as a connection is established, so you don't have to retrieve it when it's too late.
As to why it's happening on that particular VM, you have to shift focus to the root cause. It might be that accept is failing (possibly due to limits on the number of file descriptors, available memory, etc.).
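A sketch of that workaround, assuming Boost.Asio and an illustrative connection type (not the sleepy-discord internals):

#include <boost/asio.hpp>
#include <utility>

using boost::asio::ip::tcp;

// Cache the peer endpoint while the socket is still healthy, so that
// later logging never calls remote_endpoint() on a dead socket.
struct connection
{
    tcp::socket socket;
    tcp::endpoint peer; // copy taken at accept time

    explicit connection(tcp::socket s) : socket(std::move(s))
    {
        boost::system::error_code ec;
        peer = socket.remote_endpoint(ec); // non-throwing overload
        // if ec is set, the socket was never connected and peer stays
        // default-constructed - but no exception escapes
    }
};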

Azure Event Hub ServiceBusException causing skipped messages

We are using the Azure Java event hub library to read messages out of an event hub. Most of the time it works perfectly, but periodically we see exceptions of type "com.microsoft.azure.servicebus.ServiceBusException" that correspond to times when messages in the event hub seem to be skipped.
Here are some examples of exception details:
"The message container is being closed (some number here)."
This generally hits multiple partitions at the same time, but not all.
The callstack only includes com.microsoft.azure.servicebus and org.apache.qpid.proton.
"The link 'xxx' is force detached by the broker due to errors occurred in consumer(link#). Detach origin: InnerMessageReceiver was closed."
This is generally tied to com.microsoft.azure.servicebus.amqp.AmqpException exceptions.
The callstack only includes com.microsoft.azure.servicebus and org.apache.qpid.proton.
Example callstack:
at com.microsoft.azure.servicebus.ExceptionUtil.toException(ExceptionUtil.java:93)
at com.microsoft.azure.servicebus.MessageReceiver.onError(MessageReceiver.java:393)
at com.microsoft.azure.servicebus.MessageReceiver.onClose(MessageReceiver.java:646)
at com.microsoft.azure.servicebus.amqp.BaseLinkHandler.processOnClose(BaseLinkHandler.java:83)
at com.microsoft.azure.servicebus.amqp.BaseLinkHandler.onLinkRemoteClose(BaseLinkHandler.java:52)
at org.apache.qpid.proton.engine.BaseHandler.handle(BaseHandler.java:176)
at org.apache.qpid.proton.engine.impl.EventImpl.dispatch(EventImpl.java:108)
at org.apache.qpid.proton.reactor.impl.ReactorImpl.dispatch(ReactorImpl.java:309)
at org.apache.qpid.proton.reactor.impl.ReactorImpl.process(ReactorImpl.java:276)
at com.microsoft.azure.servicebus.MessagingFactory$RunReactor.run(MessagingFactory.java:340)
at java.lang.Thread.run(Thread.java:745)
There doesn't seem to be a way for clients of the library to recognize that a problem has occurred and avoid moving ahead in the event hub past the skipped messages. Has anyone else run into this? Is there some other way to recognize the problem and avoid skipping messages, or retry the missed ones?
This error DOESN'T SKIP any messages - it throws an Exception when it shouldn't have. This causes EPH to RESTART the affected partitions' receivers. If an application using the EventHubs Java client doesn't handle these errors, it may experience message loss.
This is a bug in our retry logic, present in the EventHubs Java client up to and including 0.11.0.
Here's the corresponding issue to track progress.
In the EventHubs service, these errors happen if, for any reason, the container hosting your event hub's code has to close (for the sake of explanation, imagine we run a set of containers - like Docker containers - for every EventHub namespace). This is a transient error; the container will eventually be opened on another node.
Our Java client's retry logic should have handled this error and retried - we will keep this thread posted with the fix.
EDIT
We just released 0.12.0, which fixes this issue.
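For Maven users, picking up the fix should just be a version bump (a sketch; coordinates as published on Maven Central - verify the version against your build):

<dependency>
    <groupId>com.microsoft.azure</groupId>
    <artifactId>azure-eventhubs</artifactId>
    <!-- 0.12.0 contains the retry-logic fix mentioned above -->
    <version>0.12.0</version>
</dependency>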
Thanks!
Sreeram

Wrong SocketKind in SocketActivityTrigger background task

During testing of my project on a background server, I have encountered a weird situation: every time I trigger a request to my suspended server using ServerTestingTask, the ServerTask is triggered twice with identical SocketActivityTriggerDetails (the trigger reason is SocketActivityTriggerReason::ConnectionAccepted, and the socket information is always SocketActivityKind::StreamSocketListener). The problem is that the first trigger supplies a valid StreamSocket in the information, and my code handles the request perfectly, while the second trigger raises an invalid-object exception just by accessing socketInformation->StreamSocket. That exception is somehow fatal and kills my server [I need to resume the app UI and click the button to start the server again]. It feels like the first trigger should report the socket kind as SocketActivityKind::StreamSocket instead. Is this a known problem, or is there some workaround?
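No definitive answer here, but a defensive guard along these lines (a C++/CX sketch; the broad catch is an assumption, since the exact exception type raised by the stale details isn't documented) would at least keep the second trigger from killing the server:

using namespace Windows::ApplicationModel::Background;
using namespace Windows::Networking::Sockets;

void ServerTask::Run(IBackgroundTaskInstance^ taskInstance)
{
    auto details = safe_cast<SocketActivityTriggerDetails^>(
        taskInstance->TriggerDetails);
    auto info = details->SocketInformation;
    try
    {
        // the duplicate trigger delivers details whose StreamSocket is
        // already invalid - this property access is what throws
        auto socket = info->StreamSocket;
        // ... handle the request on 'socket' ...
    }
    catch (Platform::Exception^)
    {
        // stale details from the duplicate trigger; ignore instead of dying
    }
}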

"Specified network name is no longer available" in Httplistener

I have built a simple web service that uses HttpListener to receive and send requests. Occasionally, the service fails with "Specified network name is no longer available". The error appears to be thrown when I write to the output buffer of the HttpListenerResponse.
Here is the error:
ListenerCallback() Error: The specified network name is no longer available at System.Net.HttpResponseStream.Write(Byte[] buffer, Int32 offset, Int32 size)
and here is the guilty portion of the code. responseString is the data being sent back to the client:
buffer = System.Text.Encoding.UTF8.GetBytes(responseString);
response.ContentLength64 = buffer.Length;
output = response.OutputStream;
output.Write(buffer, 0, buffer.Length); // this is the line that throws
It doesn't always seem to be a huge buffer; two examples are 3,816 bytes and 142,619 bytes, and these errors were thrown about 30 seconds apart. I would not think that my single client application could be overloading HttpListener; the client does occasionally send/receive data in bursts, with several exchanges happening one after another.
Google searches mostly show that this is a common IT problem where this error appears when there are network problems - most of the help is directed toward sysadmins diagnosing a problem with an app rather than developers tracking down a bug. My app has been tested on different machines, networks, etc., and I don't think it's simply a network configuration problem.
What may be the cause of this problem?
I'm getting this too, when a ContentLength64 is specified and KeepAlive is false. It seems as though the client is inspecting the Content-Length header (which, by all possible accounts, is set correctly, since I get an exception with any other value) and then saying "Whelp I'm done KTHXBYE" and closing the connection a little bit before the underlying HttpListenerResponse stream was expecting it to. For now, I'm just catching the exception and moving on.
I've only gotten this particular exception once so far when using HttpListener.
It occurred when I resumed execution after my application had been standing on a breakpoint for a while.
Perhaps there is some sort of internal timeout involved? Your application sends data in bursts, which means it's probably completely inactive a lot of the time. Did the exception occur immediately after a period of inactivity?
Same problem here, but other threads suggest ignoring the Exception.
C# problem with HttpListener
Maybe that's not the right thing to do.
For me, I find that whenever the client closes the webpage before it loads fully, I get that exception. What I do is just add a try-catch block and print something when the exception happens. In other words, I just ignore the exception.
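That approach is just the following (a sketch, reusing the variables from the question's snippet):

try
{
    output.Write(buffer, 0, buffer.Length);
}
catch (HttpListenerException ex)
{
    // the client went away before the response was fully written;
    // log it and move on
    Console.WriteLine("Client dropped the connection: " + ex.Message);
}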
The problem occurs when you're trying to respond to an invalid request. Take a look at this. I found out that the only way to solve this problem is:
listener = new HttpListener();
listener.IgnoreWriteExceptions = true;
Just set IgnoreWriteExceptions to true after instantiating your listener and the errors are gone.
Update:
For a deeper explanation: the HTTP protocol is based on TCP, which works with streams that each peer writes data to. TCP is peer-to-peer, and each peer can close the connection. When the client sends a request to your HttpListener, there is a TCP handshake; the server then processes the data and responds to the client by writing into the connection's stream. If you try to write into a stream that has already been closed by the remote peer, the "Specified network name is no longer available" exception occurs.