C++ std::filesystem::copy fails with "Network location cannot be reached"

I have written a small C++ application which is started automatically after Windows boots on a couple of clients. This application copies a file from a network share (the same network share for all clients) to the local disk. When I reboot all clients at once, a bunch of them get error 1231 from the std::filesystem::copy function with the following message:
"Network location cannot be reached"
If I reboot all clients with an interval of a couple of seconds between them, then there is no problem.
This makes me think that the copy function might be locking the file while it is being copied.
Is there some setting that I am missing that prevents this? Is this normal behaviour?
EDIT: I have been able to fix the network problem. However, I now get error 32, which states that "the process cannot access the file because it is being used by another process". Does the copy function lock the files that are currently being copied?

It sounds more like the network share has not been mounted yet. If all clients attempt to mount the same network share at the same time, this can mean a lot of work for the server handing out the share. Consequently, some clients may time out and have to repeat their request. Make sure the network share is actually mounted before you attempt to copy from it.

You are facing a problem due to an uninitialized network on your client workstations.
The error ERROR_NETWORK_UNREACHABLE - 1231 (0x4CF) indicates that the path provided is not reachable at that moment.
You can use two approaches:
1) Loop until a check that the file path exists succeeds, handling any error situation with try-catch. Once the check succeeds, go ahead with the download/copy (see the sketch after this list).
2) Sleep for 60 to 180 seconds before downloading/copying the file in the current program.
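A minimal sketch of approach 1, using the non-throwing std::error_code overloads instead of try/catch; the paths, retry count, and delay are placeholders to adapt:

#include <chrono>
#include <filesystem>
#include <iostream>
#include <system_error>
#include <thread>

namespace fs = std::filesystem;

bool copy_with_retry(const fs::path& src, const fs::path& dst,
                     int max_attempts = 30)
{
    for (int attempt = 0; attempt < max_attempts; ++attempt) {
        std::error_code ec;
        // exists() with an error_code does not throw; on an unreachable
        // share it reports the failure through ec instead.
        if (fs::exists(src, ec) && !ec) {
            fs::copy(src, dst, fs::copy_options::overwrite_existing, ec);
            if (!ec)
                return true;  // copied successfully
            std::cerr << "copy failed: " << ec.message() << '\n';
        }
        std::this_thread::sleep_for(std::chrono::seconds(5));
    }
    return false;  // share never became reachable
}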

I edited my question; there was indeed a problem with Active Directory where the client was not immediately given an IP address and thus was unable to access the share.
After some more testing, I now see that I am only able to perform a copy on one of the clients using std::filesystem::copy, while the others show error 32, stating that "the process cannot access the file because it is being used by another process". If I instead use the xcopy command in a batch file on all devices simultaneously, I do not get any error...

Related

Asio Bad File Descriptor only on some systems

Recently I wrote a Discord bot in C++ with the sleepy-discord bot library.
The problem is that when I run the bot, it shows me the following errors:
[2021-05-29 18:30:29] [info] Error getting remote endpoint: asio.system:9 (Bad file descriptor)
[2021-05-29 18:30:29] [error] handle_connect error: Timer Expired
[2021-05-29 18:30:29] [info] asio async_shutdown error: asio.ssl:336462100 (uninitialized)
I searched far and wide for what could trigger this, but the answers always say something like a socket wasn't opened, and so on.
The thing is, it works on a lot of systems, but yesterday I rented a VM (same system as my computer), and it seems to be the only one giving me this issue.
What could be the reason for this?
Edit: I was instructed to show a reproducible example, but I am not sure how I would write a minimal one, so I am linking the bot in question instead:
https://github.com/ElandaOfficial/jucedoc
Update:
I tinkered around a bit in the library I am using and was able to increase the websocketpp log level; thankfully I got one more line of information out of it:
[2021-05-29 23:49:08] [fail] WebSocket Connection Unknown - "" /?v=8 0 websocketpp.transport:9 Timer Expired
The error triggers when you call s.remote_endpoint() on a socket that is not connected or no longer connected.
It would happen e.g. when you try to print the endpoint of the socket after an IO error. The usual way to work around that is to store a copy of the remote endpoint as soon as a connection is established, so you don't have to retrieve it when it's too late.
On the question of why it's happening on that particular VM, you have to shift focus to the root cause. It might be that accept is failing (possibly due to limits such as the number of file descriptors, available memory, etc.).
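A minimal sketch of that workaround, assuming standalone Asio and a placeholder host/port; the point is only that the endpoint is cached while the socket is still connected:

#include <asio.hpp>
#include <iostream>

int main() {
    asio::io_context io;
    asio::ip::tcp::resolver resolver(io);
    asio::ip::tcp::socket socket(io);

    // Placeholder host/port; connecting synchronously for brevity.
    asio::connect(socket, resolver.resolve("example.com", "80"));

    // Cache the endpoint now; remote_endpoint() throws once the
    // socket is no longer connected.
    const asio::ip::tcp::endpoint peer = socket.remote_endpoint();

    // ... later, even after an IO error has closed the connection,
    // the cached copy is still safe to log:
    std::cout << "peer was " << peer << '\n';
}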

boost::last_write_time returns the wrong value on the first call after file modification (file on a network drive connected through VPN)

I am using boost::last_write_time as one of my checks to see whether a file has been modified or not.
The call works fine if the file I am using is a local file.
But if I request the same information for a file on a network drive, I get the wrong result.
The call I make is: boost::filesystem::last_write_time( file_path )
I am connected to the drive through VPN. The result is wrong only the first time I make a request after modifying the file. The next call returns the right time.
The wrong time I get is always the old modification time (the one prior to the new change).
It doesn't matter if I wait a while before making the request. The first call is always wrong and the second one gives me the correct value.
I am working on a Mac, and I see that internally the method makes use of the stat function.
I tried passing the error_code struct to see if there was any error, but it held 0 after the call.
Is there any limitation related to getting the status of files over a network using the stat method?
Is there any function I could call to ensure that last_write_time always returns the right time (other than calling the method twice)?
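For reference, a minimal sketch of the call pattern described above, with a placeholder path and the non-throwing error_code overload:

#include <boost/filesystem.hpp>
#include <ctime>
#include <iostream>

int main() {
    const boost::filesystem::path file_path("/Volumes/share/file.txt");  // placeholder
    boost::system::error_code ec;

    // Non-throwing overload: failures are reported through ec.
    std::time_t t = boost::filesystem::last_write_time(file_path, ec);
    if (ec)
        std::cerr << "error: " << ec.message() << '\n';
    else
        std::cout << "last write: " << std::ctime(&t);
}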
ADDITIONAL INFORMATION:
Found some additional info on: IBM Knowledge Center
In usage notes, bullet 6 on "Network File System Differences" says:
Local access to remote files through the Network File System may produce unexpected results due to conditions at the server...
...The local Network File System also impacts operations that retrieve file attributes. Recent changes at the server may not be available at your client yet, and old values may be returned from operations. (Several options on the Add Mounted File System (ADDMFS) command determine the time between refresh operations of local data.)
But I still don't understand why the call works correctly the second time, even when it is made immediately after the first one, and why the first call is wrong even if I wait for some time before making it.

How to smoothly restart a C++ program without shutting down the running program?

I have a server program which should run all day. If I want to change some of its parameters, is there any way other than shutting it down and restarting it?
There are quite a few ways of doing this, including, but almost certainly not limited to:
1) You can maintain the parameters in a separate file so that the program periodically checks that file and updates its internal information.
2) Similar to (1), but you can send some sort of signal to the application to get it to immediately re-read the file (see the sketch at the end of this answer).
3) You can do either (1) or (2) but using shared memory rather than a configuration file.
4) You can have your program sit at the server end of an IPC conversation, so that a client can open up a connection to it to provide new parameters. Anything from a simple message queue to a full-blown HTTP server and associated pages.
Of course, all of these tend to need a fair amount of work in your program to get it to look for the new information.
You should take that into account when making your decision. By far the quickest solution to implement is to just (cleanly) kill off the process at something like 11:55pm and then immediately restart it. It's simpler because your code probably already has the ability to load the information on startup, so this could be a simple cron one-liner.
Some people speak of laziness as a bad thing, but that's not always the case :-)
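As a concrete illustration of option (2), here is a minimal POSIX sketch, assuming a placeholder configuration file name and trivial parsing; the signal handler only sets a flag, and the actual re-read happens in the main loop:

#include <atomic>
#include <csignal>
#include <fstream>
#include <iostream>
#include <string>

std::atomic<bool> reload_requested{false};

extern "C" void on_sighup(int) { reload_requested = true; }

// Placeholder: a real server would parse its parameters here.
void load_config(const std::string& path) {
    std::ifstream in(path);
    for (std::string line; std::getline(in, line); )
        std::cout << "config: " << line << '\n';
}

int main() {
    std::signal(SIGHUP, on_sighup);
    load_config("server.conf");  // placeholder file name

    for (;;) {  // main server loop
        if (reload_requested.exchange(false))
            load_config("server.conf");
        // ... serve clients ...
    }
}

With this in place, applying new parameters is just a matter of sending kill -HUP to the process, without dropping it.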
If the server maintains many live connections from clients, restarting the server process is the last thing you should consider. Besides reloading configuration files, inserting a proxy process between the clients and the server is another option.
The proxy process is responsible for two things:
a. Maintaining the connections from clients and forwarding packets to the server for handling.
b. Judging whether the current server process (server A) is alive and, if it is not, switching to another server (server B) automatically.
Then you can change parameters by restarting a server without worrying about interrupting clients, since there are always two (or more) servers running.

What happens to a named pipe if server crashes?

I know little about pipes but have used one to connect two processes in my code in Visual C++. The pipe is working well, but I need to add error handling to it, hence I wanted to know what will happen to a pipe if the server that created it crashes, and how I can recognize that from the client process.
Also, what will happen if the client process tries to access the same pipe after the server crash, if no error handling is put in place?
Edit:
What impact will there be on memory if I keep creating new pipes (say, by using the system time as the pipe name) while the previous ones were broken because of a server crash? Will these broken pipes be removed from memory?
IIRC the ReadFile or WriteFile function will return FALSE and GetLastError() will return ERROR_BROKEN_PIPE or ERROR_PIPE_NOT_CONNECTED (the Win32 mapping of STATUS_PIPE_DISCONNECTED).
I guess this kind of handling is implemented in your code; if not, you had better add it ;-)
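A minimal Win32 sketch of that client-side check, assuming a placeholder pipe name; a failed ReadFile is treated as a possibly broken or disconnected pipe:

#include <windows.h>
#include <iostream>

int main() {
    HANDLE pipe = CreateFileA(R"(\\.\pipe\my_pipe)", GENERIC_READ,
                              0, nullptr, OPEN_EXISTING, 0, nullptr);
    if (pipe == INVALID_HANDLE_VALUE) {
        std::cerr << "connect failed: " << GetLastError() << '\n';
        return 1;
    }

    char buffer[512];
    DWORD read = 0;
    if (!ReadFile(pipe, buffer, sizeof(buffer), &read, nullptr)) {
        DWORD err = GetLastError();
        // ERROR_BROKEN_PIPE / ERROR_PIPE_NOT_CONNECTED typically mean
        // the server end went away; reconnect or back off here.
        std::cerr << "read failed: " << err << '\n';
    }

    CloseHandle(pipe);
}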
I just want to throw this out there.
If you want a survivable method for transferring data between two applications, you might consider using MSMQ or even bringing in BizTalk or another message platform.
There are several things to consider:
What happens if the server is rebooted or loses power?
What happens if the server application becomes unresponsive?
What happens if the server application is killed or goes away completely?
What is the appropriate response of a client application in each of the above?
Each of those contexts represents a potential loss of data. If data loss is unacceptable, then named pipes are not the mechanism you should be using. Instead you need to persist the messages somehow.
MSMQ, storing to a database, or even leveraging Biztalk can take care of the survivability of the message itself.
If 1 or 3 happens, then the named pipe goes away and must be recreated by a new instance of your server application. If #2 happens, then the pipe won't go away until someone either reboots the server or kills the server app and starts it again.
Regardless, the client application needs to handle the above issues. They boil down to connection-failure problems. Depending on what the client does, you might have it move into a wait state and let it ping the server every so often to see if it has come back again.
Without knowing the nature of the data and the communication processes involved, it's hard to recommend a proper approach.
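A minimal Win32 sketch of that wait-and-ping behaviour, assuming a placeholder pipe name and timeout; the client simply retries until the server has restarted and recreated the pipe:

#include <windows.h>

HANDLE reconnect(const char* pipe_name) {
    for (;;) {
        HANDLE h = CreateFileA(pipe_name, GENERIC_READ | GENERIC_WRITE,
                               0, nullptr, OPEN_EXISTING, 0, nullptr);
        if (h != INVALID_HANDLE_VALUE)
            return h;  // the server is back and the pipe exists again

        // WaitNamedPipe waits for a free instance of an existing pipe;
        // it fails immediately if the pipe is gone, so back off too.
        if (!WaitNamedPipeA(pipe_name, 5000))
            Sleep(5000);
    }
}

Called after a read or write failure, this blocks until the server has come back, at which point the client can resume normal operation.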

XmlHttpRequest bug?

I'm writing a program that, among other things, needs to download a file given its URL. I'm too lazy to implement the HTTP/HTTPS protocols manually, so I needed some library/object/function that'll do the job.
Critical requirement: The download must be asynchronous. That is, the thread that issued the download must be able to do something else "while" the file downloads, and it must be possible to abort the download at any time without any barbaric side effects (such as an internal call to TerminateThread).
Nice-to-have requirements:
Should be able to download the file "into memory", i.e. read the contents of the file as they arrive, not necessarily save them to some file-system file.
It'd be nice to have some convenient Win32 progress notification mechanism (waitable event, semaphore, completion port, etc.), rather than just periodically polling the download status.
I've chosen the XmlHttpRequest COM object to do the work. It seemed to work well enough, and it supports asynchronous mode.
However, I noticed that after some period it just stops working.
That is, after several successful file downloads it stops downloading anything.
I periodically poll it to get its status; it reports "in progress", but nothing actually happens and there's no network activity. Moreover, when the same process creates another instance of the XmlHttpRequest object to perform new downloads, the effect is the same. The object reports "in progress", whereas it doesn't even try to connect to the server (according to network sniffers and the system TCP state).
The only way to make this object work again is to restart the process. This makes me suspect that there's a sort of a bug (sorry, I meant undocumented feature) in the object. Also, it's not a bug at the level of an individual object, since the problem persists when the object is destroyed and another one is created. It's probably some global state of the DLL that implements this object.
Does anyone know something about this? Is this a known bug?
I'm pretty sure there's no chance that some other bug in my code merely makes it look like the bug is in the XmlHttpRequest. I've done enough tests and spent enough time with the debugger to conclude beyond reasonable doubt that the object just stops working.
BTW, while the object is supposed to be working, I do all the waiting via MsgWaitXXXX API calls, so that if this object needs the message loop to work properly (for instance, it may create a hidden notification window and bind it to a socket via WSAAsyncSelect), I give it the opportunity.
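For illustration, a minimal sketch of such a message-pumping wait, assuming completion is signalled through a hypothetical done_event handle; MsgWaitForMultipleObjects wakes on either the event or a posted message, so COM objects that depend on a hidden window keep receiving their messages:

#include <windows.h>

void wait_with_message_pump(HANDLE done_event) {
    for (;;) {
        DWORD r = MsgWaitForMultipleObjects(1, &done_event, FALSE,
                                            INFINITE, QS_ALLINPUT);
        if (r == WAIT_OBJECT_0)
            return;  // the download signalled completion

        // A message arrived instead; pump the queue so hidden
        // notification windows get their messages dispatched.
        MSG msg;
        while (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE)) {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    }
}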
I know from my own experience that the Microsoft implementation of XmlHttpRequest falls short of full compliance with the draft standard. In particular, the standard mandates that streamed data should be extractable in ready state 3 (Receiving), which IE deliberately ignores.
Unfortunately, I have not seen what you are describing, despite using XmlHttpRequest objects extensively for long-polling purposes.