Is Network Up? C++ Fedora/Unix

Does anyone have a code snippet that checks whether the network is enabled on a machine and has an active IP address?
I have networking software that connects to other client machines. It works while the machine is connected, but if I unplug the cable or disable the network, it throws a whole ream of exceptions.
It would be nice to just put a check on top :D
Thanks in advance.

The network is always in a dynamic state; a simple check at the start of the run is not enough for correct operation.
So unfortunately you have to check the success state of every network operation.
As for refusing to start the program at all when the network is disconnected... consider what happens if your program is started automatically after the computer has crashed or lost power, or if any other component has suffered something similar, or a glitch. These happen surprisingly often, and restarting the program on n+1 computers just because some dweeb stumbled over a network cable is quite annoying.
For checking general availability of the networking stack, you can always run "ping -q -c 1 127.0.0.1"; the return value is 1 if localhost does not answer. Note that this only verifies the local stack is alive, not that anything outside is reachable. It belongs in a startup script; it is quite unnecessary to code it into the application.
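If you do want the check in code rather than in a script, here is a minimal sketch (my own, not from the question) using POSIX getifaddrs(), which is available on Fedora/Linux. Note that it only tells you that some non-loopback interface is up and has an IPv4 address assigned; it says nothing about whether anything is actually reachable:

#include <ifaddrs.h>
#include <net/if.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <cstdio>

// Returns true if at least one non-loopback interface is up and
// has an IPv4 address assigned.
bool networkIsUp() {
    ifaddrs* ifap = nullptr;
    if (getifaddrs(&ifap) == -1)
        return false;
    bool up = false;
    for (ifaddrs* ifa = ifap; ifa != nullptr; ifa = ifa->ifa_next) {
        if (ifa->ifa_addr == nullptr) continue;
        if (ifa->ifa_addr->sa_family != AF_INET) continue; // IPv4 only
        if (ifa->ifa_flags & IFF_LOOPBACK) continue;       // skip lo
        if (!(ifa->ifa_flags & IFF_UP)) continue;          // must be up
        up = true;
        break;
    }
    freeifaddrs(ifap);
    return up;
}

int main() {
    std::printf("network is %s\n", networkIsUp() ? "up" : "down");
}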

You should probably just catch the exceptions: otherwise you'll have problems if the machine is connected to a network, but not one with the appropriate other machines on it.
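For example (a minimal sketch; connectToPeer() is a hypothetical stand-in for whatever call in your networking layer actually throws):

#include <iostream>
#include <stdexcept>

// Stand-in for the call in your networking layer that throws when the
// cable is unplugged; replace it with your real connect routine.
void connectToPeer() {
    throw std::runtime_error("network is unreachable");
}

int main() {
    try {
        connectToPeer();
    } catch (const std::exception& e) {
        // One catch at the connection boundary turns the whole ream of
        // exceptions into a single, handled failure.
        std::cerr << "connect failed: " << e.what() << '\n';
        // ...back off, then retry or alert the user...
    }
}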

I think you can get what you want with the "ifconfig" command in the terminal.

Related

How to understand and debug from a VirtualBox log file?

I have followed this tutorial for developing an operating system. I am using Windows 10 as my host system and used WSL for compiling. But my VM fails as soon as I enable interrupts.
This is the log file the VM outputs, but I cannot understand it. I am pretty new to VirtualBox. Can someone explain any possible error you see?
Here is the code of the OS. I believe I have only changed the structure; from an execution point of view, the rest of the code is the same as shown in the video series.
That is a lot of log to scroll through, and it's hard to be sure, on the face of it, that looking at it alone would tell us what in your startup code (not visible to us as part of the question) triggers the problem. However, I can speak to some general strategies for approaching a log file like this.
We can see some general state transitions in there. The log ends with:
00:00:15.712045 Changing the VM state from 'DESTROYING' to 'TERMINATED'
So I can go back through and look at where the first instance of DESTROYING showed up, which was:
00:00:15.698320 Changing the VM state from 'POWERING_OFF' to 'OFF'
00:00:15.701802 Changing the VM state from 'OFF' to 'DESTROYING'
Following the same process backwards to POWERING_OFF, I see:
00:00:08.577363 !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
00:00:14.342287 ERROR [COM]: aRC=VBOX_E_INVALID_VM_STATE (0x80bb0002) aIID={872da645-4a9b-1727-bee2-5585105b9eed} aComponent={ConsoleWrap} aText={Invalid machine state GuruMeditation when checking if the guest entered the ACPI mode)}, preserve=false aResultDetail=0
00:00:15.643579 GUI: Request for close-action to power VM off.
00:00:15.643599 GUI: Passing request to power VM off from machine-logic to UI session.
00:00:15.643606 GUI: Powering VM down on UI session power off request...
00:00:15.644257 Console: Machine state changed to 'Stopping'
00:00:15.644763 Console::powerDown(): A request to power off the VM has been issued (mMachineState=Stopping, InUninit=0)
00:00:15.645075 Changing the VM state from 'GURU_MEDITATION' to 'POWERING_OFF'
That error line at the top of that block may point to something searchable that would turn up other instances of people having the same or a similar problem. If you scroll up a bit, you can also see that something VGA-related was happening right before the error, which may help narrow it down if it's directly related to the error, or may be another step to backtrack through on the way to the real issue.

CPU usage gradually increases in .NET Core web service

I have a .NET Core web service whose CPU usage seems to slowly increase.
Meaning on the first day it won't go past 10%, on the second day it can go up to 20%, and so on.
Using the top command in Linux, all my web services sometimes show up there (probably when a request is made) and afterwards disappear.
This specific process, after running for a while, just stays there, constantly consuming CPU even when no request has been made.
The API still works fine; it seems like there are some threads that just keep hanging and consuming CPU. Last time I checked, I had 5 threads that consumed 3-4% CPU each and didn't die for some reason.
My guess is that in some specific scenario a thread just stays alive consuming CPU.
The app runs on an Ubuntu machine. My first step was to create a dump file with ProcDump so I could analyze those threads and maybe find where they are hanging.
ProcDump generates a huge 21 GB file, and trying to analyze it with lldb throws an out-of-memory exception. I even tried transferring it to a Windows machine to debug with WinDbg; no help there either, as it couldn't open the file.
As there is no specific exception or anything, I can't really share any piece of code, since I have no idea where the issue is... just kind of hoping for some suggestions that might help me get to a solution or at least understand where the problem is.
Thanks a lot for reading, cheers
You could try using something like JetBrains' dotMemory; they also have a fairly high-level but helpful guide: https://www.jetbrains.com/help/dotmemory/How_to_Find_a_Memory_Leak.html. It is also worth checking your startup file and double-checking that the services you've registered are used in the correct way, i.e. not added as scoped when they should be transient or even a singleton, etc.
So I've been at it for a while.
Eventually I found out that my problem was with HttpClient.
Probably some bad mix of a static class and creating new instances of HttpClient caused the issue I've described above.
Solved it by using HttpClientFactory, as explained here:
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/http-requests?view=aspnetcore-2.1
Lesson learned :)
A little late, but ProcDump for Linux just added .NET Core 3 support that generates much more manageably sized core dumps. It automatically detects whether the target process is .NET Core and does the right thing (i.e., no need to specify switches).

OpenSSL decryption failed or bad record mac boost::asio

I'm writing a transparent intercepting HTTPS-capable proxy using boost::asio + OpenSSL. I have a default server context where I specify that the server is a TLSv1.2 server. When a client connects, I extract the host from the hello, use SSL_set_SSL_CTX to set the context (which either already exists or I've just created after spoofing the upstream cert), and initiate the server (downstream) read/write volley as well as the upstream one.
This was working before I started storing and sharing contexts. On each new incoming connection, I was creating a new client socket and context, loading the CA bundle as the verify file, then creating a new server context and getting the spoofed certificate. It was functioning, but I started developing issues where EC_KEY objects were being double-freed and such. I learned from another question of mine that I was going about this the wrong way, and began refactoring to recycle and share CTX objects. To be specific, I'm using a single client CTX shared across the board, which loads the CA bundle for verification at program startup.
However, since this refactor, I'm getting this on both the client and the server:
decryption failed or bad record mac
...mixed with a bajillion "short read"s. If I try to force everything to TLSv1.2, I get
block cipher pad is wrong
Those errors appear after a read/write has failed and I call async_shutdown on either the upstream or downstream socket; in the callback the error code is set (so the shutdown failed).
I've scoured the interwebs and found Jira posts from places like Apache httpd and nginx where this error was fixed in different ways (resizing read buffers to be larger, OpenSSL patches, forcing SSLv3, and so on and so forth).
I thought there might be an issue with multithreading (my io_service uses a thread pool), but I can see in the code that Boost's do_init sets up the locking callbacks for OpenSSL, and all of my IO is wrapped in a single strand.
I'm at a total loss and am wondering if anyone can shed light on what might be happening. I realize I've posted no code; that's because I've got hundreds and hundreds of lines of it and don't want to turn people off with a huge code dump. I realize, however, that this is a rather complicated program and thus a complicated issue, so please ask and I'll provide whatever I can.
Edit
I guess I should mention for completeness that I'm getting these errors on both OpenSSL 1.0.2 and 1.0.2a, on Win 8.1 x64, and that I'm intercepting and routing the HTTP/HTTPS traffic through my proxy with WinDivert.
Edit 2
Reduced the entire program to 1 thread: same effect. Created a new client CTX for each client connection: same issue. Tried disabling AES-NI: issue persists. Tried a different computer: same effect. Recompiled OpenSSL from source (was using precompiled binaries): issue persists. Tried setting additional OP_ workaround flags described in the current docs, related to downgrade detection, padding bugs, and so on and so forth: issue persists. I think I'll just start randomly mashing the keyboard and the compile button soon.
I was going to just delete this question, but I decided to answer it in light of the fact that nowhere on the net (that I could find) actually points to a correct solution to this problem. I've read every report about this error that one could find, and in every single one of those reports the people "solved" or "reduced" the error in a different way. Every single one of them, a different solution. This is what made the issue so difficult to reason out: everyone everywhere has a different underlying causal explanation.
It's complicated, ready? This error will present itself if you cancel/abort a pending async SSL operation. Mind->boom(). It'll be even more confusing if you do what the docs say and use async_shutdown to do so, because even the callback to async_shutdown will fail (the error code is set), and your error message will randomly be something stupid like "decryption failed or bad record mac" or "block cipher pad is wrong" or "SSLv3 alert!" and so on and so forth. When you see errors like this, ignore the error strings and analyze the control flow of your IO operations; somewhere you're either ending them prematurely or getting them out of order.
In my case, the premature end was (sort of) intentional, since during this stupid heavy refactor I decided to change things outside the scope of the problem, like my HTTP header parser, which I broke in a way that caused it to fail nearly 100% of the time and thus abort the connections. :) The error strings masked the real cause by telling me encryption had failed for some reason or another. Dumb mistake, I know, but I take comfort in being (apparently) the first one to recognize it. :)
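To make the control-flow point concrete, here is a minimal sketch of the safe pattern (hypothetical class and handler names, not from my proxy; connect/handshake setup omitted): the TLS shutdown is only ever started from a completion handler, after the pending operation has finished, never while another async operation on the same stream is in flight:

#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>
#include <memory>

namespace asio = boost::asio;
namespace ssl = boost::asio::ssl;

class Connection : public std::enable_shared_from_this<Connection> {
public:
    Connection(asio::io_service& io, ssl::context& ctx) : stream_(io, ctx) {}

    void start() {
        auto self = shared_from_this();
        stream_.async_read_some(asio::buffer(buf_),
            [self](const boost::system::error_code& ec, std::size_t n) {
                self->onRead(ec, n);
            });
    }

private:
    void onRead(const boost::system::error_code& ec, std::size_t /*n*/) {
        if (ec) {        // the read has completed (with an error), so it
            shutdown();  // is now safe to begin the TLS shutdown
            return;
        }
        // ...process the bytes, then queue the next read...
        start();
    }

    void shutdown() {
        auto self = shared_from_this();
        stream_.async_shutdown([self](const boost::system::error_code& ec) {
            // A failing shutdown often reports misleading strings here
            // ("decryption failed or bad record mac", "short read") when
            // the peer never sends its close_notify; log it and move on.
            boost::system::error_code ignored;
            self->stream_.lowest_layer().close(ignored);
        });
    }

    ssl::stream<asio::ip::tcp::socket> stream_;
    char buf_[4096];
};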
Open a PowerShell and type this:
(Invoke-WebRequest -Uri status.dev.azure.com).StatusDescription
https://devblogs.microsoft.com/devops/deprecating-weak-cryptographic-standards-tls-1-0-and-1-1-in-azure-devops-services/

Distributed software debug with gdb

I am currently developing distributed software in C++ on Linux, which is executed on more than 20 nodes simultaneously. One of the most challenging issues I have found is how to debug it.
I heard that it is possible to manage multiple remote sessions from a single gdb session (e.g., on the master node I create the gdb session, and on every other node I launch the program using gdbserver). Is that possible? If so, can you give an example? Do you know any other way to do it?
Thanks
You can try to do it like this:
First, start the nodes with gdbserver on the remote hosts. It is even possible to start it without a program to debug, if you start it with the --multi flag. When the server is in multi mode, you can control it from your local session; that is, you can make it start the program you want to debug.
Then, start multiple inferiors in your gdb session
gdb> add-inferior -copies <number of servers>
Switch them to a remote target and connect them to the remote servers:
gdb> inferior 1
gdb> target extended-remote host:port // use extended to switch gdbserver to multi mode
// start a program if gdbserver was started in multi mode
gdb> inferior 2
...
Now you have them all attached to one gdb session. The problem is that, AFAIK, this is not much better than starting multiple gdbs from different console tabs. On the other hand, you can write scripts or automated tests this way. See the gdb tutorial: server and inferiors.
I don't believe there is one simple answer to debugging "many remote applications". Yes, you can attach to a process on another machine and step through it in GDB. But it's quite awkward to debug a large number of interdependent processes, especially when the problem is complicated.
I believe a good set of logging capabilities in the code, supplemented with additional logs for specific debugging as needed, is more likely to give you a good/fast result.
Another option might be to run the processes on one machine, rather than on multiple machines. Perhaps even use threads within one process, to simulate the behaviour of multiple machines, simplifying the debugging process. Of course, this doesn't prevent bugs that appear ONLY when you run 20 processes on 20 different machines. But the basic idea is to reduce the number of those bugs to a minimum, and debug most things in a "simpler environment".
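As a minimal sketch of that idea (nodeMain() is a hypothetical stand-in for whatever entry point each machine normally runs):

#include <cstdio>
#include <thread>
#include <vector>

// Stand-in for the code each machine normally runs; replace with the
// real per-node entry point.
void nodeMain(int rank) {
    std::printf("node %d running\n", rank);
}

int main() {
    const int kNodes = 20;          // one thread per simulated machine
    std::vector<std::thread> nodes;
    for (int i = 0; i < kNodes; ++i)
        nodes.emplace_back(nodeMain, i);
    for (auto& t : nodes)
        t.join();
}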
Aggressive use of defensive programming paradigms, such as liberal use of assert, is clearly a good idea (perhaps with a macro to turn it off for production runs). But make sure you don't just leave error paths completely unchecked: it is MUCH harder to work out that something crashed because a memory allocation failed when all you see is a NULL pointer dereference some 20 function calls away from the failed allocation.
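A minimal sketch of that last point (checkedAlloc() is a hypothetical helper, not a standard function): assert catches the failure in debug builds, and the explicit check still fails loudly in production builds where NDEBUG disables assert:

#include <cassert>
#include <cstdio>
#include <cstdlib>

// Allocate or die, loudly, at the point of failure -- rather than
// handing back a NULL pointer that crashes 20 calls later.
void* checkedAlloc(std::size_t bytes) {
    void* p = std::malloc(bytes);
    assert(p != nullptr && "allocation failed");
    if (p == nullptr) {  // still checked when NDEBUG disables assert
        std::fprintf(stderr, "fatal: could not allocate %zu bytes\n", bytes);
        std::abort();
    }
    return p;
}

int main() {
    char* buf = static_cast<char*>(checkedAlloc(64));
    std::free(buf);
}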

Failed to resume in time Crashlog

I am trying to figure out a "Failed to resume in time" problem. On one of our testers' devices (an iPhone 4S with the latest OS) it happens very frequently, whereas on my own device it doesn't seem to happen at all.
Anyway, I got a few crash logs, but I am unable to trace the root cause. I understand that the issue might arise:
1. When a process holds up the main thread for too long.
2. When there is a memory issue.
I don't think memory is much of an issue, since it seems to happen when the user leaves the main menu and comes back. Nothing much is happening in the main menu, so it is probably a task that runs too long.
Here is an excerpt from the crash log:
Can somebody help me or guide me on how I can trace the cause of the issue? Is there any way to turn off the watchdog timer (probably not, huh)? Also, what does the highlighted thread refer to?
I have already checked my applicationDidBecomeActive & applicationWillEnterForeground to make sure there is nothing going on there.
To my knowledge there are no synchronous calls being made at this point. Does Reachability use synchronous calls to check for internet? How can I check for that?
I am not making any large data transfers upon resume.
I notice that Game Center automatically logs in, or checks for login, upon resuming your app. Is there any way to prevent this? Could this possibly cause a timeout issue?
I tried doing a time profile, but I am not able to understand how to use it for analysis. If you can provide a good resource for that, that would be amazing.
Thanks!!!
You're currently in "trying to find the issue mode". You should switch to "try to find out how much of an issue this really is" mode.
So go find another 4S (actually, as many as you can) to rule out that it's a device-specific issue. If it happens on all 4S devices, it should be easier to pinpoint. If not, have someone else look over it and discuss possible causes. The pair programming approach often helps when you're stuck in a dead-end situation.
If the issue is only on that one device, you might want to check if it's broken (or "jailbroken") or might simply need a hard reboot (hold power and home for 10+ seconds).
If it only happens on some devices but not all, try to find what they have in common. This could be language/locale, dictation, or practically any kind of setting the user might have changed. If necessary, write a logger that logs as many settings as possible to your (web) server, so you can compare settings one by one and quickly discard those that aren't in sync.
If only very few devices are affected, you could also ignore the issue and hope that additional crash logs from users will reveal the key to the issue.
Finally, there's always the option of opting out of suspending and instead terminating the app when the home button is pressed (as it was pre-iOS 4). Unless, of course, the app has to run in the background.