I successfully ported the Asio Pinger example https://think-async.com/Asio/asio-1.20.0/src/examples/cpp03/icmp/ping.cpp to work without Boost at all.
The example works perfectly, but only when running the app as root on macOS.
As suggested here and here, I understand that asio::ip::icmp is built on top of raw sockets, and macOS doesn't allow raw sockets for non-root users. This forces me to run the app with sudo, which isn't ideal; otherwise it fails with "Operation not permitted".
In the ping.c example, the socket type is set to SOCK_DGRAM for non-root users. I tried to change the flag manually, directly in the ip/icmp.hpp header, but of course that doesn't work: the socket initializes correctly without sudo, but the app crashes when sending the packet with socket_.send_to(request_buffer.data(), destination_);.
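For reference, this is roughly the direction I've been exploring (just a sketch; using assign() to adopt a native descriptor is my own idea, not something from the original example):

// Sketch: open an unprivileged ICMP datagram socket and hand it to Asio.
// SOCK_DGRAM + IPPROTO_ICMP is what ping uses on macOS for non-root users.
#include <asio.hpp>
#include <sys/socket.h>
#include <netinet/in.h>

asio::io_context io_context;
asio::ip::icmp::socket socket_(io_context);

int fd = ::socket(AF_INET, SOCK_DGRAM, IPPROTO_ICMP);
if (fd >= 0)
    socket_.assign(asio::ip::icmp::v4(), fd);  // adopt the native descriptor
// Note: with a datagram ICMP socket the kernel rewrites the echo identifier,
// so the reply matching in the example may need adjusting.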
Does anyone know how to run the Pinger example without involving root/sudo?
Thanks
Qt "Terminal Example" is not working as expected with RS232.
I am using this as boilerplate for my serial GUI application but cannot get it to send data to my device. Using the same settings in PuTTY I get perfect output. I have narrowed it down to the fact that it only sends a single message and then nothing more. Is there some loop in there? I've already put debug statements all over to trace what is happening, with no luck.
I checked what the functions are outputting, but I cannot see anywhere that the port gets closed.
I also thought that maybe it was just me not sending the \r terminator, but even that made no difference. It simply sends the first message and then nothing.
I have tried sending it manually, with commands like these:
m_serial->write("command");
m_serial->write("command\r");
I have also tried following a solution here: How to make QSerialPort from Qt5.13.1 work?
I tried to update to the newest version, but the maintenance tool could not find the repository, so I just did a clean install of 5.12.5; the same problem persists.
In my image, the first set of open-close is the Terminal example. The second set is PuTTY working. I am definitely connecting, because the error checking and serial port info I get from Qt is correct.
EDIT
My port settings are:
Baud: 9600
Data bits: 8
Stop Bits: 1
Parity: None
Flow Control: None
Qt Version: 5.13.1, 5.12.5
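For completeness, these settings map to the following QSerialPort calls (a sketch; m_serial is the port object from the example):

// Port configuration matching the settings listed above.
m_serial->setBaudRate(QSerialPort::Baud9600);
m_serial->setDataBits(QSerialPort::Data8);
m_serial->setStopBits(QSerialPort::OneStop);
m_serial->setParity(QSerialPort::NoParity);
m_serial->setFlowControl(QSerialPort::NoFlowControl);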
I wanted to play with the new lldb, since it is supposed to work better on Linux, and I tried to use it inside a container.
Sadly, it seems to treat the connection as coming from the container's IPv4 address rather than from localhost, so it rejects it:
error: rejecting incoming connection from 192.168.1.2 (expecting 127.0.0.1)
So far I haven't found a way to make it work.
Work on lldb on Linux is ongoing; please file a bug about this at the lldb.llvm.org Bugzilla.
I'm writing a transparent, intercepting, HTTPS-capable proxy using boost::asio + OpenSSL. I have a default server context where I specify that the server is a TLSv1.2 server. When a client connects, I extract the host from the hello, use SSL_set_SSL_CTX to set the context (which either already exists or I've just created after spoofing the upstream cert), and initiate the server (downstream) read/write volley as well as the upstream one.
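To illustrate the SSL_set_SSL_CTX part, here is a simplified sketch along the lines of OpenSSL's servername callback (pick_context_for_host and default_server_ctx are placeholder names; my actual code extracts the host from the hello itself):

#include <openssl/ssl.h>

SSL_CTX* pick_context_for_host(const char* host);  // placeholder: existing or freshly spoofed context

static int servername_callback(SSL* ssl, int* /*alert*/, void* /*arg*/)
{
    const char* host = SSL_get_servername(ssl, TLSEXT_NAMETYPE_host_name);
    if (host != nullptr)
    {
        SSL_CTX* ctx = pick_context_for_host(host);
        if (ctx != nullptr)
            SSL_set_SSL_CTX(ssl, ctx);  // swap the per-host context into this connection
    }
    return SSL_TLSEXT_ERR_OK;
}

// Installed once on the default server context:
SSL_CTX_set_tlsext_servername_callback(default_server_ctx, servername_callback);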
This was working before I started storing and sharing contexts. On each new incoming connection, I was creating a new client socket and context, loading the ca-bundle as the verify file, then creating a new server context and getting the spoofed certificate. It was functioning, but I started running into issues where EC_KEY objects were being double-freed and such. I learned from another question of mine that I was going about this the wrong way and began refactoring to recycle and share CTX objects. To be specific, I'm using a single client CTX, shared across the board, which loads the CA bundle for verification at program startup.
However, since this refactor, I'm getting this on both the client and the server:
decryption failed or bad record mac
...mixed with a bajillion "short read"s. If I try to force everything to TLSv1.2, I get
block cipher pad is wrong
Those errors are given to me after a read/write has failed and I call async_shutdown on either the upstream or downstream socket; in the callback, the error code is set (so the shutdown failed).
I've scoured the interwebs and found Jira posts from places like Apache httpd and nginx where this error was fixed in different ways (resizing read buffers to be larger, OpenSSL patches, forcing SSLv3, and so on and so forth).
I thought there might be an issue with multithreading (my io_service uses a thread pool), but I can see in the code that Boost's do_init sets up the locking callbacks for OpenSSL, and all of my I/O is wrapped in a single strand.
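By "wrapped into a single strand" I mean the usual pattern, roughly (a sketch; socket_, buffer_ and handle_read are placeholders for my own members):

boost::asio::io_service::strand strand_(io_service_);

socket_.async_read_some(boost::asio::buffer(buffer_),
    strand_.wrap(
        [this](const boost::system::error_code& ec, std::size_t bytes)
        {
            // Every handler wrapped by strand_ runs serialized with the others.
            handle_read(ec, bytes);
        }));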
I'm at a total loss and am wondering if anyone can shed light on what might be happening. I realize I've posted no code; that's because I've got hundreds and hundreds of lines of it and don't want to turn people off with a huge code dump. I realize, however, that this is a rather complicated program and thus a complicated issue, so please ask and I'll provide whatever I can.
Edit
I guess I should mention for completeness that I'm getting these errors on both OpenSSL 1.0.2 and 1.0.2a, on Win 8.1 x64, and that I'm intercepting and routing the HTTP/HTTPS traffic through my proxy with WinDivert.
Edit 2
Reduced the entire program to 1 thread, same effect. Created a new client CTX for each client connection, same issue. Tried disabling AES-NI, issue persists. Tried a different computer, same effect. Recompiled OpenSSL from source (I was using precompiled binaries), issue persists. Tried setting additional OP_ workaround flags described in the current docs related to downgrade detection, padding bugs, and so on and so forth, issue persists. I think I'll just start randomly mashing the keyboard and compile button soon.
I was going to just delete this question, but I decided to answer it in light of the fact that nowhere on the net (that I could find) actually points to a correct solution to this problem. I've read every single report about this error that one could find, and in every single one of those reports the people "solved" or "reduced" this error in a different way. Every single one of them, a different solution. This is what helped make this issue so difficult to reason out, because everyone everywhere has a different underlying causal explanation.
It's complicated, ready? This error will present itself if you cancel/abort a pending async SSL operation. Mind->boom(). It'll be even more confusing if you do what the docs say and use async_shutdown to do so, because even the callback to async_shutdown will fail (the error code is set), and your error message will randomly be something stupid like "decryption failed or bad record mac" or "block cipher pad is wrong" or "SSLv3 alert!", so on and so forth. When seeing errors like this, ignore the error strings and analyze the control flow of your I/O ops; somewhere you're either prematurely ending them or getting them out of order.
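To make that concrete, here is a stripped-down illustration of the pattern that bit me (not my actual proxy code; stream_ stands for a boost::asio::ssl::stream and the handlers are placeholders):

// A read is pending on the TLS stream...
stream_.async_read_some(boost::asio::buffer(buffer_),
    [](const boost::system::error_code& ec, std::size_t)
    {
        // When the shutdown below interrupts this read, ec is not a clean
        // operation_aborted; it surfaces as an OpenSSL-flavoured error such as
        // "decryption failed or bad record mac".
    });

// ...and somewhere else the connection is torn down before that read completes:
stream_.async_shutdown(
    [](const boost::system::error_code& ec)
    {
        // ec is set here too, so even the shutdown itself reports failure.
    });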
In my case, the premature end was (sort of) intentional, since during this stupid heavy refactor I decided to change things outside the scope of the problem, like my HTTPHeader parser, which I broke and ended up causing to fail nearly 100% of the time, thus aborting the connections. :) The error strings were masking the real cause by telling me encryption failed for some reason or another. Dumb mistake, I know, but I take comfort in being the first one (apparently) to recognize it. :)
Open a PowerShell and type this:
(Invoke-WebRequest -Uri status.dev.azure.com).StatusDescription
https://devblogs.microsoft.com/devops/deprecating-weak-cryptographic-standards-tls-1-0-and-1-1-in-azure-devops-services/
I am running an ssh tunnel from an application using a QProcess:
QProcess* process = new QProcess();
process->start("ssh", QStringList()<<"-L"<<"27017:localhost:27017"<<"example.com");
So far it works great, the only problem being that there is no way for me to see when the port has actually been created.
When I run the command on a shell, it takes about 10 seconds to connect to the remote host after which the forwarded port is ready for usage. How do I detect it from my application?
EDIT:
As suggested by vahancho, I used the fact that there is some output on the terminal after the connection is made, which can be used to detect that the connection has succeeded. However, there is a line printed immediately after launch, "Pseudo-terminal will not be allocated because stdin is not a terminal", which would probably give a false alarm. The correct output arrives with the second signal, emitted a bit later (which is a true indicator of the port having been opened). To get rid of the first message, I am now running ssh with ssh -t -t to force pseudo-terminal allocation.
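In code, the detection currently looks roughly like this (a sketch; the lambdas just dump the output with qDebug):

QProcess* process = new QProcess();
// Watch the output; the second chunk after launch is the real "tunnel is up" indicator.
connect(process, &QProcess::readyReadStandardOutput, this, [process]() {
    qDebug() << "ssh stdout:" << process->readAllStandardOutput();
});
connect(process, &QProcess::readyReadStandardError, this, [process]() {
    qDebug() << "ssh stderr:" << process->readAllStandardError();
});
process->start("ssh", QStringList() << "-t" << "-t"
               << "-L" << "27017:localhost:27017" << "example.com");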
So, the only question left is: does anyone see any concerns with this approach?
This is not a stable and robust solution, unfortunately. It is a similarly broken concept to parsing git's output rather than using an actual library. The main problem is that these programs, quite rightfully, make no guarantee of output compatibility.
Just imagine what happens if they have some unclear text, a typo, and the like, that goes unnoticed: they will inevitably need to fix the output at some point, and all the applications relying on it would abruptly break.
This is also the reason dedicated libraries exist that expose the functionality for reuse, rather than forcing you to work with the user-facing output directly. In the case of git, this means the libgit2 library, for instance.
Qt does not have an ssh mechanism in place by default, the way you have such libraries in Python, e.g. paramiko.
I would suggest handling this in your code by using libssh or libssh2, as you also noted yourself in the comment. I can understand the inconvenience that this is not a truly Qt'ish way as of now, but at this point Qt cannot provide anything more robust without third-party code.
That being said, it would be nice to see a similar add-on library in the Qt Project in the future, but this may not happen any time soon. If you write your software with proper design in mind, you will be able to switch to such a library without major issues once someone steps up to maintain one, in Qt or elsewhere.
I had the same problem, but in my case ssh does not output anything, so I couldn't just wait for output. I'm also using ssh to set up a tunnel, so I used QTcpSocket:
program = "ssh";
arguments << m_host << "-N" << "-L" << QString("3306:%1:3306").arg(m_host);
connect(tunnelProcess, &QProcess::started, this, &Database::waitForTunnel);
tunnelProcess->start(program, arguments);
waitForTunnel() slot:
QTcpSocket sock;
sock.connectToHost("127.0.0.1", 3306);
if (sock.waitForConnected(100000))
{
    // The forwarded port accepted a connection, so the tunnel is up.
    sock.disconnectFromHost();
    openDatabaseConnection();
}
else
    qDebug() << "timeout";
I hope this will help future people finding this question ;)
I want to test AIR applications and AIR libraries using flexmojos 3.9-SNAPSHOT.
However, although flexmojos does indeed have support for AIR, it tries to run the SWF generated by the build using Flash Player, and since I need to use AIR native libraries I wanted to run the tests using adl (the AIR Debug Launcher).
To do this, I cloned flexmojos from github.com into this repository (http://github.com/mi007/flexmojos). I then created a class that generates an -app.xml for the TestRunner.swf file produced by the build, and ran:
adl TestRunner-app.xml
However, before the test ends, it should call the server on port 13540 to report something. When that happens, I get the following error:
Error #2044: Unhandled securityError:. text=Error #2048: Security sandbox violation: app:/TestRunner.swf cannot load data from 127.0.0.1:13540.
at org.sonatype.flexmojos.unitestingsupport::ControlSocket/connect()[/Users/rafael/p2d/others/flexmojos/flexmojos-testing/flexmojos-unittest-support/src/main/flex/org/sonatype/flexmojos/unitestingsupport/ControlSocket.as:46]
at org.sonatype.flexmojos.unitestingsupport::TestApplication/runTests()[/Users/rafael/p2d/others/flexmojos/flexmojos-testing/flexmojos-unittest-support/src/main/flex/org/sonatype/flexmojos/unitestingsupport/TestApplication.as:52]
at flash.events::EventDispatcher/dispatchEventFunction()
at flash.events::EventDispatcher/dispatchEvent()
at mx.core::UIComponent/dispatchEvent()[C:\autobuild\galaga\frameworks\projects\framework\src\mx\core\UIComponent.as:9408]
at mx.core::UIComponent/set initialized()[C:\autobuild\galaga\frameworks\projects\framework\src\mx\core\UIComponent.as:1169]
at mx.managers::LayoutManager/doPhasedInstantiation()[C:\autobuild\galaga\frameworks\projects\framework\src\mx\managers\LayoutManager.as:718]
at Function/http://adobe.com/AS3/2006/builtin::apply()
at mx.core::UIComponent/callLaterDispatcher2()[C:\autobuild\galaga\frameworks\projects\framework\src\mx\core\UIComponent.as:8733]
at mx.core::UIComponent/callLaterDispatcher()[C:\autobuild\galaga\frameworks\projects\framework\src\mx\core\UIComponent.as:8673]
I know that it successfully calls the server on port 13539 earlier, because it prints the test results on the console. I also know that port 13540 is open, because I was able to telnet to it. However, for some reason the AIR application is unable to connect to it.
Given the circumstances, I have the following questions:
1) Is there any good documentation that I can read to understand how this security framework works? The only documentation that I found was terribly confusing.
2) Does anyone have any ideas or hints about what might be happening?
3) I have read somewhere that flexmojos hacks the security framework so that flex applications can open a socket to localhost without problems during tests. Is there any documentation about how this is done?
Thanks,
You've got very close to the answer in your comment...
You need to add a line to a file in ~/Library/Preferences/Macromedia/Flash Player/#Security/FlashPlayerTrust yourself, covering the swf you want to run. You can do this by hand directly (all files in that folder are processed, one trusted path per line) or by using the Flash Player Settings Manager. For example, a plain-text file in that directory containing a single line with the absolute path of the directory holding TestRunner.swf will mark that SWF as trusted.