I have a C++ client that connects to servers using libcurl on FreeBSD. The system administrators recently updated the FreeBSD image and installed ports. The system went from cURL version 7.24.0_2 to cURL version 7.31.0. (The file name went from libcurl.so.6 to libcurl.so.7, for what that's worth.)
I recompiled my program to link against the new library.
Now I am getting return value 3 (CURLE_URL_MALFORMAT) from my call to curl_easy_perform(3), and the error message string returned is " malformed".
However, nothing else has changed. The URL is unchanged, and has been verified as correct.
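For reference, the call boils down to something like this (a stripped-down sketch with a placeholder URL; CURLOPT_ERRORBUFFER is only there as one way to get more detail than curl_easy_strerror() gives):

#include <curl/curl.h>
#include <cstdio>

int main() {
    CURL* curl = curl_easy_init();
    char errbuf[CURL_ERROR_SIZE] = {0};
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/some/path");  // placeholder URL
    curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, errbuf);                  // capture libcurl's own detail
    CURLcode rc = curl_easy_perform(curl);                                // the call that now fails for me
    if (rc != CURLE_OK)
        fprintf(stderr, "curl_easy_perform: %d (%s)\n", static_cast<int>(rc),
                errbuf[0] ? errbuf : curl_easy_strerror(rc));
    curl_easy_cleanup(curl);
    return 0;
}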
Stranger still, the command line "curl" program works fine; isn't it using the same library?!
I've spent a couple of hours reading the release notes for libcurl but couldn't spot anything that suggested why this should now fail.
Any suggestions?
Turns out the sysadmins had built cURL incorrectly. A fresh install and it works.
I am writing a C++ gRPC-based service. As recommended here, I installed gRPC in my local directory. I am using Linux Ubuntu 22.04.
Now, at the beginning of a service call implementation, I have these lines of code:
google::protobuf::Arena arena;
ResponseStatus* response_status =
    google::protobuf::Arena::CreateMessage<ResponseStatus>(&arena);
response->set_allocated_status(response_status);
When I invoke the service from a Ruby client, the last line causes an exception with the following error message:
[libprotobuf FATAL /home/lrleon/grpc/third_party/protobuf/src/google/protobuf/generated_message_util.cc:764] CHECK failed: (submessage_arena) == (nullptr):
I'll be honest: I do not have a good idea of why this problem could be happening. My best hypothesis is that a different version of the required gRPC/protobuf libraries is intervening somewhere. I do not really believe it, because I am almost sure I am not using any system library related to gRPC, but possibly another app could have installed one.
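In case it helps, this is the alternative allocation pattern I am comparing against, on the assumption that the CHECK fires because the submessage lives on my local arena while response itself does not (just a sketch, using the same names as above; I have not confirmed this is the intended fix):

// Allocate the submessage on the heap instead of on the arena, so that
// set_allocated_status() can simply take ownership of it.
ResponseStatus* response_status = new ResponseStatus();
response->set_allocated_status(response_status);

// Or skip manual ownership entirely and fill the fields through
// response->mutable_status() instead.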
I spent a full day searching forums without finding a related post that could help me solve this.
That said, I kindly ask whether some gRPC expert could help me.
Thanks in advance.
I am developing a C++ HTTP/HTTPS client application on CentOS/Red Hat 7.7, which ships with OpenSSL 1.0.2k-fips. But I need at least OpenSSL 1.1.1 for my application, so I compiled OpenSSL 1.1.1f from source.
My compilation configuration:
./Configure linux-x86_64 enable-ec_nistp_64_gcc_128 -Wl,-rpath=/usr/local/lib64 --prefix=/usr/local --openssldir=/usr/local
The problem starts here.
Any request I make with the new OpenSSL results in Verify return code: 20 (unable to get local issuer certificate). I spent some time switching between C++ HTTP clients, which was a very tiring process, only to find out that the issue is with OpenSSL itself.
After long research, I found that the HTTP client I currently use ships a ca-bundle inside its repo. Including it in my project resolved the issue.
https://github.com/yhirose/cpp-httplib/blob/master/example/ca-bundle.crt
cli.set_ca_cert_path("./ca-bundle.crt");
Or on the command line:
openssl s_client -connect example.com:443 -CAfile ./ca-bundle.crt
This all seems absurd; I never had to do anything like this before in my experience developing HTTP clients. Hence my questions:
Is this a problem with CentOS 7?
Did I make a mistake while compiling OpenSSL? (Followed)
Note: Using cli.set_ca_cert_path("/etc/ssl/certs/ca-bundle.crt"); also works (took me a long time to realize), but I never had to do this manually before, hence forgive my ignorance.
Thanks for your time.
Edit: With the help of some code:
#include <cstdlib>
#include <iostream>
#include <openssl/x509.h>

const char* dir = std::getenv(X509_get_default_cert_dir_env());
if (!dir)
    dir = X509_get_default_cert_dir();
std::cout << dir << std::endl;
It would seem dir is /usr/local/certs, which does not exist. But I also tried this with the OS's default installed OpenSSL, which I would guess should point to the real certificates?
Edit2: Using /etc/pki/tls/cert.pem also works.
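For reference, the same fix can presumably be applied process-wide through OpenSSL's standard environment variable instead of per-client (sketch, untested on my side; SSL_CERT_FILE is the variable behind X509_get_default_cert_file_env(), and the path is the CentOS 7 default bundle):

#include <cstdlib>

int main() {
    // Point the custom-built OpenSSL at the CA bundle the distro actually
    // ships, before any TLS connection is made.
    setenv("SSL_CERT_FILE", "/etc/pki/tls/cert.pem", 1);
    // ... the rest of the client would run from here ...
    return 0;
}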
I have a Xamarin iOS application in Visual Studio 2019 on a Windows 10 PC. I build it on the PC, and when possible I then right-click the project and choose Show IPA File on Build Server; then on my Mac I upload the file to the App Store using Transporter. I am encountering these issues:
Whenever I build, the build says it failed with message "There was an error unzipping the file bin\Ad-Hoc\MyApp.app.dSYM.zip: Could not find a part of the path 'C:\MyDirectory\MyApp.iOS\bin\Ad-Hoc\MyAppiOS.app.dSYM'." This has not historically caused any issues - we have still been able to upload the IPA to the app store and deploy our app - but I include it in case it's relevant to the other issues.
Sometimes my build fails saying it "has been disconnected while waiting a post response to topic xvs/Build/.../execute-task/MyApp.iOS/...Codesign" or "Unable to connect to Mac Server with Address='192.111.111.111' and User='My Username'. The build can't continue without a connection". I'm assuming a Wi-Fi issue must cause this, though the machines are all right next to one another and next to my router, so it seems odd. Occasionally when I try to pair to the Mac, I also get the message "Error, Couldn't connect to com. Please try again. An attempt was made to access a socket in a way forbidden by its access permissions." In any case, it makes me wonder if there's any way for me to 1) hardwire the Mac to the PC or 2) build directly on the Mac instead of through VS on the PC, even though I write the code on the PC?
During the build, I periodically get an error "/Users/myUser/Library/Caches/Xamarin/mtbs/builds/MyApp.iOS//bin/Ad-Hoc/MyAppiOS.app: errSecInternalComponent MyApp.iOS C:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\MSBuild\Xamarin\iOS\Xamarin.iOS.Common.targets 2003". If I lock all keychains on the Mac this goes away on the next build, but then it reappears a few builds later.
Once the build finishes with just the .dSYM.zip error, most of the time the "Show IPA File on Build Server" option still does NOT show up (not greyed out - it is not present in the menu at all) when I right-click the iOS project. To get around this, I have been copying the file over to the Mac via S3. I'm wondering why the option doesn't show up, and if there's a way to just find the built file on the Mac rather than copying it over from Windows?
I then use Transporter on the Mac to upload the IPA file to the Apple Store. It always is able to read the version information and says the file is Delivered. However, often I then get an email from Apple saying the build failed because "ITMS-90688: This IPA is invalid - While unzipping the IPA we received the error message [ End-of-central-directory signature not found. Either this file is not a zipfile, or it constitutes one disk of a multi-part archive. In the latter case the central directory and zipfile comment will be found on the last disk(s) of this archive. unzip: cannot find zipfile directory in one of MyAppiOS.ipa or MyAppiOS.ipa.zip, and cannot find MyAppiOS.ipa.ZIP, period. ] Verify that the IPA can be unzipped before reattempting your upload.".
In response to that error, I've tried to unzip the IPA file. On the PC it is always unzippable if I change the extension to .zip, and on my Mac I can always unzip it to see the Payload directory, though the MyAppiOS item inside it can never be opened - I get a popup saying "You can't open the application 'MyAppiOS' because it is not supported on this type of Mac." In any case, the files Apple accepts and the ones it rejects look the same to me when I try unzipping them.
My only ideas are to try to figure out how to open and build the app on my Mac, to call my router company, and to keep trying and trying, over and over again, until finally one of the builds works... which sometimes literally takes hours.
Many thanks for any help you are able to offer!
I'm trying to run a simple client/server pair to implement communication using QSslSocket. I work on Windows (unfortunately) and I use Qt Creator for convenience.
When I try, from the client side, to connect to the server using MyQSslSocket->connectToHostEncrypted(ip, port), I get the following message:
qt.network.ssl: QSslSocket::connectToHostEncrypted: TLS initialization failed
When I print the raised error, I get the following one:
QAbstractSocket::SocketError(20)
In the documentation we can find that this error code corresponds to QAbstractSocket::SslInternalError, whose description is:
"The SSL library being used reported an internal error. This is probably the result of a bad installation or misconfiguration of the library."
After some investigation I found that Qt does not provide OpenSSL by itself, so I installed it (the binaries, both the 32-bit and 64-bit versions to be sure) from https://slproweb.com/products/Win32OpenSSL.html.
During the installation, the DLLs were copied to C:\Windows\System32 (for the 64-bit version). Then I checked that the PATH environment variable does indeed contain this folder.
At this point I tried again, but I still had the same problem, as if the OpenSSL installation was still not found.
When I print the output of the following calls (in the main function of my client):
qDebug() << QSslSocket::supportsSsl();
qDebug() << QSslSocket::sslLibraryVersionString();
I get the following outputs:
false""
My question is: how do I make QSslSocket::supportsSsl() return true?
If anyone could teach me what I missed, what I am doing wrong and tell me what I should do to be able to make this SSL connection run properly, I would be very grateful.
Fareanor.
PS: Sorry for the long question but I think it is important to clearly expose the problem and the context to help you to easily understand the problem and give me more relevant answers.
OK,
Thanks to #AlienPenguin's and #Macias's advice, the problem was that my version of OpenSSL was too recent.
In the end I installed the closest available version to the one used for the Qt build (which makes sense, I should have thought of it), which can be found by running the following call:
qDebug() << QSslSocket::sslLibraryBuildVersionString();
Problem solved.
Thanks again.
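For anyone checking the same thing, the three calls side by side make the mismatch easy to spot (a minimal sketch; supportsSsl() should turn true once Qt can load a run-time OpenSSL compatible with the build version it reports):

#include <QCoreApplication>
#include <QSslSocket>
#include <QDebug>

int main(int argc, char* argv[])
{
    QCoreApplication app(argc, argv);
    // Compare the OpenSSL version Qt was built against with the one it
    // actually managed to load at run time.
    qDebug() << "supportsSsl:       " << QSslSocket::supportsSsl();
    qDebug() << "build-time OpenSSL:" << QSslSocket::sslLibraryBuildVersionString();
    qDebug() << "run-time OpenSSL:  " << QSslSocket::sslLibraryVersionString();
    return 0;
}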
I recently saw this error on programs which used to work fine. I think that the error started appearing after I did a sudo apt-get upgrade, which might have upgraded the Qt libraries on my machine.
I've reproduced this error in a newly created project containing this code:
QDesktopServices::openUrl(QUrl("/home/sashoalm/Has Spaces.txt"));
QDesktopServices::openUrl(QUrl::fromLocalFile("/home/sashoalm/Has Spaces.txt"));
This produces two message boxes saying the same thing: /home/sashoalm/Has%20Spaces.txt: No such file or directory. But the file exists - I've verified that; xdg-open "/home/sashoalm/Has Spaces.txt" works fine, for example.
Any workarounds? When did this bug happen? My OS is Debian Wheezy.
Edit: I checked Qt4's source code, and the relevant code is this (from qdesktopservices_x11.cpp):
return (QProcess::startDetached(client + QLatin1Char(' ') + QString::fromLatin1(url.toEncoded().constData())));
QUrl::toEncoded() returns the percent-encoded path as file:///home/sashoalm/Has%20Spaces.txt. What is strange is that there have been no changes in that file, apart from updating the copyright notices, since before 2011, so it can't be a change in Qt. But the command issued by QDesktopServices::openUrl() is xdg-open file:///home/sashoalm/Has%20Spaces.txt, and that doesn't work on my computer. Perhaps it used to work before, and an update to xdg-open itself broke it? Does anyone know if xdg-open should handle file:/// URLs with percent encoding?
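To check xdg-open in isolation, outside of the string concatenation Qt 4 does, the same invocation can be reproduced with QProcess directly (just a sketch of the experiment):

#include <QCoreApplication>
#include <QProcess>
#include <QStringList>

int main(int argc, char* argv[])
{
    QCoreApplication app(argc, argv);
    // Hand xdg-open the same percent-encoded file:// URL that
    // QDesktopServices::openUrl() produces, to see whether xdg-open
    // itself decodes the %20.
    QProcess::startDetached("xdg-open",
                            QStringList() << "file:///home/sashoalm/Has%20Spaces.txt");
    return 0;
}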
On Qt 5,
QDesktopServices::openUrl(QUrl::fromLocalFile("/home/sashoalm/Has Spaces.txt"));
worked just fine. I was having the same problem when loading the file purely from a QUrl, like the first line:
QDesktopServices::openUrl(QUrl("/home/sashoalm/Has Spaces.txt"));
but when I used QUrl::fromLocalFile it just did the thing.
Either escape the space with \
QUrl("/home/sashoalm/Has\ Spaces.txt")
or add quotes to the path: -
QUrl("\"/home/sashoalm/Has Spaces\"")