I'm currently trying to set up a simple packet sniffer with libpcap, and I'm facing a lot of confusion over this linker error.
I cloned the most recent version from GitHub (1.11.0-PRE-GIT) and successfully did the configure, make, and make install steps outlined in the installation instructions.
My program is as follows:
#include <iostream>
#include <pcap/pcap.h>
#include <string>
#include <cstdlib>

using namespace std;

int main(int argc, char *argv[])
{
    // Set up error buffer
    char errbuf[PCAP_ERRBUF_SIZE];

    // Check libpcap version number
    cout << pcap_lib_version() << endl << endl;

    // Initialize the library for local character encoding
    pcap_init(PCAP_CHAR_ENC_LOCAL, errbuf);

    return 0;
}
But when I try to compile with the command below I get the error:
g++ csniff.cc -o csniff -lpcap
/usr/bin/ld: /tmp/ccAPLFoh.o: in function `main':
csniff.cc:(.text+0x79): undefined reference to `pcap_init'
collect2: error: ld returned 1 exit status
I've also checked the pcap.h files present in both /usr/include and /usr/local/include, and they both contain a prototype for the pcap_init function that looks like this:
#define PCAP_CHAR_ENC_LOCAL 0x00000000U /* strings are in the local
                                           character encoding */
#define PCAP_CHAR_ENC_UTF_8 0x00000001U /* strings are in UTF-8 */

PCAP_AVAILABLE_1_10
PCAP_API int pcap_init(unsigned int, char *);
One thing I have noticed though is that when I comment out the line with pcap_init and use the script to print the version number I get
libpcap version 1.9.1 (with TPACKET_V3)
Any pointers would be much appreciated!
Edit: running Ubuntu 20.04.3 LTS
Your operating system already comes with libpcap - version 1.9.1. Most Linux distributions do, as do the *BSDs, macOS, and some commercial UN*Xes.
You compiled and installed a newer version of libpcap, so you had two versions of the library file - the 1.9.1 that comes with the OS, in the system library directory, and the 1.11.0-PRE-GIT that you compiled and installed, probably in /usr/local/lib.
You also had only one version of the libpcap header files - the 1.11.0-PRE-GIT version - in /usr/local/include/pcap, because you installed it. You did not have the 1.9.1 version of the header files because, as on many Linux distributions, a separate "development" package has to be installed in order to get the header files.
When you compiled your program, you didn't tell it where to look for header files or libraries. It found the header files in /usr/local/include/pcap - i.e., the 1.11.0-PRE-GIT header files - and found the library in the system library directory - i.e., the 1.9.1 library.
This causes problems, because the header files included a declaration of pcap_init(), so the compiler didn't print a warning about pcap_init() not being declared, but the library file doesn't include the pcap_init() function, so the linker printed an error about pcap_init() not being found in libpcap.
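You can see the mismatch directly by asking nm which dynamic symbols each library actually exports (the first path is typical for Ubuntu 20.04; adjust it for your system):

$ nm -D /usr/lib/x86_64-linux-gnu/libpcap.so.0.8 | grep pcap_init
$ nm -D /usr/local/lib/libpcap.so | grep pcap_init

The first command prints nothing for the 1.9.1 library, while the second shows pcap_init in the 1.11.0-PRE-GIT library you built.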
Removing the call to pcap_init() meant that the linker didn't try to find pcap_init(), and thus didn't fail.
If you hadn't built and installed libpcap 1.11.0-PRE-GIT, and you had installed the libpcap-dev package, your system would have the headers and library for 1.9.1, and compiling your program - without the call to pcap_init() - would find the headers for 1.9.1, so it would compile, and the library for 1.9.1, so it would link.
If, however, you want to build with 1.11.0-PRE-GIT, then you will need to tell the compiler where to find the libraries; you might also have to tell it where to find the headers.
If you don't have the libpcap-dev package installed, then you don't need to tell it where to find the headers; it will find the 1.11.0-PRE-GIT ones you installed. However, if the libpcap-dev package is installed, you may have to add the flag -I /usr/local/include to the compiler command, to make sure it finds the headers in /usr/local/include rather than in the system include directory.
To make sure it finds the 1.11.0-PRE-GIT version of the library, you will have to add the flag -L /usr/local/lib to the compiler command as well; you may also have to set LD_LIBRARY_PATH at run time so that the dynamic linker finds the 1.11.0-PRE-GIT version rather than the system one.
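For example, assuming the default /usr/local install prefix, the command from the question would become:

$ g++ csniff.cc -o csniff -I /usr/local/include -L /usr/local/lib -lpcap
$ LD_LIBRARY_PATH=/usr/local/lib ./csniff

The -I and -L flags only matter when building the program; the LD_LIBRARY_PATH setting only matters when running it.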
If you build with 1.11.0-PRE-GIT, you can use pcap_init(). You don't need to use pcap_init() unless you want to run the program on some UN*X and on Windows and you want all strings to be in UTF-8 (on Windows, strings would, by default, be treated as being in the "local code page"). To quote the DESCRIPTION section of the pcap_init() man page:
pcap_init() is used to initialize the Packet Capture library. opts specifies options for the library; currently, the options are:

PCAP_CHAR_ENC_LOCAL
    Treat all strings supplied as arguments, and return all strings to the caller, as being in the local character encoding.

PCAP_CHAR_ENC_UTF_8
    Treat all strings supplied as arguments, and return all strings to the caller, as being in UTF-8.

On UNIX-like systems, the local character encoding is assumed to be UTF-8, so no character encoding transformations are done.

On Windows, the local character encoding is the local ANSI code page.

If pcap_init() is not called, strings are treated as being in the local ANSI code page on Windows, pcap_lookupdev(3PCAP) will succeed if there is a device on which to capture, and pcap_create(3PCAP) makes an attempt to check whether the string passed as an argument is a UTF-16LE string - note that this attempt is unsafe, as it may run past the end of the string - to handle pcap_lookupdev() returning a UTF-16LE string.

Programs that don't call pcap_init() should, on Windows, call pcap_wsockinit() to initialize Winsock; this is not necessary if pcap_init() is called, as pcap_init() will initialize Winsock itself on Windows.
Related
Context: I'm using Qt 5.9.3 on Windows, building for MinGW 32-bit. The Qt part is a side issue though - the problem seems to be with MinGW. The version of MinGW is 5.3.0 (the mingw53_32 toolchain referenced below), supplied prebuilt as part of the Qt installation.
I'm building a library which talks to a USB device over HID. Everything compiles fine, but it fails at the link stage with
./..\..\object\debug\usb_hid_device.o: In function `_ZN8MyApp3USB5Win3213getDevicePathB5cxx11Ell':
<MYPATH>/../../source/win32/usb_hid_device.cpp:99: undefined reference to `HidD_GetAttributes(void*, _HIDD_ATTRIBUTES*)@8'
./..\..\object\debug\usb_hid_device.o: In function `_ZN8MyApp3USB5Win3214CHIDDeviceImplC2EllRNS_15LogPerComponentE':
<MYPATH>/../../source/win32/usb_hid_device.cpp:200: undefined reference to `HidD_FlushQueue(void*)@4'
The linker command is
g++ -shared -mthreads -Wl,-subsystem,windows -Wl,--out-implib,<MYPATH>\bin\debug\libusb_hid_comms.a -o <MYPATH>\bin\debug\usb_hid_comms.dll object_script.usb_hid_comms.Debug -lhid -lsetupapi -LC:\Qt\Qt5.9.3\5.9.3\mingw53_32\lib C:\Qt\Qt5.9.3\5.9.3\mingw53_32\lib\libQt5Guid.a C:\Qt\Qt5.9.3\5.9.3\mingw53_32\lib\libQt5Cored.a
If I omit -lhid I get the same errors. I also get the same errors if I remove -lhid and explicitly set the path and filename to libhid.a. If I deliberately mistype the path and filename, it comes up with an error, so I know the command-line is getting parsed correctly. But for whatever reason, MinGW appears to not be linking with one of its own library files.
I've also tried removing -lsetupapi and I get the linker errors I'd expect for the functions defined in there. Likewise the Qt library files. But it seems that specifically for libhid.a, MinGW can see the library file but just isn't going to link with it.
Has anyone else seen this? Or can anyone else with the same (or similar) version of MinGW confirm or deny that they can link with libhid.a? Or is there something obviously wrong with what I'm doing?
I've just found the answer. I'm posting an answer myself so that other people know in future, because I think this is still a valid question which people might want to know about.
The problem is the include file hidsdi.h. The majority of the header files which pull in Win32 API calls have extern "C" around the function declarations. However, this one doesn't! The result is that we end up with C++ name mangling for the linker symbols, instead of the C-style decoration (a leading "_", plus "@N" for stdcall) that the import library actually contains.
The solution is to use
extern "C"
{
#include <hidsdi.h>
}
and then everything works fine.
The version of hidsdi.h with the older version of MinGW (which I'm porting from) did have that protection around the function declarations. However it looks like it's gone in the newer version.
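As a sanity check, you can list the decorated names the import library actually provides with nm (the path is hypothetical; adjust it to your MinGW installation):

$ nm /path/to/mingw/lib/libhid.a | grep -i getattributes
00000000 T _HidD_GetAttributes@8

A C++-mangled reference can never match that C-style _Name@N symbol, which is exactly the mismatch the extern "C" wrapper fixes.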
I'm trying to get tests generated by the cxxtest framework working under a MinGW environment managed by msys2. The tool generates C++ files with absolute paths. However, gcc seems to be unable to resolve these absolute paths.
Here is a minimal example to demonstrate the problem:
// file1.h
#include <iostream>
inline void hallo() { std::cout << "Hallo\n"; }
// main.cpp
#include "/home/phil/example/file1.h"

int main()
{
    hallo();
    return 0;
}
The file exists (at least the msys2 shell resolves the path):
$ ls /home/phil/example/file1.h
/home/phil/example/file1.h
... but calling g++ results in this error:
$ g++ main.cpp
main.cpp:1:38: fatal error: /home/phil/example/file1.h: No such file or directory
#include "/home/phil/example/file1.h"
^
compilation terminated.
Same error with clang.
Under a full Linux environment, the example works. It also works if I replace the absolute path by a relative one (#include "file1.h").
So, I assume the problem lies in the layer over Windows that is responsible for resolving paths. I'm not sure whether I should report it as a bug to the msys2 project, or whether it is a known problem. If it is a known problem, are there any workarounds (like setting -I options)?
(If possible, I would like to avoid replacing the absolute paths, as they are in code generated by the cxxtest framework. Technically, running a postprocessing step on the generated files would be possible, but it seems like a hack in the long run.)
Since you are running compilers that use MinGW-w64 as their runtime environment, they don't recognize POSIX-style paths like that. I think they actually interpret the root directory "/" to be "C:\". Other than that, they would only recognize native Windows-style paths.
I recommend that you pass the argument -I/home/phil/example to your compiler from some program running in the msys-2.0.dll POSIX emulation runtime environment (e.g. /usr/bin/bash or /usr/bin/make). The msys-2.0.dll runtime will then convert that argument to use a native Windows path so the compiler can understand it, and statements like #include <file1.h> will work. Alternatively, you might try putting a Windows-style path in your source code, e.g. the path should start with C:\.
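A minimal sketch of that approach, assuming the file layout from the question and an MSYS2 bash shell:

$ g++ -I/home/phil/example main.cpp

with the #include in main.cpp changed to #include <file1.h>. Because bash is an msys-2.0.dll program, it rewrites /home/phil/example into the corresponding Windows path before g++ ever sees it.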
Note however that having absolute paths in source code or build scripts is a bad idea since it makes it harder to build the code on a different computer. You could consider using environment variables or relative paths.
Try using the MinGW compiler that Cygwin provides as a package. (In other words, forget the MSYS environment; work under Cygwin, but build the code as before, in the MinGW style.)
Then you should be able to have include references /home/phil; it will just resolve to C:\Cygwin\home\phil or wherever your Cygwin root is.
Actually, it might be possible under MSYS also (which, after all, is just the descendant of an old fork of Cygwin). You just have to figure out what /home/phil is referring to, create that tree, and work under there.
I want to add new method in OpenCV library. I made my_funct.cpp whose code is as simple as:
#include "precomp.hpp"
#include <stdio.h>
void cv::my_funct(){
printf("%s\n","Hello world!");
}
and I added the prototype CV_EXPORTS_W void my_funct(); to the files C:\opencv\build\include\opencv2\imgproc\imgproc.hpp and C:\opencv\sources\modules\imgproc\include\opencv2\imgproc\imgproc.hpp. Then I used CMake to build new binaries for the whole library, but when I make a new project in which I use my_funct() I get an error:
The procedure entry point _ZN2cv8my_functEv could not be located in
the dynamic link library path_to_this_project\project.exe.
Other opencv functions work just fine. I'm using mingw32 to compile library and the version of OpenCV is 2.4.9. Can you tell me what am I doing wrong?
This looks like a MinGW run-time error. Going by the assumption that you didn't get any compiler or linker errors while building project.exe, your executable most likely doesn't find the .dll that matches your .dll.a import library (which must have included the my_funct() definition).
During the development phase - leaving aside the install() scripting - I would recommend adding a post-build step that uses add_custom_command() and generator expressions to copy the right DLL next to your project.exe:
add_custom_command(
    TARGET project
    POST_BUILD
    COMMAND ${CMAKE_COMMAND} -E copy
            "<... path to matching DLL ...>"
            "$<TARGET_FILE_DIR:project>"
)
Certainly you could also let CMake find the matching DLL, but before I could go into details there I would need to see your project.exe CMake script.
If you are in the process of extending OpenCV code, it might also be a good idea to use ExternalProject_Add() to include OpenCV in your project, as sketched below.
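A rough sketch of that approach (the Git tag and install prefix are assumptions, not taken from your setup):

include(ExternalProject)
ExternalProject_Add(opencv_ext
    GIT_REPOSITORY https://github.com/opencv/opencv.git
    GIT_TAG        2.4.9
    CMAKE_ARGS     -DCMAKE_INSTALL_PREFIX=${CMAKE_BINARY_DIR}/opencv-install
)

Your own targets can then depend on opencv_ext, so the freshly rebuilt OpenCV - including my_funct() - is always the one your project links and runs against.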
References
MinGW-w64 - for 32 and 64 bit Windows - Wiki: Procedure entry point OpenProcessToken? could not be located in the dynamic link library kernel32.dll
MinGW "The procedure entry point libiconv could not be located ..."
Getting started with OpenCV 2.4 and MinGW on Windows 7
There are many references to using i2c_smbus_ functions when developing embedded Linux software to communicate on the I2C bus. But when i2c_smbus functions such as i2c_smbus_read_word_data are referenced in a software project for an ARM8 processor, errors such as 'i2c_smbus_read_word_data' was not declared in this scope are generated at compile time.
Investigation of the following header files indicates the absence of most of the i2c_smbus function definitions:
/usr/arm-linux-gnueabi/include/linux/i2c.h
/usr/arm-linux-gnueabi/include/linux/i2c-dev.h
Also, the i2c.h file in the references below has all the i2c_smbus functions defined.
How can this problem be resolved?
Research references
Using I2C from userspace in Linux
I2C Communication from Linux Userspace – Part II
I2C dev interface
Because you are using a wrong header file for your application.
If you see an extern on the function i2c_smbus_read_word_data() in your header, it's a header file for your kernel, not for your application. The Linux kernel has i2c_smbus_read_word_data() and other i2c smbus functions for its internal use, but they are a) not system calls, and b) not accessible from your application.
Instead, get i2c-tools from Linux Kernel Wiki and install it. If you are using Debian, just
sudo apt-get install libi2c-dev
and use i2c_smbus_read_word_data() or any other interfaces they offer.
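A minimal sketch of how those interfaces are used from userspace (the bus device /dev/i2c-1, slave address 0x48, and register 0x00 are hypothetical; substitute the ones for your hardware):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>
#include <i2c/smbus.h>

int main(void)
{
    int file = open("/dev/i2c-1", O_RDWR);           /* open the I2C bus device */
    if (file < 0) { perror("open"); return 1; }

    if (ioctl(file, I2C_SLAVE, 0x48) < 0) {          /* select the slave address */
        perror("ioctl"); close(file); return 1;
    }

    __s32 word = i2c_smbus_read_word_data(file, 0x00);  /* read one 16-bit register */
    if (word < 0)
        perror("i2c_smbus_read_word_data");
    else
        printf("register 0x00 = 0x%04x\n", (unsigned)word);

    close(file);
    return 0;
}

With the v4.0+ library layout described below, this builds with gcc main.c -li2c.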
Version Notes
i2c-dev, until version 3.x, used to be a header-only package, meaning that there was no library to link to. All functions were inline functions defined using ioctl(). For example:
static inline __s32 i2c_smbus_access(int file, char read_write, __u8 command,
                                     int size, union i2c_smbus_data *data)
{
    struct i2c_smbus_ioctl_data args;

    args.read_write = read_write;
    args.command = command;
    args.size = size;
    args.data = data;
    return ioctl(file, I2C_SMBUS, &args);
}

/* ... */

static inline __s32 i2c_smbus_read_word_data(int file, __u8 command)
{
    union i2c_smbus_data data;
    if (i2c_smbus_access(file, I2C_SMBUS_READ, command,
                         I2C_SMBUS_WORD_DATA, &data))
        return -1;
    else
        return 0x0FFFF & data.word;
}
But since v4.0, it has been a standard shared library, shipping libi2c.so.0 and i2c/smbus.h. You have to include the header file in your source code:
#include <i2c/smbus.h>
and link against libi2c.so.0 with -li2c:
gcc -o a.out main.o -li2c
I ran into this today. The i2c_smbus_* functions are defined in:
/usr/include/linux/i2c-dev.h
...but when I tried to cross-compile for ARM on an older version of Ubuntu, I was running into errors such as:
i2c_smbus_read_block_data was not declared in this scope
Turns out the functions are not defined in the equivalent ARM-specific location:
/usr/arm-linux-gnueabi/include/linux/i2c-dev.h
When cross-compiling, this second, older header file is the one used. I had to locally re-declare a few of the inline i2c_smbus_... functions to get around the problem.
Based on https://unix.stackexchange.com/questions/621854/usr-include-linux-i2c-dev-h-does-not-contain-i2c-smbus-read-word-data-functio, I have found this fixes the function not defined errors:
#include <i2c/smbus.h>
I am currently working with legacy code that references various i2c_smbus functions. It has:
#include <linux/i2c-dev-user.h>
and it fails to compile. Surely, this include used to work, but it seems the lib's header files changed at some point. I did refresh/reinstall libi2c-dev recently.
Note that I added the above include. I can't remove the original include. It is still needed.
FYI: I have not tried cross-compiling yet.
From the i2c Linux kernel documentation:
Please note that there are two files named "i2c-dev.h" out there, one is distributed with the Linux kernel and is meant to be included from kernel driver code, the other one is distributed with i2c-tools and is meant to be included from user-space programs. You obviously want the second one here.
So you need to include the i2c-dev.h from i2c-tools not from the Linux kernel.
I'm trying to compile a UTF-16BE C++ source file in g++ with -finput-charset compiler option but I'm always getting a bunch of errors. More details follow.
My environment (in CentOS Linux):
g++: 4.1.2
iconv: 2.5
Linux language(in Terminal): LANG="en_US.UTF-8"
My sample source file (stored in UTF-16BE encoding):
// main.cpp:
#include <iostream>
int main()
{
std::cout << "Hello, UTF-16" << std::endl;
return 0;
}
My steps:
I read the manual of g++ about the -finput-charset option. The g++ manual says:
-finput-charset=charset
Set the input character set, used for translation from the character set of the input file to the source character set used by
GCC. If the locale does not specify, or GCC cannot get this
information from the locale, the default is UTF-8. This can be
overridden by either the locale or this command line option.
Currently the command line option takes precedence if there’s a
conflict. charset can be any encoding supported by the system’s
"iconv" library routine.
Thus I entered the command as follows:
g++ -finput-charset=UTF-16BE main.cpp
and I got these errors:
In file included from main.cpp:1:
/usr/lib/gcc/i386-redhat-linux/4.1.2/../../../../include/c++/4.1.2/iostream:1:
error: stray ‘\342’ in program
/usr/lib/gcc/i386-redhat-linux/4.1.2/../../../../include/c++/4.1.2/iostream:1:
error: stray ‘\274’ in program
...(repeatedly, A LOT, around 4000+)...
/usr/lib/gcc/i386-redhat-linux/4.1.2/../../../../include/c++/4.1.2/iostream:1:
error: stray ‘\257’ in program
main.cpp: In function ‘int main()’:
main.cpp:5: error: ‘cout’ is not a member of ‘std’
main.cpp:5: error: ‘endl’ is not a member of ‘std’
The manual text suggests that the charset can be any encoding supported by the system's iconv library routine, so I guessed the compilation errors might be caused by my iconv library. I then tested iconv:
iconv --from-code=UTF-16BE --to-code=UTF-8 --output=main_utf8.cpp main.cpp
A "main_utf8.cpp" file is generated as expected. I then tried to compile it:
g++ -finput-charset=UTF-8 main_utf8.cpp
Note that I specified the input charset explicitly to see if I did anything wrong, but this time an a.out was generated without any errors. When I ran it, it produced the correct output.
Finally...
I couldn't figure out what I did wrong. I searched the web trying to find examples for this compiler option, but couldn't find any.
Please advise! Thanks!
Further edits:
Thanks, guys! Your replies are quick! Some updates:
When I said "UTF-16" I meant "UTF-16 + BOM". In fact I used UTF-16BE. I have updated the text above.
Some answers say the errors are caused by the non-UTF-16 header files. Here are my thoughts if this is the case: we always include some standard header files when writing a C/C++ project, right? Such as stdio.h or iostream. If the g++ compiler only deals with the encoding of the source files created by us, but never with the source files in the standard library, then what does this -finput-charset option exist for??
Final edit:
At last, my solution is like this:
At the beginning, I changed the encoding of my source files to GB2312, as "Mr Lister" said below. This worked fine for a while, but later I found it not suitable for my situation, because most of the other parts of the system still use UTF-8 for communication and interfaces, so I had to convert the encoding in many places... This was not only extra work for me; it could also cause some performance decrease in my program.
Later I tried to convert all my source files to UTF-8 + BOM. In this way, Visual Studio in Windows could compile them happily but GCC in Linux would complain. I then wrote a shell script to remove the BOM, and before I want to compile my code with GCC, I run this script first.
Luckily, I don't have to build the code in Linux manually because TeamCity the continuous integration tool is used in my project to generate the build automatically. I could change the build steps in TeamCity to help me run this script before the daily build starts.
With this UTF-8 + BOM + script method, I decided not to edit my source code in Linux. If I wanted to, I would have to make sure my code could build successfully before committing it, which means running the BOM-removal script before building - and then SVN would report EVERY file as modified (BOM removed), making it very easy to mistakenly commit a wrong file. To solve this problem, I wrote another shell script to add the BOM back to the source files. I still don't edit my code very often in Linux, but when I really need to, I no longer have to face a terribly long change list in the commit dialog.
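For reference, a minimal sketch of what such scripts can do with GNU sed (the file name is hypothetical; the real scripts would loop over the source tree):

$ sed -i '1s/^\xEF\xBB\xBF//' main.cpp    # strip a leading UTF-8 BOM, if present
$ sed -i '1s/^/\xEF\xBB\xBF/' main.cpp    # prepend the BOM again (assumes it is absent)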
Encoding Blues
You cannot use UTF-16 for source code files, because the header you are including, <iostream>, is not UTF-16-encoded. As #include includes files verbatim, this means that you suddenly have a UTF-16-encoded file with a large chunk (approximately 4k, apparently) of invalid data.
There is almost no good reason to ever use UTF-16 for anything, so this is just as well.
Edit: Regarding problems with encoding support: The OSes themselves are not responsible for providing encoding support, this comes down to the compilers used.
g++ on Windows supports absolutely all of the same encodings as g++ on Linux, because it's the same program, unless whatever version of g++ you are using on Windows relies on a deeply broken iconv library.
Inspect your toolchain and ensure that all your tools are in working order.
As an alternative; don't use Chinese in the source files, but write them in English, using English-language literals, or simple TOKEN_STYLE_PLACEHOLDERs, using l10n and i18n to replace these in the running executable.
Threedit: -finput-charset is almost certainly a holdover from the days of codepages and other nonsense of the kind; an ISO-8859-n file will almost always be compatible with the UTF-8 standard headers. However, see the reedit below.
Reedit: For next time; remember a simple mantra: "N'DUUH!"; "Never Don't Use UTF-8!"
I18N
A common solution to this kind of problem is to remove the problem entirely, by way of, for instance, gettext.
When using gettext, you usually end up with a function loc(char *) that abstracts away most of the translation tool specific code. So, instead of
#include <iostream>
int main () {
std::cout << "瓜田李下" << std::endl;
}
you would have
#include <iostream>
#include "translation.h"
int main () {
std::cout << loc("DEEPER_MEANING") << std::endl;
}
and, in zh.po:
msgid "DEEPER_MEANING"
msgstr "瓜田李下"
Of course, you could also then have a en.po:
msgid "DEEPER_MEANING"
msgstr "Still waters run deep"
This can be expanded upon, and the gettext package has tools for expansion of strings with variables and such, or you could use printf, to account for different grammars.
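For completeness, the usual gettext round trip with a custom keyword looks something like this (file and domain names are hypothetical):

$ xgettext --keyword=loc -o messages.pot main.cpp   # extract loc("...") strings
$ msginit -i messages.pot -o zh.po -l zh_CN         # create a catalog to translate
$ msgfmt zh.po -o zh/LC_MESSAGES/myapp.mo           # compile it for use at run time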
The Third Option
Instead of having to deal with multiple compilers with different requirements for file encodings, file endings, byte order marks, and other problems of the kind; it is possible to cross-compile using MinGW or similar tools.
This option requires some setup, but may very well reduce future overhead and headaches.
The error message says the problem is in the include files, so I presume what happens is that the include files are normal UTF-8, but the compiler wants to treat them as UTF-16 because of the compiler switch.
So I'm afraid the solution is to always convert the source to UTF-8 first; perhaps in the makefile. Or to find a solution that doesn't contain include files in other encodings...
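For instance, a minimal makefile rule along those lines (names hypothetical), reusing the iconv invocation from the question:

main_utf8.cpp: main.cpp
	iconv --from-code=UTF-16BE --to-code=UTF-8 --output=$@ $<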
Edit:
Maybe a GB encoding would work, if and only if none of the system source files contain any non-ASCII characters. Then you could tell the compiler they were GB encoded without problem.
This does not work because the compiler will also try to read the header files as UTF-16, which they are not.
UTF-16 is not an encoding for bytes. It's an encoding where your basic storage unit is 16 bits large.
When you want to store UTF-16 in a byte sequence you have to choose between UTF-16BE and UTF-16LE.
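For example, the character 'A' (U+0041) is stored as the bytes 00 41 in UTF-16BE and as 41 00 in UTF-16LE; a byte order mark (U+FEFF) at the start of the stream is the usual way to tell the two apart.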