Java binding for C library (GPIO) [closed] - java-native-interface

I need to write a Java binding for a GPIO library.
I decided to go with JNI for this purpose. All of the references I have found use examples with standard C library functions such as printf, or a scientific library with a method such as multiply.
The library I need to bind has macros and structs with types such as __u32, which I cannot find mapped to any Java type.
So far I have watched some YouTube videos, looked at the JNI Programmer's Guide that was recommended (but it is very old), and looked at IBM's documentation on JNI.
https://docs.oracle.com/javase/8/docs/technotes/guides/jni/spec/types.html has not been very helpful.
https://github.com/java-native-access/jna/blob/master/www/Mappings.md has mappings, but the library uses types such as __u32 for which there is no corresponding Java type listed.
Even the official documentation at https://docs.oracle.com/javase/8/docs/technotes/guides/jni/spec/types.html has no mapping for __u32 or unsigned 32/64-bit integers.
Are there any tools that can help me?
Where can I find a good reference for this?
Or am I approaching this completely wrong?
Should I go for JNA or some other option?

Java is not a language I would use for programming hardware; I recently took a class in embedded programming where the instructor specifically said not to use Java.
This article does talk about how to convert unsigned integers to integers in Java.
I'm not sure why the example you have is __u32; in C it is more appropriate to use stdint.h, which provides uint32_t.
Unsigned integer values are used in GPIO to toggle boolean values that are packed into addresses in the hardware. An unsigned char (8 bits), unsigned short (16 bits), or unsigned int (generally 32 bits for this use) contains control values for digital hardware; to reduce the address space necessary (memory on a device), these bits are grouped together by function and packed into a single unsigned data type.
You can find examples of this in device data sheets.
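To make the mapping concrete, here is a minimal JNI sketch of the native (C++) side. The Java class and method names (com.example.Gpio, readRegister) are made up for illustration, and the call into the actual GPIO library is left as a placeholder; the point is only how a __u32 travels across the boundary.

// Hypothetical Java declaration assumed here:
//   package com.example;
//   public class Gpio { static native long readRegister(int pin); }
#include <jni.h>
#include <cstdint>  // uint32_t is the portable spelling of the kernel's __u32

extern "C" JNIEXPORT jlong JNICALL
Java_com_example_Gpio_readRegister(JNIEnv*, jclass, jint pin) {
    std::uint32_t value = 0;  // __u32 is simply an unsigned 32-bit integer
    // ... call the real GPIO library here to fill 'value' for 'pin' ...
    (void)pin;
    // Returning jlong (64-bit signed) preserves the full unsigned 32-bit range;
    // returning jint would keep the same 32 bits, but Java would read them as signed.
    return static_cast<jlong>(value);
}

If you would rather not write native glue at all, JNA lets you declare the same function on a Java interface and treat a __u32 parameter or return value as an int (same bits) or a long (same value).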

Related

Is it true that a C++ compiler is written multiple times for different platforms (Linux, Windows, etc.)? [closed]

As far as I know, C++ is an ISO standard, so the committee provides a specification and a list of features to be implemented for each coming release.
Is it the case that every platform owner will go and write their own implementation for those standards?
Or is there any core compiler code which is implemented once and then every other platform will write wrappers around it?
Or do they write their own C++ compiler from scratch?
Yes and no. A compiler basically consists of two parts: a parser (a.k.a. front-end) and a code generator (a.k.a. back-end). The parser is responsible for recognizing C++ grammar. The code generator constructs machine code for the target platform (hardware type and operating system) based on information it gets from the parser. While the parser is platform-independent, the code generator is tied to the target platform. So, in order to support a new platform, one can reuse the existing parser but has to write a new code generator.
Basically, what the ISO standard sets out is a set of rules that compiler vendors should follow.
But these are standards for the implementation, not the actual implementation.
Every major hardware vendor knows how to use its own hardware best.
This includes aspects like:
1) ABI support - this includes things like binary formats, system calls and other interfaces
2) Shared libraries.
3) Architecture support.
So Microsoft, IBM, Intel, Oracle, and HP all have their own C++ compilers, which create optimal code on their latest hardware.
The standard itself, however, is a document that has to be purchased:
https://isocpp.org/std/the-standard
The table at the link below presents compiler support for new C++ features. These include C++11, C++14, C++17, and later accepted revisions to the standard, as well as various technical specifications.
http://en.cppreference.com/w/cpp/compiler_support
Likely they were originally created platform-specific; put simply, a Windows exe or com file won't run on Linux. However, later compiler versions can be compiled (bootstrapped) with older ones.

Is C++ suited for tiny embedded targets? [closed]

We are currently redesigning our embedded software and are going from 8-bit to 32-bit Cortex-M microcontrollers. Memory is pretty limited (128 kByte flash and 32 kByte RAM).
In another thread an embedded software library (www.redblocks.de) was recommended. It seems to cover my needs very well, but requires the use of C++.
Does anybody have experience with C++ on embedded platforms like ours? I am wondering what kind of overhead I am dealing with, compared to C.
Depending on the C++ features you are using, there is little to no overhead compared to C.
Here are some features compared:
Using classes without virtual methods results in the same binary code as C functions working on a data structure that is passed along as a pointer (see the sketch below).
When classes with virtual methods are used, a vptr is added to the object's data section and a vtable is introduced in the text memory segment. Similar functionality can be implemented in C with function pointers (which occupy memory as well). As soon as you have more than one virtual method in a class, you typically end up with more efficient binary code when using C++ instead of manually introducing multiple function pointers per object in C.
The efficiency of exception handling differs from compiler to compiler.
RTTI adds overhead and should not be used on tiny embedded targets.
Non-deterministic dynamic memory usage (malloc / free in C and new / delete in C++) should be avoided with both programming languages on platforms without virtual memory management.
Templates have much in common with C preprocessor macros, as they are evaluated at compile time and are a kind of compile-time source code generation. Thus, they do not add any runtime overhead. However, using them carelessly will result in bloated code. If used in the right places, they can even help reduce runtime overhead.
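As a rough illustration of the first point above, here is a sketch (the names Gpio, gpio_set, etc. are made up) of a non-virtual C++ class next to the equivalent C-style struct plus free functions; a compiler typically emits essentially the same machine code for both.

#include <cstdint>

// C++ class without virtual methods: 'this' is passed implicitly,
// much like the explicit pointer in the C-style version below.
class Gpio {
public:
    explicit Gpio(volatile std::uint32_t* reg) : reg_(reg) {}
    void set(unsigned bit)   { *reg_ |=  (1u << bit); }
    void clear(unsigned bit) { *reg_ &= ~(1u << bit); }
private:
    volatile std::uint32_t* reg_;
};

// Equivalent C-style code: a plain struct and free functions taking a pointer.
struct gpio_c { volatile std::uint32_t* reg; };
inline void gpio_set(gpio_c* g, unsigned bit)   { *g->reg |=  (1u << bit); }
inline void gpio_clear(gpio_c* g, unsigned bit) { *g->reg &= ~(1u << bit); }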
I think the most challenging issue is the developers’ knowledge. C++, especially when using templates a lot, is a much more sophisticated language than C. So you need a bunch of pretty good developers.
However, if you want to go for a clean and reusable object oriented design, C++ is certainly the better choice than C.
I am not an embedded developer myself, but I have several colleagues using C++ on the kind of microcontrollers you are targeting.
The language by itself does not add a lot of overhead, but the use of the standard library (containers, algorithms, ...) is not recommended if you are limited in Flash/RAM.
If performance is an issue, you might also want to avoid RTTI and exceptions.
More information in this paper or on this page.
The book Effective C++ in an Embedded Environment from Scott Meyers is also a very good source of information.

Why is there a networking library proposal for C++14/17? [closed]

Even though TCP/UDP/IP are commonly used protocols, I do not understand why people want them to be part of the ISO C++ standard. These have nothing to do with the core of the language. Data structures are universally required tools, hence the STL makes sense, but these protocols are too specific IMO.
There has been a long-standing sentiment that the C++ library's tiny focus area is something bad that's holding the language back. Most "modern" languages come with large framework libraries which include networking, graphics and JSON. By contrast, if you want to do any of these in C++, you a) don't get anything by default, and b) are overwhelmed with a choice of third-party libraries which you are usually unable to appraise properly and select from.
This is how that side of the opinion goes.
Of course there are other people who think that that's just the way it should be. Nonetheless, standardization is hard work, and whereas languages like Java and C# and Go have large companies behind them that can put energy into developing a huge library, C++ doesn't have that kind of manpower, and most people who spend time on C++ standardization are more interested in core aspects of programming: data structures, concurrency, language evolution (concepts, ranges, modules...).
So it isn't so much that people are generally opposed to a larger library, but it's not a priority for many. But if good ideas come around, they have a good chance for being considered. And large library components like networking won't be going into the standard library anyway, but rather into a free-standing Technical Specification, which is a way to see whether the idea is useful, popular and correct. Only if a TS gets widely used and receives lots of positive feedback will there be a possible future effort to include it into the IS.
(You may have noticed similar efforts to create Technical Specifications for file systems and for graphics.)
C++11 includes threading in the standard. Now programmers need not write pthreads code on Linux and Windows threads code on Windows separately. The same could happen if a networking library gets standardized.
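As a small sketch of what that portability looks like in practice, the following C++11 program builds unchanged on Linux and Windows without touching pthreads or the Win32 thread API.

#include <iostream>
#include <thread>

int main() {
    // One portable API instead of pthread_create on Linux and CreateThread on Windows.
    std::thread worker([] { std::cout << "hello from a portable thread\n"; });
    worker.join();
    return 0;
}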

Is there any middleware/library that will convert your binary or text data from 64-bit to 32-bit? [closed]

We use C++ in both the front end (Windows 32-bit) and the back end (Linux 64-bit). They can pass either binary or text data to communicate. Is there any middleware/library that will convert this data from 64-bit to 32-bit? Or is the only option to change our code?
There's no such thing as "64-bit text data". A text file just contains characters in some encoding, and currently there's no 64-bit encoding available. The longest fixed-width encoding is UTF-32, which is 32 bits long. For variable-length encodings, UTF-8 can be at most 6 bytes per code point (edit: it has been officially limited to 4 bytes because the Unicode range was restricted to U+10FFFF), and other encodings have different limits, but none is up to 8 bytes long. If there are differences, then you need to convert the encoding, not 64-bit to 32-bit.
For more information read The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)
Binary data is also just a series of bits, not necessarily an array of constant-width 64-bit or 32-bit numbers, although in modern computer architectures the size is a multiple of bytes. You need to read the data exactly the way it was written. If you write a 64-bit value, read it as a 64-bit value, regardless of whether the program is 16-, 32- or 64-bit. Besides, how can you ensure that a number written as 64 bits does not overflow when truncated to 32 bits?
If you're using MSVC then the type sizes are the same in both 32- and 64-bit mode except for pointers, so no code change is required if you stick to the standard. On most other 64-bit platforms you may need to take care if you use long, since it's wider than in a 32-bit program.
It's better to use C++11's fixed-width types like intN_t from cstdint in cross-platform code. Before C++11 and C99, many libraries and compilers also defined their own fixed-width integer types for compatibility, for example qint32 in Qt and __int32 in MSVC.
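A minimal sketch of that advice, assuming a simple hand-rolled record format (the Record struct and file name are made up): the cstdint types keep every field the same size in 32-bit and 64-bit builds, and writing field by field avoids struct padding differences.

#include <cstdint>
#include <cstdio>

struct Record {
    std::int64_t  id;     // always 64 bits, unlike 'long'
    std::uint32_t flags;  // always 32 bits
};

int main() {
    Record r{42, 0x1u};
    if (std::FILE* f = std::fopen("record.bin", "wb")) {
        // Field-by-field I/O; byte order also has to match, which it does here
        // since both 32-bit Windows and x86-64 Linux are little-endian.
        std::fwrite(&r.id,    sizeof r.id,    1, f);
        std::fwrite(&r.flags, sizeof r.flags, 1, f);
        std::fclose(f);
    }
    return 0;
}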
Converting 32-bit Application Into 64-bit Application in C
How can a 32 bit client communicate with a 64 bit server if long type is passed?

Is there a library for creating fixed length primitive types? [closed]

I am currently writing code in VS2012 which I want to compile on Linux. I want to serialize and save progress to a file, and I want that save file to be accessible on both 32-bit and 64-bit architectures, and on both Windows and Linux.
I do not want to use any serialization libraries.
To achieve this I want to have functions which can convert and retrieve types, namely float, double, int32 and int64 (signed and unsigned), to and from fixed-length, portable primitives for storage and retrieval in a binary file. My understanding is that the bool and char types are specified in the standard and are therefore already portable.
Performance is not critical but size is, so ASCII is not a viable solution here. I do not mind losing some precision if, for example, a platform uses a bit length larger than the fixed length I have specified, or vice versa.
As I am a newb, too much talk of endianness, IEEE, etc. will confuse and irritate me.
I am particularly interested in a library that will do these conversions for me out of the box, but will consider rolling my own if that is the only way of achieving this.
FYI, I don't want serialization libraries because Boost doesn't work with smart pointers, cereal doesn't work with VS2012, and that Microsoft one doesn't work on Linux. If I'm going to have to doodle around to get these things to work, I figure I might as well just do it myself.
Any ideas?
Edit: as I have now been schooled on the C++11 compatibility of the Boost serialization library, I will gladly settle for that solution.
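For anyone who does end up rolling their own, here is a rough sketch (the function names are made up) of converting a float to and from exactly four bytes in a fixed byte order; storing bytes in a defined order is what sidesteps the endianness question, and the same pattern extends to double, int32 and int64.

#include <cstdint>
#include <cstring>

static_assert(sizeof(float) == sizeof(std::uint32_t), "float must be 32-bit");

// Encode a float into 4 bytes, least-significant byte first.
inline void encode_float(float value, unsigned char out[4]) {
    std::uint32_t bits;
    std::memcpy(&bits, &value, sizeof bits);  // reinterpret the bits, no conversion
    for (int i = 0; i < 4; ++i)
        out[i] = static_cast<unsigned char>((bits >> (8 * i)) & 0xFFu);
}

// Decode 4 bytes (least-significant first) back into a float.
inline float decode_float(const unsigned char in[4]) {
    std::uint32_t bits = 0;
    for (int i = 0; i < 4; ++i)
        bits |= static_cast<std::uint32_t>(in[i]) << (8 * i);
    float value;
    std::memcpy(&value, &bits, sizeof value);
    return value;
}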
@ausairman Boost Serialization very much does work with smart pointers:
boost serialize and std::shared_ptr
#include <boost/serialization/shared_ptr.hpp>
The samples (http://www.boost.org/doc/libs/1_49_0/libs/serialization/doc/tutorial.html#examples) show this. Also, the example makes it look like aliasing and cycles are taken care of by default.
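A minimal usage sketch, assuming a reasonably recent Boost (older versions only handled boost::shared_ptr, not std::shared_ptr); the Node struct and file name are made up:

#include <fstream>
#include <memory>
#include <boost/archive/text_oarchive.hpp>
#include <boost/serialization/shared_ptr.hpp>

struct Node {
    int value = 0;
    template <class Archive>
    void serialize(Archive& ar, const unsigned /*version*/) { ar & value; }
};

int main() {
    const std::shared_ptr<Node> n = std::make_shared<Node>();
    n->value = 7;
    std::ofstream out("node.txt");
    boost::archive::text_oarchive oa(out);
    oa << n;  // the shared_ptr (and the Node it points to) is serialized
    return 0;
}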
Since you mention straight up that endianness and other portability concerns confuse you, I very heartily suggest you do not write this yourself (unless it is purely with the goal of learning).
If you are interested in something that is not platform-dependent and will store your values in memory in the same format that you wish to serialize, consider Cap'n Proto, which is written by the same author as Google's Protocol Buffers 2.0.
I am not sure whether smart pointers are used, and you might have to rewrite your objects to be backed by the Cap'n Proto structs instead of primitive values, but this seems closest to what you want.
http://kentonv.github.io/capnproto/