Closed 10 years ago.
I'm looking to use Qt for a non-UI application. It has the potential to run on an appliance, but will start out on the desktop. The UI part (I know, I said non-UI) would be a web server with HTML(5)/AJAX.
I would really only use Qt for basic cross platform stuff like threads, synchronization, serialization, resources (strings, maybe images), internationalization, etc.
Which would be better for something like this, Qt or Boost and creating the cross platform layer myself?
Qt feels a little heavy for what I need, but I want to hear what experiences others have.
Yes, in my opinion it is perfectly OK. I wouldn't say Qt is heavy compared to Java, for example, which is extremely widely used for such tasks. Qt is very powerful, clean, easy and fast. I use it a lot, and I don't know any major drawbacks with it.
Yes, using QtCore (and other non-GUI modules) should do just what you need. As for choosing between Boost and QtCore: both do a good job, and their feature sets sometimes overlap, but not always.
Qt(Core) mainly offers functionality. Boost mainly offers tools to achieve functionality. For example, you have templates and functors in Boost, not in Qt. OTOH, if you need message pumps and the like, you will only find those in Qt.
It really depends on what you are trying to achieve.
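For illustration, here is roughly what a QtCore-only (non-GUI) program looks like. This is a minimal sketch, assuming Qt 5 and a build against just the core module (QT -= gui in qmake); none of it comes from the question itself:

```cpp
// Minimal non-GUI Qt application: event loop ("message pump") and a worker
// thread, using QtCore only.
#include <QCoreApplication>
#include <QObject>
#include <QThread>
#include <QDebug>

class Worker : public QThread {
protected:
    void run() override {
        // Background work runs in its own thread; QtCore provides the
        // platform-specific thread handling underneath.
        qDebug() << "working in thread" << QThread::currentThreadId();
    }
};

int main(int argc, char *argv[]) {
    QCoreApplication app(argc, argv);          // event loop, no GUI

    Worker worker;
    QObject::connect(&worker, &QThread::finished,
                     &app, [&app] { app.quit(); });
    worker.start();

    return app.exec();                         // runs until quit() is called
}
```

The point is that the event loop and the threading primitives come from QtCore alone, with no widget dependency.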
What you're proposing is perfectly reasonable.
You want to use a number of features (threading and so on, as you mention) across platforms.
Essentially you have a number of options, as follows:
Option 1 (Bad): Write your own cross-platform wrappers. You'd be reinventing the wheel, and you probably won't be able to tackle as many cross-platform cases and features as Qt already does. This option also means that whoever inherits your code will have to deal with your custom library instead of a well-supported and well-documented easily accessible library.
Option 2 (Not recommended): Use individual cross-platform solutions for every feature you want, like threading, networking, etc. This means that you (and your successor) will have to maintain compatibility with a large number of libraries in the future.
Option 3 (Recommended): Use a single, well documented, easily accessible library to meet all your needs. Qt fits the bill.
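As a rough sketch of what the threading piece looks like with option 3 (assuming Qt 5 with the QtConcurrent module, i.e. QT += concurrent in qmake; the function name here is made up):

```cpp
#include <QtConcurrent/QtConcurrent>
#include <QFuture>
#include <QDebug>

// Stand-in for whatever work needs to run off the main thread.
int expensive(int x) { return x * x; }

int main() {
    QFuture<int> f = QtConcurrent::run(expensive, 21);  // runs on a pool thread
    f.waitForFinished();
    qDebug() << f.result();                              // 441
    return 0;
}
```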
Closed 9 years ago.
I am building a C# project. This project is going to use NVIDIA's Tesla through CUDA. The native CUDA C implementation is not exposed directly to C#, and in my opinion the available C# wrappers (like Brahma, CUDAfy, Linq to GPU) are not mature enough for production.
I decided to go ahead and build my math logic in a C++ component that will access CUDA, which is the officially supported way. C++/CLI is not an option because I am using the Intel C++ Compiler for performance, and it doesn't support CLR extensions.
My most important criterion is performance, so I want to minimise marshalling and copying of arrays between C++ (where my business logic lives) and .NET (the rest of my application).
I am aware that this question has been asked before, but in most of those cases the C++ library already exists, and in others C++/CLI is an option; neither applies here.
Given that I am going to write the C++ library from scratch, I am in a position to decide the best way to expose it to C#. Do you have any recommendations or best practices I should follow to get the easiest and highest-performing integration between C++ and .NET? Note that what I will be exchanging is mostly large arrays.
Edit: clarifying that I am building my business logic (math) in C++ and not an infrastructure library to facilitate access to GPU.
While it is certainly possible to outperform the already existing libraries that you deemed not mature enough, the very fact that you are asking this question here should make you think twice about deciding to roll your own library/implementation!
Considerations beyond raw performance, such as stability and reliability, should be your primary concern if this is going to production. Generally, unless you know what you're doing, duplicating the effort of the community or of other development teams can be a slippery slope.
I know this answer doesn't really address your question, but as formulated, your question is, in my opinion, overly broad and there is no simple answer. Initially I was going to post this as a comment but decided it was too long to fit the format.
So, in closing, I recommend you try out the existing libraries, and if you find them lacking from a performance standpoint, start asking specific questions.
UPDATE
If you're going to implement most of the logic in C++ and you're expecting to just be transferring some results back to your managed code in the form of arrays then there isn't much that you need to do. In general the automatic marshalling of arrays is as efficient as you're going to get.
The one thing I would recommend, though, is to read as much as possible about marshalling and to use a performance profiler before getting "creative" in an attempt to improve things.
And here's one last idea that might be interesting but again, you should profile before attempting to use this: you might try to use a Memory Mapped File as the backing store for your data and open the file from both ends. Ultimately this may or may not be useful so definitely profile before you buy ;)
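To make the marshalling point concrete, here is a hedged sketch of the kind of flat, C-style export you might write on the C++ side. The names (mathlib, transform) are purely illustrative, and the CUDA-backed computation is replaced by a placeholder; the idea is that with pointer-plus-length parameters, the C# P/Invoke call can pin a managed double[] for the duration of the call instead of copying it element by element:

```cpp
// Hypothetical flat export for the math library.
#ifdef _WIN32
#  define MATH_API extern "C" __declspec(dllexport)
#else
#  define MATH_API extern "C"
#endif

// The C# side would declare something like (illustrative only):
//   [DllImport("mathlib")] static extern int transform(double[] input, double[] output, int n);
MATH_API int transform(const double* input, double* output, int n)
{
    // Placeholder for the real CUDA-backed computation.
    for (int i = 0; i < n; ++i)
        output[i] = input[i] * 2.0;
    return 0;   // 0 = success
}
```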
Closed 11 years ago.
I need to IMPLEMENT (not use some existing library or open-source project) an event/message system.
I have the following restrictions:
It must be fast. It will be used for games and speed is the main constraint. I don't think I can afford to create/delete message/event classes every time a new message/event is sent, even if I use custom allocators for that.
I must be able to predict when a message/event that is sent/created will be received.
It must be easy to use. No matter how complicated the implementation of the system is, the programmer who uses it must have an easy-to-use interface.
I would prefer to avoid giant switches like the ones used for Windows messages, but I also want to avoid having to override a class for only one function (the event handler or something like that). I think something like the MFC style would be nice.
It must be able to handle lots of messages/events (maybe 1000 per frame at 60 frames/second; I don't know the exact number) without performance issues.
It can't use compiler hacks that are not available on other platforms. It must be portable. I will use C++ for the implementation.
Any architecture/design/link/book that you think is suitable for this, or might help, would be highly appreciated. Thanks!
Let me address your points one by one:
"It must be fast. It will be used for games and speed is the main constraint. I don't think I can afford to create/delete message/event classes every time a new message/event is sent, even if I use custom allocators for that."
It would suffice and perhaps be even more efficient (it was for me in one project) to reuse and refill existing messages. No need for a custom allocator.
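A minimal sketch of that reuse-and-refill idea; the Message and MessagePool names are placeholders, not from any particular engine:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Placeholder message type; a real one would carry a fixed-size payload.
struct Message {
    std::uint32_t type = 0;
    std::uint64_t payload = 0;
};

// Fixed-capacity pool: every message is allocated once up front, then
// acquired and released without touching the heap again.
class MessagePool {
public:
    explicit MessagePool(std::size_t capacity) : storage_(capacity) {
        free_.reserve(capacity);
        for (Message& m : storage_) free_.push_back(&m);
    }

    Message* acquire() {                    // O(1), no allocation
        if (free_.empty()) return nullptr;  // or grow, depending on policy
        Message* m = free_.back();
        free_.pop_back();
        return m;
    }

    void release(Message* m) {              // "refill" for the next sender
        *m = Message{};
        free_.push_back(m);
    }

private:
    std::vector<Message> storage_;          // never resized after construction
    std::vector<Message*> free_;
};
```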
"I must be able to predict when a message/event that is sent/created will be received."
You can make predictions but normal networks (you want portability) will make your predictions sometimes a bit off and sometimes way off.
"It must be easy to use. No matter how complicated the implementation of the system is, the programmer who uses it must have an easy-to-use interface."
That should be possible, albeit this could cost you some extra effort. Error handling and special cases (platform, networking) come to mind.
"I would prefer to avoid giant switches like the ones used for Windows messages, but I also want to avoid having to override a class for only one function (the event handler or something like that). I think something like the MFC style would be nice."
Avoiding manually written giant switches is a thing I 100% subscribe to.
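One common way to get rid of the switch without forcing a subclass per handler is a table of callbacks keyed by event type. A rough sketch (all names are illustrative; C++14 or later so that std::hash works on the enum key):

```cpp
#include <cstdint>
#include <functional>
#include <unordered_map>
#include <vector>

enum class EventType : std::uint32_t { Collision, Spawn, Despawn };

struct Event {
    EventType type;
    // ... event data ...
};

class Dispatcher {
public:
    using Handler = std::function<void(const Event&)>;

    void subscribe(EventType t, Handler h) {
        handlers_[t].push_back(std::move(h));
    }

    void dispatch(const Event& e) const {
        auto it = handlers_.find(e.type);
        if (it == handlers_.end()) return;
        for (const auto& h : it->second) h(e);   // no switch, no subclassing
    }

private:
    std::unordered_map<EventType, std::vector<Handler>> handlers_;
};
```

Handlers can be free functions, lambdas, or bound member functions, which gives roughly the MFC-message-map feel without the macros.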
"It must be able to handle lots of messages/events (maybe 1000 per frame at 60 frames/second; I don't know the exact number) without performance issues."
If you take care during implementation, you should only be bounded by the network.
"It can't use compiler hacks that are not available on other platforms. It must be portable. I will use C++ for the implementation."
Not even C++ is available on all platforms. Could you please list the platforms you are addressing?
Closed 12 years ago.
I won't call myself a novice programmer; I have been working for a while now in C and C++, but I have never worked on something of my own. I think more can be learned by working on a project of one's own, apart from one's day job. With this in mind, could you suggest some projects I could implement? I recently learned POSIX threads, so something I can do with those would be good. Unfortunately I don't know anything about building UIs, so I would like to avoid that.
You could write your own thread pool; that would be interesting and challenging, and it wouldn't require much UI work.
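To give an idea of the scope, here is a bare-bones sketch of such a pool using C++11 std::thread; the same structure maps directly onto pthread_create/pthread_cond_wait if you prefer to stay with the POSIX API:

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class ThreadPool {
public:
    explicit ThreadPool(std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            workers_.emplace_back([this] { worker_loop(); });
    }

    ~ThreadPool() {
        { std::lock_guard<std::mutex> lock(m_); stop_ = true; }
        cv_.notify_all();
        for (auto& w : workers_) w.join();
    }

    void submit(std::function<void()> job) {
        { std::lock_guard<std::mutex> lock(m_); jobs_.push(std::move(job)); }
        cv_.notify_one();
    }

private:
    void worker_loop() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return stop_ || !jobs_.empty(); });
                if (stop_ && jobs_.empty()) return;
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job();   // run outside the lock
        }
    }

    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> jobs_;
    std::mutex m_;
    std::condition_variable cv_;
    bool stop_ = false;
};
```

Once the basic version works, extensions like futures for results, work stealing, or task priorities make natural follow-on exercises.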
You may not get a lot of threading exposure, but anything on http://projecteuler.net/ is recommended.
Optionally, you could make a program to draw and color the Mandelbrot Set. You could do that with multiple threads and it lends itself to extended features for a larger project as you desire.
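A rough sketch of how the multi-threaded version might be split up, with each thread rendering its own band of rows into a shared buffer (constants and names are arbitrary; no locking is needed because the bands don't overlap):

```cpp
#include <complex>
#include <functional>
#include <thread>
#include <vector>

// Classic escape-time iteration for one point of the complex plane.
int escape_time(std::complex<double> c, int max_iter) {
    std::complex<double> z = 0;
    for (int i = 0; i < max_iter; ++i) {
        z = z * z + c;
        if (std::norm(z) > 4.0) return i;
    }
    return max_iter;
}

// Render rows [y0, y1) of a w x h image into img.
void render(std::vector<int>& img, int w, int h, int y0, int y1) {
    for (int y = y0; y < y1; ++y)
        for (int x = 0; x < w; ++x) {
            std::complex<double> c(-2.0 + 3.0 * x / w, -1.5 + 3.0 * y / h);
            img[y * w + x] = escape_time(c, 256);
        }
}

int main() {
    const int w = 800, h = 800, nthreads = 4;
    std::vector<int> img(w * h);
    std::vector<std::thread> pool;
    for (int t = 0; t < nthreads; ++t)
        pool.emplace_back(render, std::ref(img), w, h,
                          t * h / nthreads, (t + 1) * h / nthreads);
    for (auto& t : pool) t.join();
    // img now holds iteration counts; map them to colors and write a .ppm/.bmp.
}
```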
Projects that meant a lot to me when I was starting out were:
1) A small picture-editing project. Various operations on .bmp files, such as rotations, contrast changes, black-and-white conversion, convolution filters, etc. You actually see some results, so it is kind of cool, and you can easily bring POSIX threads into it.
2) API for B and B* trees.
3) A preemptive multithreaded OS kernel. Things like semaphores, signals, a fork function, etc. It can be really hard if you don't have a mentor, but it is extremely useful.
4) Thread safe FAT16 file system.
5) Distributed image processing.
I did those projects during my first 2-3 years of "programming career" and those meant a lot to me, so you can try some of them.
Have you tried the free community-based FOSS advertisements? Those projects are all looking for contributors :)
You could try to code a VST effect if you like computer-based music production.
Even with a simple interface, you would learn more about threads, dynamic plugin loading, real-time constraints, and so on.
Another idea is a basic command-line app that hosts plugins, each plugin implementing a simple action, so you can process data differently depending on the plugin you load.
Or implement a parser (or compiler) for basic MIPS for example.
Something I wish I had:
Design a data description language (ddl) that allows you to express some C (and/or whatever) types.
Write a program that, given specifications in the ddl, writes C (and/or whatever) code that:
defines the types in C (and/or whatever)
writes text representing instances of the types
reads text to generate instances of the types
frees instances of the types
Such a facility could be handy in other projects; one can spend a lot of time, over the years, producing such boilerplate by hand.
Further possibilities:
converts type instances to binary (e.g. for sending over the network)
converts binary to type instances
Then there is the tricky part: implement a facility for updating type specifications, so that, for example, if you have a file of revision-A text instances of the types, you can still read it with a revision-B program.
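Purely as an illustration of the idea (none of these names come from an existing tool), here is a hypothetical ddl declaration and the kind of C code the generator might emit for it:

```cpp
// ddl input (hypothetical syntax):
//   struct Point { int x; int y; }
//
// generated output:
#include <stdio.h>
#include <stdlib.h>

typedef struct Point { int x; int y; } Point;

/* writes a text representation of an instance */
void point_write(FILE* out, const Point* p) {
    fprintf(out, "Point { x = %d; y = %d; }\n", p->x, p->y);
}

/* reads text to populate an instance; returns 1 on success */
int point_read(FILE* in, Point* p) {
    return fscanf(in, " Point { x = %d ; y = %d ; }", &p->x, &p->y) == 2;
}

/* frees an instance; trivial here, but generated types may own memory */
void point_free(Point* p) {
    free(p);
}
```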
A few suggestions:
implement an HTTP and/or FTP server;
implement thread-safe containers and the associated test programs. It will be a good exercise in looking for potential deadlocks and improving the locking strategy; a small sketch follows.
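For the thread-safe container suggestion, here is a minimal sketch of the interface question that comes up immediately: try_pop has to hand back the value itself, because a separate empty()/front()/pop() sequence would race between the calls. (C++17 for std::optional; names are illustrative.)

```cpp
#include <mutex>
#include <optional>
#include <queue>
#include <utility>

template <typename T>
class ConcurrentQueue {
public:
    void push(T value) {
        std::lock_guard<std::mutex> lock(m_);
        q_.push(std::move(value));
    }

    // Returns the front element, or nullopt if the queue was empty.
    std::optional<T> try_pop() {
        std::lock_guard<std::mutex> lock(m_);
        if (q_.empty()) return std::nullopt;
        T value = std::move(q_.front());
        q_.pop();
        return value;
    }

private:
    std::mutex m_;
    std::queue<T> q_;
};
```

A blocking pop with a condition variable, plus a stress-test program with several producer and consumer threads, would round out the exercise.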
Closed 12 years ago.
Can somebody point out the advantages of Clojure and what types of applications it is suited for?
I don't intend to compare it to other languages as such. As a language in itself, what is it suitable for? My intention is to know the right tool for the right job, and where Clojure fits in that kind of scenario.
Advantages:
all the benefits of functional programming, without the purity straitjacket
lispy: allows dynamic, compact code with late binding, macros, multimethods
Java interoperability
can code functions against the sequence abstraction instead of against specific data structures
concurrency goodies baked in: functional data structures, software transactional memory
runs on the JVM: portability and fast garbage collection
Suited for:
bottom-up design
embedded languages
highly-concurrent applications
Probably not suited for:
cases where you want static typing
if you want the language to be amenable to static analysis
anything that needs a fast startup time
hordes of clueless Java monkeys
In general, I find the following to be strong points of Clojure (in no particular order):
1) The REPL, for trying things out interactively.
2) Everything is immutable by default, and mutability has several well-chosen standard patterns for modifying state safely in a multithreaded environment.
3) Tail recursion is made explicit. Until there is proper support for tail calls on the JVM, this is probably the best compromise.
4) Very expressive language which favors a functional approach over an imperative approach.
5) Very good integration with the Java platform making it painless to mix in Java libraries
6) Leiningen as a build and dependency management tool together with the clojars site
Ok, point 6 has nothing to do with the language per se, but definitely with my enjoyment of using it.
Regarding applications: it targets multithreaded applications, but the way things are going right now, that could mean almost anything, as everyone tries to keep all those cores busy while the user is not waiting. On the other hand, apparently a lot of people use it to deploy to Google App Engine, which is radically SINGLE-threaded.
The functional approach works well, in my (limited) experience, for implementing data transformations and calculations, where information and events can be "streamed" through the application. Web apps fall largely under this category, where we "transform" a request into a "response".
But I have yet to use it in real production code. Currently I use it for personal projects and prototyping/benchmarking stuff.
Closed 14 years ago.
I always work in a Windows environment, and I most often write programs in C/C++.
Is it necessary for a Windows application programmer to memorize as many Win32 APIs as possible?
Dazza
Well, I can't say it would hurt, but I think it's better to remember how to quickly reference the API documentation than to actually remember the documentation itself. That mental currency (of which there is a limited amount, of course) could be better spent remembering other things that make you a better developer in your environment.
You shouldn't worry about brute memorization. You probably want to understand the basics though such as message pumps, posting messages, resource files, posting to threads, and just the general gist of how to do things in Win32. You'll probably know this after writing your first Win32 program.
In general, though, concern yourself with learning the best way to do something when it's needed. Keep good references around, such as
Programming Windows by Petzold
As a programmer you have so many other things to learn, i.e. your problem domain or other technologies you'll have to integrate with, that brute memorization is usually a waste of resources and won't work as well as Google or a good index in the back of a book.
Yes. Even if you use a framework like MFC or WTL, you still need to use Win32 calls for many things.
It depends. If you are doing heavy GUI work, knowing Win32 (with or without MFC) is going to benefit you.
OTOH, if you are writing background tasks and services, I would segregate the necessary Win32 functions away from the normal code in order to keep it as portable as possible. Every place I've worked that had a client/server setup (as it used to be called), as the product gained popularity and industry notice, there always came along a customer who was interested in the product, but not on Windows. If you have Win32 calls all over your code, porting is a pain.
Of course, this doesn't mean you shouldn't know the APIs. When in Rome... If you are writing an app that needs to do a lot of I/O, you should be using the Win32 I/O completion port API. I'm just saying: hide that stuff whenever you can.
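As a sketch of what "hide that stuff" can look like in practice (the file names and the function are made up, and file I/O is only an example; the same pattern applies to threads, sockets, and so on):

```cpp
// portable_file.h -- the rest of the code includes only this header and
// never sees <windows.h>.
#pragma once
#include <string>
#include <vector>

std::vector<char> read_whole_file(const std::string& path);

// portable_file_win32.cpp -- the Win32 implementation lives in one place.
#ifdef _WIN32
#include <windows.h>

std::vector<char> read_whole_file(const std::string& path) {
    HANDLE h = CreateFileA(path.c_str(), GENERIC_READ, FILE_SHARE_READ,
                           nullptr, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (h == INVALID_HANDLE_VALUE) return {};
    std::vector<char> data;
    char buf[4096];
    DWORD got = 0;
    while (ReadFile(h, buf, sizeof buf, &got, nullptr) && got > 0)
        data.insert(data.end(), buf, buf + got);
    CloseHandle(h);
    return data;
}
#endif
// A portable_file_posix.cpp would provide the same function with open/read/close.
```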
Short answer: remember individual APIs? Don't be silly. Be aware of many APIs? Indeed.