I am doing a project on scheduling algorithms in NS2. To implement a scheduling algorithm, do I have to write code in Tcl or in C++?
It depends on the type of scheduling you are doing. For example, if you are writing a queue scheduling algorithm, then you should write it in C++.
In NS2 you can write algorithms in both Tcl script and C++. Tcl scripting is simple and convenient while you are working with an existing protocol, whereas C++ is what you code in when you create a new protocol, and it is a bit harder. Overall, for the best results you should implement the algorithm itself in C++.
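For illustration, here is a minimal C++ sketch of a custom queue scheduler for NS2. It follows the standard Queue interface from ns-2's queue.h; the plain FIFO policy and the Tcl name "Queue/MySched" are only placeholders for your own algorithm, so treat this as a starting point rather than working code.

    // Hypothetical custom queue discipline for ns-2 (C++ side).
    #include "queue.h"

    class MySchedQueue : public Queue {
    public:
        MySchedQueue() { q_ = new PacketQueue; pq_ = q_; }
    protected:
        void enque(Packet* p) {
            q_->enque(p);                 // your policy decides *where* to insert
            if (q_->length() > qlim_) {   // qlim_ is inherited from Queue
                q_->remove(p);
                drop(p);                  // over the limit: drop the packet
            }
        }
        Packet* deque() {
            return q_->deque();           // your policy decides *which* packet leaves
        }
        PacketQueue* q_;                  // underlying packet list
    };

    // Glue so the scheduler is reachable from Tcl as "Queue/MySched".
    static class MySchedQueueClass : public TclClass {
    public:
        MySchedQueueClass() : TclClass("Queue/MySched") {}
        TclObject* create(int, const char* const*) { return new MySchedQueue; }
    } class_my_sched_queue;

On the Tcl side you would then only select the scheduler and set its parameters, e.g. with something like "$ns duplex-link $n0 $n1 1Mb 10ms MySched".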
I'm beginning a project and am looking for a little guidance before I start making major decisions.
The project is a network simulator. Basically I will load a directed graph representing a network of computers. The network is expected to run an algorithm to simulate an operation for the network as a whole. Each separate node on the graph will run the same algorithm.
For example: a simple flooding algorithm that begins at a root node, where each node receives the message and then resends it to its neighbors.
My issue is the loading of algorithms for each node to run. The user should be able to create a text file with the algorithm and have it load into my program for each node to run separately.
The text files can be in any format, although I believe it would be easiest if they were formatted as a C++ function.
The only idea I could come up with is to write a parser that reads each line. Not only would that be difficult, but I don't think it would work, due to the loops present in most algorithms.
I'm willing to give a more detailed description.
Usually, if you want to load user-written code from files at runtime, you'd use a scripting language. Lua is a popular one for embedding.
Your description of parsing a text file for an algorithm is basically you creating your own scripting language (which is also commonly done).
Perhaps you don't need a full-blown scripting language, but at the very least you need a domain-specific language, and you might as well use a (sandboxed) scripting language for that purpose. Boost.Spirit is an option for describing and embedding your domain-specific language parser directly in your C++ code.
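For instance, here is a rough sketch of embedding Lua so that each node loads its algorithm from a text file. The file name and the on_receive function are just a contract I made up; you would define your own, and also register C++ callbacks (e.g. with lua_register) so the script can send messages back out to its neighbors.

    // Minimal sketch: each simulator node runs a user-supplied Lua script.
    #include <string>
    #include <lua.hpp>   // Lua's C API via the C++-friendly header

    class Node {
    public:
        explicit Node(const std::string& script) {
            L = luaL_newstate();
            luaL_openlibs(L);
            luaL_dofile(L, script.c_str());   // load the user's algorithm, e.g. "flood.lua"
        }
        ~Node() { lua_close(L); }

        // Deliver a message to the script's handler function.
        void receive(const std::string& msg) {
            lua_getglobal(L, "on_receive");   // function the script is expected to define
            lua_pushstring(L, msg.c_str());
            if (lua_pcall(L, 1, 0, 0) != LUA_OK) {
                // the error message is on top of the stack
                lua_pop(L, 1);
            }
        }
    private:
        lua_State* L;
    };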
I am currently looking for a discrete event simulator written for C++. I did not find much on the web written specifically in OO style; there are some, but they are outdated. Others, such as OPNET, OMNeT++ and ns-3, are way too complicated for what I need to do. Besides, I need to simulate agent-based algorithms on systems of thousands of nodes.
Does anybody know anything suitable for my needs?
Others have good direct answers, but I'm going to suggest an alternative. If I understand you right, you want a system in C++ or such where you can post events that fire in the future, and code is run when those events fire.
I had a project to do like this, and I started out trying to write such an event system in C++ and then quickly realized I had a better solution.
Have you considered writing your program in behavioral Verilog? That may seem strange to write software in a hardware description language, but a Verilog simulator is an event-based system underneath, and behavioral Verilog is a very convenient way to express events, timing, triggers, etc. There is a free Verilog simulator (which is what I used) called Icarus Verilog. If you're not using Ubuntu or some Linux distro with Icarus already in a package, building from source is straightforward.
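For comparison, the "post events that fire in the future" idea can be sketched in plain C++ in a few dozen lines (all names here are illustrative, not a library you can download):

    // Minimal discrete-event scheduler sketch in plain C++.
    #include <cstdint>
    #include <functional>
    #include <queue>
    #include <vector>

    struct Event {
        std::uint64_t time;                  // simulated time at which the event fires
        std::function<void()> action;        // code to run when it fires
        bool operator>(const Event& other) const { return time > other.time; }
    };

    class Scheduler {
    public:
        void post(std::uint64_t delay, std::function<void()> action) {
            pending.push({now + delay, std::move(action)});
        }
        void run() {
            while (!pending.empty()) {
                Event e = pending.top();
                pending.pop();
                now = e.time;                // advance simulated time
                e.action();                  // the action may post further events
            }
        }
        std::uint64_t now = 0;
    private:
        std::priority_queue<Event, std::vector<Event>, std::greater<Event>> pending;
    };

A node model then just calls post(delay, callback) from inside its callbacks; a Verilog simulator keeps essentially this kind of event queue for you under the hood.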
I would recommend having a second look at OMNeT++. At first sight it may look quite complex, but if you look into it in more detail you will find that most of the complexity is in the network add-on (the INET Framework). Unless you are going to do a detailed network simulation, you do not need INET.
Using the OMNeT++ core is not especially difficult, and it may be simpler than other similar tools.
You may want to have a look at an intro.
One of the things that makes OMNeT++ attractive to me is its scalability. It is possible to run large simulations on a desktop. Besides, it is possible to scale the same simulation to a cluster without rewriting the code.
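To give a feel for the amount of code involved, a simple module in the OMNeT++ core is just a C++ class plus a short NED file describing its gates and parameters. This is only an illustrative sketch (the module name, the "out" gate and the "isRoot" parameter are made up):

    // Illustrative OMNeT++ simple module (the matching .ned file is not shown).
    #include <omnetpp.h>
    using namespace omnetpp;

    class FloodNode : public cSimpleModule {
    protected:
        virtual void initialize() override {
            if (par("isRoot").boolValue())            // parameter declared in the .ned file
                send(new cMessage("flood"), "out", 0);
        }
        virtual void handleMessage(cMessage *msg) override {
            // forward a copy to every neighbour, then discard the original
            // (a real flooding module would also suppress duplicates)
            for (int i = 0; i < gateSize("out"); ++i)
                send(msg->dup(), "out", i);
            delete msg;
        }
    };

    Define_Module(FloodNode);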
You should consider SystemC, although I'd also recommend taking a second look at OMNeT++.
We use SIMLIB at my school. It is a very fast, easy-to-understand, object-oriented discrete and continuous simulator. It might look outdated, but it is still maintained.
There is CSIM from Mesquite Software, which supports developing models in C, C++ and Java. However, it is a paid commercial product, AFAIK.
Take a look at the GBL library. It's written in modern C++ and even supports C++0x features like move semantics and lambda functions. It offers several modeling mechanisms: synchronous and asynchronous event handlers, preemptive threads, and fibers. You can create purely behavioral, cycle-accurate, and real-time models, or any mixture of those.
Suppose you had the possibility of building an application that uses both Haskell and C++.
Which layers would you let Haskell manage, and which layers would you let C++ manage?
Has anyone ever done such a combination (surely someone has)?
(The Haskell site says this is really easy because GHC can compile Haskell via C with gcc.)
At first, I think I would keep all I/O operations, as well as GUI management, in the C++ layers.
It is a pretty vague question, but as I am planning to learn Haskell, I was thinking about delegating some work to Haskell code (I learn by actually coding), and I want to choose a part where I will see Haskell's benefits.
The benefit of Haskell is the powerful abstractions it allows you to use. You're not thinking in terms of ones and zeros and addresses and registers but computations and type properties and continuations.
The benefit of C++ is how tightly you can optimize it when necessary. You aren't thinking about high-minded monads, arrows, partial application, and composing pure functions: with C++, you can get right down to the bare metal!
There's tension between these two statements. In his paper “Structured Programming with go to statements,” Donald Knuth wrote
I have felt for a long time that a talent for programming consists largely of the ability to switch readily from microscopic to macroscopic views of things, i.e., to change levels of abstraction fluently.
Knowing how to use Haskell and C++ but also how and when to combine them well will knock down all sorts of problems.
The last big project I wrote that used FFI involved using an in-house radar modeling library written in C. Reimplementing it would have been silly, and expressing the high-level logic of the rest of the application would have been a pain. I kept the “brains” of it in Haskell and called the C library when I needed it.
You're wanting to do this as an exercise, so I'd recommend the same approach: write the smarts in Haskell. Shackling Haskell as a slave to C++ will probably end up frustrating you or making you feel as though you wasted your time. Use each language where its strengths lie.
Here is how I see things:
Functional languages excel at transforming things. Whenever you write programs which take an input and map/filter/reduce it, use functional constructs. Wonderful real-world examples where Haskell should excel are web applications (you basically transform things stored in a database into web pages).
Procedural languages (OOP languages are procedural) excel at side effects and communication between objects. They are cumbersome to use to transform data, but whenever you want to do system programming or (bidirectional) interaction with humans (user interfaces of any kind, including client-side web programming), they do the job cleanly.
To those who argue that user interfaces should have a functional description, I answer that well-established frameworks are easy enough to use from OOP languages that one should use them this way. After all, it is natural to think of UI components in terms of objects and communication between objects.
IO is only a tool: whenever you transform input to output, Haskell (or whatever FP language) should do the IO. Whenever you speak to a human, C++ (or whatever OOP language) should do the IO.
Don't worry about speed. When you use Haskell for the right job, it's efficient. When you use C++ or Python for the right job, it's efficient.
Therefore, assuming I'm fluent in Haskell, C, C++ and Python, here's how I write applications:
If my application's main role is to transform data, I write it in Haskell, with possibly some low-level parts written in C (which may, in turn, call some high-tech low level parts written in C++, but I'd stick with C as an interface for portability reasons).
If my application's main role is to interact with a user, I write it in Python (PyQt for instance), and let Python call performance-critical routines written in C++ (boost::python is pretty good as a binding generator). I may also have to call subroutines which transform or fetch data, which will be written in Haskell.
If I have to write a part of an application in Haskell, I segregate it into a C-callable API (a minimal sketch of the C++ side follows this list).
Sometimes, a Haskell application that reads from stdin and writes back to stdout is useful as a submodule (one that you call with fork/exec, or whatever on your platform). Sometimes, a shell script is the right wrapper for such applications.
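The C-callable API route is less work than it sounds: you mark the Haskell functions you need with foreign export ccall, and on the C++ side you only have to start and stop the GHC runtime. A rough sketch of the C++ side (the module name Transform and the function transform are made up; GHC generates the _stub.h header for you):

    // Sketch of calling a Haskell function exported with "foreign export ccall".
    #include <cstdio>
    #include "HsFFI.h"            // ships with GHC
    #include "Transform_stub.h"   // generated by GHC for the exported function

    int main(int argc, char* argv[]) {
        hs_init(&argc, &argv);                 // start the Haskell runtime
        std::printf("%d\n", transform(21));    // call into Haskell
        hs_exit();                             // shut the runtime down
        return 0;
    }

Letting ghc drive the final link (with -no-hs-main) is the easiest way to pull in the runtime libraries.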
This answer is more a story than a comprehensive answer, but I used a mix of Haskell, Python and C++ for my dissertation in computational linguistics, as well as several C and Java tools that I didn't write. I found it simplest to run everything as a separate process, using Python as glue code to start the Haskell, C++ and Java programs.
The C++ was a fairly simple, tight loop that counted feature occurrences. Basically all it did was math and simple I/O. I actually controlled options by having the Python glue code write out a header full of #defines and recompiling. Kind of hacky, but it worked.
The Haskell was all the intermediate processing: taking the complex output from the various C and Java parsers that I used, filtering extraneous data, and transforming it into the simple format the C++ code expected. Then I took the C++ output and transformed it into LaTeX markup (among other formats).
This is an area where you would expect Python to be strong, but I found that Haskell makes manipulation of complex structures easier; Python is probably better for simple line-to-line transformations, but I was slicing and dicing parse trees, and I found that I kept losing track of the input and output types when I wrote the code in Python.
Since I was using Haskell a lot like a more structured scripting language, I ended up writing a few file I/O utilities, but beyond that, the built-in libraries for tree and list manipulation sufficed.
In summary, if you have a problem like mine, I would suggest C++ for the memory-constrained, speed-critical part, Haskell for the high-level transformations, and Python to run it all.
I have never mixed the two languages, but your approach feels a little upside down to me.
Haskell is better suited to high-level operations, while C++ can be optimized heavily and is most beneficial for tight loops and other performance-critical code.
One of the largest benefits of Haskell is the encapsulation of IO into monads. As long as this IO isn't time-critical, I don't see any reason to do it in C++.
For the GUI part you are probably right. There are a plethora of Haskell GUI libraries, but C++ has powerful tools such as Qt Creator which simplify the tedious tasks a lot.
There are certain common library functions in Erlang that are much slower than their C equivalents.
Is it possible to have C code do the binary parsing and number crunching, and have Erlang spawn processes to run the C code?
Of course C would be faster (in the extreme case, after optimizations), if by faster you mean faster to run.
Erlang would be, by far, faster to write. Depending on the speed requirements you have, Erlang is probably "fast enough", and it will save you days of searching for bugs in C.
C code will only be faster after optimizations. If you spend the same amount of time on C and Erlang, you will come out with about the same speed (note that I count time spent debugging and fixing errors in this estimate, which will be a lot less in Erlang).
So:
faster writing = Erlang
faster running (after optimisations) = C
faster running without optimisations = either of the two
Take your pick.
There are two rough rules of thumb, based on the Erlang FAQ:
Code which involves mainly number crunching and data processing will run about 10 times slower than an equivalent C program. This includes almost all "micro benchmarks".
Large systems which spend most of their time communicating with other systems, recovering from faults and making complex decisions run at least as fast as equivalent C programs.
However, there are some official solutions to Erlang's lack of number-crunching performance:
Native Implemented Function (NIF):
You implement a function in C and load its object code into the Erlang virtual machine, so it behaves like a standard Erlang function but runs with native performance.
Examples: Evedis, Bitcask, ElevelDB
Port:
A byte-oriented interface from the Erlang virtual machine to an external OS process through the standard input and output file descriptors. From Erlang's point of view, communication with the port is via message passing (a minimal external program is sketched after this list).
Port Driver:
A dynamically linked C object file that is loaded into the Erlang virtual machine and acts like a port. From Erlang's point of view, communication with the port driver is also via message passing.
Examples: OTP_Inet, ENanomsg, P1_TLS
C Node:
You can simply promote your C program to a distributed Erlang node: there is a specification for implementing an Erlang node in C, so it can communicate with real Erlang nodes through the ordinary distribution interface.
All of the aforementioned solutions have their own pros and cons and need to be used with care.
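To make the port option concrete: the external program is just an ordinary OS process that speaks length-prefixed packets on stdin/stdout. The sketch below assumes the Erlang side opens it with something like open_port({spawn, "./crunch"}, [{packet, 2}, binary]); the program name and the byte-summing "crunching" are made up.

    // External port program: reads {packet, 2} framed requests on stdin,
    // does the number crunching, and writes a framed reply on stdout.
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    static bool read_exact(unsigned char* buf, std::size_t len) {
        return std::fread(buf, 1, len, stdin) == len;
    }

    static void write_packet(const unsigned char* buf, std::size_t len) {
        unsigned char hdr[2] = { static_cast<unsigned char>(len >> 8),
                                 static_cast<unsigned char>(len & 0xff) };
        std::fwrite(hdr, 1, 2, stdout);
        std::fwrite(buf, 1, len, stdout);
        std::fflush(stdout);
    }

    int main() {
        unsigned char hdr[2];
        while (read_exact(hdr, 2)) {                     // 2-byte big-endian length
            std::size_t len = (hdr[0] << 8) | hdr[1];
            std::vector<unsigned char> payload(len);
            if (len > 0 && !read_exact(payload.data(), len)) break;
            unsigned char sum = 0;                       // placeholder "crunching"
            for (unsigned char b : payload) sum += b;
            write_packet(&sum, 1);
        }
        return 0;
    }

The nice property of a port is isolation: if this process crashes, it cannot take the Erlang VM down with it, unlike a NIF or a port driver.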
First of all, write the whole logic of the system in Erlang, then implement the binary handling in C. Using NIFs (a kind of interface to C) is pretty straightforward and transparent to the rest of the system. Here is another thread about talking to C: Run C Code Block in Erlang.
Before hacking C, make sure you have benchmarked the current implementation. It may well satisfy your needs, especially with the latest Erlang/OTP release (R14), which introduces great enhancements to binary handling.
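As a rough sketch of the NIF route (the module name my_crunch and the function sum_bytes are made up; the Erlang module would declare a stub and call erlang:load_nif/2 from an -on_load function), compiled as C or C++ against erl_nif.h:

    // Hypothetical NIF exposing one binary-crunching function to Erlang.
    #include <erl_nif.h>

    static ERL_NIF_TERM sum_bytes(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[]) {
        ErlNifBinary bin;
        if (argc != 1 || !enif_inspect_binary(env, argv[0], &bin))
            return enif_make_badarg(env);
        unsigned long total = 0;
        for (size_t i = 0; i < bin.size; ++i)    // the actual number crunching
            total += bin.data[i];
        return enif_make_ulong(env, total);
    }

    static ErlNifFunc nif_funcs[] = {
        {"sum_bytes", 1, sum_bytes}              // Erlang name, arity, C function
    };

    ERL_NIF_INIT(my_crunch, nif_funcs, NULL, NULL, NULL, NULL)

Keep individual NIF calls short (or mark them as dirty on newer releases), because a NIF runs on an Erlang scheduler thread and blocks it while it executes.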
Easy threading is not what is interesting about Erlang; easy threading plus message passing and the OTP framework is what's awesome about it. If you need number crunching, use something like OCaml, Python or Haskell. Erlang is not all that good at number crunching.
Parsing binaries is one of the things Erlang is best at, though; it is probably the best language for it. Joe Armstrong's book Programming Erlang covers everything really well, and it is not so expensive used. It also talks about integrating C code and gives an example. The source code is available from the Pragmatic Programmers site without needing to buy the book; you can grep for #include or something.
If you are really looking for speed, you should try the OpenMP or MPI parallel programming frameworks for C and C++. I recommend taking a look at Patterns for Parallel Programming (link to amazon.com) for the details of OpenMP and MPI programming patterns.
The erl_nif section of the Erlang ERTS reference manual will be helpful.
If you like Erlang but want C-like speed, why not go for JoCaml? It is an extension of OCaml (which is similar to Erlang but close to C in terms of speed) designed for the multicore revolution going on at the moment. I love it (and I know more than 10 programming languages...).
I used C for over 20 years.
I have been using Erlang almost exclusively in recent years.
C is faster to run, for obvious reasons.
However, Erlang is fast enough for most things when you do it right.
Also, writing Erlang is much faster and more fun.
The pieces of an algorithm for which run-time speed is critical can certainly be written in C, which is how Erlang's BIFs are implemented.
Yes,
but there's more than one way to do this, loosely speaking, some or all of which are already listed.
We should ask:
Are those procedures really equivalent (how do the Erlang and C versions differ)?
Is there a better way to write Erlang for this task (other procedures/libraries or data-types)?
It may be helpful to consider this post: Scaling & Speed with Erlang.
To address the question: yes, it is possible to have Erlang call a C function to handle a specific task. The most common way is to use a NIF - http://erlang.org/doc/tutorial/nif.html. Before Erlang version 20 or so, NIFs were recommended only for short-running functions (a few ms), because they block the scheduler they run on, which does not play well with Erlang's preemptive scheduling. Now, with dirty schedulers, it is more flexible; you can read up on that.
Just to note: C may be faster at parsing binaries (though you should run tests), but Erlang is by far the faster language to write the code in. Erlang does a great job of parsing binaries with pattern matching.