Node.js vs C++ for mathematical computation

I have to write a server program that implements some fuzzy logic, and I chose to write it in Node.js to take advantage of its event orientation.
I have to work on a computationally difficult mathematical problem, and I don't know what the best way to get performance is:
Write everything in Node.js and use the power of the V8 engine for the mathematical work.
Write a module in C++ that implements all the mathematical functions and call it from Node.
Does anyone have experience with this type of computation on both platforms?

Since you need the Node.js part anyway, go ahead and implement everything in Node.js. If it is fast enough, it is also the easiest to maintain. It's very hard to predict the power of a virtual machine / JIT compiler.
If it is not fast enough, first think about algorithmic improvements. If that doesn't help and profiling shows that the computation is the problem, go ahead and re-implement it in C++. But be aware that writing performant C++ code is not trivial. Make sure you have a good profiler at hand and measure often.
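For concreteness, here is roughly what that C++ fallback looks like: a tiny native addon exposed to Node through N-API. This is a minimal sketch; the name heavyMath and the placeholder loop are illustrative, not from the question, and error checks are omitted for brevity.

    // addon.cc - minimal N-API addon sketch
    #include <node_api.h>

    // Stand-in for the expensive fuzzy-logic computation.
    static double heavy_math(double x) {
        double acc = 0.0;
        for (int i = 1; i <= 10000000; ++i)
            acc += x / i;
        return acc;
    }

    static napi_value HeavyMath(napi_env env, napi_callback_info info) {
        size_t argc = 1;
        napi_value argv[1];
        napi_get_cb_info(env, info, &argc, argv, nullptr, nullptr);

        double x = 0.0;
        napi_get_value_double(env, argv[0], &x);

        napi_value result;
        napi_create_double(env, heavy_math(x), &result);
        return result;
    }

    static napi_value Init(napi_env env, napi_value exports) {
        napi_value fn;
        napi_create_function(env, "heavyMath", NAPI_AUTO_LENGTH, HeavyMath, nullptr, &fn);
        napi_set_named_property(env, exports, "heavyMath", fn);
        return exports;
    }

    NAPI_MODULE(NODE_GYP_MODULE_NAME, Init)

After compiling with node-gyp, JavaScript calls it like any other module, e.g. require('./build/Release/addon').heavyMath(2.5).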
In general I'd say C++ code is faster if written correctly. The tricky part is writing it correctly. Please check the article Google Paper on C++, Java, Scala, Go for more information. The gist is: managed languages make it a lot easier to write and maintain code, but if you need raw performance, C++ is the best. That comes at the price of needing a lot of expertise, and the code is harder to maintain.

denshade, your C implementation only goes up to 2e5, not 2e6 as you did for JS (linking to today's revisions on GitHub):
primes.c
primes.js
Piping to /dev/null and changing the JS to 2e5 as well, I get about 6.5 seconds for C and about 8.5 seconds for JS (using some version of Node) on my current computer.
Since your algorithm is O(n^2), I would expect 2e6 to take some 15 minutes, not 16 hours, but I haven't tried it. Maybe it does fall apart that badly for some reason.
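(For reference, the quadratic estimate: going from 2e5 to 2e6 multiplies n by 10, so an O(n^2) runtime should grow by roughly 10^2 = 100; 8.5 seconds x 100 = 850 seconds, i.e. about 14 minutes.)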
(Note that I couldn't comment directly, since I'm brand new on SO and have no rep.)

It's pretty much impossible to answer this kind of question. The answer, as always for these things, is that it depends on your skills and how much time and effort you are willing to put in.
C++ always has the potential to be faster and more efficient, since you have much closer control over all the things that matter. The downside is that you have to do all the things that matter, and the generic implementations in the other language were probably written by someone who knows what they are doing; they could well be better than a naive or quick implementation in C++.
Plus, often you'll find that the bottleneck isn't what you think it is anyway. For example, if reading in your data turns out to take 20 times as long as the calculations (which isn't impossible), then it hardly matters how fast the calculations are. And intuition about where the bottlenecks lie is often badly wrong, even for experienced developers.

http://benchmarksgame.alioth.debian.org/u64q/compare.php?lang=node&lang2=gpp
The above link is dead; an archived copy is on the Wayback Machine:
https://web.archive.org/web/20180324192118/http://benchmarksgame.alioth.debian.org/u64q/compare.php?lang=node&lang2=gpp
In those benchmarks, C++ performed up to 10x faster than Node.js at mathematical operations.
That site has since moved here:
https://benchmarksgame-team.pages.debian.net/benchmarksgame/which-programs-are-fastest.html

One thing to consider in going the C++ route for complex mathematical computations is that you might be able to leverage an existing high-performance library, such as BLAS, LAPACK, Armadillo (ARMA), etc., where other developers have already put significant time and effort into providing highly optimized functionality. I doubt you'll find a similar level of high-performance library for JavaScript. Certainly, if you have an identified bottleneck around matrix calculations or linear algebra, one of these C++ libraries is the way to go.
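To illustrate what leveraging such a library looks like, here is a sketch using the CBLAS interface; the matrix size and build flags are illustrative, not prescriptive.

    // matmul_blas.cc - sketch of calling an optimized BLAS routine
    // Build (one possibility): g++ matmul_blas.cc -lopenblas
    #include <cblas.h>
    #include <cstdio>
    #include <vector>

    int main() {
        const int n = 512;  // illustrative size
        std::vector<double> A(n * n, 1.0), B(n * n, 2.0), C(n * n, 0.0);

        // C = 1.0 * A * B + 0.0 * C, row-major, no transposition.
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0, A.data(), n, B.data(), n, 0.0, C.data(), n);

        std::printf("C[0][0] = %f\n", C[0]);  // expect 512 * 1.0 * 2.0 = 1024
        return 0;
    }

The point is that dgemm dispatches to code tuned for cache sizes and SIMD units, far beyond what a hand-rolled triple loop in either C++ or JavaScript would typically achieve.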

I ran denshade's code with the prints removed, and the timings for 100,000 numbers are striking:
3 sec for Node.js!
6 sec for gcc/clang-compiled C
6 sec for HHVM (PHP)
14 sec for PHP 7 with opcache
15 sec for PHP 7 without opcache
Node.js is so fast here because the code is JIT-compiled and optimized as it runs.
So maybe you just need to test for yourself which language best fits your needs in this case.

If your calculations aren't trivial, I'd like to issue a warning. JavaScript is very bad when you are doing heavy calculations. My story involves a simple prime program, which you can find here: https://github.com/denshade/speedFun
Long story short: I created a simple, albeit inefficient, prime check function, implemented the same way in both C and JavaScript. The first 2,000,000 primes are verified in 5 seconds in C. The same function in JavaScript ran for more than 16 hours in Node.js.
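For readers who don't want to open the repository, the shape of such a benchmark is roughly the following naive trial-division check. This is an illustrative sketch, not the exact code from speedFun.

    // naive_primes.cc - illustrative naive prime counter of the kind benchmarked
    #include <cstdio>

    // Trial division over every smaller number: O(n) per check, O(n^2) overall.
    static bool is_prime(long n) {
        if (n < 2) return false;
        for (long i = 2; i < n; ++i)
            if (n % i == 0) return false;
        return true;
    }

    int main() {
        long count = 0;
        for (long n = 2; n < 200000; ++n)  // 2e5, matching the primes.c revision discussed above
            if (is_prime(n)) ++count;
        std::printf("%ld primes below 2e5\n", count);
        return 0;
    }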

Node.js is proving itself to be a perfect technology partner in the following areas:
● I/O bound Applications
● Data Streaming Applications
● Data Intensive Real-time Applications (DIRT)
● JSON APIs based Applications
● Single Page Applications
It is not advisable to use Node.js for CPU intensive applications.
Here is an API comparison:
https://www.linkedin.com/pulse/nodejs-vs-java-which-faster-apis-owen-rubel

Related

Internet connection speed vs. Programming language speed for HTTP Requests?

I know how to program in Python, but I am also interested in learning C++. I have heard that it is much faster than Python, and for the programs I am currently writing, I would prefer them to run as quickly and efficiently as possible. I know that a lot of that comes from just writing good code, but I was also wondering whether using another language, such as C++, would help.
While I was pondering this, I realized that since most of my programs will mainly be using the internet (as in implementing Google APIs and using the information from them to submit data to other websites), maybe the speed of the language doesn't matter if the speed of my internet connection is always going to be relatively the same. I have two ways of connecting to the internet: Selenium (or some kind of automated browser) for things that require a browser, and plain HTTP requests.
How much difference would I see between Python and another language, even though the major focus of my programs is on the internet?
Thanks.
Scenarios
The main benefit you would get from using a language that compiles to machine code is that you can do lots of byte- and bit-level magic: say, modifying image data, transforming audio, or analysing the indices of a genomic sequence database.
Typical tasks
Serving web pages, you typically have problems of a completely different sort: you will be loading a resource from hard disk and serving it directly if it's an image or audio file, or you will be executing different transformation steps on a text resource until it becomes the final HTML document. The latter involves template engines, database queries, and so on.
If you look at that, you can see that most of the things, say 90-99%, are pretty high-level stuff. In Python you will use an API that has been optimized by many, many users for optimal performance (meaning: time and space). "Open a file" will be almost as fast in C as it is in Python, and so is reading from the file and serving it to some socket. Transforming text data could be a bit faster in C++ than in Python, but... how fast does it have to be? A user is very likely willing to wait 200 ms, isn't he? And that is a lot of time for a nice high-level template engine to transform a bit of text.
What C++ and Python can do for you
A typical Python web service is much faster to write and easier to deploy than a server written in C++. If you did it in C++, you would first need to handle sockets and connections, and for those you would either use an existing library or write your own handling. If you use an existing library (which I strongly recommend), you are basically not doing anything differently than Python does. If you write your own handling, there are many, many low-level things you can get wrong that will burn the performance you wish for. No, that is not an option.
If you need speed, and Python plus the server and template framework is not enough, you should rethink your architectural approach. Then take a look at the c10k problem and write tiny pieces in C. (Look at this very hot c10k topic, too.) But I cannot see many reasons not to use a high-level language like Python if you are only looking for performance in a medium-complexity web-serving application.
Summary: The difference
If you are just serving files from the hard drive, I guess your Python program would even be faster than your hand-crafted C++ server. If you use a framework written in C or C++ and just drop in your static pages, I guess you would get something like a 2-5x boost over Python. Then again, if your web application is a bit more complex than serving static content, I estimate that the difference will diminish very quickly and you will get at most a 1-2x speed gain.
It's not all about speed...
One note about another difference between C++ and Python that one should not forget: since C++ is really compiled and not as dynamic as Python, you gain a lot of static error analysis by using C++. Writing correct code is always difficult, but it can be done in C++ and in Python with good tests and static analysis; the latter is simpler in C++ (my opinion). If that is an issue for you, you may think again, but you asked about speed.

Speed: embedding Python in C++ or extending Python with C++

I have some big MySQL databases with data for calculations, and some parts where I need to get data from external websites.
I have used Python for the whole thing until now, but what shall I say: it's not a speedster.
Now I'm thinking about mixing Python with C++, using Boost.Python and the Python C API.
The question I've got now is: which is the better way to get some speed?
Shall I extend Python with some C++ code, or shall I embed Python code into a C++ program?
I will surely get some speedup using C++ code for the calculating parts, and I think that calling the Python interpreter inside a C application will not be better, because the Python interpreter would be running the whole time. And I would have to wrap Python libraries like mysqldb or urllib3 to have a nice way to work with them inside C++.
So which would you suggest is the better way to go: extending or embedding?
(I love the Python language, but I'm also familiar with C++ and respect it for its speed.)
Update:
So I switched some parts from Python to C++ and used (real) multithreading in my C++ modules, and my program now needs 30 minutes instead of 7 hours :))))
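For anyone curious what "real" multithreading in such a C++ module boils down to, here is a minimal sketch; the summation stands in for the actual calculation, and you would build with -pthread.

    // threaded_calc.cc - sketch of splitting a calculation across hardware threads
    #include <cstdio>
    #include <numeric>
    #include <thread>
    #include <vector>

    int main() {
        const long N = 100000000;
        unsigned workers = std::thread::hardware_concurrency();
        if (workers == 0) workers = 4;  // fall back if the count is unknown

        std::vector<double> partial(workers, 0.0);
        std::vector<std::thread> pool;

        for (unsigned w = 0; w < workers; ++w) {
            pool.emplace_back([&partial, w, workers, N] {
                // Each thread sums its own strided slice; no shared mutable state.
                double acc = 0.0;
                for (long i = w; i < N; i += workers)
                    acc += 1.0 / (i + 1);
                partial[w] = acc;
            });
        }
        for (std::thread& t : pool) t.join();

        double total = std::accumulate(partial.begin(), partial.end(), 0.0);
        std::printf("sum = %f\n", total);
        return 0;
    }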
In principle, I agree with the first two answers. Anything coming from disk or across a network connection is likely to be a bigger bottleneck than the application.
All the research of the last 50 years indicates that people often have inaccurate intuitions about system performance. So IMHO you really need to gather some evidence by measuring what is actually happening, then choose a solution based on that evidence.
To try to confirm what is causing the slow performance, measure the system and user time of your application (e.g. time python prog.py), and measure the load on the machine.
If the application is maxing out the CPU, and most of that time is spent in the application itself (user time), then there may be a case for using a more efficient technology for the application.
But if the CPU is not maxed out, or the application spends most of its time in the system (system time) rather than in the application (user time), then it is unlikely that changing the application's programming technology will help significantly. (This is an example of Amdahl's law: http://en.wikipedia.org/wiki/Amdahl%27s_law)
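To make Amdahl's law concrete (the figures below are illustrative, not measurements from this question): if a fraction p of the runtime is in the part you speed up by a factor s, the overall speedup is

    S = 1 / ((1 - p) + p / s)

So if p = 0.4 of the time is spent in Python computation and a C++ rewrite makes that part s = 10 times faster, S = 1 / (0.6 + 0.04) ≈ 1.56: barely a 1.6x win overall, no matter how fast the C++ is.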
You may also need to measure the performance of your database server, and maybe your network connection, to identify the source of the bottleneck, but start with the easiest part.
In my opinion, in your case it makes no sense to embed Python in C++, while the reverse could be beneficial.
In most programs, the performance problems are very localized, which means that you should rewrite the problematic code in C++ only where it makes sense, leaving Python for the rest.
This gives you the best of both worlds: the speed of C++ where you need it, and the ease of use and flexibility of Python everywhere else. What is also great is that you can do this step by step, replacing the slow code paths one by one, always leaving the whole application in a usable (and testable!) state.
The reverse wouldn't make sense: you'd have to rewrite almost all the code, sacrificing the flexibility of the Python structure.
Still, as always when talking about performance, measure before acting: if your bottleneck is not CPU- or memory-bound, switching to C++ is unlikely to produce much advantage.
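As an illustration of the extending route, a Boost.Python module can be this small. This is a sketch; the module name fastcalc and the function are made up for the example, and the build flags vary by system.

    // fastcalc.cc - minimal Boost.Python extension module sketch
    // Build roughly: g++ -shared -fPIC fastcalc.cc -o fastcalc.so -lboost_python
    // (plus the Python include/library flags for your system)
    #include <boost/python.hpp>

    // The hot loop, rewritten in C++; everything else stays in Python.
    double mean_square(const boost::python::list& xs) {
        const long n = boost::python::len(xs);
        double acc = 0.0;
        for (long i = 0; i < n; ++i) {
            const double v = boost::python::extract<double>(xs[i]);
            acc += v * v;
        }
        return n > 0 ? acc / n : 0.0;
    }

    BOOST_PYTHON_MODULE(fastcalc) {
        boost::python::def("mean_square", mean_square);
    }

After building fastcalc.so, Python code simply does import fastcalc and calls fastcalc.mean_square(values) like any other function.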

What's a good and safe language for drawing intensive particle systems over a long period of time?

Another one of my rather ambiguous questions today, sorry.
Currently I have written some half-decent software that has a 'roll your own' RESTful client, which pulls data from Twitter. This data is then visualized with a number of particle systems using openFrameworks (a C++ framework).
My plan was to run the software indefinitely on my VPS and build some kind of front-end GUI allowing users to explore the pretty particles and so on. Between the JSON library I am using, C/C++, openFrameworks, and freaking Xcode 4, I have produced way too many SIGABRT and GDB errors to care for. I have gone to the ends of the virtual world to fix them, and rewrote everything over and over. I even managed to SIGABRT the openFrameworks draw-circle method, HAH!
(TL;DR starts here) OK, so anyway, I am starting from scratch, looking for a powerful language that can crunch maths, blast through a good set of particles, and run well over the longest periods of time. Right now I am thinking about Haskell. Any ideas?
Thanks in advance all!
Haskell's (or more specifically GHC's) number crunching speed is approaching that of C++ but it's a little way behind. However, it's certainly not terrible, and Haskell's advantages in parallelism may become important. That is, if you write it in straight Haskell first, there's a good chance that it'll be easy to refactor it to run in parallel now or in the future. That isn't so true of C++.
The 'vector' package (on Hackage) would be a good choice for arrays suitable for number crunching. It supports mutable arrays, in case that sort of approach is needed. However, if you're prepared to go closer to the bleeding edge and your algorithm can be parallelized, you might want to look at the 'repa' package, and for extreme performance on a GPU, take a look at 'Accelerate' (which works, but is still categorized as experimental).
The crashes you mention sound like they could be an indication of a bit of complexity in your problem. Where Haskell does well is in managing the complexity of... well, anything. So, if the problem is complex, then Haskell will serve you very well.
The foreign function interface in Haskell is well designed, though you will need to write C glue between Haskell and C++. So, that's another option for your number crunching.
For the web interface, take a look at 'yesod', which is under very active development and advertises RESTful support.
AFAIK, number-crunching speed is not Haskell's strongest point: it's a highly abstract language, far from the 'metal'. Its strength in a numeric-processing context lies in the "mathiness" of its semantics; Haskell code often reads much like a mathematical proof, and many of its concepts are borrowed from various fields of mathematics.
For plain old number crunching, C++ is probably still your best choice, as it allows you to stay close to the hardware and optimize tight loops at the machine level, while offering higher-level programming constructs to manage complexity.
OTOH, if you have a library in place for the heavy lifting and you merely need to write the glue to make the various parts work together, then go with whatever you're most comfortable with: Python, C#, Java, Haskell, C++, ... As long as they have bindings for all your libraries, you're good. If you don't have a library, then you might also consider writing the performance-critical parts in C and pulling them into your favorite high-level language; this is trivial in C++, slightly harder in Python or Haskell, and pretty damn inconvenient in Java.

C and Erlang

There are certain common library functions in Erlang that are much slower than their C equivalents.
Is it possible to have C code do the binary parsing and number crunching, and have Erlang spawn processes to run the C code?
Of course C would be faster in the extreme case, after optimization, if by faster you mean faster to run.
Erlang would be far faster to write. Depending on your speed requirements, Erlang is probably "fast enough", and it will save you days of searching for bugs in C.
C code will only be faster after optimization. If you spend the same amount of time on C and on Erlang, you will come out with about the same speed (note that I count time spent debugging and fixing errors in this estimate, which will be a lot less in Erlang).
So:
faster writing = Erlang
faster running (after optimization) = C
faster running without optimization = either of the two
Take your pick.
There are two rough rules of thumb, based on the Erlang FAQ:
Code which involves mainly number crunching and data processing will run about 10 times slower than an equivalent C program. This includes almost all "micro benchmarks".
Large systems which spend most of their time communicating with other systems, recovering from faults, and making complex decisions run at least as fast as equivalent C programs.
However, there are some official solutions to Erlang's lack of number-crunching performance:
Native Implemented Function (NIF):
Implementing a function in C and loading its object code into the Erlang virtual machine, so that it behaves like a standard Erlang function but with native performance (a minimal sketch follows this list).
Examples: Evedis, Bitcask, ElevelDB
Port:
A byte-oriented interface from the Erlang virtual machine to an external OS process through standard input and output file descriptors. From Erlang's point of view, communication with the port happens through message passing.
Port Driver:
A dynamically linked C object file which is loaded into the Erlang virtual machine and acts like a port. From Erlang's point of view, communication with the port driver happens through message passing.
Examples: OTP_Inet, ENanomsg, P1_TLS
C Node:
You can simply promote your Erlang runtime to a distributed node. There is a specification for implementing an Erlang node in C, so your C program can communicate with Erlang nodes through a single interface.
All of the aforementioned solutions have their own pros and cons and need to be used with extreme care.
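As an example of the first option, a minimal NIF can be this small. This is an illustrative sketch; the module and function names are made up.

    // math_nif.cc - minimal NIF sketch
    // Build into a shared library; the include path to erl_nif.h varies by install.
    #include <erl_nif.h>

    // An Erlang-callable function: add two integers natively.
    static ERL_NIF_TERM add_nif(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[]) {
        int a, b;
        if (argc != 2 ||
            !enif_get_int(env, argv[0], &a) ||
            !enif_get_int(env, argv[1], &b))
            return enif_make_badarg(env);
        return enif_make_int(env, a + b);
    }

    static ErlNifFunc nif_funcs[] = {
        {"add", 2, add_nif}
    };

    // "math_nif" must match the Erlang module that loads this library.
    ERL_NIF_INIT(math_nif, nif_funcs, NULL, NULL, NULL, NULL)

On the Erlang side, a module named math_nif would call erlang:load_nif/2 to load the shared object and then call add(A, B) as an ordinary function.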
First of all, write the whole logic of the system in Erlang, then implement the binary handling in C. Using NIFs (a kind of interface to C) is pretty straightforward and transparent to the rest of the system. Here is another thread about talking to C: Run C Code Block in Erlang.
Before hacking in C, make sure you have benchmarked the current implementation. It may well satisfy your needs, especially with the latest Erlang/OTP release (R14), which introduces great enhancements to binary handling.
Easy threading is not what's interesting about Erlang. Easy threading plus message passing and the OTP framework is what's awesome about Erlang. If you need number crunching, use something like OCaml, Python, or Haskell. Erlang is not all that good at number crunching.
Parsing binaries, though, is one of the things Erlang is best at; probably the best language for it. Joe's book, Programming Erlang, covers everything really well and is not so expensive used. It also talks about integrating C code and gives an example. The source is available from Pragmatic Programmers without needing to buy the book; you can grep for #include or something.
If you are really looking for speed, you should try the OpenMP or MPI parallel programming frameworks for C and C++. I recommend taking a look at Patterns for Parallel Programming (link to amazon.com) for the details of OpenMP and MPI programming patterns.
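As a taste of OpenMP (an illustrative sketch; the harmonic-sum loop stands in for real number crunching):

    // omp_sum.cc - sketch of an OpenMP parallel reduction
    // Build: g++ -fopenmp omp_sum.cc
    #include <cstdio>
    #include <omp.h>

    int main() {
        const long N = 100000000;
        double sum = 0.0;

        // The pragma splits the loop across threads and merges the partial sums.
        #pragma omp parallel for reduction(+:sum)
        for (long i = 1; i <= N; ++i)
            sum += 1.0 / i;

        std::printf("harmonic(%ld) = %f (max threads: %d)\n", N, sum, omp_get_max_threads());
        return 0;
    }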
The section on erl_nif in the Erlang ERTS reference manual will be helpful.
If you like Erlang but want C speed, why not go for JoCaml? It is an extension of OCaml (which is similar to Erlang but near C in terms of speed) designed for the multicore revolution going on at the moment. I love it (and I know more than 10 programming languages).
I used C for over 20 years.
I have been using Erlang almost exclusively in recent years.
C is faster to run, for obvious reasons.
However, Erlang is fast enough for most things when you do it right.
Also, writing Erlang is much faster and more fun.
For the pieces of an algorithm where run-time speed is critical, those can surely be written in C, which is the way of the Erlang BIFs.
Yes,
But there's more than one way to do this, loosely speaking, some or all of which are already listed.
We should ask:
Are those procedures really equivalent (how do the Erlang and C versions differ)?
Is there a better way to write the Erlang for this task (other procedures/libraries or data types)?
It may be helpful to consider this post: Scaling & Speed with Erlang.
To address the question: yes, it is possible to have Erlang call a C function to handle a specific task. The most common way is to use a NIF: http://erlang.org/doc/tutorial/nif.html. Before Erlang version 20 or so, NIFs were recommended only for short-running functions (a few ms), because they were blocking, which didn't play well with Erlang's preemptive scheduler. Now, with dirty schedulers, it is more flexible; you can read up on that.
Just to note: C may be faster at parsing binaries, though you should run tests; Erlang is by far faster for writing the code. Erlang does a great job of parsing binaries via pattern matching.

Simplifying algorithm testing for researchers

I work in a group that does a large mix of research development and full shipping code.
Half the time I develop processes that run on our real-time system (somewhere between soft real-time and hard real-time; medium real-time?).
The other half, I write or optimize processes for our researchers, who don't necessarily care about the code at all.
Currently I'm working on a process which I have to fork into two different branches:
a research version for one group, and a production version that will occasionally need to be merged with the research code to get the latest and greatest into production.
To test these processes, you need to set up a semi-complicated testing environment that sends the data we analyze to the process at the correct time (it's a real-time system).
I was thinking about how I could make the:
Idea
Implement
Test
GOTO #1
cycle as easy, fast, and pain-free as possible for my colleagues.
One idea I had was to embed a scripting language inside these long-running processes.
So as the process runs, they could tweak the actual algorithm and its parameters.
Off the bat I looked at embedding:
Lua (useful guide)
Python (useful guide)
These both seem doable and might actually fully solve the given problem.
Any other bright ideas out there?
Recompiling after a one- or two-line change, redeploying to the test environment, and restarting just sucks.
The system is fairly complicated, and hopefully I explained it half decently.
If you can change enough of the program through a script to be useful without a full recompile, maybe you should think about breaking the system up into smaller parts. You could have a "server" that handles data loading etc., and then client code that does the actual processing. Each time the system loads new data, it could check whether the client code has been recompiled and use the new version if so.
I think there would be a couple of advantages here, the largest being that the whole system would be much less complex. You'd be working in one language instead of two. There is less chance that people mess things up when switching from Python or Lua mode to C++ mode in their heads. By embedding another language in the system you also run the risk of becoming dependent on it: if you use Python or Lua to tweak the program, those languages either become a dependency at deployment time, or you need to port things back to C++. And if you choose to port things to C++, there's another chance for bugs to crop up during the switch.
Embedding Lua is much easier than embedding Python.
Lua was designed from the start to be embedded; Python's embeddability was grafted on after the fact.
Lua is about 20x smaller and simpler than Python.
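To give an idea of how little host code embedding Lua takes, here is a minimal sketch; the threshold and gain names are made-up stand-ins for real algorithm parameters.

    // host.cc - minimal sketch of embedding Lua in a C++ host process
    // Build (one possibility): g++ host.cc -llua
    #include <lua.hpp>
    #include <cstdio>

    int main() {
        lua_State* L = luaL_newstate();
        luaL_openlibs(L);

        // Researchers could edit this chunk (or a script file) without recompiling.
        luaL_dostring(L, "threshold = 0.75\nfunction gain(x) return x * 2.0 end");

        lua_getglobal(L, "threshold");
        double threshold = lua_tonumber(L, -1);
        lua_pop(L, 1);

        // Call the Lua function gain(21).
        lua_getglobal(L, "gain");
        lua_pushnumber(L, 21.0);
        if (lua_pcall(L, 1, 1, 0) != 0) {  // 1 argument, 1 result, no error handler
            std::fprintf(stderr, "lua error: %s\n", lua_tostring(L, -1));
            lua_close(L);
            return 1;
        }
        double result = lua_tonumber(L, -1);
        lua_pop(L, 1);

        std::printf("threshold = %f, gain(21) = %f\n", threshold, result);
        lua_close(L);
        return 0;
    }

The long-running process would reload and re-run the script between iterations, so the algorithm and its parameters can be tweaked while the process keeps running.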
You don't say much about your build process, but building and testing can be simplified significantly by using a really powerful version of make. I use Andrew Hume's mk, but you would probably be even better off investing the time to master Glenn Fowler's nmake, which can add dependencies on the fly and eliminate the need for a separate configuration step. I don't ordinarily recommend nmake because it is somewhat complicated, but it is very clear that Fowler and his group have built solutions to lots of scaling and portability problems into nmake. For your particular situation, it may be worth the effort to master it.
Not sure I understand your system, but if the build and deployment are too complicated, maybe you could automate them. If deployment were completely automatic, would that solve the problem?
I also don't understand how a scripting language would solve the problem: if you change your algorithm, you still need to restart the calculation from the beginning, don't you?
It kind of sounds like what you need is CruiseControl or something similar; every time you touch the baseline code, it rebuilds and reruns the tests.