We know that pure functions:
Always return the same result for a given input
Produce no side-effects
This leads us to referential transparency: an expression can be replaced with its value without changing the behaviour of the program.
This tells us that a program can be said to be purely functional if it excludes destructive modifications (updates) of entities in the program's running environment.
When we look at Software Transactional Memory, we see a concurrency control mechanism analogous to database transactions for controlling access to shared memory in concurrent computing. But nothing about that is particularly functional on its own.
My question is: Can we consider Clojure's STM 'functional'?
Clojure STM is intentionally not "pure functional" because it is intended to manage state, and updating state is a side-effect. This reflects Clojure's design philosophy as a language that prefers functional programming by default, but still supplies you with the tools to do useful/stateful things, in a hopefully controlled manner.
Can we consider Clojure's STM 'functional'?
No. Quite the contrary. The STM is designed to be stateful, impure, referentially opaque, however you want to put it. But in a nice way, akin, as you've noted, to database transactions.
Clojure is a layered language. The STM sits on top of the core pure functions and data structures, isolating state change in a single construct - refs - for which it provides a vocabulary of operations.
Clojure is layered in other ways too.
Many control structures (and, when, ...) are layered on a few special forms by means of macros.
Most of the core functions - written in Clojure - are layered on the minority implemented in the JVM (or other host), which is equipped with the clojure.lang package to implement them.
Clojure's STM doesn't have referential transparency, as the results can differ from run to run depending on how operations interleave across multiple threads.
I just need some insight into parallel processing methods in C++. My parallelization work until now has been in C, where the main method is using pthreads.
However, I've also worked with OpenMP and Cilk Plus.
I just want to ask: what is the common way of parallelizing code in C++? Are pthreads considered a good or bad choice? Should I continue using them in C++?
C++ delivers concurrency in its standard library at different layers of abstraction.
The lowest-level approach to threads is the use of std::thread in conjunction with the synchronization classes std::mutex, std::unique_lock, std::condition_variable etc. Furthermore, there is support for atomic operations (compare_exchange_weak/strong, fetch_add and the like), including memory ordering facilities such as fences.
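As a rough illustration of that lowest layer, here is a minimal sketch of handing work from one thread to another with std::thread, std::mutex and std::condition_variable (the queue payload and variable names are invented for the example):

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

int main() {
    std::queue<int> work;               // shared data, guarded by the mutex
    std::mutex m;
    std::condition_variable cv;
    bool done = false;

    std::thread consumer([&] {
        std::unique_lock<std::mutex> lock(m);
        for (;;) {
            cv.wait(lock, [&] { return !work.empty() || done; });
            while (!work.empty()) {
                std::cout << "got " << work.front() << '\n';
                work.pop();
            }
            if (done) return;
        }
    });

    {
        std::lock_guard<std::mutex> lock(m);
        for (int i = 0; i < 5; ++i) work.push(i);  // produce some work
        done = true;
    }
    cv.notify_one();
    consumer.join();
}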
The next higher level of abstraction makes use of std::async, std::future and std::promise. Instead of controlling several threads oneself, one focuses on the execution of tasks, which may run sequentially or in parallel, whichever suits the specific situation best. The implementation may even decide on its own whether to run a task in parallel or not.
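A minimal sketch of that task-based layer, with a hypothetical partial_sum work function, might look like this:

#include <future>
#include <iostream>
#include <numeric>
#include <vector>

using Iter = std::vector<int>::const_iterator;

// A hypothetical chunk of work: sum one half of the data.
long long partial_sum(Iter first, Iter last) {
    return std::accumulate(first, last, 0LL);
}

int main() {
    const std::vector<int> data(1000000, 1);
    const Iter mid = data.begin() + data.size() / 2;

    // The default launch policy lets the implementation decide
    // whether the two tasks actually run on separate threads.
    auto lower = std::async(partial_sum, data.begin(), mid);
    auto upper = std::async(partial_sum, mid, data.end());

    std::cout << lower.get() + upper.get() << '\n';   // blocks until both tasks are done
}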
Finally, C++17 adds parallel overloads of well-known sequential STL algorithms, which use concurrency internally, such as std::for_each and std::reduce (which behaves similarly to std::accumulate). This, in conjunction with features yet to come, such as coroutines, ranges and execution contexts, can lead to very robust, performant and especially readable code.
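For example, a sketch using the C++17 parallel algorithms might look as follows (this assumes a standard library that actually implements the parallel execution policies; some toolchains additionally require linking against a backend such as TBB):

#include <execution>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<double> v(10000000, 0.5);

    // Same result as std::accumulate, but free to process chunks in parallel.
    double sum = std::reduce(std::execution::par, v.begin(), v.end(), 0.0);

    std::cout << sum << '\n';
}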
Summarizing, there are several portable tools for parallel programming in C++. As with everything in programming, one should use the most abstract tool available in order to avoid dealing with details not relevant to the actual problem. Therefore, unless there is a good reason, I would not use pthreads in C++ any longer.
As a first choice, unless your needs are more complex, use std::thread, std::mutex, std::condition_variable etc.
See the documentation here:
http://en.cppreference.com/w/cpp/header/thread
http://en.cppreference.com/w/cpp/header/mutex
http://en.cppreference.com/w/cpp/header/future
http://en.cppreference.com/w/cpp/header/condition_variable
The standard thread tools are not fantastic or bleeding edge, but they are good enough for most tasks - certainly as a replacement for pthreads.
The advantage of the std:: constructs is that visibility of changes to shared data is guaranteed to be correct, even when the optimiser reorders loads and stores under the as-if rule.
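For instance, here is a minimal sketch of publishing a value to another thread with std::atomic; the release/acquire pair guarantees the consumer sees the data written before the flag:

#include <atomic>
#include <iostream>
#include <thread>

int main() {
    int payload = 0;                       // ordinary data, published via the flag
    std::atomic<bool> ready{false};

    std::thread producer([&] {
        payload = 42;                                   // write the data first
        ready.store(true, std::memory_order_release);   // then publish it
    });

    std::thread consumer([&] {
        while (!ready.load(std::memory_order_acquire)) { }  // spin until published
        std::cout << payload << '\n';                   // guaranteed to print 42
    });

    producer.join();
    consumer.join();
}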
If your needs are more complex, then building your own concurrency tools, or using a library that is built on top of them, is a good idea.
I need to make a state machine for a hardware device. It will have more than 25 states and I am not sure what design to apply.
Because I am using C++11 I thought of using OOP and implementing it with the State Pattern, but I don't think it is appropriate for an embedded system.
Should it be more like a C-style design? I haven't coded one before. Can someone give me some pointers on what design is best suited?
System information:
ARM Cortex-M4
1 MB Flash
196 KB RAM
I also saw this question; the accepted answer points to a table-based design, the other answer to a State Pattern design.
The State Pattern is not very efficient, because every function call goes through at least a pointer and a vtable lookup, but as long as you don't update your state every 2 or 3 clock cycles or call a state machine function inside a time-critical loop, you should be fine. After all, the M4 is quite a powerful microcontroller.
The question is whether you need it or not. In my opinion, the State Pattern only makes sense if the behavior of an object differs significantly in each state (with the need for different internal variables in each state) and if you don't want to carry variable values over across state transitions.
If your state machine is only about taking the transition from A to B when reading event alpha and emitting signal beta in the process, then the classic table- or switch-based approach is much more sensible.
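To make the table-driven variant concrete, here is a minimal sketch; the states, events and actions are invented for the example:

#include <cstdio>

enum State { IDLE, RUNNING, FAULT, NUM_STATES };
enum Event { START, STOP, ERROR, NUM_EVENTS };

using Action = void (*)();

void noop()       {}
void startMotor() { std::puts("motor on");  }
void stopMotor()  { std::puts("motor off"); }

// One row per state, one column per event.
struct Transition { State next; Action action; };

const Transition table[NUM_STATES][NUM_EVENTS] = {
    /* IDLE    */ { {RUNNING, startMotor}, {IDLE,  noop},      {FAULT, stopMotor} },
    /* RUNNING */ { {RUNNING, noop},       {IDLE,  stopMotor}, {FAULT, stopMotor} },
    /* FAULT   */ { {FAULT,   noop},       {FAULT, noop},      {FAULT, noop}      },
};

State step(State current, Event e) {
    const Transition& t = table[current][e];
    t.action();
    return t.next;
}

int main() {
    State s = IDLE;
    s = step(s, START);   // IDLE -> RUNNING, prints "motor on"
    s = step(s, STOP);    // RUNNING -> IDLE, prints "motor off"
}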
EDIT:
I just want to clarify that my answer wasn't meant as a statement against C++ or OOP, which I would definitely use here (mostly out of personal preference). I only wanted to point out that the State Pattern might be overkill, and just because one is using C++ doesn't mean he/she has to use class hierarchies, polymorphism and special design patterns everywhere.
Consider the QP active object framework, a framework for implementing hierarchical state machines in embedded systems. It's described in the book, Practical UML Statecharts in C/C++: Event Driven Programming for Embedded Systems by Miro Samek. Also, Chapter 3 of the book describes more traditional ways of implementing state machines in C and C++.
Nothing wrong with a class. You could define a 'State' enum and pass in, or queue, events, using a switch on State to reach the correct action code/function. I prefer that for simpler hardware-control state engines over the classic 'State-Machine 101' table-driven approach. Table-driven engines are awesomely flexible, but can get a bit convoluted for complex functionality and somewhat more difficult to debug.
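A minimal sketch of that enum-plus-switch style, again with invented states and events, might look like this:

enum class State { Idle, Running, Fault };
enum class Event { Start, Stop, Error };

class MotorController {
public:
    // Feed one event into the engine; each state handles it in the switch.
    void dispatch(Event e) {
        switch (state_) {
        case State::Idle:
            if (e == Event::Start) { startMotor(); state_ = State::Running; }
            break;
        case State::Running:
            if (e == Event::Stop)  { stopMotor();  state_ = State::Idle;  }
            if (e == Event::Error) { stopMotor();  state_ = State::Fault; }
            break;
        case State::Fault:
            break;  // stay latched until a reset mechanism (not shown) clears it
        }
    }

private:
    void startMotor() { /* drive the hardware */ }
    void stopMotor()  { /* drive the hardware */ }

    State state_ = State::Idle;
};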
Should it be more like a C-style design?
Gawd, NO!
Recently I've read that we can write modules in C/C++ and call them from Python. I know that C/C++ is fast and strongly typed and all that, but what advantages do I get if I write some module and then call it from Python? In what case/scenario/context would it be nice to do this?
Thanks in advance.
Performance. That's why NumPy is so fast ("The NumPy array: a structure for efficient numerical computation").
If you need to access a system library that doesn't have a wrapper in python (Example: Shapely wraps around libgeos to do geometrical computations), or if you're writing a wrapper around a system library.
If you have a performance bottleneck in a function that needs to be made a lot faster (and can benefit from using C). Like Charlie said, profiling is essential to find out whether you want to do this or not.
Profile your application. If it really is spending time in a couple of places that you can recode in C consider doing so. Don't do this unless profiling tells you you really need to.
Another reason is there might be a C/C++ library with functionality not available in python. You might write a python extension in C/C++ so that you can access/use that C/C++ library.
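To illustrate what such an extension looks like, here is a minimal sketch of a CPython extension module written in C++ against the raw Python C API; the module name fastmath and the function dot are invented, and in practice tools like pybind11, Cython or SWIG generate most of this glue for you:

// fastmath.cpp -- a minimal sketch of a CPython extension written in C++.
#include <Python.h>

// Dot product of two equal-length Python sequences of numbers.
static PyObject* dot(PyObject*, PyObject* args) {
    PyObject *a, *b;
    if (!PyArg_ParseTuple(args, "OO", &a, &b)) return nullptr;

    const Py_ssize_t n = PySequence_Size(a);
    if (n < 0 || PySequence_Size(b) != n) {
        PyErr_SetString(PyExc_ValueError, "sequences must have equal length");
        return nullptr;
    }

    double acc = 0.0;
    for (Py_ssize_t i = 0; i < n; ++i) {
        PyObject* x = PySequence_GetItem(a, i);
        PyObject* y = PySequence_GetItem(b, i);
        acc += PyFloat_AsDouble(x) * PyFloat_AsDouble(y);
        Py_XDECREF(x);
        Py_XDECREF(y);
    }
    if (PyErr_Occurred()) return nullptr;
    return PyFloat_FromDouble(acc);
}

static PyMethodDef methods[] = {
    {"dot", dot, METH_VARARGS, "Dot product of two numeric sequences."},
    {nullptr, nullptr, 0, nullptr}
};

static PyModuleDef moduledef = {
    PyModuleDef_HEAD_INIT, "fastmath", nullptr, -1, methods,
    nullptr, nullptr, nullptr, nullptr
};

PyMODINIT_FUNC PyInit_fastmath() { return PyModule_Create(&moduledef); }

Built as a shared library (fastmath.so / fastmath.pyd), it can be imported from Python like any other module: import fastmath; fastmath.dot([1.0, 2.0], [3.0, 4.0]).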
The primary advantage I see is speed. That's the price paid for the generality and flexibility of a dynamic language like Python. The execution model of the language doesn't match the execution model of the processor, by a wide margin, so there must be a translation layer someplace at runtime.
There are significant sections of work in many applications that can be encapsulated as strongly-typed functions. FFTs, convolutions, matrix operations and other types of array processing are good examples where a tightly-coded compiled loop can outperform a pure Python solution more than enough to pay for the runtime data copying and "environment switching" overhead.
There is also the case for "C as a portable assembler" to gain access to hardware functions for accelerating these functions. The Python library may have a high-level interface that depends on driver code that's not available widely enough to be part of the Python execution model. Multimedia hardware for video and audio processing, and array processing instructions on the CPU or GPU are examples.
The costs for a custom hybrid solution are in development and maintenance. There is added complexity in design, coding, building, debugging and deployment of the application. You also need expertise on-staff for two different programming languages.
In the company I work at we're dealing with a huge problem: we have a system that consists of several processing units. We made it this way so each module has a specific functionality. The integration between these modules is done using a queue system (which is not fast, but we're working on it) and by replicating messages between the modules. The problem is that this generates a great deal of overhead, as four of these modules require the same kind of data, and maintaining consistency across them is hard.
Another requirement for the system is redundancy, so I was thinking of killing these two problems in one shot.
So I was thinking of using some kind of shared resource. I've looked at shared memory (which is great, but could leave locks in an inconsistent state if a module crashes, leading to inconsistencies in the program), and maybe doing some "raw copy" from the segment to another computer for redundancy.
So I began to search for alternatives and ideas. One I've found is NoSQL, but I don't know whether it would meet the speed I require.
I need something (ideally):
Memory-like fast
That could provide me with redundancy (active-passive is OK, active-active is good)
I also think that shared memory is the way to go. To provide redundancy, let every process copy the data that is going to be changed into local/non-shared memory. Only after the module has done its work should it copy the data back to shared memory. Make sure the 'copy-to-shared-memory' part is as small as possible and that nothing can go wrong while doing the copy. Some tricks you could use are:
Prepare all data in local memory and use one memcpy operation to copy it to shared memory
Use a single value to indicate that the written data is valid. This could be a boolean or something like a version number that indicates the 'version' of the data written in shared memory.
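A minimal sketch of those two tricks combined; it assumes the SharedBlock object lives in memory that is mapped into every process (e.g. via shm_open/mmap, not shown) and that std::atomic is lock-free for these types on the target platform:

#include <atomic>
#include <cstdint>
#include <cstring>

struct SharedBlock {
    std::atomic<bool>          valid{false};   // single flag marking the data as usable
    std::atomic<std::uint32_t> version{0};     // bumped on every successful publish
    char                       payload[4096];  // the actual document/data
};

// Prepare everything in local memory first, so the shared-memory window stays tiny.
void publish(SharedBlock& shared, const char* local, std::size_t n) {
    shared.valid.store(false, std::memory_order_release);    // readers must ignore the block now
    std::memcpy(shared.payload, local, n);                    // one single copy into shared memory
    shared.version.fetch_add(1, std::memory_order_relaxed);   // new version of the data
    shared.valid.store(true, std::memory_order_release);      // publish in one small final step
}

// Reader side: only trust the payload while the flag says it is valid.
bool read(const SharedBlock& shared, char* out, std::size_t n) {
    if (!shared.valid.load(std::memory_order_acquire))
        return false;                           // writer is mid-update (or crashed before publishing)
    std::memcpy(out, shared.payload, n);
    return shared.valid.load(std::memory_order_acquire);  // re-check: still valid after copying?
}

A production version would still need a retry loop or a robust lock around the payload to rule out torn reads; the point of the sketch is only that the window in which shared memory is touched stays tiny and is guarded by a single flag.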
I am looking for an IDL-like (or whatever) translator which turns a DOM- or JSON-like document definition into classes which
are accessible from both C++ and Python, within the same application
expose document properties as ints, floats, strings, binary blobs and compounds: arrays and string dicts (both nestable) (basically the JSON type feature set)
allow changes to be tracked to refresh views of an editing UI
provide a change history to enable undo/redo operations
can be serialized to and from JSON (can also be some kind of binary format)
allow keeping large data chunks on disk, with parts loaded only on demand
provide non-blocking thread-safe read/write access to exchange data with realtime threads
allow multiple editors in different processes (or even on different machines) to view and modify the document
The thing that comes closest so far is the Blender 2.5 DNA/RNA system, but it's not available as a separate library and is badly documented.
I'm most of all trying to make sure that such a lib does not exist yet, so I know my time is not wasted when I start to design and write such a thing. It's supposed to provide a great foundation to write editing UI components.
ICE is the closest product I could think of. I don't know if you can do serialization to disk with ICE, but I can't think of a reason why you couldn't. The problem is that it costs $$$. I haven't personally negotiated a license with them, but ICE is the biggest player I know of in this domain.
Then you have Pyro for Python, which is distributed objects only.
Distributed Objects in Objective-C (N/A for iPhone/iPad Dev, which sucks IMHO)
There are some C++ distributed objects libraries but they're mostly dead and unusable (CORBA comes to mind).
I can tell you that there would be a lot of demand for this type of technology. I've been delving into some serialization and remote object stuff since off-the-shelf solutions can be very expensive.
As for open-source frameworks to help you develop in-house, I recommend boost::asio's strands for async thread-safe read/write and boost::serialization for serialization. I'm not terribly well-read in JSON tech but this looks like an interesting read.
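As a rough sketch of the strand idea, the following serializes all access to a shared document through one strand, so the handlers never run concurrently even if several threads call io.run() (the document variable is just a stand-in for real document state):

#include <boost/asio.hpp>
#include <iostream>
#include <string>

int main() {
    boost::asio::io_context io;
    auto strand = boost::asio::make_strand(io);
    std::string document = "{}";   // stand-in for the shared document state

    // Handlers posted to the same strand execute one after another,
    // so reads and writes to 'document' never overlap.
    boost::asio::post(strand, [&] { document = "{\"title\": \"draft\"}"; });
    boost::asio::post(strand, [&] { std::cout << document << '\n'; });

    io.run();
}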
I wish something freely available already existed for this networking/serialization glue that so many projects could benefit from.
SWIG doesn't meet all your requirements, but it does make interfacing C++ <-> Python a lot easier.