Flux vs Redux: pros and cons highlights

I'm new to the Flux/Redux data flow, and I'm trying to understand the main differences between them.
Can you please highlight the differences, such as the pros and cons of each one?
Thanks

The shortest, simplest answer is that Flux has multiple stores and a central dispatcher to manage communication with those stores.
Redux has only one store and encourages using simple reducer functions to interact with the various properties on the store.
This strategy makes it easy to reason about the overall state of your application (it's just one massive POJO) and allows for awesome devtools like time-travel debugging and hot-swapping state.
There's a more detailed explanation (that does a far better job of explaining) in the official docs.
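Redux itself is a JavaScript library, so the snippet below is only a rough, hypothetical illustration of the pattern in Java (the names AppState, Action, and Store are made up), not Redux's actual API: the entire application state is one immutable object, and every change goes through a pure reducer function.

```java
// A minimal sketch of the Redux idea, written in Java for illustration only;
// Redux itself is JavaScript and these names are hypothetical.
final class AppState {
    final int counter;                      // the whole app state is one immutable object
    AppState(int counter) { this.counter = counter; }
}

enum Action { INCREMENT, DECREMENT }

final class Store {
    private AppState state = new AppState(0);

    // A reducer: a pure function (previous state, action) -> new state.
    static AppState reduce(AppState state, Action action) {
        switch (action) {
            case INCREMENT: return new AppState(state.counter + 1);
            case DECREMENT: return new AppState(state.counter - 1);
            default:        return state;   // unknown actions leave the state untouched
        }
    }

    void dispatch(Action action) { state = reduce(state, action); }
    AppState getState() { return state; }
}

public class ReduxSketch {
    public static void main(String[] args) {
        Store store = new Store();
        store.dispatch(Action.INCREMENT);
        store.dispatch(Action.INCREMENT);
        System.out.println(store.getState().counter); // prints 2
    }
}
```

Because every state change flows through one pure function over a single state object, the store can snapshot, replay, or hot-swap state, which is what enables the devtools mentioned above.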

For a very summarized comparison of the similarities and differences between Flux and Redux, see the comparison done by Daniel.

Related

From Design to Development: Is there a common EmberJS workflow?

This question is subjective by nature, but I am curious about a specific thing, so hopefully there is a decent answer.
I tend to be a little old fashioned and like to create all my pages static and get the design just the way I like it (or at least very close) before I start breaking it down into handlebars and components and templates. This is mostly because the Ember "Getting Started" Guide taught me that process.
Is this the common practice, generally?
I am the front-end Designer and Developer for my company, and basically I have two separate workflows, one for Design, and one for Development/Testing.
Is there a way to merge the two and get a single streamlined workflow (perhaps a JS task that can split up static pages into templates by using some special markup??)
Design tends to be a little easier when you are working with static pages.
Development (especially when using EAK or Ember-CLI) expects everything to be modular and dynamic.
Is there any clear answer to this question?
I posted a similar question on the Ember Forums, but have not gotten many views, so I figured I would try here.
Short answer: when you're building an app with Ember, you want to build around the URL structure, due to the way the URL and Router interact. Here's an awesome talk by Tom Dale about the URL. This makes it a very outside-in approach, since each layer deeper in the URL is content that's generally embedded one level deeper in the page.
http://vimeo.com/68390483

Starting with Data Mining

I have started learning data mining and wish to create a small project in C++/Java that allows me to utilize a database, say from Twitter, and then publish a particular set of results (e.g. all the news items on a feed). I want to know how to go about it. Where should I start?
This is a really broad question, so it's hard to answer. Here are some things to consider:
Where are you going to get the data? You mention Twitter, but you'll still need to collect the data in some way. There are probably libraries out there for listening to Twitter streams, or you could probably buy the data if someone is selling it.
Where are you going to store the data? Depending on how much you'll have and what you plan to do with it, a traditional relational database may or may not be the best fit. You may be better off with something that supports running MapReduce jobs out of the box.
Based on the answers to those questions, the choice of programming languages and libraries will be easier to make.
If you're really set on Java, then I think a Hadoop cluster is probably what you want to start out with. It supports writing mapreduce jobs in Java, and works as an effective platform for other systems such as HBase, a column-oriented datastore.
If your data are going to be fairly regular (that is, not much variation in structure from one record to the next), maybe Hive would be a better fit. With Hive, you can write SQL-like queries, given only data files as input. I've never used Mahout, but I understand that its machine learning capabilities are suited for data mining tasks.
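If you do go the Hadoop route, the canonical first MapReduce exercise is a word count job. The sketch below uses Hadoop's Java MapReduce API; the class names (WordCount, TokenMapper, SumReducer) are just illustrative, and the input/output paths come from the command line.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// The classic "word count" job: count how often each word appears in the input.
public class WordCount {

    public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);          // emit (word, 1) for each token
                }
            }
        }
    }

    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            context.write(key, new IntWritable(sum));  // emit (word, total count)
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenMapper.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

The same map/shuffle/reduce shape scales from this toy example to, say, counting hashtags or mentions across a large collection of tweets.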
These are just some ideas that come to mind. There are lots of options out there and choosing between them has as much to do with the particular problem you're trying to solve and your own personal tastes as anything else.
If you just want to start learning about data mining, there are two books that I particularly enjoy:
Pattern Recognition and Machine Learning. Christopher M. Bishop. Springer.
And this one, which is free:
http://infolab.stanford.edu/~ullman/mmds.html
Good references for you are:
1) AI course taught by people who actually know the subject
2) Weka website
3) Machine Learning datasets
4) Even more datasets
5) Framework for supporting the mining of larger datasets
The first link is a good introduction to AI taught by Peter Norvig and Sebastian Thrun, Google's Research Director, and Stanley's creator (the autonomous car), respectively.
The second link gets you to the Weka website. Download the software - which is pretty intuitive - and get the book. Make sure you understand all the concepts: what's data mining, what's machine learning, what are the most common tasks, and what are the rationales behind them. Play a lot with the examples - the software package bundles some datasets - until you understand what generated the results.
Next, go to real datasets and play with them. When tackling massive datasets, you may face several performance issues with Weka - which is more of a learning tool, as far as my experience can tell. Thus I recommend you take a look at the fifth link, which will get you to the Apache Mahout website.
It's far from being a simple topic; however, it's quite interesting.
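Regarding Weka: once you have played with the GUI, you can also drive it from Java code. Here is a rough sketch (it assumes Weka is on the classpath and uses the bundled iris.arff sample dataset) that trains and cross-validates a J48 decision tree:

```java
import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class WekaSketch {
    public static void main(String[] args) throws Exception {
        // Load one of the sample datasets bundled with Weka (data/iris.arff).
        Instances data = new DataSource("data/iris.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);   // last attribute is the class

        // Train a J48 decision tree and evaluate it with 10-fold cross-validation.
        J48 tree = new J48();
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(tree, data, 10, new java.util.Random(1));
        System.out.println(eval.toSummaryString());
    }
}
```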
I can tell you how I did it.
1) I got the data using twitter4j.
2) I analyzed the data using JUNG.
You have to define a class representing edges and a class representing vertices.
These classes will contain the attributes of the edges and vertices.
3) Then, there is a simple function to add an edge, g.addEdge(edgeFromV1ToV2, v1, v2), or to add a vertex, g.addVertex(v).
The class that defines edges or vertices is easy to create. As an example:
```java
public class MyEdge {
    int id;   // put the edge's attributes (weight, timestamp, ...) here
}
```
The same is done for vertices.
Today I would do it with R, but if you don't want to learn a new programming language, just import JUNG, which is a Java library (a minimal end-to-end sketch follows below).
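Putting the three steps together, a minimal sketch of building a graph with JUNG might look like the following; the User and Mention classes and the sample data are made up for illustration, and in practice the vertices and edges would be filled from twitter4j results:

```java
import edu.uci.ics.jung.graph.DirectedSparseGraph;
import edu.uci.ics.jung.graph.Graph;

// Hypothetical vertex and edge classes; in practice you'd populate them from twitter4j data.
class User {
    final String screenName;
    User(String screenName) { this.screenName = screenName; }
}

class Mention {
    final int id;                       // edge attributes, as in MyEdge above
    Mention(int id) { this.id = id; }
}

public class MentionGraph {
    public static void main(String[] args) {
        Graph<User, Mention> g = new DirectedSparseGraph<>();

        User alice = new User("alice");
        User bob = new User("bob");
        g.addVertex(alice);
        g.addVertex(bob);

        // JUNG's signature is addEdge(edge, source, target)
        g.addEdge(new Mention(1), alice, bob);

        System.out.println(g.getVertexCount() + " vertices, " + g.getEdgeCount() + " edges");
    }
}
```

From there, JUNG's algorithms (scoring, clustering, layouts) operate on the same graph object.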
Data mining is a broad field with many different techniques: classification, clustering, association and pattern mining, outlier detection, etc.
You should first decide what you want to do and then decide which algorithm you need.
If you are new to data mining, I would recommend reading some books like Introduction to Data Mining by Tan, Steinbach and Kumar.
I would suggest using Python or R for the data mining process. Working with Java or C is a bit more difficult in the sense that you need to do a lot more coding.

When developing software, which should come first: UI design or database design?

I tried to design the UI with some UI mocking software, but I found it hard to settle all the details of the design, since the database hadn't been designed yet.
But if I design the database first, the same problem occurs: I don't have the UI yet, so how can I create a good UI?
UI first.
Mock up an elegant and easy-to-use user interface (and workflow) thinking from the point of view of the user, and only then think about the underlying database / data structures you'll need to bring that UI to life.
If you can't design your UI because you haven't yet designed your database, you're doing it wrong IMHO. How many annoying pieces of software have you used that suffered from letting the database design drive the UI design?
Edit: As others have pointed out, you need to start with use cases / user stories. The UI design and database design, whichever order you do them, should only happen after you know what your software is trying to do, and for whom.
Edit by Bryan Oakley: (cartoon image from gapingvoid.com)
Put the user in the place they deserve: design the UI first.
The database is only a consequence of user needs.
Use cases first, neither UI nor database.
If you're trying to solve a problem in an object-oriented language, it's recommended that you start thinking about the objects involved. Don't worry about the database or the UI until you've got a solid domain model nailed down that addresses all the use cases.
You don't worry about the database or the UI at first. You can serialize objects to the file system if you need persistence and don't have a database. Being able to drive your app with a command line UI is a good exercise for guaranteeing that you have a good MVC separation.
Start with the objects.
UPDATE:
The one advantage that this approach has is that it doesn't prejudice the UI with a particular database design or vice versa. The objects are agnostic about the other two layers. You aren't required to have a UI or relational database at all. You're just getting the objects right. Once you have that, you can create any UI or persistence scheme you like, confident that the domain model can handle the problem you've been asked to solve.
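As a small illustration of the persistence point, here is a rough sketch (the Order class is hypothetical) of a plain domain object serialized straight to the file system, with no database and only a command-line "UI":

```java
import java.io.*;

// A hypothetical domain object: it knows nothing about UIs or databases.
class Order implements Serializable {
    private static final long serialVersionUID = 1L;
    private final String customer;
    private final double total;
    Order(String customer, double total) { this.customer = customer; this.total = total; }
    @Override public String toString() { return customer + ": " + total; }
}

public class FilePersistenceDemo {
    public static void main(String[] args) throws Exception {
        Order order = new Order("ACME", 99.50);

        // Persist to the file system instead of a database.
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("order.ser"))) {
            out.writeObject(order);
        }

        // Read it back and print it: a minimal command-line "UI".
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("order.ser"))) {
            System.out.println(in.readObject());
        }
    }
}
```

The point is that the Order class knows nothing about persistence or presentation; either can be swapped later without touching the domain model.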
All the above answers point you in the right direction. That said, I would follow the SDLC thoroughly. It helps you understand the need for a solution to the problem at hand. Then comes requirements gathering, followed by the design of either the UI or the underlying structure that supports the UI. It's a process, but you would benefit in the end.
Your question is very subjective.
My opinion (and it is just that) is that database and underlying structure should come first. It can often help to put down the keyboard and mouse and compile some notes on paper.
Lay out goals like what you want your application to do, list the features you require and then start thinking about how you'll build it.
This method works for me in application design.
Usually you need to manipulate some data in the solutions you develop, so you should consider how this data is organised first; stabilizing this layer is fundamental at the beginning. I agree with duffymo's comment about designing the business objects first if you are in an OO world. Mapping these objects to the DB will be a part of your work. Then you add business functionality and work on the presentation layer. Of course you will have to do some refactoring from time to time, but usually the refactoring impacts the business and presentation layers more than the database.
Read this, it is a good technique:
DDD - Domain-Driven Design
Would you build a house without a foundation? Database design isn't the fun part but it is the foundation of most business apps and if you get it wrong it becomes the most costly to fix and the most costly to maintain.
That said, I note that there is no reason you can't work on both together as they intertwine. But before you can do either, you need to understand the requirements and the business you are writing the app for.

Is there a tool to model/simulate software concurrency?

Is there a good tool out there that can model an application's concurrency/locking scheme in a graphical way and simulate some of its aspects?
I know that Petri nets can be used for that, more or less, but I don't know a good GUI tool that can both design and simulate them.
Is UML in any way usable for such purposes?
Any good links are very appreciated.
UML Activity Diagrams can be expressed as Petri nets (e.g. see this paper). Unfortunately I don't know any good industry-oriented tools for simulating Petri nets or Activity Diagrams (but there are many academic projects which you can easily find).
Are you sure that you want to simulate your model (by simulation I mean that you actually want to sit and watch how your Petri net is being executed)? Usually this type of analysis is applicable to small and simple algorithms. In a real-world situation you would probably rather do model checking of your algorithm than simulation. I would recommend you check out SPIN (used by many companies, e.g. Siemens). I also have positive experience with Alloy and Prism. But if your focus is on verifying parallel algorithms, I would suggest you consider SPIN first.
Edit: I checked some tools for simulation and I can advise looking at
1) http://sourceforge.net/projects/visual-petri/
2) http://www.renew.de/
3) http://www.winpesim.de/index.html
SPIN is a popular tool for verification of distributed systems, but it is command-line only, I think. However, on the SPIN webpage there is a link to a closely related GUI tool called GOAL.
I doubt this is what you are looking for, but I'll throw in my two cents:
At my university, in our class on concurrent software systems, we use a tool called Labelled Transition System Analyser (LTSA). It's actually a language that you can use to model the behavior of a system.
The "code" is turned into a state diagram and a transition table.
Here is an interactive Java applet which can design and run a Petri net.
It's been a long time since I've looked at it, but it sounds like Ptolemy would be a good fit.
You can check Petri Net Sim to simulate common/timed/colored Petri nets; it comes with a nice GUI that displays Petri net execution in real time.
Try using the concurrency tool LTSA (Labelled Transition System Analyser), a Java program, to simulate programs. You can download it from:
http://www.doc.ic.ac.uk/ltsa/
But you have to be patient while using it; it can take a couple of hours to learn how to use it. It probably works best for modeling Java programs.
And it's always good to use UML models of course :)

What are some examples of how your company uses a wiki for development?

Do you use a wiki in your company? Who uses it, and what for? Do you share information between projects / teams / departments or not?
We use ours to store
Coding Style docs
Setup and Deployment procedures for web servers and sites
Network diagrams (what are all the servers in Dev, Staging, QA and Production called etc.)
Project docs (pdfs, visios, excel, docs, etc.) are stored in SVN. For the non-techies we have links to those docs in the wiki that point to an up-to-date share on my box. (tip: some wikis provide source control integration but ours doesn't)
Installation and Setup procedures for development tools
Howto's on things like using our bug tracking system, our unit testing philosophy
When doing research on a topic I often capture the important information in a wiki page for others to learn from
I've seen them used to keep seating charts in medium to large size organizations for the new people
At my previous company all of the emergency contacts and procedures for handling a critical outage were available on the front page of the wiki.
The best part about a wiki is that it's searchable. Some wikis support searching inside uploaded or linked docs as well.
If you set up a wiki and encourage or even require people to use it, the amount of information that will accumulate can be amazing. It's definitely worth the effort, especially if you have someone in IT with some spare time on their hands to set it up.
Do you use a wiki in your company?
= We use it as a knowledge base. Basically it is a wiki, but with many more functionalities integrated.
Who uses it, and what for?
= Employees. Knowledge sharing, preparation of collaborative documents, etc.
Do you share information between projects / teams / departments or not?
= Depends on the requirements. It is possible to set permissions between users.
We use a wiki for documenting our systems. It's updated gradually as things update and evolve. It should go without saying that there's benefit in that; however, whether you use a wiki or other methods is worth thinking about.
A wiki is great for collaborative editing. The information shouldn't go stale, in theory, because as people use the systems they have the opportunity to keep it up to date.
However, we have found in our organisation that people struggle a little with wiki markup, especially tables. I think a solution that has WYSIWYG editing would be better if you have less technical people editing it. SharePoint springs to mind, but it's expensive.
I use a wiki as my virtual "story wall" for agile development. All of my stories are written and organized in the wiki. While my customers are reasonably local (we can have face-to-face meetings), they aren't co-located. To enable better customer interaction I've resorted to a wiki instead of a wall-based story tracking mechanism. It also works a little better for me due to the fact that I often have multiple, concurrent projects and limited wall space in my cube. In a larger team with more focused projects and more wall area, I'm not sure I'd make the same choice.
My company uses a wiki for project planning but also for storing documentation and ideas.
I have found that a wiki is a great way to link the programmers in the company with the business people.
When someone who is not on the programming team comes up with an idea or finds a bug, it's a lot simpler to let that person document it in the wiki.
I think it's an important aspect for a small company like mine to easily synchronize the business team with the development team. A wiki helps with that, since it gives the feeling of being a part of the development process, instead of having to ask the programmer directly about every little detail.
We have MediaWiki to store technical information that is not ready to be published in other formats: specification drafts, diagrams (via the GraphViz extension), results of short investigations, etc.
I think this question is a wiki too :)