What are the default methods / techniques to make a robust multi-person tracker in crowded areas (single fixed camera)? - computer-vision

The more I read the multiple-object tracking literature, the more it seems impossible to track objects (in my case, persons walking or running around) without eventually losing them.
When I look at SOTA trackers such as ByteTrack, CenterTrack, FairMOT etc., out of the box they all seem to forget previously identified objects or mismatch them, both of which cause serious problems.
If I use person re-identification (ReID), it adds significant computational overhead.
Maybe I am not reading the right papers, or maybe industrial / commercial trackers use different techniques (post-processing?) that make them more robust to forgetting and mismatching tracks?
I would appreciate any tips & pointers.
(my use case is indoors & fixed camera, about 20 people in the room, lots of moving around & occlusion)

I evaluate multi-object SOTA trackers as they come out. None of the top 3 solves this problem completely, especially not when objects are entering and exiting the camera's field of view. I recommend training a custom model for the specific environment where it will be deployed to minimize this issue.
Feel free to check out my GitHub repo where I try all these MOTs out.
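For what it's worth, one piece of post-processing that pipelines commonly rely on is a "lost buffer": instead of deleting a track the moment its detection disappears behind an occlusion, keep the identity around for a number of frames and try to re-associate it before spawning a new one. The sketch below (plain Python, greedy IoU matching, not any particular tracker's API; the thresholds and buffer length are arbitrary placeholders) only illustrates the idea, it is not a production tracker.

    def iou(a, b):
        """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    class BufferedTracker:
        """Greedy IoU association with a 'lost' buffer to survive short occlusions."""

        def __init__(self, iou_thresh=0.3, max_lost=30):
            self.iou_thresh = iou_thresh
            self.max_lost = max_lost   # frames a track may stay unmatched before it is dropped
            self.tracks = {}           # track id -> {"box": last box, "lost": frames unmatched}
            self.next_id = 0

        def update(self, detections):
            """detections: list of (x1, y1, x2, y2) boxes for the current frame."""
            unmatched = list(range(len(detections)))
            # Match existing tracks (active ones first, then recently lost ones).
            for tid, tr in sorted(self.tracks.items(), key=lambda kv: kv[1]["lost"]):
                best, best_iou = None, self.iou_thresh
                for di in unmatched:
                    score = iou(tr["box"], detections[di])
                    if score > best_iou:
                        best, best_iou = di, score
                if best is None:
                    tr["lost"] += 1          # keep the identity alive through the occlusion
                else:
                    tr["box"], tr["lost"] = detections[best], 0
                    unmatched.remove(best)
            # Drop tracks lost for too long, then start new tracks for leftover detections.
            self.tracks = {t: tr for t, tr in self.tracks.items() if tr["lost"] <= self.max_lost}
            for di in unmatched:
                self.tracks[self.next_id] = {"box": detections[di], "lost": 0}
                self.next_id += 1
            return {t: tr["box"] for t, tr in self.tracks.items() if tr["lost"] == 0}

In practice you would replace the greedy matching with Hungarian assignment, add a motion model (e.g. a Kalman filter) to predict where a lost track should reappear, and, if the compute budget allows, run ReID features only during lost-track re-association rather than on every detection.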

Related

OpenEdge 11.3 Application Migration

We have an application with 10 million lines of code in 4GL (Progress) and an OpenEdge database with 300 tables. My boss says we should migrate it to a new programming language and a new database management system.
My questions are:
Do you think we should migrate it? Do you think Progress has a "future"?
If we should migrate it, how? Are there any tools? Or should we start programming from scratch?
Thank you for the help.
Ablo
Unless your boss has access to an unlimited budget, endless user patience and a thirst for frustration and agony, you should not waste any time thinking about rewrites.
http://www.joelonsoftware.com/articles/fog0000000069.html
Yes, Progress has a future. They probably will never be as sexy an option as Microsoft or Oracle or whatever the cool kids are using this week. But they have been around for 30 years and they will still be here when you and your boss retire.
There are those who will rain down scorn on Progress because it isn't X or it doesn't have Y. Maybe they can rewrite your 10 million lines of code next weekend and prove just how right they are. I would not, however, pay them for those efforts until after the user acceptance tests are passed and the implementation is completed.
A couple of years later (the original post is from 2014 and the answers from 2014 to 2015):
The answer which has gotten the most votes basically argues two things:
a. Progress (Openedge) has been around for a long time and is not going anywhere soon
b. Unless your boss has access to an unlimited budget, endless user patience and a thirst for frustration and agony you should not waste any time thinking about rewrites: http://www.joelonsoftware.com/articles/fog0000000069.html
With regard to a:
Yes, the Progress OpenEdge stack is still around. But in my experience it has become even more difficult to find experienced and skilled OpenEdge developers.
There is also an important factor here which I think has grown to much greater importance since this discussion started:
The available open-source stacks for application development have gotten better by orders of magnitude, both in terms of out-of-the-box functionality and quality, and they have decisively moved in the direction of RAD.
I am thinking of Spring Boot, for instance, but not only that; see https://stackshare.io/spring-boot/alternatives. In the Java realm Spring Boot is certainly unique. For the development of rich web UIs, too, many valid options have emerged that address RAD requirements; some "arbitrary" examples are https://vaadin.com for Java and https://www.polymer-project.org for JavaScript, which are interestingly converging with https://vaadin.com/flow.
Many of the available stacks are still evolving strongly, but all of them have making life easier for the developer as a strong driver. In terms of architecture, too, you will find that many of these stacks converge on the same basic building blocks and principles: separation of interface from implementation, REST APIs for remote communication, object-relational mapping technologies, NoSQL / JSON approaches, and so on.
So yes, the open-source stacks are getting very efficient in terms of development. It must also be mentioned that their scope does not stop at development: deployment, operational aspects and naturally also testing are a strong focus, which in the end also makes the developer's life easier.
Generally, one can say that a well-chosen mix and match of open-source stacks has a very strong value proposition, also against the background of RAD requirements, which a proprietary stack will in the long run have difficulty matching - at least from my point of view.
With regard to b:
Interestingly enough, I was recently with a customer who is looking to do exactly this: rewrite their application. The irony: they are migrating from Progress to Progress OpenEdge, with several additional OpenEdge-compliant tools. The reason is twofold: their code has become very difficult to maintain and would need refactoring in order to address requirements coming from web frontends, and, also interesting, they are not finding enough qualified developers.
Basically: code is sound and lives when it can be refactored and when it can evolve with new requirements. Unfortunately there are many examples - at least from my experience - to the contrary.
Additionally, the end of life of software can force a company to "rewrite" at least layers of their software. And this doesn't necessarily have to be bad or impossible. I worked on a project which migrated over 300 Oracle Forms forms to a Java-based UI within less than two years. This migration from a 2-tier to a 3-tier architecture actually positioned the company to evolve their architecture to address the needs of web UIs. So in the end this "rewrite" delivered a strong return of value, also from the business perspective.
So to cut a (very;-)) long story short:
One way or another, it is easy to go wrong with generalizations.
You need not begin programming from scratch. There is help available online and yes, you can contact Progress Technical Support if you run into difficulties. Generally, ABL code from a previous version should work with only minor changes. Here are a few things you need to do in order to migrate your application (a rough command-line sketch follows these notes):
Backup databases
Backup source code and .r files
Truncate DB bi files
Convert your databases
Recompile ABL code and test
Articles on http://knowledgebase.progress.com will help you with this. If you are migrating from an older version like 9, you will find a good set of new features. You can try them, but only after you are done with your conversion.
If you are migrating from 32-bit to 64-bit and you are using 32-bit libraries, you need to replace them with 64-bit versions.
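To make the sequence above concrete, here is a rough, illustrative sketch only (a Python wrapper around the standard Progress command-line utilities; the database name, paths and the exact conversion utility for your version pair are placeholders you should verify against the knowledgebase before running anything):

    import subprocess

    DB = "mydb"                              # hypothetical database name
    BACKUP = "/backups/mydb.bak"             # hypothetical backup target

    def run(cmd):
        """Run one Progress command-line utility and stop on the first error."""
        print(">", " ".join(cmd))
        subprocess.run(cmd, check=True)

    run(["probkup", DB, BACKUP])                   # 1. back up the database first
    run(["proutil", DB, "-C", "truncate", "bi"])   # 2. truncate the before-image file
    # 3. Convert the database. The exact utility depends on your source/target
    #    versions (for example, "proutil mydb -C conv910" converted version 9
    #    databases to 10); check the knowledgebase article for your upgrade path.
    # 4. Recompile the ABL code against the converted database (e.g. run your
    #    compile-all procedure in a batch session) and test.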
The first question I'd come back with is "why?". If the application is not measuring up, that's one thing, and the question needs to be looked at from that perspective.
If the perception is that Progress is somehow a "lesser" application development and operating environment, and the desire is only to move to a different development and operating environment - you'll end up with a lot of resources invested in time, effort, and money - not to mention the opportunity cost - and for what? To run on a different database platform? Will migrating result in a lower TCO? Faster development turn-around time? Quicker time to market? What's the expected advantage in moving from Progress, and how long will it take to recover the migration cost - if ever?
Somewhere out there is a company who had similar thoughts and tried to move off of Progress and the ABL. The effort failed to meet their target performance and functionality metrics, so they eventually gave up on the migration, threw in the towel, and stayed with Progress - after spending $25M on the project.
Can your company afford that kind of risk / reward ratio?
Progress (Openedge) has been around for a long time and is not going anywhere soon. And rewriting 10 Million lines of code in any language just to use the current flavor of the month would never be worth it unless your current application is not doing what you need. Even then bringing it up to current needs would normally be a better solution.
If you need to migrate your current application to the latest version of OpenEdge (Progress), you would normally just make a copy of your database(s), convert it/them to the new version of OpenEdge, compile your code against the new databases, and shake the bugs out. You may have some keyword issues, but this is usually pretty minor.
If you need help with programming I would suggest contacting Progress Software and attending the yearly trade show or going to https://community.progress.com/ and asking/looking for local user groups. The local user groups would be a stellar place to find local programming talent.
Hope this helps.....

Prevent piracy of a desktop application which doesn't need an Internet connection?

Suppose an application will never have an internet connection during its lifetime; how can you prevent piracy of the software?
A single product-key requirement during installation is not enough because, once it is installed legitimately, anybody can copy the installation and redistribute it.
So every time the application runs it should check for something and crash if the check fails.
Now what could it possibly check?
Initially I thought keeping an encrypted binary file would do the job, but as answered here, that offers negligible protection.
Any hacker can modify the executable so that instead of crashing when the check fails it should continue running.
So no matter how difficult the check is, the cracked application will always run.
Now I cannot see any possible solution to this problem.
PS: I am a single independent developer who develops productivity software at a very low price. Seeing this question, I believe I just have to let it go. Sigh...
EDIT: I would like to thank all the contributors to this discussion for letting me know the grim reality...
What I understand now is that you are indirectly shipping the source code of your application in the form of the target executable. It can be modified by anybody using a debugger, thus ANY method of preventing piracy through the source code of your application is useless. The only practical solution to this problem is to keep your legitimate customers happy by providing them services (apart from the software) and to keep your price below their expectations.
I had been thinking about solving this problem for the past 3 days; it now seems futile, but I still learnt a lot in the process, which I wouldn't have otherwise...
The only standalone thing I've seen that is semi-effective is hardware keys that come with the boxed software. They used to attach to a parallel port or a serial port and get checked when you started the program.
AutoCAD and similar programs used to do this, but it is a BIG PAIN for your customers. Any time the key fails to read, or a key goes bad, customer productivity suffers. It hurts your legitimate customers far more than those who end up pirating it anyway, and a sufficiently motivated pirate can make a VM that will overcome this. Modern versions of this use USB.
My recommendation is to trust people. Upon install, make them click a "I promise I paid for this" button and be done with it. If they click "I didn't pay for this" show them a small paragraph about how to help keep good software coming and prevent customer-harming DRM schemes by simply contributing to the success of good software authors.
You could generate a unique copy for each user, create a database, and check it against copies you find online, if you like playing the biggest game of whack-a-mole ever.
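If you do want to go the per-copy route, here is a minimal sketch of the idea (Python, with a hypothetical customer id and signing secret; this only identifies who leaked a copy, it does not stop the copying itself):

    import hmac
    import hashlib

    SECRET = b"replace-with-your-own-signing-secret"   # hypothetical; never ship this with the binary

    def watermark_for(customer_id: str) -> str:
        """Derive a per-customer watermark to stamp into each distributed copy."""
        return hmac.new(SECRET, customer_id.encode(), hashlib.sha256).hexdigest()[:16]

    # At build time: write watermark_for("customer-0042") into the copy you ship,
    # and record (customer_id, watermark) in your own database.
    # If a copy later turns up online, read the stamp back out of it and look up
    # which customer it was originally sold to.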

Redmine task granularity

I've been using Redmine for almost a year to manage my startup. I have all issues stored in one project, with two subprojects for areas that I had to outsource and didn't want to give the contractor access to the main project issues. My problem is that I have ended up with hundreds of issues which vary greatly in the time required to implement them. Some are small, e.g. 'Fix bug in controller' or 'Add telephone number to contact us page', some require much more effort, e.g. 'Create a new Q&A area' or 'Migrate server to nginx', and some are more abstract, e.g. 'Investigate new SEO opportunities' or 'Consider implementing a reseller control panel'.
I feel like I must be using Redmine incorrectly, as having these all mixed together is a bit confusing. Any ideas on how I could organize them better would be greatly appreciated. If supplementing with other tools might be a better idea, I'd love to hear suggestions.
I don't think there is a problem having all the issues you mentioned mixed together in a project as long as they're all related to the project.
The most important point when using Redmine with projects that have lots of issues is to make use of custom queries. This is a great feature, but in order to be able to use it, you must also use and fill in the other fields:
Tracker: Make use of different trackers (the default of bugs, features and tasks works for me)
Category: Can be a specific part of your software, or other aspects of your business (administration, IT/server, ...)
Version: Use the version to group different issues, usually used for a release, but can also be ideas or unplanned
Of course priority and due date - I often use them for ordering, but you may create a custom query of issues due in the next 2 weeks
Assignee is usually the most important if there is more than one user - first of all you'll want to see the issues assigned to you, as well as the issues created by you (in order to follow-up)
You can always add custom fields in case you have other information which may be used to filter your issues.
Once a set of custom queries are in place, you'll hardly consult all your open issues at once anymore.
Two little used features for redmine newbies are categories and custom fields.
Categories are usually used for modules in your project ("Database", "Front End", "Administration Panel", etc.) and you can use custom fields for anything else you find useful - e.g. create a "Time Consumer (Estimated)" custom field as a list with "Whale (Weeks)", "Elephant (Days)", "Tiger (Hours)", "Monkey (About an hour)", "Mouse (Minutes)".

If I wanted to make a Pac-Man Game?

I am immediately placing this as a community wiki thing. I don't want to ask for help in programming yet or have even a specific question about programming, but rather the process and the resources needed to make such a game.
To put it simply: my college friend and I decided to give ourselves a really big challenge to further our skills in programming. In six months' time we want to have a Pac-Man game to show for it. Pac-Man will be AI-controlled like the ghosts, and whichever Pac-Man lives the longest after a set of tries wins.
This isn't like anything we've done so far. The goal here, for me, isn't to create a perfect game, but to try to complete it and learn a whole bunch in the process. Even if I don't finish in time, which is a good possibility, I want to have at least tried this.
So my question is this: How should I start preparing myself? I already have started vector math, matrices, all that fun stuff. My desired platform would be DirectX 9.0c; is that advisable? Keep in mind that this is not a preference just for this project, but I wish to have some kind of future in graphics development, so I want to pick a platform that is future-safe.
As for game development in general, what should I take into consideration? I have never made a real game before, so any and all advice on developing mid-scale projects (if this would be a mid-scale project) is greatly appreciated.
My main concerns are the pit-falls and demotivators.
Sorry if the question is so vague. If it doesn't belong here, then I will remove it. Otherwise, any and all advice regarding making larger projects is greatly appreciated.
Given you've not tried this sort of thing before here's a few things I'd recommend.
Start with something other than DirectX (and presumably C++)
DirectX and C++ expose you to a lot of low-level stuff you can learn later. Keep things simple and perhaps try XNA and C#, which is close enough that you can port it later but will let you skip a lot of things like memory management and pointers for now.
Start with 2D instead of 3D
The original Pac-Man is 2D, so you won't be needing vector math for now.
So where does that leave you?
Well, a few things to think about are the game loop, keeping things in sync, updating the screen and responding to user input.
These are great principles and will let you get something up and running a lot sooner. Do not underestimate how important it is to keep seeing progress - this is hard if you set the technical bar too high initially.
I'd go down this route (ordered to keep things fun and interesting)
Get a screen displaying - this is highly visual
Get a Pacman responding to user input
Get Pacman constrained to within the walls
Get a ghost responding to secondary user input - you can chase each other
Figure out some collision detection
Get the dots and power pills rendering so you can score and eat ghosts
Render some more ghosts and figure out AI
Work out the code for finding when the level is complete
Make the map change and state reset when on a new level
Once you've got this working and running you can then decide if you want to play with better AI, 3D math or switch over to C++.
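To make the "game loop, keeping things in sync, updating the screen and responding to user input" part concrete, here is a minimal sketch (Python with pygame, used purely for illustration; the same update/draw structure applies in XNA/C# or C++/DirectX, and the window size, speed and colours are arbitrary):

    import pygame

    pygame.init()
    screen = pygame.display.set_mode((448, 496))    # roughly the classic maze proportions
    clock = pygame.time.Clock()

    pacman = pygame.Rect(216, 272, 16, 16)          # placeholder player rectangle
    speed = 2
    running = True
    while running:
        # 1. Input
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
        keys = pygame.key.get_pressed()
        # 2. Update
        if keys[pygame.K_LEFT]:
            pacman.x -= speed
        if keys[pygame.K_RIGHT]:
            pacman.x += speed
        if keys[pygame.K_UP]:
            pacman.y -= speed
        if keys[pygame.K_DOWN]:
            pacman.y += speed
        # 3. Draw
        screen.fill((0, 0, 0))
        pygame.draw.circle(screen, (255, 255, 0), pacman.center, 8)
        pygame.display.flip()
        clock.tick(60)                              # fixed 60 FPS keeps things in sync

    pygame.quit()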
I had to write a pacman game in Java for an OO class. I found it to be very straightforward, possibly with the exception of figuring out the best way to map walls. After a bit of research, I came across this: http://javaboutique.internet.com/PacMan/source.html which uses bit-shifting to determine walls. It looks like complexity overkill, but I found it to be pretty elegant after I played around with the math a little. Other than that, pacman is a very array-friendly concept, so use an array for the board, some basic sprites, tinker with the speed and refresh, keep track of game data, and toss it in a loop.
As for the AI of the ghosts, there are articles written about them: each ghost has a specific "strategy". Or you could roll your own; you could program them to be as easy as always heading towards Pac-Man (or his general location/quadrant), or as complex (shortest path) as you'd like.
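As a rough illustration of the array-friendly board and a "head towards Pac-Man" ghost (a sketch in Python rather than Java; the tiny maze, bit values and helper names are made up for the example, not taken from the linked source):

    # Each maze cell is a small integer bitmask: one bit per wall, plus a dot flag.
    WALL_UP, WALL_RIGHT, WALL_DOWN, WALL_LEFT, DOT = 1, 2, 4, 8, 16

    # A tiny 3x3 stand-in for a real board (the actual maze would be ~28x31 cells).
    maze = [
        [WALL_UP | WALL_LEFT | DOT,   WALL_UP | DOT,    WALL_UP | WALL_RIGHT | DOT],
        [WALL_LEFT | DOT,             DOT,              WALL_RIGHT | DOT],
        [WALL_DOWN | WALL_LEFT | DOT, WALL_DOWN | DOT,  WALL_DOWN | WALL_RIGHT | DOT],
    ]

    def can_move(row, col, direction):
        """True if the wall bit for `direction` is not set in this cell."""
        return not maze[row][col] & direction

    def greedy_ghost_step(ghost, pacman):
        """Naive 'head towards Pac-Man' AI: try the axis with the larger gap first."""
        gr, gc = ghost
        pr, pc = pacman
        if abs(pr - gr) >= abs(pc - gc):
            direction, step = (WALL_DOWN, (1, 0)) if pr > gr else (WALL_UP, (-1, 0))
        else:
            direction, step = (WALL_RIGHT, (0, 1)) if pc > gc else (WALL_LEFT, (0, -1))
        if can_move(gr, gc, direction):
            return gr + step[0], gc + step[1]
        return ghost  # blocked; a real ghost would pick its next-best direction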
Play pacman! This is the first task for your project!
I'd look at the original arcade cabinet assembly code for Pacman and the description of what it does. It's a real eye opener :)
Personally, here's what I would do:
study open source games to see what they do
buy a book about game programming (actually, I have a book about game programming already, but you probably want something more recent than that)
pick a toolset/game development library (Sourceforge, Google Code)
work through the tutorials that come with that library, possibly change to a different library if the API is too weird
come up with a requirements document
draw up a first pass design ("plan to throw one away"), try to have somebody review it
decide on a test plan
write up a schedule, not because I want to stay on schedule but because I want to break things down into easily-defined tasks
write the smallest complete game I could (e.g., a Pac-Man sprite that I can control inside a window: no maze, ghosts, score, lives, ability to die, etc.)
add features to that game until I've implemented the whole thing
Sounds like a good idea for a learning project! The 2 general things I recommend for your approach are
work in iterations
read a bunch about C++ and DirectX along the way
Start small -- write some code that does nothing more than draw Pac-Man on the screen. Then build on that by implementing movement across the screen. Then build the map boundaries and the inability to travel through them. And continue in this fashion, prioritizing the next task you need to complete, and then doing whatever it takes to complete it. Try not to make the tasks too big.
In order to figure out how to complete the tasks, you'll need to read. Books, web sites and existing code are all very helpful in figuring out how to do what you want. It's worth looking at several different ways to complete the same task, because some ways are better than others, or might better fit your project.
Good stuff! I am glad that Pacman motivates and inspires you.
Things to get started.
1) Decide on the development environment.
a) Are you building a standalone game or a networked game?
b) Which language are you targeting to improve?
2) How well versed are you with AI?
3) How well versed are you with programming algorithms and techniques - like A* (A-star) path finding, Dijkstra's algorithm, collision detection, hit testing or even recursive programming? (A small path-finding sketch follows this answer.)
4) Are any of you talented in graphical design?
Good luck.
P/S FYI, if I were to write a Pacman game, I would do it in C# and Silverlight 4.0 (I can write C++ comfortably but my priority is to jump on the Silverlight bandwagon).
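For the path-finding point above, here is a small illustrative sketch (Python; plain breadth-first search on a grid of walkable cells - on an unweighted maze it finds the same shortest route A* would, so it is a reasonable starting point before adding heuristics):

    from collections import deque

    def bfs_path(walkable, start, goal):
        """Shortest path on a grid where walkable[r][c] is True for open floor."""
        rows, cols = len(walkable), len(walkable[0])
        prev = {start: None}
        queue = deque([start])
        while queue:
            cell = queue.popleft()
            if cell == goal:
                path = []
                while cell is not None:     # walk the parent links back to the start
                    path.append(cell)
                    cell = prev[cell]
                return path[::-1]
            r, c = cell
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if 0 <= nr < rows and 0 <= nc < cols and walkable[nr][nc] and (nr, nc) not in prev:
                    prev[(nr, nc)] = cell
                    queue.append((nr, nc))
        return None  # no route (shouldn't happen in a connected maze)

    # Example use: bfs_path(walkable, ghost_cell, pacman_cell) gives the ghost's
    # next moves as a list of (row, col) cells.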

Unit test Bug Tracking

During the process of building a software application, you start testing what you have built in stages, even before it is complete, and you start seeing issues/bugs. How do you track them? Do you use your regular bug-tracking tool and add them as issues (a waste of time, since it is a work in progress), just keep them in your head to fix later, or keep a simple text list?
What would be an efficient way to make sure that whatever you have found is eventually fixed as development progresses? Are there any tiny tools to do that?
What I usually do is the following:
Gauge the size of the bug/issue
If it's too big, create an issue in the bug tracker.
If it's small enough, write a failing unit test and then come back to it after I've finished the original functionality.
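As a small illustration of the "write a failing unit test and come back to it" approach (Python's unittest used for the example; round_total and the numbers are hypothetical stand-ins for your own code):

    import unittest

    def round_total(amount):
        """Stand-in for real application code; it currently truncates instead of rounding."""
        return int(amount * 100) / 100          # the bug we want to remember to fix

    class TestInvoiceRounding(unittest.TestCase):
        @unittest.expectedFailure               # parked: fix after the current feature is finished
        def test_total_is_rounded_to_cents(self):
            self.assertEqual(round_total(10.009), 10.01)

    if __name__ == "__main__":
        unittest.main()

The failing test sits in the code base as a reminder, shows up in every test run, and is trivial to "promote" to a real failure by removing the expected-failure marker once you get back to it.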
I've found that the simplest and most efficient way to track tasks of all types (todos, work items, bugs, etc ...) is to use a single system. Typically a bug tracking system. This allows you to see all of the work remaining on your project in a single place.
Having multiple tracking systems almost always results in lost data. People eventually pick different systems, don't tell people about the system they are on, lose the piece of paper which has the list of work items, etc ...
Most bug tracking systems allow you to categorize your bugs so it's easy to distinguish the type of work remaining as well.
Make sure your CI tools, such as CruiseControl.NET, run unit tests as part of the build. This will cause the build to show as broken when a unit test fails, and the person who last checked in will be responsible for fixing it.