In recent weeks I built a Progressive Web App using HTML/JS. I found several (free) libraries for scanning QR codes, but the results were not very good: in many cases the video camera never auto-focused.
Can Ionic use the video camera well enough to scan (any) QR code?
Does Ionic use native(-like) access to the video camera?
The best way of proving something is to build it ;-) Would an Ionic app scan QR codes better?
Today I created my first Ionic 3 app. I used this nice example.
The difference in QR scanning quality was far greater than I expected:
After trying many (free) JavaScript examples from the web, the quality was still not very good. Alas.
The Ionic/Cordova example on Android was SUPERB!
Ionic generates a native app (for either iOS or Android), which uses the camera's native functionality for scanning the QR code.
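For reference, here is a minimal sketch of how such an app typically triggers the native scanner, assuming the widely used phonegap-plugin-barcodescanner Cordova plugin (the example I followed may have used a different wrapper, so treat the details as an assumption):

// requires the phonegap-plugin-barcodescanner Cordova plugin;
// call this only after Cordova's 'deviceready' event has fired
function scanQRCode() {
  cordova.plugins.barcodeScanner.scan(
    function (result) {
      // the native scanner handles focus itself and returns the decoded text
      if (!result.cancelled) {
        alert('Scanned ' + result.format + ': ' + result.text)
      }
    },
    function (error) {
      alert('Scanning failed: ' + error)
    }
  )
}

Because the scan runs in a native view rather than a JavaScript video pipeline, auto-focus works as it would in any native camera app.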
Related
I want to program a drone to fly using a C++ project with real-time image-processing analysis (using OpenCV). I want to do it with PX4 and the Gazebo simulator. The final goal is to run the project on a real drone using a Jetson Nano and a Pixhawk flight controller.
I have two main problems.
I can't manage to get the video stream of the PX4 drone models without using ROS. I have followed this official guide to install the relevant software (Gazebo, PX4, GCS).
For Python there is the DroneKit library to control the drone, but I want to use C++ for my project. What are the alternative tools to DroneKit for controlling drones with C++, and how can I receive the video stream from the Gazebo PX4 drone?
I searched for information online for hours and went through the documentation, but I could not find a suitable guide or solution.
Thanks.
Posting this as an answer after details in the comments made things clearer.
For an application like this, ROS should most definitely be used. It comes with a wide range of pre-built packages and tools to enable easy localization and navigation. When looking at UAVs, the MoveIt! package is a good place to start: it handles 3D navigation and already has a few UAV implementations. The hector_quadrotor package is another good option for something like SLAM.
I was wondering if anyone has an idea of how one would implement OCR image linking on an iOS device.
What I want the app to do is scan an image using the iPhone's camera and then recognise that image. When the image is recognised, the app should open a link associated with it.
A better example of what I am talking about comes from a company called Augment. They make a product called "Trackers", which is exactly what I would like to implement.
There is no built-in or off-the-shelf SDK that does exactly what you require.
However, you can achieve it by customizing the OpenCV library or one of the augmented-reality SDKs.
Here are some links that may be helpful to you:
OpenCV library tutorial for iOS
Wikitude augmented reality
Now there is the Real-Time Recognition SDK (http://rtrsdk.com). It is free, by the way. Disclaimer: I work for ABBYY.
We are developing a website where you can upload tutorial videos; it is like a video repository for a certain institution. We came up with the idea of letting the user record a video directly on the site and upload it with a title, description, etc. We are going to use the Ruby on Rails framework. Does anyone here know how to do this without using Flash or Silverlight?
I searched around and found some examples where, when the user stops the recording, the file is uploaded automatically and the user can add a title, description, and so on. But the example is in PHP, which is a problem for me: how can I do the same in Rails?
Thanks in advance.
If an HTML5 solution would suit you, take a look at WebRTC (currently supported in Chrome, Firefox and Opera).
You can find a good tutorial here: http://www.html5rocks.com/en/tutorials/getusermedia/intro/
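As a rough sketch of that approach using the newer MediaRecorder API (the /videos endpoint and the video[...] field names are hypothetical stand-ins for your Rails routes, and browser support for MediaRecorder should be verified):

// ask the browser for camera and microphone access
navigator.mediaDevices.getUserMedia({ video: true, audio: true }).then(function (stream) {
  var recorder = new MediaRecorder(stream)
  var chunks = []

  // collect encoded video chunks as they arrive
  recorder.ondataavailable = function (e) { chunks.push(e.data) }

  // when the user stops recording, bundle the chunks and POST them
  // together with the title/description fields to the Rails app
  recorder.onstop = function () {
    var form = new FormData()
    form.append('video[file]', new Blob(chunks, { type: 'video/webm' }), 'recording.webm')
    form.append('video[title]', document.querySelector('#title').value)
    fetch('/videos', { method: 'POST', body: form })
  }

  recorder.start()
  // later, wired to a "stop" button: recorder.stop()
})

Because the upload happens only in the onstop handler, the user can fill in the title and description before the file is sent, which addresses the auto-upload problem you mentioned.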
The first hit when searching for a webcam plugin: http://www.xarg.org/project/jquery-webcam-plugin/. As it uses JavaScript, it is easy to include in Rails.
Many others appear in the results ...
Another option is to use the Nimbb widget. There are a lot of tutorials showing how to embed it into a website.
There are many jQuery webcam plugins; the top 10 are listed below:
1. ScriptCam : jQuery plugin to manipulate webcams
ScriptCam is a popular jQuery plugin for manipulating webcams: take snapshots, detect movement, colors, QR codes and barcodes, record video clips, and organize video chats.
2. Xarg jQuery webcam plugin
The jQuery webcam plugin is a transparent layer to communicate with a camera directly in JavaScript. The plugin provides three different modes to access a webcam through a small API, directly with JavaScript – or, more precisely, jQuery. It is thus possible to draw the image on a canvas (callback mode), store the image on the server (save mode), or stream the live image from the Flash element onto a canvas (stream mode).
3. jQuery.WebcamQRCode : QR Code scanning in jQuery
WebcamQRCode is a jQuery plugin that uses the user's webcam to scan a QR code and return the result to JavaScript for processing. It was originally developed to scan a product's barcode and automatically fill in the corresponding information on an intranet form. This plugin uses Flash to access the webcam.
4. Photobooth-js : jQuery HTML5 plugin to take pictures through the webcam
Photobooth-js is a jQuery plugin plus an HTML5 widget that allows users to take their avatar pictures on your site. This jQuery plugin is supported in all browsers that support navigator.getUserMedia.
5. Photobooth with PHP, jQuery and CSS3
In this tutorial, we will be building a jQuery- and PHP-powered photobooth. It will allow your website visitors to take a snapshot with their web camera and upload it from a neat CSS3 interface. The solution we are going to use for this app is webcam.js, a JavaScript wrapper around Flash's API that gives us control over the user's webcam.
6. Mackers jQuery Webcam Plugin
A plugin which allows jQuery to read data from a user's webcam or other video-capture device.
7. headtrackr : JavaScript library for head tracking via webcam
headtrackr is a JavaScript library for real-time face and head tracking: it tracks the position of a user's head in relation to the computer screen via a web camera and the WebRTC/getUserMedia standard.
8. tracking.js : Real-time tracking techniques via the camera
tracking.js brings real-time tracking techniques – object tracking, color markers and more – to web elements, working on a real scene captured by the camera and allowing the development of interfaces and games through a simple and intuitive API (see the sketch after this list).
9. Reveal.js with Webcam-based gesture recognition
This is what I got when I combined webcam-based gesture recognition with Hakim El Hattab's reveal.js. It took me a while to write and fine-tune the detection algorithms, and even then they are only about 80% accurate.
10. Say Cheese : JavaScript library for integrating a webcam
A minimal library for integrating webcam snapshots into your app. It uses getUserMedia, a recent API for accessing audio and video in the browser.
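As promised above, here is a small sketch of what working with tracking.js looks like, based on the color-tracking example in its documentation (the element ID myVideo is illustrative):

// create a tracker for a few predefined colors
var colors = new tracking.ColorTracker(['magenta', 'cyan', 'yellow'])

// fires whenever a colored region is found in the current frame
colors.on('track', function (event) {
  event.data.forEach(function (rect) {
    console.log(rect.color + ' at ' + rect.x + ',' + rect.y)
  })
})

// attach the tracker to a <video id="myVideo"> element and request camera access
tracking.track('#myVideo', colors, { camera: true })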
You can do this without any external library, and in fact I think the work is much easier if you build it from the ground up.
You'll first need access to the user's webcam. With JavaScript you can first test whether the browser supports webcam access at all. This can be done with:
// true when the browser exposes the mediaDevices/getUserMedia API
function hasNavigator() {
  return !!(navigator.mediaDevices && navigator.mediaDevices.getUserMedia)
}
Then, if hasNavigator() is true, you can request the stream and attach it to a video element:

// the <video> element on the page that will show the live preview
var video = document.querySelector('video')

// request video only; add audio: true if you also need sound
var constraints = { video: true }

if (video) {
  navigator.mediaDevices.getUserMedia(constraints).then((stream) => { video.srcObject = stream })
}
This fetches a live stream from the webcam. Once you have the stream, you can draw a frame of it on a canvas and save that frame as an image.
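A minimal sketch of that last step, assuming the video element above is already playing the stream:

// copy the current video frame onto an off-screen canvas
var canvas = document.createElement('canvas')
canvas.width = video.videoWidth
canvas.height = video.videoHeight
canvas.getContext('2d').drawImage(video, 0, 0)

// serialize the frame as a PNG data URL; canvas.toBlob is an alternative for uploads
var snapshot = canvas.toDataURL('image/png')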
I wrote a blog post about Linking a Webcam Directly to Rails' ActiveStorage that I think you might find helpful.
I cannot find any suitable link or reply about PlayHaven. There is only a sample project on GitHub, which did not help.
If there is a nice tutorial or video link explaining step by step how to do the integration, please provide the link or help me integrate it.
I'm Zachary Drake, Director of Sales Engineering at PlayHaven. Officially, PlayHaven doesn't currently support Cocos2D, so we don't have instructions on how to integrate PlayHaven into a Cocos2D game. PlayHaven may very well work with Cocos2D by simply following the iOS instructions, but we don't do QA and other testing on Cocos2D the way we do for officially supported platforms (iOS, Android, Unity iOS, Unity Android, Adobe AIR).
This person on the Cocos2d Forums seems to have successfully integrated PlayHaven (after ranting about it), but I have not actually performed the integration steps described and cannot vouch for them:
http://www.cocos2d-iphone.org/forum/topic/96170
The iOS integration documentation is not as good as we'd like it to be, and we are in the process of revising it now. If you email support#playhaven.com with specific iOS integration problems, we can assist you. But if a Cocos2D compatibility issue breaks the integration, unfortunately there won't be anything we can do to get around that.
Thanks for your interest in PlayHaven.
My working PlayHaven sample with Cocos2D: https://www.box.com/s/lc2p8jhngn5o8hxeuw3y
Here is a complete description of the SDK integration:
https://github.com/playhaven/sdk-ios
I'm about to start my final-year project, which requires me to implement the Kinect Fusion algorithm. I was told to code in C++ and use the OpenNI API.
Problem:
I read up online but I am still confused about how to start. I installed Microsoft Visual Studio 2012 Express as well as OpenNI, but what should I do first? (I was told to practice coding before starting work on the project.)
If I want to practice and understand how the code works and how the Kinect responds to it, any advice on how I should start? I am REALLY lost at the moment and hitting a dead end, not knowing what to do next with all the information online, much of which I do not really understand.
First of all, if you're planning to use OpenNI with Kinect, I advise you not to use version 2.0, which is available at the official website. The reason is simply that there currently is no driver yet to support the Microsoft Kinect (the company behind OpenNI - PrimeSense - only supports a driver for their own sensor, which is different from the Kinect, and the community hasn't gotten round to writing a Kinect driver yet).
Instead, grab the package from the simple-openni project's downloads page – it contains everything to get you going: libraries from the 1.5.x line.
OpenNI is the barebone framework - it only contains the architecture for natural interface data processing.
NITE is a proprietary (freeware) library by PrimeSense that provides code to process the raw depth images into meaningful data - hand tracking, skeleton tracking etc.
SensorKinect is the community-maintained driver for making the Kinect interact with OpenNI.
Mind you, these drivers don't provide a way to control the Kinect's tilt motor or LED light. You may need to use libfreenect for that.
As for getting started, both the OpenNI and NITE packages contain source code samples for simple demos of the technology. It's a good idea to start with one and modify it to suit your needs. That's what I've done to get my own project - controlling Google Chrome with Kinect - working.
As for learning C++, there are tons of materials out there. I recommend the book "Thinking in C++" by Bruce Eckel, if you're a technical person.
There are multiple examples written for OpenNI, available at the GitHub repository: https://github.com/OpenNI/OpenNI
Your best place to start is to review the Resources Page at OpenNI.org, followed by the Reference Guide. Then tackle several of the examples -- run them, step through them and modify them to understand how they are working.