I cannot find any suitable link or answer about PlayHaven. There is only a sample project on GitHub, which did not help.
If there is a good tutorial or video explaining step by step how to integrate it, please provide the link or help me integrate it.
I'm Zachary Drake, Director of Sales Engineering at PlayHaven. Officially, PlayHaven doesn't currently support Cocos2D, so we don't have instructions on how to integrate PlayHaven into a Cocos2D game. PlayHaven may very well work with Cocos2d by simply following the iOS instructions, but we don't do QA and other testing on Cocos2d the way we do for officially supported platforms (iOS, Android, Unity iOS, Unity Android, Adobe AIR).
This person on the Cocos2d Forums seems to have successfully integrated PlayHaven (after ranting about it), but I have not actually performed the integration steps described and cannot vouch for them:
http://www.cocos2d-iphone.org/forum/topic/96170
The iOS integration documentation is not as good as we'd like it to be, and we are in the process of revising it now. If you email support@playhaven.com with specific iOS integration problems, we can assist you. But if a Cocos2D compatibility issue breaks the integration, unfortunately there won't be anything we can do to get around that.
Thanks for your interest in PlayHaven.
My working PlayHaven sample with Cocos2d: https://www.box.com/s/lc2p8jhngn5o8hxeuw3y
Here is the complete description of the SDK integration:
https://github.com/playhaven/sdk-ios
Can somebody provide a short example of how to snap camera frames with the NDK C++ Camera2 API?
I couldn't find any meaningful resources out there, as it's a fairly new API, but I would be thankful for any help.
Thanks!
I found a good example of using camera2 from within C++ here: https://github.com/justinjoy/native-camera2
It's easy to follow and I got it running in a few minutes.
There aren't sample apps available yet, but you can take a look at the basic compliance test for the camera2 NDK API:
https://android.googlesource.com/platform/cts/+/master/tests/camera/libctscamera2jni/native-camera-jni.cpp
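Since the question asks for a short example, here is a rough, untested sketch of the main moving parts when grabbing frames with the camera2 NDK API (libcamera2ndk / libmediandk, API level 24+). Error handling is stripped, the callback names (onImage etc.) are just placeholders, and you still have to request the CAMERA permission from the Java side:

    // Open the first camera and receive YUV_420_888 frames through an AImageReader.
    // Link against -lcamera2ndk -lmediandk (requires API level 24+).
    #include <camera/NdkCameraManager.h>
    #include <camera/NdkCameraDevice.h>
    #include <camera/NdkCameraCaptureSession.h>
    #include <media/NdkImageReader.h>

    static void onImage(void* /*ctx*/, AImageReader* reader) {
        AImage* image = nullptr;
        if (AImageReader_acquireNextImage(reader, &image) != AMEDIA_OK) return;
        uint8_t* y = nullptr; int len = 0;
        AImage_getPlaneData(image, 0, &y, &len);   // Y (luma) plane of the frame
        // ... process the frame data here ...
        AImage_delete(image);
    }

    // Empty state callbacks, just to satisfy the API.
    static void onDisconnected(void*, ACameraDevice*) {}
    static void onError(void*, ACameraDevice*, int) {}
    static void onSessionClosed(void*, ACameraCaptureSession*) {}
    static void onSessionReady(void*, ACameraCaptureSession*) {}
    static void onSessionActive(void*, ACameraCaptureSession*) {}

    void startFrames() {
        ACameraManager* mgr = ACameraManager_create();
        ACameraIdList* ids = nullptr;
        ACameraManager_getCameraIdList(mgr, &ids);

        ACameraDevice_StateCallbacks devCb = {nullptr, onDisconnected, onError};
        ACameraDevice* device = nullptr;
        ACameraManager_openCamera(mgr, ids->cameraIds[0], &devCb, &device);

        // The image reader's window is where the capture session delivers frames.
        AImageReader* reader = nullptr;
        AImageReader_new(640, 480, AIMAGE_FORMAT_YUV_420_888, 4, &reader);
        AImageReader_ImageListener listener = {nullptr, onImage};
        AImageReader_setImageListener(reader, &listener);
        ANativeWindow* window = nullptr;
        AImageReader_getWindow(reader, &window);

        ACaptureSessionOutputContainer* outputs = nullptr;
        ACaptureSessionOutputContainer_create(&outputs);
        ACaptureSessionOutput* output = nullptr;
        ACaptureSessionOutput_create(window, &output);
        ACaptureSessionOutputContainer_add(outputs, output);

        ACaptureRequest* request = nullptr;
        ACameraDevice_createCaptureRequest(device, TEMPLATE_PREVIEW, &request);
        ACameraOutputTarget* target = nullptr;
        ACameraOutputTarget_create(window, &target);
        ACaptureRequest_addTarget(request, target);

        ACameraCaptureSession_stateCallbacks sessCb =
            {nullptr, onSessionClosed, onSessionReady, onSessionActive};
        ACameraCaptureSession* session = nullptr;
        ACameraDevice_createCaptureSession(device, outputs, &sessCb, &session);

        // Repeating request: onImage() now fires for every captured frame.
        ACameraCaptureSession_setRepeatingRequest(session, nullptr, 1, &request, nullptr);

        ACameraManager_deleteCameraIdList(ids);
        // Keep the remaining handles alive for as long as you want frames,
        // then stop the session and free everything in reverse order.
    }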
I'm about to start my final-year project, which requires me to implement the Kinect Fusion algorithm. I was told to code in C++ and use the OpenNI API.
Problem:
I read up online but I am still confused as to how to start. I installed Microsoft Visual Studio 2012 Express as well as OpenNI, but how should I start? (I was told to practice coding first before starting to work on the project)
If I want to practice and understand how the code works and how the Kinect responds to it, any advice on how I should start? I am really lost at the moment and hitting a dead end; there is a lot of information online that I do not really understand.
First of all, if you're planning to use OpenNI with Kinect, I advise you not to use version 2.0, which is available at the official website. The reason is simply that there currently is no driver yet to support the Microsoft Kinect (the company behind OpenNI - PrimeSense - only supports a driver for their own sensor, which is different from the Kinect, and the community hasn't gotten round to writing a Kinect driver yet).
Instead grab the package from the simple-openni project's downloads page - it contains everything to get you going: libraries from the 1.5.x line.
OpenNI is the barebone framework - it only contains the architecture for natural interface data processing.
NITE is a proprietary (freeware) library by PrimeSense that provides code to process the raw depth images into meaningful data - hand tracking, skeleton tracking etc.
SensorKinect is the community-maintained driver for making the Kinect interact with OpenNI.
Mind you that these drivers don't provide a way to control the Kinect's tilt motor and the LED light. You may need to use libfreenect for that.
As for getting started, both the OpenNI and NITE packages contain source code samples for simple demos of the technology. It's a good idea to start with one and modify it to suit your needs. That's what I've done to get my own project - controlling Google Chrome with Kinect - working.
As for learning C++, there are tons of materials out there. I recommend the book "Thinking in C++" by Bruce Eckel, if you're a technical person.
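If you want a tiny first program to practice with (assuming the OpenNI 1.5.x line and the SensorKinect driver mentioned above; this is only a sketch I haven't compiled), reading depth frames looks roughly like this:

    // Read a few depth frames from the Kinect with the OpenNI 1.5.x C++ wrapper.
    #include <XnCppWrapper.h>
    #include <cstdio>

    int main() {
        xn::Context context;
        XnStatus rc = context.Init();
        if (rc != XN_STATUS_OK) { printf("Init failed: %s\n", xnGetStatusString(rc)); return 1; }

        xn::DepthGenerator depth;
        rc = depth.Create(context);              // needs the SensorKinect driver installed
        if (rc != XN_STATUS_OK) { printf("No depth node: %s\n", xnGetStatusString(rc)); return 1; }

        context.StartGeneratingAll();

        for (int frame = 0; frame < 100; ++frame) {
            context.WaitOneUpdateAll(depth);     // block until a new depth frame arrives
            xn::DepthMetaData md;
            depth.GetMetaData(md);
            // Depth (in millimetres) of the centre pixel:
            XnDepthPixel centre = md(md.XRes() / 2, md.YRes() / 2);
            printf("frame %d: %ux%u, centre depth = %u mm\n",
                   frame, md.XRes(), md.YRes(), centre);
        }

        context.Release();
        return 0;
    }

Once frames like these are coming in, starting from the bundled samples and modifying them (as suggested above) becomes much easier to follow.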
There are multiple examples written for OpenNI, available at the GitHub repository: https://github.com/OpenNI/OpenNI
Your best place to start is to review the Resources Page at OpenNI.org, followed by the Reference Guide. Then tackle several of the examples -- run them, step through them and modify them to understand how they are working.
We are building a game with SNS (social networking) features.
We are planning to use the native framework (e.g. UIKit on iOS) for the SNS features.
With that in mind, which one would be better?
http://www.cocos2d-x.org/ or http://code.google.com/p/cocos2d-android-1/
By which definition of "better" are we supposed to decide what is "better" for you?
I'm answering this question by saying that you should try out both, then go with the engine that you feel more productive with. Make the quickest prototype possible using the minimum features you require. In your case that might be displaying a native UI view inside a cocos2d OpenGL view.
For more information about what really matters when selecting an engine, read my "game engine picker guide".
I think it might be cocos2d-x. I did compare those two: basically, the Android one is almost dead, there are no updates any more, and the support and tutorials are poor.
But cocos2d-x will be more difficult when you try to integrate with platform SDK features, as you need to go through JNI. However, there are plenty of tutorials online that show you how to do that.
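To give a feel for what "going through JNI" looks like in practice, here is a rough sketch using cocos2d-x's JniHelper to call a static Java method from C++ (the class and method names are made up, and I haven't compiled this exact snippet):

    // C++ side: call the static Java method org.example.MyActivity.shareToSns(String).
    #include "platform/android/jni/JniHelper.h"   // Android builds of cocos2d-x only
    #include <string>
    using namespace cocos2d;

    void shareToSns(const std::string& message) {
        JniMethodInfo t;
        if (JniHelper::getStaticMethodInfo(t,
                                           "org/example/MyActivity",   // placeholder class
                                           "shareToSns",
                                           "(Ljava/lang/String;)V")) {
            jstring jMessage = t.env->NewStringUTF(message.c_str());
            t.env->CallStaticVoidMethod(t.classID, t.methodID, jMessage);
            t.env->DeleteLocalRef(jMessage);
            t.env->DeleteLocalRef(t.classID);
        }
    }

The Java method on the other side would then use the normal Android SDK (intents, share dialogs, etc.), which is exactly the plumbing a Java-based engine lets you skip.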
I have attempted cocos2d-x in a Windows environment using Eclipse and eventually decided against it because it lacks debugging support and proper IDE integration, which are very good with Java/Android/Eclipse/ADT. The only way I would recommend cocos2d-x is if you are developing for iOS first and then porting to Android, but I don't know about UIKit and SNS.
Since the presentation by Eero Bragge at the Amsterdam DevDays about Qt / Qt Creator, I've been looking for an excuse to try my hand at mobile development. Now that excuse has arrived in the form of the Nokia N900, my new phone!
My other hobby is computer vision, so my first ideas for applications to try and build lie in that direction. My questions are:
Has anyone tried Qt Creator + OpenCV + Maemo 5? I see there is a year-old port of OpenCV for Maemo Diablo (4.1); has anyone tried that one on Maemo 5?
I see that improvements to the OpenCV port were among the Maemo Google Summer of Code 2009 ideas that didn't make the cut. Is there work being done there?
How easy is it to acquire images from the phone's camera and convert them to something OpenCV understands?
Does anyone have any useful links to share?
"I see that improvements to the OpenCV port are were among the Meamo google summer of code 2009 ideas that didn't make the cut. Is there work being done there?"
The project was not select, and AFAIK the people involved didn't carry the project.
OpenCV seems to work under Maemo5 according to the discussion here: http://n2.nabble.com/OpenCV-for-Maemo5-td4172275.html#a4172275
You can use the GStreamer framework to acquire images from both cameras; gst-camerabin should work.
see: http://gstreamer.freedesktop.org/wiki/CameraBin
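As a rough sketch of the "convert them to something OpenCV understands" part, this is approximately what pulling buffers from a GStreamer 0.10 appsink and wrapping them for OpenCV looks like; I haven't run this on an N900, and the v4l2src / 640x480 RGB pipeline is an assumption (on the device you would likely substitute the platform's camera source or camerabin):

    // Pull RGB frames from a GStreamer 0.10 pipeline and wrap them for OpenCV (no copy).
    // Build flags: pkg-config --cflags --libs gstreamer-0.10 gstreamer-app-0.10 opencv
    #include <gst/gst.h>
    #include <gst/app/gstappsink.h>
    #include <opencv/cv.h>

    int main(int argc, char* argv[]) {
        gst_init(&argc, &argv);

        // v4l2src is a placeholder; swap in the device's camera source element.
        GstElement* pipeline = gst_parse_launch(
            "v4l2src ! ffmpegcolorspace ! "
            "video/x-raw-rgb,bpp=24,depth=24,width=640,height=480 ! "
            "appsink name=sink", NULL);
        GstElement* sink = gst_bin_get_by_name(GST_BIN(pipeline), "sink");
        gst_element_set_state(pipeline, GST_STATE_PLAYING);

        for (int i = 0; i < 100; ++i) {
            GstBuffer* buf = gst_app_sink_pull_buffer(GST_APP_SINK(sink));
            if (!buf) break;                       // NULL on end-of-stream

            // Wrap the raw buffer in an IplImage header; note OpenCV assumes BGR,
            // so the channel order may need swapping before colour-sensitive work.
            IplImage* frame = cvCreateImageHeader(cvSize(640, 480), IPL_DEPTH_8U, 3);
            cvSetData(frame, GST_BUFFER_DATA(buf), 640 * 3);

            // ... run OpenCV processing on 'frame' here ...

            cvReleaseImageHeader(&frame);
            gst_buffer_unref(buf);
        }

        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(sink);
        gst_object_unref(pipeline);
        return 0;
    }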
I have two questions regarding native C/C++ on Android platforms:
Is it possible for device manufacturers to develop native C++ applications on an Android platform?
How can I develop my own native C++ application / library that has an upper layer Java front-end / API on an Android platform?
Official announcement and download links:
Introducing Android 1.5 NDK, Release 1
Posted by David Turner on 25 June 2009 at 10:30 AM
Many of you have been asking for the ability to call into native code from your Android applications. I'm glad to announce that developers can now download the Android Native Development Kit from the Android developer site.
http://android-developers.blogspot.com/2009/06/introducing-android-15-ndk-release-1.html
This blog entry explains how to do native programming on Android:
http://rxwen.blogspot.com/2009/11/native-programming-on-android.html
Hope it helps.
It is possible, but it's not supported. Native code requirements may vary significantly from one Android system to the next; unless you are working on very low-level infrastructure, it is best to go the Java-source-to-Dalvik-VM route for portability. And of course, you'll likely be tied to the very phone you wrote your native code for, though if you integrate it into Android it may be accepted and maintained for all platforms the system intends to support.
If you are a device manufacturer, of course. You can essentially do whatever you want.
This article explains it quite well: http://davanum.wordpress.com/2007/12/09/android-invoke-jni-based-methods-bridging-cc-and-java/
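As a rough illustration of what that bridging boils down to (the package and class names here are invented for the example, not taken from the article), the Java front-end declares a native method and the C++ library implements it under the JNI-mangled name:

    // C++ side of a minimal JNI bridge, built with the NDK (ndk-build / Android.mk).
    // The matching (hypothetical) Java class would be:
    //
    //   package com.example.hello;
    //   public class NativeLib {
    //       static { System.loadLibrary("hello-jni"); }      // loads libhello-jni.so
    //       public static native String stringFromNative();
    //   }
    //
    #include <jni.h>

    extern "C" JNIEXPORT jstring JNICALL
    Java_com_example_hello_NativeLib_stringFromNative(JNIEnv* env, jclass /*clazz*/) {
        // Any C or C++ code can run here; we simply hand a string back to Java.
        return env->NewStringUTF("Hello from native C++");
    }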
Google has released a Native Development Kit (NDK) (according to http://www.youtube.com/watch?v=Z5whfaLH1-E at 00:07:30).
Hopefully the information will be updated on the Google Groups page (http://groups.google.com/group/android-ndk), as it says it hasn't been released yet.
I'm not sure where to get a simple download for it, but I've heard that you can get a copy of the NDK from Google's Git repository under the donut branch.
Well, Android tends to have a normal Linux underneath,
so writing Linux apps should be possible if you can just get the code onto the device...
(but often you can't, since the phone is locked at that level)
So the answers would be:
Yes, but it depends
Yes, but it depends