My app uses both UIKit and cocos2d. After showing a view with the Google Maps SDK and then presenting a cocos2d scene, Xcode shows this:
cocos2d: surface size: 0x0
Failed to make complete framebuffer object 0x8CDD
The screen goes black and nothing works until I restart the app.
I have searched many websites; plenty of people have the same problem, but no solution.
So is it impossible to use the Google Maps SDK with cocos2d? If not, what can I do?
(Before the Google Maps SDK I used MKMapView; it doesn't cause this problem, but it's not as good as Google Maps.)
Google Maps uses OpenGL ES to render the map. Two OpenGL renderers don't mix, because each uses its own GL context. Specifically on iOS there seems to be little support for running two GL views side by side (let alone on top of each other).
So no, for all intents and purposes, mixing cocos2d and Google Maps on iOS is not possible.
I wonder whether I can write my own native module, render something with OpenGL in C++, and finally display the rendered picture on the React Native side (by simply using a component).
If so, can I use that to render an animation at, for example, 60 fps?
My situation is that I have a custom (let's say game) renderer written in OpenGL, and I'm looking for a clean way to build an editor detached from the engine code.
I've already looked through some react-native video libraries and discovered that frames are injected as textures of components, but I'm not sure that is the best solution (I can't find any documentation of those low-level mechanisms in React Native).
Any advice? Thanks in advance!
I'm trying to write a simple AR app in React Native. It should detect 4 predefined markers and draw a rectangle as a boundary on the live camera preview. The thing is, I'm trying to do the processing in C++ using OpenCV, so that the logic of the app lives in one place accessible to both Android and iOS.
Here's what I've been thinking:
Write the OS-dependent code to open the camera and request permissions (Java/ObjC), plus the C++ part that does the processing on each frame.
Call the C++ code (from within the native code) on each frame; it should return, let's say, coordinates for the markers (a rough sketch of this step follows the list).
Draw the rectangle on the preview in native code if all 4 markers are found (no idea how to achieve this so far, but I think it will be native code).
Expose that preview (the live preview with the drawn overlay) to React Native (not sure about this or how to achieve it).
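A rough sketch of what the per-frame C++ step (the second item above) might look like, assuming OpenCV's imgproc module is available. This is only a stand-in: it treats large quadrilateral contours as marker candidates, and the real detection for your predefined markers (ArUco, template matching, etc.) would replace that heuristic; detectMarkers and MarkerPoint are made-up names.

```cpp
// marker_detector.cpp -- hypothetical sketch of the per-frame processing.
#include <opencv2/imgproc.hpp>
#include <vector>

struct MarkerPoint { float x; float y; };

// Returns the centers of up to four square-ish blobs found in the frame.
// The platform wrapper (Objective-C / Java) passes in the camera frame and
// forwards the result to the JS side.
std::vector<MarkerPoint> detectMarkers(const cv::Mat& frameBGRA)
{
    cv::Mat gray, binary;
    cv::cvtColor(frameBGRA, gray, cv::COLOR_BGRA2GRAY);
    cv::threshold(gray, binary, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<MarkerPoint> centers;
    for (const auto& contour : contours) {
        std::vector<cv::Point> poly;
        cv::approxPolyDP(contour, poly, 0.02 * cv::arcLength(contour, true), true);
        // Keep only reasonably large quadrilaterals as marker candidates.
        if (poly.size() == 4 && cv::contourArea(poly) > 500.0) {
            cv::Rect box = cv::boundingRect(poly);
            centers.push_back({ box.x + box.width * 0.5f,
                                box.y + box.height * 0.5f });
            if (centers.size() == 4) break;
        }
    }
    return centers;
}
```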
I've looked at the React Native camera component, but it doesn't provide access to frames, and even if that were possible, I'm not sure it would be a good idea to send frames over the bridge between JS and Java/ObjC.
The problem is that I'm not sure about the performance, or whether this is even possible.
If you know of any React Native library that handles this, that would be great.
Your steps seem sound. After processing the frame in C++, you will need to set the application properties RCTRootView.appProperties on iOS, and emit an event using RCTDeviceEventEmitter on Android. So you will need an Objective-C wrapper for your C++ code on iOS and a Java wrapper on Android. In either case, you should be able to use the same React Native code for actually drawing the rectangle on top of the camera preview. You're right that the React Native camera component does not have an API for getting individual frames from the camera, so you'll need to write that code natively for each platform.
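Since the same C++ code sits behind both wrappers, it can help to expose it through a small C-style interface that the Objective-C++ layer and the JNI layer both call, then forward the result to React Native as described above. A hypothetical sketch (the names below are invented, not part of React Native or OpenCV):

```cpp
// marker_bridge.h -- hypothetical shared entry point for both platforms.
#pragma once
#include <cstdint>

extern "C" {

struct MarkerResult {
    int32_t count;       // number of markers found (0..4)
    float   x[4], y[4];  // marker coordinates in frame pixels
};

// frame: pointer to the camera frame pixels (BGRA, width * height * 4 bytes).
// The iOS wrapper would push the result via RCTRootView.appProperties, the
// Android wrapper via RCTDeviceEventEmitter.
MarkerResult detect_markers_in_frame(const uint8_t* frame,
                                     int32_t width,
                                     int32_t height);

} // extern "C"
```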
I've recently noticed that things are laid out differently on the simulator and on a real device when using cocos2d. To verify, I did the following:
1. I created a blank cocos2d project. In the init method I created 7 sprites from Icon-72.png (which is found in the Resources folder of the cocos2d template) and added them to the screen. In the simulator only about 6.5 sprites fit side by side, whereas on the iPod touch all seven sprites fit easily and almost half of the screen width remained unused.
2. Then I created a project from the Single View Application template, added the same Icon-72.png, and placed 6 image views on the storyboard with their image property set to Icon-72.png. This time I got exactly the same result on both the simulator and the device.
I guess there must be some tweak to fix this in cocos2d, because it's not Apple's fault. Do you know how to handle it?
Your iPod touch probably has a Retina display, while the simulator does not. If you want the same display as the iPod touch, run the iPhone (Retina) simulator and you will get the same screen. Another option is to add a copy of the same file with the -hd suffix (e.g. Icon-72-hd.png) at double the resolution (144x144), and you will get the same result.
There is no problem with either the cocos2d version or with Apple for the issue you are facing.
I know my explanation is a bit rough, but I hope it gets the idea across.
The iPod touch has a Retina display, and cocos2d doesn't automatically double the size of your images.
I'd like to do this:
Create a Cocoa application with a couple of NSButtons in it, and also a cocos2d-iphone view running in the same window.
When I trigger one of the NSButtons, a method should be called in the cocos2d-iphone view (not sure where exactly, maybe in the currently running scene?).
Well, I managed to create a new project from the cocos2d-iphone for Mac template, made the window bigger than the cocos2d view, moved the cocos2d view, and added my NSButtons. Now I'm not quite sure how to make the connection I need. =/
I suggest reading an Interface Builder tutorial. This one uses Quartz rather than Cocos2D, but it's close enough: simply treat the Quartz view as the Cocos2D view while you go through the tutorial.
Note that Cocos2D/EAGLView has some issues with NSView objects. In particular, you can't add NSView objects as subviews of the Cocos2D OpenGL view; they simply will not be displayed. This is a general problem of the OpenGL view on Mac, and there are solutions/workarounds for it, but they unfortunately do not work with Cocos2D. So if you're planning to have NSView objects overlapping the Cocos2D view … well, you can try, and if you can make it work, PLEASE let me know how! :)
A simple example: on one side ("Input") we see the camera feed rendered via standard software rendering, while the other side (labeled "Output") is rendered via some DirectX mechanism (at least that's how it looks to me).
So which function does the Windows API or the DirectX API provide for capturing such mixed scenes?
TightVNC Server can do it, so you may want to look into what they are doing. From a quick glance through their source code, it looks like they create a virtual screen that mirrors the primary screen.
Specifically, though, look into the CreateCompatibleDC and CreateDIBSection APIs.
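For reference, a minimal capture along those lines might look like the sketch below (error handling omitted). As the answers that follow point out, whether hardware-rendered regions actually show up in the copy depends on how they are presented.

```cpp
// Screen grab via GDI using CreateCompatibleDC and CreateDIBSection.
#include <windows.h>
#include <cstdint>

// Captures the primary screen into a 32-bit top-down DIB and returns the
// pixel pointer. The caller owns hBitmap and must DeleteObject() it when done.
uint8_t* CaptureScreen(int& width, int& height, HBITMAP& hBitmap)
{
    HDC screenDC = GetDC(nullptr);               // DC for the whole screen
    HDC memDC    = CreateCompatibleDC(screenDC); // memory DC to copy into

    width  = GetSystemMetrics(SM_CXSCREEN);
    height = GetSystemMetrics(SM_CYSCREEN);

    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = -height;       // negative = top-down rows
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    void* pixels = nullptr;
    hBitmap = CreateDIBSection(memDC, &bmi, DIB_RGB_COLORS, &pixels, nullptr, 0);

    HGDIOBJ old = SelectObject(memDC, hBitmap);
    BitBlt(memDC, 0, 0, width, height, screenDC, 0, 0, SRCCOPY | CAPTUREBLT);
    SelectObject(memDC, old);

    DeleteDC(memDC);
    ReleaseDC(nullptr, screenDC);
    return static_cast<uint8_t*>(pixels);
}
```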
As far as I know, there is no direct way to capture a DirectX render area, even though we can see it on the screen. The real rendering happens in the hardware layer, so the APIs in the standard SDK don't know the final rendered result, which leads to the black square.
The only realistic option may be that the render layer itself (such as the DirectX-based engine) supports an output interface in addition to its underlying rendering, so I suggest checking its documentation to find out whether such an interface exists.
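If you do control the renderer, such an output interface often boils down to reading the render target back into system memory. A rough Direct3D 9 flavoured sketch of the idea, assuming you have access to the device (the function name is made up, and the render target must not be multisampled for GetRenderTargetData to work):

```cpp
// Hypothetical readback hook inside a D3D9-based engine: copy the current
// render target into a system-memory surface that the CPU can read.
#include <windows.h>
#include <d3d9.h>
#include <cstring>

bool ReadBackFrame(IDirect3DDevice9* device, void* outPixels, int outPitch)
{
    IDirect3DSurface9* rt = nullptr;
    if (FAILED(device->GetRenderTarget(0, &rt))) return false;

    D3DSURFACE_DESC desc;
    rt->GetDesc(&desc);

    // System-memory surface with the same size and format as the render target.
    IDirect3DSurface9* sysmem = nullptr;
    if (FAILED(device->CreateOffscreenPlainSurface(desc.Width, desc.Height,
            desc.Format, D3DPOOL_SYSTEMMEM, &sysmem, nullptr))) {
        rt->Release();
        return false;
    }

    bool ok = SUCCEEDED(device->GetRenderTargetData(rt, sysmem));
    if (ok) {
        D3DLOCKED_RECT locked;
        ok = SUCCEEDED(sysmem->LockRect(&locked, nullptr, D3DLOCK_READONLY));
        if (ok) {
            // Copy row by row into the caller's buffer.
            const int rowBytes = (locked.Pitch < outPitch) ? locked.Pitch : outPitch;
            for (UINT y = 0; y < desc.Height; ++y)
                std::memcpy(static_cast<char*>(outPixels) + y * outPitch,
                            static_cast<const char*>(locked.pBits) + y * locked.Pitch,
                            rowBytes);
            sysmem->UnlockRect();
        }
    }
    sysmem->Release();
    rt->Release();
    return ok;
}
```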
DirectX can present to a limited subsection of the window that you give it, enabling you to create small regions of DX content in larger windows.
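One common way to do that is to give the device a child window that covers only the region where DirectX should draw, while the rest of the parent window stays ordinary GDI/toolkit content. A rough Direct3D 9 sketch under those assumptions (window class and function name are arbitrary):

```cpp
// Confine D3D9 output to a sub-region of a larger window by creating the
// device against a child HWND that covers only that region.
#include <windows.h>
#include <d3d9.h>

IDirect3DDevice9* CreateDeviceInSubRegion(HINSTANCE hInstance, HWND parent,
                                          int x, int y, int w, int h)
{
    // A plain child window occupying just the rectangle we want DX to draw in.
    HWND child = CreateWindowExW(0, L"STATIC", nullptr, WS_CHILD | WS_VISIBLE,
                                 x, y, w, h, parent, nullptr, hInstance, nullptr);

    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);

    D3DPRESENT_PARAMETERS pp = {};
    pp.Windowed         = TRUE;
    pp.SwapEffect       = D3DSWAPEFFECT_DISCARD;
    pp.BackBufferFormat = D3DFMT_UNKNOWN;   // use the current display format
    pp.hDeviceWindow    = child;            // present only into the child window

    IDirect3DDevice9* device = nullptr;
    d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, child,
                      D3DCREATE_HARDWARE_VERTEXPROCESSING, &pp, &device);
    // Real code should keep d3d around, Release() it, and destroy the child
    // window on shutdown.
    return device;   // nullptr on failure; the rest of the window is untouched
}
```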