Following the DebugHitTestBounder example in SampleApp, I have subclassed SkBounder and installed it in my canvas (created on each draw) in order to find what is drawn under mouse clicks, but the onIRect method is never called by the drawing routines. The commit method is called as expected (but I don't need it; I need one that takes a display-space rectangle parameter). I debugged the code and found that the draw loops are managed in one place in canvas.cpp with macros (LOOPER_BEGIN and LOOPER_END), and found no place in the drawing calls that invokes the bounder's onIRect. Am I doing something wrong?
Note: I am using code from a two-month-old master branch of the git repo, with Xcode 4.6 on Mac OS X 10.8.x. Project files are created via gyp.
Apparently SkBounder only works on the raster backend; I was using the accelerated (GL) backend.
I'm looking into making some changes within the source code of the engine, so I looked at the source code on GitHub, but I'm absolutely clueless about how it's actually put together.
On the web I couldn't find anything on how the engine itself is made, only on what it can do.
Several questions come to mind:
Where does execution start? Is it from Main::setup()?
What would be the flowchart of how the engine operates?
How is the engine UI built? (From a web dev point of view, what is the equivalent of HTML for it?)
I'm no expert in C++, so even a general, abstracted overview would be really helpful to get started.
The Godot build is orchestrated from Python using SCons, as you can read in the documentation: Introduction to the buildsystem. It is different for each platform (e.g. you need the JDK for Android).
As you are aware, you can find the Godot source code on github. Before going further I need to point out that at the time of writing the master branch of the repository corresponds to the development builds of Godot 4. You might want to change to a different branch depending on the version you want to work on.
Disclaimer: I'm more familiar with Godot 3 code base.
Now, not only is the build process different for each platform; so are the API bindings and the entry point. You want to look inside the platform folder for operating-system-specific code.
For example, the entry point for Windows can be found in godot_windows.cpp and it looks like this:
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow) {
	godot_hinstance = hInstance;
	return main(0, nullptr);
}
You can follow the logic from there; you will find that ultimately it does some initialization and calls the setup and start methods of the Main class. You can find the Main class in the aptly named main folder. Afterwards the platform-specific code will enter its main loop, and after it finishes it will call the cleanup method of the Main class and then release any platform-specific resources.
By the way, when I say a class is in a folder, I mean there are both the .cpp and .h files.
The main loop might do other things, but it must call the iteration method of the Main class. You can see that the code computes time, calls into the different "servers", and dispatches input, among other things.
We don't have a flowchart, sadly; we have to piece together the overarching processes. For example, I've written elsewhere about what happens when you instance a scene. I have also looked into queue_free, which you can find elsewhere.
I'll talk a little further about the main loop below. But first I want to point you to the diagrams we do have:
Architecture diagram.
Inheritance tree.
Now, the more familiar part of the main loop is that there must be an instance of the MainLoop class. It defines initialization and finalization methods, and also methods to be called on each iteration of the main loop. By default it will be an instance of the SceneTree class (which extends the MainLoop class), but you can change that in project settings. You can find the MainLoop class in the core/os folder and the SceneTree class in the scene/main folder.
The SceneTree class has the means to propagate calls of _process and _physics_process on the… scene tree, among other things. The SceneTree has a root object of type Viewport (in Godot 4 it is a Window, which is a type of Viewport). And, as you know, Viewport is a type of Node, and can have children. The children of the root are the autoloads, the current scene, and whatever else you put there… Thus from there down it is Nodes which I expect you to be more familiar with.
On the other side you have singletons (actual singletons, not autoloads), including the "servers" and some other static utility classes. If you recall, Godot has different rendering backends, which are all behind the façade of a "server" (the VisualServer in Godot 3, the RenderingServer in Godot 4). In Godot 3 we had a choice of GLES2 and GLES3 for the rendering backend. The backends also require bindings, which you can find, again, in the platform folder.
Here is where my familiarity with Godot source code runs out: I don't know how the shader pipeline works.
The UI? Just like everything else, it is rendered with whatever rendering backend is being used. On the web? It will be on an HTML canvas element (with a WebGL context). The HTML? The HTML code of the web build template is configurable too (the Custom HTML shell option in the export settings); see Custom HTML page for Web export. The build process for the web? It uses Emscripten (compiling to WebAssembly). No, there is no Node.js stuff in Godot, just to be clear.
As for making changes, you can probably just work on the relevant class. For example, if you want to work on the AnimationPlayer, you can find it in the scene/animation folder and make your changes there without much worry about how the rest of the engine works.
To build the engine, as I said at the start you need SCons. Please see Compiling and follow the steps for your platform from the documentation.
And about getting your changes merged into Godot, you want to start with an issue or a proposal (written by you, or somebody else). Followed by a pull request. Please refer to Contributing for the overall process and guidelines to get your changes merged into Godot.
Finally if you are having trouble modifying the engine, you can try the Godot Contributors Chat.
Just so you know, I'm just starting with iOS and Objective-C (3 days).
I'm currently making a 3D object viewer.
I want to be able to load a file and display it in a view. Then the user can rotate and zoom on it.
I first built it with NinevehGL, but it appears that it doesn't handle heavy files (>10 MB) so well.
I'm now trying to go for Cocos3D.
After I installed everything, I created an Xcode project using the Cocos3D 2.0 template.
This template should display (I guess) a 3D "hello world" text.
But it doesn't even compile throwing me the following:
-(void) updateBounds: (CGRect) bounds withDeviceOrientation: (ccDeviceOrientation) deviceOrientation; <---- expected as a type
It appears that ccDeviceOrientation is not recognized as a type.
Would you help me with this?
I have the project on GitHub here, under the folder COCOS3D.
Also, I'm using the following versions of cocos:
cocos2D: 2.0.0 8-Jul-2012
cocos3D: cocos3d 0.7.0
I have, at the same time, posted my question on the cocos2D forum; Bill Hollings was kind enough to reply quickly and pointed out that the versions of cocos2D and cocos3D I use can't work together.
Here is the explanation and what to do.
I'm currently trying to rewrite the binder between Ogre and SDL in my game engine. Originally I used the method outlined in the Ogre Wiki here. I recently updated my version of SDL to 1.3, noticed the SDL_CreateWindowFrom() function call, and re-implemented my binder to let Ogre build the window and then pass the HWND obtained from Ogre into SDL.
Only one window is made and everything renders properly; however, no input is collected, and I have no idea why. Here's the code I am currently working with (on Windows):
OgreWindow = Ogre::Root::getSingleton().createRenderWindow(WindowCaption, Settings.RenderWidth, Settings.RenderHeight, Settings.Fullscreen, &Opts);
size_t Data = 0;
OgreWindow->getCustomAttribute("WINDOW", &Data); // Data now holds the native window handle (HWND)
SDLWindow = SDL_CreateWindowFrom((void*)Data);   // pass the handle itself, not the address of Data
SDL_SetWindowGrab(SDLWindow, SDL_TRUE);
I've tried looking around, and there are a number of people that have done this with one degree of success or another (such as here or here). But no one seems to comment on handling input after implementing this.
I originally thought that maybe since SDL does not own the window it wouldn't collect input from it by default, which is reasonable. So I searched the SDL API and only found that one function "SDL_SetWindowGrab()" that seems to relate to input capture. But calling that has no effect.
How can I get SDL to collect input from my Ogre-made window?
It has been a while, but I figured I would put in the answer for others that may need it. It turned out to be a bug/incomplete feature in SDL 1.3. The CreateWindowFrom method wasn't originally intended to be used exclusively as an input handler. At the time of this writing, another member of my team and I have written patches for Windows and Linux that permit this use to work, and we submitted those patches to SDL.
Suppose I have an OpenGL game running full screen (Left 4 Dead 2). I'd like to programmatically get a screen grab of it and then write it to a video file.
I've tried GDI, D3D, and OpenGL methods (eg glReadPixels) and either receive a blank screen or flickering in the capture stream.
Any ideas?
For what it's worth, a canonical example of something similar to what I'm trying to achieve is Fraps.
There are a few approaches to this problem. Most of them are icky, and it totally depends on what kind of graphics API you want to target, and which functions the target application uses.
Most DirectX, GDI+ and OpenGL applications are double- or triple-buffered, so they all call:
void SwapBuffers(HDC hdc)
at some point. They also generate WM_PAINT messages in their message queue whenever the window should be drawn. This gives you two options.
You can install a global hook or thread-local hook into the target process and capture WM_PAINT messages. This allows you to copy the contents from the device context just before the painting happens. The process can be found by enumerating all the processes on the system and look for a known window name, or a known module handle.
You can inject code into the target process's local copy of SwapBuffers. On Linux this would be easy to do via the LD_PRELOAD environment variable, or by calling ld-linux.so.2 explicitly, but there is no equivalent on Windows. Luckily there is a framework from Microsoft Research which can do this for you, called Detours. You can find it here: link.
The demoscene group Farbrausch made a demo-capturing tool named kkapture which makes use of the Detours library. Their tool targets applications that require no user input, however, so they basically run the demos at a fixed framerate by hooking into all the possible time functions, like timeGetTime(), GetTickCount() and QueryPerformanceCounter(). It's totally rad. A presentation written by ryg (I think?) about kkapture's internals can be found here. I think that's of interest to you.
For more information about Windows hooks, see here and here.
EDIT:
This idea intrigued me, so I used Detours to hook into OpenGL applications and mess with the graphics. Here is Quake 2 with green fog added:
Some more information about how Detours works, since I've used it first hand now:
Detours works on two levels. The actual hooking only works in the same process space as the target process. So Detours has a function for injecting a DLL into a process and forcing its DllMain to run, as well as functions that are meant to be used from within that DLL. When DllMain runs, the DLL should call DetourAttach() to specify the functions to hook, as well as the "detour" function, which is the code you want to override with.
So it basically works like this:
You have a launcher application whose only task is to call DetourCreateProcessWithDll(). It works the same way as CreateProcessW, only with a few extra parameters. This injects a DLL into a process and calls its DllMain().
You implement a DLL that calls the Detour functions and sets up trampoline functions. That means calling DetourTransactionBegin(), DetourUpdateThread(), DetourAttach() followed by DetourTransactionEnd().
Use the launcher to inject the DLL you implemented into a process.
There are some caveats though. When DllMain runs, libraries that are imported later with LoadLibrary() aren't visible yet. So you can't necessarily set up everything during the DLL attachment event. A workaround is to keep track of all the functions that are overridden so far, and try to initialize the others inside the functions that you can already call. This way you will discover new functions as soon as LoadLibrary has mapped them into the memory space of the process. I'm not quite sure how well this would work for wglGetProcAddress, though. (Perhaps someone else here has ideas regarding this?)
Some LoadLibrary() calls seem to fail. I tested with Quake 2, and DirectSound and the waveOut API failed to initialize for some reason. I'm still investigating this.
I found a sourceforge'd project called taksi:
http://taksi.sourceforge.net/
Taksi does not provide audio capture, though.
I've written screen grabbers in the past (DirectX7-9 era). I found good old DirectDraw worked remarkably well and would reliably grab bits of hardware-accelerated/video screen content which other methods (D3D, GDI, OpenGL) seemed to leave blank or scrambled. It was very fast too.
I'm building an app for Ubuntu using Ogre3D, CEGUI and OIS, which is now all compiling and running as expected. Having got the basic app running, I decided to build a custom config file in which I can store both graphics settings (i.e. resolution, fullscreen, etc.) as well as other configurable settings I will need in the app down the track.
As a starting point I changed from calling mRoot->showConfigDialog() at each startup to :
if(!mRoot->restoreConfig())
    mRoot->showConfigDialog();
This was meant to restore the config from the 'ogre.cfg' file, which exists, and so it did; but the app got as far as loading a skybox texture on the first scene create and then just sat there doing nothing.
Since that wasn't what I wanted anyway, I tried setting things up manually, like:
RenderSystem *rs = mRoot->getRenderSystemByName("OpenGL Rendering Subsystem");
mRoot->setRenderSystem(rs);
rs->setConfigOption("Full Screen","No");
rs->setConfigOption("Video Mode","1024 x 768");
Those matched the settings from 'ogre.cfg' that I had been using before via the showConfigDialog() function. I got the same issue with this manual configuration, however: while loading the skybox textures it just stops.
I can't work out why these changes have any bearing at all on how the app runs, and since OIS grabs the input and locks the mouse to the screen, I am having trouble trying to debug it with gdb.
Regarding the mouse locking: you can run gdb on another display. It can either be a display on the same computer (including options like Xephyr, which creates virtual displays nested in the current display, or just a second X session on a different display; if you have a working .xinitrc, running two or three X sessions at a time is simple), or it can be on another machine on your network (e.g. via ssh -X).