I'm looking into making some changes within the source code of the engine, so I looked at the source code on GitHub, but I'm absolutely clueless as to how it's actually put together.
On the web I couldn't find anything on how the engine itself is made, only what it can do.
Several questions come to mind:
Where does the main script start from? Is it from Main::setup()?
What would be the flowchart of how the engine operates?
How is the engine UI built? (from a web dev point of view, what is the equivalent HTML for it?)
I'm no advanced expert in C++, so even a general, abstracted overview would be really helpful to get started.
The Godot build is orchestrated from Python using SCons, as you can read in the documentation under Introduction to the buildsystem. It is different for each platform (e.g. you need the JDK for Android).
As you are aware, you can find the Godot source code on GitHub. Before going further I need to point out that, at the time of writing, the master branch of the repository corresponds to the development builds of Godot 4. You might want to switch to a different branch depending on the version you want to work on.
Disclaimer: I'm more familiar with Godot 3 code base.
Now, not only is the build process different for each platform, so are the API bindings and the entry point. You want to look inside the platform folder for operating-system-specific code.
For example, the entry point for Windows can be found in godot_windows.cpp and it looks like this:
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow) {
    godot_hinstance = hInstance;
    return main(0, nullptr);
}
You can follow the logic from there; you will find that ultimately they do some initialization and call the setup and start methods of the Main class. You can find the Main class in the aptly named main folder. Afterwards the platform-specific code will enter its main loop, and after it finishes it will call the cleanup method of the Main class and then release any platform-specific resources.
By the way, when I say a class is in a folder, I mean there are both the .cpp and .h files.
The main loop might do other things, but it must call the iteration method of the Main class. You can see the code computes time, calls into the different "servers", and dispatches input, among other things.
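Very roughly, and paraphrasing rather than quoting the actual source (this is a sketch based on the Godot 3 code, where os.run() is the platform's main loop), the overall flow looks like this:

Error err = Main::setup(execpath, argc, argv); // parse arguments, initialize the "servers"
if (err == OK && Main::start()) {              // create the MainLoop (a SceneTree by default)
    os.run();                                  // platform main loop: calls Main::iteration()
                                               // repeatedly until it signals quitting
}
Main::cleanup();                               // finalize everything; the platform code then
                                               // releases its own resources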
We don't have a flowchart. Sadly we have to piece together the overarching processes. For example, I've written elsewhere about what happens when you instance a scene. I also looked into queue_free, which you can find elsewhere.
I'll talk a little further about the main loop below. But first I want to point you to the diagrams we do have:
Architecture diagram.
Inheritance tree.
Now, the more familiar part of the main loop is that there must be an instance of the MainLoop class. It defines initialization and finalization methods, and also methods to be called on each iteration of the main loop. By default it will be an instance of the SceneTree class (which extends the MainLoop class), but you can change that in project settings. You can find the MainLoop class in the core/os folder and the SceneTree class in the scene/main folder.
The SceneTree class has the means to propagate calls of _process and _physics_process on the… scene tree, among other things. The SceneTree has a root object of type Viewport (in Godot 4 it is a Window, which is a type of Viewport). And, as you know, Viewport is a type of Node, and can have children. The children of the root are the autoloads, the current scene, and whatever else you put there… Thus from there down it is Nodes which I expect you to be more familiar with.
On the other side you have singletons (actual singletons, not autoloads), including the "servers" and some other static utility classes. If you recall, Godot has different rendering backends, which are all behind the façade of a "server" (the VisualServer in Godot 3, the RenderingServer in Godot 4). In Godot 3 we had a choice of GLES2 and GLES3 for the rendering backend. And the backends also require bindings, which you can find, again, in the platform folder.
Here is where my familiarity with Godot source code runs out: I don't know how the shader pipeline works.
The UI? Just like everything else, it is rendered with whatever rendering backend is being used. On the web? It will be in a Canvas HTML element (with a WebGL context). The HTML? The HTML code of the web build template is configurable too (the Custom HTML Shell option in the export settings); see Custom HTML page for Web export. The build process for the web? It uses Emscripten (to WebAssembly). No, there is no Node.js stuff in Godot, just to be clear.
As for making changes, you can probably just work on the relevant class. For example, if you want to work on the AnimationPlayer, you can find it in the scene/animation folder and make your changes there without much worry about how the rest of the engine works.
To build the engine, as I said at the start, you need SCons. Please see Compiling in the documentation and follow the steps for your platform.
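For example, an editor build typically boils down to a single command from the repository root (the exact flags depend on the branch and your platform; check the Compiling pages, but these are the common forms from the docs):

Godot 3.x editor build: scons platform=windows tools=yes target=release_debug
Godot 4 editor build: scons platform=windows target=editor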
And as for getting your changes merged into Godot, you want to start with an issue or a proposal (written by you or somebody else), followed by a pull request. Please refer to Contributing for the overall process and the guidelines to get your changes merged into Godot.
Finally if you are having trouble modifying the engine, you can try the Godot Contributors Chat.
This question is a bit similar to "Can I modify the side-by-side assembly search sequence?", except for a little twist.
We have a couple of different programs, written in different languages, that talk to each other when they run. To achieve this we made .NET COM objects that we load using Registration-Free COM Activation. This works well. Some of the languages we use can't load COM objects, so we made a C++ wrapper DLL that uses ACTCTX to activate the COM objects from their embedded manifests. Also working well.
But now we have a case where our C++ wrapper is loaded by code that is run by an application that isn't ours (let's call it the runtime), which is located in one place while our application is located somewhere else. We'd rather deploy our COM objects in the same place as our application rather than next to the runtime application.
Not that it is important, as the concept remains the same, but the runner is FourJs' Genero (fgl.exe), and the code that calls our C++ wrapper is in .42m files. The runner (fgl.exe) is installed with Genero, by default in Program Files\FourJs, and our applications are in another directory with our company's name, i.e. Program Files\MyCompany.
This is similar to what you'd get with Java: runtime in one place, application somewhere else.
So in our case, our .42m loads the C++ wrapper properly, and the wrapper activates the COM object (located in the same directory as our .42m and the wrapper) properly, but once we try to instantiate an object, we get an "80070002" file-not-found error.
I've read Assembly Searching Sequence and noticed the described behavior using Process Monitor.
So what happens is, since ultimately it's fgl.exe that is running, the Windows Side-by-Side loader looks into:
C:\Program Files(x86)\FourJs\fgl\gen2.50\bin\MyCom.dll
C:\Program Files(x86)\FourJs\fgl\gen2.50\bin\MyCom.dll\MyCom.dll
While my COM is really inside of C:\Program Files(x86)\MyCompany\MyApplication\MyCom.dll
To confirm the behavior, we copied the COM into the same directory as fgl.exe and, as expected, it works.
So I would like to be able to add a search directory to my activation context so that it looks for this DLL in my deployment directory.
Is this possible?
If I can't find another solution, we'll end up deploying our COM objects inside that directory, but that's just not right.
Thanks
I just started coding VST plugins, but since I'm on a Mac I would also like to build Audio Units. I managed to compile some sample code, and these components showed up inside my Logic DAW.
In VST there's the possibility to create a plugin shell. This is a single 'dll'/'vst' file which has multiple effects in it. During startup the host calls a function called getNextShellPlugin, and the plugin dynamically registers its contents at runtime. The effects then show up perfectly in the plugin list.
Is there a similar way I can achieve this with Audio Units?
I managed to get a plugin shell by adding another component description to the Info.plist. But I have to hardcode every effect in there, and that's not what I want.
I also tried to use AudioComponentRegister, but this didn't work properly for me: since I call this function inside the constructor, the component has to be instantiated for the registration to happen. But to list the components inside Logic, they need to be found during the scan, where the component does not get instantiated by default.
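For reference, this is roughly what I tried (simplified; the subtype/manufacturer codes and MyEffectFactory are placeholders of mine):

#include <AudioToolbox/AudioToolbox.h>

// Register a second effect at runtime. The catch: this only runs once the
// component is instantiated, which Logic's plugin scan does not do by default.
static void registerSecondEffect()
{
    AudioComponentDescription desc = {};
    desc.componentType         = kAudioUnitType_Effect;
    desc.componentSubType      = 'fx02';   // placeholder subtype for the second effect
    desc.componentManufacturer = 'Manu';   // placeholder manufacturer code
    AudioComponentRegister(&desc,
                           CFSTR("MyCompany: Effect B"),
                           0x00010000,     // version 1.0.0
                           MyEffectFactory); // my AudioComponentFactoryFunction
}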
So the goal is to register multiple effects inside 1 component at runtime.
Does someone maybe have a tip or a solution? Thanks a lot!
I'm struggling to find a basic example of how to set up a minimal plugin host with the VST 3.x SDK. The official documentation is absolutely cryptic and brief; I can't get anywhere. I would like to:
understand the minimal setup: required headers, interfaces to implement, ...;
load a VST3 plugin (no fancy GUI, for now);
print out some data (e.g. plugin name, parameters, ...).
That would be a great start :)
Yeah, VST3 is rather mysterious and poorly documented. There are not many good examples, partially because not many companies (other than Steinberg) actually care about VST3. But all cynicism aside, your best bet would be to look at the Juce source code to see their implementation of a VST3 host:
https://github.com/julianstorer/JUCE/blob/master/modules/juce_audio_processors/format_types/juce_VST3PluginFormat.cpp
There are a few other VST3-related files in that package which are worth checking out. Anyways, this should at least be enough information to get you started with a VST3 host.
It's worth noting that Juce is GPL (unless you pay for a license), so it's a big no-no to borrow code directly from it unless you are also using the GPL or have a commercial license. Just a friendly reminder to be a responsible programmer when looking at GPL'd code on the net. :)
Simple VST3 hosts already exist in the VST SDK. It is not difficult to augment them, but there are some things to keep in mind.
The samples under public.sdk/vst-hosting in the VST SDK contain an EditorHost and an AudioHost. The first handles the GUI, the second handles the effect (the signal processing). You can combine the two. Neither is a full implementation.
VST objects are COM objects, so you have to make sure to set up the application context correctly so that your COM objects persist between calls. EditorHost and AudioHost both do that in a couple of lines, using a global context variable (look for pluginContext).
If you use separate calls to load and unload effects, process data, and so on, you have to keep the COM object pointers around so they are not unloaded. For example, you may be tempted to ignore the Steinberg::Vst::Module module, since you don't need it once the effect is loaded, but you have to keep a pointer to it somewhere, globally or in the main application thread. If not, the automatic release of that pointer will unload the plugin as well, and subsequent calls to the plugin will fail.
The construction of VST effects is relatively simple. They consist of a component (the effect) and a controller (the GUI). Both are instantiated when a Steinberg::Vst::PlugProvider is loaded (some effects do not have a GUI). Both examples above load a plugprovider. Once you load a plugprovider, you are essentially done.
The following code is sufficient to load a plugprovider (the whole effect). Assume returning -1 means an error:
std::string error;
std::string path = "somepath/someeffect.vst3";

VST3::Hosting::Module::Ptr module = VST3::Hosting::Module::create(path, error);
if (!module)
    return -1;

IPtr<PlugProvider> plugProvider;
VST3::Optional<VST3::UID> effectID = std::move(uid); // uid: class ID of the effect you want, if any

for (auto& classInfo : module->getFactory().classInfos())
{
    if (classInfo.category() == kVstAudioEffectClass)
    {
        if (effectID)
        {
            if (*effectID != classInfo.ID())
                continue;
        }
        plugProvider = owned(new PlugProvider(module->getFactory(), classInfo, true));
        break;
    }
}

if (!plugProvider)
    return -1;
After this, plugProvider->getComponent() and plugProvider->getController() give you the effect and the GUI. The controller has to be displayed in a window, of course, which is done in EditorHost. These are the implementations of IComponent, IAudioProcessor and IEditController in the VST SDK.
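To address the "print out some data" part of the question: once the plugprovider is loaded, something along these lines works (a sketch in the spirit of the SDK's hosting samples; error handling omitted):

Steinberg::Vst::IComponent* component = plugProvider->getComponent();
Steinberg::Vst::IEditController* controller = plugProvider->getController();
// (classInfo.name() inside the loop above is where the plugin's name comes from)

if (controller)
{
    Steinberg::int32 numParams = controller->getParameterCount();
    std::cout << "parameters: " << numParams << "\n";
    for (Steinberg::int32 i = 0; i < numParams; ++i)
    {
        Steinberg::Vst::ParameterInfo info = {};
        if (controller->getParameterInfo(i, info) == Steinberg::kResultOk)
            std::cout << "  param id " << info.id << "\n";  // info.title is UTF-16 (String128)
    }
}

plugProvider->releasePlugIn(component, controller);  // hand them back when you are done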
The source/vst/testsuite part of the VST SDK will show you the full functionality of both of these parts (it will essentially give you the functional calls that you can use to do everything you want to do).
Note the module and plugprovider loaded in the code above. As mentioned above, if you don't keep the module pointer, there is no guarantee the plugprovider pointer will survive. It is difficult to keep track of what gets released when in the VST SDK.
So at work I have been working for a few months on an OPOS driver for a few different things. I didn't create the project, but I have taken it over and am the only one developing it. Today I got curious about the way it was done, and I think it may have started off on the wrong foot. I had to do a little bit of digging to find out that it uses the OPOS drivers from a company called MCS (Monroe Consulting Services). I downloaded 1.13 and installed the MSI version. I fired up VS and created a new MFC DLL. I then went to add a class. This is where I am confused.
It doesn't matter if I choose Typelib or ActiveX; it usually gives me the same list of interfaces that I can add/extend from (with one exception that comes to mind: with MSR there is an events interface that I can extend). They both make the same header file (in the case of MSR it is COPOSMSR.h), but one extends CCmdTarget and the other extends CWnd. This is my first question: which should I choose? What is a typelib, what is an ActiveX component, and how do they differ from one another?
The one I've been working on extends CCmdTarget. For the life of me I cannot figure out how the driver knows to use one of the files (USNMSRRFID), but that is where all the development went (I broke it up a bit so it wasn't just one huge file). That file doesn't extend COPOSMSR; it extends CCmdTarget as well. The only time I see anything mention the USN file is in MSRRFID.idl (which confuses me even more). Anyone have clarity on this?
Part of me thinks this could make a very big impact when it comes time to deploy. A few of the test apps that have been written to make use of this driver require a somewhat confusing setup process that involves registering different drivers, copying files into a specific folder, setting up the registry, and so forth. I think that if I can get a grip on what this all means and how to make a nice application that extends one of these OPOS devices properly, I could save myself further grief in the future.
Any tips or pointers? Sorry if it is a newb question, but I am new to C++. I started with Java then moved to C#, so some of this stuff is WAY over my head.
Well, I've done TONS of digging, and it is like searching for dinosaurs: not easy, and hard to find. I will end up writing a nice little how-to on this, but for now I will put up my findings. Although I still don't have this 100%, I know I am close.
It turns out the typelib and ActiveX things are not a big concern and only come into play after you've gotten started. ActiveX is for Control Objects, and Typelib is for the Service Object. The most important thing is to get started correctly. I found an article on a Chinese website that offers some OK tips after figuring out the translation errors. To start with, you will want to make a C++ project with Automation; it can be ATL or MFC, and my preference is MFC. In the UPOS 1.13 PDF (or newer), Appendix A section 8 describes the responsibilities of the Service Object. It has the main methods you need to implement: there are 16 methods you have to add, and at least 4 methods that get/set the properties for your OPOS device.
So to get started you will need to open up the Add Class wizard (for MFC classes) and click Add MFC Class. You will want your base class to be CCmdTarget. Come up with a classy class name (I chose PinpadSOCPP), then in the Automation radio buttons select "Creatable by type ID". It should fill in your type ID as [Project Name].[Class name], so mine was PinpadSO.PinpadSOCPP. Hit Finish. This makes a nice interface file that you can use Class View on to add methods and so forth.
As for adding the methods, there are two things to note, and one of them I haven't figured out 100% yet. The first is that you have to implement all the methods in that section with the correct parameters and return values. Most of them return LONG (a 32-bit signed number), and the two most common parameters are LONG and BSTR (there is the occasional pointer for when you have "out" parameters). This is the part where I think I am currently failing, since I don't know if I have them all implemented correctly, and that is why I am getting error 104/305 (which, per the Chinese article, means I am missing something from my methods). I'm not sure if it is case sensitive, and of the 7 properties that look to need get/set, I'm not sure which ones need to be implemented, because the MSR SO I work with doesn't use them all and that SO is working. The other thing is that after you implement the base OPOS methods, you also have to implement the extra methods for your specific OPOS device. Since I am doing a PIN pad, there are 6 additional methods I have to implement.
Now, this is a lot of time-consuming work, because you have to open up Class View, navigate to the name of your project class, expand it, and go to the interface portion. My project name is PinpadSO, and the file I am implementing this in is PinpadSOCPP (which means the interface name is IPinpadSOCPP). Right-click on IPinpadSOCPP and click Add > Add Method. This brings you to a 2-step process: you fill in your return value and the name of your function, and add in all your parameters. Hit Next, fill out some help string info (if you want), and hit Finish. After you do that 20+ times it gets old and slow... and if you are like me you type Computer instead of Compute and flip-flop letters, or forget to hit Add on all your parameters. A person could make a nice little program to edit the 3 files that get changed each time you add a method, and that would speed it up considerably. If you make a mistake, you will need to open up [project name].idl, [class name].h, and [class name].cpp; those are the 3 files the methods get added to directly. I recommend not making a mistake.
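For orientation, this is roughly what one added method ends up looking like across those three files (illustrative only; OpenService is one of the required methods from the UPOS appendix, and your wizard-generated attributes and dispatch-map macros may differ slightly):

// PinpadSO.idl (inside the IPinpadSOCPP dispinterface):
//   [id(1)] LONG OpenService(BSTR DeviceClass, BSTR DeviceName, IDispatch* pDispatch);

// PinpadSOCPP.h (declaration; the wizard also adds a matching dispatch-map entry in the .cpp):
LONG OpenService(LPCTSTR DeviceClass, LPCTSTR DeviceName, LPDISPATCH pDispatch);

// PinpadSOCPP.cpp
LONG PinpadSOCPP::OpenService(LPCTSTR DeviceClass, LPCTSTR DeviceName, LPDISPATCH pDispatch)
{
    // open the physical device, keep the Control Object's IDispatch, etc.
    return 0;   // 0 = success
}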
So now that all that hard work is out of the way, compile your program. If you want to save yourself an extra step, you can turn on auto-register in the linker project settings (NOTE: if you do that you'll need to run Visual Studio as admin on Vista or higher). This saves you from having to open an admin command window, navigate to your DLL, and run regsvr32 on it. The nice thing is that you don't have to do that over and over again; just the once will do. I have no hard facts that it works like that every time, but with the MSR SO I work on, I'll make changes to it, compile it, then open up my OPOS tester program and the changes have taken effect.
After that you need to make your registry additions. Navigate to HKLM\software\OLEforRetail\ServiceOPOS
(NOTE: if you have an x64 machine you'll do this twice: once there, and again at HKLM\software\Wow6432Node\OLEforRetail\ServiceOPOS)
You'll need to add a key for whatever OPOS device you are working with. I am making a PIN pad SO, so I made a key called PINPad (check your UPOS document to see what name you should give it). Lastly, choose a name for your device. I chose the model type from the vendor as my device name (C100) and made a subkey under PINPad. The default REG_SZ value needs to be your registered SO device type ID; in my case it is PinpadSO.PinpadSOCPP.
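Written out as a .reg file (with my names; adjust the device-class key and ProgID to yours), the result is just:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\OLEforRetail\ServiceOPOS\PINPad\C100]
@="PinpadSO.PinpadSOCPP"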
If you don't have an OPOS test program (I just made my own as a console program), you can use the Microsoft OPOS test app (I couldn't get it to work on my x64 machine... but maybe you'll have better luck with it). If you do decide to make your own OPOS test app, make sure you compile it for x86 even if you have an x64 machine; OPOS does not like x64 for some reason (probably the pointer length, I'd assume). At any rate, once you have it all set up, run your test app (in my case I am just running OPOSPinpadClass pin = new OPOSPinpadClass(); Console.WriteLine(pin.Open("C100")); ) and hope for 0 :)
I am currently getting 104 (E_NOSERVICE), and like I said before, I think it is because I don't have all my methods correct. If that turns out to be the case I'll edit this response, or I'll report back and say what it really was.
Anywho, I hope this helps anyone else who decides they want to make their own SO. Good luck.
UPDATE
OPOS checks a couple of properties when you call the Open command. One property that is a must to implement is in GetPropertyNumber, and it is PIDX_ServiceObjectVersion. You will need to set this number to return (1000000 * majorVersion) + (1000 * minorVersion) + revision; since I am making an OPOS 1.13 compatible SO, my returned ServiceObjectVersion is 1013000. You will also want to implement 3 properties in GetPropertyString (a rough sketch follows the list below):
PIDX_DeviceDescription
PIDX_DeviceName
PIDX_ServiceObjectDescription
For all other values you can return an empty string or 0 until you start hooking all those things up.
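A rough sketch of what those two getters can look like while you are bootstrapping (the PIDX_* constants come from the OPOS include files; the signatures follow the UPOS Appendix A style, so double-check them against your spec):

LONG PinpadSOCPP::GetPropertyNumber(LONG lPropIndex)
{
    switch (lPropIndex) {
    case PIDX_ServiceObjectVersion:
        return 1013000;              // 1 * 1000000 + 13 * 1000 + 0  ->  OPOS 1.13.0
    default:
        return 0;                    // everything else: 0 until you hook it up
    }
}

BSTR PinpadSOCPP::GetPropertyString(LONG lPropIndex)
{
    switch (lPropIndex) {
    case PIDX_DeviceDescription:
        return SysAllocString(L"MyCompany C100 PIN pad");
    case PIDX_DeviceName:
        return SysAllocString(L"C100");
    case PIDX_ServiceObjectDescription:
        return SysAllocString(L"PinpadSO Service Object");
    default:
        return SysAllocString(L"");  // everything else: empty string until you hook it up
    }
}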
As a side note, if you don't want to make it in C++, you don't have to. You can make it in any language that you can write an ActiveX object in (such as a COM-visible .NET class library).
Suppose I have written a game engine in C++. It has functions such as addPlayer(Vec3f position, Model playerModel) and addExplosion(Vec3f position, Size explosionSize).
Now, those functions can be called in some sort of test class, and then the project can be compiled and run. This takes forever.
What would be ideal is to have some basic text editor where I can type these functions, press Ctrl+U, and have this somehow call the precompiled functions of the game engine, i.e. without recompiling the game engine.
How would this be done?
Usually you would compile your engine into a .dll and link it to your project. Then you can just call the exported functions and don't have to recompile the engine when you only want to use them.
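A minimal sketch of what that looks like on Windows, using the engine functions from the question (ENGINE_API/ENGINE_EXPORTS and engine_types.h are illustrative names; on Linux you'd use visibility attributes instead of __declspec):

// engine_api.h -- shared between the engine DLL project and any client project
#pragma once
#include "engine_types.h"    // hypothetical header providing Vec3f, Model, Size

#ifdef ENGINE_EXPORTS        // defined only by the engine DLL project
  #define ENGINE_API __declspec(dllexport)
#else
  #define ENGINE_API __declspec(dllimport)
#endif

ENGINE_API void addPlayer(const Vec3f& position, const Model& playerModel);
ENGINE_API void addExplosion(const Vec3f& position, Size explosionSize);

The client project then just includes engine_api.h and links against the engine's import library; only the client code is recompiled when your test code changes.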
If you are asking about design iteration: you create a data format that is read in and converted to entities in your scene graph. You need to use the factory pattern. You can use a serialization library where each object knows how to read/write/persist itself.
By having a data format that represents a "snapshot" of your game state, you can read/save it from both the game and an editor. Later you can make design changes to a running game instance by having functions that re-read the data at runtime.
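As a sketch of the idea, with a deliberately trivial line-based format and the engine calls from the question (loadModel and the Vec3f/Size constructors are assumptions):

#include <fstream>
#include <sstream>
#include <string>

// Re-readable scene "snapshot": one entity per line, e.g.
//   player 0 1.5 0 knight.obj
//   explosion 10 0 3 2.5
void loadScene(const std::string& path)
{
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream fields(line);
        std::string type;
        float x, y, z;
        fields >> type >> x >> y >> z;
        if (type == "player") {
            std::string modelFile;
            fields >> modelFile;
            addPlayer(Vec3f(x, y, z), loadModel(modelFile));  // loadModel: hypothetical asset helper
        } else if (type == "explosion") {
            float size;
            fields >> size;
            addExplosion(Vec3f(x, y, z), Size(size));
        }
    }
}

Calling loadScene again at runtime (e.g. bound to your Ctrl+U) gives you the "re-read the data" part without recompiling anything.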
It seems like right now you might have hardcoded/mixed client code with engine code, which might be hard to separate.
If you are asking about compilation, then you will want to compile to a library (either a .dll or a static .lib/.so). Then compile your client/specific code against your engine lib(s). They should be in separate projects.