I'd like to use the same noise function throughout various .glsl files without copying and pasting code every time.
What is the Processing way to achieve this?
As far as I know, there is no way to do this dynamically in Processing. However, you can preprocess your shaders with glslify, which is a compile-time include system for GLSL.
glslify (https://github.com/glslify/glslify) is a bit of overkill, though. Whenever I make an OpenGL / GLSL application, I roll my own system.
Something like this (taken from my Java OpenGL Minecraft clone):
private static String process(String code) {
    String[] lines = code.split("\n");
    for (int i = 0; i < lines.length; i++) {
        // Replace every "#include : name" line with the (recursively
        // processed) contents of path_to_include_directory/name.glsl
        if (lines[i].startsWith("#include : ")) {
            String module = lines[i].replace("#include : ", "");
            lines[i] = process(Utils.loadFile("path_to_include_directory/" + module + ".glsl"));
        }
    }
    // Stitch the lines back together
    StringBuilder str = new StringBuilder();
    for (String s : lines) {
        str.append(s).append("\n");
    }
    return str.toString();
}
This code does a copy-and-paste on includes: each #include line is replaced by the processed contents of the referenced file. Usage:
///// MAIN FILE
#include : bar_module
float foo(vec3 p){
return bar(p * 5.0);
}
///// path_to_include_directory/bar_module.glsl
float bar(vec3 p) {
/* ... ... */
}
This is a recursive include mechanism; it is fairly efficient and runs at runtime when the shader source is loaded, so no separate compilation step is needed.
The only downside is that circular dependencies aren't detected. But you shouldn't be in a situation with circular dependencies anyway :)
DISCLAIMER
I don't actually remember how this stuff works, but I think this is the general idea:
File file = new File("my_shader.vert");
file.setContents(process(file.readContents()));
PShader my_shader = new PShader("my_shader");
NOTE: setContents and readContents don't actually exist, but you get the idea :)
PS
Utils.loadFile returns the contents of the file whose name is passed in, as a single String.
I'm using the Qt framework to create a ui for my business logic.
The class responsible for building the ui provides several methods which, step by step, initialize the ui elements, lay them out, group them and, finally, format them (e.g. void MyUi::init3_formatUiElements()).
Naturally, some ui elements need numerous settings applied, so this method might look like:
void MyUi::init3_formatUiElements() {
_spinBox_distance->setMinimum(0.0);
_spinBox_distance->setMaximum(10.0);
_spinBox_distance->setSingleStep(0.5);
_spinBox_distance->setSuffix(" meters");
//...
//same for other widgets
return;
}
Objects like QDoubleSpinBox* _spinBox_distance are member fields of the MyUi class.
I would like to have a "temporary alias" for _spinBox_distance, so that the above method body simplifies to:
void MyUi::init3_formatUiElements() {
//create alias x for _spinBox_distance here
x->setMinimum(0.0);
x->setMaximum(10.0);
x->setSingleStep(0.5);
x->setSuffix(" meters");
//...
//free alias x here
//same for other widgets: create alias x for next widget
//...
//free alias x here
return;
}
This would speed up the typing process and would make code fragments more copy/paste-able, especially for ui elements of a similar type.
Apart from scoping each block in curly braces
{ QDoubleSpinBox*& x = _spinBox_distance;
x->setMinimum(0.0);
//...
}
{ QLabel*& x = _label_someOtherWidget;
//...
}
is there an elegant way to achieve this?
I tried the above syntax without scoping, but destructing x then of course leads to destruction of the underlying widget.
Maybe
QDoubleSpinBox** x = new QDoubleSpinBox*;
x = &_spinBox_distance;
(*x)->setMinimum(0.0);
//...
delete x;
but that doesn't make things much easier to type (three extra lines, pointers to pointers, (*x))... :D
EDIT: This one does not work anyway, since after delete x, x can't be redeclared with another type.
What about using a macro?
#define Set(argument) _spinBox_distance->set##argument
and
Set(Minimum(0.0));
Set(Maximum(10.0));
Set(SingleStep(0.5));
Set(Suffix(" meters"));
Or
#define Set(Argument, Value) _spinBox_distance->set##Argument(Value)
Set(Minimum, 0.0);
Set(Maximum, 10.0);
Set(SingleStep, 0.5);
Set(Suffix, " meters");
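To reuse the second macro for the other widgets, it can be redefined per block; a sketch (the QLabel name is the one from the question, its text is hypothetical):
#define Set(Property, Value) _spinBox_distance->set##Property(Value)
Set(Minimum, 0.0);
Set(Maximum, 10.0);
Set(SingleStep, 0.5);
Set(Suffix, " meters");
#undef Set

#define Set(Property, Value) _label_someOtherWidget->set##Property(Value)
Set(Text, "distance");   // hypothetical label text
// ...
#undef Set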
Collecting the fundamental conceptual points about the problem in question from the comments section, I can now post the syntactical/technical answer. This approach should, without a doubt, not be chosen in any kind of "complex" situation (or rather not at all):
- bad coding style:
  - same name for different things
  - a name which doesn't tell you anything about the object
- move repeated code to dedicated functions, which...
  - may specialize on several ui types
  - are template functions
  - ...
- in case of Qt: use Qt Designer.
- ...
{ auto x = _spinBox_distance;
x->setMinimum(0.0);
//...
}
{ auto x = _label_someOtherWidget;
//...
}
will do the trick.
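This works because auto deduces QDoubleSpinBox* here, so x is only a copy of the pointer; when it goes out of scope nothing is destroyed, which was the concern with the reference/delete attempts above. A minimal illustration (widget name as in the question):
{
    auto x = _spinBox_distance;   // x is a QDoubleSpinBox*, copied from the member
    x->setMinimum(0.0);
    // ...
}   // only the local pointer x goes away; the widget itself is untouched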
I think your code looks fine as it is. I find it much more useful for code to be easy to read and understand than easy to write; remember that you write the code once, but have to read it many times afterwards.
In cases like this I make it easier to write with good old (and oft blamed for mistakes) copy and paste: grab _spinBox_distance->set and just paste, finish the line, paste, finish the line, etc.
If, however, you find yourself writing those 4 setters in a row over and over again, then put them in 1 function that takes in the 4 parameters.
void SetParameters(QDoubleSpinBox* spinBox, double min, double max, double step, const QString& suffix)
{
    spinBox->setMinimum(min);
    spinBox->setMaximum(max);
    spinBox->setSingleStep(step);
    spinBox->setSuffix(suffix);
}
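A call site is then a one-liner per widget, for example (using the widget name from the question):
SetParameters(_spinBox_distance, 0.0, 10.0, 0.5, " meters");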
I am working on a C++ server project that has been plagued with a growing main() function, and the code base has grown to a point where compilation time is about 6 minutes (in debug mode) even after I make the slightest change to the main() function. (The main() function is about 5000 lines long!)
I'm using Visual Studio 2017, and (as I understand) the compiler has precompiled-header support, as well as the ability to skip recompiling unmodified functions. But those features are currently of little use, because most of the logic is in the main() function.
Here's a (very simplified) version of my code:
struct GrandServer{
std::map<std::string,std::function<void(std::string)>> request;
/* some other functions of this server */
};
int main()
{
SQLClient sql_client;
auto query_database=[&sql_client](auto&& callback){/*stuff*/};
GrandServer server;
server.request["/my_page.html"] = [](std::string&& data){
// Do stuff
};
server.request["/my_page_2.html"] = [](std::string&& data){
// Do more stuff
};
server.request["/my_page_3.html"] = [](std::string&& data){
// Do even more stuff
};
server.request["/my_page_4.html"] = [&query_database](std::string&& data){
// Do many many callbacks
query_database([](std::vector<std::string>&& results){
std::string id = std::move(results.front());
do_something_else([id = std::move(id)](auto&& param) mutable {
/* do more stuff, call more functions that will call back, then answer the client */
});
});
};
/* Many many more lambda functions */
}
In essence, the core logic of the whole application is contained within the main() function, by defining lambdas stored in a std::map. The lambda functions contain a few levels of lambda functions within them, mostly defining (async) callbacks from the database. The other standalone functions consist mainly of the functions in GrandServer as well as various utility functions (e.g. time conversion, async callback utilities, unicode utilities, etc.) but none of them form the core logic of the application. This feels like really bad code sprawl :(
I'm thinking of converting all the top-level lambdas (i.e. those stored directly in server.request) into normal member functions, with their definitions split over a few separate compilation units, like so:
// split definitions of methods in this class over several compilation units
// header file left out for brevity
struct MyServer{
SQLClient sql_client;
// query_database was a generic lambda in main(); as a member it can be a
// function template so it still accepts any callable
template <typename Callback> void query_database(Callback&& callback){/*stuff*/}
void do_my_page(std::string&& data){
// Do stuff
}
void do_my_page_2(std::string&& data){
// Do stuff
}
void do_my_page_3(std::string&& data){
// Do stuff
}
void do_my_page_4(std::string&& data){
// Do many many callbacks
query_database([](std::vector<int>&& results){
do_something_else([](auto&& param){
/* do more stuff, call more functions that will call back, then answer the client */
});
});
}
};
// main.cpp
struct GrandServer{
std::map<std::string,std::function<void(std::string)>> request;
/* some other functions of this server */
};
int main()
{
GrandServer server;
MyServer my_server;
server.request["/my_page.html"] = [&my_server](auto&&... params){my_server.do_my_page(std::forward<decltype(params)>(params)...);};
server.request["/my_page_2.html"] = [&my_server](auto&&... params){my_server.do_my_page_2(std::forward<decltype(params)>(params)...);};
server.request["/my_page_3.html"] = [&my_server](auto&&... params){my_server.do_my_page_3(std::forward<decltype(params)>(params)...);};
server.request["/my_page_4.html"] = [&my_server](auto&&... params){my_server.do_my_page_4(std::forward<decltype(params)>(params)...);};
/* Lots more lambda functions */
}
While this will reduce the size of main() to something much more tolerable, should I expect this to reduce compilation time significantly? There will not be any reduction in the number of template instantiations (in fact, I introduced some new template lambda forwarders in main() that should get inlined).
Also note that due to the callbacks and capturing of variables using move semantics, it is not easy to change the inner lambda functions within each logic flow to normal standalone member or non-member functions. (Template functions and classes cannot be in separate compilation units.) However, each logic flow is usually no more than 100 lines long so I do not believe that this is necessary.
6 minutes to compile a code base of about 6000-7000 lines seems way too slow, and I would think that this is due to my horrendously long main() function. Should I expect breaking this function up as I described above to significantly improve the compile time of this project?
Have you copied every single #include <> into stdafx.h? This goes a long way towards reducing compilation times. The compiler may complain about the precompiled header being too large, but the default limit is ridiculously small.
The -Zm option controls the memory the compiler allocates for the precompiled header; its argument is a scaling factor relative to the default limit rather than a size in megabytes.
I've seen over 10 x improvements in compilation speed for some projects.
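A rough sketch of what that could look like (the exact header list depends on the project, and the /Yc, /Yu and /Zm values below are only examples):
// stdafx.h -- put the heavy, rarely-changing headers here
#pragma once

// standard library headers used throughout the project
#include <functional>
#include <map>
#include <string>
#include <vector>

// heavy third-party headers benefit the most, e.g. (if used):
// #include <boost/asio.hpp>

// stdafx.cpp contains only '#include "stdafx.h"' and is compiled with /Yc"stdafx.h";
// every other .cpp is compiled with /Yu"stdafx.h" (and e.g. /Zm200 if the compiler
// complains about the precompiled header memory limit).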
If you access your sources through a network drive that is mapped to one of your local disks, there is an easy way to see a further 3x improvement in compilation time.
Here's a solution for that case; it involves replacing the network mapping with a DOS device defined in the registry. Here's what this looks like in my own setup:
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\DOS Devices]
"R:"="\DosDevices\D:\devel\build"
"S:"="\DosDevices\D:\devel\src"
I have two very similar methods in a C++ class. The only difference is the Objective-C methods that get called inside:
void MyClass::loadFromImage(UIImage *image)
{
// ... Prepare dictionary and error
GLKTextureInfo* info = [GLKTextureLoader textureWithCGImage:image.CGImage options:options error:&err];
// ... Use GLKTextureInfo to load a texture
}
void MyClass::loadFromImage(const char* imageName)
{
// ... Prepare dictionary and error
GLKTextureInfo* info = [GLKTextureLoader textureWithContentsOfFile:path options:options error:&err];
// ... Use GLKTextureInfo to load a texture
}
How can I combine these two methods to reduce redundant code?
I am hoping to do something similar to this thread, but I'm not sure how the syntax should work out in Objective-C. Thanks for any help!
Replace
// ... Prepare dictionary and error
and
// ... Use GLKTexureInfo to load a texture
with methods that can be used by both versions of loadFromImage.
Yay code reuse!
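For example, the shared parts could become small private helpers; a sketch (textureOptions() and useTextureInfo() are hypothetical helper names, and the path handling is only illustrative):
void MyClass::loadFromImage(UIImage *image)
{
    NSDictionary *options = textureOptions();   // shared: prepare the options dictionary
    NSError *err = nil;
    GLKTextureInfo *info = [GLKTextureLoader textureWithCGImage:image.CGImage
                                                         options:options
                                                           error:&err];
    useTextureInfo(info, err);                  // shared: use GLKTextureInfo to load a texture
}

void MyClass::loadFromImage(const char *imageName)
{
    NSDictionary *options = textureOptions();
    NSError *err = nil;
    NSString *path = [NSString stringWithUTF8String:imageName]; // or however the path was built before
    GLKTextureInfo *info = [GLKTextureLoader textureWithContentsOfFile:path
                                                                options:options
                                                                  error:&err];
    useTextureInfo(info, err);
}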
The situation I have is that I am trying to initialize a file-scoped variable, a std::string, in a shared object constructor. It will probably be clearer in code:
#include <string>
#include <dlfcn.h>
#include <cstring>
#include <iostream> // for the std::cout debug output below
static std::string pathToDaemon; // daemon should always be in the same dir as my *.so
__attribute__((constructor))
static void SetPath()
{
std::string::size_type lastSlash = 0; // find_last_of() returns npos on failure, so don't use int here
Dl_info dl_info;
memset(&dl_info, 0, sizeof(dl_info));
if((dladdr((void*)SetPath, &dl_info)) == 0)
throw up;
pathToDaemon = dl_info.dli_fname; // **whoops, segfault here**
lastSlash = pathToDaemon.find_last_of('/');
if(std::string::npos == lastSlash)
{
// no slash, but in this dir
pathToDaemon = "progd";
}
else
{
pathToDaemon.erase(pathToDaemon.begin() + (lastSlash+1), pathToDaemon.end());
pathToDaemon.append("progd");
}
std::cout << "DEBUG: path to daemon is: " << pathToDaemon << std::endl;
}
I have a very simple program that does this same thing: a test driver program for the concept, if you will. The code in it looks just like this: a "shared object ctor" which uses dladdr() to store off the path of the *.so file when the library is loaded.
Modifications I've tried:
namespace {
std::string pathToDaemon;
__attribute__((constructor))
void SetPath() {
// function def
}
}
or
static std::string pathToDaemon;
__attribute__((constructor))
void SetPath() { // this function not static
// function def
}
and
std::string pathToDaemon; // variable not static
__attribute__((constructor))
void SetPath() { // this function not static
// function def
}
The example you see above sits in a file that is compiled into both a static library and a shared library. The compilation process:
options for static.a: -std=c++0x -c -Os.
options for shared.so: -Wl,--whole-archive /path/to/static.a -Wl,--no-whole-archive -lz -lrt -ldl -Wl,-Bstatic -lboost_python -lboost_thread -lboost_regex -lboost_system -Wl,-Bdynamic -fPIC -shared -o mymodule.so [a plethora of more objects which wrap into python the static stuff]
The hoops I have to jump through in the bigger project make a much more complicated build process than my little test driver program requires. This makes me think that the problem lies there. Can anyone please shed some light on what I'm missing?
Thanks,
Andy
I think it's worth giving the answer that I've found. The problem was due to the complex nature of the shared library loading. I discovered after some digging that I could reproduce the problem in my test bed program when compiling the code with optimizations enabled. That confirmed the hypothesis that the string truly hadn't been constructed yet when it was accessed by the constructor function.
GCC includes some extra attributes for C++ which allow developers to force certain things to happen at particular points during initialization. More precisely, they allow certain things to take place in a particular order rather than at particular times.
For example:
// init_priority (and constructor priorities) can only be applied to objects of
// class type, so a std::string is used here rather than a plain int
std::string someVar __attribute__((init_priority(101))) = "55";

// This constructor function has a lower priority than the initialization above,
// so it runs *after* someVar has been constructed
__attribute__((constructor(102)))
void SomeFunc() {
    // do important stuff
    if(someVar == "55") {
        // do something here that's important too
        someVar = "44";
    }
}
I was able to use these tools successfully in the test bed program, even with optimizations enabled. The happiness which ensued was short-lived once this was applied to my much larger library. Ultimately, the problem was due to the sheer amount of code and the problematic way in which the variables are brought into existence; it just wasn't reliable to use these mechanisms.
Since I wanted to avoid re-evaluating the path on every call, i.e. something like this:
std::string GetPath() {
Dl_info dl_info;
dladdr((void*)GetPath, &dl_info);
// do wonderful stuff to find the path
return dl_info.dli_fname;
}
The solution turned out to be much simpler than I was trying to make it:
namespace {
std::string PathToProgram() {
Dl_info dl_info;
dladdr((void*)PathToProgram, &dl_info);
std::string pathVar(dl_info.dli_fname);
// do amazing things to find the last slash and remove the shared object
// from that path and append the name of the external daemon
return pathVar;
}
std::string DaemonPath() {
// I'd forgotten that function-local statics like this are initialized
// only once, the first time the function is called.
static const std::string pathToDaemon(PathToProgram());
return pathToDaemon;
}
}
As you can see, it is exactly what I wanted, with less confusion. The path is computed only once, no matter how many times DaemonPath() is called, and everything remains within the translation unit.
I hope this helps someone who runs into this in the future.
Andy
Maybe you could try running valgrind on your program
In your self-posted solution above, you have changed your »interface« (for the code that reads your pathToDaemon / DaemonPath()) from »accessing a file-scoped variable« to »calling a function in an anonymous namespace« - so far, OK.
But the implementation of DaemonPath() is not thread-safe. I assume that thread-safety matters, because you wrote »-lboost_thread« in your question, so you may want to make the implementation thread-safe. There are many discussions of and solutions for the singleton pattern and thread-safety available, e.g.:
Article from Scott Meyers
Stack Overflow
The fact is that your DaemonPath() may be invoked (maybe long) after loading of the library is done. Note that only the first call to the singleton is critical in a multithreaded environment.
As an alternative, you may add a simple »early« call to your DaemonPath() function like this:
namespace {
std::string PathToProgram() {
... your code from above ...
}
std::string DaemonPath() {
... your code from above ...
}
__attribute__((constructor)) void MyPathInit() {
DaemonPath();
}
}
or in a more portable way like this:
namespace {
std::string PathToProgram() {
... your code from above ...
}
std::string DaemonPath() {
... your code from above ...
}
class MyPathInit {
public:
MyPathInit() {
DaemonPath();
}
} myPathInit;
}
Of course, this approach doesn't make your singleton pattern thread-safe. But sometimes there are situations where we can be sure that there are no concurrent thread accesses (e.g. at initialization time, when the shared lib is loading). If that condition holds for you, this approach can be a way to bypass the thread-safety problem without the use of thread locking (mutexes...).
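If the library can be loaded while other threads are already running, an alternative implementation of DaemonPath() is std::call_once / boost::call_once (assuming a C++11 standard library; note that on a fully conforming C++11 compiler the function-local static above is already initialized in a thread-safe way):
#include <mutex>
#include <string>

namespace {
    std::string PathToProgram();   // as above

    std::once_flag pathOnce;
    std::string daemonPath;

    const std::string& DaemonPath() {
        // The lambda runs exactly once, even if several threads make the first call concurrently
        std::call_once(pathOnce, [] { daemonPath = PathToProgram(); });
        return daemonPath;
    }
}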
The question is in bold below:
This works fine:
void process_batch(
string_vector & v
)
{
training_entry te;
entry_vector sv;
assert(sv.size() == 0);
...
}
However, this causes the assert to fail:
void process_batch(
string_vector & v
)
{
entry_vector sv;
training_entry te;
assert(sv.size() == 0);
...
}
Now I know this issue isn't shrink-wrapped, so I'll restrict my question to this: what conditions could cause such a problem? Specifically: variable initialization getting damaged depending on declaration order in the stack frame. There are no mallocs or frees in my code, and no unsafe functions like strcpy, memcpy, etc.; it's modern C++. Compilers used: gcc and clang.
For brevity, here are the types:
struct line_string
{
boost::uint32_t line_no;
std::string line;
};
typedef std::vector<boost::uint32_t> line_vector;
typedef std::vector<line_vector> entry_vector;
typedef std::vector<line_string> string_vector;
struct training_body
{
boost::uint32_t url_id;
bool relevant;
};
struct training_entry
{
boost::uint32_t session_id;
boost::uint32_t region_id;
std::vector< training_body> urls;
};
P.S. I am in no way saying that there is an issue in the compiler; it's probably my code. But since I am templatizing some code I wrote a long time ago, the issue has me completely stumped, and I don't know where to look to find the problem.
EDIT
I followed nim's suggestion and went through the following loop:
1. Shrink-wrap the code to what I have shown here, compile and test: no problem.
2. Use #if 0 / #endif to shrink-wrap the main program.
3. Remove headers until it still compiles in shrink-wrapped form.
4. Remove library links until it still compiles in shrink-wrapped form.
Solution: removing the link to protocol buffers gets rid of the problem.
The C++ standard guarantees that the following assertion will succeed:
std::vector<anything> Default;
//in your case anything is line_vector and Default is sv
assert(Default.size() == 0);
So, either you're not telling the whole story or you have a broken STL implementation.
OR: You have undefined behavior in your code. The C++ standard gives no guarantees about the behavior of a program which has a construct leading to UB, even prior to reaching that construct.
The usual cause of this is when one of the created objects writes beyond its end in its constructor. And the most frequent reason this happens, in code I've seen, is that object files have been compiled with different versions of a header; e.g. at some point in time you added (or removed) a data member of one of the classes, and didn't recompile all of the files which use it.
What might cause the sort of problem you see is a user-defined type with a misbehaving constructor:
class BrokenType {
public:
int i;
BrokenType() { this[1].i = 9999; } // Bug!
};
void process_batch(
string_vector & v
)
{
training_entry te;
BrokenType b; // bug in BrokenType shows up as assert fail in std::vector
entry_vector sv;
assert(sv.size() < 100);
...
}
Do you have the right version of the Boost libraries for your platform (64-bit/32-bit)? I'm asking since the types involved seem to have several member variables of type boost::uint32_t. I'm not sure what the behaviour could be if your executable is built for one platform and the Boost library loaded is built for another.