I have a simple question. I have a class that does not have any variables; it is just a class with a lot of void functions (that display things, etc.). When I create an object of that class, would it be better/more efficient to pass that one object through all my functions as the program progresses, or to just recreate it every time the program goes into a new function? Keep in mind that the object has no variables that need to be kept. Thanks in advance for any help.
It makes much more sense for the class to have only static functions, so that no instance is necessary at all. You have no state anyway...
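For illustration, a minimal sketch of what that could look like (the Display class and its functions here are made up for this example, not taken from the question):

// Hypothetical display helper with no state: only static member functions,
// so no instance ever needs to be created or passed around.
#include <iostream>
#include <string>

class Display {
public:
    Display() = delete;  // prevent accidental instantiation

    static void showTitle(const std::string& title) {
        std::cout << "=== " << title << " ===\n";
    }

    static void showLine(const std::string& text) {
        std::cout << text << '\n';
    }
};

int main() {
    Display::showTitle("Report");            // no object involved
    Display::showLine("All values nominal.");
}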
As far as performance is concerned, there is almost no difference. Passing an object as an argument will cost you a (very tiny) bit at runtime; recreating the object will not (assuming compiler optimizations).
However, if you ever have plans to introduce some state (fields), or have two implementations for those void methods, you should pass an object, as it greatly reduces refactoring cost.
To summarize: if your class is something like Math, where the methods are stateless by nature, stick with @Amit's answer and make them static. Otherwise, if your class is something like Canvas or Windows and you have thoughts of implementing it another way later, it is better to pass it by reference so you can replace it with an abstract interface and supply the actual implementation.
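As a rough sketch of that second case (the Renderer, ConsoleRenderer, and drawScene names below are invented for illustration, not taken from the question), passing an abstract interface by reference keeps the call sites unchanged when a second implementation appears later:

#include <iostream>
#include <string>

// Abstract interface: callers depend on this, not on a concrete class.
class Renderer {
public:
    virtual ~Renderer() = default;
    virtual void draw(const std::string& what) = 0;
};

class ConsoleRenderer : public Renderer {
public:
    void draw(const std::string& what) override {
        std::cout << "drawing " << what << '\n';
    }
};

// Takes the interface by reference; swapping in another Renderer
// implementation later requires no change here.
void drawScene(Renderer& r) {
    r.draw("skybox");
    r.draw("terrain");
}

int main() {
    ConsoleRenderer renderer;
    drawScene(renderer);
}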
If the functions in the otherwise empty class never change... consider making them static, or put them in a namespace instead of a class.
On the other hand... if the functions are set once at runtime, say you pick which display functions to use based on the OS, then store them in a global or a singleton.
On the gripping hand... if the functions are different for different parts of the greater code... then yes, you'll have to somehow deliver the object to whatever functions need it. Whether you should create it once and pass it many times, or never pass it and create it as needed, really depends on the specifics of your application. Sorry, there's no universal answer here.
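For the "set once at runtime" case above, one possible sketch is to hold the chosen function in a single std::function; the names and the platform check below are placeholders, not anything from the question:

#include <functional>
#include <iostream>
#include <string>

// The display routine is chosen once at startup (e.g. based on platform)
// and stored in one well-known place; callers never re-decide.
std::function<void(const std::string&)> g_display;

void displayPlain(const std::string& s) { std::cout << s << '\n'; }
void displayFancy(const std::string& s) { std::cout << "** " << s << " **\n"; }

int main() {
    const bool fancyTerminal = false;   // stand-in for a real platform check
    g_display = fancyTerminal ? displayFancy : displayPlain;

    g_display("hello");
}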
Related
I have a collection of objects, let's say QVector<ApplicationStates>, which registers the most important operations done in my software. Basically, this object is meant to handle redo/undo operations. The application is built using a lot of delegate objects, and the operations that have to be registered lie in many of these objects. As such, I am always passing my collection of objects to each delegate, in the form:
class AWidget : public QWidget {
    AWidget(QVector<ApplicationStates>* states, QWidget* parent = nullptr);
    ...
};
It seems ugly to me. I think about two solutions:
Singleton;
Simply declare the QVector as a static global variable (I read that global variables are evil).
Does someone have a suggestion?
Thanks for your answers.
I get into a similar situation from time to time, and I have found that simply wrapping your vector in a class called something like "ApplicationContext" and then passing a shared pointer or reference to an instance of that around saves the day. It has many benefits:
You avoid the global / singleton, and you are free to in fact have several instances concurrently in the future
If you suddenly have more than just that vector of objects that you need to pass around, simply extend your context class to add whatever you need
If your vector suddenly becomes a map or changes in other ways, you need not change any interfaces that pass it along such as the signals/slots. (You will need to change the implementation where the vector is used of course).
BONUS: The code becomes easily testable! You can now make test cases for this class.
This might not be the best solution in all cases, but I think it comes pretty close in this case!
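A rough sketch of that wrapper idea, reusing the QVector<ApplicationStates> from the question (ApplicationStates is assumed to be defined elsewhere, and the accessor and member names here are just illustrative):

#include <QVector>
#include <QWidget>
#include <memory>

// Wraps the shared state so widgets depend on one small context class
// instead of on the raw container type.
class ApplicationContext {
public:
    QVector<ApplicationStates>& states() { return states_; }
    // Later: add undo/redo stacks, settings, etc. without touching any
    // widget constructors or signal/slot signatures.
private:
    QVector<ApplicationStates> states_;   // ApplicationStates defined elsewhere
};

class AWidget : public QWidget {
public:
    explicit AWidget(std::shared_ptr<ApplicationContext> context,
                     QWidget* parent = nullptr)
        : QWidget(parent), context_(std::move(context)) {}

private:
    std::shared_ptr<ApplicationContext> context_;
};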
The question is what the title says: static functions, or functions of an object that is deleted right away?
I know that in a real situation the difference is completely unnoticeable, but I would still like to know which is more efficient in terms of memory. I really don't mind the overhead caused by the "new" and "delete" commands.
MyClass::staticFunction();
or...
MyClass* myObject = new MyClass;
myObject->normalFunction();
delete myObject;
Edit: the second snippet might as well be MyClass().normalFunction(); silly...
There are a few things to consider here:
There will only be one instance of myObject, and it is only used ONCE in the application.
After usage, it is deleted right away because it is no longer needed.
One would ask, why is this even in a class? Why not just put the function where it is used, with temporary variables? The answer is encapsulation and readability. I do believe static functions use the same resources as global functions since, in fact, they really are global functions that enjoy class scope. The only reasons I have to put it in its own class are readability and encapsulation.
As it stands, none of this makes any sense. The real solution would be to provide a free function (a function at namespace scope), because this is what free functions are there for.
Oh, and since you asked: If calling this one single function has noticeable overhead in your code, then you will only find out about it through careful profiling. Profiling is also what would answer your question which way is faster.
But first make sure your code is easy to read and maintainable. Optimizing it then will be much easier than fixing prematurely micro-"optimized" code. The only early optimizations you should employ are those that result in optimal data structures and algorithms.
(Note that new and delete will very likely have far greater overhead than what the function actually does, let alone calling it.)
in fact, they really are global functions that enjoy class scope
I think that is spot on.
It doesn't make much of a difference. However, the following would make a lot more sense:
{
MyClass myObject;
myObject.normalFunction();
}
Or even,
MyClass().normalFunction();
Why would you bother creating a heap-allocated instance of an object that doesn't even matter?
We can't say for certain without trying it on a specific platform (or knowing the details of that platform), but we can probably say that the static one will not be slower than the one with new & delete.
There is no overhead to calling a non-virtual member function versus a class static function versus a free function, since binding is resolved at compile time. The only difference is that member functions get one extra argument for the this pointer, but with a static or free function you would have to pass the object somehow anyway, so it's the same.
If the code is complex enough that you think it needs to be in its own class, then that suggests multiple methods and state stored in the object. You're asking to compare that to using multiple functions, and by implication state passed as arguments. If there's a lot of shared state between the methods, using an object is reasonable. If there's not, using multiple functions is reasonable.
For scoping you may just prefer using a namespace.
On the other hand, since you're asking about memory efficiency, I can see one reason why you might prefer the object. If you're going to have a considerable amount of memory allocated, encapsulating it in the object's members gives you a natural point at which to free it afterwards.
What is a good way to share an instance of an object between several classes in a class hierarchy? I have the following situation:
class texture_manager;
class world {
...
std::vector<object> objects_;
skybox skybox_;
};
I currently implemented texture_manager as a singleton, and clients call its instancing method from anywhere in the code. texture_manager needs to be used by objects in the objects_ vector, by skybox_, and possibly by other classes as well that may or may not be part of the world class.
As I am trying to limit the use of singletons in my code, do you recommend any alternatives to this approach? One solution that came to mind would be to pass a texture_manager reference as an argument to the constructors of all classes that need access to it. Thanks.
The general answer to that question is to use ::std::shared_ptr. Or if you don't have that, ::std::tr1::shared_ptr, or if you don't have that, ::boost::shared_ptr.
In your particular case, I would recommend one of a few different approaches:
One possibility is, of course, the shared_ptr approach. You basically pass around your pointer to everybody who needs the object, and it's automatically destroyed when none of them need it anymore. Though if your texture manager is going to end up with pointers to the objects pointing at it, you're creating a reference cycle, and that will have to be handled very carefully.
Another possibility is just to declare it as a local variable in main and pass it as a pointer or reference to everybody who needs it. It won't be going away until your program is finished that way, and you shouldn't have to worry about managing the lifetime. A bare pointer or reference is just fine in this case.
A third possibility is one of the sort of vaguely acceptable uses of something sort of like a singleton. And this deserves a detailed explanation.
You make a singleton whose only job is to hand out useful pointers to things. A key feature it has is the ability to tell it what thing to hand out a pointer to. It's kind of like a global configurable factory.
This allows you to escape from the huge testing issues you create with a singleton in general. Just tell it to hand out a pointer to a stub object when it comes time to test things.
It also allows you to escape from the access control/security issue (yes, they create security issues as well) that a singleton represents for the same reason. You can temporarily tell it to pass out a pointer to an object that doesn't allow access to things that the section of code you're about to execute doesn't need access to. This idea is generally referred to as the principle of least authority.
The main reason to use this is that it saves you the problem of figuring out who needs your pointer and handing it to them. This is also the main reason not to use it: thinking that through is good for you. You also introduce the possibility that two things that expected to get the same pointer to a texture manager actually get pointers to different texture managers because of a control flow you didn't anticipate, which is basically the result of the same sloppy thinking that caused you to use the Singleton in the first place. Lastly, Singletons are so awful that even this more benign use of them makes me itchy.
Personally, in your case, I would recommend approach #2, just creating it on the stack in main and passing in a pointer to wherever it's needed. It will make you think more carefully about the structure of your program, and this sort of object should probably live for your entire program's lifetime anyway.
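A minimal sketch of approach #2 using the class names from the question (the constructor signatures and reference members below are assumptions, not the asker's actual code):

// texture_manager lives on the stack in main and outlives everything
// that uses it; dependents simply hold a reference to it.
class texture_manager { /* textures, loading, caching... */ };

class skybox {
public:
    explicit skybox(texture_manager& tm) : textures_(tm) {}
private:
    texture_manager& textures_;
};

class world {
public:
    explicit world(texture_manager& tm) : skybox_(tm) {}
private:
    skybox skybox_;   // objects_ vector omitted for brevity
};

int main() {
    texture_manager textures;   // created first, destroyed last
    world w(textures);          // everyone who needs it gets a reference
}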
When I wrap up some procedural code in a class (in my case C++, but that is probably not of interest here), I'm often confused about the best way to do it. By procedural code I mean something that you could easily put in a procedure, and where you use the surrounding object mainly for clarity and ease of use (error handling, logging, transaction handling...).
For example, I want to write some code that reads stuff from the database, does some calculations on it and makes some changes to the database. To be able to do this, it needs data from the caller.
What is the best way to get this data into the object? Let's assume that it needs 7 values and a list of integers.
My ideas are:
List of Parameters of the constructor
Set Functions
List of Parameters of the central function
The advantage of the first solution is that the caller has to deliver exactly what the class needs to do the job, and it also ensures that the data is available right after the object has been created. The object could then be stored somewhere, and the central function could be triggered by the caller whenever he wants, without any further interaction with the object.
It's almost the same in the second solution, but now the central function has to check whether all necessary data has been delivered by the caller. And the question is whether you have a separate set function for every piece of data or only one.
The last solution has only the advantage that the data does not have to be stored before execution. But then it looks like a normal function call, and the benefits of the class approach disappear.
How do you do something like that? Are my considerations correct? Am I missing some advantages/disadvantages?
This stuff is so simple but I couldn't find any resources on it.
Edit: I'm not talking about the database connection. I mean all the data needed for the procedure to complete, for example all the information of a bookkeeping transaction.
Let's do a poll; which do you like more:
class WriteAdress {
WriteAdress(string name, string street, string city);
void Execute();
};
or
class WriteAdress {
void Execute(string name, string street, string city);
};
or
class WriteAdress {
void SetName(string Name);
void SetStreet(string Street);
void SetCity(string City);
void Execute();
};
or
class WriteAdress {
void SetData(string name, string street, string city);
void Execute();
};
Values should be data members if they need to be used by more than one member function. So a database handle is a prime example: you open the connection to the database and get the handle, then you pass it in to several functions to operate on the database, and finally close it. Depending on your circumstances you may open it directly in the constructor and close it in the destructor, or just accept it as a value in the constructor and store it for later use by the member functions.
On the other hand, values that are only used by one member function and may vary every call should remain function parameters rather than constructor parameters. If they are always the same for every invocation of the function then make them constructor parameters, or just initialize them in the constructor.
Do not do two-stage construction. Requiring that you call a bunch of setXYZ functions on a class after the constructor before you can call a member function is a bad plan. Either make the necessary values initialized in the constructor (whether directly, or from constructor parameters), or take them as function parameters. Whether or not you provide setters which can change the values after construction is a different decision, but an object should always be usable immediately after construction.
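For example, here is a sketch of that idea applied to the WriteAdress class from the poll above (the member names and the use of std::string are assumptions): the object is fully usable straight after construction, with no setter dance required.

#include <string>

class WriteAdress {
public:
    // Everything needed is supplied up front, so there is no window in
    // which the object exists but is not yet usable.
    WriteAdress(std::string name, std::string street, std::string city)
        : name_(std::move(name)), street_(std::move(street)), city_(std::move(city)) {}

    void Execute() { /* writes the stored members to the database */ }

private:
    std::string name_;
    std::string street_;
    std::string city_;
};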
Interface design is very important, but in your case what you need is to learn that worse is better.
First choose the simplest solution you have, write it now.
Then you'll see what the flaws are, so fix them.
Repeat until it's not important to fix them.
The idea is that you'll have to gain experience to understand how to get directly to the "best", or better said "least bad", solution for some type of problem (that's what we call a "design pattern"). To get that experience you'll have to hit problems fast, solve them, and try to deeply understand why something was wrong.
That's what you'll have to do each time you try something "new". Errors are not a problem if you fix them and learn from them.
You should use constructor parameters for all values that are necessary in any case (consider that many programming languages also support constructor overloading).
This leads to the second point: setters should be used to introduce optional parameters, or to update values.
You can also combine these approaches: expect the necessary parameters in the constructor and have it call their setter functions. This way you have to do validity checks only once (in the setters).
Central functions should only take parameters that are transient per call (timestamps, ...).
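A small sketch of the combined approach (the Report class and its fields are invented for illustration): the constructor funnels required data through the setter, so the validity check lives in exactly one place, while optional data only has a setter with a sensible default.

#include <stdexcept>
#include <string>

class Report {
public:
    // Necessary data goes through the constructor, which reuses the
    // setter so the validity check is written only once.
    explicit Report(std::string title) { SetTitle(std::move(title)); }

    void SetTitle(std::string title) {
        if (title.empty())
            throw std::invalid_argument("title must not be empty");
        title_ = std::move(title);
    }

    // Optional value: only a setter, with a sensible default.
    void SetAuthor(std::string author) { author_ = std::move(author); }

private:
    std::string title_;
    std::string author_ = "unknown";
};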
First off, it sounds like you are trying to do too much at once. Reading, calculating and updating are all separate operations that can themselves probably be split down further.
A technique I use when I'm thinking about the design of a method or class is to think: 'what do I want the highest-level method to ideally look like?' i.e. think about the separate components of the method and split them down. That's top-down design.
In your case, I envisaged this in my head (C#):
public static void Dostuff(...)
{
Data d = ReadDatabase(...);
d.DoCalculations(...);
UpdateDatabase(d);
}
Then do the same thing for each of those methods.
When you come to passing parameters into your method, you need to consider whether the data you're passing in is stored or not, i.e. whether your class is static (it cannot be instantiated and is instead just a collection of methods, etc.) or whether you make objects of the class; in other words, whether each object of the class has state.
If the parameters can indeed be considered attributes of the class, they define its state and should be stored as private variables, with getters and setters for each where necessary. If the class instead has no state, it should be static and the parameters passed directly to the method.
Either way, it is common, and not considered bad practice, to have both a constructor and a few get/set functions where necessary. It is also common to have to check the state of the object at the beginning of a method, so I wouldn't worry about that.
As you can see, it largely depends on what else you are doing in this class.
The reason you can't find many resources on this is that the 'right' answer is hugely domain-specific; it depends heavily on the specific project. The best way to find out is usually by experiment.
(For example: You're right about the advantages of the first two methods. An obvious disadvantage is the use of memory to store the data the whole time the object exists. This disadvantage doesn't matter in the least if your project needs two of these data objects; it's potentially a huge problem if you need a very large number. If it's a big live dataset, you're probably better querying for data as you need it, as implied by your third solution... but not definitely, as there are times when it's better to cache the data.)
When in doubt, do a quick test implementation with a simplest-possible interface; just writing it will frequently make it clearer what the pros and cons are for your project.
Specifically addressing your example, it seems as though you are still thinking too procedurally.
You should make an object that initialises the connection to the database doing all relevant error checking. Then have a method on the object that writes the values in whatever convenient way you prefer. When the object is destroyed it should release the handle to the database. That would be the object oriented way to approach the problem.
I assume the only responsibility of your WriteAddress class is to write an address to a database or an output stream. If so, then you should not worry about getters and setters for the address details; instead, define an interface AddressDataProvider that is to be implemented by all classes with which your WriteAddress class will collaborate.
One of the methods on that interface would be GetAddressParts(), which would return an array of strings as required by WriteAddress. Any class that implements that method will need to respect this array structure.
Then, in WriteAddress, define a setter SetDataProvider(AddressDataProvider). This method will be called by the code that instantiates your WriteAddress object(s).
Finally, in your Execute() method, obtain the data that are required by calling GetAddressParts() on the "data provider" that you set and write out your address.
Notice that this design shields WriteAddress from subsidiary activities that are not strictly part of its responsibilities. So, WriteAddress does not care how the address details are retrieved; it does not even care about knowing and holding the address details. It just knows from where to get them and how to write them out.
This is obvious even in the description of this design: only two names WriteAddress and AddressDataProvider come up; there is no mention of database or how to pass the address details. This is usually an indication of high cohesion and low coupling.
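A hedged C++ sketch of that design, following the method names used in this answer (the null check and the output-stream fallback are assumptions):

#include <iostream>
#include <string>
#include <vector>

// Implemented by any class that can supply address details.
class AddressDataProvider {
public:
    virtual ~AddressDataProvider() = default;
    virtual std::vector<std::string> GetAddressParts() const = 0;
};

class WriteAddress {
public:
    void SetDataProvider(AddressDataProvider* provider) { provider_ = provider; }

    void Execute() {
        if (!provider_) return;
        // WriteAddress neither knows nor cares where the parts come from.
        for (const std::string& part : provider_->GetAddressParts()) {
            std::cout << part << '\n';   // or write to the database
        }
    }

private:
    AddressDataProvider* provider_ = nullptr;
};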
I hope this helps.
You can implement each approach, since they don't exclude each other, and then you'll see which is most useful.
I have heard that in C++, using an accessor (get...()) in a member function of the same class where the accessor was defined is good programming practice. Is this true, and should it be done?
For example, is this preferred:
void display() {
cout << getData();
}
over something like this:
void display() {
cout << data;
}
data is a data member of the same class where the accessor was defined... same with the display() method.
I'm thinking of the overhead of doing that, especially if you need to invoke the accessor many times inside the same class rather than just using the data member directly.
The reason for this is that if you change the implementation of getData(), you won't have to change the rest of the code that directly accesses data.
And also, a smart compiler will inline it anyways (it would always know the implementation inside the class), so there is no performance penalty.
It depends. Using an accessor function provides a layer of abstraction, which could make future changes to 'data' less painful. For example, if you wanted to lazily compute the value of 'data', you could hide that computation in the accessor function.
As for the overhead - If you are referring to performance overhead, it will likely be insignificant - your accessors will almost certainly be inlined. If you are referring to coding overhead, then yes, it is a tradeoff, and you'll have to decide whether it is worth the extra effort to provide accessors.
Personally, I don't think the accessors are worth it in most cases.
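To make the "lazily compute" point above concrete, here is a small sketch (the Widget class, the cached int, and the stand-in computation are all invented):

#include <iostream>
#include <optional>

class Widget {
public:
    int getData() const {
        if (!data_) {
            data_ = expensiveComputation();   // computed on first use only
        }
        return *data_;
    }

    void display() const {
        // Going through the accessor means display() doesn't care whether
        // the value has been computed and cached yet.
        std::cout << getData() << '\n';
    }

private:
    static int expensiveComputation() { return 42; }   // stand-in
    mutable std::optional<int> data_;                  // lazily filled cache
};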
Yes, I think it should be done more or less unconditionally. If the state variable is in some base class, it should more or less always be private. If you allow it to be protected or public, all inheriting classes will use it directly. Those classes in turn might be classes your coworkers have written in some other project. If you suddenly decide to muck about in the base class and refactor, say, the variable name to something more suitable, all users of that state must be rewritten.
This is probably not an issue if you are the only programmer, or if you are developing code that no one will ever use. But as soon as the number of subclasses starts to grow, it might get really hairy. Gotta love transparency!
However, I'm not God's best child on this planet. Sometimes I cheat ;) When you're in the owning class, I think it's OK to access private data directly. It might even be beneficial, since you automatically know that you are modifying the actual class you're in, given that you have some kind of naming convention that tells you so, e.g. a variable name with an underscore at the end: "someVariable_".
Cheers!
Well, Mr. Khunt, the overhead is really insignificant for accessors in most cases. The question is whether or not the accessor logic needs to be invoked, or whether you need direct access to the field. This is a question for each individual implementation, but in many cases it won't make much of a difference.
The real reason for accessors is to provide encapsulation of your fields from other classes; it is less about the containing class.
Personally, I prefer not to have dozens of extra functions (a get and a set for every member variable). I would just use data, and change to getData() only when required to do something differently. Since we are talking about changing the code in only one class, it shouldn't be too difficult.
It depends on what you might ultimately do with your data member I suppose.
By wrapping it up in an accessor you can then do things like lazily retrieving the data, if retrieving it is an expensive process and not something you want to do unless someone asks for it. On the other hand, you might know that it will always be a dumb built-in type, in which case I can't see any advantage to going through an accessor. As I say, it depends on the member.
To my mind, the most important aspect of this question is: does it make the code more readable and therefore more maintainable? Personally I don't think it does, so I wouldn't do this.
Certainly you should never add a private accessor just to do this; that would be nuts.