I have a server-type application, and I have an issue with making sure threads aren't destroyed before they complete. The code below pretty much represents my server; the cleanup is required to prevent a build-up of dead threads in the list.
using namespace std;
class A {
public:
void doSomethingThreaded(function<void()> cleanupFunction, function<bool()> getStopFlag) {
somethingThread = thread([cleanupFunction, getStopFlag, this]() {
doSomething(getStopFlag);
cleanupFunction();
});
}
private:
void doSomething(function<bool()> getStopFlag);
thread somethingThread;
...
};
class B {
public:
void runServer();
void stop() {
stopFlag = true;
waitForListToBeEmpty();
}
private:
void waitForListToBeEmpty() { ... };
void handleAccept(...) {
shared_ptr<A> newClient(new A());
{
unique_lock<mutex> lock(listMutex);
clientData.push_back(newClient);
}
newClient->doSomethingThreaded(bind(&B::cleanup, this, newClient), [this]() {
return stopFlag;
});
}
void cleanup(shared_ptr<A> data) {
unique_lock<mutex> lock(listMutex);
clientData.remove(data);
}
list<shared_ptr<A>> clientData;
mutex listMutex;
atomic<bool> stopFlag;
};
The issue seems to be that the destructors run in the wrong order: the shared_ptr is destroyed when the thread's function completes, meaning the 'A' object is deleted before the thread has fully completed, causing havoc when the thread's destructor is called.
i.e.
Call cleanup function
All references to this (i.e. an A object) removed, so call destructor (including this thread's destructor)
Call this thread's destructor again -- OH NOES!
I've looked at alternatives, such as maintaining a 'to be removed' list which is periodically used to clean the primary list by another thread, or using a time-delayed deleter function for the shared pointers, but both of these seem a bit chunky and could have race conditions.
Anyone know of a good way to do this? I can't see an easy way of refactoring it to work ok.
Are the threads joinable or detached? I don't see any detach,
which means that destructing the thread object without having
joined it is a fatal error. You might try simply detaching it,
although this can make a clean shutdown somewhat complex. (Of
course, for a lot of servers, there should never be a shutdown
anyway.) Otherwise: what I've done in the past is to create
a reaper thread; a thread which does nothing but join any
outstanding threads, to clean up after them.
I might add that this is a good example of a case where
shared_ptr is not appropriate. You want full control over
when the delete occurs; if you detach, you can do it in the
clean up function (but quite frankly, just using delete this;
at the end of the lambda in A::doSomethingThreaded seems more
readable); otherwise, you do it after you've joined, in the
reaper thread.
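For illustration, a rough sketch of that detach-plus-delete this variant might look like the following (untested; it assumes B's list holds raw A* pointers and that cleanupFunction removes the pointer from the list before the delete):
#include <functional>
#include <thread>

// Sketch only: A owns its own lifetime; the thread is detached immediately,
// so no join is ever required, and the lambda frees the object last.
class A {
public:
    void doSomethingThreaded(std::function<void()> cleanupFunction,
                             std::function<bool()> getStopFlag) {
        std::thread([cleanupFunction, getStopFlag, this]() {
            doSomething(getStopFlag);
            cleanupFunction();   // remove the raw pointer from B's list
            delete this;         // last action: nothing touches *this afterwards
        }).detach();             // detach the temporary handle; it is not a member
    }
private:
    void doSomething(std::function<bool()> getStopFlag);
    // note: no std::thread member, so delete this cannot destroy a joinable thread
};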
EDIT:
For the reaper thread, something like the following should work:
class ReaperQueue
{
std::deque<A*> myQueue;
std::mutex myMutex;
std::condition_variable myCond;
A* getOne()
{
std::unique_lock<std::mutex> lock( myMutex );
myCond.wait( lock, [&]{ return !myQueue.empty(); } );
A* results = myQueue.front();
myQueue.pop_front();
return results;
}
public:
void readyToReap( A* finished_thread )
{
std::unique_lock<std::mutex> lock( myMutex );
myQueue.push_back( finished_thread );
myCond.notify_all();
}
void reaperThread()
{
for ( ; ; )
{
A* mine = getOne();
mine->somethingThread.join();
delete mine;
}
}
};
(Warning: I've not tested this, and I've tried to use the C++11
functionality. I've only actually implemented it, in the past,
using pthreads, so there could be some errors. The basic
principles should hold, however.)
To use, create an instance, then start a thread calling
reaperThread on it. In the cleanup of each thread, call
readyToReap.
To support a clean shutdown, you may want to use two queues: you
insert each thread into the first, as it is created, and then
move it from the first to the second (which would correspond to
myQueue, above) in readyToReap. To shut down, you then wait
until both queues are empty (not starting any new threads in
this interval, of course).
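As a rough sketch of that two-queue arrangement (again untested, and the member names are invented for illustration):
// Sketch of the two-queue shutdown scheme: 'myPending' holds threads that
// have been started but not yet finished, 'myQueue' holds threads whose
// cleanup has run and which only still need joining.
class ReaperQueue
{
    std::deque<A*> myPending;
    std::deque<A*> myQueue;
    std::mutex myMutex;
    std::condition_variable myCond;

    A* getOne()
    {
        std::unique_lock<std::mutex> lock( myMutex );
        myCond.wait( lock, [&]{ return !myQueue.empty(); } );
        A* results = myQueue.front();
        myQueue.pop_front();
        return results;
    }
public:
    void threadStarted( A* started )        // call when the thread is created
    {
        std::unique_lock<std::mutex> lock( myMutex );
        myPending.push_back( started );
    }
    void readyToReap( A* finished )         // call from the thread's cleanup
    {
        std::unique_lock<std::mutex> lock( myMutex );
        myPending.erase(
            std::find( myPending.begin(), myPending.end(), finished ) ); // <algorithm>
        myQueue.push_back( finished );
        myCond.notify_all();
    }
    void reaperThread()
    {
        for ( ; ; )
        {
            A* mine = getOne();
            mine->somethingThread.join();
            delete mine;
            std::unique_lock<std::mutex> lock( myMutex );
            myCond.notify_all();            // wake anyone waiting in waitUntilIdle
        }
    }
    void waitUntilIdle()                    // shutdown: no new threads are started
    {
        std::unique_lock<std::mutex> lock( myMutex );
        myCond.wait( lock, [&]{ return myPending.empty() && myQueue.empty(); } );
    }
};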
The issue is that, since you manage A via shared pointers, the this pointer captured by the thread lambda really needs to be a shared pointer rather than a raw pointer to prevent it from becoming dangling. The problem is that there's no easy way to create a shared_ptr from a raw pointer when you don't have an actual shared_ptr as well.
One way to get around this is to use shared_from_this:
class A : public enable_shared_from_this<A> {
public:
void doSomethingThreaded(function<void()> cleanupFunction, function<bool()> getStopFlag) {
somethingThread = thread([cleanupFunction, getStopFlag, this]() {
shared_ptr<A> temp = shared_from_this();
doSomething(getStopFlag);
cleanupFunction();
});
}
// ... rest of A as before ...
};
This creates an extra shared_ptr to the A object that keeps it alive until the thread finishes.
Note that you still have the problem with join/detach that James Kanze identified -- Every thread must have either join or detach called on it exactly once before it is destroyed. You can fulfill that requirement by adding a detach call to the thread lambda if you never care about the thread exit value.
You also have potential for problems if doSomethingThreaded is called multiple times on a single A object...
For those who are interested, I took a bit of both answers given (i.e. James' detach suggestion and Chris' suggestion about shared_ptrs).
My resultant code looks like this; it seems neater and doesn't cause a crash on shutdown or client disconnect:
using namespace std;
class A {
public:
void doSomething(function<bool()> getStopFlag) {
...
}
private:
...
};
class B {
public:
void runServer();
void stop() {
stopFlag = true;
waitForListToBeEmpty();
}
private:
void waitForListToBeEmpty() { ... };
void handleAccept(...) {
shared_ptr<A> newClient(new A());
{
unique_lock<mutex> lock(listMutex);
clientData.push_back(newClient);
}
thread clientThread([this, newClient]() {
// Capture the shared_ptr until thread over and done with.
newClient->doSomething([this]() {
return stopFlag;
});
cleanup(newClient);
});
// Detach to remove the need to store these threads until their completion.
clientThread.detach();
}
void cleanup(shared_ptr<A> data) {
unique_lock<mutex> lock(listMutex);
clientData.remove(data);
}
list<shared_ptr<A>> clientData; // Can be removed if you don't need to
                                // interact with your clients. However, you'd
                                // then need to make sure this B object (which
                                // owns stopFlag) outlives every client, since
                                // they reference the boolean stopFlag; alternatively,
                                // make stopFlag a shared_ptr to an atomic bool.
mutex listMutex;
atomic<bool> stopFlag;
};
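As a rough, untested sketch of that last option (the stop variable and the shared_ptr member are made up for illustration), the stop flag itself can be shared with each client thread, so nothing dangles even if B and its list go away first:
// Hypothetical variant: stopFlag becomes shared_ptr<atomic<bool>>, and each
// client thread keeps its own reference to it, so neither B nor its list has
// to outlive the clients.
void handleAccept(...) {
    shared_ptr<A> newClient(new A());
    auto stop = stopFlag;             // copy the shared_ptr, not the bool
    thread([newClient, stop]() {
        newClient->doSomething([stop]() { return stop->load(); });
    }).detach();
}
// member: shared_ptr<atomic<bool>> stopFlag = make_shared<atomic<bool>>(false);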
Related
I'm trying to solve a complicated (for me, at least) asynchronous scenario all at once, but I think it will be better to understand a simpler case first.
Consider an object that owns dynamically allocated memory, held by a member variable:
#include <thread>
#include <mutex>
using namespace std;
mutex mu;
class Object
{
public:
char *var;
Object()
{
var = new char[1]; var[0] = 1;
}
~Object()
{
mu.lock();
delete[]var; // destructor should free all dynamic memory on its own, as I remember
mu.unlock();
}
}*object = nullptr;
int main()
{
object = new Object();
return 0;
}
What happens if, while its var member is being used in a detached (i.e. asynchronous) thread, the object is deleted in another thread?
void do_something()
{
for(;;)
{
mu.lock();
if(object)
if(object->var[0] < 255)
object->var[0]++;
else
object->var[0] = 0;
mu.unlock();
}
}
int main()
{
object = new Object();
thread th(do_something);
th.detach();
Sleep(1000);
delete object;
object = nullptr;
return 0;
}
Is it possible that var will not be deleted in the destructor?
Do I use the mutex with detached threads correctly in the code above?
2.1 Do I also need to cover the delete object line with mutex::lock and mutex::unlock?
I should also stress, separately, that I need the new thread to be asynchronous. I do not want the main thread to hang while the new one is running; I need the two threads running at once.
P.S. From the comments and answers, one of the most important things I finally understood is the mutex. My biggest misconception was thinking that an already-locked mutex simply skips the code between lock and unlock.
Forget about shared variables; the mutex itself has nothing to do with them. A mutex is just a mechanism for safely pausing threads:
mutex mu;
void a()
{
mu.lock();
Sleep(1000);
mu.unlock();
}
int main()
{
thread th(a);
th.detach();
mu.lock(); // hangs here, until mu.unlock from a() will be called
mu.unlock();
return 0;
}
The concept is extremely simple: imagine the mutex object has a flag isLocked. When any thread calls the lock method and isLocked is false, it just sets isLocked to true. But if isLocked is already true, the mutex somehow, at a low level, suspends the thread that called lock until isLocked becomes false again. You can find part of the source code of the lock method by scrolling down this page. Instead of a mutex, a plain bool variable could probably be used, but that would cause undefined behaviour.
Why is it associated with shared data? Because using the same variable (memory) simultaneously from multiple threads causes undefined behaviour, so a thread reaching a variable that may currently be in use by another thread should wait until the other has finished working with it; that's what the mutex is used for here.
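For reference, the idiomatic way to express this is RAII locking around every access to the shared data; a minimal sketch (the variable names are made up):
#include <mutex>

std::mutex mu;
int shared_value = 0;                      // illustrative shared data

void increment()
{
    std::lock_guard<std::mutex> lock(mu);  // locks here
    ++shared_value;                        // every access happens under the lock
}                                          // unlocks automatically, even on exceptions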
Why doesn't accessing the mutex itself from different threads cause undefined behaviour? I don't know; I'm going to google it.
Do I use the mutex with detached threads correctly in the code above?
Those are orthogonal concepts. I don't think the mutex is used correctly here: you only have one thread mutating and accessing the global variable, and you are using the mutex to synchronize waits and exits. You should join the thread instead.
Also, detached threads are usually a code smell. There should be a way to wait for all threads to finish before exiting the main function.
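For the question's code, that would look roughly like this (a sketch; the stop flag is an illustrative addition, since the loop must have some way to end before join() can return):
#include <atomic>
#include <thread>

std::atomic<bool> stop{false};   // do_something's loop becomes: while (!stop) { ... }

int main()
{
    object = new Object();
    std::thread th(do_something);
    Sleep(1000);                 // Windows call, as in the question
    stop = true;                 // ask the worker to finish its loop
    th.join();                   // wait for it; nothing uses object any more
    delete object;
    object = nullptr;
    return 0;
}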
Do I also need to cover the delete object line with mutex::lock and mutex::unlock?
No, since the destructor will call mu.lock(), so you're fine here.
Is it possible that var will not be deleted in the destructor?
No, but it will make your main thread wait. There are also ways to do this without using a mutex.
There are usually two ways to attack this problem: you can block the main thread until all other threads are done, or use shared ownership so that both the main thread and the worker own the object, and it is only freed once all owners are gone.
To block all threads until everyone is done and then do the cleanup, you can use std::barrier from C++20:
void do_something(std::barrier<std::function<void()>>& sync_point)
{
for(;;)
{
if(object)
if(object->var[0] < 255)
object->var[0]++;
else
object->var[0] = 0;
} // break at a point so the thread exits
sync_point.arrive_and_wait();
}
int main()
{
object = new Object();
auto const on_completion = []{ delete object; };
// 2 is the number of threads. I'm counting the main thread since
// you're using detached threads
std::barrier<std::function<void()>> sync_point(2, on_completion);
thread th(do_something, std::ref(sync_point));
th.detach();
Sleep(1000);
sync_point.arrive_and_wait();
return 0;
}
Live example
This will make all the threads (2 of them) wait until every one of them gets to the sync point. Once that sync point is reached by all threads, it runs the on_completion function, which deletes the object once, when no one needs it anymore.
The other solution would be to use a std::shared_ptr so anyone can own the pointer and free it only when no one is using it anymore. Note that you will need to remove the object global variable and replace it with a local variable to track the shared ownership:
void do_something(std::shared_ptr<Object> object)
{
for(;;)
{
if(object)
if(object->var[0] < 255)
object->var[0]++;
else
object->var[0] = 0;
}
}
int main()
{
std::shared_ptr<Object> object = std::make_shared<Object>();
// You need to pass it as parameter otherwise it won't be safe
thread th(do_something, object);
th.detach();
Sleep(1000);
// If the thread is done, this line will call delete
// If the thread is not done, the thread will call delete
// when its local `object` variable goes out of scope.
object = nullptr;
return 0;
}
Is it possible that var will not be deleted in the destructor?
With
~Object()
{
mu.lock();
delete[]var; // destructor should free all dynamic memory on its own, as I remember
mu.unlock();
}
You might have to wait for the lock to be released, but var will be deleted.
Except that your program exhibits undefined behaviour due to unprotected concurrent access to object (delete object isn't protected, and you read object in your other thread), so anything can happen.
Do I use the mutex with detached threads correctly in the code above?
Whether the thread is detached or not is irrelevant here.
Your usage of the mutex is wrong/incomplete.
Which variable is your mutex supposed to protect?
It seems to be a mix of object and var.
If it is var, you could reduce the locked scope in do_something (lock only inside the if block).
And object currently has no protection at all.
2.1 Do I also need to cover the delete object line with mutex::lock and mutex::unlock?
Yes, object needs protection.
But you cannot use that same mutex for it: std::mutex does not allow locking twice in the same thread (here, a protected delete[] var; inside a protected delete object;), whereas std::recursive_mutex does allow that.
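A minimal illustration of that restriction (sketch only): locking the same std::mutex twice from one thread is undefined behaviour, while std::recursive_mutex tolerates it:
#include <mutex>

std::mutex m;
std::recursive_mutex rm;

void twice_plain()
{
    std::lock_guard<std::mutex> a(m);
    // std::lock_guard<std::mutex> b(m);          // undefined behaviour: same thread, same mutex
}

void twice_recursive()
{
    std::lock_guard<std::recursive_mutex> a(rm);
    std::lock_guard<std::recursive_mutex> b(rm);  // fine: the ownership count just goes to 2
}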
So we come back to the question: which variable does the mutex protect?
If it is only object (which is enough in your sample), it would look something like this:
#include <thread>
#include <mutex>
using namespace std;
mutex mu;
class Object
{
public:
char *var;
Object()
{
var = new char[1]; var[0] = 1;
}
~Object()
{
delete[]var; // destructor should free all dynamic memory on its own, as I remember
}
}*object = nullptr;
void do_something()
{
for(;;)
{
mu.lock();
if(object)
if(object->var[0] < 255)
object->var[0]++;
else
object->var[0] = 0;
mu.unlock();
}
}
int main()
{
object = new Object();
thread th(do_something);
th.detach();
Sleep(1000);
mu.lock(); // or const std::lock_guard<std::mutex> lock(mu); and get rid of unlock
delete object;
object = nullptr;
mu.unlock();
return 0;
}
Alternatively, as you don't have to share data between threads, you might do something like the following (with do_something adjusted accordingly, so that it no longer reads the global pointer and its loop eventually ends):
int main()
{
Object object;
thread th(do_something);
Sleep(1000);
th.join();
return 0;
}
and get rid of the mutex entirely.
Have a look at this; it shows the use of scoped_lock, std::async, and management of lifetimes through scopes (demo here: https://onlinegdb.com/FDw9fG9rS).
#include <future>
#include <mutex>
#include <chrono>
#include <iostream>
// using namespace std; <== don't do this
// mutex mu; avoid global variables.
class Object
{
public:
Object() :
m_var{ 1 }
{
}
~Object()
{
}
void do_something()
{
using namespace std::chrono_literals;
for(std::size_t n = 0; n < 30; ++n)
{
// extra scope to reduce time of the lock
{
std::scoped_lock<std::mutex> lock{ m_mtx };
m_var++;
std::cout << ".";
}
std::this_thread::sleep_for(150ms);
}
}
private:
std::mutex m_mtx;
char m_var;
};
int main()
{
Object object;
// extra scope to manage lifecycle of future
{
// use a lambda function to start the member function of object
auto future = std::async(std::launch::async, [&] {object.do_something(); });
std::cout << "do something started\n";
// destructor of future will synchronize with end of thread;
}
std::cout << "\n work done\n";
// safe to go out of scope now and destroy the object
return 0;
}
Everything you assumed and asked in your question is right: the variable will always be freed.
But your code has one big problem. Let's look at your example:
int main()
{
object = new Object();
thread th(do_something);
th.detach();
Sleep(1000);
delete object;
object = nullptr;
return 0;
}
You create a thread that will call do_something(). But let's assume that right after the thread is created, the kernel interrupts it and does something else, like updating the Stack Overflow tab in your web browser with this answer. So do_something() isn't called yet, and won't be for a while, since we all know how slow browsers are.
Meanwhile, the main function sleeps for 1 second and then calls delete object;. That calls Object::~Object(), which acquires the mutex, deletes var, releases the mutex, and finally frees the object.
Now assume that right at this point the kernel interrupts the main thread and schedules the other thread. object still holds the address of the object that was just deleted. So your other thread acquires the mutex, sees that object is not nullptr, accesses it, and BOOM.
PS: object isn't atomic, so the object = nullptr in main() will also race with the if (object) check.
In C++20, std::jthread was introduced as a safer version of std::thread; as far as I understand, std::jthread cleans up after itself when the thread exits.
C++20 also introduces the concept of cooperative cancellation: a std::jthread manages a std::stop_source that holds the stop state of the underlying thread, and this std::stop_source exposes a std::stop_token that outsiders can use to query that state sanely.
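For reference, here is a minimal self-contained sketch of that mechanism in isolation (just to establish the moving parts):
#include <chrono>
#include <iostream>
#include <stop_token>
#include <thread>

int main()
{
    // The jthread passes its own stop_token to the callable; its destructor
    // calls request_stop() and then join() automatically.
    std::jthread worker([](std::stop_token st) {
        while (!st.stop_requested()) {
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
        }
        std::cout << "stop requested, exiting\n";
    });
    std::this_thread::sleep_for(std::chrono::seconds(1));
}   // ~jthread() requests stop and joins here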
What I have is something like this.
class foo {
std::stop_token stok;
std::stop_source ssource;
public:
void start_foo() {
// ...
auto calculation = [this](std::stop_token inner_tok) {
// ... (*this is used here)
while(!inner_tok.stop_requested()) {
// stuff
}
};
auto thread = std::jthread(calculation);
stok = thread.get_stop_token();
ssource = thread.get_stop_source();
thread.detach(); // ??
}
void stop_foo() {
if (ssource.stop_possible()) {
ssource.request_stop();
}
}
~foo() {
stop_foo();
}
};
Note foo is managed by a std::shared_ptr, and there is no public constructor.
Somewhere along the line, another thread can call foo::stop_foo() on a possibly detached thread.
Is what I am doing safe?
Also, when detaching a thread, the C++ handle is no longer associated with the running thread, and the OS manages it, but does the thread keep receiving stop notifications from the std::stop_source?
Is there a better way to achieve what I need? In MSVC, this doesn't seem to raise any exceptions or halt program execution, and I've done a lot of testing to verify this.
So, is this solution portable?
What you wrote is potentially unsafe if the thread accesses this after the foo has been destroyed. It's also a bit convoluted. A simpler approach would just be to stick the jthread in the structure...
class foo {
std::jthread jthr;
public:
void start_foo() {
// ...
jthr = std::jthread([this](std::stop_token inner_tok) {
// ... (*this is used here)
while(!inner_tok.stop_requested()) {
// stuff
}
});
}
void stop_foo() {
jthr.request_stop();
}
~foo() {
stop_foo();
// jthr.detach(); // this is a bad idea
}
};
To match the semantics of your code, you would uncomment the jthr.detach() in the destructor, but this is actually a bad idea since then you could end up destroying foo while the thread is still accessing it. The code I wrote above is safe, but obviously whichever thread drops the last reference to the foo will have to wait for the jthread to exit. If that's really intolerable, then maybe you want to change the API to stick a shared_ptr in the thread itself, so that the thread can destroy foo if it is still running after the last external reference is dropped.
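A sketch of that last variant (untested; the create factory and the use of a separate stop_source are my additions for illustration). Here the worker itself holds a shared_ptr to the foo, so the last external owner can drop its reference without waiting; foo stores only a stop_source, not the thread handle, so it is safe for the foo to be destroyed from the worker thread itself, and the thread is necessarily detached:
#include <memory>
#include <stop_token>
#include <thread>

class foo : public std::enable_shared_from_this<foo> {
    std::stop_source ssource;
public:
    static std::shared_ptr<foo> create() {
        std::shared_ptr<foo> p(new foo);
        p->start_foo();
        return p;
    }
    void start_foo() {
        // The lambda keeps *this alive through 'self'; when the loop ends and
        // the lambda is destroyed, the last shared_ptr may be released on the
        // worker thread, which is fine because foo holds no thread handle.
        std::thread([self = shared_from_this(), tok = ssource.get_token()]() {
            while (!tok.stop_requested()) {
                // stuff, using *self
            }
        }).detach();
    }
    void stop_foo() { ssource.request_stop(); }   // callers must request stop
                                                  // before dropping their reference,
                                                  // or the worker keeps foo alive
private:
    foo() = default;
};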
In our program, we have a class FooLogger which logs specific events (strings). We use the FooLogger as a unique_ptr.
We have two threads which use this unique_ptr instance:
Thread 1 logs the latest event to file in a while loop, first checking if the instance is not nullptr
Thread 2 deallocates the FooLogger unique_ptr instance when the program has reached a certain point (set to nullptr)
However, due to bad interleaving, it is possible that, while logging, the member variables of FooLogger are deallocated, resulting in an EXC_BAD_ACCESS error.
class FooLogger {
public:
FooLogger() {};
void Log(const std::string& event="") {
const float32_t time_step_s = timer_.Elapsed() - runtime_s_; // Can get EXC_BAD_ACCESS on timer_
runtime_s_ += time_step_s;
std::cout << time_step_s << runtime_s_ << event << std::endl;
}
private:
Timer timer_; // Timer is a custom class
float32_t runtime_s_ = 0.0;
};
int main() {
auto foo_logger = std::make_unique<FooLogger>();
std::thread foo_logger_thread([&] {
while(true) {
if (foo_logger)
foo_logger->Log("some event");
else
break;
}
});
SleepMs(50); // pseudo code
foo_logger = nullptr;
foo_logger_thread.join();
}
Is it possible, using some sort of thread synchronisation/locks etc. to ensure that the foo_logger instance is not deallocated while logging? If not, are there any good ways of handling this case?
The purpose of std::unique_ptr is to deallocate the instance once the std::unique_ptr goes out of scope. In your case, multiple threads each have access to the object, and the owning thread might release it before the other users are done with it.
You either need to ensure that the owner never destroys the object before the user threads are finished with it, or change the ownership model from std::unique_ptr to std::shared_ptr. The whole purpose of std::shared_ptr is to keep the object alive for as long as anyone is using it.
You just need to figure out what your program requires and use the right tools to achieve it.
Use a different mechanism than the disappearance of an object for determining when to stop.
(When you use a single thing for two separate purposes, you often get into trouble.)
For instance, an atomic bool:
int main() {
FooLogger foo_logger;
std::atomic<bool> keep_going = true;
std::thread foo_logger_thread([&] {
while(keep_going) {
foo_logger.Log("some event");
}
});
SleepMs(50);
keep_going = false;
foo_logger_thread.join();
}
It sounds like std::weak_ptr can help in this case.
You can make one from a std::shared_ptr and pass it to the logger thread.
For example:
class FooLogger {
public:
void Log(std::string const& event) {
// log the event ...
}
};
int main() {
auto shared_logger = std::make_shared<FooLogger>();
std::thread foo_logger_thread([w_logger = std::weak_ptr(shared_logger)]{
while (true) {
auto logger = w_logger.lock();
if (logger)
logger->Log("some event");
else
break;
}
});
// some work ...
shared_logger.reset();
foo_logger_thread.join();
}
You should use make_shared instead of make_unique, and change:
std::thread foo_logger_thread([&] {
to
std::thread foo_logger_thread([foo_logger] {
This captures the shared_ptr by value, so it creates a new shared_ptr instance for the thread.
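Put together, that suggestion looks roughly like this (a sketch; note that the thread's copy of the shared_ptr never becomes null, so the loop needs its own stop signal, shown here as an atomic flag):
#include <atomic>
#include <memory>
#include <thread>

int main() {
    auto foo_logger = std::make_shared<FooLogger>();
    std::atomic<bool> keep_going{true};
    std::thread foo_logger_thread([foo_logger, &keep_going] {
        while (keep_going) {
            foo_logger->Log("some event");   // the captured copy keeps FooLogger alive
        }
    });
    SleepMs(50);            // pseudo code, as in the question
    keep_going = false;
    foo_logger_thread.join();
}   // FooLogger is destroyed when the last shared_ptr copy goes away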
class ThreadOne {
public:
ThreadOne();
void RealThread();
void EnqueueJob(s_info job);
std::queue<s_info> q_jobs;
private:
H5::H5File* targetFile = new H5::H5File("file.h5", H5F_ACC_TRUNC);
std::condition_variable cv_condition;
std::mutex m_job_q_;
};
ThreadOne::ThreadOne() {
}
void ThreadOne::RealThread() {
while (true) {
std::unique_lock<std::mutex> lock(m_job_q_);
cv_condition.wait(lock, [this]() { return !this->q_jobs.empty(); });
s_info info = std::move(q_jobs.front());
q_jobs.pop();
lock.unlock();
//* DO THE JOB *//
}
}
void ThreadOne::EnqueueJob(s_info job) {
{
std::lock_guard<std::mutex> lock(m_job_q_);
q_jobs.push(std::move(job));
}
cv_condition.notify_one();
}
ThreadOne *tWrite = new ThreadOne();
I want to make a thread, send it a pointer to an array and the array's name as a struct (s_info), and have the thread write it into a file. I think that's better than creating a new thread every time a write is needed.
I could make a thread pool and allocate jobs to it, but writing to the same file concurrently isn't allowed in my situation, so I think a single thread will be enough, and the program can still do CPU-bound work while a writing job is in progress.
To sum up, this class (hopefully) gets array pointers and their dataset names, puts them in q_jobs, and RealThread writes the arrays into a file.
I referred to a C++ thread pool program and the program initiates threads like this:
std::vector<std::thread> vec_worker_threads;
vec_worker_threads.reserve(num_threads_);
vec_worker_threads.emplace_back([this]() { this->RealThread(); });
I'm new to C++ and I understand what the code above does, but I don't know how to start RealThread in my class without a vector. How can I make an instance of the class that already has its thread (RealThread) up and running inside it?
From what I can gather, and as already discussed in the comments, you simply want a std::thread member for ThreadOne:
class ThreadOne {
std::thread thread;
public:
~ThreadOne();
//...
};
//...
ThreadOne::ThreadOne() {
thread = std::thread{&ThreadOne::RealThread, this};
}
ThreadOne::~ThreadOne() {
// (potentially) notify thread to finish first
if(thread.joinable())
thread.join();
}
//...
ThreadOne tWrite;
Note that I did not start the thread in the member-initializer-list of the constructor in order to avoid the thread accessing other members that have not been initialized yet. (The default constructor of std::thread does not start any thread.)
I also wrote a destructor which will wait for the thread to finish and join it. You must always join threads before destroying the std::thread object attached to it, otherwise your program will call std::terminate and abort.
Finally, I changed tWrite from a pointer to a plain object. There is probably no reason for you to use dynamic allocation there, and even if you do need it, you should be using
auto tWrite = std::make_unique<ThreadOne>();
or equivalent instead, so that you don't have to rely on manually deleting the pointer at the correct place.
Also note that your current RealThread function seems to never finish. It must return at some point, probably after receiving a notification from the main thread, otherwise thread.join() will wait forever.
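One way to sketch such a notification (the stop_ member is a hypothetical addition, not part of the original class):
// Sketch: RealThread also wakes up when stop_ is set and returns once the
// queue has been drained; the destructor sets the flag before joining.
// Requires an extra member:  bool stop_ = false;
void ThreadOne::RealThread() {
    while (true) {
        std::unique_lock<std::mutex> lock(m_job_q_);
        cv_condition.wait(lock, [this]() { return stop_ || !q_jobs.empty(); });
        if (q_jobs.empty())        // stop_ was set and nothing is left to write
            return;
        s_info info = std::move(q_jobs.front());
        q_jobs.pop();
        lock.unlock();
        //* DO THE JOB *//
    }
}

ThreadOne::~ThreadOne() {
    {
        std::lock_guard<std::mutex> lock(m_job_q_);
        stop_ = true;
    }
    cv_condition.notify_one();
    if (thread.joinable())
        thread.join();
}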
I'm currently writing a C/C++ DLL for later use, mostly from Delphi, and I'm more familiar with threads in Delphi than in C/C++, especially Boost. So I wonder how I can achieve the following scenario?
class CMyClass
{
private:
boost::thread* doStuffThread;
protected:
void doStuffExecute(void)
{
while(!isTerminationSignal()) // loop until termination signal
{
// do stuff
}
setTerminated(); // thread is finished
};
public:
CMyClass(void)
{
// create thread
this->doStuffThread = new boost::thread(boost::bind(&CMyClass::doStuffExecute, this));
};
~CMyClass(void)
{
// finish the thread
signalThreadTermination();
waitForThreadFinish();
delete this->doStuffThread;
// do other cleanup
};
}
I have read countless articles about Boost threading, signals, and mutexes, but I don't get it, maybe because it's Friday ;) Or is it just not doable the way I'm thinking of doing it?
Regards,
Daniel
Just use an atomic boolean to tell the thread to stop:
class CMyClass
{
private:
boost::thread doStuffThread;
boost::atomic<bool> stop;
protected:
void doStuffExecute()
{
while(!stop) // loop until termination signal
{
// do stuff
}
// thread is finished
};
public:
CMyClass() : stop(false)
{
// create thread
doStuffThread = boost::thread(&CMyClass::doStuffExecute, this);
};
~CMyClass()
{
// finish the thread
stop = true;
doStuffThread.join();
// do other cleanup
};
}
To wait for the thread to finish, you just join it; that will block until it has finished and can be joined. You need to join the thread anyway before you can destroy it, or it will terminate your program.
There is no need to use a pointer and create the thread with new; just use a boost::thread object directly. Creating everything on the heap is wasteful, unsafe, and poor style.
There is no need to use boost::bind to pass arguments to the thread constructor. For many many years boost::thread has supported passing multiple arguments to its constructor directly and it does the binding internally.
It's important that stop has been initialized to false before the new thread is created, otherwise if the new thread is spawned very quickly it could check the value of stop before it is initialized, and might happen to read a true value from the uninitialized memory, and then it would never enter the loop.
On the subject of style, writing foo(void) is considered by many C++ programmers to be a disgusting abomination. If you want to say your function takes no arguments then just write foo().