Creating a CUDA stream on each host thread (multi-threaded CPU) - C++

I have a multi-threaded CPU and I would like each CPU thread to be able to launch a separate CUDA stream. The separate CPU threads will be doing different things at different times, so there is a chance they won't overlap, but if they do launch CUDA kernels at the same time I would like those kernels to run concurrently.
I'm pretty sure this is possible because section 3.2.5.5 of the CUDA Toolkit documentation says "A stream is a sequence of commands (possibly issued by different host threads)..."
So if I want to implement this, I would do something like:
void main(int CPU_ThreadID) {
    cudaStream_t stream;
    cudaStreamCreate(&stream);
    int *d_a;
    int *a;
    cudaMalloc((void**)&d_a, 100*sizeof(int));
    cudaMallocHost((void**)&a, 100*8*sizeof(int));
    cudaMemcpyAsync(d_a, &a[100*CPU_ThreadID], 100*sizeof(int), cudaMemcpyHostToDevice, stream);
    sum<<<100,32,0,stream>>>(d_a);
    cudaStreamSynchronize(stream);  // make sure this stream's work is done before destroying it
    cudaStreamDestroy(stream);
}
That is just a simple example. If I know there are only 8 CPU threads, then I know at most 8 streams will be created. Is this the proper way to do this? Will this run concurrently if two or more different host threads reach this code around the same time? Thanks for any help!
Edit:
I corrected some of the syntax issues in the code block and put in the cudaMemcpyAsync as sgar91 suggested.

It really looks to me like you are proposing a multi-process application, not a multithreaded one. You don't mention which threading architecture you have in mind, nor even an OS, but the threading architectures I know of don't posit a thread routine called "main", and you haven't shown any preamble to the thread code.
A multi-process environment will generally create one device context per process, which will inhibit fine-grained concurrency.
Even if that's just an oversight, I would point out that a multi-threaded application should establish a GPU context on the desired device before threads are spawned.
Each thread can then issue a cudaSetDevice(0); or similar call, which should cause each thread to pick up the established context on the indicated device.
Once that is in place, you should be able to issue commands to the desired streams from whichever threads you like.
You may wish to refer to the cudaOpenMP sample code. Although it omits streams, it demonstrates a multi-threaded app with the potential for multiple threads to issue commands to the same device (and it could be extended to the same stream).
Whether kernels actually run concurrently once the above issues have been addressed is a separate matter. Concurrent kernel execution has a number of requirements, and the kernels themselves must have compatible resource requirements (blocks, shared memory, registers, etc.), which generally implies "small" kernels.
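To make the recipe above concrete, here is a minimal sketch, assuming a single GPU (device 0) and a trivial kernel named increment invented for this example. The context is established on the main thread before the workers are spawned; each worker then calls cudaSetDevice(0) and drives its own stream:
#include <cuda_runtime.h>
#include <thread>
#include <vector>

__global__ void increment(int *data, int n) {   // hypothetical stand-in kernel
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;
}

void worker(int tid) {
    (void)tid;                       // tid could index a per-thread slice of shared data
    cudaSetDevice(0);                // pick up the context established by the main thread
    cudaStream_t stream;
    cudaStreamCreate(&stream);

    const int N = 100;
    int *d_a = nullptr, *h_a = nullptr;
    cudaMalloc((void**)&d_a, N * sizeof(int));
    cudaMallocHost((void**)&h_a, N * sizeof(int));   // pinned memory, needed for true async copies

    cudaMemcpyAsync(d_a, h_a, N * sizeof(int), cudaMemcpyHostToDevice, stream);
    increment<<<(N + 31) / 32, 32, 0, stream>>>(d_a, N);
    cudaMemcpyAsync(h_a, d_a, N * sizeof(int), cudaMemcpyDeviceToHost, stream);

    cudaStreamSynchronize(stream);   // waits for this thread's stream only
    cudaFreeHost(h_a);
    cudaFree(d_a);
    cudaStreamDestroy(stream);
}

int main() {
    cudaSetDevice(0);
    cudaFree(0);                     // force context creation before any threads are spawned

    std::vector<std::thread> threads;
    for (int t = 0; t < 8; ++t) threads.emplace_back(worker, t);
    for (auto &th : threads) th.join();
    return 0;
}
Even with this structure in place, whether the eight kernels actually overlap still depends on the resource constraints mentioned above.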


Should I just believe that std::thread is not implemented by creating only user threads?

I learned that if all user threads are mapped onto a single kernel thread, they are all blocked when one of them makes a blocking system call, such as an I/O system call.
If std::thread is implemented purely with user threads in some environment, then an I/O thread in a program could block a rendering thread.
So I think the user/kernel distinction is important, but the C++ standard does not make it.
Then how can I be assured that situations like the above will not occur in a particular environment (like Windows 10)?
I learned that if all user threads are mapped onto a single kernel thread, they are all blocked when one of them makes a blocking system call, such as an I/O system call.
Yes; however, it's rare for anything to use the kernel's system calls directly. Typically programs use a user-space library. For a normally blocking "system" call (e.g. the read() function in a standard C library) the library can emulate it using asynchronous functions (e.g. the aio_read() function in a standard C library) plus a user-space thread switch.
So I think the user/kernel distinction is important, but the C++ standard does not make it.
It is important, but for a different reason.
The first problem with user-space threading is that the kernel isn't aware of thread priorities. Imagine a computer running two completely separate applications (with the user pressing "alt+tab" to switch between them), where each application has a high priority thread (for the user interface), a few medium priority threads (for general work), and a few low priority threads (for doing things like prefetching and pre-calculating stuff in the background). You can end up with a situation where the kernel gives CPU time to one application (which uses it for low priority threads) because it doesn't know that the other application needs CPU time for its higher priority threads.
In other words, for a multi-process environment, user-space threading has a high risk of wasting CPU time doing irrelevant work (in one process) while important work (in another process) waits.
The second problem with user-space threading is that (on modern systems) good scheduling decisions take into account differences between CPUs ("big.LITTLE", hyper-threading, which caches are shared by which CPUs, ...) and power management (e.g. for low priority threads it's reasonable to reduce CPU clock speed to increase battery life and/or reduce CPU temperatures, so the CPUs can run faster for longer when higher priority work needs to be done later). User-space has none of the information needed (and none of the ability to change CPU speeds, etc.) and so cannot make good scheduling decisions.
Note that these problems could be "fixed" by having a huge amount of communication between user-space and kernel (the user-space threading informing kernel of thread priorities of waiting threads and currently running thread, kernel informing user-space of CPU differences and power management, etc); but the whole point of user-space thread switching is to avoid the cost of kernel system calls, so this communication between user-space and kernel would make user-space thread switching pointless.
Then how can I be assured that situations like the above will not occur in a particular environment (like Windows 10)?
You can't. It's not your decision.
When you choose to use high level abstractions (std::thread in C++ rather than using the kernel directly from assembly language) you are deliberately delegating low level decisions to something else (the compiler and its run-time environment). The advantages (you no longer have to care about these decisions) are the disadvantages (you are no longer able to make these decisions).
Rephrasing my attempt to answer, after talking to the OP and understanding better what is really being asked.
Most I/O operations block at the thread level: if a thread starts one, only that thread is blocked, not the whole process.
The OP seems to intend to start a rendering operation in one thread and doesn't want it to be blocked by an I/O operation in that thread. Two possible solutions are:
To spawn another thread to do the blocking I/O operation, and let the rendering thread proceed independently of the I/O (a sketch of this option follows below);
To use facilities specific to each OS (outside standard C++) to start the same I/O operation in an asynchronous, non-blocking form.
Lastly, to minimize blocking, what an application developer can do is try to make sure there is no simultaneous access to the same I/O device.
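As a minimal sketch of the first option (the file name and the render placeholder are invented for illustration), the blocking read is handed to a worker thread while the rendering thread keeps going and checks an atomic flag:
#include <atomic>
#include <fstream>
#include <iterator>
#include <string>
#include <thread>

std::atomic<bool> io_done{false};
std::string file_contents;           // written by the I/O thread, read only after io_done

void blocking_io() {
    std::ifstream in("assets/scene.txt");        // hypothetical file; this read may block
    file_contents.assign(std::istreambuf_iterator<char>(in),
                         std::istreambuf_iterator<char>());
    io_done.store(true, std::memory_order_release);
}

int main() {
    std::thread io(blocking_io);     // only the I/O thread can block here
    while (!io_done.load(std::memory_order_acquire)) {
        // render_frame();           // hypothetical: keep drawing while the file loads
    }
    io.join();
    // file_contents is now safe to use on this thread
}
A real renderer would do this once per frame rather than in a tight loop, but the structure is the same: the blocking call lives on its own thread.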
You can be assured that std::thread is not using "user threads" because that concept pretty much died around the turn of the century.
Modern hardware has multiple CPU cores, which work much better if there are sufficient kernel threads. Without enough kernel threads, CPU cores may sit idle.
The idea of "user threads" originated in an era when there was only a single CPU core, and people instead worried about having too many kernel threads.

How to create a user space thread? [duplicate]

I have just started coding device drivers and am new to threading. I went through many documents to get an idea about threads, but I still have some doubts.
What is a kernel thread?
How does it differ from a user thread?
What is the relationship between the two kinds of threads?
How can I implement kernel threads?
Where can I see the output of the implementation?
Can anyone help me?
Thanks.
A kernel thread is a task_struct with no userspace components.
Besides the lack of userspace, it has different ancestors (the kthreadd kernel thread instead of the init process) and is created by a kernel-only API instead of by sequences of clone/fork/exec system calls.
Kernel threads have kthreadd as a parent. Apart from that, kernel threads enjoy the same "independence" from one another as userspace processes do.
Use the kthread_run function/macro from the kthread.h header. You will most probably have to write a kernel module in order to call this function, so you should take a look at the Linux Device Drivers book.
If you are referring to the text output of your implementation (via printk calls), you can see this output in the kernel log using the dmesg command.
A kernel thread is a kernel task running only in kernel mode; it usually has not been created by fork() or clone() system calls. An example is kworker or kswapd.
You probably should not implement kernel threads if you don't know what they are.
Google gives many pages about kernel threads, e.g. Frey's page.
User threads & stacks:
Each thread has its own stack so that it can use its own local variables; threads share global variables, which are part of the .data or .bss sections of a Linux executable.
Since threads share global variables, we use synchronization mechanisms such as a mutex when we want to access or modify a global variable in a multithreaded application (see the sketch below). Local variables are part of each thread's individual stack, so they need no synchronization.
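As a minimal sketch of that point (the counter and thread count are invented for illustration), a mutex guards the shared global while each thread's locals need no protection:
#include <pthread.h>
#include <cstdio>

int shared_counter = 0;                           // global: in .data, shared by all threads
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *) {
    int local = 0;                                // local: on this thread's own stack, no lock needed
    for (int i = 0; i < 100000; ++i) {
        ++local;
        pthread_mutex_lock(&lock);                // serialize access to the shared global
        ++shared_counter;
        pthread_mutex_unlock(&lock);
    }
    return nullptr;
}

int main() {
    pthread_t t[4];
    for (auto &th : t) pthread_create(&th, nullptr, worker, nullptr);
    for (auto &th : t) pthread_join(th, nullptr);
    std::printf("shared_counter = %d\n", shared_counter);  // always 400000 with the mutex
}
Without the mutex, the final count would vary from run to run because the increments of shared_counter would race.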
Kernel threads
Kernel threads emerged from the need to run kernel code in process context. Kernel threads are the basis of the workqueue mechanism. Essentially, a kernel thread is a thread that only runs in kernel mode and has no user address space or other user attributes.
To create a kernel thread, use kthread_create():
#include <linux/kthread.h>

struct task_struct *kthread_create(int (*threadfn)(void *data),
                                   void *data, const char namefmt[], ...);
Kernel threads & stacks:
Kernel threads are used to do post-processing tasks for the kernel, like pdflush threads, workqueue threads, etc.
Kernel threads are basically new processes without an address space (they can be created using the clone() call with the required flags), which means they cannot switch to user space. Kernel threads are schedulable and preemptible like normal processes.
Kernel threads have their own stacks, which they use to manage local state.
More about kernel stacks:
https://www.kernel.org/doc/Documentation/x86/kernel-stacks
Since you're comparing kernel threads with user[land] threads, I assume you mean something like the following.
The normal way of implementing threads nowadays is to do it in the kernel, so those can be considered "normal" threads. It's however also possible to implement them in userland, using signals such as SIGALRM, whose handler saves the current execution state (registers, mostly) and switches to another previously saved state. Several OSes used this as a way to implement threads before they got proper kernel thread support. Such threads can be faster, since you don't have to enter kernel mode, but in practice they have faded away.
There are also cooperative userland threads, where one thread runs until it calls a special function (usually called yield), which then switches to another thread in a similar way as with SIGALRM above (see the sketch below). The advantage here is that the program is in total control, which can be useful when you have timing concerns (a game, for example). You also don't have to care much about thread safety. The big disadvantage is that only one thread can run at a time, which is why this method is uncommon now that processors have multiple cores.
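As a rough sketch of such cooperative userland threads (using the POSIX ucontext API, which is obsolescent but still available on Linux; the task names are invented), two tasks explicitly yield to each other while the kernel sees only one thread:
#include <ucontext.h>
#include <cstdio>

static ucontext_t main_ctx, task_a_ctx, task_b_ctx;

void task_a() {
    std::puts("A: step 1");
    swapcontext(&task_a_ctx, &task_b_ctx);   // explicit yield to B
    std::puts("A: step 2");
    swapcontext(&task_a_ctx, &task_b_ctx);   // yield again; when B finishes, control returns here
}

void task_b() {
    std::puts("B: step 1");
    swapcontext(&task_b_ctx, &task_a_ctx);   // explicit yield back to A
    std::puts("B: step 2");
}

int main() {
    static char stack_a[64 * 1024], stack_b[64 * 1024];  // each userland thread owns a stack

    getcontext(&task_a_ctx);
    task_a_ctx.uc_stack.ss_sp   = stack_a;
    task_a_ctx.uc_stack.ss_size = sizeof stack_a;
    task_a_ctx.uc_link          = &main_ctx;     // when A's function returns, resume main
    makecontext(&task_a_ctx, task_a, 0);

    getcontext(&task_b_ctx);
    task_b_ctx.uc_stack.ss_sp   = stack_b;
    task_b_ctx.uc_stack.ss_size = sizeof stack_b;
    task_b_ctx.uc_link          = &task_a_ctx;   // when B's function returns, resume A
    makecontext(&task_b_ctx, task_b, 0);

    swapcontext(&main_ctx, &task_a_ctx);         // start the cooperative schedule at A
    return 0;                                    // prints A1, B1, A2, B2
}
Note how the "scheduling" here is nothing but saved register sets being swapped in user space; if either task made a blocking system call, everything would stop, which is exactly the weakness described above.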
Kernel threads are implemented in the kernel. Perhaps you meant how to use them? The most common way is to call pthread_create.

Calls to GPU kernel from a multithreaded C++ application?

I'm re-implementing some sections of an image processing library that's multithreaded C++ using pthreads. I'd like to be able to invoke a CUDA kernel in every thread and trust the device itself to handle kernel scheduling, but I know better than to count on that behavior. Does anyone have any experience with this type of issue?
CUDA 4.0 made it much simpler to drive a single CUDA context from multiple threads - just call cudaSetDevice() to specify the CUDA device to which you want the thread to submit commands.
Note that this is likely to be less efficient than driving the CUDA context from a single thread - unless the CPU threads have other work to keep them occupied between kernel launches, they are likely to get serialized by the mutexes that CUDA uses internally to keep its data structures consistent.
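A compressed sketch of this, assuming a single GPU (device 0) and a placeholder kernel named process standing in for the real image-processing work, might look like:
#include <cuda_runtime.h>
#include <pthread.h>

__global__ void process(float *img, int n) {     // placeholder for real image-processing work
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) img[i] *= 2.0f;
}

void *worker(void *) {
    cudaSetDevice(0);                 // attach this pthread to the shared context on device 0
    cudaStream_t s;
    cudaStreamCreate(&s);

    const int n = 1 << 20;
    float *d_img;
    cudaMalloc((void**)&d_img, n * sizeof(float));
    process<<<(n + 255) / 256, 256, 0, s>>>(d_img, n);
    cudaStreamSynchronize(s);         // each thread waits only on its own stream

    cudaFree(d_img);
    cudaStreamDestroy(s);
    return nullptr;
}

int main() {
    pthread_t t[4];
    for (auto &th : t) pthread_create(&th, nullptr, worker, nullptr);
    for (auto &th : t) pthread_join(th, nullptr);
    return 0;
}
Per the caveat above, the launches from the four threads still funnel through the driver's internal locks, so this buys potential concurrency on the device, not in the submission path.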
Perhaps CUDA streams are the solution to your problem. Try to invoke kernels on a different stream in each thread. However, I don't see how this will help, as I think your kernel executions will be serialized even though they are invoked in parallel. In fact, CUDA kernel invocations, even on the same stream, are asynchronous by nature, so you can make any number of invocations from the same thread. I really don't understand what you are trying to achieve.

Multithreading vs multiprocessing

I am new to this kind of programming and need your point of view.
I have to build an application but I can't get it to compute fast enough. I have already tried Intel TBB, and it is easy to use, but I have never used other libraries.
I am reading about OpenMP for multiprocessor programming and about Boost for multithreading, but I don't know their pros and cons.
In C++, when is multithreaded programming advantageous compared to multiprocessor programming, and vice versa? Which is best suited to heavy computations or to launching many tasks? What are their pros and cons when we build an application designed with them? And finally, which library is best to work with?
Multithreading means exactly that, running multiple threads. This can be done on a uni-processor system, or on a multi-processor system.
On a single-processor system, when running multiple threads, the actual observation of the computer doing multiple things at the same time (i.e., multi-tasking) is an illusion, because what's really happening under the hood is that there is a software scheduler performing time-slicing on the single CPU. So only a single task is happening at any given time, but the scheduler is switching between tasks fast enough so that you never notice that there are multiple processes, threads, etc., contending for the same CPU resource.
On a multi-processor system, the need for time-slicing is reduced. The time-slicing effect is still there, because a modern OS could have hundreds of threads contending for two or more processors, and there is typically never a 1-to-1 relationship between the number of threads and the number of processing cores available. So at some point, a thread will have to stop and another thread will start on a CPU the two threads are sharing. This is again handled by the OS's scheduler. That being said, with a multiprocessor system, you can have two things happening at the same time, unlike with the uni-processor system.
In the end, the two paradigms are really somewhat orthogonal in the sense that you will need multithreading whenever you want two or more tasks running asynchronously, but because of time-slicing, you do not necessarily need a multi-processor system to accomplish that. If you are trying to run multiple threads, and are doing a task that is highly parallel (i.e., trying to solve an integral), then yes, the more cores you can throw at the problem, the better. You won't necessarily need a 1-to-1 relationship between threads and processing cores, but at the same time, you don't want to spin off so many threads that you end up with tons of idle threads, because they must wait to be scheduled on one of the available CPU cores. If, on the other hand, your parallel task requires some sequential component, i.e., a thread will be waiting for the result from another thread before it can continue, then you may be able to run more threads with some type of barrier or synchronization method, so that the threads that need to be idle are not spinning away using CPU time, and only the threads that need to run are contending for CPU resources (a sketch of this follows below).
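As a small sketch of that synchronization point (the producer/consumer roles are invented for illustration), a condition variable lets the dependent thread sleep instead of burning CPU time while it waits for another thread's result:
#include <condition_variable>
#include <mutex>
#include <thread>
#include <cstdio>

std::mutex m;
std::condition_variable cv;
bool result_ready = false;
int result = 0;

void producer() {
    {
        std::lock_guard<std::mutex> lk(m);
        result = 42;                  // stand-in for a long computation
        result_ready = true;
    }
    cv.notify_one();                  // wake the waiting thread
}

void consumer() {
    std::unique_lock<std::mutex> lk(m);
    cv.wait(lk, [] { return result_ready; });   // sleeps, consuming no CPU, until notified
    std::printf("got %d\n", result);
}

int main() {
    std::thread c(consumer), p(producer);
    p.join();
    c.join();
}
The predicate passed to cv.wait() also guards against spurious wakeups, so the consumer never runs before the result actually exists.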
There are a few important points that I believe should be added to the excellent answer by @Jason.
First, multithreading is not always an illusion, even on a single processor; there are operations that do not involve the processor. These are mainly I/O - disk, network, terminal, etc. The basic form of such an operation is blocking, or synchronous: your program waits until the operation is completed and then proceeds. While waiting, the CPU is switched to another process/thread.
If you have anything you can do during that time (e.g. background computation while waiting for user input, serving another request, etc.), you have basically two options:
use asynchronous I/O: you call a non-blocking I/O operation, providing it with a callback function that tells it "call this function when you are done". The call returns immediately and the I/O operation continues in the background. You go on with the other stuff.
use multithreading: you have a dedicated thread for each kind of task. While one waits for the blocking I/O call, the other goes on.
Both approaches are difficult programming paradigms, each has its pros and cons.
with async I/O, the program's logic is less obvious and harder to follow and debug. However, you avoid thread-safety issues.
with threads, the challenge is to write thread-safe programs. Thread-safety faults are nasty bugs that are quite difficult to reproduce. Overuse of locking can actually degrade performance instead of improving it.
(coming to the multi-processing)
Multithreading became popular on Windows because manipulating processes is quite heavy on Windows (creating a process, context switching, etc.), as opposed to threads, which are much more lightweight (at least this was the case when I worked on Win2K).
On Linux/Unix, processes are much more lightweight. Also (AFAIK), threads on Linux are actually implemented internally as a kind of process, so there is no gain in context switching of threads vs. processes. However, you then need to use some form of IPC (inter-process communication), such as shared memory, pipes, message queues, etc.
On a lighter note, look at the SQLite FAQ, which declares "Threads are evil"! :)
To answer the first question:
The best approach is to just use multithreading techniques in your code until you get to the point where even that doesn't give you enough benefit. Assume the OS will handle delegation to multiple processors if they're available.
If you actually are working on a problem where multithreading isn't enough, even with multiple processors (or if you're running on an OS that isn't using its multiple processors), then you can worry about discovering how to get more power. Which might mean spawning processes across a network to other machines.
I haven't used TBB, but I have used IPP and found it to be efficient and well-designed. Boost is portable.
Just wanted to mention that the Flow-Based Programming ( http://www.jpaulmorrison.com/fbp ) paradigm is a naturally multiprogramming/multiprocessing approach to application development. It provides a consistent application view from high level to low level. The Java and C# implementations take advantage of all the processors on your machine, but the older C++ implementation only uses one processor. However, it could fairly easily be extended to use BOOST (or pthreads, I assume) by the use of locking on connections. I had started converting it to use fibers, but I'm not sure if there's any point in continuing on this route. :-) Feedback would be appreciated. BTW The Java and C# implementations can even intercommunicate using sockets.

What is a process and a thread?

Yes, I have read many materials related to operating systems, and I am still reading. But it seems all of them describe processes and threads in an "abstract" way, offering a lot of high-level elaboration on their behavior and logical organization. I am wondering what they are physically. In my opinion, they are just some in-memory "data structures" which are maintained and used by the kernel code to facilitate the execution of programs. For example, the operating system uses a process data structure (a PCB) to describe the aspects of the process assigned to a certain program, such as its priority, its address space and so on. Is this right?
The first thing you need to know to understand the difference between a process and a thread is the fact that processes do not run - threads do.
So, what is a thread? The closest I can get to explaining it is: an execution state, as in a combination of CPU registers, stack, the lot. You can see proof of that by breaking in a debugger at any given moment. What do you see? A call stack, a set of registers. That's pretty much it. That's the thread.
Now, then, what is a process? Well, it's like an abstract "container" entity for running threads. As far as the OS is concerned, to a first approximation, it's an entity the OS allocates some VM to, assigns some system resources to (like file handles and network sockets), &c.
How do they work together? The OS creates a "process" by reserving some resources for it and starting a "main" thread. That thread can then spawn more threads. Those are the threads in one process. They can more or less share those resources one way or another (say, locking might be needed so they don't spoil the fun for others, &c). From there on, the OS is normally responsible for maintaining those threads "inside" that VM (detecting and preventing attempts to access memory which doesn't "belong" to that process), and for providing some type of scheduling of those threads, so that they can run "one-after-another-and-not-just-one-all-the-time".
Normally when you run an executable like notepad.exe, a single process is created. That process can spawn other processes, but in most cases there is a single process for each executable that you run. Within the process, there can be many threads. Usually at first there is one thread, which starts at the program's "entry point", usually the main function. Instructions are executed one by one in order; like a person who only has one hand, a thread can only do one thing at a time before it moves on to the next.
That first thread can create additional threads. Each additional thread has its own entry point, which is usually defined with a function. The process is like a container for all the threads that have been spawned within it (see the sketch below).
That is a pretty simplistic explanation. I could go into more detail, but it would probably overlap with what you will find in your textbooks.
EDIT: You'll notice there are lots of "usually"s in my explanation, as there are occasionally rare programs that do things drastically differently.
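As a tiny sketch of that idea (the function names are invented), the process starts with one thread executing main, and each extra thread gets its own entry function:
#include <thread>
#include <cstdio>

void download() { std::puts("downloading..."); }   // entry point of the second thread
void render()   { std::puts("rendering...");   }   // entry point of the third thread

int main() {                     // entry point of the first thread
    std::thread t1(download);    // the process now contains three threads
    std::thread t2(render);
    t1.join();                   // the first thread waits for the others to finish
    t2.join();
}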
One of the reasons why it is pretty much impossible to describe threads and processes in a non-abstract way is that they are abstractions.
Their concrete implementations differ tremendously.
Compare for example an Erlang Process and a Windows Process: an Erlang Process is very lightweight, often less than 400 Bytes. You can start 10 million processes on a not very recent laptop without any problems. They start up very quickly, they die very quickly and you are expected to be able to use them for very short tasks. Every Erlang Process has its own Garbage Collector associated with it. Erlang Processes can never share memory, ever.
Windows Processes are very heavy, sometimes hundreds of MiBytes. You can start maybe a couple of thousand of them on a beefy server, if you are lucky. They start up and die pretty slowly. Windows Processes are the units of Applications such as IDEs or Text Editors or Word Processors, so they are usually expected to live quite a long time (at least several minutes). They have their own Address Space, but no Garbage Collector. Windows Processes can share memory, although by default they don't.
Threads are a similar matter: an NPTL Linux Thread on x86 can be as small as 4 KiByte and with some tricks you can start 800000+ on a 32 Bit x86 machine. The machine will certainly be usable with thousands, maybe tens of thousands of threads. A .NET CLR Thread has a minimum size of about 1 MiByte, which means that just 4000 of those will eat up your entire address space on a 32 Bit machine. So, while 4000 NPTL Linux Threads is generally not a problem, you can't even start 4000 .NET CLR Threads because you will run out of memory before that.
OS Processes and OS Threads are also implemented very differently between different Operating Systems. There are two main approaches. In the first, the kernel knows only about processes; Threads are implemented by a Userspace Library, without the kernel knowing anything about them at all. In this case, there are again two variants: 1:1 (every Thread maps to one Kernel Process) or m:n (m Threads map to n Processes, where usually m > n and often n == #CPUs). This was the early approach taken on many Operating Systems after Threads were invented. However, it is usually deemed inefficient and has been replaced on almost all systems by the second approach: Threads are implemented (at least partially) in the kernel, so that the kernel now knows about two distinct entities, Threads and Processes.
One Operating System that goes a third route, is Linux. In Linux, Threads are neither implemented in Userspace nor in the Kernel. Instead, the Kernel provides an abstraction of both a Thread and a Process (and indeed a couple of more things), called a Task. A Task is a Kernel Scheduled Entity, that carries with it a set of flags that determine which resources it shares with its siblings and which ones are private.
Depending on how you set those flags, you get either a Thread (share pretty much everything) or a Process (share all system resources like the system clock, the filesystem namespace, the networking namespace, the user ID namespace and the process ID namespace, but do not share the Address Space). But you can also get some other pretty interesting things. You can trivially get BSD-style jails (basically the same flags as a Process, but without sharing the filesystem or the networking namespace). Or you can get what other OSs call a Virtualization Container or Zone (like a jail, but without sharing the UID and PID namespaces and the system clock). Since a couple of years ago, via a technology called KVM (Kernel Virtual Machine), you can even get a full-blown Virtual Machine (share nothing, not even the processor's Page Tables). [The cool thing about this is that you get to reuse the highly-tuned, mature Task Scheduler in the kernel for all of these things. One of the things the Xen Virtual Machine has often been criticized for was the poor performance of its scheduler. The KVM developers have a much superior scheduler than Xen, and the best thing is they didn't even have to write a single line of code for it!]
So, on Linux, the performance of Threads and Processes is much closer than on Windows and many other systems, because on Linux, they are actually the same thing. Which means that the usage patterns are very different: on Windows, you typically decide between using a Thread and a Process based on their weight: can I afford a Process or should I use a Thread, even though I actually don't want to share state? On Linux (and usually Unix in general), you decide based on their semantics: do I actually want to share state or not?
One reason why Processes tend to be lighter on Unix than on Windows, is different usage: on Unix, Processes are the basic unit of both concurrency and functionality. If you want to use concurrency, you use multiple Processes. If your application can be broken down into multiple independent pieces, you use multiple Processes. Every Process does exactly one thing and only that one thing. Even a simple one-line shell script often involves dozens or hundreds of Processes. Applications usually consist of many, often short-lived Processes.
On Windows, Threads are the basic units of concurrency and COM components or .NET objects are the basic units of functionality. Applications usually consist of a single long-running Process.
Again, they are used for very different purposes and have very different design goals. It's not that one or the other is better or worse, it's just that they are so different that the common characteristics can only be described very abstractly.
Pretty much the only few things you can say about Threads and Processes are that:
Threads belong to Processes
Threads are lighter than Processes
Threads share most state with each other
Processes share significantly less state than Threads (in particular, they generally share no memory, unless specifically requested)
I would say that :
A process has a memory space, open files, ..., and one or more threads.
A thread is an instruction stream that can be scheduled by the system on a processor.
Have a look at the detailed answer I gave previously here on SO. It gives an insight into a toy kernel structure responsible for maintaining processes and threads...
Hope this helps,
Best regards,
Tom.
We have discussed this very issue a number of times here. Perhaps you will find some helpful information here:
What is the difference between a process and a thread
Process vs Thread
Thread and Process
A process is a container for a set of resources used while executing a program.
A process includes the following:
A private virtual address space.
A program.
A list of handles.
An access token.
A unique process ID.
At least one thread.
A pointer to the parent process, whether or not that process still exists.
That being said, a process can contain multiple threads.
Processes themselves can be grouped into jobs, which are containers for processes and are executed as single units.
A thread is what Windows uses to schedule execution of instructions on the CPU. Every process has at least one.
I have a couple of pages on my wiki you could take a look at:
Process
Thread
Threads are memory structures in the scheduler of the operating system, as you say. A thread points to the start of some instructions in memory and executes them when the scheduler decides it should. While the thread is executing, the hardware timer runs; once it hits the set time, an interrupt is triggered. The hardware then stops execution of the current program and invokes the registered interrupt handler function, which is part of the scheduler and is thereby informed that the current thread's turn has ended.
Physically:
A Process is a structure that maintains the owning credentials, the thread list, and an open handle list.
A Thread is a structure containing a context (i.e. a saved register set plus a location to execute), a set of PTEs describing what pages are mapped into the process's Virtual Address space, and an owner.
This is of course an extremely simplified explanation, but it gets the important bits across. The fundamental unit of execution on both Linux and Windows is the Thread - the kernel scheduler doesn't care about processes (much). This is why, on Linux, a thread is just a process that happens to share PTEs with another process.
A process is an area in memory managed by the OS to run an application. A thread is a smaller area in memory within a process, used to run a dedicated task.
Processes and Threads are abstractions - there is nothing physical about them, or any other part of an operating system for that matter. That is why we call it software.
If you view a computer in physical terms you end up with a jumble of electronics that emulate what a Turing Machine does. Trying to do anything useful with a raw Turing Machine would turn your brain to Jell-O in five minutes flat. To avoid that unpleasant experience, computer folks developed a set of abstractions to compartmentalize various aspects of computing. This lets you focus on the level of abstraction that interests you without having to worry about all the other stuff supporting it.
Some things have been cast into circuitry (e.g. adders and the like), which makes them physical, but the vast majority of what we work with is based on a set of abstractions. As a general rule, the abstractions we use have some form of mathematical underpinning to them. This is why stacks, queues and "state" play such an important role in computing - there is a well-founded body of mathematics around these abstractions that lets us build upon and reason about their manipulation.
The key is realizing that software is always based on a composite of abstract models of "things". Those "things" don't always relate to anything physical; more likely they relate to some other abstraction. This is why you cannot find a satisfactory "physical" basis for Processes and Threads anywhere in your textbooks.
Several other people have posted links to and explanations about what threads and processes are; none of them point to anything "physical" though. As you guessed, they are really just a set of data structures and rules that live within the larger context of an operating system (which in turn is just more data structures and rules...).
Software is like an onion, layers on layers on layers; once you peel all the layers (abstractions) away, nothing much is left! But the onion is still very real.
It's kind of hard to give a short answer which does this question justice.
At the risk of getting this horribly wrong and oversimplifying things, you can say threads & processes are an operating-system/platform concept, and under the hood you can define a single-threaded process by:
Low-level CPU instructions (aka the program).
State of execution, meaning the instruction pointer (really, a special register), register values, and the stack.
The heap (aka general-purpose memory).
In modern operating systems, each process has its own memory space. Aside from shared memory (only some OSes support this), the operating system forbids one process from writing into the memory space of another. In Windows, you'll see a general protection fault if a process tries.
So you can say a multi-threaded process is the whole package. And each thread is basically nothing more than state of execution.
So when a thread is pre-empted for another (say, on a uni-processor system), all the operating system has to do in principle is save the state of execution of the thread (not sure if it has to do anything special for the stack) and load in another.
Pre-empting an entire process, on the other hand, is more expensive as you can imagine.
Edit: The ideas apply to abstracted platforms like Java as well.
They are not physical pieces of string, if that's what you're asking. ;)
As I understand it, pretty much everything inside the operating system is just data. Modern operating systems depend on a few hardware requirements: virtual memory address translation, interrupts, and memory protection (There's a lot of fuzzy hardware/software magic that happens during boot, but I'm not very familiar with that process). Once those physical requirements are in place, everything else is up to the operating system designer. It's all just chunks of data.
The reason they are only described in an abstract way is that they are concepts; while they will be implemented as data structures, there is no universal rule for how they have to be implemented.
This is at least true for threads/processes on their own; they won't do much good without a scheduler and an interrupt timer.
The scheduler is the algorithm by which the operating system chooses the next thread to run for a limited amount of time, and the interrupt timer is a piece of hardware which periodically interrupts the execution of the current thread and hands control back to the scheduler.
Forgot something: the above is not true if you only have cooperative threading. Cooperative threads have to actively yield control to the next thread, which can get ugly, with one thread polling for the results of another thread that is waiting for the first to yield.
These are even more lightweight than other threads as they don't require support of the underlying operating system to work.
I have seen many of the answers, but most of them are not clear enough for an OS beginner.
In any modern-day operating system, a process has a virtual CPU, virtual memory, and virtual I/O.
Virtual CPU: if you have multiple cores, the process might be assigned one or more of the cores for processing by the scheduler.
Virtual I/O: I/O might be shared between various processes. For example, the keyboard can be shared by multiple processes. So when you type in a notepad, you see the text changing, while a key logger running as a daemon is storing all the keystrokes. So the process is sharing an I/O resource.
Virtual memory: http://en.wikipedia.org/wiki/Virtual_memory - you can go through the link.
So when a process is taken out of the state of execution by the scheduler, its state, containing the values stored in the registers, its stack, its heap, and much more, is saved into a data structure.
So when we compare a process with a thread: threads started by a process share the virtual I/O and virtual memory assigned to the process which started them, but not the virtual CPU.
So there might be multiple threads started by a process, all sharing the same virtual memory and virtual I/O, but each having a different virtual CPU.
So you can understand the need for locking a process's resources, be they statically allocated (stack) or dynamically allocated (heap), since the virtual memory space is shared among the threads of a process.
Also, with each thread having its own virtual CPU, threads can run in parallel on different cores and significantly reduce the completion time of a process (the reduction will be observable only if you have managed the memory wisely and there are multiple cores). A sketch of this follows below.
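As a small sketch of those last two points (the sizes and names are invented), several threads can work on disjoint slices of one heap-allocated array in parallel; no lock is needed because no element is touched by two threads, and the threads are joined before the partial results are combined:
#include <numeric>
#include <thread>
#include <vector>
#include <cstdio>

int main() {
    const int n_threads = 4;
    std::vector<long> data(1000000, 1);       // heap memory, shared by all threads
    std::vector<long> partial(n_threads, 0);  // one slot per thread: no locking needed

    std::vector<std::thread> pool;
    const size_t chunk = data.size() / n_threads;
    for (int t = 0; t < n_threads; ++t) {
        pool.emplace_back([&, t] {
            size_t begin = t * chunk;
            size_t end   = (t == n_threads - 1) ? data.size() : begin + chunk;
            for (size_t i = begin; i < end; ++i)
                partial[t] += data[i];        // disjoint slices: safe without a mutex
        });
    }
    for (auto &th : pool) th.join();          // wait before reading the partial sums

    long total = std::accumulate(partial.begin(), partial.end(), 0L);
    std::printf("total = %ld\n", total);      // 1000000
}
If the threads instead incremented a single shared counter, a mutex (or an atomic) would be required, and the speedup would shrink accordingly.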
A thread is controlled by a process; a process is controlled by the operating system.
Processes don't share memory with each other, since each runs in a so-called "protected flat model"; threads, on the other hand, share the same memory.
With Windows, at least once you get past Win 3.1, the operating system (OS) contains multiple processes, each with its own memory space, which can't interact with other processes without going through the OS.
Each process has one or more threads that share the same memory space and do not need the OS in order to interact with the other threads.
A process is a container of threads.
Well, I haven't seen an answer to "What are they physically?" yet, so I'll give it a try.
Processes and threads are nothing physical. They are a feature of the operating system; usually no physical component of a computer knows about them. The CPU only processes a sequential stream of opcodes. These opcodes might belong to a thread; the OS then uses traps and interrupts to regain control, decide which code to execute, and switch to another thread.
A process is one complete entity, e.g. an exe file or one JVM. There can be a child process of a parent process, in which the exe file runs again in a separate space. A thread is a separate path of execution in the same process, where the process controls which thread executes, halts, etc.
Trying to answer this question from the Java world's perspective.
A process is an execution of a program but a thread is a single execution sequence within the process. A process can contain multiple threads. A thread is sometimes called a lightweight process.
For example:
Example 1:
A JVM runs in a single process and threads in a JVM share the heap belonging to that process. That is why several threads may access the same object. Threads share the heap and have their own stack space. This is how one thread’s invocation of a method and its local variables are kept thread safe from other threads. But the heap is not thread-safe and must be synchronized for thread safety.
Example 2:
A program might not be able to draw pictures while reading keystrokes. The program must give its full attention to the keyboard input, and lacking the ability to handle more than one event at a time will lead to trouble. The ideal solution to this problem is the seamless execution of two or more sections of a program at the same time. Threads allow us to do this. Here, drawing the picture is the process and reading keystrokes is the sub-task (a thread).