Concurrent and conditional signal assignment (VHDL)

In VHDL, there are two kinds of signal assignment:
concurrent ----> when...else
           ----> with...select...when
sequential ----> if...elsif...else
           ----> case...when
The problem is that some say that when...else conditions are checked line by line (kind of sequential), while select...when conditions are checked once. See this reference for example.
I say that when...else is also a sequential assignment, because the conditions are checked line by line. In other words, I see no need to claim that if...else within a process is equivalent to when...else. Why do they assume when...else is a concurrent assignment?

What you are hinting at in your problem has nothing to do with concurrent assignments or sequential statements. It has more to do with the difference between if and case. Before we get to that, first let's understand a few equivalences. The concurrent conditional assignment:
Y <= A when ASel = '1' else
     B when BSel = '1' else
     C;
Is exactly equivalent to a process with the following code:
process(A, ASel, B, BSel, C)
begin
  if ASel = '1' then
    Y <= A;
  elsif BSel = '1' then
    Y <= B;
  else
    Y <= C;
  end if;
end process;
Likewise the concurrent selected assignment:
with MuxSel select
  Y <= A when "00",
       B when "01",
       C when others;
Is equivalent to a process with the following:
process(MuxSel, A, B, C)
begin
  case MuxSel is
    when "00"   => Y <= A;
    when "01"   => Y <= B;
    when others => Y <= C;
  end case;
end process;
From a coding perspective, the sequential forms above have a little more capability than the assignment forms, because case and if allow blocks of code, whereas an assignment form assigns to only one signal. Other than that, however, they have the same language restrictions and produce the same hardware (to the extent that synthesis tools do that). In addition, for many simple hardware problems, the assignment form works well and is a concise capture of the problem.
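For instance, here is a minimal sketch (my illustration, with a made-up Valid signal) of one if branch driving two signals at once, something a single conditional assignment cannot do:
process(ASel, A, B)
begin
  if ASel = '1' then
    Y     <= A;
    Valid <= '1';  -- a second signal driven from the same branch
  else
    Y     <= B;
    Valid <= '0';
  end if;
end process;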
So where your thoughts are leading really comes down to the difference between if and case. If statements (and their equivalent conditional assignments) that have multiple "elsif" branches (explicit or implied) tend to create priority logic, or at least cascaded logic, whereas case statements (and their equivalent selected assignments) tend to be well suited for things like multiplexers, and their logic structure tends to be more of a balanced tree.
Sometimes tools will refactor an if statement to make it equivalent to a case statement. Also, for some targets (particularly LUT-based logic like Xilinx and Altera), the difference in hardware efficiency does not show up until there are enough "elsif" branches.
With VHDL-2008, the assignment forms are also allowed in sequential code. The transformation is the same except without the process wrapper.
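For example, a minimal sketch of the first equivalence above rewritten with a VHDL-2008 conditional assignment used as a sequential statement (same signal names as before):
process(A, ASel, B, BSel, C)
begin
  -- VHDL-2008 allows the conditional assignment form inside a process
  Y <= A when ASel = '1' else
       B when BSel = '1' else
       C;
end process;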

Concurrent vs Sequential is about independence of execution.
A concurrent statement is simply a statement that is evaluated and/or executed independently of the code that surrounds it. Processes are concurrent. Component/Entity Instances are concurrent. Signal assignments and procedure calls that are done in the architecture are concurrent.
Sequential statements (other than wait) run when the code around them also runs.
Interesting note: while a process is concurrent (because it runs independently of other processes and concurrent assignments), it contains sequential statements.
Often when we write RTL code, the processes we write are simple enough that it is hard to see their sequential nature. It really takes a state machine or a testbench to see the true sequential nature of a process.
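As a minimal sketch of that sequential nature (my example, not the author's): inside a process, statements execute in order and signal updates are deferred until the process suspends, so a later assignment overrides an earlier one:
process(Sel, A, B)
begin
  Y <= A;            -- executed first: a default assignment
  if Sel = '1' then
    Y <= B;          -- executed second; the last assignment before the
  end if;            -- process suspends is the one that takes effect
end process;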

Related

Why does not-changing the while's variable condition not result in a loop?

Updated, see below!
I have heard and read that C++0x allows a compiler to print "Hello" for the following snippet:
#include <iostream>

int main() {
    while (1)
        ;
    std::cout << "Hello" << std::endl;
}
It apparently has something to do with threads and optimization capabilities. It seems to me, though, that this could surprise many people.
Does someone have a good explanation of why this was necessary to allow? For reference, the most recent C++0x draft says at 6.5/5:
A loop that, outside of the for-init-statement in the case of a for statement,
makes no calls to library I/O functions, and
does not access or modify volatile objects, and
performs no synchronization operations (1.10) or atomic operations (Clause 29)
may be assumed by the implementation to terminate. [ Note: This is intended to allow compiler transformations, such as removal of empty loops, even when termination cannot be proven. — end note ]
Edit:
This insightful article says about that Standard's text:
Unfortunately, the words "undefined behavior" are not used. However, anytime the standard says "the compiler may assume P," it is implied that a program which has the property not-P has undefined semantics.
Is that correct, and is the compiler allowed to print "Bye" for the above program?
There is an even more insightful thread here, which is about an analogous change to C, started by the guy who wrote the above-linked article. Among other useful facts, they present a solution that seems to also apply to C++0x (Update: this won't work anymore with n3225 - see below!):
endless:
goto endless;
A compiler is not allowed to optimize that away, it seems, because it's not a loop but a jump. Another guy summarizes the proposed change in C++0x and C201X:
By writing a loop, the programmer is asserting either that the
loop does something with visible behavior (performs I/O, accesses
volatile objects, or performs synchronization or atomic operations),
or that it eventually terminates. If I violate that assumption
by writing an infinite loop with no side effects, I am lying to the
compiler, and my program's behavior is undefined. (If I'm lucky,
the compiler might warn me about it.) The language doesn't provide
(no longer provides?) a way to express an infinite loop without
visible behavior.
Update on 3.1.2011 with n3225: the committee moved the text to 1.10/24, which says:
The implementation may assume that any thread will eventually do one of the following:
terminate,
make a call to a library I/O function,
access or modify a volatile object, or
perform a synchronization operation or an atomic operation.
The goto trick will not work anymore!
To me, the relevant justification is:
This is intended to allow compiler transformations, such as removal of empty loops, even when termination cannot be proven.
Presumably, this is because proving termination mechanically is difficult, and the inability to prove termination hampers compilers which could otherwise make useful transformations, such as moving nondependent operations from before the loop to after or vice versa, performing post-loop operations in one thread while the loop executes in another, and so on. Without these transformations, a loop might block all other threads while they wait for the one thread to finish said loop. (I use "thread" loosely to mean any form of parallel processing, including separate VLIW instruction streams.)
EDIT: Dumb example:
while (complicated_condition()) {
    x = complicated_but_externally_invisible_operation(x);
}
complex_io_operation();
cout << "Results:" << endl;
cout << x << endl;
Here, it would be faster for one thread to do the complex_io_operation while the other is doing all the complex calculations in the loop. But without the clause you have quoted, the compiler has to prove two things before it can make the optimisation: 1) that complex_io_operation() doesn't depend on the results of the loop, and 2) that the loop will terminate. Proving 1) is pretty easy, proving 2) is the halting problem. With the clause, it may assume the loop terminates and get a parallelisation win.
I also imagine that the designers considered that the cases where infinite loops occur in production code are very rare and are usually things like event-driven loops which access I/O in some manner. As a result, they have pessimised the rare case (infinite loops) in favour of optimising the more common case (noninfinite, but difficult to mechanically prove noninfinite, loops).
It does, however, mean that infinite loops used in learning examples will suffer as a result, and will raise gotchas in beginner code. I can't say this is entirely a good thing.
EDIT: with respect to the insightful article you now link, I would say that "the compiler may assume X about the program" is logically equivalent to "if the program doesn't satisfy X, the behaviour is undefined". We can show this as follows: suppose there exists a program which does not satisfy property X. Where would the behaviour of this program be defined? The Standard only defines behaviour assuming property X is true. Although the Standard does not explicitly declare the behaviour undefined, it has declared it undefined by omission.
Consider a similar argument: "the compiler may assume a variable x is only assigned to at most once between sequence points" is equivalent to "assigning to x more than once between sequence points is undefined".
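As a concrete illustration of that analogy (my example, using the classic pre-C++11 sequence-point rules):
int main() {
    int i = 0;
    i = i++ + 1;  // i is modified twice between sequence points; the compiler
                  // "may assume" this never happens, so the behaviour is undefined
    return i;     // any result (or none) is a conforming outcome
}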
Does someone have a good explanation of why this was necessary to allow?
Yes, Hans Boehm provides a rationale for this in N1528: Why undefined behavior for infinite loops?. Although this is a WG14 document, the rationale applies to C++ as well, and the document refers to both WG14 and WG21:
As N1509 correctly points out, the current draft essentially gives
undefined behavior to infinite loops in 6.8.5p6. A major issue for
doing so is that it allows code to move across a potentially
non-terminating loop. For example, assume we have the following loops,
where count and count2 are global variables (or have had their address
taken), and p is a local variable, whose address has not been taken:
for (p = q; p != 0; p = p->next) {
    ++count;
}

for (p = q; p != 0; p = p->next) {
    ++count2;
}
Could these two loops be merged and replaced by the following loop?
for (p = q; p != 0; p = p->next) {
    ++count;
    ++count2;
}
Without the special dispensation in 6.8.5p6 for infinite loops, this
would be disallowed: If the first loop doesn't terminate because q
points to a circular list, the original never writes to count2. Thus
it could be run in parallel with another thread that accesses or
updates count2. This is no longer safe with the transformed version
which does access count2 in spite of the infinite loop. Thus the
transformation potentially introduces a data race.
In cases like this, it is very unlikely that a compiler would be able
to prove loop termination; it would have to understand that q points
to an acyclic list, which I believe is beyond the ability of most
mainstream compilers, and often impossible without whole program
information.
The restrictions imposed by non-terminating loops are a restriction on
the optimization of terminating loops for which the compiler cannot
prove termination, as well as on the optimization of actually
non-terminating loops. The former are much more common than the
latter, and often more interesting to optimize.
There are clearly also for-loops with an integer loop variable in
which it would be difficult for a compiler to prove termination, and
it would thus be difficult for the compiler to restructure loops
without 6.8.5p6. Even something like
for (i = 1; i != 15; i += 2)
or
for (i = 1; i <= 10; i += j)
seems nontrivial to handle. (In the former case, some basic number
theory is required to prove termination, in the latter case, we need
to know something about the possible values of j to do so. Wrap-around
for unsigned integers may complicate some of this reasoning further.)
This issue seems to apply to almost all loop restructuring
transformations, including compiler parallelization and
cache-optimization transformations, both of which are likely to gain
in importance, and are already often important for numerical code.
This appears likely to turn into a substantial cost for the benefit of
being able to write infinite loops in the most natural way possible,
especially since most of us rarely write intentionally infinite loops.
The one major difference with C is that C11 provides an exception for controlling expressions that are constant expressions; this differs from C++ and makes your specific example well-defined in C11.
I think the correct interpretation is the one from your edit: empty infinite loops are undefined behavior.
I wouldn't say it's particularly intuitive behavior, but this interpretation makes more sense than the alternative one, that the compiler is arbitrarily allowed to ignore infinite loops without invoking UB.
If infinite loops are UB, it just means that non-terminating programs aren't considered meaningful: according to C++0x, they have no semantics.
That does make a certain amount of sense too. They are a special case, where a number of side effects just no longer occur (for example, nothing is ever returned from main), and a number of compiler optimizations are hampered by having to preserve infinite loops. For example, moving computations across the loop is perfectly valid if the loop has no side effects, because eventually, the computation will be performed in any case.
But if the loop never terminates, we can't safely rearrange code across it, because we might just be changing which operations actually get executed before the program hangs. Unless we treat a hanging program as UB, that is.
The relevant issue is that the compiler is allowed to reorder code whose side effects do not conflict. The surprising order of execution could occur even if the compiler produced non-terminating machine code for the infinite loop.
I believe this is the right approach. The language spec defines ways to enforce order of execution. If you want an infinite loop that cannot be reordered around, write this:
volatile int dummy_side_effect;

while (1) {
    dummy_side_effect = 0;
}
printf("Never prints.\n");
I think this is along the lines of this type of question, which references another thread. Optimization can occasionally remove empty loops.
I think the issue could perhaps best be stated as: "If a later piece of code does not depend on an earlier piece of code, and the earlier piece of code has no side-effects on any other part of the system, the compiler's output may execute the later piece of code before, after, or intermixed with, the execution of the former, even if the former contains loops, without regard for when or whether the former code would actually complete." For example, the compiler could rewrite:
void testfermat(int n)
{
    int a = 1, b = 1, c = 1;
    while (pow(a, n) + pow(b, n) != pow(c, n))
    {
        if (b > a) a++;
        else if (c > b) { a = 1; b++; }
        else { a = 1; b = 1; c++; }
    }
    printf("The result is ");
    printf("%d/%d/%d", a, b, c);
}
as
void testfermat(int n)
{
    int a = 1, b = 1, c = 1;
    if (fork_is_first_thread())
    {
        while (pow(a, n) + pow(b, n) != pow(c, n))
        {
            if (b > a) a++;
            else if (c > b) { a = 1; b++; }
            else { a = 1; b = 1; c++; }
        }
        signal_other_thread_and_die();
    }
    else // Second thread
    {
        printf("The result is ");
        wait_for_other_thread();
    }
    printf("%d/%d/%d", a, b, c);
}
Generally not unreasonable, though I might worry that:
int i, total = 0;
for (i = 0; num_reps > i; i++)
{
    update_progress_bar(i);
    total += do_something_slow_with_no_side_effects(i);
}
show_result(total);
would become
int i, total = 0;
if (fork_is_first_thread())
{
    for (i = 0; num_reps > i; i++)
        total += do_something_slow_with_no_side_effects(i);
    signal_other_thread_and_die();
}
else
{
    for (i = 0; num_reps > i; i++)
        update_progress_bar(i);
    wait_for_other_thread();
}
show_result(total);
By having one CPU handle the calculations and another handle the progress bar updates, the rewrite would improve efficiency. Unfortunately, it would make the progress bar updates rather less useful than they should be.
Does someone have a good explanation of why this was necessary to allow?
I have an explanation of why it is necessary not to allow, assuming that C++ still has the ambition to be a general-purpose language suitable for performance-critical, low-level programming.
This used to be a working, strictly conforming freestanding C++ program:
int main()
{
    setup_interrupts();
    for (;;)
    {}
}
The above is a perfectly fine and common way to write main() in many low-end microcontroller systems. The whole program execution is done inside interrupt service routines (or in case of RTOS, it could be processes). And main() may absolutely not be allowed to return since these are bare metal systems and there is nobody to return to.
Typical real-world examples of where the above design can be used are PWM/motor driver microcontrollers, lighting control applications, simple regulator systems, sensor applications, simple household electronics and so on.
Changes to C++ have unfortunately made the language impossible to use for this kind of embedded systems programming. Existing real-world applications like the ones above will break in dangerous ways if ported to newer C++ compilers.
C++20 6.9.2.3 Forward progress [intro.progress]
The implementation may assume that any thread will eventually do one of the following:
(1.1) — terminate,
(1.2) — make a call to a library I/O function,
(1.3) — perform an access through a volatile glvalue, or
(1.4) — perform a synchronization operation or an atomic operation.
Let's address each bullet for the above, previously strictly conforming freestanding C++ program:
1.1. As any embedded systems beginner can tell the committee, bare metal/RTOS microcontroller systems never terminate. Therefore the loop cannot terminate either, or main() would turn into runaway code, which would be a severe and dangerous error condition.
1.2 N/A.
1.3 Not necessarily. One may argue that the for(;;) loop is the proper place to feed the watchdog, which in turn is a side effect as it writes to a volatile register. But there may be implementation reasons why you don't want to use a watchdog. At any rate, it is not C++'s business to dictate that a watchdog should be used.
1.4 N/A.
Because of this, C++ is made even more unsuitable for embedded systems applications than it already was before.
Another problem here is that the standard speaks of "threads", which are higher-level concepts. On real-world computers, threads are implemented through a low-level concept known as interrupts. Interrupts are similar to threads in terms of race conditions and concurrent execution, but they are not threads. On low-level systems there is just one core, and therefore only one interrupt at a time is typically executed (kind of like how threads used to work on a single-core PC back in the day).
And you can't have threads if you can't have interrupts. So threads would have to be implemented by a lower-level language more suitable for embedded systems than C++, the options being assembler or C.
By writing a loop, the programmer is asserting either that the loop does something with visible behavior (performs I/O, accesses volatile objects, or performs synchronization or atomic operations), or that it eventually terminates.
This is completely misguided and clearly written by someone who has never worked with microcontroller programming.
So what should those few remaining C++ embedded programmers who refuse to port their code to C "because of reasons" do? You have to introduce a side effect inside the for(;;) loop:
int main()
{
    setup_interrupts();
    for (volatile int i = 0; i == 0;)
    {}
}
Or, if you have the watchdog enabled as you ought to, feed it inside the for(;;) loop.
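A minimal sketch of the watchdog variant; the register name, address, and feed value below are hypothetical placeholders for whatever your MCU's datasheet specifies:
#include <stdint.h>

void setup_interrupts(void);

/* Hypothetical memory-mapped watchdog feed register: substitute the real
   address and magic value from your MCU's datasheet. */
#define WDT_FEED (*(volatile uint32_t *)0x40000000u)

int main(void)
{
    setup_interrupts();
    for (;;)
    {
        WDT_FEED = 0xA5u; /* volatile write: an observable side effect, so the
                             loop may not be assumed to terminate */
    }
}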
In non-trivial cases, it is not decidable for the compiler whether a loop is infinite at all.
In some cases, the optimiser can reach a better complexity class for your code (e.g. it was O(n^2) and you get O(n) or O(1) after optimisation).
So a rule in the C++ standard that disallows removing a possibly infinite loop would make many optimisations impossible, and most people don't want that. I think this answers your question.
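As an illustration of that complexity-class point (my example, not from the original answer): optimizers commonly collapse this O(n) loop into a closed form, and the forward-progress assumption is part of what licenses such rewrites, because the loop does not terminate for every input:
#include <limits.h>

unsigned sum_to(unsigned n)
{
    unsigned total = 0;
    for (unsigned i = 1; i <= n; ++i)  /* never terminates if n == UINT_MAX,
                                          since unsigned i wraps back to 0 */
        total += i;
    return total;  /* optimizers commonly rewrite the loop as n * (n + 1) / 2 */
}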
Another thing: I have never seen any valid example where you need an infinite loop which does nothing.
The one example I have heard of was an ugly hack that really should be solved otherwise: it was about embedded systems where the only way to trigger a reset was to freeze the device so that the watchdog restarts it automatically.
If you know any valid/good example where you need an infinite loop which does nothing, please tell me.
I think it's worth pointing out that loops which would be infinite except for the fact that they interact with other threads via non-volatile, non-synchronised variables can now yield incorrect behaviour with a new compiler.
In other words, make your globals volatile, as well as arguments passed into such a loop via pointer/reference.

How can we eliminate break statements in C/C++ programs to support compiler auto-vectorization?

There are some rules to be followed while programming that make it easy for the compiler to auto-vectorize.
Some of the rules include:
No 'break' statements (the loop should have a single flow of control)
No 'if' statements (only masked assignment allowed), and many others
It would be great to know what steps can be taken in order to replace these conditional break statements.
I would like to give a simple example to make it easy to understand, but a real program can contain many 'break' statements, usually depending on complex conditions and status flags.
Consider a simple program working on a 2D array:
int main()
{
    int i, j, sum = 0;
    int array[3][5] = {{1,3,6,8,9},{23,65,77,54,5},{2,5,-7,-89,-8}};
    for (i = 0; i < 3; i++)
    {
        for (j = 0; j < 5; j++)
        {
            sum += array[i][j];
            if (i == 2 && j == 2)
            {
                break; // breaks out of loop j
            }
        }
    }
    return 0;
}
It would be great to know what steps can be taken in order to replace these conditional break statements.
Generally speaking, you don't.
A break statement within a loop exists to terminate the loop, typically based on a condition other than the loop's normal termination condition. Loops usually have a termination condition built into them, as well as a specific syntactic place where that condition is found. As such, if the code in question is decently written, a break statement will only appear in cases where the additional termination condition cannot reasonably be added to the loop condition.
So if we're talking about code that is reasonable (and if we're not, then vectorization is the least of your worries), then rewriting it to fold the break condition into the loop condition is either straight-up impossible or will lead to code that is exceedingly difficult to follow/maintain.
So in general, if break is being used because it's necessary, then it isn't going to be possible/reasonable to fold it into the loop condition. And even if it is, a complex loop termination condition can kill auto-vectorization just as effectively as a break statement.
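For illustration only (my sketch, not from the original answer): on the toy example above, the break can be folded into the loop conditions with a flag. The result is equivalent for that particular loop, but note that the more complex condition may itself defeat the vectorizer:
int i, j, sum = 0, done = 0;
for (i = 0; i < 3 && !done; i++)
{
    for (j = 0; j < 5 && !done; j++)
    {
        sum += array[i][j];
        done = (i == 2 && j == 2); /* replaces the conditional break */
    }
}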
Most code is not going to be amenable to vectorization. Your trivial example is non-vectorizable because it contains an OS system call as part of its operation. Vectorization typically applies to looping over sequences of values and performing fairly simple operations on them.

How are statements executed concurrently in combinational logic using VHDL?

I wonder how signal assignment statements are executed concurrently in combinational logic in VHDL. In the following code, for example, the three statements are supposed to run concurrently. What I doubt is how the 'y' output signal changes immediately when I run the simulation, although if the statements ran concurrently, 'y' would not see the effect of 'wire1' and 'wire2' (unless the statements were executed more than once).
entity test1 is
  port (a, b, c, d : in bit; y : out bit);
end entity test1;
------------------------------------------------------
architecture basic of test1 is
  signal wire1, wire2 : bit;
begin
  wire1 <= a and b;
  wire2 <= c and d;
  y <= wire1 and wire2;
end architecture basic;
Since VHDL is used for simulating digital circuits, this must work similarly to the actual circuits, where (after a small delay usually ignored in simulations) circuits continuously follow their inputs.
I assume you wonder how the implementation achieves this behaviour:
The simulator will keep track of which signal depends on which other signal, and it reevaluates each expression whenever one of its inputs changes.
So when a changes, wire1 will be updated, and in turn trigger an update to y. This will continue as long as combinatorial updates are necessary. So in the simulation the updates are indeed well ordered, although no simulation time has passed. The "time" between such updates is often called a "delta cycle".

implementing a flip-flop with concurrent statement

It is stated in VHDL programming that for combinational circuits, concurrent statements are used while for sequential circuits, both concurrent and sequential statements are applicable. Now the question is:
What will happen if I write sequential code in a concurrent form? For example, I don't use a process and write a flip-flop with when...else:
architecture x of y is
begin
  q <= '0' when rst=1 else
       d when (clock'event and clock='1') else
       q;
end;
Is that a correct and synthesizable code? If it is incorrect code, what is wrong with it exactly (apart from syntax errors)?
You say: "It is stated in VHDL programming that for combinational circuits, concurrent statements are used while for sequential circuits, both concurrent and sequential statements are applicable." That is simply not true. You can model both combinational and sequential logic using either concurrent or sequential statements.
It is unusual to model sequential logic using concurrent statements. (I say that because I see a lot of other people's code in my job and I almost never see it). However, it is possible. Your code does have a syntax error and a more fundamental error. This modified version of your code synthesises to a rising-edge triggered flip-flop with an asynchronous, active-high reset, as you expected:
q <= '0' when rst = '1' else
     d when clock'event and clock = '1';
The syntax error was that you had rst=1 instead of rst='1'. The more fundamental error was the else q, which is unnecessary because signals in VHDL retain their previously assigned value until a new value is assigned. Therefore, in VHDL code modelling sequential logic, it is never necessary to write q <= q (or its equivalent). In your case, in the MCVE I constructed, q is an output, and so your else q gave an error, because you cannot read outputs.
Here's the MCVE:
library IEEE;
use IEEE.std_logic_1164.all;

entity concurrent_flop is
  port (clock, rst, d : in  std_logic;
        q             : out std_logic);
end entity concurrent_flop;

architecture concurrent_flop of concurrent_flop is
begin
  q <= '0' when rst = '1' else
       d when clock'event and clock = '1';
end architecture concurrent_flop;
I wrote an MCVE to check what I was about to say was correct. You could have done the same. Doing so is a great way of learning VHDL. EDA Playground is often a good place to try things out (shameless plug), but was no good in this case, because one cannot synthesise VHDL on EDA Playground.

Please clarify the concept of sequential and concurrent execution in VHDL

I got familiar with a little bit of Verilog at school and now, one year later, I bought a Basys 3 FPGA board. My goal is to learn VHDL.
I have been reading a free book called "Free Range VHDL" which assists greatly in understanding the VHDL language. I have also searched through github repos containing VHDL code for reference.
My biggest concern is the difference between sequential and concurrent execution. I understand the meaning of these two words, but I still cannot imagine why we can use a process for combinational logic (e.g. a seven-segment decoder). I have implemented my seven-segment decoder as a conditional assignment of concurrent statements. What would be the difference if I implemented the decoder using a process and a switch statement? I do not understand sequential execution of a process when it comes to combinational logic. I would understand it if it were a sequential machine, a state machine.
Can somebody please explain this concept?
Here is my code for a seven-segment decoder:
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

entity hex_display_decoder is
  Port ( D       : in  STD_LOGIC_VECTOR (3 downto 0);
         SEG_OUT : out STD_LOGIC_VECTOR (6 downto 0));
end hex_display_decoder;

architecture dataflow of hex_display_decoder is
begin
  with D select
    SEG_OUT <= "1000000" when "0000",
               "1111001" when "0001",
               "0100100" when "0010",
               "0110000" when "0011",
               "0011001" when "0100",
               "0010010" when "0101",
               "0000010" when "0110",
               "1111000" when "0111",
               "0000000" when "1000",
               "0010000" when "1001",
               "0001000" when "1010",
               "0000011" when "1011",
               "1000110" when "1100",
               "0100001" when "1101",
               "0000110" when "1110",
               "0001110" when "1111",
               "1111111" when others;
end dataflow;
Thank you,
Jake Hladik
My biggest concern is difference between sequential and concurrent
execution. I understand the meaning of these two words but I still
cannot imagine why we can use "process" for combinational logic (ex.
seven segment decoder).
You are conflating two things:
The type of logic, which can be sequential or combinational.
The order of execution of statements, which can be sequential or concurrent.
Types of logic
In logic design:
A combinational circuit is one that implements a pure logic function without any state. There is no need for a clock in a combinational circuit.
A sequential circuit is one that changes every clock cycle and that remembers its state (using flip-flops) between clock cycles.
The following VHDL process is combinational:
process(x, y) begin
  z <= x or y;
end process;
We know it is combinational because:
It does not have a clock.
All its inputs are in its sensitivity list (the parentheses after the process keyword). That means a change to any one of these inputs will cause the process to be re-evaluated.
The following VHDL process is sequential:
process(clk) begin
  if rising_edge(clk) then
    if rst = '1' then
      z <= '0';
    else
      z <= z xor y;
    end if;
  end if;
end process;
We know it is sequential because:
It is only sensitive to changes on its clock (clk).
Its output only changes value on a rising edge of the clock.
The output value of z depends on its previous value (z is on both sides of the assignment).
Model of Execution
To make a long story short, processes are executed as follows in VHDL:
Statements within a process are executed sequentially (i.e. one after the other, in order).
Processes run concurrently relative to one another.
Processes in Disguise
So-called concurrent statements, essentially all statements outside a process, are actually processes in disguise. For example, this concurrent signal assignment (i.e. an assignment to a signal outside a process):
z <= x or y;
Is equivalent to this process:
process(x, y) begin
  z <= x or y;
end process;
That is, it is equivalent to the same assignment within a process that has all of its inputs in the sensitivity list. And by equivalent, I mean the VHDL standard (IEEE 1076) actually defines the behaviour of concurrent signal assignments by their equivalent process.
What that means is that, even though you didn't know it, this statement of yours in hex_display_decoder:
SEG_OUT <= "1000000" when "0000",
           "1111001" when "0001",
           "0100100" when "0010",
           "0110000" when "0011",
           "0011001" when "0100",
           "0010010" when "0101",
           "0000010" when "0110",
           "1111000" when "0111",
           "0000000" when "1000",
           "0010000" when "1001",
           "0001000" when "1010",
           "0000011" when "1011",
           "1000110" when "1100",
           "0100001" when "1101",
           "0000110" when "1110",
           "0001110" when "1111",
           "1111111" when others;
is already a process.
Which, in turn, means
What would be the difference if I implemented the decoder using
process and a switch statement?
None at all.