Send interrupt to CPU as keyboard does? - c++

Is it possible to simulate hardware interrupts somehow from a user program?
I've seen this question posted many times, but never answered.
I want to know about low-level interrupts (for example, simulating the situation where a key is pressed on the keyboard, so that the keyboard driver's interrupt handler would run).
High-level events and APIs are outside the scope here, and the question is theoretical rather than practical (to prevent "why" discussions :)

Yes and no.
On an x86 CPU (for one example) there's an int instruction that generates an interrupt. Once the interrupt is generated, the CPU won't necessarily¹ distinguish between an interrupt generated by hardware and one generated by software. For one example, in the original PC BIOS, IBM chose an interrupt that would cause the print-screen routine to execute. The interrupt they chose (interrupt 5) was one that wasn't then in use, but which Intel had said was reserved for future use. Intel eventually did put that interrupt to use -- starting with the 80186/286 they added a bound instruction that checks that a value is within bounds, and generates interrupt 5 if it's not. The bound instruction is essentially never used, though. And because it raises interrupt 5 when a value is out of bounds, this means (if you're running something like MS-DOS that allows it) that executing the bound instruction with an out-of-bounds value will print the screen.
On a modern OS, however, this won't generally be allowed. All generation and handling of interrupts happens in the kernel. The hardware has four levels of protection ("rings") and supports specifying, per interrupt, the least-privileged ring that's allowed to raise it with the int instruction. If you try to execute an int you're not permitted to use from code running at ring 3, it won't execute directly -- instead, execution will switch to the OS kernel, which can treat it as it chooses.
This allows (for example) Windows to emulate MS-DOS, so MS-DOS programs (which do use the int instruction) can execute in a virtual machine, with virtualized input and output, so even though they "think" they're working directly with the keyboard and screen hardware, they're actually using emulations of them provided by software.
For "native" programs, however, using most int instructions (i.e. any but a tiny number of interrupts intended for communication with the kernel) will simply result in the program being shut down.
So, bottom line: yes, the hardware supports it -- but the hardware also supports prohibiting it, and nearly every modern OS does exactly that, at least for most code outside the OS kernel itself.
¹ Though, with typical hardware, the interrupt handler can read data from the programmable interrupt controller (PIC) chip that will tell it whether the interrupt came through the PIC (i.e., a hardware interrupt) or not (a software interrupt). Most hardware also supports at least a few interrupts that can be generated only by hardware, such as the NMI on the x86. These are usually reserved for fairly narrow uses though (e.g., the NMI on a PC is normally used for things like memory parity errors).
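As a rough illustration of that PIC check, a handler on legacy-PIC hardware can issue the "read in-service register" command and test its own IRQ bit; this is a hypothetical freestanding kernel-mode sketch (the port-I/O helpers and the handler hookup are assumptions, not part of any particular kernel):

    // Hypothetical kernel-mode sketch: deciding whether an interrupt on a
    // PIC-routed vector really came through the PIC hardware or was raised
    // with a software "int n". Assumes freestanding x86 code.
    #include <cstdint>

    static inline void outb(uint16_t port, uint8_t val) {
        asm volatile("outb %0, %1" : : "a"(val), "Nd"(port));
    }
    static inline uint8_t inb(uint16_t port) {
        uint8_t val;
        asm volatile("inb %1, %0" : "=a"(val) : "Nd"(port));
        return val;
    }

    // True if IRQ line `irq` (0-7 on the master PIC) is currently in service,
    // i.e. the PIC itself delivered the interrupt we're handling.
    bool irq_in_service(unsigned irq) {
        outb(0x20, 0x0B);            // OCW3: make the next read of port 0x20 return the ISR
        uint8_t isr = inb(0x20);     // in-service register
        return (isr >> irq) & 1;
    }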

Related

How to Read the program counter / Instruction pointer of a specific core in Kernel Mode?

Windows 10, x64, x86
My current knowledge
Let's say it is a quad core: there will be 4 individual program counters pointing to 4 different locations in code for parallel execution.
Each of these program counters indicates where a computer is in its program sequence.
The address it points to changes after a context switch, when another thread's program counter gets loaded into the register to execute.
What I want to do:
I'm in kernel mode, my thread is running on core 1, and I want to read the current instruction pointer of core 2.
Expected Results:
0x203123 is the address of the instruction pointer and this address belongs to this thread and this thread belongs to this process... etc.
Does anyone know how to do it, or can you give me good book references, links, etc.?
Although I don't believe it's officially documented, there is a ZwGetContextThread exported from ntdll.dll. Being undocumented, it's subject to change (and I haven't tried it in quite a while), but at least when I last tried it, you called it with a thread handle and a pointer to a CONTEXT structure, and it would return that thread's context.
I'm not certain exactly how up-to-date that is though. It's never mattered to me, so I haven't checked, but my guess would be that the IP in the CONTEXT you get is whatever was saved the last time the thread was suspended. So, if you want something (reasonably) current, you'd use ZwSuspendThread, get the context, then ZwResumeThread to start it running again.
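For what it's worth, the documented Win32 wrappers (SuspendThread / GetThreadContext / ResumeThread) do essentially the same job, so a sketch of that suspend/read/resume sequence might look like this (the thread handle and its access rights are assumed; error handling is minimal):

    // Sketch of the suspend / read context / resume sequence using the documented
    // Win32 wrappers; the undocumented Zw* calls in ntdll.dll take essentially the
    // same arguments. hThread is assumed to have THREAD_SUSPEND_RESUME and
    // THREAD_GET_CONTEXT access and to belong to a thread on another core.
    #include <windows.h>

    bool SampleInstructionPointer(HANDLE hThread, DWORD64* rip) {
        if (SuspendThread(hThread) == (DWORD)-1)
            return false;

        CONTEXT ctx = {};
        ctx.ContextFlags = CONTEXT_CONTROL;   // control registers: RIP, RSP, flags, segments
        bool ok = GetThreadContext(hThread, &ctx) != 0;
        if (ok)
            *rip = ctx.Rip;                   // x64 build; use ctx.Eip for 32-bit

        ResumeThread(hThread);
        return ok;
    }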
Here I suppose I'm probably supposed to give the standard lines about undocumented functions being subject to change, using them being a bad idea, and that you should generally leave all of this alone. Ah well, I've been disappointing teachers and other authority figures for years, and I guess I'm not changing right now.
On the other hand, there may be a practical problem here. If you really need data that's really current, this probably isn't going to work very well for you. What it gives you will be kind of current at best. Then again, "really current" is almost a meaningless concept with information that goes out of date every clock cycle.
Does anyone know how to do it, or can you give me good book references, links, etc.?
For 80x86 hardware (regardless of operating system), there are only 3 ways to do this (that I know of):
a) send an inter-processor interrupt to the other CPU, and have an interrupt handler that stores the "return EIP" (from its stack) at a known address in memory so that your CPU can read the "value of EIP immediately before the interrupt" (with synchronization so that your CPU doesn't read before the value is written, etc.); a conceptual sketch of this appears after option c) below.
b) put the other CPU into some kind of "debug mode" (single-stepping, last branch recording, ...) so that (either code in a debug exception handler or the CPU's hardware itself) is constantly writing EIP values to memory that you can read.
Of course both of these options will ruin performance, and the value you get will probably be useless (because EIP would've changed after you obtain it but before you can use the obtained value). To ensure the value is still useful; you'd need the other CPU to wait until after you've consumed the obtained value (and are ready for the next value); and to do that you'd have to resort to single-step debugging facilities (with the waiting in the debug exception handler), where you'll be lucky if you can get performance better than a thousand times slower (and can probably improve performance by simply disabling other CPUs completely).
Also note that they still won't accurately tell you EIP in all cases (e.g. if the CPU is in SMM/System Management Mode and is beyond the control of the OS); and I doubt the Windows kernel supports any of it (e.g. the kernel does support single-stepping of user-space processes/threads so that debuggers work, but won't support single-stepping of the kernel itself, and will probably lock up the computer due to various "waiting for a lock to be released for 6 days" problems).
The last of the 3 options is:
c) Run the OS inside an emulator/simulator instead of running it on real hardware. In that case you can probably modify the emulator/simulator's code to inject EIP values somewhere (maybe some kind of virtual "EIP reporting device"?). This will ruin performance of the emulator/simulator, but you may be able to hide that (e.g. "virtual time inside the emulator passes at a rate of one second per 1000 seconds of real time outside the emulator").
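For option a), the shape of the idea (not tied to any real kernel API; the interrupt-frame layout and how the handler gets registered for the IPI vector are assumptions) is roughly:

    // Conceptual sketch of option a) only. Assumes an x86-64 interrupt handler
    // that is handed a pointer to the hardware-pushed frame, and some
    // kernel-specific way to install it on the IPI vector and to send the IPI
    // (e.g. a write to the local APIC's ICR).
    #include <atomic>
    #include <cstdint>

    struct InterruptFrame {
        uint64_t rip;        // instruction pointer of the interrupted code
        uint64_t cs;
        uint64_t rflags;
        uint64_t rsp;
        uint64_t ss;
    };

    std::atomic<uint64_t> g_sampled_rip{0};
    std::atomic<bool>     g_sample_ready{false};

    // Runs on the *target* CPU when the IPI arrives.
    extern "C" void ipi_sample_handler(InterruptFrame* frame) {
        g_sampled_rip.store(frame->rip, std::memory_order_relaxed);
        g_sample_ready.store(true, std::memory_order_release);   // publish to the requester
    }

    // Runs on the *requesting* CPU after it has sent the IPI.
    uint64_t wait_for_sampled_rip() {
        while (!g_sample_ready.load(std::memory_order_acquire))
            ;                                                    // spin; real code would bound this wait
        g_sample_ready.store(false, std::memory_order_relaxed);
        return g_sampled_rip.load(std::memory_order_relaxed);
    }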

Concurrent interrupts in ARM

I am new to ARM processors. The Atmel ATSAMD20e implements an ARM Cortex-M0+ processor based on the ARMv6-M architecture. It allows up to 32 external interrupts, with the interrupt signals connected to the nested vectored interrupt controller (NVIC). Would it be possible to have concurrent interrupts using the NVIC? If so, how can we determine the maximum number of interrupts that can run concurrently? Could someone please point to any documentation that explains the handling of concurrent interrupts? Thanks.
The maximum number of interrupts that can run "concurrently" is limited by stack space, the number of priority levels, and the number of interrupt sources you have in the system. You say you have 32 interrupts, the M0+ has four programmable priority levels, and I have no idea how much stack you're willing to sacrifice to get this behavior. (And "concurrent" is really a misnomer; they're preempting each other, not running concurrently.)
In practice, however, it really doesn't buy you much to support more than a few priority levels, if even that. You only need more levels if you have an interrupt whose deadline is shorter than the running time of your longest interrupt handler.
See here (http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0337e/Cihcbadd.html) for description of what happens on the stack as interrupts get preempted by other interrupts.
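To illustrate the priority-level point, here is a minimal CMSIS-style sketch; the IRQ names come from the SAMD20 device header and are assumptions here, and any two IRQn values from your header behave the same way:

    // Minimal CMSIS-style sketch: give one interrupt a higher (numerically lower)
    // priority than another so it can preempt it. The Cortex-M0+ implements
    // 2 priority bits, i.e. four programmable levels (0..3).
    #include "samd20.h"      // device header; pulls in the CMSIS core and the IRQn enum

    void configure_interrupt_priorities(void) {
        NVIC_SetPriority(EIC_IRQn,     0);   // external interrupt controller: most urgent
        NVIC_SetPriority(SERCOM0_IRQn, 2);   // serial peripheral: may be preempted by EIC

        NVIC_EnableIRQ(EIC_IRQn);
        NVIC_EnableIRQ(SERCOM0_IRQn);
    }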

Which registers are changed when we move from user mode to kernel mode, and what is the reason to move to kernel mode?

Which registers are changed when we move from user mode to kernel mode, and what is the reason to move to kernel mode?
And why don't these cause a move to kernel mode:
creating a new admin as root (superuser or admin)
getting a TLB miss
writing the modified (dirty) bit in the page tables
From your questions, I can see that you are quite weak in operating system concepts.
OK, let me explain (I am assuming you are using Linux, not Windows).
"Which registers are changed when we move from user mode to kernel mode?"
To answer this question you need to learn about process management.
But simply put, Linux uses the system call interface to change from user space to kernel space. The system call interface uses certain registers (depending on your processor) to pass the system call number and the arguments for the system call.
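For example, on Linux you can cross that boundary explicitly with the raw syscall(2) wrapper; a minimal sketch (getpid chosen only because it's harmless):

    // Minimal sketch: an explicit user-mode to kernel-mode transition.
    // syscall(2) puts the system call number and arguments into the registers
    // the kernel expects (rax, rdi, rsi, ... on x86-64) and executes the trap
    // instruction; execution then continues inside the kernel.
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        long pid = syscall(SYS_getpid);   // same effect as calling getpid()
        std::printf("the kernel says our pid is %ld\n", pid);
    }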
In general, the move to kernel mode happens when
you make an explicit request to the kernel (a system call)
you make an implicit request to the kernel (accessing memory that isn't mapped into your space, whether valid or not)
the kernel decides it needs to do something more important than executing your code (normally as the result of a hardware interrupt).
All registers will be preserved, as it would be rather difficult to write code if your registers could change at random, but how that happens is very CPU specific.
"which registers is changed when we move from user mode to kernel mode ?!"
In a typical x86-based architecture running the Linux kernel, this is what happens:
A software program triggers interrupt 0x80 with the instruction int $0x80.
The CPU changes the program counter register and the code selector to refer to the place where the Linux system call handler lives in memory (Linux applies the virtual memory concept). Up to this point the registers affected are CS, EIP, and the EFLAGS register. The CPU also changes the stack selector (SS) and stack pointer (ESP) to refer to the top of the kernel stack.
Finally, the kernel changes the data selector and extra data selector (DS and ES) to select a kernel-mode data segment.
The kernel pushes the program context onto the kernel's stack, and the general-purpose registers (like the accumulators) will change due to the kernel code being executed.
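To make the register list above concrete, here is a minimal sketch of the int $0x80 path (assuming a 32-bit x86 Linux build, e.g. compiled with -m32; in the 32-bit system call table, number 4 is write):

    // Minimal sketch of the int $0x80 path: system call number in EAX,
    // arguments in EBX/ECX/EDX; executing int $0x80 switches CS/EIP and
    // SS/ESP to the kernel's values as described above. Assumes a 32-bit
    // x86 Linux build (-m32).
    int main() {
        const char msg[] = "hello from int $0x80\n";
        long ret;
        asm volatile("int $0x80"
                     : "=a"(ret)
                     : "a"(4),                  // __NR_write in the 32-bit table
                       "b"(1),                  // fd 1 = stdout
                       "c"(msg),
                       "d"(sizeof(msg) - 1)
                     : "memory");
        (void)ret;                              // number of bytes written (or -errno)
        return 0;
    }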
So as you can see, it all depends on the operating system and the architecture.
"and what is the reason to move to kernel mode ?"
The CPU works in kernel mode by default, so your question should really be "what is the need for user mode?". User mode is necessary because it doesn't give all permissions to the running software. You can run your browser/file manager/shell in user mode without any worries. If full permissions were given to application software, it could access the kernel's data and damage it, and it could also access the hardware directly and, for example, destroy the data stored on your hard disk.
The kernel, of course, must work in kernel mode (at least the core of the kernel). Application software, for example, might need to write data to a file on the disk. Application software doesn't have direct access to the disk (because it is running in user mode); the only way to achieve this is to ask the kernel (which is running in kernel mode) to do the job. That's why you need to move from user mode to kernel mode and vice versa.
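For instance, even a plain "write some bytes to a file" is really a pair of requests to the kernel; a minimal POSIX sketch:

    // Minimal POSIX sketch: a user-mode program cannot touch the disk itself,
    // so open() and write() each trap into the kernel, which drives the disk
    // on the program's behalf.
    #include <fcntl.h>
    #include <unistd.h>

    int main() {
        int fd = open("example.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);  // kernel: open/create the file
        if (fd < 0)
            return 1;
        const char data[] = "written via the kernel\n";
        write(fd, data, sizeof(data) - 1);   // kernel: accept the bytes and queue them for the disk
        close(fd);                           // kernel: release the descriptor
        return 0;
    }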

Use kernel programming function in C++ program

I am a newbie in this area and am writing C++/assembly code to benchmark (measure the execution time of) a section of code in clock cycles. I need to disable preemption and hardware interrupts from my code. I know that Linux kernel development permits the use of the preempt_disable() and raw_local_irq_save(flags) functions to do this.
My question: I am not writing a kernel module, but a normal C/C++ program in user space. Can I use these functions from my C++ code (i.e., from user space, with no kernel module)? If yes, which header files should I include? Can someone please give me reading references or examples?
Thanks!!
You can't do this from a userland application, especially disabling hardware interrupts, which provide the basis for many fundamental kernel functions like timekeeping.
What you can do instead is use sched_setscheduler(2) to set, say, SCHED_FIFO real-time priority, that is, ask the kernel not to preempt your app until it voluntarily releases the CPU (usually via a system call). Be careful though - you can easily lock up your system that way.
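A minimal sketch of that (Linux; needs root or CAP_SYS_NICE, and the lockup warning above applies):

    // Minimal sketch: ask the Linux scheduler for SCHED_FIFO real-time priority
    // so ordinary processes cannot preempt the benchmark. Requires root or
    // CAP_SYS_NICE; a runaway SCHED_FIFO thread can freeze the machine.
    #include <sched.h>
    #include <cstdio>

    bool go_realtime(int priority /* 1..99 */) {
        sched_param param{};
        param.sched_priority = priority;
        if (sched_setscheduler(0 /* this process */, SCHED_FIFO, &param) != 0) {
            std::perror("sched_setscheduler");
            return false;
        }
        return true;
    }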
Usually that is impossible. The kernel will not let you block interrupts.
But assigning yourself a very high priority is usually good enough. Plus, make sure the benchmarked code runs long enough, e.g. by running it 10000 times in a loop. That way, the occasional interrupt doesn't matter much in the overall cycle counting. In my experience, a code run time of 1 second is good enough (provided your system is not under heavy stress) for home-brewed benchmarking.
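A home-brewed harness along those lines might look like this (std::chrono is used for the outer timing; swap in __rdtsc() from <x86intrin.h> if you specifically want cycle counts):

    // Home-brewed benchmarking sketch: run the code under test many times and
    // report the average, so that the odd interrupt is amortized away.
    #include <chrono>
    #include <cstdio>

    volatile long sink = 0;              // keeps the work from being optimized away

    static void code_under_test() {
        long acc = 0;
        for (int i = 0; i < 1000; ++i)
            acc += i * i;
        sink = acc;
    }

    int main() {
        constexpr int kIterations = 10000;
        auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < kIterations; ++i)
            code_under_test();
        auto stop = std::chrono::steady_clock::now();

        auto total_ns = std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start).count();
        std::printf("average: %lld ns per iteration\n",
                    static_cast<long long>(total_ns / kIterations));
    }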

How does a compiled program interact with the OS?

When a program is compiled it is converted to machine code which can be "understood" by the machine. How does this machine code interact with the operating system in order to do things like getting input from the keyboard?
To me, it seems that the machine code should run at a lower level than the operating system and therefore, I can't understand how the OS can act as an intermediary between the compiled application and the hardware.
PS: I just started C++ programming and I am trying to understand how cin and cout work.
This is a very good question (better than you know), and there is a lot to learn. A LOT.
I'll try to keep it short. The operating system acts as a level of abstraction between software and hardware:
      Software
         .
        /|\
         |  communicates with
        \|/
         '
  Operating System
         .
        /|\
         |  communicates with
        \|/
         '
      Hardware
The OS communicates with the hardware through programs called drivers (widely used term), and the OS communicates with software through procedures called system calls (not-so-widely used term).
Essentially, when you make a system call, you are leaving your program and entering code of the operating system. System calls are the only way programmers are allowed to communicate with resources.
Now I would stop there, but you also said:
To me, it seems that the machine code should run at a lower level than the operating system and therefore, I can't understand how the OS can act as an intermediary between the compiled application and the hardware.
This is tricky, but simple once you understand some basics.
First, all code is just machine code running on the CPU. No code is higher or lower than any other code (with the exception of some instructions that can only be run in privileged kernel mode). So the question is, how can the OS possibly be in control even though it relinquishes control of the CPU to the user?
When code is running on a CPU, there is a concept called an interrupt. This is a signal sent to the CPU that causes the currently running code to stop and get switched out with another piece of code, called an interrupt handler.
Examples of interrupts include the keyboard, the mouse, and most importantly, the clock.
The clock interrupt is raised on a regular basis and causes the operating system's clock interrupt handler to run. Within this clock interrupt handler is the operating system's code that examines what code is currently running and determines what code needs to run next. This can be either more operating system code or more user code.
Because the clock is always ticking, and because the operating system always gets this periodic chance to run on the CPU, it is able to orchestrate everything within the computer, even though it runs using the same set of CPU commands as any normal program.
The operating system provides system calls that programs can call to get access to lower level services.
Note that system calls are different from the system() function that you have probably used to execute external programs.
System calls are used to do things like access files, communicate over the network, request heap memory, etc.
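To tie this back to cin and cout: the iostream layer is ordinary user-mode library code that buffers your characters and eventually hands them to the kernel through a system call (write() on POSIX systems; on Windows the underlying call is WriteFile/WriteConsole instead). A minimal sketch of the two levels:

    // The same text printed two ways: through the C++ library (which buffers and
    // eventually issues the write system call itself) and through the system
    // call interface directly. POSIX write() is assumed here.
    #include <iostream>
    #include <unistd.h>

    int main() {
        std::cout << "via iostream (library code that ends in a write syscall)\n";
        std::cout.flush();                              // push the buffered bytes out now

        const char msg[] = "via write() (the system call itself)\n";
        write(STDOUT_FILENO, msg, sizeof(msg) - 1);     // straight request to the kernel
        return 0;
    }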