Compile error with embedded assembler - c++

I don't understand why this code
#include <iostream>
using namespace std;

int main() {
    int result = 0;
    _asm {
        mov eax, 3;
        MUL eax, 3;      // <-- the line the error points at
        mov result, eax;
    }
    cout << result << endl;
    return 0;
}
shows the following error.
1>c:\users\david\documents\visual studio 2010\projects\assembler_instructions\assembler_instructions.cpp(11): error C2414: illegal number of operands
Everything seems fine to me, so why do I get this compiler error?

According to this page, the mul instruction only takes a single argument:
mul arg
This multiplies "arg" by the value of the corresponding width in the A register; see the table below:

operand size                        1 byte    2 bytes    4 bytes
other operand                       AL        AX         EAX
higher part of result stored in:    AH        DX         EDX
lower part of result stored in:     AL        AX         EAX

Thus, following the notes in Justin's link:
#include <iostream>

int main()
{
    int result = 0;
    _asm {
        mov eax, 3;
        mov ebx, 4;
        mul ebx;          // EDX:EAX = EAX * EBX
        mov result, eax;
    }
    std::cout << result << std::endl;
    return 0;
}

Use:
imul eax, 3;
or:
imul eax, eax, 3;
That way you don't need to worry about the edx register being clobbered. imul is "signed integer multiply"; since your result is an int, it shouldn't matter whether you use mul or imul.
Sometimes I've gotten errors from not having the edx register zeroed when dividing or multiplying (CPU: Intel Core 2 Quad Q9550).
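For completeness, here's the asker's program rewritten with the two-operand imul; a minimal sketch of this fix, not code from the question:

#include <iostream>

int main() {
    int result = 0;
    _asm {
        mov eax, 3;
        imul eax, 3;       // eax *= 3; edx is not touched
        mov result, eax;
    }
    std::cout << result << std::endl;   // prints 9
    return 0;
}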
There are numbingly over-engineered but correct Intel instruction reference manuals you can read, though Intel broke its websites a while ago. You could try to find the same reference manuals on AMD's sites instead.
Update: I found the manual: http://www.intel.com/design/pentiumii/manuals/243191.htm
I don't know when they are going to break their sites again, so you always need to search for it.
Update 2: ARGHL! Those are from 1999... well, most details are unfortunately still the same.

You should download the Intel architecture manuals.
http://www.intel.com/products/processor/manuals/
For your purpose, volume 2 is going to help you the most.
As of July 2010, they are current.

Related

How do I speed up a for loop with unrelated data?

A very simple example of passing an array of integers to a for loop is shown below. If those integers are unrelated to each other, how can I make it so that a "for loop" iterates over all of them at the same time?
int waffles[3] = { 0 };
for (int i = 0; i < 3; i++) {
    waffles[i] = i;
}
What I get
clock 1: waffles[0] = 0;
clock 2: waffles[1] = 1;
clock 3: waffles[2] = 2;
What I want
clock 1: waffles[0] = 0, waffles[1] = 1, waffles[2] = 2
This can actually be done using SIMD instructions like the AVX instructions, although it is not trivial to implement. You should first make 100% sure you are bottlenecked by a specific loop and really NEED to improve performance there.
This might help https://stackoverflow.blog/2020/07/08/improving-performance-with-simd-intrinsics-in-three-use-cases/
(I know this is not a full answer, but I can't comment yet and it might help anyway)
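To make the idea concrete, here's a minimal SSE2 sketch (my code, not from the answer; it assumes the array can be padded from 3 to the 4-int width of one 128-bit register, and real code would need to handle arbitrary sizes and alignment):

#include <emmintrin.h>   // SSE2

int main() {
    alignas(16) int waffles[4];              // padded so one vector store fits
    __m128i v = _mm_set_epi32(3, 2, 1, 0);   // lanes hold 0, 1, 2, 3 (lowest lane first)
    _mm_store_si128((__m128i*)waffles, v);   // one store writes all four elements
    return waffles[2];                       // 2
}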
As @François Andrieux's comment points out:
The compiler will very likely unroll that loop to the most efficient form for the targeted platform.
See how this code compiles in Godbolt's Compiler Explorer here.
Clang stores 0 and 1 using a single instruction:
movabs rax, 4294967296
mov qword ptr [rsp + 12], rax
mov dword ptr [rsp + 20], 2
gcc stores 1 and 2 using a single instruction:
mov DWORD PTR [rsp], 0
mov QWORD PTR [rsp+4], rax
A larger array would result in vector instructions that store even more data at once (see here).

Saturate short (int16) in C++

I am optimizing bottleneck code:
int sum = ........
sum = (sum >> _bitShift);
if (sum > 32000)
    sum = 32000;    // if we get an overflow, saturate output
else if (sum < -32000)
    sum = -32000;   // if we get an underflow, saturate output
short result = static_cast<short>(sum);
I would like to write the saturation condition as a single "if condition", or even better with no "if condition" at all, to make this code faster. I don't need saturation at exactly the value 32000; any similar value like 32768 is acceptable.
According to this page, there is a saturation instruction in ARM. Is there anything similar in x86/x64?
I'm not at all convinced that attempting to eliminate the if statement(s) is likely to do any real good. A quick check indicates that given this code:
int clamp(int x) {
    if (x < -32768)
        x = -32768;
    else if (x > 32767)
        x = 32767;
    return x;
}
...both gcc and Clang produce branch-free results like this:
clamp(int):
    cmp     edi, 32767
    mov     eax, 32767
    cmovg   edi, eax
    mov     eax, -32768
    cmp     edi, -32768
    cmovge  eax, edi
    ret
You can do something like x = std::min(std::max(x, -32768), 32767);, but this produces the same sequence, and the source seems less readable, at least to me.
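For what it's worth, C++17's std::clamp expresses the same operation in one call; under the assumption of a current gcc/clang it should compile to the same branch-free cmov sequence (a sketch, not part of the original answer):

#include <algorithm>

int clamp17(int x) {
    return std::clamp(x, -32768, 32767);   // same cmov sequence as the min/max version
}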
You can do considerably better than this if you use Intel's vector instructions, but probably only if you're willing to put a fair amount of work into it--in particular, you'll probably need to operate on an entire (small) vector of values simultaneously to accomplish much this way. If you do go that way, you usually want to take a somewhat different approach to the task than you seem to be taking right now. Right now, you're apparently depending on int being a 32-bit type, so you're doing the arithmetic on a 32-bit type, then afterwards truncating it back down to a (saturated) 16-bit value.
With something like AVX, you'd typically want to use an instruction like _mm256_adds_epi16 to take a vector of 16 values (of 16-bits apiece), and do a saturating addition on all of them at once (or, likewise, _mm256_subs_epi16 to do saturating subtraction).
Since you're writing C++, what I've given above are the names of the compiler intrinsics used in most current compilers (gcc, icc, clang, msvc) for x86 processors. If you're writing assembly language directly, the instructions would be vpaddsw and vpsubsw respectively.
If you can count on a really current processor (one that supports AVX 512 instructions) you can use them instead to operate on a vector of 32 16-bit values simultaneously.
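Here's what that might look like for this exact task with AVX2 intrinsics; this is a sketch under assumptions (the name narrow_saturating is mine, the input length is a multiple of 16, and VPACKSSDW's signed saturation to [-32768, 32767] stands in for the 32000 clamp):

#include <immintrin.h>
#include <cstdint>

// Narrow sixteen 32-bit sums to int16_t with signed saturation.
// VPACKSSDW clamps each 32-bit lane into [-32768, 32767] on the way down.
void narrow_saturating(const int32_t* sums, int16_t* out, int bitShift) {
    __m128i cnt = _mm_cvtsi32_si128(bitShift);   // variable shift count
    __m256i lo = _mm256_sra_epi32(_mm256_loadu_si256((const __m256i*)sums), cnt);
    __m256i hi = _mm256_sra_epi32(_mm256_loadu_si256((const __m256i*)(sums + 8)), cnt);
    __m256i packed = _mm256_packs_epi32(lo, hi);      // saturating 32 -> 16
    packed = _mm256_permute4x64_epi64(packed, 0xD8);  // undo the in-lane interleave
    _mm256_storeu_si256((__m256i*)out, packed);
}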
Are you sure you can beat the compiler at this?
Here's x64 retail with max size optimizations enabled. Visual Studio v15.7.5.
ecx contains the initial value at the start of this block. eax is filled with the saturated value when it is done.
return (x > 32767) ? 32767 : ((x < -32768) ? -32768 : x);

    mov     edx, 0FFFF8000h     ; -32768
    movzx   eax, cx
    cmp     ecx, edx
    cmovl   eax, edx
    mov     edx, 7FFFh          ; 32767
    cmp     ecx, edx
    movzx   eax, ax
    cmovg   eax, edx

why is it faster to print number in binary using arithmetic instead of _bittest

The purpose of the next two code sections is to print a number in binary.
The first one does this with two instructions (_bittest), while the second does it with pure arithmetic instructions, which is three instructions.
The first code section:
#include <intrin.h>
#include <stdio.h>
#include <Windows.h>

long num = 78002;

int main()
{
    unsigned char bits[32];
    long nBit;
    LARGE_INTEGER a, b, f;
    QueryPerformanceCounter(&a);
    for (size_t i = 0; i < 100000000; i++)
    {
        for (nBit = 0; nBit < 31; nBit++)
        {
            bits[nBit] = _bittest(&num, nBit);
        }
    }
    QueryPerformanceCounter(&b);
    QueryPerformanceFrequency(&f);
    printf_s("time is: %f\n", ((float)b.QuadPart - (float)a.QuadPart) / (float)f.QuadPart);
    printf_s("Binary representation:\n");
    while (nBit--)
    {
        if (bits[nBit])
            printf_s("1");
        else
            printf_s("0");
    }
    return 0;
}
The inner loop compiles to the instructions bt and setb.
The second code section:
#include <intrin.h>
#include <stdio.h>
#include <Windows.h>

long num = 78002;

int main()
{
    unsigned char bits[32];
    long nBit;
    LARGE_INTEGER a, b, f;
    QueryPerformanceCounter(&a);
    for (size_t i = 0; i < 100000000; i++)
    {
        long curBit = 1;
        for (nBit = 0; nBit < 31; nBit++)
        {
            bits[nBit] = (num & curBit) >> nBit;
            curBit <<= 1;
        }
    }
    QueryPerformanceCounter(&b);
    QueryPerformanceFrequency(&f);
    printf_s("time is: %f\n", ((float)b.QuadPart - (float)a.QuadPart) / (float)f.QuadPart);
    printf_s("Binary representation:\n");
    while (nBit--)
    {
        if (bits[nBit])
            printf_s("1");
        else
            printf_s("0");
    }
    return 0;
}
The inner loop compiles to and, add (as the left shift), and sar.
The second code section runs three times faster than the first one.
Why do three CPU instructions run faster than two?
Not an answer (Bo gave that), but the second inner-loop version can be simplified a bit:
long numCopy = num;
for (nBit = 0; nBit < 31; nBit++) {
    bits[nBit] = numCopy & 1;   // take the lowest bit
    numCopy >>= 1;              // then shift it out
}
This has a subtle difference (one instruction fewer) with gcc 7.2 targeting 32-bit.
(I'm assuming a 32-bit x86 target: you convert a long into a 32-element bit array, which makes sense only on a 32-bit target, and the code includes <Windows.h>.)
Answer:
Why do three CPU instructions run faster than two?
Because the count of instructions only loosely correlates with performance (usually fewer is better): the modern x86 CPU is a much more complex machine. It translates the actual x86 instructions into micro-ops before execution, transforms those further with out-of-order execution and register renaming (to break false dependency chains), and then executes the resulting micro-ops, with different CPU units capable of executing only certain micro-ops. In the ideal case you may get 2-3 micro-ops executed in parallel by 2-3 units in a single cycle; in the worst case you may be executing a complete microcode loop implementing some complex x86 instruction that takes several cycles to finish, blocking most of the CPU units.
Another factor is the availability of data from memory and the cost of memory writes: a single cache miss, where the data must be fetched from a higher-level cache or even from memory itself, creates a stall of tens to hundreds of cycles. Having compact data structures that favour predictable access patterns and don't exhaust all the cache lines is paramount for exploiting maximum CPU performance.
If you are at the stage of asking "why are 3 instructions faster than 2?", you can pretty much start with any x86 optimization article or book and keep reading for a few months or years; it's quite a complex topic.
You may want to check this answer https://gamedev.stackexchange.com/q/27196 for further reading...
I'm assuming you're using x86-64 MSVC CL19 (or something that makes similar code).
_bittest is slower because MSVC does a horrible job and keeps the value in memory and bt [mem], reg is much slower than bt reg,reg. This is a compiler missed-optimization. It happens even if you make num a local variable instead of a global, even when the initializer is still constant!
I included some perf analysis for Intel Sandybridge-family CPUs because they're common; you didn't say and yes it matters: bt [mem], reg has one per 3 cycle throughput on Ryzen, one per 5 cycle throughput on Haswell. And other perf characteristics differ...
(For just looking at the asm, it's usually a good idea to make a function with args to get code the compiler can't do constant-propagation on. It can't in this case because it doesn't know if anything modifies num before main runs, because it's not static.)
Your instruction-counting didn't include the whole loop so your counts are wrong, but more importantly you didn't consider the different costs of different instructions. (See Agner Fog's instruction tables and optimization manual.)
This is your whole inner loop with the _bittest intrinsic, with uop counts for Haswell / Skylake:
for (nBit = 0; nBit < 31; nBit++) {
    bits[nBit] = _bittest(&num, nBit);
    //bits[nBit] = (bool)(num & (1UL << nBit));  // much more efficient
}
Asm output from MSVC CL19 -Ox on the Godbolt compiler explorer
$LL7@main:
    bt       DWORD PTR num, ebx      ; 10 uops (microcoded), one per 5 cycle throughput
    lea      rcx, QWORD PTR [rcx+1]  ; 1 uop
    setb     al                      ; 1 uop
    inc      ebx                     ; 1 uop
    mov      BYTE PTR [rcx-1], al    ; 1 uop (micro-fused store-address and store-data)
    cmp      ebx, 31
    jb       SHORT $LL7@main         ; 1 uop (macro-fused with cmp)
That's 15 fused-domain uops, so it can issue (at 4 per clock) in 3.75 cycles. But that's not the bottleneck: Agner Fog's testing found that bt [mem], reg has a throughput of one per 5 clock cycles.
IDK why it's 3x slower than your other loop. Maybe the other ALU instructions compete for the same port as the bt, or the data dependency it's part of causes a problem, or just being a micro-coded instruction is a problem, or maybe the outer loop is less efficient?
Anyway, using bt [mem], reg instead of bt reg, reg is a major missed optimization. This loop would have been faster than your other loop with a 1 uop, 1c latency, 2-per-clock throughput bt r9d, ebx.
The inner loop compiles to and, add (as the left shift), and sar.
Huh? Those are the instructions MSVC associates with the curBit <<= 1; source line (even though that line is fully implemented by the add self,self, and the variable-count arithmetic right shift is part of a different line.)
But the whole loop is this clunky mess:
long curBit = 1;
for (nBit = 0; nBit < 31; nBit++) {
    bits[nBit] = (num & curBit) >> nBit;
    curBit <<= 1;
}
$LL18@main:                          ; MSVC CL19 -Ox
    mov      ecx, ebx                ; 1 uop
    lea      r8, QWORD PTR [r8+1]    ; 1 uop pointer-increment for bits
    mov      eax, r9d                ; 1 uop. r9d holds num
    inc      ebx                     ; 1 uop
    and      eax, edx                ; 1 uop
    ; MSVC says all the rest of these instructions are from curBit <<= 1; but they're obviously not.
    add      edx, edx                ; 1 uop
    sar      eax, cl                 ; 3 uops (variable-count shifts suck)
    mov      BYTE PTR [r8-1], al     ; 1 uop (micro-fused)
    cmp      ebx, 31
    jb       SHORT $LL18@main        ; 1 uop (macro-fused with cmp)
So this is 11 fused-domain uops, and takes 2.75 clock cycles per iteration to issue from the front-end.
I don't see any loop-carried dep chains longer than that front-end bottleneck, so it probably runs about that fast.
Copying ebx to ecx every iteration instead of just using ecx as the loop counter (nBit) is an obvious missed optimization. The shift-count is needed in cl for a variable-count shift (unless you enable BMI2 instructions, if MSVC can even do that.)
There are major missed optimizations here (in the "fast" version), so you should probably write your source differently to hand-hold your compiler into making less bad code. It implements the source fairly literally instead of transforming it into something the CPU can do efficiently, or using bt reg,reg / setc.
How to do this fast in asm or with intrinsics
Use SSE2 / AVX. Get the right byte (containing the corresponding bit) into each byte element of a vector, and PANDN (to invert your vector) with a mask that has the right bit for that element. PCMPEQB against zero. That gives you 0 / -1. To get ASCII digits, subtract that 0 or -1 from ASCII '0' with _mm_sub_epi8(set1('0'), mask), conditionally turning '0' into '1'. (There's a sketch of this after the links below.)
The first step of this (getting a vector of 0/-1 from a bitmask) is covered in How to perform the inverse of _mm256_movemask_epi8 (VPMOVMSKB)?.
Fastest way to unpack 32 bits to a 32 byte SIMD vector (has a 128b version). Without SSSE3 (pshufb), I think punpcklbw / punpcklwd (and maybe pshufd) is what you need to repeat each byte of num 8 times and make two 16-byte vectors.
is there an inverse instruction to the movemask instruction in intel avx2?.
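Here's a sketch of that idea for the low 16 bits of num (my code, not from the linked answers; it uses PAND plus PCMPEQB against the mask rather than PANDN plus PCMPEQB against zero, which yields the same 0 / -1 result):

#include <emmintrin.h>   // SSE2
#include <cstdint>

void bits_to_ascii16(uint32_t num, char out[16]) {
    __m128i v = _mm_cvtsi32_si128((int)num);
    v = _mm_unpacklo_epi8(v, v);                  // each byte repeated 2x
    v = _mm_unpacklo_epi16(v, v);                 // ... 4x
    v = _mm_unpacklo_epi32(v, v);                 // ... 8x: byte0 x8, byte1 x8
    const __m128i mask = _mm_setr_epi8(1, 2, 4, 8, 16, 32, 64, (char)128,
                                       1, 2, 4, 8, 16, 32, 64, (char)128);
    __m128i sel   = _mm_and_si128(v, mask);       // isolate one bit per byte lane
    __m128i isset = _mm_cmpeq_epi8(sel, mask);    // 0 / -1 per lane
    __m128i ascii = _mm_sub_epi8(_mm_set1_epi8('0'), isset); // '0' - (-1) = '1'
    _mm_storeu_si128((__m128i*)out, ascii);       // 16 ASCII digits, LSB first
}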
In scalar code, this is one way that runs at 1 bit->byte per clock. There are probably ways to do better without using SSE2 (storing multiple bytes at once to get around the 1 store per clock bottleneck that exists on all current CPUs), but why bother? Just use SSE2.
    mov     eax, [num]
    lea     rdi, [rsp + xxx]    ; bits[]
.loop:
    shr     eax, 1              ; constant-count shift is efficient (1 uop). CF = last bit shifted out
    setc    [rdi]               ; 2 uops, but just as efficient as setc reg / mov [mem], reg
    shr     eax, 1
    setc    [rdi+1]
    add     rdi, 2
    cmp     rdi, end_pointer    ; compare against another register instead of a separate counter
    jb      .loop
Unrolled by two to avoid bottlenecking on the front-end, so this can run at 1 bit per clock.
The difference is that the code _bittest(&num, nBit); uses a pointer to num, which makes the compiler store it in memory, and the memory access makes the code a lot slower.
bits[nBit] = _bittest(&num, nBit);
00007FF6D25110A0 bt dword ptr [num (07FF6D2513034h)],ebx ; <-----
00007FF6D25110A7 lea rcx,[rcx+1]
00007FF6D25110AB setb al
00007FF6D25110AE inc ebx
00007FF6D25110B0 mov byte ptr [rcx-1],al
The other version stores all the variables in registers, and uses very fast register shifts and adds. No memory accesses.

Strange uint32_t to float array conversion

I have the following code snippet:
#include <cstdio>
#include <cstdint>

static const size_t ARR_SIZE = 129;

int main()
{
    uint32_t value = 2570980487;
    uint32_t arr[ARR_SIZE];
    for (int x = 0; x < ARR_SIZE; ++x)
        arr[x] = value;

    float arr_dst[ARR_SIZE];
    for (int x = 0; x < ARR_SIZE; ++x)
    {
        arr_dst[x] = static_cast<float>(arr[x]);
    }

    printf("%s\n", arr_dst[ARR_SIZE - 1] == arr_dst[ARR_SIZE - 2] ? "OK" : "WTF??!!");
    printf("magic = %0.10f\n", arr_dst[ARR_SIZE - 2]);
    printf("magic = %0.10f\n", arr_dst[ARR_SIZE - 1]);
    return 0;
}
If I compile it under MS Visual Studio 2015 I can see that the output is:
WTF??!!
magic = 2570980352.0000000000
magic = 2570980608.0000000000
So the last arr_dst element is different from the previous one, yet these two values were obtained by converting the same value, which populates the arr array!
Is it a bug?
I noticed that if I modify the conversion loop in the following manner, I get the "OK" result:
for (int x = 0; x < ARR_SIZE; ++x)
{
    if (x == 0)
        x = 0;
    arr_dst[x] = static_cast<float>(arr[x]);
}
So this probably is some issue with vectorizing optimisation.
This behavior does not reproduce on gcc 4.8. Any ideas?
A 32-bit IEEE-754 binary float, such as MSVC++ uses, provides only 6-7 decimal digits of precision. Your starting value is well within the range of that type, but it seems not to be exactly representable by that type, as indeed is the case for most values of type uint32_t.
At the same time, the floating-point unit of an x86 or x86_64 processor uses a wider representation even than MSVC++'s 64-bit double. It seems likely that after the loop exits, the last-computed array element remains in an FPU register, in its extended precision form. The program may then use that value directly from the register instead of reading it back from memory, which it is obligated to do with previous elements.
If the program performs the == comparison by promoting the narrower representation to the wider instead of the other way around, then the two values might indeed compare unequal, as the round-trip from extended precision to float and back loses precision. In any event, both values are converted to type double when passed to printf(); if indeed they compared unequal, then it is likely that the results of those conversions differ, too.
I'm not up on MSVC++ compile options, but very likely there is one that would quash this behavior. Such options sometimes go by names such as "strict math" or "strict fp". Be aware, however, that turning on such an option (or turning off its opposite) can be very costly in an FP-heavy program.
Converting between unsigned and float is not simple on x86; there's no single instruction for it (until AVX512). A common technique is to convert as signed and then fixup the result. There are multiple ways of doing this. (See this Q&A for some manually-vectorized methods with C intrinsics, not all of which have perfectly-rounded results.)
MSVC vectorizes the first 128 elements with one strategy, and then uses a different strategy (which wouldn't vectorize) for the last scalar element, which involves converting to double and then from double to float.
gcc and clang produce the 2570980608.0 result from their vectorized and scalar methods. 2570980608 - 2570980487 = 121, and 2570980487 - 2570980352 = 135 (with no rounding of inputs/outputs), so gcc and clang produce the correctly rounded result in this case (less than 0.5ulp of error). IDK if that's true for every possible uint32_t (but there are only 2^32 of them, we could exhaustively check). MSVC's end result for the vectorized loop has slightly more than 0.5ulp of error, but the scalar method is correctly rounded for this input.
IEEE math demands that + - * / and sqrt produce correctly rounded results (less than 0.5ulp of error), but other functions (like log) don't have such a strict requirement. IDK what the requirements are on rounding for int->float conversions, so IDK if what MSVC does is strictly legal (if you didn't use /fp:fast or anything).
See also Bruce Dawson's Floating-Point Determinism blog post (part of his excellent series about FP math), although he doesn't mention integer<->FP conversions.
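As a reference point, the vectorized fixup MSVC uses (shown in the asm below) looks roughly like this with SSE2 intrinsics; this is a sketch, and note that this particular fixup is not perfectly rounded for all inputs:

#include <emmintrin.h>

// Convert four uint32_t lanes to float: convert as signed, then add
// 2^32 (as a float) back into any lane that came out negative.
__m128 u32x4_to_f32x4(__m128i v) {
    __m128 f   = _mm_cvtepi32_ps(v);                 // signed conversion (cvtdq2ps)
    __m128 neg = _mm_cmplt_ps(f, _mm_setzero_ps());  // all-ones mask where lane wrapped negative
    __m128 fix = _mm_and_ps(neg, _mm_set1_ps(4294967296.0f)); // 2^32 where needed
    return _mm_add_ps(f, fix);
}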
We can see in the asm linked by the OP what MSVC did (stripped down to only the interesting instructions and commented by hand):
; Function compile flags: /Ogtp
# assembler macro constants
_arr_dst$ = -1040 ; size = 516
_arr$ = -520 ; size = 516
_main PROC ; COMDAT
00013 mov edx, 129
00018 mov eax, -1723986809 ; this is your unsigned 2570980487
0001d mov ecx, edx
00023 lea edi, DWORD PTR _arr$[esp+1088] ; edi=arr
0002a rep stosd ; memset in chunks of 4B
# arr[0..128] = 2570980487 at this point
0002c xor ecx, ecx ; i = 0
# xmm2 = 0.0 in each element (i.e. all-zero)
# xmm3 = __xmm@4f8000004f8000004f8000004f800000 (a constant repeated in each of 4 float elements)
####### The vectorized unsigned->float conversion strategy:
$LL7@main: ; do{
00030 movups xmm0, XMMWORD PTR _arr$[esp+ecx*4+1088] ; load 4 uint32_t
00038 cvtdq2ps xmm1, xmm0 ; SIGNED int to Single-precision float
0003b movaps xmm0, xmm1
0003e cmpltps xmm0, xmm2 ; xmm0 = (xmm0 < 0.0)
00042 andps xmm0, xmm3 ; mask the magic constant
00045 addps xmm0, xmm1 ; x += (x<0.0) ? magic_constant : 0.0f;
# There's no instruction for converting from unsigned to float, so compilers use inconvenient techniques like this to correct the result of converting as signed.
00048 movups XMMWORD PTR _arr_dst$[esp+ecx*4+1088], xmm0 ; store 4 floats to arr_dst
; and repeat the same thing again, with addresses that are 16B higher (+1104)
; i.e. this loop is unrolled by two
0006a add ecx, 8 ; i+=8 (two vectors of 4 elements)
0006d cmp ecx, 128
00073 jb SHORT $LL7@main ; }while(i<128)
#### End of vectorized loop
# and then IDK what MSVC is smoking; both these values are known at compile time. Is /Ogtp not full optimization?
# I don't see a branch target that would let execution reach this code
# other than by falling out of the loop that ends with ecx=128
00075 cmp ecx, edx
00077 jae $LN21@main ; if(i>=129): always false
0007d sub edx, ecx ; edx = 129-128 = 1
... some more ridiculous known-at-compile-time jumping later ...
######## The scalar unsigned->float conversion strategy for the last element
$LC15@main:
00140 mov eax, DWORD PTR _arr$[esp+ecx*4+1088]
00147 movd xmm0, eax
# eax = xmm0[0] = arr[128]
0014b cvtdq2pd xmm0, xmm0 ; convert the last element TO DOUBLE
0014f shr eax, 31 ; shift the sign bit to bit 1, so eax = 0 or 1
; then eax indexes a 16B constant, selecting either 0 or 0x41f0... (as whatever double that represents)
00152 addsd xmm0, QWORD PTR __xmm@41f00000000000000000000000000000[eax*8]
0015b cvtpd2ps xmm0, xmm0 ; double -> float
0015f movss DWORD PTR _arr_dst$[esp+ecx*4+1088], xmm0 ; and store it
00165 inc ecx ; ++i;
00166 cmp ecx, 129 ; } while(i<129)
0016c jb SHORT $LC15@main
# Yes, this is a loop, which always runs exactly once for the last element
By way of comparison, clang and gcc also don't optimize the whole thing away at compile time, but they do realize that they don't need a cleanup loop, and just do a single scalar store or convert after the respective loops. (clang actually fully unrolls everything unless you tell it not to.)
See the code on the Godbolt compiler explorer.
gcc just converts the upper and lower 16b halves to float separately, and combines them with a multiply by 65536 and add.
Clang's unsigned -> float conversion strategy is interesting: it never uses a cvt instruction at all. I think it stuffs the two 16-bit halves of the unsigned integer into the mantissas of two floats directly (with some tricks to set the exponents: bitwise boolean stuff and an ADDPS), then adds the low and high halves together like gcc does.
Of course, if you compile to 64-bit code, the scalar conversion can just zero-extend the uint32_t to 64-bit and convert that as a signed int64_t to float. Signed int64_t can represent every value of uint32_t, and x86 can convert a 64-bit signed int to float efficiently. But that doesn't vectorize.
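In C++, that scalar 64-bit approach is a one-liner (a sketch):

#include <cstdint>

float u32_to_float(uint32_t x) {
    // int64_t can represent every uint32_t value, so the signed
    // 64-bit conversion instruction handles it directly.
    return static_cast<float>(static_cast<int64_t>(x));
}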
I did an investigation on a PowerPC implementation (Freescale MPC7450), as they are IMHO far better documented than any voodoo Intel comes up with.
As it turns out, the floating-point unit (FPU) and the vector unit may use different rounding for floating-point operations. The FPU can be configured to use one of four rounding modes: round to nearest (the default), truncate, toward positive infinity, and toward negative infinity. The vector unit, however, is only able to round to nearest, with a few select instructions having specific rounding rules. The internal precision of the FPU is 106 bits. The vector unit fulfills IEEE-754, but the documentation does not state much more.
Looking at your result, the conversion to 2570980608 is closer to the original integer, suggesting the FPU has better internal precision than the vector unit, or a different rounding mode.

C++ inline assembler. How to read a value with the two lines?

Reading an address and then reading the value at that address:
int m, n, k;
m = 7;
k = (int)&m;
n = *(int*)k;
The last line is compiled by Visual Studio 2013 to:
mov eax, k
mov eax, [eax]
mov n, eax
when the best variant would be:
mov eax,[k]
mov n,eax
But the code below does not work, because [k] is interpreted as k:
__asm {
    mov eax, [k]
    mov n, eax
}
Why? How to fix it?
You are trying to do two indirections in one go. x86 doesn't support this.
When you are doing n = *(int *)k;, you are really reading the value of k and then reading the content of that memory location. Since the content of k is not in a register at this point, it needs to be loaded into a register, and then that register content stored in n.
If you had a PDP-11 or VAX processor, it does indeed have a mov *(k), n (with the opposite operand order to Intel assembler, so it moves from k on the left to n on the right).
But x86, ARM, MIPS, 29K, 68000, and most other processors don't support this addressing mode.
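So with MSVC inline asm, the shortest working form is the same three-instruction sequence the compiler already generates; a minimal sketch:

__asm {
    mov eax, k        ; eax = the address stored in k
    mov eax, [eax]    ; eax = *(int*)k
    mov n, eax        ; n = the loaded value
}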