Namespace in debug symbols of in-class defined friend functions - C++

I'm dealing with a class that defines a friend function inside the class, without any declaration outside of it:
namespace our_namespace {
template <typename T>
struct our_container {
  friend our_container set_union(our_container const &, our_container const &) {
    // meaningless for the example here, just a valid definition
    // no valid semantics
    return our_container{};
  }
};
}  // namespace our_namespace
As discussed (e.g. here or here), the function set_union is not visible in the our_namespace namespace but will be found by argument-dependent lookup:
auto foo(std::vector<our_namespace::our_container<float>> in) {
  // works:
  return set_union(in[0], in[1]);
}
I noticed, however, that in the debug symbols set_union appears to be in the our_namespace namespace:
mov rdi, qword ptr [rbp - 40] # 8-byte Reload
mov rsi, rax
call our_namespace::set_union(our_namespace::our_container<float> const&, our_namespace::our_container<float> const&)
add rsp, 48
pop rbp
ret
our_namespace::set_union(our_namespace::our_container<float> const&, our_namespace::our_container<float> const&): # #our_namespace::set_union(our_namespace::our_container<float> const&, our_namespace::our_container<float> const&)
push rbp
mov rbp, rsp
mov qword ptr [rbp - 16], rdi
mov qword ptr [rbp - 24], rsi
pop rbp
ret
although I can't call it as our_namespace::set_union
auto foo(std::vector<our_namespace::our_container<float>> in) {
  // fails:
  return our_namespace::set_union(in[0], in[1]);
}
Any hints about how the debug information is to be understood?
EDIT: The set_union function body is only a strawman example here, to have a valid definition.

The C++ standard only defines compiler behavior with regard to compiling the code and the behavior of the resulting program. It doesn't define all aspects of code generation, and in particular, it doesn't define debug symbols.
So your compiler correctly (as per the Standard) disallows calling the function through a namespace it is not visible in. But since the function does exist and you should be able to debug it, the compiler needs to put the debug symbol somewhere, and the enclosing namespace seems a reasonable choice.
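For what it's worth, here is a minimal sketch (an editor's illustration, not part of the original posts; the names ns, widget and merge are made up) of the usual way to also make such a hidden friend callable with a qualified name: add a matching declaration at namespace scope.
namespace ns {
struct widget {
  // Hidden friend: defined inside the class, found only by ADL.
  friend widget merge(widget const &, widget const &) { return widget{}; }
};

// A matching namespace-scope declaration makes the name visible
// to qualified lookup as well.
widget merge(widget const &, widget const &);
}  // namespace ns

int main() {
  ns::widget a, b;
  auto via_adl   = merge(a, b);      // always works (ADL)
  auto qualified = ns::merge(a, b);  // works only thanks to the redeclaration
}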

Related

How to reduce the size of the executable?

When I compile this code using the {fmt} lib, the executable size becomes 255 KiB whereas by using only iostream header it becomes 65 KiB (using GCC v11.2).
time_measure.cpp
#include <iostream>
#include "core.h"
#include <string_view>
int main( )
{
    // std::cout << std::string_view( "Oh hi!" );
    fmt::print( "{}", std::string_view( "Oh hi!" ) );
    return 0;
}
Here is my build command:
g++ -std=c++20 -Wall -O3 -DNDEBUG time_measure.cpp -I include format.cc -o runtime_measure.exe
Isn't the {fmt} library supposed to be lightweight compared to iostream? Or maybe I'm doing something wrong?
Edit: By adding -s to the command in order to remove all symbol table and relocation information from the executable, it becomes 156 KiB. But still ~2.5X more than the iostream version.
As with any other library, there is a fixed cost and a per-call cost. The fixed cost for the {fmt} library is indeed around 100-150k without debug info (it depends on the compiler flags). In your example you are comparing this fixed cost of linking with the library; iostreams only appears to be smaller because it is part of the standard library itself, which is linked dynamically and therefore not counted towards the binary size of the executable.
Note that a large part of this size comes from floating-point formatting functionality, which doesn't even exist in iostreams (shortest round-trip representation).
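One way to make the fixed-cost comparison fairer (an editor's suggestion, not part of the original answer) is to link the standard library statically when building the iostream-only variant, so that its machinery is counted in the binary size as well:
g++ -std=c++20 -O3 -DNDEBUG -static-libstdc++ -static-libgcc time_measure.cpp -o iostream_static.exe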
If you want to compare per-call binary size, which is more important for real-world code with a large number of formatting function calls, you can look at object files or generated assembly. For example:
#include <fmt/core.h>

int main() {
  fmt::print("Oh hi!");
}
generates (https://godbolt.org/z/qWTKEMqoG)
.LC0:
.string "Oh hi!"
main:
sub rsp, 24
pxor xmm0, xmm0
xor edx, edx
mov edi, OFFSET FLAT:.LC0
mov rcx, rsp
mov esi, 6
movaps XMMWORD PTR [rsp], xmm0
call fmt::v8::vprint(fmt::v8::basic_string_view<char>, fmt::v8::basic_format_args<fmt::v8::basic_format_context<fmt::v8::appender, char> >)
xor eax, eax
add rsp, 24
ret
while
#include <iostream>

int main() {
  std::cout << "Oh hi!";
}
generates (https://godbolt.org/z/frarWvzhP)
.LC0:
.string "Oh hi!"
main:
sub rsp, 8
mov edx, 6
mov esi, OFFSET FLAT:.LC0
mov edi, OFFSET FLAT:_ZSt4cout
call std::basic_ostream<char, std::char_traits<char> >& std::__ostream_insert<char, std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&, char const*, long)
xor eax, eax
add rsp, 8
ret
_GLOBAL__sub_I_main:
sub rsp, 8
mov edi, OFFSET FLAT:_ZStL8__ioinit
call std::ios_base::Init::Init() [complete object constructor]
mov edx, OFFSET FLAT:__dso_handle
mov esi, OFFSET FLAT:_ZStL8__ioinit
mov edi, OFFSET FLAT:_ZNSt8ios_base4InitD1Ev
add rsp, 8
jmp __cxa_atexit
Other than the static initialization for cout, there is not much difference because there is virtually no formatting here, so it's just one function call in both cases. Once you add formatting you'll quickly see the benefits of {fmt}; see e.g. http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p0645r10.html#BinaryCode.
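For instance, a call that actually formats its arguments (a made-up example, not from the original answer) is the kind of code where the per-call differences start to show:
#include <fmt/core.h>

int main() {
  // Format a float with two decimals and an int in a single call;
  // the equivalent iostreams code needs <iomanip> manipulators.
  fmt::print("pi = {:.2f}, answer = {}\n", 3.14159, 42);
}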
You forget that iostreams is already included in libstdc++.so, which is not counted towards the binary size since it is (usually) a shared library. I believe that by default {fmt} is built as a static library, so it increases the binary size. You need to compile {fmt} as a shared library with -DBUILD_SHARED_LIBS=TRUE, as explained in the building instructions.
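Roughly, such a build could look like this (an editor's sketch; exact paths, generator, and runtime loader setup depend on your environment):
cmake -S fmt -B fmt/build -DBUILD_SHARED_LIBS=TRUE
cmake --build fmt/build
g++ -std=c++20 -O3 -DNDEBUG time_measure.cpp -I include -L fmt/build -lfmt -o runtime_measure.exe
# at run time the dynamic loader must be able to find libfmt.so (rpath or LD_LIBRARY_PATH)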
In your build/link command, why don't you use the -Os option (optimize for size)?

Some kext member functions must be redefined, to avoid unresolved symbols

TL;DR
A subclass is reimplementing (redefining) a virtual function of the superclass (base class) in the scope of the superclass, because the dynamic loader requires it to do so. It doesn't make any sense to me.
Example:
class IO80211Controller : public IOEthernetController
{
    virtual IOReturn enablePacketTimestamping(); // Implemented in binary, I can see the disassembly.
};

// .cpp - Original definition in the superclass binary.
IOReturn IO80211Controller::enablePacketTimestamping()
{
    return kIOReturnUnsupported; // This is from the disassembly of IO80211Controller
}
The above isn't the real header; I hope it's close to what it should be - no header is available.
// .hpp
class AirPortBrcm4331 : public IO80211Controller
{
    // Subclass stuff goes here
};

// .cpp - Redefinition with superclass namespace.
IOReturn IO80211Controller::enablePacketTimestamping()
{
    return kIOReturnUnsupported; // This is from the disassembly of AirPortBrcm4331
}
Background
I'm researching IO80211Family.kext (for which no headers are available), and the IO80211Controller class in particular - I'm in the process of reversing the header so it will be possible to inherit from this class and create custom 802.11 drivers.
Discovering the problem
IO80211Controller defines many virtual member functions, which I need to declare in my reversed header file. I created a header file with all virtual functions (extracted from IO80211Controller's vtable) and used it for my subclass.
When loading my new kext (with the subclass), there were linking errors:
kxld[com.osxkernel.MyWirelessDriver]: The following symbols are unresolved for this kext:
kxld[com.osxkernel.MyWirelessDriver]: IO80211Controller::enableFeature(IO80211FeatureCode, void*)
kxld[com.osxkernel.MyWirelessDriver]: IO80211Controller::flowIdSupported()
kxld[com.osxkernel.MyWirelessDriver]: IO80211Controller::apple80211_ioctl(IO80211Interface*, __ifnet*, unsigned long, void*)
kxld[com.osxkernel.MyWirelessDriver]: IO80211Controller::enablePacketTimestamping()
kxld[com.osxkernel.MyWirelessDriver]: IO80211Controller::hardwareOutputQueueDepth(IO80211Interface*)
kxld[com.osxkernel.MyWirelessDriver]: IO80211Controller::disablePacketTimestamping()
kxld[com.osxkernel.MyWirelessDriver]: IO80211Controller::performCountryCodeOperation(IO80211Interface*, IO80211CountryCodeOp)
kxld[com.osxkernel.MyWirelessDriver]: IO80211Controller::requiresExplicitMBufRelease()
kxld[com.osxkernel.MyWirelessDriver]: IO80211Controller::_RESERVEDIO80211Controllerless7()
kxld[com.osxkernel.MyWirelessDriver]: IO80211Controller::stopDMA()
Link failed (error code 5).
The reversed header of the superclass contains over 50 virtual member functions, so if there were any linking problems, I would assume it would be all-or-nothing. When I add a simple implementation for these functions (using the superclass scope), the linking errors are gone.
Two questions arise
How can multiple implementations of the same functions co-exist? They both live in the kernel address space.
What makes these specific functions so special, while the other 50 are fine without this weird reimplementation requirement?
Hypothesis
I can't answer the first question, but I have started researching the second.
I looked into the IO80211Family Mach-O symbol table: none of the functions with linking errors have the N_EXT bit set in their type field - meaning they are not external symbols - while the other functions do have the N_EXT bit set.
I wasn't sure how this affects the kext loading procedure, so I dove into the XNU source and looked for the kext loading code. There's a major player here called vtable patching, which might shed some light on my first question.
Anyway, there's a predicate function called kxld_sym_is_unresolved which checks whether a symbol is unresolved. kxld calls this function on all symbols, to verify they are all OK.
boolean_t
kxld_sym_is_unresolved(const KXLDSym *sym)
{
    return ((kxld_sym_is_undefined(sym) && !kxld_sym_is_replaced(sym)) ||
        kxld_sym_is_indirect(sym) || kxld_sym_is_common(sym));
}
In my case, the result of this function comes down to the return value of kxld_sym_is_replaced, which simply checks whether the symbol has been patched (vtable patching). I don't understand well enough what that is and how it affects me...
The Grand Question
Why did Apple choose to make these functions non-external? Are they implying that they should be implemented by others - and if so, why in the same scope as the superclass? I jumped into the source to find an answer to this but didn't. This is what disturbs me most - it doesn't follow my logic. I understand that a full, comprehensive answer is probably too complicated, so at least help me understand, on a higher level, what's going on here: what's the logic behind not letting the subclass get the implementation of these specific functions, in such a weird way (why not make them pure virtual)?
Thank you so much for reading this!
The immediate explanation is indeed that the symbols are not exported by the IO80211 kext. The likely reason behind this, however, is that the functions are implemented inline, like so:
class IO80211Controller : public IOEthernetController
{
    //...
    virtual IOReturn enablePacketTimestamping()
    {
        return kIOReturnUnsupported;
    }
    //...
};
For example, if I build this code:
#include <cstdio>

class MyClass
{
public:
    virtual void InlineVirtual() { printf("MyClass::InlineVirtual\n"); }
    virtual void RegularVirtual();
};

void MyClass::RegularVirtual()
{
    printf("MyClass::RegularVirtual\n");
}

int main()
{
    MyClass a;
    a.InlineVirtual();
    a.RegularVirtual();
}
using the command
clang++ -std=gnu++14 inline-virtual.cpp -o inline-virtual
and then inspect the symbols using nm:
$ nm ./inline-virtual
0000000100000f10 t __ZN7MyClass13InlineVirtualEv
0000000100000e90 T __ZN7MyClass14RegularVirtualEv
0000000100000ef0 t __ZN7MyClassC1Ev
0000000100000f40 t __ZN7MyClassC2Ev
0000000100001038 S __ZTI7MyClass
0000000100000faf S __ZTS7MyClass
0000000100001018 S __ZTV7MyClass
U __ZTVN10__cxxabiv117__class_type_infoE
0000000100000000 T __mh_execute_header
0000000100000ec0 T _main
U _printf
U dyld_stub_binder
You can see that MyClass::InlineVirtual has hidden visibility (t), while MyClass::RegularVirtual is exported (T). The implementation of a function declared inline (either explicitly with the keyword or implicitly by placing it inside the class definition) must be provided in every compilation unit that calls it, so it makes sense that such functions wouldn't be exported.
You're hitting a very simple phenomenon: unexported symbols.
$ nm /System/Library/Extensions/IO80211Family.kext/Contents/MacOS/IO80211Family | fgrep __ZN17IO80211Controller | egrep '\w{16} t'
00000000000560c6 t __ZN17IO80211Controller13enableFeatureE18IO80211FeatureCodePv
00000000000560f6 t __ZN17IO80211Controller15flowIdSupportedEv
0000000000055fd4 t __ZN17IO80211Controller16apple80211_ioctlEP16IO80211InterfaceP7__ifnetmPv
0000000000055f74 t __ZN17IO80211Controller21monitorModeSetEnabledEP16IO80211Interfacebj
0000000000056154 t __ZN17IO80211Controller24enablePacketTimestampingEv
0000000000056008 t __ZN17IO80211Controller24hardwareOutputQueueDepthEP16IO80211Interface
0000000000056160 t __ZN17IO80211Controller25disablePacketTimestampingEv
0000000000056010 t __ZN17IO80211Controller27performCountryCodeOperationEP16IO80211Interface20IO80211CountryCodeOp
00000000000560ee t __ZN17IO80211Controller27requiresExplicitMBufReleaseEv
0000000000055ffc t __ZN17IO80211Controller7stopDMAEv
0000000000057452 t __ZN17IO80211Controller9MetaClassD0Ev
0000000000057448 t __ZN17IO80211Controller9MetaClassD1Ev
Save for the two MetaClass destructors, the only difference from your list of linking errors is monitorModeSetEnabled (any chance you're overriding that?).
Now on my system I have exactly one class extending IO80211Controller, which is AirPort_BrcmNIC, implemented by com.apple.driver.AirPort.BrcmNIC. So let's look at how that handles it:
$ nm /System/Library/Extensions/AirPortBrcmNIC-MFG.kext/Contents/MacOS/AirPortBrcmNIC-MFG | egrep '13enableFeatureE18IO80211FeatureCodePv|15flowIdSupportedEv|16apple80211_ioctlEP16IO80211InterfaceP7__ifnetmPv|21monitorModeSetEnabledEP16IO80211Interfacebj|24enablePacketTimestampingEv|24hardwareOutputQueueDepthEP16IO80211Interface|25disablePacketTimestampingEv|27performCountryCodeOperationEP16IO80211Interface20IO80211CountryCodeOp|27requiresExplicitMBufReleaseEv|7stopDMAEv'
0000000000046150 t __ZN17IO80211Controller15flowIdSupportedEv
0000000000046120 t __ZN17IO80211Controller16apple80211_ioctlEP16IO80211InterfaceP7__ifnetmPv
0000000000046160 t __ZN17IO80211Controller24enablePacketTimestampingEv
0000000000046170 t __ZN17IO80211Controller25disablePacketTimestampingEv
0000000000046140 t __ZN17IO80211Controller27requiresExplicitMBufReleaseEv
000000000003e880 T __ZN19AirPort_BrcmNIC_MFG13enableFeatureE18IO80211FeatureCodePv
0000000000025b10 T __ZN19AirPort_BrcmNIC_MFG21monitorModeSetEnabledEP16IO80211Interfacebj
0000000000025d20 T __ZN19AirPort_BrcmNIC_MFG24hardwareOutputQueueDepthEP16IO80211Interface
0000000000038cf0 T __ZN19AirPort_BrcmNIC_MFG27performCountryCodeOperationEP16IO80211Interface20IO80211CountryCodeOp
000000000003e7d0 T __ZN19AirPort_BrcmNIC_MFG7stopDMAEv
So one bunch of methods they've overridden, and the rest... they re-implemented locally. Firing up a disassembler, we can see that these are really just stubs:
;-- IO80211Controller::apple80211_ioctl(IO80211Interface*,__ifnet*,unsignedlong,void*):
;-- method.IO80211Controller.apple80211_ioctl_IO80211Interface____ifnet__unsignedlong_void:
0x00046120 55 push rbp
0x00046121 4889e5 mov rbp, rsp
0x00046124 4d89c1 mov r9, r8
0x00046127 4989c8 mov r8, rcx
0x0004612a 4889d1 mov rcx, rdx
0x0004612d 488b17 mov rdx, qword [rdi]
0x00046130 488b82900c00. mov rax, qword [rdx + 0xc90]
0x00046137 31d2 xor edx, edx
0x00046139 5d pop rbp
0x0004613a ffe0 jmp rax
0x0004613c 0f1f4000 nop dword [rax]
;-- IO80211Controller::requiresExplicitMBufRelease():
;-- method.IO80211Controller.requiresExplicitMBufRelease:
0x00046140 55 push rbp
0x00046141 4889e5 mov rbp, rsp
0x00046144 31c0 xor eax, eax
0x00046146 5d pop rbp
0x00046147 c3 ret
0x00046148 0f1f84000000. nop dword [rax + rax]
;-- IO80211Controller::flowIdSupported():
;-- method.IO80211Controller.flowIdSupported:
0x00046150 55 push rbp
0x00046151 4889e5 mov rbp, rsp
0x00046154 31c0 xor eax, eax
0x00046156 5d pop rbp
0x00046157 c3 ret
0x00046158 0f1f84000000. nop dword [rax + rax]
;-- IO80211Controller::enablePacketTimestamping():
;-- method.IO80211Controller.enablePacketTimestamping:
0x00046160 55 push rbp
0x00046161 4889e5 mov rbp, rsp
0x00046164 b8c70200e0 mov eax, 0xe00002c7
0x00046169 5d pop rbp
0x0004616a c3 ret
0x0004616b 0f1f440000 nop dword [rax + rax]
;-- IO80211Controller::disablePacketTimestamping():
;-- method.IO80211Controller.disablePacketTimestamping:
0x00046170 55 push rbp
0x00046171 4889e5 mov rbp, rsp
0x00046174 b8c70200e0 mov eax, 0xe00002c7
0x00046179 5d pop rbp
0x0004617a c3 ret
0x0004617b 0f1f440000 nop dword [rax + rax]
Which roughly corresponds to this:
uint32_t IO80211Controller::apple80211_ioctl(IO80211Interface *intf, __ifnet *net, unsigned long some, void *whatev)
{
    return this->apple80211_ioctl(intf, (IO80211VirtualInterface*)NULL, net, some, whatev);
}

bool IO80211Controller::requiresExplicitMBufRelease()
{
    return false;
}

bool IO80211Controller::flowIdSupported()
{
    return false;
}

IOReturn IO80211Controller::enablePacketTimestamping()
{
    return kIOReturnUnsupported;
}

IOReturn IO80211Controller::disablePacketTimestamping()
{
    return kIOReturnUnsupported;
}
I didn't try to compile the above, but that should get you on the right track. :)

Using C++ namespaces inside of inline assembly code

I just have a quick question about whether this will work or not.
void __declspec(naked) HookProcessEventProxy() {
    __asm {
        mov CallObjectPointer, ecx
        push edx
        mov edx, dword ptr[esp + 0x8]
        mov UFunctionPointer, edx
        mov edx, dword ptr[esp + 0xC]
        mov ParamsPointer, edx
        pop edx
        pushfd
        pushad
    }
    ProcessEventProxy();
    __asm {
        popad
        popfd
        jmp[Pointers::OldProcessEvent] // This is the line in question.
    }
}
Will the jmp go to Pointers::OldProcessEvent, or will it go to the ProcessEvent I have inside my DLLMain?
HookProcessEventProxy is inside my DLLMain.
From the vendor-specific extensions in the code, it seems that you are compiling this on MSVC. If so, then this is not a problem. The inline assembler understands C++ scoping rules and identifiers.
You can easily verify this for yourself by analyzing the object code produced by the compiler. Either disassemble the binary using dumpbin /disasm, or throw the /FA switch when running the compiler to get a separate listing. What you'll see is that the compiler emits your inline assembly in a very literal fashion:
?HookProcessEventProxy@@YAXXZ PROC ; HookProcessEventProxy, COMDAT
    mov DWORD PTR ?CallObjectPointer@@3HA, ecx ; CallObjectPointer
    push edx
    mov edx, DWORD PTR [esp+8]
    mov DWORD PTR ?UFunctionPointer@@3HA, edx ; UFunctionPointer
    mov edx, DWORD PTR [esp+12]
    mov DWORD PTR ?ParamsPointer@@3HA, edx ; ParamsPointer
    pop edx
    pushfd
    pushad
    call ?ProcessEventProxy@@YAXXZ ; ProcessEventProxy
    popad
    popfd
    jmp ?OldProcessEvent@Pointers@@YAXXZ ; Pointers::OldProcessEvent
?HookProcessEventProxy@@YAXXZ ENDP ; HookProcessEventProxy
The above listing is from the file generated by the compiler when the /FA switch is used. The comments out to the right indicate the corresponding C++ object.
Note that you do not need the brackets around the branch target. Although the inline assembler ignores them, it is confusing to include them. Just write:
jmp Pointers::OldProcessEvent

How to pass operator << for string as parameter?

I want to do something like this:
vector<string> road_map;
// do some stuff
for_each(road_map.begin(), road_map.end(), bind(cout, &ostream::operator<<));
Note: I don't want to use a lambda for this purpose, like this:
[](const string& str){ cout << str << endl; }
The C++ compiler will create auxiliary code for the lambda. That's why I don't want to use a lambda in this case. I suppose there is a more lightweight solution to this problem. Of course it is not critical; if there is no simple solution I will just use a lambda.
This answer mainly investigates the claim that the C++ compiler will create auxiliary code for a lambda.
You should note that there is no member function ostream::operator<< taking a std::string. There is only a free-standing function operator<<(std::ostream&, const std::string&), defined in the <string> header.
I used GCC on godbolt; you can see the example live here.
So I made a version using a lambda and a version using std::bind:
With bind you get
void funcBind(std::vector<std::string>& a)
{
    using namespace std;
    using func_t = std::ostream&(*)(std::ostream&, const std::string&);
    func_t fptr = &operator<<; // select the right overload
    std::for_each(
        a.begin(),
        a.end(),
        std::bind(
            fptr,
            std::ref(std::cout),
            std::placeholders::_1)
    );
}
and this assembly on x86_64
push rbp
push rbx
sub rsp, 8
mov rbp, QWORD PTR [rdi+8]
mov rbx, QWORD PTR [rdi]
cmp rbx, rbp
je .L22
.L24:
mov rdx, QWORD PTR [rbx+8]
mov rsi, QWORD PTR [rbx]
mov edi, OFFSET FLAT:std::cout
add rbx, 32
call std::basic_ostream<char, std::char_traits<char> >& std::__ostream_insert<char, std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&, char const*, long)
cmp rbp, rbx
jne .L24
.L22:
add rsp, 8
pop rbx
pop rbp
ret
and with a lambda you get:
void funcLambda(std::vector<std::string>& a)
{
    std::for_each(
        a.begin(),
        a.end(),
        [](const std::string& b){ std::cout << b; });
}
and this assembly
push rbp
push rbx
sub rsp, 8
mov rbp, QWORD PTR [rdi+8]
mov rbx, QWORD PTR [rdi]
cmp rbx, rbp
je .L27
.L29:
mov rdx, QWORD PTR [rbx+8]
mov rsi, QWORD PTR [rbx]
mov edi, OFFSET FLAT:std::cout
add rbx, 32
call std::basic_ostream<char, std::char_traits<char> >& std::__ostream_insert<char, std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&, char const*, long)
cmp rbp, rbx
jne .L29
.L27:
add rsp, 8
pop rbx
pop rbp
ret
So you don't actually see any difference with any appreciable level of optimization enabled.
You can use an ostream_iterator to generate the output.
#include <string>
#include <vector>
#include <iostream>
#include <algorithm>
#include <iterator>

int main()
{
    std::vector<std::string> road_map{"ab", "cde"};
    // do some stuff
    std::copy(road_map.begin(), road_map.end(),
              std::ostream_iterator<std::string>(std::cout, "\n"));
}

Inline Assembly GCD won't work

I've been writing a simple C++ program that uses assembly to take the GCD of two numbers and output it, as an example from a tutorial I watched. I understand what it's doing, but I don't understand why it won't work.
EDIT: I should add that when it runs, it doesn't output anything at all.
#include <iostream>
using namespace std;

int gcd(int a, int b)
{
    int result;
    _asm
    {
        push ebp
        mov ebp, esp
        mov eax, a
        mov ebx, b
    looptop:
        cmp eax, 0
        je goback
        cmp eax, ebx
        jge modulo
        xchg eax, ebx
    modulo:
        idiv ebx
        mov eax, edx
        jmp looptop
    goback:
        mov eax, ebx
        mov esp, ebp
        pop ebp
        mov result, edx
    }
    return result;
}

int main()
{
    cout << gcd(46, 90) << endl;
    return 0;
}
I'm running it on a 32-bit Windows system; any help would be appreciated. When compiling, I get these four warnings:
warning C4731: 'gcd' : frame pointer register 'ebp' modified by inline assembly code
warning C4731: 'gcd' : frame pointer register 'ebp' modified by inline assembly code
warning C4731: 'main' : frame pointer register 'ebp' modified by inline assembly code
warning C4731: 'main' : frame pointer register 'ebp' modified by inline assembly code
The compiler will insert these or equivalent instructions for you at the beginning and end of the function:
push ebp
mov ebp, esp
...
mov esp, ebp
pop ebp
If you add them manually, you won't be able to access the function's parameters through ebp, which is why the compiler is issuing warnings.
Remove these 4 instructions.
Also, start using the debugger. Today.
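For reference, here is a sketch of the routine with those four instructions removed (an editor's illustration, untested). It also adds cdq before idiv to sign-extend eax into edx, and stores eax rather than edx into result - two further problems a debugger session would reveal in the original code:
int gcd(int a, int b)
{
    int result;
    _asm
    {
        mov eax, a
        mov ebx, b
    looptop:
        cmp eax, 0
        je goback
        cmp eax, ebx
        jge modulo
        xchg eax, ebx
    modulo:
        cdq              ; sign-extend eax into edx:eax before idiv
        idiv ebx         ; edx = eax % ebx
        mov eax, edx
        jmp looptop
    goback:
        mov eax, ebx     ; the GCD ends up in ebx
        mov result, eax  ; store eax, not edx
    }
    return result;
}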