I have an existing project in Visual Studio, with a main file that calls a function file. I also created a CodeTimer.cpp following the steps from the Microsoft guide, and I placed it along with the necessary headers in the same directory as my code and function.
The issue is, I don't know how to link them. The solution builds fine; all three files compile together with no errors. But when I Ctrl-F5 it, I just see the output of my main, for obvious reasons (I never hooked the CodeTimer up to the main).
This is my CodeTimer:
#include "stdafx.h"
#include <tchar.h>
#include <windows.h>
using namespace System;
int _tmain(int argc, _TCHAR* argv[])
{
__int64 ctr1 = 0, ctr2 = 0, freq = 0;
int acc = 0, i = 0;
// Start timing the code.
if (QueryPerformanceCounter((LARGE_INTEGER *)&ctr1) != 0)
{
// Code segment is being timed.
for (i = 0; i<100; i++) acc++;
// Finish timing the code.
QueryPerformanceCounter((LARGE_INTEGER *)&ctr2);
Console::WriteLine("Start Value: {0}", ctr1.ToString());
Console::WriteLine("End Value: {0}", ctr2.ToString());
QueryPerformanceFrequency((LARGE_INTEGER *)&freq);
Console::WriteLine("QueryPerformanceFrequency : {0} per Seconds.", freq.ToString());
Console::WriteLine("QueryPerformanceCounter minimum resolution: 1/{0} Seconds.", freq.ToString());
Console::WriteLine("ctr2 - ctr1: {0} counts.", ((ctr2 - ctr1) * 1.0 / 1.0).ToString());
Console::WriteLine("65536 Increments by 1 computation time: {0} seconds.", ((ctr2 - ctr1) * 1.0 / freq).ToString());
}
else
{
DWORD dwError = GetLastError();
Console::WriteLine("Error value = {0}", dwError.ToString());
}
// Make the console window wait.
Console::WriteLine();
Console::Write("Press ENTER to finish.");
Console::Read();
return 0;
}
NVM, fixed it. I just had to move the body of _tmain() into my main() function, below my existing code, and get rid of the CodeTimer.cpp file completely. It was a conflict of mains: with multiple mains in one project, the build simply used the one it picked as the entry point.
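For anyone who hits the same thing, the merged version looked roughly like this sketch. DoWork() is a hypothetical stand-in for my actual function file, and I've used plain std::cout instead of the managed Console calls:

#include <windows.h>
#include <iostream>

// Stand-in for the real work done by my function file.
void DoWork()
{
    volatile int acc = 0;
    for (int i = 0; i < 100; ++i) acc++;
}

int main()
{
    __int64 ctr1 = 0, ctr2 = 0, freq = 0;

    // Timing logic moved from CodeTimer's _tmain() into main().
    QueryPerformanceCounter((LARGE_INTEGER *)&ctr1);
    DoWork();                                        // the code being timed
    QueryPerformanceCounter((LARGE_INTEGER *)&ctr2);
    QueryPerformanceFrequency((LARGE_INTEGER *)&freq);

    std::cout << "Elapsed: " << (ctr2 - ctr1) * 1.0 / freq << " seconds\n";
    return 0;
}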
At present, I need to calculate the CPU usage of a certain process on the macOS platform (the target process is not directly related to the current process). I use the proc_pid_rusage API: I call it once every fixed interval, take the difference in ri_user_time and ri_system_time across that interval, and from that compute the CPU usage percentage.
I used this on a macOS system with a non-M1 chip, and the results were in line with expectations (basically the same as what I saw in Activity Monitor), but recently I found that the value obtained on an M1 macOS system is far too small. For example, one of my processes that consumes 30+% of the CPU (according to Activity Monitor) comes out at less than 1%.
Here is a demo; you can create a new project and run it directly:
//
//  main.cpp
//  SimpleMonitor
//
//  Created by m1 on 2021/2/23.
//
#include <stdio.h>
#include <stdlib.h>
#include <libproc.h>
#include <stdint.h>
#include <iostream>
#include <thread>   // std::this_thread::sleep_for
#include <chrono>   // std::chrono::seconds

int main(int argc, const char * argv[]) {
    std::cout << "run simple monitor!\n";

    // TODO: change process id:
    int64_t pid = 12483;
    struct rusage_info_v4 ru;
    struct rusage_info_v4 ru2;

    // First sample of the target process's CPU counters.
    int64_t success = (int64_t)proc_pid_rusage((pid_t)pid, RUSAGE_INFO_V4, (rusage_info_t *)&ru);
    if (success != 0) {
        std::cout << "get cpu time fail \n";
        return 0;
    }
    std::cout << "getProcessPerformance, pid=" + std::to_string(pid)
                 + " ru.ri_user_time=" + std::to_string(ru.ri_user_time)
                 + " ru.ri_system_time=" + std::to_string(ru.ri_system_time) << std::endl;

    std::this_thread::sleep_for(std::chrono::seconds(10));

    // Second sample after the 10-second interval.
    int64_t success2 = (int64_t)proc_pid_rusage((pid_t)pid, RUSAGE_INFO_V4, (rusage_info_t *)&ru2);
    if (success2 != 0) {
        std::cout << "get cpu time fail \n";
        return 0;
    }
    std::cout << "getProcessPerformance, pid=" + std::to_string(pid)
                 + " ru2.ri_user_time=" + std::to_string(ru2.ri_user_time)
                 + " ru2.ri_system_time=" + std::to_string(ru2.ri_system_time) << std::endl;

    // CPU time consumed over the interval (assumed to be in nanoseconds).
    int64_t cpu_time = ru2.ri_user_time - ru.ri_user_time + ru2.ri_system_time - ru.ri_system_time;

    // percentage: CPU nanoseconds divided by the 10-second wall interval
    double cpu_usage = (double)cpu_time / 10 / 1000000000 * 100;
    std::cout << pid << " cpu usage: " << cpu_usage << std::endl;
}
What I want to know is whether there is a problem with my calculation method, and if not, how I can handle the inaccurate results on M1 macOS systems.
You have to multiply the CPU time by a conversion constant. Here are some snippets of code from a diff:
+#include <mach/mach_time.h>

mach_timebase_info_data_t sTimebase;
mach_timebase_info(&sTimebase);
timebase_to_ns = (double)sTimebase.numer / (double)sTimebase.denom;

syscpu.total  = task_info.ptinfo.pti_total_system * timebase_to_ns / 1000000;
usercpu.total = task_info.ptinfo.pti_total_user   * timebase_to_ns / 1000000;
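To make that concrete, here is a self-contained sketch (mine, not from the diff above) that applies the same timebase conversion to the question's rusage_info_v4 counters. The assumption is that on Apple Silicon the ri_user_time/ri_system_time values are reported in Mach timebase ticks rather than nanoseconds, so they need to be scaled by numer/denom:

#include <libproc.h>
#include <mach/mach_time.h>
#include <cstdio>

int main()
{
    // Ratio that converts Mach timebase ticks to nanoseconds.
    // On Intel Macs this is 1/1, so the scaling is a no-op there.
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);
    double timebase_to_ns = (double)tb.numer / (double)tb.denom;

    pid_t pid = 12483; // TODO: change process id, as in the demo above

    struct rusage_info_v4 ru;
    if (proc_pid_rusage(pid, RUSAGE_INFO_V4, (rusage_info_t *)&ru) != 0) {
        std::fprintf(stderr, "get cpu time fail\n");
        return 1;
    }

    // Scale the raw counters into nanoseconds before differencing them.
    double user_ns   = ru.ri_user_time   * timebase_to_ns;
    double system_ns = ru.ri_system_time * timebase_to_ns;
    std::printf("user=%.0f ns, system=%.0f ns\n", user_ns, system_ns);
    return 0;
}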
I looked at numerous posts concerning this problem, but none of them apply in my case.
I have a C++ class in one file that has 3 methods. I can set a breakpoint in one method; however, I cannot set a breakpoint on any line of code in the other two methods. This class is built as a library with DEBUG set. All optimizations are turned off.
Below is the code for the two problem methods in this class.
#include "pch.h"
#include <stdio.h>
#include <afxwin.h>
#include <cstring>
#include <io.h>
#include <iostream>
#include <string>
#include "Log.h"
CLog::CLog()
{
ptLog = NULL; // this is the file ptr
}
void CLog::Init()
{
int iFD;
DWORD iLength;
int iStat;
HMODULE hMod;
std::string sPath;
std::string sFile;
int i;
hMod = GetModuleHandle(NULL); // handle to this execuatble
std::cout << "Module = " << hMod;
if(hMod)
{
// Use two bytes ASCII (UNICODE) if set by compiler
char acFile[120];
// Full path name of exe file
GetModuleFileName(hMod, acFile, sizeof(acFile));
std::cout << "File Name = " << acFile<<"\n";
// extract file name from full path and append .log
sPath = acFile;
i = sPath.find_last_of("\\/");
sFile = sPath.substr(i + 1);
sFile.copy(acFile, 120);
std::cout << " File Name Trunc = " << sFile;
sFile.append(".log");
iStat = fopen_s(&ptLog, sFile.data(), "a+"); // append log data to file
std::cout << "fopen stat = " << iStat;
if (iStat != 0) // failed to open error log
{
return;
}
iFD = _fileno(ptLog);
iLength = _filelength(iFD);
// Check length. If too large rename and create new file.
if (iLength > MAX_LOG_SIZE)
{
fclose(ptLog);
char acBakFile[80];
strcpy_s(acBakFile, 80, acFile);
strcat_s(acBakFile, ".bak"); // new name of old log file
remove(acBakFile); // remove previous bak file if it exists
rename(acFile, acBakFile);
fopen_s(&ptLog, acFile, "a+"); // Create new log file
}
}// end if (hMod)
}
ptLog is declared as FILE *
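For context, here is a minimal sketch of what Log.h presumably declares, reconstructed from the code above; the MAX_LOG_SIZE value is a placeholder, and the real header is not shown:

#pragma once
#include <stdio.h>

#define MAX_LOG_SIZE 1000000 // placeholder; actual limit not shown

class CLog
{
public:
    CLog();
    void Init();
    void vLog(char *pszMsg); // signature inferred from the call site below
private:
    FILE *ptLog; // the file ptr
};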
This class is invoked with the following code:
#include <iostream>
#include "..\Log\Log.h"

int main()
{
    CLog Logger;
    Logger.Init();
    Logger.vLog((char *) "Hello \n");
}
This code is also compiled as debug. If I set a breakpoint on "Logger.Init()", the debugger will hit the breakpoint. If I select 'Step Into', it will not enter the code in the Init() method. The code does execute, since I can see the text on the console. If I put breakpoints anywhere in the Init() method, they do not break.
I did the following:

1. Removed log.lib from the input to the Linker. Obviously, the link failed due to unresolved externals.
2. Put back log.lib and rebuilt.
3. Turned off the option "Require source files that exactly match the original version". Debug and breakpoints worked.
4. Enabled the option again and retried debug; the breakpoints still worked.
5. Did a full rebuild; breakpoints still worked.

I don't really understand it, because I had performed numerous cleans and rebuilds previously.
I did find another issue: I had the '/clr' option on (Common Language Runtime support) for the .lib, while the module linked to it did not have Common Language Runtime support on. In this case the breakpoints were ignored. When I turned off '/clr', the breakpoints functioned properly.
Here is my code.
char BPP[5];
int result, err;
result = GetPrivateProfileStringA("abc", "cba", NULL, BPP, 5, "D:\\aefeaf.ini"); // result = 0
result = _get_errno(&err); // result = 0, err = 0
result = GetLastError(); // result = 0
And description from MSDN: In the event the initialization file specified by lpFileName is not found, or contains invalid values, this function will set errorno with a value of '0x2' (File Not Found). To retrieve extended error information, call GetLastError.
The last parameter is a random path; the file does not exist. But GetLastError() still returns 0. Could someone explain to me why it didn't return 2?
EDIT: As #JochenKalmbach suggested, I made sure my project is not using C++/CLI. And since #claptrap said that errorno is a typo (it should be errno), I added _get_errno to my code above. But still, all the error codes returned are 0. Any help is much appreciated.
Hopefully you are not using C++/CLI... this will mess up the value of "GetLastError", because the code internally uses "IJW" (it just works) interop and does a bunch of Win32 operations.
For native applications, this works as expected:
#include <stdio.h>
#include <tchar.h>
#include <Windows.h>
#include <crtdbg.h>

int _tmain(int argc, _TCHAR* argv[])
{
    char szStr[5];
    int result = GetPrivateProfileStringA("abc", "cba", NULL, szStr, 5, "D:\\aefeaf.ini");
    _ASSERTE(result == 0);
    result = GetLastError();
    _ASSERTE(result == 2);
    return 0;
}
If you are using C++/CLI, then you should surround the method with:

#pragma managed(push, off)
// Place the method here
#pragma managed(pop)
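As a sketch of that pattern applied to the question's call, assuming the surrounding project is compiled with /clr (ReadIniValue is a hypothetical wrapper name):

#include <Windows.h>
#include <stdio.h>

#pragma managed(push, off)
// Native wrapper: no managed/native transition can clobber the
// thread's last-error value between the two Win32 calls.
DWORD ReadIniValue(char *buf, DWORD len)
{
    DWORD n = GetPrivateProfileStringA("abc", "cba", NULL, buf, len,
                                       "D:\\aefeaf.ini");
    if (n == 0)
        return GetLastError(); // expect 2 (ERROR_FILE_NOT_FOUND) when the file is missing
    return 0;
}
#pragma managed(pop)

int main()
{
    char buf[5];
    printf("err = %lu\n", ReadIniValue(buf, sizeof(buf)));
    return 0;
}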
I'm trying to use Octave with Visual C++.
I have downloaded octave-3.6.1-vs2010-setup-1.exe. I created a new project, added the Octave include folder to the include path, added octinterp.lib and octave.lib to the lib path, and set the Octave bin folder as the running directory.
The program compiles and runs fine, except that the feval function causes the exception:
Microsoft C++ exception: octave_execution_exception at memory location 0x0012faef
and on the Octave side:
Invalid resizing operation or ambiguous assignment to an out-of-bounds array element.
What am I doing wrong?
Code for a standalone program:
#include <octave/octave.h>
#include <octave/oct.h>
#include <octave/parse.h>

int main(int argc, char **argv)
{
    if (octave_main(argc, argv, true))
    {
        ColumnVector NumRands(2);
        NumRands(0) = 10;
        NumRands(1) = 1;

        octave_value_list f_arg, f_ret;
        f_arg(0) = octave_value(NumRands);
        f_ret = feval("rand", f_arg, 1);
        Matrix unis(f_ret(0).matrix_value());
    }
    else
    {
        error("Octave interpreter initialization failed");
    }
    return 0;
}
Thanks in advance.
I tried it myself, and the problem seems to originate from the feval line.
Now I don't have an explanation as to why, but the problem was solved by simply switching to the "Release" configuration instead of the "Debug" configuration.
I am using the Octave3.6.1_vs2010 build, with VS2010 on WinXP.
Here is the code I tested:
#include <iostream>
#include <octave/oct.h>
#include <octave/octave.h>
#include <octave/parse.h>

int main(int argc, char **argv)
{
    // Init Octave interpreter
    if (!octave_main(argc, argv, true)) {
        error("Octave interpreter initialization failed");
    }

    // x = rand(10,1)
    ColumnVector sz(2);
    sz(0) = 10; sz(1) = 1;
    octave_value_list in = octave_value(sz);
    octave_value_list out = feval("rand", in, 1);

    // print random numbers
    if (!error_state && out.length() > 0) {
        Matrix x( out(0).matrix_value() );
        std::cout << "x = \n" << x << std::endl;
    }
    return 0;
}
with an output:
x =
0.165897
0.0239711
0.957456
0.830028
0.859441
0.513797
0.870601
0.0643697
0.0605021
0.153486
I'd guess that it has actually stopped pointing at the next line, and the error really lies on this line:
f_arg(0) = octave_value(NumRands);
You seem to be attempting to get a value (which value?) from a vector and then assigning it to element 0 of a vector that has not been defined as a vector.
I don't really know, though; I've never tried writing Octave code like that. I'm just trying to work it out by translating the code to standard MATLAB/Octave code, and that line seems really odd to me.
I'm working on a C++ project using Visual Studio 2010 on Windows. I'm linking dynamically against x264 which I built myself as a shared library using MinGW following the guide at
http://www.ayobamiadewole.com/Blog/Others/x264compilation.aspx
The strange thing is that my x264 code works perfectly sometimes. Then, when I change some line of code (or even change the comments in the file!) and recompile, everything crashes on the line
encoder_ = x264_encoder_open(&param);
With the message
Access violation reading location 0x00000000
I'm not doing anything funky at all, so it's probably not my code that is wrong; I guess something is going wrong with the linking, or maybe with how I compiled x264.
The full initialization code:
x264_param_t param = { 0 };

if (x264_param_default_preset(&param, "ultrafast", "zerolatency") < 0) {
    throw KStreamerException("x264_param_default_preset failed");
}

param.i_threads = 1;
param.i_width   = 640;
param.i_height  = 480;
param.i_fps_num = 10;
param.i_fps_den = 1;

encoder_ = x264_encoder_open(&param); // <-----
if (encoder_ == 0) {
    throw KStreamerException("x264_encoder_open failed");
}

x264_picture_alloc(&pic_, X264_CSP_I420, 640, 480);
Edit: It turns out that it always works in Release mode, and when using superfast instead of ultrafast it also works in Debug mode 100% of the time. Could it be that ultrafast mode is doing some crazy optimizations that the debugger doesn't like?
I've met this problem too, with libx264-120.
libx264-120 was built on MinGW with the configure options below.
$ ./configure --disable-cli --enable-shared --extra-ldflags=-Wl,--output-def=libx264-120.def --enable-debug --enable-win32thread
platform: X86
system: WINDOWS
cli: no
libx264: internal
shared: yes
static: no
asm: yes
interlaced: yes
avs: yes
lavf: no
ffms: no
gpac: no
gpl: yes
thread: win32
filters: crop select_every
debug: yes
gprof: no
strip: no
PIC: no
visualize: no
bit depth: 8
chroma format: all
$ make -j8
lib /def:libx264-120.def /machine:x86
#include "stdafx.h"
#include <iostream>
#include <cassert>
using namespace std;
#include <stdint.h>
extern "C"{
#include <x264.h>
}
int _tmain(int argc, _TCHAR* argv[])
{
int width(640);
int height(480);
int err(-1);
x264_param_t x264_param = {0};
//x264_param_default(&x264_param);
err =
x264_param_default_preset(&x264_param, "veryfast", "zerolatency");
assert(0==err);
x264_param.i_threads = 8;
x264_param.i_width = width;
x264_param.i_height = height;
x264_param.i_fps_num = 60;//fps;
x264_param.i_fps_den = 1;
// Intra refres:
x264_param.i_keyint_max = 60;//fps;
x264_param.b_intra_refresh = 1;
//Rate control:
x264_param.rc.i_rc_method = X264_RC_CRF;
x264_param.rc.f_rf_constant = 25;
x264_param.rc.f_rf_constant_max = 35;
//For streaming:
x264_param.b_repeat_headers = 1;
x264_param.b_annexb = 1;
err = x264_param_apply_profile(&x264_param, "baseline");
assert(0==err);
x264_t *x264_encoder = x264_encoder_open(&x264_param);
x264_encoder = x264_encoder;
x264_encoder_close( x264_encoder );
getchar();
return 0;
}
This program succeeds sometimes, but often fails on x264_encoder_open with the access violation.
There is no information about this on Google, and how to initialize x264_param_t and how to use x264_encoder_open are unclear.
It seems that the behavior is caused by x264's setting values, but I can't figure them out without reading some open-source programs that use libx264.
Also, this access violation does not seem to occur on the first execution, nor when compiling with MinGW's gcc (e.g. gcc -o test test.c -lx264; ./test).
Because of this behavior, I think libx264 does some strange handling of resources in the DLL version of libx264 built with MinGW's gcc.
I had the same problem. The only way I was able to fix it was to build the x264 DLL without the asm option (i.e. specify --disable-asm).
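For reference, this is what that looks like applied to the configure line quoted earlier in this thread (same options, with --disable-asm added; adjust to your own build):

$ ./configure --disable-cli --enable-shared --disable-asm \
    --extra-ldflags=-Wl,--output-def=libx264-120.def --enable-debug --enable-win32thread
$ make -j8
$ lib /def:libx264-120.def /machine:x86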