The program does not rotate its log file when I use the "rotation" property with "00:00" as the value. I ran the program one day and checked the next day, but no rotation had taken place; I have also tried waiting several hours without success. On the other hand, rotation works correctly for me if I use "daily", but of course I am interested in the rotation being carried out every day at 00:00. I would appreciate your help; any contribution will be highly valued.
Using poco-1.11.1
OS: Linux Mint 20.2 Cinnamon
Cinnamon version: 5.0.7
Linux kernel: 5.4.0-generic
Processor: AMD Ryzen 7 5700U
The code I have developed so far is:
// Configure the log file channel
pCanalFich1->setProperty("path", "log.log");
pCanalFich1->setProperty("rotation", "00:00"); // CHECK THIS LINE PLEASE.
pCanalFich1->setProperty("archive", "timestamp");
pCanalFich1->setProperty("times", "utc");
pCanalFich1->setProperty("purgeCount", "5");
I assume you are using a FileChannel
See Slide 35 of the Logging slides here to see what you can set the rotation to.
I think you want:
pCanalFich1->setProperty("rotation", "daily");
First, some context: VR with an HP Reverb G2, the WMR runtime, and DX12.
We're seeing some unexplained behaviour across developer machines when working with OpenXR. It looks as though the OpenXR runtime changes the way it presents depending on the machine's setting for preferred GPU.
More specifically, we noticed that depending on the machine's setting for preferred GPU, a different method is used when xrEndFrame is called. This is a big deal, as the different method results in a blank screen being drawn into our current renderTarget!
The difference is that when the preferred device is an Nvidia GPU, xrEndFrame looks like this in PIX (in a graphics queue that is separate to our main render):
Index Global ID Name EOP to EOP Duration (ns) Execution Start Time (ns)
2 8063 Signal(pFence:obj#20, Value:62)
3 8064 Wait(pFence:obj#36, Value:31)
5 8065 CopyTextureRegion(pDst:{pResource:obj#4083, Type:D3D12_TEXTURE_COPY_TYPE_SUBRESOURCE_INDEX, SubresourceIndex:0}, DstX:0, DstY:0, DstZ:0, pSrc:{pResource:obj#4084, Type:D3D12_TEXTURE_COPY_TYPE_SUBRESOURCE_INDEX, SubresourceIndex:0}, pSrcBox:{left:0, top:0, front:0, right:2088, bottom:2036, back:1})
6 8066 CopyTextureRegion(pDst:{pResource:obj#4083, Type:D3D12_TEXTURE_COPY_TYPE_SUBRESOURCE_INDEX, SubresourceIndex:1}, DstX:0, DstY:0, DstZ:0, pSrc:{pResource:obj#4085, Type:D3D12_TEXTURE_COPY_TYPE_SUBRESOURCE_INDEX, SubresourceIndex:0}, pSrcBox:{left:0, top:0, front:0, right:2088, bottom:2036, back:1})
8 8067 Signal(pFence:obj#20, Value:63)
9 8068 Signal(pFence:obj#21, Value:31)
and when it isn't (i.e. somehow maybe picking up the Intel onboard GPU?), it looks like this:
Index Global ID Name EOP to EOP Duration (ns) Execution Start Time (ns)
0 8064 Wait(pFence:obj#45, Value:21)
2 8065 ClearRenderTargetView(RenderTargetView:res#4008, ColorRGBA:{Element:0, Element:0, Element:0, Element:0}, NumRects:0, pRects:nullptr)
15 8066 DrawIndexedInstanced(IndexCountPerInstance:4, InstanceCount:2, StartIndexLocation:0, BaseVertexLocation:0, StartInstanceLocation:0)
17 8067 Signal(pFence:obj#22, Value:23)
18 8068 Signal(pFence:obj#23, Value:21)
The latter is clearing the current renderTargetView and drawing a quad over the top that is the dimensions of the headset display.
Yet we've checked the rendering code, and it is definitely not selecting the Intel graphics device. However, the second behaviour goes away if we set 'preferred graphics processor' to the Nvidia GPU in the Nvidia Control Panel.
We can also see that the above behaviour is the result of a call to xrEndFrame, and that our rendering code is otherwise identical.
Any clue as to what part of the runtime might be looking at or influenced by this setting?
Unfortunately (fortuitously?) we found we would need to rework the rendering code to be able to swap runtimes to, say, SteamVR, so right now we can't swap out the runtime.
Obviously we have a workaround, which is to set the preferred device. But understanding how/why this issue is occurring would be great.
So this was finally tracked down to an error on our part.
In our case we were using xrGetD3D12GraphicsRequirementsKHR to get the minimum graphics requirements for OpenXR.
The XrGraphicsRequirementsD3D12KHR structure it fills in has an adapterLuid identifier which we should have been using to select the GPU in the graphics API, but weren't.
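For anyone hitting the same thing, here is a minimal sketch of the fix, assuming instance and systemId were obtained during OpenXR setup; the helper name CreateDeviceForXr is illustrative (not our actual code) and error handling is elided:

#define XR_USE_GRAPHICS_API_D3D12
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <openxr/openxr.h>
#include <openxr/openxr_platform.h>

Microsoft::WRL::ComPtr<ID3D12Device> CreateDeviceForXr(XrInstance instance, XrSystemId systemId)
{
    // xrGetD3D12GraphicsRequirementsKHR is an extension function, so it has to be
    // loaded through xrGetInstanceProcAddr.
    PFN_xrGetD3D12GraphicsRequirementsKHR pfnGetReqs = nullptr;
    xrGetInstanceProcAddr(instance, "xrGetD3D12GraphicsRequirementsKHR",
                          reinterpret_cast<PFN_xrVoidFunction*>(&pfnGetReqs));

    XrGraphicsRequirementsD3D12KHR reqs{XR_TYPE_GRAPHICS_REQUIREMENTS_D3D12_KHR};
    pfnGetReqs(instance, systemId, &reqs);

    // Walk the DXGI adapters and pick the one whose LUID matches the runtime's.
    Microsoft::WRL::ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    Microsoft::WRL::ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc{};
        adapter->GetDesc1(&desc);
        if (desc.AdapterLuid.LowPart == reqs.adapterLuid.LowPart &&
            desc.AdapterLuid.HighPart == reqs.adapterLuid.HighPart)
            break; // this is the adapter the runtime expects us to render on
    }

    Microsoft::WRL::ComPtr<ID3D12Device> device;
    D3D12CreateDevice(adapter.Get(), reqs.minFeatureLevel, IID_PPV_ARGS(&device));
    return device;
}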
#include <cstdio>

__global__ void functionA()
{
    printf("functionA");
}

int main()
{
    printf("main1");
    functionA<<<1,1>>>();
    printf("main2");
}
I'm trying to run a simple test with the above. But the program only outputs "main1". The program should output "functionA" and "main2" too.
There seem to be two possible reasons for this:
First of all you need to add
cudaDeviceSynchronize();
after the kernel launch in order to block the host until the device has completed all tasks (this also flushes the device-side printf buffer).
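A minimal corrected version of the program from the question, with newlines added only so each message appears on its own line:

#include <cstdio>

__global__ void functionA()
{
    printf("functionA\n");
}

int main()
{
    printf("main1\n");
    functionA<<<1,1>>>();
    cudaDeviceSynchronize(); // wait for the kernel and flush its printf output
    printf("main2\n");
}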
Furthermore, this might happen if you set the wrong GPU architecture/compute capability XX when compiling the code:
$ nvcc -gencode=arch=compute_XX,code=sm_XX -o my_app my_app.cu
In this case only the host code is run, while the parts for the accelerator are omitted, it seems. You can find an overview of the corresponding number XX for the different hardware generations over here. The K20m you are running corresponds to 35. So it should be
$ nvcc -gencode=arch=compute_35,code=sm_35 -o my_app my_app.cu
in your case.
This might also occur if you have multiple graphics accelerators in your system and the code is executed on the wrong one. Each graphics card/accelerator is assigned a particular device id. Device 0 should automatically be assigned to the most powerful device and is used by default. The first time I compiled the code on my system, which contains a powerful Tesla K80 (architecture 37) and a low-power Quadro P620 (architecture 60), I selected 37 and got the same error as you, while with 60 the code would run. I then used the Querying Device Properties example to list the CUDA-capable devices and their corresponding device ids, only to find out that on my system the Tesla K80 shows up as devices 1 and 2, while the simple Quadro P620 graphics card is device 0. I assume this is the case because the K80 is deprecated in CUDA 11!
You can select the device inside your code with cudaSetDevice or change it when launching the program with
$ CUDA_VISIBLE_DEVICES="1" ./my_app
where 1 has to be replaced by the device id you wish to use. Doing so should make your code run without any problems.
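If you don't have that example at hand, a minimal device query using the standard CUDA runtime calls looks like this (compile it with nvcc as above):

#include <cstdio>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("device %d: %s (compute capability %d.%d)\n",
               i, prop.name, prop.major, prop.minor);
    }
}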
You can also test whether this really is the issue by cloning the GitHub repository of "Learn CUDA Programming", browsing to Chapter01/01_cuda_introduction/01_hello_world/, compiling the makefile with $ make and finally running it with $ ./hello_world. It automatically compiles for multiple architectures/compute capabilities and should therefore run without any issue!
I've developed a multi-platform program (using the FLTK toolkit) and implemented multithreading to do intensive background tasks.
I have followed the FLTK tutorials/examples on multithreading, which involve using pthreads on Mac (i.e. the function pthread_create) and Windows threading on Windows (i.e. _beginthread).
What I have noticed is that these background threads execute much faster on Windows, i.e. 3 to 4 times faster.
Why might this be? Is it the threading libraries I'm using? Surely there shouldn't be such a difference? Or could it be the runtime libraries underneath it all?
Here are my machine details
Mac:
Intel(R) Core(TM) i7-3820QM CPU @ 2.70GHz
16 GB DDR3 1600 MHz
Model MacBookPro9,1
OS: Mac OSX 10.8.5
Windows:
Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz
16 GB DDR3 1600 MHz
Model: Dell Latitude E5530
OS: Windows 7 Service Pack 1
EDIT
Just to do a basic speed comparison, I compiled this on both machines and ran it from the command line:
#include <cmath>
#include <ctime>
#include <iomanip>
#include <iostream>
#include <sstream>

int main(int argc, char **argv)
{
    time_t t = time(NULL);
    tm* tt = localtime(&t);
    std::stringstream s;
    s << std::setfill('0') << std::setw(2) << tt->tm_mday << "/" << std::setw(2) << tt->tm_mon+1 << "/" << std::setw(4) << tt->tm_year+1900 << " " << std::setw(2) << tt->tm_hour << ":" << std::setw(2) << tt->tm_min << ":" << std::setw(2) << tt->tm_sec;
    std::cout << "1: " << s.str() << std::endl;
    double sum = 0;
    for (int i = 0; i < 100000000; i++) {
        double ii = i*0.123456789;
        sum = sum + sin(ii)*cos(ii);
    }
    t = time(NULL);
    tt = localtime(&t);
    s.str("");
    s << std::setfill('0') << std::setw(2) << tt->tm_mday << "/" << std::setw(2) << tt->tm_mon+1 << "/" << std::setw(4) << tt->tm_year+1900 << " " << std::setw(2) << tt->tm_hour << ":" << std::setw(2) << tt->tm_min << ":" << std::setw(2) << tt->tm_sec;
    std::cout << "2: " << s.str() << std::endl;
}
Windows takes less than a second. Mac takes 4-5 seconds. Any ideas?
On Mac I'm compiling with g++; on Windows with Visual Studio 2013.
SECOND EDIT
If I change the line
std::cout<<"2: "<<s.str()<<std::endl;
to
std::cout<<"2: "<<s.str()<<" "<<sum<<std::endl;
Then all of a sudden Windows takes a little bit longer...
This makes me think that the whole thing might come down to compiler optimisation. So the question would be: is g++ (I have version 4.2) worse at optimisation, or do I need to provide additional flags?
THIRD(!) AND FINAL EDIT
I can report that I achieve comparable performance by ensuring the g++ optimisation flag -O was provided at compile time (see the example invocation after the exchange below). One of those annoying things that happens so often:
A: I'm tearing my hair out over problem x.
B: Are you sure you're not doing y?
A: That works! Why is this information not plastered all over the place and in every tutorial on problem x on the web?
B: Did you read the manual?
A: No. If I completely read the manual for every single bit of code/program I used, I would never actually get round to doing anything...
Meh.
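For the record, the compile invocation ended up being something along these lines (file names are placeholders; -O2 is one common choice of optimisation level):
$ g++ -O2 main.cpp -o main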
I recently started working with OpenCV with the intent of stitching large numbers of images together to create massive panoramas. To begin my experimentation, I looked into the sample programs that come with the OpenCV files to get an idea of how to use the OpenCV libraries. Since I was interested in image stitching, I went straight for "stitching_detailed.cpp". The code can be found at:
https://code.ros.org/trac/opencv/browser/trunk/opencv/samples/cpp/stitching_detailed.cpp?rev=6856
Now, this program does most of what I need it to do, but I ran into something interesting. I found that for 9 out of the 15 optional projection warpers, I receive the following error when I try to run the program:
Insufficient memory (Failed to allocate XXXXXXXXXX bytes) in unknown function,
file C:\slave\winInstallerMegaPack\src\opencv\modules\core\src\alloc.cpp,
line 52
where the "X's" mark integer that change between the different types of projection (as though different methods require different amounts of space). The full source code for "alloc.cpp" can be found at the following website:
https://code.ros.org/trac/opencv/browser/trunk/opencv/modules/core/src/alloc.cpp?rev=3060
However, the line of code that emits this error in alloc.cpp is:
static void* OutOfMemoryError(size_t size)
{
    CV_Error_(CV_StsNoMem, ("Failed to allocate %lu bytes", (unsigned long)size)); // <-- HERE
return 0;
}
So, I am simply lost as to the possible reasons that this error may be occurring. I realize that this error would normally occur if the system were out of memory, but when running this program with my test images I never use more than ~3.5GB of RAM, according to my Task Manager.
Also, since the program was written as a sample of the OpenCV stitching capabilities by the OpenCV developers, I find it hard to believe that there is a drastic memory error present within the source code.
Finally, the program works fine if I use some of the warping methods:
- spherical
- fisheye
- transverseMercator
- compressedPlanePortraitA2B1
- paniniPortraitA2B1
- paniniPortraitA1.5B1
but when I ask the program to use any of the others (through the command-line flag --warp [PROJECTION_NAME]):
- plane
- cylindrical
- stereographic
- compressedPlaneA2B1
- mercator
- compressedPlaneA1.5B1
- compressedPlanePortraitA1.5B1
- paniniA2B1
- paniniA1.5B1
I get the error mentioned above. I get pretty good results from the transverseMercator projection warper, but I would like to test the stereographic one in particular. Can anyone help me figure this out?
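For reference, a failing run is launched along these lines (the image names stand in for my test set):
$ ./stitching_detailed img1.jpg img2.jpg --warp stereographic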
The pictures that I am trying to process are 1360 x 1024 in resolution and my computer has the following stats:
Model: HP Z800 Workstation
Operating System: Windows 7 Enterprise 64-bit
Processor: Intel Xeon 2.40GHz (12 cores)
Memory: 14GB RAM
Hard Drive: 1TB Hitachi
Video Card: ATI FirePro V4800
Any help would be greatly appreciated, thanks!
When I ran OpenCV's traincascade application, I got just the same error as you:
Insufficient memory (Failed to allocate XXXXXXXXXX bytes) in unknown function,
file C:\slave\winInstallerMegaPack\src\opencv\modules\core\src\alloc.cpp,
line 52
At the time, only about 70% of my RAM (6GB) was occupied. When running traincascade step by step, I found that the error would be thrown when it used more than about 1.5GB of RAM.
Then I found that there are two arguments which control how much memory may be used:
-precalcValBufSize
-precalcIdxBufSize
So I tried setting these two to 128, and it ran. I hope my experience can help you.
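For illustration, such an invocation might look like this (the -data, -vec and -bg arguments are placeholders for your own training setup; the two buffer sizes are in MB):
$ opencv_traincascade -data classifier -vec samples.vec -bg negatives.txt -precalcValBufSize 128 -precalcIdxBufSize 128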
I think this problem is not about a memory leak; it may just relate to how much memory the OS allows an application to occupy. I hope someone can check my guess.
I've recently had a similar issue with OpenCV image stitching. I used the create method to create a stitcher instance and provided 5 images in vertical order to the stitch method, but I received an insufficient memory error.
The panorama was successfully created after setting:
setWaveCorrection(false)
This solution will not be applicable if you need wave correction.
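A minimal sketch of that setup, assuming OpenCV 3.x/4.x (file names are placeholders, and the exact create signature varies slightly between versions):

#include <vector>
#include <opencv2/opencv.hpp>

int main()
{
    // Load the five input images in vertical order.
    std::vector<cv::Mat> images;
    for (int i = 1; i <= 5; ++i)
        images.push_back(cv::imread("img" + std::to_string(i) + ".jpg"));

    // Create a stitcher and disable wave correction before stitching.
    cv::Ptr<cv::Stitcher> stitcher = cv::Stitcher::create(cv::Stitcher::PANORAMA);
    stitcher->setWaveCorrection(false);

    cv::Mat pano;
    if (stitcher->stitch(images, pano) == cv::Stitcher::OK)
        cv::imwrite("pano.jpg", pano);
    return 0;
}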
This may be related to the stitching sequence. I split a big picture into 3*3 tiles; when I stitched them row by row there was no problem, but when I stitched them column by column I got the same problem as you.
I hope the following explanation makes sense, because it's a weird problem we're facing and it's hard to describe.
We have a Maven project which gets built in Hudson and contains some unit tests where dates are used and asserted. The Hudson server runs on Solaris. Now, occasionally (about 30% of the time) the unit tests using dates fail because 3.5 hours are deducted from the time specified in the unit test, so the asserts start failing. The other 70% of the time everything works fine, although nothing at all changed in the code, and we run the Hudson job several times an hour.
I added the following code to a unit test to check the time:
import java.util.Calendar;

import org.joda.time.DateMidnight;
import org.testng.annotations.Test; // TestNG, per the "Running TestSuite" output below

@Test
public void testDate() {
    System.out.println("new DateMidnight(2011, 1, 5).toDate();");
    System.out.println(new DateMidnight(2011, 1, 5).toDate());
    System.out.println(new DateMidnight(2011, 1, 5).toDate().getTime());
    Calendar cal = Calendar.getInstance();
    cal.set(Calendar.YEAR, 2011);
    cal.set(Calendar.MONTH, 0);
    cal.set(Calendar.DAY_OF_MONTH, 5);
    cal.set(Calendar.HOUR, 0);
    cal.set(Calendar.MINUTE, 0);
    cal.set(Calendar.SECOND, 0);
    cal.set(Calendar.MILLISECOND, 0);
    System.out.println("cal.getTime();");
    System.out.println(cal.getTime());
    System.out.println(cal.getTime().getTime());
}
So basically it should print the same thing whether using Joda-Time or the plain old Calendar. This is the case in 70% of the runs; for the other 30% I get the following printouts:
Running TestSuite
new DateMidnight(2011, 1, 5).toDate();
Tue Jan 04 21:30:00 MET 2011
1294173000000
cal.getTime();
Wed Jan 05 12:00:00 MET 2011
1294225200000
So, Calendar keeps the correct date and time but Joda-Time deducts 3.5 hours.
Local Maven test runs never appear to pose this problem, and we can't figure out what could be causing it. In particular, we can't think of a single reason why the tests sometimes pass and sometimes fail without any change to the code or to any Hudson or server setting.
Also, we run the Maven install with Cobertura, which means that the unit tests are run twice. It also happens that they pass the first time and fail the second time, or the other way around, or that they fail both times.
Thanks for any solutions or tips to track down the cause,
Stijn
You may also be running into some version of SUREFIRE-533, which should be solvable by setting the TZ environment variable in the environment that is running Hudson. Please report back on the issue if this helps.
Perhaps Hudson is running on 3 servers, and 1 has a different version of the time zone data in the JDK from the other 2? (check the JVM version in detail). Remember that Joda-Time also has its own version of the time-zone data, and the two may be different.
The problem seems to be fixed now.
We've upgraded our Joda-Time version from 5.14.2 to 5.14.6.
Since then I have run the build automatically every half hour, and after about 100 runs we have not encountered this problem again.