I am trying to figure out how to use the batch mode offered in the CUFFT library.
I basically have an image that is 5300 pixels wide and 3500 tall. Currently this means I am running 3500 1D FFTs, one per row of 5300 elements, using FFTW.
Is this a good candidate problem for the CUFFT library's batch mode? How does the data have to be laid out for it?
Thanks
Yes, you can use batch mode.
To use it, each batch of 5300 elements must be stored contiguously.
That means the distance (stride) between the starts of adjacent batches is 5300.
You can do it like this:
..........
cufftComplex *host;
cufftComplex *device;
cufftHandle plan;
cudaMallocHost((void **)&host, sizeof(cufftComplex) * 5300 * 3500);
cudaMalloc((void **)&device, sizeof(cufftComplex) * 5300 * 3500);
//here add the elements, like this:
//host[0..5299] is the first batch, host[5300..10599] the second batch, and so on up to the 3500th batch.
cudaMemcpy(device, host, sizeof(cufftComplex) * 5300 * 3500, cudaMemcpyHostToDevice);
cufftPlan1d(&plan, 5300, CUFFT_C2C, 3500);
cufftExecC2C(plan, device, device, CUFFT_FORWARD);
......
For more details see the CUFFT Manual.
Yes, this is a good problem for batch mode.
You should go the following way (a minimal sketch follows the list):
create an array of size sizeof(cufftComplex)*5300*3500 on the GPU (here I assume that you have complex input data)
copy your data to the GPU
create a plan with cufftPlan1d()
execute the plan, for example with cufftExecC2C()
For more information, have a look at the CUFFT Manual.
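Putting those steps together, a minimal sketch might look like the following (assuming single-precision complex input, with error checking omitted; the host-side fill is left as a comment):

#include <cufft.h>
#include <cuda_runtime.h>

int main()
{
    const int nx = 5300;    // FFT length (one image row)
    const int batch = 3500; // number of rows transformed in one call
    const size_t bytes = sizeof(cufftComplex) * nx * batch;

    cufftComplex *d_data;
    cudaMalloc((void **)&d_data, bytes);

    // ... fill a host buffer with 3500 contiguous rows of 5300 complex
    //     samples and copy it here with cudaMemcpy(..., cudaMemcpyHostToDevice) ...

    cufftHandle plan;
    cufftPlan1d(&plan, nx, CUFFT_C2C, batch); // one plan covers all 3500 rows
    cufftExecC2C(plan, d_data, d_data, CUFFT_FORWARD);

    cufftDestroy(plan);
    cudaFree(d_data);
    return 0;
}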
I am using FreeRTOS v10.1.0, and in addition I have downloaded FreeRTOS+FAT from the Labs area (160919 release).
I am using an Altera Cyclone V evaluation board and have successfully run FreeRTOS projects on the board, using the Demo project and the available port for my board as the basis for my own applications.
I have also successfully mounted a partition on my SD card, read files from it, and written files to it.
My problems begin when I try to read a file bigger than 2K. I am using the following ff_fread call to read from a file I have previously opened and know to be 5777 bytes long:
ff_fread( &byteBuffer[0],1,5777, pxSourceFile );
What I find is that the byte buffer is repeatedly populated with the same 2048 bytes, up to the maximum of 5777 bytes. So byteBuffer[0] to byteBuffer[2047] are what I expect, but then this data is repeated.
I have also tried reading the data in 512-byte and 2048-byte chunks, in case the issue was related to a sector boundary (512-byte sectors) or a cluster boundary (4 sectors per cluster).
My suspicion is that the issue is in FreeRTOS+FAT rather than in the Altera code for interfacing with the SD card. This is because when I put a breakpoint in the following function, I see that the FreeRTOS+FAT API does actually seem to jump back to the first sector after it has successfully read 4 sectors of data. So it would seem that the Altera API is returning the data requested by FreeRTOS+FAT.
static int32_t prvReadSd( uint8_t *pucDestination,
                          uint32_t ulSectorNumber,
                          uint32_t ulSectorCount,
                          FF_Disk_t *pxDisk )
{
    int32_t errorCode = alt_sdmmc_read( pucDestination,
                                        ulSectorNumber * 512,
                                        ulSectorCount * 512 );
    return errorCode;
}
Any insights anyone can offer into the issues I am having will be greatly appreciated.
OK, I have resolved my issue. My apologies for blaming FreeRTOS+FAT; I will explain the issue below in case others hit the same thing.
I had created a 1MB partition on my SD card which I believed to be FAT16. After trying various things I decided to reformat my SD card using the following command in Linux:
sudo mkdosfs -F 16 /dev/sdc4
Linux gave the following warning:
WARNING: Not enough clusters for a 16 bit FAT! The filesystem will be
misinterpreted as having a 12 bit FAT without mount option "fat=16".
This prompted me to enable FAT12 support in the FreeRTOS+FAT config file, and this fixed my issue.
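For anyone else who hits this: if I remember the macro correctly, the change is a one-liner in FreeRTOSFATConfig.h (check the name against your copy of the header, as this is from memory):

/* Build FAT12 support into FreeRTOS+FAT so small partitions
   formatted as FAT12 can be mounted. */
#define ffconfigFAT12_SUPPORT    1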
So I wrote a C++ program that produces very high resolution pictures (fractals).
I use fstream to save all the data in a .ppm file.
Everything works fine, but when I go to a really high resolution (38400x21600) the .ppm file is ~8 gigabytes.
Even with my 16 gigabytes of RAM, however, I am still not able to convert that picture. I downloaded a couple of converters, but they couldn't handle it. Even GIMP crashed when I tried to "export as...".
So, does anyone know a good converter that can handle really large PPM files? In fact, I eventually want to go above 100 gigabytes. I don't care if it's slow, it should just work.
If there is no such converter: is there a better way to write the data than std::ofstream? For example, is there a library that automatically produces a PNG file?
Thanks for your help!
Edit: I also asked myself what might be the best format for saving these large images. I did some research, and JPEG looks quite good (small size, still good quality). But is there maybe a better format? Let me know. Thanks.
A few thoughts...
An 8-bit PPM file of 38400x21600 should take 2.3GB. A 16-bit PPM file of the same dimensions requires twice as much, i.e. 4.6GB, so I am not sure where you got 8GB from.
VIPS is excellent for processing large images. If I take a 38400x21600 PPM file and use the following command in Terminal (i.e. at the command line), I can see it peaks at 58MB of RAM to do the conversion from PPM to JPEG...
vips jpegsave fractal.ppm fractal.jpg --vips-leak
memory: high-water mark 58.13 MB
That takes 31 seconds on a reasonable spec iMac and produces a 480MB file from my (random) data, so you would expect your result to be much smaller, since mine is pretty incompressible.
ImageMagick, on the other hand, peaks at 11.6GB of memory (see the maximum resident set size below) and does the same conversion in 74 seconds:
/usr/bin/time -l convert fractal.ppm fractal.jpg
73.81 real 69.46 user 4.16 sys
11616595968 maximum resident set size
0 average shared memory size
0 average unshared data size
0 average unshared stack size
4051124 page reclaims
4 page faults
0 swaps
0 block input operations
106 block output operations
0 messages sent
0 messages received
0 signals received
9 voluntary context switches
11791 involuntary context switches
Go to the Baby X resource compiler and download the JPEG encoder, savejpeg.c. It takes an RGB buffer which has to be flat in memory. Hack into it and replace that with a version that accepts a stream of 16x16 blocks. Then write your own PPM loader that loads a 16-pixel-high strip at a time.
Now the system will scale up to huge images which don't fit in memory. How you're going to display them I don't know, but the JPEG will be to specification.
https://github.com/MalcolmMcLean/babyxrc
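To sketch the loader side of that idea (an illustrative fragment, not Baby X code; it assumes a binary P6 file with a simple header and no comment lines):

#include <cstdio>
#include <vector>

int main()
{
    // Stream the image in 16-row strips so it never has to fit in memory.
    std::FILE *f = std::fopen("fractal.ppm", "rb");
    if (!f) return 1;

    int width = 0, height = 0, maxval = 0;
    if (std::fscanf(f, "P6 %d %d %d", &width, &height, &maxval) != 3)
        return 1;
    std::fgetc(f); // consume the single whitespace byte after the header

    const int stripRows = 16;
    std::vector<unsigned char> strip((size_t)width * stripRows * 3);

    for (int y = 0; y < height; y += stripRows) {
        int rows = (height - y < stripRows) ? (height - y) : stripRows;
        size_t n = (size_t)width * rows * 3;
        if (std::fread(strip.data(), 1, n, f) != n)
            break;
        // ... hand this strip to the block-based JPEG encoder ...
    }
    std::fclose(f);
    return 0;
}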
I'd suggest that a more efficient and faster solution would be to simply get more RAM - 128GB is not prohibitively expensive these days (or add swap space).
I'm working on software for processing audio in real time in C++ with Qt, and I need to keep the processing requirements to a minimum.
I use a 40 ms temporary buffer: with the device sampling at Fs = 8000 Hz, every 320 samples collected trigger a function called DataProcessing().
The idea is to have a global buffer that stores the last 10 s recorded, i.e. 80000 samples.
On each iteration this buffer drops the oldest 320 samples and appends the 320 new ones at the end. The buffer is thus kept up to date and the user can watch a real-time graphical representation of the recorded signal.
At first I thought of using QVector (the Qt equivalent of std::vector) for this, which reduces the process to a few lines of code:
int NUM_POINTS = 320;
DatosTemporales.erase(DatosTemporales.begin(), DatosTemporales.begin() + NUM_POINTS);
DatosTemporales += DatosNuevos; // DatosNuevos has size NUM_POINTS
The problem is that in each iteration the 80000-sample vector has to erase elements and shift the rest, which takes processing time. As an alternative I opted for a raw double* array, shifting the samples in a loop on each iteration:
for (int i = 0; i < 80000; i++) {
    if (i < 80000 - NUM_POINTS) {
        aux = DatosTemporales[i];
        DatosTemporales[i + NUM_POINTS] = aux;
    } else {
        DatosTemporales[i] = DatosNuevos[i - NUM_POINTS];
    }
}
This fails. I think the best way is to use dynamic memory, implementing the process with pointers. Could anyone give me some idea of how to implement it?
It sounds like what you are looking for is a circular buffer.
https://www.google.com/search?q=qcircularbuffer
https://qt.gitorious.org/qt/qtbase/merge_requests/60
And it looks like you only need the header file and you should be good to go.
A similar tool that already ships with Qt can be found here:
http://doc.qt.io/qt-5/qcontiguouscache.html#details
The advantage of using a system like the ones presented is that it doesn't need dynamic reallocation; it just moves the head and tail pointers. A sketch of the idea follows.
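To illustrate the idea itself (a hand-rolled sketch, not the QContiguousCache API), a fixed-capacity circular buffer for this use case could look roughly like this:

#include <cstddef>
#include <vector>

// Fixed-capacity circular buffer: writing 320 new samples just advances
// the head index, overwriting the oldest data in place.
class CircularBuffer {
public:
    explicit CircularBuffer(std::size_t capacity)
        : data_(capacity, 0.0), head_(0) {}

    // Append a block of samples, overwriting the oldest ones.
    void write(const double *samples, std::size_t count) {
        for (std::size_t i = 0; i < count; ++i) {
            data_[head_] = samples[i];
            head_ = (head_ + 1) % data_.size();
        }
    }

    // Sample i, counting from the oldest sample still stored.
    double at(std::size_t i) const {
        return data_[(head_ + i) % data_.size()];
    }

    std::size_t size() const { return data_.size(); }

private:
    std::vector<double> data_;
    std::size_t head_; // index of the oldest sample / next write position
};

With a capacity of 80000, each block of 320 new samples is stored with write(), and no elements are ever moved or reallocated.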
Hope that helps.
Hoping someone more experienced in OpenCL may be able to help me here! I'm doing a project (to help me learn a bit more crypto and to try my hand at GPGPU programming) where I'm trying to implement my own SHA-1 algorithm.
Ultimately my question is about maximizing my throughput rate. At present I'm seeing something like 56.1 MH/sec, which compares very badly to open-source programs I've looked at, such as John the Ripper and OCLHashcat, which achieve 1,000 and 1,500 MH/sec respectively (heck, I'd be well chuffed with a third of that!).
So, here's what I'm doing.
I've written a SHA-1 implementation in an OpenCL kernel and a C++ host application to load data onto the GPU (using the CL 1.2 C++ wrapper). I'm generating blocks of candidate data to hash on the CPU, in a threaded fashion, and loading this data into global GPU memory using the C++ wrapper's enqueueWriteBuffer (using uchars to represent the bytes to hash):
errorCode = dispatchQueue->enqueueWriteBuffer(
    inputBuffer,
    CL_FALSE, // non-blocking write (previously CL_TRUE)
    0,
    sizeof(cl_uchar) * inputBufferSize,
    passwordBuffer,
    NULL,
    &dispatchDelegate);
I'm enqueuing work using enqueueNDRangeKernel in the following manner (where the global work size is a user-defined variable; at present I've set it to my GPU's maximum flattened global work size of 16.777 million per run):
errorCode = dispatchQueue->enqueueNDRangeKernel(
    *kernel,
    NullRange,
    NDRange(globalWorkgroupSize, 1),
    NullRange,
    NULL,
    NULL);
This means that (per dispatch) I load 16.777 million items into a 1D array and index into it from my kernel using get_global_id(0).
My Kernel signature:
__kernel void sha1Crack(__global uchar* out, __global uchar* in,
                        __constant int* passLen, __constant int* targetHash,
                        __global bool* collisionFound)
{
    //Kernel instance global GPU memory IO mapping:
    __private int id = get_global_id(0);
    __private int passwordLen = *passLen;
    __private int inputIndexStart = id * passwordLen;
    __private uchar inputMem[64]; /* assumes the password fits in one SHA-1 block */
    //Select password input from the key space:
    #pragma unroll
    for (int i = 0; i < passwordLen; i++)
    {
        inputMem[i] = in[inputIndexStart + i];
    }
    //SHA1 code omitted for brevity...
}
So, given all this: am I doing something fundamentally wrong in the way I'm loading data, i.e. one call to enqueueNDRangeKernel for 16.7 million kernel executions over a 1D input vector? Should I be using a 2D space and subdividing into local workgroup ranges? I tried playing with this but it didn't seem any quicker.
Or is my algorithm itself perhaps the source of the slowness? I've spent a good while optimizing it and manually unrolling all of the loop stages using preprocessor directives.
I've read about memory coalescing on the hardware. Could that be my issue? :S
Any advice at all appreciated! If I've missed anything important please let me know and I'll update.
Thanks in advance! ;)
Update: 16,777,216 is the device's maximum reported global work size (256^3). The global array of boolean values is a single boolean; it's set to false at the start of the kernel enqueue, and a branching statement sets it to true only if a collision is found - will that force a divergence? passwordLen is the length of the current input value, and targetHash is an int[4]-encoded hash to check against.
Your 'maximum flattened global worksize' should be multiplied by passwordLen: it is the number of kernel instances you can run, not the maximum length of an input array. You can most likely send much more data than this to the GPU.
Other potential issues: you are 'generating blocks of candidate data to hash in a threaded fashion on the CPU'; try doing this in advance of the kernel iterations to see whether the delay is in the generation of the data blocks or in the processing of the kernels. Your SHA-1 algorithm is the other obvious potential issue. I'm not sure how much you've really optimised it by 'unrolling' the loops; usually the bigger optimisation issue is 'if' statements (if a single kernel instance within a workgroup tests true, then all of the lockstepped workgroup instances must follow that branch in parallel).
And DarkZeros is correct: you should manually play with the local workgroup size, choosing a size that divides evenly into the global size and matches what the card can run at once. The easiest way to do this is to round the global work size up to the next multiple of the card's capacity, and use an if{} statement in the kernel so it only does work for global_id less than the actual number of instances you want to run.
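To make that concrete, the round-up on the host plus the guard in the kernel might look like this (a sketch; the kernel change is shown as a comment because it lives in OpenCL C, and 'localSize' is illustrative - pick it for your device):

#include <cstddef>

// Host side: round the global work size up to the next multiple
// of the chosen local size.
size_t roundUpGlobalSize(size_t actualItems, size_t localSize)
{
    return ((actualItems + localSize - 1) / localSize) * localSize;
}

// Kernel side (OpenCL C), skipping the padded tail work-items:
//
//   __kernel void sha1Crack(..., int actualItems)
//   {
//       if (get_global_id(0) >= actualItems)
//           return; // padded work-item: nothing to do
//       ...
//   }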
Dave.
I recently started working with OpenCV with the intent of stitching large numbers of images together to create massive panoramas. To begin my experimentation, I looked into the sample programs that come with the OpenCV files to get an idea of how to use the OpenCV libraries. Since I was interested in image stitching, I went straight for "stitching_detailed.cpp". The code can be found at:
https://code.ros.org/trac/opencv/browser/trunk/opencv/samples/cpp/stitching_detailed.cpp?rev=6856
Now, this program does most of what I need it to do, but I ran into something interesting. I found that for 9 out of the 15 optional projection warpers, I receive the following error when I try to run the program:
Insufficient memory (Failed to allocate XXXXXXXXXX bytes) in unknown function,
file C:\slave\winInstallerMegaPack\src\opencv\modules\core\src\alloc.cpp,
line 52
where the "X"s mark integers that change between the different types of projection (as though different methods require different amounts of space). The full source code for alloc.cpp can be found at:
https://code.ros.org/trac/opencv/browser/trunk/opencv/modules/core/src/alloc.cpp?rev=3060
However, the line of code that emits this error in alloc.cpp is:
static void* OutOfMemoryError(size_t size)
{
--HERE--> CV_Error_(CV_StsNoMem, ("Failed to allocate %lu bytes", (unsigned long)size));
return 0;
}
So I am simply lost as to the possible reasons this error may be occurring. I realize that this error would normally occur if the system were out of memory, but when running this program with my test images I never use more than ~3.5GB of RAM, according to my Task Manager.
Also, since the program was written as a sample of OpenCV's stitching capabilities by the OpenCV developers, I find it hard to believe there is a drastic memory error present in the source code.
Finally, the program works fine if I use some of the warping methods:
- spherical
- fisheye
- transverseMercator
- compressedPlanePortraitA2B1
- paniniPortraitA2B1
- paniniPortraitA1.5B1
but when I ask the program to use any of the others (through the command-line tag --warp [PROJECTION_NAME]):
- plane
- cylindrical
- stereographic
- compressedPlaneA2B1
- mercator
- compressedPlaneA1.5B1
- compressedPlanePortraitA1.5B1
- paniniA2B1
- paniniA1.5B1
I get the error mentioned above. I get pretty good results from the transverseMercator projection warper, but I would like to test the stereographic one in particular. Can anyone help me figure this out?
The pictures that I am trying to process are 1360 x 1024 in resolution and my computer has the following stats:
Model: HP Z800 Workstation
Operating System: Windows 7 Enterprise 64-bit
Processor: Intel Xeon 2.40GHz (12 cores)
Memory: 14GB RAM
Hard Drive: 1TB Hitachi
Video Card: ATI FirePro V4800
Any help would be greatly appreciated, thanks!
When I run OpenCV's traincascade app, I get exactly the same error as you:
Insufficient memory (Failed to allocate XXXXXXXXXX bytes) in unknown function,
file C:\slave\winInstallerMegaPack\src\opencv\modules\core\src\alloc.cpp,
line 52
At the time, only about 70% of my RAM (6GB) was occupied. And when running traincascade step by step, I found that the error was thrown when it used more than about 1.5GB of RAM.
Then I found there are two arguments which control how much memory is used:
-precalcValBufSize
-precalcIdxBufSize
So I tried setting these two to 128, and it ran. I hope my experience can help you.
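For example, the invocation could look like this (all paths and counts are placeholders; the two buffer sizes are given in MB):

opencv_traincascade -data classifier -vec samples.vec -bg negatives.txt \
    -numPos 1000 -numNeg 2000 -numStages 20 \
    -precalcValBufSize 128 -precalcIdxBufSize 128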
I think this problem is not about a memory leak; it is just related to how much memory the OS allows an application to occupy. I hope someone can check my guess.
I've recently had a similar issue with OpenCV image stitching. I used the create method to create a stitcher instance and provided 5 images in vertical order to the stitch method, but I received an insufficient-memory error.
The panorama was successfully created after setting:
setWaveCorrection(false)
This solution will not be applicable if you need wave correction.
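In case it helps, the change in context looks roughly like this (using the high-level cv::Stitcher API; loading of the input images is elided):

#include <opencv2/stitching.hpp>
#include <vector>

int main()
{
    std::vector<cv::Mat> images; // ... load the 5 input images here ...

    cv::Ptr<cv::Stitcher> stitcher = cv::Stitcher::create();
    stitcher->setWaveCorrection(false); // disable wave correction to cut memory use

    cv::Mat pano;
    cv::Stitcher::Status status = stitcher->stitch(images, pano);
    return (status == cv::Stitcher::OK) ? 0 : 1;
}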
This may be related to the order of the stitching. I split a big picture into a 3x3 grid; when I stitch the tiles row by row there is no problem, but when I stitch them column by column I get the same problem as you.