Implementation of Daubechies wavelet using PyWavelets

I tried to find the wavelet coefficients of a 128 x 128 image with db2 (the Daubechies-4 wavelet),
and it gives four filtered images of size 65 x 65...
My doubt is that it should actually be 64 x 64... in the case of Haar it is 64 x 64,
and with db1 it is also 64 x 64.
So can you help me understand how it comes out 65 x 65 instead of 64 x 64?
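
For reference, here is a minimal sketch of the setup being described (my own illustration; it assumes PyWavelets is importable as pywt and uses random data in place of the actual image):

import numpy as np
import pywt

img = np.random.rand(128, 128)   # stand-in for the 128 x 128 image

# db1 (Haar, filter length 2): each sub-band comes out 64 x 64
cA, (cH, cV, cD) = pywt.dwt2(img, 'db1')
print(cA.shape)   # (64, 64)

# db2 (Daubechies-4, filter length 4): each sub-band comes out 65 x 65
cA, (cH, cV, cD) = pywt.dwt2(img, 'db2')
print(cA.shape)   # (65, 65)

# With the default signal-extension mode the sub-band length is
# floor((N + filter_length - 1) / 2), i.e. 65 for N=128 with db2 but 64 with db1.
# The 'periodization' mode keeps the size at exactly N/2:
cA, (cH, cV, cD) = pywt.dwt2(img, 'db2', mode='periodization')
print(cA.shape)   # (64, 64)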

Related

Is the maximum size of an item in a type 19 file configurable?

A WRITEBLK command fails when the item reaches 2GB in size (item is truncated to 2147483647 bytes).
Using cat I was able to create an item larger than 2GB in the same directory, but opening it in UV gave a corrupt (negative) value for STATUS<4> (Number of bytes available to read).
uv 11.1.4
64bit Linux on a VM
64BIT_FILES = 1
You can make UniVerse files 32-bit or 64-bit (regardless of the OS), so you can do a FILEINFO call to see whether the file is actually 64-bit (even if the account is 64-bit).
My guess is that there is a file system limitation on the file size. In the Rocket UniVerse documentation (page 927) it says:
If the device runs out of disk
space, WRITEBLK takes the ELSE clause and returns –4 to the STATUS
function.
Generally only 32-bit systems would have a hard limit of 2 GB, but maybe there is some kind of 32-bit process running in your 64-bit virtual machine that is producing the same effect. See here for a few leads: https://unix.stackexchange.com/questions/274380/file-size-limit

Can't read files bigger than 2Kb using FreeRTOS+FAT

I am using FreeRTOS v10.1.0; in addition, I have downloaded FreeRTOS+FAT from the Labs area (160919 release).
I am using an Altera Cyclone V evaluation board and have successfully run FreeRTOS projects on the board, using the Demo project and the available port for my board as the basis for my own applications.
I have also successfully mounted a partition on my SD card, read files from it, and written files to it.
My problems begin when I try to read a file bigger than 2K. I am using the following ff_fread call to read from a file I have previously opened and know to be 5777 bytes long:
ff_fread( &byteBuffer[0],1,5777, pxSourceFile );
What I find is that the byte buffer is repeatedly populated with the same 2048 bytes, up to the maximum of 5777 bytes. So byteBuffer[0] to byteBuffer[2047] are what I expect, but then this data is repeated.
I have also tried to read the data in 512-byte chunks and in 2048-byte chunks, in case the issue was related to a sector boundary (512-byte sector) or a cluster boundary (4 sectors per cluster).
My suspicion is that the issue is in FreeRTOS+FAT as opposed to the Altera code for interfacing with the SD card. This is because when I put a breakpoint in the following function, I see that the FreeRTOS+FAT API does actually seem to jump back to the first sector after it has successfully read 4 sectors of data. So it would seem that the Altera API is returning the data requested by FreeRTOS+FAT.
static int32_t prvReadSd( uint8_t *pucDestination,
                          uint32_t ulSectorNumber,
                          uint32_t ulSectorCount,
                          FF_Disk_t *pxDisk )
{
    /* Read ulSectorCount sectors of 512 bytes each, starting at ulSectorNumber. */
    int32_t errorCode = alt_sdmmc_read( pucDestination,
                                        ulSectorNumber * 512,
                                        ulSectorCount * 512 );
    return errorCode;
}
Any insights anyone can offer into the issues I am having will be greatly appreciated.
OK, I have resolved my issue. My apologies for blaming FreeRTOS+FAT; I will explain the issue below just in case others run into the same thing.
I had created a 1 MB partition on my SD card which I believed to be FAT16. After trying various things, I decided to reformat my SD card using the following command in Linux:
sudo mkdosfs -F 16 /dev/sdc4
Linux gave the following warning:
WARNING: Not enough clusters for a 16 bit FAT! The filesystem will be
misinterpreted as having a 12 bit FAT without mount option "fat=16".
This prompted me to enable FAT12 support in the FreeRTOS+FAT config file, and this fixed my issue.

Convert VERY large ppm files to JPEG/JPG/PNG?

So I wrote a C++ program that produces very high resolution pictures (fractals).
I use fstream to save all the data in a .ppm file.
Everything works fine, but when I go to really high resolutions (38400x21600) the PPM file is ~8 gigabytes.
With my 16 gigabytes of RAM, however, I am still not able to convert that picture. I downloaded a couple of converters, but they couldn't handle it. Even GIMP crashed when I tried to "export as...".
So, does anyone know a good converter that can handle really large PPM files? In fact, I even want to go above 100 gigabytes. I don't care if it's slow, it should just work.
If there is no such converter: is there a better way to write the data than std::ofstream? Like, maybe, is there a library that automatically produces a PNG file?
Thanks for your help!
Edit: I also asked myself what might be the best format for saving these large images. I researched and JPEG looks quite good (small size, still good quality). But maybe there is a better format? Let me know. Thanks.
A few thoughts...
An 8-bit PPM file of 38400x21600 should take 2.3GB. A 16-bit PPM file of the same dimensions requires twice as much, i.e. 4.6GB so I am not sure where you got 8GB from.
VIPS is excellent for processing large images, and if I take a 38400x21600 PPM file, and use the following command in Terminal (i.e. at the command-line), I can see it peaks at 58MB of RAM to do the conversion from PPM to JPEG...
vips jpegsave fractal.ppm fractal.jpg --vips-leak
memory: high-water mark 58.13 MB
That takes 31 seconds on a reasonable spec iMac and produces a 480MB file from my (random) data, so you would expect your result to be much smaller, since mine is pretty incompressible.
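If you would rather drive libvips from code than from the shell, a rough equivalent with the pyvips bindings (an assumption on my part that Python and pyvips are available; the filenames are just the ones from the example above) would be:

import pyvips

# 'sequential' access streams the image in strips instead of loading it all into RAM
image = pyvips.Image.new_from_file('fractal.ppm', access='sequential')
image.write_to_file('fractal.jpg')   # or 'fractal.png' for lossless output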
ImageMagick, on the other hand, takes 1.1GB peak working set of memory and does the same conversion in 74 seconds:
/usr/bin/time -l convert fractal.ppm fractal.jpg
73.81 real 69.46 user 4.16 sys
11616595968 maximum resident set size
0 average shared memory size
0 average unshared data size
0 average unshared stack size
4051124 page reclaims
4 page faults
0 swaps
0 block input operations
106 block output operations
0 messages sent
0 messages received
0 signals received
9 voluntary context switches
11791 involuntary context switches
Go to the Baby X resource compiler and download the JPEG encoder, savejpeg.c. It takes an RGB buffer which has to be flat in memory. Hack into it and replace that with a version that accepts a stream of 16x16 blocks. Then write your own PPM loader that loads a 16-pixel-high strip at a time.
Now the system will scale up to huge images which don't fit in memory. How you're going to display them I don't know. But the JPEG will be to specification.
https://github.com/MalcolmMcLean/babyxrc
I'd suggest that a more efficient and faster solution would be to simply get more RAM - 128GB is not prohibitively expensive these days (or add swap space).

correct coding of ID3 v2.3 frame size field for GEOB tag

I have some confusion regarding how the frame size bytes should be coded/decoded for ID3 v2.3.0. According to the (informal) ID3 v2.3.0 specification, the size of each frame should be coded into 4 bytes, where the most significant bit of each byte is unused. To calculate the size, it would take the formula below:
byte MASK = (byte) 0x7F;
int size = 0;
for (int i = 0; i < 4; i++) {
    size = size * 128 + (b[i] & MASK);
}
But when I used my parser to parse some MP3 files, quite a few files had GEOB (general encapsulated object) frames whose size bytes were coded as if they were a plain big-endian 32-bit integer.
After I fixed these bytes by re-coding them using the proper algorithm, commercial software such as Windows 7 and Winamp was not able to properly display the subsequent frames (in several instances TIT2 came right after GEOB, so the song's title was not displayed although it was in the file).
I also found similar problems for MCDI (music CD identifier) and TALB ('Album/Movie/Show title') frames.
I read through the v2.3 spec and also Googled, but wasn't able to find any information regarding the use of a 32-bit integer as the size for these frames. Yet the common behavior across different commercial software seems to suggest that for such fields a 32-bit integer should be used as the size, instead of 4 bytes masked by 0x7F.
So I am just wondering if anyone here has worked on ID3 v2.3 and could clarify this for me.
Yes. However, I consider the docs to be explicit enough, given the conventions of % (binary) and $ (hexadecimal) which are explained right away:
Header size:
4 * %0xxxxxxx as per v2.2.0 (§3.1.) header
4 * %0xxxxxxx as per v2.3.0 header
4 * %0xxxxxxx as per v2.4.0 (§3.1.) header
Frame size:
$xx xx xx as per v2.2.0 (i.e. §4.1.) frame
$xx xx xx xx as per v2.3.0 frame
4 * %0xxxxxxx as per v2.4.0 (§4.) frame
Summary:
For all 3 versions in ID3v2 the header size is stored in the same way: using 4 bytes, but for each only 7 bits are valid.
Only for ID3v2.2 does the frame size consist of 3 (full) bytes.
Only for ID3v2.3 does the frame size consist of 4 (full) bytes.
Only for ID3v2.4 is the frame size finally stored just like the header's size: 4 bytes, but only 28 bits are valid.
The ID3v2.4.0 changes document (§3) also outlines the frame size change from v2.3.0. The whole issue comes from the MPEG Audio (and AAC) stream, which synchronizes on 11 (or 12) set bits; a decoder might otherwise misinterpret the ID3 metadata as audio data.
I believe I have found the answer. ID3 v2.3, despite being the more commonly supported version (as opposed to v2.4), has a not-so-well-written (and informal) spec. Its header size uses the four 0x7F-masked bytes, but the frame sizes are in fact plain 32-bit integers; this is just never clearly spelled out.
The reason I usually ran into the problem with GEOB is that it doesn't show up until the frame size is larger than 0x7F bytes, and GEOB frames usually are.
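
To make the difference concrete, here is a small Python sketch (my own illustration, not taken from the spec) that decodes a 4-byte frame size both ways; the two methods only agree while the size fits in the lowest 7 bits, which is why the mismatch only shows up once a frame such as GEOB grows beyond 0x7F bytes:

def decode_syncsafe(b):
    # Syncsafe style: 4 bytes, high bit of each byte unused (7 significant bits each)
    size = 0
    for byte in b:
        size = size * 128 + (byte & 0x7F)
    return size

def decode_be32(b):
    # Plain big-endian 32-bit integer, as ID3v2.3 frame sizes are actually written
    return int.from_bytes(b, 'big')

raw = bytes([0x00, 0x00, 0x02, 0x00])   # a 512-byte frame as most taggers store it
print(decode_be32(raw))      # 512
print(decode_syncsafe(raw))  # 256, so parsing drifts off after such a frame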

Getting OpenCV Error: Insufficient memory while running OpenCV Sample Program: "stitching_detailed.cpp"

I recently started working with OpenCV with the intent of stitching large numbers of images together to create massive panoramas. To begin my experimentation, I looked into the sample programs that come with the OpenCV files to get an idea of how to implement the OpenCV libraries. Since I was interested in image stitching, I went straight for "stitching_detailed.cpp". The code can be found at:
https://code.ros.org/trac/opencv/browser/trunk/opencv/samples/cpp/stitching_detailed.cpp?rev=6856
Now, this program does most of what I need it to do, but I ran into something interesting. I found that for 9 out of 15 of the optional projection warpers, I receive the following error when I try to run the program:
Insufficient memory (Failed to allocate XXXXXXXXXX bytes) in unknown function,
file C:\slave\winInstallerMegaPack\src\opencv\modules\core\src\alloc.cpp,
line 52
where the "X's" mark integer that change between the different types of projection (as though different methods require different amounts of space). The full source code for "alloc.cpp" can be found at the following website:
https://code.ros.org/trac/opencv/browser/trunk/opencv/modules/core/src/alloc.cpp?rev=3060
However, the line of code that emits this error in alloc.cpp is:
static void* OutOfMemoryError(size_t size)
{
--HERE--> CV_Error_(CV_StsNoMem, ("Failed to allocate %lu bytes", (unsigned long)size));
return 0;
}
So, I am simply lost as to the possible reasons that this error may be occurring. I realize that this error would normally occur if the system were out of memory, but when running this program with my test images I am never using more than ~3.5 GB of RAM, according to my Task Manager.
Also, since the program was written as a sample of the OpenCV stitching capabilities by OpenCV developers, I find it hard to believe that there is a drastic memory error present in the source code.
Finally, the program works fine if I use some of the warping methods:
- spherical
- fisheye
- transverseMercator
- compressedPlanePortraitA2B1
- paniniPortraitA2B1
- paniniPortraitA1.5B1
but when I ask the program to use any of the others (through the command-line flag --warp [PROJECTION_NAME]):
- plane
- cylindrical
- stereographic
- compressedPlaneA2B1
- mercator
- compressedPlaneA1.5B1
- compressedPlanePortraitA1.5B1
- paniniA2B1
- paniniA1.5B1
I get the error mentioned above. I get pretty good results from the transverseMercator projection warper, but I would like to test the stereographic one in particular. Can anyone help me figure this out?
The pictures that I am trying to process are 1360 x 1024 in resolution and my computer has the following stats:
Model: HP Z800 Workstation
Operating System: Windows 7 Enterprise 64-bit OS
Processor: Intel Xeon 2.40GHz (12 cores)
Memory: 14GB RAM
Hard Drive: 1TB Hitachi
Video Card: ATI FirePro V4800
Any help would be greatly appreciated, thanks!
When I run OpenCV's traincascade application, I get just the same error as you:
Insufficient memory (Failed to allocate XXXXXXXXXX bytes) in unknown function,
file C:\slave\winInstallerMegaPack\src\opencv\modules\core\src\alloc.cpp,
line 52
At the time, only about 70 percent of my RAM (6 GB) was occupied, and when running traincascade step by step, I found that the error would be thrown when it used more than about 1.5 GB of RAM.
Then I found there are two arguments which control how much memory is used:
-precalcValBufSize
-precalcIdxBufSize
So I tried setting these two to 128, and it ran. I hope my experience can help you.
I think this problem is not about a memory leak; it is just related to how much memory the OS allows an application to occupy. I expect someone can check my guess.
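For example, capping both buffers at 128 MB as described above would look something like this (the -data/-vec/-bg paths are placeholders for whatever your training setup uses):

opencv_traincascade -data classifier -vec positives.vec -bg negatives.txt -precalcValBufSize 128 -precalcIdxBufSize 128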
I've recently had a similar issue with OpenCV image stitching. I used the create method to create a stitcher instance and provided 5 images in vertical order to the stitch method, but I received an insufficient memory error.
Panorama was successfully created after setting:
setWaveCorrection(false)
This solution will not be applicable if you need wave correction.
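In case it helps, here is a minimal sketch of that workaround using the high-level Stitcher API, shown in Python and assuming your OpenCV build exposes the setWaveCorrection setter (in C++ the equivalent calls are Stitcher::create and setWaveCorrection(false)); the image paths are placeholders:

import cv2

images = [cv2.imread(p) for p in ('img1.jpg', 'img2.jpg', 'img3.jpg')]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
stitcher.setWaveCorrection(False)   # skip wave correction to reduce memory use
status, pano = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite('panorama.jpg', pano)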
This may be related to the sequence of the stitching. I split a big picture into a 3x3 grid; when I stitch the tiles row by row there is no problem, but when I stitch them column by column I get the same problem as you.