How to write a large number of multipage TIFF files? - python-2.7

Hi everybody, from a beginner in Python. I am trying to convert a huge file of raw video data into multiple multipage TIFF files using the "freeimage.write_multipage()" function of the freeimage package from the Mahotas library (Python 2.7). Unfortunately, this "very easy to use" function does not seem to release memory while the script runs. So my script works fine for small input raw files (less than 1 GB) but crashes with bigger files (a 3 GB input file crashes on Windows XP Pro 32-bit with 3.2 GB of RAM). My goal is to convert input files of up to 1.5 TB.
When running my script, the Windows Task Manager shows RAM usage increasing, output file after output file, until the crash, which releases all the used RAM. An extract of the reported error is: "... RuntimeError : mahotas.freeimage: FreeImage error: Memory allocation failed..."
On Stack Overflow I have seen various suggestions for building multipage TIFF files with scripts driving ImageMagick or IrfanView, but I think that is impractical for my needs (I have thousands of pictures to assemble).
Thank you for any help.
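A minimal sketch of one way to keep memory bounded is to read the raw file in fixed-size chunks and write each chunk to its own multipage TIFF, so that only one chunk of frames is ever held in memory. The frame geometry (8-bit grayscale, 640x480), the chunk size, the input file name and the freeimage.write_multipage(frames, filename) call below are assumptions to be adjusted to the actual data:

# Sketch: convert a raw video file into many multipage TIFFs in fixed-size
# chunks, so only FRAMES_PER_TIFF frames are held in memory at any time.
# Assumed: 8-bit grayscale frames of 640x480; adjust WIDTH/HEIGHT/dtype.
import numpy as np
from mahotas import freeimage

WIDTH, HEIGHT = 640, 480            # assumed frame geometry
FRAME_BYTES = WIDTH * HEIGHT        # assumed 1 byte per pixel (uint8)
FRAMES_PER_TIFF = 500               # pages per output file; tune to fit RAM

def convert(raw_path, out_pattern="out_%05d.tif"):
    pages, out_index = [], 0
    with open(raw_path, "rb") as raw:
        while True:
            buf = raw.read(FRAME_BYTES)
            if len(buf) < FRAME_BYTES:
                break                               # end of file (ignore partial frame)
            frame = np.frombuffer(buf, dtype=np.uint8).reshape(HEIGHT, WIDTH)
            pages.append(frame.copy())              # copy: frombuffer gives a read-only view
            if len(pages) == FRAMES_PER_TIFF:
                freeimage.write_multipage(pages, out_pattern % out_index)
                out_index += 1
                pages = []                          # drop references to the written frames
    if pages:                                       # write any remaining frames
        freeimage.write_multipage(pages, out_pattern % out_index)

if __name__ == "__main__":
    convert("huge_video.raw")                       # hypothetical input file name

If write_multipage itself leaks even with small chunks, a further workaround is to write each chunk from a short-lived worker process (for example with the multiprocessing module), since the operating system reclaims all of a process's memory when the process exits.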

Related

Guide for installing NVIDIA’s nvCOMP and running its accompanying examples

I don’t understand the instructions given here and here.
Could someone offer a step-by-step guide for the installation of nvCOMP, using the following assumptions and step format (or equivalent):
System info:
Ubuntu 20.04
RTX-3060
NVIDIA driver 470.82.01
CUDA 11.4
GCC 9.4.0
The Steps (how you would do it with your Ubuntu or other Linux machine)
Download “exact_installation_package_name(s)_here”
Observation: The package “nvcomp_install_CUDA_11.x.tgz” from NVIDIA has exactly the structure described here. However, this package seems to be different from the “nvcomp” folder obtained from using git clone https://github.com/NVIDIA/nvcomp.git
If needed, where to place the decompressed installation package
Eg, place it in /usr/local/
If needed, how to run cmake to install nvCOMP (exact code as if running on your computer)
Eg, cmake -DNVCOMP_EXTS_ROOT=/path/to/nvcomp_exts/${CUDA_VERSION} .. make -j (code from this site)
However, is CUDA_VERSION a literal string or a placeholder for, say, CUDA_11.4?
Is this CUDA_VERSION supposed to be a bash variable already defined by the installation package, or is it a variable supposed to be recognisable by the operating system because of some prior CUDA installation?
Besides, what exactly is nvcomp_exts or what does it refer to?
If needed, the code for specifying the path(s) in ~/.bashrc
If needed, how to cmake the sample codes, ie, in which directory to run the terminal and what exact code to run
The exact folder+code sequence to build and run “high_level_quickstart_example.cpp”, which comes with the installation package.
Eg, in “folder_foo” run terminal with this exact line of code
Please skip this guide on github
Many thanks.
I will answer my own question.
System info
Here is the system information obtained from the command line:
uname -r: 5.15.0-46-generic
lsb_release -a: Ubuntu 20.04.5 LTS
nvcc --version: Cuda compilation tools, release 10.1, V10.1.243
nvidia-smi:
Two Tesla K80 (2-in-1 card) and one GeForce (Gigabyte RTX 3060 Vision 12G rev. 2.0)
NVIDIA-SMI 470.82.01
Driver Version: 470.82.01
CUDA Version: 11.4
cmake --version: cmake version 3.22.5
make --version: GNU Make 4.2.1
lscpu: Xeon CPU E5-2680 V4 @ 2.40GHz - 56 CPU(s)
Observation
Although there are two GPUs installed in the server, nvCOMP only works with the RTX.
The Steps
Perhaps "installation" is a misnomer. One only needs to properly compile the downloaded nvCOMP files and run the resulting executables.
Step 1: The nvCOMP library
Download the nvCOMP library from https://developer.nvidia.com/nvcomp.
The file I downloaded was named nvcomp_install_CUDA_11.x.tgz. And I left the extracted folder in the Downloads directory and renamed it nvcomp.
Step 2: The nvCOMP test package on GitHub
Download it from https://github.com/NVIDIA/nvcomp. Click the green "Code" icon, then click "Download ZIP".
By default, the downloaded zip file is called nvcomp-main.zip. And I left the extracted folder, named nvcomp-main, in the Downloads directory.
Step 3: The NVIDIA CUB library on GitHub
Download it from https://github.com/nvidia/cub. Click the green "Code" icon, then click "Download ZIP".
By default, the downloaded zip file is called cub-main.zip. And I left the extracted folder, named cub-main, in the Downloads directory.
There is no "installation" of the CUB library other than making the folder path "known", ie available, to the calling program.
Comments: The nvCOMP GitHub site did not seem to explain that the CUB library was needed to run nvCOMP, and I only found that out from an error message during an attempted compilation of the test files in Step 2.
Step 4: "Building CPU and GPU Examples, GPU Benchmarks provided on Github"
The nvCOMP GitHub landing page has a section with the exact name as this Step. The instructions could have been more detailed.
Step 4.1: cmake
In the Downloads directory are the folders nvcomp (the Step 1 nvCOMP library), nvcomp-main (Step 2), and cub-main (Step 3).
Start a terminal and then go inside nvcomp-main, ie, go to /your-path/Downloads/nvcomp-main
Run cmake -DCMAKE_PREFIX_PATH=/your-path/Downloads/nvcomp -DCUB_DIR=/your-path/Downloads/cub-main
This cmake step sets up the build files for the next "make" step.
During cmake, a harmless yellow-colored cmake warning appeared
There was also a harmless printout "-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed" per this thread.
The last few printout lines from cmake variously stated it found Threads, nvcomp, ZLIB (on my system) and it was done with "Configuring" and "Build files have been written".
Step 4.2: make
Run make in the same terminal as above.
This is a screenshot of the make compilation.
Please check the before and after folder tree to see what files have been generated.
Step 5: Running the examples/benchmarks
Let's run the "built-in" example before running the benchmarks with the (now outdated) Fannie Mae single-family loan performance data from NVIDIA's RAPIDS repository.
Check if there are executables in /your-path/Downloads/nvcomp-main/bin. These are the executables created by the cmake and make steps above.
You can try running these executables, which are built around different compression algorithms and functionalities, on your to-be-compressed files. The name of each executable indicates the algorithm used and/or its functionality.
Some of the executables require the files to be of a certain size, eg, the "benchmark_cascaded_chunked" executable requires the target file's size to be a multiple of 4 bytes. I have not tested all of these executables.
Step 5.1: CPU compression examples
Per https://github.com/NVIDIA/nvcomp
Start a terminal (anywhere)
Run time /your-path/Downloads/nvcomp-main/bin/gdeflate_cpu_compression -f /full-path-to-your-target/my-file.txt
Here are the results of running gdeflate_cpu_compression on an updated Fannie Mae loan data file "2002Q1.csv" (11GB)
Similarly, change the name of the executable to run lz4_cpu_compression or lz4_cpu_decompression
Step 5.2: The benchmarks with the Fannie Mae files from NVIDIA Rapids
Apart from following the NVIDIA instructions here, it seems the "benchmark" executables in the above "bin" directory can be run with "any" file. Just use the executable in the same way as in Step 5.1 and adhere to the particular executable specifications.
Below is one example following the NVIDIA instruction.
Long story short, the nvcomp-main (Step 2) test package contains the files to (i) extract a column of homogeneous data from an outdated Fannie Mae loan data file, (ii) save the extraction in binary format, and (iii) run the benchmark executable(s) on the binary extraction.
The Fannie Mae single-family loan performance data files, old or new, all use "|" as the delimiter. In the outdated Rapids version, the first column, indexed as column "0" in the code (zero-based numbering), contains the 12-digit loan IDs for the loans sampled from the (real) Fannie Mae loan portfolio. In the new Fannie Mae data files from the official Fannie Mae site, the loan IDs are in column 2 and the data files have a csv file extension.
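For illustration, here is a rough Python sketch of the kind of extraction text_to_binary.py performs. This is not the actual script, and the exact binary layout the real script writes (particularly for the "string" format) may differ:

# Rough sketch of the preprocessing step: pull one "|"-delimited column out
# of a loan performance file and dump it to a binary file that a benchmark
# executable can then compress. Not the real text_to_binary.py.
import sys
import numpy as np

def extract_column(in_path, col_index, fmt, out_path):
    values = []                                   # the whole column is held in memory
    with open(in_path, "r") as f:
        for line in f:
            values.append(line.rstrip("\r\n").split("|")[col_index])
    if fmt == "long":
        # store the column as raw 8-byte integers
        np.array(values, dtype=np.int64).tofile(out_path)
    else:
        # "string": write the raw text values back to back, newline-separated
        with open(out_path, "wb") as out:
            out.write("\n".join(values).encode("utf-8"))

if __name__ == "__main__":
    # e.g.: python extract_column.py Performance_2000Q1.txt 0 long 2000Q1-col0-long.bin
    extract_column(sys.argv[1], int(sys.argv[2]), sys.argv[3], sys.argv[4])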
Download the "1 Year" Fannie Mae dataset, not the "1GB Splits*" variant, by following the link from here, or by going directly to RAPIDS
Place the downloaded mortgage_2000.tgz anywhere and unzip it with tar -xvzf mortgage_2000.tgz.
There are four txt files in /mortgage_2000/perf. I will use Performance_2000Q1.txt as an example.
Check if python is installed on the system
Check if text_to_binary.py is in /nvcomp-main/benchmarks
Start a terminal (anywhere)
As shown below, use the python script to extract the first column, indexed "0", with format long, from Performance_2000Q1.txt, and put the .bin output file somewhere.
Run time python /your-path/Downloads/nvcomp-main/benchmarks/text_to_binary.py /your-other-path-to/mortgage_2000/perf/Performance_2000Q1.txt 0 long /another-path/2000Q1-col0-long.bin
For comparison of the benchmarks, run time python /your-path/Downloads/nvcomp-main/benchmarks/text_to_binary.py /your-other-path-to/mortgage_2000/perf/Performance_2000Q1.txt 0 string /another-path/2000Q1-col0-string.bin
Run the benchmarking executables with the target bin files as shown at the bottom of the web page of the NVIDIA official guide
Eg, /your-path/Downloads/nvcomp-main/bin/benchmark_hlif lz4 -f /another-path/2000Q1-col0-long.bin
Just make sure the operating system knows where the executable and the target file are.
Step 5.3: The high_level_quickstart_example and low_level_quickstart_example
These two executables are in /nvcomp-main/bin
They are completely self-contained. Just run, eg, high_level_quickstart_example without any input arguments. Please see the corresponding C++ source code in /nvcomp-main/examples and the official nvCOMP guides on GitHub.
Observations after some experiments
This could be another long thread but let's keep it short. Note that NVIDIA used various A-series cards for its benchmarks and I used a GeForce RTX 3060.
Speed
The python script is slow. It took 4m12.456s to extract the loan ID column from an 11.8 GB Fannie Mae data file (with 108 columns) using the "string" format.
In contrast, R with data.table took 25.648 seconds to do the same.
With the outdated "Performance_2000Q1.txt" (0.99 GB) tested above, the python script took 32.898s whereas R took 26.965s to do the same extraction.
Compression ratio
"Bloated" python outputs.
The R-output "string.txt" files are generally a quarter of the size of the corresponding python-output "string.bin" files.
Applying the executables to the R-output files achieved much better compression ratios and throughputs than applying them to the python-output files.
Eg, running benchmark_hlif lz4 -f 2000Q1-col0-string.bin with the python output vs running benchmark_hlif lz4 -f 2000Q1-col0-string.txt with the R output
Uncompressed size: 436,544,592 vs 118,230,827 bytes
Compressed size: 233,026,108 vs 4,154,261 bytes
Compression ratio: 1.87 vs 28.46
Compression throughput (GB/s): 2.42 vs 18.96
Decompression throughput (GB/s): 8.86 vs 91.50
Wall time: 2.805s vs 1.281s
Overall performance: accounting for file size and memory limits
Use of the nvCOMP library is limited by the GPU memory, no more than 12 GB on the RTX 3060 tested. Depending on the compression algorithm, an 8 GB target file can easily trigger a stop with cudaErrorMemoryAllocation: out of memory
In both speed and compression ratio, pigz trumped the tested nvCOMP executables when the target files were the new Fannie Mae data files containing 108 columns of strings and numbers.

Generate PDF with C++ and Latex

Would it be possible to generate a PDF from C++ source code using LaTeX?
I'm currently using HTML, QWebEngine and QPrinter to create PDFs.
But there are some issues, like page jumps. LaTeX would be a good solution to ensure some graphic elements are well rendered.
Working with Windows only. A cross-platform solution is not needed.
Here are the steps I took to set up pythontex on my Windows 10 system.
Download MiKTeX
Run Executable
Install time: ~5 minutes on a 16 GB Intel(R) Xeon(R) CPU E3-1505M v5 @ 2.80GHz, 2801 MHz, 4 Core(s), 8 Logical Processor(s)
MiKTeX base size ~10 MB at **/appdata/local/miktex/*. Note, this may not be where all the files are located. IDK
Test if pdflatex is installed. Open a terminal and type pdflatex
Download and extract pythontex
Read instructions at pythontex.pdf.
Install pythontex using pythontex_install.bat
Add pythontex to path.
Run a pythontex example
\documentclass[11pt]{article}%
\usepackage{pythontex}
\usepackage{nopageno}
\begin{document}
\begin{pyconsole}
x = 987.27
x = x**2
\end{pyconsole}
The variable is $x=\pycon{x}$
\end{document}
In order to compile do
pdflatex my-latex.tex
pythontex my-latex.tex
pdflatex my-latex.tex
May need to install additional packages for it to compile. My ending size in appdata/local grew a lot... 814 MB

Can't read saved TensorFlow model (failed to seek to header entry)

I am trying to read SavedModel with TensorFlow C++ API. The model was saved with TF Python code and my model directory has the following structure:
saved_model.pb
variables
├── variables.data-00000-of-00001
└── variables.index
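For context, a layout like this is what the TF 1.x Python SavedModelBuilder writes. Below is a minimal, illustrative saving sketch (the toy graph and export path are assumptions, not taken from the question), tagged with SERVING so that it matches kSavedModelTagServe on the C++ side:

# Illustrative TF 1.x saving code that yields saved_model.pb + variables/.
# The tiny graph here is a stand-in for the real model.
import tensorflow as tf

export_dir = "/model/1"   # must not already exist
with tf.Session(graph=tf.Graph()) as sess:
    x = tf.placeholder(tf.float32, shape=[None, 1], name="x")
    w = tf.Variable([[2.0]], name="w")
    y = tf.matmul(x, w, name="y")
    sess.run(tf.global_variables_initializer())
    builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
    builder.add_meta_graph_and_variables(sess, [tf.saved_model.tag_constants.SERVING])
    builder.save()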
I managed to read it successfully in Ubuntu with the following line of code:
tensorflow::LoadSavedModel(sessOpt, runOpt, modelDir, {tensorflow::kSavedModelTagServe}, &model);
However when I build the same code for Windows it fails to read the model. This is what TensorFlow outputs:
2017-07-25 16:16:15.112591: I C:\all\lib\serving\tensorflow\tensorflow\cc\saved_model\loader.cc:155]
Restoring SavedModel bundle.
2017-07-25 16:16:15.126391: W op_kernel.cc:1192]
Data loss: Unable to read file (C:/model/1/variables/variables.index).
Perhaps the file is corrupt or was produced by a newer version of TensorFlow with format changes (failed to seek to header entry): corrupted compressed block contents
2017-07-25 16:16:15.127325: W op_kernel.cc:1192]
Data loss: Unable to read file (C:/model/1/variables/variables.index).
Perhaps the file is corrupt or was produced by a newer version of TensorFlow with format changes (failed to seek to header entry): corrupted compressed block contents
...
Same lines over and over, 40 times in total
...
2017-07-25 16:16:15.162735: I C:\all\lib\serving\tensorflow\tensorflow\cc\saved_model\loader.cc:284] Loading SavedModel: fail. Took 80176 microseconds.
The version of TensorFlow is exactly the same, so there are no issues with that. The errors occur in the ctor BundleReader::BundleReader in the following line:
iter_->Seek(kHeaderEntryKey);
This is all part of the function that restores weights from the filesystem to the current session. TF basically runs the save/restore_all operation to load the weights. Interestingly enough, this is done on a thread pool, which on my machine has 12 threads. Because of that, 12 threads simultaneously access the variables.index file, and I know that Windows does not like things like that.
I tried tuning session options for LoadSavedModel function:
sessionOpt.config.set_inter_op_parallelism_threads(1);
sessionOpt.config.set_intra_op_parallelism_threads(1);
sessionOpt.config.set_use_per_session_threads(1);
But unfortunately this does not seem to change anything.
Does anyone have any idea what else I can try? Should I file a bug report or maybe there's a problem with my code?
OK, I've found the culprit. It turns out it's not related to multithreading issues at all.
The CMake build scripts provided in tensorflow/contrib/cmake do not support the SNAPPY compression library, so the resulting application could not decompress my model. After I added the SNAPPY library to CMakeLists.txt, it started to work fine.
I'll most likely contribute the change soon so it can help others having the same issue.

Why can't this Windows command-line program redirect its standard out to a file?

For reference, see the source code for this small program, EndPointController.exe:
http://www.daveamenta.com/2011-05/programmatically-or-command-line-change-the-default-sound-playback-device-in-windows-7/
Basically, it is a Visual Studio C++ program that uses printf to write information to a command shell window.
Here's an example of me running the program on Windows 7 x64 (using the provided compiled binary from the above link):
C:\Users\James\Desktop>EndPointController.exe
Audio Device 0: Speakers (High Definition Audio Device)
Audio Device 1: AMD HDMI Output (AMD High Definition Audio Device)
Audio Device 2: Digital Audio (S/PDIF) (High Definition Audio Device)
Audio Device 3: Digital Audio (S/PDIF) (High Definition Audio Device)
C:\Users\James\Desktop>
This works perfectly. Now, I'll try to redirect the output to a file:
C:\Users\James\Desktop>EndPointController.exe > test.txt
C:\Users\James\Desktop>type test.txt
C:\Users\James\Desktop>
It didn't work; test.txt is empty. Is it a permissions issue?
C:\Users\James\Desktop>dir > test.txt
C:\Users\James\Desktop>type test.txt
Volume in drive C has no label.
Volume Serial Number is 16EC-AE63
Directory of C:\Users\James\Desktop
04/20/2014 03:11 AM <DIR> .
04/20/2014 03:11 AM <DIR> ..
05/31/2011 06:16 PM 7,168 EndPointController.exe
04/20/2014 03:12 AM 0 test.txt
2 File(s) 7,168 bytes
3 Dir(s) 171,347,292,160 bytes free
C:\Users\James\Desktop>
No, it does not seem to be a permissions issue. Can anyone explain how this printf function is somehow circumventing the standard out redirection process?
It appears that the output buffer isn't being flushed when the program exits for some reason.
Adding fflush(stdout); right before the return hr; line fixes it for me.
I tried a few other things, such as converting the wide string to narrow and passing that to printf, using wprintf, and compiling as multibyte and converting the string to narrow to pass to printf, but only manually flushing the buffer worked.
I have downloaded the project files from the link you included and run the executable that is already built and included in the Release folder, and it works as expected. I also re-built the code in VC++ 2013 and that too works as expected.
I suspect either operator error or some system issue; however, the information in your question does not seem to suggest operator error, and you have provided evidence that this is not the case.
I ran the code from C:\Users\<userprofile>\Documents\Visual Studio 2013\test\Release. Desktop is a "special" folder in Windows which may have some bearing, though I doubt it. Either way, I don't think it is a programming issue.

OpenCV - OutOfMemory with big dataset

I am working with OpenCV 2.4 and SVM classification, and I need to load a big dataset (about 400 MB of data) in C++. I've been able to save this dataset to an XML file, but I am unable to load it after that. Indeed, I receive the following message:
OpenCV Error: Insufficient memory (Failed to allocate 408909812 bytes) in OutOfMemoryError, file (my opencv2.4 directory)modules\core\src\alloc.cpp, line 52 - error: (-4)
How can I increase the available memory (I have plenty of free RAM)?
Thanks a lot !
EDIT:
Here is the place where the problem appears. The code works when I load a smaller file:
std::cout<<"ok 0"<<std::endl;
FileStorage XML_Data(Filename, FileStorage::READ);
XML_Data["Data"]>>m_Data_Matrix;
XML_Data.release();
std::cout<<"ok 1"<<std::endl;
EDIT 2:
Problem solved: the solution was to compile my application and OpenCV 2.4.5 as 64-bit. I installed a 64-bit version of MinGW, built OpenCV with this new version (using cmake to configure), and then changed the compiler used by Code::Blocks.
You may find these links useful: http://forums.codeblocks.org/index.php?topic=13016.0 and http://www.drangon.org/mingw.