Weird Exception in HDF5 - C++

I was trying basic HDF5 dataset read/write operations in C++.
#include "stdafx.h"
#include "h5cpp.h"
#include <iostream>
#include <conio.h>
#include <vector>
#include <string>
#ifndef H5_NO_NAMESPACE
using namespace H5;
#endif
const H5std_string fName("dset.h5");
const H5std_string dsName("dset");
int main()
{
    try
    {
        int data[10];
        int dataOut[10];
        //Exception::dontPrint();
        std::cout << "Enter The Data : ";
        for (int i = 0; i < 10; i++)
            std::cin >> data[i];
        H5File file(fName, H5F_ACC_TRUNC);
        IntType type(H5T_NATIVE_INT);
        Group *myGroup = new Group(file.createGroup("\\myGroup"));
        hsize_t dim[] = {10};
        DataSpace dSpace(1, dim);
        DataSet dSet = myGroup->createDataSet(dsName, type, dSpace);
        dSet.write(data, type);
        std::cout << "Data Written\n";
        dSet.read(dataOut, type);
        std::cout << "Data Read\n";
        for (int i = 0; i < 10; i++)
            std::cout << dataOut[i] << "\n";
        delete myGroup;
    }
    catch (Exception e)
    {
        e.printError();
    }
    _getch();
    return 0;
}
After all the data is entered, I get exceptions:
HDF5-DIAG: Error detected in HDF5 (1.8.12) thread 0:
#000: ..\..\src\H5F.c line 1503 in H5Fcreate(): unable to create file
major: File accessibilty
minor: Unable to open file
#001: ..\..\src\H5F.c line 1285 in H5F_open(): unable to open file: time = Wed
Feb 12 00:02:29 2014
, name = '#╦>ÿK', tent_flags = 13
major: File accessibilty
minor: Unable to open file
#002: ..\..\src\H5FD.c line 987 in H5FD_open(): open failed
major: Virtual File Layer
minor: Unable to initialize object
#003: ..\..\src\H5FDsec2.c line 343 in H5FD_sec2_open(): unable to open file:
name = '#╦>ÿK', errno = 22, error message = 'Invalid argument', flags = 13, o_fl
ags = 302
major: File accessibilty
minor: Unable to open file
But if I hardcode the filename and dataset names, like "abcd.h5" and "dSet", then I am able to get the required output; but after the output, I get these exceptions:
HDF5-DIAG: Error detected in HDF5 (1.8.12) thread 0:
#000: ..\..\src\H5T.c line 1765 in H5Tclose(): immutable datatype
major: Invalid arguments to routine
minor: Bad value
DataType::~DataType - H5Tclose failed
Please help me figure out this problem.

There are two distinct problems. The first is that, somehow, the H5std_string (which is in fact just a std::string) gets mangled on your system: it seems that dset.h5 is transformed into #╦>ÿK. I might be wrong, but that's how it looks. For this I have no clue; it's a Windows issue and, to be honest, it's a bit scary.
The second problem comes from type: the destructor complains that it cannot destroy the object because it is immutable. So why is it immutable? Because you are using this constructor:
H5::IntType::IntType(const hid_t existing_id)
which just wraps the immutable H5T_NATIVE_INT type, instead of this one:
H5::IntType::IntType(const PredType& pred_type)
which clones H5T_NATIVE_INT; the clone is mutable and, more importantly, can be destroyed. So you need to replace:
IntType type(H5T_NATIVE_INT);
by
IntType type(PredType::NATIVE_INT);
and you will be good.

Related

OpenCV DNN fails reading an ONNX network

I'm trying to load a simple four-layer convolutional neural network from an ONNX file in C++ with OpenCV. The ONNX file was created from a TensorFlow model using the tf2onnx library in Python. I saved the model with the following piece of code.
(onnx_model_proto, storage) = tf2onnx.convert.from_keras(model, opset=8)
with open(os.path.join("models", 'upscaleModelData.onnx'), "wb") as f:
    f.write(onnx_model_proto.SerializeToString())
When reading via cv::dnn::Net net = cv::dnn::readNetFromONNX("model.onnx"); in C++, I get the following error.
[ INFO:0] global ####\master_winpack-build-win64-vc15\opencv\modules\dnn\src\onnx\onnx_importer.cpp (429) cv::dnn::dnn4_v20210608::ONNXImporter::populateNet DNN/ONNX: loading ONNX v4 model produced by 'tf2onnx':1.10.0. Number of nodes = 13, inputs = 1, outputs = 1
OpenCV(4.5.3) Error: Unspecified error (Can't create layer "model/tf.nn.depth_to_space/DepthToSpace:0" of type "DepthToSpace") in cv::dnn::dnn4_v20210608::LayerData::getLayerInstance, file ####\opencv\modules\dnn\src\dnn.cpp, line 621
[ERROR:0] global ####\opencv\modules\dnn\src\onnx\onnx_importer.cpp (2127) cv::dnn::dnn4_v20210608::ONNXImporter::handleNode DNN/ONNX: ERROR during processing node with 1 inputs and 1 outputs: [DepthToSpace]:(model/tf.nn.depth_to_space/DepthToSpace:0)
[ INFO:0] global ####\opencv\modules\dnn\src\onnx\onnx_importer.cpp (2131) cv::dnn::dnn4_v20210608::ONNXImporter::handleNode Input[0] = 'model/conv2d_3/Relu:0'
[ INFO:0] global ####\opencv\modules\dnn\src\onnx\onnx_importer.cpp (2135) cv::dnn::dnn4_v20210608::ONNXImporter::handleNode Output[0] = 'model/tf.nn.depth_to_space/DepthToSpace:0'
OpenCV(4.5.3) Error: Unspecified error (> Node [DepthToSpace]:(model/tf.nn.depth_to_space/DepthToSpace:0) parse error: OpenCV(4.5.3) ####\opencv\modules\dnn\src\dnn.cpp:621: error: (-2:Unspecified error) Can't create layer "model/tf.nn.depth_to_space/DepthToSpace:0" of type "DepthToSpace" in function 'cv::dnn::dnn4_v20210608::LayerData::getLayerInstance'
> ) in cv::dnn::dnn4_v20210608::ONNXImporter::handleNode, file ####\opencv\modules\dnn\src\onnx\onnx_importer.cpp, line 2146
OpenCV(4.5.3) ####\opencv\modules\dnn\src\onnx\onnx_importer.cpp:2146: error: (-2:Unspecified error) in function 'cv::dnn::dnn4_v20210608::ONNXImporter::handleNode'
> Node [DepthToSpace]:(model/tf.nn.depth_to_space/DepthToSpace:0) parse error: OpenCV(4.5.3) ####\opencv\modules\dnn\src\dnn.cpp:621: error: (-2:Unspecified error) Can't create layer "model/tf.nn.depth_to_space/DepthToSpace:0" of type "DepthToSpace" in function 'cv::dnn::dnn4_v20210608::LayerData::getLayerInstance'
I'm using OpenCV on Windows and Visual C++.
This might be a similar issue to this: EMGU - EDSR : Can't create layer DepthToSpace.
I tried reading the same network from TensorFlow PB and H5 file, but I get a similar result.
For reading a PB file, created with cv::dnn::readNetFromTensorflow("model.pb") I get:
OpenCV(4.5.3) Error: Unspecified error (FAILED: fs.is_open(). Can't open "model.pb") in cv::dnn::ReadProtoFromBinaryFile, file ####\opencv\modules\dnn\src\caffe\caffe_io.cpp, line 1133
OpenCV(4.5.3) ####\opencv\modules\dnn\src\caffe\caffe_io.cpp:1133: error: (-2:Unspecified error) FAILED: fs.is_open(). Can't open "model.pb" in function 'cv::dnn::ReadProtoFromBinaryFile'
For reading an H5 file, created with cv::dnn::readNetFromTensorflow("upscaleModelData.h5"), I get:
OpenCV(4.5.3) Error: Unspecified error (Cannot determine an origin framework of files: model.h5) in cv::dnn::dnn4_v20210608::readNet, file ####\opencv\modules\dnn\src\dnn.cpp, line 5461
OpenCV(4.5.3) ####\opencv\modules\dnn\src\dnn.cpp:5461: error: (-2:Unspecified error) Cannot determine an origin framework of files: model.h5 in function 'cv::dnn::dnn4_v20210608::readNet'
Does this mean I should make modifications to the model's layers so that it can be read by OpenCV? Is this a compatibility issue? Any feedback, workarounds or alternative approaches (e.g. TensorFlow C++ API solutions) are welcome.
Edit 1: Implementing a layer-type workaround
I reimplemented the depth_to_space layer manually in Python, using the following piece of code, based on these links: onnx-tensorflow, depth_to_space, keras-subpixel-conv.
x_shape = tf.shape(x)
n, h, w, c = x_shape[0], x_shape[1], x_shape[2], x_shape[3]
y = tf.reshape(x, (n, h, w, bs, bs, c // (bs ** 2)))
y = tf.transpose(y, (0, 1, 3, 2, 4, 5))
outputs = tf.reshape(y, (n, h * bs, w * bs, c // (bs ** 2)))
Now, when I run the same C++ code, I get the following error.
[ INFO:0] global ####\opencv\modules\dnn\src\onnx\onnx_importer.cpp (429) cv::dnn::dnn4_v20210608::ONNXImporter::populateNet DNN/ONNX: loading ONNX v4 model produced by 'tf2onnx':1.10.0. Number of nodes = 38, inputs = 14, outputs = 1
OpenCV(4.5.3) Error: Assertion failed (indexMat.total() == 1) in cv::dnn::dnn4_v20210608::ONNXImporter::handleNode, file ####\opencv\modules\dnn\src\onnx\onnx_importer.cpp, line 1842
[ERROR:0] global ####\opencv\modules\dnn\src\onnx\onnx_importer.cpp (2127) cv::dnn::dnn4_v20210608::ONNXImporter::handleNode DNN/ONNX: ERROR during processing node with 2 inputs and 1 outputs: [Gather]:(model/tf.compat.v1.shape/Shape:0)
[ INFO:0] global ####\opencv\modules\dnn\src\onnx\onnx_importer.cpp (2131) cv::dnn::dnn4_v20210608::ONNXImporter::handleNode Input[0] = 'Shape__72:0'
[ INFO:0] global ####\opencv\modules\dnn\src\onnx\onnx_importer.cpp (2131) cv::dnn::dnn4_v20210608::ONNXImporter::handleNode Input[1] = 'Const__76'
[ INFO:0] global ####\opencv\modules\dnn\src\onnx\onnx_importer.cpp (2135) cv::dnn::dnn4_v20210608::ONNXImporter::handleNode Output[0] = 'model/tf.compat.v1.shape/Shape:0'
OpenCV(4.5.3) Error: Unspecified error (> Node [Gather]:(model/tf.compat.v1.shape/Shape:0) parse error: OpenCV(4.5.3) ####\opencv\modules\dnn\src\onnx\onnx_importer.cpp:1842: error: (-215:Assertion failed) indexMat.total() == 1 in function 'cv::dnn::dnn4_v20210608::ONNXImporter::handleNode'
> ) in cv::dnn::dnn4_v20210608::ONNXImporter::handleNode, file ####\opencv\modules\dnn\src\onnx\onnx_importer.cpp, line 2146
OpenCV(4.5.3) ####\opencv\modules\dnn\src\onnx\onnx_importer.cpp:2146: error: (-2:Unspecified error) in function 'cv::dnn::dnn4_v20210608::ONNXImporter::handleNode'
> Node [Gather]:(model/tf.compat.v1.shape/Shape:0) parse error: OpenCV(4.5.3) ####\opencv\modules\dnn\src\onnx\onnx_importer.cpp:1842: error: (-215:Assertion failed) indexMat.total() == 1 in function 'cv::dnn::dnn4_v20210608::ONNXImporter::handleNode'

Getting "No EOF Marker was found in the PDF file." after using Tesseract C++ API but Tesseract command line utility works fine

I am running Tesseract on some images and then trying to merge all of the generated PDFs using the PoDoFo C++ library.
I have tried two approaches (the first is what I require):
Using Tesseract C++ API & PoDoFo C++ library
My code is somewhat like this:
For the OCR part (run for 001.jpg and 002.jpg):
const char* input_image = "001.jpg";
const char* output_base = "001";
const char* datapath = "/home/test/Desktop/Example2";
int timeout_ms = 5000;
const char* retry_config = nullptr;
bool textonly = false;
tesseract::TessBaseAPI *api = new tesseract::TessBaseAPI();
if (api->Init(datapath, "eng")) {
    fprintf(stderr, "Could not initialize tesseract.\n");
    exit(1);
}
tesseract::TessPDFRenderer *renderer = new tesseract::TessPDFRenderer(
    output_base, api->GetDatapath(), textonly);
bool succeed = api->ProcessPages(input_image, retry_config, timeout_ms, renderer);
if (!succeed) {
    fprintf(stderr, "Error during processing.\n");
    return EXIT_FAILURE;
}
api->End();
return EXIT_SUCCESS;
For the PDF merging part:
void mergePDF(std::vector<char*> inputfiles, char* outputfile) {
    try {
        /* Reading first PDF */
        fprintf(stdout, "Reading file: %s\n", inputfiles[0]);
        PoDoFo::PdfMemDocument doc1;
        doc1.Load(inputfiles[0]);
        /* Reading second PDF */
        fprintf(stdout, "Reading file: %s\n", inputfiles[1]);
        PoDoFo::PdfMemDocument doc2;
        doc2.Load(inputfiles[1]);
        /* Appending doc2 to doc1 */
        doc1.Append(doc2);
        fprintf(stdout, "Writing file to %s\n", outputfile);
        doc1.Write(outputfile);
    }
    catch (const PoDoFo::PdfError& e) {
        throw e;
    }
}
int main(int argc, char* argv[]) {
    if (argc < 4) {
        printHelp();
        exit(EXIT_FAILURE);
    }
    PoDoFo::PdfError::EnableDebug(false);
    std::vector<char*> inputfiles;
    char* outputfile;
    inputfiles.emplace_back(argv[1]);
    inputfiles.emplace_back(argv[2]);
    outputfile = argv[3];
    try {
        mergePDF(inputfiles, outputfile);
    }
    catch (const PoDoFo::PdfError& e) {
        fprintf(stderr, "Error %i occured!\n", e.GetError());
        e.PrintErrorMsg();
        return e.GetError();
    }
    exit(EXIT_SUCCESS);
}
Output:
Warning: Invalid resolution 0 dpi. Using 70 instead.
Warning: Invalid resolution 0 dpi. Using 70 instead.
Reading file: /home/test/Desktop/Example2/001.pdf
Error 17 occured!
PoDoFo encountered an error. Error: 17 ePdfError_NoEOFToken
Error Description: No EOF Marker was found in the PDF file.
Callstack:
#0 Error Source: /home/test/podofo/src/podofo/doc/PdfMemDocument.cpp:263
Information: Handler fixes issue #49
#1 Error Source: /home/test/podofo/src/podofo/base/PdfParser.cpp:272
Information: Unable to load objects from file.
#2 Error Source: /home/test/podofo/src/podofo/base/PdfParser.cpp:310
Information: EOF marker could not be found.
#3 Error Source: /home/test/podofo/src/podofo/base/PdfParser.cpp:1528
Using Tesseract command line utility & PoDoFo C++ library
For the OCR part, I use the Tesseract CLI tool as follows:
tesseract 001.jpg 001 pdf
tesseract 002.jpg 002 pdf
For the PDF merging part, the code is the same as in point 1 above.
Output:
Reading file: /home/test/Desktop/Example2/001.pdf
Reading file: /home/test/Desktop/Example2/002.pdf
Fixing references in 13 0 R by 12
Fixing references in 14 0 R by 12
Fixing references in 15 0 R by 12
Fixing references in 16 0 R by 12
Fixing references in 17 0 R by 12
Fixing references in 18 0 R by 12
Fixing references in 19 0 R by 12
Fixing references in 20 0 R by 12
Fixing references in 21 0 R by 12
Fixing references in 22 0 R by 12
Fixing references in 23 0 R by 12
Fixing references in 24 0 R by 12
Reading file: /home/test/Desktop/Example2/output.pdf
I wonder why I get the EOF marker issue when using the Tesseract C++ API but no such issue when using the Tesseract CLI tool.
Am I missing something in the OCR code in point 1 above?

libprotobuf ERROR when accessing the output tensor in Tensorflow C++ API

I'm using the TensorFlow C++ API for inference. I was able to build a standalone TensorFlow library, load the optimized graph, and feed an image.
I'm able to print the tensor using the following code:
// run the image through the model.
std::vector<Tensor> outputs;
Status run_status = session->Run({{input_layer, resized_tensor}}, {output_layer}, {}, &outputs);
if (!run_status.ok()) {
    LOG(ERROR) << "Running model failed: " << run_status;
    return -1;
}
std::cout << outputs[0].DebugString() << "\n";
and then I get the following output:
Tensor<type: float shape: [1,480,720,3] values: [[[58.4225044 79.0690613 94.4875641]]]...>
but after adding this line:
cv::Mat rotMatrix(outputs[0].dim_size(1), outputs[0].dim_size(2), CV_32FC1, outputs[0].flat<float>().data());
or this line:
float *p = outputs[0].flat<float>().data();
I get the following error:
2019-06-26 14:19:29.040705: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
[libprotobuf ERROR external/com_google_protobuf/src/google/protobuf/descriptor_database.cc:118] File already exists in database: google/protobuf/any.proto
[libprotobuf FATAL external/com_google_protobuf/src/google/protobuf/descriptor.cc:1367] CHECK failed: GeneratedDatabase()->Add(encoded_file_descriptor, size):
terminate called after throwing an instance of 'google::protobuf::FatalException'
what(): CHECK failed: GeneratedDatabase()->Add(encoded_file_descriptor, size):
Aborted (core dumped)
I tried building with bazel, qmake, and cmake, and rebuilt protocol buffers from source, but nothing helped.

HDF5 Simple Read of Dataset Fails

I am looking to do a simple read from an HDF5 file using C++. I will split this up into four parts: 1st, what the file looks like; 2nd, my code which attempts to read the file; 3rd, the error message; 4th, my conclusions.
1.The File - The dataset can be found in the file as shown:
$ h5ls -r myfile.h5
/ Group
/mydata Dataset {1200}
Note: the dataset is an array of 1200 variable-length strings. The CTYPE is H5T_C_S1, which is what I will use to read it in.
HDF5 "myfile.h5" {
GROUP "/" {
DATASET "mydata" {
DATATYPE H5T_STRING {
STRSIZE H5T_VARIABLE;
STRPAD H5T_STR_NULLTERM;
CSET H5T_CSET_UTF8;
CTYPE H5T_C_S1;
}
DATASPACE SIMPLE { ( 1200 ) / ( 1200 ) }
DATA {
(0): "pxsntpfcnkeesswwpwopksu", "exsytafcbkecsswwpwopnng",
(2): "ebswtlfcbnecsswwpwopnnm", "pxywtpfcnneesswwpwopksu",
(4): "exsgfnfwbktesswwpwoenag", "exyytafcbnecsswwpwopkng",
2.The Code - My code attempts to read the dataset like so:
#include "H5Cpp.h"
#ifndef H5_NO_NAMESPACE
using namespace H5;
#endif
const H5std_string FILE_NAME("myfile.h5");
const H5std_string DATASET_NAME("mydata");
// open file
H5File file(FILE_NAME, H5F_ACC_RDONLY);
// get dataset
DataSet dataset = file.openDataSet(DATASET_NAME);
// get src dataspace
DataSpace src = dataset.getSpace();
// get dimensions
int NUM_DIMS = src.getSimpleExtentNdims();
std::vector<hsize_t> dims(NUM_DIMS);
src.getSimpleExtentDims(&dims[0]);
hsize_t height = dims[0];
hsize_t width = 23;
// define src hyperslab
std::vector<hsize_t> count(NUM_DIMS, 1);
std::vector<hsize_t> offset(NUM_DIMS, 0);
src.selectHyperslab(H5S_SELECT_SET, &count[0], &offset[0]);
// define dst hyperslab
DataSpace dst(NUM_DIMS, dims);
dst.selectHyperslab(H5S_SELECT_SET, &count[0], &offset[0]);
// read data into memory, array of cstrings
std::vector<char*> data_out(height);
dataset.read(&data_out[0], H5T_C_S1, dst, src);
// print first line
std::cout << data_out[0] << std::endl;
3.The Error - However, it fails with what appears to be a type mismatch between the src and dst hyperslabs, even though I designed src and dst to have the same dimensions. The error message is as follows:
HDF5-DIAG: Error detected in HDF5 (1.10.3) thread 0:
#000: H5Dio.c line 199 in H5Dread(): can't read data
major: Dataset
minor: Read failed
#001: H5Dio.c line 467 in H5D__read(): unable to set up type info
major: Dataset
minor: Unable to initialize object
#002: H5Dio.c line 993 in H5D__typeinfo_init(): unable to convert between src and dest datatype
major: Dataset
minor: Feature is unsupported
#003: H5T.c line 4546 in H5T_path_find(): can't find datatype conversion path
major: Datatype
minor: Can't get value
#004: H5T.c line 4762 in H5T__path_find_real(): no appropriate function for conversion path
major: Datatype
minor: Unable to initialize object
HDF5-DIAG: Error detected in HDF5 (1.10.3) thread 0:
#000: H5T.c line 1756 in H5Tclose(): immutable datatype
major: Invalid arguments to routine
minor: Bad value
DataType::~DataType - H5Tclose failed
4.My Conclusions - I have attempted many variations, including removing dst and src as parameters to dataset.read() and changing H5T_C_S1 to PredType::C_S1 and PredType::NATIVE_CHAR; however, the same error persists.
How do I simply read the dataset into memory? Is the datatype truly mismatched or is there something else I am not defining? Am I still using the wrong datatype in the read function? Am I defining my hyperslabs improperly such that there actually is a type mismatch?
Maybe you want to try out HDFql and abstract yourself from HDF5 low-level details. In C++ using HDFql, you could read your variable-length char dataset mydata (contained in file myfile.h5) like this:
HDFql::execute("SELECT FROM myfile.h5 mydata"); // select (i.e. read) dataset "mydata" from file "myfile.h5" and populate default cursor with it
while(HDFql::cursorNext() == HDFql::Success) // display content of default cursor
{
std::cout << HDFql::cursorGetChar() << std::endl;
}

Unable to read file contents into buffer using read()

The following sample code was compiled with the GNU compiler (g++) on Ubuntu 16.04:
#include <iostream>
#include <unistd.h>
#include <fcntl.h>
#include <errno.h>
int main()
{
    char* pBuffer;
    char* storedfilepath = "/home/rtpl/Desktop/ts.mp4";
    std::cout << "\n Opening file at " << storedfilepath << "\n";
    int NumBytesToRead = 1000;
    int filedes = open(storedfilepath, O_RDONLY);
    std::cout << "\n value of error is " << errno << "\n";
    std::cout << "\n value of filedes is " << filedes;
    if (filedes == 0)
        std::cout << "\n File cannot be opened";
    else
    {
        std::cout << "\n File opened successfully";
        std::cout << "\n Now reading file\n";
    }
    //if(
    int ret = read(filedes, pBuffer, NumBytesToRead);
    std::cout << "\n value of error is " << errno << "\n";
    if (ret != -1)
        std::cout << "\n File read successfully";
    else
        std::cout << "\n File contents cannot be read";
    std::cout << "\nEnd.\n";
    close(filedes);
    return 0;
}
When compiled, I get this message:
rtpl#rtpl-desktop:~/Desktop$ g++ -g checkts.cpp
checkts.cpp: In function ‘int main()’:
checkts.cpp:8:27: warning: deprecated conversion from string constant to ‘char*’ [-Wwrite-strings]
char* storedfilepath = "/home/rtpl/Desktop/ts.mp4";
Upon execution:
rtpl#rtpl-desktop:~/Desktop$ ./a.out
Opening file at /home/rtpl/Desktop/ts.mp4
value of error is 0
value of filedes is 3
File opened successfully
Now reading file
value of error is 14
File contents cannot be read
End.
Entire gdb debug can be found here.
Question: why can't the file contents be read when the file is valid and the compiler throws no errors?
Assuming you're running Linux, an errno value of 14 is EFAULT, or "bad address".
Given the code
char* pBuffer;
.
.
.
int ret = read(filedes,pBuffer,NumBytesToRead);
pBuffer is not initialized or otherwise set, so the value in pBuffer is indeterminate and it certainly doesn't point to a valid address.
You need to actually provide a buffer where read() can place the data read:
char buffer[1024];
.
.
.
ssize_t ret = read(filedes,buffer,NumBytesToRead);
would work, as long as NumBytesToRead does not exceed the size of buffer. Note also that ret is now the proper ssize_t instead of int.