I couldn't find a way to determine the maximum enclave size that can be created using the SGX SDK. Is there any way of fetching these capabilities? This is especially useful in cloud environments, where you can create virtual machines with EPC sections but don't know the actual usable size of the provisioned EPC.
The only option I found to get the size of the EPC section is to filter dmesg for the output of the SGX driver:
[ 2.451815] intel_sgx: EPC section 0x240000000-0x2bfffffff
If we subtract the start of the section from the end, we get a value in bytes which we can convert to gibibytes or mebibytes (the end address is inclusive, so adding 1 gives exactly 2 GiB for this section).
Here is the calculation for this example and the result in gibibytes:
python3 -c 'print((0x2bfffffff - 0x240000000) / 1024 ** 3)'
1.9999999990686774
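If you want to build this check into a small tool instead of doing it by hand, here is a minimal C++ sketch of the same arithmetic (the boundary addresses are the example values from the dmesg line above):
#include <cstdint>
#include <cstdio>

int main() {
    // Example EPC section boundaries as reported by the SGX driver in dmesg.
    const uint64_t epc_start = 0x240000000ULL;
    const uint64_t epc_end   = 0x2bfffffffULL;

    // The end address is inclusive, so add 1 to get the full section size.
    const uint64_t size_bytes = epc_end - epc_start + 1;

    std::printf("EPC section: %llu bytes (%.2f GiB)\n",
                static_cast<unsigned long long>(size_bytes),
                size_bytes / double(1024ULL * 1024 * 1024));
    return 0;
}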
I am using FreeRTOS v10.1.0; in addition, I have downloaded FreeRTOS+FAT from the Labs area (160919 release).
I am using an Altera Cyclone V evaluation board and have successfully run FreeRTOS projects on the board, using the demo project and the available port for my board as the basis for my own applications.
I have also successfully mounted a partition on my SD card, read files from the SD card, and written files to the SD card.
My problems begin when I try to read a file bigger than 2K. I am using the following ff_fread call to read from a file I have previously opened and know to be 5777 bytes long:
ff_fread( &byteBuffer[0],1,5777, pxSourceFile );
What I find is that the byte buffer is repeatedly populated with the same 2048 bytes, up to the maximum of 5777 bytes. So byteBuffer[0] to byteBuffer[2047] contain what I expect, but then this data is repeated.
I have also tried to read the data in 512-byte chunks and in 2048-byte chunks, in case the issue was related to a sector boundary (512-byte sectors) or a cluster boundary (4 sectors per cluster); see the sketch below.
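Roughly, the chunked reads looked like the following sketch (the helper name is just for illustration; byteBuffer is assumed to be at least as large as the file, and pxSourceFile is the handle opened earlier):
#include <stdint.h>
#include "ff_stdio.h"   /* FreeRTOS+FAT stdio-style API (ff_fread) */

/* Read xFileSize bytes from pxSourceFile into byteBuffer in 512-byte pieces. */
static size_t prvReadInChunks( FF_FILE *pxSourceFile, uint8_t *byteBuffer, size_t xFileSize )
{
    size_t xBytesRead, xTotalRead = 0;

    do
    {
        xBytesRead = ff_fread( &byteBuffer[ xTotalRead ], 1, 512, pxSourceFile );
        xTotalRead += xBytesRead;
    } while( ( xBytesRead > 0 ) && ( xTotalRead < xFileSize ) );

    return xTotalRead;
}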
My suspicion is that the issue is in FreeRTOS+FAT rather than in the Altera code for interfacing with the SD card. This is because, when I put a breakpoint in the following function, I see that the FreeRTOS+FAT API actually seems to jump back to the first sector after it has successfully read 4 sectors of data. So it would seem that the Altera API is returning the data requested by FreeRTOS+FAT.
static int32_t prvReadSd( uint8_t *pucDestination,
                          uint32_t ulSectorNumber,
                          uint32_t ulSectorCount,
                          FF_Disk_t *pxDisk )
{
    /* The Altera driver works in bytes, so convert the 512-byte sector
       number and sector count into byte values. */
    int32_t errorCode = alt_sdmmc_read( pucDestination,
                                        ulSectorNumber * 512,
                                        ulSectorCount * 512 );

    return errorCode;
}
Any insights anyone can offer into the issues I am having will be greatly appreciated.
OK, I have resolved my issue. My apologies for blaming FreeRTOS+FAT; I will explain the issue below in case others run into the same thing.
I had created a 1MB partition on my SD card which I believed to be FAT16. After trying various things, I decided to reformat my SD card using the following command in Linux:
sudo mkdosfs -F 16 /dev/sdc4
Linux gave the following warning:
WARNING: Not enough clusters for a 16 bit FAT! The filesystem will be
misinterpreted as having a 12 bit FAT without mount option "fat=16".
This prompted me to enable FAT12 support in the FreeRTOS+FAT config file, and this fixed my issue.
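For reference, the fix itself was a single define in FreeRTOSFATConfig.h. The macro name below is the one from my copy of the FreeRTOS+FAT Labs headers, so double-check it against your release:
/* FreeRTOSFATConfig.h: enable FAT12 support so the small (1MB) partition is mounted correctly. */
#define ffconfigFAT12_SUPPORT    1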
I have two Elasticsearch clusters: cluster1 (version 2.4.x) and cluster2 (version 5.1.1). I am feeding live data to both clusters from Logstash (version 5.x) to reach a stable state between clusters 1 and 2. The data is now in sync between the two clusters, but cluster 2's index size is almost double cluster 1's. Below is an example:
ES version  Cluster      health status index     pri rep docs.count docs.deleted store.size pri.store.size
2.4.x       Old_Cluster  green  open   index1-08 5   1   6520824    0            5.3gb      2.6gb
5.1.1       New_Cluster  green  open   index2-08 5   1   6520824    0            9.3gb      4.6gb
As you can see above, docs.count is the same for the two indexes, but index2-08 is double the size of index1-08. Both clusters have similar configurations with respect to their versions.
Logstash (version 5.x) creates both indexes using default mappings. This is degrading Kibana's search performance. I am new to ELK, so I am not sure whether this is expected.
Can anyone please take a look and suggest what might be the reason for this behaviour?
I am getting the following error when running the default kernel that is generated when creating a CUDA project in VS Community:
addKernel launch failed: invalid device function
addWithCuda failed!
I searched for how to solve it and found out that I have to change Project -> Properties -> CUDA C/C++ -> Device -> Code Generation (the default values for [architecture, code] are compute_20,sm_20), but I couldn't find the values needed for my graphics card (GeForce 8400 GS).
Is there any list on the net for the [architecture, code] values, or is it possible to get them with a command?
The numeric values in compute_XX and sm_XX are the Compute Capability (CC) of your CUDA device.
You can look up http://en.wikipedia.org/wiki/CUDA#Supported_GPUs for a (maybe not complete) list of GPUs and their corresponding CC.
Your quite old 8400 GS (if I remember correctly) hosts a G86 chip, which supports CC 1.1.
So you have to change the setting to compute_11,sm_11.
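If you would rather query the compute capability programmatically than look it up in a table, here is a small sketch using the CUDA runtime API (the deviceQuery sample that ships with the CUDA SDK prints the same information):
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);

    for (int dev = 0; dev < deviceCount; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // prop.major / prop.minor form the compute capability,
        // e.g. 1.1 maps to compute_11,sm_11 in the project settings.
        std::printf("Device %d: %s, compute capability %d.%d\n",
                    dev, prop.name, prop.major, prop.minor);
    }
    return 0;
}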
I recently started working with OpenCV with the intent of stitching large numbers of images together to create massive panoramas. To begin my experimentation, I looked into the sample programs that come with OpenCV to get an idea of how to use the OpenCV libraries. Since I was interested in image stitching, I went straight for "stitching_detailed.cpp". The code can be found at:
https://code.ros.org/trac/opencv/browser/trunk/opencv/samples/cpp/stitching_detailed.cpp?rev=6856
Now, this program does most of what I need it to do, but I ran into something interesting. I found that for 9 of the 15 optional projection warpers, I receive the following error when I try to run the program:
Insufficient memory (Failed to allocate XXXXXXXXXX bytes) in unknown function,
file C:\slave\winInstallerMegaPack\src\opencv\modules\core\src\alloc.cpp,
line 52
where the "X's" mark integer that change between the different types of projection (as though different methods require different amounts of space). The full source code for "alloc.cpp" can be found at the following website:
https://code.ros.org/trac/opencv/browser/trunk/opencv/modules/core/src/alloc.cpp?rev=3060
However, the line of code that emits this error in alloc.cpp is:
static void* OutOfMemoryError(size_t size)
{
    // line 52: this is where the error above is raised
    CV_Error_(CV_StsNoMem, ("Failed to allocate %lu bytes", (unsigned long)size));
    return 0;
}
So, I am simply lost as to the possible reasons this error may be occurring. I realize that this error would normally occur if the system were out of memory, but when running this program with my test images I never use more than ~3.5GB of RAM, according to my Task Manager.
Also, since the program was written as a sample of OpenCV's stitching capabilities by the OpenCV developers, I find it hard to believe that there is a drastic memory error present in the source code.
Finally, the program works fine if I use some of the warping methods:
- spherical
- fisheye
- transverseMercator
- compressedPlanePortraitA2B1
- paniniPortraitA2B1
- paniniPortraitA1.5B1
but when I ask the program to use any of the others (through the command-line flag --warp [PROJECTION_NAME]):
- plane
- cylindrical
- stereographic
- compressedPlaneA2B1
- mercator
- compressedPlaneA1.5B1
- compressedPlanePortraitA1.5B1
- paniniA2B1
- paniniA1.5B1
I get the error mentioned above. I get pretty good results from the transverseMercator projection warper, but I would like to test the stereographic one in particular. Can anyone help me figure this out?
The pictures that I am trying to process are 1360 x 1024 in resolution and my computer has the following stats:
Model: HP Z800 Workstation
Operating System: Windows 7 Enterprise 64-bit
Processor: Intel Xeon 2.40GHz (12 cores)
Memory: 14GB RAM
Hard Drive: 1TB Hitachi
Video Card: ATI FirePro V4800
Any help would be greatly appreciated, thanks!
When I run OpenCV's traincascade application, I get exactly the same error as you:
Insufficient memory (Failed to allocate XXXXXXXXXX bytes) in unknown function,
file C:\slave\winInstallerMegaPack\src\opencv\modules\core\src\alloc.cpp,
line 52
At the time, only about 70% of my RAM (6GB) was occupied. When running traincascade step by step, I found that the error was thrown when it used more than about 1.5GB of RAM.
Then I found that there are two arguments which control how much memory is used:
-precalcValBufSize
-precalcIdxBufSize
So I tried setting these two to 128, and it ran. I hope my experience can help you.
I don't think this problem is about a memory leak; it just relates to how much memory the OS allows an application to occupy. I hope someone can check my guess.
I've recently had a similar issue with OpenCV image stitching. I used the create method to create a stitcher instance and provided 5 images in vertical order to the stitch method, but I received an insufficient memory error.
The panorama was successfully created after setting:
setWaveCorrection(false)
This solution will not be applicable if you need wave correction.
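In code, the approach above looks roughly like the following sketch (this assumes a recent OpenCV where cv::Stitcher::create is available; loading the five images into the vector is left out):
#include <opencv2/stitching.hpp>
#include <vector>

cv::Mat stitchWithoutWaveCorrection(const std::vector<cv::Mat>& images)
{
    // Create a stitcher and disable wave correction before stitching.
    cv::Ptr<cv::Stitcher> stitcher = cv::Stitcher::create(cv::Stitcher::PANORAMA);
    stitcher->setWaveCorrection(false);

    cv::Mat pano;
    cv::Stitcher::Status status = stitcher->stitch(images, pano);
    if (status != cv::Stitcher::OK)
    {
        // Stitching failed (e.g. not enough images or no overlap found).
        return cv::Mat();
    }
    return pano;
}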
This may be related to the order of the stitching. I split a big picture into 3x3 tiles; when I stitched them row by row there was no problem, but when I stitched them column by column I got the same problem as you.
I am trying to figure out how to use the batch mode offered in the CUFFT library.
I basically have an image that is 5300 pixels wide and 3500 pixels tall. Currently this means I am running 3500 1D FFTs on those 5300 elements using FFTW.
Is this a good candidate problem to run the CUFFT library in batch mode? How does the data have to be set up to do this problem?
Thanks
Yes, you can use batch mode.
To use batch mode, the 5300 elements of each row should be stored contiguously.
That means the distance between adjacent batches is 5300 elements.
You can do it this way:
// ...
cufftComplex *host;
cufftComplex *device;
cufftHandle plan;

cudaMallocHost((void **)&host, sizeof(cufftComplex) * 5300 * 3500);
cudaMalloc((void **)&device, sizeof(cufftComplex) * 5300 * 3500);

// Fill the host buffer here, batch after batch:
// host[0..5299] is the first batch, host[5300..10599] the second, and so on up to the 3500th batch.
cudaMemcpy(device, host, sizeof(cufftComplex) * 5300 * 3500, cudaMemcpyHostToDevice);

cufftPlan1d(&plan, 5300, CUFFT_C2C, 3500);
cufftExecC2C(plan, device, device, CUFFT_FORWARD);
// ...
For more details see the CUFFT Manual.
Yes, this is a good candidate for batch mode.
You should proceed as follows:
Create an array of size sizeof(cufftComplex) * 5300 * 3500 on the GPU (here I assume that you have complex input data).
Copy your data to the GPU.
Create a plan with cufftPlan1d().
Execute the plan, for example with cufftExecC2C().
For more information, have a look at the CUFFT manual.
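Putting those steps together, a minimal self-contained sketch (complex-to-complex forward transform; filling in the host data and error checking are left out):
#include <cuda_runtime.h>
#include <cufft.h>

int main()
{
    const int nx    = 5300;                 // FFT length (one image row)
    const int batch = 3500;                 // number of rows per batched call
    const size_t count = (size_t)nx * batch;

    // 1. Allocate one contiguous array for all rows on the GPU.
    cufftComplex *d_data = NULL;
    cudaMalloc((void **)&d_data, sizeof(cufftComplex) * count);

    // 2. Copy your (complex) image data to the GPU, row after row, e.g.:
    //    cudaMemcpy(d_data, h_data, sizeof(cufftComplex) * count, cudaMemcpyHostToDevice);

    // 3. Create a 1D plan that covers all rows in one batch.
    cufftHandle plan;
    cufftPlan1d(&plan, nx, CUFFT_C2C, batch);

    // 4. Execute the plan (in place, forward transform).
    cufftExecC2C(plan, d_data, d_data, CUFFT_FORWARD);

    // Clean up.
    cufftDestroy(plan);
    cudaFree(d_data);
    return 0;
}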