From this list of examples, it seems that Kakadu is able to compress volumetric images along the z-direction by leveraging the multi-component transform (Part 2) of the JPEG 2000 standard.
Specifically, example (Aj): I tested it and it seems to work.
I tried to modify that example for an image of 1024 x 1024 x 128 pixels:
I want to group the slices into batches of 32 (or 64) and run a full DWT on each batch individually,
but I fail.
This is what I tried:
kdu_compress -i img.rawl\*128#2097152 -o img.jpx -jpx_layers \* \
-jpx_space sLUM Creversible=yes Sdims="{1024,1024}" Clayers=4 \
Mcomponents=32 Nsigned=no Nprecision=12 \
Sprecision=12,12,12,12,12,13 Ssigned=no,no,no,no,no,yes \
Mvector_size:I4=32 Mvector_coeffs:I4=32 \
Mstage_inputs:I25="{0,31}" Mstage_outputs:I25="{0,31}" \
Mstage_collections:I25="{32,32}" \
Mstage_xforms:I25="{DWT,1,4,3,0}" \
Mnum_stages=1 Mstages=25
It fails with this error message:
Kakadu Core Error:
Multi-component transform does not satisfy the constraints imposed by Part 2 of
the JPEG2000 standard. The first transform stage must touch every codestream
image component (no more and no less), while subsequent stages must touch every
component produced by the previous stage.
What am I doing wrong? What's the fix?
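If I read the constraint literally, the first stage would have to span all 128 codestream components, with the batching expressed as four collections inside that one stage. My untested guess at a conforming invocation (the Mstage_xforms record fields are simply carried over from my attempt above, so treat every value as an assumption) would be:
kdu_compress -i img.rawl\*128#2097152 -o img.jpx -jpx_layers \* \
-jpx_space sLUM Creversible=yes Sdims="{1024,1024}" Clayers=4 \
Mcomponents=128 Nsigned=no Nprecision=12 \
Sprecision=12,12,12,12,12,13 Ssigned=no,no,no,no,no,yes \
Mstage_inputs:I25="{0,127}" Mstage_outputs:I25="{0,127}" \
Mstage_collections:I25="{32,32},{32,32},{32,32},{32,32}" \
Mstage_xforms:I25="{DWT,1,4,3,0},{DWT,1,4,3,0},{DWT,1,4,3,0},{DWT,1,4,3,0}" \
Mnum_stages=1 Mstages=25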
Hi stackoverflow community,
I have a tricky problem and I need your help to understand what is going on here.
My program captures frames from a video grabber card (Blackmagic), which works fine so far. At the same time I display the captured images with OpenCV (cv::imshow), which also works well (but is pretty CPU-wasting).
The captured images are supposed to be stored on disk as well. For this I push the captured frames (cv::Mat) onto a stack and finally write them asynchronously with OpenCV:
cv::VideoWriter videoWriter(path, cv::CAP_FFMPEG, fourcc, fps, *size);
videoWriter.set(cv::VIDEOWRITER_PROP_QUALITY, 100);
int id = metaDataWriter.insertNow(path);
while (this->isRunning) {
    // stackFrames is filled by the capture thread
    while (!this->stackFrames.empty()) {
        cv::Mat m = this->stackFrames.pop();
        videoWriter << m;
    }
}
videoWriter.release();
This code is running in an additional thread and will be stopped from outside.
The code is working so far, but it is sometimes pretty slow, which means my stack size increases until my system runs out of RAM and the process gets killed by the OS.
Currently it is running on my development system:
Ubuntu 18.04.5
OpenCV 4.4.0 compiled with CUDA
Intel i7 (10th generation), 32 GB RAM, Nvidia P620 GPU, M.2 SSD
Depending on the codec (fourcc) this produces a high CPU load. So far I have mainly used "MJPG" and "x264". Sometimes even MJPG pushes one CPU core to 100% load, and my stack grows until the program runs out of RAM. After a restart this problem is sometimes gone, and the load seems to be distributed across all cores.
According to the Intel documentation for my CPU, it has integrated hardware encoding/decoding for several codecs, but I guess OpenCV is not using them. OpenCV even uses its own ffmpeg and not my system's. Here is my OpenCV build command:
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D WITH_TBB=ON \
-D WITH_CUDA=ON \
-D BUILD_opencv_cudacodec=OFF \
-D ENABLE_FAST_MATH=1 \
-D CUDA_FAST_MATH=1 \
-D WITH_CUBLAS=1 \
-D WITH_V4L=ON \
-D WITH_QT=OFF \
-D WITH_OPENGL=ON \
-D WITH_GSTREAMER=ON \
-D OPENCV_GENERATE_PKGCONFIG=ON \
-D OPENCV_ENABLE_NONFREE=ON \
-D WITH_FFMPEG=1 \
-D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules \
-D WITH_CUDNN=ON \
-D OPENCV_DNN_CUDA=ON \
-D CUDA_ARCH_BIN=6.1 ..
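For what it's worth, which FFmpeg an OpenCV build actually links (and which video backends it has) can be verified at runtime; a small sketch:
// Dump OpenCV's build configuration; the "Video I/O" section lists the
// FFmpeg this build links and the other available backends.
#include <opencv2/core.hpp>
#include <iostream>

int main() {
    std::cout << cv::getBuildInformation() << std::endl;
    return 0;
}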
I have only just started developing on Linux with C++ (before, I worked with Java/Maven), so my use of cmake is still a work in progress; please go easy on me.
Basically my question is: how can I make the video encoding/writing faster and make the best use of hardware acceleration?
Or, if you think something else is fishy, please let me know.
BR Michael
-------- old - see the updated answer at the bottom --------
Thanks @Micka for the many tips; they put me on the right track.
Using cv::cudacodec::VideoWriter is not that easy: after compiling, I was not able to use it because of this error, and even if I could make it run, the deployment PC does not have an Nvidia GPU.
Since I am also going to use PCs with AMD CPUs, I can't use cv::CAP_INTEL_MFX as the apiPreference parameter of cv::VideoWriter.
But there is also cv::CAP_OPENCV_MJPEG, which works fine for the MJPG codec (not all video containers are supported; I use .avi, and sadly .mkv did not work with this configuration). If the user does not choose MJPG as the codec, I use cv::CAP_ANY and OpenCV decides what to use.
So,
cv::VideoWriter videoWriter(path, cv::CAP_OPENCV_MJPEG, fourcc, fps, *size);
works pretty well, even on my old system.
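The full selection logic is roughly this (a sketch; path, fourcc, fps and size come from elsewhere in my program):
// Use OpenCV's built-in MJPEG writer when the user chose MJPG,
// otherwise let OpenCV pick whatever backend is available.
int backend = (fourcc == cv::VideoWriter::fourcc('M', 'J', 'P', 'G'))
                  ? cv::CAP_OPENCV_MJPEG
                  : cv::CAP_ANY;
cv::VideoWriter videoWriter(path, backend, fourcc, fps, *size);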
Unfortunately I had never changed the apiPreference parameter before, other than from ffmpeg to gstreamer. In the OpenCV docs I had only read the last part, "cv::CAP_FFMPEG or cv::CAP_GSTREAMER", and did not notice that there is an "e.g." before it...
Thank you @Micka for making me read it again.
P.S. for my performance problem with cv::imshow I changed from
cv::namedWindow(WINDOW_NAME, cv::WINDOW_NORMAL);
to
cv::namedWindow(WINDOW_NAME, cv::WINDOW_OPENGL);
This obviously uses OpenGL and does a better job. Changing from cv::Mat to cv::UMat can also speed up performance, see here.
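As an illustrative sketch (not something I benchmarked in detail), the UMat variant of the display path looks like:
// Copy the frame into a UMat so OpenCV can process/display it via OpenCL.
cv::UMat uFrame;
m.copyTo(uFrame);               // m is the captured cv::Mat
cv::imshow(WINDOW_NAME, uFrame);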
-------------- EDIT: better solution ----------------
Since I still had problems with the OpenCV VideoWriter on some systems, I looked for another solution. Now I write the frames with FFMPEG.
With FFMPEG I can use the GPU or the CPU, depending on the codec I use.
If FFMPEG is installed via snapd (Ubuntu 18.04), it comes with CUDA enabled by default:
sudo snap install ffmpeg --devmode
(--devmode is optional, but I had problems writing files to specific locations, and this was the only way I could fix it)
And here is my code:
//this string is automatically created in my program, depending on user input and the parameters of the input frames
string ffmpegCommand = "ffmpeg -y -f rawvideo -vcodec rawvideo -framerate 50 -pix_fmt bgr24 -s 1920x1080 -i - -c:v h264_nvenc -crf 14 -maxrate:v 10M -r 50 myVideoFile.mkv";
FILE *pipeout = popen(ffmpegCommand.data(), "w");
int id = metaDataWriter.insertNow(path);
// loop will be stopped from another thread
while (this->isRunning) {
    // this->frames is a queue with cv::Mat elements in the right order;
    // it is filled by another thread
    while (!this->frames.empty()) {
        cv::Mat mat = frames.front();
        frames.pop();
        // one frame = total pixels * bytes per pixel of raw BGR data
        size_t s = mat.total() * mat.elemSize();
        fwrite(mat.data, 1, s, pipeout);
    }
}
fflush(pipeout);
pclose(pipeout);
So a pipe (pipeout) is used to write the mat.data to ffmpeg; ffmpeg itself does the encoding and the file writing. As for the parameters:
-y = overwrite output files without asking
-f = format; in this case rawvideo, used for the input
-vcodec = codec of the input, which is rawvideo as well, because cv::Mat.data carries no compression/codec
-framerate = the input frame rate I receive from my grabber card/OpenCV
-pix_fmt = the format of my raw data, in this case bgr24, i.e. 8 bits per channel, because I use a regular OpenCV BGR cv::Mat
-s = size of each frame, in my case 1920x1080
-i = input; the "-" means ffmpeg reads from stdin, so what we write to the pipe (pipeout) is captured by ffmpeg
-c:v = output codec, i.e. the codec used to encode the video; here h264_nvenc is used, which is a GPU codec
-r = output frame rate, also 50 in this case
myVideoFile.mkv = the name of the file produced by ffmpeg; you can change this name and path
Additional parameters for higher quality: -crf 14 -maxrate:v 10M
This works very well for me and uses the hardware acceleration of the GPU, or of the CPU when another codec is in charge.
I hope this helps other developers as well.
There are four different ways to send data across USB: Control, Interrupt, Bulk, and Isochronous. book ref 1
From the book (book ref 1), page 330:
... Bulk endpoints transfer large amounts of data. These endpoints are usually much larger (they can hold more characters at once) than interrupt endpoints. ...
When I get my input endpoint, I use the following code:
import usb.core
import usb.util
dev = usb.core.find(idVendor=0x0683, idProduct=0x4108)
if dev is None:
    raise ValueError('Device not found')
dev.reset()
dev.set_configuration()
cfg = dev.get_active_configuration()
intf = cfg[(0,0)]
epi = usb.util.find_descriptor(
    intf,
    # match the first IN endpoint
    custom_match=lambda e:
        usb.util.endpoint_direction(e.bEndpointAddress) ==
        usb.util.ENDPOINT_IN)
I tried to add the following, but it gives me a syntax error that I don't fully understand:
usb.util.endpoint_type()== \
usb.util.ENDPOINT_TYPE_BULK
Here is another very good source on how to work with USB: link 1
It seems that USB endpoints have parameters that can be specified in Python,
where bEndpointAddress indicates what endpoint this descriptor is describing.
bmAttributes specifies the transfer type. This can either be Control, Interrupt, Isochronous or Bulk Transfers. If an Isochronous endpoint is specified, additional attributes can be selected such as the Synchronisation and usage types.
wMaxPacketSize indicates the maximum payload size for this endpoint.
bInterval is used to specify the polling interval of certain transfers. The units are expressed in frames, thus this equates to either 1ms for low/full speed devices and 125us for high speed devices.
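For illustration, these descriptor fields can be read straight off pyusb's endpoint objects; a minimal sketch, assuming dev is the device found above:
# walk every endpoint of every interface and print its descriptor fields
for cfg in dev:
    for intf in cfg:
        for ep in intf:
            print('addr=0x%02x attrs=0x%02x maxpkt=%d interval=%d' % (
                ep.bEndpointAddress, ep.bmAttributes,
                ep.wMaxPacketSize, ep.bInterval))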
I have tried:
epi.wMaxPacketSize = 72000000 # to make the buffer large
epi.bmAttributes = 3          # attempting to change the transfer mode to bulk
My questions are:
Where do I specify, on Windows and/or Linux, what kind of endpoint I am using, and how do I do that? And how can I change the buffer size of each endpoint?
Try this:
epi = usb.util.find_descriptor(
    intf,
    # match the first IN endpoint that is also a bulk endpoint
    custom_match=lambda e:
        usb.util.endpoint_direction(e.bEndpointAddress) ==
        usb.util.ENDPOINT_IN
        and usb.util.endpoint_type(e.bmAttributes) ==
        usb.util.ENDPOINT_TYPE_BULK)
But you misunderstood the part about the parameters: bmAttributes and wMaxPacketSize are specified by the USB hardware and are not meant to be changed from Python.
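Once found, the endpoint is something you read from, not reconfigure; a usage sketch (illustrative values):
# read up to one packet from the bulk IN endpoint, with a 1 s timeout
data = epi.read(epi.wMaxPacketSize, timeout=1000)
print(len(data), 'bytes received')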
I'm trying to probe SystemVerilog signals using irun.
While googling, I came across the example below for dumping waves:
initial begin
    $recordfile("sv_wave");
    $recordvars("depth=all", pstest);
end
It seems to work, but the other variables show no value, just "No Value Available".
I use the script below to run the simulation:
irun \
+access+wrc \
-cdn_vip_root /u572/cadence/installs/VIPCAT113 \
/u572/sv/denaliMem.sv \
/u572/sv/denaliCdn_ahb.sv \
/u572/svExamples/simpleExample/hdl/master_mux.v \
/u572/svExamples/simpleExample/hdl/slave_mux.v \
hdl/ahb_verilog.v \
test2.v \
tb.sv \
-incdir /u572/svExamples/simpleExample \
-timescale 1ps/1ps -top pstest
What am I supposed to do to get values for the variables that show "No Value Available"?
On the simulator side, the command you can use is probe -create <signal> <options>. You can either type it in the irun simulator console or provide it as an instruction in a .tcl file at startup. Refer to the documentation provided with the simulator, under the section Simulator Tcl Commands / probe, for a full description and examples.
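As an illustrative sketch (check the exact option spellings against your installation's docs), a startup Tcl file passed to irun via -input could look like:
# probe.tcl -- record everything below the top-level scope pstest
probe -create pstest -depth all -all
run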
I'm trying to find out if I can store values captured at irregular intervals into an RRD.
I have a script which connects to an ActiveMQ server, subscribes to a queue or topic, and compares the message header timestamp with Time.now to give me a delta.
The data I get from my script is as below:
000000.681 Time Delta
000000.793 Time Delta
000000.583 Time Delta
000001.994 Time Delta
The issue I face is that messages from ActiveMQ don't necessarily come in at a 'regular interval' (e.g. 1/sec, 1/2 sec). They could come in at 5 per second at peak times and 1 every 10 seconds at quiet times.
I'd like to be able to capture the output into an RRD so I can graph against it, but having had a look around on the internet it's not clear whether this can be done, or whether I'd be better off using another database/store to capture the data.
The eventual output I'd like would be a graph showing the time delta for each message.
Having had a read of the docs, it looks like I could set the RRD's --step to 1 second and the heartbeat to 2 seconds.
I found a couple of posts here and here which talk about being careful with the intervals and the fact that my data might be averaged, smoothed, or otherwise messed about with when written to the RRD. But nothing I've found online has a usage case similar to mine, so it's a bit hard to know where I should be looking. I'd like my data to be stored as a point for each message received.
I have a couple of RRDs set up for testing; one takes the AVERAGE, the other takes the LAST, to produce some graphs. My heartbeat is set to 100 seconds, but the interval is set to 1. I'm now getting data which looks correct. I'm also guessing that the empty spaces in the graph from the LAST RRA are due to my data coming in slower than 1 per second?
I'll post my create code & output as an answer.
rrdtool will always store data at regular intervals. As data is handed over to rrdtool, it first gets re-sampled to the --step interval, and then further consolidated to the intervals set up in the RRAs.
The exact arrival time of the data (to the millisecond) is taken into account as the re-sampling takes place ...
If two data points are further apart than the heartbeat (mrhb) allows, the data is considered non-continuous and rrdtool will store 'unknown' for the affected interval.
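For example (illustrative timestamps, against a 1-second --step database like the ones created below), irregular arrivals are simply fed with their own timestamps and rrdtool does the re-sampling:
# 'N' means "now"; explicit epoch timestamps also work, at any spacing
rrdtool update test.rrd N:0.681
rrdtool update test.rrd 1355409607:0.583 1355409611:1.994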
I ended up creating two RRDs to experiment with:
rrdtool create test1.rrd \
--step '1' \
'DS:ds0:GAUGE:5:0:U' \
'RRA:AVERAGE:0.5:1:86400' \
'RRA:MAX:0.5:1:86400' \
'RRA:AVERAGE:0.5:60:10080' \
'RRA:MAX:0.5:60:10080' \
'RRA:AVERAGE:0.5:120:21600' \
'RRA:MAX:0.5:120:21600' \
'RRA:AVERAGE:0.5:300:105120' \
'RRA:MAX:0.5:300:105120'
and
rrdtool create test.rrd \
--step '1' \
'DS:ds0:GAUGE:5:0:U' \
'RRA:AVERAGE:0.5:1:86400' \
'RRA:LAST:0.5:1:86400' \
'RRA:AVERAGE:0.5:60:10080' \
'RRA:LAST:0.5:60:10080' \
'RRA:AVERAGE:0.5:120:21600' \
'RRA:LAST:0.5:120:21600' \
'RRA:AVERAGE:0.5:300:105120' \
'RRA:MAX:0.5:300:105120'
These allow me to store:
1-sec samples, archive kept for 1 day back
1-min samples, archive kept for 7 days back
2-min samples, archive kept for 30 days back
5-min samples, archive kept for 1 year back
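(The retention follows from step * rows per RRA: the 1-minute RRAs keep 60 s * 10080 rows = 604800 s = 7 days, and the 5-minute ones 300 s * 105120 rows = 31536000 s = 1 year.)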
These give me some nice graphs.
The graphs were made in PHP with the following code:
<?php
$opts = array(
'--width', '600',
'--height', '100',
'--title', 'Avg Time Delta xxxxxxxxxx (Last 1 Hr)',
'--vertical-label', 'Time Delta',
'--watermark', 'xxxxxxxxxx',
'--start', 'end-1h',
'DEF:out=test.rrd:ds0:AVERAGE',
'DEF:max=test.rrd:ds0:MAX',
'AREA:out#9966FF:Avg Time Delta',
'LINE:max#996600:Max Time Delta',
);
$ret = rrd_graph("graphs/1hr-graph.png", $opts);
if( !is_array($ret) )
{
    $err = rrd_error();
    echo "rrd_graph() ERROR: $err\n";
}
echo '<img src="http://server/graphs/1hr-graph.png">';
echo '<BR>';
?>
<?php
$opts = array(
'--width', '600',
'--height', '100',
'--title', 'Last Time Delta xxxxxxxxxx (Last 1 Hr)',
'--vertical-label', 'Time Delta',
'--watermark', 'xxxxxxxxxx',
'--start', 'end-1h',
'DEF:avg=test1.rrd:ds0:AVERAGE',
'DEF:last=test1.rrd:ds0:LAST',
'AREA:avg#99AAFF:Avg Time Delta',
'LINE:last#99AA00:Last Time Delta',
);
$ret = rrd_graph("graphs/1hr-last.png", $opts);
if( !is_array($ret) )
{
    $err = rrd_error();
    echo "rrd_graph() ERROR: $err\n";
}
echo '<img src="http://server/graphs/1hr-last.png">'
?>
From my own sanity checking and from watching the data in real time, it looks like both of those graphs are correct but behave in slightly different ways. When the data feed being monitored is quiet and I'm only getting one message every 10 seconds, I get a lot of gaps in the LAST graphs, whereas the AVERAGE graphs are smoothed out to fill the gaps. I also tried setting another RRD to ABSOLUTE, but its graphs look 'wrong' and the times are all below 1.0.
So it looks like I can feed my RRD at whatever interval I like from my script. The RRD will re-sample my data to its defined interval (in my case 1 sec) and then do what it needs to do based on the data source type (GAUGE, ABSOLUTE, etc.). With my heartbeat set to 100 I should always receive some data before the 100-second timeout, thus avoiding unknown (NaN) entries in my database.
At the moment I can't tell how well-behaved this config will be during times of disruption (e.g. delayed messages from the AMQ server). I will try to run some tests when I get some spare time and report back with anything significant.
I am trying to use the textcleaner script to clean up real-life images that I am using with OCR. The issue I am having is that the images sent to me are sometimes rather large (3.5 MB - 5 MB, 12 MP pictures). The command I run, textcleaner -g -e none -f <int # 10 - 100> -o 5 result1.jpg out1.jpg, takes about 10 seconds at -f 10 and minutes or more at -f 100.
To get around this I tried using ImageMagick to compress the image so it would be much smaller. Using convert -strip -interlace Plane -gaussian-blur 0.05 -quality 50% main.jpg result1.jpg I was able to take a 3.5 MB file and convert it almost losslessly to ~400 KB. However, when I run textcleaner on this new file it STILL acts as if it were a 3.5 MB file (the times are almost exactly the same). I have tested the same textcleaner settings against a file that is ~400 KB without compression, and it is almost instant, while -f 100 takes about 12 seconds.
I am about out of ideas. I would like to follow the example here, as I am in almost exactly the same situation. However, at the current speed of transformation an entire OCR process could take over 10 minutes, when I need it to be around 30 seconds.
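One thing I have not tried yet (my own guess): textcleaner's runtime presumably tracks pixel dimensions rather than file size, and -quality 50% only shrinks the bytes while leaving all 12 MP of pixels in place, so actually downsampling first should cut the runtime:
# halve width and height (a quarter of the pixels) before cleaning
convert main.jpg -resize 50% result1.jpg
textcleaner -g -e none -f 10 -o 5 result1.jpg out1.jpg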