vector<Magick::Image> frames;
int delay = 20;
for (auto iter = taskList.begin(); iter != taskList.end(); ++iter) {
    /* some code elided here */
    frames.push_back(*img);
}
// write images to file, works fine
Magick::writeImages(frames.begin(), frames.end(), "xxx.gif");
Magick::Blob tmpBlob;
// Writing the images to a blob: I then decode the data in the blob
// and write it to yyy.gif. That gif file contains only the first frame.
Magick::writeImages(frames.begin(), frames.end(), &tmpBlob, true);
// The length is far too small.
LOG_DEBUG("blob data length: %zu", tmpBlob.length());
// Read the blob back into an image list and print the size of the list:
// the size is 1.
vector<Magick::Image> image_list;
Magick::readImages(&image_list, tmpBlob);
LOG_DEBUG("new frames length: %zu", image_list.size());
Hi, I have a problem when I try to write a list of Image objects to a Blob with the writeImages function from ImageMagick's (version 7.0.3) Magick++ STL.h. It doesn't work correctly: it seems that only one frame is written into the blob. But writing the same image list to a gif file works just fine. Could anybody help me out?
Problem solved. The reason it failed is that I didn't call img->magick("GIF") on each frame, which caused the blob to be written incorrectly.
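For anyone hitting the same wall, here is a minimal sketch of the fix, reusing taskList, img, and delay from the snippet above (the animationDelay call is my assumption about what the unused delay variable was for):

std::vector<Magick::Image> frames;
for (auto iter = taskList.begin(); iter != taskList.end(); ++iter) {
    /* build *img as before */
    img->magick("GIF");         // the key step: tag every frame as GIF
    img->animationDelay(delay); // assumed use of the delay variable (1/100 s units)
    frames.push_back(*img);
}
Magick::Blob tmpBlob;
// With the magick set on every frame, all frames end up in the blob.
Magick::writeImages(frames.begin(), frames.end(), &tmpBlob, true);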
So I'm writing a C++ program that will take a wav file, generate a visualization, and export the video alongside the audio using ffmpeg (via a pipe). I've been able to get output to ffmpeg just fine, and ffmpeg produces a video with the visualization and audio.
The problem is that the video and audio are desyncing. The video is simply too fast and ends before the song is completed (the video file is the correct length; the waveform just flatlines and ends, indicating that ffmpeg reached the end of the video and keeps using the last frame it received until the audio ends). So I'm not sending enough frames to ffmpeg.
Below is a truncated version of the source code:
// Example code
int main()
{
    // LoadAudio();
    uint32_t frame_max = audio.sample_rate / 24; // 24 frames per second
    uint32_t frame_counter = 0;
    // InitializePipe2FFMPEG();
    // (Left channel and right channel are always equal in size)
    for (uint32_t i = 0; i < audio.left_channel.size(); ++i)
    {
        // UpdateWaveform4Image();
        if (frame_counter % frame_max == 0)
        {
            // DrawImageAndSend2Pipe();
            frame_counter = 1;
        }
        else
        {
            ++frame_counter;
        }
    }
    // FlushAndClosePipe();
    return 0;
}
The commented-out functions are fake and irrelevant. I know this because UpdateWaveform4Image() updates the waveform used to generate the image on every sample. (I know that's inefficient, but I'll worry about optimization later.) The waveform is a std::vector in which each element stores the y-coordinate of one sample. It has no effect on when the program generates a new frame for the video.
Also, ffmpeg is set to output 24 frames per second. Trust me, I thought that was the problem too, because by default ffmpeg outputs at 25 fps.
My line of thinking for the modulus check is that frame_counter is incremented on every sample. frame_max equals 2000 because 48000 / 24 = 2000. I know the audio is clocked at 48 kHz because I created the file myself. So it SHOULD generate a new image every 2000 samples.
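As a standalone sanity check of that arithmetic (not the real program, just the same counter logic run over a fake 60 seconds of 48 kHz samples), the loop does emit exactly 24 frames per second, which suggests the desync comes from somewhere other than this pacing logic:

#include <cstdint>
#include <cstdio>

int main()
{
    const uint32_t sample_rate = 48000;
    const uint32_t fps = 24;
    const uint32_t frame_max = sample_rate / fps;    // 2000
    const uint32_t total_samples = sample_rate * 60; // pretend 60 s of audio

    uint32_t frame_counter = 0;
    uint32_t frames_sent = 0;
    for (uint32_t i = 0; i < total_samples; ++i)
    {
        if (frame_counter % frame_max == 0)
        {
            ++frames_sent; // stands in for DrawImageAndSend2Pipe()
            frame_counter = 1;
        }
        else
        {
            ++frame_counter;
        }
    }
    // Prints "frames sent: 1440, expected: 1440" (24 fps * 60 s).
    std::printf("frames sent: %u, expected: %u\n", frames_sent, fps * 60);
    return 0;
}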
Here is a link to the output video: [output]
Any advice would be helpful.
EDIT: Skip to 01:24 to see the waveform flatline.
Recently, I have been having trouble converting a Mat frame captured from my webcam by OpenCV into a plain JPEG unsigned char array. I've tried one or two approaches found on Google, but the result doesn't seem to be a correct JPEG uchar array. Here is a piece of my code:
VideoCapture cap(0);
if (!cap.isOpened())
    return -1;

Mat frame;
cap >> frame;
if (frame.empty())
    return -1;

int size = frame.total() * frame.elemSize();
unsigned char* buffer = new unsigned char[size];
memcpy(buffer, frame.data, size);
Then I used fwrite to write that buffer into a file.jpg (it looks silly, but it would work if the buffer were correct), but the file cannot be opened or recognized as a JPEG image.
Can anyone help me figure this out?
Check out the OpenCV function imencode(). It fills a buffer with data encoded as the requested image type (based on the file extension argument), so the buffer can be written to a file and other programs will know what to do with it.
The problem with your current approach is that you are writing raw image data as if it were a JPEG, but JPEG is a compressed data format, so programs won't know what to do with the data you've written. It would be equivalent to taking an arbitrary binary file and just renaming it to .jpg: the file won't have the right headers to be decoded as an image, and the data won't match the JPEG format anyway.
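A minimal sketch of that approach, adapted to the code in the question (the output file name is just for illustration):

#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened())
        return -1;

    cv::Mat frame;
    cap >> frame;
    if (frame.empty())
        return -1;

    // imencode() produces a complete JPEG byte stream, headers included.
    std::vector<uchar> buffer;
    if (!cv::imencode(".jpg", frame, buffer))
        return -1;

    // These bytes can be written out as-is and opened by any image viewer.
    FILE* f = std::fopen("file.jpg", "wb");
    if (!f)
        return -1;
    std::fwrite(buffer.data(), 1, buffer.size(), f);
    std::fclose(f);
    return 0;
}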
I've been working on a webcam video recorder, and I got interested in trying everything related to this topic, but there's one problem that I can't solve.
Everything that you might wonder about can be found here
https://msdn.microsoft.com/en-us/library/windows/desktop/dd757677%28v=vs.85%29.aspx and here
https://msdn.microsoft.com/en-us/library/windows/desktop/dd757694%28v=vs.85%29.aspx
Now, in this code
if (capSetCallbackOnVideoStream(hCapWnd, capVideoStreamCallback))
{
    capCaptureSequenceNoFile(hCapWnd); // capture
}
I make sure that every frame that gets captured is sent to capVideoStreamCallback.
Now what I'm trying to do is transform a frame into an image and save it somewhere. This might be useless, but it's interesting, and it is surely possible.
Here is my capVideoStreamCallback function (comments included):
LRESULT CALLBACK capVideoStreamCallback(HWND hWnd, LPVIDEOHDR lpVHdr)
{
    BYTE* Image;
    BITMAPINFO* TempBitmapInfo = new BITMAPINFO;
    ULONG Size;

    // First we need to get the full size of the image.
    Size = capGetVideoFormat(hWnd, TempBitmapInfo, sizeof(BITMAPINFO)); // header size
    Size += lpVHdr->dwBytesUsed; // plus the bytes used by the frame data

    Image = new BYTE[Size];
    memcpy(Image, TempBitmapInfo, sizeof(BITMAPINFO)); // copy the header into Image
    // lpVHdr is the LPVIDEOHDR passed into the callback function.
    memcpy(Image + sizeof(BITMAPINFO), lpVHdr->lpData, lpVHdr->dwBytesUsed); // copy the pixel data

    // Write the image out byte by byte.
    ofstream output("image.dib", ios::binary);
    for (ULONG i = 0; i < Size; i++)
    {
        output << (BYTE)Image[i];
    }
    output.close();

    delete[] Image;
    delete TempBitmapInfo;
    return (LRESULT)TRUE;
}
So, the information about every frame that gets sent to capVideoStreamCallback can be found in lpVHdr, which is a structure (https://msdn.microsoft.com/en-us/library/windows/desktop/dd757688%28v=vs.85%29.aspx), and what I'm trying to do here is take that information and transform it into an image.
I first get the full size of the image by adding the size of the header to the size of the data, then dynamically allocate a BYTE array called Image and copy the header and the data into it using memcpy. I finally use ofstream to write the bytes to a file, and that's pretty much it.
The problem is that everything works just fine, except that the image is somehow corrupted: it cannot be opened.
What is wrong in what I'm doing? It seems so logical, but it's not working.
Please share your ideas and thanks for reading.
Here's the answer, thanks to Frankie-C from http://codeproject.com, who reminded me that I needed a BITMAPFILEHEADER structure at the top of the BITMAP file.
There are also a few extra things you need to do to get the image to show up the way it should, such as flipping the bytes to get BGR instead of RGB. Here's a nice tutorial explaining that: http://tipsandtricks.runicsoft.com/Cpp/BitmapTutorial.htm
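For completeness, a sketch of the missing piece, reusing TempBitmapInfo and lpVHdr from the callback above (the offsets simply mirror the question's sizeof(BITMAPINFO) layout and assume an uncompressed frame):

// Prepend a BITMAPFILEHEADER so viewers can open the file.
BITMAPFILEHEADER fileHeader = {};
fileHeader.bfType = 0x4D42; // the 'BM' magic number
fileHeader.bfOffBits = sizeof(BITMAPFILEHEADER) + sizeof(BITMAPINFO);
fileHeader.bfSize = fileHeader.bfOffBits + lpVHdr->dwBytesUsed;

ofstream output("image.bmp", ios::binary);
output.write(reinterpret_cast<const char*>(&fileHeader), sizeof(fileHeader));
output.write(reinterpret_cast<const char*>(TempBitmapInfo), sizeof(BITMAPINFO));
output.write(reinterpret_cast<const char*>(lpVHdr->lpData), lpVHdr->dwBytesUsed);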
I am having issues with reading a recording I made using the Recorder class in OpenNI2 with the Asus Xtion PRO LIVE. The problem is that once every ~50 frames a wrong frame is read from the recording. I tested this by storing the generated image (converted to an OpenCV matrix) as a .png image (using the OpenCV imwrite function) with an index. Shown below is my code; I also tried the code posted in the "OpenNI2: Reading in .oni Recording" question, and that doesn't work either. The video streams, device, VideoFrameRefs and the playback controller are all initialized in an initialization function. I can post this as well if necessary.
openni::VideoStream depthStream;
openni::VideoStream colorStream;
openni::Device device;
openni::VideoFrameRef m_depthFrame;
openni::VideoFrameRef m_colorFrame;
openni::PlaybackControl *controler;

typedef struct _ImagePair {
    cv::Mat GrayscaleImage;
    cv::Mat DepthImage;
    cv::Mat RGBImage;
    float AverageDistance;
    std::string FileName;
    int index;
} ImagePair;

ImagePair SensorData::getNextRawImagePair(){
    ImagePair result;
    openni::Status rc;
    std::cout << "Getting next raw image pair...";

    rc = controler->seek(depthStream, rawPairIndex);
    if (!correctStatus(rc, "Seek depth")) return result;

    // Wait until both streams have delivered a new frame.
    // (A plain array avoids leaking a small allocation on every call.)
    openni::VideoStream* m_streams[2] = { &depthStream, &colorStream };
    bool newDepth = false, newColor = false;
    while (!newDepth || !newColor) {
        int changedIndex;
        openni::Status rc = openni::OpenNI::waitForAnyStream(m_streams, 2, &changedIndex);
        if (rc != openni::STATUS_OK)
        {
            printf("Wait failed\n");
            //return 1;
        }
        switch (changedIndex)
        {
        case 0:
            rc = depthStream.readFrame(&m_depthFrame);
            if (!correctStatus(rc, "Read depth")) {
                return result;
            }
            newDepth = true;
            break;
        case 1:
            rc = colorStream.readFrame(&m_colorFrame);
            if (!correctStatus(rc, "Read color")) {
                return result;
            }
            newColor = true;
            break;
        default:
            printf("Error in wait\n");
        }
    }

    // Convert the RGB frame to a BGR cv::Mat.
    cv::Mat cv_image;
    const openni::RGB888Pixel* colorImageBuffer = (const openni::RGB888Pixel*)m_colorFrame.getData();
    cv_image.create(m_colorFrame.getHeight(), m_colorFrame.getWidth(), CV_8UC3);
    memcpy(cv_image.data, colorImageBuffer, 3 * m_colorFrame.getHeight() * m_colorFrame.getWidth() * sizeof(uint8_t));
    // Convert to BGR (the OpenCV channel order).
    cv::cvtColor(cv_image, cv_image, cv::COLOR_RGB2BGR);
    result.RGBImage = cv_image.clone();
    // Convert to grayscale.
    cv::cvtColor(cv_image, cv_image, cv::COLOR_BGR2GRAY);
    result.GrayscaleImage = cv_image.clone();

    // Convert the depth frame to a 16-bit cv::Mat.
    const openni::DepthPixel* depthImageBuffer = (const openni::DepthPixel*)m_depthFrame.getData();
    cv_image.create(m_depthFrame.getHeight(), m_depthFrame.getWidth(), CV_16UC1);
    memcpy(cv_image.data, depthImageBuffer, m_depthFrame.getHeight() * m_depthFrame.getWidth() * sizeof(INT16));
    result.DepthImage = cv_image.clone();

    result.index = rawPairIndex;
    rawPairIndex++;
    std::cout << "done" << std::endl;
    return result;
}
I also tried it without using the waitForAnyStream part, but that only made it worse. The file I am trying to load is over 1 GB; I'm not sure if that is a problem. The behaviour seems random, because the indices of the wrong images are not always the same.
UPDATE:
I changed the seek function to seek in the color stream instead of the depth stream. I also ensured that each stream is only waited for once in the waitForAnyStream function, by setting the corresponding pointer in m_streams to null.
I found out that it is possible to get the actual frame index of each frame (.getFrameIndex), so I was able to compare the indices. After getting 1000 images there were 18 wrong color images, 17 of which had an index error of 53 and 1 of which had an error of 46. The depth images had an almost constant index error of 9.
The second thing I found was that, after adding a sleep of 1 ms (I also tried 10 ms, with the same results) before the waitForAnyStream call, the returned indices change significantly. The indices don't make large jumps anymore, but the color images have a constant offset of 53 and the depth images have an offset of 9.
When I change the seek function to seek in the depth stream, then with the delay the color stream has a constant error of 46 and the depth stream has an error of 1. Without the delay the color stream has an error of 45, and the depth stream has no error, with occasional spikes of errors of 1.
I also looked at the indices when I don't use seek and waitForAnyStream, but just do it as proposed in the answer to "OpenNI2: Reading in .oni Recording". This shows that when the file is read by just calling readFrame multiple times, the first color frame has index 59 instead of 0. After checking, it turns out that frame 0 does exist and is an earlier frame than frame 59. So just opening the file and using readFrame doesn't work at all. There are, however, no index errors, just like when I added the sleep to my own implementation.
UPDATE 2:
I have come to the conclusion that either the OpenNI2 library doesn't work properly or I have installed it incorrectly. This is because I am also having problems setting the Xtion to a resolution of 640x480 for both streams: when I do this, the depth image only gets updated once every ~100 seconds. I have written a quick fix for my original problem that just filters out the frames whose indices are wrong and continues with the next image (sketched below).
I would still like to know how to fix this properly, so if you know, please tell me.
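For reference, a sketch of that quick fix as it could sit inside getNextRawImagePair() after both readFrame calls (expectedIndex is a hypothetical counter kept in step with rawPairIndex; the exact bookkeeping depends on the rest of the class):

// Compare the recorded frame indices with the index we expect and
// skip mismatched pairs instead of returning a wrong image.
int depthIndex = m_depthFrame.getFrameIndex();
int colorIndex = m_colorFrame.getFrameIndex();
if (depthIndex != expectedIndex || colorIndex != expectedIndex) {
    std::cout << "index mismatch (depth " << depthIndex
              << ", color " << colorIndex << "), skipping pair" << std::endl;
    rawPairIndex++;
    return getNextRawImagePair(); // just continue with the next image
}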
UPDATE 3:
The code I am using for setting the resolution and fps for recording is:
// color stream
openni::VideoMode color_videoMode = m_colorStream.getVideoMode();
color_videoMode.setResolution(WIDTH, HEIGHT);
color_videoMode.setFps(30);
m_colorStream.setVideoMode(color_videoMode);

// depth stream
openni::VideoMode depth_videoMode = m_depthStream.getVideoMode();
depth_videoMode.setResolution(WIDTH, HEIGHT);
depth_videoMode.setFps(30);
m_depthStream.setVideoMode(depth_videoMode);
I forgot to mention that I also tested the resolution by running the sample viewer example program (I think; it was a few months ago) and changing the resolution in the XML config file. There the color images were shown fine, but the depth images updated very rarely.
I am trying to create a video from a set of large raw images that I have.
The code is as follows:
int imageWidth = 687;
int imageHeight = 916;
int fps = 3;
int ex = -1;
Size size(imageWidth, imageHeight);
VideoWriter outputVideo;
outputVideo.open(MovieOutput, ex, fps, size, true);
if (!outputVideo.isOpened()) // note the '!': report failure, not success
{
    cout << "error opening output video";
}
for (int frameNo = 0; frameNo < 58; frameNo++)
{
    ostringstream outfilename;
    outfilename << InputDir << (frameNo + 1) << ".jpg";
    rawimages.Read(frameNo);
    Mat image = rawimages.ToOpencvImage();
    imwrite(outfilename.str(), image);
    outputVideo << image;
    imshow("Image", image);
    if (waitKey(30) >= 0) break;
}
I can see that the images are shown on screen and that the different jpg files are saved to the hard disk.
I can also see that the output avi is created, but its size is zero.
What is the problem with this code?
One note:
The output frame size is very big. Can it generate a movie at that size?
To summarize the comments: you pass the second parameter of the VideoWriter open command with a value of -1. This is supposed to open a codec selection dialogue on Windows, but as of OpenCV 2.4.5 the dialogue seems to be bugged: it appears, but I couldn't manage to write to a file afterwards.
Selecting a codec directly works fine and makes more sense in my opinion. More info about this command and the available codecs can be found here.
outputVideo.open("example.avi", CV_FOURCC('M','J','P','G'), fps, size, true);
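Put together, a minimal self-contained sketch (the output file name is illustrative, and synthetic gray frames stand in for the raw images; note the added '!' in the isOpened() check):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    int imageWidth = 687;
    int imageHeight = 916;
    int fps = 3;
    cv::Size size(imageWidth, imageHeight);

    cv::VideoWriter outputVideo;
    // MJPG is widely available and sidesteps the buggy codec dialogue.
    outputVideo.open("example.avi", CV_FOURCC('M', 'J', 'P', 'G'), fps, size, true);
    if (!outputVideo.isOpened())
    {
        std::cout << "error opening output video";
        return -1;
    }

    for (int frameNo = 0; frameNo < 58; frameNo++)
    {
        // Frames must match the size passed to open(), or nothing is written.
        cv::Mat image(size, CV_8UC3, cv::Scalar::all(frameNo * 4));
        outputVideo << image;
    }
    return 0;
}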