How can I flip or rotate a diagram picture generated in Word with an m2doc service? - m2doc

When I create a diagram with Sirius, m2doc is able to generate the image in the Word document. My problem is that this image is too large in portrait mode and it is not possible to rotate it. I have a class that implements MImage and a service. How can I get this from an ImageServices? The link that helped me: http://www.m2doc.org/ref-doc/2.0.2/m2doc_service_imageservices.html
I tried to transform the image using an ImageData from "org.eclipse.swt.graphics.ImageData", but it is not working:
public MImage rotate(MImage image) {
    java.lang.System.out.println("START TRANSFORMATION " + image);
    SbocsImage sbocsImage = null;
    try {
        // Build SWT ImageData from the original image bytes and rotate it 90 degrees to the right.
        ImageResizer imgSizer = new ImageResizer(new ImageData(image.getInputStream()));
        sbocsImage = new SbocsImage(imgSizer.rotateImage(SWT.RIGHT), image);
    } catch (IOException e1) {
        e1.printStackTrace();
    }
    java.lang.System.out.println("END CLOSE TRANSFORMATION " + image);
    // NOTE: this returns the original "image", so the rotated sbocsImage is discarded.
    return image;
}
The image is not rotated.

On the master branch there is a new implementation of MImage that uses a BufferedImage. It is used in the resize() service.
You can have a look at issue 344 and the corresponding commit for more context.
Can you open an issue to create the rotate service?

Related

How to store images on QTableView (Qt and openCV)

I have a GUI composed of a QTableView with 18 columns and a QGraphicsView where I upload images using a button. I am able to draw a box on a region of interest (ROI), and as soon as I do that with a mouse right click a small dialog opens. This dialog is composed of a TabWidget with two pages. The first page has a small QGraphicsView that carries the cropped image (say Image A) captured with the previously drawn box. The second page of the TabWidget also has a QGraphicsView that I use to upload a previously saved .jpg image (say Image B) that is on my Desktop. As soon as I hit OK, the information on this dialog is transferred to the first row of the QTableView. Image A is stored in one column (column 17 to be precise) and Image B in an additional column (column 18 to be precise).
I was able to store the cropped Image A in the QTableView (which was the most difficult part, because I had to understand the conversion between Qt and openCV formats and vice versa), but not Image B (which I handle as a simple image saved on my Desktop).
I tried different options. Option 1: I created a function with which I handle the image that I upload on the second page of the TabWidget (Image B). Option 2: I tried to debug to narrow down the problem, and it seems that the compiler is not seeing the image I am trying to store.
I am attaching the most important parts of the code that carry this bug.
In clipscene.h, this is how I declare the most important variables and functions I use:
class clipScene : public QDialog
{
    Q_OBJECT
public:
    explicit clipScene(QWidget *parent = 0);
    ~clipScene();
    void setImage(QImage img);
    void setClassifiedImage(QImage img);
    SelectionData getData();
    void setData(SelectionData newdata);
private:
    SelectionData returnData;
    Ui::clipScene *ui;
    QImage simg;
    QImage featureClassified;
};
In clipscene.cpp, this is the setClassifiedImage function I use to handle Image B:
void clipScene::setClassifiedImage(QImage img)
{
    featureClassified = img;
    QGraphicsPixmapItem* item = new QGraphicsPixmapItem(QPixmap::fromImage(featureClassified));
    workingImageScene->addItem(item);
}
In clipscene.cpp, using setImage I successfully handle the Qt/openCV conversion of the cropped image:
void clipScene::setImage(QImage img)
{
    simg = img;
    QGraphicsPixmapItem* item = new QGraphicsPixmapItem(QPixmap::fromImage(simg));
    scene->addItem(item);
    ui->graphicsViewClipped->show();
    cv::Mat input = cv::Mat(simg.height(), simg.width(), CV_16UC3, simg.bits(), simg.bytesPerLine());
    // .....operations...
    // .....operations...
}
In clipscene.cpp, the following img1 represents the cropped Image A and img2 represents Image B (which is currently neither stored nor passed):
void clipScene::on_acceptBtn_clicked()
{
    // .....operations...
    // This will save the cropped Image A successfully
    QPixmap img1;
    img1.convertFromImage(simg);
    QByteArray img1Array;
    QBuffer buffer1(&img1Array);
    buffer1.open(QIODevice::WriteOnly);
    img1.save(&buffer1, "PNG");
    returnData.mSave = img1Array;
    // This will save a different Image B
    // But here the compiler says that no arguments are being passed
    QPixmap img2;
    QByteArray img2Array;
    QBuffer buffer2(&img2Array);
    buffer2.open(QIODevice::WriteOnly);
    img2.save(&buffer2, "PNG");
    returnData.mClassImg = img2Array;
}
Finally, when all information is passed and stored in the QTableView, I am able to double-click on one row of the QTableView, and the same dialog I used before for manually storing the data pops up with all the recorded information and both images (Image A and Image B).
The part of the code that does this in MainWindow (and where the bug is connected) is below:
void MainWindow::onTableClick(const QModelIndex &index)
{
    int row = index.row();
    SelectionData currentData;
    currentData.mName = index.sibling(row, 1).data().toString();
    // ....additional data....
    // ....additional data...
    currentData.mSave = index.sibling(row, 17).data().toByteArray();
    currentData.mClassImg = index.sibling(row, 18).data().toByteArray();
    QPixmap iconPix;
    if(!iconPix.loadFromData(index.sibling(row, 17).data().toByteArray())) {
    }
    QPixmap iconPix2;
    if(!iconPix2.loadFromData(index.sibling(row, 18).data().toByteArray())) {
    }
    clipScene d(this);
    d.setData(currentData);
    d.setImage(iconPix.toImage());
    d.setClassifiedImage(iconPix2.toImage());
    // ....additional operations....
}
I have been struggling with this problem for some days now; anyone who can shed some light on this or suggest a solution would be a great help.
I am not sure if this is still useful to you, but since I had the same problem this might be helpful. I think you are never passing the image to the destination table. Are you ever loading the image? Are you ever doing something like this:
clipscene.h
private:
    QList<QGraphicsPixmapItem*> leftClipPix;
clipscene.cpp
void clipScene::load_classifiedImageFromDb(QImage image)
{
    clasImg = image;
    QGraphicsPixmapItem* item = new QGraphicsPixmapItem(QPixmap::fromImage(image));
    leftClipPix.append(item);
    workingImageScene->addItem(item);
    ui->yourgraphicsView->show();
    ui->yourgraphicsView->setSceneRect(QRectF(0, 0, image.width(), image.height()));
}
Also, if, as you said, you are trying to upload an image from your Desktop or a database, then it may be useful to do something like:
void clipScene::load_classifiedImage()
{
    QString dir = QFileDialog::getOpenFileName(this, tr("Open image directory"), "", tr("Images (*.tif *.jpg *.png *.jpeg *.bmp *.gif *.tiff)"));
    QImage image;
    if(QString::compare(dir, QString()) != 0) {
        image = QImage(dir);
        QGraphicsPixmapItem* item = new QGraphicsPixmapItem(QPixmap::fromImage(image));
        leftClipPix.append(item);
        workingImageScene->addItem(item);
    }
    ui->yourgraphicsView->show();
    ui->yourgraphicsView->setSceneRect(QRectF(0, 0, image.width(), image.height()));
    clasImg = image;
}
and connect the load_classifiedImage() function to a QPushButton, as in the sketch below.
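For example, the hook-up could go in the dialog's constructor. This is only a sketch: the button name classifyBtn is a hypothetical object name from the .ui file, and it assumes load_classifiedImage() is an accessible member function, which is all the Qt 5 connect syntax needs:
clipScene::clipScene(QWidget *parent) :
    QDialog(parent),
    ui(new Ui::clipScene)
{
    ui->setupUi(this);
    // Clicking the (hypothetical) classifyBtn opens the file dialog and loads Image B into the scene.
    connect(ui->classifyBtn, &QPushButton::clicked,
            this, &clipScene::load_classifiedImage);
}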

Cannot detect Faces using Offline Affectiva SDK

I'm new to the Affectiva Emotion Recognition SDK. I have been following the example from the video at this link, but when I feed it some pictures, for example this image, the face cannot be detected.
My code looks like this:
Listener
class Listener : public affdex::ImageListener {
    void onImageResults(std::map<affdex::FaceId, affdex::Face> faces, affdex::Frame image) {
        std::string pronoun = "they";
        std::string emotion = "neutral";
        for (auto pair : faces) {
            affdex::FaceId faceId = pair.first;
            affdex::Face face = pair.second;
            if (face.appearance.gender == affdex::Gender::Male) {
                pronoun = "Male";
            } else if (face.appearance.gender == affdex::Gender::Female) {
                pronoun = "Female";
            }
            if (face.emotions.joy > 25) {
                emotion = "Happy :)";
            } else if (face.emotions.sadness > 25) {
                emotion = "Sad :(";
            }
            cout << faceId << " : " << pronoun << " looks " << emotion << endl;
        }
    }
    void onImageCapture(affdex::Frame image) {
        cout << "Image captured" << endl;
    }
};
Main code
Mat img;
img=imread(argv[1],CV_LOAD_IMAGE_COLOR);
affdex::Frame frame(img.size().width, img.size().height, img.data, affdex::Frame::COLOR_FORMAT::BGR);
affdex::PhotoDetector detector(3);
detector.setClassifierPath("/xxx/xxx/affdex-sdk/data");
affdex::ImageListener * listener(new Listener());
detector.setImageListener(listener);
detector.setDetectAllEmotions(true);
detector.setDetectAllExpressions(true);
detector.start();
detector.process(frame);
detector.stop();
Where am I making a mistake? Or is it that the SDK cannot detect faces in some images? Can anybody help me?
Edit
I used the following images.
Sometimes the SDK cannot detect faces in an image. There is no detector that can detect all faces all the time. Did you check with different images?
Edit:
Those two images are 250x250 and 260x194 and really low quality. I recommend testing the app with higher-resolution images. Affectiva states on their webpage that the minimum recommended resolution is 320x240 and faces should be at least 30x30:
https://developer.affectiva.com/obtaining-optimal-results/
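As a rough sanity check, you could verify the input dimensions before handing the frame to the detector. This is only a sketch of that idea, not Affectiva code; the 320x240 threshold comes from the guideline above, and the helper name is mine:
#include <opencv2/opencv.hpp>
#include <iostream>

// Returns true if the image meets the recommended minimum resolution of 320x240.
bool meetsMinimumResolution(const cv::Mat &img)
{
    return img.cols >= 320 && img.rows >= 240;
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        std::cerr << "Usage: check <image>" << std::endl;
        return 1;
    }
    cv::Mat img = cv::imread(argv[1], CV_LOAD_IMAGE_COLOR);
    if (img.empty() || !meetsMinimumResolution(img)) {
        std::cerr << "Image is missing or smaller than 320x240; detection will likely fail." << std::endl;
        return 1;
    }
    // ... build the affdex::Frame and run the PhotoDetector exactly as in the question ...
    return 0;
}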

Absdiff in openCV compiles but shows a black image

I have been trying to use absdiff to find the motion in an image, but unfortunately it fails; I am new to OpenCV. The code is supposed to use absdiff to determine whether any motion is happening around or not, but the output is pitch black for diff1, diff2 and motion. Meanwhile, next_mframe, current_mframe and prev_mframe show grayscale images, while result shows a clear and normal image. I used this as my reference: http://manmade2.com/simple-home-surveillance-with-opencv-c-and-raspberry-pi/. I think all the image memory is loaded with the same frame and then compared, which would explain why it is pitch black. Is there any other method I am missing? I am using RTSP to pass the camera's raw image to ROS.
void imageCallback(const sensor_msgs::ImageConstPtr& msg_ptr) {
    CvPoint center;
    int radius, posX, posY;
    cv_bridge::CvImagePtr cv_image; // To parse image_raw from RTSP
    try
    {
        cv_image = cv_bridge::toCvCopy(msg_ptr, enc::BGR8);
    }
    catch (cv_bridge::Exception& e)
    {
        ROS_ERROR("cv_bridge exception: %s", e.what());
        return;
    }
    frame = new IplImage(cv_image->image); // frame now holding raw_image
    frame1 = new IplImage(cv_image->image);
    frame2 = new IplImage(cv_image->image);
    frame3 = new IplImage(cv_image->image);
    matriximage = cvarrToMat(frame);
    cvtColor(matriximage, matriximage, CV_RGB2GRAY); // grayscale
    prev_mframe = cvarrToMat(frame1);
    cvtColor(prev_mframe, prev_mframe, CV_RGB2GRAY); // grayscale
    current_mframe = cvarrToMat(frame2);
    cvtColor(current_mframe, current_mframe, CV_RGB2GRAY); // grayscale
    next_mframe = cvarrToMat(frame3);
    cvtColor(next_mframe, next_mframe, CV_RGB2GRAY); // grayscale
    // Maximum deviation of the image; the higher the value, the more motion is allowed
    int max_deviation = 20;
    result = matriximage;
    // relocate images in the right order
    prev_mframe = current_mframe;
    current_mframe = next_mframe;
    next_mframe = matriximage;
    //motion = diffImg(prev_mframe, current_mframe, next_mframe);
    absdiff(prev_mframe, next_mframe, diff1); // Here it should show a black and white image
    absdiff(next_mframe, current_mframe, diff2);
    bitwise_and(diff1, diff2, motion);
    threshold(motion, motion, 35, 255, CV_THRESH_BINARY);
    erode(motion, motion, kernel_ero);
    imshow("Motion Detection", result);
    imshow("diff1", diff1);  // I tried to output the image but it's all black
    imshow("diff2", diff2);  // same here, I tried to output the image but it's all black
    imshow("diff1", motion);
    imshow("nextframe", next_mframe);
    imshow("motion", motion);
    char c = cvWaitKey(3);
}
I changed from the cv_bridge method to VideoCapture, and it seems to function well; cv_bridge just cannot save the image, even though I changed the IplImage to the Mat format. Maybe there are other ways, but for now I will go with this method first.
VideoCapture cap(0);

Tracker(void)
{
    // check if the camera worked
    if(!cap.isOpened())
    {
        cout << "cannot open the Video cam" << endl;
    }
    cout << "camera is opening" << endl;
    cap >> prev_mframe;
    cvtColor(prev_mframe, prev_mframe, CV_RGB2GRAY); // capture 3 frames and convert to grayscale
    cap >> current_mframe;
    cvtColor(current_mframe, current_mframe, CV_RGB2GRAY);
    cap >> next_mframe;
    cvtColor(next_mframe, next_mframe, CV_RGB2GRAY);
    // relocate images in the right order
    current_mframe.copyTo(prev_mframe);
    next_mframe.copyTo(current_mframe);
    matriximage.copyTo(next_mframe);
    motion = diffImg(prev_mframe, current_mframe, next_mframe);
}
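For what it's worth, the usual reason the diff images come out completely black is that prev_mframe, current_mframe and next_mframe all end up holding the same frame data, so their absolute difference is zero everywhere. Below is a minimal standalone sketch (my own, not taken from the post) that keeps the three frames genuinely distinct by converting each capture into its own Mat and cloning when shifting the window:
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) {
        std::cerr << "cannot open the video cam" << std::endl;
        return 1;
    }

    cv::Mat color, prev, current, next;
    // Capture three separate frames; each cvtColor writes into its own Mat.
    cap >> color; cv::cvtColor(color, prev, cv::COLOR_BGR2GRAY);
    cap >> color; cv::cvtColor(color, current, cv::COLOR_BGR2GRAY);
    cap >> color; cv::cvtColor(color, next, cv::COLOR_BGR2GRAY);

    cv::Mat diff1, diff2, motion;
    for (;;) {
        cv::absdiff(prev, next, diff1);
        cv::absdiff(next, current, diff2);
        cv::bitwise_and(diff1, diff2, motion);
        cv::threshold(motion, motion, 35, 255, cv::THRESH_BINARY);
        cv::imshow("motion", motion);
        if (cv::waitKey(30) >= 0) break;

        // Shift the window with clones so the three Mats never share a buffer,
        // then read a genuinely new frame into "next".
        prev = current.clone();
        current = next.clone();
        cap >> color;
        cv::cvtColor(color, next, cv::COLOR_BGR2GRAY);
    }
    return 0;
}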

Morphological Watershed From Markers filter on ITK

I'm trying to create a pipeline for image segmentation with the ITK libraries. But when I apply the itkMorphologicalWatershedFromMarkersImageFilter, the result is a blank image (a binary image containing only 1's).
Does anyone know how to apply this filter correctly?
My input image should be the gradient of an image, and the marker image should be the result of the application of a watershed filter on the same image.
input image
marker image
And this is the declaration and the application of the filter:
typedef itk::MorphologicalWatershedFromMarkersImageFilter<OutputImageType, OutputImageType>
    MorphologicalWatershedFromMarkersImageFilterType;
MorphologicalWatershedFromMarkersImageFilterType::Pointer CwatershedFilter
    = MorphologicalWatershedFromMarkersImageFilterType::New();
CwatershedFilter->SetInput1(reader1->GetOutput());
CwatershedFilter->SetMarkerImage(reader2->GetOutput());
CwatershedFilter->SetMarkWatershedLine(true);
try {
    CwatershedFilter->Update();
}
catch (itk::ExceptionObject & error)
{
    std::cerr << "Error: " << error << std::endl;
    getchar();
    return EXIT_FAILURE;
}
Also, this is the link to the documentation of this filter, from itk.org:
http://www.itk.org/Doxygen48/html/classitk_1_1MorphologicalWatershedFromMarkersImageFilter.html#a20e3b8de42219606ba759e822be0aaa2
Thank you so much!!
While not C++ ITK, there is a SimpleITK notebook which demonstrates its usage:
http://insightsoftwareconsortium.github.io/SimpleITK-Notebooks/32_Watersheds_Segmentation.html
Your marker image is just binary, and not a label image. By that I mean your image is only 0 and 1 ( or 255). In the linked example note the following:
min_img = sitk.RegionalMinima(feature_img, backgroundValue=0, foregroundValue=1.0, fullyConnected=False, flatIsMinima=True)
marker_img = sitk.ConnectedComponent(min_img, fullyConnected=False)
The "min_img" is a binary image, but then the image is processed with the "ConnectedComponent" image filter, which gives each "island" a unique number. This is what is expected for the marker ( or label ) image for the WatershedFromMarker filter.
I will also note that your input image has some boundary lines, that you may not want as input.
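In C++ ITK, the equivalent labeling step would look roughly like the sketch below. It reuses the OutputImageType, reader2 and CwatershedFilter names from your snippet and assumes OutputImageType is an integer-pixel image type suitable for labels; it is an illustration of the idea, not tested against your pipeline:
#include "itkConnectedComponentImageFilter.h"

// Turn the binary marker image into a label image: each connected "island"
// of foreground pixels receives its own unique label value.
typedef itk::ConnectedComponentImageFilter<OutputImageType, OutputImageType>
    ConnectedComponentFilterType;
ConnectedComponentFilterType::Pointer labeler = ConnectedComponentFilterType::New();
labeler->SetInput(reader2->GetOutput()); // reader2 reads the binary marker image
labeler->Update();

// Feed the labeled markers (not the raw binary image) to the watershed filter.
CwatershedFilter->SetMarkerImage(labeler->GetOutput());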

Issue with stitching images using openCV for iOS

I'm trying to adopt the code from here:
https://github.com/foundry/OpenCVStitch
into my program. However, I've run up against a wall. This code stitches together images that already exist, whereas the program I'm trying to make will stitch together images that the user took. The error I'm getting is that when I pass the images to the stitch function, it says they are of invalid size (0 x 0).
Here is the stitching function:
- (IBAction)stitchImages:(UIButton *)sender {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSArray* imageArray = [NSArray arrayWithObjects:
                               chosenImage, chosenImage2, nil];
        UIImage* stitchedImage = [CVWrapper processWithArray:imageArray]; // error occurring within processWithArray function
        dispatch_async(dispatch_get_main_queue(), ^{
            NSLog(@"stitchedImage %@", stitchedImage);
            UIImageView *imageView = [[UIImageView alloc] initWithImage:stitchedImage];
            self.imageView = imageView;
            [self.scrollView addSubview:imageView];
            self.scrollView.backgroundColor = [UIColor blackColor];
            self.scrollView.contentSize = self.imageView.bounds.size;
            self.scrollView.maximumZoomScale = 4.0;
            self.scrollView.minimumZoomScale = 0.5;
            self.scrollView.contentOffset = CGPointMake(-(self.scrollView.bounds.size.width-self.imageView.bounds.size.width)/2, -(self.scrollView.bounds.size.height-self.imageView.bounds.size.height)/2);
            [self.spinner stopAnimating];
        });
    });
}
chosenImage and chosenImage2 are images the user has taken using these two functions:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    savedImage = info[UIImagePickerControllerOriginalImage];
    // display photo in the correct UIImageView
    switch(image_location){
        case 1:
            chosenImage = info[UIImagePickerControllerOriginalImage];
            self.imageView2.image = chosenImage;
            image_location++;
            break;
        case 2:
            chosenImage2 = info[UIImagePickerControllerOriginalImage];
            self.imageView3.image = chosenImage2;
            image_location--;
            break;
    }
    // if user clicked "take photo", it should save photo
    // if user clicked "select photo", it should not save photo
    /*if (should_save){
        UIImageWriteToSavedPhotosAlbum(chosenImage, nil, nil, nil);
    }*/
    [picker dismissViewControllerAnimated:YES completion:NULL];
}
- (IBAction)takePhoto:(UIButton *)sender {
    UIImagePickerController *picker = [[UIImagePickerController alloc] init];
    picker.delegate = self;
    picker.allowsEditing = NO;
    picker.sourceType = UIImagePickerControllerSourceTypeCamera;
    //last_pressed = 1;
    should_save = 1;
    [self presentViewController:picker animated:YES completion:NULL];
}
The stitchImages function passes an array of these two images to this function:
+ (UIImage*) processWithArray:(NSArray*)imageArray
{
    if ([imageArray count]==0){
        NSLog(@"imageArray is empty");
        return 0;
    }
    cv::vector<cv::Mat> matImages;
    for (id image in imageArray) {
        if ([image isKindOfClass: [UIImage class]]) {
            cv::Mat matImage = [image CVMat3];
            NSLog(@"matImage: %@", image);
            matImages.push_back(matImage);
        }
    }
    NSLog(@"stitching...");
    cv::Mat stitchedMat = stitch (matImages); // error occurring within stitch function
    UIImage* result = [UIImage imageWithCVMat:stitchedMat];
    return result;
}
This is where the program is running into a problem. When it is passed images that are saved locally in the application file, it works fine. However, when it is passed images that are saved in variables (chosenImage and chosenImage2), it doesn't work.
Here is the stitch function that is being called in the processWithArray function and is causing the error:
cv::Mat stitch (vector<Mat>& images)
{
    imgs = images;
    Mat pano;
    Stitcher stitcher = Stitcher::createDefault(try_use_gpu);
    Stitcher::Status status = stitcher.stitch(imgs, pano);
    if (status != Stitcher::OK)
    {
        cout << "Can't stitch images, error code = " << int(status) << endl;
        //return 0;
    }
    return pano;
}
The error is "Can't stitch images, error code = 1".
You are hitting memory limits. The four demo images included are 720 x 960 px, whereas you are using the full resolution image from the device camera.
Here is an Allocations trace in Instruments leading up to the crash, stitching two images from the camera...
The point of this github sample is to illustrate a few things...
(1) how to integrate openCV with iOS;
(2) how to separate Objective-C and C++ code using a wrapper;
(3) how to implement the most basic stitching function in openCV.
It is best regarded as a 'hello world' project for iOS+openCV, and was not designed to work robustly with camera images. If you want to use my code as-is, I would suggest first reducing your camera images to a manageable size (e.g. max 1000 px on the long side), along the lines of the sketch below.
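A rough sketch of what that reduction could look like on the C++ side of the wrapper, before the Mats are pushed into matImages; the helper name is mine, not part of the project, and the 1000 px cap simply follows the suggestion above:
#include <opencv2/opencv.hpp>
#include <algorithm>

// Cap the long side of a camera image at maxLongSide pixels before stitching,
// to keep the stitcher's memory use manageable.
cv::Mat downscaleForStitching(const cv::Mat &input, int maxLongSide = 1000)
{
    int longSide = std::max(input.cols, input.rows);
    if (longSide <= maxLongSide) {
        return input; // already small enough
    }
    double scale = static_cast<double>(maxLongSide) / longSide;
    cv::Mat resized;
    cv::resize(input, resized, cv::Size(), scale, scale, cv::INTER_AREA);
    return resized;
}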
In any case, the openCV framework you are using is as old as the project. Thanks to your question, I have just updated it (it is now arm64-friendly), although the memory limitations still apply.
V2, OpenCVSwiftStitch, may be a more interesting starting point for your experiments - the interface is written in Swift, and it uses CocoaPods to keep up with openCV versions (albeit currently fixed to 2.4.9.1, as 2.4.10 breaks everything). So it still illustrates the three points, and also shows how to use Swift with C++ using an Objective-C wrapper as an intermediary.
I may be able to improve memory handling (by passing around pointers). If so I will push an update to both v1 and v2. If you can make any improvements, please send a pull request.
Update: I've had another look, and I am fairly sure it won't be possible to improve the memory handling without getting deeper into the openCV stitching algorithms. The images are already allocated on the heap, so there are no improvements to be made there. I expect the best bet would be to tile and cache the intermediate images which openCV seems to be creating as part of the process. I will post an update if I get any further with this. Meanwhile, resizing the camera images is the way to go.
Update 2
Some while later, I found the underlying cause of the issue. When you use images from the iOS camera as your inputs, if those images are in portrait orientation they will have the incorrect input dimensions (and orientation) for openCV. This is because all iOS camera photos are taken natively as 'landscape left': the pixel dimensions are landscape, with the home button on the right. To display portrait, the imageOrientation flag is set to UIImageOrientationRight. This is only an indication to the OS to rotate the image 90 degrees to the right for display.
The image is stored unrotated, landscape left. The incorrect pixel orientation leads to higher memory requirements and unpredictable/broken results in openCV.
I have fixed this in the latest version of OpenCVSwiftStitch: when necessary, images are rotated pixelwise before being added to the openCV pipeline.
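For illustration, the pixelwise rotation itself can be done on the cv::Mat with a transpose followed by a flip. This is a sketch of the general technique on OpenCV 2.4.x, not the exact code used in OpenCVSwiftStitch:
#include <opencv2/opencv.hpp>

// Rotate an image 90 degrees clockwise in pixel data, so a photo captured
// 'landscape left' ends up with genuinely portrait pixel dimensions.
cv::Mat rotate90Clockwise(const cv::Mat &input)
{
    cv::Mat rotated;
    cv::transpose(input, rotated); // swap rows and columns
    cv::flip(rotated, rotated, 1); // flip around the vertical axis
    return rotated;
}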