Export high quality image with M2DOC

How can I change my M2DOC code to export an image/diagram in high quality?
This is the code that I would like to improve:
{m:rep.asImage().setHeight(600)}

You should use the fit() service:
{m:rep.asImage().fit(400, 400)}
You can also adjust the quality of the images exported by Sirius in the preferences menu.

Related

How to print diagrams (collection of DRepresentation) in my docx using M2DOC

Programmatically, I can get a collection of DRepresentation diagrams.
But how can I print the representation of these diagrams in my document using M2DOC?
Thanks for the help.
You can have a look at the Sirius services, and more precisely at the asImage() service. You can use it like this:
{m:myVar.anyServiceReturningADRepresentation().asImage()}
Optionally, you can then apply an image service like fit().
On Windows you might also need to change the image size in the Sirius settings.

Use map data offline with osmdroid

My ultimate goal is to have map data (offline, because I will customize it myself) and display it in an Android app. I got osmdroid to load maps online and was trying to figure out how to download and display offline maps. I downloaded MOBAC (Mobile Atlas Creator) and exported the data to SQLite format, and when I had a look at it I realized that the tiles are saved in image format (PNG).
What I would like to do is import the data to the phone so I can later use it in algorithms such as a search engine or a routing algorithm. So I need the "nodes" and "ways" (as I get them from the original OSM XML), import them to the phone, and visualize them so that this data is available to the algorithms I want to develop. Basically, what MAPS.ME does. I think it wouldn't be difficult to convert the XML into SQLite, since a simple script could do it, but then how can I generate the tiles from this custom SQLite database? Or is there a way I can download the data in a form more appropriate for what I'm planning to do?
Thanks.
Rendering tiles in the app from raw OpenStreetMap data would be computationally heavy and inefficient. I would suggest using the image tiles you exported for the visual representation.
In addition to the tiles, you should export the data set you will need in the application for the desired functionality. You will not need all of the OpenStreetMap data, so identify what you need and build a custom export (there are tools and libraries for processing and filtering OpenStreetMap data; I have used pyosmium for some filtering and processing, but there are others). For example, you can build a custom database with the POIs you want to search for.
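As an illustration, here is a minimal pyosmium sketch that filters restaurant POIs from an OSM extract into SQLite; the amenity filter, file names and schema are just assumptions for the example:

import sqlite3
import osmium

class PoiHandler(osmium.SimpleHandler):
    # Collect restaurant nodes into an SQLite table.
    def __init__(self, cur):
        super().__init__()
        self.cur = cur

    def node(self, n):
        if n.tags.get('amenity') == 'restaurant':
            self.cur.execute(
                'INSERT INTO poi VALUES (?, ?, ?, ?)',
                (n.id, n.tags.get('name'), n.location.lat, n.location.lon))

db = sqlite3.connect('poi.sqlite')
cur = db.cursor()
cur.execute('CREATE TABLE IF NOT EXISTS poi (id INTEGER, name TEXT, lat REAL, lon REAL)')
PoiHandler(cur).apply_file('region.osm.pbf')
db.commit()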
Routing is another matter. You can implement it yourself, but it is a very complex task. There is a Java library called GraphHopper which can do the data extraction (from OpenStreetMap) and offline routing for you. They have an online API too, but it is possible to make it work completely offline (I did it for one application). Have a look at the source code, because then you can see how complex a topic routing is. Final note: the data exported by GraphHopper contains information about some POIs along routes. It may be possible to search for some things via its Java API, but I haven't investigated this yet.
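GraphHopper itself is a Java library that you would embed in the Android app, but to give an idea of what it answers, here is a hedged Python sketch querying a locally running GraphHopper server over HTTP; the host, port, coordinates and parameters are assumptions:

import requests

# Query the /route endpoint of a local GraphHopper server.
# Newer versions take profile=...; older ones use vehicle=... instead.
resp = requests.get(
    'http://localhost:8989/route',
    params={
        'point': ['52.5170,13.3889', '52.5206,13.4098'],  # start, end (lat,lon)
        'profile': 'car',
        'points_encoded': 'false',
    })
path = resp.json()['paths'][0]
print(path['distance'], 'm in', path['time'] / 1000, 's')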

Google Inceptionism: obtain images by class

In the famous Google Inceptionism article,
http://googleresearch.blogspot.jp/2015/06/inceptionism-going-deeper-into-neural.html
they show images obtained for each class, such as banana or ant. I want to do the same for other datasets.
The article does describe how the images were obtained, but I feel the explanation is insufficient.
There is related code here:
https://github.com/google/deepdream/blob/master/dream.ipynb
but what it does is produce a random dreamy image, rather than take a specified class and learn what it looks like in the network, as shown in the article above.
Could anyone give a more concrete overview, or code/tutorial, on how to generate images for a specific class? (preferably assuming the Caffe framework)
I think this code is a good starting point to reproduce the images the Google team published. The procedure looks clear:
Start with a pure noise image and a class (say "cat")
Perform a forward pass and backpropagate the error wrt the imposed class label
Update the initial image with the gradient computed at the data layer
There are some tricks involved that can be found in the original paper.
It seems that the main difference is that Google folks tried to get a more "realistic" image:
By itself, that doesn’t work very well, but it does if we impose a prior constraint that the image should have similar statistics to natural images, such as neighboring pixels needing to be correlated.
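To make the procedure concrete, here is a minimal, hedged sketch with pycaffe, loosely following the structure of the deepdream notebook. The model files, the layer name fc8, the class index, the number of steps, the step size and the blur-based stand-in for the natural-image prior are all assumptions to adapt to your own network:

import numpy as np
from scipy.ndimage import gaussian_filter
import caffe

# Assumes a deploy.prototxt with batch size 1 and a 1000-way fc8 output.
net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)
src = net.blobs['data']
h, w = src.data.shape[-2:]

# Step 1: start with a (near) pure noise image.
image = np.random.normal(0, 10, (3, h, w)).astype(np.float32)
class_index = 281  # e.g. "tabby cat" in ImageNet; an assumption

for step in range(200):
    src.data[0] = image
    # Step 2: forward pass, then backpropagate the error wrt the
    # imposed class label (a one-hot gradient at the class layer).
    net.forward(end='fc8')
    net.blobs['fc8'].diff[...] = 0
    net.blobs['fc8'].diff[0, class_index] = 1.0
    net.backward(start='fc8')
    # Step 3: update the image with the gradient computed at the data
    # layer, normalized as in the deepdream notebook.
    g = src.diff[0]
    image += (1.5 / np.abs(g).mean()) * g
    # Crude prior: a slight blur that correlates neighboring pixels.
    image = gaussian_filter(image, sigma=(0, 0.5, 0.5))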

Load existing Model in Weka Knowledge Flow

I am trying to plot multiple ROC curves in the same diagram in Weka. I have learnt that I can do this in Weka Knowledge Flow using "Model Performance Chart". However, I can't figure out how to do this for existing models.
I have tried using an ArffLoader and a TestSetMaker to generate the testing data, and connected these to a suitable Classifier icon (e.g. AdaBoostM1 when that is the kind of model I am trying to load). In the configuration of the Classifier icon I choose "load model", and the status bar says "Loaded model.". However, when I run this it says "ERROR: no trained/loaded classifier to use for prediction".
Can anyone tell me what I am doing wrong here? Thanks in advance!
There is a post published here that indicates some ambiguity in the meaning of the error. It also states that the order of attributes and the number and order of values are rather important.
It also states that 'for performance results to be computed, your Knowledge Flow process will need a "ClassifierPerformanceEvaluator" component after the classifier and before a TextViewer component.'
If you are new to the Knowledge Flow environment, there is a great tutorial here from Rushdi Shams that details the general process.
A sample workflow using AdaBoost (with a preloaded model) generated the desired results.
Hope this helps!

How to batch download a large number of high resolution satellite images from Google Maps directly?

I'm helping a professor working on a satellite image analysis project. We need 800 images, each at 8000x8000 resolution, stitched together to cover a square area from Google Maps. It is possible to download them one by one, but I believe there must be a way to write a script for batch processing.
I would like to ask how I can implement this using a shell or Python script, and how I can download the images from a Google Maps URL.
Here is an example of the url:
https://maps.google.com.au/maps/myplaces?ll=-33.071009,149.554911&spn=0.027691,0.066047&ctz=-660&t=k&z=15
However, I'm not able to work out a direct image download link from this.
Update:
Actually, I solved this problem, but due to Google's restrictions I would rather not post the method here.
Have you tried the Google Static Maps API?
You get 25,000 free requests, but you're limited to 640x640 images, so you'll need to make roughly 160 requests at a higher zoom level.
I suggest downloading the images as described here: Downloading a picture via urllib and python
URL to start with: http://maps.googleapis.com/maps/api/staticmap?center=-33.071009,149.554911&zoom=15&size=640x640&sensor=false&maptype=satellite
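A hedged sketch of such a batch download in Python; the grid spacing and tile count are assumptions, and Google may now require an API key appended as &key=...:

import urllib.request

base = ("http://maps.googleapis.com/maps/api/staticmap"
        "?center={lat},{lng}&zoom=15&size=640x640&sensor=false&maptype=satellite")

lat0, lng0 = -33.071009, 149.554911
dlat, dlng = 0.01, 0.01  # grid spacing in degrees; an assumption, depends on zoom
for i in range(4):       # 4x4 grid = 16 tiles; adjust to your area
    for j in range(4):
        url = base.format(lat=lat0 - i * dlat, lng=lng0 + j * dlng)
        urllib.request.urlretrieve(url, "tile_{}_{}.png".format(i, j))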
It's been a long time since I solved the problem, sorry for the delay.
I posted my code to GitHub here; star or fork as you like :)
The idea is to use a virtual web browser at a very high resolution to load the Google Maps page and then capture the page. The drawback is that there will be Google watermarks scattered across each image; the workaround is to capture each tile at a higher resolution than needed (oversampling) and then crop and stitch them all together.
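For illustration only, here is a hedged sketch of that idea using headless Chrome via Selenium, with Pillow for the cropping; the window size, URL, wait time and crop margin are assumptions:

import time
from selenium import webdriver
from PIL import Image

# Load the map in a large headless browser window, acting as the
# "virtual web browser" at high resolution.
options = webdriver.ChromeOptions()
options.add_argument("--headless")
options.add_argument("--window-size=2000,2000")
driver = webdriver.Chrome(options=options)
driver.get("https://maps.google.com.au/maps?ll=-33.071009,149.554911&t=k&z=15")
time.sleep(5)  # crude wait for the map tiles to load; an assumption
driver.save_screenshot("capture.png")
driver.quit()

# Oversample: keep only the central region so the watermarked borders of
# neighbouring captures overlap and can be discarded before stitching.
img = Image.open("capture.png")
w, h = img.size
margin = 200  # pixels to crop from each side; an assumption
img.crop((margin, margin, w - margin, h - margin)).save("tile.png")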