Aspose: Image overflows the table when using a Shape in imageFieldMerging

When I insert the image directly into the ImageFieldMergingArgs image stream, it appears properly in the table cell, using the following code:
override fun imageFieldMerging(imageFieldMergingArgs: ImageFieldMergingArgs) {
    val fieldValue = imageFieldMergingArgs.fieldValue
    if (fieldValue is DataString) {
        val decodedImage = fieldValue.decode()
        imageFieldMergingArgs.imageStream = ByteArrayInputStream(decodedImage)
    }
}
But when I try to insert the image using a Shape in the mail merge, it appears outside the table. I'm using the following code:
override fun imageFieldMerging(imageFieldMergingArgs: ImageFieldMergingArgs) {
    val fieldValue = imageFieldMergingArgs.fieldValue
    if (fieldValue is DataString) {
        val shape = Shape(imageFieldMergingArgs.document, ShapeType.IMAGE)
        shape.wrapType = WrapType.SQUARE
        shape.aspectRatioLocked = false
        shape.anchorLocked = true
        shape.allowOverlap = false
        shape.width = imageFieldMergingArgs.imageWidth.value
        shape.height = imageFieldMergingArgs.imageHeight.value
        imageFieldMergingArgs.shape = shape
    }
}
Is there any way I can add an image into the table cell by assigning a Shape to imageFieldMergingArgs?
Thanks

When you specify imageFieldMergingArgs.imageStream, the shape is inserted with WrapType.INLINE. In your second snippet you specify WrapType.SQUARE; this might be the difference. It is difficult to say exactly what is wrong without your template, but I would try specifying WrapType.INLINE. I tested both of your code snippets on my side with a simple template, and in both cases the image is inside the table cell.
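For what it's worth, here is a minimal sketch of the shape-based handler with WrapType.INLINE, written against Aspose.Words for Java. Treating the field value as a Base64 string is only an assumption standing in for your DataString.decode(); setShape and the width/height accessors follow your snippets.
import com.aspose.words.*;
import java.io.ByteArrayInputStream;
import java.util.Base64;

public class InlineShapeCallback implements IFieldMergingCallback {
    public void fieldMerging(FieldMergingArgs args) {
        // Plain text fields need no special handling here.
    }

    public void imageFieldMerging(ImageFieldMergingArgs args) throws Exception {
        Object fieldValue = args.getFieldValue();
        if (fieldValue instanceof String) {
            // Assumption: the merge value is Base64-encoded image bytes,
            // standing in for DataString.decode() from the question.
            byte[] decoded = Base64.getDecoder().decode((String) fieldValue);
            Shape shape = new Shape(args.getDocument(), ShapeType.IMAGE);
            shape.setWrapType(WrapType.INLINE); // inline, so the shape stays in the table cell
            shape.setAspectRatioLocked(false);
            shape.setWidth(args.getImageWidth().getValue());
            shape.setHeight(args.getImageHeight().getValue());
            shape.getImageData().setImage(new ByteArrayInputStream(decoded));
            args.setShape(shape);
        }
    }
}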

Related

How to use IsolationForest in Weka?

I am trying to use IsolationForest in Weka, but I cannot find an easy example that shows how to use it. Who can help me? Thanks in advance.
import weka.classifiers.misc.IsolationForest;

public class Test2 {
    public static void main(String[] args) {
        IsolationForest isolationForest = new IsolationForest();
        // ...
    }
}
I strongly suggest you study the implementation of IsolationForest a little.
The following code works by loading a CSV file whose first column holds the class (note: a single class value will produce only the (1 - anomaly score) column; if the class is binary you will get the anomaly score too; anything else just returns an error). Note that I skip the second column, which in my case is a UUID that is not needed for anomaly detection.
import weka.classifiers.misc.IsolationForest;
import weka.core.Attribute;
import weka.core.Instance;
import weka.core.Instances;
import weka.core.converters.CSVLoader;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.Remove;
import java.io.File;
import java.io.FileWriter;
import java.util.Enumeration;

private static void findOutlier(File in, File out) throws Exception {
    CSVLoader loader = new CSVLoader();
    loader.setSource(new File(in.getAbsolutePath()));
    Instances data = loader.getDataSet();
    // Set the class attribute if the data format does not provide this information
    // (the XRFF format, for example, saves the class attribute information as well).
    if (data.classIndex() == -1)
        data.setClassIndex(0);
    String[] options = new String[2];
    options[0] = "-R"; // "range"
    options[1] = "2";  // drop the second attribute (the uuid column)
    Remove remove = new Remove();                        // new instance of filter
    remove.setOptions(options);                          // set options
    remove.setInputFormat(data);                         // inform filter about dataset **AFTER** setting options
    Instances newData = Filter.useFilter(data, remove);  // apply filter
    IsolationForest isolationForest = new IsolationForest();
    isolationForest.buildClassifier(newData);
    // System.out.println(isolationForest);
    FileWriter fw = new FileWriter(out);
    // Header row; enumerateAttributes() skips the class attribute.
    Enumeration<Attribute> attributeEnumeration = data.enumerateAttributes();
    while (attributeEnumeration.hasMoreElements()) {
        fw.write(attributeEnumeration.nextElement().name());
        fw.write(",");
    }
    fw.write("(1 - anomaly score),anomaly score\n");
    for (int i = 0; i < data.size(); ++i) {
        // Score the filtered instance (the forest was trained without the uuid
        // column), but echo the original row to the output file.
        double[] distributionForInstance = isolationForest.distributionForInstance(newData.get(i));
        fw.write(data.get(i) + "," + distributionForInstance[0] + "," + (1 - distributionForInstance[0]));
        fw.write("\n");
    }
    fw.flush();
    fw.close();
}
The previous function appends the anomaly values as the last columns of the CSV. Please note I'm using a single class, so to get the corresponding anomaly score I compute 1 - distributionForInstance[0]; with a binary class you can simply use distributionForInstance[1].
A sample input.csv for getting (1-anomaly score):
Class,ignore, feature_0, feature_1, feature_2
A,1,21,31,31
A,2,41,61,81
A,3,61,37,34
A sample input.csv for getting (1-anomaly score) and anomaly score:
Class,ignore, feature_0, feature_1, feature_2
A,1,21,31,31
B,2,41,61,81
A,3,61,37,34
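For completeness, a minimal usage sketch of the findOutlier method above (called from the same class); the file names are hypothetical, and any CSV shaped like the samples will do:
public static void main(String[] args) throws Exception {
    // Hypothetical paths: score input.csv and write the result with the
    // two extra score columns appended to scored.csv.
    findOutlier(new File("input.csv"), new File("scored.csv"));
}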

Extract polygons from shapefile using Geotools

I have a shapefile (Sample.shp), along with its two companion files (Sample.shx and Sample.dbf), which has geometry (polygons) defined for 15 pincodes of Bombay.
I am able to view the .shp file using the Quickstart tutorial.
File file = JFileDataStoreChooser.showOpenFile("shp", null);
if (file == null) {
    return;
}
FileDataStore store = FileDataStoreFinder.getDataStore(file);
SimpleFeatureSource featureSource = store.getFeatureSource();

// Create a map content and add our shapefile to it
MapContent map = new MapContent();
map.setTitle("Quickstart");
Style style = SLD.createSimpleStyle(featureSource.getSchema());
Layer layer = new FeatureLayer(featureSource, style);
map.addLayer(layer);

// Now display the map
JMapFrame.showMap(map);
Now I want to convert the geometry of these 15 pincodes to 15 Geometry/Polygon objects so that I can use Geometry.contains() to find if a point falls in a particular Geometry/Polygon.
I tried:
ShapefileReader r = new ShapefileReader(new ShpFiles(file), true, false, geometryFactory);
System.out.println(r.getCount(0)); // returns 51
System.out.println(r.hasNext());   // returns false
Any help is really appreciated
In fact, you don't need to extract the geometries yourself; just create a filter and iterate through the filtered collection. In your case there will probably be only one feature returned.
Filter pointInPolygon = CQL.toFilter("CONTAINS(the_geom, POINT(1 2))");
SimpleFeatureCollection features = source.getFeatures(pointInPolygon);
SimpleFeatureIterator iterator = features.features();
try {
    while (iterator.hasNext()) {
        SimpleFeature feature = iterator.next();
        Geometry geom = (Geometry) feature.getDefaultGeometry();
        /* ... do something here */
    }
} finally {
    iterator.close(); // IMPORTANT
}
For a full discussion of querying datastores see the Query Lab.
I used the above solution and tried a few combinations. I just changed "THE_GEOM" to lower case, and note that the POINT coordinates are in (lon lat) order:
Filter filter = CQL.toFilter("CONTAINS(the_geom, POINT(72.82916 18.942883))");
SimpleFeatureCollection collection = featureSource.getFeatures(filter);
SimpleFeatureIterator iterator = collection.features();
try {
    while (iterator.hasNext()) {
        SimpleFeature feature = iterator.next();
        // ...
    }
} finally {
    iterator.close(); // IMPORTANT
}
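If you do want the 15 Geometry objects themselves, as originally asked, here is a minimal sketch that collects them from the feature source. It reuses the featureSource and geometryFactory variables from the snippets above, and the JTS Geometry/Coordinate import path depends on your GeoTools version (org.locationtech.jts.geom in current releases, com.vividsolutions.jts.geom in older ones):
List<Geometry> polygons = new ArrayList<>();
SimpleFeatureIterator iterator = featureSource.getFeatures().features();
try {
    while (iterator.hasNext()) {
        // Collect each feature's default geometry (the polygon).
        polygons.add((Geometry) iterator.next().getDefaultGeometry());
    }
} finally {
    iterator.close(); // IMPORTANT
}
// A point-in-polygon test then looks like this (lon lat order):
// polygons.get(0).contains(geometryFactory.createPoint(new Coordinate(72.82916, 18.942883)));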

Hide UltraGridRow that has no visible child rows after applying RowFilter

So, I am setting the DataSource of my BindingSource to the DefaultViewManager of a DataSet that has a DataRelation. I then set my BindingSource as the UltraGrid's DataSource before applying a RowFilter to the "SalesOrderSublines" DataView.
public void RefreshData()
{
    var dataset = DataService.GetMillWorkOrders();
    bindingSource1.DataSource = dataset.DefaultViewManager;
    ultraGridSequences.SetDataBinding(bindingSource1, "", true, true);
    var dvm = bindingSource1.DataSource as DataViewManager;
    dvm.DataViewSettings["SalesOrderSublines"].RowFilter = "LINE_NO = 2";
}
public static DataSet GetMillWorkOrders()
{
    DataSet ds = OracleHelper.ExecuteDataset(_connectionString, CommandType.StoredProcedure, SQL.GET_WORK_ORDERS);
    ds.Tables[0].TableName = "WorkOrders";
    ds.Tables[1].TableName = "SalesOrderSublines";
    var dr = new DataRelation("WorkOrderSublines", ds.Tables["WorkOrders"].Columns["WORK_ORDER"], ds.Tables["SalesOrderSublines"].Columns["WORK_ORDER"]);
    ds.Relations.Add(dr);
    return ds;
}
Then, as the UltraGridRows are initializing I want to hide any parent row ("WorkOrders") that has no visible child rows ("WorkOrderSublines") because of my RowFilter.
private void ultraGridSequences_InitializeRow(object sender, Infragistics.Win.UltraWinGrid.InitializeRowEventArgs e)
{
    if (e.Row.Band.Key != "WorkOrders") return;
    e.Row.Hidden = e.Row.ChildBands["WorkOrderSublines"].Rows.VisibleRowCount == 0;
}
Although the RowFilter works properly on the rows in the "WorkOrderSublines" band, the band's VisibleRowCount is still greater than zero, so the parent row is never hidden. My guess is that I should look at something other than the VisibleRowCount of the ChildBand to determine whether the top-level row should be hidden, but I'm stuck. Any help would be greatly appreciated. Thanks ahead of time.
Instead of relying on VisibleRowCount, you could simply compare the count of filtered-out child rows with the total count.
void ultraGridSequences_InitializeRow(object sender, Infragistics.Win.UltraWinGrid.InitializeRowEventArgs e)
{
    if (e.Row.Band.Key != "WorkOrders") return;
    var sublinesRows = e.Row.ChildBands["WorkOrderSublines"].Rows;
    // RowsCollection is a non-generic IEnumerable, so cast before using LINQ.
    e.Row.Hidden = sublinesRows.Cast<UltraGridRow>().Count(row => row.IsFilteredOut) ==
                   sublinesRows.Count;
}
Should be fine performance-wise so long as we're not talking huge amounts of records?
Using the Filtering within the Grid may be an option rather than using the filtering in the DataSource. The following resources have more details on implementing this:
http://forums.infragistics.com/forums/t/51892.aspx
http://devcenter.infragistics.com/Support/KnowledgeBaseArticle.aspx?ArticleID=7703

Using Conversion Studio by To-Increase to import Notes into Microsoft Dynamics AX 2009

Currently, I'm using Conversion Studio to bring in a CSV file and store the contents in an AX table. This part is working. I have a block defined and the fields are correctly mapped.
The CSV file contains several comments columns, such as Comments-1, Comments-2, etc. There are a fixed number of these. The public comments are labeled as Comments-1...5, and the private comments are labeled as Private-Comment-1...5.
The desired result would be to bring the data into the AX table (as is currently working) and either concatenate the comment fields or store them as separate comments into the DocuRef table as internal or external notes.
Would it not just require setting up a new block in the Conversion Studio project that I have already set up? Can you point me to a resource that shows a similar procedure, or explain how to do this?
Thanks in advance!
After chasing the rabbit down the deepest of rabbit holes, I discovered that the easiest way to do this is like so:
Override the onEntityCommit method of your Document Handler (that extends AppDataDocumentHandler), like so:
AppEntityAction onEntityCommit(AppDocumentBlock documentBlock, AppBlock fromBlock, AppEntity toEntity)
{
    AppEntityAction ret;
    int64 recId; // Should point to the record currently being imported into CMCTRS
    ;
    ret = super(documentBlock, fromBlock, toEntity);
    recId = toEntity.getRecord().recId;
    // Do whatever you need to do with the recId now
    return ret;
}
Here is my method to insert the notes, in case you need that too:
private static boolean insertNote(RefTableId _tableId, int64 _docuRefId, str _note, str _name, boolean _isPublic)
{
    DocuRef docuRef;
    boolean insertResult = false;
    ;
    if (_docuRefId)
    {
        try
        {
            docuRef.clear();
            ttsbegin;
            docuRef.RefCompanyId = curext();
            docuRef.RefTableId = _tableId;
            docuRef.RefRecId = _docuRefId;
            docuRef.TypeId = 'Note';
            docuRef.Name = _name;
            docuRef.Notes = _note;
            docuRef.Restriction = (_isPublic) ? DocuRestriction::External : DocuRestriction::Internal;
            docuRef.insert();
            ttscommit;
            insertResult = true;
        }
        catch
        {
            ttsabort;
            error("Could not insert " + ((_isPublic) ? "public" : "private") + " comment:\n\n\t\"" + _note + "\"");
        }
    }
    return insertResult;
}

Subsonic 3 Save() then Update()?

I need to get the primary key for a row and then insert it into one of the other columns in a string.
So I've tried to do it something like this:
newsObj = new news();
newsObj.name = "test";
newsObj.Save();
newsObj.url = String.Format("blah.aspx?p={0}", newsObj.col_id);
newsObj.Save();
But it doesn't treat it as the same data object, so newsObj.col_id always comes back as zero. Is there another way of doing this? I tried this on another page, and to get it to work I had to call newsObj.SetIsLoaded(true);.
This is the actual block of code:
page p;
if (pageId > 0)
    p = new page(ps => ps.page_id == pageId);
else
    p = new page();

if (publish)
    p.page_published = 1;
if (User.IsInRole("administrator"))
    p.page_approved = 1;

p.page_section = staticParent.page_section;
p.page_name = PageName.Text;
p.page_parent = parentPageId;
p.page_last_modified_date = DateTime.Now;
p.page_last_modified_by = (Guid)Membership.GetUser().ProviderUserKey;
p.Add();

string urlString = String.Empty;
if (parentPageId > 0)
{
    urlString = Regex.Replace(staticParent.page_url, "(.aspx).*$", "$1"); // We just want the static page URL (blah.aspx)
    p.page_url = String.Format("{0}?p={1}", urlString, p.page_id);
}
p.Save();
If I hover over p.Save(); I can see the correct values in the object, but the DB is never updated and there is no exception.
Thanks!
I faced the same problem with this:
po oPo = new po();
oPo.name = "test";
oPo.save(); // till now it works
oPo.name = "test2";
oPo.save(); // not really working: it's not saving the data, since IsLoaded is false and the columns are not considered dirty
It's a bug in ActiveRecord.tt for version 3.0.0.3. In the method public void Add(IDataProvider provider), immediately after SetIsNew(false); there should be SetIsLoaded(true);.
The reason the save does not work the second time is that the object can't become dirty if it is not loaded. With SetIsLoaded(true) added to ActiveRecord.tt, running the custom tool will regenerate the .cs files correctly.