Hi, I have a scenario where I have a list of locations. In my document I have a merge field location_num. I want to print all location_num values in my document dynamically.
You can meet this requirement by using Aspose.Words.
Please refer to the following section of Aspose.Words' documentation:
How to Execute Mail Merge
Here is sample code:
Document doc = new Document(filePath);
// Execute a simple mail merge with the DataTable as the data source
doc.MailMerge.Execute(GetDataTable());
doc.Save(MyDir + "16.10.0.docx");
and here is the definition of the GetDataTable method:
private static DataTable GetDataTable()
{
    // Build a table whose single column matches the merge field name
    DataTable dataTable = new DataTable("table");
    dataTable.Columns.Add(new DataColumn("location_num"));
    DataRow dataRow;
    for (int i = 0; i < 5; i++)
    {
        dataRow = dataTable.NewRow();
        dataRow[0] = "location value " + i;
        dataTable.Rows.Add(dataRow);
    }
    return dataTable;
}
Hope this helps. I work with Aspose as a Developer Evangelist.
Please find below the Aspose.Words for Java code to achieve the same:
import com.aspose.words.*;
import com.aspose.words.net.System.Data.DataColumn;
import com.aspose.words.net.System.Data.DataRow;
import com.aspose.words.net.System.Data.DataTable;

public class Program {
    public static void main(String[] args) throws Exception {
        License lic = new License();
        lic.setLicense("D:\\temp\\Aspose.total.java.lic");

        Document doc = new Document("D:\\temp\\input.docx");
        // Execute a simple mail merge with the DataTable as the data source
        doc.getMailMerge().execute(getDataTable());
        doc.save("D:\\temp\\16.10.0.docx");
    }

    private static DataTable getDataTable() {
        // Build a table whose single column matches the merge field name
        DataTable dataTable = new DataTable("table");
        dataTable.getColumns().add("location_num");
        DataRow dataRow;
        for (int i = 0; i < 5; i++) {
            dataRow = dataTable.newRow();
            dataRow.set(0, "location value " + i);
            dataTable.getRows().add(dataRow);
        }
        return dataTable;
    }
}
Hope this helps. I work with Aspose as a Developer Evangelist.
There is a well-known problem with creating custom visuals for Power BI Report Server: you can't create multiple unrelated dataViewMappings, which means that every field you put inside Fields must be related.
There has been a long-standing request from users to add this feature.
However, it looks like Microsoft hasn't implemented it yet:
https://github.com/Microsoft/PowerBI-visuals/issues/251
I found a workaround: you can combine all your different tables into one and tag each row with a tableName key.
For example, you have a table A:
value1|value2|tableName
1|2|tableA
3|4|tableA
and a table B:
value3|tableName
a|tableB
b|tableB
You create a tableC by using the M function Table.Combine({tableA, tableB}) or another technique. As a result, it looks like this:
value1|value2|value3|tableName
1|2|null|tableA
3|4|null|tableA
null|null|a|tableB
null|null|b|tableB
Then, inside your custom visual you implement the following function which parses the input:
public parseOptionsToTables(options: VisualUpdateOptions) {
    var i: number;
    var j: number;
    var cntRows: number;
    var cntCols: number;
    var nameTable: string;
    var tempDict: {} = {};
    var tempVal: any;
    var colName: string;
    var cats: any = options.dataViews[0].table;
    this.tables = {};
    cntCols = cats.columns.length;
    cntRows = cats.rows.length;
    for (i = 0; i < cntRows; i++) {
        tempVal = null;
        colName = null;
        nameTable = null;
        tempDict = {};
        for (j = 0; j < cntCols; j++) {
            // for every row, check the tableName column and collect the rest
            tempVal = cats.rows[i][j];
            colName = cats.columns[j].displayName;
            if (colName === 'tableName') {
                nameTable = tempVal;
            } else {
                // add only if not null
                if (tempVal !== null) {
                    tempDict[colName] = tempVal;
                }
            }
        }
        // push the reassembled row into its source table
        this.tables[nameTable] = this.tables[nameTable] || [];
        this.tables[nameTable].push(this.deepcopy(tempDict));
    }
}
As a result, it creates a this.tables object with your tableA and tableB inside.
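For example, the visual's update code might then consume it like this (a minimal sketch; the row field names follow the combined table above, and the variable names are purely illustrative):

// Hypothetical usage after parseOptionsToTables(options) has run:
// each key of this.tables holds the reassembled rows of one source table.
const rowsA: any[] = this.tables['tableA'] || [];
const rowsB: any[] = this.tables['tableB'] || [];

// Rows keep only their own columns, e.g. { value1: 1, value2: 2 } for tableA
rowsA.forEach(row => console.log(row.value1, row.value2));
rowsB.forEach(row => console.log(row.value3));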
Although it helps to solve the main problem, I can't shake the feeling that this is a crutch.
So, are there any other approaches that solve this problem more efficiently?
I am trying to use IsolationForest in Weka, but I cannot find an easy example which shows how to use it. Who can help me? Thanks in advance.
import weka.classifiers.misc.IsolationForest;

public class Test2 {
    public static void main(String[] args) {
        IsolationForest isolationForest = new IsolationForest();
        .....................................................
    }
}
I strongly suggest you study the implementation of IsolationForest a little.
The following code works by loading a CSV file whose first column holds the class. (Note: a single class value will produce only (1 - anomaly score); if the class is binary you will get the anomaly score too; otherwise it just returns an error.) Note that I skip the second column (which in my case is a UUID that is not needed for anomaly detection).
// Imports assumed by this snippet:
import java.io.File;
import java.io.FileWriter;
import java.util.Enumeration;
import weka.classifiers.misc.IsolationForest;
import weka.core.Attribute;
import weka.core.Instance;
import weka.core.Instances;
import weka.core.converters.CSVLoader;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.Remove;

private static void findOutlier(File in, File out) throws Exception {
    CSVLoader loader = new CSVLoader();
    loader.setSource(new File(in.getAbsolutePath()));
    Instances data = loader.getDataSet();
    // Set the class attribute if the data format does not provide this information
    // (for example, the XRFF format saves the class attribute information as well)
    if (data.classIndex() == -1)
        data.setClassIndex(0);
    String[] options = new String[2];
    options[0] = "-R"; // "range"
    options[1] = "2";  // remove the second attribute (the uuid column)
    Remove remove = new Remove();    // new instance of filter
    remove.setOptions(options);      // set options
    remove.setInputFormat(data);     // inform filter about dataset **AFTER** setting options
    Instances newData = Filter.useFilter(data, remove); // apply filter
    IsolationForest isolationForest = new IsolationForest();
    isolationForest.buildClassifier(newData);
    FileWriter fw = new FileWriter(out);
    // Write the header: all attribute names except the class attribute
    final Enumeration<Attribute> attributeEnumeration = data.enumerateAttributes();
    while (attributeEnumeration.hasMoreElements()) {
        fw.write(attributeEnumeration.nextElement().name());
        fw.write(",");
    }
    fw.write("(1 - anomaly score),anomaly score\n");
    for (int i = 0; i < data.size(); ++i) {
        Instance inst = data.get(i);
        // Score the filtered instance (the same format the forest was built on),
        // but write out the original row including the ignored column
        final double[] distributionForInstance = isolationForest.distributionForInstance(newData.get(i));
        fw.write(inst + ", " + distributionForInstance[0] + "," + (1 - distributionForInstance[0]));
        fw.write(",\n");
    }
    fw.flush();
    fw.close();
}
The previous function appends the anomaly values as the last columns of the CSV. Note that I'm using a single class, so to get the corresponding anomaly score I compute 1 - distributionForInstance[0]; otherwise you can simply use distributionForInstance[1].
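As a small sketch of that score lookup (assuming dist comes from distributionForInstance as in the function above):

double[] dist = isolationForest.distributionForInstance(newData.get(i));
// Single class value: index 0 holds (1 - anomaly score), so invert it.
// Binary class: index 1 already holds the anomaly score.
double anomalyScore = (dist.length > 1) ? dist[1] : 1.0 - dist[0];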
A sample input.csv for getting (1-anomaly score):
Class,ignore,feature_0,feature_1,feature_2
A,1,21,31,31
A,2,41,61,81
A,3,61,37,34
A sample input.csv for getting (1-anomaly score) and anomaly score:
Class,ignore,feature_0,feature_1,feature_2
A,1,21,31,31
B,2,41,61,81
A,3,61,37,34
I am trying to write a test for a before trigger that takes fields from a custom object and concatenates them into a custom Key__c field.
The trigger works in the sandbox, and now I am trying to get it into production. However, whenever I do a System.assert/assertEquals after I create a purchase and perform DML, the value of Key__c always returns null. I am aware I could create a flow/process to do this, but I am trying to solve it with code for my own edification. How can I get the fields to concatenate and return properly in the test? (The commented-out asserts are what I have tried so far; they fail when run.)
trigger Composite_Key on Purchases__c (before insert, before update) {
    if (Trigger.isBefore)
    {
        for (Purchases__c purchase : Trigger.new)
        {
            String eventName = String.isBlank(purchase.Event_name__c) ? '' : purchase.Event_name__c + '-';
            String section = String.isBlank(purchase.section__c) ? '' : purchase.section__c + '-';
            String row = String.isBlank(purchase.row__c) ? '' : purchase.row__c + '-';
            String seat = String.isBlank(String.valueOf(purchase.seat__c)) ? '' : String.valueOf(purchase.seat__c) + '-';
            String numseats = String.isBlank(String.valueOf(purchase.number_of_seats__c)) ? '' : String.valueOf(purchase.number_of_seats__c) + '-';
            String adddatetime = String.isBlank(String.valueOf(purchase.add_datetime__c)) ? '' : String.valueOf(purchase.add_datetime__c);
            purchase.Key__c = eventName + section + row + seat + numseats + adddatetime;
        }
    }
}
@isTest
public class CompositeKeyTest {
    public static testMethod void testPurchase() {
        // create a purchase to fire the trigger
        Purchases__c purchase = new Purchases__c(Event_name__c = 'test', section__c = 'test', row__c = 'test',
                seat__c = 1.0, number_of_seats__c = 'test', add_datetime__c = 'test');
        insert purchase;
        //System.assert(purchases__c.Key__c.getDescribe().getName() == 'testesttest1testtest');
        //System.assertEquals('testtesttest1.0testtest',purchase.Key__c);
    }
    static testMethod void testbulkPurchase() {
        List<Purchases__c> purchaseList = new List<Purchases__c>();
        for (Integer i = 0; i < 10; i++)
        {
            Purchases__c purchaserec = new Purchases__c(Event_name__c = 'test', section__c = 'test', row__c = 'test',
                    seat__c = i + 1.0, number_of_seats__c = 'test', add_datetime__c = 'test');
            purchaseList.add(purchaserec);
        }
        insert purchaseList;
        //System.assertEquals('testtesttest5testtest',purchaseList[4].Key__c,'Key is not Valid');
    }
}
You need to requery the records after inserting them to get the updated data from the trigger/database.
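For example, the end of the first test could look like this (a sketch; the expected literal assumes the '-' separators added by the trigger above and that seat__c = 1.0 renders as '1.0'):

// The in-memory 'purchase' variable is not refreshed by DML,
// so query the record back to see what the before trigger wrote.
purchase = [SELECT Key__c FROM Purchases__c WHERE Id = :purchase.Id];
System.assertEquals('test-test-test-1.0-test-test', purchase.Key__c);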
I am reading data from a file and converting it to a BeamRecord, but when I run a query on it I get this error:
Exception in thread "main" java.lang.ClassCastException: org.apache.beam.sdk.coders.SerializableCoder cannot be cast to org.apache.beam.sdk.coders.BeamRecordCoder
    at org.apache.beam.sdk.extensions.sql.BeamSql$QueryTransform.registerTables(BeamSql.java:173)
    at org.apache.beam.sdk.extensions.sql.BeamSql$QueryTransform.expand(BeamSql.java:153)
    at org.apache.beam.sdk.extensions.sql.BeamSql$QueryTransform.expand(BeamSql.java:116)
    at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:533)
    at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:465)
    at org.apache.beam.sdk.values.PCollectionTuple.apply(PCollectionTuple.java:160)
    at TestingClass.main(TestingClass.java:75)
But when I provide a coder, it runs perfectly.
I am a little confused: I am reading data from a file whose schema changes on every run (because I am using templates), so is there any way I can run the query with a default coder, or without a coder?
For reference, the code is below:
PCollection<String> ReadFile1 = PBegin.in(p).apply(TextIO.read().from("gs://Bucket_Name/FileName.csv"));
PCollection<BeamRecord> File1_BeamRecord = ReadFile1.apply(new StringToBeamRecord()).setCoder(new Temp().test().getRecordCoder());
PCollection<String> ReadFile2= p.apply(TextIO.read().from("gs://Bucket_Name/FileName.csv"));
PCollection<BeamRecord> File2_Beam_Record = ReadFile2.apply(new StringToBeamRecord()).setCoder(new Temp().test1().getRecordCoder());
new Temp().test1().getRecordCoder() --> returns hard-coded BeamRecordCoder values, which I instead need to fetch at runtime.
The conversion from PCollection<String> to PCollection<BeamRecord> is below:
public class StringToBeamRecord extends PTransform<PCollection<String>, PCollection<BeamRecord>> {
    private static final Logger LOG = LoggerFactory.getLogger(StringToBeamRecord.class);

    @Override
    public PCollection<BeamRecord> expand(PCollection<String> arg0) {
        return arg0.apply("Conversion", ParDo.of(new ConversionOfData()));
    }

    static class ConversionOfData extends DoFn<String, BeamRecord> implements Serializable {
        @ProcessElement
        public void processElement(ProcessContext c) {
            String data = c.element().replaceAll(",,", ",blank,");
            String[] array = data.split(",");
            List<String> fieldNames = new ArrayList<>();
            List<Integer> fieldTypes = new ArrayList<>();
            List<Object> dataConversion = new ArrayList<>();
            int count = 0;
            for (int i = 0; i < array.length; i++) {
                // Generate column names R0, R1, ... since the schema is not known up front
                fieldNames.add("R" + count);
                count++;
                fieldTypes.add(Types.VARCHAR); // using a schema I could set the real type
                dataConversion.add(array[i]);
            }
            LOG.info("The size is: " + dataConversion.size());
            BeamRecordSqlType type = BeamRecordSqlType.create(fieldNames, fieldTypes);
            c.output(new BeamRecord(type, dataConversion));
        }
    }
}
The query is:
PCollectionTuple test = PCollectionTuple.of(
        new TupleTag<BeamRecord>("File1_BeamRecord"), File1_BeamRecord)
        .and(new TupleTag<BeamRecord>("File2_BeamRecord"), File2_Beam_Record);

PCollection<BeamRecord> output = test.apply(BeamSql.queryMulti(
        "Select * From File1_BeamRecord JOIN File2_BeamRecord "));
Is there any way I can make the coder dynamic, or run the query with a default coder?
I need to generate some data to unit test my repositories. I was using a loop to generate a list of objects (see the code below). I have learned that Moq is a great mocking library. Can I use Moq to generate this data, and how would I do it?
public IQueryable<Category> GetCategories()
{
    IList<Category> result = new List<Category>();
    for (int i = 1; i <= 2; i++)
    {
        Category c = new Category();
        c.ID = i;
        c.Name = "Parent" + i.ToString();
        c.ParentID = 0;
        for (int x = i * 10; x < i * 10 + 5; x++)
        {
            Category sub = new Category();
            sub.ID = x;
            sub.Name = "Sub" + x.ToString();
            sub.ParentID = i;
            result.Add(sub);
        }
        result.Add(c);
    }
    return result.AsQueryable<Category>();
}
You can't use Moq to create the data, but you can use AutoFixture:
public IQueryable<Category> GetCategories()
{
    var fixture = new Fixture(); // AutoFixture's entry point, created here for completeness
    return fixture.CreateMany<Category>().AsQueryable();
}
However, this will not give you a hierarchical tree. It will return objects like this:
Object 1:
- ID = 0
- ParentID = 1
Object 2:
- ID = 2
- ParentID = 3
etc.
If you really need to have this hierarchical data, you would need to use the following code:
public IQueryable<Category> GetCategories()
{
    var fixture = new Fixture();
    var result = new List<Category>();

    // Create the parents (ParentID is omitted so it stays at its default)
    var parents = fixture.Build<Category>()
        .Without(x => x.ParentID)
        .CreateMany()
        .ToList(); // materialize so both uses below see the same parents
    result.AddRange(parents);

    // Create several children per parent, linked via ParentID
    result.AddRange(parents.SelectMany(p => fixture.Build<Category>()
        .With(x => x.ParentID, p.ID)
        .CreateMany()));

    return result.AsQueryable();
}
This will add multiple parents with multiple subs for each parent.
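And to tie this back to Moq: Moq creates test doubles rather than test data, so a common pattern is to let AutoFixture build the categories and have Moq serve them from a mocked repository (a sketch, assuming a hypothetical ICategoryRepository interface):

// Hypothetical interface, for illustration only
public interface ICategoryRepository
{
    IQueryable<Category> GetCategories();
}

// In the test: Moq hands out the AutoFixture-generated data
var repositoryMock = new Mock<ICategoryRepository>();
repositoryMock.Setup(r => r.GetCategories()).Returns(GetCategories());
ICategoryRepository repository = repositoryMock.Object;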
You can use Faker.Net to generate fake data. For example, for a .NET Core project it is Faker.NETCore:
dotnet add package Faker.NETCore -v 1.0.1
and then use it in your code in the following manner:
public void GetStudent()
{
    var st = new Student();
    st.FirstName = Faker.Name.First();
    st.LastName = Faker.Name.Last();
    st.Mobile = Faker.Phone.Number();
}