Using Univocity, how can I convert a date string value to a Date type in Java?

I'd like to parse column zero in a CSV file to a particular data type, in this example a Date object.
The method below is what I currently use to parse a CSV file, but I don't know how to incorporate this requirement.
import java.sql.Date;

public class Data {
    @Parsed(index = 0)
    private Date date;
}
public <T> List<T> convertFileToData(File file, Class<T> clazz) {
    BeanListProcessor<T> rowProcessor = new BeanListProcessor<>(clazz);
    CsvParserSettings settings = new CsvParserSettings();
    settings.setProcessor(rowProcessor);
    settings.setHeaderExtractionEnabled(true);
    CsvParser parser = new CsvParser(settings);
    parser.parseAll(file);
    return rowProcessor.getBeans();
}

All you need is to define the format(s) of your date and you are set:
@Format(formats = {"dd-MMM-yyyy", "yyyy-MM-dd"})
@Parsed(index = 0)
private Date date;
As an extra suggestion, you can also replace a lot of your code by using the CsvRoutines class. Try this:
List<T> beanList = new CsvRoutines(settings).parseAll(clazz, file);
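For instance, your whole method could collapse into something like this (a sketch, assuming the same annotated bean and the same header-extraction setting as in your original code):

public <T> List<T> convertFileToData(File file, Class<T> clazz) {
    CsvParserSettings settings = new CsvParserSettings();
    settings.setHeaderExtractionEnabled(true);
    // CsvRoutines handles processor setup, parsing and bean collection in one call
    return new CsvRoutines(settings).parseAll(clazz, file);
}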
Hope it helps.

Flutter: convert a List<List<dynamic>> to a List<IpcData>

I have a Flutter app in which I make an HTTP request, which brings me JSON data. After formatting it with a package I get a list of lists of dynamic type. Here's what it looks like:
[[2016-04-01, 85.5254], [2016-05-01, 89.1118], [2016-06-01, 91.8528], [2016-07-01, 93.7328], [2016-08-01, 93.9221], [2016-09-01, 95.0014], [2016-10-01, 97.2428], [2016-11-01, 98.8166]]
So I created a class named IpcData, which receives a String and a double.
class IpcData {
  final String date;
  final double value;

  IpcData(this.date, this.value);
}
So we could guess that an IpcData instance would look like the following:
IpcData(2016-08-01, 93.9221)
I can't figure out how, but I'd like to have a method that uses the information from the List<List<dynamic>> to return a List<IpcData>, which would look like:
[IpcData(2016-08-01, 93.9221), IpcData(2016-08-01, 93.9221), IpcData(2016-08-01, 93.9221),]
You can use the .map function on the original list to construct the new one. Something like this:
class IpcData {
  final String date;
  final double value;

  IpcData(this.date, this.value);

  @override
  String toString() {
    return '$date -> $value';
  }
}

void main() {
  List<List<dynamic>> initList = [
    ['2016-04-01', 85.5254], ['2016-05-01', 89.1118], ['2016-06-01', 91.8528],
    ['2016-07-01', 93.7328], ['2016-08-01', 93.9221], ['2016-09-01', 95.0014],
    ['2016-10-01', 97.2428], ['2016-11-01', 98.8166], ['2016-12-01', 99.8166]
  ];

  List<IpcData> ipcList = initList.map((e) => IpcData(e[0], e[1])).toList();
  print(ipcList);
}

Aspose words API - mail merge functionality - can the "merged" text be richtext (with styles/images/bullets/tables)?

Looking for a Word API which can perform mail-merge-type functionality with rich text. Basically, the text will be rich/formatted text with font styles and WILL have
a) images
b) bullets
c) tables
Overall purpose: create a Word template with bookmarks, get data from the DB (for those fields), insert it, and auto-generate the Word document. The data will be HTML text/rich text. A Python or .NET API is preferred.
Can Aspose.Words work with rich text as described above? Any other recommendations for excellent Word APIs?
Yes, you can achieve this using Aspose.Words. You can use IFieldMergingCallback to insert formatted text upon mail merge. For example, see the following link:
https://apireference.aspose.com/words/net/aspose.words.mailmerging/ifieldmergingcallback
In case of rich text (if you mean RTF or Markdown formats), you first need to read this content into a separate instance of Document and then use the DocumentBuilder.InsertDocument method:
https://apireference.aspose.com/words/net/aspose.words/documentbuilder/methods/insertdocument
The following code example shows how to use the InsertHtml method in IFieldMergingCallback:
[Test]
public void Test001()
{
    Document doc = new Document(@"C:\Temp\in.docx");
    doc.MailMerge.FieldMergingCallback = new HandleMergeFieldInsertHtml();
    const string html = @"<h1>Hello world!</h1>";
    doc.MailMerge.Execute(new string[] { "myField" }, new object[] { html });
    doc.Save(@"C:\Temp\out.docx");
}

private class HandleMergeFieldInsertHtml : IFieldMergingCallback
{
    void IFieldMergingCallback.FieldMerging(FieldMergingArgs args)
    {
        FieldMergeField field = args.Field;
        // Insert the text for this merge field as HTML data, using DocumentBuilder.
        DocumentBuilder builder = new DocumentBuilder(args.Document);
        builder.MoveToMergeField(args.DocumentFieldName);
        builder.Write(field.TextBefore ?? "");
        builder.InsertHtml((string)args.FieldValue);
        // The HTML text itself should not be inserted;
        // we have already inserted it as HTML.
        args.Text = "";
    }

    void IFieldMergingCallback.ImageFieldMerging(ImageFieldMergingArgs args)
    {
        // Do nothing.
    }
}
If you would like to format the text manually, you can use the appropriate DocumentBuilder properties:
[Test]
public void Test001()
{
    Document doc = new Document(@"C:\Temp\in.docx");
    doc.MailMerge.FieldMergingCallback = new HandleMergeFieldInsertText();
    const string text = @"Hello world!";
    doc.MailMerge.Execute(new string[] { "myField" }, new object[] { text });
    doc.Save(@"C:\Temp\out.docx");
}

private class HandleMergeFieldInsertText : IFieldMergingCallback
{
    void IFieldMergingCallback.FieldMerging(FieldMergingArgs args)
    {
        FieldMergeField field = args.Field;
        DocumentBuilder builder = new DocumentBuilder(args.Document);
        builder.MoveToMergeField(args.DocumentFieldName);
        // Apply a style or other formatting.
        builder.ParagraphFormat.StyleIdentifier = StyleIdentifier.Heading1;
        builder.Write(field.TextBefore ?? "");
        builder.Write((string)args.FieldValue);
        // The text itself should not be inserted;
        // we have already inserted it using DocumentBuilder.
        args.Text = "";
    }

    void IFieldMergingCallback.ImageFieldMerging(ImageFieldMergingArgs args)
    {
        // Do nothing.
    }
}
Hope this helps.
Disclosure: I work on the Aspose.Words team.

Running BeamSql without a coder, or making the coder dynamic

I am reading data from a file and converting it to BeamRecord, but when I run a query on it, it shows this error:
Exception in thread "main" java.lang.ClassCastException: org.apache.beam.sdk.coders.SerializableCoder cannot be cast to org.apache.beam.sdk.coders.BeamRecordCoder
at org.apache.beam.sdk.extensions.sql.BeamSql$QueryTransform.registerTables(BeamSql.java:173)
at org.apache.beam.sdk.extensions.sql.BeamSql$QueryTransform.expand(BeamSql.java:153)
at org.apache.beam.sdk.extensions.sql.BeamSql$QueryTransform.expand(BeamSql.java:116)
at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:533)
at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:465)
at org.apache.beam.sdk.values.PCollectionTuple.apply(PCollectionTuple.java:160)
at TestingClass.main(TestingClass.java:75)
But when I provide a coder, it runs perfectly.
I am a little confused: because I am using templates, the file's schema changes on every run, so is there any way I can run the query with a default coder, or without a coder?
For reference, the code is below.
PCollection<String> ReadFile1 = PBegin.in(p).apply(TextIO.read().from("gs://Bucket_Name/FileName.csv"));
PCollection<BeamRecord> File1_BeamRecord = ReadFile1.apply(new StringToBeamRecord()).setCoder(new Temp().test().getRecordCoder());

PCollection<String> ReadFile2 = p.apply(TextIO.read().from("gs://Bucket_Name/FileName.csv"));
PCollection<BeamRecord> File2_BeamRecord = ReadFile2.apply(new StringToBeamRecord()).setCoder(new Temp().test1().getRecordCoder());
new Temp().test1().getRecordCoder() returns a hard-coded BeamRecordCoder, whose values I need to fetch at runtime instead (see the sketch after the transform below).
The conversion from PCollection<String> to PCollection<BeamRecord> is below:
public class StringToBeamRecord extends PTransform<PCollection<String>, PCollection<BeamRecord>> {

    private static final Logger LOG = LoggerFactory.getLogger(StringToBeamRecord.class);

    @Override
    public PCollection<BeamRecord> expand(PCollection<String> arg0) {
        return arg0.apply("Conversion", ParDo.of(new ConversionOfData()));
    }

    static class ConversionOfData extends DoFn<String, BeamRecord> implements Serializable {
        @ProcessElement
        public void processElement(ProcessContext c) {
            String data = c.element().replaceAll(",,", ",blank,");
            String[] array = data.split(",");
            List<String> fieldNames = new ArrayList<>();
            List<Integer> fieldTypes = new ArrayList<>();
            List<Object> values = new ArrayList<>();
            for (int i = 0; i < array.length; i++) {
                fieldNames.add("R" + i);
                fieldTypes.add(Types.VARCHAR); // Using a schema I could set this properly
                values.add(array[i]);
            }
            LOG.info("The size is: " + values.size());
            BeamRecordSqlType type = BeamRecordSqlType.create(fieldNames, fieldTypes);
            c.output(new BeamRecord(type, values));
        }
    }
}
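For reference, this is roughly what the hard-coded coder amounts to. A sketch only, using the same BeamRecordSqlType.create(...) and getRecordCoder() calls as above; columnCount is a hypothetical stand-in for the value I would need to determine at runtime (e.g. by peeking at the file's header line before constructing the pipeline):

// Sketch: build the schema once, up front, instead of hard-coding it in Temp.
// columnCount is hypothetical; derive it from the file before the pipeline runs.
int columnCount = 5;
List<String> fieldNames = new ArrayList<>();
List<Integer> fieldTypes = new ArrayList<>();
for (int i = 0; i < columnCount; i++) {
    fieldNames.add("R" + i);
    fieldTypes.add(Types.VARCHAR);
}
BeamRecordSqlType dynamicType = BeamRecordSqlType.create(fieldNames, fieldTypes);

// The same kind of coder the hard-coded Temp helper returns, now built dynamically:
PCollection<BeamRecord> File1_BeamRecord =
        ReadFile1.apply(new StringToBeamRecord()).setCoder(dynamicType.getRecordCoder());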
The query is:
PCollectionTuple test = PCollectionTuple
        .of(new TupleTag<BeamRecord>("File1_BeamRecord"), File1_BeamRecord)
        .and(new TupleTag<BeamRecord>("File2_BeamRecord"), File2_BeamRecord);

PCollection<BeamRecord> output = test.apply(BeamSql.queryMulti(
        "Select * From File1_BeamRecord JOIN File2_BeamRecord "));
Is there any way I can make the coder dynamic, or run the query with a default coder?

Univocity - parse each TSV file row to different Type of class object

I have a TSV file which has fixed rows, but each row is mapped to a different Java class.
For example:
recordType recordValue1
recordType recordValue1 recordValue2
For the first row I have the following class:
public class FirstRow implements ItsvRecord {
    @Parsed(index = 0)
    private String recordType;

    @Parsed(index = 1)
    private String recordValue1;

    public FirstRow() {
    }
}
and for the second row I have:
public class SecondRow implements ItsvRecord {
    @Parsed(index = 0)
    private String recordType;

    @Parsed(index = 1)
    private String recordValue1;

    @Parsed(index = 2)
    private String recordValue2;

    public SecondRow() {
    }
}
I want to parse the TSV file directly to the respective objects but I am falling short of ideas.
Use an InputValueSwitch. This will match a value in a particular column of each row to determine what RowProcessor to use. Example:
Create two (or more) processors, one for each type of record you need to process:
final BeanListProcessor<FirstRow> firstProcessor = new BeanListProcessor<FirstRow>(FirstRow.class);
final BeanListProcessor<SecondRow> secondProcessor = new BeanListProcessor<SecondRow>(SecondRow.class);
Create an InputValueSwitch:
// 0 means that the first column of each row has a value that
// identifies the type of record you are dealing with
InputValueSwitch valueSwitch = new InputValueSwitch(0);

// assigns the first processor to rows whose first column contains the 'firstRowType' value
valueSwitch.addSwitchForValue("firstRowType", firstProcessor);

// assigns the second processor to rows whose first column contains the 'secondRowType' value
valueSwitch.addSwitchForValue("secondRowType", secondProcessor);
Parse as usual:
TsvParserSettings settings = new TsvParserSettings(); //configure...
// your row processor is the switch
settings.setProcessor(valueSwitch);
TsvParser parser = new TsvParser(settings);
Reader input = new StringReader(""+
"firstRowType\trecordValue1\n" +
"secondRowType\trecordValue1\trecordValue2");
parser.parse(input);
Get the parsed objects from your processors:
List<FirstRow> firstTypeObjects = firstProcessor.getBeans();
List<SecondRow> secondTypeObjects = secondProcessor.getBeans();
The output will be*:
[FirstRow{recordType='firstRowType', recordValue1='recordValue1'}]
[SecondRow{recordType='secondRowType', recordValue1='recordValue1', recordValue2='recordValue2'}]
* Assuming you have a sane toString() implemented in your classes.
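For example, a toString() on FirstRow producing the output shown above might look like this (a sketch; adjust for your own fields):

@Override
public String toString() {
    // Matches the FirstRow{...} format printed above
    return "FirstRow{recordType='" + recordType +
            "', recordValue1='" + recordValue1 + "'}";
}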
If you want to manage associations among the objects that are parsed:
If your FirstRow should contain the elements parsed for records of type SecondRow, simply override the rowProcessorSwitched method:
InputValueSwitch valueSwitch = new InputValueSwitch(0) {
    @Override
    public void rowProcessorSwitched(RowProcessor from, RowProcessor to) {
        if (from == secondProcessor) {
            List<FirstRow> firstRows = firstProcessor.getBeans();
            FirstRow mostRecentRow = firstRows.get(firstRows.size() - 1);
            mostRecentRow.addRowsOfOtherType(secondProcessor.getBeans());
            secondProcessor.getBeans().clear();
        }
    }
};
The above assumes your FirstRow class has an addRowsOfOtherType method that takes a list of SecondRow as a parameter.
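That method is not part of the classes shown earlier; a minimal sketch might be (note it copies the list, since the processor's own list is cleared right after the call):

// Hypothetical helper on FirstRow:
private final List<SecondRow> secondRows = new ArrayList<>();

public void addRowsOfOtherType(List<SecondRow> rows) {
    secondRows.addAll(rows);
}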
And that's it!
You can even mix and match other types of RowProcessor. There's another example here that demonstrates this.
Hope this helps.

Entity Framework 4.0: casting a value as DateTime

LINQ to Entity Framework 4.0. SQL Server.
I'm trying to return a list of objects, and one of the columns in the database is varchar(255) and contains dates. I'm trying to cast the value to DateTime, but I haven't found a solution yet.
Example:
List<MyObject> objects = (from c in context.my_table
                          where c.field_id == 10
                          select new MyObject()
                          {
                              MyDate = c.value // This is varchar; I want it to be DateTime
                          }).ToList();
Is this not possible?
Update: this is LINQ to Entities. When trying to convert to DateTime I get:
LINQ to Entities does not recognize the method 'System.DateTime ToDateTime(System.String)' method, and this method cannot be translated into a store expression.
The answer is that it's not currently possible with Entity Framework. With LINQ itself it is; it's just not supported with Entity Framework.
You want DateTime.Parse(c.value), which will take a string containing a date and create a DateTime object out of it.
You can do it like this:
Date = DateTime.Parse(text)
And the best bet would be taking your result and then converting it to DateTime, like below:
static void Main(string[] args)
{
    Console.WriteLine(getme());
    Console.ReadLine();
}

private static DateTime getme()
{
    List<string> ss = new List<string>();
    ss.Add("11/11/2010");
    var r = from l in ss
            select new { date = Convert.ToDateTime(l) };
    return r.FirstOrDefault().date;
}