JavaFX: TableView of a ManyToMany relationship - list

I'm trying to create a TableView which shows each activity and the members who are participating in it. An Activity has a list of Members, and a Member has a list of Activities. I'm stuck at getting the columns to show the activity name and a member of that activity; I'd like to show an activity-member combination on each row. I'm also not sure how I'm supposed to populate the table, since I'm passing a list of activities.
This is what I'm trying to go for:
+----------+------------+-------------+----------------+-------------+
| Activity | Type       | Member name | Member surname | Member rank |
+----------+------------+-------------+----------------+-------------+
| act1     | Internship | Peter       | Peterson       | 1           |
| act1     | Internship | Bob         | Bobber         | 3           |
| act2     | Sport      | Tim         | Tom            | 1           |
| act2     | Sport      | Bob         | Bobber         | 3           |
+----------+------------+-------------+----------------+-------------+
public class ActivityListController extends GridPane {

    @FXML
    private TableView<Activity> tblActivities;
    @FXML
    private TableColumn<Activity, String> colActivityName;
    @FXML
    private TableColumn<Activity, String> colType;
    @FXML
    private TableColumn<Activity, List<String>> colFirstName;
    @FXML
    private TableColumn<Activity, String> colLastName;
    @FXML
    private TableColumn<Activity, String> colRank;

    private final DomainController dc;

    public ActivityListController(DomainController dc) {
        this.dc = dc;
        FXMLLoader loader = new FXMLLoader(getClass().getResource("ActivityList.fxml"));
        loader.setController(this);
        loader.setRoot(this);
        try {
            loader.load();
        } catch (IOException ex) {
            throw new RuntimeException(ex);
        }

        colActivityName.setCellValueFactory(cellData -> new SimpleStringProperty(cellData.getValue().getName()));
        colActivityName.setCellFactory(TextFieldTableCell.forTableColumn());
        colFirstName.setCellValueFactory(cellData -> new SimpleStringProperty(cellData.getValue().getMembers().???));
        colLastName.setCellValueFactory(cellData -> new SimpleStringProperty(cellData.getValue().getMembers().???));
        colRank.setCellValueFactory(cellData -> new SimpleStringProperty(cellData.getValue().getMembers().???));
        tblActivities.setItems(dc.getAllActivities());
    }
}
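
One way to get one row per activity-member pair is to give the table its own row type and flatten each activity's member list into it. This is only a sketch: the Member accessors used below (getFirstName(), getLastName(), getRank()) are assumed, since the post doesn't show the Member class.

public class ActivityMemberRow {

    private final Activity activity;
    private final Member member;

    public ActivityMemberRow(Activity activity, Member member) {
        this.activity = activity;
        this.member = member;
    }

    public Activity getActivity() { return activity; }
    public Member getMember() { return member; }
}

The table then becomes a TableView<ActivityMemberRow> (and the columns TableColumn<ActivityMemberRow, String>), and the constructor builds the flattened rows (ObservableList and FXCollections come from javafx.collections):

// Build one row per activity-member combination.
ObservableList<ActivityMemberRow> rows = FXCollections.observableArrayList();
for (Activity activity : dc.getAllActivities()) {
    for (Member member : activity.getMembers()) {
        rows.add(new ActivityMemberRow(activity, member));
    }
}
tblActivities.setItems(rows);

colActivityName.setCellValueFactory(cd -> new SimpleStringProperty(cd.getValue().getActivity().getName()));
colFirstName.setCellValueFactory(cd -> new SimpleStringProperty(cd.getValue().getMember().getFirstName()));
colLastName.setCellValueFactory(cd -> new SimpleStringProperty(cd.getValue().getMember().getLastName()));
colRank.setCellValueFactory(cd -> new SimpleStringProperty(String.valueOf(cd.getValue().getMember().getRank())));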

Finding start of compressed data for items in a zip with zip4j

I'm trying to find the start of compressed data for each zip entry using zip4j. It's a great library for returning the local header offset, which Java's ZipFile does not do. However, I'm wondering if there is a more reliable way than what I'm doing below to get to the start of the compressed data. Thanks in advance.
offset = header.getOffsetLocalHeader();
offset += 30; // add fixed local file header size
offset += header.getFileNameLength(); // add filename field length
offset += header.getExtraFieldLength(); // add extra field length
// not quite the right number, sometimes have to add 4
// seems to be some header data that is outside the extra field value
offset += 4;
Edit
Here is a sample zip:
https://alexa-public.s3.amazonaws.com/test.zip
The code below decompresses each item properly but won't work without the +4.
String path = "/Users/test/Desktop/zip test/test.zip";
List<FileHeader> fileHeaders = new ZipFile(path).getFileHeaders();
for (FileHeader header : fileHeaders) {
    long offset = 30 + header.getOffsetLocalHeader() + header.getFileNameLength() + header.getExtraFieldLength();
    // fudge factor!
    offset += 4;

    RandomAccessFile f = new RandomAccessFile(path, "r");
    byte[] buffer = new byte[(int) header.getCompressedSize()];
    f.seek(offset);
    f.read(buffer, 0, (int) header.getCompressedSize());
    f.close();

    Inflater inf = new Inflater(true);
    inf.setInput(buffer);
    byte[] inflatedContent = new byte[(int) header.getUncompressedSize()];
    inf.inflate(inflatedContent);
    inf.end();

    FileOutputStream fos = new FileOutputStream(new File("/Users/test/Desktop/" + header.getFileName()));
    fos.write(inflatedContent);
    fos.close();
}
The reason you have to add 4 to the offset in your example is that the size of the extra data field in the central directory for this entry (= file header) differs from the one in the local file header, and it is perfectly legal as per the zip specification to have different extra data field sizes in the central directory and the local header. In fact, the extra data field in question, the Extended Timestamp extra field (signature 0x5455), has an official definition with different lengths in the two versions:
Extended Timestamp extra field (signature 0x5455)
Local-header version:
| Value | Size | Description |
| ------------- |---------------|---------------------------------------|
| 0x5455 | Short | tag for this extra block type ("UT") |
| TSize | Short | total data size for this block |
| Flags | Byte | info bits |
| (ModTime) | Long | time of last modification (UTC/GMT) |
| (AcTime) | Long | time of last access (UTC/GMT) |
| (CrTime) | Long | time of original creation (UTC/GMT) |
Central-header version:
| Value | Size | Description |
| ------------- |---------------|---------------------------------------|
| 0x5455 | Short | tag for this extra block type ("UT") |
| TSize | Short | total data size for this block |
| Flags | Byte | info bits |
| (ModTime) | Long | time of last modification (UTC/GMT) |
In the sample zip file you attached, the tool which created the zip writes 4 bytes of additional information into the local-header version of this extra field compared to the central-directory version.
Relying on the extra field length from the central directory to reach the start of data can therefore be error prone. A more reliable way to achieve what you want is to read the extra field length from the local header itself. I have modified your code slightly to take the extra field length from the local header, and not from the central header, to reach the start of data.
import net.lingala.zip4j.ZipFile;
import net.lingala.zip4j.model.FileHeader;
import net.lingala.zip4j.util.RawIO;
import org.junit.Test;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.List;
import java.util.zip.DataFormatException;
import java.util.zip.Inflater;

public class ZipTest {

    private static final int OFFSET_TO_EXTRA_FIELD_LENGTH_SIZE = 28;

    private RawIO rawIO = new RawIO();

    @Test
    public void testExtractWithDataOffset() throws IOException, DataFormatException {
        String basePath = "/Users/slingala/Downloads/test/";
        String path = basePath + "test.zip";
        List<FileHeader> fileHeaders = new ZipFile(path).getFileHeaders();

        for (FileHeader header : fileHeaders) {
            RandomAccessFile f = new RandomAccessFile(path, "r");
            byte[] buffer = new byte[(int) header.getCompressedSize()];

            // Read the extra field length from this entry's local header,
            // not from the central directory.
            f.seek(header.getOffsetLocalHeader() + OFFSET_TO_EXTRA_FIELD_LENGTH_SIZE);
            int extraFieldLength = rawIO.readShortLittleEndian(f);
            f.skipBytes(header.getFileNameLength() + extraFieldLength);
            f.read(buffer, 0, (int) header.getCompressedSize());
            f.close();

            Inflater inf = new Inflater(true);
            inf.setInput(buffer);
            byte[] inflatedContent = new byte[(int) header.getUncompressedSize()];
            inf.inflate(inflatedContent);
            inf.end();

            FileOutputStream fos = new FileOutputStream(new File(basePath + header.getFileName()));
            fos.write(inflatedContent);
            fos.close();
        }
    }
}
On a side note, I wonder why you want to read the data, deal with the inflater, and extract the content yourself. With zip4j you can extract all entries with ZipFile.extractAll(), or you can extract each entry in the zip file with streams via ZipFile.getInputStream(). A skeleton example is:
ZipFile zipFile = new ZipFile("filename.zip");
FileHeader fileHeader = zipFile.getFileHeader("entry_name_in_zip.txt");
InputStream inputStream = zipFile.getInputStream(fileHeader);
Once you have the InputStream, you can read the content and write it to any OutputStream. This way you can extract each entry in the zip file without having to deal with the inflaters yourself.
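
For instance, a minimal sketch of that route (the entry name and output path below are placeholders):

import net.lingala.zip4j.ZipFile;
import net.lingala.zip4j.model.FileHeader;

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ZipStreamExtract {

    public static void main(String[] args) throws Exception {
        ZipFile zipFile = new ZipFile("filename.zip");
        FileHeader fileHeader = zipFile.getFileHeader("entry_name_in_zip.txt");

        // getInputStream returns the already-inflated data, so it can be
        // copied straight to disk without touching Inflater.
        try (InputStream inputStream = zipFile.getInputStream(fileHeader)) {
            Files.copy(inputStream, Paths.get("entry_name_in_zip.txt"));
        }
    }
}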

Cannot infer an appropriate lifetime when sharing self reference

This is an experiment I'm doing while learning Rust and following Programming Rust.
Here's a link to the code in the playground.
I have a struct (Thing) with some inner state (xs). A Thing should be created with Thing::new and then started, after which the user should be free to call some other function like get_xs.
But! In start, two threads are spawned which call other methods on the Thing instance that could mutate its inner state (say, add elements to xs), so they need a reference to self (hence the Arc). However, this causes a lifetime conflict:
error[E0495]: cannot infer an appropriate lifetime due to conflicting requirements
--> src/main.rs:18:30
|
18 | let self1 = Arc::new(self);
| ^^^^
|
note: first, the lifetime cannot outlive the anonymous lifetime #1 defined
on the method body at 17:5...
--> src/main.rs:17:5
|
17 | / fn start(&self) -> io::Result<Vec<JoinHandle<()>>> {
18 | | let self1 = Arc::new(self);
19 | | let self2 = self1.clone();
20 | |
... |
33 | | Ok(vec![handle1, handle2])
34 | | }
| |_____^
note: ...so that expression is assignable (expected &Thing, found &Thing)
--> src/main.rs:18:30
|
18 | let self1 = Arc::new(self);
| ^^^^
= note: but, the lifetime must be valid for the static lifetime...
note: ...so that the type `[closure@src/main.rs:23:20: 25:14
self1:std::sync::Arc<&Thing>]` will meet its required lifetime bounds
--> src/main.rs:23:14
|
23 | .spawn(move || loop {
| ^^^^^
Is there a way to spawn the state-mutating threads and still give ownership of the thing back to the calling code after start has run?
use std::io;
use std::sync::{Arc, LockResult, RwLock, RwLockReadGuard};
use std::thread::{Builder, JoinHandle};

struct Thing {
    xs: RwLock<Vec<String>>,
}

impl Thing {
    fn new() -> Thing {
        Thing {
            xs: RwLock::new(Vec::new()),
        }
    }

    fn start(&self) -> io::Result<Vec<JoinHandle<()>>> {
        let self1 = Arc::new(self);
        let self2 = self1.clone();

        let handle1 = Builder::new()
            .name("thread1".to_owned())
            .spawn(move || loop {
                self1.do_within_thread1();
            })?;

        let handle2 = Builder::new()
            .name("thread2".to_owned())
            .spawn(move || loop {
                self2.do_within_thread2();
            })?;

        Ok(vec![handle1, handle2])
    }

    fn get_xs(&self) -> LockResult<RwLockReadGuard<Vec<String>>> {
        self.xs.read()
    }

    fn do_within_thread1(&self) {
        // read and potentially mutate self.xs
    }

    fn do_within_thread2(&self) {
        // read and potentially mutate self.xs
    }
}

fn main() {
    let thing = Thing::new();
    let handles = match thing.start() {
        Ok(hs) => hs,
        _ => panic!("Error"),
    };
    thing.get_xs();
    for handle in handles {
        handle.join();
    }
}
The error message says that the value passed to the Arc must live for the 'static lifetime. This is because spawning a thread, be it with std::thread::spawn or std::thread::Builder, requires the passed closure to live for that lifetime, thus enabling the thread to "live freely" beyond the scope of the spawning thread.
Let us expand the prototype of the start method:
fn start<'a>(&'a self: &'a Thing) -> io::Result<Vec<JoinHandle<()>>> { ... }
The attempt to put a &'a self into an Arc creates an Arc<&'a Thing>, which is still constrained to the lifetime 'a and so cannot be moved into a closure that must live longer than that. Since we cannot move out of &self either, the solution is not to use &self for this method. Instead, we can make start accept an Arc directly:
fn start(thing: Arc<Self>) -> io::Result<Vec<JoinHandle<()>>> {
    let self1 = thing.clone();
    let self2 = thing;

    let handle1 = Builder::new()
        .name("thread1".to_owned())
        .spawn(move || loop {
            self1.do_within_thread1();
        })?;

    let handle2 = Builder::new()
        .name("thread2".to_owned())
        .spawn(move || loop {
            self2.do_within_thread2();
        })?;

    Ok(vec![handle1, handle2])
}
And pass reference-counted pointers at the consumer's scope:
let thing = Arc::new(Thing::new());
let handles = Thing::start(thing.clone()).unwrap_or_else(|_| panic!("Error"));

thing.get_xs().unwrap();
for handle in handles {
    handle.join().unwrap();
}
Playground. At this point the program will compile and run (although the workers are in an infinite loop, so the playground will kill the process after the timeout).

How to parse TSV data into nested objects

I'm trying to parse the following TSV data into a nested object, but my "title" field is always null within the Nested class.
I've included the method at the bottom which converts the TSV data to the object.
value1 | metaData1 | valueA |
value2 | metaData2 | valueB |
value3 | metaData3 | valueC |
public class Data {

    @Parsed(index = 0)
    private String value0;

    @Parsed(index = 1)
    private String foo;

    @Nested
    MetaData metaData;

    public static class MetaData {
        @Parsed(index = 1)
        private String title;
    }
}
public <T> List<T> convertFileToData(File file, Class<T> clazz, boolean removeHeader) {
    BeanListProcessor<T> rowProcessor = new BeanListProcessor<>(clazz);
    CsvParserSettings settings = new CsvParserSettings();
    settings.getFormat().setDelimiter('|');
    settings.setProcessor(rowProcessor);
    settings.setHeaderExtractionEnabled(removeHeader);
    CsvParser parser = new CsvParser(settings);
    parser.parseAll(file);
    return rowProcessor.getBeans();
}
You forgot to define an index on your MetaData.title:
public static class MetaData {
    @Parsed(index = 1)
    private String title;
}
Also, make sure the configured delimiter matches your input: setting the delimiter to \t will not work while your input is using | as the separator.
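
If the input ever is genuinely tab-separated, univocity also ships a dedicated TSV parser; a variant of the conversion method could then look like this sketch (same shape as convertFileToData above, assuming the univocity-parsers library):

import com.univocity.parsers.common.processor.BeanListProcessor;
import com.univocity.parsers.tsv.TsvParser;
import com.univocity.parsers.tsv.TsvParserSettings;

import java.io.File;
import java.util.List;

public class TsvLoader {

    // Same idea as convertFileToData, but TsvParser uses \t as its
    // delimiter out of the box, so no delimiter configuration is needed.
    public static <T> List<T> convertTsvFileToData(File file, Class<T> clazz, boolean removeHeader) {
        BeanListProcessor<T> rowProcessor = new BeanListProcessor<>(clazz);
        TsvParserSettings settings = new TsvParserSettings();
        settings.setProcessor(rowProcessor);
        settings.setHeaderExtractionEnabled(removeHeader);
        new TsvParser(settings).parse(file);
        return rowProcessor.getBeans();
    }
}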

MySQL Connector/C or MySQL Connector/C++ - which to use?

I use Visual Studio 2012 and have a console application. I need to connect to a MySQL server, so I use MySQL Connector/C++, but when I read from a table, instead of getting (for example) 'word', I get four unknown symbols, then 'word', then many more unknown symbols (some of them are also words), and it ends with a fatal error. What is the problem? Should I use Connector/C?
This is my code:
sql::mysql::MySQL_Driver *driver;
sql::Connection *con;
sql::Statement *stmt;
driver = sql::mysql::get_mysql_driver_instance();
con = driver->connect("localhost", "root", "pass");
stmt = con->createStatement();
stmt->execute("USE mail_members");
sql::ResultSet* res = stmt->executeQuery("SELECT id FROM messages");
int k = 0;
res->next();
std::cout << res->getString("id").asStdString();
delete res;
delete stmt;
delete con;
Does anyone use MySQL Connector/C++?
P.S. This is my messages table:
+---------+---------+------+-----+---------+-------+
| Field   | Type    | Null | Key | Default | Extra |
+---------+---------+------+-----+---------+-------+
| id      | int(11) | YES  |     | NULL    |       |
| message | text    | YES  |     | NULL    |       |
+---------+---------+------+-----+---------+-------+
I use the MySQL connector for C# in another application and it works correctly. Maybe this doesn't work because my application is a console application (with stdafx files)?
Is your code throwing any exceptions?
I put the field name into a std::string and then pass it to the result set methods:
const std::string name = fri.get_field_name();
unsigned int result_value = m_resultset.getUInt(name);
In my tables, the record ID field is an integer not a string.
+----------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+----------+------------------+------+-----+---------+----------------+
| ID_Title | int(10) unsigned | NO | PRI | NULL | auto_increment |
| Title | varchar(64) | NO | UNI | NULL | |
+----------+------------------+------+-----+---------+----------------+
2 rows in set (0.08 sec)
Is your ID field a string?

JPA self-join error

I have an error with my entities. My tables are:

LANGUAGE
codLanguage (primary key)
nameLanguage

DEVICE
idDevice (primary key)
nameDevice

PHRASE
idPhraseGroup (primary key)
codLanguage (primary key)
idDevice (primary key)
text
I have a problem with my entity Phrase. It is:
public class Phrase implements Serializable {

    @EmbeddedId
    private PhraseKey idPhrase;

    private String text;

    // etc.

    // here my problem (*)
    @OneToMany(mappedBy = "idPhrase.idPhraseGroup", fetch = FetchType.EAGER)
    @JoinColumn(name = "idPhrase.idPhraseGroup", updatable = false, insertable = false, referencedColumnName = "idPhrase.idPhraseGroup")
    private List<Phrase> groupListPhrase;
}
@Embeddable
public class PhraseKey implements Serializable {

    private Integer idPhraseGroup;
    private String codLanguage;
    private String idDevice;

    // getters and setters
}
I would like to get a list of phrases with the same idPhraseGroup. For example, in the PHRASE table:
idPhraseGroup | codLang | idDevice | text
1             | ES      | 1        | mesa
1             | EN      | 1        | table
...but I've got this error:
Exception Description: An incompatible mapping has been encountered
This usually occurs when the cardinality of a mapping does not
correspond with the cardinality of its backpointer
Thanks
I do not see why you need the mapping, or how it can work. You cannot use the basic mapping field "idPhrase.idPhraseGroup" in the mappedBy of a OneToMany because it does not describe a relationship. A OneToMany generally relies on the other side having a ManyToOne back to it, but in this case you don't have one.
If all you want is a collection of Phrase entities with a particular idPhraseGroup, just query for it using:
List<Phrase> group = em.createQuery(
        "select p from Phrase p where p.idPhrase.idPhraseGroup = :phraseGroup", Phrase.class)
    .setParameter("phraseGroup", phraseGroupId)
    .getResultList();
I would remove the mapping unless you really need it cached within the Phrase entity. If you do, it would be better mapped like:
@OneToMany
@JoinColumn(name = "IDPHRASEGROUP", referencedColumnName = "IDPHRASEGROUP", insertable = false, updatable = false)
private List<Phrase> groupListPhrase;
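
For context, a sketch of the whole Phrase entity with that cached, read-only mapping (assuming the join column generated for PhraseKey.idPhraseGroup is named IDPHRASEGROUP; adjust to your actual column name):

import java.io.Serializable;
import java.util.List;
import javax.persistence.EmbeddedId;
import javax.persistence.Entity;
import javax.persistence.JoinColumn;
import javax.persistence.OneToMany;

@Entity
public class Phrase implements Serializable {

    @EmbeddedId
    private PhraseKey idPhrase;

    private String text;

    // All phrases that share this row's idPhraseGroup. The join column is
    // marked insertable = false / updatable = false so it is only ever
    // written through the embedded id, never through this collection.
    @OneToMany
    @JoinColumn(name = "IDPHRASEGROUP", referencedColumnName = "IDPHRASEGROUP",
            insertable = false, updatable = false)
    private List<Phrase> groupListPhrase;
}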