GeoDMS: is there example code to convert CSV files directly to FSS files?

I am creating a GeoDMS configuration that reads GTFS files (which are stored in csv format), constructs a spatiotemporally explicit network out of those files, and then computes OD travel-time matrices based on that network.
I am currently reading the csv files using the StorageType = "gdal.grid" property, which is useful because that way I do not have to specify the length of the csv file a priori. However, to speed up the computation I would like to convert the raw csv files into GeoDMS-native fss files, as that would significantly speed up data reading.
However, if I am not mistaken, I would need to explicitly define all attributes I would like to store in the fss file, correct? In short, is there any example code out there in which csv files are stored as fss files?
Currently I am loading the csv files and storing the intermediate results in a shapefile, but that is not very elegant. Ideally I would get my hands on a function that 'copies' csv files into fss files without having to know the cardinality or field names a priori. If I need to specify all field names, that would not be a problem either.
I am currently attempting the following very naive approach, which does not work.
Template LoadCsvThroughGDAL
{
    parameter<File_Structure> inFile;

    unit<uint32> Data
    :   StorageName     = "= '%DataDir%/_feeds/'+AvailableFeeds/Name[inFeed]+'/'+File_Structure/Name[inFile]+'.csv'"
    ,   StorageType     = "gdal.vect"
    ,   StorageReadOnly = "True";

    unit<uint32> StoreFSS
    :   expr        = "Data"
    ,   StorageName = "= '%DataDir%/_feeds/'+AvailableFeeds/Name[inFeed]+'/fss/'+File_Structure/Name[inFile]+'.fss'";
}

Chris,
Here is a working example for:
1) reading data from a .csv file
2) writing the data to a .fss file
3) reading the data again from this .fss file
without having to explicitly configure the attribute names.
parameter<string> SourceDir := '%SourceDataDir%/OV/GTFS_20190318';

container write_to_fss
{
    unit<uint32> trip
    :   StorageName     = "=SourceDir + '/trips.csv'"
    ,   StorageType     = "gdal.vect"
    ,   StorageReadOnly = "True";

    unit<uint32> name      := SubItem_PropValues(trip,'name');
    unit<uint32> fssdomain := trip;

    container to_fss := for_each_nedv(name/name, 'trip/' + name/name, fssdomain, string)
    ,   StorageName = "=SourceDir + '/trip.fss'"
    {
        unit<uint32> domain := fssdomain;
    }
}

container readdomain
:   StorageName     = "=SourceDir + '/trip.fss'"
,   StorageReadOnly = "True"
{
    unit<uint32> domain;
}

container trip := for_each_ndv(write_to_fss/name/name, readdomain/domain, string)
,   StorageName     = "=SourceDir + '/trip.fss'"
,   StorageReadOnly = "True"
{
    unit<uint32> domain;
}

If you take the read domain out of the trip container, it should work.
parameter<string> SourceDir := '%SourceDataDir%/OV/GTFS_20190318';

container write_to_fss
{
    unit<uint32> trip
    :   StorageName     = "=SourceDir + '/trips.csv'"
    ,   StorageType     = "gdal.vect"
    ,   StorageReadOnly = "True";

    unit<uint32> name      := SubItem_PropValues(trip,'name');
    unit<uint32> fssdomain := trip;

    container to_fss_domain
    :   StorageName = "=SourceDir + '/trip_domain.fss'"
    {
        unit<uint32> domain := fssdomain;
    }

    container to_fss :=
        for_each_nedv(
              name/name
            , 'trip/' + name/name
            , to_fss_domain/domain
            , string
        )
    ,   StorageName = "=SourceDir + '/trip.fss'";
}

container readdomain
:   StorageName     = "=SourceDir + '/trip_domain.fss'"
,   StorageReadOnly = "True"
{
    unit<uint32> domain;
}

container trip :=
    for_each_ndv(
          write_to_fss/name/name
        , readdomain/domain
        , string
    )
,   StorageName     = "=SourceDir + '/trip.fss'"
,   StorageReadOnly = "True"
{
    unit<uint32> domain := readdomain/domain;
}

Related

C++ iterate avro schema and map it to Key Value (name , type)

I'm learning Avro & C++ (both together :) ) and what I'm trying to do is:
Load a schema.
Map the schema fields to key/value pairs of name & type.
Then iterate the Avro data according to the mapping.
Based on what I've found so far, I'm extracting the schema from the Avro data file and handling it as a GenericDatum.
When I try to iterate over its fields, I get the name fine, but the field type is null, where I would expect the actual type. Any help would be appreciated.
My code:
const char *avroFilePathCstr = avroFilePath.c_str();
avro::DataFileReader<avro::GenericDatum> reader(avroFilePathCstr);
auto dataSchema = reader.dataSchema();
avro::GenericDatum datum(dataSchema);
ProcessAvroSchema(dataSchema);

void ProcessAvroSchema(avro::GenericDatum schema)
{
    const avro::GenericRecord& schemaRecord = schema.value<avro::GenericRecord>();
    for (unsigned int i = 0; i < schemaRecord.fieldCount(); i++)
    {
        avro::GenericDatum fieldDatum = schemaRecord.fieldAt(i);
        cout << "SCHEMA:: fieldName: " << schemaRecord.schema()->nameAt(i)
             << " , fieldType: " << fieldDatum.type() << "\n";
    }
}
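(For reference, a minimal sketch of the name-to-type mapping step described above; it is not from the original post. It reads each field's declared type from the schema nodes via ValidSchema::root() instead of from an empty GenericDatum, and the MapFieldTypes helper name is illustrative, assuming the root schema is a record.)

#include <avro/Node.hh>
#include <avro/Types.hh>
#include <avro/ValidSchema.hh>

#include <map>
#include <string>

// Illustrative helper (not part of the original post): walk the record
// schema's nodes so each field's declared type is available even before
// any datum has been read from the file.
std::map<std::string, avro::Type> MapFieldTypes(const avro::ValidSchema& schema)
{
    std::map<std::string, avro::Type> fields;
    const avro::NodePtr& root = schema.root();  // assumed to be an AVRO_RECORD node
    for (size_t i = 0; i < root->leaves(); ++i) {
        // nameAt(i) is the field name, leafAt(i) is that field's own schema node
        fields[root->nameAt(i)] = root->leafAt(i)->type();
    }
    return fields;
}

Calling MapFieldTypes(reader.dataSchema()) on the reader from the snippet above would then yield the name/type pairs needed to drive the iteration over the data.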

Can we get the pallet and function data from a call (type) passed as a parameter to a function in Substrate?

fn pre_dispatch(
    self,
    who: &Self::AccountId,
    call: &Self::Call,
    info: &DispatchInfoOf<Self::Call>,
    len: usize,
) -> Result<Self::Pre, TransactionValidityError> {
    let (_fee, imbalance) = self.withdraw_fee(who, call, info, len)?;
    Ok((self.0, who.clone(), imbalance))
}
(The above code is copied from the transaction-payment pallet.)
Can we get the function name and parameters from the call (one of the parameters)? Based on the function name, pallet, and parameters passed by the user, I want to compute the fee.
For example, if it is a call to pallet-staking::bond(x: amount_of_tokens_to_be_bonded), I want to set the transaction fee based on x.
Is that possible?
Likewise, I want to set the fee based on the function-call parameters entered by the user.
You can, but it requires a bit of type juggling to do so.
First, you need to realize that type Call = T::Call; in ChargeTransactionPayment. Looking at trait Config in pallet_transaction_payment, no type Call can be seen there. Instead, this type comes from frame_system::Config (which is the super-trait of all pallets).
A brief look at the top-level runtime aggregator file reveals that this Call type is essentially the outer-call of the runtime, an enum that encapsulates the calls of all pallets.
That being said, the main point here is that from within pallet_transaction_payment we cannot know whether this outer-call contains a particular call from staking or not. To do so, you need to enforce this assumption via a new trait bound, namely IsSubType. This trait is specifically made to convert from a wrapping type (like the outer-call) into its inner variants. See an example of this trait being implemented for node_runtime's Call type.
Applying the following diff to substrate master should do exactly what you want.
diff --git a/Cargo.lock b/Cargo.lock
index ea54adf99e..df66185163 100644
--- a/Cargo.lock
+++ b/Cargo.lock
@@ -6074,6 +6074,7 @@ dependencies = [
"frame-support",
"frame-system",
"pallet-balances",
+ "pallet-staking",
"parity-scale-codec",
"scale-info",
"serde",
diff --git a/frame/transaction-payment/Cargo.toml b/frame/transaction-payment/Cargo.toml
index 1d3066e39f..0e705514bb 100644
--- a/frame/transaction-payment/Cargo.toml
+++ b/frame/transaction-payment/Cargo.toml
@@ -27,6 +27,7 @@ sp-std = { version = "4.0.0-dev", default-features = false, path = "../../primit
frame-support = { version = "4.0.0-dev", default-features = false, path = "../support" }
frame-system = { version = "4.0.0-dev", default-features = false, path = "../system" }
+pallet-staking = { version = "4.0.0-dev", default-features = false, path = "../staking" }
[dev-dependencies]
serde_json = "1.0.68"
@@ -44,5 +45,6 @@ std = [
"sp-std/std",
"frame-support/std",
"frame-system/std",
+ "pallet-staking/std",
]
try-runtime = ["frame-support/try-runtime"]
diff --git a/frame/transaction-payment/src/lib.rs b/frame/transaction-payment/src/lib.rs
index 59d94a8237..3b0803663d 100644
--- a/frame/transaction-payment/src/lib.rs
+++ b/frame/transaction-payment/src/lib.rs
@@ -251,7 +251,7 @@ pub mod pallet {
pub struct Pallet<T>(_);
#[pallet::config]
- pub trait Config: frame_system::Config {
+ pub trait Config: frame_system::Config + pallet_staking::Config {
/// Handler for withdrawing, refunding and depositing the transaction fee.
/// Transaction fees are withdrawn before the transaction is executed.
/// After the transaction was executed the transaction weight can be
@@ -696,7 +696,8 @@ impl<T: Config> sp_std::fmt::Debug for ChargeTransactionPayment<T> {
impl<T: Config> SignedExtension for ChargeTransactionPayment<T>
where
BalanceOf<T>: Send + Sync + From<u64> + FixedPointOperand,
- T::Call: Dispatchable<Info = DispatchInfo, PostInfo = PostDispatchInfo>,
+ T::Call: Dispatchable<Info = DispatchInfo, PostInfo = PostDispatchInfo>
+ + frame_support::traits::IsSubType<pallet_staking::Call<T>>,
{
const IDENTIFIER: &'static str = "ChargeTransactionPayment";
type AccountId = T::AccountId;
@@ -736,8 +737,15 @@ where
info: &DispatchInfoOf<Self::Call>,
len: usize,
) -> Result<Self::Pre, TransactionValidityError> {
- let (_fee, imbalance) = self.withdraw_fee(who, call, info, len)?;
- Ok((self.0, who.clone(), imbalance))
+ use frame_support::traits::IsSubType;
+ if let Some(pallet_staking::Call::bond_extra { .. }) = call.is_sub_type() {
+ // skip
+ todo!()
+ } else {
+ // default impl
+ let (_fee, imbalance) = self.withdraw_fee(who, call, info, len)?;
+ Ok((self.0, who.clone(), imbalance))
+ }
}
fn post_dispatch(
Note that this approach implies that pallet_staking::Config must be present in the runtime, which is not aligned with the modularity of FRAME, and is therefore not implemented. If you want this feature, as of now, the only way is to fork pallet_transaction_payment and customize it a bit for your runtime.

How can I send a pagination embed with a music list in discord.js?

I want to send a pagination embed with a music list, because an embed field does not send when it is over 1024 characters.
I want to send it across several pages (4 songs max per page).
Sorry for my English, I'm French...
console.log(_serverQueue.songs)
let q = ``;
for (var i = 1; i < _serverQueue.songs.length; i++) {
    q += `\n${i + 1}. **${_serverQueue.songs[i].title}**`;
}
let resp = [
    { name: `Now Playing`, value: _serverQueue.songs[0].title },
    { name: `Queue`, value: q },
];
//Putting it all together
const FieldsEmbed = new Pagination.FieldsEmbed()
    .setArray({ word: `Queue` })
    .setAuthorizedUsers([message.author.id])
    .setChannel(message.channel)
    .setElementsPerPage(4)
    .setPageIndicator(true)
    .formatField('Playlist :', el => el.word)
FieldsEmbed.embed
    .setColor('#008000')
    .setTitle('Playlist :')
FieldsEmbed.build()
}
As per the documentation at https://www.npmjs.com/package/discord-paginationembed, I explained the steps with comments:
const Discord = require('discord.js');
const Pagination = require('discord-paginationembed');
const songText = ["This is a long SongText", "That is Split up Over", "Multiple Sites", "End of Song"];
// The Splitting can happen via Discord.Js Util Class, it has a Splitter
const embeds = [];
for (let i = 1; i <= 4; ++i)
    embeds.push(new Discord.MessageEmbed().setFooter('Page ' + i).setDescription(songText[i - 1]));
// Create Embeds here with the Content and push them into the Array
const myImage = message.author.displayAvatarURL();
new Pagination.Embeds()
    .setArray(embeds)
    .setAuthorizedUsers([message.author.id])
    .setChannel(message.channel)
    .setPageIndicator(true)
    .setPage(1)
    // Methods below are for customizing all embeds
    .setImage(myImage)
    .setThumbnail(myImage)
    .setTitle('Test Title')
    .setDescription('Test Description')
    .setURL(myImage)
    .setColor(0xFF00AE)
    .build();

How to modify the filename of the S3 object uploaded using the Kafka Connect S3 Connector?

I've been using the S3 connector for a couple of weeks now, and I want to change the way the connector names each file. I am using the HourlyBasedPartition, so the path to each file is already enough for me to find each file, and I want the filenames to be something generic for all the files, like just 'Data.json.gzip' (with the respective path from the partitioner).
For example, I want to go from this:
<prefix>/<topic>/<HourlyBasedPartition>/<topic>+<kafkaPartition>+<startOffset>.<format>
To this:
<prefix>/<topic>/<HourlyBasedPartition>/Data.<format>
The objective of this is to only make one call to S3 to download the files later, instead of having to look for the filename first and then download it.
Searching through the files in the folder called 'kafka-connect-s3', I found this file:
https://github.com/confluentinc/kafka-connect-storage-cloud/blob/master/kafka-connect-s3/src/main/java/io/confluent/connect/s3/TopicPartitionWriter.java, which towards the end has some of the following functions:
private RecordWriter getWriter(SinkRecord record, String encodedPartition)
    throws ConnectException {
  if (writers.containsKey(encodedPartition)) {
    return writers.get(encodedPartition);
  }
  String commitFilename = getCommitFilename(encodedPartition);
  log.debug(
      "Creating new writer encodedPartition='{}' filename='{}'",
      encodedPartition,
      commitFilename
  );
  RecordWriter writer = writerProvider.getRecordWriter(connectorConfig, commitFilename);
  writers.put(encodedPartition, writer);
  return writer;
}

private String getCommitFilename(String encodedPartition) {
  String commitFile;
  if (commitFiles.containsKey(encodedPartition)) {
    commitFile = commitFiles.get(encodedPartition);
  } else {
    long startOffset = startOffsets.get(encodedPartition);
    String prefix = getDirectoryPrefix(encodedPartition);
    commitFile = fileKeyToCommit(prefix, startOffset);
    commitFiles.put(encodedPartition, commitFile);
  }
  return commitFile;
}

private String fileKey(String topicsPrefix, String keyPrefix, String name) {
  String suffix = keyPrefix + dirDelim + name;
  return StringUtils.isNotBlank(topicsPrefix)
      ? topicsPrefix + dirDelim + suffix
      : suffix;
}

private String fileKeyToCommit(String dirPrefix, long startOffset) {
  String name = tp.topic()
      + fileDelim
      + tp.partition()
      + fileDelim
      + String.format(zeroPadOffsetFormat, startOffset)
      + extension;
  return fileKey(topicsDir, dirPrefix, name);
}
I don't know if this can be customised to do what I want, but it seems to be close to or related to my intentions. Hope it helps.
(Submitted an issue on GitHub as well: https://github.com/confluentinc/kafka-connect-storage-cloud/issues/369)

How to create a JSON object with hierarchical objects and values?

How can I create a hierarchical JSON object and value structure using the Casablanca/cpprestsdk JSON library?
I referred to the following links, but I could not figure out how to place one JSON object inside another JSON object:
https://github.com/Microsoft/cpprestsdk/wiki/JSON
https://microsoft.github.io/cpprestsdk/namespaceweb_1_1json.html
JSON object:
{
    "employee":
    {
        "name": "vinkris",
        "age": "30"
    }
}
My current code constructs this as a string:
{
    struct employee emp;
    string object = U("{") + U("employee:") + U("{");
    object += U("name:") + emp.name + U(",");
    object += U("age:") + std::to_string(emp.age) + U("}");
    json::value = json::value::string(object)
}
Is there any way to simplify how the JSON body is constructed?
For example:
{
    JsonObject object();
    JsonSubObjects subObject();
    subObjects.add("name", "vinkris");
    subObjects.add("age", "30");
    object.add("employee", subObject)
}
I am very new to both C++ and the Casablanca library. I am wondering if this is possible.
It would be great to get some pointers on how to build something like this.
This:
json::value obj = json::value::object();
json::value employee = json::value::object();
employee[U("name")] = json::value::string(U("vinkris"));
employee[U("age")] = json::value::number(30);
obj[U("employee")] = employee;
turns to this:
{"employee":{"age":30,"name":"vinkris"}}