libpq: get data type - C++

I am working on a C++ project that uses a PostgreSQL database.
I created a table in my database; one of its columns is of type character varying(40).
Now I need to SELECT this data FROM the table in my C++ project. I know that I should use the libpq library, which is the C/C++ interface for PostgreSQL.
I have succeeded in selecting data from the table. Now I am wondering whether it is possible to get the data type of a column. For example, here I want to get character varying(40).

You need to use PQftype.
As described here: http://www.idiap.ch/~formaz/doc/postgreSQL/libpq-chapter17861.htm
And just take a look here about decoding return values: http://www.postgresql.org/message-id/da7021e0608040738l3b0880a1q5a76b838937f8c78#mail.gmail.com
You must also use PQfsize to get the field size.
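For illustration, here is a minimal sketch of how these calls fit together (the connection string, table and column names are placeholders, not taken from the question). Note that PQftype only gives you the type's OID; turning that into a name like character varying still requires a lookup in the pg_type catalog (or a query through format_type()), and for varchar the declared length (40) shows up in PQfmod rather than PQfsize:
#include <cstdio>
#include <libpq-fe.h>

int main()
{
    // Placeholder connection string and query -- adjust for your database.
    PGconn *conn = PQconnectdb("dbname=mydb");
    if (PQstatus(conn) != CONNECTION_OK) {
        std::fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }
    PGresult *res = PQexec(conn, "SELECT bar FROM foo");
    if (PQresultStatus(res) == PGRES_TUPLES_OK) {
        for (int col = 0; col < PQnfields(res); ++col) {
            Oid type_oid = PQftype(res, col);   // type OID, e.g. 1043 for varchar
            int size     = PQfsize(res, col);   // -1 for variable-length types
            int typmod   = PQfmod(res, col);    // for varchar(40) this is 40 + 4
            std::printf("column %s: type oid=%u, size=%d, typmod=%d\n",
                        PQfname(res, col), type_oid, size, typmod);
        }
    }
    PQclear(res);
    PQfinish(conn);
    return 0;
}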


What is AWS S3 dataset?

Looking at the documentation of awswrangler.s3.to_csv or awswrangler.s3.to_parquet, there is a dataset parameter.
From testing, it looks like setting dataset=True allows, among other things, appending new data to an already existing set. It also looks like when dataset=True, I can't specify the file name, and AWS autogenerates the names for the files which are added to the specified path.
Apart from that, I can't find more information on what dataset means. Is it just referring to the general concept or is there a specific meaning within the context of AWS? What exactly is dataset and when should it be set to True?
The dataset=True option allows you to store the entire dataset, including all metadata, indexes, etc.
The dataset parameter documentation:
dataset (bool) – If True store as a dataset instead of ordinary file(s) If True, enable all follow arguments: partition_cols, mode, database, table, description, parameters, columns_comments, concurrent_partitioning, catalog_versioning, projection_enabled, projection_types, projection_ranges, projection_values, projection_intervals, projection_digits, catalog_id, schema_evolution.
Note all those extra things that get saved when you save a dataset. All that information, like columns_comments, concurrent_partitioning, projection_values, will be lost when you save to a plain CSV or Parquet file. But on the other hand, those values are probably only useful if you plan to do further manipulation of the data via awswrangler/pandas at some later date.
Also note that if you set dataset=True you have to give it a file name prefix instead of a single file name, because the output generated will be spread across multiple files.
If you want to use the data in any other tool besides Pandas, such as loading the CSV into Excel, then you most likely want to set dataset=False and output to a single file.

C++/parquet: How to write "Map" data (nested type)?

Based on the "StreamWriter" example (stream_reader_writer), I'm trying to add another column to the parquet schema which I want to write "Map-Data" (Map<Integer, String>) to.
According to this source, maps are supported by the C++ implementation. However, it's not clear to me how to ...
correctly add/define another column of type "Map" to the parquet schema. According to the previous link, Maps are "Logical Types", so my best try so far is
fields.push_back(parquet::schema::PrimitiveNode::Make("Map", parquet::Repetition::REQUIRED, parquet::LogicalType::Map(), parquet::Type::INT64));
This compiles without error but I'm uncertain about the last parameter "primitive_type" (parquet::Type::INT64).
how to define and write actual Map values. Again, the second links provides a general description of a parquet Map:
The github repo is missing a C++ example (or test case) on how to use a Map/Logical Types. I'm looking for advice/an C++ example, how to define and use a Map.
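Not an official example, but for what it's worth, here is a sketch of how the nested group structure for a MAP is typically declared with the lower-level schema API (the column name "my_map" and the value type are assumptions on my part; as far as I can tell, StreamWriter only handles flat primitive columns, so writing the values would have to go through a ColumnWriter with definition/repetition levels, or through the Arrow layer with an arrow::MapArray and parquet::arrow::WriteTable):
// Sketch only. Per the Parquet spec, a MAP is a three-level structure:
//   required group my_map (MAP) {
//     repeated group key_value {
//       required int64 key;
//       optional binary value (STRING);
//     }
//   }
parquet::schema::NodeVector kv_fields;
kv_fields.push_back(parquet::schema::PrimitiveNode::Make(
    "key", parquet::Repetition::REQUIRED, parquet::Type::INT64));
kv_fields.push_back(parquet::schema::PrimitiveNode::Make(
    "value", parquet::Repetition::OPTIONAL,
    parquet::LogicalType::String(), parquet::Type::BYTE_ARRAY));
auto key_value = parquet::schema::GroupNode::Make(
    "key_value", parquet::Repetition::REPEATED, kv_fields);
// The Map logical type is attached to the outer group, not to a PrimitiveNode.
fields.push_back(parquet::schema::GroupNode::Make(
    "my_map", parquet::Repetition::REQUIRED, {key_value},
    parquet::LogicalType::Map()));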

Is there a way to pass a data source connection string as a parameter to Power BI Embedded?

I have a pbix file that takes an Azure Storage account as a parameter and reads data from there accordingly. The next step is to be able to embed this Power BI dashboard in a webpage and let the end user specify the storage account. I see a lot of questions and answers about passing in filter query parameters, but this is different: we're trying to read from a completely different data source, not filtering a static data source.
Another way to ask this question: is there a way to embed Power BI template files? If not, is there a feature request somewhere we can upvote?
The short answer is no.
There is a reason to use filters in this case instead of parameters. Parameters are part of the report itself. Each user that looks at your report gets the same parameter values as the others, and if one of them changes a parameter, this affects all other users. Filters, on the other hand, are local to your session. You can filter the report the way you like, and this will not affect other users' experience in any way.
You can't embed templates, because a template is simply a saved state of a report on disk. When you open it, it's not a template anymore, but becomes a report.
You can either combine the data from all of your data sources in a single report, adding one more column to indicate where the data comes from, and then filter on this new column; or create/modify an ETL process (for example, dataflows can be used for this) to combine these data sources into a single one.

How do I create a CAST in Informix to cast an LVARCHAR to TEXT?

What built-in routine can I make use of to cast data of type LVARCHAR to data of type TEXT?
The larger context: I have a table with a column that has been defined as LVARCHAR(4096). Now a developer wishes to change the data type of this column to TEXT. Ideally this would be done with:
ALTER TABLE foo MODIFY bar TEXT;
...but in such a case the following error is puked to the screen:
ALTER TABLE can not modify column (bar) type. Need a cast from the current type to the new type.
I have read up on the CREATE CAST construction, but I cannot begin to think what on earth the proper conversion function would look like. Without a function, Informix will not allow the CREATE CAST to work. That is, if I do, simply:
CREATE CAST (LVARCHAR AS TEXT)
...Informix tells me that a cast function is required (which makes sense).
Beware, Informix developers: if you inadvertently run into a problem like this, there is no way to get out of it using SQL or DDL alone. Let me repeat that.
If you have a VARCHAR or an LVARCHAR column that you need to migrate to be a TEXT column, and if you cannot afford to lose data in that column, there is no way to do this in SQL or DDL.
Instead, you must write a program that does the conversion for you through the database driver, in memory. In my case, I used JDBC updatable result sets and copied the column to a new column, letting the JDBC driver perform the conversion, then dropped the old column and renamed the new column back to the old name. This general pattern is the only way to migrate existing character data into a TEXT column.
@Storm: Which version of IDS/ODBC are you using? AFAIK, IDS 9 or 10 can't do that without specific embedded C in the server (see the Boulder site); there is no way to do it directly through SQL, apart from blob-related functions and the like.
Another way is by using UNLOAD / LOAD.
In my scenario, we have lots of problems: we have no admin rights to the enterprise server, and as we are service providers, we can only use the database but cannot modify its structure. We cannot modify TEXT fields just by running queries.

What is a good design pattern to implement a dynamic data importer tool?

We are planning to build a dynamic data import tool. Basically, it takes information on one end in a specified format (Access, Excel, CSV) and uploads it into a web service.
The situation is that we do not know the export field names, so the application will need to be able to read the WSDL definition and map to the valid entries on the other end.
In the import section we can define most of the fields, but usually there are a few that are custom, which I see no problem with.
I just wonder if there is a design pattern that will fit this type of application or help with the development of it.
I am not sure where the complexity is in your application, so I will just give an example of how I have used patterns for importing data in different formats. I created a factory which takes the file format as an argument and returns a parser for that particular format. Then I use the Builder pattern: the parser is provided with a builder, which the parser calls as it parses the file to construct the desired data objects in the application.
// In this example file format describes a house (complex data object)
AbstractReader reader = factory.createReader("name of file format");
AbstractBuilder builder = new HouseBuilder(list_of_houses);
reader.import(text_stream, builder);
// now the list_of_houses should contain an extra house
// as defined in the text_stream
I would say the Adapter pattern, as you are "adapting" the data from a file to an object, like the SqlDataAdapter does from a SQL table to a DataTable.
Have a different adapter for each file type/format: for example, SqlDataAdapter and MySqlDataAdapter handle the same commands but different data sources, to achieve the same output DataTable.
Adapter pattern
HTH
Bones
Probably Bridge could fit, since you have to deal with different file formats.
And Façade to simplify the usage. Handle my reply with care; I'm just learning design patterns :)
You will probably also need the Abstract Factory and Command patterns.
If the data doesn't match the input format, you will probably need to transform it somehow.
That's where the Command pattern comes in. Because the formats are dynamic, you will need to base the commands you generate on the input. That's where Abstract Factory is useful.
Our situation is that we need to import parametric shapes from competitors' files. The layout of their screens and data fields is similar, but different enough that there is a conversion process. In addition, we have over a half dozen competitors, and maintenance would be a nightmare if done through code only. Since most of them use tables to store the parameters for their shapes, we wrote a general-purpose collection of objects to convert X into Y.
In my CAD/CAM application the file import is a Command. However, the conversion magic is done by a RuleSet via the following steps.
Import the data into a table. The field names are pulled in as well, depending on the format.
We pass the table to a RuleSet. I will explain the structure of the RuleSet in a minute.
The RuleSet transforms the data into a new set of objects (or tables), which we retrieve.
We pass the result to the rest of the software.
A RuleSet is comprised of a set of Rules. A Rule can contain another Rule. A Rule has a CONDITION that it tests, and a MAP TABLE.
The MAP TABLE maps an incoming field to a field (or property) in the result. There can be one mapping or many. The mapping doesn't have to involve just poking the input value into an output field; we have a syntax for calculation and string concatenation as well.
This syntax is also used in the CONDITION and can incorporate multiple fields, like ([INFIELD1] & "-" & [INFIELD2])="A-B" or [DIM1] + [DIM2] > 10. Anything between the brackets is substituted with an incoming field.
Rules can contain other Rules. The way this works is that for a sub-rule's mapping to apply, both its condition and those of its parent (or parents) have to be true. If a sub-rule has a mapping that conflicts with a parent's mapping, then the sub-rule's mapping applies.
If two Rules on the same level have conditions that are true and have conflicting mappings, then the rule with the higher index (or lower on the list if you are looking at a tree view) will have its mapping applied.
Nested Rules are equivalent to ANDs, while Rules on the same level are equivalent to ORs.
The result is a mapping table that is applied to the incoming data to transform it into the needed output.
It is amenable to being displayed in a UI: namely, a tree view showing the rule hierarchy and a side panel showing the mapping table and conditions of the selected rule. Just as importantly, you can create wizards that automate common rule structures.
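This isn't the author's actual code, but a minimal C++ sketch of the rule structure described above (the Row and MapTable types, the field names and the evaluation helpers are all illustrative assumptions):
#include <functional>
#include <map>
#include <string>
#include <vector>

// Sketch only. A Row is one incoming record, keyed by field name.
using Row = std::map<std::string, std::string>;
// A map table: output field -> expression over the input fields
// (the expression syntax is reduced here to a plain function of the row).
using MapTable = std::map<std::string, std::function<std::string(const Row&)>>;

struct Rule {
    std::function<bool(const Row&)> condition;  // the CONDITION the rule tests
    MapTable mappings;                          // the MAP TABLE
    std::vector<Rule> subRules;                 // nested rules (ANDed with the parent)
};

// Collect the effective mapping for one row: a rule contributes its mappings
// only if its condition holds (parents are checked before recursing), and
// sub-rules / later rules on the same level overwrite earlier mappings,
// matching the conflict resolution described above.
void Apply(const std::vector<Rule>& rules, const Row& row, MapTable& result) {
    for (const Rule& rule : rules) {
        if (!rule.condition(row)) continue;
        for (const auto& m : rule.mappings) result[m.first] = m.second;
        Apply(rule.subRules, row, result);  // nesting is equivalent to AND
    }
}

Row Transform(const std::vector<Rule>& ruleSet, const Row& row) {
    MapTable effective;
    Apply(ruleSet, row, effective);
    Row out;
    for (const auto& m : effective) out[m.first] = m.second(row);
    return out;
}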