I am trying to bulk-insert values into a table from an Excel/CSV file.
I have created a File Browse item on the page; now, in the page process, I need to write the insert code that loads the spreadsheet values into the table.
This is the table I have created: NON_DYNAMIC_USER_GROUPS
Columns: ID, NAME, GROUP, GROUP_TYPE.
I need to create the insert process code for this.
I prefer the Excel2Collection plugin for converting any form of Excel document into rows in an Oracle table.
http://www.apex-plugin.com/oracle-apex-plugins/process-type-plugin/excel2collections_271.html
The PL/SQL is already written and packaged as an APEX plugin, so it is easy to use.
It is also possible to unpack the code and adapt it to insert into your own table instead of APEX_COLLECTIONS, which is limited to 50 columns/fields.
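Once the plugin has run, the uploaded rows sit in an APEX collection (columns C001, C002, ...), so the insert process itself can be a plain INSERT ... SELECT. A minimal sketch, assuming the plugin was configured to load into a collection named EXCEL_DATA and that the spreadsheet columns arrive in the order ID, NAME, GROUP, GROUP_TYPE (both the collection name and the column order are assumptions to adjust to your setup):

-- EXCEL_DATA is the (assumed) collection name configured in the plugin settings.
-- GROUP is a reserved word in Oracle, so that column has to be a quoted identifier.
-- If the first spreadsheet row contains headings, skip it via the plugin option
-- or add "and seq_id > 1" to the where clause.
insert into non_dynamic_user_groups (id, name, "GROUP", group_type)
select c001, c002, c003, c004
  from apex_collections
 where collection_name = 'EXCEL_DATA';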
I have a Power BI report where I load data from a CSV into a table (Original Table). I want to change the source of the table to a folder that contains multiple CSVs. The issue I'm having is that when I go to change the data source settings of my Original Table, I cannot change the source type to Folder. Original Table has many measures and I would like to avoid rewriting them all. Any ideas?
Create a new query that reads multiple csv from a folder and combines them.
Make sure the column names are the same as in your original table.
Open both queries in the Advanced Editor and copy the whole code from the new query into the old query.
Delete the new query.
Thanks for any help! I'll try to be concise.
I am new to SAS and have inherited a SAS run that outputs a table with 15 columns, which I export as an Excel file and deliver.
The recipient would now like it as a .txt file with fixed-width columns, and has provided the character limit/width for each column.
Is this something that can be done in Query Builder, so I can just query the existing table rather than modifying the code in the run? I tried filling in the 'Length' field when adding a column in Query Builder, but it did not have the desired result.
Don't be confused. I am not asking how to drop a column in Django. My question is what Django actually does to drop a column from an SQLite database.
Recently I came across an article that says you can't drop columns in an SQLite database, so you have to drop the table and recreate it. It's quite strange: if SQLite doesn't support that, how is Django doing it?
Is it doing this?
To drop a column in an SQLite database Django follows the procedure described here: https://www.sqlite.org/lang_altertable.html#caution
In simple words: create a new table, copy the data from the old table, delete the old table, and then rename the new table.
From the source code [GitHub] we can see that the schema editor for SQLite calls self._remake_table(model, delete_field=field) in the remove_field method, which is what is used to drop a column. The _remake_table method has the following docstring in its code, which describes exactly how the process is performed:
Shortcut to transform a model from old_model into new_model. This follows the correct procedure to perform non-rename or column addition operations based on SQLite's documentation (https://www.sqlite.org/lang_altertable.html#caution). The essential steps are:
Create a table with the updated definition called "new__app_model"
Copy the data from the existing "app_model" table to the new table
Drop the "app_model" table
Rename the "new__app_model" table to "app_model"
Restore any index of the previous "app_model" table.
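In SQL terms, the statements the schema editor effectively issues look roughly like the following sketch, for a hypothetical app_model table with an id and a name column from which old_col is being dropped (the real statements are generated from the model definition, and indexes are recreated afterwards):

-- old_col is simply omitted from the new definition and from the copy
create table "new__app_model" ("id" integer not null primary key autoincrement, "name" varchar(100) not null);
insert into "new__app_model" ("id", "name") select "id", "name" from "app_model";
drop table "app_model";
alter table "new__app_model" rename to "app_model";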
I have a SQL query to get the data into Power BI. For example:
select a,b,c,d from table1
where a in ('1111','2222','3333' etc.)
However, the list of values ('1111','2222','3333' etc.) will change every day, so I would like the SQL statement to be updated before refreshing the data. Is this possible?
Ideally, I would like to keep a spreadsheet with a list of a values (in this example) so before refresh, it will feed those parameters into this script.
Another problem is that the list will have a different number of parameters each time, so the last value must not be followed by a comma.
Another option I was considering is to run the script without the where a in ('1111','2222','3333' etc.) clause, then load the spreadsheet with the list of a values and filter the report down based on that list; however, that would mean importing a lot of data into Power BI.
It's my first post ever, although I have been getting help from Stack Overflow for years, so hopefully it's all clear.
I would create a new Query to read the "a values" from your spreadsheet. I would set the Load To / Import Data option to Only Create Connection (to avoid duplicating the data).
Then in your SQL query I would remove the where clause. With that gone you actually don't need to write custom SQL at all - just select the table/view from the Navigation UI.
Then from the "table1" query I would add a Merge Queries step, connecting to the "a values" Query on the "a" column, using the Join Type: Inner. The resulting rows will be only those with a matching "a" column value (similar to your current SQL where clause).
Power Query won't be able to send this to your SQL Server as a single query, so it will first select all the rows from table1. But it is still fairly quick and efficient.
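Logically, that Merge step gives the same result as an inner join in SQL, along these lines (illustrative only, assuming the spreadsheet query is called a_values; Power Query builds the join itself rather than sending this statement to the server):

-- conceptual equivalent of the Merge Queries (Inner) step
select t.a, t.b, t.c, t.d
  from table1 t
  join a_values v on v.a = t.a;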
I have exported a MySQL table to a Parquet file (Avro based). Now I want to read particular columns from that file. How can I read particular columns completely? I am looking for Java code examples.
Is there an API where I can pass the columns I need and get back a 2D array of the table?
If you can use Hive, creating a Hive table and issuing a simple select query would be by far the easiest option.
create external table tbl1(<columns>) stored as parquet location '<file_path>';
select col1, col2 from tbl1;
-- this works in Hive 0.14
You can use the Hive JDBC driver to do that from a Java program as well.
Otherwise, if you want to stay completely in Java, you need to modify the Avro schema by excluding all the fields except the ones you want to fetch. Then, when you read the file, supply the modified schema as the reader schema and it will only read the included columns. But you will get your original Avro record back with the excluded fields nullified, not a 2D array.
To modify the schema, look at org.apache.avro.Schema and org.apache.avro.SchemaBuilder. Make sure that the modified schema is compatible with the original schema.
Options:
Use a Hive table: create the table with all the columns, stored as Parquet, and read the required columns by specifying the column names in the query.
Create a Thrift definition for the table and use the Thrift fields to read the data from code (Java or Scala).
You can also use Apache Drill, which natively parses Parquet files.
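With Drill you can point a SQL query straight at the file, for example (the path and column names here are placeholders):

-- Drill reads the Parquet metadata and fetches only the requested columns
select col1, col2 from dfs.`/path/to/exported_table.parquet`;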