Don't include chunk name in table name in R Markdown

I have a problem with R Markdown when writing tables to HTML. I'm creating tables with kableExtra, and the table caption comes out like this in the HTML output: (#tab:chunk name)table name. I only want the caption I give to kable via caption =, without the (#tab:chunk name) prefix.
I've already tried the suggestion from
"How to suppress automatic table name and number in an .Rmd file using xtable or knitr::kable?":
header-includes:
- \usepackage[labelformat=empty]{caption}
But it doesn't seem to work for html_document2, only for html_document. I'm using bookdown::html_document2 as the output format because it is needed for some other things in the document, so I can't switch to html_document.
Any suggestions on how to fix this?
Thanks in advance!

Related

Upload and parse file in Oracle APEX

I'm trying to find the best way to upload, parse and work with a text file in Oracle APEX (currently version 20.1). Business case: I have to upload a text file whose first line should be saved to table A.
The remaining lines contain records (columns are pipe-delimited) that should be validated. Valid records should then be saved to table B; records with errors should be saved to table C (an error log).
I tried the Data Loading wizard, but it doesn't fit my requirements.
Right now I have added a "File Browse..." item to the page, and after the page is submitted I can find the file in APEX_APPLICATION_TEMP_FILES in the blob_content column.
Is there any option other than working directly with blob_content from APEX_APPLICATION_TEMP_FILES? I find it difficult to work with that type of data.
The text file looks something like this:
2020-06-05 info: header line
2020-06-05|columnAValue|columnBValue|
2020-06-05|columnAValue||columnCValue
2020-06-05|columnAValue|columnBValue|columnCValue
Have a look at the APEX_DATA_PARSER.PARSE table function. It parses the CSV file and returns the values as rows and columns. It's described in more detail in this blog post:
https://blogs.oracle.com/apex/super-easy-csv-xlsx-json-or-xml-parsing-about-the-apex_data_parser-package
Simply pass "file.csv" (literally) as the p_file_name argument; APEX_DATA_PARSER does not care about the "real" file name. The function only uses the file extension to differentiate between delimited, XLSX, XML and JSON files, so a static file name like "file.csv" is enough.
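For illustration, a minimal sketch of how the parser could be queried against the uploaded blob. The page item name (:P1_FILE) is a placeholder, and the explicit p_csv_col_delimiter parameter is used on the assumption that it is available in your APEX version; neither comes from the original answer:
select p.line_number, p.col001, p.col002, p.col003
  from apex_application_temp_files f,
       table( apex_data_parser.parse(
                p_content           => f.blob_content,
                p_file_name         => 'file.csv',      -- static name, only the extension matters
                p_csv_col_delimiter => '|' ) ) p        -- the sample data is pipe-delimited
 where f.name = :P1_FILE                                -- hypothetical page item holding the upload
 order by p.line_number;
The row with line_number = 1 would be the header line destined for table A; the remaining rows can be validated in PL/SQL or SQL and routed to table B or table C from there.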

How do I prevent kable from leaving raw LaTeX in the final document if I include a caption in a table?

I'm writing my thesis in R Markdown (specifically bookdown) and using knitr to compile it into a PDF.
When I knit it, everything works perfectly other than the tables and figures.
The tables (produced with kable) look almost perfect, but are wrapped as follows (where [table] is the table rendered correctly):
\begin{table}
\caption{(#tab:rchunk_label) table_caption}
[table]
\end{table}
Accordingly, the caption does not appear on the table. Furthermore, this causes the text alignment to change for the rest of the document. The issue goes away if I do not include a caption, but I believe captions are supported for LaTeX output.
The figures render correctly, other than the caption including the R chunk label in parentheses before the actual caption.
If it is relevant, the "lot" (list of tables) function does not identify any tables in the document, whereas the "lof" (list of figures) function does.
So far, I've tried setting results to "asis", copying code into another document, and examining the raw latex output. The raw latex seems correct (no duplication of \begin{table} or anything).
This problem is resolved by setting format to "pandoc", e.g.:
someData %>%
  kable(caption = "a caption",
        format = "pandoc")
I'm not sure why this is, since more recent versions of kable are supposed to automatically select the format, but it appears to solve the problem.
(This is the answer that solved the issue for me, taken from the comments by Jackson Luckey.)
Once I removed the underscores from my R chunk labels, my tables got numbered. Weird, but true.

Find the default table name after rename

I have a Power BI file that contains several tables, all linked to SQL Server. My problem is this:
To make things easier for users I renamed the tables to friendlier names, but now I need to know the actual name of a table in the database, and I cannot find this information.
Where can I find this?
Go to the Query Editor and open the Advanced Editor for a query that links to the server.
In the first few lines of the M code, you should be able to see exactly how it's connecting and what the SQL table name is. It might look something like this:
let
Source = Sql.Databases("servername"),
DatabaseName = Source{[Name="DatabaseName"]}[Data],
TableName = DatabaseName{[Schema="dbo",Item="TableName"]}[Data],
<...>

Exporting from pgadmin reads line breaks in field cells and creates unreadable Excel

I'm new to this, so I'm sure it's a silly question, but I have read through every related question on the site and can't find anything!
I am exporting from pgAdmin. A few of the columns have line breaks within the cells, so the exported data is very choppy. Does anyone know how to fix this? Is there a way to make it so the line breaks within cells are not read?
I know I'm using the right export settings, but basically what happens is that the header names are there, along with one row of content for each column, and then column A has 20 more rows beneath it because of line breaks in the first cell of column E.
Any help would be much appreciated!
I assume that you're referring to the Query --> Execute to file command in the Query window. I don't think it's a bug that pgAdmin doesn't escape line breaks within strings in its CSV output; Excel can read it correctly anyway.
In the export options, make sure that you use commas as column separators and double quotes as quote characters.
Additionally, when you load your CSV into Excel, don't use Data -> From Text; it doesn't parse CSV with line breaks correctly. Just open the file directly in Excel (via Open within Excel, or by right-clicking it in Windows Explorer and choosing Open With -> Microsoft Excel).

Redshift COPY command delimiter not found

I'm trying to load some text files into Redshift. They are tab-delimited, except after the final row value, and that's causing a "Delimiter not found" error. I only see a way to set the field delimiter in the COPY statement, not a way to set a row delimiter. Any ideas that don't involve processing all my files to add a tab to the end of each row?
Thanks
I don't think the problem is a missing <tab> at the end of lines. Are you sure that ALL lines have the correct number of fields?
Run the query:
select le.starttime, d.query, d.line_number, d.colname, d.value,
le.raw_line, le.err_reason
from stl_loaderror_detail d, stl_load_errors le
where d.query = le.query
order by le.starttime desc
limit 100
to get the full error report. It will show the file name with errors, the offending line number, and the error details.
This will help you find where the problem lies.
You can get the "Delimiter not found" error if a row has fewer columns than expected. Some CSV generators may just output a single quote at the end if the last columns are null.
To solve this you can use the FILLRECORD option of the Redshift COPY command, as sketched below.
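For illustration, a minimal sketch of a COPY with FILLRECORD, assuming a tab-delimited file; the schema, table, bucket and IAM role names are placeholders, not from the original answer:
COPY my_schema.my_table
FROM 's3://my_bucket/my/folder/'
IAM_ROLE 'arn:aws:iam::my_role:role/my_redshift_role'
DELIMITER '\t'
FILLRECORD;   -- missing trailing columns are loaded as NULL (zero-length strings for VARCHAR columns)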
From my understanding, the "Delimiter not found" message may also be caused by not specifying the COPY command correctly, in particular by not specifying the data format parameters: https://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html
In my case I was trying to load Parquet data with this expression:
COPY my_schema.my_table
FROM 's3://my_bucket/my/folder/'
IAM_ROLE 'arn:aws:iam::my_role:role/my_redshift_role'
REGION 'my-region-1';
and I received the "Delimiter not found" error message when looking into the system table stl_load_errors. But specifying that I'm dealing with Parquet data, like this:
COPY my_schema.my_table
FROM 's3://my_bucket/my/folder/'
IAM_ROLE 'arn:aws:iam::my_role:role/my_redshift_role'
FORMAT AS PARQUET;
solved my problem and I was able to load the data correctly.
I know this was answered, but I just dealt with the same error and had a simple solution, so I'll share it.
This error can also be solved by listing the specific columns of the table that are copied from the S3 files (if you know which columns are in the data on S3).
In my case the data had fewer columns than the number of columns in the table.
Madahava's answer with the FILLRECORD option DID solve the issue for me, but then I noticed that a column that was supposed to be filled with default values remained null.
COPY <table> (col1, col2, col3) from 's3://somebucket/file' ...
This may not be directly related to the OP's question, but I received the same "Delimiter not found" error, caused by newline characters within one of the fields.
For any field that you think may have newline characters you can remove them with:
replace(my_field, chr(10), '')
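For illustration, a hedged sketch of where such a replace could be applied, for example in the query that produces the extract before the file is written out and handed to COPY; the table and column names are placeholders:
select id,
       replace(replace(my_field, chr(13), ''), chr(10), '') as my_field   -- strip carriage returns and newlines
  from my_source_table;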
When you send fewer fields than the destination table expects, it will also throw this error.
I'm sure there are multiple scenarios that would return this error. I just came across one that I don't see mentioned in the other answers, while debugging someone else's code: the COPY had the EXPLICIT_IDS option listed, the table it was importing into had a column with an IDENTITY(1,1) data type, but the file it was trying to import into Redshift did not have an ID field. It made sense for me to add the identity field to the file, but I imagine removing the EXPLICIT_IDS option would also have fixed the issue.
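For reference, a hedged sketch of the shape of such a COPY: EXPLICIT_IDS tells Redshift to load values for the IDENTITY column from the file rather than generating them, so the file must actually contain that column. The names and the delimiter are placeholders, not from the original answer:
COPY my_schema.my_table
FROM 's3://my_bucket/my/folder/'
IAM_ROLE 'arn:aws:iam::my_role:role/my_redshift_role'
EXPLICIT_IDS
DELIMITER '\t';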
I recently came across this "Delimiter not found" error in Redshift while loading data with the COPY command. In my case, the problem was the number of columns.
I had created a table with 20 columns but was loading a file with 21 columns.
I corrected it by making the table 21 columns, then re-loaded the data and boom, it worked.
Hope this will be helpful to those who are facing the same kind of problem.
Ta-da
Sometimes this pops up when you don't specify the file type, for example CSV:
Ref: https://docs.aws.amazon.com/redshift/latest/dg/tutorial-loading-run-copy.html
copy "dev"."my"."table" from 's3://bucket/myfile_upload.csv' credentials 'aws_iam_role=arn:aws:iam::2112277888:role/RedshiftAccessRole' IGNOREHEADER 1 csv;