R Markdown addin for creating new post not working

Today I wanted to create a new post for my website (built using blogdown), but the New Post addin doesn't seem to work.
When I select "New Post" or run
blogdown:::new_post_addin()
I get an error:
Error in FUN(X[[i]], ...) : subscript out of bounds
In addition: Warning messages:
1: In value[[3L]](cond) :
Cannot parse the YAML metadata in 'content/photo.md': Scanner error: while scanning an alias at line 3, column 1 did not find expected alphabetic or numeric character at line 3, column 2
2: In value[[3L]](cond) :
Cannot parse the YAML metadata in 'content/research.md': Scanner error: while scanning an alias at line 4, column 1 did not find expected alphabetic or numeric character at line 4, column 2
I am not sure what the additional warnings are about, but I want to focus on the main error. Here are details returned by traceback():
> traceback()
10: lapply(meta, `[[`, i)
9: unlist(lapply(meta, `[[`, i))
8: blogdown:::collect_yaml()
7: eval(exprs[i], envir)
6: eval(exprs[i], envir)
5: sys.source(pkg_file("scripts", file), envir = new.env(parent = globalenv()),
keep.source = FALSE)
4: xfun::in_dir(site_root(), expr)
3: in_root(sys.source(pkg_file("scripts", file), envir = new.env(parent = globalenv()),
keep.source = FALSE))
2: source_addin("new_post.R")
1: blogdown:::new_post_addin()
Interestingly, when I run this command:
blogdown::new_post(title, ext = '.md')
it works fine and I can create a new post. I updated both blogdown and hugo but to no avail. Could someone help me understand what this error is about? Other addins (such as Insert Image) work fine.
As requested, the github repo is https://github.com/msmielak/msmielak.github.io and the dput() output is below:
> dput(blogdown:::scan_yaml())
list(`content/about.md` = "<img align=\"right\" src=\"/./about_files/rsz_screenshot_2020-12-28_une_home.png\" alt=\"\" width=\"100px\"/>\n\n**2014-**\nPhD candidate at the School of Environmental and Rural Sciences University of New England in Armidale, Australia.",
`content/code.md` = NULL, `content/contact.md` = NULL, `content/photo.md` = NULL,
`content/post/2021-03-29-extracting-date-and-time-from-photo-using-ocr-engine-tesseract/index.md` = list(
title = "Extracting date and time from camera trap photos using R and tesseract",
author = "", date = "2021-03-29", slug = list(), categories = c("code",
"R"), tags = c("R", "code", "camera trap", "OCR"), description = "",
featured = "", featuredalt = "", featuredpath = "", linktitle = ""),
`content/research.md` = NULL, `content/technology.md` = NULL)
Warning messages:
1: In value[[3L]](cond) :
Cannot parse the YAML metadata in 'content/code.md': Parser error: did not find expected <document start> at line 3, column 67
2: In value[[3L]](cond) :
Cannot parse the YAML metadata in 'content/photo.md': Scanner error: while scanning an alias at line 3, column 1 did not find expected alphabetic or numeric character at line 3, column 2
3: In value[[3L]](cond) :
Cannot parse the YAML metadata in 'content/research.md': Scanner error: while scanning an alias at line 4, column 1 did not find expected alphabetic or numeric character at line 4, column 2

The YAML metadata of the file content/about.md seems to be invalid. Normally YAML metadata should be of the form:
---
tag1: value1
tag2: value2
---
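For instance, content/about.md could gain a minimal front-matter block ahead of the body captured in the dput() output above (the title value here is just a placeholder):
---
title: About
---

<img align="right" src="/./about_files/rsz_screenshot_2020-12-28_une_home.png" alt="" width="100px"/>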
Update: with the dev version of blogdown (>= v1.2.4), the error no longer occurs. What's more, blogdown::check_site() can detect this problem and suggest that users fix the problematic YAML metadata. You can install the dev version with:
remotes::install_github('rstudio/blogdown')
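If you then run the check from the site's root directory, it should point out the files whose YAML metadata needs fixing:
blogdown::check_site()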

Related

Terraform google_logging_project_sink 'Exclusions' unknown block type

I'm running the latest google provider and trying to use the example code from the Terraform registry to create a log sink. However, the exclusions block is unrecognized.
I keep getting 'An argument named "exclusions" is not expected here'
Any ideas on where I am going wrong?
resource "google_logging_project_sink" "log-bucket" {
name = "my-logging-sink"
destination = "logging.googleapis.com/projects/my-project/locations/global/buckets/_Default"
exclusions {
name = "nsexcllusion1"
description = "Exclude logs from namespace-1 in k8s"
filter = "resource.type = k8s_container resource.labels.namespace_name=\"namespace-1\" "
}
exclusions {
name = "nsexcllusion2"
description = "Exclude logs from namespace-2 in k8s"
filter = "resource.type = k8s_container resource.labels.namespace_name=\"namespace-2\" "
}
unique_writer_identity = true
This output shows that the Google provider is already at the version stated in the comment below:
$ terraform version
Terraform v0.12.29
+ provider.datadog v2.21.0
+ provider.google v3.44.0
+ provider.google-beta v3.57.0
Update: I have also tried Terraform 0.14 and it makes no difference.
Error: Unsupported block type
on ..\..\..\..\modules\krtyen\datadog\main.tf line 75, in module "export_logs_to_datadog_log_sink":
75: exclusions {
Blocks of type "exclusions" are not expected here.
Releasing state lock. This may take a few moments...
[terragrunt] 2021/02/22 11:11:20 Hit multiple errors:
exit status 1
You have to upgrade your google provider. The exclusions block was added in version v3.44.0:
logging: Added support for exclusions options for google_logging_project_sink
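If your configuration (or a module it uses) pins an older provider, a version constraint along these lines should pull in a new enough provider (a sketch using the Terraform 0.13+ required_providers syntax; adjust to your setup):
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = ">= 3.44.0"
    }
  }
}
Run terraform init -upgrade afterwards so the newer provider version is actually installed.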

YAML exception: Invalid Yaml

I am trying to implement encryption on a Tomcat server on AWS Elastic Beanstalk.
I have just followed this, and created a .ebextensions/https-instance.config file.
But when I deploy to the server, I get:
The configuration file .ebextensions/https-instance.config in
application version thewhozoo-1.0.0.25 contains invalid YAML or JSON.
YAML exception: Invalid Yaml: while scanning a simple key in
"", line 4, column 1: -----BEGIN CERTIFICATE----- ^ could not
found expected ':' in "", line 5, column 1:
MIIDnDCCAoACCQCzIxYAYJicIjANBgkq ... ^ , JSON exception: Invalid JSON:
Unexpected character (f) at position 0.. Update the configuration
file.
What am I doing incorrectly?
UPDATE
I changed the file, but now I get the following:
The configuration file .ebextensions/https-instance.config in
application version thewhozoo-1.0.0.31 contains invalid YAML or JSON.
YAML exception: Invalid Yaml: while parsing a block mapping in
"", line 7, column 5: mode: "000400" ^ expected ,
but found Scalar in "", line 32, column 6: -----END
CERTIFICATE----- ^ , JSON exception: Invalid JSON: Unexpected
character (p) at position 0.. Update the configuration file.
You'll have to indent your certificate data more than the column where the content: key starts:
files:
  /etc/pki/tls/certs/server.crt:
    content: |
      -----BEGIN CERTIFICATE-----
      MI.......
      Wk.......
That is the way a literal scalar in block style works. Since such a literal scalar can contain empty lines, as well as (further) indented lines, the parser needs the indentation to know whether your scalar has ended (that is, to know not to treat /etc/pki/tls/certs/server.key: as part of the literal scalar).
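To illustrate (a sketch; the server.key path mirrors the standard Elastic Beanstalk https instance config), a following key that is indented less than the scalar's content lines ends the scalar:
files:
  /etc/pki/tls/certs/server.crt:
    content: |
      -----BEGIN CERTIFICATE-----
      MI.......
      -----END CERTIFICATE-----
  /etc/pki/tls/certs/server.key:
    content: |
      -----BEGIN RSA PRIVATE KEY-----
      ...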

match logstring with regex

Hopefully I can find help here, because I really don't have a clue about regex. I'm trying to create a logfile viewer with the Monaco editor. I started from this sample, but my log strings can be multiline and I would like to use a different date format. So assuming I have a log string like this:
[2017-02-03 22:07:56] [info] [Memory] After GC, total memory:737mb, used: 268mb, reclaimed: 293
[2017-02-03 22:10:15] [info] [Memory] After GC, total memory:705mb, used: 247mb, reclaimed: 141
[2017-02-03 22:10:25] [info] [Memory] After GC, total memory:705mb, used: 258mb, reclaimed: 21
[2017-02-03 22:14:34] [warn] [Evaluator] org.mozilla.javascript.EcmaError: Cannot convert null to an object.
Caused by error in Business Rule: 'GlobalHideGlobalUsersFromNonAdmins' at line 5
2:
3: var encodedQueryString = 'sys_domain!=global';
4:
==> 5: var imp = gs.getImpersonatingUserName().toString();
6: if(imp.length > 0) {
7: encodedQueryString = encodedQueryString + '^ORuser_name=' + imp;
8: }
[2017-02-03 22:14:34] [warn] [Evaluator] org.mozilla.javascript.EcmaError: Cannot convert null to an object.
Caused by error in Business Rule: 'GlobalHideGlobalUsersFromNonAdmins' at line 1
==> 1: (function executeRule(current, previous /*null when async*/) {
2:
3: var encodedQueryString = 'sys_domain!=global';
4:
This currently doesn't match my date format, and it only matches the first line of a log message: if there are carriage returns, it doesn't match through to the next log message. Any regex guru here who could help me out? :)
monaco.languages.setMonarchTokensProvider('log', {
  tokenizer: {
    root: [
      [/\[error.*/, "custom-error"],
      [/\[warn.*/, "custom-warn"],
      [/\[info.*/, "custom-info"],
      [/\[debug.*/, "custom-debug"],
      [/\[[a-zA-Z 0-9:]+\]/, "custom-date"],
    ]
  }
});
UPDATE:
So here is the solution I've come up with. Apparently I'm still not able to match multiple lines between two [DATE] strings. So for now I will just match e.g. [error] as a workaround. Maybe somebody can push me in the right direction...
monaco.languages.setMonarchTokensProvider('log', {
  tokenizer: {
    root: [
      [/\[error\]/, "custom-error"],
      [/\[warn\]/, "custom-warn"],
      [/\[info\]/, "custom-info"],
      [/\[debug\]/, "custom-debug"],
      [/^\[\d{4}[./-]\d{2}[./-]\d{2} \d{2}[./:]\d{2}[./:]\d{2}]/, "custom-date"],
    ]
  }
});
I think you might be missing an escape at the end of the pattern in your update - it should be "\]" for the closing bracket.
Here's a tighter pattern, extracting on what the subgroups of digits all share:
\[(\d{2,4}[\:\-\s\]])+
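As a quick sanity check (a hypothetical JavaScript snippet), the tighter pattern does match the question's date stamps:
const dateRe = /\[(\d{2,4}[\:\-\s\]])+/;
console.log(dateRe.test("[2017-02-03 22:07:56] [info] [Memory] After GC")); // true
console.log(dateRe.test("Caused by error in Business Rule"));               // false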
Could you give an example of what you want to capture in cases of "multiple lines between two [DATE] strings"?
Hope this helps!

pig REPLACE gives error

Let's assume that my file is named 'data' and looks like this:
2343234 {23.8375,-2.339921102} {(343.34333,-2.0000022)} 5-23-2013-11-am
I need to convert the 2nd field to a pair of coordinate numbers. So I wrote the following code and called it basic.pig:
A = LOAD 'data' AS (f1:int, f2:chararray, f3:chararray. f4:chararray);
B = foreach A generate STRSPLIT(f2,',').$0 as f5, STRSPLIT(f2,',').$1 as f6;
C = foreach B generate REPLACE(f5,'{',' ') as f7, REPLACE(f6,'}',' ') as f8;
and then used (float) to convert the string to a float. But the REPLACE command fails to work and I get the following error:
-bash-3.2$ pig -x local basic.pig
2013-06-24 16:38:45,030 [main] INFO org.apache.pig.Main - Apache Pig version 0.11.1 (r1459641) compiled Mar 22 2013, 02:13:53
2013-06-24 16:38:45,031 [main] INFO org.apache.pig.Main - Logging error messages to: /home/--/p/--test/pig_1372117125028.log
2013-06-24 16:38:45,321 [main] INFO org.apache.pig.impl.util.Utils - Default bootup file /home/isl/pmahboubi/.pigbootup not found
2013-06-24 16:38:45,425 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: file:///
2013-06-24 16:38:46,069 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1000: Error during parsing. Lexical error at line 7, column 0. Encountered: <EOF> after : ""
Details at logfile: /home/--/p/--test/pig_1372117125028.log
And this is the details of the pig_137..log
Pig Stack Trace
---------------
ERROR 1000: Error during parsing. Lexical error at line 7, column 0. Encountered: <EOF> after : ""
org.apache.pig.tools.pigscript.parser.TokenMgrError: Lexical error at line 7, column 0. Encountered: <EOF> after : ""
at org.apache.pig.tools.pigscript.parser.PigScriptParserTokenManager.getNextToken(PigScriptParserTokenManager.java:3266)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.jj_ntk(PigScriptParser.java:1134)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:104)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:194)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:170)
at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
at org.apache.pig.Main.run(Main.java:604)
at org.apache.pig.Main.main(Main.java:157)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:197)
================================================================================
I've got data like this:
2724 1919 2012-11-18T23:57:56.000Z {(33.80981975),(-118.105289)}
2703 6401 2012-11-18T23:57:56.000Z {(55.83525609),(-4.07733138)}
1200 4015 2012-11-18T23:57:56.000Z {(41.49609152),(13.8411998)}
7104 9227 2012-11-18T23:57:56.000Z {(-24.95351118),(-53.46538723)}
and I can do this:
A = LOAD 'my_tsv_data' USING PigStorage('\t') AS (id1:int, id2:int, date:chararray, loc:chararray);
B = FOREACH A GENERATE REPLACE(loc,'\\{|\\}|\\(|\\)','');
C = LIMIT B 10;
DUMP C;
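Applying the same idea to the single-line format from the question, a sketch (the field names and tab separation are assumptions) that strips the braces, splits on the comma, and casts to float could look like:
A = LOAD 'data' USING PigStorage('\t') AS (f1:int, f2:chararray, f3:chararray, f4:chararray);
B = FOREACH A GENERATE FLATTEN(STRSPLIT(REPLACE(f2, '\\{|\\}', ''), ',')) AS (lat:chararray, lon:chararray);
C = FOREACH B GENERATE (float)lat AS lat, (float)lon AS lon;
DUMP C;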
This error
ERROR 1000: Error during parsing. Lexical error at line 7, column 0. Encountered: <EOF> after : ""
came to me because I had used different types of quotation marks. I started with ' and ended with ´ or `, and it took quite a while to find out what went wrong. So it had nothing to do with line 7 (my script was not that long, and I had shortened the data to four lines, which naturally did not help), nothing to do with column 0, nothing to do with the EOF of the data, and hardly anything to do with " marks, which I didn't use. Quite a misleading error message.
I found the cause by using grunt, the Pig command shell.

What's the correct regexp pattern to match a VMS filename?

The documentation at http://h71000.www7.hp.com/doc/731final/documentation/pdf/ovms_731_file_app.pdf (section 5-1) says the filename should look like this:
node::device:[root.][directory-name]filename.type;version
Most of the parts are optional (like node, device, and version); I'm not sure which ones, or how to correctly write this in a regexp (including the directory name):
DISK1:[MYROOT.][MYDIR]FILE.DAT
DISK1:[MYDIR]FILE.DAT
[MYDIR]FILE.DAT
FILE.DAT;10
NODE::DISK5:[REMOTE.ACCESS]FILE.DAT
See the documentation and source for the VMS::Filespec Perl module.
From wikipedia, the full form is actually a bit more than that:
NODE"accountname password"::device:[directory.subdirectory]filename.type;ver
This one took a while, but here is an expression that should accept all valid variations, and place the components into capture groups.
(?:(?:(?:([^\s:\[\]]+)(?:"([^\s"]+) ([^\s"]+)")?::)?([^\s:\[\]]+):)?\[([^\s:\[\]]+)\])?([^\s:\[\]\.]+)(\.[^\s:\[\];]+)?(;\d+)?
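As a quick check (a hypothetical Python snippet; the group layout follows the capture groups above):
import re

pattern = re.compile(
    r'(?:(?:(?:([^\s:\[\]]+)(?:"([^\s"]+) ([^\s"]+)")?::)?([^\s:\[\]]+):)?\[([^\s:\[\]]+)\])?'
    r'([^\s:\[\]\.]+)(\.[^\s:\[\];]+)?(;\d+)?'
)

m = pattern.fullmatch("NODE::DISK5:[REMOTE.ACCESS]FILE.DAT")
print(m.groups())
# ('NODE', None, None, 'DISK5', 'REMOTE.ACCESS', 'FILE', '.DAT', None)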
Also, from what I can tell, your example of
DISK1:[MYROOT.][MYDIR]FILE.DAT
is not a valid name. I believe only one pair of brackets is allowed. I hope this helps!
You could probably come up with a single complicated regex for this, but it will be much easier to read your code if you work your way from left to right stripping off each section if it is there. The following is some Python code that does just that:
lines = ["DISK1:[MYROOT.][MYDIR]FILE.DAT", "DISK1:[MYDIR]FILE.DAT", "[MYDIR]FILE.DAT", "FILE.DAT;10", "NODE::DISK5:[REMOTE.ACCESS]FILE.DAT"]
node_re = "(\w+)::"
device_re = "(\w+):"
root_re = "\[(\w+)\.]"
dir_re = "\[(\w+)]"
file_re = "(\w+)\."
type_re = "(\w+)"
version_re = ";(.*)"
re_dict = {"node": node_re, "device": device_re, "root": root_re, "directory": dir_re, "file": file_re, "type": type_re, "version": version_re}
order = ["node", "device", "root", "directory", "file", "type", "version"]
for line in lines:
i = 0
print line
for item in order:
m = re.search(re_dict[item], line[i:])
if m is not None:
print " " + item + ": " + m.group(1)
i += len(m.group(0))
and the output is
DISK1:[MYROOT.][MYDIR]FILE.DAT
    device: DISK1
    root: MYROOT
    directory: MYDIR
    file: FILE
    type: DAT
DISK1:[MYDIR]FILE.DAT
    device: DISK1
    directory: MYDIR
    file: FILE
    type: DAT
[MYDIR]FILE.DAT
    directory: MYDIR
    file: FILE
    type: DAT
FILE.DAT;10
    file: FILE
    type: DAT
    version: 10
NODE::DISK5:[REMOTE.ACCESS]FILE.DAT
    node: NODE
    device: DISK5
    directory: REMOTE.ACCESS
    file: FILE
    type: DAT