I need to remove the link rel="canonical" tag from the head of my site. I removed all instances of it in the Metatag module's settings and I also used the Disable Link Rel module, which is supposed to remove it, but it is still there.
You can also alter the $variables['page']['#attached'] array from hook_preprocess_html().
First, check the structure by printing the variables:
print '<pre>'; print_r($variables['page']['#attached']); die;
Then check for the canonical entry and empty its attributes, like this:
if ($variables['page']['#attached']['html_head'][1][1] == 'canonical_url') {
  $variables['page']['#attached']['html_head'][1][0]['#attributes'] = array();
}
In my case, the printed array looked like this:
Array
(
    [html_head] => Array
        (
            [0] => Array
                (
                    [0] => Array
                        (
                            [#tag] => meta
                            [#attributes] => Array
                                (
                                    [name] => title
                                    [content] => test english 1 | localhost
                                )
                        )
                    [1] => title
                )
            [1] => Array
                (
                    [0] => Array
                        (
                            [#tag] => link
                            [#attributes] => Array
                                (
                                    [rel] => canonical
                                    [href] => http://localhost/leadervinu_development/drupal-8.7.3/en/node/51
                                )
                        )
                    [1] => canonical_url
                )
        )
)
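Rather than hard-coding the index (which can shift from page to page), a more defensive variant is to loop over html_head and unset the canonical pair by its key. This is only a sketch, assuming the [render array, machine key] structure shown in the dump above; MYTHEME stands for your theme's machine name:
function MYTHEME_preprocess_html(&$variables) {
  if (empty($variables['page']['#attached']['html_head'])) {
    return;
  }
  foreach ($variables['page']['#attached']['html_head'] as $key => $item) {
    // Each entry is a pair: [0] => render array, [1] => machine key.
    if (isset($item[1]) && $item[1] === 'canonical_url') {
      unset($variables['page']['#attached']['html_head'][$key]);
    }
  }
}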
I am pretty much below beginner level with REGEX/REGEXP and have hit a blocking point in a project I am working on, where I am trying to get the IDs of posts that match the search criteria, but I want to restrict the search to the text between two sub-strings. What I am trying to figure out is how to write the REGEXP in the meta_query:
$args = array(
    'post_type'      => 'custom',
    'order'          => 'DESC',
    'posts_per_page' => 10,
    'paged'          => $page,
    'meta_query'     => array(
        array(
            'key'     => 'key',
            'value'   => "title*" . $search . "*_title",
            'compare' => 'REGEXP',
        )
    ),
);
And here is an example of the field in the DB:
a:302:{s:5:"title";s:10:"Test title";s:6:"_title";s:19:"
Unfortunately, none of the combinations I tried based on the documentation of SQL REGEXP return any values. I am trying to understand how I can pull this off and would appreciate any input.
Also, I would rather stick to WP_Query for now, even though an SQL LIKE "%title%{$search}%_title%" works perfectly. So an alternative solution would be how to set the compare to 'LIKE' and pass it '%', since that is not possible out of the box, as the % gets escaped, I believe.
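For what it's worth, here is an untested sketch of a REGEXP value built around the serialized layout shown above (the "key";s:<length>:"value" pairs) instead of the title*...*_title form; it assumes MySQL's POSIX regex syntax and that $search contains no regex metacharacters:
$args = array(
    'post_type'      => 'custom',
    'order'          => 'DESC',
    'posts_per_page' => 10,
    'paged'          => $page,
    'meta_query'     => array(
        array(
            'key'     => 'key',
            // Match the serialized 'title' entry whose value contains $search,
            // i.e. "title";s:<len>:"...<search>...", which sits before '_title'.
            'value'   => '"title";s:[0-9]+:"[^"]*' . $search . '[^"]*"',
            'compare' => 'REGEXP',
        ),
    ),
);
$query = new WP_Query($args);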
I am using Amazon Textract to extract data from a scanned document. Now I want to convert the output to a PDF file. Below is a sample output of Textract:
[1] => Array
    (
        [BlockType] => LINE
        [Confidence] => 99.4744720459
        [Text] => Hello
        [Geometry] => Array
            (
                [BoundingBox] => Array
                    (
                        [Width] => 0.243866533041
                        [Height] => 0.0134594505653
                        [Left] => 0.176409825683
                        [Top] => 0.0463116429746
                    )
                [Polygon] => Array
                    (
                        [0] => Array
                            (
                                [X] => 0.176409825683
                                [Y] => 0.0463116429746
                            )
                        [1] => Array
                            (
                                [X] => 0.420276373625
                                [Y] => 0.0463116429746
                            )
                        [2] => Array
                            (
                                [X] => 0.420276373625
                                [Y] => 0.0597710944712
                            )
                        [3] => Array
                            (
                                [X] => 0.176409825683
                                [Y] => 0.0597710944712
                            )
                    )
            )
        [Id] => 75e8917d-701e-4e26-bade-f00bde9d87db
        [Relationships] => Array
            (
                [0] => Array
                    (
                        [Type] => CHILD
                        [Ids] => Array
                            (
                                [0] => 46f44500-4960-4405-99f3-fa43101bc2ca
                            )
                    )
            )
    )
As you can see, the output contains the text along with its height, width, and X/Y coordinates. How can I place the text at the same coordinates in a PDF file?
Assuming you can convert the above to JSON, you can use jsPDF or PDFkit to create the PDF. The functionality maps pretty well based upon the limited data you posted, but I have not seen the full structure of Textract as it is still in Beta and I didn't get an invite to the program. Both these projects can use Node to create a server-side solution, but they also work in the Browser.
At the time of this writing, Google Cloud has an OCR component in their Vision - Document Text Detection feature. Unlike Textract, it approaches the task as just reporting what visual elements the document has and creating a comprehensive (and large) data structure that describes what it "sees." Textract, according to Amazon, uses machine learning to organize the data in a more human understandable form that seeks to differentiate the form from the data that constitutes the filled-out part of the form. If you are trying to create a relatively complete PDF, the Google product is well suited. Textract might be too, but I don't know yet.
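Since the dump in the question is PHP print_r output, here is the same idea sketched in PHP with the FPDF library instead (my own substitution; jsPDF/PDFKit work the same way). Textract's BoundingBox values are ratios of the page size, so they have to be scaled by the page dimensions:
require 'fpdf.php'; // assumes the FPDF library is available

$pageW = 612; // US Letter in points (assumed page size)
$pageH = 792;
$pdf = new FPDF('P', 'pt', array($pageW, $pageH));
$pdf->AddPage();
$pdf->SetFont('Helvetica');

// $blocks is the decoded Textract response shown above.
foreach ($blocks as $block) {
    if ($block['BlockType'] !== 'LINE') {
        continue;
    }
    $box = $block['Geometry']['BoundingBox'];
    $pdf->SetFontSize($box['Height'] * $pageH); // approximate the original size
    // Text() places the baseline, so use the bottom edge of the bounding box.
    $pdf->Text($box['Left'] * $pageW, ($box['Top'] + $box['Height']) * $pageH, $block['Text']);
}
$pdf->Output('F', 'textract.pdf'); // FPDF 1.81+ argument order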
This repository contains code examples (in Java) showing how you can generate a searchable PDF using AWS Textract. If you are not using Java, you may also deploy it as an AWS Lambda function and then invoke it via the AWS SDK or as a REST API call using AWS API Gateway.
A corresponding blog post is also available here.
I'm storing a multidimensional array in a cookie.
$this->Cookie->write('Cart', $products, false, 3600);
Below is the multidimensional array which I'm storing in the cookie:
Array
(
    [Cart] => Array
        (
            [user_id] =>
            [product_id] => 92
            [quantity] => 1
            [date_created] =>
            [date_modified] =>
            [product_name] => shoes
            [price] => 12
        )
)
but when I read the cookie, it gives me this output:
[{\"Cart\":{\"user_id\":\"\",\"product_id\":\"7\",\"quantity\":\"1\",\"date_created\":\"\",\"date_modified\":\"\",\"product_name\":\"iPhone\",\"price\":\"12\"}}]
Below is the code I'm using to read the cookie:
$this->Cookie->read('Cart');
On my local server it works perfectly, but it gives me the above-mentioned output when I try it on the online server.
You could try the following to write:
$this->Cookie->write('Cart', serialize($products), false, 3600);
And this to read:
unserialize($this->Cookie->read('Cart'));
Your cookie is probably being saved as plain text.
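If changing how the cookie is written is not an option: the raw value shown looks like escaped JSON (newer CakePHP Cookie components JSON-encode arrays), so a fallback sketch, hedged on your CakePHP version, would be to decode it manually:
$raw = $this->Cookie->read('Cart');
// On the live server read() apparently returns the escaped JSON string.
$cart = is_string($raw) ? json_decode(stripslashes($raw), true) : $raw;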
Background:
I have a custom-generated log file with the following pattern:
[2014-03-02 17:34:20] - 127.0.0.1|ERROR| E:\xampp\htdocs\test.php|123|subject|The error message goes here ; array (
'create' =>
array (
'key1' => 'value1',
'key2' => 'value2',
'key3' => 'value3'
),
)
[2014-03-02 17:34:20] - 127.0.0.1|DEBUG| flush_multi_line
The second entry, [2014-03-02 17:34:20] - 127.0.0.1|DEBUG| flush_multi_line, is a dummy line that just lets Logstash know that the multiline event is over; it is dropped later on.
My config file is the following:
input {
    stdin {}
}
filter {
    multiline {
        pattern => "^\["
        what => "previous"
        negate => true
    }
    grok {
        match => ['message', "\[.+\] - %{IP:ip}\|%{LOGLEVEL:loglevel}"]
    }
    if [loglevel] == "DEBUG" {         # the event-flush line
        drop {}
    } else if [loglevel] == "ERROR" {  # the first line of the multiline event
        grok {
            match => ['message', ".+\|.+\| %{PATH:file}\|%{NUMBER:line}\|%{WORD:tag}\|%{GREEDYDATA:content}"]
        }
    } else {                           # it's a new line (from the multiline event)
        mutate {
            replace => ["content", "%{content} %{message}"]  # supposing each new line will override the message field
        }
    }
}
output {
    stdout { debug => true }
}
The output for the content field is: The error message goes here ; array (
Problem:
My problem is that I want to store the rest of the multiline event in the content field:
The error message goes here ; array (
'create' =>
array (
'key1' => 'value1',
'key2' => 'value2',
'key3' => 'value3'
),
)
That way I can remove the message field later.
The #message field contains the whole multiline event, so I tried the mutate filter with the replace function on it, but I just can't get it working.
I don't understand the multiline filter's way of working; if someone could shed some light on this, it would be really appreciated.
Thanks,
Abdou.
I went through the source code and found out that:
The multiline filter will cancel all the events that are considered to be a follow-up of a pending event, then append that line to the original message field, meaning any filters placed after the multiline filter won't apply in this case.
The only event that will ever pass the filter is one that is considered to be a new one (something that starts with [ in my case).
Here is the working code:
input {
    stdin {}
}
filter {
    if "|ERROR|" in [message] {  # if this is the first line of a multiline message
        grok {
            match => ['message', "\[.+\] - %{IP:ip}\|%{LOGLEVEL:loglevel}\| %{PATH:file}\|%{NUMBER:line}\|%{WORD:tag}\|%{GREEDYDATA:content}"]
        }
        mutate {
            replace => ["message", "%{content}"]  # replace the message field with the content field (so later lines auto-append to it)
            remove_field => ["content"]           # we no longer need this field
        }
    }
    multiline {  # nothing will pass this filter unless it is a new event (a new [2014-03-02 1...)
        pattern => "^\["
        what => "previous"
        negate => true
    }
    if "|DEBUG| flush_multi_line" in [message] {
        drop {}  # we don't need the dummy line, so drop it
    }
}
output {
    stdout { debug => true }
}
Cheers,
Abdou
Grok and multiline handling is mentioned in this issue: https://logstash.jira.com/browse/LOGSTASH-509
Simply add "(?m)" in front of your grok regex and you won't need the mutate. Example from the issue:
pattern => "(?m)<%{POSINT:syslog_pri}>(?:%{SPACE})%{GREEDYDATA:message_remainder}"
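Applied to the ERROR-line grok from the question, that would look something like this (an untested sketch):
match => ['message', "(?m)\[.+\] - %{IP:ip}\|%{LOGLEVEL:loglevel}\| %{PATH:file}\|%{NUMBER:line}\|%{WORD:tag}\|%{GREEDYDATA:content}"]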
The multiline filter will add "\n" to the message. For example:
"[2014-03-02 17:34:20] - 127.0.0.1|ERROR| E:\\xampp\\htdocs\\test.php|123|subject|The error message goes here ; array (\n 'create' => \n array (\n 'key1' => 'value1',\n 'key2' => 'value2',\n 'key3' => 'value3'\n ),\n)"
However, the grok filter can't parse the "\n". Therefore you need to substitute the \n with another character, say, a blank space:
mutate {
gsub => ['message', "\n", " "]
}
Then, the grok pattern can parse the message. For example:
"content" => "The error message goes here ; array ( 'create' => array ( 'key1' => 'value1', 'key2' => 'value2', 'key3' => 'value3' ), )"
Isn't the issue simply the ordering of the filters? Order is very important to Logstash. You don't need another line to indicate that you've finished outputting the multiline log line. Just ensure the multiline filter appears first, before the grok (see below).
P.S. I've managed to parse a multiline log line fine where XML was appended to the end of the log line and spanned multiple lines, and I still got a nice clean XML object into my content-equivalent variable (named xmlrequest below). Before you say anything about logging XML in logs... I know... it's not ideal... but that's for another debate :)
filter {
    multiline {
        pattern => "^\["
        what => "previous"
        negate => true
    }
    mutate {
        gsub => ['message', "\n", " "]
    }
    mutate {
        gsub => ['message', "\r", " "]
    }
    grok {
        match => ['message', "\[%{WORD:ONE}\] \[%{WORD:TWO}\] \[%{WORD:THREE}\] %{GREEDYDATA:xmlrequest}"]
    }
    xml {
        source => "xmlrequest"
        remove_field => "xmlrequest"
        target => "request"
    }
}
My English is not the best, but I will try to explain what I want. We have a big JSON response which holds a lot of data. Because the server can't show us all the data at the same time, there are multiple links inside this big JSON to smaller JSON responses, like when there are 20 comments on a page and a "Show more comments" button after them. My problem is that I don't know how to do this: how do I request the smaller JSON and then parse it? Another question is how to parse a JSON which links at the end to another JSON, like when requesting a JSON from the Graph API and at the end it has:
[paging] => stdClass Object
    (
        [previous] => https://graph.facebook.com/$group_id$/feed?access_token=$valid_token$&__paging_token=$paging_token$&__previous=1
        [next] => https://graph.facebook.com/$group_id$/feed?access_token=$valid_token$&limit=25&until=1375310522&__paging_token=$paging_token$
    )
When the link from [next] is opened, it shows more posts from that group, and again at its end there is a next link.
As for the first question, I have a slightly longer example from the Facebook Graph API comments:
[comments] => stdClass Object
    (
        [data] => Array
            (
                [0] => stdClass Object
                    (
                        [id] => 53575265890127
                        [from] => stdClass Object
                            (
                                [name] => Pop Dan
                                [id] => 10000897827962
                            )
                        [message] => Random message
                        [can_remove] => 1
                        [created_time] => 2013-08-18T20:01:44+0000
                        [like_count] => 0
                        [user_likes] =>
                    )
                ... more comments ...
            )
        [paging] => stdClass Object
            (
                [cursors] => stdClass Object
                    (
                        [after] => NTM1ODIxODE5ODA2NTQ0
                        [before] => NTM1NzUyNjU2NDgwMTI3
                    )
                [next] => https://graph.facebook.com/$group_id$_$post_id$/comments?access_token=$accestoken$&limit=25&after=NTM1ODIxODE5ODA2NTQ0
            )
    )
From there I want to continue parsing the JSON from [next] and then print it in the same HTML page, without any breaks or anything between the comments. Thanks ;)
Some of the details depend on the language you are using for querying the Graph API.
JSON returned from the Graph API can consist of JSON arrays and JSON objects. JSON objects can further have named value entities.
If you are doing it in Android, then first you have to get the inner JSON object from the response. You can do it this way:
GraphObject go = response.getGraphObject();
JSONObject jso = go.getInnerJSONObject();
Now, to get an object from the JSON you can use:
jso.getJSONObject("object_name");
and to get a JSON array you can use:
jso.getJSONArray("array_name");
Other languages will have similar interfaces.
For a general understanding of JSON, refer to this link.
Regarding your second question, you should understand that next is just another query to the graph node, but with different parameters. How the parameters can be set depends on the API, but in Android you can put parameters as follows:
Bundle params=r.getParameters();
params.putString("offset", ""+25);
r.setParameters(params);
r is an object of type Request.
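For completeness, since the dumps in the question look like PHP's print_r of json_decode output, here is a hypothetical PHP sketch of following the [next] links until the pages run out; $url stands for the initial feed or comments request:
$url = "https://graph.facebook.com/$group_id/feed?access_token=$valid_token&limit=25";
do {
    $page = json_decode(file_get_contents($url)); // stdClass, like the dumps above
    foreach ($page->data as $item) {
        echo $item->message; // print each post/comment with nothing in between
    }
    // Follow the [next] link if the server provided one, otherwise stop.
    $url = isset($page->paging->next) ? $page->paging->next : null;
} while ($url !== null);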