Chunk a colon in NLTK - regex

I am trying to split a chunk at the position of a colon (:) in NLTK, but it seems to be a special case. In normal regex I can just put it in [:] with no problems.
But in NLTK, no matter what I do, the RegexpParser does not like it.
from nltk import RegexpParser
grammar = r"""
NP: {<DT|PP\$>?<JJ>*<NN>|<NNP.*><\:><VBD>} # chunk (Rapunzel + : + let) together
{<NNP>+}
<.*>}{<VBD.*>
"""
cp = RegexpParser(grammar)
sentence = [("Rapunzel", "NNP"), (":",":"), ("let", "VBD"), ("down", "RP"), ("her", "PP$"), ("long", "JJ"), ("golden", "JJ"), ("hair", "NN")]
print(cp.parse(sentence))
The above code does make a chunk, picking up the colon as part of the block.
The <.*>}{<VBD.*> line splits the chunk made up of (Rapunzel + : + let) at the position before "let".
If you take out that split and replace it with the colon, it gives an error:
from nltk import RegexpParser
grammar = r"""
NP: {<DT|PP\$>?<JJ>*<NN>|<NNP.*><\:><VBD>} # chunk (Rapunzel + : + let) together
{<NNP>+}
<.*>}{<\:.*>
"""
cp = RegexpParser(grammar)
sentence = [("Rapunzel", "NNP"), (":",":"), ("let", "VBD"), ("down", "RP"), ("her", "PP$"), ("long", "JJ"), ("golden", "JJ"), ("hair", "NN")]
print(cp.parse(sentence))
ValueError: Illegal chunk pattern: >
Can anyone explain how to do this? I tried Google and went through the docs, but I am none the wiser. I can deal with this after chunking, no problem, but I just want to know why, or how. :-)

It seems that NLTK treats the second colon in each chunk definition as an indicator to start a new chunk.
For those who get the same error, a workaround is to break a multi-rule definition down into multiple single-rule definitions with the same name.
Let's assume we have the following grammar:
grammar = r"""
SOME_CHUNK:
{<NN><:>}
{<JJ><:>}
"""
To fix this, change it to:
grammar = r"""
SOME_CHUNK: {<NN><:>}
SOME_CHUNK: {<JJ><:>}
"""
Unfortunately, this won't work if one of the rules is a chinking regex containing another colon, as in your example.
To help solve your specific issue, please post the exact sentence you are trying to parse. From your example it is hard to tell why you need the |<NNP.*><\:><VBD> part at all.

Related

How to replace multiple words in a line with other words

I have recently started building a bot in discord.py and got stuck in this situation:
@client.command()
async def replace(ctx, *, arg):
    msg = f"{arg}" .format(ctx.message).replace('happe', '<:happe:869470107066302484>')
    await ctx.send(msg)
This is a command which replaces the word "happe" with the emoji, so it looks like:
Command : {prefix}replace i am very happe
Result : i am very "emoji for happe"
but I want to make it so we can replace multiple words with different emojis, not just a single one, like:
Command : {prefix}replace i am not happe, i am sad
Result : i am not "emoji for happe", i am "emoji for sad"
Is there a way to edit multiple words in just one sentence, like using a JSON file or making a list of emojis and their IDs?
Also, this command doesn't seem to work in cogs, and says the command is invalid.
Is there a way to edit multiple words in just one sentence, like using a JSON file or making a list of emojis and their IDs?
Yes, you can do just that. Make a list of word-emoji pairs and iterate over all of them to replace every word.
@client.command()
async def replace(ctx, *, arg):
    l = [("happe", "<:happe:869470107066302484>"), ("sad", "..."), ...]
    msg = arg  # Making a copy here to avoid editing the original arg in case you want to use it at some point
    for word, emoji in l:
        msg = msg.replace(word, emoji)
    await ctx.send(msg)
Also, this command doesn't seem to work in cogs, and says the command is invalid.
In cogs the decorator is @commands.command(), not @client.command(). Don't forget the from discord.ext import commands import to get access to it.
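For instance, a minimal cog sketch (the class name ReplaceCog and the word list are placeholders of mine, not part of the original answer):
from discord.ext import commands

class ReplaceCog(commands.Cog):
    def __init__(self, bot):
        self.bot = bot

    @commands.command()
    async def replace(self, ctx, *, arg):
        # Same word-for-emoji replacement loop as above
        replacements = [("happe", "<:happe:869470107066302484>"), ("sad", "...")]
        msg = arg
        for word, emoji in replacements:
            msg = msg.replace(word, emoji)
        await ctx.send(msg)

# Registration: bot.add_cog(ReplaceCog(bot)) in discord.py 1.x;
# in 2.x, add_cog is a coroutine and must be awaited (e.g. in setup_hook).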
Lastly, I'm a bit confused about what f"{arg}" .format(ctx.message) is supposed to be doing. You can remove the .format and the f-string entirely: arg is already a string, so putting it in an f-string with nothing else has no effect, and .format(ctx.message) doesn't do anything either. The result of that entire expression is the same as just arg.
>>> arg = "this is the arg"
>>> f"{arg}".format("this has no effect")
'this is the arg'

How can I use regex to construct an API call in my Jekyll plugin?

I'm trying to write my own Jekyll plugin to construct an API query from a custom tag. I've gotten as far as creating the basic plugin and tag, but I've run into the limits of my programming skills, so I'm looking to you for help.
Here's my custom tag for reference:
{% card "Arbor Elf | M13" %}
Here's the progress on my plugin:
module Jekyll
  class Scryfall < Liquid::Tag
    def initialize(tag_name, text, tokens)
      super
      @text = text
    end
    def render(context)
      # Store the name of the card, ie "Arbor Elf"
      @card_name =
      # Store the name of the set, ie "M13"
      @card_set =
      # Build the query
      @query = "https://api.scryfall.com/cards/named?exact=#{@card_name}&set=#{@card_set}"
      # Store a specific JSON property
      @card_art =
      # Finally we render out the result
      "<img src='#{@card_art}' title='#{@card_name}' />"
    end
  end
end
Liquid::Template.register_tag('cards', Jekyll::Scryfall)
For reference, here's an example query using the above details (paste it into your browser to see the response you get back)
https://api.scryfall.com/cards/named?exact=arbor+elf&set=m13
My initial attempt after Googling around was to use a regex to split @text at the |, like so:
@card_name = "#{@text}".split(/| */)
This didn't quite work; instead it output this:
["A", "r", "b", "o", "r", " ", "E", "l", "f", " ", "|", " ", "M", "1", "3", " "]
I'm also then not sure how to access and store specific properties within the JSON response. Ideally, I can do something like this:
@card_art = JSONRESPONSE.image_uri.large
I'm well aware I'm asking a lot here, but I'd love to try and get this working and learn from it.
Thanks for reading.
Actually, your split should work; you just need to give it the correct regex (and you can call it on @text directly). You also need to escape the pipe character in the regex, because pipes can have special meaning. You can use rubular.com to experiment with regexes.
parts = @text.split(/\|/)
# => ["Arbor Elf ", " M13"]
Note that they also contain some extra whitespace, which you can remove with strip.
@card_name = parts.first.strip
@card_set = parts.last.strip
This might also be a good time to answer questions like: what happens if the user inserts multiple pipes? What if they insert none? Will your code give them a helpful error message for this?
You'll also need to escape these values in your URL. What if one of your users adds a card containing a & character? Your URL will break:
https://api.scryfall.com/cards/named?exact=Sword of Dungeons & Dragons&set=und
That looks like a URL with three parameters, exact, set and Dragons. You need to encode the user input to be included in a URL:
require 'cgi'
query = "https://api.scryfall.com/cards/named?exact=#{CGI.escape(#card_name)}&set=#{CGI.escape(#card_set)}"
# => "https://api.scryfall.com/cards/named?exact=Sword+of+Dungeons+%26+Dragons&set=und"
What comes after that is a little less clear, because you haven't written the code yet. Try making the call with the Net::HTTP module and then parsing the response with the JSON module. If you have trouble, come back here and ask a new question.

TypeError: 'NoneType' object is not callable in Python with BeautifulSoup XML

I have the following XML file:
<user-login-permission>true</user-login-permission>
<total-matched-record-number>15000</total-matched-record-number>
<total-returned-record-number>15000</total-returned-record-number>
<active-user-records>
  <active-user-record>
    <active-user-name>username</active-user-name>
    <authentication-realm>realm</authentication-realm>
    <user-roles>Role</user-roles>
    <user-sign-in-time>date</user-sign-in-time>
    <events>0</events>
    <agent-type>text</agent-type>
    <login-node>node</login-node>
  </active-user-record>
There are many such records.
I'm trying to get the values from the tags and save them in a different text file using the following code:
soup = BeautifulSoup(open("path/to/xmlfile"), features="xml")
with open('path/to/outputfile', 'a') as f:
    for i in range(len(soup.findall('active-user-name'))):
        f.write('%s\t%s\t%s\t%s\n' % (soup.findall('active-user-name')[i].text, soup.findall('authentication-realm')[i].text, soup.findall('user-roles')[i].text, soup.findall('login-node')[i].text))
I get the error TypeError: 'NoneType' object is not callable for the line for i in range(len(soup.findall('active-user-name'))):
Any idea what could be causing this?
Thanks!
There are a number of issues that need to be addressed with this, the first is that the XML file you provided is not valid XML - a root element is required.
Try something like this as the XML:
<root>
  <user-login-permission>true</user-login-permission>
  <total-matched-record-number>15000</total-matched-record-number>
  <total-returned-record-number>15000</total-returned-record-number>
  <active-user-records>
    <active-user-record>
      <active-user-name>username</active-user-name>
      <authentication-realm>realm</authentication-realm>
      <user-roles>Role</user-roles>
      <user-sign-in-time>date</user-sign-in-time>
      <events>0</events>
      <agent-type>text</agent-type>
      <login-node>node</login-node>
    </active-user-record>
  </active-user-records>
</root>
Now onto the Python. First off, there is no findall method; it's either findAll or find_all. findAll and find_all are equivalent, as documented here.
Next up, I would suggest altering your code so you aren't making use of the find_all method quite so often: using find instead will improve efficiency, especially for large XML files. Additionally, the code below is easier to read and debug:
from bs4 import BeautifulSoup

xml_file = open('./path_to_file.xml', 'r')
soup = BeautifulSoup(xml_file, "xml")
with open('./path_to_output_f.txt', 'a') as f:
    for s in soup.findAll('active-user-record'):
        username = s.find('active-user-name').text
        auth = s.find('authentication-realm').text
        role = s.find('user-roles').text
        node = s.find('login-node').text
        f.write("{}\t{}\t{}\t{}\n".format(username, auth, role, node))
Hope this helps. Let me know if you require any further assistance!
The solution is simple: don't use the findall method; use find_all.
Why? Because there is no findall method at all; there are findAll and find_all, which are equivalent. See the docs for more information.
Though I agree, the error message is confusing.
Hope that helps.
The fix for my version of this problem was to coerce the BeautifulSoup instance into a string, following this thread:
https://groups.google.com/forum/#!topic/comp.lang.python/ymrea29fMFI
From the Python manual:
str([object])
Return a string containing a nicely printable representation of an object. For strings, this returns the string itself. The difference with repr(object) is that str(object) does not always attempt to return a string that is acceptable to eval(); its goal is to return a printable string. If no argument is given, returns the empty string, ''.
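A minimal sketch of that coercion, reusing the question's placeholder file path:
from bs4 import BeautifulSoup

soup = BeautifulSoup(open("path/to/xmlfile"), features="xml")
text = str(soup)  # a plain printable string of the parsed document
print(text[:200])  # e.g. inspect the first 200 characters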

How to read only URLs from a txt file in MATLAB

I have a text file containing multiple URLs along with other information about each URL. How can I read the txt file and save only the URLs in an array, so that I can download them? I want to use
C = textscan(fileId, formatspec);
What should I specify in formatspec as the format for a URL?
This is not a job for textscan; you should use regular expressions for this. In MATLAB, regexes are described here.
For URLs, also refer here or here for examples in other languages.
Here's an example in MATLAB:
% This string is obtained through textscan or something
str = {...
'pre-URL garbage http://www.example.com/index.php?query=test&otherStuf=info more stuff here'
'other foolish stuff ftp://localhost/home/ruler_of_the_world/awesomeContent.py 1 2 3 4 misleading://';
};
% find URLs
C = regexpi(str, ...
    ['((http|https|ftp|file)://|www\.|ftp\.)',...
    '[-A-Z0-9+&@#/%=~_|$?!:,.]*[A-Z0-9+&@#/%=~_|$]'], 'match');
C{:}
Result:
ans =
'http://www.example.com/index.php?query=test&otherStuf=info'
ans =
'ftp://localhost/home/ruler_of_the_world/awesomeContent.py'
Note that this regex requires the protocol to be included, or a leading www. or ftp. prefix. Something like example.com/universal_remote.cgi?redirect= is NOT matched.
You could go on and make the regex cover more and more cases. However, eventually you'll stumble upon the most important conclusion (as made here for example, which is where I got my regex from): given the full definition of what precisely constitutes a valid URL, there is no single regex able to always match every valid URL. That is, there are valid URLs you can dream up that are not captured by any of the regexes shown.
But please keep in mind that this last statement is more theoretical than practical: those non-matchable URLs are valid but not often encountered in practice :) In other words, if your URLs have a pretty standard form, you're pretty much covered with the regex I gave you.
Now, I fooled around a bit with the Java suggestion by pm89. As I suspected, it is an order of magnitude slower than just a regex, since you introduce another "layer of goo" into the code (in my timings, about 40x slower, excluding the imports). Here's my version:
import java.net.URL;
import java.net.MalformedURLException;
str = {...
'pre-URL garbage http://www.example.com/index.php?query=test&otherStuf=info more stuff here'
'pre--URL garbage example.com/index.php?query=test&otherStuf=info more stuff here'
'other foolish stuff ftp://localhost/home/ruler_of_the_world/awesomeContent.py 1 2 3 4 misleading://';
};
% Attempt to convert each item into an URL.
for ii = 1:numel(str)
    cc = textscan(str{ii}, '%s');
    for jj = 1:numel(cc{1})
        try
            url = java.net.URL(cc{1}{jj})
        catch ME
            % rethrow any non-url related errors
            if isempty(regexpi(ME.message, 'MalformedURLException'))
                throw(ME);
            end
        end
    end
end
Results:
url =
'http://www.example.com/index.php?query=test&otherStuf=info'
url =
'ftp://localhost/home/ruler_of_the_world/awesomeContent.py'
I'm not too familiar with java.net.URL, but apparently it is also unable to find URLs without a leading protocol or standard domain prefix (e.g., example.com/path/to/page).
This snippet can undoubtedly be improved upon, but I would urge you to consider why you'd want this longer, inherently slower, and far uglier solution :)
As I suspected, you could use java.net.URL, according to this answer.
To implement the same code in MATLAB:
First read the file into a string, using fileread for example:
str = fileread('Sample.txt');
Then split the text with respect to spaces, using strsplit:
spl_str = strsplit(str);
Finally use java.net.URL to detect the URLs:
for k = 1:length(spl_str)
    try
        url = java.net.URL(spl_str{k})
        % Store or save the URL contents here
    catch e
        % it's not a URL.
    end
end
You can write the URL contents into a file using urlwrite. But first convert the URLs obtained from java.net.URL to char:
url = java.net.URL(spl_str{k});
urlwrite(char(url), 'test.html');
Hope it helps.

Regex to extract URLs from href attribute in HTML with Python [duplicate]

Consider a string as follows:
string = '<p>Hello World</p><a href="http://example.com">More Examples</a><a href="http://2.example">Even More Examples</a>'
How could I, with Python, extract the URLs, inside the anchor tag's href? Something like:
>>> url = getURLs(string)
>>> url
['http://example.com', 'http://2.example']
import re

url = '<p>Hello World</p><a href="http://example.com">More Examples</a><a href="http://2.example">Even More Examples</a>'
urls = re.findall(r'https?://(?:[-\w.]|(?:%[\da-fA-F]{2}))+', url)
>>> print(urls)
['http://example.com', 'http://2.example']
The best answer is...
Don't use a regex
The expression in the accepted answer misses many cases. Among other things, URLs can have Unicode characters in them. The regex you want is here, and after looking at it, you may conclude that you don't really want it after all: the most correct version is ten thousand characters long.
Admittedly, if you were starting with plain, unstructured text with a bunch of URLs in it, then you might need that ten-thousand-character regex. But if your input is structured, use the structure. Your stated aim is to "extract the URLs inside the anchor tags' href". Why use a ten-thousand-character regex when you can do something much simpler?
Parse the HTML instead
For many tasks, Beautiful Soup is far faster and easier to use:
>>> from bs4 import BeautifulSoup as Soup
>>> html = Soup(string, 'html.parser') # Soup(string, 'lxml') if lxml is installed
>>> [a['href'] for a in html.find_all('a')]
['http://example.com', 'http://2.example']
If you prefer not to use external tools, you can also directly use Python's own built-in HTML parsing library. Here's a really simple subclass of HTMLParser that does exactly what you want:
from html.parser import HTMLParser

class MyParser(HTMLParser):
    def __init__(self, output_list=None):
        HTMLParser.__init__(self)
        if output_list is None:
            self.output_list = []
        else:
            self.output_list = output_list

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            self.output_list.append(dict(attrs).get('href'))
Test:
>>> p = MyParser()
>>> p.feed(string)
>>> p.output_list
['http://example.com', 'http://2.example']
You could even create a new method that accepts a string, calls feed, and returns output_list; see the sketch below. This is a vastly more powerful and extensible way to extract information from HTML than regular expressions.
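For example, a small wrapper along those lines (the name get_urls is just an illustration, echoing the getURLs helper the question wished for):
def get_urls(html_string):
    # Feed the markup to a fresh parser and hand back the collected hrefs
    p = MyParser()
    p.feed(html_string)
    return p.output_list

>>> get_urls(string)
['http://example.com', 'http://2.example']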