DataTables sends the following query string parameters to Django:
action:remove
data[1][DT_RowId]:1
data[1][volume]:5.0
data[1][coeff]:35
data[2][DT_RowId]:2
data[2][volume]:4.0
data[2][coeff]:50
I can access the values like this:
print request.GET['data[1][volume]']
5.0
How can I access these parameters as a nested dictionary, and how do I get at its keys?
For example, I would like to access the value as data[1]['volume']. In addition, I need to access the keys themselves, e.g. get 1 from data[1].
I think you will need to parse the keys yourself and convert them to a dictionary. This can be done quickly using Python's regular expression module:
import re

# Raw string so the backslashes reach the regex engine untouched
pattern = re.compile(r"data\[(?P<key_one>.*?)\]\[(?P<key_two>.*?)\]")

match = pattern.match('data[1][volume]')
key_one = match.group('key_one')
key_two = match.group('key_two')

print(key_one)  # prints 1
print(key_two)  # prints volume
See the Python documentation for the re module to learn more.
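If you want the whole nested structure at once rather than matching one key at a time, here is a minimal sketch that builds it from request.GET inside a Django view (assuming every relevant key follows the data[i][field] shape shown above):

import re
from collections import defaultdict

pattern = re.compile(r"data\[(?P<key_one>.*?)\]\[(?P<key_two>.*?)\]")

data = defaultdict(dict)
for key, value in request.GET.items():
    match = pattern.match(key)
    if match:  # keys like 'action' simply don't match and are skipped
        data[match.group('key_one')][match.group('key_two')] = value

# data['1']['volume'] is now '5.0', and the row keys are data.keys()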
--
I am trying to extract and preprocess log data for a use case.
The log consists of problem numbers with information for each ID underneath. Each entry starts with:
#!#!#identification_number###96245#!#!#change_log###
action
action1
change
#!#!#attribute###value_change
#!#!#attribute1###status_change
#!#!#attribute2###<None>
#!#!#attribute3###status_change_fail
#!#!#attribute4###value_change
#!#!#attribute5###status_change
#!#!#identification_number###96246#!#!#change_log###
action
change
change1
action1
#!#!#attribute###value_change
#!#!#attribute1###status_change_fail
#!#!#attribute2###value_change
#!#!#attribute3###status_change
#!#!#attribute4###value_change
#!#!#attribute5###status_change
I extracted the identification numbers and saved them as a .csv file:
import re

with open(r'C:\Users\reszi\Desktop\Temp\output_new.txt', encoding="utf8") as f:
    change_log = f.read()

number = re.findall('#!#!#identification_number###(.+?)#!#!#change_log###', change_log)
Now what I am trying to achieve is that, for every ID in the .csv file, I can append the corresponding log content, which looks like:
action
change
#!#!#attribute###
Since I am rather new to Python and only started working with regex a few days ago, I was hoping for some help.
Each log for an ID starts with "#!#!#identification_number###" and ends with "#!#!#attribute5###<entry>".
I have tried the following code, but the result is empty:
In:
x = re.findall("\[^#!#!#identification_number###((.|\n)*)#!#!#attribute5###((.|\n)*)$]", str(change_log))
In:
print(x)
Out:
[]
Try this:
# change_log here should be the whole log file read in as a single string
pattern = 'entification_number###(.+?)#!#!#change_log###(.*?)#!#!#id'
re.findall(pattern, change_log + '#!#!#id', re.DOTALL)
The DOTALL flag makes the dot match newlines as well, so the second capturing group will contain the body of each log.
If you want to get the attributes for each identification number, you can parse each id's log (the second group from the search above) with the following:
pattern = '#!#!#attribute(.*?)###(.*?)#!#'
re.findall(pattern, string_for_each_log_match + '#!#', re.DOTALL)
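Putting the two steps together on the sample log above, a minimal end-to-end sketch (the file path is the one from the question; the appended sentinel terminates the final block, as in the patterns above):

import re

with open(r'C:\Users\reszi\Desktop\Temp\output_new.txt', encoding="utf8") as f:
    log_text = f.read()

# Append the sentinel so the final block also ends on '#!#!#id'
blocks = re.findall('entification_number###(.+?)#!#!#change_log###(.*?)#!#!#id',
                    log_text + '#!#!#id', re.DOTALL)

logs = {}
for id_no, body in blocks:
    # Same sentinel trick for the attribute lines inside each block
    logs[id_no] = re.findall('#!#!#attribute(.*?)###(.*?)#!#',
                             body + '#!#', re.DOTALL)

# logs['96245'] is then a list of (attribute_suffix, value) pairs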
If you put each id into the regex with str.format() when you search, you can grab the lines that contain the correct changelog.
import re

with open(r'path\to\csv.csv', 'r') as f:
    ids = f.readlines()

with open(r'C:\Users\reszi\Desktop\Temp\output_new.txt', encoding="utf8") as f:
    change_log = f.readlines()

matches = {}
for id_no in ids:
    id_no = id_no.strip()  # drop the trailing newline before formatting it into the regex
    for i in range(len(change_log)):
        reg = '#!#!#identification_number###({})#!#!#change_log###'.format(id_no)
        if re.search(reg, change_log[i]):
            matches[id_no] = i
            break
This will create a dictionary with the structure {id_no:line_no,...}.
So once you have all of the lines that tell you where each log starts, you can grab the lines you want that come after these lines.
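For example, a small sketch of that last step (assuming each log runs from one start line to the next start line, or to the end of the file):

starts = sorted(matches.values())
logs = {}
for id_no, line_no in matches.items():
    # the block ends where the next id's block begins
    later = [s for s in starts if s > line_no]
    end = later[0] if later else len(change_log)
    logs[id_no] = change_log[line_no:end]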
--
I am attempting to scrape information from the website:
http://www.forexfactory.com/#tradesPositions
I used to have a script up and running, which this forum helped me get going, but I think something has changed on the website and it no longer works.
What do I need?
I would like to scrape the number of 'short' and 'long' positions for AUDUSD, EURUSD, GBPUSD, USDJPY, USDCAD, NZDUSD and USDCHF.
NOT the percentages, the actual number of traders.
What have I done?
This is for EURUSD
import lxml.html
from selenium import webdriver

driver = webdriver.Chrome(r"C:\Users\MY NAME\Downloads\Chrome Driver\chromedriver.exe")
url = 'http://www.forexfactory.com/#tradesPositions'
driver.get(url)
tree = lxml.html.fromstring(driver.page_source)
results_short = tree.xpath('//*[@id="flexBox_flex_trades/positions_tradesPositionsCopy1"]/div[1]/table/tbody/tr/td[2]/div[1]/ul[1]/li[2]/span/text()')
results_long = tree.xpath('//*[@id="flexBox_flex_trades/positions_tradesPositionsCopy1"]/div[1]/table/tbody/tr/td[2]/div[1]/ul[1]/li[1]/span/text()')
print "Forex Factory"
print "Traders Short EURUSD:", results_short
print "Traders Long EURUSD:", results_long
driver.quit()
This returns
Forex Factory
Traders Short EURUSD: ['337 Traders ', ' ']
Traders Long EURUSD: [' 259 Traders']
I would like to strip everything away from the result except for the numbers. I've tried .strip() and .replace(), but neither works on a list, which will come as no surprise to you guys, I don't think!
Empty List
When I apply the same technique to AUDUSD I get an empty list.
import lxml.html
from selenium import webdriver

driver = webdriver.Chrome(r"C:\Users\Andrew G\Downloads\Chrome Driver\chromedriver.exe")
url = 'http://www.forexfactory.com/#tradesPositions'
driver.get(url)
tree = lxml.html.fromstring(driver.page_source)
results_short = tree.xpath('//*[@id="flexBox_flex_trades/positions_tradesPositionsCopy1"]/div[6]/table/tbody/tr/td[2]/div[1]/ul[1]/li[2]/span/text()')
results_long = tree.xpath('//*[@id="flexBox_flex_trades/positions_tradesPositionsCopy1"]/div[6]/table/tbody/tr/td[2]/div[1]/ul[1]/li[1]/span/text()')
s2 = results_short
l2 = results_long
print "Traders Short AUDUSD:", s2
print "Traders Long AUDUSD:", l2
This returns
Traders Short AUDUSD: []
Traders Long AUDUSD: []
What gives? Is the XPath not working? I just used Chrome's 'inspect element' feature, navigated to the desired number, and copied the path. Same method as for EURUSD.
Ideally, it would be nice to set up a list of div numbers that I can insert into tree.xpath instead of repeating the lines of code for every currency, to keep it neater. So, in the XPath where it has:
/div[number]/
it would be nice to have a list, e.g. [1, 2, 3, 4, 5, 6], that can be inserted there, because the rest of the XPath is the same for all the currencies. Anyway, that's an optional bonus; the priority is to get a return for all the currencies listed.
THANKS
You can remove the whitespace from your results with strip(), as you mentioned; here is my sample code:
# Strip each entry, then drop the ones that end up empty
# (deleting from a list while indexing over it with del can
# skip items or raise IndexError, so rebuild the lists instead)
results_short = [s.strip() for s in results_short if s.strip()]
results_long = [s.strip() for s in results_long if s.strip()]
As for why you cannot get results for AUDUSD: those values are not loaded into the page until you have clicked the "expand" button. However, I found that you can get the result from the following page: http://www.forexfactory.com/trades.php
So you can change the value of url as:
url = ('http://www.forexfactory.com/trades.php')
For this page, since the element's id has changed, you need to update your XPaths to:
results_short = tree.xpath('//*[@id="flexBox_flex_trades/positions_tradesPositions"]/div[6]/table/tbody/tr/td[2]/div[1]/ul[1]/li[2]/span/text()')
results_long = tree.xpath('//*[@id="flexBox_flex_trades/positions_tradesPositions"]/div[6]/table/tbody/tr/td[2]/div[1]/ul[1]/li[1]/span/text()')
Then apply the strip technique mentioned above, and you should get the correct results.
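To cover the "optional bonus" as well, here is a rough sketch that loops over the div indices instead of repeating the code per currency, and pulls just the digits out of strings like '337 Traders '. The order of pairs in the list is an assumption and worth verifying against the live page:

import re
import lxml.html
from selenium import webdriver

driver = webdriver.Chrome(r"C:\Users\MY NAME\Downloads\Chrome Driver\chromedriver.exe")
driver.get('http://www.forexfactory.com/trades.php')
tree = lxml.html.fromstring(driver.page_source)
driver.quit()

# Assumed mapping of table position to currency pair - verify on the page
pairs = ['EURUSD', 'GBPUSD', 'USDJPY', 'AUDUSD', 'USDCAD', 'NZDUSD', 'USDCHF']
for n, pair in enumerate(pairs, start=1):
    base = ('//*[@id="flexBox_flex_trades/positions_tradesPositions"]'
            '/div[%d]/table/tbody/tr/td[2]/div[1]/ul[1]' % n)
    shorts = tree.xpath(base + '/li[2]/span/text()')
    longs = tree.xpath(base + '/li[1]/span/text()')
    # Keep only the digits from entries like '337 Traders '
    shorts = [re.search(r'\d+', s).group() for s in shorts if re.search(r'\d+', s)]
    longs = [re.search(r'\d+', s).group() for s in longs if re.search(r'\d+', s)]
    print "Traders Short %s:" % pair, shorts
    print "Traders Long %s:" % pair, longs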
I am trying to do a basic INSERT operation into a PostgreSQL database through Python via the psycopg2 module. I have read a great many of the questions already posted regarding this subject, as well as the documentation, but I seem to have done something uniquely wrong, and none of the fixes seem to work for my code.
# API call + JSON decoding here
x = 0
for item in ulist:
    idValue = list['members'][x]['name']
    activeUsers.append(str(idValue))
    x += 1

dbShell.executemany("""INSERT INTO slickusers (username) VALUES (%s)""", activeUsers)
The loop creates a list of strings that looks like this when printed:
['b2ong', 'dune', 'drble', 'drars', 'feman', 'got', 'urbo']
I am just trying to have the code INSERT these strings as 1 row each into the table.
The error specified when running is:
TypeError: not all arguments converted during string formatting
I tried changing the INSERT to:
dbShell.executemany("INSERT INTO slackusers (username) VALUES (%s)", (activeUsers,) )
But that seems like it's merely treating the entire list as a single string as it yields:
psycopg2.DataError: value too long for type character varying(30)
What am I missing?
First, the code you pasted:
x = 0
for item in ulist:
    idValue = list['members'][x]['name']
    activeUsers.append(str(idValue))
    x += 1
is not the right way to accomplish what you are trying to do.
First, list is the name of a built-in type in Python and you shouldn't use it as a variable name; I am assuming you meant ulist.
If you really need access to the index of an item in Python, you can use enumerate:
for x, item in enumerate(ulist):
But the best way to do what you are trying to do is something like:
for item in ulist:  # or list['members']; your example is kinda broken here
    activeUsers.append(str(item['name']))
Your first try was:
['b2ong', 'dune', 'drble', 'drars', 'feman', 'got', 'urbo']
Your second attempt was:
(['b2ong', 'dune', 'drble', 'drars', 'feman', 'got', 'urbo'], )
What I think you want is:
[['b2ong'], ['dune'], ['drble'], ['drars'], ['feman'], ['got'], ['urbo']]
You could get this many ways:
dbShell.executemany("INSERT INTO slackusers (username) VALUES (%s)", [ [a] for a in activeUsers] )
or even better:
for item in ulist:  # or list['members']; your example is kinda broken here
    activeUsers.append([str(item['name'])])

dbShell.executemany("""INSERT INTO slickusers (username) VALUES (%s)""", activeUsers)
I have a string like this:
[["structure\/","structure\/home_page\/","structure\/home_page\/headline_list\/","structure\/home_page\/latest\/","topic\/","topic\/location\/","topic\/location\/united_states\/","topic\/location\/united_states\/ohio\/","topic\/location\/united_states\/ohio\/franklin\/","topic\/news\/","topic\/news\/politics\/","topic\/news\/politics\/elections\/,topic\/news\/politics\/elections\/primary\/"]]
I want to use REGEX_EXTRACT_ALL to turn it into elements in a tuple, separated by ",". Then I need to filter out the ones that don't contain structure and location.
However, I got an error that it can't filter a regex type. Any idea?
By the way, the end goal is to parse out the longest hierarchy, like (topic|news|politics|elections|primary)
Updated script:
data = LOAD '/web/visit_log/20160303'
    USING com.twitter.elephantbird.pig.load.JsonLoader('-nestedLoad') AS json:map[];
a = FOREACH data GENERATE json#'section' AS sec_type;
b = FOREACH act_flt GENERATE ..host, REGEX_EXTRACT_ALL(act_type, 'topic..(?!location)(.*?)"') AS extr;
STORE b INTO '/user/tad/sec_hir';
The syntax of your filter ... matches statement seems incorrect, and the data doesn't seem to have () in it:
c = filter b by not extr matches '(structure|location)';
Try changing this to:
c = filter b by not (extr matches 'structure|location');
Note that Pig's matches requires the regex to match the entire value, so if extr merely contains those words you would need a pattern like '.*(structure|location).*'.
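To illustrate that full-match behaviour outside Pig, here is a small Python sketch with hypothetical sample values (Pig's matches behaves like an anchored match over the whole string, not like re.search):

import re

# Hypothetical sample values for extr
paths = ["structure/home_page/", "topic/news/politics/elections/primary/"]

# 'structure|location' alone would full-match neither string; the .* wrappers
# turn it into a contains-style check
kept = [p for p in paths if not re.match(r'.*(structure|location).*$', p)]
print(kept)  # ['topic/news/politics/elections/primary/']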
I'm writing a bot in Python 2.7 using tweepy. I'm stumped on how to approach what I am looking to do. Currently the bot finds the tweet id and appends it to a text file. On later runs I want to use regex to search that file for a match and only write if there is no match within the text file. The intent is to not add duplicate tweet ids to my text file, which could grow to a large number of ids, each followed by a newline.
Any help is appreciated!
/edit: when I try the code below, the IDE says match can't be seen, and there is a syntax error as a result.
import re, codecs, tweepy

qName = Queue.txt
tweets = api.search(q=searchQuery, count=tweet_count, result_type="recent")
with codecs.open(qName, 'a', encoding='utf-8') as f:
    for tweet in tweets:
        tweetId = tweet.id_str
        match = re.findall(tweedId), qName)
        # if match = false then do write, else discard and move on
        f.write(tweetId + '\n')
If I understand you correctly, you need not bother with regex at all; let the right container do the work for you. I would use a non-duplicating container such as a dictionary or a set: read all the data from the file into the set, add the new ids to it, and then write the set back to the file.
e.g.
>>> data = set()
>>> for i in list('asddddddddddddfgggggg'):
...     data.add(i)
...
>>> data
set(['a', 's', 'd', 'g', 'f'])  # note: just one 'd' and one 'g'
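Applied to the tweet ids, a minimal sketch (same qName and tweets as in the question, and assuming the file already exists):

import codecs

# Load the ids we have already written
with codecs.open(qName, 'r', encoding='utf-8') as f:
    seen = set(line.strip() for line in f)

# Append only the ids we have not seen before
with codecs.open(qName, 'a', encoding='utf-8') as f:
    for tweet in tweets:
        tweetId = tweet.id_str
        if tweetId not in seen:
            f.write(tweetId + '\n')
            seen.add(tweetId)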