Looked at the following post explaining how to store cookies:
How to access a cookie from callback function in Dash by Plotly?
I'm trying to replicate this, but I'm not able to store or retrieve cookies. What is wrong in the simple example below? There are no error messages, but when debugging, the all_cookies dict is empty, while I'd expect it to have at least one entry, 'dash cookie'.
@app.callback(
    Output(ThemeSwitchAIO.ids.switch("theme"), "value"),
    Input("url-login", "pathname"),
)
def save_load_cookie(value):
    dash.callback_context.response.set_cookie('dash cookie', '1 - cookie')
    all_cookies = dict(flask.request.cookies)
    return dash.no_update
Please note the app is running on my local machine via the standard flask server:
app.run_server(host='127.0.0.1', port=80, debug=True,
               use_debugger=False, use_reloader=False, passthrough_errors=True)
Thank you @coralvanda: the callback needs to return a value instead of dash.no_update. The code should simply be:
@app.callback(
    Output(ThemeSwitchAIO.ids.switch("theme"), "value"),
    Input("url-login", "pathname"),
)
def save_load_cookie(value):
    dash.callback_context.response.set_cookie('dash cookie', '1 - cookie')
    all_cookies = dict(flask.request.cookies)
    return value
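Worth keeping in mind when debugging this: a cookie set on a response only shows up in the request's cookies on the *next* request, so the first run of the callback legitimately sees an empty dict. A standard-library sketch (http.cookies, no Dash or Flask required) of that round trip, using the hypothetical name dash_cookie because the stdlib rejects cookie names containing spaces:

```python
from http.cookies import SimpleCookie

# Response side: roughly what response.set_cookie('dash_cookie', ...) emits.
response_cookie = SimpleCookie()
response_cookie['dash_cookie'] = '1 - cookie'
print(response_cookie.output())   # Set-Cookie: dash_cookie="1 - cookie"

# Request side, on the NEXT request: the browser echoes the header back,
# and the server parses it into a dict like flask.request.cookies.
request_cookie = SimpleCookie()
request_cookie.load('dash_cookie="1 - cookie"')
all_cookies = {k: morsel.value for k, morsel in request_cookie.items()}
print(all_cookies)                # {'dash_cookie': '1 - cookie'}
```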
I am having an issue passing an additional parameter to grequests using a hook. It works in a standalone (non-Flask) app, but not with Flask (the integrated Flask server). Here is my code snippet:
self.async_list = []
for url in self.urls:
    self.action_item = grequests.get(url,
                                     hooks={'response': [self.hook_factory(test='new_folder')]},
                                     proxies={'http': 'proxy url'},
                                     timeout=20)
    self.async_list.append(self.action_item)
grequests.map(self.async_list)

def hook_factory(self, test, *factory_args, **factory_kwargs):
    print (test + " In start of hook factory")  # this worked and I see the test value printing as new_folder

    def do_something(response, *args, **kwargs):
        print (test + " In do something")  # this is not printing, hence I was not able to save this response to a newly created folder
        self.file_name = str(test) + "/"
        print ("file name is " + self.file_name)
        with open(REL_PATH + self.file_name, 'wb') as f:
            f.write(response.content)
        return None

    return do_something
Am I missing anything here?
To answer my own question: after further analysis, there was nothing wrong with the code above. For some reason my session data, which lives in _request_ctx_stack.top, was not available, but the same session data was present in _request_ctx_stack._local. I don't know why, but reading my data from _request_ctx_stack._local instead of _request_ctx_stack.top for this hook alone let me execute the same hook without any issues.
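For what it's worth, the hook-factory pattern itself is sound. A minimal, runnable sketch (plain functions, no grequests or Flask) showing that the inner hook closes over the test argument, so any value bound at factory time is still visible when the hook later fires:

```python
def hook_factory(test):
    # The returned function captures `test` from this enclosing call.
    def do_something(response):
        return "%s handled %s" % (test, response)
    return do_something

hook = hook_factory(test='new_folder')
print(hook('resp-1'))  # new_folder handled resp-1
```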
I am trying to write a unit test case for an external-facing API of my Django application. I have a model called Dummy with two fields, temp and content. The following function is called by a third party to fetch the content field. temp is an indexed unique key.
@csrf_exempt
def fetch_dummy_content(request):
    try:
        temp = request.GET.get("temp")
        dummy_obj = Dummy.objects.get(temp=temp)
    except Dummy.DoesNotExist:
        content = 'Object not found.'
    else:
        content = dummy_obj.content
    return HttpResponse(content, content_type='text/plain')
I have the following unit test case.
def test_dummy_content(self):
    params = {
        'temp': 'abc'
    }
    dummy_obj = mommy.make(
        'Dummy',
        temp='abc',
        content='Hello World'
    )
    response = self.client.get(
        '/fetch_dummy_content/',
        params=params
    )
    self.assertEqual(response.status_code, 200)
    self.assertEqual(response.content, 'Hello World')
Every time I run the test case, it goes into the exception branch and returns Object not found. instead of Hello World. Upon further debugging I found that temp from the request object inside the view function is always None, even though I am passing it in params.
I might be missing something and am not able to figure it out. What's the proper way to test this kind of function?
There's no params parameter for get or any of the other methods on the test client; you're probably thinking of data.
response = self.client.get(
    '/fetch_dummy_content/',
    data=params
)
It's the second argument anyway, so you can just do self.client.get('/fetch_dummy_content/', params) too.
Any unknown keyword arguments get included in the request environment, which explains why you weren't getting an error for using the wrong name.
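A quick way to sanity-check what the test client should be sending: for GET requests the data dict is urlencoded into the query string, so request.GET.get("temp") finds the value. A sketch with plain urllib.parse, no Django required:

```python
from urllib.parse import urlencode

params = {'temp': 'abc'}

# This is the URL the test client effectively requests when `data` is used.
print('/fetch_dummy_content/?' + urlencode(params))
# /fetch_dummy_content/?temp=abc
```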
I'm using the following code (from a Django management command) to listen to the Twitter stream. I've used the same code in a separate command to track keywords successfully; I've branched this out to use location, and (apparently rightly) wanted to test it without disrupting my existing analysis that's running.
I've followed the docs and have made sure the box is in long/lat format (in fact, I'm now using the example long/lat from the Twitter docs). It looks broadly the same as the question here, and I tried using their version of the code from the answer, with the same error. If I switch back to using 'track=...', the same code works, so it's a problem with the location filter.
Adding a debug print inside tweepy's streaming.py so I can see what's happening, I print out self.parameters, self.url and self.headers from _run, and get:
{'track': 't,w,i,t,t,e,r', 'delimited': 'length', 'locations': '-121.7500,36.8000,-122.7500,37.8000'}
/1.1/statuses/filter.json?delimited=length and
{'Content-type': 'application/x-www-form-urlencoded'}
respectively. That seems to me to be missing the location search in some way, shape or form. I'm obviously not the only one using tweepy's location search, so I think it's more likely a problem in my use of it than a bug in tweepy (I'm on 2.3.0), but my implementation looks right as far as I can tell.
My stream handling code is here:
consumer_key = 'stuff'
consumer_secret = 'stuff'
access_token = 'stuff'
access_token_secret_var = 'stuff'

import tweepy
import json

# This is the listener, responsible for receiving data
class StdOutListener(tweepy.StreamListener):
    def on_data(self, data):
        # Twitter returns data in JSON format - we need to decode it first
        decoded = json.loads(data)
        #print type(decoded), decoded
        # Also, we convert UTF-8 to ASCII ignoring all bad characters sent by users
        try:
            user, created = read_user(decoded)
            print "DEBUG USER", user, created
            if decoded['lang'] == 'en':
                tweet, created = read_tweet(decoded, user)
                print "DEBUG TWEET", tweet, created
            else:
                pass
        except KeyError, e:
            print "Error on Key", e
        except DataError, e:
            print "DataError", e
        #print user, created
        print ''
        return True

    def on_error(self, status):
        print status

l = StdOutListener()
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret_var)
stream = tweepy.Stream(auth, l)
# locations must be long, lat
stream.filter(locations=[-121.75, 36.8, -122.75, 37.8], track='twitter')
The issue here was the order of the coordinates.
The correct format is:
SouthWest corner (long, lat), then NorthEast corner (long, lat). I had them transposed. :(
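A minimal sketch (plain Python, no tweepy needed) of a hypothetical helper that normalizes any two opposite corners into the SW-first order the streaming API expects, applied to the transposed box from the question:

```python
def normalize_bbox(lon1, lat1, lon2, lat2):
    """Return [sw_long, sw_lat, ne_long, ne_lat] regardless of corner order."""
    # The south-west corner has the smaller longitude and latitude.
    return [min(lon1, lon2), min(lat1, lat2), max(lon1, lon2), max(lat1, lat2)]

# The transposed box from the question comes out in the correct order:
print(normalize_bbox(-121.75, 36.8, -122.75, 37.8))
# [-122.75, 36.8, -121.75, 37.8]
```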
Note also that the streaming API doesn't allow filtering by location AND keyword simultaneously.
You should refer to this answer; I had the same problem earlier:
https://stackoverflow.com/a/22889470/4432830
I am building a CrawlSpider using Scrapy 0.22.2 for Python 2.7.3 and am having problems with Requests: the callback method that I specify is never called. Here is a snippet from my parsing method that initiates a Request within an elif block:
elif current_status == "Superseded":
    # Need to do more work here. Have to check whether there is a replacement
    # unit available. If there isn't, download whatever outline is there.
    # We need to look for a <td> element which contains "Is superseded by "
    # and follow that link.
    updated_unit = hxs.xpath('/html/body/div[@id="page"]/div[@id="layoutWrapper"]/div[@id="twoColLayoutWrapper"]/div[@id="twoColLayoutLeft"]/div[@class="layoutContentWrapper"]/div[@class="outer"]/div[@class="fieldset"]/div[@class="display-row"]/div[@class="display-row"]/div[@class="display-field-info"]/div[@class="t-widget t-grid"]/table/tbody/tr[1]/td[contains(., "Is superseded by ")]/a')
    # need child element a
    updated_unit_link = updated_unit.xpath('@href').extract()[0]
    updated_url = "http://training.gov.au" + updated_unit_link
    print "\033[0;31mSuperseded by " + updated_url + "\033[0m"  # prints in red for superseded; need to follow this link to current
    yield Request(url=updated_url, callback='sortSuperseded', dont_filter=True)

def sortSuperseded(self, response):
    print "\033[0;35mtest callback called\033[0m"
There are no errors when I execute this and the URL is OK, but sortSuperseded is never called: I never see 'test callback called' printed in the console.
The url I am extracting is also within the domain that I specify for my CrawlSpider.
allowed_domains = ["training.gov.au"]
Where am I going wrong?
Don't pass the callback method's name as a quoted string; pass a reference to the method itself. Change the line:
yield Request(url=updated_url, callback='sortSuperseded', dont_filter=True)
to
yield Request(updated_url, callback=self.sortSuperseded, dont_filter=True)
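A minimal sketch (plain Python classes named to mirror Scrapy, not Scrapy itself) of why the string form fails: the request object stores whatever you pass as callback and later calls it, and a bound method is callable where a string is not.

```python
class Request:
    """Toy stand-in for scrapy.Request: it just stores the callback."""
    def __init__(self, url, callback=None):
        self.url = url
        self.callback = callback

class Spider:
    def parse(self, response):
        # Pass the bound method itself, not the string 'sort_superseded'.
        return Request("http://example.com", callback=self.sort_superseded)

    def sort_superseded(self, response):
        return "callback called with " + response

req = Spider().parse(None)
print(callable(req.callback))          # True (a string here would be False)
print(req.callback("dummy response"))  # callback called with dummy response
```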
I am trying to delete a client object in my program and then also delete the object in activeCollab using the API provided. I can delete the object, but I keep getting a 404 error when it calls the API. I did a print of c.id and I am getting the correct ID, and if I replace ':company_id' in the req statement with the actual ID of the client, it works.
Here is my code for the delete:
def deleteClient(request, client_id):
    c = get_object_or_404(Clients, pk=client_id)
    # adding the params for the request to the aC API
    params = urllib.urlencode({
        'submitted': 'submitted',
        'company[id]': c.id,
    })
    # make the request
    req = urllib2.Request("http://website_url/public/api.php?path_info=/people/:company_id/delete&token=XXXXXXXXXXXXXXXXXXXX", params)
    f = urllib2.urlopen(req)
    print f.read()
    c.delete()
    return HttpResponseRedirect('/clients/')
Thanks everyone.
Oh here is the link to the API documentation for the delete:
http://www.activecollab.com/docs/manuals/developers/api/companies-and-users
From the docs it appears that :company_id is supposed to be replaced by the actual company id, and this replacement won't happen automatically. Currently you are sending the company id in the POST parameters (which the API isn't expecting) while sending the literal value ':company_id' in the query string.
Try something like:
url_params = dict(path_info="/people/%s/delete" % c.id, token=MY_API_TOKEN)
data_params = dict(submitted='submitted')
req = urllib2.Request(
    "http://example.com/public/api.php?%s" % urllib.urlencode(url_params),
    urllib.urlencode(data_params)
)
Of course, because you are targeting this api.php script, I can't tell whether that script is supposed to do some magic replacement. But given that it works when you manually replace :company_id with the actual value, this is the best bet, I think.
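To make the substitution concrete, here is a runnable sketch of the same URL construction using Python 3's urllib.parse (the answer above uses the Python 2 urllib/urllib2 names); build_delete_url and the token value are illustrative, not part of the activeCollab API:

```python
from urllib.parse import urlencode

def build_delete_url(base, company_id, token):
    # The :company_id placeholder must be filled in by us; the API won't do it.
    url_params = {"path_info": "/people/%s/delete" % company_id, "token": token}
    return "%s?%s" % (base, urlencode(url_params))

url = build_delete_url("http://example.com/public/api.php", 42, "TOKEN")
print(url)
# http://example.com/public/api.php?path_info=%2Fpeople%2F42%2Fdelete&token=TOKEN
```

Note that urlencode percent-encodes the slashes in path_info, which is the normal, expected form for a query-string value.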