I am using Django Channels with the @channel_session_user decorator (for access to Django's session data).
@channel_session_user_from_http
def ws_connect(message):
    # creates group names like "group-1"
    group_kw = get_group_id_for_user(message.user)
    Group(group_kw).add(message.reply_channel)

@channel_session_user
def ws_receive(message):
    group_kw = get_group_id_for_user(message.user)
    payload = json.loads(message.content['text'])
    Channel(payload['action']).send(message.content)

@channel_session_user
def ws_disconnect(message):
    group_kw = get_group_id_for_user(message.user)
    Group(group_kw).discard(message.reply_channel)
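For context, here is a plausible sketch of the helper these handlers assume. The real get_group_id_for_user is not shown in the question, so this implementation is only a guess that matches the "group-1" comment:

```python
def get_group_id_for_user(user):
    # Assumed implementation: derive the group name from the user's
    # primary key, matching the "group-1" comment above.
    return "group-{}".format(user.id)

class FakeUser:
    # Minimal stand-in for a Django user, for illustration only.
    id = 1

print(get_group_id_for_user(FakeUser()))  # group-1
```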
That works fine, but there is a problem when testing.
The test below should place a message on the websocket.receive channel; ws_receive should then take the message and place it on the channel named by the message's action value. Finally, I check that it was in fact placed on that channel.
def test_send_chat_message_is_used_by_consumer(self):
    # Make sure a user is authenticated
    self.assertTrue(auth.get_user(self.client).is_authenticated())
    payload = {'action': 'chat.receive',
               'msg': 'Test message.',
               'receiver': self.user2.id}
    message = {'text': json.dumps(payload)}
    # Send a chat message
    Channel('websocket.receive').send(message)
    # Receive it and place it on the right channel
    ws_receive(self.get_next_message('websocket.receive', require=True))
    # Fetch it from the channel
    result = self.get_next_message(payload['action'], require=True)
    # That should be the message sent
    self.assertEqual(result, message)
Instead, I get the following error, pointing to the line with the ws_receive() call.
ValueError: No reply_channel sent to consumer; @channel_session can only be used on messages containing it.
The error is raised here in the Channels source.
Printing the reply_channel returns None instead of containing the correct reply channel name.
tmp = self.get_next_message('websocket.receive', require=True)
print(tmp.reply_channel) # prints: None
Am I overlooking something obvious?
I think you cannot simply call the consumer directly, because it has the @channel_session_user decorator. You should try using the Client that channels.tests provides.
from channels.tests import ChannelTestCase, Client
Then use something like this inside the test function:
client = Client()
client.send('websocket.receive', content=message)
This is because the name that you provide to the Channel object represents the channel this message was received on. Check the __init__ function of Channel.
Also note the function get_next_message in the tests/base.py file. The channel parameter here refers to the (reply) channel to which the message was sent during the test.
To answer your question:
Look at the docstring of the Message object in channels/message.py. It says:
The message content is a dict called .content, while reply_channel is an optional extra attribute representing a channel to use to reply to this message's end user, if that makes sense.
You need to set the "reply_channel" in the dict that you send.
message = {
    'text': json.dumps(payload),
    'reply_channel': 'websocket.receive'
}
Hope it helps.
Has anybody successfully managed to write tests for a Paddle webhook in a Django application?
Paddle sends messages as POST requests to a webhook. According to Paddle's webhook testing, an example request looks like this:
[alert_id] => 1865992389
[alert_name] => payment_succeeded
[...] => ...
The webhook at my Django application receives this as the request.POST=<QueryDict> parameter:
{'alert_id': ['1865992389'], 'alert_name': ['payment_succeeded'], '...': ['...']}
That is, all values are received as single-element arrays, not plain values.
In contrast, my webhook test uses the message format I would expect, i.e. plain values instead of arrays:
response = client.post('/pricing/paddlewebhook/',
                       {'alert_name': 'payment_succeeded',
                        'alert_id': '1865992389',
                        '...': '...',
                        },
                       content_type='application/x-www-form-urlencoded')
assert response.status_code == 200  # Match paddle's expectation
This is received by the webhook as request.POST=<QueryDict> parameter:
{'alert_name': 'payment_succeeded', 'alert_id': '1865992389', '...': '...'}
The webhook itself is a simple POST method of a class-based view:
# Django wants us to apply the csrf_exempt decorator to all methods via dispatch() instead of just to post().
@method_decorator(csrf_exempt, name='dispatch')
class PaddleWebhook(View):
    def post(self, request, *args, **kwargs):
        logger.info("request.POST=%s", request.POST)
        # ...
Is it really suitable testing behaviour to change the test (and the view's post()) to expect array values, or am I missing something obvious about external calls to POST webhooks?
A possible solution ("working for me") is to define and use an intermediate method to "adjust" the incoming parameter dict:
def _extractArrayParameters(self, params_dict: dict) -> dict:
    """Returns a parameter dict that contains values directly instead of single-element arrays.

    Needed as messages from paddle are received as single-element arrays.
    """
    ret = dict()
    for k, v in params_dict.items():
        # Convert all single-element arrays directly to values.
        if isinstance(v, list) and len(v) == 1:
            ret[k] = v[0]
        else:
            ret[k] = v
    return ret
However, this feels dirty, even though it makes all tests pass.
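For what it's worth, the flattening logic can be exercised standalone (the function name below is mine). Note also that Django's QueryDict has a .dict() method that returns plain values (keeping the last value for each key), which may make a hand-rolled helper unnecessary:

```python
def extract_array_parameters(params_dict):
    # Same logic as the helper above: replace single-element lists
    # with their sole value, leave everything else untouched.
    return {k: v[0] if isinstance(v, list) and len(v) == 1 else v
            for k, v in params_dict.items()}

raw = {'alert_id': ['1865992389'], 'alert_name': ['payment_succeeded']}
flat = extract_array_parameters(raw)
print(flat)  # {'alert_id': '1865992389', 'alert_name': 'payment_succeeded'}
```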
I'm trying to stream audio from a (Pepper robot) microphone to Dialogflow. I have working code for sending a single block of audio. When I send the request, the response contains the message None Exception iterating requests!. I've seen this error before, when I was reading from an audio file, but I fail to see what's wrong with the data I'm passing now.
processRemote is called whenever the microphone records something. When writing the sound_data[0].tostring() to a StringIO and later retrieving it in chunks of 4096 bytes, the solution works.
self.processing_queue is supposed to hold a few chunks of audio that should be processed before working on new audio.
The error occurs in the response for self.session_client.streaming_detect_intent(requests).
I'm thankful for any idea.
def processRemote(self, nbOfChannels, nbOfSamplesByChannel, timeStamp, inputBuffer):
    """audio stream callback method with simple silence detection"""
    sound_data_interlaced = np.fromstring(str(inputBuffer), dtype=np.int16)
    sound_data = np.reshape(sound_data_interlaced,
                            (nbOfChannels, nbOfSamplesByChannel), 'F')
    peak_value = np.max(sound_data)
    chunk = sound_data[0].tostring()
    self.processing_queue.append(chunk)
    if self.is_active:
        # detect sound
        if peak_value > 6000:
            print("Peak:", peak_value)
            if not self.recordingInProgress:
                self.startRecording()
        # if recording is in progress we send directly to google
        try:
            if self.recordingInProgress:
                print("preparing request proc remote")
                requests = [dialogflow.types.StreamingDetectIntentRequest(input_audio=chunk)]
                print("should send now")
                responses = self.session_client.streaming_detect_intent(requests)
                for response in responses:
                    print("checking response")
                    if len(response.fulfillment_text) != 0:
                        print("response not empty")
                        self.stopRecording(response)  # stop if we already know the intent
        except Exception as e:
            print(e)
def startRecording(self):
    """init an in-memory file object and save the last raw sound buffer to it."""
    # session path setup
    self.session_path = self.session_client.session_path(DIALOG_FLOW_GCP_PROJECT_ID, self.uuid)
    self.recordingInProgress = True
    requests = list()
    # set up streaming
    print("start streaming")
    q_input = dialogflow.types.QueryInput(audio_config=self.audio_config)
    req = dialogflow.types.StreamingDetectIntentRequest(
        session=self.session_path, query_input=q_input)
    requests.append(req)
    # process pre-recorded audio
    print("work on stored audio")
    for chunk in self.processing_queue:
        print("appending chunk")
        try:
            requests.append(dialogflow.types.StreamingDetectIntentRequest(input_audio=chunk))
        except Exception as e:
            print(e)
    print("getting response")
    responses = self.session_client.streaming_detect_intent(requests)
    print("got response")
    print(responses)
    # iterate through responses from pre-recorded audio
    try:
        for response in responses:
            print("checking response")
            if len(response.fulfillment_text) != 0:
                print("response not empty")
                self.stopRecording(response)  # stop if we already know the intent
    except Exception as e:
        print(e)
    # otherwise continue listening
    print("start recording (live)")
def stopRecording(self, query_result):
    """saves the recording to memory"""
    # stop recording
    self.recordingInProgress = False
    self.disable_google_speech(force=True)
    print("stopped recording")
    # process response
    action = query_result.action
    text = query_result.fulfillment_text.encode("utf-8")
    if (action is not None) or (text is not None):
        if len(text) != 0:
            self.speech.say(text)
        if len(action) != 0:
            parameters = query_result.parameters
            self.execute_action(action, parameters)
As per the source code, the session_client.streaming_detect_intent function expects an iterator of requests, but you are currently giving it a plain list.
Won't work:
requests = [dialogflow.types.StreamingDetectIntentRequest(input_audio=chunk)]
responses = self.session_client.streaming_detect_intent(requests)
# None Exception iterating requests!
Alternatives:
# wrap the list in an iterator
requests = [dialogflow.types.StreamingDetectIntentRequest(input_audio=chunk)]
responses = self.session_client.streaming_detect_intent(iter(requests))

# Note: The example in the source code calls the function like this,
# but this gave me the same error
requests = [dialogflow.types.StreamingDetectIntentRequest(input_audio=chunk)]
for response in self.session_client.streaming_detect_intent(requests):
    # process response
Using a generator structure
While this fixed the error, the intent detection still didn't work. I believe a better program structure is to use a generator, as suggested in the docs. Something like (pseudo-code):
def dialogflow_mic_stream_generator():
    # open stream
    audio_stream = ...
    # send configuration request
    query_input = dialogflow.types.QueryInput(audio_config=audio_config)
    yield dialogflow.types.StreamingDetectIntentRequest(session=session_path,
                                                        query_input=query_input)
    # output audio data from stream
    while audio_stream_is_active:
        chunk = audio_stream.read(chunk_size)
        yield dialogflow.types.StreamingDetectIntentRequest(input_audio=chunk)

requests = dialogflow_mic_stream_generator()
responses = session_client.streaming_detect_intent(requests)
for response in responses:
    # process response
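To see the request-stream pattern in isolation, here is a toy version with no Dialogflow dependency (all names below are made up for illustration): a generator that yields one configuration item first and audio items after, consumed by a function that simply iterates whatever request stream it is given, as streaming_detect_intent does.

```python
def request_generator(chunks):
    # First request carries only configuration, mirroring the pattern above.
    yield {"type": "config"}
    for chunk in chunks:
        # Subsequent requests carry raw audio data.
        yield {"type": "audio", "data": chunk}

def consume(requests):
    # Stand-in for streaming_detect_intent: it just iterates the stream.
    return [req["type"] for req in requests]

print(consume(request_generator([b"a", b"b"])))  # ['config', 'audio', 'audio']
```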
Unfortunately I'm using django-channels 1.1.8, as I missed all the updates to channels 2.0. Upgrading now is unrealistic, as we've just launched and it will take some time to figure out correctly.
Here's my problem:
I'm using message.user.id to differentiate between authenticated users that I need to send messages to. However, there are cases where I'll need to send messages to unauthenticated users as well, and that message depends on an external API call. I have done this in ws_connect():
@channel_session_user_from_http
def ws_connect(message):
    # create group for user
    if str(message.user) == "AnonymousUser":
        user_group = "AnonymousUser" + str(uuid.uuid4())
    else:
        user_group = str(message.user.id)
    print(f"user group is {user_group}")
    Group(user_group).add(message.reply_channel)
    Group(user_group).send({"accept": True})
    message.channel_session['get_user'] = user_group
This is only the first part of the issue; basically I'm appending a random string to each AnonymousUser instance. But I can't find a way to access this string from the request object in a view, in order to determine whom I am sending the message to.
Is this even achievable? Right now I'm not able to access anything set in ws_connect from my view.
EDIT: Following kagronick's advice, I tried this:
@channel_session_user_from_http
def ws_connect(message):
    # create group for user
    if str(message.user) == "AnonymousUser":
        user_group = "AnonymousUser" + str(uuid.uuid4())
    else:
        user_group = str(message.user.id)
    Group(user_group).add(message.reply_channel)
    Group(user_group).send({"accept": True})
    message.channel_session['get_user'] = user_group
    message.http_session['get_user'] = user_group
    print(message.http_session['get_user'])
    message.http_session.save()
However, http_session is None when user is AnonymousUser. Other decorators didn't help.
Yes, you can save to the session and access it in the view. But you need to use the http session and not the channel session. Use the @http_session decorator or @channel_and_http_session. You may need to call message.http_session.save() (I don't remember, I'm on Channels 2 now). But after that you will be able to see the user's group in the view.
Also, using a group for this is kind of overkill. If the group will only ever have one user, put the reply_channel in the session and do something like Channel(request.session['reply_channel']).send() in the view. That way it doesn't need to look up the one user in the group and can send directly to that user.
If this solves your problem please mark it as accepted.
EDIT: unfortunately this only works locally but not in production. When the user is AnonymousUser, message.http_session doesn't exist.
User kagronick got me on the right track by pointing out that message has an http_session attribute. However, it seems http_session is always None in ws_connect when the user is AnonymousUser, which defeats our purpose.
I've solved it by checking in the view whether the user is anonymous, and if he is (which means he doesn't have a session, or at least channels can't see it), initializing one and assigning the key get_user the value "AnonymousUser" + str(uuid.uuid4()) (this was previously done in the consumer).
After I did this, every time ws_connect is called, message will have an http_session attribute whose get_user key holds either the user ID when one is logged in, or AnonymousUser plus a uuid4.
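A minimal sketch of that view-side check might look like this. The helper name is mine, get_user and the AnonymousUser prefix come from the question, and I'm assuming a standard Django request (is_authenticated is a property on Django >= 1.10):

```python
import uuid

def ensure_ws_group(request):
    """Make sure the session carries a group key under 'get_user'
    before the WebSocket connects (sketch, not the author's exact code)."""
    if request.user.is_authenticated:
        request.session['get_user'] = str(request.user.id)
    elif 'get_user' not in request.session:
        if request.session.session_key is None:
            # Anonymous users may have no session yet; create one so
            # channels can see it on connect.
            request.session.save()
        request.session['get_user'] = "AnonymousUser" + str(uuid.uuid4())
    return request.session['get_user']
```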
I am new to Python and I am having a hard time getting my head around the following concept...
I have created some python code which imports Pika to connect and consume messages from a rabbitmq queue.
The basic code is shown below:
receive.py
#!/usr/bin/env python
import pika

# RABBITMQ CONNECTION VARIABLES
MqHostName = 'centosserver'
MqUserName = 'guest'
MqPassWord = 'guest'
QueueName = 'Q1'

# PIKA CODE TO CONNECT TO RABBITMQ
credentials = pika.PlainCredentials(MqUserName, MqPassWord)
connection = pika.BlockingConnection(pika.ConnectionParameters(
    host=MqHostName, credentials=credentials))
channel = connection.channel()
channel.queue_declare(queue=QueueName)

# CALLBACK ROUTINE TO RECEIVE MESSAGES FROM RABBITMQ
def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)

# DEFINE CALLBACK QUEUE
channel.basic_consume(callback,
                      queue=QueueName,
                      no_ack=True)

print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()
The above works insofar as the MQ messages are printed as they are received. However...
I would like to consume the body of the MQ messages in my Python application. I can do this by assigning the message body to a string variable, splitting it into its component values on the delimiter contained in each message (;), and assigning each one to a variable in my Python code, but I only seem to be able to do this within the callback function of the code above...
If I then try to access or work with these variables from another Python script, by importing from receive.py, I just get the raw messages (body) from the pika connection, and I can't understand why this is...
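As a quick illustration of that splitting step (the field values below are invented; note that decoding the bytes avoids the Python 3 pitfall where str(body) yields a literal "b'...'" prefix):

```python
# A made-up example message body using the ';' delimiter from the question.
body = b"temp;21.5;hum;40;dev;sensor-1;ok"

string_body = body.decode("utf-8")       # decode bytes -> str, no b'...' prefix
message_vars = string_body.split(';')    # split on the message delimiter
print(message_vars)  # ['temp', '21.5', 'hum', '40', 'dev', 'sensor-1', 'ok']
```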
The code I have developed so far is shown below:
import pika

# RABBITMQ CONNECTION VARIABLES
MqHostName = 'centosserver'
MqUserName = 'guest'
MqPassWord = 'guest'
QueueName = 'Q1'

# PIKA CODE TO CONNECT TO RABBITMQ
credentials = pika.PlainCredentials(MqUserName, MqPassWord)
connection = pika.BlockingConnection(pika.ConnectionParameters(host=MqHostName, credentials=credentials))
channel = connection.channel()
channel.queue_declare(queue=QueueName)

def callback(ch, method, properties, body):
    print(body)  # this just prints what has been received from the queue with no formatting
    # create a string variable to hold the whole rabbit message body
    string_body = str(body)
    # split the string created above based on the message separator enclosed in '' below
    message_vars = string_body.split(';')
    # define names for the variables separated from string_body in the step above
    mv1, mv2, mv3, mv4, mv5, mv6, mv7 = \
        message_vars[0], message_vars[1], message_vars[2], message_vars[3], \
        message_vars[4], message_vars[5], message_vars[6]

# set the queue
channel.basic_consume(callback,
                      queue=QueueName,
                      no_ack=True)

print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()
This allows me to print or perform operations on the mv1, mv2, etc. variables if I put the code inside the callback(ch, method, properties, body) block, but I want the receive script to keep running, splitting the messages and assigning the content to global variables that I can access from another script via an import statement.
Please can you point me in the right direction? Basically, I would like a process.py script that imports just the values of mv1, mv2, etc., updated each time a new MQ message is received...
process.py
from receive import mv1, mv2, mv3, mv4, mv5, mv6, mv7

new_var = (mv1 * mv2)  # this is an example of what I would like to be able to do
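One Python detail worth noting about this plan: from receive import mv1 copies the value at import time, so later rebinding inside receive.py never updates the importer's name (and importing receive.py would also run start_consuming(), which blocks). A minimal, queue-free sketch of the binding behavior:

```python
import types

# Stand-in for receive.py so the example is self-contained.
receive = types.ModuleType("receive")
receive.mv1 = 1

# 'from receive import mv1' copies the value once, equivalent to:
mv1 = receive.mv1

# Simulate the callback later rebinding the module-level variable.
receive.mv1 = 99

print(mv1)          # 1  -> the imported name is a stale copy
print(receive.mv1)  # 99 -> attribute access sees the update
```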
I know I don't yet understand enough about Python programming so there is likely to be an obvious answer to this but any pointers will be much appreciated.
Thank you!
I'm using the following code (from a Django management command) to listen to the Twitter stream. I've used the same code in a separate command to track keywords successfully; I've branched this out to use location and (apparently rightly) wanted to test it without disrupting my existing analysis that's running.
I've followed the docs and have made sure the box is in long/lat format (in fact, I'm now using the example long/lat from the Twitter docs). It looks broadly the same as the question here, and I tried using their version of the code from the answer: same error. If I switch back to using track=..., the same code works, so it's a problem with the location filter.
Adding a print debug inside tweepy's streaming.py so I can see what's happening, I print out self.parameters, self.url and self.headers from _run, and get:
{'track': 't,w,i,t,t,e,r', 'delimited': 'length', 'locations': '-121.7500,36.8000,-122.7500,37.8000'}
/1.1/statuses/filter.json?delimited=length and
{'Content-type': 'application/x-www-form-urlencoded'}
respectively. It seems to me the location search is being dropped in some way, shape or form. I'm surely not the only one using tweepy's location search, so I think it's more likely a problem in my use of it than a bug in tweepy (I'm on 2.3.0), but my implementation looks right as far as I can tell.
My stream handling code is here:
consumer_key = 'stuff'
consumer_secret = 'stuff'
access_token = 'stuff'
access_token_secret_var = 'stuff'

import tweepy
import json

# This is the listener, responsible for receiving data
class StdOutListener(tweepy.StreamListener):
    def on_data(self, data):
        # Twitter returns data in JSON format - we need to decode it first
        decoded = json.loads(data)
        #print type(decoded), decoded
        # Also, we convert UTF-8 to ASCII ignoring all bad characters sent by users
        try:
            user, created = read_user(decoded)
            print "DEBUG USER", user, created
            if decoded['lang'] == 'en':
                tweet, created = read_tweet(decoded, user)
                print "DEBUG TWEET", tweet, created
            else:
                pass
        except KeyError, e:
            print "Error on Key", e
            pass
        except DataError, e:
            print "DataError", e
            pass
        #print user, created
        print ''
        return True

    def on_error(self, status):
        print status

l = StdOutListener()
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret_var)
stream = tweepy.Stream(auth, l)

# locations must be long, lat
stream.filter(locations=[-121.75, 36.8, -122.75, 37.8], track='twitter')
The issue here was the order of the coordinates.
The correct format is: south-west corner (long, lat), then north-east corner (long, lat). I had them transposed. :(
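A quick way to sanity-check the ordering before calling tweepy (the values below are my own reading of the fix, i.e. the question's two longitudes swapped so the south-west corner comes first):

```python
# Bounding box as [sw_lng, sw_lat, ne_lng, ne_lat].
bounding_box = [-122.75, 36.8, -121.75, 37.8]
sw_lng, sw_lat, ne_lng, ne_lat = bounding_box

# The south-west corner must be numerically smaller on both axes.
assert sw_lng < ne_lng and sw_lat < ne_lat

# stream.filter(locations=bounding_box)  # tweepy call, shown for context only
```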
Also note that the streaming API doesn't allow filtering by location AND keyword simultaneously.
You can refer to this answer; I had the same problem earlier:
https://stackoverflow.com/a/22889470/4432830