RabbitMQ uses up all disk space on an EC2 machine - amazon-web-services

I have a single RabbitMQ instance running in Docker on an EC2 machine on AWS.
It somehow runs out of disk, and I don't quite understand how: I'm using it only to submit messages, and I haven't configured any persistence for them.
I've even added this to the config, but it was only a temporary fix:
loopback_users.guest = false
log.console = true
disk_free_limit.absolute = 2GB
After a few weeks of running I got this from my Python client:
Connection <Connection: "amqp://admin:******@rabbitmq:5672/" at 0x7ff757125d00> was blocked by: 'low on disk'
My usage is like so:
Publisher:
url = config.RABBITMQ_URL
self.connection = await connect(url)
async with self.connection:
    channel = await self.connection.channel()
    exchange = await channel.declare_exchange("quotes", ExchangeType.FANOUT)
    message_body = json.dumps(msg).encode()
    message = Message(message_body, delivery_mode=DeliveryMode.NOT_PERSISTENT, expiration=300)
    await exchange.publish(message, routing_key="quote")
    logger.debug(f"Message sent: {message.body}")
Subscriber:
async def on_message(message: AbstractIncomingMessage):
    async with message.process():
        decoded_msg = message.body.decode()
        quote: Quote = json.loads(decoded_msg)
        strategy: Strategy
        for strategy in strategies:
            if strategy.symbol == quote["symbol"]:
                await strategy(quote)

async def main():
    connection = await connect(url=config.RABBITMQ_URL)
    async with connection:
        channel = await connection.channel()
        await channel.set_qos(prefetch_count=1)
        exchange = await channel.declare_exchange("quotes", ExchangeType.FANOUT)
        queue = await channel.declare_queue(name="quotes")
        await queue.bind(exchange, routing_key="quote")
        await queue.consume(on_message)
        print(" [x] Waiting for quotes. To exit press CTRL+C")
        try:
            await asyncio.Future()
        finally:
            await connection.close()
This is how I start up the RabbitMQ container:
services:
  rabbitmq:
    image: rabbitmq:3-management
    container_name: rabbitmq
    restart: always
    ports:
      - "5672:5672"
      - "15672:15672"
    environment:
      RABBITMQ_DEFAULT_USER: ${RABBITMQ_USERNAME}
      RABBITMQ_DEFAULT_PASS: ${RABBITMQ_PASSWORD}
    volumes:
      - ./conf/:/etc/rabbitmq/conf.d/
    healthcheck:
      test: rabbitmq-diagnostics -q ping
      interval: 30s
      timeout: 30s
      retries: 3
I use the aio_pika Python library as the RabbitMQ client.
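One detail that may be relevant here: with per-message TTL (the expiration argument used in the publisher), RabbitMQ only discards an expired message once it reaches the head of its queue, so the "quotes" queue can still grow on disk while its consumer is down, even for non-persistent messages. A minimal sketch, assuming aio_pika's declare_queue arguments parameter, of declaring the queue with a queue-level TTL instead (the value is in milliseconds, mirroring the publisher's 300-second expiration):

# Hypothetical variant of the subscriber's declaration, not the original code:
# a queue-level TTL lets RabbitMQ expire the backlog in FIFO order.
queue = await channel.declare_queue(
    name="quotes",
    arguments={"x-message-ttl": 300_000},  # milliseconds
)

Note that re-declaring an existing queue with different arguments fails with PRECONDITION_FAILED, so the old queue would have to be deleted first.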

Related

How to connect to an Amazon Managed Blockchain network using hyperledger-fabric-nodesdk 2.2

I want to connect to a Hyperledger Fabric blockchain network on Amazon Managed Blockchain using the Node.js SDK.
The Fabric client is a Cloud9 instance, which is already set up and successfully connects to the peer node using the Fabric CLI inside a Docker container.
But when I try to use the Node.js SDK to connect to the network with this code:
'use strict';

const FabricCAServices = require('fabric-ca-client');
const { Wallets, Gateway, X509Identity, User } = require('fabric-network');
const fs = require('fs');
const path = require('path');
const yaml = require('js-yaml');

const ccpPath = path.resolve(__dirname, 'connection_profile.yaml');
const ccp = yaml.load(fs.readFileSync(ccpPath, 'utf8'));

async function main() {
    try {
        const walletPath = path.join(process.cwd(), 'wallet');
        const wallet = await Wallets.newFileSystemWallet(walletPath);
        const gateway = new Gateway();
        const gatewayOptions = { identity: 'admin', wallet: wallet, discovery: { enabled: true, asLocalhost: false } };
        await gateway.connect(ccp, gatewayOptions);
        const network = await gateway.getNetwork('mychannel');
    } catch (error) {
        console.error(`Some error is occurred: ${error.stack}`);
        process.exit(1);
    }
}

main();
The content of the "connection_profile.yaml" file is:
name: "ABC"
x-type: hlfv1
version: "1.0"
channels:
mychannel:
orderers:
- ABCOrderer
peers:
peer1:
endorsingPeer: true
chaincodeQuery: true
ledgerQuery: true
eventSource: true
organizations:
abc:
mspid: m-***
peers:
- peer1
certificateAuthorities:
- abc
orderers:
ABCOrderer:
url: grpcs://orderer.n-***.managedblockchain.ap-northeast-1.amazonaws.com:30001
grpcOptions:
ssl-target-name-override: orderer.n-***.managedblockchain.ap-northeast-1.amazonaws.com
tlsCACerts:
# path: /home/ec2-user/managedblockchain-tls-chain.pem
path: /home/ec2-user/admin-msp/admincerts/cert.pem
peers:
peer1:
url: grpcs://nd-***.managedblockchain.ap-northeast-1.amazonaws.com:30003
eventUrl: grpcs://nd-***.managedblockchain.ap-northeast-1.amazonaws.com:30004
grpcOptions:
ssl-target-name-override: nd-***.managedblockchain.ap-northeast-1.amazonaws.com
tlsCACerts:
# path: /home/ec2-user/managedblockchain-tls-chain.pem
path: /home/ec2-user/admin-msp/admincerts/cert.pem
certificateAuthorities:
abc:
url: https://ca.m-***.managedblockchain.ap-northeast-1.amazonaws.com:30002
httpOptions:
verify: true
tlsCACerts:
# path: /home/ec2-user/managedblockchain-tls-chain.pem
path: /home/ec2-user/admin-msp/admincerts/cert.pem
caName: m-***
"/home/ec2-user/admin-msp/admincerts/cert.pem" is file is created by enroll member admin identity (follow this aws guide: https://docs.aws.amazon.com/managed-blockchain/latest/hyperledger-fabric-dev/get-started-enroll-admin.html).
Then after 3s the console show this error:
2022-07-05T13:22:52.812Z - error: [ServiceEndpoint]: Error: Failed to connect before the deadline on Endorser- name: peer1, url:grpcs://nd-***.managedblockchain.ap-northeast-1.amazonaws.com:30003, connected:false, connectAttempted:true
2022-07-05T13:22:52.814Z - error: [ServiceEndpoint]: waitForReady - Failed to connect to remote gRPC server peer1 url:grpcs://nd-***.managedblockchain.ap-northeast-1.amazonaws.com:30003 timeout:3000
2022-07-05T13:22:52.814Z - info: [NetworkConfig]: buildPeer - Unable to connect to the endorser peer1 due to Error: Failed to connect before the deadline on Endorser- name: peer1, url:grpcs://nd-***.managedblockchain.ap-northeast-1.amazonaws.com:30003, connected:false, connectAttempted:true
2022-07-05T13:22:55.817Z - error: [ServiceEndpoint]: Error: Failed to connect before the deadline on Committer- name: ABCOrderer, url:grpcs://orderer.n-***.managedblockchain.ap-northeast-1.amazonaws.com:30001, connected:false, connectAttempted:true
2022-07-05T13:22:55.817Z - error: [ServiceEndpoint]: waitForReady - Failed to connect to remote gRPC server ABCOrderer url:grpcs://orderer.n-***.managedblockchain.ap-northeast-1.amazonaws.com:30001 timeout:3000
2022-07-05T13:22:55.818Z - info: [NetworkConfig]: buildOrderer - Unable to connect to the committer ABCOrderer due to Error: Failed to connect before the deadline on Committer- name: ABCOrderer, url:grpcs://orderer.n-***.managedblockchain.ap-northeast-1.amazonaws.com:30001, connected:false, connectAttempted:true
Some error is occurred: TypeError: Cannot read property 'toArray' of null
at EC.sign (/home/ec2-user/src-test/node_modules/elliptic/lib/elliptic/ec/index.js:104:30)
at CryptoSuite_ECDSA_AES.sign (/home/ec2-user/src-test/node_modules/fabric-common/lib/impl/CryptoSuite_ECDSA_AES.js:215:25)
at Signer.sign (/home/ec2-user/src-test/node_modules/fabric-common/lib/Signer.js:59:28)
at SigningIdentity.sign (/home/ec2-user/src-test/node_modules/fabric-common/lib/SigningIdentity.js:71:23)
at IdentityContext.sign (/home/ec2-user/src-test/node_modules/fabric-common/lib/IdentityContext.js:91:40)
at DiscoveryService.sign (/home/ec2-user/src-test/node_modules/fabric-common/lib/ServiceAction.js:69:40)
at NetworkImpl._initializeInternalChannel (/home/ec2-user/src-test/node_modules/fabric-network/lib/network.js:298:35)
at NetworkImpl._initialize (/home/ec2-user/src-test/node_modules/fabric-network/lib/network.js:250:20)
at Gateway.getNetwork (/home/ec2-user/src-test/node_modules/fabric-network/lib/gateway.js:350:26)
at main (/home/ec2-user/src-test/enrollUser.js:38:35)
So I think the problem is probably my connection_profile settings, which mimic the connection-profile-template.yaml file from the AWS blockchain samples code (https://github.com/aws-samples/non-profit-blockchain/tree/master/ngo-lambda):
# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License.
# A copy of the License is located at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# or in the "license" file accompanying this file. This file is distributed
# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied. See the License for the specific language governing
# permissions and limitations under the License.
name: "ngo"
x-type: "hlfv1"
description: "NGO Network"
version: "1.0"
channels:
  mychannel:
    orderers:
      - orderer.com
    peers:
      peer1:
        endorsingPeer: true
        chaincodeQuery: true
        ledgerQuery: true
        eventSource: true
organizations:
  Org1:
    mspid: %MEMBERID%
    peers:
      - peer1
    certificateAuthorities:
      - ca-org1
orderers:
  orderer.com:
    url: grpcs://%ORDERINGSERVICEENDPOINT%
    grpcOptions:
      ssl-target-name-override: %ORDERINGSERVICEENDPOINTNOPORT%
    tlsCACerts:
      path: %CAFILE%
peers:
  peer1:
    url: grpcs://%PEERSERVICEENDPOINT%
    eventUrl: grpcs://%PEEREVENTENDPOINT%
    grpcOptions:
      ssl-target-name-override: %PEERSERVICEENDPOINTNOPORT%
    tlsCACerts:
      path: %CAFILE%
certificateAuthorities:
  ca-org1:
    url: https://%CASERVICEENDPOINT%
    httpOptions:
      verify: false
    tlsCACerts:
      path: %CAFILE%
    caName: %MEMBERID%
So, any idea or suggestion on how I can fix it? Any help would be appreciated.
Thank you!
Update 1:
I tried both the key file from AWS S3 (managedblockchain-tls-chain.pem) and the key created by the CA for the admin, but neither seems to work. Here is the error when I try with the key file from S3:
Some error is occurred: TypeError: Cannot read property 'toArray' of null
at EC.sign (/home/ec2-user/src-test/node_modules/elliptic/lib/elliptic/ec/index.js:104:30)
at CryptoSuite_ECDSA_AES.sign (/home/ec2-user/src-test/node_modules/fabric-common/lib/impl/CryptoSuite_ECDSA_AES.js:215:25)
at Signer.sign (/home/ec2-user/src-test/node_modules/fabric-common/lib/Signer.js:59:28)
at SigningIdentity.sign (/home/ec2-user/src-test/node_modules/fabric-common/lib/SigningIdentity.js:71:23)
at IdentityContext.sign (/home/ec2-user/src-test/node_modules/fabric-common/lib/IdentityContext.js:91:40)
at DiscoveryService.sign (/home/ec2-user/src-test/node_modules/fabric-common/lib/ServiceAction.js:69:40)
at NetworkImpl._initializeInternalChannel (/home/ec2-user/src-test/node_modules/fabric-network/lib/network.js:298:35)
at NetworkImpl._initialize (/home/ec2-user/src-test/node_modules/fabric-network/lib/network.js:250:20)
at Gateway.getNetwork (/home/ec2-user/src-test/node_modules/fabric-network/lib/gateway.js:350:26)
at main (/home/ec2-user/src-test/enrollUser.js:38:35)
Update 2:
Maybe the problem is the admin identity inside my wallet, so I updated the code to save the admin identity to the wallet:
const caURL = ccp.certificateAuthorities['abc'].url;
const ca = new FabricCAServices(caURL);
const enrollment = await ca.enroll({ enrollmentID: 'admin', enrollmentSecret: 'Adminpassword' });

const X509Identity = {
    credentials: {
        certificate: enrollment.certificate,
        privateKey: enrollment.rootCertificate,
    },
    mspId: ccp.organizations['abc'].mspid,
    type: 'X.509',
};

// Create a new file system based wallet for managing identities.
const walletPath = path.join(process.cwd(), 'wallet');
const wallet = await Wallets.newFileSystemWallet(walletPath);
await wallet.put('admin', X509Identity);
Update 3:
As @david_k suggests, the problem is that the identity inside my wallet is wrong, and as a result it is rejected by the gateway. The privateKey line in Update 2 needs to be changed from privateKey: enrollment.rootCertificate, to privateKey: enrollment.key.toBytes(), as in the snippet below.
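For reference, here is the wallet code from Update 2 with that one line fixed; everything else is unchanged from the question:

const enrollment = await ca.enroll({ enrollmentID: 'admin', enrollmentSecret: 'Adminpassword' });

const X509Identity = {
    credentials: {
        certificate: enrollment.certificate,
        // The fix: store the enrollment's private signing key,
        // not the root certificate.
        privateKey: enrollment.key.toBytes(),
    },
    mspId: ccp.organizations['abc'].mspid,
    type: 'X.509',
};
await wallet.put('admin', X509Identity);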
Thank you very much, @david_k!

Set Alertmanager to distribute alerts to different channels by job name

I want to send my alerts to two different distribution lists in Alertmanager for Prometheus. The only way to distinguish my alerts is by their job name.
My alerts look like the below:
sample1:
Labels:
  alertname = SyslogErrors
  instance = 22.32.23.32:2324
  job = my-job-sample-service-dev
  message = Exception raised during message subscription. Trying again in 60 seconds
  monitor = server1
  severity = critical
Annotations:
  description = Errors have been found for my-job-sample-service-dev application in /data/logs/messages/my-job-sample-service-dev syslog file
sample2:
Labels:
  alertname = SyslogErrors
  instance = 22.32.23.32:2324
  job = my-job-sample-service-pre-dev
  message = Exception raised during message subscription. Trying again in 60 seconds
  monitor = server1
  severity = critical
Annotations:
  description = Errors have been found for my-job-sample-service-pre-dev application in /data/logs/messages/my-job-sample-service-pre-dev syslog file
Here is my sample Alertmanager config file:
global:
  smtp_smarthost: 'mail.server.com:25'
  smtp_from: 'dev@server.com'
  smtp_require_tls: false

templates:
  - '/etc/alertmanager/template/*.tmpl'

route:
  receiver: mail-receiver-dev
  group_by: ['alertname']
  group_wait: 3s
  group_interval: 5s
  repeat_interval: 1h
  # All alerts that do not match the following child routes
  # will remain at the root node and be dispatched to 'default-receiver'.
  routes:
    - receiver: 'mail-pre-dev'
      group_wait: 10s
      match_re:
        - job = .*pre-dev.*
    - receiver: 'mail-dev'
      group_wait: 10s
      match_re:
        - job = .*dev.*

receivers:
  - name: 'mail-dev'
    email_configs:
      - to: 'dev-group@server.com'
        send_resolved: true
  - name: 'mail-pre-dev'
    email_configs:
      - to: 'pre-dev-group@server.com'
        send_resolved: true
I am using the below link as a reference:
reference
Testing config file link
Test script for using the above link: {service="foo-service",severity="critical",job="my-job-sample-service-dev"}
So the question is: how do I send alerts to different channels by using a regex on the job name? At the moment, when I test, all the alerts go to pre-dev.
Change the following:
match_re:
  - job = .*pre-dev.*
To:
matchers:
  - job =~ ".*pre-dev.*"
Note:
match_re is deprecated and must be replaced by matchers, but if you still want to use it, the correct syntax is a mapping from label name to regex:
match_re:
  job: ".*pre-dev.*"

Spring Boot Application failed on start when using @SqsListener

I'm trying to implement an SQS listener in my Spring Boot app (Kotlin), using spring-cloud-aws-messaging. Here is an article that walked me through the implementation.
Problem: the application is unable to start.
The log says:
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'simpleMessageListenerContainer' defined in class path resource [org/springframework/cloud/aws/messaging/config/annotation/SqsConfiguration.class]: Invocation of init method failed; nested exception is com.amazonaws.services.sqs.model.AmazonSQSException: null (Service: AmazonSQS; Status Code: 500; Error Code: 500 ; Request ID: null)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1803)
...
Kindly asking for help :)
docker-compose.yml
version: "2"
services:
localstack:
image: localstack/localstack:0.11.5
ports:
- "8085:8080"
- "4569-4576:4569-4576"
environment:
- SERVICES=sqs:4573
- DOCKER_HOST=unix:///var/run/docker.sock
- DEFAULT_REGION=eu-west-1
- DATA_DIR=/tmp/localstack/data
volumes:
- ./docker/dev/localstack-init-scripts:/docker-entrypoint-initaws.d
docker-entrypoint-initaws.sh
#!/usr/bin/env bash
set -x
awslocal sqs create-queue --queue-name my-queue-name
set +x
InfrastructureConfiguration.kt
@EnableSqs
class InfrastructureConfiguration(...) {

    @Primary
    @Bean
    fun amazonSQSAsync(): AmazonSQSAsync {
        val credentials: AWSCredentials = BasicAWSCredentials("accessKey", "secretKey")
        return AmazonSQSAsyncClientBuilder
            .standard()
            .withEndpointConfiguration(
                AwsClientBuilder.EndpointConfiguration(
                    "http://localstack:4573",
                    Regions.fromName("eu-west-1").toString()
                )
            ).withCredentials(AWSStaticCredentialsProvider(credentials)).build()
    }

    @Primary
    @Bean
    fun simpleMessageListenerContainerFactory(amazonSQSAsync: AmazonSQSAsync):
            SimpleMessageListenerContainerFactory {
        val factory = SimpleMessageListenerContainerFactory()
        factory.setAmazonSqs(amazonSQSAsync)
        factory.setAutoStartup(false)
        factory.setMaxNumberOfMessages(10)
        factory.setWaitTimeOut(20)
        return factory
    }
}
AmazonSqs.kt
val logger = KotlinLogging.logger {}

@Lazy
@Component
class AmazonSqs {

    companion object {
        const val QUEUE_NAME = "my-queue-name"
    }

    @SqsListener(QUEUE_NAME)
    fun receiveMessage(message: String?) {
        logger.info("Received message: {}", message)
    }
}
The problem lies in LocalStack. I ran docker-compose up again and noticed that port 4573 is deprecated:
localstack_1 | Starting edge router (https port 4566)...
localstack_1 | Starting mock SQS service on http ports 4566 (recommended) and 4573 (deprecated)...
The solution is to expose port 4566 in docker-compose and use it instead:
ports:
  - "8085:8080"
  - "4566-4576:4566-4576"

Channels websocket AsyncJsonWebsocketConsumer disconnect not reached

I have the following consumer:
class ChatConsumer(AsyncJsonWebsocketConsumer):
    pusher = None

    async def connect(self):
        print(self.scope)
        ip = self.scope['client'][0]
        print(ip)
        self.pusher = await self.get_pusher(ip)
        print(self.pusher)
        await self.accept()

    async def disconnect(self, event):
        print("closed connection")
        print("Close code = ", event)
        await self.close()
        raise StopConsumer

    async def receive_json(self, content):
        # print(content)
        if 'categoryfunctionname' in content:
            await cellvoltage(self.pusher, content)
        else:
            print("ERROR: Wrong data packets send")
            print(content)

    @database_sync_to_async
    def get_pusher(self, ip):
        p = Pusher.objects.get(auto_id=1)
        try:
            p = Pusher.objects.get(ip=ip)
        except ObjectDoesNotExist:
            print("no pusher found")
        finally:
            return p
Connecting, receiving, and even fetching data asynchronously from the database all work perfectly. Only disconnecting does not work as expected. The following terminal log shows what's going on:
2018-09-19 07:09:56,653 - INFO - server - HTTP/2 support not enabled (install the http2 and tls Twisted extras)
2018-09-19 07:09:56,653 - INFO - server - Configuring endpoint tcp:port=1111:interface=192.168.1.111
2018-09-19 07:09:56,653 - INFO - server - Listening on TCP address 192.168.1.111:1111
[2018/09/19 07:11:25] HTTP GET / 302 [0.02, 10.171.253.112:35236]
[2018/09/19 07:11:25] HTTP GET /login/?next=/ 200 [0.05, 10.111.253.112:35236]
{'type': 'websocket', 'path': '/ws/chat/RP1/', 'headers': [(b'upgrade', b'websocket'), (b'connection', b'Upgrade'), (b'host', b'10.111.111.112:1111'), (b'origin', b'http://10.111.253.112:1111'), (b'sec-websocket-key', b'vKFAnqaRMm84AGUCxbAm3g=='), (b'sec-websocket-version', b'13')], 'query_string': b'', 'client': ['10.111.253.112', 35238], 'server': ['192.168.1.111', 1111], 'subprotocols': [], 'cookies': {}, 'session': <django.utils.functional.LazyObject object at 0x7fe4a8d1ba20>, 'user': <django.utils.functional.LazyObject object at 0x7fe4a8d1b9e8>, 'path_remaining': '', 'url_route': {'args': (), 'kwargs': {'room_name': 'RP1'}}}
10.111.253.112
[2018/09/19 07:11:25] WebSocket HANDSHAKING /ws/chat/RP1/ [10.111.253.112:35238]
[2018/09/19 07:11:25] WebSocket CONNECT /ws/chat/RP1/ [10.111.111.112:35238]
no pusher found
1 - DEFAULT - 0.0.0.0
ERROR: Wrong data packets send
{'hello': 'Did I achieve my objective?'}
[2018/09/19 07:11:46] WebSocket DISCONNECT /ws/chat/RP1/ [10.111.253.112:35238]
2018-09-19 07:11:56,792 - WARNING - server - Application instance <Task pending coro=<SessionMiddlewareInstance.__call__() running at /home/pi/PycharmProjects/LOGGER/venv/lib/python3.6/site-packages/channels/sessions.py:175> wait_for=<Future pending cb=[_chain_future.<locals>._call_check_cancel() at /usr/lib/python3.6/asyncio/futures.py:403, <TaskWakeupMethWrapper object at 0x7fe4a82e6fd8>()]>> for connection <WebSocketProtocol client=['10.171.253.112', 35238] path=b'/ws/chat/RP1/'> took too long to shut down and was killed.
After the 10-second timeout it gives a warning that the connection was killed:
WARNING - server - Application instance taskname running at location at linenumber for connection cxn-name took too long to shut down and was killed.
The disconnect method was thus never reached.
What could this be?
Am I using the correct method?
Could I extend the timeout period?
If you intend to run some custom logic during connection close, you should override websocket_disconnect and then call super (rather than raising the exception yourself):
https://github.com/django/channels/blob/507cb54fcb36df63282dd19653ea743036e7d63c/channels/generic/websocket.py#L228-L241
The code linked in the other answer was very helpful. Here it is for reference (as of May 2022):
async def websocket_disconnect(self, message):
    """
    Called when a WebSocket connection is closed. Base level so you don't
    need to call super() all the time.
    """
    try:
        for group in self.groups:
            await self.channel_layer.group_discard(group, self.channel_name)
    except AttributeError:
        raise InvalidChannelLayerError(
            "BACKEND is unconfigured or doesn't support groups"
        )
    await self.disconnect(message["code"])
    raise StopConsumer()

async def disconnect(self, code):
    """
    Called when a WebSocket connection is closed.
    """
    pass
I'd note that there is no need to call super() on websocket_disconnect. There is a disconnect hook, perhaps added since the original answer, that can be used for any teardown code. You can simply override the disconnect method in your own class and it will be called.
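Applied to the consumer from the question, the override then shrinks to a plain teardown hook. A minimal sketch (no explicit close() and no StopConsumer, since the base websocket_disconnect shown above handles both):

class ChatConsumer(AsyncJsonWebsocketConsumer):

    async def disconnect(self, code):
        # Teardown only: the base class discards group memberships,
        # calls this hook, and raises StopConsumer itself.
        print("closed connection, close code =", code)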

Django channels redis channel layer opens a lot of connections

We recently ported part of our application to Django Channels, using the Redis channel layer backend. Part of our setup still runs on Python 2 in a Docker container, which is why we use Redis pub/sub to send messages back to the client. A global listener (inspired by this thread) catches all messages and distributes them into the Django Channels system. It all works fine so far, but I see a lot of Creating tcp connection... debug messages passing by. The output posted below corresponds to a single event. Both the listener and the consumer seem to be creating two Redis connections each. I don't know enough about the underlying mechanism to tell whether this is the expected behavior, hence my asking here. Is this to be expected?
The Listener uses a global channel layer instance:
# maps the publish type to a method name of the django channel consumer
PUBLISH_TYPE_EVENT_MAP = {
    'state_change': 'update_client_state',
    'message': 'notify_client',
}

channel_layer = layers.get_channel_layer()

class Command(BaseCommand):
    help = u'Opens a connection to Redis and listens for messages, ' \
           u'and then whenever it gets one, sends the message onto a channel ' \
           u'in the Django channel system'
    ...

    def broadcast_message(self, msg_body):
        group_name = msg_body['subgrid_id'].replace(':', '_')
        try:
            event_name = PUBLISH_TYPE_EVENT_MAP[msg_body['publish_type']]
        # backwards compatibility
        except KeyError:
            event_name = PUBLISH_TYPE_EVENT_MAP[msg_body['type']]
        async_to_sync(channel_layer.group_send)(
            group_name, {
                "type": event_name,
                "kwargs": msg_body.get('kwargs'),
            })
The consumer is a JsonWebsocketConsumer that is initialized like this:
class SimulationConsumer(JsonWebsocketConsumer):

    def connect(self):
        """
        Establishes the connection with the websocket.
        """
        logger.debug('Incoming connection...')
        # subgrid_id can be set dynamically, see property subgrid_id
        self._subgrid_id = self.scope['url_route']['kwargs']['subgrid_id']
        async_to_sync(self.channel_layer.group_add)(
            self.group_name,
            self.channel_name
        )
        self.accept()
And the method that is called from the listener:
def update_client_state(self, event):
    """
    Public facing method that pushes the state of a simulation back
    to the client(s). Has to be called through django channels'
    ```async_to_sync(channel_layer.group_send)``` method
    """
    logger.debug('update_client_state event %s', event)
    current_state = self.redis_controller.fetch_state()
    data = {'sim_state': {
        'sender_sessid': self.session_id,
        'state_data': current_state}
    }
    self.send_json(content=data)
A single event gives me this output:
listener_1 | DEBUG !! data {'publish_type': 'state_change', 'subgrid_id': 'subgrid:6d1624b07e1346d5907bbd72869c00e8'}
listener_1 | DEBUG !! event_name update_client_state
listener_1 | DEBUG !! kwargs None
listener_1 | DEBUG !! group_name subgrid_6d1624b07e1346d5907bbd72869c00e8
listener_1 | DEBUG Using selector: EpollSelector
listener_1 | DEBUG Parsing Redis URI 'redis://:@redis-threedi-server:6379/13'
listener_1 | DEBUG Creating tcp connection to ('redis-threedi-server', 6379)
listener_1 | DEBUG Parsing Redis URI 'redis://:@redis-threedi-server:6379/13'
listener_1 | DEBUG Creating tcp connection to ('redis-threedi-server', 6379)
threedi-server_1 | DEBUG Parsing Redis URI 'redis://:@redis-threedi-server:6379/13'
threedi-server_1 | DEBUG Creating tcp connection to ('redis-threedi-server', 6379)
threedi-server_1 | DEBUG update_client_state event {'type': 'update_client_state', 'kwargs': None}
threedi-server_1 | DEBUG Parsing Redis URI 'redis://:@redis-threedi-server:6379/13'
threedi-server_1 | DEBUG Creating tcp connection to ('redis-threedi-server', 6379)