boto.sqs connect to non-aws endpoint

I currently need to connect to a fake_sqs server for dev purposes, but I can't find an easy way to specify the endpoint for the boto.sqs connection. In Java and Node.js there are ways to specify the queue endpoint, and by passing something like 'localhost:someport' I can connect to my own SQS-like instance. I've tried the following with boto:
fake_region = regioninfo.SQSRegionInfo(name=name, endpoint=endpoint)
conn = fake_region.connect(aws_access_key_id="TEST", aws_secret_access_key="TEST", port=9324, is_secure=False);
and then:
queue = connAmazon.get_queue('some_queue')
but it fails to retrieve the queue object; it returns None. Has anyone managed to connect to their own SQS instance?

Here's how to create an SQS connection that connects to fake_sqs:
import boto.sqs.connection
import boto.sqs.regioninfo

region = boto.sqs.regioninfo.SQSRegionInfo(
    connection=None,
    name='fake_sqs',
    endpoint='localhost',  # or wherever fake_sqs is running
    connection_cls=boto.sqs.connection.SQSConnection,
)
conn = boto.sqs.connection.SQSConnection(
    aws_access_key_id='fake_key',
    aws_secret_access_key='fake_secret',
    is_secure=False,
    port=4568,  # or wherever fake_sqs is running
    region=region,
)
region.connection = conn
# you can now work with conn
# conn.create_queue('test_queue')
Be aware that, at the time of this writing, the fake_sqs library does not respond correctly to GET requests, which is how boto makes many of its requests. You can install a fork that has patched this functionality here: https://github.com/adammck/fake_sqs
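For completeness, a minimal usage sketch with the conn object above; the queue name and message body are made up for illustration:

# assumes fake_sqs is running and conn was built as shown above
queue = conn.create_queue('test_queue')
queue.write(queue.new_message('hello from boto'))
print(conn.get_all_queues())           # should include test_queue
print(conn.get_queue('test_queue'))    # returns None only if the queue does not exist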

Related

Listen to mqtt topics with django channels and celery

I would like a way to integrate Django with MQTT, and the first thing that came to mind was using django-channels and an MQTT broker that supports MQTT over WebSockets, so I could communicate directly between the broker and django-channels.
However, I did not find a way to start a WebSocket client from Django, and according to this link it's not possible.
Since I'm also starting to study task queues, I wonder if it would be good practice to start an MQTT client using paho-mqtt and run it in a separate process using Celery. This process would forward the messages received from the broker to django-channels through WebSockets; that way I could also communicate with the client process to publish data or stop the MQTT client when needed, all directly from Django.
I'm a little skeptical about this idea, since I've also read that tasks run in Celery should not take too long to complete, and in this case that's exactly what I want.
So my question is: how bad an idea is that? Is there any other option to integrate Django directly with MQTT?
*Note: I don't want a separate process running on the server; I want to be able to start and stop the process from Django, in order to have full control over the MQTT client from the web GUI.
I found a better way that does not need Celery.
I simply start an MQTT client in app/apps.py in the ready method, so a client is started every time I run the application. From there I can communicate with other parts of the system using django-channels or signals.
apps.py:
from django.apps import AppConfig
from threading import Thread

import paho.mqtt.client as mqtt


class MqttClient(Thread):
    def __init__(self, broker, port, timeout, topics):
        super(MqttClient, self).__init__()
        self.client = mqtt.Client()
        self.broker = broker
        self.port = port
        self.timeout = timeout
        self.topics = topics
        self.total_messages = 0

    # run method override from Thread class
    def run(self):
        self.connect_to_broker()

    def connect_to_broker(self):
        self.client.on_connect = self.on_connect
        self.client.on_message = self.on_message
        self.client.connect(self.broker, self.port, self.timeout)
        self.client.loop_forever()

    # The callback for when a PUBLISH message is received from the server.
    def on_message(self, client, userdata, msg):
        self.total_messages = self.total_messages + 1
        print(str(msg.payload) + " Total: {}".format(self.total_messages))

    # The callback for when the client receives a CONNACK response from the server.
    def on_connect(self, client, userdata, flags, rc):
        # Subscribe to the list of topics
        for topic in self.topics:
            client.subscribe(topic)


class CoreConfig(AppConfig):
    default_auto_field = 'django.db.models.BigAutoField'
    name = 'core'

    def ready(self):
        MqttClient("192.168.0.165", 1883, 60, ["teste/01"]).start()
If you are using ASGI in your Django application you can use MQTTAsgi. Full disclosure: I'm the author of MQTTAsgi.
It's a complete protocol server for Django and MQTT.
To utilize the MQTT protocol server, first you need to create an MQTT consumer:
from mqttasgi.consumers import MqttConsumer

class MyMqttConsumer(MqttConsumer):
    async def connect(self):
        await self.subscribe('my/testing/topic', 2)

    async def receive(self, mqtt_message):
        print('Received a message at topic:', mqtt_message['topic'])
        print('With payload', mqtt_message['payload'])
        print('And QOS:', mqtt_message['qos'])

    async def disconnect(self):
        await self.unsubscribe('my/testing/topic')
Then you should add this protocol to the protocol router:
application = ProtocolTypeRouter({
    'websocket': AllowedHostsOriginValidator(URLRouter([
        url('.*', WebsocketConsumer)
    ])),
    'mqtt': MyMqttConsumer,
    ....
})
Then you can run the MQTT protocol server with*:
mqttasgi -H localhost -p 1883 my_application.asgi:application
*Assuming the broker is in localhost and port 1883.
I wanted to solve this problem too, but found no good solutions out there that really fit the Channels architecture (MQTTAsgi came close, but it uses paho-mqtt and doesn't fully use the Channels layer system).
I created: https://pypi.org/project/chanmqttproxy/
(src at https://github.com/lbt/channels-mqtt-proxy)
Essentially it's a fully async Channels 3 proxy to MQTT that allows publishing and subscribing. The documentation shows how to extend the standard Channels tutorial so chat messages are seen on MQTT topics, and can be sent from MQTT topics to all WebSocket browser clients.
I don't know if this is what the OP wants as far as listening to MQTT topics goes, but for the general case I think this is a good solution.

AWS DocumentDB connection problem with TLS

When TLS is disabled, I can connect successfully through my lambda function using the same code as shown here - https://docs.aws.amazon.com/documentdb/latest/developerguide/connect.html#w139aac29c11c13b5b7
However, when I enable TLS and use the TLS-enabled code sample from the above link, my lambda function times out. I've downloaded the RDS combined CA PEM file through wget, and I am deploying the PEM file along with my code to AWS Lambda.
This is the code where my execution stops and times out:
caFilePath := "rds-combined-ca-bundle.pem"
var connectionStringTemplate = "mongodb://%s:%s@%s:27017/dbname?ssl=true&sslcertificateauthorityfile=%s"
var connectionURI = fmt.Sprintf(connectionStringTemplate, secret["username"], secret["password"], secret["host"], caFilePath)
fmt.Println("Connection String", connectionURI)

client, err := mongo.NewClient(options.Client().ApplyURI(connectionURI))
if err != nil {
    log.Fatalf("Failed to create client: %v", err)
}
I don't see any errors in the cloudwatch logs after the "Connection string" print.
I suspect it's an issue with your VPC design.
See "Connecting to an Amazon DocumentDB Cluster from Outside an Amazon VPC";
check the last paragraph:
https://docs.aws.amazon.com/documentdb/latest/developerguide/connect-from-outside-a-vpc.html
Also, the link below gives detailed instructions:
https://blog.webiny.com/connecting-to-aws-documentdb-from-a-lambda-function-2b666c9e4402
Can you try creating a Lambda test function using Python and see if you're having the issue?
import pymongo
import sys
##Create a MongoDB client, open a connection to Amazon DocumentDB as a replica set and specify the read preference as secondary preferred
client = pymongo.MongoClient('mongodb://<dbusername>:<dbpassword>@mycluster.node.us-east-1.docdb.amazonaws.com:27017/?ssl=true&ssl_ca_certs=rds-combined-ca-bundle.pem&replicaSet=rs0&readPreference=secondaryPreferred')
##Specify the database to be used
db = client.test
##Specify the collection to be used
col = db.myTestCollection
##Insert a single document
col.insert_one({'hello':'Amazon DocumentDB'})
##Find the document that was previously written
x = col.find_one({'hello':'Amazon DocumentDB'})
##Print the result to the screen
print(x)
##Close the connection
client.close()

Cassandra python driver: Client request timeout

I set up a simple script to insert a new record into a Cassandra database. It works fine on my local machine, but I am getting timeout errors from the client when I move the database to a remote machine. How do I properly set the timeout for this driver? I have tried many things. I hacked the timeout in my IDE and got it to work without timing out, so I know for sure it's just a timeout problem.
How I setup my Cluster:
profile = ExecutionProfile(request_timeout=100000)
self.cluster = Cluster([os.getenv('CASSANDRA_NODES', None)], auth_provider=auth_provider,
                       execution_profiles={EXEC_PROFILE_DEFAULT: profile})
connection.setup(hosts=[os.getenv('CASSANDRA_SEED', None)],
                 default_keyspace=os.getenv('KEYSPACE', None),
                 consistency=int(os.getenv('CASSANDRA_SESSION_CONSISTENCY', 1)),
                 auth_provider=auth_provider,
                 connect_timeout=200)
session = self.cluster.connect()
The query I am trying to perform:
model = Model.create(buffer=_buffer, lock=False, version=self.version)
The error I get:
13..': 'Client request timeout. See Session.execute_async'}, last_host=54.213..
The record I'm inserting is 11 MB, so I can understand there being a delay; just increasing the timeout should do it, but I can't seem to figure out how.
The default request timeout is an attribute of the Session object (version 2.0.0 of the driver and later).
session = cluster.connect(keyspace)
session.default_timeout = 60
This is the simplest answer (no need to mess about with an execution profile), and I have confirmed that it works.
https://datastax.github.io/python-driver/api/cassandra/cluster.html#cassandra.cluster.Session
You can set request_timeout in the Cluster constructor:
self.cluster = Cluster([os.getenv('CASSANDRA_NODES', None)],
                       auth_provider=auth_provider,
                       execution_profiles={EXEC_PROFILE_DEFAULT: profile},
                       request_timeout=10)
Reference: https://datastax.github.io/python-driver/api/cassandra/cluster.html
Based on the documentation, request_timeout is an attribute of the ExecutionProfile class, and you can pass execution profiles to the Cluster constructor (as in the example below).
So, you can do:
import os

from cassandra.cluster import Cluster
from cassandra.cluster import ExecutionProfile

execution_profile = ExecutionProfile(request_timeout=600)
profiles = {'node1': execution_profile}
cluster = Cluster([os.getenv('CASSANDRA_NODES', None)], execution_profiles=profiles)
session = cluster.connect()
session.execute('SELECT * FROM test', execution_profile='node1')
Important: when you use execute or execute_async, you have to specify the execution_profile name.
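If you register the profile under the driver's default key instead, you don't have to name it on every call. A minimal sketch based on the question's own setup, using EXEC_PROFILE_DEFAULT from cassandra.cluster:

import os

from cassandra.cluster import Cluster, ExecutionProfile, EXEC_PROFILE_DEFAULT

# Registering the profile under EXEC_PROFILE_DEFAULT makes its
# request_timeout apply to every execute()/execute_async() call.
profile = ExecutionProfile(request_timeout=600)
cluster = Cluster([os.getenv('CASSANDRA_NODES', None)],
                  execution_profiles={EXEC_PROFILE_DEFAULT: profile})
session = cluster.connect()
session.execute('SELECT * FROM test')  # no execution_profile argument needed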

Celery, mechanize and socks proxy

I'm working on a project that needs to access a webpage using mechanize with a socks proxy. After digging a bit, I came up with the following code:
import socket

import socks  # PySocks

def create_connection(address, timeout=None, source_address=None):
    sock = socks.socksocket()
    sock.connect(address)
    return sock

CRAWLER_SOCKS_PROXY_HOST = '0.0.0.0'
CRAWLER_SOCKS_PROXY_PORT = 1080

socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, CRAWLER_SOCKS_PROXY_HOST, CRAWLER_SOCKS_PROXY_PORT)
socket.socket = socks.socksocket
socket.create_connection = create_connection
This indeed allows me to access the webpage through the SOCKS proxy I created with ssh -f -N -D 1080 user@host.
After doing that, I realized that Celery couldn't connect to my Redis broker, giving Connection closed unexpectedly errors, so I killed the ssh process and confirmed that the SOCKS proxy configuration was the culprit. The error is: Cannot connect to redis://127.0.0.1:6379//: Error connecting to SOCKS5 proxy 0.0.0.0:1080: [Errno 111] Connection refused.
So, my question is: is there a way to set a SOCKS proxy for mechanize without affecting the other parts of the code? I suspect that if I try to use the requests module, it will also use the proxy, which is not my intention. I just want the proxy for a specific call.
I solved this by putting the

CRAWLER_SOCKS_PROXY_HOST = '0.0.0.0'
CRAWLER_SOCKS_PROXY_PORT = 1080

socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, CRAWLER_SOCKS_PROXY_HOST, CRAWLER_SOCKS_PROXY_PORT)
socket.socket = socks.socksocket
socket.create_connection = create_connection

lines inside the function that needs to make the call through the SOCKS proxy, rather than in the global scope of the module. This way it seems Celery can connect to the broker (and also reconnect after quitting and relaunching).
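For illustration, a sketch of that scoping with one extra step that is not in the original answer: keeping references to the real socket objects and restoring them afterwards, so only the mechanize call goes through the SOCKS proxy. The function name and URL handling are made up:

import socket

import mechanize
import socks

def fetch_through_socks(url, proxy_host='0.0.0.0', proxy_port=1080):
    # Keep references to the real implementations so they can be restored.
    original_socket = socket.socket
    original_create_connection = socket.create_connection
    socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, proxy_host, proxy_port)
    socket.socket = socks.socksocket
    socket.create_connection = create_connection  # helper defined in the question
    try:
        browser = mechanize.Browser()
        return browser.open(url).read()
    finally:
        # Undo the monkey-patching so Celery's Redis connections are untouched.
        socket.socket = original_socket
        socket.create_connection = original_create_connection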

how to get nova client (v1.1) to use ssh tunnel when retrieving server list

The OpenStack nova client is giving me fits. I can't figure out how to get it to use a local SSH tunnel URL I specify instead of the one it retrieves. So:
from novaclient.v1_1 import client as nova_client
from pprint import pprint

self.__nova_client = nova_client.Client(
    'myusername',
    'mypassword',
    'mytenantname',
    'https://localhost:5443/v2.0',
    service_type='compute',
    insecure=True
)

for server in self.__nova_client.servers.list():
    pprint(server)
yields...
requests.exceptions.ConnectionError: HTTPConnectionPool(host='os-compute.vip.mysubdomain.mydomain.com', port=8774): Max retries exceeded with url: /v2/aa0dffecaef543aca072a26fdff5c92b/servers/detail (Caused by <class 'socket.error'>: [Errno 111] Connection refused)
because the "os-compute.vip.mysubdomain.mydomain.com:8774" address is unreachable from where the script is running.
The self.__nova_client = nova_client.Client() bit connects fine because it uses 'https://localhost:5443/v2.0', the established tunnel I provide. I just need a way to override the "os-compute.vip.mysubdomain.mydomain.com:8774" endpoint that it's trying to connect to with a "localhost:8774" tunnel that I set up, but I can't figure out whether/how that's possible.
Any guidance will be greatly appreciated.
Your nova client is pulling the service catalogue from Keystone through the tunnel set up on your localhost. You will need to explicitly override the endpoint specified in the service catalogue.
One way is to explicitly specify the endpoint. While some of the clients allow you to specify the endpoint directly on construction, novaclient doesn't; take a look at nova_client.management_url after you've constructed the object and replace it with your localhost address.
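A minimal sketch of that override; the authenticate() call and the management_url location on the nested HTTP client are assumptions about older novaclient internals, and the rewritten hostname is illustrative:

from novaclient.v1_1 import client as nova_client
from pprint import pprint

nova = nova_client.Client(
    'myusername',
    'mypassword',
    'mytenantname',
    'https://localhost:5443/v2.0',  # keystone, reached through the ssh tunnel
    service_type='compute',
    insecure=True,
)

# Force authentication so the service catalogue is fetched and the compute
# endpoint is populated, then rewrite it to point at a second local tunnel
# (e.g. ssh -L 8774:os-compute.vip.mysubdomain.mydomain.com:8774 user@host).
nova.client.authenticate()
nova.client.management_url = nova.client.management_url.replace(
    'os-compute.vip.mysubdomain.mydomain.com:8774', 'localhost:8774')

for server in nova.servers.list():
    pprint(server)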