Passing data from local storage to Flask - flask

How can I take user data that was entered through a form and saved in local storage, and then access it in Flask so the data can be displayed?

I'm not sure I have fully understood the question, so here is my answer based on what I understood. Local storage lives in the browser, so to save and read it you need client-side JavaScript; the server only ever sees whatever the browser explicitly sends to it.
document.getElementById('clickme').onclick = function clicked() {
    // 'userData' is whatever key the form data was stored under in local storage
    const data = localStorage.getItem('userData');
    // use fetch (or any AJAX equivalent) to send the data from the browser to the server
    fetch('/route1', {method: 'POST', headers: {'Content-Type': 'application/json'},
                      body: JSON.stringify({data: data})});
};
On the server read the data:
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/route1", methods=["POST"])
def route1():
    data = request.get_json()['data']
    # do anything with the data here
    return jsonify({'status': 'success'})
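For a quick local test without the browser, a hypothetical smoke test like the one below (assuming the dev server runs on the default port and the requests package is installed) should return the success response:
import requests

resp = requests.post('http://localhost:5000/route1',
                     json={'data': 'value read from localStorage'})
print(resp.json())  # expected: {'status': 'success'}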

Related

share db session across multiple lambda invocations using Mangum

So far, I've been declaring my db connection outside of my web server app creation:
# src/main.py
db_session = ... # connection
app = FastAPI()
handler = Mangum(app)
# other files would import db_session from src.main
# and query the db through it
For better unit testing, I decided to move the db declaration into the app state:
def create_app(settings: Settings):
    app = FastAPI()
    app.state.config = settings
    app.state.db_session = ...  # here is the db declaration, using `settings` to get db credentials
    ...
    return app

app = create_app(settings)
handler = Mangum(app)
Does anyone know whether wrapping the app in Mangum means the db session will no longer be shared across multiple Lambda invocations? I don't know to what extent `app` here is kept alive inside the real handler code.
There is a pretty good answer citing the AWS documentation here: Scope of Python globals in AWS Lambda
In your case, app is a global variable and anything connected to it should stay global. Passing it onto the function call (or class constructor) of Mangum won't change that.
I'm not familiar with Mangum, but unless it does some sort of trickery, it should be a regular global variable.
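As a minimal sketch of that behaviour (hypothetical endpoint and counter, just to illustrate container reuse): anything created at module level, including whatever hangs off `app.state`, is built once per Lambda container on cold start and reused by every warm invocation, and wrapping the app in Mangum does not change its lifetime.
from fastapi import FastAPI
from mangum import Mangum

app = FastAPI()
app.state.invocations = 0  # created once per container, just like the db session

@app.get("/count")
def count():
    # increments across warm invocations served by the same container
    app.state.invocations += 1
    return {"invocations": app.state.invocations}

handler = Mangum(app)  # Mangum wraps the same long-lived app object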

How to send message to event hub via python

I am trying to connect lots of IoT devices to an event hub and save their messages to blob storage (and also an SQL database). I want to do this with Python (and I am not sure whether this is a recommended practice). The Python documentation was confusing. I tried a few examples; they do create entries in blob storage, but the entries seem to be irrelevant.
Things like this:
Objavro.codecnullavro.schema\EC{"type":"record","name":"EventData","namespace":"Microsoft.ServiceBus.Messaging","fields":[{"name":"SequenceNumber","type":"long"}...
which is not what I send. How can I solve this?
You could use the azure-eventhub Python SDK, which is available on PyPI, to send messages to Event Hub.
And there is a send sample showing how to send messages:
import os
from azure.eventhub import EventHubProducerClient, EventData

# connection string and event hub name, e.g. taken from environment variables
CONNECTION_STR = os.environ['EVENT_HUB_CONN_STR']
EVENTHUB_NAME = os.environ['EVENT_HUB_NAME']

producer = EventHubProducerClient.from_connection_string(
    conn_str=CONNECTION_STR,
    eventhub_name=EVENTHUB_NAME
)
with producer:
    event_data_batch = producer.create_batch()
    event_data_batch.add(EventData('Single message'))
    producer.send_batch(event_data_batch)
I'm also interested in the part where you say the Python documentation was confusing and that the examples you tried created blob storage entries that seem irrelevant.
Could you share your code? I'm wondering what the input/output for Event Hub and Storage Blob looks like and how the data flows through your processing.
By the way, for Azure Storage Blob Python SDK usage, you could check the repository and the blob samples for more information.
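If it helps, here is a minimal sketch of reading one of those captured blobs back with the Storage Blob SDK; the connection string environment variable, container, and blob names are placeholders for your own values:
import os
from azure.storage.blob import BlobServiceClient

# placeholders: your storage connection string, container, and blob path
service = BlobServiceClient.from_connection_string(os.environ['STORAGE_CONN_STR'])
blob = service.get_blob_client(container="your-capture-container",
                               blob="path/to/captured-file.avro")
raw_bytes = blob.download_blob().readall()  # Event Hubs Capture writes Avro-formatted blobs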
This is the connection-string setup for publishing new messages to an event hub using kafka-python. If you were already using Kafka and want to switch to Event Hubs, you only have to change this connection configuration.
import json
import ssl
from kafka import KafkaProducer  # kafka-python

KAFKA_HOST = "{your_eventhub}.servicebus.windows.net:9093"
KAFKA_ENDPOINT = "Endpoint=sb://{your_eventhub}.servicebus.windows.net/;SharedAccessKeyName=RootSendAccessKey;SharedAccessKey={youraccesskey}"

# disable TLS 1.0/1.1 so the connection negotiates TLS 1.2+
context = ssl.create_default_context()
context.options |= ssl.OP_NO_TLSv1 | ssl.OP_NO_TLSv1_1

producer = KafkaProducer(
    bootstrap_servers=KAFKA_HOST, connections_max_idle_ms=5400000,
    security_protocol='SASL_SSL', sasl_mechanism='PLAIN',
    sasl_plain_username='$ConnectionString', sasl_plain_password=KAFKA_ENDPOINT,
    value_serializer=lambda v: json.dumps(v).encode('utf-8'),
    api_version=(0, 10), retries=5, ssl_context=context)
You can find the values for KAFKA_HOST and KAFKA_ENDPOINT in the Azure portal.
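For completeness, a hedged usage sketch: with the Kafka endpoint of Event Hubs, the topic name is the event hub name (a placeholder below), and the dict is serialized to JSON by the value_serializer configured above.
# send a JSON-serializable payload; '{your_eventhub_name}' is a placeholder topic name
producer.send('{your_eventhub_name}', {'deviceId': 'sensor-1', 'temperature': 21.5})
producer.flush()  # block until the message has actually been delivered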

can you send custom data to the accept.js handler?

When dispatching the secure data: Accept.dispatchData(secureData, callback);
is there a way to send custom data along with the secureData to your callback?
No. Custom data goes into userFields, which you include in the createTransactionRequest that you send after receiving the nonce.
Accept.js gives you control over the user experience using your own form, but avoids any sensitive card data passing through your server.
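As a rough sketch of what that looks like server-side (the field names follow the Authorize.Net JSON API as I understand it; the credentials and values below are placeholders, so verify against the official createTransactionRequest documentation):
# Hypothetical server-side payload: the nonce from Accept.js goes in opaqueData,
# and your own custom values ride along in userFields.
create_transaction_request = {
    "createTransactionRequest": {
        "merchantAuthentication": {"name": "API_LOGIN_ID", "transactionKey": "TRANSACTION_KEY"},
        "transactionRequest": {
            "transactionType": "authCaptureTransaction",
            "amount": "19.99",
            "payment": {"opaqueData": {"dataDescriptor": "COMMON.ACCEPT.INAPP.PAYMENT",
                                       "dataValue": "nonce-from-accept-js"}},
            "userFields": {"userField": [{"name": "orderSource", "value": "mobile-app"}]},
        },
    }
}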

Storing and accessing request-response wide object

I need to store a created/open LDAP connection, so multiple models, views and so on can reuse a single connection rather than creating a new one each time. This connection should be open when first required during a request and closed when sending a response (done generating the page). The connection should not be shared between different requests/responses.
What is the way to do it? Where to store the connection and how to ensure it is eventually closed?
A bit more info: I use LDAP as an additional information source. The LDAP data contains details I cannot store in the database (for redundancy/consistency reasons), e.g. MS Exchange mailing groups. I might need some LDAP data at multiple points, and different objects/instances should be able to access it during response generation.
One way to store the connection resource so that it can be shared across your components is to use thread local storage.
For example, in myldap.py:
import threading

_local = threading.local()

def get_ldap_connection():
    # open one connection per thread on first use, then reuse it
    if getattr(_local, 'ldap_connection', None) is None:
        _local.ldap_connection = create_ldap_connection()
    return _local.ldap_connection

def close_ldap_connection():
    conn = getattr(_local, 'ldap_connection', None)
    if conn is not None:
        conn.unbind_s()  # or however your LDAP library closes/unbinds a connection
        _local.ldap_connection = None
So the first time myldap.get_ldap_connection is called from a specific thread it will open the connection. Subsequent calls from the same thread will reuse the connection.
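Hypothetical usage from a view or model during a request (the search call assumes python-ldap; adjust it to whatever LDAP library you actually use):
import ldap  # python-ldap, only needed here for the search scope constant
from myldap import get_ldap_connection

def lookup_mail_groups(base_dn):
    conn = get_ldap_connection()  # opened lazily, reused within this thread/request
    return conn.search_s(base_dn, ldap.SCOPE_SUBTREE, '(objectClass=group)')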
To ensure the connection is closed when you have finished working, you could implement a Django middleware component. Amongst other things, this allows you to specify a hook that gets invoked after the view has returned its response object.
The middleware can then invoke myldap.close_ldap_connection() like this:
import myldap

class CloseLdapMiddleware(object):
    def process_response(self, request, response):
        myldap.close_ldap_connection()
        return response
Finally, you will need to add your middleware to MIDDLEWARE_CLASSES in settings.py:
MIDDLEWARE_CLASSES = [
    ...
    'path.to.CloseLdapMiddleware',
    ...
]

If you are using lighttpd to drive a Django based web application, does each call create a new Python interpreter instance?

I'd like to be able to share some object instances between requests for managing asynchronous event delivery, but this seems like something that won't work with an event-based server like lighttpd. Is that the case? If so, what's the best way to work around it?
Of note:
This is not a standard web deployment. I'm trying to make this run on an embedded platform for local network only. So some typical deployment/scaling concerns are not really at play here and resources are at a premium.
FastCGI is already long-running, so getting access to a long-lived object should just be a matter of assigning the object to a module-level variable somewhere.
# yourapp/async_thingy.py
_long_lived_object = None

def get_long_lived_object():
    global _long_lived_object
    if _long_lived_object is None:
        _long_lived_object = create_the_long_lived_object()
    return _long_lived_object

# yourapp/views.py
from .async_thingy import get_long_lived_object

def the_view(request):
    # do whatever
    long_lived_obj = get_long_lived_object()
    long_lived_obj.whatever()
    # the rest of the view - return your response, etc.
I'd start with something like this. There are other potential issues if you're using multiple Python processes, but given your resource constraints I'm assuming that's not the case.