I have set up a Flask application that records voice using the Python sounddevice and pydub libraries and converts it into text.
The application runs well on localhost, but when I deployed it on an Amazon EC2 instance it records a blank file.
It doesn't show any error, but it records nothing.
Can anyone help me solve this?
import time
import sounddevice as sd
import soundfile as sf

def record(self):
    time.sleep(2)
    samplerate = 8000
    duration = 5  # seconds
    filename = path + 'yes.wav'  # `path` is defined elsewhere in the app
    print("start")
    # blocking=True makes sd.rec() wait until the recording is finished
    mydata = sd.rec(int(samplerate * duration), samplerate=samplerate,
                    channels=1, blocking=True)
    print("end")
    print(type(mydata))
    sd.wait()  # redundant when blocking=True, but harmless
    sf.write(filename, mydata, samplerate)
EC2 instances are virtual servers, not physical machines.
It is unlikely you would be able to record any meaningful data from an audio input on an EC2 instance - your program is almost certainly waiting for input from an audio device that does not exist, hence the empty file.
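You can verify this from the instance itself. A minimal sketch, assuming sounddevice and its PortAudio backend are installed:

import sounddevice as sd

# List every device PortAudio can see; on a typical EC2 instance there
# is no capture hardware, so no input devices will be listed.
print(sd.query_devices())
print("default (input, output) devices:", sd.default.device)

If no input device shows up, sd.rec() has nothing to capture and will only ever return silence.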
I've made a web API application using a Gunicorn (gevent worker) + Flask stack.
When it receives data, it reads rows from 5 different tables in Bigtable (the clients are created once, singleton-style) and then does a CPU-bound task.
After that, it publishes a message to Pub/Sub.
The problem is that it sometimes hangs forever right before the row loop.
from google.cloud import pubsub_v1

data = {}
for bigtable in five_bigtables:
    rows = bigtable.read_rows(row_key)
    print('reading rows start')
    for row in rows:
        data[row.row_key] = row.cells[column_family][column_qualifier][0]

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(project_id, topic_id)
publisher.publish(topic_path, data)  # note: publish() expects a bytes payload
So I can see "reading rows start" on the console, and then it stops working.
If I comment out the function that publishes the message to Pub/Sub, it works completely fine.
The package versions:
flask==2.0.3
gevent==21.12.0
google-cloud-bigtable==2.5.1
google-cloud-pubsub==2.9.0
grpcio-status==1.44.0
gunicorn==20.1.0
The Gunicorn settings are:
worker_class = 'gevent'
preload_app = False
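Spelled out as a gunicorn.conf.py, that looks roughly like this (the bind address matches the ab test below; the worker count is a placeholder, only worker_class and preload_app are my real settings):

# gunicorn.conf.py
bind = '127.0.0.1:8080'   # matches the ab test URL below
workers = 4               # placeholder
worker_class = 'gevent'
preload_app = False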
I can reproduce the hang using Apache Bench:
ab -n 10000 -c 200 "http://127.0.0.1:8080/test_url"
After some requests, it shows:
Benchmarking 127.0.0.1 (be patient)
apr_pollset_poll: The timeout specified has expired (70007)
Total of 139 requests completed
Any comment or help would be appreciated.
I mean, if I store a global int in a Django project's memory and modify/view it, this works fine with manage.py runserver.
However, would it still work in a deployment environment?
I am not sure how a production web server (Apache or uWSGI) will use my code. Will the app be initialized many times in different processes?
example:
global_var.py:

command = CommandEvent("start")  # a class containing an event and a command
var1 = 1

views.py:

from django.http import HttpResponse
from global_var import var1

def show_var(request):
    # a view must return an HttpResponse, not a bare int
    return HttpResponse(str(var1))
UPDATE
I store data in memory because I forked another thread to grab data from another source. I have to control this thread and get data from it with view functions.
spider.py:

import threading
from global_var import var1, command

def spider_serve_forever(command, var1):
    while True:
        if command.str == "start":
            pass
        elif command.str == "get_data":
            # note: this rebinds the local name only, not the
            # module-level var1 in global_var.py
            var1 = get_data()
            command.event.set()
        else:
            pass

spider_thread = threading.Thread(target=spider_serve_forever, args=(command, var1))
Another thread waits for the event; once it is set, it pushes a notification through a websocket to the web client.
The typical production configuration for a Django app using any WSGI server involves spawning a certain number of processes, each with a certain number of threads. Exactly what those numbers are depends on what web server and/or WSGI server is being used, but a rule of thumb that many people use is to configure things such that there is at least one process per server CPU.
I would assume any deployment of your Django app will be multiprocess, so any trick that assumes something in memory is consistent across multiple requests will not work, because you don’t know which process will handle it.
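If you need state that all workers can see, keep it outside process memory. A minimal sketch using Django's cache framework, assuming a shared backend such as Redis or memcached is configured in settings.py (the key name "var1" is just illustrative):

from django.core.cache import cache
from django.http import HttpResponse

def show_var(request):
    # The value lives in the cache backend, not in any single worker's
    # memory, so every process reads the same state.
    return HttpResponse(str(cache.get('var1', 1)))

def set_var(request):
    cache.set('var1', 42)  # any worker can update it
    return HttpResponse('ok')

Note that the default LocMemCache is per-process and would bring back the original problem, so a shared backend is essential here.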
I have a speech project that uses a web service. I set the timeout to 45 seconds in OD, but when I deploy it, I get the following error for the first calling user in the EP log viewer:
Fetch time out when opening timed stream for url =
http://example.com/AppName/SetLanguage?___DDSESSIONID=EF3397385F3E9BC0E89D526B3FCB811A%3A%2FAppName.
Timeout was = 15000
Session=ccmpp03-2018026140450-10
I think EP has a default timeout of 15 seconds.
1) Is there a way to increase it?
2) Are there any other solutions to this problem?
You should check this:
How to change the default Avaya Voice Browser (AVB) timeout value from an Avaya Voice Portal system?
You can change the fetch timeout in AEP, but that applies to all applications configured on the AEP. If you want to change the fetch timeout for a single speech application, you need to add a property to the application's root document and then configure the fetch timeout for that individual application.
Currently, I am working on a project that integrates MySQL with an IOCP server to collect sensor data and verify the collected data from clients.
However, MySQL sometimes loses the connection.
The queries themselves are simple: they insert a single row of records or get the average value between date intervals.
Each sensor's data flows into the DB every 5 seconds. When sensor messages arrive at the same time, or overlap with a message from the client, the connection is dropped with:
Lost connection to MySQL server during query

In response to this message, I changed max_allowed_packet as well as interactive_timeout, net_read_timeout, net_write_timeout, and wait_timeout.
It seems that the error occurs when queries overlap.
Please let me know if you know a solution.
I had a similar issue on a MySQL server with very simple queries when the number of concurrent queries was high. I had to disable the query cache to solve it. You could try disabling the query cache using the following statements:
SET GLOBAL query_cache_size = 0;
SET GLOBAL query_cache_type = 0;
Please note that a server restart will re-enable the query cache. Put the settings in the MySQL configuration file if you need them to persist.
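For example, in my.cnf (this applies to MySQL 5.7 and earlier; the query cache was removed entirely in MySQL 8.0):

[mysqld]
query_cache_size = 0
query_cache_type = 0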
Can you run the command below and check the current timeouts?
SHOW VARIABLES LIKE '%timeout';
You can change a timeout if needed:
SET GLOBAL <timeout_variable>=<value>;
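For example, to raise the network read timeout to 10 minutes (the variable and value here are just illustrative):

SET GLOBAL net_read_timeout = 600;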
I'm returning PDF files generated with wkhtmltopdf from an HTML page in Django, using the following code:
from subprocess import Popen, PIPE
from urllib import urlencode  # urllib.parse.urlencode on Python 3
from django.http import HttpResponse

currentSite = request.META['HTTP_HOST']
params = {'idOrganisation': idOrganisation, 'idMunicipalite': idMunicipalite,
          'nomMunicipalite': nomMunicipalite, 'idUe': idUe,
          'dateEvenement': dateEvenement}
command_args = "wkhtmltopdf -s A4 http://%s/geocentralis/fiche-role/propriete/?%s -" % (currentSite, urlencode(params))
process = Popen(command_args.split(' '), stdout=PIPE, stderr=PIPE)
rtn_comm = process.communicate()  # waits for the process; returns (stdout, stderr)
pdf_contents = rtn_comm[0]  # index 1 holds stderr, useful for debugging
r = HttpResponse(pdf_contents, mimetype='application/pdf')  # content_type= on Django >= 1.7
r['Content-Disposition'] = 'filename=fiche-de-propriete.pdf'
return r
The code works and the PDF is generated after 2-3 seconds, but quite often (intermittently) it hangs for 30-60 seconds before producing the PDF, and Firebug shows a "NetworkError: 408 Request Timeout". During this hang, my Django site does not respond to any request.
I'm using Django with IIS on Windows Server 2008.
I'm looking for any clue on how to solve this issue...
The reason it hangs is that the server runs into a concurrency issue and hits a deadlock (your HTML probably links an asset or two with relative URLs, i.e. served from the same server).
You request a PDF, so the server fires up wkhtmltopdf, which begins churning out your PDF file. When it reaches an asset (image, CSS or JS file, font, etc.), wkhtmltopdf attempts to load it from that server... which happens to be the same server wkhtmltopdf is running on. If that server cannot handle multiple requests concurrently (or just doesn't handle concurrency well), it enters a deadlock: wkhtmltopdf is waiting on an asset from a server that is waiting for wkhtmltopdf to finish, so that it can serve the asset to wkhtmltopdf, which is waiting on the asset...
To fix this in development, Base64-embed your assets into the HTML being converted to PDF, or temporarily serve those files from another machine (e.g. a temporary S3 bucket). This should not be a problem in production, as your live server is (hopefully) capable of handling multiple concurrent requests.
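For the Base64 approach, a minimal sketch (the file path and MIME type are illustrative):

import base64

def data_uri(path, mime='image/png'):
    # Inline a local file as a data URI so wkhtmltopdf never issues an
    # HTTP request back to the very server that is waiting on it.
    with open(path, 'rb') as f:
        return 'data:%s;base64,%s' % (mime, base64.b64encode(f.read()).decode('ascii'))

# e.g. swap a served asset for its embedded equivalent before conversion
html = html.replace('src="/static/logo.png"', 'src="%s"' % data_uri('static/logo.png'))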