I had to build a dashboard, and Dash seemed an easy choice, but I ran into a lot of issues integrating it with Flask or Django.
So I rebuilt the dashboard on the Django framework with plotly.js. With Plotly Dash, @app.callback() updated the graphs very quickly. To mimic that behaviour I used AJAX in Django with plotly.js. The job gets done with AJAX, but there is a noticeable performance lag: the updated graph takes 3-4 seconds to render.
Is there a better or more efficient way to achieve performance similar to Dash callbacks with Django/AJAX?
Could the problem be that I have to re-read the CSV file during every AJAX call?
Sample backend code for the AJAX call:
from datetime import datetime as dt

from django.http import HttpResponse, JsonResponse

# `at` and `ut` are the project's own helper modules

def itemsAjax(request):
    if request.is_ajax():
        itemsList = request.POST.get('items')
        # itemsList = ast.literal_eval(itemsList)
        # itemsList = [n.strip() for n in itemsList]
        itemsList = itemsList.split(',')
        daterange = request.POST.get('daterange').split('-')
        franchise = request.POST.get('franchise')
        startDate = dt.strptime(daterange[0].strip(), '%m/%d/%Y')
        endDate = dt.strptime(daterange[1].strip(), '%m/%d/%Y')
        df = at.readData()  # reads the CSV on every call
        flag = ut.determineFlag(startDate, endDate)
        df = at.filter_df_daterange(df, startDate, endDate)
        itemTrend_df = at.getDataForItemsTrend(df, startDate, endDate, flag)
        itemTrend_plt = [{'type': 'scatter',
                          'x': itemTrend_df[itemTrend_df['S2PName'] == item]['S2BillDate'].tolist(),
                          'y': itemTrend_df[itemTrend_df['S2PName'] == item]['totSale'].tolist(),
                          # 'text': itemTrend_df[itemTrend_df['S2PName'] == item]['Hover'],
                          'mode': 'markers+lines',
                          'name': item
                          } for item in itemsList]
        return JsonResponse({'itemTrend_plt': itemTrend_plt})
    else:
        return HttpResponse('Not authorised!')
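Regarding the repeated CSV read: a common mitigation is to cache the parsed file and invalidate the cache only when the file changes on disk. The sketch below uses only the stdlib; `read_data` and the path handling are illustrative, not the question's actual `at.readData` (the same idea works with a pandas DataFrame).

```python
import csv
import os
from functools import lru_cache

@lru_cache(maxsize=8)
def _load_rows(path, mtime):
    # mtime participates in the cache key, so editing the file
    # on disk produces a cache miss and a fresh parse
    with open(path, newline="") as f:
        return tuple(csv.DictReader(f))

def read_data(path):
    # Cheap stat() on every call; the full parse happens only
    # when the file's modification time has changed
    return _load_rows(path, os.path.getmtime(path))
```

With this in place, each AJAX call pays only for the filtering and serialization, not for re-parsing the CSV.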
Related
I have a Dash app that makes some graphs based on data drawn from an API, and I'd like to give the user an option to change a parameter, pull new data based on this, and then redraw the graphs. It could be through a form, but I figured the simplest method would be to use the <pathname> route system from Flask. Dash allows me to do this:
import dash
import dash_core_components as dcc
import dash_html_components as html
import plotly.express as px

app = dash.Dash(__name__)

app.layout = html.Div(children=[
    dcc.Location(id='url', refresh=False),
    html.Div(id='page-content'),
])

@app.callback(dash.dependencies.Output('page-content', 'children'),
              [dash.dependencies.Input('url', 'pathname')])
def display_page(pathname):
    if pathname == '/':
        return html.Div('Please append a pathname to the route')
    else:
        data = get_data_from_api(int(pathname))
        fig_1 = px.line(data, x="time", y="price")
        fig_2 = px.line(data, x="time", y="popularity")
        return html.Div(children=[
            dcc.Graph(id='fig_1', figure=fig_1),
            dcc.Graph(id='fig_2', figure=fig_2),
        ])

if __name__ == '__main__':
    app.run_server(debug=True)
But the problem is that the API call takes a minute or two, and Dash seems to poll it constantly, so the request times out and the graphs never redraw. What I need is something that doesn't auto-refresh: it should run the API call, update the underlying data, and then tell the app to refresh its state.
I did consider a Dash-within-Flask hybrid like this, but it seems excessively complicated for my use-case. Is there a simpler way to do this?
I think you can add an html.Button to your layout:
html.Button('Update', id='update-button')
To your callback you can add:
@app.callback(dash.dependencies.Output('page-content', 'children'),
              [dash.dependencies.Input('url', 'pathname'),
               dash.dependencies.Input('update-button', 'n_clicks')])
def display_page(pathname, n_clicks):
    ....
There is no need to process the variable n_clicks in any way; the callback is triggered whenever the button is clicked.
Cheers
Hello, I would like to ask for advice on an application. I currently use Django Cookiecutter for the project, with django-mptt as a library. The model must respect a certain hierarchy, and some objects must not end up in the same place as others; to guarantee this, I need to isolate the objects, perform my calculation, and then create the new objects. Since this can take a while, I first used Celery to make my additions to the database; I tried several concurrent accesses and never got a rollback. Then, since I know how to scale the Docker setup into several Django instances, I thought I would move this creation into my view instead of Celery and use the @atomic decorator. The problem is that when we create objects at the same time on the different servers (django1, django2, django3) I get a lot of rollbacks, while with Celery I have no problem. I don't understand why concurrency is harder in Django views than in Celery. Do you have any advice for managing several Django servers at the same time? Thank you in advance. Example:
Works in Celery, no rollback:
@celery_app.task()
def create_user_in_matrix(user_id):
    .....
    new_human = HumanFaction.objects.select_for_update().filter(id=int(user_id))
    with transaction.atomic():
        new_human = new_human.first()
        alderol = HumanFaction.objects.get(user_id=alderol_user_id)
        print('selection utilisateur', new_human)
        level = 1
        level_max = HumanFaction.objects.aggregate(Max('level'))['level__max']
        if level_max == 0:
            level_max = 1
        print('level maximum', level_max)
        while level <= level_max and user_whole:
            users = HumanFaction.objects.select_for_update().filter(level=level)
            HumanFaction.objects.create(user=new_human, parent=aledol)
    .....
The same code running in a Django view rolls back frequently (roughly every third time) under a collision test across django1, django2, django3:
@transaction.atomic
def human_group(request):
    ......
    new_human = HumanFaction.objects.select_for_update().filter(id=int(user_id))
    with transaction.atomic():
        new_human = new_human.first()
        alderol = HumanFaction.objects.get(user_id=alderol_user_id)
        print('selection utilisateur', new_human)
        level = 1
        level_max = HumanFaction.objects.aggregate(Max('level'))['level__max']
        if level_max == 0:
            level_max = 1
        print('level maximum', level_max)
        while level <= level_max and user_whole:
            users = HumanFaction.objects.select_for_update().filter(level=level)
            HumanFaction.objects.create(user=new_human, parent=aledol)
    ......
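One difference worth noting: a Celery queue with a single worker effectively serializes the writes, while three Django servers genuinely contend for the same rows, so select_for_update blocks or conflicts and the outer atomic block rolls back. Besides keeping transactions short and locking rows in a consistent order, a common mitigation is to retry the transactional function on conflict. A generic sketch (the exception type is passed in as a placeholder; in Django you would typically catch django.db.OperationalError around the atomic block):

```python
import random
import time
from functools import wraps

def retry_on_conflict(exceptions, tries=3, base_delay=0.05):
    # Retry a transactional function when the database aborts it
    # because of a lock or serialization conflict.
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(tries):
                try:
                    return fn(*args, **kwargs)
                except exceptions:
                    if attempt == tries - 1:
                        raise
                    # exponential backoff with jitter so the servers
                    # do not collide again in lockstep
                    time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.01))
        return wrapper
    return decorator
```

This does not remove the contention, but it turns transient rollbacks into retries instead of user-visible failures.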
I have to make a dashboard-like view in flask-admin that will use data retrieved from an external API. I have already written functions that take date ranges and return the data for that range. I should probably use BaseView, but I don't know how to actually write it so that the filters work. This is an example of the call I have to use: charts = generate_data_for_dashboard('164', '6423FACA-FC71-489D-BF32-3A671AB747E3', '2018-03-01', '2018-09-01'). Those params should be chosen from three different dropdowns. So far I only know how to render views with hard-coded data, like this:
class DashboardView(BaseView):
    kwargs = {}

    @expose('/', methods=('GET',))
    def statistics_charts(self):
        user = current_user
        company = g.company
        offices = Office.query.filter_by(company_id=company.id)
        self.kwargs['user'] = user
        self.kwargs['company'] = company
        charts = generate_data_for_dashboard('164', '6423FACA-FC71-489D-BF32-3A671AB747E3', '2018-03-01', '2018-09-01')
        self.kwargs['chart1'] = charts[0]
        self.kwargs['chart2'] = charts[1]
        return self.render('stats/dashboard.html', **self.kwargs)
But I need some kind of form to filter it. In addition, the date filter dropdown should have dynamic options: current_week, last_week, current_month, last_month, last_year. I don't know where to start.
You should use WTForms to build a form. You then have to decide if you want the data to be fetched on Submit or without a reload of the page. In the former case, you can just return the fetched information on the response page in your statistics_charts view. But if you want the data to update without a reload, you'll need to use JavaScript to track the form field changes, send the AJAX request to the API, and then interpret the resulting JSON and update your dashboard graphs and tables as needed.
I have not used it, but this tutorial says you can use Dash for substantial parts of this task, while mostly writing in Python. So that could be something to check out. There is also flask_jsondash which might work for you.
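For the dynamic date options themselves, the (start, end) ranges can be computed with the stdlib and then passed to the question's generate_data_for_dashboard. The function name and the end-exclusive convention here are illustrative, not part of any library:

```python
from datetime import date, timedelta

def date_range(option, today=None):
    # Map a dropdown value to a (start, end) pair, end-exclusive.
    today = today or date.today()
    monday = today - timedelta(days=today.weekday())
    if option == "current_week":
        return monday, monday + timedelta(days=7)
    if option == "last_week":
        return monday - timedelta(days=7), monday
    if option == "current_month":
        first = today.replace(day=1)
        # jump past month end, then snap back to day 1
        return first, (first + timedelta(days=32)).replace(day=1)
    if option == "last_month":
        first = today.replace(day=1)
        return (first - timedelta(days=1)).replace(day=1), first
    if option == "last_year":
        return date(today.year - 1, 1, 1), date(today.year, 1, 1)
    raise ValueError(option)
```

The view can then format the two dates with isoformat() and hand them to the existing data function.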
I have a bit of code that is causing my page to load pretty slow (49 queries in 128 ms). This is the landing page for my site -- so it needs to load snappily.
The following is my views.py that creates a feed of latest updates on the site and is causing the slowest load times from what I can see in the Debug toolbar:
def product_feed(request):
    """ Return all site activity from friends, etc. """
    latestparts = Part.objects.all().prefetch_related('uniparts').order_by('-added')
    latestdesigns = Design.objects.all().order_by('-added')
    latest = list(latestparts) + list(latestdesigns)
    latestupdates = sorted(latest, key=lambda x: x.added, reverse=True)
    latestupdates = latestupdates[0:8]
    # only get the unique avatars that we need to put on the page,
    # so we're not pinging for images for each update
    uniqueusers = User.objects.filter(id__in=Part.objects.values_list('adder', flat=True))
    return render_to_response("homepage.html", {
        "uniqueusers": uniqueusers,
        "latestupdates": latestupdates
    }, context_instance=RequestContext(request))
The line that takes the most time seems to be:
latest = list(latestparts) + list(latestdesigns) (25 ms)
There are two others, at 17 ms (sitewide announcements) and 25 ms (adding tagged items to each product feed item), that I am also investigating.
Does anyone see any ways in which I can optimize the loading of my activity feed?
You never need more than 8 items, so limit your queries. And don't forget to make sure that added in both models is indexed.
latestparts = Part.objects.all().prefetch_related('uniparts').order_by('-added')[:8]
latestdesigns = Design.objects.all().order_by('-added')[:8]
For bonus marks, eliminate the magic number.
After making those queries a bit faster, you might want to check out memcache to store the most common query results.
Moreover, I believe adder is a ForeignKey to the User model.
Part.objects.distinct().values_list('adder', flat=True)
The line above is a QuerySet of unique adder values, which I believe is exactly what you meant. It saves you performing a subquery.
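Since both querysets are already ordered newest-first by added, building the combined 8-item feed doesn't require re-sorting the concatenation; the two short lists can be merged lazily. A stdlib sketch of that merge step, with FEED_SIZE replacing the magic number:

```python
import heapq
import itertools
from operator import attrgetter

FEED_SIZE = 8  # the former magic number

def merged_feed(parts, designs, size=FEED_SIZE):
    # Both inputs arrive sorted by `added` descending, so heapq.merge
    # can interleave them without a full sort of the combined list.
    merged = heapq.merge(parts, designs, key=attrgetter("added"), reverse=True)
    return list(itertools.islice(merged, size))
```

For two 8-item lists the saving is tiny, but it keeps the intent explicit and degrades gracefully if the per-model limits ever grow.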
For a mock web service I wrote a little Django app that serves as a web API, which my Android application queries. When I make requests to the API I can also hand over an offset and limit so that only the necessary data is transmitted. However, I ran into the problem that Django gives me different results for the same query to the API; the results seem to be returned round-robin.
This is the Django code that will be run:
def getMetaForCategory(request, offset, limit):
    if request.method == "GET":
        result = {"meta_information": []}
        categoryIDs = request.GET.getlist("category_ids[]")
        categorySet = set(toInt(categoryIDs))
        categories = Category.objects.filter(id__in=categoryIDs)
        metaSet = set([])
        for category in categories:
            metaSet = metaSet | set(category.meta_information.all())
        metaList = list(metaSet)
        metaList.sort()
        for meta in metaList[int(offset):int(limit)]:
            relatedCategoryIDs = getIDs(meta.category_set.all())
            item = {
                "_id": meta.id,
                "name": meta.name,
                "type": meta.type,
                "categories": list(categorySet & set(relatedCategoryIDs))
            }
            result['meta_information'].append(item)
        return HttpResponse(content=simplejson.dumps(result), mimetype="application/json")
    else:
        return HttpResponse(status=403)
What happens is the following: if all the MetaInformation objects were Foo, Bar, Baz and Blib and I set the limit to 0:2, I would get [Foo, Bar] on the first request, and the exact same request would return [Baz, Blib] when I run it a second time.
Does anyone see what I am doing wrong here? Or is it the Django cache that somehow gets into my way?
I think the difficulty is that you are using a set to store your objects, and slicing that - and sets have no ordering (they are like dictionaries in that way). So, the results from your query are in fact indeterminate.
There are various implementations of ordered sets around - you could look into using one of them. However, I must say that I think you are doing a lot of unnecessary and expensive unique-ifying and sorting in Python, when most of this could be done directly by the database. For instance, you seem to be trying to get the unique list of Metas that are related to the categories you pass. Well, this could be done in a single ORM query:
meta_list = MetaInformation.objects.filter(category__id__in=categoryIDs)
and you could then drop the set, looping and sorting commands.
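Independent of the ORM fix, the indeterminacy is easy to reproduce in plain Python: set iteration order is arbitrary, so stable pagination needs an explicit, total sort key before slicing. A minimal illustration, where the tuples stand in for MetaInformation objects:

```python
def paginate(items, offset, limit, key):
    # An explicit, total key makes page boundaries deterministic
    # no matter how the underlying set happens to iterate.
    ordered = sorted(items, key=key)
    return ordered[offset:offset + limit]

metas = {("Foo", 1), ("Bar", 2), ("Baz", 3), ("Blib", 4)}
page_one = paginate(metas, 0, 2, key=lambda m: m[1])
page_two = paginate(metas, 2, 2, key=lambda m: m[1])
```

In the ORM version the same guarantee comes from an order_by() clause on the queryset before slicing it.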