Best way to make Django API call take longer - django

I have a Django application with a function-based view API endpoint that returns a response fairly quickly. I would like to make this request intentionally take longer to return, for the purpose of avoiding DDoS attacks. I know it's possible to use throttling through DRF, but I was wondering what would be the best way to make the actual request take longer. Maybe add in a few expensive hashing functions?

time.sleep() is used to pause execution in Python. I believe it would work in your case: call time.sleep(30) just before returning the result to add the delay. Please also use the throttling concept; it will limit the number of requests to a specific API.
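A rough sketch of both ideas with DRF (the view name, throttle class, and rate below are illustrative, not taken from your code):

    import time

    from rest_framework.decorators import api_view, throttle_classes
    from rest_framework.response import Response
    from rest_framework.throttling import AnonRateThrottle

    class BurstAnonThrottle(AnonRateThrottle):
        rate = "10/min"  # hypothetical rate; tune to your traffic profile

    @api_view(["GET"])
    @throttle_classes([BurstAnonThrottle])
    def slow_endpoint(request):
        # Artificial delay before returning. Note that this ties up a worker
        # for the whole sleep, so on its own it can make a flood of requests
        # hurt more, not less; the throttle above is doing the real work.
        time.sleep(30)
        return Response({"status": "ok"})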

Related

Dummy API for a Django Test

I have a booking app that can deal with both local and remote API bookings. Our logic (for pricing and availability, for example) follows two very different pathways. We obviously need to test both.
But running regular tests against a remote API is slow. The test environment they provide returns a response in 2-17 seconds. It's not feasible to use this in my pre_commit tests. Even if they sped that up, it's never going to be fast and will always require a connection to pass.
But I still need to test our internal logic for API bookings.
Is there some way that, within a test runner, I can spin up a little webserver (quite separate from the Django website) that serves a reference copy of their API? I could then plug that into the models we're dealing with and query against it locally, at speed.
What's the best way to handle this?
Again, I need to stress that this reference API should not be part of the actual website. Unless there's a way of adding views that only apply at test-time. I'm looking for clean solutions. The API calls are pretty simple. I'm not looking for verification or anything like that here, just that bookings made against an API are priced correctly internally, handle availability issues, etc.
For your test purpose, you can mock the API call functions.
You can read more here:
https://williambert.online/2011/07/how-to-unit-testing-in-django-with-mocking-and-patching/
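For example, if your booking code talks to the remote API through a helper function, you can patch that helper inside a normal Django test. The module path, helper, and factory below are made-up names for illustration:

    from decimal import Decimal
    from unittest import mock

    from django.test import TestCase

    class RemoteBookingTests(TestCase):
        # "bookings.api_client.fetch_availability" is hypothetical; point the
        # patch at wherever your models actually import the API client from.
        @mock.patch("bookings.api_client.fetch_availability")
        def test_remote_booking_is_priced_from_api_response(self, fake_fetch):
            fake_fetch.return_value = {"available": True, "price": "120.00"}

            booking = make_remote_booking(room_id=42)  # your own factory/helper

            self.assertEqual(booking.total_price, Decimal("120.00"))
            fake_fetch.assert_called_once_with(room_id=42)

If you really do want a separate local webserver serving canned responses, you could start one from the test suite with Python's built-in http.server on a spare port, but patching is usually enough for checking pricing and availability logic.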

How to relieve a rate-limited API?

We run a website which relies heavily on the Amazon Product Advertising API (APAA). When we experience a sudden spike in users, we hit the rate limit and all functions relying on the APAA shut down for a while. What can we do so that doesn't happen?
So, obviously we have some basic caching in place, but the APAA doesn't allow us to cache data for a very long time, and APAA queries can vary a lot so there may not be any cached data at all to query.
I think your only option is to retry the API calls until they work, but to do so in a smart way. Unfortunately, that's what everybody who gets throttled does, and AWS expects people to handle it themselves.
You can implement exponential backoff and add jitter to stop the retries from clustering together. AWS has a great blog post about solutions for this kind of problem: https://www.awsarchitectureblog.com/2015/03/backoff.html
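A minimal "full jitter" version of that in Python (ThrottledError is a placeholder for whatever throttling exception your API client actually raises):

    import random
    import time

    class ThrottledError(Exception):
        """Placeholder; substitute the throttling error your client raises."""

    def call_with_backoff(request_fn, max_retries=5, base=0.5, cap=30.0):
        """Retry request_fn on throttling, sleeping a random time that grows
        exponentially with each attempt (capped), so retries don't cluster."""
        for attempt in range(max_retries):
            try:
                return request_fn()
            except ThrottledError:
                time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
        # Final attempt; let the exception propagate if it still fails.
        return request_fn()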

API Throttling Best Practices

I have a SOAP API that I would like to throttle access to on a per-user basis after "x" many calls have been received in "y" amount of time.
After searching around, the #1 consideration (obviously) is to consider your parameters for when to throttle users. However, I don't see much in the way of best practices/examples for implementing such a solution. I did see the Leaky Bucket Method which makes sense. I have to believe there are more ideas out there though.
Any other takers on how you go about implementing your throttling solution? Questions include:
Do any frameworks (e.g. Spring) provide capabilities for throttling in web APIs?
Seems to me you would need to store access information per user. How do you minimize the database overhead for doing this EVERY call?
Do you even NEED to access a datastore to implement this?
For what it's worth, I've sort of answered this question after working on some other production projects.
Home brew: Using Spring AOP to pointcut around the method calls before the API method code executes is one home-brew approach, if you have your own algorithm to implement (a rough sketch of such an algorithm follows this answer). This ends up being pretty elegant and flexible, as you can capture a lot of metadata before deciding what to do with the request.
API Management Service: If you're talking about a production system and you have the budget, probably the best way to go is to delegate this to an API Management layer like Apigee or Mashery.
The advantage is that it separates the concerns, so it's easier to change and allows you to focus just on your API. This is especially helpful if business stakeholders are involved and you need a good UI and dictionary of terms.
The disadvantage, of course, is the cost and the vendor lock-in.
Hope this helps someone!
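For the home-brew route, here is a rough in-memory token-bucket limiter (Python is used to match the rest of this page; the same logic would live in your AOP advice, and a real deployment would keep the buckets somewhere shared such as Redis rather than in process memory):

    import time
    from collections import defaultdict

    class TokenBucket:
        """Allow `capacity` calls, refilled at `refill_rate` tokens per second."""

        def __init__(self, capacity=100, refill_rate=100 / 3600.0):
            self.capacity = capacity
            self.refill_rate = refill_rate
            self.tokens = float(capacity)
            self.updated = time.monotonic()

        def allow(self):
            now = time.monotonic()
            # Refill according to elapsed time, never exceeding capacity.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.updated) * self.refill_rate)
            self.updated = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

    # One bucket per user, checked before the API method is executed.
    buckets = defaultdict(TokenBucket)

    def is_allowed(user_id):
        return buckets[user_id].allow()

This also partly answers the datastore question above: you only need a shared store if your API runs on more than one process or node.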

How can I scale a web app with a long response time, which currently uses Django

I am writing a web application with Django on the server side. It takes ~4 seconds for the server to generate a response to the user. It makes use of a weather API. My application has to make ~50 queries to that API for each user request.
The server side uses Python's urllib to call the weather API. I used Python's threading to speed up the process because urllib is synchronous. I am using WSGI with Apache. The problem is that the WSGI stack is fully synchronous, and when many users use my application, they have to wait for one another's requests to finish. Since each request takes ~4 seconds, this is unacceptable.
I am kind of stuck, what can I do?
Thanks
If you are using mod_wsgi in a multithreaded configuration, or even a multi process configuration, one request should not block another from being able to do something. They should be able to run concurrently. If using a multithreaded configuration, are you sure that you aren't using some locking mechanism on some resource within your own application which precludes requests running through the same section of code? Another possibility is that you have configured Apache MPM and/or mod_wsgi daemon mode poorly so as to preclude concurrent requests.
Anyway, as mentioned in another answer, you are much better off looking at caching strategies to avoid the weather lookups in the first place, or offloading to client.
50 queries to an outside resource per request is probably a bad place to be, and probably not necessary at all.
The weather doesn't change all that quickly, so you can probably benefit enormously by just caching results for a while. Then it doesn't matter how many requests you're getting; you don't need to make more than a few queries per day.
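With Django's cache framework that can be a thin wrapper around the lookup; fetch_weather_from_api below stands in for whatever urllib call you already make:

    from django.core.cache import cache

    def get_weather(location):
        key = "weather:%s" % location
        data = cache.get(key)
        if data is None:
            data = fetch_weather_from_api(location)  # your existing urllib call
            # Weather changes slowly, so an hour of caching is usually safe.
            cache.set(key, data, timeout=60 * 60)
        return data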
If that's not your situation, you might be able to get the client to do the work for you. Refactor the code so that the weather api aggregation happens on the client in javascript, rather than funneling it all through the server.
Edit: based on comments you've posted, what you are asking for probably cannot be optimized within the constraints of the API you are using. The problem is that the service is doing a good job of abstracting away the differences in the many sources of weather information they aggregate into a nearest-location query. After all, weather stations provide only point data.
If you talk directly to the technical support people that provide the API, you might find that they are willing to support more complex queries (bounding box), for which they will give you instructions. More likely, though, they abstract that away because they don't want to actually reveal the resolution that their API actually provides, or because there is some technical reason in the way that they model their data or perform their calculations that would make such queries too difficult to support.
Without that or caching, you are just out of luck.

Developing/Testing Twitter apps without slamming the API

I'm currently working on an app that works with Twitter, but while developing/testing (especially those parts that don't rely heavily on real Twitter data), I'd like to avoid constantly hitting the API or publishing junk tweets.
Is there a general strategy people use for taking it easy on the API (caching aside)? I was thinking of rolling my own library that would essentially intercept outgoing requests and return mock responses, but I wanted to make sure I wasn't missing anything obvious first.
I would probably start by mocking the specific parts of the API you need for your application. In fact, this may actually force you to come up with a cleaner design for your app, because it more or less requires you to think about your application in terms of "what" it should do rather than "how" it should do it.
For example, if you are using the Twitter Search API, your application most likely should not care whether or not you are using the JSON or the Atom format option. The ability to search Twitter using a given query and get results back represents the functionality you want, so you should mock the API at that level of abstraction. The output format is just an implementation detail.
By mocking the API in terms of functionality instead of in terms of low-level implementation details, you can ensure that the application does what you expect it to do, before you actually connect to Twitter for real. At that point, you've already verified that the app works as intended, so the only thing left is to write the code to make the REST requests and parse the responses, which should be fairly straightforward, so you probably won't end up hitting Twitter with a lot of junk data at that point.
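Concretely, that might mean hiding the Search API behind a small interface of your own and swapping in a fake during development and tests. The class and field names here are invented for illustration:

    class TwitterSearch(object):
        """Production implementation: performs the real HTTP request."""
        def search(self, query):
            # ...build the request, call the Search API, parse the response...
            raise NotImplementedError

    class FakeTwitterSearch(object):
        """Test double: returns canned results and never touches the network."""
        def __init__(self, canned_results):
            self.canned_results = canned_results

        def search(self, query):
            return self.canned_results.get(query, [])

    # Application code depends only on the search(query) contract, so it never
    # knows (or cares) whether JSON, Atom, or a fake is behind it.
    def usernames_mentioning(search_client, term):
        return [tweet["from_user"] for tweet in search_client.search(term)]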
Caching is probably the best solution. Besides that, I believe the API is limited to 100 requests per hour, so maybe write a function that counts each request and, as it gets close to 100, only pulls live data every 10th request or so. It wouldn't be a hard cutoff; more likely a gradient function that tapers off as you near the limit.
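A crude sketch of that gradient idea: make a live call with a probability that shrinks as the hourly quota is used up, and fall back to cached data otherwise (fetch_from_twitter and fetch_from_cache are placeholders for your own code):

    import random

    HOURLY_LIMIT = 100
    calls_this_hour = 0  # reset this counter at the top of each hour

    def get_tweets(query):
        global calls_this_hour
        remaining = max(HOURLY_LIMIT - calls_this_hour, 0)
        # The chance of a live call tapers off as the quota runs down.
        if remaining and random.random() < remaining / float(HOURLY_LIMIT):
            calls_this_hour += 1
            return fetch_from_twitter(query)  # placeholder: real API call
        return fetch_from_cache(query)        # placeholder: your cache lookup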
I've used Tweet#; it caches and should do everything you need, since it has 100% of Twitter's API covered and then some...
http://dimebrain.com/2009/01/introducing-tweet-the-complete-fluent-c-library-for-twitter.html
Cache stuff in a database... If the cache is too old then request the latest data via the API.
Also think about getting your application account white-listed; it will allow you a 20,000 API request limit per hour vs. the measly 100 (which is meant for a user, not an application).
http://twitter.com/help/request_whitelisting