CloudSearch performance with frequent updates of small batches - amazon-web-services

I have a use case where I need to upload small document batches (typically 1 to 10 documents of 1KB each) to CloudSearch. A new batch is uploaded every 2 or 3 seconds. The CloudSearch docs for bulk uploads say:
Make sure your batches are as close to the 5 MB limit as possible. Uploading a larger amount of smaller batches slows down the upload and indexing process.
It's OK if there is a 30-second delay before the documents show up in search results. Will my implementation still work well as my document count grows, say to 500,000 docs?

Indexing time should be well under your 30 second SLA even with 500k docs, regardless of how or whether you batch your submissions.
I say this based on my own testing with an index of 300k docs and 38 index fields on an m1.small instance type, where it takes less than 3 seconds for a document to become searchable. There are a lot of variables that could affect your own situation, such as how many index fields you have, your instance size, etc., but I think my setup reflects unfavorable conditions (an m1.small instance with a complex indexing schema) and is still an order of magnitude faster than your SLA. It's anecdotal evidence of course, but you should be fine.
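For reference, here is a minimal sketch of what a small batch upload looks like with boto3; the endpoint URL, table of fields, and document shape are placeholders you'd replace with your own domain's document endpoint and schema:

```python
import json
import boto3

# The document service endpoint is specific to your search domain (placeholder below).
client = boto3.client(
    "cloudsearchdomain",
    endpoint_url="https://doc-mydomain-xxxxxxxxxxxx.us-east-1.cloudsearch.amazonaws.com",
)

def upload_batch(docs):
    """Upload a small batch of documents in the SDF 'add' format."""
    batch = [{"type": "add", "id": d["id"], "fields": d["fields"]} for d in docs]
    resp = client.upload_documents(
        documents=json.dumps(batch).encode("utf-8"),
        contentType="application/json",
    )
    return resp["status"]  # 'success' once the batch has been accepted for indexing
```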

Related

What is the best performance I can get by querying DynamoDB for a maximum 1MB?

I am using DynamoDB for storing data, and I see 1MB is the hard limit on what a query can return. I have a case where a query fetches 1MB of data from one partition, and I'd like to know the best performance I can get.
Based on the DynamoDB docs, one partition can sustain a maximum of 3000 RCU. With eventually consistent reads, that should support 3000 * 8KB = 24000KB ≈ 23MB per second.
If I send one query request to fetch 1MB from one partition, does this mean it should respond in 1/23 second ≈ 43 milliseconds?
I am testing with a Lambda that sends a query to DynamoDB with X-Ray enabled, and the trace shows the query takes 300ms or more. I don't understand what causes the long latency.
What can I do if I want to reduce the latency to single-digit milliseconds? I don't want to split the partition, since 1MB is not really a big size.
DynamoDB really is capable of single-digit millisecond latency, but only if the item size is small enough to fit into 1 RCU. Reading 1 MB of data from a database in under 10ms is a challenging task in itself.
Here is what you can try:
Split your read operation into two.
One query uses ScanIndexForward: true + Limit: N/2 and the other uses ScanIndexForward: false + Limit: N/2. The idea is to read the same data from both ends toward the middle.
Run the two queries in parallel and merge the responses into one (see the sketch below).
However, this will likely only cut latency from 300ms to about 150ms, which is still not <10ms.
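A rough sketch of that split-query idea with boto3 and a thread pool; the table name, key names, and N are placeholders, not something from your setup:

```python
from concurrent.futures import ThreadPoolExecutor

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("my-table")  # placeholder table name
N = 1000  # assumed number of items in the partition

def query_half(forward):
    # Read one half of the partition, either from the start or from the end.
    resp = table.query(
        KeyConditionExpression=Key("pk").eq("partition-1"),  # placeholder key
        ScanIndexForward=forward,
        Limit=N // 2,
    )
    return resp["Items"]

with ThreadPoolExecutor(max_workers=2) as pool:
    first_half, second_half = pool.map(query_half, [True, False])

# The backward query returns items in reverse key order; flip it before merging.
items = first_half + list(reversed(second_half))
```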
Use DAX - DynamoDB Caching Layer
If your 1 MB of data is spread across thousands of items, consider using fewer items, with each item holding more data inside itself.
Consider using a compression algorithm like brotli to compress the data you store in a single DynamoDB item (a sketch follows below). I once had success with this approach. Depending on the format, it can easily reduce your data size by 4x, which translates into roughly 4x faster query time, or about 8x faster when combined with the approach described in item #1.
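And a sketch of the compression idea, assuming the brotli Python package and a table where the whole payload lives in one binary attribute (names are placeholders):

```python
import boto3
import brotli

table = boto3.resource("dynamodb").Table("my-table")  # placeholder table name

def put_compressed(pk, payload: bytes):
    # Store the whole payload as a single compressed binary attribute.
    table.put_item(Item={"pk": pk, "blob": brotli.compress(payload)})

def get_decompressed(pk) -> bytes:
    item = table.get_item(Key={"pk": pk})["Item"]
    return brotli.decompress(item["blob"].value)  # boto3 wraps binary attributes
```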
Also, beware that constantly reading 1 MB of data from a database will incur significant costs.

AWS Elasticsearch indexing memory usage issue

The problem: very frequent "403 Request throttled due to too many requests" errors during data indexing, which appears to be a memory usage issue.
The infrastructure:
Elasticsearch version: 7.8
t3.small.elasticsearch instance (2 vCPU, 2 GB memory)
Default settings
Single domain, 1 node, 1 shard per index, no replicas
There are 3 indices with searchable data. Two of them have roughly 1 million documents (500-600 MB) each and one has 25k (~20 MB). Indexing is not entirely straightforward (it has history tracking), so I've been testing the refresh parameter with true and wait_for values, or calling refresh separately when needed. The process uses search and bulk queries (I've tried batch sizes of 500 and 1000). There should be a 10MB request limit on the AWS side, so these are safely below that. I've also tested adding 0.5/1 second delays between requests, but none of this fiddling has had any noticeable benefit.
The project is currently in development, so there is basically no traffic besides the indexing process itself. The smallest index generally needs an update once every 24 hours, the larger ones once a week. Upscaling the infrastructure is not something we want to do just because indexing is so brittle. Even updating the 25k index only twice in a row tends to fail with the above-mentioned error. Any ideas how to reasonably solve this issue?
Update 2020-11-10
Did some digging in past logs and found that we used to get 429 circuit_breaking_exception errors (instead of the current 403) with a reason along the lines of [parent] Data too large, data for [<http_request>] would be [1017018726/969.9mb], which is larger than the limit of [1011774259/964.9mb], real usage: [1016820856/969.7mb], new bytes reserved: [197870/193.2kb], usages [request=0/0b, fielddata=0/0b, in_flight_requests=197870/193.2kb, accounting=4309694/4.1mb]. I used the cluster stats API to track memory usage during indexing, but didn't find anything I could identify as a direct cause of the issue.
Ended up creating a solution based on the information I could find. After some searching and reading, it seemed that simply retrying when running into errors is a valid approach with Elasticsearch. For example:
Make sure to watch for TOO_MANY_REQUESTS (429) response codes
(EsRejectedExecutionException with the Java client), which is the way
that Elasticsearch tells you that it cannot keep up with the current
indexing rate. When it happens, you should pause indexing a bit before
trying again, ideally with randomized exponential backoff.
The same guide also has useful information about refreshes:
The operation that consists of making changes visible to search -
called a refresh - is costly, and calling it often while there is
ongoing indexing activity can hurt indexing speed.
By default, Elasticsearch periodically refreshes indices every second,
but only on indices that have received one search request or more in
the last 30 seconds.
In my use case indexing is a single linear process that does not run frequently, so this is what I did:
Disabled automatic refreshes (index.refresh_interval set to -1)
Using the refresh API and the refresh parameter (with a true value) when and where needed
When running into a "403 Request throttled due to too many requests" error, the program keeps retrying every 15 seconds until it succeeds or the time limit (currently 60 seconds) is hit. I'll adjust the numbers/functionality if needed, but results have been good so far.
This way the indexing is still fast, but will slow down when needed to provide better stability.
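For illustration, here is a rough sketch of this flow with the elasticsearch-py client; the endpoint is a placeholder and this isn't the exact code we run:

```python
import time

from elasticsearch import Elasticsearch, helpers
from elasticsearch.exceptions import TransportError

es = Elasticsearch("https://my-domain.region.es.amazonaws.com")  # placeholder endpoint

def index_all(index, actions, retry_every=15, time_limit=60):
    """Bulk index `actions` (a list of bulk action dicts) with refreshes disabled."""
    # Turn off periodic refreshes while indexing.
    es.indices.put_settings(index=index, body={"index": {"refresh_interval": "-1"}})
    deadline = time.time() + time_limit
    try:
        while True:
            try:
                helpers.bulk(es, actions, index=index)
                break
            except TransportError as e:
                # 403/429 means the cluster is throttling us: wait, then retry.
                if e.status_code in (403, 429) and time.time() < deadline:
                    time.sleep(retry_every)
                else:
                    raise
    finally:
        # Restore the default refresh behaviour and make the new docs searchable.
        es.indices.put_settings(index=index, body={"index": {"refresh_interval": None}})
        es.indices.refresh(index=index)
```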

Why is my DynamoDB scan so fast with only 1 provisioned read capacity unit?

I made a table with 1346 items, each item being less than 4KB in size. I provisioned 1 read capacity unit, so I'd expect on average 1 item read per second. However, a simple scan of all 1346 items returns almost immediately.
What am I missing here?
This is likely down to burst capacity, in which unused capacity accumulates over a 300-second period and can then be spent on bursts of activity (such as scanning an entire table).
This means that if you used up all of these credits, other interactions would suffer because they would not have enough capacity available to them.
You can see the amount of consumed WCU/RCU via either CloudWatch metrics or within the DynamoDB interface itself (via the Metrics tab).
You don't give a size for your items except to say "each item being less than 4KB". How much less?
1 RCU will support 2 eventually consistent reads per second of items up to 4KB.
To put that another way, with 1 RCU and eventually consistent reads, you can read 8KB of data per second.
If your records are 4KB, then you get 2 records/sec
1KB, 8/sec
512B, 16/sec
256B, 32/sec
So the "burst" capability already mentioned allowed you to use 55 RCU.
But the small size of your records allowed that 55 RCU to return the data "almost immediately"
There are two things working in your favor here. One is that a Scan operation takes significantly fewer RCUs than you thought it does for small items. The other is "burst capacity". I'll try to explain both:
The DynamoDB pricing page says that "For items up to 4 KB in size, one RCU can perform two eventually consistent read requests per second." This suggests that even if an item is 10 bytes in size, it costs half an RCU to read it with eventual consistency. However, although it isn't stated anywhere, this cost only applies to a GetItem operation that retrieves a single item. In a Scan or Query, it turns out that you don't pay separately for each individual item. Instead, these operations scan data stored on disk sequentially, and you pay for the amount of data read. If you read 1000 tiny items and the total size that DynamoDB had to read from disk was 80KB, you will pay 80KB / 4KB / 2 = 10 RCUs, not 500 RCUs.
This explains why you read 1346 items, and measured only 55 RCUs, not 1346/2 = 673.
The second thing working in your favor is that DynamoDB has the "burst capacity" capability, described here:
DynamoDB currently retains up to 5 minutes (300 seconds) of unused read and write capacity. During an occasional burst of read or write activity, these extra capacity units can be consumed quickly—even faster than the per-second provisioned throughput capacity that you've defined for your table.
So if your table existed for 5 minutes prior to your request, DynamoDB had saved up 300 RCUs for you, which you can consume very quickly. Since 300 RCUs is much more than your scan needed (55), your scan completed very quickly, without throttling.
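If you want to check the actual cost yourself, Scan can report the capacity it consumed; a minimal sketch with boto3 (the table name is a placeholder):

```python
import boto3

table = boto3.resource("dynamodb").Table("my-table")  # placeholder table name

# Ask DynamoDB to report how many RCUs the scan actually consumed.
resp = table.scan(ReturnConsumedCapacity="TOTAL")
print(len(resp["Items"]), "items consumed", resp["ConsumedCapacity"]["CapacityUnits"], "RCUs")
```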
When you do a query, the RCU count applies to the quantity of data read, without considering the number of items read. So if your items are small, say a few bytes each, many of them can be read within a single 4KB RCU.
This is especially useful when reading many items from DynamoDB. It's not immediately obvious that querying many small items is far cheaper and more efficient than BatchGetting them.

Number of WCU equal to number of items to write in DynamoDB?

I have been struggling to understand the meaning of WCU in the AWS DynamoDB documentation. What I understood from the AWS documentation is that:
If your application needs to write 1000 items where each item is
0.2KB in size, then you need to provision 1000 WCU (i.e. 0.2/1 = 0.2, which
rounds up to the nearest 1KB, so 1000 items (to write) * 1KB = 1000 WCU)
If my above understanding is correct, then I am wondering: for applications that need to write millions of records into DynamoDB per second, do those applications need to provision that many millions of WCU?
I'd appreciate it if you could clarify this for me.
I've used DynamoDB in the past (and experienced scaling out the RCU and WCU for my application), and according to the AWS docs:
One write capacity unit represents one write per second for an item up
to 1 KB in size. If you need to write an item that is larger than 1
KB, DynamoDB will need to consume additional write capacity units. The
total number of write capacity units required depends on the item
size.
So it means that if you write a document of size 4.5 KB, it will consume 5 WCU; DynamoDB rounds it up to the next integer.
Also, your understanding that
each item is of size 0.2KB then you need to provision 1000 WCU
(i.e. 0.2/1 = 0.2 which rounds up to the nearest 1KB, so 1000 items (to write) *
1KB = 1000 WCU)
is correct.
To save WCUs, you need to design your system in such a way that your item size stays close to a 1KB boundary.
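As a quick sanity check of that rule, a tiny sketch of the round-up calculation (for standard, non-transactional writes):

```python
import math

def wcu_per_write(item_size_kb: float) -> int:
    # One WCU covers a write of an item up to 1 KB; larger items round up.
    return math.ceil(item_size_kb / 1.0)

assert wcu_per_write(0.2) == 1   # 1000 such writes per second need 1000 WCU
assert wcu_per_write(4.5) == 5   # matches the 4.5 KB example above
```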
Note: To avoid the large cost associated with DynamoDB, if you have lots of reads, you can use caching on top of DynamoDB, which AWS also suggests and which we implemented as well. (If your application is write-heavy, this approach will not work and you should consider some other alternative like Elasticsearch etc.)
According to the http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GuidelinesForTables.html doc, see the excerpt below:
A caching solution can mitigate the skewed read activity for popular
items. In addition, since it reduces the amount of read activity
against the table, caching can help reduce your overall costs for
using DynamoDB.

huge inserts, no updates, moderate reads with billions of rows datastore

Our system needs 10 million inserts per day (structured data) and storage for up to 3 months (which adds up to 300 million records, after which we can purge older records). No updates are required, and it should support simple queries (like queries on particular columns, sorted by date). Which data storage solution is efficient for this case? We are thinking an RDBMS will be slow for billions of records.
MySQL might work, if you can optimize the way you insert:
http://dev.mysql.com/doc/refman/5.7/en/optimizing-innodb-bulk-data-loading.html
Of course, it depends on whether your inserts are evenly spread or arrive in peaks.
If you have peaks of very high TPS, I would recommend Aerospike.
MongoDB can also work; however, you will need to use sharding.
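As a sketch of the bulk loading advice from that MySQL page, using mysql-connector-python; the connection details, table, and columns are placeholders:

```python
import mysql.connector

conn = mysql.connector.connect(
    host="db-host", user="app", password="secret", database="events"  # placeholders
)
conn.autocommit = False  # commit in batches instead of per row, per the InnoDB guide

rows = [
    ("2016-01-01 00:00:00", "sensor-1", 42.0),  # placeholder rows
    ("2016-01-01 00:00:01", "sensor-2", 17.5),
]

cur = conn.cursor()
# executemany lets the driver fold the rows into multi-row INSERT statements.
cur.executemany(
    "INSERT INTO events (created_at, source, value) VALUES (%s, %s, %s)",
    rows,
)
conn.commit()
cur.close()
conn.close()
```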