AWS Elasticsearch indexing memory usage issue - amazon-web-services

The problem: very frequent "403 Request throttled due to too many requests" errors during data indexing, which appears to be a memory usage issue.
The infrastructure:
Elasticsearch version: 7.8
t3.small.elasticsearch instance (2 vCPU, 2 GB memory)
Default settings
Single domain, 1 node, 1 shard per index, no replicas
There are 3 indices with searchable data. Two of them have roughly 1 million documents (500-600 MB) each and one has 25k (~20 MB). Indexing is not trivial (it has history tracking), so I've been testing the refresh parameter with true and wait_for values, and calling the refresh API separately when needed. The process uses search and bulk queries (I've tried batch sizes of 500 and 1000). There should be a 10 MB request-size limit on the AWS side, so these are safely below that. I've also tested adding 0.5/1 second delays between requests, but none of this fiddling has had any noticeable benefit.
The project is currently in development, so there is basically no traffic besides the indexing process itself. The smallest index generally needs an update once every 24 hours, the larger ones once a week. We don't want to scale up the infrastructure just because indexing is so brittle. Even updating only the 25k-document index twice in a row tends to fail with the above-mentioned error. Any ideas how to reasonably solve this issue?
Update 2020-11-10
Did some digging in past logs and found that we used to get 429 circuit_breaking_exception errors (instead of the current 403) with a reason along the lines of [parent] Data too large, data for [<http_request>] would be [1017018726/969.9mb], which is larger than the limit of [1011774259/964.9mb], real usage: [1016820856/969.7mb], new bytes reserved: [197870/193.2kb], usages [request=0/0b, fielddata=0/0b, in_flight_requests=197870/193.2kb, accounting=4309694/4.1mb]. Used the cluster stats API to track memory usage during indexing, but didn't find anything that I could identify as a direct cause of the issue.

Ended up creating a solution based on the information that I could find. After some searching and reading it seemed like just trying again when running into errors is a valid approach with Elasticsearch. For example:
Make sure to watch for TOO_MANY_REQUESTS (429) response codes
(EsRejectedExecutionException with the Java client), which is the way
that Elasticsearch tells you that it cannot keep up with the current
indexing rate. When it happens, you should pause indexing a bit before
trying again, ideally with randomized exponential backoff.
The same guide also has useful information about refreshes:
The operation that consists of making changes visible to search -
called a refresh - is costly, and calling it often while there is
ongoing indexing activity can hurt indexing speed.
By default, Elasticsearch periodically refreshes indices every second,
but only on indices that have received one search request or more in
the last 30 seconds.
In my use case indexing is a single linear process that does not occur frequently so this is what I did:
Disabled automatic refreshes (index.refresh_interval set to -1)
Using refresh API and refresh parameter (with true value) when and where needed
When running into a "403 Request throttled due to too many requests" error the program will keep trying every 15 seconds until it succeeds or the time limit (currently 60 seconds) is hit. Will adjust the numbers/functionality if needed, but results have been good so far.
This way the indexing is still fast, but will slow down when needed to provide better stability.
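The retry behaviour described above can be sketched roughly like this; the `ThrottledError` class and the `operation` callable are stand-ins for illustration, not parts of any Elasticsearch client:

```python
import time

class ThrottledError(Exception):
    """Stand-in for a 403/429 'Request throttled' response."""

def with_throttle_retry(operation, wait_seconds=15, time_limit=60, sleep=time.sleep):
    """Call `operation`; on throttling, wait `wait_seconds` and retry
    until it succeeds or `time_limit` seconds of waiting have accumulated."""
    waited = 0
    while True:
        try:
            return operation()
        except ThrottledError:
            if waited >= time_limit:
                raise  # give up after the time limit is exhausted
            sleep(wait_seconds)
            waited += wait_seconds
```

The 15-second interval and 60-second limit mirror the numbers above; swapping in randomized exponential backoff, as the guide suggests, is a drop-in change.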

Related

DynamoDB query subsegments show some of them take way too long - How to debug

I tried using https://github.com/shelfio/dynamodb-query-optimized to speed up DynamoDB response time when querying time-series data. Nothing fancy: it uses a GSI (id and timestamp) in the key expression. There isn't much to filter out in the filter expression, so the index is well optimized.
For some reason, if I use the queryOptimized method (which queries 2-way, i.e. scans forward and backward), some of the DynamoDB query subsegments start taking way too long, in the range of 16-25 seconds, which is killing performance and resulting in API Gateway timeouts.
The first 20-25 subsegments seem to start at approximately the same time and have decent performance, i.e. under 500 ms. After those, each subsegment starts about 300 ms later and most of their durations are really long.
How do I debug what is going on here? The traces don't seem to give any further info. Retries are 0 for all of those subsegments, and there are no failures or errors either. The table is configured with on-demand capacity, so I don't imagine capacity is the issue. The Lambda function also has the maximum 10 GB of memory allocated to it. Any ideas on how to figure out what's going on?

Why does the "hatch rate" matter when performance testing?

I'm using Locust for performance testing. It has two parameters: the number of users and the rate at which the users are generated. But why are the users not simply generated all at once? Why does it make a difference?
Looking at the Locust Configuration Options, I think the correct option is spawn-rate.
Coming back to your question: in the performance-testing world the more common term is ramp-up.
The idea is to increase the load gradually, so that you can correlate other performance metrics like response time and throughput with the increasing load.
If you release 1000 users at once you get a limited view: you can only answer whether your system supports 1000 users or not. You won't be able to tell what the maximum is, where the saturation point lies, etc.
When you increase the load gradually you can state, for example:
Up to 250 users the system behaves normally, i.e. response time is the same, throughput increases as the load increases
After 250 users response time starts growing
After 400 users response time starts exceeding acceptable thresholds
After 600 users errors start occurring
etc.
Also if you decrease the load gradually you can tell whether the system gets back to normal when the load decreases.
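To illustrate why the ramp matters, here is a tiny sketch (plain Python, not the Locust API) mapping elapsed time to active users for a given spawn rate. This mapping is what lets you attribute a response-time change to a specific load level:

```python
def active_users(elapsed_seconds, total_users, spawn_rate):
    """Users running `elapsed_seconds` into the test when users are
    spawned at `spawn_rate` per second, capped at `total_users`."""
    return min(total_users, int(elapsed_seconds * spawn_rate))

# With 1000 users at a spawn rate of 10/s, full load is reached after 100 s;
# a response-time spike observed at t = 45 s can be attributed to ~450 users.
```

With an all-at-once release, every observation happens at the same (maximum) load, so no such attribution is possible.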

Is there any way to specify a max number of retries when using s3cmd?

I've looked through the usage guide as well as the config docs and I'm just not seeing it. This is the output for my bash script that uses s3cmd sync when S3 appeared to be down:
WARNING: Retrying failed request: /some/bucket/path/
WARNING: 503 (Service Unavailable):
WARNING: Waiting 3 sec...
WARNING: Retrying failed request: /some/bucket/path/
WARNING: 503 (Service Unavailable):
WARNING: Waiting 6 sec...
ERROR: The read operation timed out
It looks like it is retrying twice using exponential backoffs, then failing. Surely there must be some way to explicitly state how many times s3cmd should retry a failed network call?
I don't think you can set the maximum retry count. I had a look at the source code on GitHub (https://github.com/s3tools/s3cmd/blob/master/S3/S3.py).
Looks like that value is 5 and hard-coded:
Line 240:
## Maximum attempts of re-issuing failed requests
_max_retries = 5
And the retry interval is calculated as:
Line 1004:
def _fail_wait(self, retries):
    # Wait a few seconds. The more it fails the more we wait.
    return (self._max_retries - retries + 1) * 3
and the actual code that carries out the retries:
if response["status"] >= 500:
    e = S3Error(response)
    if response["status"] == 501:
        ## NotImplemented server error - no need to retry
        retries = 0
    if retries:
        warning(u"Retrying failed request: %s" % resource['uri'])
        warning(unicode(e))
        warning("Waiting %d sec..." % self._fail_wait(retries))
        time.sleep(self._fail_wait(retries))
        return self.send_request(request, retries - 1)
    else:
        raise e
So I think that after the second retry some other error (the read timeout) occurred and knocked it out of the retry loop.
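Since the count is hard-coded, one workaround is to wrap the s3cmd invocation and run your own outer retry loop. A sketch under that assumption; the command line and backoff numbers are my own, not s3cmd options:

```python
import subprocess
import time

def run_with_retries(cmd, max_attempts=8, base_wait=3, sleep=time.sleep):
    """Run `cmd`; on a non-zero exit code, wait with linear backoff and
    retry, up to `max_attempts` total attempts. Returns the final exit code."""
    for attempt in range(1, max_attempts + 1):
        code = subprocess.call(cmd)
        if code == 0:
            return 0
        if attempt < max_attempts:
            sleep(base_wait * attempt)  # 3 s, 6 s, 9 s, ...
    return code

# Example (hypothetical paths):
# run_with_retries(["s3cmd", "sync", "local/", "s3://some/bucket/path/"])
```

Each inner s3cmd run still does its own 5 hard-coded retries; the wrapper just multiplies how many total attempts you get.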
It's very unlikely the 503 is because S3 is down; it's almost never, ever 'down'. More likely your account has been throttled because you are making too many requests in too short a period.
You should either slow down your requests, if you control the speed, or pick better keys, i.e. keys that don't all start with the same prefix; a nice wide range of keys will allow S3 to spread the workload better.
From Jeff Barr's blog post:
Further, keys in S3 are partitioned by prefix.
As we said, S3 has automation that continually looks for areas of the
keyspace that need splitting. Partitions are split either due to
sustained high request rates, or because they contain a large number
of keys (which would slow down lookups within the partition). There is
overhead in moving keys into newly created partitions, but with
request rates low and no special tricks, we can keep performance
reasonably high even during partition split operations. This split
operation happens dozens of times a day all over S3 and simply goes
unnoticed from a user performance perspective. However, when request
rates significantly increase on a single partition, partition splits
become detrimental to request performance. How, then, do these heavier
workloads work over time? Smart naming of the keys themselves!
We frequently see new workloads introduced to S3 where content is
organized by user ID, or game ID, or other similar semi-meaningless
identifier. Often these identifiers are incrementally increasing
numbers, or date-time constructs of various types. The unfortunate
part of this naming choice where S3 scaling is concerned is two-fold:
First, all new content will necessarily end up being owned by a single
partition (remember the request rates from above…). Second, all the
partitions holding slightly older (and generally less ‘hot’) content
get cold much faster than other naming conventions, effectively
wasting the available operations per second that each partition can
support by making all the old ones cold over time.
The simplest trick that makes these schemes work well in S3 at nearly
any request rate is to simply reverse the order of the digits in this
identifier (use seconds of precision for date or time-based
identifiers). These identifiers then effectively start with a random
number – and a few of them at that – which then fans out the
transactions across many potential child partitions. Each of those
child partitions scales close enough to linearly (even with some
content being hotter or colder) that no meaningful operations per
second budget is wasted either. In fact, S3 even has an algorithm to
detect this parallel type of write pattern and will automatically
create multiple child partitions from the same parent simultaneously –
increasing the system’s operations per second budget as request heat
is detected.
https://aws.amazon.com/blogs/aws/amazon-s3-performance-tips-tricks-seattle-hiring-event/
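The digit-reversal trick from the quote can be sketched in a few lines (the key layout is illustrative):

```python
def spread_key(sequential_id, suffix):
    """Reverse the digits of an incrementing ID so keys fan out across
    S3 partitions instead of piling onto one 'hot' prefix."""
    return "%s/%s" % (str(sequential_id)[::-1], suffix)

# Sequential IDs 2134857, 2134858, 2134859 become prefixes
# 7584312/..., 8584312/..., 9584312/... -- adjacent uploads now start
# with different characters and can land in different partitions.
```

The trade-off is that listing keys in their original order becomes harder, so this suits write-heavy workloads where keys are fetched individually.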

High rate of InternalServerError from DynamoDB

So according to Amazon's DynamoDB error handling docs, it's expected behavior that you might sometimes receive 500 errors (they do not specify why this might occur or how often). In such a case, you're supposed to implement retries with exponential backoff, starting at about 50 ms.
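That backoff schedule can be sketched as follows; the 50 ms base comes from the docs mentioned above, while the cap and the full jitter are my assumptions:

```python
import random

def backoff_delay(attempt, base_ms=50, cap_ms=60_000):
    """Exponential backoff with full jitter: attempt 0 waits up to 50 ms,
    attempt 1 up to 100 ms, and so on, capped at `cap_ms` milliseconds."""
    return random.uniform(0, min(cap_ms, base_ms * 2 ** attempt))
```

Jitter matters here because many workers backing off in lockstep just synchronize their retries into the same spikes.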
In my case, I'm processing and batch writing a very large amount of data in parallel (30 nodes, each running about 5 concurrent threads). I expect this to take a very long time.
My hash key is fairly well balanced (user_id) and my throughput is set to 20000 write capacity units.
When I spin things up, all starts off pretty well. I hit the throughput and start backing off and for a while I oscillate nicely around the max capacity. However, pretty soon I start getting tons of 500 responses with InternalServerError as the exception. No other information is provided, of course. I back off and I back off, until I'm waiting about 1 minute between retries and everything seizes up and I'm no longer getting 200s at all.
I feel like there must be something wrong with my queries, or perhaps specific queries, but I have no way to investigate. There's simply no information about my error coming back from the server beyond the "Internal" and "Error" part.
Halp?

Amazon SimpleDB Woes: Implementing counter attributes

Long story short, I'm rewriting a piece of a system and am looking for a way to store some hit counters in AWS SimpleDB.
For those of you not familiar with SimpleDB, the (main) problem with storing counters is that the cloud propagation delay is often over a second. Our application currently gets ~1,500 hits per second. Not all those hits will map to the same key, but a ballpark figure might be around 5-10 updates to a key every second. This means that if we were to use a traditional update mechanism (read, increment, store), we would end up inadvertently dropping a significant number of hits.
One potential solution is to keep the counters in memcache and use a cron task to push the data. The big problem with this is that it isn't the "right" way to do it: memcache shouldn't really be used for persistent storage; after all, it's a caching layer. In addition, we'll end up with issues when we do the push: making sure we delete the correct elements, and hoping there is no contention for them as we delete them (which is very likely).
Another potential solution is to keep a local SQL database and write the counters there, updating our SimpleDB out-of-band every so many requests or running a cron task to push the data. This solves the syncing problem, as we can include timestamps to easily set boundaries for the SimpleDB pushes. Of course, there are still other issues, and though this might work with a decent amount of hacking, it doesn't seem like the most elegant solution.
Has anyone encountered a similar issue in their experience, or have any novel approaches? Any advice or ideas would be appreciated, even if they're not completely fleshed out. I've been thinking about this one for a while, and could use some new perspectives.
The existing SimpleDB API does not lend itself naturally to being a distributed counter. But it certainly can be done.
Working strictly within SimpleDB there are 2 ways to make it work: an easy method that requires something like a cron job to clean up, or a much more complex technique that cleans as it goes.
The Easy Way
The easy way is to create a different item for each "hit", with a single attribute which is the key. Pump the domain(s) with counts quickly and easily. When you need to fetch the count (presumably much less often) you have to issue a query:
SELECT count(*) FROM domain WHERE key='myKey'
Of course this will cause your domain(s) to grow unbounded and the queries will take longer and longer to execute over time. The solution is a summary record where you roll up all the counts collected so far for each key. It's just an item with attributes for the key {summary='myKey'} and a "Last-Updated" timestamp with granularity down to the millisecond. This also requires that you add the "timestamp" attribute to your "hit" items. The summary records don't need to be in the same domain. In fact, depending on your setup, they might best be kept in a separate domain. Either way you can use the key as the itemName and use GetAttributes instead of doing a SELECT.
Now getting the count is a two step process. You have to pull the summary record and also query for 'Timestamp' strictly greater than whatever the 'Last-Updated' time is in your summary record and add the two counts together.
SELECT count(*) FROM domain WHERE key='myKey' AND timestamp > '...'
You will also need a way to update your summary record periodically. You can do this on a schedule (every hour) or dynamically based on some other criteria (for example do it during regular processing whenever the query returns more than one page). Just make sure that when you update your summary record you base it on a time that is far enough in the past that you are past the eventual consistency window. 1 minute is more than safe.
This solution works in the face of concurrent updates because even if many summary records are written at the same time, they are all correct and whichever one wins will still be correct because the count and the 'Last-Updated' attribute will be consistent with each other.
This also works well across multiple domains. Even if you keep your summary records with the hit records, you can pull the summary records from all your domains simultaneously and then issue your queries to all domains in parallel. The reason to do this is if you need higher throughput for a key than you can get from one domain.
This works well with caching. If your cache fails you have an authoritative backup.
The time will come where someone wants to go back and edit / remove / add a record that has an old 'Timestamp' value. You will have to update your summary record (for that domain) at that time or your counts will be off until you recompute that summary.
This will give you a count that is in sync with the data currently viewable within the consistency window. This won't give you a count that is accurate up to the millisecond.
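The two-step read above can be sketched as plain Python; the dict and list below stand in for the GetAttributes call and the SELECT results, not a real SimpleDB API:

```python
def current_count(summary, hit_timestamps):
    """Combine the rolled-up summary with hits recorded since it was written.

    `summary` is {'count': int, 'last_updated': timestamp} (the summary item);
    `hit_timestamps` is what SELECT count(*) ... timestamp > last_updated
    would count on the hit items.
    """
    recent = sum(1 for ts in hit_timestamps if ts > summary["last_updated"])
    return summary["count"] + recent
```

Concurrent summary writers stay safe because any summary whose count and last_updated were captured together yields the same total.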
The Hard Way
The other way is to do the normal read-increment-store mechanism, but also write a composite value that includes a version number along with your value, where the version number you use is 1 greater than the version number of the value you are updating.
get(key) returns the attribute value="Ver015 Count089"
Here you retrieve a count of 89 that was stored as version 15. When you do an update you write a value like this:
put(key, value="Ver016 Count090")
The previous value is not removed, and you end up with an audit trail of updates that is reminiscent of Lamport clocks.
This requires you to do a few extra things.
The ability to identify and resolve conflicts whenever you do a GET.
A simple version number isn't going to work; you'll want to include a timestamp with resolution down to at least the millisecond, and maybe a process ID as well.
In practice you'll want your value to include the current version number and the version number of the value your update is based on, to more easily resolve conflicts.
You can't keep an infinite audit trail in one item, so you'll need to issue deletes for older values as you go.
What you get with this technique is like a tree of divergent updates: you'll have one value, and then all of a sudden multiple updates will occur, and you will have a bunch of updates based off the same old value, none of which know about each other.
When I say resolve conflicts at GET time I mean that if you read an item and the value looks like this:
     11 --- 12
    /
10 --- 11
    \
     11
You have to be able to figure out that the real value is 14, which you can do if each new value includes the version of the value(s) it is updating.
It shouldn't be rocket science
If all you want is a simple counter, this is way overkill. It shouldn't be rocket science to make a simple counter, which is why SimpleDB may not be the best choice for making simple counters.
That isn't the only way, but most of those things will need to be done if you implement a SimpleDB solution in lieu of actually having a lock.
Don't get me wrong, I actually like this method precisely because there is no lock, and the bound on the number of processes that can use this counter simultaneously is around 100 (because of the limit on the number of attributes in an item). And you can get beyond 100 with some changes.
Note
But if all these implementation details were hidden from you and you just had to call increment(key), it wouldn't be complex at all. With SimpleDB the client library is the key to making the complex things simple. But currently there are no publicly available libraries that implement this functionality (to my knowledge).
To anyone revisiting this issue, Amazon just added support for Conditional Puts, which makes implementing a counter much easier.
Now, to implement a counter, simply call GetAttributes, increment the count, and then call PutAttributes with the expected value set correctly. If Amazon responds with a ConditionalCheckFailed error, retry the whole operation.
Note that you can only have one expected value per PutAttributes call. So, if you want to have multiple counters in a single row, then use a version attribute.
pseudo-code:
begin
  attributes = SimpleDB.GetAttributes
  initial_version = attributes[:version]
  attributes[:counter1] += 3
  attributes[:counter2] += 7
  attributes[:version] += 1
  SimpleDB.PutAttributes(attributes, :expected => {:version => initial_version})
rescue ConditionalCheckFailed
  retry
end
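Translated into self-contained Python, with an in-memory class standing in for the SimpleDB domain (the real service would be called through a client library; nothing here is an actual SimpleDB API):

```python
class ConditionalCheckFailed(Exception):
    pass

class FakeStore:
    """In-memory stand-in for a SimpleDB item with conditional PutAttributes."""
    def __init__(self, attributes):
        self.attributes = dict(attributes)

    def get_attributes(self):
        return dict(self.attributes)

    def put_attributes(self, attributes, expected_version):
        # The conditional put only succeeds if nobody updated the item
        # since we read it.
        if self.attributes["version"] != expected_version:
            raise ConditionalCheckFailed()
        self.attributes = dict(attributes)

def increment(store, deltas, max_attempts=10):
    """Read-modify-write with a version attribute as the expected value."""
    for _ in range(max_attempts):
        attrs = store.get_attributes()
        initial_version = attrs["version"]
        for name, delta in deltas.items():
            attrs[name] = attrs.get(name, 0) + delta
        attrs["version"] = initial_version + 1
        try:
            store.put_attributes(attrs, expected_version=initial_version)
            return attrs
        except ConditionalCheckFailed:
            continue  # someone else won the race; re-read and retry
    raise RuntimeError("gave up after repeated conditional failures")
```

The single version attribute is what lets several counters in one item be updated atomically, since only one expected value is allowed per put.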
I see you've accepted an answer already, but this might count as a novel approach.
If you're building a web app, you can use Google's Analytics product to track page impressions (if the page-to-domain-item mapping fits) and then use the Analytics API to periodically push that data into the items themselves.
I haven't thought this through in detail so there may be holes. I'd actually be quite interested in your feedback on this approach given your experience in the area.
Thanks
Scott
For anyone interested in how I ended up dealing with this... (slightly Java-specific)
I ended up using an EhCache on each servlet instance, with the UUID as the key and a Java AtomicInteger as the value. Periodically a thread iterates through the cache and pushes rows to a SimpleDB temp-stats domain, as well as writing a row with the key to an invalidation domain (which fails silently if the key already exists). The thread also decrements the counter by the previous value, ensuring that we don't miss any hits while it was updating. A separate thread pings the SimpleDB invalidation domain and rolls up the stats in the temporary domains (there are multiple rows per key, since we're using multiple EC2 instances), pushing the result to the actual stats domain.
I've done a little load testing, and it seems to scale well. Locally I was able to handle about 500 hits/second before the load tester broke (not the servlets, hah), so if anything I think running on EC2 should only improve performance.
Answer to feynmansbastard:
If you want to store a huge number of events, I suggest using a distributed commit log system such as Kafka or AWS Kinesis. They let you consume a stream of events cheaply and simply (Kinesis's pricing is $25 per month for 1K events per second): you just implement a consumer (in any language) that bulk-reads all events since the previous checkpoint, aggregates counters in memory, flushes the data into permanent storage (DynamoDB or MySQL), and commits the checkpoint.
Events can be logged simply using the nginx log and transferred to Kafka/Kinesis using fluentd. This is a very cheap, performant and simple solution.
I also had similar needs/challenges.
I looked at using Google Analytics and Count.ly. The latter seemed too expensive to be worth it (plus they have a somewhat confusing definition of sessions). I would have loved to use GA, but I spent two days with their libraries and some 3rd-party ones (GaDotNet and one other, maybe from CodeProject). Unfortunately I could only ever see counters post in the GA realtime section, never in the normal dashboards, even when the API reported success. We were probably doing something wrong, but we exceeded our time budget for GA.
We already had an existing SimpleDB counter updated using conditional updates, as mentioned by a previous commenter. This works well, but it suffers when there is contention and concurrency, and counts are missed (for example, our most-updated counter lost several million counts over a period of 3 months, versus a backup system).
We implemented a newer solution which is somewhat similar to the answer to this question, except much simpler.
We just sharded/partitioned the counters. When you create a counter you specify the number of shards, which is a function of how many simultaneous updates you expect. This creates a number of sub-counters, each of which has the shard count stored with it as an attribute:
COUNTER (w/ 5 shards) creates:
shard0 { numshards = 5 } (informational only)
shard1 { count = 0, numshards = 5, timestamp = 0 }
shard2 { count = 0, numshards = 5, timestamp = 0 }
shard3 { count = 0, numshards = 5, timestamp = 0 }
shard4 { count = 0, numshards = 5, timestamp = 0 }
shard5 { count = 0, numshards = 5, timestamp = 0 }
Sharded Writes
Knowing the shard count, just randomly pick a shard and try to write to it conditionally. If that fails because of contention, choose another shard and retry.
If you don't know the shard count, get it from the root shard, which is present regardless of how many shards exist. Because a counter supports multiple simultaneous writes, sharding lessens the contention to whatever level your needs require.
Sharded Reads
If you know the shard count, read every shard and sum them.
If you don't know the shard count, get it from the root shard and then read all and sum.
Because of slow update propagation you can still miss counts when reading, but they should get picked up later. This was sufficient for our needs, although if you wanted more control you could ensure that, when reading, the last timestamp was what you expect, and retry.
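A minimal in-memory sketch of the sharded scheme described above; plain dicts stand in for the SimpleDB items, and conditional-write contention is not simulated:

```python
import random

def make_counter(num_shards):
    """shard0 records the shard count; shard1..shardN hold the counts."""
    counter = {"shard0": {"numshards": num_shards}}
    for i in range(1, num_shards + 1):
        counter["shard%d" % i] = {"count": 0, "numshards": num_shards}
    return counter

def sharded_write(counter, amount=1):
    """Pick a random shard and add to it. In the real scheme this write is
    conditional; on contention you would pick another shard and retry."""
    num_shards = counter["shard0"]["numshards"]
    shard = "shard%d" % random.randint(1, num_shards)
    counter[shard]["count"] += amount

def sharded_read(counter):
    """Read every shard and sum the counts."""
    num_shards = counter["shard0"]["numshards"]
    return sum(counter["shard%d" % i]["count"]
               for i in range(1, num_shards + 1))
```

Spreading writes over N shards divides the expected contention on any one item by roughly N, at the cost of N reads per count lookup.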