How exactly does batching work in Microsoft Sync Framework?

Let's say I am uploading a large amount of data, which leads to 1000 batches.
After 500 batches the connection is broken. Will I be able to see the changes corresponding to these 500 batches?
Or will they take effect only once all 1000 batches have been transferred after the connection is re-established?

Batches are applied in a single transaction, so you should not see partial syncs.
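For context, here is a minimal sketch of how batching is typically enabled with the Sync Framework 2.1 database providers (scope name, connection strings, and the batching directory are placeholders, and the exact property names should be checked against the version in use). The comment reflects the behaviour described above: the destination applies the spooled batches in one transaction, so an interrupted upload should not leave partial changes visible.

using System;
using System.Data.SqlClient;
using Microsoft.Synchronization;
using Microsoft.Synchronization.Data.SqlServer;

class BatchedUploadSketch
{
    static void Run(string localConnString, string remoteConnString)
    {
        // Both providers sync the same, already provisioned scope; "ProductsScope" is a placeholder.
        var localProvider  = new SqlSyncProvider("ProductsScope", new SqlConnection(localConnString));
        var remoteProvider = new SqlSyncProvider("ProductsScope", new SqlConnection(remoteConnString));

        // Setting MemoryDataCacheSize (in KB) turns batching on: changes are spooled to
        // batch files in BatchingDirectory instead of being sent as one large change set.
        localProvider.MemoryDataCacheSize  = 500;
        remoteProvider.MemoryDataCacheSize = 500;
        localProvider.BatchingDirectory    = @"C:\SyncBatches";
        remoteProvider.BatchingDirectory   = @"C:\SyncBatches";

        var orchestrator = new SyncOrchestrator
        {
            LocalProvider  = localProvider,
            RemoteProvider = remoteProvider,
            Direction      = SyncDirectionOrder.Upload
        };

        // Per the answer above, the destination applies the received batches in a single
        // transaction, so a connection drop mid-upload should not leave partial changes visible.
        SyncOperationStatistics stats = orchestrator.Synchronize();
        Console.WriteLine($"Upload changes applied: {stats.UploadChangesApplied}");
    }
}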

Related

Kestrel tuning for large (>1 MB) JSON responses

I have a .NET Core MVC API. In my controller I query 800 records from the DB; the resulting response body is about 6 MB and the response time is over 6 s. The service runs in the AWS cloud.
I ran several tests to diagnose the service. In all of these scenarios I still read 800 records from the DB. Here is the list of my experiments:
Return only 10 records: response time was always under 800 ms, response body size 20 kB.
Return only 100 records: response time was over 800 ms but with no timeouts, response body size 145 kB.
Use custom JSON serialization in the controller, await JsonSerializer.SerializeAsync(HttpContext.Response.Body, limitedResult); - a slightly better result, but only by about 10%.
Return 850 records: response time was over 6 s, response body size 6 MB.
The service has no memory problems and does not restart.
It looks like Kestrel's problem is serving large response bodies.
My suspicion is the I/O buffering, which for a large response may spill to disk and hurt the performance of the AWS Docker image.
The question is: how do I optimize Kestrel to serve large responses?
UPDATE:
I enabled compression on the server side. The responses compress quite well because of the JSON format, but the result is exactly the same. Network bandwidth is not the problem, so the bottleneck appears to be between my controller and the compression step. Any suggestion on how to configure a .NET Core service to handle large responses (>1 MB)?
Since you are already sure the network is not the issue, this mostly points to time being spent in serialization. Try running the application on a local machine and using a profiler such as PerfView to see where most of the time goes for the big JSON payload.
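If profiling does point at serialization, one pattern worth measuring is streaming the rows instead of materializing the whole 6 MB object graph before writing it. This is only a sketch: it assumes the service can target .NET 6+ (where System.Text.Json streams IAsyncEnumerable<T> responses), and IRecordRepository / RecordDto are hypothetical stand-ins for the real data access code.

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

// Hypothetical DTO and data-access abstraction, used only for this sketch.
public record RecordDto(int Id, string Payload);

public interface IRecordRepository
{
    // e.g. backed by EF Core's AsAsyncEnumerable(), so rows are not materialized up front
    IAsyncEnumerable<RecordDto> StreamAllAsync();
}

[ApiController]
[Route("api/records")]
public class RecordsController : ControllerBase
{
    private readonly IRecordRepository _repository;

    public RecordsController(IRecordRepository repository) => _repository = repository;

    // On .NET 6+, returning IAsyncEnumerable<T> lets System.Text.Json write each record
    // to the response body as it is produced, so buffering is per item rather than per payload.
    [HttpGet]
    public IAsyncEnumerable<RecordDto> GetAll() => _repository.StreamAllAsync();
}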

AWS S3 rate limit and SlowDown errors

I'm refactoring a job that uploads ~1.2 million small files to AWS; previously the upload was done file by file, with processes, on a 64-CPU machine. To make it faster I switched to an async + multiprocess approach, following the S3 rate limits, best practices, and performance guidelines. With sample data I can get execution times down to about 1/10th. With production loads, S3 returns "SlowDown" errors.
The business logic currently produces a folder structure like this:
s3://bucket/this/will/not/change/<shard-key>/<items>
The objects are split evenly across ~30 shard keys, so every prefix holds ~40k items.
Every process writes to its own prefix and launches batches of 3k PUT requests asynchronously until completion. After each batch write there is a sleep to ensure we do not send another batch before 1.1 s has passed, so we stay within the 3,500 PUT requests per second limit.
The problem is that we receive SlowDown errors for ~1 hour, and then the job writes all the files in ~15 minutes. If we lower the limit to 1k/sec it gets even worse, running for hours and never finishing.
This is the distribution of the errors over time for the 3k/sec limit (chart not shown).
We are using Python 3.6 with aiobotocore to run the async code.
Trial and error to work out how to mitigate this takes forever on production data, and testing with a smaller quantity of data gives different results (it works flawlessly).
Did I miss any documentation on how to make the system scale up correctly?
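For reference, here is a rough sketch of the batching pattern described in the question, written with the AWS SDK for .NET (and .NET 6+) purely to keep the examples on this page in one language; the actual job uses Python 3.6 with aiobotocore. The bucket name is a placeholder, and the retry with exponential backoff on "SlowDown" is an added assumption (the mitigation AWS generally suggests for throttling), not part of the original code.

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;

public class ShardUploader
{
    private readonly IAmazonS3 _s3 = new AmazonS3Client();
    private const string Bucket = "bucket";   // placeholder bucket name
    private const int BatchSize = 3000;       // 3k PUTs per batch, as in the question

    public async Task UploadShardAsync(string shardKey, IReadOnlyList<(string Key, string Body)> items)
    {
        foreach (var batch in items.Chunk(BatchSize))
        {
            var sw = Stopwatch.StartNew();
            await Task.WhenAll(batch.Select(i =>
                PutWithBackoffAsync($"this/will/not/change/{shardKey}/{i.Key}", i.Body)));

            // Sleep out the remainder of the 1.1 s window so the next batch stays under ~3,500 PUT/s.
            var remaining = TimeSpan.FromSeconds(1.1) - sw.Elapsed;
            if (remaining > TimeSpan.Zero) await Task.Delay(remaining);
        }
    }

    private async Task PutWithBackoffAsync(string key, string body)
    {
        for (var attempt = 0; ; attempt++)
        {
            try
            {
                await _s3.PutObjectAsync(new PutObjectRequest { BucketName = Bucket, Key = key, ContentBody = body });
                return;
            }
            catch (AmazonS3Exception e) when (e.ErrorCode == "SlowDown" && attempt < 8)
            {
                // Exponential backoff with jitter: gives S3 time to repartition the hot prefix.
                var delayMs = Math.Min(30_000, (1 << attempt) * 100) + Random.Shared.Next(100);
                await Task.Delay(delayMs);
            }
        }
    }
}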

Handling multiple identical messages in AWS SQS

I have an architecture where a customer uploads a file or set of files to S3 for processing. The files are then moved (and untarred/unzipped/etc.) to a more appropriate S3 bucket, and a Lambda places a message in SQS to be picked up by a compute engine. In most cases, only one message per customer request is generated. However, a customer might load, say, 200 images for the same request (all 200 images are slices of a single 3D image), one at a time. This generates 200 Lambda invocations and 200 messages. My compute engine can process the same request multiple times without a problem, but I would like to avoid processing the same request 200+ times (each run takes > 5 minutes on a large EC2 instance).
Is there a way, working within the Amazon tools, to either coalesce messages in a queue that have the same message body into a single message, or to peek into a queue for a message with a specific message body?
The only thing I can think of is to keep a "special" file in my destination S3 bucket that records the last time a Lambda put this message in the queue. The issue with that is a race: say the first image slice comes in and I put "Do this guy" in the queue; 50 more images come in and the Lambdas see that the "special" file is there; the message is picked up and processing starts; the rest of the images come in; processing then finishes and fails because only 50 of the 60 needed images are present, and there are no pending messages left in the queue because I blocked them all...
Or I just suck it up and let the compute run 200 times, fail quickly ~199 times, and then succeed once (or more)...
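For what it's worth, here is a rough C# sketch (AWS SDK for .NET; the real Lambdas may be in another language) of the "special file" idea from the previous paragraph: a marker object per request id records that a message has already been queued. Bucket, queue URL, and key names are placeholders, and it deliberately does not solve the race described above, where slices arriving after processing has started never get re-queued.

using System;
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;
using Amazon.SQS;
using Amazon.SQS.Model;

public class EnqueueOncePerRequest
{
    private readonly IAmazonS3 _s3 = new AmazonS3Client();
    private readonly IAmazonSQS _sqs = new AmazonSQSClient();
    private const string Bucket = "destination-bucket";   // placeholder
    private const string QueueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/compute-queue"; // placeholder

    public async Task HandleSliceAsync(string requestId)
    {
        var markerKey = $"{requestId}/.queued";            // the "special" marker object

        try
        {
            // If the marker already exists, a message for this request is already in flight.
            await _s3.GetObjectMetadataAsync(Bucket, markerKey);
            return;
        }
        catch (AmazonS3Exception e) when (e.StatusCode == System.Net.HttpStatusCode.NotFound)
        {
            // No marker yet: record when we queued, then enqueue a single message for the compute engine.
            await _s3.PutObjectAsync(new PutObjectRequest
            {
                BucketName = Bucket,
                Key = markerKey,
                ContentBody = DateTimeOffset.UtcNow.ToString("O")
            });
            await _sqs.SendMessageAsync(new SendMessageRequest(QueueUrl, $"Do this guy: {requestId}"));
        }
    }
}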

Get a constant RPS in JMeter

I have been trying to use JMeter to test my server. I have a CloudSearch endpoint on AWS and need to test whether it can scale up to 25,000 requests per second without failing. I tried JMeter with a Constant Throughput Timer set to a throughput of 1,500,000 per minute and 1000 threads, and ran it for 10 minutes. But when I review the aggregate report it shows an average of only 25 requests per second. How do I get an average of around 25,000 requests per second?
The Constant Throughput Timer can only pause threads to reach the specified "Target Throughput" value, so make sure you provide enough virtual users (threads) to generate the desired "requests per minute" value.
You don't have enough threads to achieve that many requests per second!
To get an average of ~25,000 requests per second, you have to increase the number of threads.
Remember, the number of threads will affect the results if your server slows down. If it does and you don't have enough threads, you will not be injecting the expected load and will end up with fewer transactions performed.
You need to increase the number of concurrent users to at least 25,000 (that assumes a 1-second response time; with a 2-second response time you would need 50,000).
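A quick back-of-the-envelope check of those numbers, using Little's Law (required threads ≈ target throughput × response time):

using System;

class ThreadEstimate
{
    static void Main()
    {
        const double targetRps = 25_000;   // required requests per second

        foreach (var responseTimeSec in new[] { 1.0, 2.0 })
        {
            // Little's Law: concurrency = throughput x response time
            double threadsNeeded = targetRps * responseTimeSec;
            Console.WriteLine($"{responseTimeSec:0.#} s response time -> ~{threadsNeeded:N0} threads");
        }
        // Output: 1 s -> ~25,000 threads; 2 s -> ~50,000 threads
    }
}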
JMeter's default configuration is not suitable for high loads; it is fine for test development and debugging, but for real load you need to consider some constraints, i.e.:
Run JMeter test in non-GUI mode
Increase JVM Heap Size and tune other parameters
Disable all listeners during the test
If the above tips don't help, follow the other recommendations from the 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure article, or consider distributed testing.

Counting number of requests per second generated by JMeter client

This is how the application setup goes:
2 c4.8xlarge instances
10 m4.4xlarge JMeter clients generating load; each client uses 70 threads.
While conducting a load test on a simple GET request (a 685-byte page), I ran into reduced throughput some time into the test run. A throughput of about 18,000 requests/sec is reached with 700 threads, stays at that level for 40 minutes, and then drops. The thread count remains 700 throughout the test. I have executed tests with different load patterns, but the results have been the same.
The application response time stays consistently low throughout the test.
According to the ELB monitor, there is a reduction in the number of requests (and hence, I suppose, the lower throughput).
No errors are encountered during the test run. I also set a connect timeout on the HTTP request, but still no errors.
I discussed this issue with AWS support at length, and according to them I am not hitting any network limit during test execution.
Given that the number of threads remains constant during the test run, what are these threads doing? Is there a metric I can check to find out the number of requests generated (not hits/sec) by a JMeter client instance?
Test plan: http://justpaste.it/qyb0
Try adding the following test elements:
HTTP Cache Manager
and especially the DNS Cache Manager, as it might be that all your threads are hitting only one c4.8xlarge instance while the other one sits idle. See The DNS Cache Manager: The Right Way To Test Load Balanced Apps article for an explanation and details.
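To see why the DNS Cache Manager can matter here, note that a load-balanced endpoint usually resolves to several IP addresses; if a client caches the first answer for the whole run, every thread keeps talking to the same node. A tiny stand-alone check (the hostname is a placeholder):

using System;
using System.Net;
using System.Threading.Tasks;

class ElbDnsCheck
{
    static async Task Main()
    {
        // A load balancer hostname typically returns one address per node/availability zone.
        IPAddress[] addresses = await Dns.GetHostAddressesAsync("my-elb-1234567890.us-east-1.elb.amazonaws.com");
        Console.WriteLine($"Endpoint resolves to {addresses.Length} address(es):");
        foreach (var ip in addresses)
            Console.WriteLine($"  {ip}");
    }
}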