Does anyone know the best future-proof practice/pattern for enabling gzip compression in Apollo Server v2, given that the v3 roadmap says all apollo-server-* integration packages are going to be deprecated in favour of the in-house HTTP transport layer?
The short answer is that you'll have to use something like NGINX: https://docs.nginx.com/nginx/admin-guide/web-server/compression/
Use apollo-server-express
v2: https://www.apollographql.com/docs/apollo-server/v2/migration-two-dot/#apollo-server-2-new-pattern
v3: https://www.apollographql.com/docs/apollo-server/integrations/middleware/#apollo-server-express
with express/compression: http://expressjs.com/en/resources/middleware/compression.html
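For example, a minimal sketch of Apollo Server 2 mounted on Express with the compression middleware (the typeDefs/resolvers here are placeholders):

```js
const express = require('express');
const compression = require('compression');
const { ApolloServer, gql } = require('apollo-server-express');

// Placeholder schema; replace with your own typeDefs/resolvers.
const typeDefs = gql`
  type Query {
    hello: String
  }
`;
const resolvers = { Query: { hello: () => 'world' } };

const app = express();
app.use(compression()); // gzip/deflate all responses, including /graphql

const server = new ApolloServer({ typeDefs, resolvers });
server.applyMiddleware({ app });

app.listen(4000, () => {
  console.log(`GraphQL ready at http://localhost:4000${server.graphqlPath}`);
});
```

Since compression happens at the Express layer, this keeps working regardless of how the Apollo-side transport evolves; note that in Apollo Server 3 you would additionally need to await server.start() before calling applyMiddleware.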
In the AWS SDK for Java v1, I have a client with a guaranteed execution timeout setting, which can also be configured per request:
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/AmazonWebServiceRequest.html#setSdkClientExecutionTimeout-int-
But I cannot find an equivalent in SDK v2, sync or async.
I was wondering if anyone on SO or at AWS would know about this. Is this an intentional feature drop, or am I missing some other setting?
Found the solution: https://github.com/aws/aws-sdk-java-v2/pull/657#pullrequestreview-799397170. This is for async clients and at the client level. If you want to set it at the request level (for async clients), use the orTimeout method of CompletableFuture instead: https://docs.oracle.com/javase/9/docs/api/java/util/concurrent/CompletableFuture.html#orTimeout-long-java.util.concurrent.TimeUnit-
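A minimal sketch of the request-level variant, assuming a Java 9+ runtime; the DynamoDB client and the 5-second limit are just illustrative:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.TimeUnit;

import software.amazon.awssdk.services.dynamodb.DynamoDbAsyncClient;
import software.amazon.awssdk.services.dynamodb.model.ListTablesResponse;

public class RequestLevelTimeout {
    public static void main(String[] args) {
        DynamoDbAsyncClient client = DynamoDbAsyncClient.create();

        // Apply a per-request timeout to the future returned by the async call.
        CompletableFuture<ListTablesResponse> future = client.listTables()
                .orTimeout(5, TimeUnit.SECONDS);

        try {
            ListTablesResponse response = future.join();
            System.out.println("Tables: " + response.tableNames());
        } catch (CompletionException e) {
            // orTimeout completes the future with a TimeoutException as the cause.
            System.err.println("Request failed or timed out: " + e.getCause());
        } finally {
            client.close();
        }
    }
}
```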
I am trying to sync 1 million records to ES, and I am doing it using the bulk API in batches of 2k.
But after inserting around 25k-32k documents, Elasticsearch throws the following exception:
Unable to parse response body: org.elasticsearch.ElasticsearchStatusException
ElasticsearchStatusException[Unable to parse response body]; nested: ResponseException[method [POST], host [**********], URI [/_bulk?timeout=1m], status line [HTTP/1.1 403 Request throttled due to too many requests]
403 Request throttled due to too many requests /_bulk]; nested: ResponseException[method [POST], host [************], URI [/_bulk?timeout=1m], status line [HTTP/1.1 403 Request throttled due to too many requests]
403 Request throttled due to too many requests /_bulk];
I am using AWS Elasticsearch.
I think I need to implement a wait strategy to handle this, something like periodically checking the ES status and calling bulk insert only when ES is okay.
But I am not sure how to implement it. Does ES offer anything pre-built for this?
Or is there a better way to handle it?
Thanks in advance.
Update:
I am using AWS Elasticsearch version 6.8.
Thanks @dravit for including my previous SO answer in the comments. After following the comments, it seems the OP wants to improve the performance of bulk indexing and wants exponential backoff, which I don't think Elasticsearch provides out of the box.
I see that you are putting a fixed pause of 1 second between bulk requests, which will not work in all cases, and if you have a large number of batches and documents to be indexed it will certainly take a lot of time; a simple exponential backoff (sketched after these suggestions) is usually more robust. There are a few more suggestions from my side to improve the performance:
Follow my tips to improve reindexing speed in Elasticsearch, see which of the things listed there apply to your setup, and measure by what factor applying them improves speed.
Find a batching strategy that best suits your environment. I am not sure, but this article from @spinscale, the developer of the Java high-level REST client, might help, or you can ask a question on https://discuss.elastic.co/. I remember he shared a very good batching strategy in one of his webinars, but I couldn't find the link to it.
Monitor various ES metrics apart from the bulk thread pool and queue size, and see whether your ES still has capacity; if so, you can increase the queue size and the rate at which you send requests to ES.
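As for the exponential backoff mentioned above, a minimal sketch of a manual retry wrapper around the high-level REST client's bulk call might look like this (the attempt count and delays are illustrative assumptions, not tuned values):

```java
import java.io.IOException;

import org.elasticsearch.ElasticsearchStatusException;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;

public class BulkWithBackoff {

    /** Retries one bulk request with exponentially growing pauses (1s, 2s, 4s, ...). */
    public static BulkResponse bulkWithBackoff(RestHighLevelClient client, BulkRequest request)
            throws IOException, InterruptedException {
        final int maxAttempts = 5;
        long delayMillis = 1_000L;

        for (int attempt = 1; ; attempt++) {
            try {
                return client.bulk(request, RequestOptions.DEFAULT);
            } catch (ElasticsearchStatusException | IOException e) {
                // Covers throttling responses such as the 403/429 from the question.
                if (attempt == maxAttempts) {
                    throw e; // give up after the last attempt
                }
                Thread.sleep(delayMillis); // back off before retrying the same batch
                delayMillis *= 2;
            }
        }
    }
}
```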
Check the error handling guide here
If you receive persistent 403 Request throttled due to too many requests or 429 Too Many Requests errors, consider scaling vertically. Amazon Elasticsearch Service throttles requests if the payload would cause memory usage to exceed the maximum size of the Java heap.
Scale your application vertically or increase the delay between requests.
I'm working on a custom implementation of the NTLM and NTLMv2 protocols for authentication against a Lync (Skype for Business) server. While reading the official specification and http://davenport.sourceforge.net/ntlm.html, I have run into several questions I can't find an answer to. One of them is the following:
The question is about the NTLMv2 response (specifically the blob). It says the blob should contain a timestamp counted "since January 1, 1601". What is this for? How does it ensure security if the server doesn't know my local time? Or maybe I should use the timestamp provided by the server in the Type 2 message?
Incomplete answer, but I don't have a better one for now.
Or maybe I should use the timestamp provided by the server in the Type 2 message?
Yes. As stated in the documentation linked above in my comment (MS-NLMP):
If NTLM v2 authentication is used, the client SHOULD send the timestamp in the
CHALLENGE_MESSAGE.<47>
If there exists a CHALLENGE_MESSAGE.TargetInfo.AvId == MsvAvTimestamp
    Set Time to CHALLENGE_MESSAGE.TargetInfo.Value of that AVPair
Else
    Set Time to Currenttime
Endif
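To make the "since January 1, 1601" part concrete: the blob's timestamp is in Windows FILETIME format, a 64-bit count of 100-nanosecond intervals since that date. A small sketch of deriving it from the local clock, which you would only fall back to when the server does not supply MsvAvTimestamp (the class and method names here are just illustrative):

```java
import java.time.Instant;

public class NtlmTimestamp {

    // Offset between the Windows epoch (1601-01-01) and the Unix epoch (1970-01-01),
    // expressed in 100-nanosecond ticks: 11,644,473,600 seconds * 10,000,000.
    private static final long EPOCH_OFFSET_TICKS = 11_644_473_600L * 10_000_000L;

    /** Current time as a FILETIME value, i.e. 100-ns ticks since 1601-01-01 UTC. */
    public static long currentNtlmTimestamp() {
        Instant now = Instant.now();
        long unixTicks = now.getEpochSecond() * 10_000_000L + now.getNano() / 100L;
        return unixTicks + EPOCH_OFFSET_TICKS;
    }

    public static void main(String[] args) {
        System.out.printf("NTLMv2 timestamp: 0x%016x%n", currentNtlmTimestamp());
    }
}
```

When the Type 2 message does carry MsvAvTimestamp, copying that value as the quoted pseudocode says also sidesteps the clock-skew concern raised in the question.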
I noticed that uploading small files to an S3 bucket is very slow. A file of about 100 KB takes 200 ms to upload. Both the bucket and our app are in Oregon; the app is hosted on EC2.
I googled it and found some blogs; e.g. http://improve.dk/pushing-the-limits-of-amazon-s3-upload-performance/
They mention that HTTP can bring a much bigger speed gain than HTTPS.
We're using boto 2.45; I'm wondering whether it uses HTTPS or HTTP by default, and whether there is any param to configure this behaviour in boto?
Thanks in advance!
The boto3 client includes a use_ssl parameter:
use_ssl (boolean) -- Whether or not to use SSL. By default, SSL is used. Note that not all services support non-ssl connections.
Looks like it's time for you to move to boto3!
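For instance, a minimal sketch (region, bucket and file names are placeholders):

```python
import boto3

# Create an S3 client with SSL disabled (plain HTTP).
s3 = boto3.client("s3", region_name="us-west-2", use_ssl=False)

# Upload a small local file; bucket and key names are placeholders.
s3.upload_file("local-file.txt", "my-bucket", "remote-key.txt")
```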
I tried boto3, which has a nice parameter "use_ssl" in the connection constructor. However, it turned out that boto3 is significantly slower than boto2... there are actually already many posts online about this issue.
Finally, I found that boto2 also has a similar param, "is_secure":
self.s3Conn = S3Connection(config.AWS_ACCESS_KEY_ID, config.AWS_SECRET_KEY, host=config.S3_ENDPOINT, is_secure=False)
Setting is_secure to False saves us about 20 ms. Not bad.
I'm trying to call a webservice using the WSClient API from Play Framework.
The main issue is that I want to transfer huge JSON payloads (more than 2 MB) without exceeding the maximum payload size.
To do so, I would like to compress the request using gzip (with the HTTP header Content-Encoding: gzip). In the documentation, the parameter play.ws.compressionEnabled is mentioned, but it only seems to enable WSResponse compression.
I have tried to manually compress the payload (using a GZIPOutputStream) and to set the header Content-Encoding: gzip, but the server throws an io.netty.handler.codec.compression.DecompressionException: Unsupported compression method 191 in the GZIP header.
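Roughly, the compression step looks like this (a simplified sketch; it uses the JDK 11 HttpClient only to isolate the gzip part, the real code goes through WSClient, and the URL and payload are placeholders):

```java
import java.io.ByteArrayOutputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class GzipPost {
    public static void main(String[] args) throws Exception {
        String json = "{\"hello\":\"world\"}"; // placeholder payload

        // Gzip the JSON body into a byte array.
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(buffer)) {
            gzip.write(json.getBytes(StandardCharsets.UTF_8));
        }

        // POST the compressed bytes with Content-Encoding: gzip.
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/api")) // placeholder URL
                .header("Content-Type", "application/json")
                .header("Content-Encoding", "gzip")
                .POST(HttpRequest.BodyPublishers.ofByteArray(buffer.toByteArray()))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```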
How could I correctly compress my request?
Thanks in advance
Unfortunately I don't think you can compress the request (it is not supported by Netty, the underlying library). You can find more info in https://github.com/AsyncHttpClient/async-http-client/issues/93 and https://github.com/netty/netty/issues/2132