RestHighLevelClient with BulkProcessor Elastic Search issue - amazon-web-services

Earlier I was using TransportClient in my app.
Recently I have been moving to the AWS managed Elasticsearch service, and I learned that the AWS managed ES cluster does not support TransportClient.
So I am migrating the code that was using BulkProcessor to insert documents into ES.
While refactoring the code per the ES documentation, I added this line:
BulkProcessor bulkProcessor = BulkProcessor.builder(client::bulkAsync, listener).build();
and I get an error at client::bulkAsync saying Client is not a functional interface.
I need help understanding what I am doing wrong.
Documentation link for reference:
https://www.elastic.co/guide/en/elasticsearch/client/java-rest/master/java-rest-high-document-bulk.html#java-rest-high-document-bulk-processor

What is the type of your client object?
It must be a RestHighLevelClient instance.
Here is working code: https://github.com/dadoonet/legacy-search/blob/02-bulk/src/main/java/fr/pilato/demo/legacysearch/dao/ElasticsearchDao.java
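To make that concrete: per the linked docs, the builder's first argument is a BiConsumer<BulkRequest, ActionListener<BulkResponse>>, which RestHighLevelClient.bulkAsync satisfies. The method reference only compiles if client is declared as RestHighLevelClient, not as the old Client/TransportClient type. A minimal sketch, assuming the 6.x high-level REST client and a placeholder localhost endpoint:

```java
import org.apache.http.HttpHost;
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

public class BulkProcessorExample {
    public static void main(String[] args) {
        // Must be RestHighLevelClient: the old Client interface has no
        // bulkAsync method, which is why client::bulkAsync fails against it.
        RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("localhost", 9200, "http")));

        BulkProcessor.Listener listener = new BulkProcessor.Listener() {
            @Override public void beforeBulk(long executionId, BulkRequest request) { }
            @Override public void afterBulk(long executionId, BulkRequest request, BulkResponse response) { }
            @Override public void afterBulk(long executionId, BulkRequest request, Throwable failure) { }
        };

        BulkProcessor bulkProcessor =
                BulkProcessor.builder(client::bulkAsync, listener).build();
    }
}
```

Note that in 7.x clients the signature changed again, and the equivalent call becomes BulkProcessor.builder((request, bulkListener) -> client.bulkAsync(request, RequestOptions.DEFAULT, bulkListener), listener).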

Related

How do I add a description to a serverless deploy call?

I have a pretty complex backend project that I deploy to AWS using the Serverless framework. The problem I'm facing is related to versioning. I have a React app on the FE, which has a version on it, but I didn't add a version to the BE for simplicity (it is the same app, and I'm not exposing any special API, so I didn't want to deal with versioning matrices between the FE and the BE, backward compatibility, etc.). Is this a mistake?
When I deploy my BE code, AWS does keep track of the deploy calls and adds versions in the Versions tab of the Lambda page, each with a Description property. I'd like to use that Description to at least have an idea of which code is running at any given time.
I was looking at the serverless docs and couldn't find a way to send a Description up to AWS. I'm calling it like so:
serverless deploy -s integration
NOTE: I don't have CI/CD hooked up yet, but the idea is that only check-ins to a specific branch (master or develop) would trigger a deploy to AWS (as opposed to deploying manually from a feature branch while developing). Is this something anyone is doing?
Any thoughts and/or ideas on versioning serverless backend are appreciated.

AWS Lambda C# .net core runtime to use RDS Proxy

I have a Lambda function written in C# on the .NET Core 3.1 runtime, in which I am using MySQL for some DB-related work. I want to use RDS Proxy with this function (and apply it to other functions later) because my application is making too many connections. I've searched the internet for how to use RDS Proxy at the code level with a MySQL client in C# (.NET Core 3.1 runtime) but couldn't find anything helpful. I'd really appreciate any help.
You have to create and add a database proxy for the Lambda. Once created, you'll get an endpoint which you can use to connect to the database just as you normally would. The magic is in that proxy endpoint URL, and it is transparent to us.
Reference documentation:
https://aws.amazon.com/blogs/compute/using-amazon-rds-proxy-with-aws-lambda/
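To illustrate "use it like you normally use": the only code-level change is pointing the connection string's host at the proxy endpoint instead of the database host. A minimal sketch (in Java for illustration; both endpoint names are hypothetical placeholders):

```java
// The only code-level change for RDS Proxy is swapping the host in the
// connection string for the proxy endpoint. Endpoints below are made up.
public class RdsProxyExample {
    static String jdbcUrl(String host, int port, String db) {
        return "jdbc:mysql://" + host + ":" + port + "/" + db;
    }

    public static void main(String[] args) {
        // Before: connecting straight to the RDS instance.
        String direct = jdbcUrl("mydb.abc123.us-east-1.rds.amazonaws.com", 3306, "app");
        // After: same call, but with the proxy endpoint from the RDS console;
        // connection pooling happens behind this URL, transparently.
        String viaProxy = jdbcUrl("myproxy.proxy-abc123.us-east-1.rds.amazonaws.com", 3306, "app");
        System.out.println(viaProxy);
    }
}
```

In C# the change is typically the same shape: swap the Server= host in the MySQL connection string for the proxy endpoint; no other code changes should be needed.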

Leveraging AWS Neptune Gremlin Client Library

We're looking to leverage the Neptune Gremlin client library to get load balancing and automatic endpoint refreshes.
There is a blog article here: https://aws.amazon.com/blogs/database/load-balance-graph-queries-using-the-amazon-neptune-gremlin-client/
There is also a repo containing the code here:
https://github.com/awslabs/amazon-neptune-tools/tree/master/neptune-gremlin-client
However, the artifacts aren't published anywhere. Is it still possible to do this? Ideally we would avoid vendoring the code into our codebase, since we would then forfeit updates.
The artifacts for several of the tools in that repo can be found here.
https://github.com/awslabs/amazon-neptune-tools/releases/tag/amazon-neptune-tools-1.2

AWS Elastic Search migration from 1.5 to 5.5

I was using Elasticsearch 1.5 and now need to migrate to 5.5. However, there is no direct migration path supported by AWS. I'm using the CloudWatch streaming support for Elasticsearch to feed events in.
Right now only new events get fed into Elasticsearch. I'm thinking of the following steps to migrate:
Create a new ES domain with 5.5.
Do a one-time import of existing CloudWatch logs.
Change the ES domain endpoint in the lambda function to point to the new ES domain.
Drop the old ES domain.
Is there a way to achieve step 2 in the process? Or is there any better way of achieving this migration?
Your strategy looks good to me. We have done this ES migration in the past. The only thing you need to remember is that 1.5 to 5.5 is not a straightforward migration: there are a lot of code changes involved as well, and many classes are no longer available in 5.5.
For the import, you might have to write a custom exporter and importer.
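A minimal sketch of what such an exporter/importer boils down to, assuming you pull documents out of the 1.5 domain (for example via the scroll API) and replay them against the new domain's _bulk endpoint. The index, type, and id values here are hypothetical:

```java
// Sketch of the "custom exporter and importer" idea: rewrite exported
// documents as bulk-API NDJSON (an action line, then the document source,
// newline-delimited) and POST the result to the new 5.5 domain's /_bulk.
import java.util.List;
import java.util.Map;

public class BulkExportSketch {
    // docs maps document id -> raw _source JSON as exported from the old cluster.
    static String toBulkBody(String index, String type, List<Map.Entry<String, String>> docs) {
        StringBuilder body = new StringBuilder();
        for (Map.Entry<String, String> doc : docs) {
            body.append("{\"index\":{\"_index\":\"").append(index)
                .append("\",\"_type\":\"").append(type)
                .append("\",\"_id\":\"").append(doc.getKey()).append("\"}}\n");
            body.append(doc.getValue()).append('\n'); // the document _source
        }
        return body.toString();
    }

    public static void main(String[] args) {
        String body = toBulkBody("cloudwatch-logs", "event",
                List.of(Map.entry("1", "{\"message\":\"hello\"}")));
        System.out.print(body);
        // POST this body to https://<new-domain>/_bulk with
        // Content-Type: application/x-ndjson, batching a few thousand docs at a time.
    }
}
```

Note that 5.5 is stricter about mappings than 1.5 (e.g. string fields became text/keyword), so create the target index mappings up front rather than relying on dynamic mapping.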

Parse Server resource limits

I am looking into migrating my parse.com app to Parse Server on either AWS or Heroku.
The primary frustration I encountered with Parse in the past has been the resource limits:
https://parse.com/docs/cloudcode/guide#cloud-code-resource-limits
Am I correct in assuming that, following a migration, the resource limits will depend on the new host (i.e. AWS or Heroku)?
Yes. Parse Server is simply a Node.js module, which means that wherever you choose to host your Node.js app determines which resource limits are imposed. You might also be able to set them yourself.
I recently moved mine to AWS, so yes, as stated in the other answer, it's just a Node.js module and you have complete control over it. The main constraints here will be the CPU, I/O, and network of AWS. I would suggest reading the documentation provided at https://github.com/ParsePlatform/parse-server ; it also mentions which EC2 instances to use so that Node and Mongo can scale properly.