Mutation between two microservices on Apollo Federation

I'm using Apollo Federation to manage a number of GraphQL microservices. I want to create a Notification when a user is followed, so I need to send a mutation from UserService to NotificationService. How can I achieve this?
Thank you in advance!

I believe there's no way to make a mutation work across multiple microservices. The only thing I can think of is to always resolve some field in the response and create a resolver in the Notification MS, but that sounds more like a hack than an architectural solution.
This is largely because Apollo doesn't have any transaction system; transactions aren't required for queries, but they are for mutations.
You should stick to the usual message exchange between microservices: either an event bus (like Kafka or RabbitMQ) or a plain HTTP API.
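For example, with Kafka the follow mutation only publishes an event, and NotificationService reacts to it. Here is a minimal sketch using the kafka-python client; the topic name, event shape, and the create_notification helper are all assumptions, not anything Apollo prescribes:

```python
# user_service.py - publish an event instead of calling NotificationService directly.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def on_user_followed(follower_id: str, followed_id: str) -> None:
    # Fire-and-forget: the follow mutation resolver emits this event and returns.
    producer.send("user.followed", {"followerId": follower_id, "followedId": followed_id})
    producer.flush()
```

```python
# notification_service.py - consume the event and create the notification locally.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "user.followed",
    bootstrap_servers="localhost:9092",
    group_id="notification-service",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)

for event in consumer:
    # create_notification is a hypothetical persistence helper in this service.
    create_notification(
        user_id=event.value["followedId"],
        message=f"{event.value['followerId']} followed you",
    )
```

This keeps the services decoupled: UserService doesn't need to know whether anyone consumes the event, and NotificationService can be redeployed or replay events independently.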

Related

AWS AppSync GraphQL subscription

I have two separate apps, each with its own AWS Cognito user pool and AppSync API, but they share a DynamoDB table. I want to create a subscription for a chat feature where app 1 (the client app) and app 2 (the admin app) can communicate. Is this possible? Please advise.
I have followed an article from AWS on this, but I need advice on how it would work in my case with two different apps.
I think you first need to take a step back and separate the problem from the technology. You want to create a chat app; how would you build one? Do you need a data store or a queue? First think in abstractions, then look at which technology fits those abstractions best. If you start with the technology first, you will most likely end up with a sub-par solution.
If you really want to go technology-first, you could think of storing chats in DynamoDB (DDB) and using DDB Streams to update the subscription. It can work, but it will most likely be expensive.
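To make that concrete, the DDB Streams side would roughly be a Lambda like the sketch below. All names are assumptions, and publish_to_appsync is a hypothetical helper; in practice the fan-out to subscribers is usually done by calling a mutation on the AppSync API the subscription is attached to:

```python
# Hypothetical Lambda attached to the shared chat table's DynamoDB stream.
def handler(event, context):
    for record in event["Records"]:
        if record["eventName"] != "INSERT":
            continue  # only newly written chat messages are interesting here
        new_image = record["dynamodb"]["NewImage"]
        # Stream images use DynamoDB's typed attribute format ({"S": ...} etc.).
        chat_id = new_image["chatId"]["S"]
        text = new_image["text"]["S"]
        # Hypothetical helper: invoke a mutation on each AppSync API so that
        # its subscribers (client app and admin app) receive the message.
        publish_to_appsync(chat_id, text)
```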

POST Request to REST API with Apache Beam

I have a use case where we're pulling messages from PubSub, and the idea is to POST those messages to the REST API of PowerBI. We want to create a live report using the Push Datasets feature.
The main idea should be something like this:
PubSub -> Apache Beam -> POST REST API -> PowerBI Dashboard
I haven't found any implementation of a POST request inside an Apache Beam job (the runner is not a problem right now), just a GET request inside a DoFn. I don't even know if this is possible.
Has anyone done something like this? Or is there another framework/tool that may be more helpful?
Thanks.
Sending POST requests to an external API is certainly possible, but it requires some care. It could be as simple as making the POST inside the body of a DoFn, but be aware that this can lead to duplicates: elements in your pipeline are processed in bundles, and the Beam model allows an entire bundle to be reprocessed in case of worker failures, exceptions, etc.
There is some advice in the Beam docs on grouping elements for efficient external service calls.
Choosing the best course of action here largely depends on the details of the API you're calling. Does it take message IDs that can be used for deduplication on the PowerBI side? Can the API accept batches of messages? Is there rate limiting?
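As a starting point, the naive version really is just a POST inside a DoFn. Here is a minimal sketch using the Beam Python SDK and the requests library; the push URL and subscription path are placeholders, and there is no batching or deduplication, so bundle retries can produce duplicate rows:

```python
import json

import apache_beam as beam
import requests
from apache_beam.options.pipeline_options import PipelineOptions

# Placeholder: the rows endpoint of a Power BI push dataset.
POWERBI_PUSH_URL = "https://api.powerbi.com/beta/<workspace>/datasets/<dataset>/rows?key=<key>"

class PostToPowerBI(beam.DoFn):
    def process(self, element):
        row = json.loads(element.decode("utf-8"))
        resp = requests.post(POWERBI_PUSH_URL, json={"rows": [row]}, timeout=10)
        # Raising on failure makes Beam retry the bundle - hence possible duplicates.
        resp.raise_for_status()
        yield row

options = PipelineOptions(streaming=True)
with beam.Pipeline(options=options) as p:
    (p
     | "Read" >> beam.io.ReadFromPubSub(subscription="projects/<project>/subscriptions/<sub>")
     | "Post" >> beam.ParDo(PostToPowerBI()))
```

Batching multiple rows per request (e.g. with GroupIntoBatches) would cut the number of HTTP calls considerably if the API accepts it.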

What are the limitations of using the AWS AppSync API (GraphQL) through Amplify?

I just want to avoid the use of custom/manual resolvers in AppSync completely, so I'm using Amplify to set up the GraphQL AppSync API in my app. I'm doing everything by changing schema.graphql and running amplify push.
I have two questions:
1. What are the limitations, and what problems am I going to face in the future?
2. Can GraphQL subscriptions receive updates when the app is not running (i.e. can the user still be notified)?
Tons of business logic will be exposed in client-side code.
I think for push notifications you would still have to go via external integrations like FCM/APNS. Multiple integration options are available in SNS.
Just to preamble these answers: the fact that you use Amplify-generated GraphQL and resolvers doesn't stop you from later including custom resolvers and pipeline functions; it's just that you need to learn quite a bit about where to include them in Amplify's backend file structure.
1. What are the limitations, and what problems am I going to face in the future?
This depends on how well your application's use case matches the GraphQL schema design, and on whether your application is relatively self-contained. Amplify becomes more complex when your application needs to talk to other back-end systems: you'll need to start using DynamoDB triggers to notify other state machines, EventBridge, SNS, or similar services (see the sketch below).
As mentioned, none of these problems are crippling; you can deal with them later, but it will be a step up in the AWS knowledge required to implement them.
For small high-volume/high-availability apps, Amplify and DynamoDB as-they-come are great. If your application matures into many microservices and sites, then you'll need to learn quite a bit more AWS to make them play together well. Amplify lays out DynamoDB on a table-per-model basis, and you'll probably be stuck with (paying for) that. Think hard about whether you might ever want to move to a differently optimised data source (RDS, or a single DynamoDB table) to reduce the number of queries required to fulfil your GraphQL requests.
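To illustrate the DynamoDB-triggers point, the integration typically ends up as a small stream-triggered Lambda that re-publishes changes. A rough sketch; the source name, detail type, and table are all assumptions:

```python
import json
import boto3

events = boto3.client("events")

def handler(event, context):
    """Hypothetical Lambda on an Amplify table's stream, forwarding changes to EventBridge."""
    entries = [
        {
            "Source": "myapp.users",            # hypothetical source name
            "DetailType": "UserRecordChanged",  # hypothetical detail type
            "Detail": json.dumps(record["dynamodb"]["Keys"]),
            "EventBusName": "default",
        }
        for record in event["Records"]
    ]
    # put_events accepts at most 10 entries per call, so send in chunks.
    for i in range(0, len(entries), 10):
        events.put_events(Entries=entries[i : i + 10])
```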
2. Can GraphQL subscriptions receive updates when the app is not running (i.e. can the user still be notified)?
No. Anurag mentions SNS, which would be a good option for notifying users outside the app; it's best to blend subscriptions with another service.
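The out-of-app half of that blend can be as small as an SNS publish to a device's platform endpoint. A minimal sketch assuming devices are already registered with an SNS platform application ("GCM" is the key SNS uses for FCM deliveries):

```python
import json
import boto3

sns = boto3.client("sns")

def notify_user(endpoint_arn: str, text: str) -> None:
    """Push a notification to a device registered as an SNS platform endpoint."""
    payload = {
        "default": text,
        # Per-platform payloads; SNS routes the GCM key to FCM.
        "GCM": json.dumps({"notification": {"title": "New activity", "body": text}}),
    }
    sns.publish(
        TargetArn=endpoint_arn,
        Message=json.dumps(payload),
        MessageStructure="json",
    )
```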

Implementing a simple Restful service to store and retrieve data using AWS API Gateway/Lambda

I'm new to AWS, so apologies in advance if this question is missing some important considerations, or has incorrect assumptions.
But basically I want to implement a service on AWS to store and retrieve data from multiple clients, which may be Android apps, Windows applications, websites, etc. The way I've considered doing this is as a RESTful service with an API Gateway front end, a Lambda back end, and maybe an S3 bucket to hold the data.
The basic requirements are:
(1) Clients can publish data to the server, where it is stored, perhaps with some kind of key/value structure.
(2) Clients can retrieve said data by key.
(3) If possible, clients should be able to subscribe to events from the service, so that they are notified if the value of a piece of data changes. This would avoid the need to poll the service, which would presumably start racking up unnecessary charges if the data doesn't change often.
Any pointers on how to get started with this welcome!
Creating a RESTful API on top of Lambda and API Gateway is one of the main use cases for this architecture. You can think of Lambda functions as controllers with methods and API Gateway as a router that forwards requests to functions based on the URL pattern. There are many frameworks and approaches that can help out here if you don't want to write it from scratch (a minimal hand-rolled handler is sketched after the list below):
Lambdasync
https://medium.com/@fredrikanderzon/create-a-rest-api-on-aws-lambda-using-lambdasync-e46c68f8043f
Serverless
https://serverless.com/framework/docs/providers/aws/events/apigateway/
Swagger
https://cloudonaut.io/create-a-serverless-restful-api-with-api-gateway-swagger-lambda-and-dynamodb/
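If you do write it from scratch, the hand-rolled version stays small. A sketch of a Lambda behind an API Gateway proxy integration, using a hypothetical DynamoDB table named Items as the key/value store (see the EDIT below on datastore choice):

```python
import json
import boto3

# Hypothetical key/value table; see the note at the end about datastore choice.
table = boto3.resource("dynamodb").Table("Items")

def handler(event, context):
    """Route API Gateway (Lambda proxy integration) requests by HTTP method."""
    method = event["httpMethod"]
    if method == "POST":
        body = json.loads(event["body"])
        table.put_item(Item={"key": body["key"], "value": body["value"]})
        return {"statusCode": 201, "body": json.dumps({"stored": body["key"]})}
    if method == "GET":
        key = event["pathParameters"]["key"]  # e.g. a GET /items/{key} route
        item = table.get_item(Key={"key": key}).get("Item")
        if item is None:
            return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
        return {"statusCode": 200, "body": json.dumps(item)}
    return {"statusCode": 405, "body": json.dumps({"error": "method not allowed"})}
```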
As far as event subscriptions go (requirement #3), you can model this in many datastores, certainly in a relational/SQL database, with a table like this:
Subscription (key_of_interest, user_id, events_of_interest)
I'm leaving out data types for you to figure out, but hopefully you get the idea. After each data modification on a particular key, check whether that key appears in the subscription table, then wire up a response to the users who indicated interest. The details of course depend on your particular requirements. A caution, though: this approach will increase the cost of data modifications because of the additional overhead needed to process subscriptions.
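Sketched against a DynamoDB version of that table (the attribute names come from the schema above; notify_user is a hypothetical delivery helper), the post-write check might look like:

```python
import boto3
from boto3.dynamodb.conditions import Key

# Hypothetical table partition-keyed on key_of_interest, sort-keyed on user_id.
subscriptions = boto3.resource("dynamodb").Table("Subscription")

def on_data_modified(key: str, event_type: str) -> None:
    """After each write, find interested subscribers and notify them."""
    resp = subscriptions.query(KeyConditionExpression=Key("key_of_interest").eq(key))
    for sub in resp["Items"]:
        if event_type in sub["events_of_interest"]:
            notify_user(sub["user_id"], key, event_type)  # hypothetical helper
```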
EDIT: One other thing I forgot. S3 is better suited for unstructured data (think 'files'). For relational databases, check out RDS. For a simple NoSQL database you might use DynamoDB, or host your own NoSQL database of choice on an EC2 instance.

Querying DynamoDB on the mobile client vs. backend query and response via API?

I am querying my contacts to match a list of contacts (primary keys) in DynamoDB to see if any are using my service.
I have two options to go about this:
1) Client side: I call the AWS SDK directly on my mobile device and handle the response accordingly.
2) Via API Gateway: I send a JSON list of my contacts to my backend (AWS Lambda), which does the computation off the client and responds with JSON.
I am wondering what the pros and cons of each are, or if one is clearly better?
Thanks
Like many things, it depends. I don't think one is clearly better than the other.
1) The client-side SDK is good because it's probably the easiest and quickest way to get going, and there's less to build/configure/maintain.
2) API Gateway is good because it will probably be easier to call your Lambda from different clients (browsers, other services, etc.), and those clients wouldn't need to depend on the SDK; they could just use RESTful calls if that's how you set it up. You would also be able to support different content types, such as XML or YAML, with a mapping template.
It really just comes down to your use case, style, and plans for reuse in the near future. You could probably start with #1 and migrate to #2 if you find you need more of API Gateway's features.
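If you do go with #2, the Lambda itself is short. A minimal sketch, assuming a hypothetical Users table keyed on phone number and ignoring UnprocessedKeys retries for brevity:

```python
import boto3

dynamodb = boto3.resource("dynamodb")

def handler(event, context):
    """Hypothetical Lambda: takes {"contacts": [...]} and returns the ones on the service."""
    contacts = event["contacts"]
    matches = []
    # batch_get_item accepts at most 100 keys per call, so chunk the list.
    for i in range(0, len(contacts), 100):
        chunk = contacts[i : i + 100]
        resp = dynamodb.batch_get_item(
            RequestItems={"Users": {"Keys": [{"phone": c} for c in chunk]}}
        )
        matches.extend(item["phone"] for item in resp["Responses"]["Users"])
    return {"matches": matches}
```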