I am trying to build a workflow of AWS AppSync -> Lambda1 -> Athena -> SNS -> Lambda2.
I want to understand how I can use two different Lambda resolvers in the AppSync API. When the getTask query is called, I pass cust_id as an input to Lambda1, which is supposed to invoke Athena. Once the query is successful, SNS will publish a message and Lambda2 is invoked to display the result. Here is what my schema looks like:
type Query {
    getTask(cust_id: String!): String
}

type Task {
    cust_id: String!
    description: String!
}

schema {
    query: Query
}
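For context, here is a minimal sketch of what I have in mind for lambda1 as a direct Lambda resolver for getTask (the table, database, results bucket, and topic environment variable are placeholders):

// Hypothetical sketch of lambda1 (aws-sdk v2): start an Athena query for the
// given customer ID, then publish the query execution ID to an SNS topic so
// that lambda2, subscribed to that topic, can fetch and return the results.
const AWS = require('aws-sdk');
const athena = new AWS.Athena();
const sns = new AWS.SNS();

exports.handler = async (event) => {
    // A direct Lambda resolver receives the GraphQL arguments here
    const custId = event.arguments.cust_id;
    const { QueryExecutionId } = await athena.startQueryExecution({
        // Don't interpolate untrusted input like this in production
        QueryString: `SELECT description FROM tasks WHERE cust_id = '${custId}'`,
        QueryExecutionContext: { Database: 'mydb' },                        // assumed database
        ResultConfiguration: { OutputLocation: 's3://my-athena-results/' } // assumed bucket
    }).promise();
    // lambda2 should wait for this query to complete (e.g. poll
    // getQueryExecution) before reading the results.
    await sns.publish({
        TopicArn: process.env.TOPIC_ARN, // assumed environment variable
        Message: JSON.stringify({ QueryExecutionId, custId })
    }).promise();
    return `Started query ${QueryExecutionId} for customer ${custId}`;
};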
I'm trying to add a todo from a Lambda function in AWS.
I created a Flutter project, added an API (GraphQL basic Todo sample), and added a function (with mutations enabled).
The Lambda works and effectively adds an entry to the TODO list. Unfortunately, the subscription started in Flutter returns an error:
Cannot return null for non-nullable type: 'AWSDateTime' within parent 'Chat' (/onCreateChat/createdAt)
I see this problem was solved on GitHub, where aleksvidak states:
I had a similar problem and what I realised is that the mutation which in fact triggers the subscription has to have the same response fields as the ones specified for the subscription response. This way it works for me.
This seems to solve many people's problem. Unfortunately, I don't understand what it means for the basic TODO sample code.
Mutation:
type Mutation {
    createTodo(input: CreateTodoInput!, condition: ModelTodoConditionInput): Todo
...
Subscription:
type Subscription {
    onCreateTodo(filter: ModelSubscriptionTodoFilterInput): Todo
        @aws_subscribe(mutations: ["createTodo"])
...
Isn't this code aligned with what aleksvidak said? The mutation has the same response type (Todo) as the subscription, right?
In my case, what was missing was updatedAt in the query that I perform inside the Lambda function. AppSync builds the subscription payload from the mutation's response, so every non-nullable field the subscription selects (here, the AWSDateTime fields) must also be selected by the mutation that triggers it.
const query = /* GraphQL */ `
    mutation CREATE_TODO($input: CreateTodoInput!) {
        createTodo(input: $input) {
            id
            name
            updatedAt # <-- this was missing
        }
    }
`;
I'm building a simple multiplayer game using AppSync (to avoid having to manage WebSockets from scratch) and Amplify (specifically on Android, using Kotlin).
The problem is that when players are connected to a game, each one should listen for updates about the other players (consider that I have a DynamoDB table whose primary key is a UUID, plus the corresponding game-id and position of the player).
So what I've done is: after a player joins his game session, he registers himself to the onUpdate subscription of Player... however, he now receives update notifications for all the players. Even though I can easily filter these incoming events client-side, I think it's a terrible idea, since AppSync has to notify all registered users of all updates.
Instead, I would like to specify a filter on the subscription, like when you perform a query; something like:
Amplify.API.subscribe(
    ModelSubscription.onUpdate(
        Player::class.java,
        Player.GAMEROOM.eq(currentPlayer.gameroom.id) // only players in the same gameroom
    ),
    { Log.i("ApiQuickStart", "Subscription established") },
    { onCreated ->
        ...
    },
    { onFailure -> Log.e("ApiQuickStart", "Subscription failed", onFailure) },
    { Log.i("ApiQuickStart", "Subscription completed") }
)
Is there a way to achieve this?
Since this is the only type of subscription that I will make on Player updates, maybe something could be done with a custom resolver?
You need to add one more param to your subscription, and to its subscribed mutations. I guess your GraphQL schema looks like:
type Subscription {
    onUpdate(gameRoomId: ID!, playerId: ID): UpdateUserResult
        @aws_subscribe(mutations: ["leaveCurrentGameRoom", <other_interested_mutation>])
}

type Mutation {
    leaveCurrentGameRoom: UpdateUserResult
}

type UpdateUserResult {
    gameRoomId: ID!
    playerId: ID!
    # other fields
}
Notice that Subscription.onUpdate has an optional param, playerId. The subscription can be used as follows:
onUpdate("room-1") // AppSync gives updates of all players in "room-1"
onUpdate("room-1", "player-A") // AppSync gives updates of "player-A" in "room-1"
In order for AppSync to filter out irrelevant updates, the subscribed mutations, such as leaveCurrentGameRoom, have to return a type that includes the two fields, gameRoomId and playerId. Please note that you may also need to update the resolvers of those mutations to resolve data for these fields.
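For example, a client could then subscribe with the gameRoomId argument like this (shown with the Amplify JS API for brevity; the Kotlin API takes the same arguments, and all names follow the schema above):

const { API, graphqlOperation } = require('aws-amplify');

const onUpdate = /* GraphQL */ `
    subscription OnUpdate($gameRoomId: ID!, $playerId: ID) {
        onUpdate(gameRoomId: $gameRoomId, playerId: $playerId) {
            gameRoomId
            playerId
        }
    }
`;

// Only updates for players in "room-1" are delivered to this client
API.graphql(graphqlOperation(onUpdate, { gameRoomId: 'room-1' })).subscribe({
    next: ({ value }) => console.log('Player update:', value.data.onUpdate),
    error: (err) => console.error('Subscription failed', err),
});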
Here is another example of subscription arguments
https://docs.aws.amazon.com/appsync/latest/devguide/aws-appsync-real-time-data.html#using-subscription-arguments
I have really tried everything; surprisingly, Google has few answers when it comes to this.
When a certain .csv file is uploaded to an S3 bucket, I want to parse it and place the data into an RDS database.
My goal is to learn the Lambda serverless technology; this is essentially an exercise. Thus, I over-engineered the hell out of it.
Here is how it goes:
1. S3 trigger when the .csv is uploaded -> call Lambda (this part fully works)
2. AAA_Thomas_DailyOverframeS3CsvToAnalytics_DownloadCsv downloads the csv from S3 and finishes with essentially the plaintext of the file. It is then supposed to pass it to the next Lambda. The way I am trying to do this is by setting the second Lambda as its destination. The function works, but the second Lambda is never called and I don't know why.
3. AAA_Thomas_DailyOverframeS3CsvToAnalytics_ParseCsv gets the plaintext as input and returns a JavaScript object with the parsed data.
4. AAA_Thomas_DailyOverframeS3CsvToAnalytics_DecryptRDSPass only connects to KMS, gets the encrypted RDS password, and passes it, along with the data it received as input, to the last Lambda.
5. AAA_Thomas_DailyOverframeS3CsvToAnalytics_PutDataInRds then finally puts the data in RDS.
I created a custom VPC with custom subnets, route tables, gateways, peering connections, etc. I don't know if this is relevant, but function 2 only has access to the S3 endpoint, 3 does not have any internet access whatsoever, 4 is the only one that has normal internet access (it's the only way to connect to KMS), and 5 only has access to the peered VPC which hosts the RDS.
This is the code of the first lambda:
// dependencies
const AWS = require('aws-sdk');
const util = require('util');
const s3 = new AWS.S3();

const region = process.env.AWS_REGION;

exports.handler = async (event, context, callback) =>
{
    var checkDates = process.env.CheckDates !== "false";
    var ret = [];

    var checkFileDate = function(actualFileName)
    {
        if (!checkDates)
            return true;
        var d = new Date();
        // getUTCMonth() is zero-based, so add 1 to get the calendar month
        var expectedFileName = 'Overframe_-_Analytics_by_Day_Device_' + d.getUTCFullYear() + '-' +
            String(d.getUTCMonth() + 1).padStart(2, '0') + '-' +
            String(d.getUTCDate()).padStart(2, '0');
        return expectedFileName == actualFileName.substr(0, expectedFileName.length);
    };

    for (var i = 0; i < event.Records.length; ++i)
    {
        var record = event.Records[i];
        try {
            if (record.s3.bucket.name != process.env.S3BucketName)
            {
                console.error('Unexpected notification, unknown bucket: ' + record.s3.bucket.name);
                continue;
            }
            if (!checkFileDate(record.s3.object.key))
            {
                console.error('Unexpected file, or date is not today\'s: ' + record.s3.object.key);
                continue;
            }
            const params = {
                Bucket: record.s3.bucket.name,
                Key: record.s3.object.key
            };
            var csvFile = await s3.getObject(params).promise();
            var allText = csvFile.Body.toString('utf-8');
            console.log('Loaded data:', { Bucket: params.Bucket, Filename: params.Key, Text: allText });
            ret.push(allText);
        } catch (error) {
            console.log("Couldn't download CSV from S3", error);
            return { statusCode: 500, body: error };
        }
    }

    // I've been randomly trying different ways to return the data; none works.
    // The data itself is correct, I checked with console.log()
    const response = {
        statusCode: 200,
        body: { "Records": ret }
    };
    return response;
};
[Screenshot showing how the Lambda was set up, in particular its destination.]
I haven't posted on Stack Overflow in 7 years. That's how desperate I am. Thanks for the help.
Rather than getting each Lambda to call the next one, take a look at AWS's managed service for state machines, Step Functions, which can handle this workflow for you.
By defining inputs and outputs, you can pass each function's output on to the next, with retry logic built in.
If you don't have much experience, AWS has a tutorial on setting up a step function by chaining Lambdas.
By using this you also won't need to work around configuration issues such as Lambda timeouts. In addition, it keeps your code modular, which improves testing of the individual functionality whilst also isolating issues.
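As a rough sketch, a state machine definition chaining the four Lambdas might look like this (the ARNs are placeholders):

{
    "StartAt": "DownloadCsv",
    "States": {
        "DownloadCsv":    { "Type": "Task", "Resource": "arn:aws:lambda:...:function:..._DownloadCsv",    "Next": "ParseCsv" },
        "ParseCsv":       { "Type": "Task", "Resource": "arn:aws:lambda:...:function:..._ParseCsv",       "Next": "DecryptRDSPass" },
        "DecryptRDSPass": { "Type": "Task", "Resource": "arn:aws:lambda:...:function:..._DecryptRDSPass", "Next": "PutDataInRds" },
        "PutDataInRds":   { "Type": "Task", "Resource": "arn:aws:lambda:...:function:..._PutDataInRds",   "End": true }
    }
}

By default, each Task state passes the Lambda's return value as the input of the next state, which is exactly the hand-off you are trying to build with destinations.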
The execution role of every Lambda function whose destinations include other Lambda functions must have the lambda:InvokeFunction IAM permission in one of its attached IAM policies (see the policy sketch after the list below).
Here's a snippet from Lambda documentation:
To send events to a destination, your function needs additional permissions. Add a policy with the required permissions to your function's execution role. Each destination service requires a different permission, as follows:
Amazon SQS – sqs:SendMessage
Amazon SNS – sns:Publish
Lambda – lambda:InvokeFunction
EventBridge – events:PutEvents
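As a sketch, a policy statement granting the first function permission to invoke its destination might look like this (region, account ID, and function name are illustrative):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "lambda:InvokeFunction",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:AAA_Thomas_DailyOverframeS3CsvToAnalytics_ParseCsv"
        }
    ]
}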
Does AppSync support nested writes in a single mutation?
I want to call a single mutation that inserts records into two tables, e.g. the User and Roles tables in DynamoDB.
Something like this for example:
createUser(
    input: {
        Name: "John"
        Email: "user@domain.com"
        LinesRoles: [
            { Name: "Role 1" }
            { Name: "Role 2" }
        ]
    }
) {
    Id
    Name
    LinesRoles {
        Id
        Name
    }
}
Do I need to create two resolvers in AppSync for User and Roles to insert the records in both tables?
I can think of three ways to achieve this:
1. Use a BatchPutItem to save records into the two tables at once. However, you won't be able to use any ConditionExpression.
2. Use a pipeline resolver with two AppSync functions, where one function makes a PutItem to the Roles table and the other to the User table. However, you need to be OK with potentially inconsistent scenarios where the record has been inserted in one table but not in the other.
3. Use a Lambda resolver that writes to the two tables inside a DynamoDB transaction (sketched below).
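For the third option, here is a minimal sketch, assuming Node.js with aws-sdk v2, table names User and Roles, and the field names from your mutation above:

const { randomUUID } = require('crypto');
const AWS = require('aws-sdk');
const ddb = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event) => {
    const { Name, Email, LinesRoles } = event.arguments.input;
    const userId = randomUUID();
    const roles = LinesRoles.map((role) => ({ Id: randomUUID(), UserId: userId, Name: role.Name }));

    // Either every Put succeeds or none is applied (subject to DynamoDB's
    // per-transaction item limit)
    await ddb.transactWrite({
        TransactItems: [
            { Put: { TableName: 'User', Item: { Id: userId, Name, Email } } },
            ...roles.map((Item) => ({ Put: { TableName: 'Roles', Item } }))
        ]
    }).promise();

    return { Id: userId, Name, LinesRoles: roles };
};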
I'm trying to set up an AWS IoT rule to send data to DynamoDB without the help of a Lambda.
My rule query statement is: SELECT *, topic() AS topic, timestamp() AS timestamp FROM '+/#'
My data is fine in AWS IoT, as I'm successfully retrieving it with a Lambda. However, even after following the developer guide to create the rule and setting the two form fields to ${topic} and ${timestamp}, which should work, I get nothing in DynamoDB and I find the following exception in CloudWatch:
MESSAGE:Dynamo Insert record failed. The error received was NoSuchElementException. Message arrived on: myTopic/data, Action: dynamo, Table: myTable, HashKeyField: topic, HashKeyValue: , RangeKeyField: Some(timestamp), RangeKeyValue:
HashKeyValue and RangeKeyValue seem to be empty. Why?
I also posted the question on the AWS forum: https://forums.aws.amazon.com/thread.jspa?threadID=267987
Suppose your device sends this payload:
mess = {"reported":
        {"light": "blue",
         "Temperature": int(temp_data),
         "timestamp": str(pd.to_datetime(time.time()))}}
args.message = mess
You should query as:
SELECT message.reported.* FROM '#'
Then, set up the DynamoDB hash key value as ${MessageID()}.
You will get:
MessageID     | Data
1527010174562 | { "light": { "S": "blue" }, "Temperature": { "N": "41" }, "timestamp": { "S": "1970-01-01 00:00:01.527010174" } }
Then you can easily extract the values using a Lambda and send them to S3 via Data Pipeline, or to Firehose to create a data stream.
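For example, a minimal read-back Lambda might look like this (the table name and key type are assumptions based on the example above):

const AWS = require('aws-sdk');
const ddb = new AWS.DynamoDB();

exports.handler = async () => {
    const res = await ddb.getItem({
        TableName: 'myTable',
        Key: { MessageID: { S: '1527010174562' } } // adjust the key type to match your table
    }).promise();
    // Flatten the typed attribute values, e.g. { "light": { "S": "blue" } } -> { light: "blue" }
    return AWS.DynamoDB.Converter.unmarshall(res.Item);
};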