Can Google Cloud Functions listen to multiple event triggers? - google-cloud-platform

I am currently trying to deploy a Google Cloud Function using the REST API, in order to listen to a Google Cloud Storage bucket for changes/deletions.
However, I noticed that I can only specify one EventTrigger:
{
  "name": string,
  "description": string,
  "status": enum (CloudFunctionStatus),
  "entryPoint": string,
  "runtime": string,
  ...
  "sourceUploadUrl": string
  // End of list of possible types for union field source_code.

  // Union field trigger can be only one of the following:
  "httpsTrigger": {
    object (HttpsTrigger)
  },
  "eventTrigger": {
    object (EventTrigger)
  }
  // End of list of possible types for union field trigger.
}
My options for what to listen to are limited to the following event types:
google.storage.object.finalize
google.storage.object.delete
google.storage.object.archive
google.storage.object.metadataUpdate
What if I want to listen to multiple triggers, such as google.storage.object.finalize and google.storage.object.delete, at the same time? Do I need to deploy separate cloud functions for each one? That seems quite inconvenient. Any suggestions or advice would be appreciated.

Yes, you need to deploy multiple functions. Each function can have exactly one trigger.
You could deploy multiple function configurations that share the same source code, or use one trigger whose handler calls another function that aggregates the different kinds of events.

It can actually be achieved; we do so at our company.
The trick is: deploy your Cloud Function / Cloud Run service as an HTTP trigger that allows only authenticated invocations.
You then get a URL which can only be reached via authenticated HTTP requests.
Then you can do both:
Invoke it directly (Postman / whatever) by passing a Bearer token (e.g. obtained via gcloud auth print-identity-token).
Trigger it from Pub/Sub with a PUSH subscription authenticated against that endpoint.
Note that this requires granting your Pub/Sub subscription permission to invoke that resource, but it works! And it gives your function/service both sync and async behaviour.
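A minimal sketch of such a dual-mode handler, in Node/TypeScript with the Functions Framework (the function name and log lines are illustrative):

import * as functions from '@google-cloud/functions-framework';

// HTTP-triggered function that serves both direct (authenticated) calls
// and Pub/Sub PUSH deliveries of Cloud Storage notifications.
functions.http('handler', (req, res) => {
  const message = req.body?.message;
  if (message) {
    // Pub/Sub push wraps the message as { message: { data, attributes } }.
    // Storage notifications carry the event type as a message attribute.
    const eventType = message.attributes?.eventType; // e.g. OBJECT_FINALIZE, OBJECT_DELETE
    const object = JSON.parse(Buffer.from(message.data, 'base64').toString());
    console.log(`Async via Pub/Sub: ${eventType} on ${object.name}`);
  } else {
    // Direct invocation, e.g. curl with `gcloud auth print-identity-token`.
    console.log('Sync HTTP invocation', req.body);
  }
  res.status(204).send('');
});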

Paul's answer is correct if you want to handle only a subset of the available events, or if you want to plug your function directly into the storage events.
However, if you want to catch all of them, or if you want your function itself to choose which event types to handle, you can 'cheat':
publish the bucket notifications to Pub/Sub, and plug a function onto the Pub/Sub topic.
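As a sketch, assuming the bucket's notifications have already been routed to a topic (e.g. with gsutil notification create -f json -t TOPIC gs://BUCKET), a single Pub/Sub-triggered function can then pick the event types it cares about:

import * as functions from '@google-cloud/functions-framework';

// One function on the Pub/Sub topic receives every bucket event and
// decides itself which types to handle.
functions.cloudEvent('onBucketEvent', (event: functions.CloudEvent<any>) => {
  const message = (event.data as any)?.message;
  const eventType = message?.attributes?.eventType;
  if (eventType === 'OBJECT_FINALIZE' || eventType === 'OBJECT_DELETE') {
    const object = JSON.parse(Buffer.from(message.data, 'base64').toString());
    console.log(`${eventType}: ${object.bucket}/${object.name}`);
  } else {
    console.log(`Ignoring ${eventType}`); // e.g. OBJECT_ARCHIVE, OBJECT_METADATA_UPDATE
  }
});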

Related

AWS Step Function manual approval process

I am working on a requirement where the data entered in a form needs to be validated manually. Once validated, an approval mail will be sent out, and then the data will be stored in the database. I plan to use an AWS Step Function with a task token for this.
https://aws.amazon.com/blogs/compute/implementing-serverless-manual-approval-steps-in-aws-step-functions-and-amazon-api-gateway/
I plan to use a similar design to the one in the link above. However, is there a way to avoid using API Gateway for sending the task token back to the Step Function to resume processing? Has anybody worked on a similar requirement, and how was the functionality achieved? Thank you.
A Step Function can be started by an AWS Lambda function as well.
Once the form is validated and stored in the database, you can trigger a Lambda function based on the database events (e.g., if DynamoDB is used, based on the DynamoDB Streams), and that Lambda can start the Step Function.
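And as a sketch of the token variant the question asks about: the same stream-triggered Lambda can resume a waiting execution directly with SendTaskSuccess, assuming the task token was stored on the record (the taskToken and approved attributes here are hypothetical):

import { SFNClient, SendTaskSuccessCommand } from '@aws-sdk/client-sfn';
import type { DynamoDBStreamEvent } from 'aws-lambda';

const sfn = new SFNClient({});

// Fires on DynamoDB Streams; when a record is marked approved, hand the
// stored task token back to Step Functions so the execution resumes.
export const handler = async (event: DynamoDBStreamEvent) => {
  for (const record of event.Records) {
    const image = record.dynamodb?.NewImage;
    if (record.eventName === 'MODIFY' && image?.approved?.BOOL && image.taskToken?.S) {
      await sfn.send(new SendTaskSuccessCommand({
        taskToken: image.taskToken.S,
        output: JSON.stringify({ approved: true }),
      }));
    }
  }
};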

List all LogGroups using cdk

I am quite new to the CDK, but I'm adding a LogQueryWidget to my CloudWatch Dashboard through the CDK, and I need a way to add all LogGroups ending with a suffix to the query.
Is there a way to either loop through all existing LogGroups and find the ones with the correct suffix, or a way to search through LogGroups?
const queryWidget = new LogQueryWidget({
  title: "Error Rate",
  logGroupNames: ['/aws/lambda/someLogGroup'],
  view: LogQueryVisualizationType.TABLE,
  queryLines: [
    'fields #message',
    'filter #message like /(?i)error/'
  ],
})
Is there any way I can make logGroupNames contain all LogGroups that end with a specific suffix?
You cannot do that dynamically (i.e., you can't make this work such that the query automatically adjusts when you add a new LogGroup) without using something like an AWS Lambda that periodically updates your Log Query.
However, because CDK is just code, there is nothing stopping you from making an AWS SDK API call inside the code to retrieve all the log groups (see https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/CloudWatchLogs.html#describeLogGroups-property) and then populating logGroupNames accordingly.
That way, when CDK synthesizes, it will make an API call to fetch the LogGroups, and the generated CloudFormation will contain the log groups you need. Note that this list will only be updated when you re-synthesize and re-deploy your stack.
Finally, note that there is a limit on how many log groups you can query with Logs Insights (20, according to https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html).
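Following that idea, a sketch of the synth-time lookup using the AWS SDK for JavaScript v2 (as in the linked docs); the suffix is illustrative:

import * as AWS from 'aws-sdk';

// Page through describeLogGroups, keep names ending with `suffix`,
// and cap the result at Logs Insights' 20-group query limit.
async function logGroupsWithSuffix(suffix: string): Promise<string[]> {
  const logs = new AWS.CloudWatchLogs();
  const names: string[] = [];
  let nextToken: string | undefined;
  do {
    const page = await logs.describeLogGroups({ nextToken }).promise();
    for (const group of page.logGroups ?? []) {
      if (group.logGroupName?.endsWith(suffix)) {
        names.push(group.logGroupName);
      }
    }
    nextToken = page.nextToken;
  } while (nextToken);
  return names.slice(0, 20);
}

// Resolve before building the stack, then pass the result in, e.g.:
//   const names = await logGroupsWithSuffix('-prod');
//   new LogQueryWidget({ ..., logGroupNames: names, ... });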
If you want to achieve this as part of the deployment instead, you can create a custom resource using the AwsCustomResource and AwsSdkCall classes to make the AWS SDK API call (as mentioned by @Tofig above). You can read data from the API call's response as well and act on it as you want.
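For the custom-resource route, a sketch using aws-cdk-lib v2 class names (inside a Stack or Construct); note that the call runs at deploy time and response fields come back as CloudFormation tokens, so individual fields can be referenced but the list cannot be filtered in CDK code:

import { AwsCustomResource, AwsCustomResourcePolicy, PhysicalResourceId } from 'aws-cdk-lib/custom-resources';

// Deploy-time SDK call wrapped in a custom resource; the prefix is illustrative.
const describeLogGroups = new AwsCustomResource(this, 'DescribeLogGroups', {
  onUpdate: {
    service: 'CloudWatchLogs',
    action: 'describeLogGroups',
    parameters: { logGroupNamePrefix: '/aws/lambda/' },
    physicalResourceId: PhysicalResourceId.of(Date.now().toString()),
  },
  policy: AwsCustomResourcePolicy.fromSdkCalls({
    resources: AwsCustomResourcePolicy.ANY_RESOURCE,
  }),
});

// Individual response fields resolve as tokens at deploy time:
const firstGroupName = describeLogGroups.getResponseField('logGroups.0.logGroupName');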

Can I create temporary users through Amazon Cognito?

Does Amazon Cognito support temporary users? For my use case, I want to be able to give access to external users, but limited to a time period (e.g. 7 days).
Currently, my solution is something like:
Create User in User Group
Schedule cron job to run in x days
Job will disable/remove User from User Group
This all seems to be quite manual and I was hoping Cognito provides something similar automatically.
Unfortunately there is no built-in functionality to automate this workflow, so you would need to devise your own solution.
I would suggest the below approach to handling this:
Create a Lambda function that post-processes a user sign-up. This Lambda function would create a CloudWatch Event with a schedule 7 days in the future. Using the SDK you would create the event and assign another Lambda function as its target. When you specify the target in the put_targets call, use the Input parameter to pass in your own JSON; this should contain a metadata item identifying the user.
You would then create a post confirmation Lambda trigger which would invoke the Lambda you created in the above step. This lets you schedule an event every time a user signs up.
Finally, create the target Lambda for the CloudWatch event. This will read the input passed in from the trigger and can use the AWS SDK to perform whatever Cognito operations you want, such as deleting the user.
The benefit of using these services rather than a cron is that you perform processing only when it is required. With a one-time cron script, if you have many users in this temporary group you would need to loop through every user and check whether each is ready to be removed (and perhaps sometimes never remove users).
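A sketch of the scheduling Lambda described above, in TypeScript (put_targets corresponds to PutTargetsCommand in the JS SDK v3); the CLEANUP_LAMBDA_ARN env var and the one-shot cron expression are assumptions for illustration, and EventBridge also needs permission to invoke the target Lambda:

import { CloudWatchEventsClient, PutRuleCommand, PutTargetsCommand } from '@aws-sdk/client-cloudwatch-events';
import type { PostConfirmationTriggerEvent } from 'aws-lambda';

const events = new CloudWatchEventsClient({});

export const handler = async (event: PostConfirmationTriggerEvent) => {
  const runAt = new Date(Date.now() + 7 * 24 * 60 * 60 * 1000); // 7 days out
  // One-shot cron: minutes hours day-of-month month day-of-week year.
  const cron = `cron(${runAt.getUTCMinutes()} ${runAt.getUTCHours()} ${runAt.getUTCDate()} ` +
    `${runAt.getUTCMonth() + 1} ? ${runAt.getUTCFullYear()})`;
  const ruleName = `expire-${event.userName}`;

  await events.send(new PutRuleCommand({ Name: ruleName, ScheduleExpression: cron }));
  await events.send(new PutTargetsCommand({
    Rule: ruleName,
    Targets: [{
      Id: 'cleanup',
      Arn: process.env.CLEANUP_LAMBDA_ARN!, // hypothetical: the cleanup Lambda's ARN
      // The Input JSON carries the user metadata mentioned above.
      Input: JSON.stringify({ username: event.userName, userPoolId: event.userPoolId }),
    }],
  }));
  return event;
};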
My solution for this is the following: instead of creating a post confirmation Lambda trigger, you can also create a pre authentication Lambda trigger. This trigger checks the user attribute "valid_until", which contains a unix timestamp, and only lets the user in if the value of "valid_until" is in the future. The main benefit of this solution is that you don't need any cron jobs.
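A sketch of that pre authentication trigger (Cognito exposes custom attributes with a custom: prefix, so the attribute is assumed to surface as custom:valid_until):

import type { PreAuthenticationTriggerEvent } from 'aws-lambda';

// Reject sign-in once the user's expiry timestamp is in the past;
// throwing from a pre authentication trigger denies the login.
export const handler = async (event: PreAuthenticationTriggerEvent) => {
  const validUntil = Number(event.request.userAttributes['custom:valid_until']);
  if (!Number.isFinite(validUntil) || validUntil * 1000 < Date.now()) {
    throw new Error('Account has expired');
  }
  return event;
};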

Multiple API Gateway instances, one lambda function

I am trying to build a system where multiple API Gateway instances execute the same Lambda function.
My problem is that I would like to change only the Lambda's configuration depending on which API Gateway invoked it.
Let's take the name of a database as an example: it should change depending on whether the Lambda is triggered from one API or the other.
Example:
API Gateways:
https://my-first-api-gateway.execute-api.eu-west-1.amazonaws.com
https://my-second-api-gateway.execute-api.eu-west-1.amazonaws.com
I then have one lambda function being called by the two APIs: say_hello.
This function has to retrieve a quote from a database. If the function has been called from my-first-api-gateway the lambda function has to use my_first_database and if the function has been called from my-second-api-gateway, it has to use my_second_database.
The only solution I came up with is to deploy as many Lambda functions as I have API Gateways, and then use environment variables to store the database name.
I don't like this solution, because when I update a single line of code I'll have to redeploy all of my Lambda functions (if I have 300 different databases to use, that would mean updating 300 Lambda functions at once).
Thank you for your ideas on this subject!
If you use a Lambda proxy integration you should be fine.
Full details of the event can be found here; critically, the headers Host, origin, and Referer all point to the API Gateway URI:
"headers": {
"Host": "j3ap25j034.execute-api.eu-west-2.amazonaws.com",
"origin": "https://j3ap25j034.execute-api.eu-west-2.amazonaws.com",
"Referer": "https://j3ap25j034.execute-api.eu-west-2.amazonaws.com/dev/"
}
Therefore it should be fairly trivial to branch or look up your database behaviour based on this information.
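A sketch of that branching in a proxy-integration handler; the host-to-database map is purely illustrative:

import type { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';

// Hypothetical mapping from API Gateway host to database name.
const DB_BY_HOST: Record<string, string> = {
  'my-first-api-gateway.execute-api.eu-west-1.amazonaws.com': 'my_first_database',
  'my-second-api-gateway.execute-api.eu-west-1.amazonaws.com': 'my_second_database',
};

export const handler = async (event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
  const host = event.headers['Host'] ?? event.headers['host'] ?? '';
  const database = DB_BY_HOST[host];
  if (!database) {
    return { statusCode: 400, body: `Unknown host: ${host}` };
  }
  // ... fetch the quote from `database` here ...
  return { statusCode: 200, body: JSON.stringify({ database }) };
};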

Consuming RSS feed with AWS Lambda and API Gateway

I'm a newbie Rails programmer, and I have even less experience with the AWS products. I'm trying to use Lambda to subscribe to and consume an RSS feed from YouTube. I am able to send the subscription request just fine with HTTParty from my locally hosted Rails app:
query = {'hub.mode' => 'subscribe', 'hub.verify' => 'sync', 'hub.topic' => 'https://www.youtube.com/feeds/videos.xml?channel_id=CHANNELID', 'hub.callback' => 'API Endpoint for Lambda'}
subscribe = HTTParty.post('https://pubsubhubbub.appspot.com/subscribe', :query => query)
and it will ping the Lambda function with a GET request. I know that I need to echo back a hub.challenge string, but I don't know how. The Lambda event is empty, and I didn't see anything useful in the context. I tried formatting the response in the API Gateway, but that didn't work either. So right now when I try to subscribe I get back a 'Challenge Mismatch' error.
I know that https://pubsubhubbub.googlecode.com/git/pubsubhubbub-core-0.3.html#subscribing explains what I'm trying to do better than I just did, and section 6.2.1 is where the breakdown is. How do I set up either the AWS Lambda function and/or the API Gateway to reflect back the 'hub.challenge' verification token string?
You need to use the parameter mapping functionality of API Gateway to map the parameters from the incoming query string to a parameter passed to your Lambda function. From the documentation link you provided, it looks like you'll at least need to map the hub.challenge query string parameter, but you may also need the other parameters (hub.mode, hub.topic, and hub.verify_token) depending on what validation logic (if any) that you're implementing.
The first step is to declare your query string parameters in the method request page. Once you have declared the parameters open the integration request page (where you specify which Lambda function API Gateway should call) and use the "+" icon to add a new template. In the template you will have to specify a content type (application/json), and then the body you want to send to Lambda. You can read both query string and header parameters using the params() function. In that input mapping field you are creating the event body that is posted to AWS Lambda. For example: { "challenge": "$input.params('hub.challenge')" }
Documentation for mapping query string parameters
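On the Lambda side the handler then only needs to echo the mapped value back. One assumption worth verifying: with a non-proxy integration, you may also need an integration response mapping such as $input.path('$') with content type text/plain so the hub receives the bare challenge string rather than a JSON-quoted one.

// With the mapping template above, the event arrives as { "challenge": "..." };
// returning the value verifies the subscription.
export const handler = async (event: { challenge?: string }) => {
  return event.challenge ?? '';
};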