Amazon Quicksight - is it possible to invoke APIs in Quicksight? - amazon-web-services

I am new to AWS. I have to design an architecture for a client specifically with AWS, as they already use some AWS components. Their request is to have a frontend in QuickSight. The alerts should be visible in a table, each row representing one alert. The alerts should have a status column (new/in progress/solved etc.), and there should be a button at the end of every row; when the user clicks the button, they should get a pop-up where they can enter some comments, and after clicking an OK button, the status should be updated and the comments stored.
Their expectation is to store the alert history in an AWS database. They currently use PostgreSQL on premise. The question is, can QuickSight fulfill these requirements? Is it possible to add buttons with which the user can make updates in an underlying database, and refresh QuickSight when the update is done? If not, what would be a better tool for the visualization?
(I was planning to create some APIs for making updates in a database; my original plan was to use DynamoDB, but I might need to reconsider this due to their preference for PostgreSQL.)

I don't think this is directly possible in QuickSight. There is no way to provision external controls (like buttons) and trigger actions from them. However, a possible workaround is a blended approach: embed QuickSight in a web app and trigger the update events from that web app using AWS Lambda, combined with the QuickSight APIs. This is not a proven practice yet, but I anticipate it is possible.
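To make that blended idea a bit more concrete, here is a rough Python sketch of a Lambda function that a separate web app (not QuickSight itself) could call when the user submits the pop-up: it writes the status and comment to a database and then triggers a SPICE refresh through the QuickSight API so the dashboard picks up the change. The table name, dataset ID, attribute names and event shape are all assumptions for illustration; DynamoDB is used here only as an example store, since the asker may end up on PostgreSQL instead.

```python
import os
import time
import boto3

dynamodb = boto3.resource("dynamodb")
alerts = dynamodb.Table("alerts")          # hypothetical alerts table
quicksight = boto3.client("quicksight")

def handler(event, context):
    # Write the status change and the comment the user entered in the web app.
    alerts.update_item(
        Key={"alert_id": event["alert_id"]},
        UpdateExpression="SET alert_status = :s, user_comments = :c",
        ExpressionAttributeValues={":s": event["status"], ":c": event["comments"]},
    )
    # Kick off a SPICE refresh so the QuickSight table visual reflects the change.
    quicksight.create_ingestion(
        AwsAccountId=os.environ["AWS_ACCOUNT_ID"],
        DataSetId=os.environ["DATASET_ID"],        # dataset backing the alerts visual
        IngestionId=f"refresh-{int(time.time())}",
    )
    return {"statusCode": 200}
```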
Cheers!

Related

(GCloud SQL) Is there a way to opt in to maintenance notifications via CLI or API?

I'm looking for a way to toggle this notification via gcloud CLI or API call since I need to automate it.
Is there a way of doing it? If not, is this going to be available in the future?
Having a lot of environments makes it hard to keep track of all of them via the UI.
I have checked the Cloud SQL Admin API and it seems it is not possible yet. The best way to proceed in these cases is to create a Feature Request in the Public Issue Tracker. I searched for an existing one, but I didn't find any. When you submit a Feature Request, the engineering team has more visibility into your needs and can prioritize requests by the number of users affected.

AWS service for managing state data - dynamodb/step functions/sqs?

I am building a Desktop-on-Demand solution using the AWS WorkSpaces product and I am trying to understand which AWS service best fits my requirements for managing state data for new users.
In a nutshell, the solution will create a new AWS WorkSpace (virtual desktop instance) for a user when multiple conditions are met and checks are satisfied. These tasks would be handled by multiple Lambda functions.
DynamoDB would be used as a central point for storing configuration data such as user data, user group data and deployed virtual desktop data.
Logic for Desktops creation would be implemented using Step Functions like below:
Event hook comes from Identity Management system firing a lambda function that checks if user desktop already exists in DynamoDB table
If it does not exist, another lambda creates AWS AD connector
Once this is done, another lambda builds custom image for new desktop if needed
Another lambda pulls latest data from Identity Management system and updates DynamoDB table for users and groups.
Other Lambda functions may be fired up as dependencies.
To ensure we have a transactional mechanism, we only deploy a new desktop when all conditions are met. I can think of a few ways of implementing this check:
Use a DynamoDB table for keeping state data. When all attributes in the item are in the expected state, the desktop can be created. If any Lambda fails or produces data that does not fit, don't create the desktop.
Just use Step Functions and design its logic flow so that all conditions must be satisfied before the desktop is created.
Someone suggested using an SQS queue, but I don't see how this can be used for my purpose.
What is the best way to keep this data?
Step Functions is the method I would use for this. The DynamoDB solution would also work, but this seems like exactly the sort of thing Step Functions was designed to handle.
I agree that SQS would not be a correct solution.
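As a minimal sketch of the Step Functions route (not the poster's actual implementation), the flow from the question could be expressed in Amazon States Language and created with boto3 along these lines. The Lambda ARNs, the role ARN, and the $.desktopExists field are placeholders; note that if any Task fails, the execution fails and the WorkSpace is never created, which gives you the "all conditions must pass" behaviour for free.

```python
import json
import boto3

# Amazon States Language definition: run the checks in sequence and only
# reach CreateWorkspace if the desktop does not already exist.
definition = {
    "Comment": "Provision a WorkSpace only after all checks pass",
    "StartAt": "CheckDesktopExists",
    "States": {
        "CheckDesktopExists": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:check-desktop-exists",
            "Next": "DesktopAlreadyExists"
        },
        "DesktopAlreadyExists": {
            "Type": "Choice",
            "Choices": [
                {"Variable": "$.desktopExists", "BooleanEquals": True, "Next": "Done"}
            ],
            "Default": "CreateAdConnector"
        },
        "CreateAdConnector": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:create-ad-connector",
            "Next": "BuildCustomImage"
        },
        "BuildCustomImage": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:build-custom-image",
            "Next": "SyncIdentityData"
        },
        "SyncIdentityData": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:sync-identity-data",
            "Next": "CreateWorkspace"
        },
        "CreateWorkspace": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:create-workspace",
            "End": True
        },
        "Done": {"Type": "Succeed"}
    }
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="provision-desktop",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::ACCOUNT:role/StepFunctionsExecutionRole",  # placeholder role
)
```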

Using Amazon SWF and Reactive Web Application

I have a general question regarding Amazon SWF and a web application with a reactive style. For example, I have a shopping website where users add products to a cart, validate the quantity, enter the shipping and billing address, and then payment processing, order shipping and tracking follow. If I implement a workflow for the order fulfillment, how should this be designed in SWF? Does the order fulfillment workflow begin only after all inputs are received? How does this workflow notify the customer about the progress of the order process, any validation issues, etc.? How should this be distributed?
The simplest approach is to use SWF to perform backend order fulfillment and a separate data store to hold the order information and status. When an order is configured through the website the data store is updated. Later when the order is placed a workflow instance is created for it. The workflow uses information (by loading it using activities) from the data store. Then the workflow updates the data store using activities and the website queries the status and other progress information of the workflow from the data store.
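As a rough sketch (not from the answer itself) of that first approach, a trivial SWF activity worker could look like the following: it polls for an "update order status" activity and writes the progress into a data store that the website reads. The domain, task list, table and attribute names are all invented for illustration.

```python
import json
import boto3

swf = boto3.client("swf")
orders = boto3.resource("dynamodb").Table("Orders")  # hypothetical status store

def run_worker():
    while True:
        # Long-poll SWF for the next activity task on this task list.
        task = swf.poll_for_activity_task(
            domain="order-fulfillment",             # placeholder domain
            taskList={"name": "order-activities"},  # placeholder task list
            identity="status-updater",
        )
        if not task.get("taskToken"):
            continue  # poll timed out with no work

        payload = json.loads(task["input"])
        # Persist the progress so the website can query it from the data store.
        orders.update_item(
            Key={"order_id": payload["order_id"]},
            UpdateExpression="SET order_status = :s",
            ExpressionAttributeValues={":s": payload["status"]},
        )
        swf.respond_activity_task_completed(taskToken=task["taskToken"], result="ok")

if __name__ == "__main__":
    run_worker()
```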
Another option is to use the execution state feature of SWF. See Exposing Execution State in the SWF Developer Guide.
Cadence (an open-sourced version of SWF) is going to add a query feature in the near future. It would allow synchronously querying the workflow state through the service API. It differs from execution state in that it allows multiple query types and query parameters.

Implementing a simple Restful service to store and retrieve data using AWS API Gateway/Lambda

I'm new to AWS, so apologies in advance if this question is missing some important considerations, or has incorrect assumptions.
But basically I want to implement a service on AWS to store and retrieve data from multiple clients, which may be Android apps, Windows applications, websites etc. The way I've considered doing this is via a RESTful service using API Gateway front end, with a Lambda back end and maybe an S3 bucket to hold the data.
The basic requirements are:
(1) Clients can publish data to the server, where it is stored, perhaps with some kind of key/value structure.
(2) Clients can retrieve said data by key.
(3) If possible, clients should be able to subscribe to events from the service, so that they are notified if the value of a piece of data changes. This would avoid the need to poll the service, which would presumably start racking up unnecessary charges if the data doesn't change often.
Any pointers on how to get started with this welcome!
Creating a RESTful API on top of Lambda and API Gateway is one of the main use cases for this architecture. You can think of Lambda functions as controllers with methods and API Gateway as a router that forwards requests to functions based on the URL pattern. There are many frameworks and approaches that can help out here if you don't want to write from scratch:
Lambdasync
https://medium.com/@fredrikanderzon/create-a-rest-api-on-aws-lambda-using-lambdasync-e46c68f8043f
Serverless
https://serverless.com/framework/docs/providers/aws/events/apigateway/
Swagger
https://cloudonaut.io/create-a-serverless-restful-api-with-api-gateway-swagger-lambda-and-dynamodb/
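If you prefer to write the Lambda yourself rather than start from one of those frameworks, a minimal key/value handler behind an API Gateway proxy integration could look roughly like the sketch below. The table name, route shape, and the choice of DynamoDB as the store are assumptions for illustration (see the EDIT at the end of this answer about choosing a database).

```python
import json
import boto3

table = boto3.resource("dynamodb").Table("kv-store")  # hypothetical table

def handler(event, context):
    # API Gateway proxy integration: method and path key arrive in the event.
    method = event["httpMethod"]
    key = event["pathParameters"]["key"]   # assumes a route like /items/{key}

    if method == "GET":
        item = table.get_item(Key={"key": key}).get("Item")
        if item is None:
            return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
        return {"statusCode": 200, "body": json.dumps(item["value"], default=str)}

    if method == "PUT":
        # Kept simple: assumes the body is JSON without bare floats,
        # which the DynamoDB resource would reject.
        table.put_item(Item={"key": key, "value": json.loads(event["body"])})
        return {"statusCode": 204, "body": ""}

    return {"statusCode": 405, "body": json.dumps({"error": "method not allowed"})}
```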
As far as event subscriptions go (requirement #3) you can model this in many datastores, certainly in a relational/SQL database, with a table like this:
Subscription (key_of_interest, user_id, events_of_interest)
I'm leaving out data types for you to figure out, but hopefully you get the idea. After each data modification on a particular key, check whether that key is of interest in the subscription table, then wire up a response to the users who indicated interest. The details of this of course depend on your particular requirements. A caution though: this approach will increase the cost of data modifications because of the additional overhead needed to process subscriptions.
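As a rough illustration of that subscription check (the table layout mirrors the Subscription(key_of_interest, user_id, events_of_interest) example above, but the notification mechanism and all names are my assumptions, not part of the answer), a post-write hook could look like this:

```python
import json
import boto3
from boto3.dynamodb.conditions import Attr

subscriptions = boto3.resource("dynamodb").Table("Subscription")  # hypothetical table
sns = boto3.client("sns")

def notify_subscribers(key, event_type, new_value):
    # Find subscribers interested in this key (a scan keeps the sketch short;
    # a real table would be keyed or indexed on key_of_interest).
    resp = subscriptions.scan(FilterExpression=Attr("key_of_interest").eq(key))
    for sub in resp.get("Items", []):
        if event_type in sub.get("events_of_interest", []):
            sns.publish(
                TopicArn="arn:aws:sns:REGION:ACCOUNT:data-changes",  # placeholder topic
                Message=json.dumps(
                    {"user_id": sub["user_id"], "key": key, "value": new_value}
                ),
            )
```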
EDIT: One other thing I forgot. S3 is better suited for non-structured data (think 'files'). For relational databases, check out RDS. For a simple NoSQL database you might use DynamoDB, or host your own NoSQL database of choice on an EC2 instance.

What is the "proper" way to use DynamoDB for an iOS app?

I've just started messing around with AWS DynamoDB in my iOS app and I have a few questions.
Currently, I have my app communicating directly to my DynamoDB database. I've been reading around lately and people are saying this isn't the proper way to go about getting data from my database.
By this I mean I just have a function in my code that queries my DynamoDB database and returns the result.
The way I do it works, but is there a better way I should be going about this?
Amazon DynamoDB itself is a highly scalable service, and standing up another server in front of it means you also have to scale that service in line with the RCU/WCU configured for your tables, which you can and should avoid.
If your mobile application doesn't need a backend server and you can perform all the business functions from the mobile device, then you should probably think about
Use the AWS DynamoDB SDK for iOS to write your client application that runs on the mobile device.
Use the AWS Token Vending Machine to authenticate your mobile users and grant them credentials for running operations on DynamoDB tables.
Control access (i.e. what operations should be allowed on which tables, etc.) using IAM policies.
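For that last point, a fine-grained policy might look roughly like the sketch below: each authenticated user may only touch items whose partition key equals their own identity. The policy name, table name, and the Cognito-style substitution variable are my assumptions (the Token Vending Machine relies on web identity federation, so the exact condition key depends on the identity provider you use).

```python
import json
import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:REGION:ACCOUNT:table/UserData",
            "Condition": {
                # Restrict access to items whose partition key is the caller's identity.
                "ForAllValues:StringEquals": {
                    "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"]
                }
            },
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="user-scoped-dynamodb-access",   # placeholder name
    PolicyDocument=json.dumps(policy_document),
)
```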
HTH.
From what you say, I can guess that you are talking about a way to distribute data to many clients (iOS apps).
There are a few integration patterns (a very good book on this: Enterprise Integration Patterns), one of which is called shared database. It is essentially about using a common database for multiple clients to share the data. The main drawback of that pattern (in your case) is that you are making assumptions about what the database schema looks like. It can potentially cause you some headaches supporting the schema in the future if your business logic changes.
The more advanced approach would be to send events on every change in your data instead of writing changes to the database directly from the client apps. This way you can add additional processing to the events before the data they carry is written to the database. For example, you may want to change the event format in a new version of your app but still want to support legacy users, so you add a translation procedure which transforms both types of events into the format that fits the database schema. It's basically a question of whether to work with diffs vs snapshots.
You should be aware of the added complexity of working with events; it can be overkill if your app is simple and changes to the schema are unlikely.
Also consider that you can do data preprocessing using DynamoDB Streams, which gives you some of the advantages of using events while keeping the implementation simple.
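A tiny sketch of what that Streams-based preprocessing could look like: a Lambda function wired to the table's DynamoDB Stream as an event source receives the change records and does the extra processing there. The handler below is illustrative only.

```python
def handler(event, context):
    for record in event["Records"]:
        if record["eventName"] not in ("INSERT", "MODIFY"):
            continue
        keys = record["dynamodb"]["Keys"]
        new_image = record["dynamodb"].get("NewImage", {})
        # Do the extra processing here (e.g. translate an old event format,
        # enrich the change, or forward it to another store or topic).
        print(f"change on {keys}: {new_image}")
```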