AWS QuickSight and InfluxDB - amazon-web-services

Hi, I have InfluxDB installed on an AWS EC2 instance and want to show its data in AWS QuickSight. Since I don't see InfluxDB in QuickSight's predefined data source list, is it possible to create a data source for InfluxDB and show its data in a QuickSight view? How can I define my own custom data source for InfluxDB?
As far as I know, InfluxDB's compatibility with Grafana is good, so I'm not sure whether I will be able to get the data into a QuickSight view. Please let me know if you are aware of how I can achieve this.
Thanks.

Currently, AWS QuickSight doesn't support reading metrics from InfluxDB.
To visualize your metrics in QuickSight, do the following (a sketch follows below):
Collect the data from InfluxDB and store it in S3, the Glue Data Catalog, Athena, or a relational database.
Point QuickSight at the stored data.
See the official documentation for more details:
https://docs.aws.amazon.com/quicksight/latest/user/welcome.html
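For the first step, here is a minimal sketch in Python, assuming InfluxDB 1.x with the influxdb client library and boto3; the host, database, measurement, and bucket names are placeholders, not anything prescribed by AWS:

    # Pull recent rows from InfluxDB and stage them in S3 as CSV so that
    # Athena or QuickSight can read them. All names here are placeholders.
    import csv
    import io

    import boto3
    from influxdb import InfluxDBClient  # pip install influxdb

    influx = InfluxDBClient(host="my-ec2-host", port=8086, database="metrics")
    result = influx.query('SELECT * FROM "cpu_usage" WHERE time > now() - 1d')
    points = list(result.get_points())  # assumes the query returned rows

    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=points[0].keys())
    writer.writeheader()
    writer.writerows(points)

    boto3.client("s3").put_object(
        Bucket="my-quicksight-staging",
        Key="influx/cpu_usage.csv",
        Body=buf.getvalue(),
    )

Run something like this on a schedule (cron or Lambda), then point a QuickSight S3 or Athena data source at the bucket.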

InfluxDB is not yet a supported data source. It's worth using the Send Feedback option in QuickSight to ask whether it is on the roadmap.

Related

AWS - DynamoDB table visualization

What is the best or recommended way to visualize data from a DynamoDB table? We need to create a simple dashboard with graphs connected to a data table in the AWS account.
We would prefer to use one of the AWS services to keep everything in one place. I have read about QuickSight, but it would be great to hear about some real-world experience.
You can use QuickSight to visualize your table by using the Athena-DynamoDB connector. This allows you to use DynamoDB as a table source in Athena, which can then act as a source for QuickSight.
https://docs.aws.amazon.com/athena/latest/ug/connectors-dynamodb.html
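If you go this route, the query side can be scripted too. A minimal sketch, assuming the connector has been deployed and registered as a catalog named dynamo_catalog; the table and bucket names are also placeholders:

    # Run an Athena federated query against the DynamoDB connector;
    # QuickSight can then use the same Athena catalog as a data source.
    import boto3

    athena = boto3.client("athena")
    response = athena.start_query_execution(
        QueryString='SELECT * FROM "dynamo_catalog"."default"."orders" LIMIT 10',
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
    )
    print(response["QueryExecutionId"])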

Build S3 Data Lake Using DynamoDB Data Source

I'm a data engineer using AWS. We want to build a data pipeline in order to visualise our DynamoDB data in QuickSight. As you know, it's not possible to connect DynamoDB directly to QuickSight; you have to go through S3.
S3 will be our data lake. The issue is that the data updates frequently (for example, a column name can change, or a customer's status can evolve).
So I'm looking for a batch solution that always gets the latest data from DynamoDB into my S3 data lake, so I can visualise it in QuickSight.
Thank you.
You can access your tables in the DynamoDB console and export data to S3 under the Streams and Exports tab. This blog post from AWS explains just what you need.
You could also try this approach with Athena instead of S3.
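For a scheduled batch refresh, the same export can be driven from code. A minimal sketch using DynamoDB's native point-in-time export, which requires PITR to be enabled on the table; the table ARN and bucket names are placeholders:

    # Batch-export a DynamoDB table to S3; schedule this (e.g. with
    # EventBridge) to keep the S3 data lake reasonably fresh.
    import boto3

    dynamodb = boto3.client("dynamodb")
    export = dynamodb.export_table_to_point_in_time(
        TableArn="arn:aws:dynamodb:us-east-1:123456789012:table/orders",
        S3Bucket="my-datalake",
        S3Prefix="dynamodb/orders/",
        ExportFormat="DYNAMODB_JSON",
    )
    print(export["ExportDescription"]["ExportStatus"])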

AWS QuickSight - Translate IDs to Names Using a Foreign Key

I have live data in DocumentDB
and provisioning data in Aurora (Postgres).
I would like to have both datasets in QuickSight for BI.
The DocumentDB data uses internal IDs;
the SQL side holds the mapping to meaningful names.
Is there a way to achieve such a thing?
I have a Glue job that extracts the data from DocumentDB and outputs it as JSON in S3.
Regarding Aurora, QuickSight integrates natively.
Thanks
I managed to achieve this by adding another data source to the same dataset.
QuickSight then enables you to do a join between the data sources.
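The same cross-source join can also be defined through the QuickSight API. A rough sketch only, assuming two existing QuickSight data sources; the account ID, ARNs, table IDs, and column names are all placeholders, and the join mirrors what the dataset editor UI builds:

    # Create a QuickSight dataset that joins the DocumentDB extract
    # (via Athena/S3) to the Aurora mapping table on the internal ID.
    import boto3

    qs = boto3.client("quicksight")
    qs.create_data_set(
        AwsAccountId="123456789012",
        DataSetId="docdb-with-names",
        Name="DocumentDB events with names",
        ImportMode="SPICE",
        PhysicalTableMap={
            "events": {"RelationalTable": {
                "DataSourceArn": "arn:aws:quicksight:us-east-1:123456789012:datasource/athena-docdb",
                "Name": "events",
                "InputColumns": [{"Name": "entity_id", "Type": "STRING"}],
            }},
            "names": {"RelationalTable": {
                "DataSourceArn": "arn:aws:quicksight:us-east-1:123456789012:datasource/aurora-postgres",
                "Name": "entity_names",
                "InputColumns": [{"Name": "id", "Type": "STRING"},
                                 {"Name": "name", "Type": "STRING"}],
            }},
        },
        LogicalTableMap={
            "events": {"Alias": "events", "Source": {"PhysicalTableId": "events"}},
            "names": {"Alias": "names", "Source": {"PhysicalTableId": "names"}},
            "joined": {"Alias": "joined", "Source": {"JoinInstruction": {
                "LeftOperand": "events",
                "RightOperand": "names",
                "Type": "LEFT",
                "OnClause": "entity_id = id",
            }}},
        },
    )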

Amazon S3 to Amazon Athena to Tableau

I am working on a project to get data from an Amazon S3 bucket into Tableau.
The data needs to be reorganised and combined from multiple .CSV files. Is Amazon Athena capable of connecting S3 to Tableau directly, and is it relatively easy/cheap? Or should I instead look at another software package to achieve this?
I am looking to visualise the data and provide a forecast based on the observed trend (I may need to incorporate functions that generate data to fit a linear regression).
It appears that Tableau can query data from Amazon Athena.
See: Connect to your S3 data with the Amazon Athena connector in Tableau 10.3 | Tableau Software
Amazon Athena can query multiple CSV files in a given path (directory) and run SQL against the data. So, it sounds like this is a feasible solution for you.
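On the Athena side, the CSV files just need a table definition over their S3 path. A minimal sketch, with placeholder bucket, database, and column names:

    # Define an Athena external table over a directory of CSV files in S3;
    # Tableau (or QuickSight) can then query it with SQL.
    import boto3

    ddl = """
    CREATE EXTERNAL TABLE IF NOT EXISTS sales_csv (
        order_id string,
        amount   double,
        sold_at  string
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    LOCATION 's3://my-bucket/sales/'
    TBLPROPERTIES ('skip.header.line.count' = '1')
    """

    boto3.client("athena").start_query_execution(
        QueryString=ddl,
        QueryExecutionContext={"Database": "analytics"},
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
    )

Every .CSV file under the LOCATION path is treated as part of the same table, which covers the "combine multiple files" requirement.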
Yes, you can integrate Athena with Tableau to query your data in S3. There are plenty of resources online that describe how to do that, e.g. link 1, link 2, link 3. But obviously, the tables that define the meta information of your data have to be defined beforehand.
Amazon Athena pricing is based on the amount of data scanned by each query, i.e. $5 per TB of data scanned (so a query that scans 10 GB costs about $0.05). So it all comes down to how much data you have and how it is structured, i.e. partitioning, bucketing, file format, etc. Here is a nice blog post that covers these aspects.
While you prototype a dashboard, there is one thing to keep in mind. By default, each time you change the list of parameters, filters, etc., Tableau automatically sends a request to AWS Athena to execute your query. Luckily, you can disable auto-querying of the data source and run it manually instead.

Is it possible to see a history of interaction with tables in a Redshift schema?

Ultimately, I would like to obtain a list of tables in a particular schema that haven't been queried in the last two weeks (say).
I know that there are many system tables that track various things about how the Redshift cluster is functioning, but I have yet to find one that I could use to obtain the above.
Is what I want to do possible?
Please have a look at our "Unscanned Tables" query: https://github.com/awslabs/amazon-redshift-utils/blob/master/src/AdminScripts/unscanned_table_summary.sql
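A similar check can also be scripted against STL_SCAN via the Redshift Data API. A rough sketch; note that STL system tables retain only a few days of history, so for a full two-week window you would snapshot this result regularly or fall back to the audit logs. Cluster, database, and schema names are placeholders:

    # List tables in a schema with no scans recorded in STL_SCAN.
    import boto3

    sql = """
    SELECT ti."schema", ti."table"
    FROM svv_table_info ti
    WHERE ti."schema" = 'myschema'
      AND ti.table_id NOT IN (
          SELECT DISTINCT tbl FROM stl_scan
          WHERE starttime > dateadd(day, -14, getdate())
      );
    """

    rsd = boto3.client("redshift-data")
    resp = rsd.execute_statement(
        ClusterIdentifier="mycluster",
        Database="dev",
        DbUser="admin",
        Sql=sql,
    )
    print(resp["Id"])  # poll get_statement_result with this Id for the rows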
If you have enabled audit logging for the cluster, activity data is stored in the S3 bucket you configured when enabling logging.
According to the AWS documentation, the audit log bucket structure is as follows:
AWSLogs/AccountID/ServiceName/Region/Year/Month/Day/AccountID_ServiceName_Region_ClusterName_LogType_Timestamp.gz
For example: AWSLogs/123456789012/redshift/us-east-1/2013/10/29/123456789012_redshift_us-east-1_mycluster_userlog_2013-10-29T18:01.gz
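Given that layout, pulling a day's worth of user activity logs is straightforward. A minimal sketch, with placeholder bucket and account/cluster details:

    # List the Redshift user activity log files for one day, following
    # the documented audit-log key layout shown above.
    import boto3

    s3 = boto3.client("s3")
    prefix = "AWSLogs/123456789012/redshift/us-east-1/2013/10/29/"
    pages = s3.get_paginator("list_objects_v2").paginate(
        Bucket="my-audit-log-bucket", Prefix=prefix)
    for page in pages:
        for obj in page.get("Contents", []):
            if "userlog" in obj["Key"]:
                print(obj["Key"])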