Output table in slack slash command - web-services

I want my slash command to output data in a table format.
I know that I will have to set up a custom integration for this. I did that using the GET method.
I can set up my own web service on an EC2 machine, but how do I make sure the data comes back in a table format?
Maybe something like this:
My problem is: how can I present the available data in a tabular format?

It's unfortunately not possible to format Slack messages as a table in this way. You would need to resort to generating an image and referencing it in a message attachment. There is limited support for displaying simple fields and labels, but that may not quite meet your needs.
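For the fields-and-labels approach, a slash command can respond with a JSON payload whose message attachments contain "short" fields, which Slack lays out in two columns, the closest built-in thing to a table. Below is a minimal Flask sketch of such a response; the endpoint path and the example rows are assumptions, not from the original question.

# Minimal sketch: a slash-command endpoint that replies with attachment
# "fields" (Slack renders short fields side by side in two columns).
# The route path and the example data are placeholders.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/slack/command", methods=["GET", "POST"])
def table_command():
    rows = [("us-east-1", "42"), ("eu-west-1", "17")]  # placeholder data
    fields = []
    for region, count in rows:
        fields.append({"title": "Region", "value": region, "short": True})
        fields.append({"title": "Count", "value": count, "short": True})
    return jsonify({
        "response_type": "in_channel",
        "attachments": [{"fallback": "results table", "fields": fields}],
    })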

We had the same problem, so we made a Slack app and made it free for the public. Please feel free to check it out: https://rendreit.digital
After installing the app to your Slack, you can run /tableit and paste in CSV data or anything you copied from a spreadsheet (Excel or Google Sheets).
It also lets you preview the rendered table before you send it to the chat.

Related

Cloud Data Fusion - Input HTTP Post Body from BQ rows

I am a new Cloud Data Fusion user and have run into a problem I can't find a solution for.
I have a table in BQ with ~150 rows of latitude and longitude points. For each row, I want to pass the lat and lng into an HTTP POST request to get a result from the TravelTime API. Ultimately I want to have a table with all my original rows plus a column with the response for each one.
Where I am stuck is that so far I have only been able to hard-code the body of the POST request into the HTTP source plugin and successfully write the response to a file in GCS. However, I expect the rows will change over time, so I would like to dynamically generate the POST request body from my BQ data and pass it in.
Is this possible with Data Fusion? Is this an advisable approach? Or is there a better way?
As @Albert Shau and @user3750486 agreed in the comments:
There is no out-of-the-box way to pass data from BQ rows dynamically into a POST HTTP request.
A possible workaround is to have an HTTP transform plugin that sits in the middle of the pipeline and can be configured to make calls based on the input data. You would then have a BQ source, followed by that plugin, followed by the GCS sink. I think your best bet would be to write a custom transform.
This can be done by following the link that @Albert Shau provided, or by writing custom code using GCP's Cloud Functions, as the OP did.
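As a rough illustration of the Cloud Functions route (not the OP's actual code), the sketch below reads the lat/lng rows from BigQuery, POSTs each one to the TravelTime API, and appends the raw responses to a results table. The endpoint URL, request body, header names, and table names are all placeholders.

# Sketch of an HTTP-triggered Cloud Function: BQ rows -> one POST per row -> BQ results table.
# TRAVELTIME_URL, TRAVELTIME_KEY, and the table/column names are assumptions.
import os
import requests
from google.cloud import bigquery

def enrich_rows(request):
    traveltime_url = os.environ["TRAVELTIME_URL"]   # hypothetical endpoint
    api_key = os.environ["TRAVELTIME_KEY"]          # hypothetical credential

    bq = bigquery.Client()
    rows = bq.query(
        "SELECT id, lat, lng FROM `my_project.my_dataset.points`"
    ).result()

    results = []
    for row in rows:
        resp = requests.post(
            traveltime_url,
            json={"lat": row["lat"], "lng": row["lng"]},  # placeholder body
            headers={"X-Api-Key": api_key},
            timeout=30,
        )
        resp.raise_for_status()
        results.append({"id": row["id"], "response": resp.text})

    # Assumes a results table with columns: id, response (STRING)
    errors = bq.insert_rows_json("my_project.my_dataset.point_responses", results)
    return {"rows_written": len(results), "insert_errors": errors}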
Posting the answer as community wiki for the benefit of the community that might encounter this use case in the future.
Feel free to edit this answer for additional information.

Attachment not filtered - Power Automate Send email with PBI pdf report attached

I tried to follow the steps in this article, which describes exactly my situation:
(https://wisedatadecisions.com/2021/01/18/filter-email-power-bi-report-pages-using-power-automate-excel/)
Everything worked as described except for the part where it was supposed to attach a filtered PDF to the email: the URL filter was not working, and the PDF report attached to the email contains everything, unfiltered.
I also tried removing the spaces and using x0020 in their place per this article (https://learn.microsoft.com/en-us/power-bi/collaborate-share/service-url-filters), with no luck either.
Has anyone had a similar experience and can share some knowledge? Any idea what I've missed or done wrong?
Sample Excel table below. The Power Automate flow was able to extract and use all of columns A-E and sent the email; only the attached PDF is not actually filtered. I think something's not right with the URLFilter column, but I don't know what's wrong.
thanks,
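For reference (not from the original post), the documented URL-filter pattern is ?filter=Table/Field eq 'value', with spaces inside table or column names replaced by _x0020_. Below is a small sketch of how a URLFilter value could be built; the report URL, table, and column names are made up.

# Build a Power BI report URL filter following the documented pattern
#   ?filter=Table/Field eq 'value'
# Spaces in table/column names become _x0020_; the value is URL-encoded.
# Depending on the client (browser vs. Power Automate), the remaining
# spaces around "eq" may also need to be percent-encoded.
from urllib.parse import quote

def url_filter(report_url, table, column, value):
    table = table.replace(" ", "_x0020_")
    column = column.replace(" ", "_x0020_")
    return f"{report_url}?filter={table}/{column} eq '{quote(value)}'"

print(url_filter(
    "https://app.powerbi.com/groups/me/reports/REPORT_ID/ReportSectionX",
    "Sales Data", "Region", "North America",
))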

Is there any way in DSS to read data from multiple sheets of a single Excel file and insert that data into multiple database tables using WSO2 6.4.0?

I am new to WSO2 DSS 6.4.0. I have to retrieve data from multiple sheets of a single Excel file and insert that data into multiple tables. Please help me do this, or just guide me.
It looks like you need fairly sophisticated logic to implement this. Excel files may be a source of data, but first of all, how would WSO2 DSS know the moment it must start reading the Excel file? That sounds like a job for WSO2 ESB, which supports a virtual file system and can track a directory and generate an event if there are any changes.
Why don't you use WSO2 ESB to read the file sheet by sheet and insert the data?
It provides the necessary tools (mediators) to do this.
Either way, it does look like an ETL job.
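Outside of WSO2, the underlying ETL logic is fairly simple; here is a rough Python sketch using pandas and SQLAlchemy, just to illustrate the shape of the job. The file path, connection string, and sheet-to-table mapping are assumptions.

# Rough ETL sketch (not WSO2-specific): read every sheet of one Excel file
# and append each sheet to a database table of the same name.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:password@localhost/mydb")  # placeholder

# sheet_name=None returns a dict of {sheet name: DataFrame}
sheets = pd.read_excel("input.xlsx", sheet_name=None)

for sheet_name, frame in sheets.items():
    # One target table per sheet; adjust the mapping to your actual schema.
    frame.to_sql(sheet_name.lower(), engine, if_exists="append", index=False)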

Pulling Instagram data into Google Big Query

I am new to development, so I am sorry if this is a really basic question. I am trying to access some of the data available from Instagram's API, as documented here: https://developers.facebook.com/docs/instagram-api/insights.
I would like some kind of data repository to pull the data into, so I am looking at Google BigQuery to see if I can pull in the data. (The ultimate destination will be Power BI so I can publish online.)
Looking at the Facebook request code: is it possible to put this into Google BigQuery to return the data?
I am replacing the 'instagram-business-user-id' with an ID I have generated already, but it feels like it perhaps needs more markup to let BigQuery know what language it is in.
Any help would be much appreciated.
GET graph.facebook.com/{instagram-business-user-id}/insights
?metric=impressions,reach,profile_views
&period=day
Looking at the Facebook request code: is it possible to put this into Google BigQuery to return the data?
Yes, it's absolutely possible using the BigQuery API or the BigQuery CLI.
You can use this pseudo-workflow as an example (using the BigQuery API):
1. Create a table in BigQuery with the desired schema. For this you also have two options:
   - Save the result in one column holding the full JSON; this means the SELECT you need will use JSON_EXTRACT to fetch specific data.
   - Process the JSON in your code and save it in specific columns to simplify the SELECT statement.
2. Call Instagram's API.
3. Call the BigQuery API or the BigQuery CLI to insert the data; this link provides one way to do this.
4. Call the BigQuery API or the BigQuery CLI to fetch the data; this link provides one way to do this.
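As a rough illustration of that workflow (taking the first schema option, the full JSON in one column), a Python sketch could look like the following; the dataset/table name and the way the access token is supplied are assumptions.

# Sketch: call the Instagram Graph API insights endpoint, then insert the
# raw JSON response into a BigQuery table that has a single STRING column.
# The table id, metrics, and token handling are placeholders.
import json
import os
import requests
from google.cloud import bigquery

ig_user_id = os.environ["IG_BUSINESS_USER_ID"]
access_token = os.environ["IG_ACCESS_TOKEN"]

resp = requests.get(
    f"https://graph.facebook.com/{ig_user_id}/insights",
    params={
        "metric": "impressions,reach,profile_views",
        "period": "day",
        "access_token": access_token,
    },
    timeout=30,
)
resp.raise_for_status()

bq = bigquery.Client()
# Assumes a table created with one column: raw_json STRING
errors = bq.insert_rows_json(
    "my_project.instagram.insights_raw",
    [{"raw_json": json.dumps(resp.json())}],
)
print("insert errors:", errors)

With the raw JSON stored this way, JSON_EXTRACT (or JSON_VALUE) in the SELECT can pull out specific metrics, matching the first option above.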

Tableau Desktop: Live Vs Extract and Publishing to Tableau Server

I have a question that I can't seem to find a straightforward answer to. I am loading a roughly 20 GB CSV file into Tableau Desktop to create some worksheets for future internal views. Where I am stumbling is whether to use an Extract or a Live data source. The data itself will not change, only the reports or worksheets generated based on the data. The computations within the worksheets on Tableau Desktop take forever to complete.
On to publishing to Tableau Server: I am under the assumption that I must upload my locally stored file to the server for future use. Would it be better to find a network drive for the Tableau Server data source to point to?
In short, what is the best method to manipulate a large dataset in Tableau Desktop and present it on Tableau Server? Additionally, what regex flavor does Tableau follow? It doesn't seem to follow the standards I use for Python.
Thanks!
When publishing a workbook with a file-based data source, choose the Include external files option in the Publish to Server dialog box. This eliminates the need to have the file in a network location accessible by the server.
This approach only works if the data doesn't change. It remains static and embedded in the workbook. Under this option, if the data changes and you want your viz to reflect changes, you would need to update the data in Desktop and republish.
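If you would rather script the publish step than use the Desktop dialog, the tableauserverclient library can upload a packaged workbook (.twbx) that already embeds the CSV or extract. This is just a sketch; the server address, token, site, and project id are placeholders.

# Sketch: publish a packaged workbook (.twbx with the data embedded)
# using the tableauserverclient library. All credentials and ids are placeholders.
import tableauserverclient as TSC

auth = TSC.PersonalAccessTokenAuth("token-name", "token-secret", site_id="my-site")
server = TSC.Server("https://tableau.example.com", use_server_version=True)

with server.auth.sign_in(auth):
    workbook = TSC.WorkbookItem(project_id="PROJECT-ID")
    workbook = server.workbooks.publish(
        workbook, "my_workbook.twbx", mode=TSC.Server.PublishMode.Overwrite
    )
    print("Published workbook id:", workbook.id)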
As you mentioned, you have to connect/load your 20 GB CSV file into Tableau Desktop for the visualizations, if I understand your requirement correctly.
Please find the steps below:
You have to manually update the data in that CSV file whenever you want to show it in Tableau (note: make sure the name of the CSV file and its columns remain the same).
After that, on opening Tableau, click "Refresh data source" to visualize the latest data in the CSV file.
In this case the connection type (Live vs. Extract) doesn't play much of a role; however, I would still use the Extract technique, as I understand it the extract will update once the latest data is loaded.