Exporting Redmine Production Log into Excel or CSV - redmine

As the title says, is there any way I can export the Redmine production log into Excel or CSV? I need the production logs to see which users logged in during a certain time frame, the pages they viewed, and the changes they may have made. The log file is a big chunk of text that would be time-consuming to parse by hand.
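There is no built-in export, but the log is regular enough to script around. Below is a minimal Python sketch, assuming the default Rails/Redmine `production.log` layout (a `Started ...` line per request plus Redmine's `Current user:` line); the regexes are assumptions to check against your actual file:

```python
import csv
import re

# Patterns for Redmine's default production.log lines (format assumed;
# adjust if your logger is configured differently).
STARTED = re.compile(r'^Started (\w+) "([^"]+)" for ([\d.:]+) at (.+)$')
USER = re.compile(r'^Current user: (\S+)')

rows, current = [], None
with open("production.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        m = STARTED.match(line)
        if m:
            # A new request begins: flush the previous one, if any.
            if current:
                rows.append(current)
            method, path, ip, timestamp = m.groups()
            current = {"time": timestamp, "method": method,
                       "path": path, "ip": ip, "user": ""}
            continue
        m = USER.match(line)
        if m and current:
            current["user"] = m.group(1)
    if current:
        rows.append(current)

with open("requests.csv", "w", newline="", encoding="utf-8") as out:
    writer = csv.DictWriter(
        out, fieldnames=["time", "user", "method", "path", "ip"])
    writer.writeheader()
    writer.writerows(rows)
```

The resulting CSV opens directly in Excel, and filtering by the `user` and `time` columns gives the per-user activity in a given time frame.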

Related

Got some data to store in the cloud; what is the difference between Google Cloud Storage, Firestore, Firebase Realtime DB, S3, etc.?

I am learning to make an app. It lets users input data and shows it to other users (for example, a social media app where a user uploads something).
The data might be text, images, etc. (but no video storage, because I want to keep costs low).
I am looking at cloud storage services, but I have no idea what the difference is between Firestore, Firebase Realtime DB, S3, etc.
For example, if a user uploads a pasta image and a text saying 'that is good':
What database should I choose? Will the text and the image need to go into separate databases?
If they need to be separate, what would the structure look like?
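The common pattern is indeed to split them: binary blobs go to object storage, and the text/metadata go to a database record that holds a pointer to the blob. A minimal sketch assuming S3 for the image and Firestore for the text (the bucket and collection names are made up); in practice you would usually stay within one vendor, e.g. Firebase Storage + Firestore, or S3 + DynamoDB:

```python
import uuid

import boto3
from google.cloud import firestore

s3 = boto3.client("s3")
db = firestore.Client()

def save_post(user_id: str, caption: str, image_bytes: bytes) -> str:
    """Store the image in object storage and the text in a document DB."""
    post_id = str(uuid.uuid4())
    key = f"posts/{post_id}.jpg"

    # Binary data goes to object storage (bucket name is hypothetical).
    s3.put_object(Bucket="my-app-uploads", Key=key, Body=image_bytes,
                  ContentType="image/jpeg")

    # Text and a pointer to the image go to the document database.
    db.collection("posts").document(post_id).set({
        "user": user_id,
        "caption": caption,           # e.g. "that is good"
        "image_key": key,             # reference, not the bytes themselves
        "created": firestore.SERVER_TIMESTAMP,
    })
    return post_id
```

Keeping the image out of the database keeps reads cheap and fast; the record stays small, and the app fetches the image from the storage bucket only when it needs to render it.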

Storing raw text data vs analytics

I've been working on a hobby project, a Django/React site that provides analytics and data visualization for texts, most likely hosted on AWS. The user uploads a CSV of texts. Currently, the texts are stored in the DB, and when the user calls the API it runs the analytics on them and returns the results. I'm trying to decide whether to keep storing the raw text data (what I have now), or to run the analytics once at upload time and then discard the texts, storing only the analytics.
My thoughts are:
Raw data:
Pros:
- changes to the analytics won't require re-uploading
- probably a simpler DB schema
Cons:
- more sensitive data (not sure how safe it is in a Django DB on AWS, or what measures I could put in place to protect it)
- more data to store (not sure what it would cost to store a lot of rows of text)
Analytics only:
Pros:
- less sensitive, less space
Cons:
- if something goes wrong with the analytics on the first run (without throwing an error), the stored results will be inaccurate and stay that way (see the sketch after this list for one mitigation)
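One middle path that softens the main con: compute at upload time, but stamp every row with the version of the analytics code that produced it, so a buggy run can be identified and re-requested later. A minimal Django sketch; the model, fields, and `score_sentiment()` are hypothetical stand-ins for your own analytics:

```python
# models.py -- keep only computed results per text, plus a version stamp
# so a buggy analytics run can be found and (re)handled later.
from django.db import models

class TextAnalytics(models.Model):
    upload_id = models.CharField(max_length=64)    # which CSV batch
    word_count = models.IntegerField()
    sentiment = models.FloatField()
    analytics_version = models.IntegerField()      # bump when the code changes
    created = models.DateTimeField(auto_now_add=True)

# ingest.py -- run the analytics once at upload time; the raw text is
# never written to the database, which addresses the sensitivity concern.
import csv
import io

ANALYTICS_VERSION = 3

def ingest_csv(file_obj, upload_id):
    reader = csv.DictReader(io.TextIOWrapper(file_obj, encoding="utf-8"))
    TextAnalytics.objects.bulk_create(
        TextAnalytics(
            upload_id=upload_id,
            word_count=len(row["text"].split()),
            sentiment=score_sentiment(row["text"]),  # your analytics function
            analytics_version=ANALYTICS_VERSION,
        )
        for row in reader
    )
```

The trade-off remains that a silent bug in `score_sentiment()` cannot be fixed retroactively without the raw texts, but the version column at least makes affected rows queryable.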

Append CSV Data to Apache Superset Dataset

Using CSV upload in Apache Superset works as expected. I can use it to add data from a CSV to a database, e.g. Postgres. Now I want to append data from a different CSV to this table/dataset. But how?
The CSVs all have the same format, but there is a new one every day. In the end I want a dashboard that updates every day, taking the new data into account.
Generally, I agree with Ana that if you want to repeatedly upload new CSV data, you're better off operationalizing this into some type of process or pipeline that runs on a schedule.
But if you need to stick with the CSV upload route through the Superset UI, you can set the Table Exists field to Append instead of Replace.
You can find a helpful GIF in the Preset docs: https://docs.preset.io/docs/tips-tricks#append-csv-to-a-database
You'll probably be better served by creating a simple process that loads the CSV into a database table and then querying that table from Superset.
Superset is a tool for visualizing data. It allows CSV upload for quick-and-dirty, one-off charts, but if this is going to be a recurring, structured, periodic data load, it's better to use a proper integration tool. There are zillions of ETL (Extract-Transform-Load) tools out there (or scripts can do it); ask whether your company is already using one, or choose whichever is simplest for you.
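For the scheduled route, the load script can be very small. A sketch using pandas against Postgres, run from cron once a day; the connection string, file path, and table name are placeholders:

```python
import datetime

import pandas as pd
from sqlalchemy import create_engine

# Point at the same database the Superset dataset reads from.
engine = create_engine("postgresql://superset:secret@dbhost:5432/analytics")

today = datetime.date.today().isoformat()
df = pd.read_csv(f"/data/exports/report-{today}.csv")

# if_exists="append" adds rows without touching existing data -- the
# programmatic equivalent of Superset's "Table Exists: Append" option.
df.to_sql("daily_report", engine, if_exists="append", index=False)
```

With new rows landing in the same table each day, the existing Superset dataset and dashboard pick up the fresh data on their next refresh.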

Monitor and detect a change in an Excel file and refresh a section of the web page

I have an Excel file whose data is refreshed by a third-party application.
Problem to solve: my Django web application should continuously monitor that Excel file and detect changes. Whenever there is a change, a particular section of the web page should be refreshed.
Could somebody please give suggestions on how to achieve this?
Basically, you need a scheduled process (e.g. a cron job) that checks whether the current version of your file differs from the previous one. You could read the file's contents (with `import csv` for CSV, or with pandas via `import pandas as pd`), store them somewhere (e.g. a temporary file), and on the next scheduled run read the contents again and compare them against what you previously stored. If they differ, use Ajax to refresh the relevant section of your website (or use a real-time library to refresh it automatically), and store the new contents as the baseline.
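As a concrete sketch of that idea: instead of diffing the parsed contents, a checksum comparison of the raw file is usually enough. The file path and cache key below are assumptions:

```python
import hashlib

from django.core.cache import cache
from django.http import JsonResponse

WATCHED_FILE = "/data/feed.xlsx"   # path is a placeholder

def file_changed():
    """Compare a checksum of the file against the last one we stored."""
    with open(WATCHED_FILE, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    previous = cache.get("watched_file_digest")
    cache.set("watched_file_digest", digest, timeout=None)
    return previous is not None and previous != digest

def change_status(request):
    # The page polls this endpoint with Ajax (e.g. fetch() on a timer)
    # and re-renders its section whenever "changed" comes back true.
    return JsonResponse({"changed": file_changed()})
```

Polling via Ajax is the simplest route; if the refresh needs to be pushed instead of polled, the same `file_changed()` check can feed a WebSocket layer such as Django Channels.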

Framework selection for a new project?

Problem Context
We have a set of Excel reports that are generated from an Excel input provided by the user and then fed into SAS for further transformation. SAS pulls data from a Teradata database, and a lot of manipulation happens with the input data and the data pulled from Teradata. Finally, a dataset is generated that can either be sent to the client as a report or be used to populate a Tableau dashboard. The database is also being migrated from Teradata to Google Cloud (BigQuery EDW), as the Teradata pulls from SAS used to take almost 6-7 hours.
Problem Statement
Now we need to automate this whole process by creating a front end for the user to upload the input files; from there the process should trigger, and in the end the user should receive the Excel file or Tableau dashboard as an attachment in an email.
Can you suggest what technologies should be used in the front end and middle tier to make this process feasible in the least possible time, with Google Cloud Platform as the backend?
Can an R Shiny front end be a solution, given that we need to communicate with a Google Cloud backend?
I have received suggestions that Django would be a good framework to accomplish this task. What are your views on this?
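Whichever framework serves the upload form (Django can call the Google Cloud client libraries directly; R Shiny can reach the same services via packages like bigrquery and googleCloudStorageR), the middle tier boils down to staging the uploaded file and loading it into BigQuery. A Python sketch of that step; the bucket, dataset, and table names are placeholders:

```python
from google.cloud import bigquery, storage

def ingest_upload(file_obj, filename):
    # Stage the user's input file in Cloud Storage.
    bucket = storage.Client().bucket("report-inputs")
    bucket.blob(filename).upload_from_file(file_obj)

    # Load it into BigQuery for the downstream transformations.
    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,   # skip the header row
        autodetect=True,       # infer the schema from the file
    )
    uri = f"gs://report-inputs/{filename}"
    client.load_table_from_uri(
        uri, "mydataset.input_table", job_config=job_config
    ).result()   # block until the load job finishes
```

From there, a scheduled job or Cloud Function can run the transformations that SAS performed, export the result, and email it; the front-end choice mostly comes down to which language your team already maintains.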