Google Shared Drives Permissions in a tree structure - google-admin-sdk

For our business we are looking to find out which folders are shared with whom, so we can keep everything clean and organized.
I've been looking for some code that can help me with this, but the only option available at the moment is Gat+ from GatLabs, which isn't free, and to be honest I don't want to hand everything over to a third party.
Doing this manually isn't an option; we have more than 50 shared drive folders, each with subfolders.
I've looked in the Google Admin console and the Google developer documentation, but nowhere is there an option or any sample code to extract all of this information.
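For what it's worth, the Drive API v3 (rather than the Admin SDK) exposes enough to build this report yourself: drives.list enumerates the shared drives, files.list walks the folders in each drive, and permissions.list shows who each folder is shared with. A rough sketch in Python, assuming google-api-python-client and credentials for an account that can see the drives (a super admin can add useDomainAdminAccess=True to cover every shared drive in the domain); the structure and the fields requested are only illustrative:

```python
# Sketch: print every shared drive as an indented folder tree, with the
# principals each folder is shared with. Credentials handling is omitted.
from googleapiclient.discovery import build


def list_permissions(service, file_id):
    resp = service.permissions().list(
        fileId=file_id,
        supportsAllDrives=True,
        fields="permissions(type,role,emailAddress,domain)",
    ).execute()
    return resp.get("permissions", [])


def walk_folder(service, drive_id, folder_id, depth):
    query = (
        f"'{folder_id}' in parents and "
        "mimeType='application/vnd.google-apps.folder' and trashed=false"
    )
    page_token = None
    while True:
        resp = service.files().list(
            q=query,
            driveId=drive_id,
            corpora="drive",
            includeItemsFromAllDrives=True,
            supportsAllDrives=True,
            fields="nextPageToken, files(id,name)",
            pageToken=page_token,
        ).execute()
        for folder in resp.get("files", []):
            perms = list_permissions(service, folder["id"])
            shared_with = ", ".join(
                p.get("emailAddress") or p.get("domain") or p["type"] for p in perms
            )
            print("  " * depth + f"{folder['name']}  ->  {shared_with}")
            walk_folder(service, drive_id, folder["id"], depth + 1)
        page_token = resp.get("nextPageToken")
        if not page_token:
            break


def main(creds):
    service = build("drive", "v3", credentials=creds)
    for drive in service.drives().list(pageSize=100).execute().get("drives", []):
        print(drive["name"])
        # A shared drive's ID doubles as the ID of its root folder.
        walk_folder(service, drive["id"], drive["id"], depth=1)
```

Writing the same data to a CSV instead of printing it would make the audit easier to filter and share.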

Related

Retrieving data from AWS S3 too slow in Shiny app

I know that this question could mostly be answered generically for any web app, but because I am specifically using Shiny, I figured the answers here might be considerably more useful.
I have made a relatively complex app. The data is not complex, but the user interface is.
I am storing the data in S3 using the aws.s3 package, and have built my app using golem. Because most shiny apps are used to analyse or enter some data, they usually deal with a couple of datasets, and a relational database is very useful and fast for that type of app.
However, my app is quite UI/UX-heavy. Users can have their own/shared whiteboard space(s) where they drag items around. The coordinates of the items are stored in .rds files in my S3 bucket, per user. Users can also customise many aspects of the app just for themselves: font size, colours of the various experimental groups (it's a research app), and experimental visits that store PDF, .html and .rds files.
The stored .rds files can contain variables, lists, data.frames, reactiveValues, renderUI() objects, etc., so they vary widely.
As a result I have dozens of .rds files in a bucket, and every time the app loads, each of these files needs to be read one by one in order to recreate the environment appropriate for each user. The number of files/folders in each directory is also queried to know how many divs need to be generated for the user to click through their files, etc.
The range of objects stored is too wide for me to use a relational database, but my app is taking at least 40 seconds to load. It is also generally slow when submitting data, mostly because the data entered often modifies many UI elements that then need to be pushed to S3 again. Because I have no background in proper web development, I have no idea what the best way is to store user-related UX/UI elements and how to retrieve them seamlessly.
Could anyone point me to appropriate resources to learn more about this?
Am I doing it completely wrong? I honestly do not know how else to store and retrieve all these R objects.
Thank you in advance for your help with the above.
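One language-agnostic idea worth trying before restructuring everything: a load time like 40 seconds is usually dominated by the round trips of reading dozens of small objects one at a time, so either consolidate each user's state into a single object or fetch the objects concurrently. A sketch of the concurrent-fetch idea, shown in Python/boto3 purely for illustration (the same pattern applies with R's aws.s3); the bucket name and key prefix are placeholders:

```python
# Sketch: list a user's objects and download them in parallel instead of
# sequentially. Bucket and prefix are placeholders.
from concurrent.futures import ThreadPoolExecutor

import boto3

BUCKET = "my-shiny-app-data"   # placeholder bucket name
PREFIX = "users/user-123/"     # placeholder: one prefix per user

s3 = boto3.client("s3")


def fetch(key):
    return s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()


def load_user_state():
    keys = [
        obj["Key"]
        for page in s3.get_paginator("list_objects_v2").paginate(
            Bucket=BUCKET, Prefix=PREFIX
        )
        for obj in page.get("Contents", [])
    ]
    # Many small GETs in flight at once instead of one round trip at a time.
    with ThreadPoolExecutor(max_workers=16) as pool:
        return dict(zip(keys, pool.map(fetch, keys)))
```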

Is there a workaround to make sure a Collection in a shared workspace is not archived in Postman Free Team account?

Our team now has more than 25 members and I believe we have reached the shared history limit, so Postman is automatically archiving collections. It is OK to archive some old ones, but they are also archiving our Smoke Test collection, which we run from time to time, and some other developer collections that new team members use as a reference. Has anyone found a way to select which collections will remain unarchived?
I've searched the net for some official comment from Postman and came across this: Archived Items in Free Teams Account Cannot be Deleted. It seems the Postman team has no plans to add a delete button to clear out the history.
I also tried another workaround I saw, where you download the archived data and re-share the specific collection that you want shared. I was able to share two collections, but when I went to share the third one, the first two got archived and I was left with only the third one shared. It seems this also generates multiple environment instances, which makes it not really ideal since you have to clean up after each round of trial and error.
I just need to have at least 3 permanently shared collections.
Shared history and collections have separate limits (both 25 requests). Therefore, clearing out history won't affect your collections. Archived collections don't count towards this limit either, so deleting archived collections (as suggested in the GitHub thread) won't have any effect.
Other than moving to a paid plan to avoid archiving, you can "unshare" collections so that they don't get archived. To do so, you have to remove them from all workspaces except your personal workspace. If you go to "Share Collection", you'll be able to see all the workspaces the collection is in. If you still want to share a collection with other people, you can export it and give the file to them. The downside is that you'll have to do this manual export/import every time you make a change to the collection.
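If you do go the unshare-and-export route, the manual export step can at least be scripted against the Postman API; a minimal sketch, assuming a personal Postman API key and the collection's UID (both placeholders below):

```python
# Sketch: download a collection's JSON via the Postman API so it can be
# shared out-of-band. API key and collection UID are placeholders.
import json

import requests

API_KEY = "PMAK-your-key-here"          # placeholder Postman API key
COLLECTION_UID = "1234567-abcd-0000"    # placeholder collection UID

resp = requests.get(
    f"https://api.getpostman.com/collections/{COLLECTION_UID}",
    headers={"X-Api-Key": API_KEY},
    timeout=30,
)
resp.raise_for_status()

# Teammates can import this file into Postman directly.
with open("smoke-test.postman_collection.json", "w") as f:
    json.dump(resp.json()["collection"], f, indent=2)
```

They will still need a fresh export after every change, but at least this step can run on a schedule or in CI.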

Is there an implementation of a single instance blob store for Django?

I am new to Django, so I apologize if I missed something. I would like a library that gives me a single-instance data store for blob/binary data: a library that masks whether the files are stored in the database, the file system, or some backend like Amazon S3, with a single API that lets me add files and get back URLs to serve them. It would also be nice if the implementation supported some kind of migration: if a site starts out with blobs in the database, I'd like to be able to move those blobs to an S3 bucket behind the scenes later without changing how my application stores and serves the data.
An important sub-aspect of this is that the files have to be only shown to properly authorized users (i.e. just putting them in an open /media/ folder as files is not sufficient).
Perhaps I am asking too much, but I find this kind of service very useful in my applications. The main reason I am asking is that unless I find such a thing, I will wander off and build my own library; I just don't want to waste the time if this kind of thing already exists.
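Not a complete answer, but Django's built-in file-storage abstraction, typically combined with the django-storages package, covers much of this: application code talks to a Storage object, and swapping the configured backend (DEFAULT_FILE_STORAGE in older versions, the STORAGES setting in newer ones) moves the bytes from the local filesystem to S3 without touching that code. A minimal sketch; the helper names are just for illustration:

```python
# Sketch: store and serve blobs through Django's storage abstraction so the
# backing store (filesystem, S3 via django-storages, ...) is a settings choice.
from django.core.files.base import ContentFile
from django.core.files.storage import default_storage


def save_blob(name, data):
    # Returns the name actually used; the storage may rename to avoid clashes.
    return default_storage.save(name, ContentFile(data))


def blob_url(name):
    # With an S3 backend this can be a signed, expiring URL, which helps with
    # the "authorized users only" requirement; with local storage it is a
    # MEDIA_URL path that your own views must protect.
    return default_storage.url(name)
```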

What high-level architecture is needed to process and visualize fitness app data (from Apple Health, for example) using Google Cloud services?

I'm working on a project where I am tasked with using Google Cloud services to process and visualize fitness data. For example, I have exported some Apple Health data from my watch, and it is in .xml format. From a high level, I envision this .xml file starting off in object storage, being converted to .csv by a Cloud Function (triggered by the creation of the .xml object in storage), and being stored again in object storage (a different bucket). I then see these .csv files being processed by a Dataflow pipeline, which will reformat the data into the target schema I want it organized under. This pipeline will output the resulting .csv to BigQuery, which will then serve as a data source for Data Studio. I will then configure Data Studio to produce some simple reports that compare the health data to recommended values. I would also like this report to be accessible as a .pdf in object storage. Am I on the right track, or am I missing some key services to accomplish this?
Also, I'm new to posting on StackOverflow, so if this question is against the rules or not welcome, please let me know.
Any feedback is greatly appreciated, as I have not been able to bounce these ideas off of other experienced cloud architects/developers.
This question is currently off-topic by Stack Overflow's rules, as it does not contain a specific problem to resolve. See points 4-5.
As high-level advice, I do not see why this would not be possible with the services you mentioned, but you would need to implement it, try it on your side, and evaluate the features of each service in your workflow.
In terms of solution or architecture advice, that is generally a paid service, and you will most likely find little help here unless you have a specific problem to solve with those services. You might find some help elsewhere on the internet as well, e.g. Cloud Solutions, Built it on GCP, etc.
You might find this interesting to review as well, as it mimics your solution. Hope this helps.
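To make the first hop of the pipeline concrete, the Cloud Function step could look roughly like the sketch below: a storage-triggered function that parses the Apple Health export.xml (its samples are <Record> elements with attributes) and writes a flat CSV to a second bucket. The bucket name and output columns are assumptions, and a very large export may need streaming parsing or a direct read in Dataflow instead:

```python
# Sketch: 1st-gen Cloud Function triggered by object creation in the source
# bucket; converts an Apple Health export.xml to CSV in a second bucket.
import csv
import io
import xml.etree.ElementTree as ET

from google.cloud import storage

OUTPUT_BUCKET = "fitness-data-csv"  # placeholder destination bucket


def convert_health_export(event, context):
    if not event["name"].endswith(".xml"):
        return
    client = storage.Client()
    blob = client.bucket(event["bucket"]).blob(event["name"])
    root = ET.fromstring(blob.download_as_bytes())

    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["type", "startDate", "endDate", "value", "unit"])
    for record in root.iter("Record"):
        writer.writerow([
            record.get("type"),
            record.get("startDate"),
            record.get("endDate"),
            record.get("value"),
            record.get("unit"),
        ])

    out_name = event["name"].rsplit(".", 1)[0] + ".csv"
    client.bucket(OUTPUT_BUCKET).blob(out_name).upload_from_string(
        out.getvalue(), content_type="text/csv"
    )
```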

Where to store user file uploads?

In my compojure app, where should I store user upload files? Do I just make a user-upload dir in my project root and stick everything in there? Is there anything special I should do (classpath, permissions, etc)?
To properly answer your question, you need to think about the lifecycle of the uploaded files. I would start by answering questions such as:
how big are the files going to be?
what storage options will hold enough data to store all the uploads?
how about SLAs, redundancy and disaster avoidance?
how, and by whom, will the free space and health of the storage be monitored?
In general, the file system location is much less relevant than the block device sitting behind it: as long as your data is stored safely enough for your application, user-upload can be anywhere and be anything from a regular disk to an S3 bucket e.g. via s3fs-fuse.
Putting such a folder on your classpath sounds odd to me. It gives no essential benefit, as you will always need a configuration entry to state where files are stored and read from.
Permission-wise, your application will require at least write access to the upload storage (and most likely read access as well). Granting such permissions depends on the physical device you choose: if you opt for the local file system, as you suggest in your question, you need to make sure the Clojure app runs as a user with read/write permissions on that directory (e.g. granted via chmod), whereas with S3 you will need to configure API keys.
For anything other than a practice problem, I would suggest using a database such as Postgres or Datomic. This way, you get the reliability of a DB with real transactions, along with the ability to access the files across a network from any location.
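To illustrate that last suggestion, here is a minimal sketch of the database approach, written in Python/psycopg2 purely for illustration rather than Clojure; the table layout and connection handling are assumptions:

```python
# Sketch: store uploads as bytea rows so they get real transactions and are
# reachable from any app server. Schema and connection are placeholders.
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS uploads (
    id       SERIAL PRIMARY KEY,
    filename TEXT NOT NULL,
    content  BYTEA NOT NULL,
    uploaded TIMESTAMPTZ NOT NULL DEFAULT now()
);
"""


def save_upload(conn, filename, data):
    # Returns the new row's id; the caller serves the bytes back from a query.
    with conn, conn.cursor() as cur:
        cur.execute(DDL)
        cur.execute(
            "INSERT INTO uploads (filename, content) VALUES (%s, %s) RETURNING id",
            (filename, psycopg2.Binary(data)),
        )
        return cur.fetchone()[0]
```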