Is there a way to set a global expiry date on any shares from a corporate G-Drive?
I know you can do it when sharing a file individually, but is there a way to effectively make any share created more than a year ago expire?
I am building an application where users can upload their files and go to the payment page. After payment completion, I want to store the files uploaded by the user in the database. If the user does not complete the payment, then I do not want to save the files on the server or database.
Is there any way to do that?
Two things to unpack here.
First, it is not advised to store files in the database. It's better to use a storage service or the server's filesystem directly.
Second, files are usually uploaded and saved according to your strategy (on the server's filesystem, in the database, or in third-party storage), and then a cleanup happens if the user hasn't proceeded with the payment. You need to define the conditions for the cleanup: whether it's because the user has uploaded files and been inactive for a certain period of time, because they clicked a specific button, or a combination of both.
To trigger the cleanup, you have several possibilities:
When the files are uploaded, schedule a task (for instance with django-q) that runs, say, 1 hour later and deletes the files if the payment hasn't been completed by then.
Write a Django management command, triggered by a daily cron job, that deletes the files of users whose payment has been pending for more than 1 hour (see the sketch after this list).
Work with Django sessions: regularly scan for sessions that have been inactive for 1 hour and whose payment is still pending, assume those users will not complete the payment, and delete their uploaded files.
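A minimal sketch of the management-command approach, assuming a hypothetical UploadedFile model with file, uploaded_at and payment_completed fields (all names are illustrative, not from the original post):

```python
# myapp/management/commands/cleanup_unpaid_uploads.py
# UploadedFile, uploaded_at and payment_completed are placeholder names;
# adapt them to your actual schema.
from datetime import timedelta

from django.core.management.base import BaseCommand
from django.utils import timezone

from myapp.models import UploadedFile


class Command(BaseCommand):
    help = "Delete files whose payment has been pending for more than 1 hour"

    def handle(self, *args, **options):
        cutoff = timezone.now() - timedelta(hours=1)
        stale = UploadedFile.objects.filter(
            payment_completed=False,
            uploaded_at__lt=cutoff,
        )
        for upload in stale:
            upload.file.delete(save=False)  # remove the file from storage
            upload.delete()                 # remove the database row
```

You could then run it hourly from cron, e.g. `0 * * * * /path/to/venv/bin/python manage.py cleanup_unpaid_uploads` (paths are placeholders); the django-q variant would schedule roughly the same deletion logic as a one-off task at upload time.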
I am looking for an answer regarding how report data is stored in Power BI.
I have published 3 reports to the Power BI service (cloud):
Report1 with an Excel source
Report2 with an on-premises SQL Server source
Report3 with an Azure SQL source
Around 200 users in my organization will be accessing these reports. I want to understand:
The first time a particular report is accessed, will the data be fetched from the source and shown in the report, or will it be stored in some cloud location from which it is served to the report?
Suppose a user opens a report that was already viewed by another user. Will the data be fetched from the source again, or is there a concept of a cross-user shared cache?
Suppose a user opens the report a second time (for example, refreshes the web page after having already accessed it). Will the data be fetched again, or is there a concept of a shared cache?
Does the answer to any of the above change if I had used Power BI Report Server (on-premises) and deployed the report there?
With the service, you typically upload a PBIX, which contains the report pages and all of the underlying data. Unless you set up a data gateway to accommodate DirectQueries and/or scheduled refreshes, the cloud service does not access your original data sources at all. With a scheduled refresh, it only accesses the original data during the refresh. A DirectQuery connection does access a server "live" but has many limitations.
The data is fetched when you load it into your Power BI desktop application and then loaded into the cloud when you publish the report to a workspace. Once it's there, the data shown to the user is fetched from the cloud copy, not the original data source.
Same answer as above regarding where the data is fetched from (the cloud copy). I don't believe there is a shared cache between users; rather, each user has some temporary caching individually. This type of caching saves the calculation results (computed on the underlying data) that are needed to populate the report visuals.
Some caching is done temporarily, so if a user switches back to a slicer combination they chose previously, loading can be much quicker than for a new configuration, since the results are cached and don't need to be recomputed. As far as I understand, this kind of caching is short-lived and not shared among users. Remember, this type of cache is not the same as the underlying data in the cloud copy of the PBIX.
I've not used an on-premises server, but I would expect the behavior to be similar, with the exception that the service runs on the local server instead of a cloud server somewhere else.
The upshot is that traffic in the service is separated from the requests to the original source data (assuming no DirectQuery connections). Those original sources are only accessed during data refreshes, which are independent of end-user actions (under the same assumption).
After my users log in, the app makes too many requests to DynamoDB, and I am thinking about different ways to reduce the number of calls.
The app allows users to trigger certain alerts that get sent to other users. For instance: "Shipment received, come to the deck", "Shipment completed", etc.
These are the calls made:
Get company's software license expiration date.
Get the computer's location in the building (i.e. "Office A").
Get the kinds of alerts that can be triggered (i.e. "Shipment received, come to the deck", "Shipment completed", etc).
Get information about the user (i.e. the company teams the user belongs to, and the admin level the user has, which can be 0, 1, 2, or 3).
Potential solutions I have thought about:
Put the company's license expiration date as an attribute of each computer (This would reduce the number of queries by 1). However, if I need to update the company's license expiration date, then I need to update it for EVERY SINGLE computer I have in the system, which sounds impractical to me since I may have 200, 300 or perhaps even more computers in the database.
Add the company's license expiration date as an attribute of the alerts (This would reduce the number of queries by 1); which seems more reasonable because there are only about 15 different kinds of alerts, so if I need to change the license expiration date later on, it is not too bad.
Cache information on the user's device; however, I can't seem to find a good strategy to keep the information stored locally as updated as possible.
I still think these 3 options do not sound too good, so I am hoping someone can point me in the right direction. Is there a good way to reduce the number of calls? I am retrieving information about 4 different entities (license, computer, alert, user); should I keep those 4 calls after users log in?
Here are a few things that can be done with respect to each component.
Get information about the user
Keep it in a session store and update the store whenever the details change. Session stores are usually implemented using a cache like Redis.
Computer location
Keep it in a distributed cache like Redis and initialise it lazily. Whenever a write happens to a computer's location (rare, IMO), remove the entry from Redis using DynamoDB Streams and AWS Lambda (see the cache-aside sketch at the end of this answer).
Kind of alerts
Same as Computer location
License expiration date
If possible, don't let the license expiry date change (issue a new license in those cases, so that traceability is maintained) and cache the licence expiry forever. Or handle it the same way as the computer location.
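A minimal cache-aside sketch for the computer-location lookup, assuming a hypothetical Computers table keyed on computer_id and a reachable Redis instance (the table name, key name, and TTL are illustrative, not from the original post):

```python
import json

import boto3
import redis

# "Computers", "computer_id" and the TTL below are placeholder choices.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Computers")
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

CACHE_TTL_SECONDS = 24 * 3600  # locations change rarely, so a long TTL is fine


def get_computer_location(computer_id: str):
    """Cache-aside read: try Redis first, fall back to DynamoDB and populate the cache."""
    cache_key = f"computer:location:{computer_id}"
    cached = cache.get(cache_key)
    if cached is not None:
        return json.loads(cached)

    response = table.get_item(Key={"computer_id": computer_id})
    item = response.get("Item")
    if item is not None:
        cache.set(cache_key, json.dumps(item, default=str), ex=CACHE_TTL_SECONDS)
    return item
```

The Lambda attached to the DynamoDB stream would simply delete the corresponding `computer:location:<id>` key for each modified record, so the next read repopulates it lazily.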
I am working on a contact sync solution to keep the contacts in our app in sync with the Google contacts of the user.
Our code uses the PHP library for the latest version of the Google People API (v1).
Everything works fine for one week with each user, but after that week we get:
400 - Error "Sync token is expired. Clear local cache and retry call without the sync token".
My question now:
Is it intended behaviour that you have to clear all your cache after one week with no changes, or am I doing something wrong?
Is there any possibility to renew a syncToken if there were no changes?
I already checked the whole code to make sure that the newly received nextSyncToken is saved on our side and used for the next incremental sync request. It seems the new sync token is always the same as the one sent in the request, so it's clear why we get these errors once a sync token expires after exactly one week.
I also tried setting the option requestSyncToken to true for every list request, even when a syncToken is also set. No success; the sync token stays the same after each request with no changes.
Just in case someone is also facing this problem (syncToken expiration after one week without changes in the persons/contacts list):
Our solution was:
Save the creation date and time of a new syncToken each time you get one, together with the syncToken itself.
When you receive a syncToken during an incremental sync, compare it to the stored one. If it is a new one, overwrite the old token and its creation date/time.
Use a continuous process that checks each syncToken. If one is about one week old (to be safe we used 6 days), create a new syncToken (process described below). As the People API does not offer anything like the watch channels of the Calendar API, you would need some continuous process doing list calls at fixed intervals anyway for complete near-real-time synchronization, so you could possibly combine these tasks, depending on your solution for this problem.
Process for creating a new syncToken (also sketched in code at the end of this answer):
Do a new list request without providing a syncToken.
For additional safety, do some checks, such as comparing the total number of persons received with the total expected from your old/current data. And run this renewal process at a time of day when almost nobody makes changes, for example around 2 am.
Overwrite the old syncToken and date/time with the new one and the current date/time.
That's it.
But attention! You can still miss changes if your syncToken renewal process runs at exactly the time a change is made!
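A minimal sketch of this renewal flow, written against the Python Google API client rather than the PHP library used in the original post; load_sync_token and save_sync_token are hypothetical persistence helpers, and the 6-day threshold mirrors the steps above:

```python
# Sketch of the syncToken renewal flow described above.
# load_sync_token() / save_sync_token() are hypothetical persistence helpers.
from datetime import datetime, timedelta, timezone

RENEW_AFTER = timedelta(days=6)  # renew before the roughly one-week expiry


def renew_sync_token_if_needed(service, load_sync_token, save_sync_token):
    token, created_at = load_sync_token()
    if token and datetime.now(timezone.utc) - created_at < RENEW_AFTER:
        return token  # still fresh, keep using it

    # Full list request without a syncToken to obtain a fresh nextSyncToken.
    new_token = None
    page_token = None
    while True:
        response = service.people().connections().list(
            resourceName="people/me",
            personFields="names,emailAddresses",
            requestSyncToken=True,
            pageToken=page_token,
        ).execute()
        page_token = response.get("nextPageToken")
        if not page_token:
            new_token = response.get("nextSyncToken")  # returned on the last page
            break

    save_sync_token(new_token, datetime.now(timezone.utc))
    return new_token
```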
Create a dummy contact before the sync to get a new syncToken. After the sync, delete the dummy contact from both Google Contacts and your cache.
I've managed to create a bucket in my Amazon S3 account, and I've added a couple of files to it.
I've downloaded CloudBerry Explorer (the freeware edition), and I've uploaded a file and set it as private with an expiration date.
This is what my link looks like:
http://dja61b1p3y3bp.cloudfront.net/taller_parte1.flv?AWSAccessKeyId=AKIAIEZ23XILJFNS3ZVA&Expires=1379991642&Signature=GrBLzn13nkm8NiU6JXBj0jC0i%2F8%3D
The thing is that this is a "static" expiration date: the link expires one week from now.
I mean, if I have an online course and receive different students every week, and I want to keep using that file in the course, I would have to go and update that expiration date every single week.
Is there any way to configure the file to expire a week later, counting that period from the moment each individual user clicks?
How may I do that?
Thanks for your insight!
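For illustration only, a minimal sketch of generating the signed URL on the server at the moment each user clicks, so the one-week window starts per request instead of being fixed once. It assumes a CloudFront key pair and uses botocore's CloudFrontSigner; the key-pair ID and private-key path are placeholders:

```python
# Sketch: build a CloudFront signed URL at request time, so the one-week
# expiry counts from each user's click rather than from when the link was made.
# KEY_PAIR_ID and PRIVATE_KEY_PATH are placeholders.
from datetime import datetime, timedelta, timezone

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

KEY_PAIR_ID = "APKAEXAMPLEKEYID"             # placeholder CloudFront key-pair ID
PRIVATE_KEY_PATH = "cloudfront_private.pem"  # placeholder path to the private key


def rsa_signer(message: bytes) -> bytes:
    with open(PRIVATE_KEY_PATH, "rb") as key_file:
        private_key = serialization.load_pem_private_key(key_file.read(), password=None)
    return private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())


def signed_url_for(url: str) -> str:
    """Return a URL that expires one week after this call (i.e. after the click)."""
    signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)
    expires = datetime.now(timezone.utc) + timedelta(weeks=1)
    return signer.generate_presigned_url(url, date_less_than=expires)


# Example: call this from whatever request handler serves the course page.
print(signed_url_for("http://dja61b1p3y3bp.cloudfront.net/taller_parte1.flv"))
```

The idea is that the expiring link is never stored anywhere; it is recomputed for each click, so every student gets their own one-week window.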