Amazon QuickSight embedded dashboard - how to cache user session in my webapp (billing and timing concern)

I have embedded an Amazon QuickSight dashboard in my web application using the amazon-quicksight-embedding-sdk (following https://learnquicksight.workshop.aws/en/dashboard-embedding.html).
The user session seems to last many hours as mentioned in https://docs.aws.amazon.com/quicksight/latest/APIReference/API_GetDashboardEmbedUrl.html
When I requested the embed URL directly from my web browser, I could see that it was valid for many hours.
But my web app requests a new embed URL whenever the user restarts it (by closing and reopening the tab or browser). Does that mean a new user session is created and billed?
Is it possible to store the embed URL and reuse it (as long as the user session lasts) for the case where the same user closes the tab/browser and then opens the web app and the dashboard again (in the same browser, of course)?
I tried storing the embed URL in a cookie named "embed_url". But calling amazon-quicksight-embedding-sdk.embedDashboard({url: embed_url}) resulted in:
"Embedding failed because of invalid URL or authorization code. Both
of these must be valid and the authorization code must not be expired
for embedding to work."
I was sure the embed_url was still valid, because requesting it directly in the browser still worked.
Which "authorization code" is mentioned in the above error message? What did I miss, or is this actually not possible?
Besides the billing concern, I've noticed that the call to get the embed URL takes a long time (more than 5 seconds, eu-central-1) while the embedding itself takes less (about 3 seconds). I thought I could improve the dashboard loading time by reusing the embed URL I had already obtained. Any comments about the timing? Is it normal, or did I do something wrong that made it so slow? My test dashboard has only one visual with an unchanged dataset.

As per the QuickSight pricing page, if you're creating an embedded dashboard for a QuickSight "Reader", then you're paying $0.30 per 30-minute logged-in session for that Reader.
The validity of the session can be set in the SessionLifetimeInMinutes parameter of the GetDashboardEmbedUrl API, and has an upper bound of 600 minutes (10 hours).
As an example, suppose you set SessionLifetimeInMinutes to 600 mins for your Reader user. Also suppose that this user stayed logged in and uses the dashboard for 10 hours continuously, then that would equate to 20 sessions of usage (since the billing is in increments of 30-min chunks). At first glance it would seem that this would cause $0.30/session * 20 session-chunks = $6 to be billed.
However, as per the pricing page, there is an upper bound of $5.00 per month for every Reader, which means this Reader can never be billed more than $5 per month regardless of how many QuickSight sessions (of whatever duration) are created for them. So no matter how many times you call the GetDashboardEmbedUrl API for a given Reader, you're capped at $5/month for this user.
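To make the arithmetic concrete, here is a minimal sketch of the billing rule as described above (the $0.30 per 30-minute chunk and the $5.00 monthly cap come from the pricing page; the function itself is just illustrative):

    import math

    def monthly_reader_cost(session_minutes_per_month: int) -> float:
        # Illustrative only: $0.30 per started 30-minute chunk,
        # capped at $5.00 per Reader per month.
        chunks = math.ceil(session_minutes_per_month / 30)
        return min(chunks * 0.30, 5.00)

    print(monthly_reader_cost(600))  # 10 hours = 20 chunks -> capped at 5.0, not 6.0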
Also useful, on what constitutes a Reader session (from the pricing page):
When does a Reader Session start and end?
A Reader Session starts with user-initiated action (e.g., login, dashboard load, page refresh, drill-down or filtering) and runs for next 30-minutes.
Keeping Amazon QuickSight open in a background browser window/tab does not result in active sessions until the Reader initiates action on page.
But my web app requests a new embed URL whenever the user restarts it (by closing and reopening the tab or browser). Does that mean a new user session is created and billed?
I'm not 100% sure about this, but yes, I believe a refresh (or close/reopen) of the tab results in a new session for the same user.
A Reader Session starts with user-initiated action (e.g., login, dashboard load, page refresh, drill-down or filtering) and runs for next 30-minutes.
The above excerpt is from the pricing page. So it does seem that a page refresh (and thus another call to GetDashboardEmbedUrl) will trigger a new session for the user.
Which "authorization code" is mentioned in the above error message?
The GetDashboardEmbedUrl API response is a JSON object that looks like this:
{
    "Status": 200,
    "EmbedUrl": "https://us-east-1.quicksight.aws.amazon.com/embed/f4147cd0d4d_BLAH_BLAH_...",
    "RequestId": "c15a7bad-629e-444a-b643-ff3142c9ae41"
}
If you look closer at the EmbedUrl, apart from the dashboard URL itself, there are also these query-string parameters:
isauthcode
code
identityprovider
statePersistenceEnabled
potentially other params too
The code parameter (embedded within the embedUrl) is the "authorization code" that you asked about.
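For reference, here is a minimal server-side sketch of requesting the URL and pulling out that code parameter (using boto3; the account ID, dashboard ID, and user ARN below are placeholders):

    from urllib.parse import parse_qs, urlparse

    import boto3

    quicksight = boto3.client("quicksight", region_name="eu-central-1")

    response = quicksight.get_dashboard_embed_url(
        AwsAccountId="123456789012",    # placeholder
        DashboardId="my-dashboard-id",  # placeholder
        IdentityType="QUICKSIGHT",
        UserArn="arn:aws:quicksight:eu-central-1:123456789012:user/default/my-user",
        SessionLifetimeInMinutes=600,
    )

    embed_url = response["EmbedUrl"]
    params = parse_qs(urlparse(embed_url).query)
    print(params.get("code"))        # the one-time authorization code
    print(params.get("isauthcode"))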
Is it possible to store the embed URL and to reuse it (as long as the user session lasts) for the case the same user closes the tab/browser and open the web app and the dashboard again (of course in the same browser)?
No, that can't be done. As it says in the link you shared:
The following rules apply to the combination of URL and authorization code:
- They must be used together.
- They can be used one time only.
- They are valid for 5 minutes after you run this command.
So the embed URL and its associated auth code can only be used once, together. That makes sense, since it prevents MITM replay attacks, among other scenarios. I also actually tried to cache the response and then reuse the embedUrl on a cache hit, since this would improve the end-user experience. But this didn't work - a "replay" of the embedUrl is blocked by QuickSight, as mentioned in their docs.
Any comments about the timing?
This has been our experience also. The GetDashboardEmbedUrl REST API takes around 5-7 seconds (us-east-1) for our app, and then the actual embedding takes another 3-5 seconds. Not great, but I don't see a way around this poor user experience as of now.

Related

How to handle background jobs in "Cloud Run" when new instance is created immediately?

I have a FastAPI project in Cloud Run and it has some background jobs inside it. (Not heavy stuff)
However, when new instances are created by Cloud Run (due to the number of requests, etc.), every instance runs the background job concurrently.
For example:
I have a task that creates some invoices for customers in the background, and if three instances are created at once, three invoices will be created.
I researched "FOR UPDATE" usage in PostgreSQL etc. It seems like I can solve this by modifying my database, but I wonder if it can be solved on the Cloud side.
I don't want to limit the max number of instances to 1.
What would you do in this situation?
Thank you for your time.
If you can potentially have N instances of a job (because you don't want to set the max limit to 1), you need to implement your jobs in an idempotent way. Broadly speaking, you have a few ways to achieve idempotency:
by enforcing a business constraint.
by storing an idempotency key.
by using the Etag HTTP response header.
For example, Stripe lets you define an idempotency key for all of your API requests. Stripe stores this key on its servers, and when you make a POST request with the same payload of a previous one, Stripe returns you the same result. POST requests are not idempotent, but using this "trick" they become idempotent.
Stripe's idempotency works by saving the resulting status code and body of the first request made for any given idempotency key, regardless of whether it succeeded or failed. Subsequent requests with the same key return the same result, including 500 errors.
https://stripe.com/docs/api/idempotent_requests
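As a sketch of the idempotency-key approach applied to the invoice example (the invoices table and its columns are made up, and a UNIQUE constraint on (customer_id, billing_period) is assumed; psycopg2 shown, but any PostgreSQL driver works):

    import psycopg2

    def create_invoice_once(conn, customer_id: str, billing_period: str) -> bool:
        # The UNIQUE constraint acts as the idempotency key: the first
        # instance's INSERT wins, and concurrent instances no-op.
        with conn, conn.cursor() as cur:
            cur.execute(
                """
                INSERT INTO invoices (customer_id, billing_period)
                VALUES (%s, %s)
                ON CONFLICT (customer_id, billing_period) DO NOTHING
                """,
                (customer_id, billing_period),
            )
            return cur.rowcount == 1  # True only for the instance that inserted

Whichever instance wins the INSERT proceeds to build the invoice; the others see False and skip the work.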
Tip: you could expand your question by clarifying how these background tasks are created, and where they run.

Not able to submit request for quota increase in GCP for any resource available

I have been trying to increase the quota limit for multiple GCP resources, including Compute Engine and IP addresses, but I always get a popup saying "not eligible for quota increase". I found this issue happening with other users as well, but it was still unsolved for all of them. Just to clarify, the account I am running was part of the "GCP for Startup" program with billing enabled globally. I have added relevant screen snips here and here.
I have researched and replicated this on my side. Basically this is modifiable in the console by following these steps:
Go to Cloud Console > IAM & admin > Quotas page
Search the quota limit for your appropriate region
Submit the request with the new limit and save the Case IDs shared with you. You should also receive an email confirmation.
On my side, I could check the boxes and edit, and after some minutes I received an email with the confirmation. As per your images, I see that the boxes are grayed out and you are unable to edit the quotas; therefore, you would need to contact the GCP sales team to inspect further.
You can reach them at **1 800-654-2533** from Monday to Friday, 6AM-6PM CST, or make use of the chat, or request a call back via the contact link provided.
cheers,

Should I store failed login attempts in AWS Cognito or Dynamo DB?

I have a requirement to build a basic "3 failed login attempts and your account gets locked" functionality. The project uses AWS Cognito for Authentication, and the Cognito PreAuth and PostAuth triggers to run a Lambda function look like they will help here.
So the basic flow is to increment a counter in the PreAuth Lambda, check it and block login there, or reset the counter in the PostAuth Lambda (so successful logins don't end up locking the user out). Essentially it boils down to:
PreAuth Lambda:

    if failed-login-count > LIMIT:
        block login
    else:
        increment failed-login-count

PostAuth Lambda:

    reset failed-login-count to zero
At the moment I am using a dedicated DynamoDB table to store the failed-login-count for a given user. This seems to work fine for now.
Then I figured it'd be neater to use a custom attribute in Cognito (using CognitoIdentityServiceProvider.adminUpdateUserAttributes) so I could throw away the DynamoDB table.
However reading https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-dg.pdf the section titled "Configuring User Pool Attributes" states:
Attributes are pieces of information that help you identify individual users, such as name, email, and phone number. Not all information about your users should be stored in attributes. For example, user data that changes frequently, such as usage statistics or game scores, should be kept in a separate data store, such as Amazon Cognito Sync or Amazon DynamoDB.
Given that the counter will change on every single login attempt, the docs would seem to indicate I shouldn't do this...
But can anyone tell me why? Or if there would be some negative consequence of doing so?
As far as I can see, Cognito billing is purely based on storage (i.e. number of users), and not operations, whereas Dynamo charges for read/write/storage.
Could it simply be AWS not wanting people to abuse Cognito as a storage mechanism? Or am I being daft?
We are dealing with a similar problem, and the main reason we decided to store extra attributes in a DB is that Cognito has quotas for all actions, and "AdminUpdateUserAttributes" is limited to 25 requests per second.
More information here:
https://docs.aws.amazon.com/cognito/latest/developerguide/limits.html
So if you have a pool with 100k users or more, it can create a bottleneck if you want to update a Cognito user's record on every login, etc.
Cognito UserAttributes are meant to store information about the users. This information can then be read from the client using the AWS Cognito SDK, or just by decoding the idToken on the client-side. Every custom attribute you add will be visible on the client-side.
Other downsides of custom attributes:
- You only have 25 values to set.
- They cannot be removed or changed once added to the user pool.
I have personally used custom attributes and the interface to manipulate them is not excellent. But that is just a personal thought.
If you want to store this information and not depend on DynamoDB, you can use Amazon Cognito Sync. Besides the service, it offers a client with great features that you can incorporate into your app.
AWS DynamoDB appears to be your best option; it is commonly used for such use cases. Some of the benefits of using it (see the sketch after this list):
- You can store a separate record for each login attempt with as much info as you want, such as IP address, location, user-agent, etc. You can also add a datetime that the pre-auth Lambda can use to query by time range, for example failed attempts within the last 30 minutes.
- You don't need to manage table cleanup, because you can set a TTL on DynamoDB records so that they are deleted automatically after a specified time.
- You can also archive items to S3.
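As a rough sketch of what the pre-auth trigger could look like on top of such a table (the table name, key schema, limit, and window below are all made up for illustration; boto3 assumed):

    import time

    import boto3
    from boto3.dynamodb.conditions import Key

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("failed-login-attempts")  # hypothetical table
    LIMIT = 3
    WINDOW_SECONDS = 30 * 60  # only count attempts from the last 30 minutes

    def lambda_handler(event, context):
        # Cognito pre-authentication trigger (sketch).
        username = event["userName"]
        now = int(time.time())

        # Count recent attempt records for this user; items past their TTL
        # are expired by DynamoDB itself, so there is nothing to clean up.
        recent = table.query(
            KeyConditionExpression=Key("username").eq(username)
            & Key("attempted_at").gte(now - WINDOW_SECONDS)
        )
        if recent["Count"] >= LIMIT:
            # Raising makes Cognito reject the sign-in attempt.
            raise Exception("Account temporarily locked after too many failed attempts.")

        # Record this attempt; the post-auth trigger would delete the user's
        # records again after a successful login.
        table.put_item(Item={
            "username": username,
            "attempted_at": now,
            "ttl": now + WINDOW_SECONDS,  # TTL attribute configured on the table
        })
        return event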

Find files accessible outside of my organization

In Drive.Files.List I can, using the 'q' parameter, get all files a user can read/write or own. I would like to be able to use a regular expression in the query value, for example setting q to "not '.+@my-org.com' in writers".
Is such a query already supported?
Do I have another way (except invoking Drive.Permissions.List for each and every file in my Drive) to get this information?
It seems the only account-level Drive API is part of the Reports API (the Activities list). This API (and the admin console > Audit > Drive section) is only supported in the unlimited license. I still haven't found a proper API to get the Drive state (list all file metadata in the account, permissions, etc.); it seems the state can only be inferred by analyzing the relevant activity events, assuming the activity has not been evicted after a predefined period of time.
My conclusion, at the moment, is that there is no "root" directory at the account level. "root" is only with respect to the logged in user.
I would be more than happy to be proved wrong.
Uri

A way to bypass the per-ip limit retrieving profile picture?

My app downloads all the user's friends' pictures.
All the requests are of this kind:
https://graph.facebook.com/<friend id>/picture?type=small
But, after a certain limit is reached, instead of the picture I get:
{"error""message":"(#4) Application request limit reached","type":"OAuthException"}}
Actually, the only way I have found to prevent this is to change the server IP (manually).
Isn't there a better way?
For the record:
The limit is related to the Graph API only, and the graph.facebook.com/<user>/picture URL is a Graph API call that returns a redirect.
So, to avoid the daily limit, simply fetch all the image URLs via FQL, like:
SELECT uid, pic_small, pic_big, pic, pic_square FROM user WHERE uid = me() or IN (SELECT uid2 FROM friend WHERE uid1=me())
These are the direct URLs to the images, e.g.:
http://profile.ak.fbcdn.net/hprofile-ak-snc4/275716_1546085527_622218197_q.jpg
so don't store them, since they continuously change.
If it's needed for an online app, a better way is not to download those images but to use the online version. There are a couple of reasons for doing so:
- Users change pictures (some frequently); do you need an updated version?
- Facebook's servers are probably faster than yours, and friends' pictures are probably already cached in your users' browsers.
Update:
Since the limit you are hitting is the Graph API call limit, not image retrieval, another solution that comes to mind is using the user's friends connection in the Graph API and specifying picture in the fields argument, e.g. https://graph.facebook.com/me/friends?fields=picture. This will return direct URLs for friends' pictures, so you can make just one call to get all the info needed to download the images for every user...
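A minimal sketch of that single-call approach (the access token is a placeholder, and the exact response shape may vary by Graph API version):

    import requests

    ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder

    resp = requests.get(
        "https://graph.facebook.com/me/friends",
        params={"fields": "picture", "access_token": ACCESS_TOKEN},
    )
    resp.raise_for_status()

    for friend in resp.json().get("data", []):
        # Each entry carries a direct CDN picture URL; these URLs change
        # over time, so fetch them fresh rather than storing them.
        print(friend)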