I'm using the Directory API to fetch users.
Some archived users are returning suspended = true and others suspended = false. How can this happen? From my understanding, an archived user can't be suspended.
Moreover, when I look at my admin page, both of them are suspended (image below).
Can anyone explain why this is happening? And if it's normal, is there any risk if an archived user is not suspended?
If you open the image, you can see inside the red box that both users are suspended. Yes, for sure:

```
{
  "kind": "admin#directory#user",
  "id": "10901XXXXXX620",
  "etag": "\"SEQQBYC70u6XXXXNYw6b0a5EzY0mTMShjiZga8A/yP85WF6T0tk9a_pgQVEqRq9kHtY\"",
  "primaryEmail": "ad....#aaa.com",
  "name": {
    "givenName": "Aaaa",
    "familyName": "John",
    "fullName": "Aaaaa John"
  },
  "isAdmin": false,
  "isDelegatedAdmin": false,
  "lastLoginTime": "2022-01-10T20:35:25.000Z",
  "creationTime": "2020-10-15T22:40:55.000Z",
  "agreedToTerms": true,
  "suspended": false,
  "archived": true,
  "changePasswordAtNextLogin": false,
  "ipWhitelisted": false,
  "emails": [
    {
      "address": "ad....#aaa.com",
      "primary": true
    }
  ],
  "languages": [
    {
      "languageCode": "pt",
      "preference": "preferred"
    }
  ],
  "customerId": "C00pnlc1u",
  "orgUnitPath": "/Suspensos",
  "isMailboxSetup": true,
  "isEnrolledIn2Sv": true,
  "isEnforcedIn2Sv": true,
  "includeInGlobalAddressList": true,
  "thumbnailPhotoUrl": "https://www.google.com/s2/photos/private/AIbEiAIAAABDCPSAwvv50PWPfSILdmNhcmRfcGhvdG8qKDFhZWFiOTk4NzM5NDY1MjJlOWE4MmE0ODgxMzc3MjM4MzJiYzYyNDUwAUuoUxHJzf7midKhUvdRVmS3n2UE",
  "thumbnailPhotoEtag": "\"SEQQBYC70u6XQ2UUjmjNYw6b0a5EzY0mTMShjiZga8A/hU3SJUEhoSHtQtx1ZyG7nXFnWgw\"",
  "recoveryEmail": "aaaa#gmail.com"
}
```
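For reference, here is a minimal sketch of how such a listing can be fetched in Python with google-api-python-client, assuming a service account with domain-wide delegation; the key file name and admin address are hypothetical:

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Hypothetical key file and admin user; domain-wide delegation is assumed.
creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/admin.directory.user.readonly"],
).with_subject("admin@example.com")

directory = build("admin", "directory_v1", credentials=creds)

page_token = None
while True:
    resp = directory.users().list(
        customer="my_customer", pageToken=page_token
    ).execute()
    for user in resp.get("users", []):
        # "archived" and "suspended" are independent flags in the response.
        print(user["primaryEmail"], user.get("archived"), user.get("suspended"))
    page_token = resp.get("nextPageToken")
    if not page_token:
        break
```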
What you can see in the red box in the screenshots is just the organizational unit where the user is located in the Admin console; however, that is just a name for the OU and does not reflect the actual user status.
The user status can be seen below the user's profile picture as you can see in the following screenshot:
As you can see, the name of the OU is Test OU Suspended, but the user status is Active, so the name of the OU does not reflect the user status.
So in your case this means that the user was archived correctly but is not necessarily suspended. Now to answer your question:
Can anyone explain why this is happening? And if it's normal, is there any risk if an archived user is not suspended?
You may not need the user to be suspended, as it has already been archived. When you archive a user, it enters a partial-suspension state; according to the official documentation, this is what happens to the archived account:
- Can’t sign in to their Google Account, on any system. This includes Google Workspace services, such as Gmail, Google Calendar, and Drive.
- Don’t appear in the Global Address List. In user directory listings, the user appears with archived status. Learn about the Global Address List.
- Can be deleted or unarchived, but not suspended in the Admin console.
The documentation also mentions the following:
You can archive both active and suspended users. If you unarchive a user, they return to their previous state and regain access to all their previous data.
In conclusion, there is nothing wrong whether the user is suspended or not. If an archived user returns true in the suspended parameter when using the API, that value is just there to save the status the user had before being archived, so that if you decide to unarchive the user later on, it returns to that specific state.
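If you want to make that distinction explicit when processing API results, a small helper along these lines (a sketch based on the documented behavior above) may help:

```python
def effective_state(user: dict) -> str:
    """Interpret the archived/suspended flags of a Directory API user resource."""
    if user.get("archived"):
        # Per the documentation, "suspended" here records the pre-archive state.
        if user.get("suspended"):
            return "archived (previously suspended)"
        return "archived (previously active)"
    return "suspended" if user.get("suspended") else "active"

print(effective_state({"archived": True, "suspended": False}))
# -> archived (previously active)
```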
References:
How AU licensing works
I have been trying to increase the quota limit for multiple GCP resources, including Compute Engine and IP addresses, but I always get a popup saying "not eligible for quota increase". I found this issue happening with other users as well, but it was still unsolved for all of them. Just to clarify, the account I am running was part of the "GCP for Startup" program with billing enabled globally. I have added relevant screen snips here and here
I have researched and replicated this on my side. Basically, this is modifiable in the console by following these steps:
Go to Cloud Console > IAM & admin > Quotas page
Search the quota limit for your appropriate region
Submit the request with the new limit and save the Case IDs shared with you. You should also receive an email confirmation.
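If you want to check the current limits programmatically as well, here is a minimal sketch using the Compute Engine API (the project and region values are hypothetical, and application default credentials are assumed):

```python
import google.auth
from googleapiclient.discovery import build

# Application default credentials; project/region below are hypothetical.
creds, _ = google.auth.default()
compute = build("compute", "v1", credentials=creds)

region = compute.regions().get(project="my-project", region="us-central1").execute()
for quota in region.get("quotas", []):
    print(f'{quota["metric"]}: {quota["usage"]} / {quota["limit"]}')
```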
On my side, I could check the boxes and edit, and after some minutes I received an email with the confirmation. As per your images, I see that the boxes are grayed out and you are unable to edit the quotas; therefore you would need to contact the GCP sales team to inspect further.
You can reach them at **1 800-654-2533** from Monday to Friday, 6 AM-6 PM CST, or make use of the chat or request a callback via the contact link provided.
cheers,
I have embedded an Amazon QuickSight dashboard in my web application using amazon-quicksight-embedding-sdk (I followed https://learnquicksight.workshop.aws/en/dashboard-embedding.html).
The user session seems to last many hours as mentioned in https://docs.aws.amazon.com/quicksight/latest/APIReference/API_GetDashboardEmbedUrl.html
When I requested the embed URL directly from my web browser, I could see that it was valid for many hours.
But my web app will request a new embed URL when the user restarts it (by closing/reopening the tab/browser). Does that mean a new user session was created and billed?
Is it possible to store the embed URL and reuse it (as long as the user session lasts) in case the same user closes the tab/browser and opens the web app and the dashboard again (of course in the same browser)?
I tried to store the embedURL as a cookie named "embed_url". But calling amazon-quicksight-embedding-sdk.embedDashboard({url: embed_url}) resulted in
"Embedding failed because of invalid URL or authorization code. Both
of these must be valid and the authorization code must not be expired
for embedding to work."
I was sure the embed_url was still valid, because requesting it directly from the browser worked.
Which "authorization code" is mentioned in the above error message? What did I miss or is it actually not possible?
Besides the billing concern, I've noticed that the call to get the embedURL took time (more than 5 seconds, eu-central-1) while the embedding took less (3 seconds). I thought I could improve the dashboard loading time by reusing the obtained embedURL. Any comments about the timing? Is it normal, or did I do something wrong that made it so slow? My test dashboard has only 1 diagram with an unchanged dataset.
As per the QuickSight pricing page, if you're creating an embedded dashboard for a QuickSight "Reader", then you're paying $0.30 per 30-minute logged-in session for this Reader.
The validity of the session can be set in the SessionLifetimeInMinutes parameter of the GetDashboardEmbedUrl API, and has an upper bound of 600 minutes (10 hours).
As an example, suppose you set SessionLifetimeInMinutes to 600 mins for your Reader user. Also suppose that this user stayed logged in and uses the dashboard for 10 hours continuously, then that would equate to 20 sessions of usage (since the billing is in increments of 30-min chunks). At first glance it would seem that this would cause $0.30/session * 20 session-chunks = $6 to be billed.
However, as per the pricing page, there is an upper bound of $5.00 per month for every Reader. Which means that this Reader can never exceed $5 per month regardless of how many Quicksight sessions (of whatever duration) are created for them. So no matter how many times you call the GetDashboardEmbedUrl API for a given Reader, you're capped to $5/month for this user.
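For reference, here is a minimal boto3 sketch of the GetDashboardEmbedUrl call mentioned above (the account ID and dashboard ID are hypothetical, and the IAM identity type is assumed):

```python
import boto3

quicksight = boto3.client("quicksight", region_name="eu-central-1")

resp = quicksight.get_dashboard_embed_url(
    AwsAccountId="111122223333",       # hypothetical account ID
    DashboardId="my-dashboard-id",     # hypothetical dashboard ID
    IdentityType="IAM",
    SessionLifetimeInMinutes=600,      # upper bound: 600 minutes (10 hours)
)
embed_url = resp["EmbedUrl"]           # one-time use together with its auth code
```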
Also of use is what constitutes a Reader session (from the pricing page):
When does a Reader Session start and end?
A Reader Session starts with user-initiated action (e.g., login, dashboard load, page refresh, drill-down or filtering) and runs for next 30-minutes.
Keeping Amazon QuickSight open in a background browser window/tab does not result in active sessions until the Reader initiates action on page.
But my web app will request a new embed URL when the user restarts it (by closing/reopening the tab/browser). Does that mean a new user session was created and billed?
I'm not 100% sure about this, but yes I believe a refresh (or open/close) of the tab results in a new session for the same user.
A Reader Session starts with user-initiated action (e.g., login, dashboard load, page refresh, drill-down or filtering) and runs for next 30-minutes.
The above excerpt is from the pricing page. So it does seem that page refresh (and thus another call to GetDashboardEmbedUrl) will trigger a new session for the user.
Which "authorization code" is mentioned in the above error message?
The GetDashboardEmbedUrl API response is a JSON object that looks like this:
```
{
  "Status": 200,
  "EmbedUrl": "https://us-east-1.quicksight.aws.amazon.com/embed/f4147cd0d4d_BLAH_BLAH_...",
  "RequestId": "c15a7bad-629e-444a-b643-ff3142c9ae41"
}
```
If you look closer at the EmbedUrl, apart from the dashboard url itself, there are also these query-string parameters:
- isauthcode
- code
- identityprovider
- statePersistenceEnabled
- potentially other params too
The code parameter (embedded within the embedUrl) is the "authorization code" that you asked about.
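A quick way to see this is to inspect the query string of a returned EmbedUrl; here is a small sketch (the URL value is hypothetical):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical EmbedUrl value for illustration.
embed_url = "https://us-east-1.quicksight.aws.amazon.com/embed/abc?isauthcode=true&code=AYAB..."
params = parse_qs(urlparse(embed_url).query)
print(params["code"])        # the "authorization code" the error message refers to
print(params["isauthcode"])
```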
Is it possible to store the embed URL and reuse it (as long as the user session lasts) in case the same user closes the tab/browser and opens the web app and the dashboard again (of course in the same browser)?
No, that can't be done. As it says in the link you shared:
The following rules apply to the combination of URL and authorization code:
- They must be used together.
- They can be used one time only.
- They are valid for 5 minutes after you run this command.
So the embedURL and its associated auth code can only be used once, together. This makes sense, since it prevents MITM replay attacks, among other scenarios. I also actually tried to cache the response and then re-use the embedUrl in case of a cache hit, since this would improve the end-user experience. But this didn't work - a "replay" of the embedUrl is blocked by QuickSight, as mentioned in their doc.
Any comments about the timing?
This has been our experience also. The GetDashboardEmbedUrl REST API takes around 5-7 seconds (us-east-1) for our app and then the actual embedding takes another 3-5 seconds. Not great, but I don't see a way around this poor user experience as of now.
Using the 'q' parameter in Drive.Files.List, I can get all files a user can read/write or own. I would like to be able to use a regular expression in the query value, for example setting q to "not '.+#my-org.com' in writers".
Is such a query already supported?
Do I have another way (except invoking Drive.Permissions.List for each and every file in my Drive) to get this information?
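For reference, 'q' only matches exact values; a query like the one in this sketch works, but a regular expression does not (application default credentials and the address are assumptions):

```python
import google.auth
from googleapiclient.discovery import build

# Application default credentials are assumed; the address is hypothetical.
creds, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/drive.metadata.readonly"]
)
drive = build("drive", "v3", credentials=creds)

# Exact values only - a regex such as '.+#my-org.com' is rejected by the API.
resp = drive.files().list(
    q="'someone@my-org.com' in writers",
    fields="files(id, name)",
).execute()
for f in resp.get("files", []):
    print(f["id"], f["name"])
```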
It seems the only account-level Drive API is part of the Reports API (activities list). This API (and the Admin console > Audit > Drive section) is only supported with the unlimited license. I still haven't found the proper API to get the Drive state (list all files' metadata in the account, permissions, etc.); it seems the state can only be inferred by analyzing the relevant activity events, assuming the activity is not evicted after a predefined period of time.
My conclusion, at the moment, is that there is no "root" directory at the account level. "root" exists only with respect to the logged-in user.
I would be more than happy to be proved wrong.
Uri
I am trying to create a role within Sitecore which can publish content, but only within specific areas of the site. I've added the standard Sitecore\Client Publishing role to my role, but I can't see how to prevent the role from being able to publish all areas of the site. I've looked at the Security Editor and the Access Viewer, but setting the write access of the sections only seems to affect the ability to edit those sections and has no effect on the ability to publish them.
Workflow is the typical way this is handled. Giving roles access to approve (this could be called 'publish') content of certain sections of the content tree will be the best way to achieve what you are describing. Combine this with an auto-publish action to make it more user friendly.
One thing to keep in mind though using this method is referenced items (images from media library the content may be using for example). Take a look at the 'Publishing Spider' module on the shared source library http://trac.sitecore.net/PublishingSpider
EDIT: Update
I recently discovered this setting in the web.config: "Publishing.CheckSecurity". If set to true, this setting will only publish items if the user has read + write on the item and will only remove items from the web DB if the user has delete permissions.
I had a similar situation once, and I created roles per section which only had read and write to that section and nowhere else (let's say 'editor section 1'), and another role which only had publishing permission for that section (let's say 'publisher section 1'). Then I added the 'editor section 1' role to the 'publisher section 1' role, which gives you a role for publishing only a specific section.
You do not need multiple workflows; the same workflow with multiple roles can also achieve this goal.
The answer to this is to set Publishing.CheckSecurity to true.
You need to find this code inside web.config:
```
<!-- PUBLISHING SECURITY
     Check security rights when publishing?
     When CheckSecurity=true, Read rights are required for all source items. When it is
     determined that an item should be updated or created in the target database,
     Write right is required on the source item. If it is determined that the item
     should be deleted from target database, Delete right is required on the target item.
     In summary, only the Read, Write and Delete rights are used. All other rights are ignored.
     Default value: false
-->
<setting name="Publishing.CheckSecurity" value="false" />
```
Set the value="true"
But again you have to govern the security tightly, and assign user role properly. Failed to
do so you will experience buggy publishing.
Hope that will help
I am using VC++. I am trying to create a "front end" which will create a task and put it in Windows' native Task Scheduler. The task's action is invoking a backup app. Every task needs some privileges to execute the given program. I need to assign administrator privileges to this task. I can ensure that the front end is run only by an admin. Now I want to assign the current user's (admin) privileges to the task. From the digging I did on the internet/MSDN, the API provides the two options below (the 3rd option is my assumption):
1) Provide an account name and password for that task.
2) Use the flag TASK_FLAG_RUN_ONLY_IF_LOGGED_ON, give the administrator account name, and pass NULL as the password.
3) Single sign-on
Now the constraints:
1 -> It is not a good idea to make the client type the admin account name and password frequently.
2 -> The admin account name is not always the same (in XP it is possible to change it), so I can't provide a default admin account name.
3 -> I don't know how to achieve it. "Single sign-on" is something like: once you have logged in as admin, applications can get the current (logged-in) user's privileges.
Searching MSDN for this is like searching for a needle in a haystack. Somebody, please shed some light on the solution.
Maybe the LocalSystem account:
http://msdn.microsoft.com/en-us/library/ms684190(VS.85).aspx
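If the goal is just to run the backup with high privileges without storing any admin's credentials, one option is to schedule the task under the LocalSystem account. Here is a minimal sketch driving the schtasks command line from Python (the task name and program path are hypothetical; the Task Scheduler API offers the same from VC++):

```python
import subprocess

# Hypothetical task name and program path; /RU SYSTEM runs the task as
# LocalSystem, so no account name or password has to be supplied.
subprocess.run(
    [
        "schtasks", "/Create",
        "/TN", "NightlyBackup",
        "/TR", r"C:\Tools\backup.exe",
        "/SC", "DAILY",
        "/ST", "02:00",
        "/RU", "SYSTEM",
    ],
    check=True,
)
```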