My team just got Postman Cloud licences. Everything is fine for them: they can share collections, environments, etc., but when I try to do it, two messages pop up one after another and the collection isn't shared. I tried every combination of sharing settings (share with individuals, share with the whole team, view/edit permissions).
The messages are:
"Collection has been shared"
"There was an error while sharing the collection"
What can I do?
Regards,
Remus
Related
The AWS console allows one connected session per browser instance. This is bothersome when one is frequently changing between accounts.
How can I have multiple AWS console sessions active at the same time (and be able to easily distinguish between them)?
If I understand correctly, there is a way to do this. I currently handle 5-9 AWS accounts concurrently. If you use Firefox, there is an official add-on from Mozilla: https://addons.mozilla.org/en-GB/firefox/addon/multi-account-containers/
Source Link - https://github.com/mozilla/multi-account-containers#readme
It's really good; you can add as many containers as you want under one browser window.
Also, if you want to log in to the CLI with multiple profiles, you can use AWS CLI named profiles: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html
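As a quick illustration (the profile names and keys below are placeholders), named profiles live in the shared credentials file and are selected per command with --profile, or for the whole shell session with AWS_PROFILE:

    # ~/.aws/credentials -- one section per account (example values only)
    [dev]
    aws_access_key_id     = AKIAEXAMPLEDEV
    aws_secret_access_key = <dev secret key>

    [prod]
    aws_access_key_id     = AKIAEXAMPLEPROD
    aws_secret_access_key = <prod secret key>

    # pick the profile per command ...
    aws sts get-caller-identity --profile dev
    aws s3 ls --profile prod

    # ... or once for the whole shell session
    export AWS_PROFILE=dev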
Use Chrome's "people" feature to segregate AWS profiles. Each "person" in Chrome is a completely separate browser context that shares nothing with the others.
There's a great plugin that makes managing accounts/roles easier:
https://github.com/tilfin/aws-extend-switch-roles
It lets one control which account/role combinations are visible in the console's account/role chooser.
But you're still limited to one login per browser context.
That's where chrome's "people" facility comes in.
One can create a different "person" for every account/role combination.
This means separate and distinct login sessions.
The combination of the above plugin and distinct browser people/contexts for each account role combination allows one to map each "person" to a set of role(s) they are expected to use.
Given that each person will lose the context they have if they switch roles, I tend to create only 1 or 2 roles in the plugin config for each person.
So if I want a new account/role combination, I create a new person, install the plugin, and set up the plugin to know only about that account/role combination.
This lets you have as many concurrent AWS console sessions as you can keep straight in your head. The "people" feature also lets you pick an identifier icon for each person so you can see at a glance which session(s) you have open.
This shows four active Chrome Canary sessions open, all logged in to different accounts.
Note the circled identifier icons.
e.g.:
"dev" is a bug
"prod" variants are balls, mnemonic "game day"
etc...
You can also assign a color to each account/role with the plugin. I found that it helps to match the color of the person to that of the account/role, so you can tell at a glance, or when minimized, which it is.
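For reference, if I remember the plugin's configuration format correctly (the account IDs, role names and colors below are made up), each target role gets an INI-style section and an optional color:

    [profile dev]
    aws_account_id = 111111111111
    role_name      = Developer
    color          = 00cc66

    [profile prod-readonly]
    aws_account_id = 222222222222
    role_name      = ReadOnly
    color          = cc0000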
Finally, the reason to use Canary is that I want my main "open link" behavior not to use any of these AWS-specific browser instances.
This ensures that all my links open in my actual Chrome instance, and Canary is reserved for the proliferation of AWS sessions.
I've seen so many people futz around with opening/closing/re-logging-in/etc that I felt I had to post this as a Q&A.
There's a great plugin that allows you to manage multiple AWS accounts/roles: https://github.com/tilfin/aws-extend-switch-roles It lets one control which account/role combinations are visible in the console's account/role chooser.
You can handle multiple AWS accounts concurrently in certain browsers. If you use Firefox, there is an official add-on from Mozilla.
https://addons.mozilla.org/en-GB/firefox/addon/multi-account-containers/
I'm looking for help with GCP billing. I know we can get cost information based on the service and the project; however, is it possible to get it based on the access email ID? I'm planning to give access to my colleagues, and I want to know how much each one's access costs, and against which service.
Something like: Date, Email ID, Service, Cost
Likewise, for any given project, how can we tell whose access is costing us so much?
We are running ~30 sandbox projects internally, each allocated to a specific person who can test and run his/her stuff on GCP.
I strongly suggest you create isolated workspaces (projects) for your colleagues so they don't accidentally delete or update other people's services. You will get a separate billing report for each project as well.
I am also setting up a billing alert for each of my colleagues so they get an early notification if they leave something running on their testbench.
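As a rough sketch of how that can be scripted (the billing account ID, project name and amounts are placeholders, and the exact gcloud flags can vary between versions):

    # one budget per sandbox project, warning at 50% and 90% of 100 USD
    gcloud billing budgets create \
      --billing-account=012345-ABCDEF-678901 \
      --display-name="sandbox-alice-budget" \
      --budget-amount=100USD \
      --filter-projects=projects/sandbox-alice \
      --threshold-rule=percent=0.5 \
      --threshold-rule=percent=0.9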
There are three ways I think you could do that kind of cost segregation; I will number them in order of complexity.
1. Cloud Billing export. For this one, the best practice is to segregate your resources and users by labels. As administrator, you can ask the users to assign a label to any resource they create; e.g., if they create a new VM instance, you will then be able to filter the exported table by that field and build whatever reports you want (your GCP billing dashboard will also show these label segregations). See the sketch after this list.
2. Use the Billing API to curl the information you need directly; in the request you can ask for fields such as SKU, user, date, and description.
3. Usage reports. This solution is more in the G Suite scope, and I can't vouch that it will work as the documentation says, but you can take a look at it. There is an option to get "usage reports"; these can be generated from G Suite for any resource below it, GCP included, if you already have an organization.
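To make option 1 concrete: once the billing export to BigQuery is enabled, costs can be grouped by a label such as "owner". This is only a sketch; the dataset/table name and the label key are assumptions, but it yields roughly the Date / owner / Service / Cost shape asked for above:

    # Sketch: assumes the BigQuery billing export is enabled and that users
    # tag their resources with an "owner" label; the table name is a placeholder.
    from google.cloud import bigquery

    client = bigquery.Client()
    query = """
        SELECT
          DATE(usage_start_time) AS usage_date,
          (SELECT value FROM UNNEST(labels) WHERE key = 'owner') AS owner,
          service.description AS service,
          SUM(cost) AS total_cost
        FROM `my-project.billing_export.gcp_billing_export_v1_XXXXXX`
        GROUP BY usage_date, owner, service
        ORDER BY usage_date, owner
    """
    for row in client.query(query).result():
        print(row.usage_date, row.owner, row.service, row.total_cost)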
I am building a Django app where there are several "user spaces" or "client domains". I was wondering how I could prevent each client from accessing the others' data. So far I have come up with several options:
Distribute one project per client with their own database
Keep one database and one Django instance running (two options):
a) Add a top-level table with some ID to be referenced in each model. But then how do I restrict someone from impersonating someone else's ID and accessing their data? (See the sketch after this list.)
b) Build some authorization layer on top of requests, using a header or a query parameter, to help discriminate which data the user can access (but I don't really know how to do that).
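A minimal sketch of option (a), assuming a Client table and that every authenticated user belongs to exactly one client (all model and field names here are illustrative):

    # models.py -- every row carries the client it belongs to
    from django.conf import settings
    from django.db import models

    class Client(models.Model):
        name = models.CharField(max_length=100)

    class Membership(models.Model):
        user = models.OneToOneField(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
        client = models.ForeignKey(Client, on_delete=models.CASCADE)

    class Invoice(models.Model):
        client = models.ForeignKey(Client, on_delete=models.CASCADE)
        amount = models.DecimalField(max_digits=10, decimal_places=2)

    # views.py -- never trust a client id coming from a header or query param;
    # resolve it server-side from the authenticated user instead
    def invoices_for(request):
        client = request.user.membership.client
        return Invoice.objects.filter(client=client)

Because the client is derived from request.user rather than from anything the caller sends, a forged ID never reaches the query, which is what prevents the impersonation described in (a).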
What is the state of the art for multiple clients using the same Django app? Are my solutions appropriate, and if so, how do I implement them?
I have seen few related posts/articles on the internet, so I'm resorting to asking here.
Thanks in advance !
Consider the following microservices for an online store project:
Users Service keeps account data about the store's users (including first name, last name, email address, etc.)
Purchase Service keeps track of details about users' purchases.
Each service provides a UI for viewing and managing its relevant entities.
The Purchase Service index page lists purchases. Each purchase item should have the following fields:
id, full name of purchasing user, purchased item title and price.
Furthermore, as part of the index page, I'd like to have a search box to let the store manager search purchases by purchasing user name.
It is not clear to me how to get back data which the Purchase Service does not hold - for example: a user's full name.
The problem gets worse when trying to do more complicated things like search purchases by purchasing user name.
I figured that I can obviously solve this by syncing users between the two services by broadcasting some sort of event on user creation (and saving only the relevant user properties on the Purchase Service end). That's far from ideal from my perspective. How do you deal with this when you have millions of users? Would you create millions of records in each service that consumes user data?
Another obvious option is exposing an API at the Users Service end which brings back user details based on given IDs. That means that on every page load in the Purchase Service, I'll have to make a call to the Users Service in order to get the right user names. Not ideal, but I can live with it.
What about implementing a purchase search based on user name? Well, I can always expose another API endpoint at the Users Service end which receives the query term, performs a text search over user names in the Users Service, and then returns all user details which match the criteria. At the Purchase Service, map the relevant IDs back to the right names and show them on the page. This approach is not ideal either.
Am I missing something? Is there another approach for implementing the above? Maybe the fact that I'm facing this issue is sort of a code smell? would love to hear other solutions.
This seems to be a very common and central question when moving into microservices. I wish there was a good answer for that :-)
About the suggested pattern already mentioned here, I would use the term Data Denormalization rather than Polyglot Persistence, as it doesn't necessarily need to involve different persistence technologies. The point is that each service handles its own data. And yes, you have data duplication, and you usually need some kind of event bus to share data across services.
There's another option, which is a sort of a take on the first - making the search itself as a separate service.
So in your example, you have the Users service for managing users. The Purchases service manages purchases. Each handles its own data and only the data it needs (so, for instance, the Purchases service doesn't really need the user name, only the ID). And you have a third service - the Search Service - that consumes data produced by the other services and creates a search "view" from the combined data.
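As a sketch of that event-driven denormalization (the event shapes and field names are placeholders), the Purchase or Search service subscribes to user events and keeps only the fields it needs:

    # Sketch: a consumer keeping a local, denormalized copy of user names.
    import json

    local_users = {}  # stand-in for the service's own table or index

    def handle_user_event(raw_message: bytes) -> None:
        event = json.loads(raw_message)
        if event["type"] in ("UserCreated", "UserUpdated"):
            local_users[event["user_id"]] = event["full_name"]
        elif event["type"] == "UserDeleted":
            local_users.pop(event["user_id"], None)

    # Purchases can then be listed without calling the Users service at all:
    def purchase_row(purchase: dict) -> dict:
        return {
            "id": purchase["id"],
            "buyer": local_users.get(purchase["user_id"], "<unknown>"),
            "title": purchase["title"],
            "price": purchase["price"],
        }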
It's totally fine to keep appropriate data in different databases; it's called Polyglot Persistence. Yes, you would want to keep user data and data about purchases separately and use a message queue for syncing. Millions of users seems fine to me; that's a scalability issue, not a design issue ;-)
In the case of search, you probably want to search on more than just the username, right? So, if you use a message queue to update data between services, you can also easily route this data to Elasticsearch, for example. And from Elasticsearch's perspective it doesn't really matter which field to index - username or product title.
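For example, with the Python Elasticsearch client (the index name and document shape are assumptions, and the exact client API differs a bit between versions), each purchase is indexed together with the already-denormalized buyer name, and the search box queries that single index:

    # Sketch: index a combined purchase+user document, then search it by name.
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    es.index(index="purchases", id="p-1", document={
        "purchase_id": "p-1",
        "buyer_name": "Jane Doe",   # denormalized from the Users service
        "title": "Blue Widget",
        "price": 19.90,
    })

    # "search purchases by purchasing user name"
    hits = es.search(index="purchases", query={"match": {"buyer_name": "jane"}})
    for hit in hits["hits"]["hits"]:
        print(hit["_source"]["title"], hit["_source"]["price"])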
I usually use both approaches. Sometimes I have another service which sits on top of x other services and combines the data. I don't really like this approach because it causes dependencies and coupling between services. So, in general, within my last projects we tried to stick to polyglot persistence.
Also consider that if you need x sub-requests over HTTP to combine data in some kind of middleware service, it will lead to higher latency. We always try to cut down the number of requests for one task and handle everything that is possible through asynchronous queues (especially data sync).
If you conceptualize modules as the owners and controllers of the data they work on, then your model must also communicate that data out of that module to others. In contrast, the modules in a manufacturing process have access to change data without possessing and controlling it.
Microservices is an architecture for distributed processing, like most code, where modules pass the data around to work on it. From classic articles by Harvard Business Review and McKinsey on the subject of owning members of a supply chain, I identified complexities arising from this model and wrote an article teaching programmers what they need to know: http://www.powersemantics.com/p.html
Manufacturing is an architecture for integrated processing, where modules work on the data without passing it around from point to point. This can be accomplished by having modules configured to access the same memory, files or database tables. My architecture shows how to accomplish this on memory via reference properties.
When you consider "exposing an API at the Users Service end which brings back user details based on given ids", be aware that this creates what HBR calls "irreversible" complexity, which I've dubbed centralization complexity. Don't build A->B (distributed) systems, because you can't decentralize them later after failing to separate requirements. Requirements in production processes represent user instructions, and centralized modules only enable you to change the wrong users' processes. In other words, centralized modules don't document user groups or distinguish them from derived-product users.
We have several teams working on different work packages involving the same objects on a single development server. My question: how can we manage this kind of situation without wasting the teams' time? To elaborate: I have two teams working on the Account BC with different change orders but the same release date, and I want the work to be done in parallel. What are the best ways to handle this situation? My only answer so far is that one team has to wait, which is not acceptable. Does anyone have a solution for handling this situation?
Have both teams develop with an offline copy of the Accounts BC - sharing the object between themselves as a SIF file. Merge the two streams together with the Tools archive import function.
Create a new object manager for the server whose SRF points to one other than siebel_sia. Create as many users and mobile clients as there are developers. Give them the extracted database under their client names. Have one team work on the main object manager (_enu) and the other work on (_custom), bouncing individual object managers as the development cycle continues.