Hi, I am trying to export the list of users that have signed up for an API key using Carbon, but I can't figure out where to do this. I have tried accessing the H2 database (the WSO2Carbon_DB as well as the API DB); however, there is no user information stored in these tables, or at least I can't find it.
When going through the APIM database directly, you may not be able to find it, because the tables are connected by constraints. But check the following tables, which contain the entries you need:
IDN_OAUTH_CONSUMER_APPS, IDN_OAUTH2_ACCESS_TOKEN, AM_SUBSCRIBER, AM_APPLICATION
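If it helps, here is a rough sketch of an export over JDBC (a sketch only: the DB path, credentials, and column names are from a typical APIM setup and may differ in your version):

    # Export API subscribers and their applications from the APIM DB.
    # Assumptions: jaydebeapi plus the H2 driver jar are available, and the
    # AM_* column names match your APIM schema -- verify before running.
    import csv
    import jaydebeapi

    conn = jaydebeapi.connect(
        "org.h2.Driver",
        "jdbc:h2:/path/to/repository/database/WSO2AM_DB",  # hypothetical DB path
        ["wso2carbon", "wso2carbon"],                      # default Carbon credentials
        "/path/to/h2.jar",
    )
    cur = conn.cursor()
    cur.execute("""
        SELECT s.USER_ID, s.EMAIL_ADDRESS, s.DATE_SUBSCRIBED, a.NAME AS APPLICATION
        FROM AM_SUBSCRIBER s
        JOIN AM_APPLICATION a ON a.SUBSCRIBER_ID = s.SUBSCRIBER_ID
    """)
    with open("api_subscribers.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["user", "email", "subscribed", "application"])
        writer.writerows(cur.fetchall())
    conn.close()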
I am fairly new to Power BI, and I want to create a Power BI dashboard and share it externally, i.e. with different people (users) who do not have a Power BI license. However, the data each user may see is restricted based on their rights.
How can I best set this up? I'm thinking of either using a login system or using different URLs per user. The possible solutions I have found so far are 1) embedding Power BI content using Azure, 2) using row-level security (RLS), or 3) creating different URLs based on a column value and (somehow) restricting each one per user.
My apologies for this entry-level question; any tips are very welcome.
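From what I've read so far, options 1 and 2 seem to be combined in practice: you embed the report ("app owns data") and pass each viewer's identity so RLS filters the rows. A rough sketch of the embed-token call as I understand it (the Power BI REST GenerateToken endpoint; the IDs, the "CustomerRole" role, and the Azure AD token acquisition are placeholders):

    # Sketch: generate an embed token carrying an RLS effective identity.
    # group_id/report_id/dataset_id and the AAD token are placeholders.
    import requests

    aad_token = "..."                                   # acquire via MSAL / a service principal
    group_id, report_id, dataset_id = "...", "...", "..."

    url = (f"https://api.powerbi.com/v1.0/myorg/groups/{group_id}"
           f"/reports/{report_id}/GenerateToken")
    body = {
        "accessLevel": "View",
        "identities": [{
            "username": "customer-a@example.com",       # the viewer's identity
            "roles": ["CustomerRole"],                  # RLS role defined in the dataset
            "datasets": [dataset_id],
        }],
    }
    resp = requests.post(url, json=body,
                         headers={"Authorization": f"Bearer {aad_token}"})
    embed_token = resp.json()["token"]                  # hand this to the JS embed client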
I'm currently trying to build a web app that would allow many users to query an external API (for various reasons, I cannot retrieve all the data served by this API at regular intervals to populate my PostgreSQL database). I've read several things about ACID and MVCC, but I'm still not sure there won't be any problem if several users are populating/reading my PostgreSQL database at the very same time. So I'm asking for advice here (I'm very new to this field)!
Let's say my users query the external API to retrieve articles. They make their search via a form, the back end receives it, queries the API, populates the database, then queries the database to return some data to the front end.
Would it be okay to simply create a single table to store the articles returned by the API when users query it?
Shall I rather store the articles returned by the API and associate each of them with the user that requested it (the Article model would contain a foreign key to a User model)?
Or shall I give each user a table (data isolation would be good but that sounds very inefficient)?
Thanks for your help!
Would it be okay to simply create a single table to store the articles returned by the API when users query it?
Yes. If the articles have unique keys (a DOI?), you could use INSERT ... ON CONFLICT DO NOTHING to handle the (presumably very rare) case where an article is requested by two people nearly simultaneously.
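A minimal sketch with psycopg2 (table and column names are illustrative, not from your schema):

    # Insert-if-absent: two near-simultaneous requests for the same article
    # cannot create duplicate rows. Table/column names are illustrative.
    import psycopg2

    article = {"doi": "10.1000/example", "title": "Some article"}  # from the external API

    conn = psycopg2.connect("dbname=app")
    with conn, conn.cursor() as cur:        # commits on success, rolls back on error
        cur.execute(
            """
            INSERT INTO articles (doi, title, fetched_at)
            VALUES (%s, %s, now())
            ON CONFLICT (doi) DO NOTHING
            """,
            (article["doi"], article["title"]),
        )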
Shall I rather store the articles returned by the API and associate each of them with the user that requested it (the Article model would contain a foreign key to a User model)?
Do you want to? Is there a reason to? Do you care who requested each article? It sounds like you anticipate storing only the first person to request each article, and not every request?
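If you do want every request on record, a join table is the usual shape; a quick Django-style sketch (model and field names are illustrative):

    # Articles are stored once; each (user, article) lookup adds a log row.
    from django.conf import settings
    from django.db import models

    class Article(models.Model):
        doi = models.CharField(max_length=255, unique=True)   # natural key from the API
        title = models.TextField()

    class ArticleRequest(models.Model):
        article = models.ForeignKey(Article, on_delete=models.CASCADE)
        user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
        requested_at = models.DateTimeField(auto_now_add=True)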
Or shall I give each user a table (data isolation would be good but that sounds very inefficient)?
Right, you would be hitting the API a lot more often (assuming some large fraction of articles are requested more than once) and storing a lot of duplicates. It might not even solve the problem, if one person hits "submit" twice in a row, or has multiple tabs open, or writes a bot to hit your service in parallel.
I have a requirement to share data stored in Azure Cosmos DB with customers. All customers' data is stored in various collections. We would like to give each customer read access only to the data that belongs to them, so that a customer can access only his/her own data from the collections and not anyone else's.
As a next step, we would like to integrate this customer-specific data with Power BI.
Could anyone suggest way(s) to give customers read access specific to their own data in Azure Cosmos DB?
From what I was able to find, below are the possible options, though none seems to fulfill the requirement:
Master keys - would give access to the whole database, which is not desired.
Creating admin users - which is not desired either.
Resource tokens - these are time-bound and have a limited lifetime.
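For reference, the resource-token flow I found looks roughly like this (a sketch against the azure-cosmos Python SDK; the names, the partition-key scoping, and the '_token' property should be checked against the current SDK docs):

    # Sketch: issue a per-customer, read-only resource token scoped to one
    # partition. All names and the '_token' access are assumptions to verify.
    from azure.cosmos import CosmosClient

    ACCOUNT_URL = "https://<account>.documents.azure.com:443/"
    MASTER_KEY = "..."                                       # server side only

    client = CosmosClient(ACCOUNT_URL, credential=MASTER_KEY)
    db = client.get_database_client("appdb")

    user = db.create_user({"id": "customer-42"})             # one Cosmos user per customer
    perm = user.create_permission({
        "id": "read-own-data",
        "permissionMode": "Read",
        "resource": "dbs/appdb/colls/orders",                # container the token covers
        "resourcePartitionKey": ["customer-42"],             # restrict to their partition
    })
    token = perm.properties["_token"]                        # the time-bound resource token

The token does expire (hours at most), so the usual pattern seems to be a small middle-tier service that re-issues tokens on demand.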
It would be helpful if anyone could add further insight on how to fulfill this requirement.
Thanks in advance!
We use both Elasticsearch and Postgres in our stack. My lead believes that it is better to perform the text search on Elasticsearch and get the ids of the hits, then fire an "IN" query on those ids and filter on Postgres.
E.g.:
    a = es.search(body=params)["hits"]["hits"]              # text-search hits from Elasticsearch
    ids = [hit["_id"] for hit in a]                         # primary keys (pks) of the matches
    b = Dummy.objects.filter(id__in=ids).filter(**extra)    # "IN" query plus further filters
I believe that this is unnecessary when we can do everything on Elasticsearch. Which approach would be faster?
EDIT: More details.
This is basically a file library for users. We will store the files in our S3 bucket. We plan to store the file details (filename, s3prefix, format, metadata) in ES. There are other fields like date_modified, date_created, owner, and file_size, and we want the user to be able to sort and filter on these params. Since this is a new product, there will only be a few users, 10-20 max, and the number of entries should be in the thousands. But these numbers can grow pretty fast.
If you can store all the data relevant to the response in Elasticsearch, then the second hop to Postgres is redundant: ES can hold and retrieve all the right docs and apply all the filters needed.
If, on the other hand, the Postgres DB contains information that is an additional layer on top of the data in ES, then that second query to Postgres is needed. A reason for such a setup can be that the data in ES is fairly 'static' while the Postgres data is dynamic, with many changes and updates.
So both options would work; it all depends on the data and how it is stored in the two DBs. The second query to Postgres will introduce additional latency, but on a good setup that might be very small and not noticed by users.
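For the ES-only option, all the fields mentioned in the question (owner, date_created, file_size, ...) can be filtered and sorted in one query; a rough sketch with the elasticsearch-py client (the "files" index name and the query values are assumptions):

    # One Elasticsearch query that searches, filters, and sorts -- no second
    # hop to Postgres. Index/field values below are placeholders.
    from elasticsearch import Elasticsearch

    es = Elasticsearch()
    body = {
        "query": {
            "bool": {
                "must": [{"match": {"filename": "report"}}],            # text search
                "filter": [
                    {"term": {"owner": "alice"}},                       # exact-match filter
                    {"range": {"date_created": {"gte": "2020-01-01"}}}, # date filter
                ],
            }
        },
        "sort": [{"file_size": {"order": "desc"}}],
    }
    res = es.search(index="files", body=body)
    hits = [h["_source"] for h in res["hits"]["hits"]]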
I have integrated the MWS API for my store. The issue is that I am not able to get the list of all products which I have submitted via feeds, together with the products already available in my Amazon store account.
I have tried all of the MWS APIs; none of them returns all products.
The ListMatchingProducts API requires a query parameter, but for a full product listing no query parameter should be required.
So which API should be used to list all products, and how?
To retrieve all of your products without input parameters, you can use the Reports API to request an inventory report, an active listings report, or any of the listings report types here: http://docs.developer.amazonservices.com/en_US/reports/Reports_ReportType.html#ReportTypeCategories__ListingsReports
You can call the Reports API just like the Products API, but there are extra steps involved. You first request the report using the RequestReport operation, which gives you back a ReportRequestId. Poll GetReportRequestList with that Id until the report is done; the response will then include a GeneratedReportId. Take that Id and call the GetReport operation to get the report once it's available. If you need more than a report and want to work with the data in some other way, you can write a routine in whatever language you're using to parse out the data in memory.
Have you seen the client libraries? They do most of the work already; just plug in your keys. https://developer.amazonservices.com/gp/mws/api.html/188-4747010-1589520?ie=UTF8&group=bde&section=reports&version=latest
Basically, there is no specific API call that lists the products available in your store, but you can get your products using the Reports API (see the ReportType enumeration):
http://docs.developer.amazonservices.com/en_US/reports/Reports_ReportType.html#ReportTypeCategories__ListingsReports
There are multiple steps involved in working with reports. Here are the steps to get the product listing (a sketch of the full flow follows the list):
1-RequestReport
2-GetReportRequestList (includes the GeneratedReportId when done)
3-GetReport
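A sketch of that flow with the community python-amazon-mws package (the package choice, report type, and response parsing are assumptions; the raw API follows the same three calls):

    # Request a listings report, poll until it is generated, then download it.
    # Credentials are placeholders; verify the report type fits your needs.
    import time
    from mws import mws

    ACCESS_KEY, SECRET_KEY, SELLER_ID = "...", "...", "..."
    reports = mws.Reports(ACCESS_KEY, SECRET_KEY, SELLER_ID)

    # 1 - RequestReport
    req = reports.request_report(report_type="_GET_MERCHANT_LISTINGS_ALL_DATA_")
    request_id = req.parsed.ReportRequestInfo.ReportRequestId

    # 2 - GetReportRequestList (poll until ReportProcessingStatus is _DONE_)
    report_id = None
    while report_id is None:
        status = reports.get_report_request_list(requestids=[request_id])
        info = status.parsed.ReportRequestInfo
        if info.ReportProcessingStatus == "_DONE_":
            report_id = info.GeneratedReportId
        else:
            time.sleep(60)

    # 3 - GetReport
    report = reports.get_report(report_id=report_id)
    print(report.original)                     # tab-delimited listing data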
Have you tried the MWS Scratchpad? This one is for the UK marketplace:
https://mws.amazonservices.co.uk/scratchpad/index.html