Two clarifications needed for ObjectID fields:
1) Are ObjectIDs globally unique across a subscription or across all of Rally's subscriptions?
2) Are the ObjectIDs of built-in Rally objects constant and exactly the same for all subscriptions? For example, in one of my workspaces, to get the allowed ScheduleState values for a UserStory, I have to hit this endpoint:
/AttributeDefinition/-41562/AllowedValues
where -41562 is the ObjectID. Can I assume that every other subscription uses -41562 for the ObjectID in this URL to get the valid Schedule States?
1) ObjectIDs are unique per stack, so all of the ObjectIDs on the SaaS stack (rally1.rallydev.com) are unique.
2) ObjectIDs that are negative, like the ScheduleState one mentioned above, will be the same across workspaces. However, things like custom fields and PortfolioItem types and attributes will have unique ObjectIDs across different workspaces.
Are you wanting to cache these values for performance reasons, or what is it you're looking to do with them?
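For reference, a minimal sketch of calling that endpoint on the SaaS stack. The v2.0 WSAPI base path, the ZSESSIONID API-key header, and the QueryResult/Results/StringValue response shape are assumptions to verify against your subscription:

```python
# Minimal sketch: fetch the allowed ScheduleState values from the SaaS stack.
# ASSUMPTIONS: the v2.0 WSAPI base path, an API key sent via the ZSESSIONID
# header, and the usual QueryResult/Results/StringValue response shape --
# verify all three against your subscription.
import requests

BASE = "https://rally1.rallydev.com/slm/webservice/v2.0"

def allowed_schedule_states(api_key):
    url = BASE + "/attributedefinition/-41562/AllowedValues"
    resp = requests.get(url, headers={"ZSESSIONID": api_key})
    resp.raise_for_status()
    results = resp.json()["QueryResult"]["Results"]
    return [value["StringValue"] for value in results]
```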
I have an application being built with AWS AppSync whose primary focus is sending telemetry data from a mobile application. I am stuck on how to partition and structure the DynamoDB tables for this, as the users of the application belong to different organizations; within those organizations there will be admins who are able to view the data specific to their organization.
OrganizationA
-->Admin # View all the telemetry data
---->User # Send the telemetry data from their mobile application
Based on some research from these resources (Link 1, Link 2), the advised manner is to create tables for individual periods, i.e., a table for every day of telemetry readings.
Example (not sure what pk is in this example):
The way in which I am planning to separate the users with AWS Cognito is by attaching custom attributes when the user signs up, such as Organization and Role (Admin or User), as per this answer, and then using a Pre-Signup Lambda Trigger.
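For what it's worth, a sketch of what attaching those attributes at sign-up could look like. The client ID and the custom: attribute names are placeholders, and the attributes must first be defined on the user pool:

```python
# Sketch: signing a user up with custom Organization / Role attributes.
# ASSUMPTIONS: the pool's client ID and the "custom:" attribute names are
# placeholders; both attributes must already be defined on the user pool.
import boto3

cognito = boto3.client("cognito-idp")

def sign_up_user(client_id, username, password, organization, role):
    cognito.sign_up(
        ClientId=client_id,
        Username=username,
        Password=password,
        UserAttributes=[
            {"Name": "custom:organization", "Value": organization},
            {"Name": "custom:role", "Value": role},  # "Admin" or "User"
        ],
    )
```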
How should I achieve this?
Since you really don't need users from one organization to read data from another organization, and since for all your access patterns you will always know the organization ID, that attribute should be a factor in partitioning: either at the table level or at the partition-key level.
Then you have to determine whether you can simply use the organization ID as a partition key, or whether you need to partition further -- say, by concatenating the organization ID and the hour value for each sample. This will depend on the amount of data you expect each organization to generate in a given day. The tradeoff is more granular partitioning versus the cost of querying for data.
If organizations generate small amounts of data each day (say, a few events an hour), then just use the organization ID as the partition key. Otherwise, partition the data further.
In all of the above, the sort key should probably be the timestamp of the events, with second or millisecond precision depending on your needs. That way your queries can retrieve ordered time-series data.
Keep in mind that when you make queries, you may need to execute multiple queries and stitch the results together in your application, since the requested range may span multiple partitions, or even multiple tables.
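A minimal sketch of the more granular scheme, assuming a per-day table, a pk of the form "orgId#hour", and a millisecond ts sort key (all names are illustrative):

```python
# Sketch of the more granular scheme: partition key = "<orgId>#<hour>",
# sort key = millisecond timestamp. Table and attribute names are
# assumptions for illustration.
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("telemetry-2019-06-01")

def query_org_hour(org_id, hour, start_ms, end_ms):
    """Fetch one organization's events for one hour, time-ordered."""
    items = []
    kwargs = {
        "KeyConditionExpression": Key("pk").eq(f"{org_id}#{hour}")
        & Key("ts").between(start_ms, end_ms)
    }
    while True:
        page = table.query(**kwargs)
        items.extend(page["Items"])
        # A range spanning several partitions/tables needs one such query
        # per partition, with the results stitched together by the caller.
        if "LastEvaluatedKey" not in page:
            return items
        kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]
```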
I'm currently trying to build a web app that would allow many users to query an external API (I cannot retrieve all the data served by this API at regular intervals to populate my PostgreSQL database, for various reasons). I've read several things about ACID and MVCC, but I'm still not sure there won't be any problems if several users are populating/reading my PostgreSQL database at the very same time. So here I'm asking for advice (I'm very new to this field)!
Let's say my users query the external API to retrieve articles. They make their search via a form; the back end gets it, queries the API, populates the database, and then queries the database to return some data to the front end.
Would it be okay to simply create a single table to store the articles returned by the API when users query it?
Shall I rather store the articles returned by the API and associate each of them with the user that requested it (the Article model would contain a foreign key to a User model)?
Or shall I give each user their own table (data isolation would be good, but that sounds very inefficient)?
Thanks for your help!
Would it be okay to simply create a single table to store the articles returned by the API when users query it?
Yes. If the articles have unique keys (a DOI?), you could use INSERT ... ON CONFLICT DO NOTHING to handle the (presumably very rare) case where an article is requested by two people nearly simultaneously.
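A minimal sketch of that guard via psycopg2, assuming an articles table with a UNIQUE constraint on doi (all names are illustrative):

```python
# Sketch: idempotent insert keyed on the article's DOI, so two users
# requesting the same article at once cannot create a duplicate row.
# ASSUMPTIONS: table/column names are illustrative; "doi" must carry
# a UNIQUE constraint for ON CONFLICT to apply.
import psycopg2

def store_article(conn, doi, title, body):
    with conn, conn.cursor() as cur:  # commits on success, rolls back on error
        cur.execute(
            """
            INSERT INTO articles (doi, title, body)
            VALUES (%s, %s, %s)
            ON CONFLICT (doi) DO NOTHING
            """,
            (doi, title, body),
        )
```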
Shall I rather store the articles returned by the API and associate each of them with the user that requested it (the Article model would contain a foreign key to a User model)?
Do you want to? Is there a reason to? Do you care who requested each article? It sounds like you are anticipating storing only the first person to request each article, and not every request.
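If you do care about every request, one option is a separate join table rather than a foreign key on the article itself; a sketch with illustrative names:

```python
# Sketch: recording every request in a join table instead of a foreign key
# on the article row. ASSUMPTION: all table/column names are illustrative.
CREATE_ARTICLE_REQUESTS = """
CREATE TABLE IF NOT EXISTS article_requests (
    user_id      integer     NOT NULL REFERENCES users(id),
    article_id   integer     NOT NULL REFERENCES articles(id),
    requested_at timestamptz NOT NULL DEFAULT now(),
    PRIMARY KEY (user_id, article_id, requested_at)
);
"""

def record_request(conn, user_id, article_id):
    with conn, conn.cursor() as cur:
        cur.execute(
            "INSERT INTO article_requests (user_id, article_id) VALUES (%s, %s)",
            (user_id, article_id),
        )
```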
Or shall I give each user their own table (data isolation would be good, but that sounds very inefficient)?
Right: you would be hitting the API a lot more often (assuming some large fraction of articles are requested more than once) and storing a lot of duplicates. It might not even solve the problem if one person hits "submit" twice in a row, or has multiple tabs open, or writes a bot to hit your service in parallel.
I would like to know if it is possible to make multiple DynamoDB requests using only one DynamoDB resolver in AppSync.
Or is the only/best way to do more complicated processing to use a Lambda function?
Practically, no. You cannot even query multiple indexes in a single resolver definition for one query.
However, if you want to use that structure for joining multiple DynamoDB tables, you can attach resolvers not to the query entry but to the field you want to relate to other fields.
I had a similar issue relating users to another table containing the posts, and I got past it by attaching a resolver to the Posts field of the User type.
This issue refers to a similar problem and is quite helpful for this kind of case: https://github.com/awslabs/aws-mobile-appsync-sdk-js/issues/17
If that is not your case, you can elaborate on the question; after all, I may just be guessing at your purpose for relating tables.
Have you looked at batch resolvers with AWS AppSync? https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-dynamodb-batch.html
This will allow you to write to one or more tables in a single request, and also allow you to do multiple write/read/delete operations in a single request.
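To get a feel for the semantics, here is the same multi-table batching expressed with boto3 rather than an AppSync resolver mapping template (table names and key shapes are assumptions):

```python
# The underlying DynamoDB capability that batch resolvers expose, shown
# with boto3 instead of a resolver mapping template.
# ASSUMPTIONS: "Users" and "Posts" tables keyed on a string "id".
import boto3

dynamodb = boto3.client("dynamodb")

def fetch_user_and_post(user_id, post_id):
    # One request, two tables.
    resp = dynamodb.batch_get_item(
        RequestItems={
            "Users": {"Keys": [{"id": {"S": user_id}}]},
            "Posts": {"Keys": [{"id": {"S": post_id}}]},
        }
    )
    return resp["Responses"]
```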
You can do it with pipeline resolvers:
https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-pipeline-resolvers.html
I just started using LoopBack, and I am stuck with ACLs. My database has relations like so:
Users have many tests and tests have many users (many-to-many; in LoopBack I am using hasManyThrough).
Each test has several sections (one-to-many).
Each section has several questions (one-to-many).
Now, I want to get all sections that a user has, or all questions that a user has. I know that using $owner requires a belongsTo in the respective model, but in my case that is not possible.
Is there any way to achieve this without having to completely write my own queries?
Unfortunately the $owner role doesn't work as a filter but as access control on endpoints when an instance ID is specified; basically, it only works when you perform a findById, not when you perform a find.
Example:
GET /api/tests/ does nothing: the current user sees ALL the tests; no filtering is performed.
GET /api/tests/{id} checks that the currently logged-in userId corresponds to the userId in the test you are trying to retrieve. If the userIds match, the user can view this particular test; if they do not match, you get an AUTHORIZATION_REQUIRED or ACCESS_DENIED error (I can't remember which).
As I just wrote in this question, you might want to look at creating a Mixin.
I have been working on a Django back end whose main use case is the capability to store a given set of pictures with their related tags.
The current design foresees dedicated RESTful APIs for creating a new set, adding a picture to a given set, and associating tags with a given set; this results in distinct client calls.
For instance:
BEGIN the "create new set" transaction
create a new set and receive the set ID
upload the first picture of the set
upload the second picture of the set (and so on, depending on the total number of pictures...)
Add the tags related to this newly added set
END the transaction
How can I commit/roll back such a transaction, knowing that it is split across different HTTP requests?
Am I having a design issue here? Shall I favor a single cumulative HTTP request approach?
Please take into account that such a back end is to be used with mobile devices, which might suffer from temporary signal loss.
Any advice is welcome.
UPDATE:
Would it be convenient to use model versioning packages such as django-revisions to solve the issue?
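For comparison, a minimal sketch of the single cumulative-request approach, where one multipart POST carries all pictures and tags and the whole thing commits or rolls back atomically (PictureSet, Picture, and Tag are hypothetical models):

```python
# Sketch of the single cumulative-request alternative: one multipart POST
# carrying all pictures and tags, committed atomically server-side.
# ASSUMPTIONS: PictureSet, Picture, and Tag are hypothetical models;
# this helper would be called from a view handling the POST.
from django.db import transaction

def create_set(request):
    with transaction.atomic():  # all-or-nothing: rolls back on any error
        picture_set = PictureSet.objects.create(owner=request.user)
        for upload in request.FILES.getlist("pictures"):
            Picture.objects.create(set=picture_set, image=upload)
        for name in request.POST.getlist("tags"):
            tag, _ = Tag.objects.get_or_create(name=name)
            picture_set.tags.add(tag)
    return picture_set
```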