RallyDev - Requesting List of Iterations for a User's DefaultProject

Using the RallyDev Web Services API v2.0, I would like to request the iterations for a user's default project.
I can do this now by first calling:
https://rally1.rallydev.com/slm/webservice/v2.0/iteration:current?pretty=true
Parsing out Iteration->Project->Ref, and then calling:
https://rally1.rallydev.com/slm/webservice/v2.0/project/[ProjectID]/Iterations?pretty=true
or
https://rally1.rallydev.com/slm/webservice/v2.0/iteration?query=(Project.Oid=[ProjectID])&pretty=true
Wondering if there is a better way?
I saw UserProfile had DefaultProject and DefaultWorkspace, but I couldn't figure out how to use them as fetching just returned 'null'.

Your queries on Iteration are spot on for looking up Iterations for a particular Project. Note that for UserProfile, the Default Workspace/Project settings are not required fields. They are empty unless the User has explicitly set them in his or her profile settings. Only the user can set these; a (Workspace/Subscription) Administrator cannot set them on behalf of the User. So if you're getting empty values back for these, it is likely because the User of concern has not set a Default Workspace/Project.
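
For reference, a minimal sketch of the two-step flow described in the question, using Python's requests library. The ZSESSIONID header is Rally's API-key header; the key value is a placeholder, and the response keys follow the Iteration->Project->Ref path the question parses out:

import requests

BASE = 'https://rally1.rallydev.com/slm/webservice/v2.0'
HEADERS = {'ZSESSIONID': '<your-api-key>'}  # placeholder API key

# Step 1: fetch the current iteration and parse out Iteration->Project->Ref
current = requests.get(f'{BASE}/iteration:current', headers=HEADERS).json()
project_ref = current['Iteration']['Project']['_ref']

# Step 2: list the iterations for that project
iterations = requests.get(f'{project_ref}/Iterations', headers=HEADERS).json()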

How to handle network errors when saving to Django Models

I have a Django .save() call that executes in a loop n times.
My concern is how to guard against network errors during saving: some entries could be saved while others are not, with no way of telling which.
What is the best way to make sure that the execution completes?
Here's a sample of my code:
# SAVE DEBIT ENTRIES
for i in range(len(debit_journals)):
    # UPDATE JOURNAL RECORD
    debit_journals[i].approval_no = journal_transaction_id
    debit_journals[i].approval_status = 'approved'
    debit_journals[i].save()
Either use bulk_create / bulk_update to execute a single DB query, or use transaction.atomic as a decorator on your function so that any error during save rolls the database back to its state from before the function ran.
Try something like the following (assuming your model is named DebitJournal and debit_journals is a list):
for debit_journal in debit_journals:
    debit_journal.approval_no = journal_transaction_id
    debit_journal.approval_status = 'approved'

DebitJournal.objects.bulk_update(debit_journals, ["approval_no", "approval_status"])
If debit_journals is a QuerySet, you can also try
debit_journals.update(approval_no=journal_transaction_id, approval_status='approved').
It depends on what you mean by a network error: one between the user and the Django application, or one between the Django application and the database. If it's only between the user and the app, note that once the request has been received correctly, the objects will be created even if the user loses the connection afterward. So the user might not see the response, but the objects will still be created.
If the error is between the database and the Django application, some objects might still be created before the error occurs.
Usually, if you want an all-or-nothing behaviour, you should use manual transactions as described here: https://docs.djangoproject.com/en/4.1/topics/db/transactions/
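As a minimal sketch of that all-or-nothing approach (model and field names taken from the question), wrapping the loop in transaction.atomic rolls everything back if any save fails:

from django.db import transaction

@transaction.atomic
def approve_debit_journals(debit_journals, journal_transaction_id):
    # If any save() raises, the whole transaction rolls back and
    # no journal is left partially approved.
    for journal in debit_journals:
        journal.approval_no = journal_transaction_id
        journal.approval_status = 'approved'
        journal.save()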
Note that if the creation runs really long you might hit the request timeout. If it takes more than a few seconds, you should consider making it a background task, with the request only creating the task.
See Python task queue alternatives and frameworks for 3rd party solutions.

Should I store failed login attempts in AWS Cognito or Dynamo DB?

I have a requirement to build a basic "3 failed login attempts and your account gets locked" functionality. The project uses AWS Cognito for authentication, and the Cognito PreAuth and PostAuth Lambda triggers look like they will help here.
So the basic flow is to increment a counter in the PreAuth Lambda, check it and block login there, or reset the counter in the PostAuth Lambda (so successful logins don't end up locking the user out). Essentially it boils down to:
PreAuth Lambda:
    if failed-login-count > LIMIT:
        block login
    else:
        increment failed-login-count

PostAuth Lambda:
    reset failed-login-count to zero
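For concreteness, a minimal sketch of how the PreAuth side could look with boto3 against a DynamoDB counter table (the table name, key, and threshold are illustrative):

import boto3

LIMIT = 3  # illustrative threshold
table = boto3.resource('dynamodb').Table('FailedLogins')  # hypothetical table name

def lambda_handler(event, context):
    # Cognito PreAuth trigger: raising an exception rejects the sign-in.
    username = event['userName']
    item = table.get_item(Key={'username': username}).get('Item', {})
    if item.get('attempts', 0) > LIMIT:
        raise Exception('Account locked: too many failed login attempts')
    # Count this attempt; the PostAuth trigger resets the counter on success.
    table.update_item(
        Key={'username': username},
        UpdateExpression='ADD attempts :one',
        ExpressionAttributeValues={':one': 1},
    )
    return event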
Now at the moment I am using a dedicated DynamoDB table to store the failed-login-count for a given user. This seems to work fine for now.
Then I figured it'd be neater to use a custom attribute in Cognito (using CognitoIdentityServiceProvider.adminUpdateUserAttributes) so I could throw away the DynamoDB table.
However reading https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-dg.pdf the section titled "Configuring User Pool Attributes" states:
Attributes are pieces of information that help you identify individual users, such as name, email, and phone number. Not all information about your users should be stored in attributes. For example, user data that changes frequently, such as usage statistics or game scores, should be kept in a separate data store, such as Amazon Cognito Sync or Amazon DynamoDB.
Given that the counter will change on every single login attempt, the docs would seem to indicate I shouldn't do this...
But can anyone tell me why? Or if there would be some negative consequence of doing so?
As far as I can see, Cognito billing is purely based on storage (i.e. number of users), and not operations, whereas Dynamo charges for read/write/storage.
Could it simply be AWS not wanting people to abuse Cognito as a storage mechanism? Or am I being daft?
We are dealing with a similar problem, and the main reason we decided to store extra attributes in a DB is that Cognito has quotas on all actions, and "AdminUpdateUserAttributes" is limited to 25 requests per second.
More information here:
https://docs.aws.amazon.com/cognito/latest/developerguide/limits.html
So if you have a pool with 100k or more users, it can create a bottleneck if you want to update a Cognito user record on every login, etc.
Cognito UserAttributes are meant to store information about the users. This information can then be read from the client using the AWS Cognito SDK, or just by decoding the idToken on the client-side. Every custom attribute you add will be visible on the client-side.
Custom attributes have a couple of other downsides:
You only have 25 values to set
They cannot be removed or changed once added to the user pool.
I have personally used custom attributes and the interface to manipulate them is not excellent. But that is just a personal thought.
If you want to store this information without depending on DynamoDB, you can use Amazon Cognito Sync. Besides the service, it offers a client with great features that you can incorporate into your app.
AWS DynamoDB appears to be your best option; it is commonly used for such use cases. Some of the benefits of using it:
You can store a separate record for each login attempt with as much info as you want, such as IP address, location, user agent, etc. You can also add a datetime that the pre-auth Lambda can use to query by time range, for example failed attempts within the last 30 minutes (see the sketch after this list).
You don't need to manage the table's contents, because you can set a TTL on DynamoDB records so that each record is deleted automatically after a specified time.
You can also archive items in S3.
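A minimal sketch of that per-attempt pattern in boto3. It assumes a hypothetical table with username as partition key, timestamp as sort key, and TTL enabled on the expire_at attribute; all names are illustrative:

import time
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource('dynamodb').Table('LoginAttempts')  # hypothetical table

def record_failed_attempt(username, ip_address, user_agent):
    now = int(time.time())
    table.put_item(Item={
        'username': username,
        'timestamp': now,
        'ip_address': ip_address,
        'user_agent': user_agent,
        'expire_at': now + 30 * 60,  # DynamoDB deletes the item after ~30 minutes
    })

def failures_in_window(username, window_seconds=30 * 60):
    # Count failed attempts within the time window, e.g. from the pre-auth Lambda.
    cutoff = int(time.time()) - window_seconds
    resp = table.query(
        KeyConditionExpression=Key('username').eq(username) & Key('timestamp').gte(cutoff)
    )
    return resp['Count']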

How to deal with deep level granularization with XACML in enterprise application

I am using WSO2 IS for authorization with XACML. I am able to achieve authorization for static resources, but I am not sure about the design when it comes to granularization.
Example: if I have a method like getCarDetails(Object user) where I should get only those cars which are assigned to this particular user, how do I deal with that in XACML?
WSO2 provides support for PIPs, where we can use custom classes that fetch data from a database. But I am not sure whether we should keep a copy of the original database on the PDP side, or point the PIP at the original database so it stays updated with live data.
The cars are dynamic for the application: e.g. currently 10 cars are assigned to user Alice, and suddenly a supervisor adds 20 more cars to her list in the application-level database. How will these other 20 cars be automatically covered by the policy at the PDP level unless it also has this latest information?
I may be misunderstanding something, but I am not sure how to handle this, since across the whole application we can have lots of complex scenarios like this, where the data for one user sometimes comes from 4 or 5 tables.
Your question is a great one, and the answer highlights the key benefits of XACML and of externalized authorization as a whole.
In XACML, you define generic, global rules about what is and isn't allowed, using what I would call high-level attributes, e.g. attributes of the vehicle (in your case) or of the user (role, department, ...).
For instance, a simple rule could be (using the ALFA syntax):
policy viewCars{
    target clause actionId=="view" and resourceType=="car"
    apply firstApplicable
    rule allowSameRegion{
        permit
        condition user.region==car.region
    }
}
Both the user's region and the car's region are maintained inside the application's database. The values are read using a PIP or Policy Information Point (details here).
In your example, you talk about direct assignment, i.e. a user has been directly assigned to a vehicle. In that case, the rule would become:
policy viewCars{
    target clause actionId=="view" and resourceType=="car"
    apply firstApplicable
    rule allowAssignedVehicle{
        permit
        condition user.employeeId==car.assignedUser
    }
}
This means that the assigned-user information must be kept somewhere: in the application database, a CSV file, a web service, or another source of information. From a management perspective, an administrator would add / remove vehicles from a user's assigned list (or perhaps the other way around: add / remove assigned users from a vehicle's assigned-user list).
The XACML rule itself will not change. If the supervisor adds 20 more cars to the employee's list (maintained in the application-level database), then the PDP will be able to use that information via the PIP and access will be granted or denied accordingly.
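To make the flow concrete, here is a sketch of the authorization request a PEP might send to the PDP using the XACML JSON profile. The endpoint, credentials, and attribute IDs are illustrative; note that only identifiers are sent, while car.assignedUser is resolved by the PIP from the live application database:

import requests

# Illustrative PDP endpoint; WSO2 IS exposes its own entitlement service URL.
PDP_URL = 'https://localhost:9443/api/identity/entitlement/decision/pdp'

xacml_request = {
    'Request': {
        'AccessSubject': {'Attribute': [
            {'AttributeId': 'user.employeeId', 'Value': 'alice'},
        ]},
        'Action': {'Attribute': [
            {'AttributeId': 'actionId', 'Value': 'view'},
        ]},
        'Resource': {'Attribute': [
            {'AttributeId': 'resourceType', 'Value': 'car'},
            {'AttributeId': 'carId', 'Value': 'car-42'},
        ]},
    }
}

# verify=False only because a local WSO2 instance uses a self-signed certificate.
response = requests.post(PDP_URL, json=xacml_request, auth=('admin', 'admin'), verify=False)
print(response.json())  # e.g. {'Response': [{'Decision': 'Permit'}]}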
The key benefit of XACML is that you could add a second rule that would state a supervisor can see the cars he/she is assigned to (the normal rule) as well as the cars assigned to his/her subordinates (a new proxy-delegate rule).
A diagram on the Axiomatics blog summarizes the XACML flow.
HTH, let me know if you have further questions. You can download ALFA here and you can watch tutorials here.

django-webtest with multiple test client

In django-webtest, every TestCase subclass comes with self.app, which is an instance of webtest.TestApp, and I can make it log in as user A via self.app.get('/', user='A').
However, if I want to test the behavior for both user A and user B in one test, how should I do it?
It seems that self.app is just DjangoTestApp() with extra_environ passed in. Is it appropriate to just create another instance of it?
I haven't tried setting up another instance of DjangoTestApp as you suggest, but I have written complex tests where, after making requests as user A, I then switched to making requests as user B with no issue, in each case passing the user or username in when making the request, e.g. self.app.get('/', user='A') as you have already written.
The only part which did not work as expected was making unauthenticated requests, e.g. self.app.get('/', user=None): this continued to use the user from the request immediately prior.
To reset the app state (which should allow you to emulate most workflows with several users in a sequential manner) you can run self.renew_app() which will refresh your app state, effectively logging the current user out.
To test simultaneous access by more than one user (your question does not specify exactly what you are trying to test) then setting up another instance of DjangoTestApp would seem to be worth exploring.
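A minimal sketch of the sequential pattern described above (the URL and usernames are illustrative, and the users must exist in the test database):

from django_webtest import WebTest

class TwoUserWorkflowTest(WebTest):
    def test_both_users(self):
        # Requests as the first user...
        response_a = self.app.get('/', user='alice')
        # renew_app() resets the app state, effectively logging alice out.
        self.renew_app()
        # ...then requests as the second user.
        response_b = self.app.get('/', user='bob')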

A way to bypass the per-IP limit when retrieving profile pictures?

My app downloads all of the user's friends' pictures.
All the requests are of this kind:
https://graph.facebook.com/<friend id>/picture?type=small
But after a certain limit is reached, instead of the picture I get:
{"error":{"message":"(#4) Application request limit reached","type":"OAuthException"}}
Actually, the only way I have found to prevent this is to change the server IP (manually).
Isn't there a better way?
For the record:
The limit applies to the Graph API only, and the graph.facebook.com/<user>/picture URL is a Graph API call that returns a redirect.
So, to avoid the daily limit, simply fetch all the image URLs via FQL, like:
SELECT uid, pic_small, pic_big, pic, pic_square FROM user WHERE uid = me() or IN (SELECT uid2 FROM friend WHERE uid1=me())
These are direct URLs to the images, e.g.:
http://profile.ak.fbcdn.net/hprofile-ak-snc4/275716_1546085527_622218197_q.jpg
Don't store them, though, since they change continuously.
If the pictures are needed for an online app, a better way is not to download those images at all, but to use the online version. There are a couple of reasons for this:
Users change pictures (some frequently); do you need an updated version?
Facebook's servers are probably faster than yours, and your users' friends' pictures are probably already cached in their browsers.
Update:
Since the limit you are hitting is the Graph API call limit, not an image-retrieval limit, another solution is to use the user's friends connection in the Graph API and specify picture in the fields argument, e.g. https://graph.facebook.com/me/friends?fields=picture. This returns direct URLs for the friends' pictures, so a single call gets all the info needed to download the images for each user...
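
A minimal sketch of that single-call approach as the API stood at the time (pagination is omitted, and the access token is a placeholder):

import requests

def fetch_friend_picture_urls(access_token):
    # One Graph API call returns the picture info for every friend.
    resp = requests.get(
        'https://graph.facebook.com/me/friends',
        params={'fields': 'picture', 'access_token': access_token},
    )
    resp.raise_for_status()
    urls = []
    for friend in resp.json().get('data', []):
        pic = friend.get('picture')
        # Depending on the API version, 'picture' is either a plain URL string
        # or a nested {'data': {'url': ...}} object.
        if isinstance(pic, dict):
            pic = pic.get('data', {}).get('url')
        if pic:
            urls.append(pic)
    return urls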