Get code model from a C++ file checked out from TFS

I'm currently building a check-in policy plugin which performs some static code analysis based on the VCCodeModel code model from the VS SDK.
I can get the code model from an active document in a solution via
FileCodeModel fcm = _dte.ActiveDocument.ProjectItem.FileCodeModel;
But if I check out, for example, a single .cpp file rather than the whole solution, no code model is available (because the file is not part of a solution?).
Is there any way to get access to the code model from a single file checked out from tfs?
Or is there any workaround like building a temp solution?
How can I access the code model for a file via the filepath/name?
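For illustration, something along these lines is the kind of temp-project workaround I have in mind (an untested sketch; the project template path and file paths are placeholders):

// Untested sketch: add the lone file to a throwaway project so that a
// FileCodeModel becomes available for it; paths below are placeholders.
EnvDTE.Project tempProject = _dte.Solution.AddFromTemplate(
    @"C:\templates\EmptyVcProject.vcxproj", @"C:\temp\TempProj", "TempProj", false);
EnvDTE.ProjectItem item = tempProject.ProjectItems.AddFromFile(@"C:\checkout\Single.cpp");
FileCodeModel fcm = item.FileCodeModel;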
Thanks

Google App Engine - Add & Reflect pattern, working around eventual consistency

I'm building a web application on App Engine.
In my case, it's built on django-nonrel, but the key point is that it's using Google's datastore.
I love that I don't need to deal with replication, sharding, backups and such, but one thing that keeps getting in my way is eventual consistency, which seems to prevent implementing a common web app pattern I'm calling "Add & Reflect".
Let's say I have a project management app. The Project is its central model.
Now there's a web page where I see a list of all projects and can add a project; as a response, the page reflects back the list of all projects, which should include the project I just added (assuming no errors).
So the pattern goes like this:
1. Get and display the list of existing projects.
2. The user adds a new project (using a form on that page).
3. The new project is created.
4. As a response, get and display the list of existing projects again (now including the new project).
The thing is, due to eventual consistency, there is no guarantee whatsoever that I will see the new project when I query for the list of all projects right after adding it.
That momentary inconsistency would be fine if it happened when another request (e.g. another user: user B) fetched the list of projects a second after user A added one, but it's a real problem when user A performs an operation and does not see the result of his own action, and therefore gets no feedback.
I have gotten used to doing something like this to work around this problem:
def create_project(request):
    response_context = {}
    new_project = Project(name=request.POST['name'])
    new_project.save()
    response_context['projects'] = Project.get_serialized_projects()
    # on GAE, eventual consistency means we are not guaranteed to see the
    # new project when querying for all projects, so we may need to add
    # it manually...
    if new_project.serialize() not in response_context['projects']:
        response_context['projects'].append(new_project.serialize())
    return render('projects.html', response_context)
The problem is that this happens in many places in my code, so I'm thinking maybe I'm missing something, since this is such a basic web app pattern.
Any suggestions for other ways to handle this?
Yes, it's a common issue. No, there's no magic fix.
On the client side, once you know the commit succeeded, you can save the item locally (in globals or storage) and merge that saved data into what you get back when querying the datastore. Put an expiration on it so it's temporary. It's not trivial to make this work in all cases (say an item was added and then removed or renamed; the cache has to be updated too).
On the server side, it's common to cache recent saves in memcache and likewise merge them with your queries.
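A minimal sketch of the server-side idea, assuming the stock google.appengine.api.memcache client; the key scheme, the 60-second lifetime, and the reuse of get_serialized_projects() from the question are illustrative choices:

from google.appengine.api import memcache

RECENT_KEY = 'recent_projects:%s'  # hypothetical per-user cache key

def remember_recent_save(user_id, project_data):
    # Remember what this user just wrote, with a short expiry; the query
    # index is normally consistent again well within a minute.
    recent = memcache.get(RECENT_KEY % user_id) or []
    recent.append(project_data)
    memcache.set(RECENT_KEY % user_id, recent, time=60)

def get_projects_for(user_id):
    # Eventually consistent query, merged with this user's recent saves.
    projects = Project.get_serialized_projects()
    for item in memcache.get(RECENT_KEY % user_id) or []:
        if item not in projects:
            projects.append(item)
    return projects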

TFS 2010 process template work item customisations

I've created a new default Process Template called "My Agile V2" and uploaded it successfully into our TFS repository.
I also have some exported work item definition files, i.e. Bug.xml and UserStory.xml, which have some customisations made to them, e.g. a few additional fields and the like. We already use these customised work items successfully on some of our existing projects.
But I'd now like to include these customised work items as part of the new "My Agile V2" Process Template, so that we don't have to import the same Bug.xml and UserStory.xml files every time we create a new project.
I managed to export ("Download") an existing Process Template, overwrite its Bug.xml and UserStory.xml files in the "WorkItem Tracking > TypeDefinitions" folder, and then re-upload it. The upload worked; however, the work item customisations didn't come through at all, i.e. the work items still have the default behaviour and none of my custom fields.
Where am I going wrong?
Thanks a lot
I think just importing the WITs is not enough. Have a look at the page on Adding a Work Item Instance to a Process Template.
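In other words, the TypeDefinitions files only take effect if the template's task list references them. As a sketch (based on the TFS 2010 process template layout; treat the attribute details as assumptions), the WorkItems.xml file under "WorkItem Tracking" needs entries like:

<task id="WITs" name="WorkItemType definitions"
      plugin="Microsoft.ProjectCreationWizard.WorkItemTracking"
      completionMessage="Work item types uploaded.">
  <taskXml>
    <WORKITEMTYPES>
      <WORKITEMTYPE fileName="WorkItem Tracking\TypeDefinitions\Bug.xml" />
      <WORKITEMTYPE fileName="WorkItem Tracking\TypeDefinitions\UserStory.xml" />
    </WORKITEMTYPES>
  </taskXml>
</task>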

In Django, how can I tell my project to read/write in distinct log directories at runtime?

I am working on a Django project that, along with having a database for its models and relations, writes to a log directory called activity_logs outside of the project directory to keep track of formatted user activities, one file per user. This is an alternative, file-structure-based solution to having a database table carry this information, because it offloads some storage from the DB, and such activities are relatively easy to format and express as files. Perhaps some of you would recommend storing this kind of data in the database instead, which is fine, but there is still a question here that I need help answering.
This Django project has multiple apps, each with an extensive test suite. Additionally, there is a logging.py file that encapsulates the logging functionality (writing/reading activities to/from log files), and both the test cases in the test suites and the view functions (and various other utility functions) use these logging functions to store user activities and retrieve them based on model relationships, emulating a user notification system. Since the logging module takes care of this logging, it needs to know where to write, and so we have a directory structure called activity_logs to which it writes user log files, creating one for each new user and deleting one for each user removed from the database. One of the newest changes we would like to make is to use a separate logging directory when testing this functionality, something like test_activity_logs, so that writes for test users are never confused with the regular activity log directory for real users.
My problem is this: at runtime, how can I tell the system, from whichever entry point execution starts (a view function call through the Django test Client object, a test case, an actual HTTP request made via a URL, etc.), whether to look inside the activity_logs or the test_activity_logs directory? It depends solely on whether I am generating new information for a real user or a test user, but a User is a User in our system, and I'm having trouble telling the functions that call the logging functions to write to the test log directory rather than the regular one. For example, one approach I am trying is to pass a keyword argument (kwarg) to the logging functions so that they know which directory to read from/write to, like so:
self.assertTrue(activity_has_been_logged(ACTIVITY_ACCOUNT_CREATED, user.get_profile(), use_test_activity_log_directory=True))
The kwarg use_test_activity_log_directory=True tells the logging function activity_has_been_logged to read from the test activity log directory. Unfortunately, apart from being a little inflexible (but tolerable), this doesn't solve the situation where the Django test client object sends a GET or POST request via a URL to a view function that writes activities to log files:
response = client.post(propose_match_url, post)  # can't write to the test log directory if the view writes to the regular one by default!
How do I let the client pass this kwarg on to those view functions? I think it should be possible, but I'm not sure whether fiddling with these kwargs is the best way, or whether I should instead create a global variable in the project settings file, though that might cause trouble with race conditions on a shared mutable variable.
Your help would be great. Thanks in advance!
So I just solved this problem. The logging file hosting all the logging functionality is really the only place that needs to know where to look (either test_activity_logs or activity_logs), since all other components invoke functions from the logging module to write/read to/from these directories. I added a boolean field called is_test to the UserProfile model class, which determines whether to look in test_activity_logs (is_test=True) or activity_logs (is_test=False). That way, the logging module only needs to check the is_test field of its UserProfile parameter to determine where to perform its logging. Problem solved!
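A minimal sketch of that idea, assuming the directory names from the question; log_activity and the one-file-per-user naming are hypothetical stand-ins for the real functions in logging.py:

import os

ACTIVITY_LOG_DIR = 'activity_logs'
TEST_ACTIVITY_LOG_DIR = 'test_activity_logs'

def _log_dir_for(profile):
    # The new is_test flag on UserProfile is the single switch.
    return TEST_ACTIVITY_LOG_DIR if profile.is_test else ACTIVITY_LOG_DIR

def log_activity(profile, activity):
    # One log file per user, appended to on every activity.
    path = os.path.join(_log_dir_for(profile), '%s.log' % profile.user.username)
    with open(path, 'a') as f:
        f.write(activity + '\n')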
Check out daemontools if you're on a *nix box, or launchd on OS X. Both can make sure your Django instance stays running in whatever mode you prefer (daemontools has a few more options for that), and both can isolate a directory for logging stdout/stderr.
You can set environment variables for each instance so that other log files and temporary files know where to be created; you then read them from os.environ, or simply use the current working directory as a base when using daemontools.
The directory is automatically created for you using daemontools.
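For instance, resolving the directory per instance could look like this (the ACTIVITY_LOG_DIR variable name is an assumption):

import os

# Falls back to ./activity_logs when the instance sets no override.
LOG_DIR = os.environ.get('ACTIVITY_LOG_DIR',
                         os.path.join(os.getcwd(), 'activity_logs'))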

Retrieving Enterprise Project Types using Project Server Interface

I am currently building an app to programmatically create projects in Microsoft Project Server using the web services exposed through the Project Server Interface (PSI).
I am able to create a project with an Enterprise Project Type using the QueueCreateProject method; however, I need to specify the GUID of the EPT, which I don't want to hard-code.
Is there another web service or way to get the GUID of a specific EPT found by its name?
Also, can the same be done for custom fields?
I think what you're looking for is the PSI filter parameters. Check out this post for an example of retrieving the GUID of a custom field.
Really, I think the key is setting the filter criteria:
cfFilter.Criteria = new PSLibrary.Filter.FieldOperator(PSLibrary.Filter.FieldOperationType.Equal, nameColumn, customFieldName);
where nameColumn is cfDataSet.CustomFields.MD_PROP_NAMEColumn.ColumnName and customFieldName is a value you pass in.
If you are like me, you want to do this for a lot of fields. I used a filter to query all the field names and MD_PROP_UIDs and then put them in a hashtable, so I don't have to keep making PSI calls.
Disclaimer: I use 2007, but I'm assuming it is mostly the same for custom fields (not for the EPT part, which I didn't cover).
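A sketch of that caching idea; the helper below is illustrative, and only the MD_PROP_NAME/MD_PROP_UID columns come from the CustomFieldDataSet mentioned above:

using System.Collections;

Hashtable fieldUids = new Hashtable();

void CacheCustomFieldUids(CustomFieldDataSet cfDataSet)
{
    // Walk the dataset once and remember name -> GUID so that later
    // lookups don't need another PSI round trip.
    foreach (CustomFieldDataSet.CustomFieldsRow row in cfDataSet.CustomFields)
    {
        fieldUids[row.MD_PROP_NAME] = row.MD_PROP_UID;
    }
}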
Here is an answer: https://stackoverflow.com/a/12267251/1594383
In short: the methods are available from the Workflow service.

What's the equivalent object model to SiteData.EnumerateFolder in MOSS Web Service?

Our project code makes use of the SiteData web service. For some reason that use is problematic, and in any case it is code that runs on the server, so I'm looking for the object model equivalent of the EnumerateFolder method.
Does anyone have any ideas?
SPList (amongst other classes) has a RootFolder property.
From that you can use Files to get the list of files, and SubFolders to get the collection of sub-folders.
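A minimal sketch of recursing with those properties (the site URL and list title are placeholders):

using System;
using Microsoft.SharePoint;

static void EnumerateFolder(SPFolder folder)
{
    foreach (SPFile file in folder.Files)
        Console.WriteLine(file.Url);   // each file in this folder
    foreach (SPFolder sub in folder.SubFolders)
        EnumerateFolder(sub);          // recurse into sub-folders
}

// Usage, inside server-side code:
using (SPSite site = new SPSite("http://server/sites/mysite"))
using (SPWeb web = site.OpenWeb())
{
    SPList list = web.Lists["Shared Documents"];
    EnumerateFolder(list.RootFolder);
}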