How to set up an activity automatically for a view? - build

In our build, we delete our view and create a new view before building.
This worked without any issues in base ClearCase.
But in UCM we face issues during check-out and check-in, because an activity name has to be assigned every time.
Is it good practice to create a new activity whenever I build? (But then the number of activities would soon grow enormous.)
Is there an easy way to set a default activity automatically in UCM?
Has anybody automated this in their build process? If so, can you share a link or some other useful resource?

@Samselvaprabu, agreeing with VonC on other things, and that activities are, at best, logical groupings of code/development tasks. For example, 5 source files + 1 property file for resolving a QA defect.
As for how many activities: there is no specific guideline from IBM on this (AFAIK), but in my experience:
A typical activity should have neither too few files (1 or 2) nor too many (say 15+).
An activity naming convention (though inconsistent when applied manually) would help. For example, username_ShortDescription/DefectID_date helps with organizing and sorting activities when the need arises. And trust me, it will.
Obsoleting activities based on some criterion such as age (more than 1 month old) is good housekeeping.
An activity per build? I would say this depends on how many times you build, how many artifacts go into each build, how many views/developers you have, and so on. You could have a build-to-activity relationship of 1-to-n or n-to-1; it is completely specific to your environment. That's the beauty, and also the curse, of UCM.
These suggestions are not exactly what you asked for, but I feel this is a good time to give them, as you are just starting out in the complex, messy world called UCM. :)

Use cleartool setact to set your activity.
setact/ivity [ -c/omment comment | -cfi/le pname | -cq/uery | -nc/omment ]
             [ -vie/w view-tag ] { -none | activity-selector }
You might need to unset the current activity first from your view:
cleartool setactivity -none
Cleared current activity from view java_int.
Then set an activity to be the current activity:
cleartool setactivity create_directories
Set activity "create_directories" in view "webo_integ".
See "Setting UCM activities" for more:
You can set only one activity per view at a time, and all checkouts in your view are associated with the currently set activity until you unset the activity or set another one.
cleartool setact -view <myViewTag> <anActivityName>
(note that -view comes before the activity name)
Note that if you change the activity while you have pending checkouts, you will get a warning.
You usually associate activities with a development task rather than with a build number.
Since you don't version what you are building (the executables), you don't have to make a new activity per build.
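If you do decide to automate this, the cleartool commands above are easy to drive from a build script. Here is a minimal Python sketch; the view tag, stream selector, and activity name are hypothetical placeholders, not values from the question, so adapt them to your environment:
import subprocess

VIEW_TAG = "my_build_view"              # hypothetical view tag
ACTIVITY = "build_prep"                 # hypothetical activity name
STREAM = "stream:my_stream@\\my_pvob"   # hypothetical stream selector

def cleartool(*args):
    # Run a cleartool command, echoing it for the build log.
    cmd = ["cleartool"] + list(args)
    print(" ".join(cmd))
    subprocess.check_call(cmd)

# 1. Unset whatever activity is currently set in the view.
cleartool("setactivity", "-view", VIEW_TAG, "-none")

# 2. Create the activity if it does not exist yet; mkactivity fails when
#    the activity already exists, so swallow that error. -nset avoids
#    setting it implicitly; we set it explicitly in step 3.
try:
    cleartool("mkactivity", "-nc", "-nset", "-in", STREAM, ACTIVITY)
except subprocess.CalledProcessError:
    pass  # assume the activity already exists

# 3. Set it as the current activity for the view.
cleartool("setactivity", "-view", VIEW_TAG, ACTIVITY)
Reusing one well-known activity name per view like this also keeps the activity count from exploding, which addresses the "enormous number of activities" concern.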

Related

Profile attribute being magically set in Siebel

We have a very weird issue in our Siebel 7.8 application.
In the Application_Start event we define a bunch of profile attributes, which determine if the logged user will be allowed to perform certain operations or not. The code is something like this:
if (userHasSuperpowers) {
    TheApplication().SetProfileAttr("CanFly", "Y");
} else {
    // CanFly is not set, and GetProfileAttr("CanFly") returns ''
}
Everything works fine, except for one of these profile attributes. The conditions are not met, so we don't set its value. But when we check it using GetProfileAttr... it returns 'Y' instead of ''.
I've checked the code. A lot. I've put traces everywhere, and I'm 100% sure that when the last line of the Application_Start event executes, the attribute is still empty. However, in the first Applet_Load event after the login (in the HLS Salutation Applet (HLS Home) applet), its value has already changed to 'Y'. Why!!? I've looked everywhere, but I can't find anywhere else where we'd be doing a SetProfileAttr. So far, I've ruled out:
Every browser and server script for all our applets, application, BCs and business services.
All the runtime business services (the ones defined directly in the application instead of the SRF).
The Personalization Profile business component fields.
SmartScripts (not that they would matter in this particular scenario, I just mention them to acknowledge that you can set profile attributes there too).
Workflows: every step invoking the SIS OM PMT Service method Set Profile Attribute.
Siebel magically setting its value. The profile attribute name is custom made, in Spanish, and it contains our project name and a row_id. I really don't think Siebel is using the same name for its own profile attributes :).
But wait, there is more, I left the best part for last: the problem only happens in our development environment!
It's not an SRF issue: if we promote the same SRF to our testing or production environments, it works and returns the expected value.
It's not a data problem: still with the same SRF, I can use my local thick client, connecting to our development database with the same login and password, and it works fine too.
It's not a concurrency problem: we are testing with only one user logged in. And even if we had more, they wouldn't share sessions. And even if they did, the value wouldn't be always 'Y'.
It's not a temporary glitch, or something due to a wrong incremental compilation or a corrupted SRF: we have been experiencing this for at least 6 months (obviously, in that time frame, we've had dozens of different SRF files... all of them having the same problem, but only in development, and only if you use the server and not the dedicated client... seriously...).
Where else could I search the profile attribute being set? I've read that they can be persisted to the DB, but in order to do so, you have to define them as a field in a BC based on an S_PARTY extension table, right?
Is there any way to trace profile attribute changes somehow? Maybe rising some loglevel?
How can I find out at least what's being executed after the Application_Start, before loading the first applet?
Any other ideas? I tried checking the SQL spool file too, but didn't find anything suspicious there either (i.e., any of the queries we use to check the conditions, being run twice with different parameters).
Update: following Ranjith R's suggestions, I've also checked:
Other vanilla business services which could be also invoked from a workflow to set a profile attr: User Registration > SetProfileAttr, SessionAccessService > SetProfileAttr and ISS Promotion Agreement Manager > SetProfileAttributes.
Runtime events setting profile attributes directly or using a business service (we don't have any runtime events apart from the vanilla ones).
Business services being called from DVMs (we only have vanilla data validation rules, and none of them apply to our buscomps).
Still no luck...
Ok... finally we found what's happening:
We access the URL to our server and get to the login page. This triggers a first Application_Start event, for the SADMIN user.
We set the profile attributes in that session. SADMIN is the Siebel administrator user, so yes, he hasSuperpowers and therefore we do TheApplication().SetProfileAttr("CanFly", "Y");.
The Application_Start event finishes.
We enter our username and password in the login screen to access into Siebel. This triggers a second Application_Start event, this time for our user. This is the one I was monitoring with the trace files.
We set the profile attributes again in the new session. Our user doesn't hasSuperpowers, so we don't set any value for the CanFly attribute.
The Application_Start event finishes, and CanFly is still empty.
Siebel merges both sessions into one before loading the first screen!! Or at least, it transfers over the profile attributes we had set for SADMIN.
I'm sure it happens that way, for two reasons. First, we changed the profile attribute name to include the username too. And second, instead of storing just a "Y", we are now storing the current time:
var time = (new Date()).getTime();
TheApplication().SetProfileAttr("CanFly_" + TheApplication().LoginName(), time);
We end up having CanFly_SADMIN, but no CanFly_USER, and the time value stored is the same we see in the log file for step 2... which is smaller than any of the values for the *_USER attributes.
So that's what happening. I still don't know why Siebel behaves this way, but that would be matter for another question. According to the Siebel bookshelf:
The Start event is called when the client starts and again when the user interface is first displayed.
...but it doesn't say anything about it being called from two different sessions, with different users too, and then merging them together. It must be something misconfigured in our dev environment, considering it doesn't happen in the other ones.
Does Siebel 7.8 have runtime events? I can't recall. Runtime events have action sets, which can set/clear profile attributes.
There are still other vanilla business services which can set profile attributes; try searching in Tools, in a flat view under business service methods, for *rofile*tt*.
The SIS OM service can also be invoked from DVMs or from runtime events directly, so that's also a possibility.
There is no logging system for watching profile attribute values change; testing is the only way out.

Sharing data across Sitecore pipelines

I'm trying to perform some actions in the "httpRequestBegin" pipeline only when necessary.
My processor is executed after Sitecore resolves the user (processor type="Sitecore.Pipelines.HttpRequest.UserResolver, Sitecore.Kernel"), as I'm resolving the user too if Sitecore is not able to resolve it first.
Later, I want to add some rendering in the "insertRenderings" pipeline, but only if the actions in the previous pipeline were executed (if I resolved the user, show a message). So I'm trying to save some "flag" in the first step, to check in the second.
My question is: where can I store that flag? I'm trying to find some kind of "per request" cache...
So far, I've tried:
The session: wrong, it's too early; the session doesn't exist yet.
Items (HttpContext.Current.Items): it doesn't work either; my item is not there in the second step.
So far I'm using the application cache (HttpContext.Current.Cache) with some unique key, but I don't like this solution.
Does anybody know a better approach to share this "flag"?
You could add a flag to the request header and then check its existence in the later pipelines, e.g.:
// in HttpRequest pipeline
HttpContext.Current.Request.Headers.Add("CustomUserResolve", "true");
// in InsertRenderings pipeline
var customUserResolve = HttpContext.Current.Request.Headers["CustomUserResolve"];
if (Sitecore.MainUtil.GetBool(customUserResolve, false))
{
    // custom logic goes here
}
This feels a little dirty; I think adding to Request.QueryString or Request.Params would have been nicer, but those are read-only. However, if you only need this as a one-time deal (i.e., only the first time the user is resolved), then it will work, since on the next request the headers are back to default without your custom header added.
HttpContext.Current.Cache or HttpRuntime.Cache could be the fastest solution here, though this approach does not preserve data when the AppPool gets recycled.
If you add only a few keys to the cache and then maintain them, this solution might work for you. If each request puts an entry into the cache, it may eventually overflow the memory used by the worker process in the long run.
As an alternative, you may try the Sitecore.Context.ClientData property. It uses ClientDataStore, which employs a database (look for the clientDataStore section in the web.config file) to store data. These entries can survive an AppPool recycle.
Though if you use them a lot, it may become a bottleneck under load when you need to write to and/or read from the entries.
If you know that a lot of entries could be created for sharing purposes, I'd create a scheduled task to clean obsolete entries out of the data store.
I know this is a very old question, but I just want to post the solution I worked out.
The following will hold data on a per-HTTP-request basis:
HttpContext.Current.Items["ModuleInfo"] = "Custom Module Info";
We can store data in the HttpContext in one Sitecore pipeline and retrieve it in another.
https://www.codeproject.com/Articles/146455/When-Can-We-Use-HttpContext-Current-Items-to-Store

Google App Engine - Add & Reflect pattern, working around eventual consistency

I'm building a web application on app engine.
In my case, that's built on django-nonrel, but the key point is that it's using Google's datastore.
I love the fact that I don't need to deal with replication, sharding, backups and such, but one thing that constantly trips me up is eventual consistency, which seems to prevent a common web app pattern that I'm calling "Add & Reflect".
Let's say I have a project management app. The Project is its central model.
Now there's a web page where I see a list of all projects, can add a project, and then I'll reflect back the list of all projects, which should include the project I just added (assuming no errors).
So the pattern goes like this:
Get and display list of existing projects
User adds new project (using a form on that page)
New project is created
As a response, get and display list of existing projects (now includes the new project)
Now the thing is, that due to eventual consistency, there is no guarantee whatsoever that I will get that new project when I get a list of all projects right after adding a new project.
Now that would be fine if this momentary inconsistency happened when another request (e.g. another user, user B) requested the list of projects one second after the project was added by the first user (user A). But it's a real problem when user A performs an operation and does not see the results of his own action, and therefore gets no feedback.
I have gotten used to doing something like this to work around this problem:
def create_project(request):
    response_context = {}
    new_project = Project(name=request.POST['name'])
    new_project.save()
    response_context['projects'] = Project.get_serialized_projects()
    # on GAE, eventual consistency means we are not guaranteed to see the
    # new project while querying for all projects, therefore we might need
    # to add it manually...
    if new_project.serialize() not in response_context['projects']:
        response_context['projects'].append(new_project.serialize())
    return render('projects.html', response_context)
The problem is that this happens in many places in my code, so I'm thinking maybe I'm missing something there, since this pattern is such a basic web app pattern.
Any suggestions for other ways to handle this?
Yes, it's a common issue. No, there's no magic fix.
On the client side, once you know the commit succeeded, you can save the item locally (globals or storage) and then merge your saved data in when querying the datastore. Put an expiration on it so it's temporary. It's not trivial to make this work in all cases (say you added an item and then removed/renamed it, so you also have to update the cache, etc.).
On the server side, it's common to cache recent saves in memcache and also merge them with your queries.
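For illustration, here's a minimal sketch of that server-side idea on the App Engine Python runtime, reusing the Project / serialize() / get_serialized_projects() names from the question; the cache key and the 60-second expiry are arbitrary assumptions:
from google.appengine.api import memcache

RECENT_KEY = 'recent_projects'  # hypothetical cache key

def remember_recent(project):
    # Not safe against concurrent writers (get/set is not atomic), but
    # good enough to bridge the eventual-consistency gap in a sketch.
    recent = memcache.get(RECENT_KEY) or []
    recent.append(project.serialize())
    memcache.set(RECENT_KEY, recent, time=60)  # short expiry on purpose

def get_projects_with_recent():
    projects = Project.get_serialized_projects()  # eventually consistent
    for item in memcache.get(RECENT_KEY) or []:
        if item not in projects:
            projects.append(item)
    return projects
Calling remember_recent(new_project) right after save() lets every request, not just the one that created the project, see the fresh entity for the next minute.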

How (in code) can I prevent two people starting the same crowdsourcing task at once?

I'm trying to build a Django app for a translation crowdsourcing task.
For each task in the database, I have an is_completed boolean flag that is set when the user completes the task. I also have a 'give me a random task' button, which chooses from the list of uncompleted tasks.
My question is this. How do I prevent two users being given the same task, if one user clicks the button shortly after another?
I was thinking of setting a has_started flag on the row when a task is loaded, and removing started tasks from the pool of available tasks: but what if the user starts a task and then closes the page without finishing it, so the flag never gets unset? I'll end up with a lot of unfinished tasks.
Could I flag this in a cleverer way with session variables that expire, perhaps? But I know it's hard to capture the 'user closes page' event reliably in JavaScript.
Thanks!
Instead of making has_started a flag, you could make it a timestamp and decide on a reasonable amount of time for task completion (which will allow you to assume that a task has been dropped after X minutes).
There is a risk that this will result in multiple translations of the same thing (i.e. if someone is really really slow and the job is recirculated early), but I think it will cover most cases.
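In Django terms that could look something like this sketch; the Task model, its field names, and the 30-minute window are assumptions, not something from the question:
from datetime import timedelta
from django.db.models import Q
from django.utils import timezone

STALE_AFTER = timedelta(minutes=30)  # assumed task-completion window

def pick_random_task():
    cutoff = timezone.now() - STALE_AFTER
    # Free tasks: never started, or started so long ago that we assume
    # they were dropped.
    candidates = Task.objects.filter(is_completed=False).filter(
        Q(started_at__isnull=True) | Q(started_at__lt=cutoff)
    )
    task = candidates.order_by('?').first()  # random pick; OK for small tables
    if task is not None:
        task.started_at = timezone.now()
        task.save(update_fields=['started_at'])
    return task
Note that this read-then-write still leaves a small race window between the pick and the save; the UPDATE-based answer further down closes that window atomically.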
I would use locking: add a field "lock_time" to your database and update it to the current time as soon as a user starts a task. Then, with a JavaScript event that fires every, let's say, 10 seconds, you keep updating lock_time. Now you can check whether lock_time is more than 30 seconds ago; if so, you "break" the lock.
You'll have to use a timeout. There are no JavaScript events for "user spills coffee on computer" or "user does a hard reset", etc.
I think you'd best set the userid and the start date when the task starts.
When you update a database like this --
UPDATE task t
SET t.userid = :USERID, t.lastprogress = sysdate()
WHERE t.userid is null and t.taskid = :TASKID
-- you will notice 0 modified records when a task is already assigned to a user. This addresses your first problem (a Django equivalent of this atomic claim is sketched after this answer).
Then, if you save a last-modified date, you can run a cron job to clean up abandoned tasks, that is, tasks that haven't been modified within a certain period of time. But this is a different problem altogether; it's hard to find the right balance between deciding too early and too late whether a task is abandoned.
If every modification also updates this date, a user can even work on a task for a longer time, without it being stolen by someone else, as long as they do regular saves.
Also, when saving the modification data (you can write a routine to do that), you can check whether the userid still matches. If the task's userid is NULL (cron decided it was abandoned) or is another userid (an abandoned task picked up by someone else), you can raise an error to tell the user that the task no longer belongs to them.
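As mentioned above, here is roughly what that atomic claim looks like in Django: queryset.update() issues a single UPDATE and returns the number of rows it changed, so a result of 0 means the task was already assigned (Task and its field names are hypothetical, as before):
from django.utils import timezone

def claim_task(task_id, user_id):
    # One UPDATE ... WHERE userid IS NULL; the database guarantees that
    # only one of two concurrent claimers can match the row.
    rows = Task.objects.filter(pk=task_id, userid__isnull=True).update(
        userid=user_id,
        lastprogress=timezone.now(),
    )
    return rows == 1
A caller would then proceed only when claim_task(...) returns True, and show a "task already taken" message otherwise.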

In mTurk, how can I use participation in a previous HIT (or series of HITs) as a qualification?

I am using mTurk for surveys, and I need a way of making sure that people who have participated in a previous survey / HIT do not participate in certain future surveys / HITs. I am not sure whether I should do this as a qualification or in some other way.
I know there is some way to do this, but I have no idea how. I have very limited programming experience and would greatly, greatly appreciate specific instructions on how I might do this. My understanding is that I might need to use AWS? Many thanks!
Mass rejections as suggested above are a really, really bad idea in terms of your reputation as a requester. You are much better off creating a Qualification for the new HIT, which automatically grants a score of 100 (or whatever) to anyone who takes it, and assigning scores of zero to everyone who has done the previous surveys. This prevents repeats but doesn't annoy any of your workers.
The easiest way to create a Qualification is at https://requester.mturk.com/qualification_types.
If you download the CSV of workers from https://requester.mturk.com/workers, you can assign scores to workers who have done the previous HIT(s).
To make the qualification grant scores to new workers automatically requires the API, though.
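For example, with the boto3 MTurk client, assigning the blocking score to previous participants could look like the sketch below; the qualification name, the score convention, and the worker-ID list are assumptions, while the API operations themselves are standard boto3 calls:
import boto3

mturk = boto3.client('mturk', region_name='us-east-1')

# Create the qualification type once.
qual = mturk.create_qualification_type(
    Name='DidPreviousSurvey',  # hypothetical name
    Description='Tracks participation in our earlier surveys',
    QualificationTypeStatus='Active',
)
qual_id = qual['QualificationType']['QualificationTypeId']

# Worker IDs collected from the CSV of previous participants.
previous_workers = ['A1EXAMPLE', 'A2EXAMPLE']  # placeholders

for worker_id in previous_workers:
    mturk.associate_qualification_with_worker(
        QualificationTypeId=qual_id,
        WorkerId=worker_id,
        IntegerValue=0,  # 0 = did a previous survey, so excluded
        SendNotification=False,
    )
The new HIT would then require this qualification with a value of 100 (granted to everyone else), along the lines of the QualificationRequirement shown in the Java answer below.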
Here's a hacky way to do it:
When you accept HITs for surveys, save every participating worker's ID.
In the writeup, note that "if you've done previous surveys with us, then you can't do this one (i.e., you can, but we won't approve it)".
When you approve HITs, cross-reference the worker IDs with anybody who participated in a previous survey, and reject the HITs of any that match.
If you're doing enough surveys, then yes, you probably want to use the AWS API for at least the approval part; otherwise, most things appear to be doable from the requester interface.
Amazon Mechanical Turk gives requesters the option to grant their workers a qualification type. By connecting your HITs to a qualification type named, say, "A", and then granting workers that same qualification type, only workers who have the qualification can see and work on the HITs.
First, create the desired qualification types through the mturk web UI (it is only a name and a description): requester.mturk.com > Manage > Qualification Types. It will give you a qualification ID after generating it (you will need it soon).
Second, in the HIT creation loop, you have to use the QualificationRequirement class. (I am using Java; it looks like the code below.)
QualificationRequirement[] qualReq = new QualificationRequirement[1];
qualReq[0] = new QualificationRequirement();
qualReq[0].setQualificationTypeId(qualID);
qualReq[0].setComparator(Comparator.EqualTo);
qualReq[0].setIntegerValue(100);
qualReq[0].setRequiredToPreview(false);
Then, in the HIT creation loop, I use this:
try {
    hit = this.service.createHIT(null,
            props.getTitle(),
            props.getDescription(),
            props.getKeywords(),
            question.getQuestion(),
            new Double(props.getRewardAmount()),
            new Long(props.getAssignmentDuration()),
            new Long(props.getAutoApprovalDelay()),
            new Long(props.getLifetime()),
            new Integer(props.getMaxAssignments()),
            props.getAnnotation(),
            qualReq,
            null);
} catch (ServiceException e) {
    // the original snippet omitted the catch block; handle/log as appropriate
    e.printStackTrace();
}
Third is assigning the qualification type to the workers whom you want to work on your HITs. It is very straightforward; I usually use the mturk UI to do it: requester.mturk.com > Manage tab > Workers. You should download the CSV file if you want to assign this qualification to a bunch of workers.
(Workers are those who have worked with you in the past.)
You can notify workers by sending them an email after qualifying them (a small API sketch follows at the end of this answer).
Notice: some workers are very slow in answering new HITs after being qualified, so keep in mind that you should have a backup plan and extra time in case you do not receive enough responses within a certain amount of time.
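A small sketch of that notification step with boto3; the subject, message text, and worker IDs are placeholders:
import boto3

mturk = boto3.client('mturk', region_name='us-east-1')

mturk.notify_workers(
    Subject='New survey available',
    MessageText='You have been qualified for our new survey HIT.',
    WorkerIds=['A1EXAMPLE', 'A2EXAMPLE'],  # up to 100 worker IDs per call
)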