First of all, I must say I am totally new to MTurk, so forgive me if I am thinking about this in completely the wrong way.
I have to create a task where workers classify a sentence as spam or as belonging to a certain category. I will have about 2,500 sentences to classify per day.
What is the best way to use the API to do this? I understand how to create a HIT using the API, but it is my understanding that I can't create a recurring HIT that changes itself once each sentence is classified. Do I need to create 2,500 HITs?
I researched and found out about the External Question, which I can set up on my server and make change with each form submit.
In that case, will it be just one HIT? Is that the correct way to do this?
I am confused by the dynamic part of MTurk.
Any tips, up-to-date documentation, or suggestions will be appreciated.
Thanks!
You likely want to create separate HITs.
If you create a single External HIT (hosted on your server), an MTurk Worker who takes your HIT will not be eligible to take another task (e.g. a classification task), since Workers are not allowed to take a single HIT more than once. However, if you create separate HITs, a Worker can take as many of them as they wish, which is probably what you want.
You are correct that you cannot automatically change a HIT dynamically unless it is run on your own server.
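For illustration, here is a minimal sketch of the separate-HITs approach using the Python boto3 client. The endpoint, reward, and form markup are placeholder assumptions, not a recommendation; a real HTMLQuestion form also has to read the assignmentId from the URL and post it back with the form.

    import boto3

    # Sandbox endpoint for testing; drop endpoint_url to post to the live marketplace.
    mturk = boto3.client(
        "mturk",
        region_name="us-east-1",
        endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
    )

    sentences = ["First sentence to classify", "Second sentence to classify"]  # ~2500/day in practice

    for sentence in sentences:
        # One HIT per sentence; the sentence is embedded directly in the question.
        question_xml = f"""
        <HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
          <HTMLContent><![CDATA[
            <!DOCTYPE html>
            <html><body>
              <form action="https://www.mturk.com/mturk/externalSubmit" method="post">
                <p>{sentence}</p>
                <label><input type="radio" name="label" value="spam"> Spam</label>
                <label><input type="radio" name="label" value="category_a"> Category A</label>
                <!-- assumption: a real form must also submit the assignmentId hidden field -->
                <input type="submit">
              </form>
            </body></html>
          ]]></HTMLContent>
          <FrameHeight>400</FrameHeight>
        </HTMLQuestion>"""
        mturk.create_hit(
            Title="Classify a sentence",
            Description="Decide whether a sentence is spam or which category it belongs to",
            Reward="0.02",                    # placeholder reward
            MaxAssignments=1,                 # one worker per sentence; raise for redundancy
            LifetimeInSeconds=86400,
            AssignmentDurationInSeconds=300,
            Question=question_xml,
        )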
I'm using Django to develop an app for booking cars, and I need automated code that checks every hour whether a car is booked. I'm new to Django and have no idea how to do this.
I agree with abhijeetviswa's answer and also the linked answer that mentioned Celery (if you need something a bit more complex).
I'd also think very carefully about what it is you are trying to achieve and consider whether there is a different way to do it. Unless you are going to use Django signals so you can respond to the user when it finds that the car is booked, you might not actually need this to be a Django thing at all.
For example, if you just wanted to know whether the car was booked or not, you could refresh that information just before you need it (i.e. before building some results) rather than polling for it every hour.
Obviously, this depends heavily on what you want to achieve.
You can look into management commands and cronjobs. You basically create a management command that performs a specific function. You can then schedule a cronjob that runs every hour and executes this management command.
Check out this answer as well.
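A minimal sketch of that approach, assuming a hypothetical app named bookings with a Car model that has an is_booked field:

    # bookings/management/commands/check_bookings.py
    from django.core.management.base import BaseCommand

    from bookings.models import Car  # hypothetical app and model


    class Command(BaseCommand):
        help = "Check the booking status of every car"

        def handle(self, *args, **options):
            for car in Car.objects.all():
                # Replace with whatever "checking" means for your app:
                # sending notifications, freeing expired bookings, etc.
                self.stdout.write(f"Car {car.pk}: booked={car.is_booked}")

A crontab entry then runs it hourly:

    # crontab -e
    0 * * * * /path/to/venv/bin/python /path/to/project/manage.py check_bookings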
How can I make sure that participants taking my survey on Amazon Mechanical Turk are not allowed to take it more than once?
If you create a HIT, a given worker can only take that HIT once. If you have, e.g., multiple HITs that are all the same study (either different conditions you launch simultaneously or multiple HITs that you post over time), then workers will have access to each version. Of course, someone might have multiple accounts or something (but that is rare and against Terms of Use). So, as long as you only have one HIT (with however many assignments you need - one assignment being one worker), then you will be fine.
While it's true that posting a HIT once means a Turker can only take it once, many people find that some participants malingered, satisficed, etc. and have to repost their HITs a second or third time. Requesters also sometimes realize they need more responses, and therefore post their HITs again. In these situations your solution is the DoesNotExist qualification comparator: http://mechanicalturk.typepad.com/blog/2014/07/new-qualification-comparators-add-greater-flexibility-to-qualifications-.html
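With the Python boto3 client, the pattern looks roughly like this (the qualification type ID and survey URL are placeholders; you create the qualification once, assign it to everyone who already participated, and require its absence when you repost):

    import boto3

    mturk = boto3.client("mturk", region_name="us-east-1")

    # Hypothetical qualification you assign to workers who took the first posting.
    ALREADY_PARTICIPATED = "3XXXXXXXXXXXXXXXXXXXXXXXXXXXXX"

    question_xml = """
    <ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
      <ExternalURL>https://example.org/survey</ExternalURL>
      <FrameHeight>600</FrameHeight>
    </ExternalQuestion>"""

    mturk.create_hit(
        Title="Research survey (repost)",
        Description="A short survey; workers who took the first posting are excluded",
        Reward="1.00",
        MaxAssignments=100,
        LifetimeInSeconds=7 * 24 * 3600,
        AssignmentDurationInSeconds=1800,
        Question=question_xml,
        QualificationRequirements=[{
            "QualificationTypeId": ALREADY_PARTICIPATED,
            "Comparator": "DoesNotExist",
            # Hide the HIT entirely from workers who hold the qualification.
            "ActionsGuarded": "DiscoverPreviewAndAccept",
        }],
    )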
I hope the title is chosen well enough to ask this question.
Feel free to edit if not and please accept my apologies.
I am currently laying out an application that is interacting with the web.
Explanation of the basic flow of the program:
The user enters a UserID into my program, which is then used to access multiple XML files over the web:
http://example.org/user/userid/?xml=1
This file contains several IDs of products the user owns in a DRM system. This list is then used to access stats and information about the user's interaction with each product:
http://example.org/user/appid/stats/?xml=1
This also contains links to various images specific to that application. Those may change at any time and need to be downloaded for display in the app.
This is where the horror starts, at least for me :D.
1.) How do I store that information on the PC of the user?
I thought about using a directory for the userid, with subfolders per appid, to cache the images and XML files and load them on demand. I also thought about using a zip file with the same structure.
Or would one rather use a local db like sqlite for that?
The average number of applications might be around 100-300, with roughly 5-700 stats and images per app.
2.) When should I refresh the content?
The bad thing is that the website this data is downloaded from (or rather the XMLs) does not contain any timestamp of when it was last refreshed/changed. So I would need to hash all the files and compare them at the moment the user accesses the data, which can take an indefinite amount of time because it is web-based. Okay, there are timeouts, but I would need to block access to the content until the data is either downloaded and processed or the timeout occurs. In both cases the application would be inaccessible for a short, or maybe even long, time, and I want to avoid that. I could let the user trigger the refresh manually when he needs it, but I hoped there are better methods for this.
Especially with the above mentioned numbers of apps and stuff.
Thanks for reading all of that, and please feel free to ask if I forgot to explain something.
It's probably worth using a DB since it saves you messing around with file formats for structured data. Remember to delete and rebuild it from time to time (or make sure old stuff is thoroughly removed and compact it from time to time, but it's probably easier to start again, since it's just a cache).
If the web service gives you no clues about when to reload, then you'll just have to decide for yourself, but do be sure to check the HTTP headers for any caching instructions as well as the XML data[*]. Decide on a reasonable staleness for the data (the amount of time a user spends staring at the results is an absolute minimum, since they'll see results that stale no matter what you do). Whenever you download anything, record the date/time you downloaded it. Flush old data from the cache.
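A sketch of that bookkeeping in Python with sqlite3 (the table layout and one-hour staleness threshold are just illustrative):

    import sqlite3
    import time

    MAX_AGE = 3600  # seconds before an entry counts as stale; tune to taste

    db = sqlite3.connect("cache.sqlite")
    db.execute("""CREATE TABLE IF NOT EXISTS cache (
                      url        TEXT PRIMARY KEY,
                      body       BLOB,
                      fetched_at REAL)""")

    def get(url, fetch):
        """Return cached content for url, re-downloading it when stale."""
        row = db.execute("SELECT body, fetched_at FROM cache WHERE url = ?",
                         (url,)).fetchone()
        if row and time.time() - row[1] < MAX_AGE:
            return row[0]                       # fresh enough: serve the cached copy
        body = fetch(url)                       # your HTTP download function
        db.execute("INSERT OR REPLACE INTO cache VALUES (?, ?, ?)",
                   (url, body, time.time()))
        db.commit()
        return body

    def flush_stale():
        db.execute("DELETE FROM cache WHERE fetched_at < ?",
                   (time.time() - MAX_AGE,))
        db.commit()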
To prevent long delays refreshing data, you could:
visually indicate that the data is stale, but display it anyway and replace it once you've refreshed.
allow staler data when the user has a lot of stuff visible than when they're just looking at a small amount. So you'll "do nothing" while waiting for a small amount of stuff, but not while waiting for a large amount of stuff.
run a background task that does nothing other than expiring old stuff from the cache and reloading it (see the sketch after this list). The main app always displays the best available data, however old that is.
Or some combination of tactics.
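The background-task option might look like this in Python, assuming store() is whatever writes an entry into your cache (e.g. the INSERT OR REPLACE from the earlier sketch):

    import threading
    import time

    def start_background_refresher(urls, fetch, store, interval=600):
        """Re-download everything on a fixed cadence so the UI thread
        only ever reads from the local cache, however old it is."""
        def loop():
            while True:
                for url in urls:
                    try:
                        store(url, fetch(url))
                    except OSError:
                        pass  # network hiccup: keep showing the old copy
                time.sleep(interval)
        threading.Thread(target=loop, daemon=True).start()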
[*] Come to think of it, if the web server is providing reasonable caching instructions, then it might be simplest to forget about any sort of storage or caching in your app. Just grab the XML files and display them, but grab them via a caching web proxy that you've integrated into your app. I don't know what proxies make this easy - you can compile Squid yourself (of course), but I don't know whether you can link it into another app without modifying it yourself.
I'm writing a project in C++/Qt that is able to connect to any type of SQL database supported by QtSQL (http://doc.qt.nokia.com/latest/qtsql.html). This includes local servers and external ones.
However, when the database in question is external, the speed of the queries starts to become a problem (slow UI, ...). The reason: Every object that is stored in the database is lazy-loaded and as such will issue a query every time an attribute is needed. On average about 20 of these objects are to be displayed on screen, each of them showing about 5 attributes. This means that for every screen that I show about 100 queries get executed. The queries execute quite fast on the database server itself, but the overhead of the actual query running over the network is considerable (measured in seconds for an entire screen).
I've been thinking about a few ways to solve the issue, the most important approaches seem to be (according to me):
Make fewer queries
Make queries faster
Tackling (1)
I could find some way to delay the actual fetching of the attribute (start a transaction), and then when the programmer writes endTransaction() the database tries to fetch everything in one go (with a SQL UNION or a loop...). This would probably require quite a bit of modification to the way the lazy objects work, but if people comment that it is a decent solution, I think it could be worked out elegantly. If this solution speeds everything up enough, then an elaborate caching scheme might not even be necessary, saving a lot of headaches.
I could try pre-loading attribute data by fetching it all in one query for all the requested objects, effectively making them non-lazy. Of course, in that case I will have to worry about stale data. How would I detect stale data without sending at least one query to the external db? (Note: sending a query to check for stale data on every attribute check would give no performance gain in the best case and a 2x performance decrease in the worst case, when the data is actually found to be stale.)
Tackling (2)
Queries could, for example, be made faster by keeping a local synchronized copy of the database running. However, I can't really count on being able to run exactly the same database type on the client machines as on the server. So the local copy would, for example, be an SQLite database. This would also mean that I couldn't use a db-vendor-specific solution. What are my options here? What has worked well for people in these kinds of situations?
Worries
My primary worries are:
Stale data: there are plenty of queries imaginable that change the db in such a way that it prohibits an action that would seem possible to a user with stale data.
Maintainability: how loosely can I couple this new layer? It would obviously be preferable if it didn't have to know everything about my internal lazy object system and about every object and possible query.
Final question
What would be a good way to minimize the cost of making a query? "Good" meaning some combination of: maintainable, easy to implement, and not too application-specific. If it comes down to picking any two, then so be it. I'd like to hear people talk about their experiences and what they did to solve it.
As you can see, I've thought of some problems and ways of handling them, but I'm at a loss for what would constitute a sensible approach. Since it will probably involve quite a lot of work and intensive changes to many layers in the program (hopefully as few as possible), I thought I'd ask all the experts here before making a final decision. It is also possible I'm just overlooking a very simple solution, in which case a pointer to it would be much appreciated!
Assuming all relevant server-side tuning has been done (for example: MySQL cache, best possible indexes, ...)
(Note: I've checked questions from users with similar problems that didn't entirely answer mine: "Suggestion on a replication scheme for my use-case?" and "Best practice for a local database cache?", for example.)
If any additional information is necessary to provide an answer, please let me know and I will duly update my question. Apologies for any spelling/grammar errors; English is not my native language.
Note about "lazy"
A small example of what my code looks like (simplified of course):
QList<MyObject> myObjects = database->getObjects(20, 40); // fetch and construct objects 20 to 40 from the db
// ...some time later
// screen filling time!
foreach (const MyObject &o, myObjects) {
    o.getInt("status", 0);                  // == db request
    o.getString("comment", "no comment!"); // == db request
    // about 3 more of these
}
At first glance it looks like you have two conflicting goals: query speed, but always using up-to-date data. Thus you should probably fall back on your actual needs to help decide here.
1) Your database is nearly static compared to use of the application. In this case use your option 1b and preload all the data. If there's a slim chance that the data may change underneath, just give the user an option to refresh the cache (fully or for a particular subset of data). This way the slow access is in the hands of the user.
2) The database is changing fairly frequently. In this case "perhaps" an SQL database isn't right for your needs. You may need a higher-performance dynamic database that pushes updates rather than requiring a pull. That way your application would get notified when the underlying data changed and would be able to respond quickly. If that doesn't work, however, you want to construct your queries to minimize the number of DB library and I/O calls. For example, if you execute a sequence of select statements, your results should have all the appropriate data in the order you requested it; you just have to keep track of which select statements they correspond to. Alternatively, if you can use looser query criteria so that a single query returns more than one row, that ought to help performance as well.
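The "fewer queries" idea from option 1b, sketched with Python/sqlite3 for brevity (table and column names are invented): fetch every displayed attribute for all visible objects in one statement and let getInt/getString read from that in-memory snapshot instead of the network.

    import sqlite3

    db = sqlite3.connect("app.db")

    # One round trip for the whole screen, instead of
    # (number of objects) x (number of attributes) separate queries.
    rows = db.execute(
        "SELECT id, status, comment FROM objects WHERE id BETWEEN ? AND ?",
        (20, 40),
    ).fetchall()
    snapshot = {r[0]: {"status": r[1], "comment": r[2]} for r in rows}

    def get_attr(obj_id, name, default):
        # What a preloaded getInt/getString would consult first.
        return snapshot.get(obj_id, {}).get(name, default)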
I've started working on a basic instant search tool.
This is a workflow draft.
User presses a key
Current value gets passed to the function which will make an Ajax call to a web service
Web service will run a select on the database through LINQ-to-SQL and retrieve a list of values that match my value. I will achieve this by using the SQL LIKE clause
Web service will return data to the function.
Function will populate relative controls through jQuery.
I have the following concerns/considerations:
Problem: fast typists. I typed this sentence within a few seconds, which means that on each key press I will send a request to the database, and I may have 10 people doing the same thing. The server may return a list of 5 records, or it may return a list of 1,000 records. Also, I can hold down a key, and this will send a few hundred requests to the database - this can potentially slow the whole system down.
Possible solutions:
A timer so that I send a request to the database at most once every 2-4 seconds
Do not return any data unless the value is at least 3 characters long
Return a limited number of rows (see the sketch after this list)?
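For illustration, the shape of the server-side query (sketched with Python/sqlite3 here; a LINQ-to-SQL version would generate equivalent SQL): a prefix match plus a hard row limit.

    import sqlite3

    db = sqlite3.connect("search.db")

    def suggest(term, limit=20):
        """Return at most `limit` suggestions starting with `term`."""
        if len(term) < 3:
            return []                       # too short to be worth a query
        pattern = term.replace("%", "").replace("_", "") + "%"  # prefix match only
        return [r[0] for r in db.execute(
            "SELECT value FROM entries WHERE value LIKE ? LIMIT ?",
            (pattern, limit),
        )]

A trailing-wildcard LIKE ('abc%') can use an index, whereas '%abc%' forces a full scan.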
Problem: I'm not sure whether LINQ-to-SQL will cope with the potential load.
Solution: I can use stored procedures, but are there any other feasible alternatives?
I'm interested to hear if anybody else is working on a similar project and what things you have considered before implementing it.
Thank you
When to call the web service
You should only call the web service when the user is interested in suggestions. The user will only type fast if he knows what to type. So while he's typing fast, you don't have to provide suggestions to the user.
When a fast typist pauses for a short time, then he's probably interested in search suggestions. That's when you call the web service to retrieve suggestions.
Slow typists will always benefit from search suggestions, because they can save time typing the query. In this case you will always have short pauses between keystrokes. Again, these short pauses are your cue to retrieve suggestions from the web service.
You can use the setTimeout function to call your web service 500 milliseconds after the user has pressed a key. If the user presses a key, you can reset the timeout using clearTimeout. This will result in a call to the web service only when the user is idle for half a second.
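The same debounce pattern, sketched in Python with threading.Timer to show the logic (in the browser it is exactly the setTimeout/clearTimeout dance described above):

    import threading

    DELAY = 0.5      # seconds of idle time before we ask the server
    _pending = None  # the currently scheduled lookup, if any

    def fetch_suggestions(term):
        print("calling the web service for:", term)  # stand-in for the Ajax call

    def on_keypress(current_value):
        global _pending
        if _pending is not None:
            _pending.cancel()  # user typed again: restart the countdown
        _pending = threading.Timer(DELAY, fetch_suggestions, (current_value,))
        _pending.start()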
Performance of LINQ-to-SQL
If your query isn't too complex, LINQ-to-SQL will probably perform just fine.
To improve performance, you can limit the number of suggestions to about twenty. Most users aren't interested in thousands of suggestions anyway.
Consider using a full text catalog instead of the like clause if you are searching through blocks of text to find specific keywords. Besides being much faster, it can be configured to recognize multiple forms of the same word (like mouse and mice or leaf and leaves).
To really make your search shine, you can correct many common misspellings by using the Levenshtein distance to compare the search term to a list of similar terms when no matches are found.
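For reference, a plain dynamic-programming implementation of the Levenshtein distance in Python:

    def levenshtein(a, b):
        """Number of single-character insertions, deletions, and
        substitutions needed to turn string a into string b."""
        prev = list(range(len(b) + 1))      # distances from "" to every prefix of b
        for i, ca in enumerate(a, 1):
            cur = [i]                       # distance from a[:i] to the empty string
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                # delete ca
                               cur[j - 1] + 1,             # insert cb
                               prev[j - 1] + (ca != cb)))  # substitute
            prev = cur
        return prev[len(b)]

    assert levenshtein("mouse", "mice") == 3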