For example, if I select some requests from a collection and order them as I wish, can I then save all the requests along with the ordering, configuration, etc., so I can easily run them again? I can't find any 'Save' button on the 'Run Collection' page.
For those who might be seeking the same goal: Postman has a beta feature called 'Flow'. It's still not full-fledged and not very convenient to use, but it can do the job.
I'm designing a web crawler in C++, but there is a web page that asks me "Are you at least 18 years of age?" when I first fetch it using URLDownloadToFileW, and of course I must click YES.
In JavaScript, I can use document.getElementsByTagName('button')[0].click(); to simulate a button click, so is there a way to solve this problem in C++?
That is not really easy to do, but if you want to do it, you need several requests.
What the click (i.e. document.getElementsByTagName('button')[0].click(); in JavaScript) does is to trigger an associated click event. Your first step should be to find the event handler code and take a look into it. The event may for example send another (AJAX) request to the website. If that is the case, you have to perform the request in C++ in your crawler, too. Many sites also use cookies to store the user's answer to such questions (or at least the fact that the user selected "I'm at least 18 years of age"). So your crawler has to accept such cookies, too, and store them between requests.
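For illustration, here is a minimal sketch of that flow in Python with the requests library; the same logic applies in C++, e.g. with libcurl, which can persist cookies via CURLOPT_COOKIEJAR. The URLs and the form field name are hypothetical; inspect the real site's click handler and your browser's network tab to find the actual request:

    # Hypothetical sketch: replaying an age-gate confirmation.
    # The URLs and the form field name are assumptions.
    import requests

    session = requests.Session()  # a Session persists cookies between requests

    # The first request usually sets a session cookie and serves the age gate.
    session.get("https://example.com/article")

    # Replay whatever the button's click handler sends, e.g. a form POST.
    session.post("https://example.com/age-check", data={"over18": "yes"})

    # The consent cookie is now stored, so the real content should load.
    page = session.get("https://example.com/article")
    print(page.text[:200])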
I am aware of the fact that this answer is rather general, but it is difficult to give a more specific answer without knowing the exact website you are crawling.
Alternative approach: instead of writing a crawler that downloads the website content directly, you might use a framework like Selenium. Selenium lets you automate a browser and is intended for testing, but you can also use it to crawl a website. The advantage is that you can perform actions like clicks much more easily in the browser, given that you know the ID or the XPath of the element you want to click. This might be easier than building a "classical" crawler.
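As a hypothetical sketch with Selenium's Python bindings (the XPath is an assumption and must be adapted to the real page):

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://example.com/article")

    # Click the confirmation button, like the JavaScript snippet above does.
    driver.find_element(By.XPATH, "//button[contains(., 'YES')]").click()

    html = driver.page_source  # the gated content; cookies are handled by the browser
    driver.quit()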
However, you should be aware that many websites have some kind of protection in place against flooding them with requests. That is, if you intend to make a lot of requests to the same server in a short amount of time, you might get blocked by the server. So try to limit the requests to the absolute minimum.
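A simple way to stay polite is a fixed delay between requests, for example (the URL list and the two-second delay are placeholders):

    import time
    import requests

    session = requests.Session()
    urls_to_fetch = ["https://example.com/a", "https://example.com/b"]  # placeholder

    for url in urls_to_fetch:
        page = session.get(url)
        print(url, len(page.text))  # stand-in for your real parsing logic
        time.sleep(2)               # polite delay between requests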
I would like to download a course and work offline on that course. How can I track my results?
I would like to record all my progress (slides that I viewed, quiz results, time spent on each piece of content, ...), for example by saving it to a file or a database, and then generate statements to send to an LRS when I'm online.
Could someone explain how I can do that?
With TinCan, statements (commonly including information about the student (actor), what they did, objectives, status, etc.) are posted to an endpoint. Depending on how the content is written, it may or may not fail over to some alternative. If it's a native application, I suspect you'll have limited ability to intercept these statements. If it's an HTML course, you may be able to locate where the content attempts to post these statements and redirect them to local storage or some other SQL/NoSQL option. Ultimately, it will depend on what content you're attempting to run and what type of control you'll have over it. Based on what I know, the content itself would have to detect that it's offline and store the statements until it is back online. Similar to this post: How tin-can-api works offline?
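For the HTML case, a minimal sketch of that store-and-forward idea in Python (the LRS endpoint, credentials, and queue file path are assumptions; the statement layout follows the xAPI spec):

    import json
    import requests

    LRS_ENDPOINT = "https://my-lrs.example.com/xapi/statements"  # assumed LRS URL
    QUEUE_FILE = "pending_statements.json"                       # assumed local store

    statement = {
        "actor": {"mbox": "mailto:student@example.com", "name": "Student"},
        "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
                 "display": {"en-US": "completed"}},
        "object": {"id": "http://example.com/course/quiz-1"},
    }

    def send_or_queue(stmt):
        try:
            requests.post(
                LRS_ENDPOINT,
                json=stmt,
                headers={"X-Experience-API-Version": "1.0.3"},
                auth=("lrs_user", "lrs_password"),  # assumed basic-auth credentials
                timeout=5,
            ).raise_for_status()
        except requests.RequestException:
            # Offline or LRS unreachable: persist locally, retry when back online.
            with open(QUEUE_FILE, "a") as f:
                f.write(json.dumps(stmt) + "\n")

    send_or_queue(statement)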
SCORM ultimately doesn't work like TinCan. The LMS exposes a JavaScript API, and the HTML-based content locates it in the DOM using JavaScript. The content then makes get and set calls against it. It is the LMS that is responsible for committing this information to a server, or persisting the data in some other fashion. This doesn't stop content developers from creating new and alternative ways to persist data if the LMS is not present. This type of content is probably easier to intercept, since you can be the LMS in this situation and expose that API for the content to use. In an offline situation you'd just have to manage the student attempts and then, once online, sync them with your server.
I have set up a very basic first application where I can add and remove names from a list; the names are then added to/removed from a database through a RESTful API, using Ember-Data with the default REST adapter.
I'd like to implement some form of polling/long-polling so my interface remains up-to-date.
So, for example, let's say I open my 'list' in two tabs and delete a few names in one tab. I'd like the changes to then (eventually) show up in the other tab.
How can this be done easily with Ember?
What you want to do is really a job for WebSockets, which would allow the server to push changes to your models to the Ember app whenever they happen. This type of approach easily takes care of keeping things in sync between tabs. I would recommend checking out Socket.io, which has a great client-side JS library and many server-side libraries. By default it will try to use WebSockets, which are better than long-polling, but it will degrade to long-polling if it needs to. This might force you to change a fair amount of your application setup, but I would consider this the "right" way to go.
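For the server side, here is a minimal sketch using the python-socketio library (one of the many server-side libraries mentioned above); the event name and the delete hook are assumptions for illustration. Each open tab would connect as a Socket.io client, listen for the event, and unload the matching record from the Ember Data store:

    import eventlet
    import socketio

    sio = socketio.Server(cors_allowed_origins="*")
    app = socketio.WSGIApp(sio)

    def on_name_deleted(name_id):
        # Call this from your REST API's DELETE handler after the DB commit,
        # so every connected tab learns about the change immediately.
        sio.emit("name:deleted", {"id": name_id})

    if __name__ == "__main__":
        eventlet.wsgi.server(eventlet.listen(("", 5000)), app)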
Let's say I have django.contrib.sessions.middleware.SessionMiddleware installed in Django and I'm using the SessionAuthentication class for API authentication in Tastypie. Within a session I'm making some changes to models through my API, and after that I want to roll them back. Can I do that through Tastypie? If yes, what method should I call? I can't find such a method in the Tastypie docs. Do you have any working example of this?
Django supports database transactions, which will commit multiple state changes atomically. (Documentation...)
It is unclear from your question how you want to trigger the rollback. One option is to use per-request transactions, which roll back if an unhandled exception is raised by the view function. If you want more fine-grained control, familiarize yourself with the options in the linked-to documentation. For example, you may explicitly create a transaction and then roll it back inside your view.
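For example, a minimal sketch with Django's transaction.atomic (available since Django 1.6; the model and the simulated failure are made up for illustration):

    from django.db import transaction
    from django.http import HttpResponse

    from myapp.models import Name  # hypothetical model with a `value` field

    def update_names(request):
        try:
            with transaction.atomic():
                Name.objects.create(value="Alice")
                Name.objects.create(value="Bob")
                raise ValueError("simulated failure")  # any exception here...
        except ValueError:
            pass  # ...rolls back both creates atomically
        return HttpResponse("done")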
With respect to Tastypie, you may need to place your transaction management inside the appropriate method on the Resource interface.
I hope this gives you some pointers. Please update your question with more details if necessary.
So you want to commit changes to your models to the database, and then roll them back on a future request? That's not something that Tastypie supports (or, for that matter, Django or SQL). It's not really possible to do a clean rollback like that, considering other requests could have interacted with, changed, or built relationships with those objects in the meantime.
The best solution would probably be to integrate something like Reversion that would allow you to restore objects to a previous state.
If you want to be able to roll back all of the operations during a session, you'd probably need to keep track of the session start time and the list of objects that had been changed. If you wanted to do a rollback, you'd just have to iterate over that list and invoke reversion's revert method, like
reversion.get_for_date(your_model, session_start_datetime).revert()
However, again, that will also roll back any changes any other users have made in the same time frame, but that will be the case for any solution to this requirement.
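As a rough sketch of that idea (using the older django-reversion API from the snippet above; the model is hypothetical, and newer reversion releases moved these helpers onto Version.objects):

    import reversion
    from myapp.models import Name  # hypothetical model

    reversion.register(Name)  # normally done once, next to the model definition

    def rollback_session(obj, session_start_datetime):
        # Restore the object to the state it had when the session started.
        reversion.get_for_date(obj, session_start_datetime).revert()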
I'm trying to configure a wiki to allow a two-stage approval process. The basic workflow requires something like:
A group of users submits a short form
After admin approval, a larger form becomes available to the group
The group submits the larger form
After admin approval, the page (filled by the form) becomes public
I've been looking at TikiWiki and MediaWiki for a while trying to configure each to get even close to this model, but I'm having some problems.
With TikiWiki, it seems like the approval stage should be a transition, either changing the group permissions to allow access to a new tracker or changing the form category to close one form and open the other, but I haven't been able to nail down the permissions for that configuration.
With MediaWiki, the main problem seems to be that the back-end was not made to have complex permissions. I've been using SMWHalo along with SemanticForms to construct this, but I can't find anything like Tikiwiki's transitions for changing the permissions for either the group or the form automatically.
I'm a bit new to wiki development and I know that there are a lot of options for wiki frameworks, so I'm asking for suggestions for a good workflow for this product. My goal is to touch the framework code only for the final adjustments, not to start off by modifying an already well-developed code base.
You should really ask yourself why you want this and why you want this in a wiki.
A wiki's main advantage is being quick and easy, and thus encouraging to the user. Adding approval stages will discourage users from participating. The hardest part in any wiki is not preventing vandalism or false information; the hardest part is encouraging participation.
If you really need a complex approval workflow, you might want to look at CMS systems. AFAIK, TYPO3 has something like this built in.
If you really want to go with a wiki and an approval process, for DokuWiki you could have a look at the publish plugin: http://www.dokuwiki.org/plugin:publish
The FlaggedRevs extension to MediaWiki adds a basic permissions workflow:
http://www.mediawiki.org/wiki/Extension:FlaggedRevs
However, it's geared more toward controlling changes to existing pages than toward adding entirely new ones. You could set it up to create new pages as drafts and default the public view to show only approved versions, but it sounds like you want to hide unapproved versions entirely, which would require some extra hacking (and, as Andreas says, would kind of defeat the point of a wiki in the first place).