Fetch data from a website in real time - C++

OK, basically I'm fetching data from a website using curl and parsing the contents using CkHtmlToText.
My issue is how to fetch the new data the website keeps writing.
For example, the website contents are as follows:
-test1
-test2
After 1 second, the contents are:
-test1
-test2
-test3
How can I fetch only the next line the website wrote that I haven't received yet, which is "test3"?
Any ideas? Thank you.
The language I'm using is Visual C++.

HTTP requests are stateless. You make a request, you get a result, then you make another completely independent request, you get another result, and so on. If the resource you are trying to access is changing over time, you need to make multiple requests, where each time you will get the full updated resource.
I imagine you may be describing a web page that automatically updates while you are looking at it (like a Twitter feed, for example). In that case, the response contains a script that allows the browser to fetch new data and inject it into the DOM. Unless you also plan to build the DOM and use a JavaScript engine (basically implementing a web browser) to run the script, this is probably not useful to you. Instead, you are better off finding an API that gives you data in a format that is easy to parse and get updates for. If this API is a REST API (built on HTTP), then you will still need to make independent requests to get updates.
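
If simple polling is acceptable, the usual pattern is to re-fetch the page on an interval (running it through CkHtmlToText as you already do), split the body into lines, and remember which lines you have already seen so that only the new ones are handled. Below is a minimal sketch using libcurl directly; the URL is a placeholder and it assumes the page is the plain line list from your example, with each line occurring only once.

// Poll a page and print only lines not seen before (sketch, placeholder URL).
#include <curl/curl.h>
#include <chrono>
#include <iostream>
#include <set>
#include <sstream>
#include <string>
#include <thread>

// libcurl write callback: append received bytes to a std::string.
static size_t write_cb(char* ptr, size_t size, size_t nmemb, void* userdata) {
    static_cast<std::string*>(userdata)->append(ptr, size * nmemb);
    return size * nmemb;
}

// Fetch the page body once.
static std::string fetch(const std::string& url) {
    std::string body;
    CURL* curl = curl_easy_init();
    if (curl) {
        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
    }
    return body;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    std::set<std::string> seen;                        // lines already printed
    const std::string url = "http://example.com/feed"; // placeholder

    for (;;) {                                         // poll forever
        std::istringstream lines(fetch(url));
        std::string line;
        while (std::getline(lines, line)) {
            if (!line.empty() && seen.insert(line).second) // true only for new lines
                std::cout << "new: " << line << '\n';
        }
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}

If lines can repeat on the page, compare against the previously fetched body (or keep a line count) instead of a set of seen lines.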

Related

Google Cloud - file pipe

I have a system that is able to produce a custom video (based on input text) faster than real-time.
I would like to create an HTTP endpoint: /create_video?description=dog riding a horse that, as part of the response, returns the URL to the produced video.
The video can be quite long, and its generation can take some time. Rather than waiting for it to complete, I would like to return the response as soon as the first frames are available, so that the user can watch instantly using the provided URL (we generate faster than real-time, so there will be no buffering). The URL must point to the generated video indefinitely (even months after generation).
I am using Google Cloud. What would be the recommended way to do that?
I could create a custom endpoint that serves the videos and has the described properties, but maybe something as simple as Cloud Storage could work (though I was not able to get it to serve a file while the write was not yet finalized)?
As per @Piotr Dabkowski, after doing some extra research it seems this is not that easy. My best idea is to implement a custom endpoint that streams the result while the file is being generated, using a temporary entry in the database. Once the file is fully generated (the DB entry is cleared and points to the Cloud Storage location), the endpoint redirects to Cloud Storage.
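
Framework aside, the core of such an endpoint is a loop that tails the growing file and forwards whatever bytes have been appended, stopping once generation is marked as finished (the DB check described above). Here is a rough sketch of that loop; is_generation_done and send_chunk are placeholders for the DB lookup and for whatever response writer your server framework provides.

// Tail a file that is still being written and forward new bytes as they appear.
#include <chrono>
#include <fstream>
#include <functional>
#include <string>
#include <thread>
#include <vector>

void stream_growing_file(const std::string& path,
                         const std::function<bool()>& is_generation_done,
                         const std::function<void(const char*, size_t)>& send_chunk) {
    std::ifstream in(path, std::ios::binary);
    std::vector<char> buf(64 * 1024);

    for (;;) {
        // Check first: if the writer has already finished, one more read drains the rest.
        bool done = is_generation_done();
        in.clear();                          // clear EOF left by a previous short read
        in.read(buf.data(), buf.size());
        std::streamsize n = in.gcount();

        if (n > 0) {
            send_chunk(buf.data(), static_cast<size_t>(n)); // forward the new bytes
        } else if (done) {
            break;                           // writer finished and file fully sent
        } else {
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
        }
    }
}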

Handle a series of web requests in a specific way

I am sorry in advance; I am just learning Web development and my knowledge of it is quite limited.
I will describe my problem first.
I have a relatively large amount of data (1.8-2 GB), which should be hidden from public web access. However, a user should be able to request, via a URL call, a specific small subset of the data and see it on his or her webpage.
Ideally, I would like to write a program on the web server, let's call it ./oracle, which keeps the large amount of data in primary memory.
Each web user should be able to make specific string calls to oracle and see oracle's response on a web page as HTML elements.
There should be only one instance of oracle, and web users should make asynchronous calls to it.
Can I accomplish the above task with FastCGI or any other protocols?
If yes, could you please explain which tools/protocols I should use/learn?
I would recommend setting up an Apache server because it's very common and you'll be able to find a lot of answers to any specific questions here on StackOverflow already.
You could also look into things like http://Swagger.io, which can help you generate your API.
Unfortunately, everything past this really depends on what you use to set up your server. Big picture though:
1) You'll need to open up a port to listen to incoming requests.
2) You'll need to have requests include the parameters they want to send to oracle.
  - You could accomplish this via the URI, like localhost/oracle-request?PARAMETER="foo"
  - You could alternatively use JSON in the body of the HTTP request.
  - Again, this largely depends on how you set up step 1.
3) You'll need to route those requests to the oracle.
  - This implementation depends entirely on step 1.
4) You'll need to capture the output from the oracle and return it to the user.
Once you decide on how you want to set up your server, feel free to edit your question and we may be able to provide more specific help.
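
That said, since the question mentions FastCGI, here is a minimal sketch of what steps 1-4 could look like with the FastCGI development kit (fcgi_stdio.h, linked with -lfcgi) behind Apache's FastCGI module. load_data() and the lookup logic are placeholders for your own code; the process stays resident, so the data is loaded once and serves every request.

// Minimal FastCGI "oracle" sketch: load data once, answer each request.
#include <fcgi_stdio.h>   // FastCGI development kit (link with -lfcgi)
#include <cstdlib>
#include <string>

int main() {
    // Hypothetical: load the 1.8-2 GB dataset into memory once, before
    // accepting requests. It stays resident for the life of the process.
    // auto data = load_data();

    while (FCGI_Accept() >= 0) {                      // one iteration per request
        const char* query = getenv("QUERY_STRING");   // e.g. PARAMETER=foo
        std::string param = query ? query : "";

        printf("Content-Type: text/html\r\n\r\n");
        printf("<html><body><p>oracle received: %s</p></body></html>\n",
               param.c_str());
        // Real code would parse `param`, look up the matching subset in the
        // in-memory data, and render it as HTML here.
    }
    return 0;
}

Because the web server multiplexes many clients onto this one resident process (or a small pool of them), users effectively make asynchronous calls to a single oracle instance.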

Tracking offline data with xAPI

I would like to download a course and work offline on that course. How can I track my results?
I would like to record all my progress (slides that I viewed, quiz results, time spent on each piece of content, ...), for example saving it to a file or a database, and then generate statements to send to an LRS when I'm online.
Could someone explain how I can do that?
With TinCan, statements (commonly including information about the student (actor), what they did, objectives, status, etc.) are posted to an endpoint. Depending on how the content is written, it may or may not fail over to some alternative. If it's a native application, I suspect you'll have limited ability to intercept these statements. If it's an HTML course, you may be able to locate where the content attempts to post these statements and redirect those to local storage or some other SQL/NoSQL option. Ultimately, it will depend on what content you're attempting to run and what kind of control you have over it. Based on what I know, the content itself would have to detect that it's offline and store the statements until it is back online. Similar to this post - How tin-can-api works offline?
SCORM ultimately doesn't work like TinCan. The LMS exposes a JavaScript API, and the HTML-based content locates it in the DOM using JavaScript. The content then makes get and set calls to it. The LMS is more responsible for committing this information to a server, or persisting the data in another fashion. This doesn't stop content developers from creating new and alternative ways to persist data if the LMS is not present. This type of content is probably easier to intercept, since you can be the LMS in this situation and expose that API for the content to use. In an offline situation you'd just have to manage the student attempts and then, once online, sync them with your server.
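
As a sketch of the "store offline, sync later" idea described above: queue each statement locally (a file or a small database), then POST the queued statements to the LRS statements endpoint once a connection is available. This example uses libcurl; the LRS URL, the credentials, and the single-file queue are placeholders, and the statements are assumed to already be serialized as JSON.

// Queue xAPI statements locally while offline, then push them to the LRS.
#include <curl/curl.h>
#include <fstream>
#include <string>

// Append one xAPI statement (already serialized as JSON) to local storage.
void queue_statement(const std::string& json) {
    std::ofstream out("statements.jsonl", std::ios::app);
    out << json << '\n';
}

// Try to push every queued statement to the LRS; returns true if all succeeded.
bool sync_statements(const std::string& lrs_statements_url,
                     const std::string& basic_auth) {   // "user:password"
    std::ifstream in("statements.jsonl");
    std::string json;
    bool ok = true;

    while (std::getline(in, json)) {
        CURL* curl = curl_easy_init();
        if (!curl) return false;

        curl_slist* headers = nullptr;
        headers = curl_slist_append(headers, "Content-Type: application/json");
        headers = curl_slist_append(headers, "X-Experience-API-Version: 1.0.3");

        curl_easy_setopt(curl, CURLOPT_URL, lrs_statements_url.c_str());
        curl_easy_setopt(curl, CURLOPT_USERPWD, basic_auth.c_str());  // HTTP Basic auth
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, json.c_str());

        if (curl_easy_perform(curl) != CURLE_OK) ok = false;

        curl_slist_free_all(headers);
        curl_easy_cleanup(curl);
    }
    return ok;   // on success the caller can truncate statements.jsonl
}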

How to create an API and then dynamically retrieve data from and add new data to it?

To start off, I am extremely sorry if my question is not clear but I have very little knowledge about web services in general and the vast nature of varying available information has driven me crazy over the past few weeks. So please do bear with me.
Summary: I want to create a live score update app for Android. (I haven't added android as a tag because I do know how to retrieve data from, say, Twitter's JSON API.) However, like the Twitter JSON API, I want to be able to add (POST maybe?) data to the Apache 7.0 service that I have running. I then want the app to be able to retrieve this data that I have posted.
I had asked a more generic question earlier and was told that I should look up some APIs. I did that, but I have still been unable to make a breakthrough.
So my questions are:
Is setting up an API on my local web service the correct way to do this?
If so, how can I set up an API that will return JSON objects to the Android app? Also, I would need to be able to constantly update this API with new data.
Additionally, would I also need to set up a database for all this?
Any links to well explained matter would be appreciated too.
Note: I would like to carry this out using a RESTful web service through Jersey and use JSON objects during retrieval.
Again, I am sorry about my terrible knowledge of web services in general, despite trying my best to research a lot. The best I could do was get my RESTful web service to respond to a GET with some pre-defined text that I had set in Eclipse.
Thanks.
If I understand you correctly, what you try to do is something like this:
There will be a match or multiple matches of some sort. Whenever a team/player scores, someone (i.e. you) will use the app to update the score. People who previously subscribed to the match will be notified and see the updated score.
Even though I'm not familiar with backends based on Java, the implementation should be fairly similar to other programming languages.
First of all, a few words about REST in general. REST is generally needed when you need to share information between multiple devices and/or users, which seems to be the case here. To implement REST you are going to need an API of some sort. On the web, APIs are implemented by web servers answering certain predefined HTTP requests.
Thus setting up an API on a web server is the correct way.
Next, a few words on databases. A database is generally needed if you want to store information persistently. This might, or might not, be what you are planning to do. If there are just going to be a few matches at the same time and you don't care about persistence of the data, you can use Java to store a collection of match objects in memory. I'm just saying it is possible, not that it is a good idea. Once your server crashes, or you run out of memory for whatever reason, the data is going to be lost. (Of course, within the actual implementation you want to cache data for current matches in some way, and keeping objects in memory is one way to do so.)
I'd recommend to use a database.
Within the database, you can then store and access information about the matches like the score, which users subscribed, who played, etc.
JSON is just a way to represent the data/objects that will be shared between the server and the client. You can use JSON to encode request and response data/bodies.
The user has to be informed about the updated score. There are two basic ways to do so: push or pull. With pull, the client will check for updated scores after fixed intervals or actions. With push, the server will notify the client about changed scores, which will cause it to update the information. Since you are planning on doing a live application and using Java anyway, push seems to be the better way to go.
Last but not least let's have a look at a possible implementation using
Webserver (API endpoints + database)
Administrator (keeps score updated)
User (receives updates)
We assume that the server will respond to HTTP requests (e.g. POST # /api/my-endpoint) with JSON objects.
Possible flow
1)
First the administrator creates a match
REQUEST
POST # /api/matches
body: team1=someteam&team2=someotherteam
The server now will create a match object and store it in the database. The response will contain information about the object and whether the action was successful.
2)
The user asks for a list of matches
REQUEST
GET # /api/matches/current
The response will be a JSON object containing a list of current matches.
RESPONSE
{
matches: [
{id: 1, teams:...}, ...
]
}
3)
(If push)
A user subscribes to a match
REQUEST
GET # /api/SOME_MATCH_ID/observe
The user will now be added as an observer for the match. Again, the response contains information about whether the action was successful or not.
4)
The administrator updates a score
REQUEST
PUT # /api/SOME_MATCH_ID
body: team1scored...
The score now gets updated on the server (in memory/database) and the user will be notified about the updated score.
5)
The user gets the updated score
REQUEST
GET # /api/SOME_MATCH_ID
RESPONSE
... (Updated score in some way)
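
For completeness, here is a sketch of how a client could exercise steps 1) and 2) of this flow. The question targets Java/Jersey on the server, but the client side can be anything that speaks HTTP; this sketch uses libcurl (as elsewhere on this page) with a placeholder host and the hypothetical endpoints from the flow above.

// Minimal client for the example flow: create a match, then list current matches.
#include <curl/curl.h>
#include <iostream>
#include <string>

static size_t collect(char* p, size_t s, size_t n, void* out) {
    static_cast<std::string*>(out)->append(p, s * n);
    return s * n;
}

// Administrator: create a match (step 1).
void create_match(const std::string& base) {
    CURL* curl = curl_easy_init();
    if (!curl) return;
    const std::string url = base + "/api/matches";
    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS,
                     "team1=someteam&team2=someotherteam");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
}

// User: list current matches (step 2) and print the JSON response.
void list_matches(const std::string& base) {
    CURL* curl = curl_easy_init();
    if (!curl) return;
    const std::string url = base + "/api/matches/current";
    std::string body;
    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    std::cout << body << '\n';   // feed this to a JSON parser in a real client
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    const std::string base = "http://localhost:8080";   // placeholder host
    create_match(base);
    list_matches(base);
    curl_global_cleanup();
}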

.NET: get data from printers that have a web interface

I would like to get data about ink, free pages (paper), etc. from the printers on the company network. Each of these printers (mostly Minolta) has a web interface, so I can get this data by creating a browser process in my program, directing it to the address "http://192.168.X.YY/data.htm", downloading all the page code, and retrieving the data from it. Is this possible without that process? If I know that this data is available at each IP/data.htm, can I use this information to download the data in a different way: socket, FTP, etc.?
In general: if you have some data on a website (no database access, obviously), how do you retrieve this data?
From the looks of it, your printer provides REST-based services. You can use libcurl to make REST-based API calls. (This holds for most websites as well!)
It doesn't sound like it implements a REST interface from the description you provided.
It sounds like your idea is to scrape data from an HTML page. That's fine too, although it's somewhat fragile (e.g. could break with a printer firmware upgrade).
Anyway, you tagged the question with .NET, so if you want to use the .NET approach, you might want to look at creating a WebClient and parsing the resulting data returned from the DownloadString method.
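
If you go the libcurl route mentioned above instead of .NET's WebClient, a sketch of scraping one value from the printer's status page could look like the following. The IP address, page name, and the "Toner" pattern are made up; the real markup depends on the printer's firmware, so inspect the page source first.

// Fetch the printer status page and pull one value out of the HTML with a regex.
#include <curl/curl.h>
#include <iostream>
#include <regex>
#include <string>

static size_t collect(char* p, size_t s, size_t n, void* out) {
    static_cast<std::string*>(out)->append(p, s * n);
    return s * n;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);

    std::string html;
    CURL* curl = curl_easy_init();
    if (curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://192.168.0.10/data.htm"); // placeholder IP/page
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &html);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
    }

    // Example only: look for something like "Toner: 47%" in the downloaded markup.
    std::smatch m;
    if (std::regex_search(html, m, std::regex("Toner[^0-9]*([0-9]+)\\s*%")))
        std::cout << "Toner level: " << m[1] << "%\n";

    curl_global_cleanup();
    return 0;
}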