How to work with Google Finance?

I want to develop a small application to get stock price from Google Finance automatically and store it in my local machine for future analysis.
Can anyone give me some clue how to get started?
I know some C#. Will it be suitable for this purpose?
Thank you in advance.

The Google Finance Gadget API has been officially deprecated since October 2012, but as of April 2014, it's still active:
http://www.google.com/finance/info?q=NASDAQ:ADBE
Note that if your application is for public consumption, using the Google Finance API is against Google's terms of service.
This gives a JSON response which can be parsed using a simple JSON parser in C# after chopping off the first two chars ('//').
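For illustration, here is a minimal sketch of that parsing step (shown in Python for brevity; the same approach works from C# with any JSON library). The field names "t" and "l" are assumptions based on what this endpoint historically returned.
import json
import urllib.request

# Fetch the (now-deprecated) quote endpoint; the payload starts with "//".
url = "http://www.google.com/finance/info?q=NASDAQ:ADBE"
raw = urllib.request.urlopen(url).read().decode("utf-8")

# Drop everything up to the first '[' so json.loads accepts the payload.
quotes = json.loads(raw[raw.index("["):])
for quote in quotes:
    # "t" (ticker) and "l" (last price) are the names this endpoint
    # historically used; adjust if the payload differs.
    print(quote.get("t"), quote.get("l"))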
For downloading historical data, you can again use the Google APIs:
http://www.google.com/finance/historical?q=NASDAQ:ADBE&startdate=Jan+01%2C+2009&enddate=Aug+2%2C+2012&output=csv
returns a CSV of end-of-day stock prices from startdate to enddate. Use a simple CSV parser to extract the data and store it in your database. Note, however, that the output=csv option does not work for a few stock exchanges.
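As a rough sketch of that step (again in Python; the Date and Close column names are what this CSV historically contained, so treat them as assumptions), the file can be parsed like this:
import csv
import io
import urllib.request

url = ("http://www.google.com/finance/historical?q=NASDAQ:ADBE"
       "&startdate=Jan+01%2C+2009&enddate=Aug+2%2C+2012&output=csv")
raw = urllib.request.urlopen(url).read().decode("utf-8")

# The first row is a header (historically: Date,Open,High,Low,Close,Volume).
for row in csv.DictReader(io.StringIO(raw)):
    # Insert into your database here; printing is just a placeholder.
    print(row["Date"], row["Close"])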

If you want to download historical data, you can use the Google Finance API (which still works as of May 2016). You do not have to provide an end date; it will automatically fetch data from the start date (or later, if the stock did not trade then) up to the last full trading day:
http://www.google.com/finance/historical?q=NASDAQ:AAPL&startdate=Jan+01%2C+2000&output=csv
Remember that the Google Finance API is for personal use ONLY. I suggest you review their terms of service.
If you simply want to download the latest data (which could be useful for updating your local db), you can use the googlefinance library developed by Hongtao Cai:
https://pypi.python.org/pypi/googlefinance
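A minimal usage sketch (getQuotes is the call shown on the PyPI page; the structure of the returned JSON is an assumption and may differ):
# pip install googlefinance
import json
from googlefinance import getQuotes

quotes = getQuotes('AAPL')  # list of dicts, one per requested symbol
print(json.dumps(quotes, indent=2))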

I have just implemented this with PHP. It might be useful.
<?php
// Example usage: fetch AAPL end-of-day prices for a date range as CSV text.
echo readGoogle('AAPL', 'Aug+21%2C+2017', 'Aug+22%2C+2017');

// Reads the historical-prices CSV for $ticker between $startDate and $endDate.
// Note: allow_url_fopen must be enabled for fopen() to open a URL.
function readGoogle($ticker, $startDate, $endDate) {
    $fp = fopen("http://finance.google.com/finance/historical?q=".$ticker."&startdate=".$startDate."&enddate=".$endDate."&output=csv", 'r');
    if (FALSE === $fp) return 'Cannot open data.';
    $buffer = '';
    while (($row = fgetcsv($fp, 5000)) !== FALSE) {
        // Re-join the parsed fields, one row per line.
        $buffer .= implode(',', $row) . "\n";
    }
    fclose($fp);
    return $buffer;
}
?>

Related

Downloading data from imagenet

I am told that the following list of "puppy" image URLs is from ImageNet.
https://github.com/asharov/cute-animal-detector/blob/master/data/puppy-urls.txt
How do I download URLs for another category, e.g. "cats"?
Where can I get the entire list of ImageNet categories, along with their descriptions, as a CSV?
Unfortunately, ImageNet is no longer as easily accessible as it once was. You now have to create a free account and then request access to the database using an email address that demonstrates your status as a non-commercial researcher. Below is an excerpt of the announcement posted on March 11, 2021 (it does not specifically address the requirement to obtain an account and request access permission, but it explains some of the reasons for the general changes to the website).
We are proud to see ImageNet's wide adoption going beyond what was originally envisioned. However, the decade-old website was burdened by growing download requests. To serve the community better, we have redesigned the website and upgraded its hardware. The new website is simpler; we removed tangential or outdated functions to focus on the core use case—enabling users to download the data, including the full ImageNet dataset and the ImageNet Large Scale Visual Recognition Challenge (ILSVRC).
ORIGINAL ANSWER (LINKS NO LONGER VALID):
You can interactively explore the available synsets (categories) at http://www.image-net.org/explore; each synset page has a "Downloads" tab where you can download the category's image URLs.
Alternatively, you can use the ImageNet API. You can download image URLs for a particular synset using the synset id or wnid. The image URL download link below uses the wnid n02121808 for domestic cat, house cat, Felis domesticus, Felis catus.
http://www.image-net.org/api/text/imagenet.synset.geturls?wnid=n02121808
You can find the wnid for a particular synset using the explore link above (the id for a selected synset will be displayed in the browser address bar).
You can retrieve a list of all available synsets (by id) from:
http://www.image-net.org/api/text/imagenet.synset.obtain_synset_list.
You can retrieve the words associated with any synset id as follows (another cat example).
http://www.image-net.org/api/text/wordnet.synset.getwords?wnid=n02121808
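The endpoints above no longer resolve, but for reference, the workflow they supported (fetch a plain-text list of image URLs, then download each image) looks roughly like this in Python. The same sketch works for the puppy-urls.txt file linked in the question; the raw GitHub URL below is derived from that link.
import os
import urllib.request

# Plain-text file with one image URL per line (the same format the old
# imagenet.synset.geturls endpoint returned).
list_url = ("https://raw.githubusercontent.com/asharov/cute-animal-detector/"
            "master/data/puppy-urls.txt")
urls = urllib.request.urlopen(list_url).read().decode("utf-8").splitlines()

os.makedirs("images", exist_ok=True)
for i, url in enumerate(urls[:10]):   # just the first few, as a demo
    try:
        urllib.request.urlretrieve(url, os.path.join("images", f"{i}.jpg"))
    except Exception:
        pass   # many of these old URLs no longer resolve; skip failures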
Alternatively, you can download a smaller version of ImageNet, mini-ImageNet:
https://github.com/yaoyao-liu/mini-imagenet-tools
2-1. https://github.com/dragen1860/LearningToCompare-Pytorch/issues/4
2-2. https://github.com/twitter/meta-learning-lstm/tree/master/data/miniImagenet
You can easily use the Python package MLclf to download and transform the mini-ImageNet data for the traditional image classification task or the meta-learning task. Just run:
pip install MLclf
You can also see the project page for more details:
https://pypi.org/project/MLclf/

How to fetch Edge browsing history programmatically? Is there is any way using COM/Windows API to fetch it? [duplicate]

I used FindFirstUrlCacheEntry/FindNextUrlCacheEntry Win API to get Internet Explorer's history programmatically in C++.
Can you tell me how to get Microsoft Edge History using C++ (Windows API)?
This is not possible at this point in time. You might want to use the suggestion routes at one of the links below.
Developer Feedback Home - https://wpdev.uservoice.com/forums/257854-microsoft-edge-developer
Developer Feedback Twitter - https://www.twitter.com/msedgedev
Feature Suggestions - https://windowsphone.uservoice.com/forums/101801-feature-suggestions/category/18985-web-browsing
The history is stored in \AppData\Local\Microsoft\Windows\WebCache\WebCacheV01.dat, which uses Microsoft's Extensible Storage Engine to store data. There is a C++ wrapper for accessing Extensible Storage Engine files that I've used to read data from this file.
The "Containers" table inside WebCacheV01.dat tells you which "Container_X" tables hold "Content" or "History" data, as well as the secure directories and their order. You can use the ESEDatabaseView utility to view the data inside the WebCacheV01.dat file.

twilio Record entire incoming call with gather

I am conducting a survey over voice with Twilio. Part of the survey lets the user give voice feedback, so I used Record at first. That was exactly what I needed, since I could get the mp3 plus the transcription. The problem is that I can't set the language to French, so my transcription doesn't come back in French; instead the feedback is recorded and transcribed as English even though I speak French.
So I decided to switch to Gather with the speech option, which works quite well and the text comes back in French, but with that option I can't get the mp3.
Then I decided I would record the entire call and use Gather, which would have solved my problem, except you can only set the record parameter to true when you initiate the call (an outgoing call). My survey will be taken by incoming call 95% of the time.
Is there any way to achieve this without having to use Record and another API to do the transcription?
Twilio developer evangelist here.
I'm afraid that in your situation, where you need both the mp3s and a French transcription, I can only recommend using <Record> and a third-party transcription service. <Gather> with input type speech just doesn't support this kind of recording yet.
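As a rough sketch of that approach (using the Twilio Python helper library; the /handle-recording webhook path and the prompt text are made up for this example), the TwiML returned by your survey step could look like this:
from twilio.twiml.voice_response import VoiceResponse

# Prompt in French, then record the caller's feedback. The recording URL is
# posted to the action webhook, from where it can be sent to a third-party
# transcription service that supports French.
response = VoiceResponse()
response.say("Merci de laisser votre commentaire après le bip.", language="fr-FR")
response.record(max_length=120, play_beep=True, action="/handle-recording")
print(str(response))  # return this XML from your webhook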

Generating WagtailCMS Document Previews

Is there a way to extend Wagtail's documents to show file previews? There's a service (which I have not tested) that looks cool, but the jump from the free plan to the Pro plan is a huge leap in cost. I am hoping someone has figured this out already and can point me to the solution. Thank you.
FilePreviews.io's 100 documents a month on the free plan seems pretty generous to me. You could try to build something similar, e.g. using ImageMagick to create PDF thumbnails:
http://duncanlock.net/blog/2013/11/18/how-to-create-thumbnails-for-pdfs-with-imagemagick-on-linux/
and use a service like Aspose to convert common file formats to PDFs:
https://www.aspose.com
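For the ImageMagick route above, a minimal sketch (shelling out to ImageMagick's convert, which must be installed along with Ghostscript for PDF support; the file names are placeholders) could look like this:
import subprocess

def pdf_thumbnail(pdf_path, thumb_path, height=300):
    # "[0]" selects the first page; -thumbnail resizes and strips metadata.
    subprocess.run(
        ["convert", "-thumbnail", f"x{height}", f"{pdf_path}[0]", thumb_path],
        check=True,
    )

pdf_thumbnail("document.pdf", "document-thumb.png")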
Or you could do it manually, by adding a thumbnail field to your document model, and telling users to provide their own thumbnails.
But if you or your client are uploading more than 100 documents a month, and you want reliable thumbnail generation for multiple document types, $49 a month may be better value than working it all out yourself.

How can I remotely (via web services) determine date format of SharePoint 2003 site, for use in Versions.asmx returned XML?

The GetVersions() call to the Versions.asmx web service in SharePoint 2003 returns a localised date format, with no way of determining what that format is. It follows the site's regional date-format setting, but I can't find a way to get even that out of SharePoint 2003. Locally, it looks like SPRegionalSettings can be used (http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.spregionalsettings.aspx), but is there a web service equivalent?
Sadly, it isn't available. However, you can pass a query option asking for the values to be returned in UTC:
http://www.sharepointblogs.com/pm4everyone/archive/2006/10/03/sharepoint-2003-querying-with-gmt-datetime.aspx
Unfortunately, the parameter that asks for the values in UTC is not supported for this call. I've just had to look for a month greater than 12 and use that as the hint to switch date formats. It'll mess up some dates, but I can't see a way around that. The code is at http://sourceforge.net/projects/splistcp/ if anyone is interested.
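As a rough illustration of that heuristic (in Python; the slash-delimited date strings are made-up examples, not what GetVersions() actually returns), you can default to month-first and switch to day-first as soon as a value greater than 12 shows up in the month position:
from datetime import datetime

def detect_day_first(date_strings):
    # If the first component ever exceeds 12 it cannot be a month,
    # so the locale must be day-first.
    for s in date_strings:
        if int(s.split("/")[0]) > 12:
            return True
    return False   # ambiguous: fall back to month-first

dates = ["27/02/2009", "03/08/2009"]          # made-up example values
fmt = "%d/%m/%Y" if detect_day_first(dates) else "%m/%d/%Y"
parsed = [datetime.strptime(s, fmt) for s in dates]
print(parsed)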