How to get Intraday backfilled FOREX data programmatically? - web-services

I've been looking for real-time intraday quote data for the major FOREX pairs, backfilled for several days. I want to use it programmatically from an Android or web application, so a CSV- or XML-formatted web service would be ideal.
So far I have found a few websites by googling, such as finam.ru, but that site is in Russian and I was not able to find any clearly documented description of the exact URL format for obtaining the data.
Could someone guide me on how to do this?

At the URL http://www.finam.ru/analysis/profile041CA00007/ you can choose the data to export as CSV.
Choose "Мировые валюты" (world currencies) in the left combo box and pick the currency pair on the right.
Then click the export button and download a CSV file. Here is an example URL with minute data:
http://195.128.78.52/AUDCAD_140501_140526.txt?market=5&em=181410&code=AUDCAD&df=1&mf=4&yf=2014&dt=26&mt=4&yt=2014&p=2&f=AUDCAD_140501_140526&e=.txt&cn=AUDCAD&dtf=1&tmf=1&MSOR=0&mstime=on&mstimever=1&sep=1&sep2=1&datf=2&at=1
I have never seen any documentation for it; I just experimented with the dates until I got the period I needed.
You can add the header Referer: http://www.finam.ru/analysis/profile2C4A000007/ to get tick data. C# code that bypasses this restriction is at the end of the answer.
In the export form the period values are the following:
<select id="issuer-profile-export-period" name="p" style="width: 135px; display: none;">
<option value="1">тики</option> ticks
<option value="2">1 мин.</option> 1 minute
<option value="3">5 мин.</option> 5 minutes
<option value="4">10 мин.</option> 10 minutes
<option value="5">15 мин.</option> 15 minutes
<option value="6">30 мин.</option> 30 minutes
<option value="7" selected="selected">1 час</option> 1 hour
<option value="8">1 день</option> 1 day
<option value="9">1 неделя</option> 1 week
<option value="10">1 месяц</option> 1 month
</select>
mstimever = 1: Moscow time
mstimever = 0: not Moscow time
f: name of the output file
Most likely, to change the currency pair you need to change it in three places: em= (the numeric instrument code), code=, and cn= (the pair symbol). The available pairs are listed below; a sketch that puts the parameters together follows the list.
Currencies:
Aud/Cad, Aud/Chf, Aud/Dkk, Aud/Jpy, Aud/Nok, Aud/Nzd, Aud/Sek, Aud/Sgd, Aud/Usd,
Cad/Chf, Cad/Jpy, Cad/Usd,
Chf/Dkk, Chf/Jpy, Chf/Sgd, Chf/Usd,
Dkk/Usd,
Eur/Aud, Eur/Byr, Eur/Cad, Eur/Chf, Eur/Cny, Eur/Gbp, Eur/Hkd, Eur/Huf, Eur/Jpy, Eur/Kzt, Eur/Lvl, Eur/Mdl, Eur/Nok, Eur/Nzd, Eur/Rub, Eur/Sek, Eur/Sgd, Eur/Tjs, Eur/Uah, Eur/Usd, Eur/Uzs,
Gbp/Aud, Gbp/Cad, Gbp/Chf, Gbp/Jpy, Gbp/Nok, Gbp/Sek, Gbp/Sgd, Gbp/Usd,
Hkd/Usd, Huf/Usd, Jpy/Usd, Mxn/Usd, Nok/Usd,
Nzd/Cad, Nzd/Jpy, Nzd/Sgd, Nzd/Usd,
Pln/Usd,
Rub/Eur, Rub/Lvl, Rub/Usd,
Sek/Usd, Sgd/Jpy, Sgd/Usd,
Usd/Byr, Usd/Cad, Usd/Chf, Usd/Cny, Usd/Dem, Usd/Idr, Usd/Inr, Usd/Jpy, Usd/Kzt, Usd/Lvl, Usd/Mdl, Usd/Rub, Usd/Tjs, Usd/Uah, Usd/Uzs,
Xag, Xau,
Zar/Usd
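Putting the parameters together, here is a minimal C# sketch (in the same style as the code at the end of this answer) of how the export URL could be assembled for an arbitrary pair and date range. It assumes the parameter meanings described above are correct; the host, market=5 and the em codes (181410 for AUDCAD, 181408 for AUDJPY) are taken from the example URLs, and any other pair would need its own em code. Note that mf and mt are 4 in the May example URLs, so the month fields appear to be zero-based.
using System;
using System.Net;

class FinamExportSketch
{
    // Builds an export URL for one pair; "period" uses the p values from the form above (1 = ticks, 2 = 1 minute, ...)
    static string BuildExportUrl(string pair, int em, DateTime fromDate, DateTime toDate, int period)
    {
        string file = string.Format("{0}_{1:yyMMdd}_{2:yyMMdd}", pair, fromDate, toDate);
        return "http://195.128.78.52/" + file + ".txt"
            + "?market=5&em=" + em + "&code=" + pair
            + "&df=" + fromDate.Day + "&mf=" + (fromDate.Month - 1) + "&yf=" + fromDate.Year
            + "&dt=" + toDate.Day + "&mt=" + (toDate.Month - 1) + "&yt=" + toDate.Year
            + "&p=" + period + "&f=" + file + "&e=.txt&cn=" + pair
            + "&dtf=1&tmf=1&MSOR=0&mstime=on&mstimever=1&sep=1&sep2=1&datf=2&at=1";
    }

    static void Main()
    {
        // One-minute bars (p=2) for AUDCAD over the same range as the example URL above
        string url = BuildExportUrl("AUDCAD", 181410, new DateTime(2014, 5, 1), new DateTime(2014, 5, 26), 2);
        var client = new WebClient();
        // As noted above, the Referer header is what unlocks tick data; sending it for bar data should not hurt
        client.Headers.Add("Referer", "http://www.finam.ru/analysis/profile2C4A000007/");
        Console.WriteLine(client.DownloadString(url));
    }
}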
The only difference between a request that bypasses the restriction and one that doesn't is the header Referer: http://www.finam.ru/analysis/profile2C4A000007/.
Here is a link to tick data:
http://195.128.78.52/AUDJPY_140526_140526.txt?market=5&em=181408&code=AUDJPY&df=26&mf=4&yf=2014&dt=26&mt=4&yt=2014&p=1&f=AUDJPY_140526_140526&e=.txt&cn=AUDJPY&dtf=1&tmf=1&MSOR=0&mstime=on&mstimever=1&sep=1&sep2=1&datf=6&at=1
Here are the headers of the original request that worked:
Remote Address:195.128.78.52:80
Request URL:http://195.128.78.52/AUDJPY_140526_140526.txt?market=5&em=181408&code=AUDJPY&df=26&mf=4&yf=2014&dt=26&mt=4&yt=2014&p=1&f=AUDJPY_140526_140526&e=.txt&cn=AUDJPY&dtf=1&tmf=1&MSOR=0&mstime=on&mstimever=1&sep=1&sep2=1&datf=6&at=1
Request Method:GET
Status Code:200 OK
Request Headers
Accept:text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding:gzip,deflate,sdch
Accept-Language:ru-RU,ru;q=0.8,en-US;q=0.6,en;q=0.4,it;q=0.2
Connection:keep-alive
Host:195.128.78.52
Referer:http://www.finam.ru/analysis/profile2C4A000007/
User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.114 Safari/537.36
Here are the headers of the request that did not work:
Remote Address:195.128.78.52:80
Request URL:http://195.128.78.52/AUDJPY_140526_140526.txt?market=5&em=181408&code=AUDJPY&df=26&mf=4&yf=2014&dt=26&mt=4&yt=2014&p=1&f=AUDJPY_140526_140526&e=.txt&cn=AUDJPY&dtf=1&tmf=1&MSOR=0&mstime=on&mstimever=1&sep=1&sep2=1&datf=6&at=1
Request Method:GET
Status Code:200 OK
Request Headers
Accept:text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding:gzip,deflate,sdch
Accept-Language:ru-RU,ru;q=0.8,en-US;q=0.6,en;q=0.4,it;q=0.2
Connection:keep-alive
Host:195.128.78.52
User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.114 Safari/537.36
The C# code to bypass the restriction and get the tick data:
using System;
using System.Net;

namespace ConsoleApplication6
{
    class Program
    {
        static void Main(string[] args)
        {
            var client = new WebClient();
            // Mimic a normal browser request
            client.Headers.Add("user-agent", "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36");
            client.Headers.Add("Accept-Language", "ru-RU,ru;q=0.8,en-US;q=0.6,en;q=0.4,it;q=0.2");
            // The Referer header is what makes the server return the tick data
            client.Headers.Add("Referer", "http://www.finam.ru/analysis/profile2C4A000007/");
            // AUDJPY ticks (p=1) for 2014-05-26
            string url = "http://195.128.78.52/AUDJPY_140526_140526.txt?market=5&em=181408&code=AUDJPY&df=26&mf=4&yf=2014&dt=26&mt=4&yt=2014&p=1&f=AUDJPY_140526_140526&e=.txt&cn=AUDJPY&dtf=1&tmf=1&MSOR=0&mstime=on&mstimever=1&sep=1&sep2=1&datf=6&at=1";
            string csv = client.DownloadString(url);
            Console.WriteLine(csv);
            Console.ReadLine();
        }
    }
}

Related

Query fields in Kibana with RegEx

I need to search Kibana logs for fields with specific content. The field is "message" and looks like this:
11.111.72.58 - - [26/Nov/2020:08:44:23 +0000] "GET /images/image.jpg HTTP/1.1" 200 123456 "https://website.com/questionnaire/uuid/result" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.14 (KHTML, like Gecko) Version/14.0.1 Safari/605.1.14" "5.158.163.231"
This field contains URIs, for example "https://website.com/questionnaire/uuid/result" here. How can I search for specific URIs in that field?
I need to get all logs where the field "message" contains "https://website.com/questionnaire/someUUID*/result",
or where the URI is exactly "https://website.com/".
I've tried with Lucene:
message:/https://.+/result/
nothing found
message:https.*\result
This finds URIs with "https" at the beginning, but it also returns URIs without "result" at the end.
message : "https://website.com/questionnaire" AND message : "result"
This works, but it would also match if "result" were not part of the URI and simply appeared on its own at the end of the "message" field. I need something that really queries the URIs between the quotation marks.
I need to visualise the number of requests for each URI in Kibana later, so I think I need to use Lucene or the Query DSL.
Any ideas?
This is a good use case for the new wildcard field type (introduced in 7.9), which allows you to better search within potentially long strings.
If you declare your message field as wildcard like this:
PUT test
{
  "mappings": {
    "properties": {
      "message": {
        "type": "wildcard"
      }
    }
  }
}
And then index your documents
PUT test/_doc/1
{
"message": """11.111.72.58 - - [26/Nov/2020:08:44:23 +0000] "GET /images/image.jpg HTTP/1.1" 200 123456 "https://website.com/questionnaire/uuid/result" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.14 (KHTML, like Gecko) Version/14.0.1 Safari/605.1.14" "5.158.163.231"
"""
}
You can then run wildcard searches (even with leading wildcards, which are discouraged on normal keyword fields) and find your document easily.
GET test/_search
{
  "query": {
    "wildcard": {
      "message": {
        "value": "*https*uuid*"
      }
    }
  }
}

Grok Filter on Proxy Logs

I am attempting to parse and structure raw proxy data using a grok filter in the ELK stack, and I can't get the timestamp and user-agent string into the correct format. A sample log line follows:
"1488852784.440 1 10.11.62.19 TCP_DENIED/403 0 GET http://xxx.xxx.com/xxx - NONE/- - BLOCK_WEBCAT_12-XXX-XXX-NONE-NONE-NONE-NONE <IW_aud,0.0,-,""-"",-,-,-,-,""-"",-,-,-,""-"",-,-,""-"",""-"",-,-,IW_aud,-,""-"",""-"",""Unknown"",""Unknown"",""-"",""-"",0.00,0,-,""-"",""-"",-,""-"",-,-,""-"",""-""> - L ""http://xxx.xxx.xxx"" 10.11.11.2 - 403 TCP_DENIED ""Streaming Audio"" - - - GET ""Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36"" http://xxx.xxx.xxx"
I am using the following filter:
%{NUMBER:timestamp}%{SPACE}%{NUMBER:request_msec:float} %{IPORHOST:src_ip} %{WORD}/%{NUMBER:response_status:int} %{NUMBER:response_size} %{WORD:http_method} (%{URIPROTO:http_proto}://)?%{IPORHOST:dst_host}(?::%{POSINT:port})?(?:%{NOTSPACE:uri_param})? %{USERNAME:user} %{WORD}/(%{IPORHOST:dst_ip}|-)%{GREEDYDATA:content_type}
Based on http://grokconstructor.appspot.com, I am able to parse out some of the fields, but not the timestamp (1488852784.440) or the user-agent string. I have tried various default grok patterns on the timestamp, but it still shows up as a number.
That's because grok can't convert to a date datatype. For that you need the date filter, which does exactly this conversion for you.
filter {
  date {
    # 1488852784.440 is epoch seconds with a fractional part, so use UNIX (UNIX_MS expects milliseconds)
    match => [ "timestamp", "UNIX" ]
  }
}
This will set the @timestamp field of the event to the timestamp parsed from the timestamp field.

Python 2.7 BeautifulSoup4 is returning an empty set

I am trying to grab the links from a google search with bs4 but my code is returning an empty set.
import requests
from bs4 import BeautifulSoup
website = "https://www.google.co.uk/?gws_rd=ssl#q=science"
response=requests.get(website)
soup = BeautifulSoup(response.content)
link_info = soup.find_all("h3", {"class": "r"})
print link_info
The <h3 class="r"> elements are where the links for all the results are, not just the link for the first result.
In response I get [], and the same happens for any other class I try to request, including <div class="rc">.
Try to use the following code:
url = 'http://www.google.com/search?'
params = {'q': 'science'}
response = requests.get(url, params=params).content
soup = BeautifulSoup(response)
link_info = soup.find_all("h3", {"class": "r"})
print link_info
You're looking for this:
# select the container with the needed elements and grab each element in a loop
for result in soup.select('.tF2Cxc'):
    # grab the <a> tag from the container and then its href attribute
    link = result.select_one('.yuRUbf a')['href']
Have a look at the SelectorGadget Chrome extension to grab CSS selectors by clicking on the desired element in your browser. CSS selectors reference.
The code from Andersson will throw an error because there is no longer an r CSS selector; it has changed.
Make sure you're using a user-agent, because the default requests user-agent is python-requests. Google can tell the request comes from a bot rather than a "real" user visit, blocks it, and you receive different HTML with some sort of error. Passing a user-agent fakes a real user visit by adding that information to the HTTP request headers.
I wrote a dedicated blog post about how to reduce the chance of being blocked while web scraping search engines; it covers multiple solutions.
Pass the user-agent in the request headers:
headers = {
    'User-agent':
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582"
}
requests.get('YOUR_URL', headers=headers)
Code and example in the online IDE:
from bs4 import BeautifulSoup
import requests, lxml

headers = {
    'User-agent':
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582"
}

params = {
    "q": "samurai cop what does katana mean",
    "gl": "us",
    "hl": "en"
}

html = requests.get("https://www.google.com/search", headers=headers, params=params)
soup = BeautifulSoup(html.text, 'lxml')

for result in soup.select('.tF2Cxc')[:5]:
    link = result.select_one('.yuRUbf a')['href']
    print(link, sep='\n')
--------
'''
https://www.youtube.com/watch?v=paTW3wOyIYw
https://www.quotes.net/mquote/1060647
https://www.reddit.com/r/NewTubers/comments/47hw1g/what_does_katana_mean_it_means_japanese_sword_2/
https://www.imdb.com/title/tt0130236/characters/nm0360481
http://www.subzin.com/quotes/Samurai+Cop/What+does+Katana+mean%3F+-+It+means+Japanese+sword
'''
Alternatively, you can achieve the same thing by using Google Organic Results API from SerpApi. It's a paid API with a free plan.
The difference in your case is that you don't have to deal with picking the correct selectors or figuring out why certain things don't work as expected, and then maintaining that over time. Instead, you only need to iterate over structured JSON and get the data you want quickly.
Code to integrate:
import os
from serpapi import GoogleSearch

params = {
    "engine": "google",
    "q": "samurai cop what does katana mean",
    "hl": "en",
    "gl": "us",
    "api_key": os.getenv("API_KEY"),
}

search = GoogleSearch(params)
results = search.get_dict()

for result in results["organic_results"][:5]:
    print(result['link'])
--------
'''
https://www.youtube.com/watch?v=paTW3wOyIYw
https://www.quotes.net/mquote/1060647
https://www.reddit.com/r/NewTubers/comments/47hw1g/what_does_katana_mean_it_means_japanese_sword_2/
https://www.imdb.com/title/tt0130236/characters/nm0360481
http://www.subzin.com/quotes/Samurai+Cop/What+does+Katana+mean%3F+-+It+means+Japanese+sword
'''
Disclaimer: I work for SerpApi.

Use PhantomJS to pre compile handlebars template

I have a Rails application as the back end and a BackboneJS application as the front end; the BackboneJS app is served by the Rails server. Now I have the problem of search engines indexing pages of my application, which I tried to solve with PhantomJS (I use this gem). I wrote a before_filter in the application controller like this:
def phantom_response
  if params["_escaped_fragment_"].present?
    url = "#{request.base_url}/#!#{params["_escaped_fragment_"]}"
    output = Phantomjs.run('/lib/assets/get_page.js', url)
    render :text => output
  end
end
And my get_page.js looks like this:
var page = require('webpage').create();
page.settings.userAgent = 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.120 Safari/537.36';
page.settings.localToRemoteUrlAccessEnabled = true;
page.open(phantom.args[0], function (d) {
  var body = page.evaluate(function(s) {
    return document.querySelector(s).innerText;
  }, '#provider_details_tpl');
  console.log(body);
  phantom.exit();
});
The element with the id "provider_details_tpl" contains a Handlebars template.
But PhantomJS returns just my template without precompiling it. Can you help me?

Phantomjs - Cookies not enabled error

I'm pretty new to PhantomJS and have just started with headless automation of the application I work on. Somehow, the following code works just fine for websites like Hotmail, Facebook, etc., but it doesn't work for my application under test. Here is the code I'm using:
var page = require("webpage").create();
page.settings.userAgent="Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2049.0 Safari/537.36"
phantom.clearCookies();
phantom.cookiesEnabled = true;
var homePage = "https://www.somewebsite.com";
page.open(homePage, function(status) {
  var url = page.url;
  console.log("Status: " + status);
  console.log("Loaded: " + url);
  page.evaluate(function(){
    document.getElementById('myUsername').value = 'username';
    document.getElementById('myPassword').value = 'password';
  });
  page.render("before.png");
  page.evaluate(function(){
    document.getElementById('myLoginButton').click();
  });
  setTimeout(function() {
    page.render("after.png");
    phantom.exit();
  }, 10000);
});
The error message that I get is "Your browser has been set to block all cookies. Please enable them to log into the website."
Although I have included the statement "phantom.cookiesEnabled = true;", it doesn't seem to enable them. I have already tried changing the user agent, but with no luck. Am I missing something?
Thanks in Advance,
Harshit Kohli
For anyone who might face this issue: setting a user agent for the page should work.
page.settings.userAgent = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:44.0) Gecko/20100101 Firefox/44.0"