How do I query an AWS OpenSearch index using a Vega visualization? - amazon-web-services

I have data in an index in JSON format, and I want to use a Vega visualization to display it (full Vega, not Vega-Lite). However, every example I've found out there is for Vega-Lite, and all they're trying to do is stick their data into a time-series graph. I'd like to do something different, and thus I find myself at a dead end.
A sample doc in my index:
{
  "_index": "myindex",
  "_type": "doc",
  "_id": "abc123",
  "_version": 1,
  "_score": null,
  "timestamp": "2022-05-23T07:43:21.123Z",
  "_source": {
    "fruit": [
      {
        "amount": 15,
        "type": {
          "grower_id": 47,
          "grower_country": "US",
          "name": "apple"
        }
      },
      {
        "amount": 43,
        "type": {
          "grower_id": 47,
          "grower_country": "CAN",
          "name": "apple"
        }
      },
      {
        "amount": 7,
        "type": {
          "grower_id": 23,
          "grower_country": "US",
          "name": "orange"
        }
      },
      {
        "amount": 14,
        "type": {
          "grower_id": 23,
          "grower_country": "CAN",
          "name": "orange"
        }
      }
    ]
  }
}
What I want to do is create two text marks on the visualization that display sums of the values as follows.
Symbol1 = sum of all apples (i.e. all apples grown in the US and CAN combined)
Symbol2 = sum of all oranges (i.e. all oranges grown in the US and CAN combined)
I tried the following data element with no success:
"data": [{
"name": "mydata",
"url": {
"index": "myindex",
"body": {
"query": "fruit.type.name:'apple'",
},
}
}
]
However, this query obviously isn't even correct. What I want to be able to do is return a table of values and then use those values in my marks to drive mark behaviour or color. I'm comfortable with doing the latter in Vega, but getting the data queried is where I'm stuck.
I've read and watched so many tutorials covering Vega-Lite, but I've yet to find a single working example for full Vega on AWS OpenSearch.
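For what it's worth, from the Vega reference in the OpenSearch Dashboards documentation I gather that the url object can wrap a full Query DSL body, and that format.property tells Vega which part of the response to use as its data table. So I presume the data block needs a shape something like the sketch below (untested; size, the aggregation names and the exact field paths are my guesses, and if fruit is mapped as nested this would presumably need a nested aggregation instead). The Dashboards Vega editor accepts HJSON, so the // comments should be tolerated:

"data": [
  {
    "name": "mydata",
    "url": {
      // index and field names taken from the sample doc above
      "index": "myindex",
      "body": {
        "size": 0,
        "aggs": {
          "fruit_types": {
            // guess: may need fruit.type.name.keyword, or a nested agg if "fruit" is nested
            "terms": {"field": "fruit.type.name"},
            "aggs": {
              "total_amount": {"sum": {"field": "fruit.amount"}}
            }
          }
        }
      }
    },
    // expose the terms buckets as the rows Vega sees
    "format": {"property": "aggregations.fruit_types.buckets"}
  }
]

Each bucket would then carry a key ("apple"/"orange") and a total_amount.value, which is what I'd want to bind to the two text marks - but I haven't been able to confirm this end to end.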
Can anyone please help?

Related

Show negative values in Deneb column chart in red

I've been playing around with the Deneb visual in Power BI Desktop and (amongst many other things) have been trying to create a simple column chart that shows negative values in red and positive values in green. However, I can't for the life of me get it working - I believe the condition/test in my script is correct, but it refuses to 'fire' when it's 'true'.
I've read through the condition page of the Vega-Lite documentation https://vega.github.io/vega-lite/docs/condition.html and have a condition section within encoding/color.
I've added Month End and MonthYear columns from my Calendar table and an EBITDA measure from a fact table to the Deneb visual:
Month End     MonthYear    EBITDA
31/7/2021     "Jul-21"       8277.56
31/8/2021     "Aug-21"     -15123.66
30/9/2021     "Sep-21"       9502.11
31/10/2021    "Oct-21"      13090.99
{
  "data": {"name": "dataset"},
  "mark": "bar",
  "encoding": {
    "x": {
      "field": "MonthYear",
      "sort": {"field": "Month End"}
    },
    "y": {
      "field": "EBITDA",
      "aggregate": "sum"
    },
    "color": {
      "condition": {
        "test": "datum['EBITDA']<0",
        "value": "red"
      },
      "value": "green"
    }
  }
}
If I adjust the condition to be "test": "1==1" then the 'true' path works, so I assume I've got something wrong with my test line, though this seems to be correct per a lot of blogs, stackoverflow questions etc.
I've also tried using a "transform:" channel to create a new Neg field in the Deneb dataset and referring to that field in my test, but it still won't adjust the colour.
It doesn’t like your aggregation. It looks like the data you are sending in is already aggregated by Power BI. If so, this will work:
"y": {
"field": "b",
"type": "quantitative"
},
View sample in the Vega Editor
If your data isn’t aggregated, add an aggregate transform like this:
"transform": [
{"aggregate": [{
"op": "sum",
"field": "b",
"as": "bsum"
}],
"groupby": ["a"]}
],
"mark": "bar",
"encoding": {
"x": {
"field": "a",
"sort": {"field": "a"}
},
"y": {
"field": "bsum",
"type": "quantitative"
},
"color": {
"condition": {
"test": "datum['bsum']<0",
"value": "red"
},
"value": "green"
}
}
}
Open the Chart in the Vega Editor

Can I get all active ARB subscriptions in a single API call

Currently I'm exporting all ARB data by calling the API to get all active ARB IDs and then going through each ID to get the info stored in it. But this process is too long and it makes lots of requests. Is there any way I can get the data for all active ARB IDs in one request, like with a database?
https://developer.authorize.net/api/reference/index.html#recurring-billing-get-a-list-of-subscriptions
This function gives only a small amount of data, while I need the complete data stored in a profile, like this one: https://developer.authorize.net/api/reference/index.html#recurring-billing-get-subscription
But this function only works for a single ID.
New Answer
No. The ARBGetSubscriptionListRequest only returns a limited amount of information. If you want detailed information, you would need to call ARBGetSubscriptionListRequest, loop through the results, and make an API call for each subscription to get the more granular data.
Due to the potentially large number of results, you should probably store the results in a database and then have a bunch of scheduled scripts make the subsequent API calls.
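For reference, the per-subscription call in that loop is ARBGetSubscriptionRequest, which takes a single subscriptionId. Using one of the IDs from the sample response below, the request would look roughly like this (check the exact field set against the API reference):
{
  "ARBGetSubscriptionRequest": {
    "merchantAuthentication": {
      "name": "5KP3u95bQpv",
      "transactionKey": "346HZ32z3fP4hTG2"
    },
    "refId": "123456",
    "subscriptionId": "100188"
  }
}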
Old Answer
Yes. You can call ARBGetSubscriptionListRequest.
Request:
{
  "ARBGetSubscriptionListRequest": {
    "merchantAuthentication": {
      "name": "5KP3u95bQpv",
      "transactionKey": "346HZ32z3fP4hTG2"
    },
    "refId": "123456",
    "searchType": "subscriptionActive",
    "sorting": {
      "orderBy": "id",
      "orderDescending": "false"
    },
    "paging": {
      "limit": "1000",
      "offset": "1"
    }
  }
}
Response:
{
"totalNumInResultSet": 1273,
"totalNumInResultSetSpecified": true,
"subscriptionDetails": [
{
"id": 100188,
"name": "subscription",
"status": "canceled",
"createTimeStampUTC": "2004-04-28T23:59:47.33",
"firstName": "Joe",
"lastName": "Tester",
"totalOccurrences": 12,
"pastOccurrences": 6,
"paymentMethod": "creditCard",
"accountNumber": "XXXX5454",
"invoice": "42820041325496571",
"amount": 10,
"currencyCode": "USD"
},
{
"id": 100222,
"name": "",
"status": "canceled",
"createTimeStampUTC": "2004-10-22T21:00:15.503",
"firstName": "asdf",
"lastName": "asdf",
"totalOccurrences": 12,
"pastOccurrences": 0,
"paymentMethod": "creditCard",
"accountNumber": "XXXX1111",
"invoice": "",
"amount": 1,
"currencyCode": "USD"
},
{
"id": 100223,
"name": "",
"status": "canceled",
"createTimeStampUTC": "2004-10-22T21:01:27.69",
"firstName": "asdf",
"lastName": "asdf",
"totalOccurrences": 12,
"pastOccurrences": 1,
"paymentMethod": "eCheck",
"accountNumber": "XXXX3888",
"invoice": "",
"amount": 10,
"currencyCode": "USD"
}
],
"refId": "123456",
"messages": {
"resultCode": "Ok",
"message": [
{
"code": "I00001",
"text": "Successful."
}
]
}
}

Elastic Search Sort

I have a table for some activities like
[
{
"id": 123,
"name": "Ram",
"status": 1,
"activity": "Poster Design"
},
{
"id": 123,
"name": "Ram",
"status": 1,
"activity": "Poster Design"
},
{
"id": 124,
"name": "Leo",
"categories": [
"A",
"B",
"C"
],
"status": 1,
"activity": "Brochure"
},
{
"id": 134,
"name": "Levin",
"categories": [
"A",
"B",
"C"
],
"status": 1,
"activity": "3D Printing"
}
]
I want to get this data from Elasticsearch 5.5 sorted on the field activity, but I need all the data corresponding to name = "Ram" first and then the rest, in a single query.
You can use a function_score query to boost the results that match the filter (in this case, ram in name).
The following query should work for you:
POST sort_index/_search
{
  "query": {
    "function_score": {
      "query": {
        "match_all": {}
      },
      "boost": "5",
      "functions": [{
        "filter": {
          "match": {
            "name": "ram"
          }
        },
        "random_score": {},
        "weight": 1000
      }],
      "score_mode": "max"
    }
  },
  "sort": [{
    "activity.keyword": {
      "order": "desc"
    }
  }]
}
I would suggest using a bool query combined with the should clause. You will also need to use the sort clause on your field.
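A rough, untested sketch of that approach, reusing the field names above: wrapping the name match in a constant_score clause gives every "Ram" document the same boost, and sorting on _score before activity puts them first while still ordering each group by activity.
POST sort_index/_search
{
  "query": {
    "bool": {
      "must": [
        { "match_all": {} }
      ],
      "should": [
        {
          "constant_score": {
            "filter": { "match": { "name": "Ram" } },
            "boost": 10
          }
        }
      ]
    }
  },
  "sort": [
    { "_score": { "order": "desc" } },
    { "activity.keyword": { "order": "asc" } }
  ]
}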

How to interpret user search query (in Elasticsearch)

I would like to serve my visitors the best results possible when they use our search feature.
To achieve this I would like to interpret the search query.
For example, a user searches for 'red beds for kids 120cm'.
I would like to interpret it as follows:
Category-Filter is "beds" AND "children"
Color-filter is red
Size-filter is 120cm
Are there ready-to-go tools for Elasticsearch?
Will I need NLP in front of Elasticsearch?
Elasticsearch is pretty powerful on its own and is very much capable of returning the most relevant results to full-text search queries, provided that data is indexed and queried adequately.
Under the hood it always performs text analysis for full-text searches (on fields of type text). A text analyzer consists of character filters, a tokenizer and token filters.
For instance, a synonym token filter can replace kids with children in the user query.
On top of that, search queries on modern websites are often facilitated via category selectors in the UI, which can easily be implemented by querying keyword fields in Elasticsearch.
It might be enough to model your data correctly and tune its indexing to implement the search you need - and if that is not enough, you can always add some extra layer of NLP-like logic on the client side, like #2ps suggested.
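As a quick aside on the keyword-field route: a category selector in the UI usually boils down to a plain bool filter on keyword fields. A hypothetical sketch, using the same index and field names as the toy example below:
POST /my_index/_search
{
  "query": {
    "bool": {
      "filter": [
        { "term": { "Category": "beds" } },
        { "term": { "Category": "children" } },
        { "term": { "Color": "red" } },
        { "term": { "Size.LengthCM": 120 } }
      ]
    }
  }
}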
Now let me show a toy example of what you can achieve with a synonym token filter and copy_to feature.
Let's define the mapping
Let's pretend that our products are characterized by the following properties: Category, Color, and Size.LengthCM.
The mapping will look something like:
PUT /my_index
{
  "mappings": {
    "properties": {
      "Category": {
        "type": "keyword",
        "copy_to": "DescriptionAuto"
      },
      "Color": {
        "type": "keyword",
        "copy_to": "DescriptionAuto"
      },
      "Size": {
        "properties": {
          "LengthCM": {
            "type": "integer",
            "copy_to": "DescriptionAuto"
          }
        }
      },
      "DescriptionAuto": {
        "type": "text",
        "analyzer": "MySynonymAnalyzer"
      }
    }
  },
  "settings": {
    "index": {
      "analysis": {
        "analyzer": {
          "MySynonymAnalyzer": {
            "tokenizer": "standard",
            "filter": [
              "MySynonymFilter"
            ]
          }
        },
        "filter": {
          "MySynonymFilter": {
            "type": "synonym",
            "lenient": true,
            "synonyms": [
              "kid, kids => children"
            ]
          }
        }
      }
    }
  }
}
Notice that we selected type keyword for the fields Category and Color.
Now, what about these copy_to and synonym?
What will copy_to do?
Every time we send an object for indexing into our index, the value of the keyword field Category will be copied to the full-text field DescriptionAuto. This is what copy_to does.
What will synonym do?
To enable synonyms we need to define a custom analyzer; see MySynonymAnalyzer, which we defined under "settings" above.
Roughly, it will replace every token that matches something on the left of => with the token on the right.
What will the documents look like?
Let's insert a few example documents:
POST /my_index/_doc
{
  "Category": [
    "beds",
    "adult"
  ],
  "Color": "red",
  "Size": {
    "LengthCM": 150
  }
}

POST /my_index/_doc
{
  "Category": [
    "beds",
    "children"
  ],
  "Color": "red",
  "Size": {
    "LengthCM": 120
  }
}

POST /my_index/_doc
{
  "Category": [
    "couches",
    "adult",
    "family"
  ],
  "Color": "blue",
  "Size": {
    "LengthCM": 200
  }
}

POST /my_index/_doc
{
  "Category": [
    "couches",
    "adult",
    "family"
  ],
  "Color": "red",
  "Size": {
    "LengthCM": 200
  }
}
As you can see, DescriptionAuto is not present in the original documents - though due to copy_to we will be able to query it.
Let's see how.
Performing the search!
Now we can try out our index with a simple query_string query:
POST /my_index/_doc/_search
{
  "query": {
    "query_string": {
      "query": "red beds for kids 120cm",
      "default_field": "DescriptionAuto"
    }
  }
}
The results will look something like the following:
"hits": {
...
"max_score": 2.3611186,
"hits": [
{
...
"_score": 2.3611186,
"_source": {
"Category": [
"beds",
"children"
],
"Color": "red",
"Size": {
"LengthCM": 120
}
}
},
{
...
"_score": 1.0998137,
"_source": {
"Category": [
"beds",
"adult"
],
"Color": "red",
"Size": {
"LengthCM": 150
}
}
},
{
...
"_score": 0.34116736,
"_source": {
"Category": [
"couches",
"adult",
"family"
],
"Color": "red",
"Size": {
"LengthCM": 200
}
}
}
]
}
The document with categories beds and children and color red is on top, and its relevance score is more than twice that of the runner-up!
How can I check how Elasticsearch interpreted the user's query?
It is easy to do via analyze API:
POST /my_index/_analyze
{
"text": "red bed for kids 120cm",
"analyzer": "MySynonymAnalyzer"
}
{
"tokens": [
{
"token": "red",
"start_offset": 0,
"end_offset": 3,
"type": "<ALPHANUM>",
"position": 0
},
{
"token": "bed",
"start_offset": 4,
"end_offset": 7,
"type": "<ALPHANUM>",
"position": 1
},
{
"token": "for",
"start_offset": 8,
"end_offset": 11,
"type": "<ALPHANUM>",
"position": 2
},
{
"token": "children",
"start_offset": 12,
"end_offset": 16,
"type": "SYNONYM",
"position": 3
},
{
"token": "120cm",
"start_offset": 17,
"end_offset": 22,
"type": "<ALPHANUM>",
"position": 4
}
]
}
As you can see, there is no kids token, but there is a children token.
On a side note, in this example Elasticsearch wasn't able to parse the size of the bed: the token 120cm didn't match anything, since all the sizes are indexed as integers like 120, 150, etc. Another layer of tweaking would be needed to extract 120 from the 120cm token.
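One way to close that gap (an untested sketch, not something included in the example above) would be to put a word_delimiter token filter in front of the synonym filter, so that 120cm is split into the separate tokens 120 and cm at analysis time:
"settings": {
  "index": {
    "analysis": {
      "analyzer": {
        "MySynonymAnalyzer": {
          "tokenizer": "standard",
          "filter": [
            "word_delimiter",
            "MySynonymFilter"
          ]
        }
      },
      "filter": {
        "MySynonymFilter": {
          "type": "synonym",
          "lenient": true,
          "synonyms": [
            "kid, kids => children"
          ]
        }
      }
    }
  }
}
With that in place the 120 token could match the Size.LengthCM value copied into DescriptionAuto; synonym filters can be picky about what runs before them, though, so this ordering would need to be verified.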
I hope this gives an idea of what can be achieved with Elasticsearch's built-in text analysis capabilities!

JSON formatting error using Boost JSON parser

I'm attempting to use Boost to read a JSON file called sessionstore.js from my Firefox configuration folder, where information about the current/last Firefox session is saved for recovery purposes. I've written a program based on the XML tutorial from the Boost website, simply swapping out the XML parts for the JSON parts; it can be seen below.
#include <boost/property_tree/ptree.hpp>
#include <boost/property_tree/json_parser.hpp>
#include <boost/foreach.hpp>
#include <iostream>   // for cout/endl
#include <string>
#include <set>
#include <exception>

using boost::property_tree::ptree;
using namespace std;

const string FILENAME = "sessionstore.js";
const string WINDOW_TAG = "windows";

struct session_settings
{
    void load (const string &FILENAME);
};

// Parse the JSON file into a property tree.
void session_settings::load (const string &FILENAME)
{
    ptree pt;
    read_json (FILENAME, pt);
}

int main()
{
    try
    {
        session_settings Settings;
        Settings.load(FILENAME);
    }
    catch (exception &e)
    {
        cout << "Error: " << e.what() << endl;
    }
    return 0;
}
The contents of the JSON file I'm trying to read are
{"windows":[{"tabs":[{"entries":[{"url":"about:home","title":"Mozilla Firefox Start Page","ID":5,"docshellID":11,"owner_b64":"NhAra3tiRRqhyKDUVsktxQAAAAAAAAAAwAAAAAAAAEYAAQAAAAAAAS8nfAAOr03buTZBMmukiq4HoizADOUR05MxABBLoP1AAAAAAAVhYm91dAAAAARob21l4NodcC97EdOM0ABgsPwUoweiLMAM5RHTkzEAEEug/UAAAAAADm1vei1zYWZlLWFib3V0AAAABGhvbWUAAAA=","docIdentifier":5},{"url":"http://www.google.co.uk/","title":"Google","ID":6,"docshellID":11,"docIdentifier":6,"children":[{"url":"about:blank","ID":7,"docshellID":12,"owner_b64":"NhAra3tiRRqhyKDUVsktxQAAAAAAAAAAwAAAAAAAAEYAAQAAAAAAAd6UctCANBHTk5kAEEug/UAHoizADOUR05MxABBLoP1AAAAAAv////8AAABQAQAAABhodHRwOi8vd3d3Lmdvb2dsZS5jby51ay8AAAAAAAAABAAAAAcAAAAQAAAAB/////8AAAAH/////wAAAAcAAAAQAAAAFwAAAAEAAAAXAAAAAQAAABcAAAABAAAAGAAAAAAAAAAY/////wAAABf/////AAAAF/////8AAAAX/////wEAAAAAAAAAAAABAAA=","docIdentifier":7,"scroll":"0,0"}],"formdata":{"#csi":"1","#hcache":"{\"BInSTfL-EtSt8QOl24nrCg\":[[69,{}],[14,{}],[60,{}],[81,{\"persisted\":true}],[42,{}],[43,{}],[83,{}],[95,{\"kfe\":{\"kfeHost\":\"clients1.google.co.uk\",\"kfeUrlPrefix\":\"/webpagethumbnail?r=2&f=2&s=300:585&query=&hl=en&gl=uk\",\"maxPrefetchConnections\":2,\"prefetch\":90,\"slowConnection\":false},\"logging\":{\"csiFraction\":0.05,\"gen204Fraction\":0.05},\"msgs\":{\"loading\":\"Still loading...\",\"mute\":\"Mute\",\"noPreview\":\"Preview not available\",\"sound\":\"Sound:\",\"soundOff\":\"off\",\"soundOn\":\"on\",\"unmute\":\"Unmute\"},\"pb\":{\"desiredHeight\":585,\"desiredWidth\":300,\"minHeight\":200,\"minWidth\":300},\"time\":{\"hoverClose\":300,\"hoverModeTimeout\":60,\"hoverOpen\":125,\"loading\":100,\"longHoverOpen\":725,\"prefetchOnLoad\":3000,\"timeout\":2500}}],[78,{}],[25,{\"m\":{\"bks\":true,\"blg\":true,\"dsc\":true,\"evn\":true,\"frm\":true,\"isch\":true,\"klg\":true,\"mbl\":true,\"nws\":true,\"plcs\":true,\"ppl\":true,\"prc\":true,\"pts\":true,\"rcp\":true,\"shop\":true,\"vid\":true},\"t\":null}],[64,{}],[105,{}],[22,{\"m_errors\":{\"32\":\"Sorry, no more results to show.\",\"default\":\"<font color=red>Error:</font> The server could not complete your request. Try again in 30 seconds.\"},\"m_tip\":\"Click for more information\"}],[77,{}],[84,{}],[99,{}],[29,{\"mcr\":5}],[92,{\"avgTtfc\":2000,\"fd\":1000,\"fl\":true,\"focus\":true,\"hpt\":250,\"kn\":true,\"mds\":\"clir,clue,dfn,evn,frim,klg,prc,rl,show,sp,sts,ww,mbl_he,mbl_hs,mbl_re,mbl_rs,mbl_sv,isch\",\"msg\":{\"dym\":\"Did you mean:\",\"gs\":\"Google Search\",\"kntt\":\"Use the up and down arrow keys to select each result. 
Press Enter to go to the selection.\",\"sif\":\"Search instead for\",\"srf\":\"Showing results for\"},\"odef\":true,\"ophe\":true,\"pq\":true,\"rpt\":41,\"tct\":\" ?\",\"tdur\":50}],[24,{}],[38,{}]]}"},"scroll":"0,0"}],"index":2,"hidden":false,"attributes":{"image":"http://www.google.co.uk/favicon.ico"},"storage":{"http://www.google.co.uk":{"web-v":"12_c9c918f0"}}}],"selected":1,"_closedTabs":[],"width":994,"height":688,"screenX":1650,"screenY":24,"sizemode":"normal","title":"Google"}],"selectedWindow":0,"_closedWindows":[{"tabs":[{"entries":[{"url":"about:home","title":"Mozilla Firefox Start Page","ID":0,"docshellID":5,"owner_b64":"NhAra3tiRRqhyKDUVsktxQAAAAAAAAAAwAAAAAAAAEYAAQAAAAAAAS8nfAAOr03buTZBMmukiq4HoizADOUR05MxABBLoP1AAAAAAAVhYm91dAAAAARob21l4NodcC97EdOM0ABgsPwUoweiLMAM5RHTkzEAEEug/UAAAAAADm1vei1zYWZlLWFib3V0AAAABGhvbWUAAAA="},{"url":"http://www.facebook.com/","title":"Welcome to Facebook - Log In, Sign Up or Learn More","ID":1,"docshellID":5,"docIdentifier":1,"formdata":{"//xhtml:div[#id='reg_form_box']/xhtml:table/xhtml:tbody/xhtml:tr[6]/xhtml:td[2]/xhtml:div/xhtml:div/xhtml:select":0,"//xhtml:div[#id='reg_form_box']/xhtml:table/xhtml:tbody/xhtml:tr[6]/xhtml:td[2]/xhtml:div/xhtml:div/xhtml:select[2]":0,"#sex":0,"#birthday_month":0,"#birthday_day":0,"#birthday_year":0},"scroll":"0,0"}],"index":2,"hidden":false,"attributes":{"image":"http://www.facebook.com/favicon.ico"}},{"entries":[{"url":"http://twitter.com/","title":"Twitter","ID":3,"docshellID":6,"docIdentifier":3,"children":[{"url":"http://api.twitter.com/receiver.html","ID":4,"docshellID":7,"referrer":"http://twitter.com/","docIdentifier":4,"scroll":"0,0"}],"formdata":{},"scroll":"0,0"}],"index":1,"hidden":false,"attributes":{"image":"http://twitter.com/phoenix/favicon.ico"}}],"selected":2,"_closedTabs":[],"width":994,"height":688,"screenX":1366,"screenY":307,"sizemode":"normal","cookies":[{"host":".facebook.com","value":"J4-69","path":"/","name":"lsd"},{"host":".facebook.com","value":"http%3A%2F%2Fwww.facebook.com%2F","path":"/","name":"reg_fb_gate"},{"host":".facebook.com","value":"http%3A%2F%2Fwww.facebook.com%2F","path":"/","name":"reg_fb_ref"},{"host":".facebook.com","value":"994x624","path":"/","name":"wd"},{"host":".twitter.com","value":"43838368","path":"/","name":"__utmc"},{"host":"twitter.com","value":"4bfz%2B%2BmebEkRkMWFCXm%2FCUOsvDoVeFTl","path":"/","name":"original_referer"},{"host":"scribe.twitter.com","value":"4bfz%2B%2BmebEkRkMWFCXm%2FCUOsvDoVeFTl","path":"/","name":"original_referer"},{"host":".twitter.com","value":"BAh7CToPY3JlYXRlZF9hdGwrCDoVZ%252F4vAToMY3NyZl9pZCIlODE2MGI1ZjJh%250AYmViNDMwODMxNDlkN2U5ZDg5Yjk4ZmU6B2lkIiU2N2I4YjdmNGExNWFkNzlk%250AODI0MDVjMGM1NmMzYjVhYSIKZmxhc2hJQzonQWN0aW9uQ29udHJvbGxlcjo6%250ARmxhc2g6OkZsYXNoSGFzaHsABjoKQHVzZWR7AA%253D%253D--8b0d751e9774c5cfaa61fdec567cb782aa8757dd","path":"/","name":"_twitter_sess","httponly":true},{"host":".twitter.com","value":"43838368","path":"/","name":"__utmc"},{"host":"twitter.com","value":"4bfz%2B%2BmebEkRkMWFCXm%2FCUOsvDoVeFTl","path":"/","name":"original_referer"},{"host":"scribe.twitter.com","value":"4bfz%2B%2BmebEkRkMWFCXm%2FCUOsvDoVeFTl","path":"/","name":"original_referer"},{"host":".twitter.com","value":"BAh7CToPY3JlYXRlZF9hdGwrCDoVZ%252F4vAToMY3NyZl9pZCIlODE2MGI1ZjJh%250AYmViNDMwODMxNDlkN2U5ZDg5Yjk4ZmU6B2lkIiU2N2I4YjdmNGExNWFkNzlk%250AODI0MDVjMGM1NmMzYjVhYSIKZmxhc2hJQzonQWN0aW9uQ29udHJvbGxlcjo6%250ARmxhc2g6OkZsYXNoSGFzaHsABjoKQHVzZWR7AA%253D%253D--8b0d751e9774c5cfaa61fdec567cb782aa8757dd","path":"/","name":"_twitter_sess","httponly":true}
],"title":"Twitter"}],"session":{"state":"stopped","lastUpdate":1305658398727}}
and when I tried to load that with my program I got the error
Error: sessionstore.js(1): expected value
Since the file is formatted all on a single line, this meant the error could be anywhere in the file, so I ran it through a JavaScript beautifier, keeping the default options, pasted the results back into the original file and executed the program.
The formatted JSON is
{
"windows": [{
"tabs": [{
"entries": [{
"url": "about:home",
"title": "Mozilla Firefox Start Page",
"ID": 5,
"docshellID": 11,
"owner_b64": "NhAra3tiRRqhyKDUVsktxQAAAAAAAAAAwAAAAAAAAEYAAQAAAAAAAS8nfAAOr03buTZBMmukiq4HoizADOUR05MxABBLoP1AAAAAAAVhYm91dAAAAARob21l4NodcC97EdOM0ABgsPwUoweiLMAM5RHTkzEAEEug/UAAAAAADm1vei1zYWZlLWFib3V0AAAABGhvbWUAAAA=",
"docIdentifier": 5
}, {
"url": "http://www.google.co.uk/",
"title": "Google",
"ID": 6,
"docshellID": 11,
"docIdentifier": 6,
"children": [{
"url": "about:blank",
"ID": 7,
"docshellID": 12,
"owner_b64": "NhAra3tiRRqhyKDUVsktxQAAAAAAAAAAwAAAAAAAAEYAAQAAAAAAAd6UctCANBHTk5kAEEug/UAHoizADOUR05MxABBLoP1AAAAAAv////8AAABQAQAAABhodHRwOi8vd3d3Lmdvb2dsZS5jby51ay8AAAAAAAAABAAAAAcAAAAQAAAAB/////8AAAAH/////wAAAAcAAAAQAAAAFwAAAAEAAAAXAAAAAQAAABcAAAABAAAAGAAAAAAAAAAY/////wAAABf/////AAAAF/////8AAAAX/////wEAAAAAAAAAAAABAAA=",
"docIdentifier": 7,
"scroll": "0,0"
}],
"formdata": {
"#csi": "1",
"#hcache": "{\"BInSTfL-EtSt8QOl24nrCg\":[[69,{}],[14,{}],[60,{}],[81,{\"persisted\":true}],[42,{}],[43,{}],[83,{}],[95,{\"kfe\":{\"kfeHost\":\"clients1.google.co.uk\",\"kfeUrlPrefix\":\"/webpagethumbnail?r=2&f=2&s=300:585&query=&hl=en&gl=uk\",\"maxPrefetchConnections\":2,\"prefetch\":90,\"slowConnection\":false},\"logging\":{\"csiFraction\":0.05,\"gen204Fraction\":0.05},\"msgs\":{\"loading\":\"Still loading...\",\"mute\":\"Mute\",\"noPreview\":\"Preview not available\",\"sound\":\"Sound:\",\"soundOff\":\"off\",\"soundOn\":\"on\",\"unmute\":\"Unmute\"},\"pb\":{\"desiredHeight\":585,\"desiredWidth\":300,\"minHeight\":200,\"minWidth\":300},\"time\":{\"hoverClose\":300,\"hoverModeTimeout\":60,\"hoverOpen\":125,\"loading\":100,\"longHoverOpen\":725,\"prefetchOnLoad\":3000,\"timeout\":2500}}],[78,{}],[25,{\"m\":{\"bks\":true,\"blg\":true,\"dsc\":true,\"evn\":true,\"frm\":true,\"isch\":true,\"klg\":true,\"mbl\":true,\"nws\":true,\"plcs\":true,\"ppl\":true,\"prc\":true,\"pts\":true,\"rcp\":true,\"shop\":true,\"vid\":true},\"t\":null}],[64,{}],[105,{}],[22,{\"m_errors\":{\"32\":\"Sorry, no more results to show.\",\"default\":\"<font color=red>Error:</font> The server could not complete your request. Try again in 30 seconds.\"},\"m_tip\":\"Click for more information\"}],[77,{}],[84,{}],[99,{}],[29,{\"mcr\":5}],[92,{\"avgTtfc\":2000,\"fd\":1000,\"fl\":true,\"focus\":true,\"hpt\":250,\"kn\":true,\"mds\":\"clir,clue,dfn,evn,frim,klg,prc,rl,show,sp,sts,ww,mbl_he,mbl_hs,mbl_re,mbl_rs,mbl_sv,isch\",\"msg\":{\"dym\":\"Did you mean:\",\"gs\":\"Google Search\",\"kntt\":\"Use the up and down arrow keys to select each result. Press Enter to go to the selection.\",\"sif\":\"Search instead for\",\"srf\":\"Showing results for\"},\"odef\":true,\"ophe\":true,\"pq\":true,\"rpt\":41,\"tct\":\" ?\",\"tdur\":50}],[24,{}],[38,{}]]}"
},
"scroll": "0,0"
}],
"index": 2,
"hidden": false,
"attributes": {
"image": "http://www.google.co.uk/favicon.ico"
},
"storage": {
"http://www.google.co.uk": {
"web-v": "12_c9c918f0"
}
}
}],
"selected": 1,
"_closedTabs": [],
"width": 994,
"height": 688,
"screenX": 1650,
"screenY": 24,
"sizemode": "normal",
"title": "Google"
}],
"selectedWindow": 0,
"_closedWindows": [{
"tabs": [{
"entries": [{
"url": "about:home",
"title": "Mozilla Firefox Start Page",
"ID": 0,
"docshellID": 5,
"owner_b64": "NhAra3tiRRqhyKDUVsktxQAAAAAAAAAAwAAAAAAAAEYAAQAAAAAAAS8nfAAOr03buTZBMmukiq4HoizADOUR05MxABBLoP1AAAAAAAVhYm91dAAAAARob21l4NodcC97EdOM0ABgsPwUoweiLMAM5RHTkzEAEEug/UAAAAAADm1vei1zYWZlLWFib3V0AAAABGhvbWUAAAA="
}, {
"url": "http://www.facebook.com/",
"title": "Welcome to Facebook - Log In, Sign Up or Learn More",
"ID": 1,
"docshellID": 5,
"docIdentifier": 1,
"formdata": {
"//xhtml:div[#id='reg_form_box']/xhtml:table/xhtml:tbody/xhtml:tr[6]/xhtml:td[2]/xhtml:div/xhtml:div/xhtml:select": 0,
"//xhtml:div[#id='reg_form_box']/xhtml:table/xhtml:tbody/xhtml:tr[6]/xhtml:td[2]/xhtml:div/xhtml:div/xhtml:select[2]": 0,
"#sex": 0,
"#birthday_month": 0,
"#birthday_day": 0,
"#birthday_year": 0
},
"scroll": "0,0"
}],
"index": 2,
"hidden": false,
"attributes": {
"image": "http://www.facebook.com/favicon.ico"
}
}, {
"entries": [{
"url": "http://twitter.com/",
"title": "Twitter",
"ID": 3,
"docshellID": 6,
"docIdentifier": 3,
"children": [{
"url": "http://api.twitter.com/receiver.html",
"ID": 4,
"docshellID": 7,
"referrer": "http://twitter.com/",
"docIdentifier": 4,
"scroll": "0,0"
}],
"formdata": {},
"scroll": "0,0"
}],
"index": 1,
"hidden": false,
"attributes": {
"image": "http://twitter.com/phoenix/favicon.ico"
}
}],
"selected": 2,
"_closedTabs": [],
"width": 994,
"height": 688,
"screenX": 1366,
"screenY": 307,
"sizemode": "normal",
"cookies": [{
"host": ".facebook.com",
"value": "J4-69",
"path": "/",
"name": "lsd"
}, {
"host": ".facebook.com",
"value": "http%3A%2F%2Fwww.facebook.com%2F",
"path": "/",
"name": "reg_fb_gate"
}, {
"host": ".facebook.com",
"value": "http%3A%2F%2Fwww.facebook.com%2F",
"path": "/",
"name": "reg_fb_ref"
}, {
"host": ".facebook.com",
"value": "994x624",
"path": "/",
"name": "wd"
}, {
"host": ".twitter.com",
"value": "43838368",
"path": "/",
"name": "__utmc"
}, {
"host": "twitter.com",
"value": "4bfz%2B%2BmebEkRkMWFCXm%2FCUOsvDoVeFTl",
"path": "/",
"name": "original_referer"
}, {
"host": "scribe.twitter.com",
"value": "4bfz%2B%2BmebEkRkMWFCXm%2FCUOsvDoVeFTl",
"path": "/",
"name": "original_referer"
}, {
"host": ".twitter.com",
"value": "BAh7CToPY3JlYXRlZF9hdGwrCDoVZ%252F4vAToMY3NyZl9pZCIlODE2MGI1ZjJh%250AYmViNDMwODMxNDlkN2U5ZDg5Yjk4ZmU6B2lkIiU2N2I4YjdmNGExNWFkNzlk%250AODI0MDVjMGM1NmMzYjVhYSIKZmxhc2hJQzonQWN0aW9uQ29udHJvbGxlcjo6%250ARmxhc2g6OkZsYXNoSGFzaHsABjoKQHVzZWR7AA%253D%253D--8b0d751e9774c5cfaa61fdec567cb782aa8757dd",
"path": "/",
"name": "_twitter_sess",
"httponly": true
}, {
"host": ".twitter.com",
"value": "43838368",
"path": "/",
"name": "__utmc"
}, {
"host": "twitter.com",
"value": "4bfz%2B%2BmebEkRkMWFCXm%2FCUOsvDoVeFTl",
"path": "/",
"name": "original_referer"
}, {
"host": "scribe.twitter.com",
"value": "4bfz%2B%2BmebEkRkMWFCXm%2FCUOsvDoVeFTl",
"path": "/",
"name": "original_referer"
}, {
"host": ".twitter.com",
"value": "BAh7CToPY3JlYXRlZF9hdGwrCDoVZ%252F4vAToMY3NyZl9pZCIlODE2MGI1ZjJh%250AYmViNDMwODMxNDlkN2U5ZDg5Yjk4ZmU6B2lkIiU2N2I4YjdmNGExNWFkNzlk%250AODI0MDVjMGM1NmMzYjVhYSIKZmxhc2hJQzonQWN0aW9uQ29udHJvbGxlcjo6%250ARmxhc2g6OkZsYXNoSGFzaHsABjoKQHVzZWR7AA%253D%253D--8b0d751e9774c5cfaa61fdec567cb782aa8757dd",
"path": "/",
"name": "_twitter_sess",
"httponly": true
}],
"title": "Twitter"
}],
"session": {
"state": "stopped",
"lastUpdate": 1305658398727
}
}
The error
Error: sessionstore.js(179): expected value
now identifies the fault as being on the third-last line, the one that reads "lastUpdate": 1305658398727. From what I've read about the JSON format, this sounds to me like a comma or bracket is missing from this line, but this is a file that has been produced by Mozilla to work with Firefox, and I don't believe that they would make a mistake like that, so I am led to believe that there is a problem with the JSON parser in Boost. Can anyone please confirm whether this is the case, or whether I'm the one doing something wrong?
I think the problem is that this value is bigger than an int or a double. I don't know what data type Boost's JSON parser uses for reading numbers. To test this, just change the number to be a string and parse it again. In the standard, numbers are not limited, but you have to select a data type to represent them, and maybe they selected double, clearly not enough for this number. I'll take a look to see if you can configure the type used for numbers.
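For example, on a scratch copy of the file the tail of the document would be changed to read:
"session": {
  "state": "stopped",
  "lastUpdate": "1305658398727"
}
If the parse then succeeds, the number handling is confirmed as the culprit.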
EDIT:
Looking again at the implementation, the "number" rule is implemented using Spirit as follows:
number
= strict_real_p
| int_p
;
Looking at Spirit, strict_real_p uses double as the underlying type, and int_p actually uses an int.
The bad news is that, from what I see in the code, this is not configurable, so you have to change the JSON that you want to parse.
After receiving answers from Diego Sevilla and c-smile, I did a bit of Googling to figure out how I would incorporate their suggestions into Boost, since changing the JSON file unfortunately isn't an option in my case. I came across this ticket on the Boost bug tracker that describes my exact problem; it has since been fixed and released with Boost 1.45. I, however, am using version 1.42 from the Ubuntu repositories, so I will need to install the newer version manually.
As Diego said, that is because 1305658398727 fits into neither the strict_real_p nor the int_p production.
I suspect you will need either another JSON parser or to modify the Spirit definitions yourself.
Either like this:
number
= strict_real_p
| int_p
| int64_p
;
or just as:
number
= real_p;
Ideally, dates/times in JSON should be represented as strings in ISO format; in that case you would not have such problems. I suspect the value there is just the number of milliseconds since 1970-01-01 (JavaScript Date.valueOf()).