NEAR FunctionCall `args` field - blockchain

In near_primitives::views, the args field on FunctionCall is represented as a String. In the chain data model (transaction::Action::FunctionCall), the same args field is a `Vec<u8>`.
The question is: will this args field always contain a valid JSON payload? We assume the answer is probably no, since the underlying field holds raw bytes.
Under which circumstances would it be a valid JSON string, and under which would it be a binary format?
Finally, if a binary format is possible (which seems likely), how can it be decoded? Is this in the developers' hands, and could it be any binary format?
See
https://github.com/near/nearcore/blob/14711926391d3ec1d23116658a295a62e77bc701/core/primitives/src/views.rs#L768
https://github.com/near/nearcore/blob/14711926391d3ec1d23116658a295a62e77bc701/core/primitives/src/transaction.rs#L113

In most cases, args will be a base64-encoded JSON string.
Here's an example of how we decode them on the NEAR Indexer for Explorer side:
ActionView::FunctionCall {
    method_name,
    args,
    gas,
    deposit,
} => {
    if let Ok(decoded_args) = base64::decode(args) {
        if let Ok(mut args_json) = serde_json::from_slice(&decoded_args) {
            escape_json(&mut args_json);
            arguments["args_json"] = args_json;
        }
    }
}
Is this in the developers' hands, and could it be any binary format?
Yes.
Rainbow Bridge-related transactions have Borsh-serialized args, which cannot be decoded into JSON.
ref: https://github.com/near/near-indexer-for-explorer/blob/master/src/models/serializers.rs#L94-L103

args are not limited to any format at all; they are just a binary blob. What you see in views.rs is partially serialized data where args is expected to be base64-encoded, which is why it is a String (so it is always base64 data there, whether the underlying bytes are JSON, Borsh-serialized data, or just a raw binary blob, e.g. a PNG image).
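To make that concrete, here is a minimal Python sketch of the decoding flow described above (the decode_args helper and the returned dictionary shape are illustrative assumptions, not the indexer's actual API): base64-decode args, try JSON, and fall back to the raw blob when the bytes are Borsh or some other binary format.

import base64
import json

def decode_args(args_b64):
    # Illustrative helper, not the indexer's actual code: the view
    # always carries base64, but the decoded bytes may not be JSON.
    raw = base64.b64decode(args_b64)
    try:
        return {"args_json": json.loads(raw)}
    except ValueError:
        # Not JSON: could be Borsh-serialized data or any binary payload.
        return {"args_base64": args_b64}

# '{"account_id":"alice.near"}' base64-encoded:
print(decode_args("eyJhY2NvdW50X2lkIjoiYWxpY2UubmVhciJ9"))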

Related

How to read field type from boost property tree

I'm using Boost property trees to read values from a JSON file.
{
    "some_values":
    {
        "field_1": "value_1",
        "field_2": true
    }
}
I can read the values with:
spTree->get<string>("some_values.field_1", "");
spTree->get<bool>("some_values.field_2", false);
But can I read the type of the variable stored in any given field?
Documentation says
[...] the following JSON / property tree mapping is used:
[...]
JSON values are mapped to nodes containing the value. However, all type information is lost; numbers, as well as the literals "null", "true" and "false" are simply mapped to their string form.
Property tree nodes containing both child nodes and data cannot be mapped.
So there is no way to recover the type if you use the JSON parser, unless you write your own code or add additional metadata.
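Since the parser reduces every JSON value to its string form, any type recovery has to be a heuristic over that string. As a language-agnostic illustration of the "write your own code" route, here is a minimal sketch in Python (in C++ you would chain get_optional<T> attempts the same way); note that it cannot distinguish the string "true" from the boolean true, which is exactly the information the parser discarded:

def sniff_type(stored):
    # Guess the original JSON type from the string the parser kept.
    # This is only a heuristic; the real type information is gone.
    if stored in ("true", "false"):
        return "bool"
    if stored == "null":
        return "null"
    try:
        int(stored)
        return "int"
    except ValueError:
        pass
    try:
        float(stored)
        return "double"
    except ValueError:
        pass
    return "string"

print(sniff_type("true"))     # bool
print(sniff_type("3.14"))     # double
print(sniff_type("value_1"))  # string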

How to encode an attachment to Base64 in Jmeter?

I'm trying to encode a file to Base64 in JMeter to test a web service, using the following script:
String file = FileUtils.readFileToString(new File("${filepath}"), "UTF-8");
vars.put("file", new String(Base64.encodeBase64(file.getBytes("UTF-8"))));
This works fine for plain-text files but does not work correctly for other file types.
How can I make it work for other file types?
JMeter has many options for converting a variable to Base64; below are a few:
BeanShell PreProcessor
BeanShell PostProcessor
BeanShell Sampler
Below is the BeanShell code, used in a BeanShell PreProcessor, to convert a variable to Base64:
import org.apache.commons.codec.binary.Base64;
String emailIs= vars.get("session");
byte[] encryptedUid = Base64.encodeBase64(emailIs.getBytes());
vars.put("genStringValue",new String(encryptedUid));
Example:
Before Base64: jdwqoeendwqqm12sdw
After Base64: amR3cW9lZW5kd3FxbTEyc2R3
As Groovy is now the preferred JSR223 scripting language in every JMeter Sampler, Pre- & PostProcessor, Listener, Assertion, etc., this task is pretty easy.
def fileAsBase64 = new File("${filepath}").bytes.encodeBase64().toString()
vars.put("file", fileAsBase64)
Or as one liner:
vars.put("file", new File("${filepath}").bytes.encodeBase64().toString())
That's it.
Use the function ${__base64Encode(nameofthestring)} to encode the string value and ${__base64Decode(nameofthestring)} to decode the string value.
Your issue comes from the fact that you're treating binary files as text when reading them, which is wrong.
Use this for reading the file:
https://commons.apache.org/proper/commons-io/apidocs/org/apache/commons/io/FileUtils.html#readFileToByteArray(java.io.File)
Then encode the byte array in Base64
Try this
import org.apache.commons.codec.binary.Base64;
String forEncoding = vars.get("userName") + ":" + vars.get("passWord");
byte[] token = Base64.encodeBase64(forEncoding.getBytes());
vars.put("token", new String(token));
An alternative is to create a Groovy function directly in a User Defined Variables element:
${__groovy(new File(vars.get("filepath")).bytes.encodeBase64().toString(),)}

Django: Localization Issue

In my application, I have a dictionary of phrases that are used throughout the application. This same dictionary is used to create PDFs and Excel spreadsheets.
The dictionary looks like so:
GLOBAL_MRD_VOCAB = {
    'fiscal_year': _('Fiscal Year'),
    'region': _('Region / Focal Area'),
    'prepared_by': _('Preparer Name'),
    'review_cycle': _('Review Period'),
    ... snip ...
}
In the code to produce the PDF, I have:
fy = dashboard_v.fiscal_year
fy_label = GLOBAL_MRD_VOCAB['fiscal_year']
rg = dashboard_v.dashboard.region
rg_label = GLOBAL_MRD_VOCAB['region']
rc = dashboard_v.review_cycle
rc_label = GLOBAL_MRD_VOCAB['review_cycle']
pb = dashboard_v.prepared_by
pb_label = GLOBAL_MRD_VOCAB['prepared_by']
Now, when the PDF is produced, I don't see these labels in the PDF; instead, I see:
<django.utils.functional.__proxy__ object at 0x10106fdd0>
Can somebody help me with this? How do I get the properly translated labels?
Thanks
Eric
"Lazy translation"
The result of a ugettext_lazy() call can be used wherever you would use a unicode string (an object with type unicode) in Python. If you try to use it where a bytestring (a str object) is expected, things will not work as expected, since a ugettext_lazy() object doesn't know how to convert itself to a bytestring. You can't use a unicode string inside a bytestring, either, so this is consistent with normal Python behavior.
...
If you ever see output that looks like "hello <django.utils.functional...>", you have tried to insert the result of ugettext_lazy() into a bytestring. That's a bug in your code.
Either pass it to unicode() to get the unicode from it, or don't use lazy translation.
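A minimal sketch of that fix, applied to the question's own labels (force_text is the Django helper that evaluates the lazy proxy; on the Python 2-era Django in the question, plain unicode() also works, and on Django 3+ the helper is named force_str):

from django.utils.encoding import force_text  # force_str on Django 3+

# Coerce the lazy proxies to real unicode strings at the point where
# the PDF code needs text, so the active language is applied here.
fy_label = force_text(GLOBAL_MRD_VOCAB['fiscal_year'])  # u'Fiscal Year'
rg_label = force_text(GLOBAL_MRD_VOCAB['region'])       # u'Region / Focal Area'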

Basic JSON > struct question (using 'Go')

I'm working with Twitter's API, trying to get the JSON data from
http://search.twitter.com/trends/current.json
which looks like:
{"as_of":1268069036,"trends":{"2010-03-08 17:23:56":[{"name":"Happy Women's Day","query":"\"Happy Women's Day\" OR \"Women's Day\""},{"name":"#MusicMonday","query":"#MusicMonday"},{"name":"#MM","query":"#MM"},{"name":"Oscars","query":"Oscars OR #oscars"},{"name":"#nooffense","query":"#nooffense"},{"name":"Hurt Locker","query":"\"Hurt Locker\""},{"name":"Justin Bieber","query":"\"Justin Bieber\""},{"name":"Cmon","query":"Cmon"},{"name":"My World 2","query":"\"My World 2\""},{"name":"Sandra Bullock","query":"\"Sandra Bullock\""}]}}
My structs look like:
type trend struct {
    name  string
    query string
}
type trends struct {
    id            string
    arr_of_trends []trend
}
type Trending struct {
    as_of      string
    trends_obj trends
}
and then I parse the JSON into a variable of type Trending. I'm very new to JSON, so my main concern is making sure I have the data structures set up correctly to hold the returned JSON data.
I'm writing this in 'Go' for a project for school. (This is not part of a particular assignment, just something I'm demo-ing for a presentation on the language)
UPDATE: In accordance with PeterSO's comment, I'm going the regexp route, using:
Cur_Trends := new(Current)
/* unmarshal the JSON into our structures */
//find proper json time-name
aoUnixTime, _, _ := os.Time()
// insert code to find and convert as_of Unix time to aoUnixTime
aoName := time.SecondsToUTC(aoUnixTime).Format(`"2006-01-02"`)
fmt.Printf("%s\n", aoName)
regexp_pattern := "/" + aoName + "/"
regex, _ := regexp.Compile(regexp_pattern);
cleaned_json := regex.ReplaceAllString(string(body2), "ntrends")
os.Stdout.WriteString(cleaned_json)
Doesn't show any changes. Am I specifying the regexp wrong? It seems like 'Go' only allows for one regexp at a time...
UPDATE:
I can now change the date/time to "ntrends", but the "Unmarshaling" isn't working. I can get everything moved into an interface using json.Decode, but I can't iterate through it...
I guess my new question is, How do I iterate through:
map[as_of:1.268176902e+09 trends:map[ntrends:[map[name:#nowplaying query:#nowplaying] map[name:#imtiredofseeing query:#imtiredofseeing] map[name:#iWillNever query:#iWillNever] map[name:#inmyfamily query:#inmyfamily] map[name:#raiseyourhandif query:#raiseyourhandif] map[name:#ripbig query:#ripbig] map[name:QVC query:QVC] map[name:#nooffense query:#nooffense] map[name:#RIPLaylaGrace query:#RIPLaylaGrace] map[name:Justin Bieber query:"Justin Bieber"]]]]
using "for...range" is giving me weird stuff...
Twitter is famous for its Fail Whale, and the Twitter API gets a failing grade too; it's horrible.
The Twitter trends current Search API method response can be expressed (using just two trends to simplify the examples) in canonical, normalized JSON form as:
{
    "as_of":1268069036,
    "trends":[
        {"name":"Happy Women's Day","query":"\"Happy Women's Day\" OR \"Women's Day\""},
        {"name":"#MusicMonday","query":"#MusicMonday"}
    ]
}
The as_of date is in Unix time, the number of seconds since 1/1/1970.
In Go, this can be described by:
type Trend struct {
    Name  string
    Query string
}
type Current struct {
    As_of  int64
    Trends []Trend
}
Twitter mangles the canonical, normalized JSON form to become:
{
    "as_of":1268069036,
    "trends":{
        "2010-03-08 17:23:56":[
            {"name":"Happy Women's Day","query":"\"Happy Women's Day\" OR \"Women's Day\""},
            {"name":"#MusicMonday","query":"#MusicMonday"}
        ]
    }
}
Sometimes, Twitter returns this equivalent JSON form.
{
    "trends":{
        "2010-03-08 17:23:56":[
            {"name":"Happy Women's Day","query":"\"Happy Women's Day\" OR \"Women's Day\""},
            {"name":"#MusicMonday","query":"#MusicMonday"}
        ]
    },
    "as_of":1268069036
}
"2010-03-08 17:23:56": is a JSON object name. However, it's -- nonsensically -- a string form of as_of.
If we replace "2010-03-08 17:23:56": by the object name "ntrends": (for nested trends), overwriting the redundant as_of string time, we have the following revised Twitter JSON form:
{
    "as_of":1268069036,
    "trends":{
        "ntrends":[
            {"name":"Happy Women's Day","query":"\"Happy Women's Day\" OR \"Women's Day\""},
            {"name":"#MusicMonday","query":"#MusicMonday"}
        ]
    }
}
It's easy to scan the Twitter JSON form for "as_of":, read the following number as the as_of Unix time, and convert it to JSON name form e.g.:
var aoUnixTime int64
// insert code to find and convert as_of Unix time to aoUnixTime
aoName := time.Unix(aoUnixTime, 0).UTC().Format(`"2006-01-02 15:04:05":`)
Now we can scan the Twitter JSON form for the aoName value and replace it with "ntrends": to get the revised Twitter JSON form.
In Go, the revised Twitter JSON form can be described by:
type Trend struct {
    Name  string
    Query string
}
type NTrends struct {
    NTrends []Trend
}
type Current struct {
    As_of  int64
    Trends NTrends
}
Note: the first character of each struct and field identifier is uppercase so that the identifiers are exported.
I've programmed and tested this approach and it seems to work. Since this is a school project for you, I haven't published my code.
Ugh, this seems like JSON that Go can't parse. Twitter pulls this kind of weird stuff all the time in their API.
The 'trends' object is a map from date objects to an array of trending topics. Unfortunately the Go JSON parser isn't smart enough to handle this.
In the meantime you could manually parse this format, or just do a regular expression search for the topics.
Either way, I'd post this as a Go issue and see what they say:
http://code.google.com/p/go/issues/list
Revision to earlier answer.
The Twitter Search API Method trends response is in convenient canonical and normalized JSON form:
{"trends":[{"name":"#amazonfail","url":"http:\/\/search.twitter.com\/search?q=%23amazonfail"},... truncated ...],"as_of":"Mon, 13 Apr 2009 20:48:29 +0000"}
The Twitter Search API Methods trends current, daily and weekly responses are, however, in an unnecessarily inconvenient JSON form similar to:
{"trends":{"2009-03-19 21:00":[{"query":"AIG","name":"AIG"},... truncated ...],... truncated ...},"as_of":1239656409}
In clear violation of the rules for the encapsulation of algorithms and data structures, this unnecessarily discloses that currently these methods use a map or dictionary for their implementation.
To read the JSON data from Twitter current trends into Go data structures, using the json package, we can do something similar to the following.
package main
import (
    "encoding/json"
    "fmt"
)
type Trend struct {
    Name  string
    Query string
}
type Current struct {
    As_of  int64
    Trends map[string][]Trend
}
var currentTrends = `{"as_of":1268069036,"trends":{"2010-03-08 17:23:56":[{"name":"Happy Women's Day","query":"\"Happy Women's Day\" OR \"Women's Day\""},{"name":"#MusicMonday","query":"#MusicMonday"},{"name":"#MM","query":"#MM"},{"name":"Oscars","query":"Oscars OR #oscars"},{"name":"#nooffense","query":"#nooffense"},{"name":"Hurt Locker","query":"\"Hurt Locker\""},{"name":"Justin Bieber","query":"\"Justin Bieber\""},{"name":"Cmon","query":"Cmon"},{"name":"My World 2","query":"\"My World 2\""},{"name":"Sandra Bullock","query":"\"Sandra Bullock\""}]}}`
func main() {
    var ctVal Current
    // encoding/json matches the JSON names ("as_of", "name", "query")
    // to the exported struct fields case-insensitively.
    if err := json.Unmarshal([]byte(currentTrends), &ctVal); err != nil {
        fmt.Println("Unmarshal error:", err)
    }
    fmt.Println(ctVal)
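    // To iterate the result (the "for...range" part of the question):
    // Trends maps the date string to a slice of Trend, so range over
    // the map first and then over each inner slice.
    for date, trends := range ctVal.Trends {
        fmt.Println("trends as of", date)
        for _, t := range trends {
            fmt.Println("  ", t.Name, "=>", t.Query)
        }
    }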
}

How do I read binary C++ protobuf data using Python protobuf?

The Python version of Google protobuf gives us only:
SerializeAsString()
Where as the C++ version gives us both:
SerializeToArray(...)
SerializeAsString()
We're writing the file from our C++ code in binary format, and we'd like to keep it this way. That said, is there a way of reading the binary data into Python and parsing it as if it were a string?
Is this the correct way of doing it?
binary = get_binary_data()
binary_size = get_binary_size()
string = None
for i in range(len(binary_size)):
    string += i
message = new MyMessage()
message.ParseFromString(string)
Update:
Here's a new example, and a problem:
message_length = 512
file = open('foobars.bin', 'rb')
eof = False
while not eof:
    data = file.read(message_length)
    eof = not data
    if not eof:
        foo_bar = FooBar()
        foo_bar.ParseFromString(data)
When we get to the foo_bar.ParseFromString(data) line, I get this error:
Exception Type: DecodeError
Exception Value: Too many bytes when decoding varint.
Update 2:
It turns out that the padding on the binary data was throwing protobuf off; too many bytes were being sent in, as the message suggests (in this case, the extra bytes were the padding).
This padding comes from using the C++ protobuf function SerializeToArray on a fixed-length buffer. To eliminate it, I have used this temporary code:
message_length = 512
file = open('foobars.bin', 'rb')
eof = False
while not eof:
    data = file.read(message_length)
    eof = not data
    string = ''
    for i in range(0, len(data)):
        byte = data[i]
        if byte != '\xcc':  # yuck!
            string += data[i]
    if not eof:
        foo_bar = FooBar()
        foo_bar.ParseFromString(string)
There is a design flaw here, I think. I will re-implement my C++ code so that it writes variable-length records to the binary file. As advised by the protobuf documentation, I will prefix each message with its binary size, so that I know how much to read when opening the file in Python.
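A minimal sketch of that length-prefixed scheme on the Python side (the 4-byte little-endian prefix and the write_message/read_messages helper names are assumptions for illustration; protobuf itself does not define a framing format):

import struct

def write_message(f, message):
    # Prefix each serialized message with a 4-byte little-endian
    # length (one possible convention, chosen here for illustration).
    payload = message.SerializeToString()
    f.write(struct.pack('<I', len(payload)))
    f.write(payload)

def read_messages(f, message_type):
    # Read the length prefix, then exactly that many bytes; no
    # fixed-size padding is ever written, so nothing must be stripped.
    while True:
        header = f.read(4)
        if len(header) < 4:
            break  # clean end of file
        (size,) = struct.unpack('<I', header)
        message = message_type()
        message.ParseFromString(f.read(size))
        yield message

# Usage, assuming a generated FooBar message class:
# with open('foobars.bin', 'rb') as f:
#     for foo_bar in read_messages(f, FooBar):
#         print(foo_bar)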
I'm not an expert with Python, but you can pass the result of a file.read() operation into message.ParseFromString(...) without having to build a new string type or anything.
Python strings can contain any character, i.e. they are capable of holding "binary" data directly. There should be no need to convert from string to "binary".
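For example (a sketch using the MyMessage type from the question; on Python 3, reading a binary-mode file returns bytes, which ParseFromString accepts just the same):

message = MyMessage()
with open('message.bin', 'rb') as f:  # hypothetical file name
    message.ParseFromString(f.read())  # raw file contents parse directly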