Finding all obligations in spl-token-lending program - blockchain

If an Obligation becomes unhealthy, it can be liquidated by calling the LiquidateObligation instruction. However, I cannot liquidate an Obligation I don't know exists, and the process of finding them is still unclear to me.
What is the expected way for me to find all currently "working" Obligations?

The only way to get all of the Obligation accounts is to use the getProgramAccounts RPC endpoint with a filter, which fetches every account owned by the lending program that has a certain size. Since an Obligation has a size of 916 bytes according to the code: https://github.com/solana-labs/solana-program-library/blob/9123a80a6a5b5f8a378a56c4501f99df7debda55/token-lending/program/src/state/obligation.rs#L329, you can do:
curl YOUR_RPC_ENDPOINT_HERE -X POST -H "Content-Type: application/json" -d '
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "getProgramAccounts",
  "params": [
    "LENDING_PROGRAM_PUBKEY_IN_BASE_58",
    {
      "filters": [
        {
          "dataSize": 916
        }
      ]
    }
  ]
}
'
This was adapted from https://docs.solana.com/developing/clients/jsonrpc-api#example-35
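The same query can also be issued programmatically. Here is a minimal Node.js sketch of the request body (obligationQuery is a hypothetical helper name; the 916-byte size comes from the obligation.rs source linked above, so re-check it against the program version you target):

```javascript
// Sketch: build the same getProgramAccounts request body in Node.js.
// OBLIGATION_LEN (916) is taken from the lending program's obligation.rs;
// verify it matches the deployed program version you are querying.
const OBLIGATION_LEN = 916;

function obligationQuery(programId) {
  // Filter by exact account size so only Obligation accounts are returned.
  return {
    jsonrpc: "2.0",
    id: 1,
    method: "getProgramAccounts",
    params: [programId, { filters: [{ dataSize: OBLIGATION_LEN }] }],
  };
}

// The body can then be POSTed to your RPC endpoint, e.g. with fetch():
// fetch(rpcUrl, {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(obligationQuery(LENDING_PROGRAM_ID)),
// });
```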


GCP recommendation data format for catalog

I am currently working on Recommendations AI. Since I am new to GCP recommendation products, I have been struggling with the data format for the catalog. I read the documentation, and it says each product item's JSON should be on a single line.
I understand this, but it would be really helpful to see what the JSON format looks like in practice, because the one in the documentation is ambiguous to me, and I am trying to use the console to import data.
I tried to import the data shown below, but I got an "invalid JSON format" error 100 times, with lots of different reasons, such as unexpected tokens, elements that should be present, and so on.
[
  {
    "id": "1",
    "title": "Toy Story (1995)",
    "categories": [
      "Animation",
      "Children's",
      "Comedy"
    ]
  },
  {
    "id": "2",
    "title": "Jumanji (1995)",
    "categories": [
      "Adventure",
      "Children's",
      "Fantasy"
    ]
  },
  ...
]
Maybe it was because each item was not on a single line, but I am also wondering whether the above is enough for importing. I am not sure whether the data should be wrapped in another property, like
{
  "inputConfig": {
    "productInlineSource": {
      "products": [
        {
          "id": "1",
          "title": "Toy Story (1995)",
          "categories": [
            "Animation",
            "Children's",
            "Comedy"
          ]
        },
        {
          "id": "2",
          "title": "Jumanji (1995)",
          "categories": [
            "Adventure",
            "Children's",
            "Fantasy"
          ]
        }
      ]
    }
  }
}
I can see the above in the documentation, but it says it is for importing inline, which uses a POST request; it does not mention anything about importing with the console. I guess the format is also used for the console, but I am not 100% sure. That is why I am asking.
Is there anyone who can show me the entire data format for importing data using the console?
Problem Solved
For those who might have the same question: the exact data format you should import using the GCP console looks like
{"id":"1","title":"Toy Story (1995)","categories":["Animation","Children's","Comedy"]}
{"id":"2","title":"Jumanji (1995)","categories":["Adventure","Children's","Fantasy"]}
No square brackets wrapping the items.
No commas between items.
Each item on its own single line.
Posting this Community Wiki for better visibility.
The OP edited the question and added the solution:
The exact data format you should import using the GCP console looks like
{"id":"1","title":"Toy Story (1995)","categories":["Animation","Children's","Comedy"]}
{"id":"2","title":"Jumanji (1995)","categories":["Adventure","Children's","Fantasy"]}
No square brackets wrapping the items.
No commas between items.
Each item on its own single line.
However I'd like to elaborate a bit.
There are a few ways of importing catalog information:
Importing catalog data from Merchant Center
Importing catalog data from BigQuery
Importing catalog data from Cloud Storage
I guess this is what the OP used, as I was able to import a catalog using the UI and GCS with the JSON file below.
{
  "inputConfig": {
    "catalogInlineSource": {
      "catalogItems": [
        {"id":"111","title":"Toy Story (1995)","categories":["Animation","Children's","Comedy"]},
        {"id":"222","title":"Jumanji (1995)","categories":["Adventure","Children's","Fantasy"]},
        {"id":"333","title":"Test Movie (2020)","categories":["Adventure","Children's","Fantasy"]}
      ]
    }
  }
}
Importing catalog data inline
At the bottom of the Importing catalog information documentation you can find information:
The line breaks are for readability; you should provide an entire catalog item on a single line. Each catalog item should be on its own line.
It means you should use something similar to NDJSON, a convenient format for storing or streaming structured data that can be processed one record at a time.
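A quick way to produce that layout from an ordinary JSON array is to serialize each item separately and join the results with newlines. A minimal Node.js sketch (toNdjson is a hypothetical helper name):

```javascript
// Sketch: convert a JSON array of catalog items into the NDJSON layout the
// console import expects -- one item per line, no wrapping brackets, no commas.
function toNdjson(items) {
  return items.map(function (item) { return JSON.stringify(item); }).join("\n");
}

const catalog = [
  { id: "1", title: "Toy Story (1995)", categories: ["Animation", "Children's", "Comedy"] },
  { id: "2", title: "Jumanji (1995)", categories: ["Adventure", "Children's", "Fantasy"] },
];

// Prints the two-line format shown above.
console.log(toNdjson(catalog));
```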
If you would like to try the inline method, you should use this format. The line breaks below are for readability only; in the actual request each item should be on a single line.
data.json file
{
  "inputConfig": {
    "catalogInlineSource": {
      "catalogItems": [
        {
          "id": "1212",
          "category_hierarchies": [ { "categories": [ "Animation", "Children's" ] } ],
          "title": "Toy Story (1995)"
        },
        {
          "id": "5858",
          "category_hierarchies": [ { "categories": [ "Adventure", "Fantasy" ] } ],
          "title": "Jumanji (1995)"
        },
        {
          "id": "321123",
          "category_hierarchies": [ { "categories": [ "Comedy", "Adventure" ] } ],
          "title": "The Lord of the Rings: The Fellowship of the Ring (2001)"
        }
      ]
    }
  }
}
Command
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
  -H "Content-Type: application/json; charset=utf-8" \
  --data @./data.json \
  "https://recommendationengine.googleapis.com/v1beta1/projects/[your-project]/locations/global/catalogs/default_catalog/catalogItems:import"
{
"name": "import-catalog-default_catalog-1179023525XX37366024",
"done": true
}
Please keep in mind that the above method requires Service Account authentication; otherwise you will get a PERMISSION_DENIED error:
"message" : "Your application has authenticated using end user credentials from the Google Cloud SDK or Google Cloud Shell which are not supported by the translate.googleapis.com. We recommend that most server applications use service accounts instead. For more information about service accounts and how to use them in your application, see https://cloud.google.com/docs/authentication/.",
"status" : "PERMISSION_DENIED"

Google Admin SDK - Create posix attributes on existing user

I can't create posix attributes on an existing account in admin.google.com (also known as Google Cloud Identity / Google Directory) using the Admin SDK (Directory API).
To explain my issue, I will use the API tester : https://developers.google.com/admin-sdk/directory/v1/reference/users/update?apix=true
I use the update function to update an existing account without POSIX attributes.
To do that, I copy the request body below and use the request key testmdr#contoso.com:
{
  "posixAccounts": [
    {
      "username": "testmdr_contoso_com",
      "uid": "2147483645",  // I use an id between 65535 and 2147483647 (explained in the Google documentation)
      "gid": "1001",
      "homeDirectory": "/home/testmdr_contoso.com",
      "shell": "/bin/bash"
    }
  ]
}
I obtain a 503 error:
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "backendError",
        "message": "Service unavailable. Please try again"
      }
    ],
    "code": 503,
    "message": "Service unavailable. Please try again"
  }
}
If I update the name or other fields, it works.
If I update an existing POSIX attribute (existing because it was created on connecting to GCE using the OS Login functionality: here), it works.
Please help me figure out whether this is a limitation or a bug.
The requestKey should be the UUID of the user... There are probably better ways to do this, but you can get the username / name (requestKey/UUID) by querying the metadata on an OS Login-enabled instance, e.g. (first column is the username, second column is the requestKey for the API tester):
curl -s "http://metadata.google.internal/computeMetadata/v1/oslogin/users?pagesize=50&pagetoken=0" -H "Metadata-Flavor: Google" | \
jq -r '.loginProfiles[]|.posixAccounts[].username,.name' | \
paste - -
(You may have to play with the pagesize & pagetoken parameters)
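If you'd rather do the pairing in code instead of jq, a minimal Node.js sketch over the parsed metadata response could look like this (pairUsernames is a hypothetical helper; the field names mirror the jq filter above, loginProfiles / posixAccounts / username / name):

```javascript
// Sketch: pair each POSIX username with its profile name (the requestKey),
// equivalent to the jq pipeline above, given the parsed JSON response from
// the oslogin/users metadata endpoint.
function pairUsernames(metadata) {
  return metadata.loginProfiles.flatMap(function (profile) {
    // profile.name is the UUID to use as the requestKey in the API tester.
    return profile.posixAccounts.map(function (acct) {
      return [acct.username, profile.name];
    });
  });
}
```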

Not able to invoke erisdb.eventPoll rpc call

I have my blockchain running locally. I was using the node-json-rpc module to make RPC calls, and I was able to make a few calls like erisdb.getBlockchainInfo.
I tried the erisdb.eventSubscribe call :
client.call(
  {
    "jsonrpc": "2.0", "method": "erisdb.eventSubscribe", "params": {
      "event_id": "NewBlock"
    }, "id": "0"
  },
  function (err, res) { console.log(res); }
);
and it successfully returned a sub_id to me :
{ result: { sub_id: '7878EB2ECC668AEE19D958B89C4ED6E145D9298E91366D67F93CD2A20E995829' },
error: null,
id: '0',
jsonrpc: '2.0' }
I used that sub_id to invoke the erisdb.eventPoll call:
client.call(
  {
    "jsonrpc": "2.0", "method": "erisdb.eventPoll", "params": {
      "sub_id": "7878EB2ECC668AEE19D958B89C4ED6E145D9298E91366D67F93CD2A20E995829"
    }, "id": "1"
  },
  function (err, res) { console.log(res); }
);
but it is giving the following error :
{ result: null,
error:
{ code: -32603,
message: 'Subscription not active. ID: 7878EB2ECC668AEE19D958B89C4ED6E145D9298E91366D67F93CD2A20E995829' },
id: '1',
jsonrpc: '2.0' }
My eris-db version is 0.12.1.
We have two different APIs at the moment. The one you are using we call the 'v0' API; it is optimised for long-polling JavaScript clients. My guess is that your subscription is getting reaped after a hard-coded timeout that happens to be 10 seconds. Have you tried making the eventPoll call in quick succession after the eventSubscribe call?
This is the 'v0' reaping function: https://github.com/eris-ltd/eris-db/blob/master/event/event_cache.go#L72. It runs in a loop clearing out old subscriptions that have not been polled recently. If you have waited longer than 10 seconds before polling then your subscription has probably been reaped (deleted).
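A minimal Node.js sketch of that "subscribe, then poll immediately" sequence, assuming a connected node-json-rpc client as in the question (subscribeThenPoll is a hypothetical helper name):

```javascript
// Sketch: issue eventPoll right after eventSubscribe succeeds, so the first
// poll lands well inside the ~10 second reaping window. To keep the
// subscription alive you would repeat the poll every few seconds.
function subscribeThenPoll(client, eventId, cb) {
  client.call(
    { jsonrpc: "2.0", method: "erisdb.eventSubscribe",
      params: { event_id: eventId }, id: "0" },
    function (err, res) {
      if (err || (res && res.error)) return cb(err || res.error);
      var subId = res.result.sub_id;
      // Poll immediately -- before the reaper removes the subscription.
      client.call(
        { jsonrpc: "2.0", method: "erisdb.eventPoll",
          params: { sub_id: subId }, id: "1" },
        cb
      );
    }
  );
}
```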
We have another API optimised for chain administration called the 'tendermint' API (because of its heritage from the Tendermint consensus engine). It is something of a parallel API and it is used by the eris-pm tool. It also has a subscribe method accessible by a websocket endpoint. This might be useful for you because its subscriptions are never reaped.
You could try it out like this:
Start your chain:
$ eris chains start testchain
Get a simple websocket client:
$ go get github.com/raphael/wsc
Connect to the websocket endpoint:
$ wsc ws://0.0.0.0:46657/websocket
2017/01/21 01:03:51 connecting to ws://0.0.0.0:46657/websocket...
2017/01/21 01:03:51 ready, exit with CTRL+C.
Subscribe to the NewBlock event by pasting { "jsonrpc": "2.0", "method": "subscribe", "params": ["NewBlock"] } into the terminal as a single line:
>> { "jsonrpc": "2.0", "method": "subscribe", "params": ["NewBlock"] }
Then you should receive a stream of new block events (about 1 per second) like:
<< {"jsonrpc":"2.0","id":"#event","result":[19 {"event":"NewBlock","data":[1,{"block":{"header":{"chain_id":"testchain","height":206320,"time":"2017-01-21T01:04:01.095Z","num_txs":0,"last_block_hash":"2DB0D0AE6D92DA6DA07F8E7D1605AAB6CB96D8D2","last_block_parts":{"total":1,"hash":"A4AD1708714CF0BE3E5125B65F495DDDFA1ED8D9"},"last_commit_hash":"4C301C0367B7CECDD4E00C955D2F155802B2377E","data_hash":"","validators_hash":"46E43215C6C332446114BF7320D2D007114C5EEB","app_hash":"9A72DE9AAD6BD820A64DB98462CD706594217E1
<< 1"},"data":{"txs":[]},"last_commit":{"precommits":[{"height":206319,"round":0,"type":2,"block_hash":"2DB0D0AE6D92DA6DA07F8E7D1605AAB6CB96D8D2","block_parts_header":{"total":1,"hash":"A4AD1708714CF0BE3E5125B65F495DDDFA1ED8D9"},"signature":"45A6C3D0B0BD380A239F014681A29FD6217B52653CC7FC189FF5B7DC840A61062CF12FC652687A30A5CBBF0270937F32542D6075BA94A12180568560B322EC07"}]}}}]}],"error":""}
You could use a programmatic websocket client of your choice to interact with the chain using this websocket API and your subscriptions will never get reaped.
There is a grand unification of these APIs planned soon that should make them easier to use and better documented.
If you need help debugging, join us on: https://monax.slack.com/

Trello API: getting boards / lists / cards information

Using the Trello API:
- I've been able to get all the cards that are assigned to a Trello user
- I've been able to get all the boards that are assigned to an Organization
But I can't find an API call that returns all the lists in an Organization or for a User.
Is there a function that allows that?
Thanks in advance
For users who want the easiest way to access the id of a list:
Use the ".json" hack!
Add ".json" at the end of your board URL to display, in your browser, the same output as the API query for that board (no other tool needed, no hassle dealing with authentication).
For instance, if the URL of your board is:
https://trello.com/b/EI6aGV1d/blahblah
point your browser to
https://trello.com/b/EI6aGV1d/blahblah.json
And you will obtain something like
{
  "id": "5a69a1935e732f529ef0ad8e",
  "name": "blahblah",
  "desc": "",
  "descData": null,
  "closed": false,
  [...]
  "cards": [
    {
      "id": "5b2776eba95348dd45f6b745",
      "idMemberCreator": "58ef2cd98728a111e6fbd8d3",
      "data": {
        "list": {
          "name": "Bla blah blah blah blah",
          "id": "5a69a1b82f62a7af027d0378"
        },
        "board": {
          [...]
Where you can just search for the name of your list to easily find its id next to it.
Tip: use a JSON viewer extension to make your browser display nicely formatted JSON. Personally I use https://github.com/tulios/json-viewer/tree/0.18.0, but I guess there are a lot of good alternatives out there.
I don't believe there is a method in the Trello API to do this, so you'll have to get a list of boards for a user or organization:
GET /1/members/[idMember or username]/boards
Which returns (truncated to show just the parts we care about):
[{
"id": "4eea4ffc91e31d1746000046",
"name": "Example Board",
"desc": "This board is used in the API examples",
...
"shortUrl": "https://trello.com/b/OXiBYZoj"
}, {
"id": "4ee7e707e582acdec800051a",
"name": "Public Board",
"desc": "A board that everyone can see",
...
"shortUrl": "https://trello.com/b/IwLRbh3F"
}]
Then get the lists for each board:
GET /1/boards/[board_id]/lists
Which returns (truncated to only show the list id and name):
[{
"id": "4eea4ffc91e31d174600004a",
"name": "To Do Soon",
...
}, {
"id": "4eea4ffc91e31d174600004b",
"name": "Doing",
...
}, {
"id": "4eea4ffc91e31d174600004c",
"name": "Done",
...
}]
And go through this response for each board to build a list of all the lists a user or organization has.
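The two calls above can be chained to gather every list in one pass. A minimal Node.js sketch, with the HTTP layer abstracted behind a getJson(path) function you would implement with your key and token (allLists and getJson are hypothetical names; the paths are the ones shown above):

```javascript
// Sketch: collect every list a member can see by fetching their boards and
// then the lists of each board. `getJson(path)` is any function that performs
// an authenticated GET against the Trello API and returns the parsed JSON.
async function allLists(memberId, getJson) {
  const boards = await getJson(`/1/members/${memberId}/boards`);
  const lists = [];
  for (const board of boards) {
    // One request per board, as described above.
    lists.push(...await getJson(`/1/boards/${board.id}/lists`));
  }
  return lists;
}
```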
You can do it by calling
GET /1/organizations/[idOrg]/boards?lists=all
It is here: https://developers.trello.com/advanced-reference/organization#get-1-organizations-idorg-or-name-boards
Look at the arguments: there are several filters and fields you can customize.
To get all your boards, use
Trello.get("/members/me/boards")
This worked for me using client.js.
Here is a quick-and-dirty bash script using curl and jq to get any given board's ID or list's ID:
key="<your-key>"
token="<your-token>"
trelloUsername="<your-trello-username>"
boardName="<board-name>"
listName="<list-name>"
boardID=$(curl -s --request GET --url "https://api.trello.com/1/members/$trelloUsername/boards?key=$key&token=$token" --header 'Accept: application/json' | jq -r ".[] | select(.name == \"$boardName\").id")
echo "boardID: ${boardID}"
listID=$(curl -s --request GET --url "https://api.trello.com/1/boards/$boardID/lists?key=$key&token=$token" | jq -r ".[] | select(.name == \"$listName\").id")
echo "listID: ${listID}"
Example output:
boardID: 5eab513c719d2d681bafce0e
listID: 5eab519e66dd4272gb720e22

Riak MapReduce - map works, reduce receives very small subset of results

I'm using Riak 2.0.0b1 on Ubuntu 12.10 (up to date). This is a developer box, so I have only one Riak instance - no clusters, etc.
I've put about 100k JSON documents (about 300 bytes each) into a bucket and am now trying to mapreduce over it. The data is random, and I've also got a 2i index on one of the keys, which basically divides the dataset into 10 almost even parts of ~10k documents each.
This query works as expected:
curl -XPOST -d'{
"inputs": {"bucket": "bucket", "index": "idx_bin", "key": "10"},
"query": [
{
"map": {
"language": "javascript",
"source": "Riak.mapValuesJson"
}
}
]
}' http://localhost:8080/mapred -H 'Content-Type: application/json' | python -m json.tool | egrep '^ {4}\{' | wc -l
9974
Got about ~10k results. Now if I want to do something in the reduce step, I get an answer which doesn't make sense:
curl -XPOST -d'{
"inputs": {"bucket": "bucket", "index": "idx_bin", "key": "10"},
"query": [
{
"map": {
"language": "javascript",
"source": "Riak.mapValuesJson"
}
},
{
"reduce": {
"language": "javascript",
"source": "function(o) { return [o.length] }"
}
}
]
}' http://localhost:8080/mapred -H 'Content-Type: application/json' | python -m json.tool
[
15
]
I'd like to see an error here if I'm reaching some (un)documented limit, or else the full list of objects, not 15. (This number differs between runs; sometimes there are a couple more.) I went to the configs and did this:
javascript.map_pool_size = 64
javascript.reduce_pool_size = 64
javascript.maximum_stack_size = 32MB
javascript.maximum_heap_size = 64MB
Didn't help at all.
What is going on, and how do I get all the objects in the reduce phase?
The reduce function is called many times. The map function will run on about 1/3 of the vnodes in the cluster (that's ~22 times in a cluster with ring_size 64), and the reduce function will be called each time results are available from a map function, with its first argument being a list containing both the result from the previous run of the reduce function and the results from the map function. In your case, you counted the values returned from the first vnode; that count was then passed along as a value with the second vnode's results, where it was counted as only a single value.
What you will need to do is have the reduce function return a value/object that is easily differentiated from the other values, such as
function(o) {
  var prevCount = 0;
  var countObjects = 0;
  for (var i = 0; i < o.length; i++) {
    var e = o[i];
    if (typeof e === 'object' && typeof e.reduce_running_total === 'number') {
      prevCount += e.reduce_running_total;
      countObjects += 1;
    }
  }
  return [{"reduce_running_total": o.length + prevCount - countObjects}];
}
Or you could save some network bandwidth: instead of having the map phase return all of the objects, have the map function return a literal [1] for each key found; then the reduce function simply sums all the numbers in its input list and returns the total.
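That leaner counting approach can be sketched as a pair of functions like these (hypothetical names; in a real query each would be inlined as the "source" string of its phase). Summing is safely re-reducible, so the count stays correct no matter how many times the reduce phase runs:

```javascript
// Sketch: map phase emits a 1 per key instead of the whole document.
// Signature follows Riak's JS map convention: (value, keyData, arg).
function mapCountOne(value, keyData, arg) {
  return [1];
}

// Reduce phase sums whatever mix of fresh 1s and earlier partial sums it is
// handed; because addition is associative, repeated reduction gives the same
// final total as a single pass.
function reduceSum(values, arg) {
  return [values.reduce(function (a, b) { return a + b; }, 0)];
}
```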