I'm trying to play with web3.js on the Binance Smart Chain blockchain and I've hit a wall understanding the transaction data.
Looking at this transaction, for example, there are three token transfers (listed under "Tokens Transferred"); most of the time there are two (I've seen 2, 3, and 5 so far).
I don't understand what determines the number of transfers in a single transaction, or how to retrieve that data using web3.js.
I would like to know the amount of BNB paid and the amount of tokens received in that transaction, and vice versa if the transaction was a sell instead of a buy.
I managed to get the price paid and the token amount, but only for transactions with 2 token transfers. If there are 3 or more, I can't get this information.
web3.eth.getTransaction('0x899e7f3c2138d051eb5246850ded99d519ab65eba58e5f806245cf346ab40e83').then((result) => {
  console.log(result);
  console.log(web3.utils.fromWei(result.value));

  let tx_data = result.input;
  let input_data = '0x' + tx_data.slice(10); // get only data without function selector

  let params = web3.eth.abi.decodeParameters([
    {
      indexed: false,
      internalType: 'uint256',
      name: 'value',
      type: 'uint256'
    },
    {
      indexed: false,
      internalType: 'uint256',
      name: 'ethReceived',
      type: 'uint256'
    },
  ], input_data);

  console.log(params);
})
This code gives me the data only for transactions with 2 token transfers. How can I make it always return the amount of BNB/tokens paid and received, no matter how many transfers the transaction contains? Is that possible? From what I can see, the 1st and the last transfer in the transaction are always the values I'm interested in. Is there an easy way to get those? I'm struggling with understanding this and with working with the ABIs for decoding. Can they be somewhat generic?
The "Tokens Transferred" information comes from event logs. Most token standards define an event Transfer(address indexed from, address indexed to, uint256 value), so you can look for logs of this event in the transaction.
Event logs are available in getTransactionReceipt(), not the regular getTransaction().
The indexed modifier in the event definition means that the value is available in the topics property (topics[0] is the keccak256 hash of the event signature, followed by the indexed values). The "unindexed" values are stored in the data property, ordered according to their order of definition.
const transferEventSignature = web3.utils.keccak256('Transfer(address,address,uint256)'); // 0xddf252...
const jsonAbi = [{
  "constant": true,
  "inputs": [],
  "name": "decimals",
  "outputs": [{ "name": "", "type": "uint8" }],
  "type": "function"
}]; // simplified JSON ABI that is only able to read decimals

web3.eth.getTransactionReceipt('0x899e7f3c2138d051eb5246850ded99d519ab65eba58e5f806245cf346ab40e83').then(async (result) => {
  for (const log of result.logs) {
    if (log.topics[0] !== transferEventSignature) {
      continue; // only interested in Transfer events
    }

    const from = web3.eth.abi.decodeParameter('address', log.topics[1]);
    const to = web3.eth.abi.decodeParameter('address', log.topics[2]);
    const value = web3.eth.abi.decodeParameter('uint256', log.data);

    const tokenContractAddress = log.address;
    const contractInstance = new web3.eth.Contract(jsonAbi, tokenContractAddress);
    const decimals = await contractInstance.methods.decimals().call();

    console.log('From: ', from);
    console.log('To: ', to);
    console.log('Value: ', value);
    console.log('Token contract: ', tokenContractAddress);
    console.log('Token decimals: ', decimals);
    console.log('---');
  }
});
Output:
From: 0xC6A93610eCa5509E66f9B2a95A5ed1d576cC9b7d
To: 0xE437fFf464c6FF2AA5aD5c15B4CCAD98DF38cF52
Value: 31596864050517135
Token contract: 0x78F1A99238109C4B834Ac100d1dfCf14e3fC321C
Token decimals: 9
---
From: 0xE437fFf464c6FF2AA5aD5c15B4CCAD98DF38cF52
To: 0x58F876857a02D6762E0101bb5C46A8c1ED44Dc16
Value: 4064578781674512
Token contract: 0xbb4CdB9CBd36B01bD1cBaEBF2De08d9173bc095c
Token decimals: 18
---
From: 0x58F876857a02D6762E0101bb5C46A8c1ED44Dc16
To: 0xC6A93610eCa5509E66f9B2a95A5ed1d576cC9b7d
Value: 2552379452401563824
Token contract: 0xe9e7CEA3DedcA5984780Bafc599bD69ADd087D56
Token decimals: 18
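To get at the amounts from the question (BNB paid in, tokens received, or the reverse for a sell), one option is to collect all decoded Transfer events into an array and look at the first and the last one, which matches the pattern you observed on the explorer. A small sketch building on the loop above; the assumption that the first transfer is what was paid and the last one is what was received holds for simple swaps, but is not guaranteed for every contract:

// Sketch: collect decoded Transfer events, then inspect the first and last one.
// Assumes the same `web3` instance and decoding approach as above.
async function getFirstAndLastTransfer(txHash) {
  const receipt = await web3.eth.getTransactionReceipt(txHash);
  const transferEventSignature = web3.utils.keccak256('Transfer(address,address,uint256)');

  const transfers = receipt.logs
    .filter((log) => log.topics[0] === transferEventSignature && log.topics.length === 3)
    .map((log) => ({
      token: log.address,
      from: web3.eth.abi.decodeParameter('address', log.topics[1]),
      to: web3.eth.abi.decodeParameter('address', log.topics[2]),
      value: web3.eth.abi.decodeParameter('uint256', log.data),
    }));

  if (transfers.length === 0) {
    return null; // no token transfers in this transaction
  }
  // First and last transfer, regardless of how many transfers there are.
  return { first: transfers[0], last: transfers[transfers.length - 1] };
}

Each value is still in the token's smallest unit; divide by 10 ** decimals (fetched as in the loop above) to get a human-readable amount.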
Note: Some token implementations are incorrect (i.e. they don't follow the token standards) and don't mark the event parameters as indexed. In that case, topics[0] is still the same, but the from and to addresses are not present in the topics; you have to parse them from the data field instead. Each value in data is 64 hex characters long (an address is left-padded with zeros before the actual 40-character address).
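For such non-standard tokens, a possible way to decode the values out of data (assuming the three parameters are simply ABI-encoded in their declaration order, which is the usual case):

// Sketch: handle Transfer events whose parameters were not marked as indexed,
// so `from`, `to` and `value` all sit in the `data` field.
function decodeUnindexedTransfer(log) {
  const decoded = web3.eth.abi.decodeParameters(
    ['address', 'address', 'uint256'],
    log.data
  );
  return { from: decoded[0], to: decoded[1], value: decoded[2] };
}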
I am working on a POC using Kendra and Salesforce. The connector allows me to connect to my Salesforce Org and index knowledge articles. I have been able to set this up and it is currently working as expected.
There are a few custom fields and data points I want to bring over to help enrich the data even more. One of these is an additional answer / body that will contain key information for the searching.
This field in my data source is rich text containing HTML and is often larger than 2048 characters, a limit that seems to be imposed on String fields within Kendra.
I came across two hooks that are built in for Pre and Post data enrichment. My thought here is that I can use the pre hook to strip HTML tags and truncate the field before it gets stored in the index.
Hook Reference: https://docs.aws.amazon.com/kendra/latest/dg/API_CustomDocumentEnrichmentConfiguration.html
Current Setup:
I have added a new field to the index called sf_answer_preview. I then mapped this field in the data source to the rich text field in the Salesforce org.
If I run this as is, it will index about 200 of the 1,000 articles and error on the remaining articles because they exceed the 2048-character limit for that field, which is why I am trying to set up the enrichment.
I set up the above enrichment on my data source. I specified a lambda to use in the pre-extraction, as well as no additional filtering, so run this on every article. I am not 100% certain what the S3 bucket is for since I am using a data source, but it appears to be needed so I have added that as well.
For my lambda, I create the following:
exports.handler = async (event) => {
  // Debug
  console.log(JSON.stringify(event));

  // Vars
  const s3Bucket = event.s3Bucket;
  const s3ObjectKey = event.s3ObjectKey;
  const meta = event.metadata;

  // Answer
  const answer = meta.attributes.find(o => o.name === 'sf_answer_preview');

  // Remove HTML tags
  const removeTags = (str) => {
    if (str === null || str === '') {
      return false;
    }
    return str.toString().replace(/(<([^>]+)>)/ig, '');
  };

  // Truncate
  const truncate = (input) => input.length > 2000 ? `${input.substring(0, 2000)}...` : input;

  let result = truncate(removeTags(answer.value.stringValue));

  // Response
  const response = {
    "version": "v0",
    "s3ObjectKey": s3ObjectKey,
    "metadataUpdates": [
      { "name": "sf_answer_preview", "value": { "stringValue": result } }
    ]
  };

  // Debug
  console.log(response);

  // Response
  return response;
};
Based on the contract for the lambda described here, it appears pretty straightforward. I access the event, find the field in the data called sf_answer_preview (the rich text field from Salesforce) and I strip and truncate the value to 2,000 characters.
For the response, I am telling it to update that field to the new formatted answer so that it complies with the field limits.
When I log the data in the lambda, the pre-extraction event details are as follows:
{
  "s3Bucket": "kendrasfdev",
  "s3ObjectKey": "pre-extraction/********/22736e62-c65e-4334-af60-8c925ef62034/https://*********.my.salesforce.com/ka1d0000000wkgVAAQ",
  "metadata": {
    "attributes": [
      {
        "name": "_document_title",
        "value": {
          "stringValue": "What majors are under the Exploratory track of Health and Life Sciences?"
        }
      },
      {
        "name": "sf_answer_preview",
        "value": {
          "stringValue": "A complete list of majors affiliated with the Exploratory Health and Life Sciences track is available online. This track allows you to explore a variety of majors related to the health and life science professions. For more information, please visit the Exploratory program description. "
        }
      },
      {
        "name": "_data_source_sync_job_execution_id",
        "value": {
          "stringValue": "0fbfb959-7206-4151-a2b7-fce761a46241"
        }
      }
    ]
  }
}
The Problem:
When this runs, I am still getting the same field limit error that the content exceeds the character limit. When I run the lambda on the raw data, it strips and truncates it as expected. I am thinking that the response in the lambda for some reason isn't setting the field value to the new content correctly, and Kendra is still trying to use the data directly from Salesforce, thus throwing the error.
Has anyone set up lambdas for Kendra before that might know what I am doing wrong? This seems pretty common to be able to do things like strip PII information before it gets indexed, so I must be slightly off on my setup somewhere.
Any thoughts?
Since you are still passing the rich text as a metadata field of the document, the character limit still applies: the document fails at the validation step of the API call and never reaches the enrichment step. A workaround is to somehow append those rich text fields to the body of the document so that your lambda can access them there. But if those fields are auto-generated for your documents by your data source, that might not be easy.
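To make the workaround concrete, here is a rough sketch of what appending the field to the document body inside the pre-extraction Lambda could look like. This is only an illustration under the assumption that the pre-extraction hook is allowed to rewrite the S3 object referenced by s3Bucket/s3ObjectKey and return that key in its response; verify this against the data contract in the CustomDocumentEnrichmentConfiguration docs linked above before relying on it.

// Illustrative only: append the stripped field to the document body stored in S3.
// Assumes the pre-extraction hook may modify that object (check the data contract).
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
  const { s3Bucket, s3ObjectKey, metadata } = event;
  const answer = metadata.attributes.find(o => o.name === 'sf_answer_preview');
  const raw = (answer && answer.value && answer.value.stringValue) || '';
  const stripped = raw.replace(/(<([^>]+)>)/ig, '');

  // Read the current document body, append the stripped text, write it back.
  const original = await s3.getObject({ Bucket: s3Bucket, Key: s3ObjectKey }).promise();
  const newBody = original.Body.toString('utf-8') + '\n' + stripped;
  await s3.putObject({ Bucket: s3Bucket, Key: s3ObjectKey, Body: newBody }).promise();

  return {
    version: 'v0',
    s3ObjectKey: s3ObjectKey // point Kendra at the rewritten document
  };
};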
I have entities that look like this:
{
  name: "Max",
  nicknames: [
    "bestuser"
  ]
}
How can I query by nickname to get the name?
I have created the following index,
indexes:
- kind: users
  properties:
  - name: name
  - name: nicknames
I use the node.js client library to query the nickname,
db.createQuery('default','users').filter('nicknames', '=', 'bestuser')
the response is only an empty array.
Is there a way to do that?
You need to actually run the query against Datastore, not just create it. I'm not familiar with the nodejs library, but this is the code given on the Google Cloud website:
datastore.runQuery(query).then(results => {
  // Task entities found.
  const tasks = results[0];
  console.log('Tasks:');
  tasks.forEach(task => console.log(task));
});
where query would be
const query = db.createQuery('default','users').filter('nicknames', '=', 'bestuser')
Check the documentation at https://cloud.google.com/datastore/docs/concepts/queries#datastore-datastore-run-query-nodejs
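Putting both pieces together with the names from your question, a minimal sketch might look like this (assuming db is a client from @google-cloud/datastore, and that 'default' really is a custom namespace; a later answer notes the namespace argument should be dropped if you are using the default one):

// Minimal sketch: build the query and actually run it.
// Assumes e.g.:
//   const { Datastore } = require('@google-cloud/datastore');
//   const db = new Datastore();
async function findByNickname(nickname) {
  const query = db
    .createQuery('default', 'users')   // drop 'default' if you use the default namespace
    .filter('nicknames', '=', nickname);

  const [entities] = await db.runQuery(query);
  entities.forEach(user => console.log(user.name));
  return entities;
}

findByNickname('bestuser').catch(console.error);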
The first point to notice is that you don't need to create an index for this kind of search. There are no inequalities, no ordering and no projections, so it is unnecessary.
As Reuben mentioned, you've created the query but you didn't run it.
// Runs inside a Promise executor, hence reject/resolve.
// TNoMoreResults is a constant for Datastore's 'NO_MORE_RESULTS' value.
ds.runQuery(query, (err, entities, info) => {
  if (err) {
    reject(err);
  } else {
    response.resultStatus = info.moreResults;
    response.cursor = info.moreResults == TNoMoreResults ? null : info.endCursor;
    resolve(entities);
  }
});
In my case, the response structure was built to collect cursor information (in case there is more data than I queried, because I limited the query size using limit), but you don't need anything more than the resolve(entities).
If you are using the default namespace, you need to remove it from your query. Your query needs to look like this:
const query = db.createQuery('users').filter('nicknames', '=', 'bestuser')
I read the entire blob as a string to get the bytes of a binary file here. I imagine you simply parse the JSON per your requirement.
I understand how to perform geo-spatial queries through AppSync to find events within a distance range from a GPS coordinate by attaching a resolver linked to Elasticsearch, as described here.
However, what if I want my client to subscribe to new events being created within this distance range as well?
user subscribes to a location
if an event is created near that location, notify user
I know I can attach resolvers to subscription types, but it seems like that forces you to provide a data source, when I just want to filter subscriptions by checking the distance between GPS coordinates.
This is a great question and I would think there are a couple of ways to solve this. The tough part is that you are going to have to figure out a way to ask the question "What subscriptions are interested in an event at this location?". Here is one possible path forward.
The following assumes these schema parts:
# Whatever custom object has a location
type Post {
  id: ID!
  title: String
  location: Location
}

input PublishPostInput {
  id: ID!
  title: String
  location: LocationInput
  subscriptionID: ID
}

type PublishPostOutput {
  id: ID!
  title: String
  location: Location
  subscriptionID: ID
}

type Location {
  lat: Float
  lon: Float
}

input LocationInput {
  lat: Float
  lon: Float
}

# A custom type to hold custom tracked subscription information
# for location discovery
type OpenSubscription {
  subscriptionID: ID!
  location: Location
  timestamp: String!
}

type OpenSubscriptionConnection {
  items: [OpenSubscription]
  nextToken: String
}

type Query {
  # Query the Elasticsearch index for relevant subscriptions
  openSubscriptionsNear(location: LocationInput, distance: String): OpenSubscriptionConnection
}

type Mutation {
  # This mutation uses a local resolver (e.g. a resolver with a None data source)
  # and simply returns the input as is.
  publishPostToSubscription(input: PublishPostInput): PublishPostOutput
}

type Subscription {
  # Anytime someone passes an object with the same subscriptionID to the
  # "publishPostToSubscription" mutation field, get updated.
  listenToSubscription(subscriptionID: ID!): PublishPostOutput
    @aws_subscribe(mutations: ["publishPostToSubscription"])
}
Assuming you are using DynamoDB as your primary source of truth, set up a DynamoDB stream that invokes a "PublishIfInRange" Lambda function. That "PublishIfInRange" function would look something like this:
// event - { location: { lat, lon }, id, title, ... }
// callGraphql is a placeholder for however you call your AppSync API
// (e.g. a signed HTTP request); it is assumed to return the list of
// matching subscriptions from the query below.
async function lambdaHandler(event) {
  const relevantSubscriptions = await callGraphql(`
    query GetSubscriptions($location: LocationInput) {
      openSubscriptionsNear(location: $location, distance: "10 miles") {
        subscriptionID
      }
    }
  `, { variables: { location: event.location } });

  for (const subscription of relevantSubscriptions) {
    await callGraphql(`
      mutation PublishToSubscription($obj: PublishPostInput) {
        publishPostToSubscription(input: $obj) {
          id
          title
          location { lat lon }
          subscriptionID
        }
      }
    `, { variables: { obj: { ...subscription, ...event } } });
  }
}
You will need to maintain a registry of subscriptions indexed by location. One way to do this is to have your client app call a mutation that creates a subscription object with a location and a subscriptionID (e.g. mutation { makeSubscription(loc: $loc) { ... } }, assuming you are using $util.autoId() to generate the subscriptionID in the resolver). After you have the subscriptionID, you can make the subscription call through GraphQL and pass in the subscriptionID as an argument (e.g. subscription { listenToSubscription(subscriptionID: "my-id") { id title location { lat lon } } }). When you make this subscription call, AppSync creates a topic and authorizes the current user to subscribe to that topic. The topic is unique to the subscription field being called and the set of arguments passed to it. In other words, the topic only receives objects that are published via the publishPostToSubscription mutation with that same subscriptionID.
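For illustration, the client-side half of that flow could look roughly like this (callGraphql is the same assumed helper as in the Lambda sketch above, makeSubscription is the registration mutation described in the text rather than part of the schema sketch, and the actual subscription would go through your AppSync/Amplify subscription client rather than a plain HTTP call):

// 1. Register interest in a location; the resolver is assumed to generate the
//    subscriptionID with $util.autoId() and store it for the Elasticsearch index.
async function registerAndListen(location) {
  const registration = await callGraphql(`
    mutation MakeSubscription($loc: LocationInput) {
      makeSubscription(loc: $loc) {
        subscriptionID
        location { lat lon }
      }
    }
  `, { variables: { loc: location } });

  // 2. Subscribe with that ID; AppSync creates and authorizes the topic for it.
  //    Hand this document (plus the subscriptionID from the registration result)
  //    to your AppSync subscription client.
  const subscriptionDocument = `
    subscription Listen($subID: ID!) {
      listenToSubscription(subscriptionID: $subID) {
        id
        title
        location { lat lon }
        subscriptionID
      }
    }
  `;
  return { registration, subscriptionDocument };
}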
Now whenever an object is created, the record goes to the Lambda function via DynamoDB Streams. The Lambda function queries Elasticsearch for all open subscriptions near that object and then publishes a record to each of those open subscriptions.
I believe this should get you reasonably far, but if you have millions of users in tight quarters you will likely run into scaling issues. Hope this helps.
I have a table in DynamoDB where I need to update multiple related items at once (I can't put all the data in one item because of the 400 KB size limit).
How can I make sure that either all of the items are updated successfully, or none of them are?
The end goal is to read consistent data after the update.
On November 27th, 2018, transactions for DynamoDB were announced. From the linked article:
DynamoDB transactions provide developers atomicity, consistency, isolation, and durability (ACID) across one or more tables within a single AWS account and region. You can use transactions when building applications that require coordinated inserts, deletes, or updates to multiple items as part of a single logical business operation. DynamoDB is the only non-relational database that supports transactions across multiple partitions and tables.
The new APIs are:
TransactWriteItems, a batch operation that contains a write set, with one or more PutItem, UpdateItem, and DeleteItem operations. TransactWriteItems can optionally check for prerequisite conditions that must be satisfied before making updates. These conditions may involve the same or different items than those in the write set. If any condition is not met, the transaction is rejected.
TransactGetItems, a batch operation that contains a read set, with one or more GetItem operations. If a TransactGetItems request is issued on an item that is part of an active write transaction, the read transaction is canceled. To get the previously committed value, you can use a standard read.
The linked article also has a JavaScript example:
data = await dynamoDb.transactWriteItems({
  TransactItems: [
    {
      Update: {
        TableName: 'items',
        Key: { id: { S: itemId } },
        ConditionExpression: 'available = :true',
        UpdateExpression: 'set available = :false, ' +
                          'ownedBy = :player',
        ExpressionAttributeValues: {
          ':true': { BOOL: true },
          ':false': { BOOL: false },
          ':player': { S: playerId }
        }
      }
    },
    {
      Update: {
        TableName: 'players',
        Key: { id: { S: playerId } },
        ConditionExpression: 'coins >= :price',
        UpdateExpression: 'set coins = coins - :price, ' +
                          'inventory = list_append(inventory, :items)',
        ExpressionAttributeValues: {
          ':items': { L: [{ S: itemId }] },
          ':price': { N: itemPrice.toString() }
        }
      }
    }
  ]
}).promise();
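For the read side, TransactGetItems follows the same shape. A minimal sketch against the same (hypothetical) tables, still using the low-level dynamoDb client:

// Sketch: transactionally read both items touched by the write above.
const result = await dynamoDb.transactGetItems({
  TransactItems: [
    { Get: { TableName: 'items',   Key: { id: { S: itemId } } } },
    { Get: { TableName: 'players', Key: { id: { S: playerId } } } }
  ]
}).promise();

// result.Responses is an array of { Item } objects, in the same order as the request.
console.log(result.Responses.map(r => r.Item));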
You can use an API like this one for Java, http://aws.amazon.com/blogs/aws/dynamodb-transaction-library/. The transaction library API will help you manage atomic transactions.
If you're using Node.js, there are other solutions using an atomic counter or conditional writes. See the answer here: How to support transactions in dynamoDB with javascript aws-sdk?
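To illustrate the conditional-write approach mentioned above, a single update can be made to succeed only when a precondition holds. This is just a sketch, assuming the aws-sdk v2 DocumentClient and the same hypothetical players table as the transaction example:

const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

// Decrement a player's coins only if they can afford the price;
// otherwise the update is rejected atomically with a ConditionalCheckFailedException.
async function spendCoins(playerId, price) {
  return docClient.update({
    TableName: 'players',
    Key: { id: playerId },
    UpdateExpression: 'set coins = coins - :price',
    ConditionExpression: 'coins >= :price',
    ExpressionAttributeValues: { ':price': price }
  }).promise();
}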
So I want to get a list of all the objects in my S3 bucket. I've just put it in an Express route application I quickly set up (it doesn't really matter that it's in Express, it's just what I'm comfortable with).
So I'm doing:
var allObjs = [];
s3.listObjects({Bucket: 'myBucket'}, function(err, data) {
  allObjs = allObjs.concat(data.Contents); // collect the returned objects
  var stringifiedObjs = JSON.stringify(allObjs);
  fs.writeFile("test", stringifiedObjs, function(err) {});
});
Which grabs my objects, stringifies them and writes them to a file called test. The issue I'm having is that it only gets 1,000 results.
I read somewhere (I can't find where) that AWS limits you to 1,000 results per call.
How can I rerun this and grab the next 1,000, while making sure it's actually the next 1,000 and not the first 1,000 again?
In short, how can I get every object in my S3 bucket? I've been getting lost in the documentation.
Thank you!
EDIT
This is the object I get back:
{ Key: 'bucket_path/e11_19_9a31mv3ot51tm384grjd6rdj51boxx_q_q112.png',
LastModified: Sat Apr 23 2016 09:16:23 GMT+0100 (BST),
ETag: '"7d50fsdfsd4sda159b32cf85c683c5924"',
Size: 704222,
StorageClass: 'STANDARD',
Owner:
{ DisplayName: 'servers',
ID: '58af203151c51eddf2sdfs411e0b91d274a8fda23f58280f9b06371e436f7' } },
You need to set the Marker property to the key of the last element of the previous response.
Check the documentation for reference (as you already did :) )
http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html
When you receive your response from the listObjects call, your response should include 2 very important fields in the data property:
IsTruncated - True if there are more keys to return. False otherwise.
NextMarker - The value to use for the Marker property in the next call to listObjects. Note that NextMarker is only returned when you specify a Delimiter; if it's absent and IsTruncated is true, use the Key of the last object in the response instead.
So after you call listObjects, check the IsTruncated field. If it's true, feed the value from NextMarker (or the last Key) into the Marker property and call listObjects again.
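Put together, the loop described above might look like this (a sketch reusing the s3 client and bucket name from the question):

// Sketch: page through listObjects until IsTruncated is false,
// accumulating all Contents entries.
function listAllObjects(bucket, marker) {
  const params = { Bucket: bucket };
  if (marker) params.Marker = marker;
  return s3.listObjects(params).promise().then(function(data) {
    let objects = data.Contents;
    if (data.IsTruncated) {
      // Without a Delimiter, NextMarker may be absent, so fall back to the last Key.
      const next = data.NextMarker || data.Contents[data.Contents.length - 1].Key;
      return listAllObjects(bucket, next).then(more => objects.concat(more));
    }
    return objects;
  });
}

listAllObjects('myBucket').then(objs => console.log('Total objects:', objs.length));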
Update:
It would appear that the AWS.Request object has an .eachPage method which can be used to automatically make multiple calls. So there is a magical function to do this work for you.
var pages = 1;
s3.listObjects({ Bucket: 'myBucket' }).eachPage(function(err, data) {
  if (err) return;
  if (data === null) return; // data is null once the last page has been delivered
  console.log("Page", pages++);
  console.log(data);
});
Source: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Request.html