I am using TronWeb to query the transactions of an address, but it does not return transactions sent to that address when the token transferred is a TRC20 token.
I want to get all transactions on an address, covering TRX, TRC10, and TRC20 transfers alike.
What am I doing wrong, or how can I do that?
Here is my code block:
(async function() {
  tronWeb.setDefaultBlock("latest");
  var result = await tronGrid.account.getTransactions(address, {
    only_confirmed: true,
    only_to: true,
    limit: 10
  });
  console.log(JSON.stringify(result));
})();
After a lot of research, I found that you can query a contract's events at intervals to get the transactions on that contract address, and then filter them for the address you are watching, since you can't get a webhook or websocket with the TronGrid/TronWeb implementation.
Here is a sample file I used to achieve this; it works great for monitoring many addresses, even across different contract addresses.
Note: in my own implementation this Node file is called from another file that handles the rest of the logistics, but below you can see how I queried the Transfer events emitted by the specified contract.
const TronWeb = require("tronweb");
const TronGrid = require("trongrid");
const tronWeb = new TronWeb({
  fullHost: "https://api.trongrid.io"
});
const tronGrid = new TronGrid(tronWeb);
const argv = require("minimist")(process.argv.slice(2));

var contractAddress = argv.address;
var min_timestamp = Number(argv.last_timestamp) + 1; // stored from the last time I ran the query

(async function() {
  tronWeb.setDefaultBlock("latest");
  tronWeb.setAddress("ANY TRON ADDRESS"); // any valid address (e.g. your own); it identifies the caller, not the addresses you are watching
  var result = await tronGrid.contract.getEvents(contractAddress, {
    only_confirmed: true,
    event_name: "Transfer",
    limit: 100,
    min_timestamp: min_timestamp,
    order_by: "timestamp,asc"
  });
  result.data = result.data.map(tx => {
    tx.result.to_address = tronWeb.address.fromHex(tx.result.to); // makes it easy to check the receiving address
    return tx;
  });
  console.log(JSON.stringify(result));
})();
You are free to customize the config data passed to the tronGrid.contract.getEvents method. Depending on how frequently transactions hit the contract you are monitoring, do your own research to decide what polling interval and what limit value work for you; a minimal polling sketch follows the docs link below.
Refer to https://developers.tron.network/docs/trongridjs for details.
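A minimal sketch of the polling loop itself, assuming in-memory state and an arbitrary 30-second interval (in production you would persist lastTimestamp between runs, as described above); tronWeb, tronGrid, and contractAddress are the same objects defined in the sample file:

let lastTimestamp = 0; // persist this between runs in real usage

async function pollTransfers() {
  const result = await tronGrid.contract.getEvents(contractAddress, {
    only_confirmed: true,
    event_name: "Transfer",
    limit: 100,
    min_timestamp: lastTimestamp + 1,
    order_by: "timestamp,asc"
  });
  for (const tx of result.data) {
    // block_timestamp is the event timestamp field in the TronGrid v1 events API
    lastTimestamp = Math.max(lastTimestamp, tx.block_timestamp);
    const to = tronWeb.address.fromHex(tx.result.to);
    // compare `to` against the addresses you are watching here
  }
}

setInterval(pollTransfers, 30 * 1000); // 30s is an arbitrary example interval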
I found an API that can return TRC20 transactions, but I haven't found an implementation of it in TronWeb.
https://api.shasta.trongrid.io/v1/accounts/address/transactions
Related document:
https://developers.tron.network/reference#transaction-information-by-account-address
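TronWeb itself doesn't wrap that endpoint, but it can be called directly over HTTP. A sketch, assuming axios and the /trc20 variant of the endpoint described in the TronGrid v1 reference linked above (verify parameter names against the current docs):

const axios = require("axios");

async function getTrc20Transfers(address) {
  // mainnet host; swap in api.shasta.trongrid.io for the Shasta testnet
  const url = `https://api.trongrid.io/v1/accounts/${address}/transactions/trc20`;
  const { data } = await axios.get(url, {
    params: { only_confirmed: true, only_to: true, limit: 10 }
  });
  return data.data; // array of TRC20 transfer records
}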
I want to iterate over all token ids of an Ethereum ERC-721 contract.
Some contracts have sequential ids (0, 1, 2, 3, ...), which is easy, but some have random ids, e.g. https://etherscan.io/token/0xf87e31492faf9a91b02ee0deaad50d51d56d5d4d#inventory
Sadly, Etherscan only shows the last 10000 token ids used, but I want to iterate over all 79490.
Is there a way to accomplish this? Anything is fine for me: setting up my own Ethereum node, using some API.
You can loop through all Transfer() events emitted by the collection contract. You're looking for transfers from address 0x0 (minted tokens), excluding transfers to address 0x0 (destroyed tokens).
One way to achieve this is by using the Web3 Contract getPastEvents() function (docs).
const myContract = new web3.eth.Contract(abiJson, contractAddress);
myContract.getPastEvents('Transfer', {
  filter: {
    _from: '0x0000000000000000000000000000000000000000'
  },
  fromBlock: 0
}).then((events) => {
  for (let event of events) {
    console.log(event.returnValues._tokenId);
  }
});
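The snippet above only collects mints. To also exclude destroyed tokens, as described above, one can query the burn transfers and subtract them; a sketch reusing the same myContract instance and the _to/_tokenId parameter names from the contract's ABI:

const ZERO_ADDRESS = '0x0000000000000000000000000000000000000000';

const mints = await myContract.getPastEvents('Transfer', {
  filter: { _from: ZERO_ADDRESS },
  fromBlock: 0
});
const burns = await myContract.getPastEvents('Transfer', {
  filter: { _to: ZERO_ADDRESS },
  fromBlock: 0
});

// Token ids that were minted and never destroyed
const burned = new Set(burns.map((e) => e.returnValues._tokenId));
const liveTokenIds = mints
  .map((e) => e.returnValues._tokenId)
  .filter((id) => !burned.has(id));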
There's no easy way to do it with an Ethereum node in a contract-agnostic way... the ERC-721 standard does not specify any interface methods for querying all token IDs, so unless the contract you're looking at uses sequential token ids, there's no way to enumerate them from a simple node query.
Unless you want to iterate over the whole transaction history of the contract to get the ids of every minted NFT (you'd need an archive node for that, as a full node would not have the full transaction history), you should use an API from a service that indexes all NFT activity.
You could use this API from CovalentHQ:
https://www.covalenthq.com/docs/api/#/0/Class-A/Get-NFT-Token-IDs-for-contract/lng=en
Or this one from Moralis:
https://docs.moralis.io/moralis-server/web3-sdk/token#getalltokenids
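A sketch of calling the Covalent endpoint, with the route shape inferred from the docs URL above (chain id 1 is Ethereum mainnet; verify parameter names against the current API reference):

const axios = require("axios");

async function getAllTokenIds(contractAddress, apiKey) {
  const url = `https://api.covalenthq.com/v1/1/tokens/${contractAddress}/nft_token_ids/`;
  const { data } = await axios.get(url, {
    params: { key: apiKey, "page-size": 1000 } // pagination params are an assumption
  });
  return data.data.items.map((item) => item.token_id);
}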
I needed the same with Ethers instead of Web3; here is the code snippet for ethers.js:
const getTransferEvents = async () => {
  const provider = new ethers.providers.Web3Provider(window.ethereum);
  // "address" and "abi" are placeholders for your contract address and ABI
  const contract = new ethers.Contract("address", "abi", provider);
  const events = await contract.queryFilter('Transfer', 0);
  console.log(events);
};
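To restrict the ethers query to mint events only, a topic filter can be passed to queryFilter; a sketch for ethers v5, assuming the ABI names the event's third parameter tokenId:

// Only Transfer events whose `from` is the zero address (mints)
const mintFilter = contract.filters.Transfer(ethers.constants.AddressZero);
const mintEvents = await contract.queryFilter(mintFilter, 0);
const tokenIds = mintEvents.map((e) => e.args.tokenId.toString());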
I am a newbie in the chatbot domain. I need to develop a Dialogflow chatbot that stores the data collected from the user in Google Cloud Datastore entities (not the Firebase Realtime Database) and retrieves it when the user wants to search.
I am able to write the data collected from the user to Datastore, but I am struggling to retrieve it. I am writing the function in the Dialogflow inline editor.
Write function:
function order_pizza(agent) {
  var pizza_size = agent.parameters.size;
  var pizza_topping = agent.parameters.pizza_topping;
  var date_time = agent.parameters.date_time; // was agent.parameters.size, which overwrote the date with the size
  const taskKey = datastore.key('order_item');
  const entity = {
    key: taskKey,
    data: {
      item_name: 'pizza',
      topping: pizza_topping,
      date_time: date_time,
      order_time: new Date().toLocaleString(),
      size: pizza_size
    }
  };
  return datastore.save(entity).then(() => {
    console.log(`Saved ${entity.key.id}: ${entity.data.item_name}`); // auto-generated keys expose .id, not .name
    agent.add(`Your order for ${pizza_topping} pizza has been placed!`);
  });
}
where "order_item" is the kind(table in datastore) the data is being stored. It is storing the data successfully.
Read data:(the function not working)
function search_pizza(agent){
  const taskKey = datastore.key('order_item');
  var orderid = agent.parameters.id;
  const query = datastore.createQuery('taskKey').filter('ID','=', orderid);
  return datastore.runQuery(query).then((result) =>{
    agent.add(result[0]);
  });
}
This is what I tried so far! Wherever I search, I can only find results for the Firebase Realtime Database, but no solution for Google Cloud Datastore.
I have followed many tutorials but can't quite get it right. Kindly help!
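For what it's worth, two likely problems in the read function above: createQuery is passed the literal string 'taskKey' instead of the kind name 'order_item', and the filter targets an 'ID' property that order_pizza never stores. A sketch of a fix, under the assumption that agent.parameters.id holds the auto-generated numeric key id of the saved order:

function search_pizza(agent) {
  const orderid = agent.parameters.id;
  // Look the entity up directly by its key instead of querying a property
  const key = datastore.key(['order_item', datastore.int(orderid)]);
  return datastore.get(key).then(([order]) => {
    if (!order) {
      agent.add('No order found with that id.');
      return;
    }
    agent.add(`Found your ${order.size} ${order.topping} pizza, ordered at ${order.order_time}.`);
  });
}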
This might sound a little odd, but I'm facing a situation where I have a micro-service that assembles some pricing logic, but to do that it needs a bunch of information that another micro-service provides.
I believe I have two options: (1) grab all the data I need from the database and ignore the GraphQL work that was done in this other micro-service, or (2) somehow hit this other micro-service from within my current service and get the data I need.
How would someone accomplish (2)? I have no clear path to getting that done without creating a mess.
I imagine that turning my pricing micro-service into a small client could work, but I'm just not sure whether that's bad practice.
After much consideration and reading the answers I got here, I decided to turn my micro-service into a mini-client by using apollo-client.
In short, I have something like this:
import { ApolloClient } from 'apollo-client';
import { InMemoryCache } from 'apollo-cache-inmemory';
import { HttpLink } from 'apollo-link-http';

// Instantiate required constructor fields
const cache = new InMemoryCache();
const link = new HttpLink({
  uri: 'http://localhost:3000/graphql',
});

const client = new ApolloClient({
  // Provide required constructor fields
  cache: cache,
  link: link,
});

export default client;
That HttpLink points at the federated schema (the gateway), so I can call it from my resolver or anywhere else like this:
const query = gql`
  query {
    something(uid: "${uid}") {
      field1
      field2
      field3
      anotherThing {
        field1
        field2
      }
    }
  }
`;
const response = await dataSources.client.query({query});
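One aside: interpolating ${uid} into the document string works, but Apollo's query method also accepts GraphQL variables, which avoids quoting and escaping issues. A sketch (the String! type is a guess; match it to your schema):

const QUERY = gql`
  query Something($uid: String!) {
    something(uid: $uid) {
      field1
      field2
    }
  }
`;
const response = await dataSources.client.query({
  query: QUERY,
  variables: { uid },
});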
I have a query like this in my React/Apollo application:
const APPLICATIONS_QUERY = gql`
  {
    applications {
      id
      applicationType {
        name
      }
      customer {
        id
        isActive
        name
        shortName
        displayTimezone
      }
      deployments {
        id
        created
        user {
          id
          username
        }
      }
      baseUrl
      customerIdentifier
      hostInformation
      kibanaUrl
      sentryIssues
      sentryShortName
      serviceClass
      updown
      updownToken
    }
  }
`;
The majority of the items in the query live in a database, so that part of the query is quick. But a couple of the items, like sentryIssues and updown, rely on external API calls, which makes the whole query very slow.
I'd like to split the query into the database portion and the external-API portion, so I can show the applications table immediately and show loading spinners for the two columns that hit an external API. But I can't find a good example of incremental/progressive querying, or of merging the results of two queries, with Apollo.
This is a good example of where the @defer directive would be helpful. You can indicate which fields you want to defer for a given query like this:
const APPLICATIONS_QUERY = gql`
  {
    applications {
      id
      applicationType {
        name
      }
      customer @defer {
        id
        isActive
        name
        shortName
        displayTimezone
      }
    }
  }
`
In this case, the client will make one request but receive two responses: the initial response with all the requested fields except customer, and a second "patch" response with just the customer field that's fired once that resolver has finished. The client does the heavy lifting and pieces these two responses together for you; there's no additional code necessary.
Please be aware that only nullable fields can be deferred, since the initial value sent with the first response will always be null. As a bonus, react-apollo exposes a loadingState property that you can use to check the loading state of your deferred fields:
<Query query={APPLICATIONS_QUERY}>
  {({ loading, error, data, loadingState }) => {
    const customerComponent = loadingState.applications.customer
      ? <CustomerInfo customer={data.applications.customer} />
      : <LoadingIndicator />
    // ...
  }}
</Query>
The only downside is that this is an experimental feature, so at the moment you have to install the alpha preview versions of both apollo-server and the client libraries to use it.
See the docs for full details.
I'd like to alert on the lack of a heartbeat (or 0 bytes received) from any one of a large number of Google IoT Core devices. I can't seem to do this in Stackdriver; it only appears to let me alert on the entire device registry, which does not give me what I'm looking for (how would I know that a particular device is disconnected?).
So how does one go about doing this?
I have no idea why this question was downvoted as 'too broad'.
The truth is Google IoT Core doesn't have per-device alerting; it offers alerting only on an entire device registry. If this is not true, please reply to this post. The page that clearly states this is here:
Cloud IoT Core exports usage metrics that can be monitored programmatically or accessed via Stackdriver Monitoring. These metrics are aggregated at the device registry level. You can use Stackdriver to create dashboards or set up alerts.
The importance of per-device alerting is implicit in the promise made in this statement:
Operational information about the health and functioning of devices is important to ensure that your data-gathering fabric is healthy and performing well. Devices might be located in harsh environments or in hard-to-access locations. Monitoring operational intelligence for your IoT devices is key to preserving the business-relevant data stream.
So it's not easy today to get an alert when one among many globally dispersed devices loses connectivity. One needs to build that, and depending on what one is trying to do, different solutions apply.
In my case I wanted an alert when the last heartbeat time or last event/state publish was older than 5 minutes. For this I need to run a looping function that scans the device registry and performs this check regularly; a sketch follows. The usage of this API is outlined in this other SO post: Google iot core connection status
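A sketch of that registry scan using the official @google-cloud/iot Node client (project, region, and registry names are placeholders; the field-mask paths follow the Cloud IoT Core device resource):

const iot = require('@google-cloud/iot');
const client = new iot.v1.DeviceManagerClient();

async function findStaleDevices() {
  const parent = client.registryPath('my-project', 'us-central1', 'my-registry');
  const [devices] = await client.listDevices({
    parent,
    fieldMask: { paths: ['last_heartbeat_time', 'last_event_time'] },
  });
  const fiveMinAgo = Date.now() / 1000 - 5 * 60;
  // Keep devices whose newest heartbeat/event timestamp is older than 5 minutes
  return devices.filter((d) => {
    const lastSeen = Math.max(
      Number(d.lastHeartbeatTime ? d.lastHeartbeatTime.seconds : 0),
      Number(d.lastEventTime ? d.lastEventTime.seconds : 0)
    );
    return lastSeen < fiveMinAgo;
  });
}

// Run the scan every minute, e.g. from a scheduled Cloud Function or a worker
setInterval(() => findStaleDevices().then(console.log), 60 * 1000);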
For reference, here's a Firebase function I just wrote to check a device's online status. It probably needs some tweaks and further testing, but it should give anybody else something to start with:
// Example code to call this function
// const checkDeviceOnline = functions.httpsCallable('checkDeviceOnline');
// Include 'current' key for 'current' online status to force update on db with delta
// const isOnline = await checkDeviceOnline({ deviceID: 'XXXX', current: true })
export const checkDeviceOnline = functions.https.onCall(async (data, context) => {
  if (!context.auth) {
    throw new functions.https.HttpsError('failed-precondition', 'You must be logged in to call this function!');
  }
  // deviceID is passed in deviceID object key
  const deviceID = data.deviceID
  const dbUpdate = (isOnline) => {
    if (('wasOnline' in data) && data.wasOnline !== isOnline) {
      db.collection("devices").doc(deviceID).update({ online: isOnline })
    }
    return isOnline
  }
  const deviceLastSeen = () => {
    // We only want to use these to determine "latest seen timestamp"
    const stamps = ["lastHeartbeatTime", "lastEventTime", "lastStateTime", "lastConfigAckTime", "deviceAckTime"]
    return stamps
      .map(key => moment(iotDevice[key], "YYYY-MM-DDTHH:mm:ssZ").unix()) // reads the fetched device; the original read data[key], which never carries these stamps
      .filter(epoch => !isNaN(epoch) && epoch > 0)
      .sort((a, b) => a - b) // numeric sort; the default sort compares as strings
      .pop()
  }
  await dm.setAuth()
  const iotDevice: any = await dm.getDevice(deviceID)
  if (!iotDevice) {
    // HttpsError requires a canonical error code; 'failed-get-device' is not one
    throw new functions.https.HttpsError('not-found', 'Failed to get device!');
  }
  console.log('iotDevice', iotDevice)
  // If there is no error status and there is last heartbeat time, assume device is online
  if (!iotDevice.lastErrorStatus && iotDevice.lastHeartbeatTime) {
    return dbUpdate(true)
  }
  // Add iotDevice.config.deviceAckTime to root of object
  // For some reason in all my tests, I NEVER receive anything on lastConfigAckTime, so this is my workaround
  if (iotDevice.config && iotDevice.config.deviceAckTime) iotDevice.deviceAckTime = iotDevice.config.deviceAckTime
  // If there is a last error status, let's make sure it's not a stale (old) one
  const lastSeenEpoch = deviceLastSeen()
  const errorEpoch = iotDevice.lastErrorTime ? moment(iotDevice.lastErrorTime, "YYYY-MM-DDTHH:mm:ssZ").unix() : false
  console.log('lastSeen:', lastSeenEpoch, 'errorEpoch:', errorEpoch)
  // Device should be online: the error timestamp is older than the latest timestamp for heartbeat, state, etc.
  if (lastSeenEpoch && errorEpoch && (lastSeenEpoch > errorEpoch)) {
    return dbUpdate(true)
  }
  // lastErrorStatus.code === 4 matches MQTT disconnects, e.g.
  // lastErrorStatus.message = mqtt: SERVER: The connection was closed because MQTT keep-alive check failed.
  // It will also be 4 for other mqtt errors like command not sent (qos 1 not acknowledged, etc)
  if (iotDevice.lastErrorStatus && iotDevice.lastErrorStatus.code && iotDevice.lastErrorStatus.code === 4) {
    return dbUpdate(false)
  }
  return dbUpdate(false)
})
I also created a function to use with commands, sending a command to the device to check whether it's online:
export const isDeviceOnline = functions.https.onCall(async (data, context) => {
  if (!context.auth) {
    throw new functions.https.HttpsError('failed-precondition', 'You must be logged in to call this function!');
  }
  // deviceID is passed in deviceID object key
  const deviceID = data.deviceID
  await dm.setAuth()
  const dbUpdate = (isOnline) => {
    if (('wasOnline' in data) && data.wasOnline !== isOnline) {
      console.log('updating db', deviceID, isOnline)
      db.collection("devices").doc(deviceID).update({ online: isOnline })
    } else {
      console.log('NOT updating db', deviceID, isOnline)
    }
    return isOnline
  }
  try {
    await dm.sendCommand(deviceID, 'alive?', 'alive')
    console.log('Assuming device is online after successful alive? command')
    return dbUpdate(true)
  } catch (error) {
    console.log("Unable to send alive? command", error)
    return dbUpdate(false)
  }
})
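For symmetry with the first function, the client-side call looks the same; a sketch mirroring the usage comment at the top of checkDeviceOnline:

// const isDeviceOnline = functions.httpsCallable('isDeviceOnline');
// const online = await isDeviceOnline({ deviceID: 'XXXX', wasOnline: false });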
This also uses my modified version of a DeviceManager; you can find all the example code in this gist (kept there to make sure you get the latest update and to keep this post small):
https://gist.github.com/tripflex/3eff9c425f8b0c037c40f5744e46c319
All of this code, just to check whether a device is online or not... something that could easily be handled by Google emitting some kind of event or adding an easy way to handle this. COME ON GOOGLE, GET IT TOGETHER!