I have already watched videos and tutorials about these topics, but some questions remain unresolved in my mind.
I have been reading up on dynamic NFTs with Chainlink for some time now, so if I understand it correctly:
Every time I request a data feed, it's 1 LINK per request/data point, right? (for ETH mainnet).
Is there a way to pay lower oracle fees?
Can I create my own data feed?
Are Chainlink price feeds free to use (no gas fees)?
Thank you
I am fetching the past liquidation orders of a specific trading pair (say ETHUSDT) over a time interval (3/2021-3/2022) to perform some market simulations.
However, when I looked up the official documentation of the Binance API, I only found the websocket market stream for fetching real-time liquidation orders via the websocket URI wss://fstream.binance.com/ws/!forceOrder@arr.
I searched the old Postman collection and saw that there used to be a REST endpoint to get all liquidation orders at the URL https://fapi.binance.com/fapi/v1/allForceOrders?. However, this endpoint is deprecated. I cannot find anything to retrieve the historical liquidation data.
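For reference, this is roughly how I consume the real-time stream mentioned above, using the third-party websockets package (the payload field names "o" and "s" reflect my understanding of the forceOrder event); it of course only yields live events, not the historical liquidations I need:

import asyncio
import json

import websockets

STREAM_URI = "wss://fstream.binance.com/ws/!forceOrder@arr"

async def watch_liquidations(symbol="ETHUSDT"):
    async with websockets.connect(STREAM_URI) as ws:
        while True:
            event = json.loads(await ws.recv())
            order = event.get("o", {})
            if order.get("s") == symbol:  # keep only the pair of interest
                print(order)

asyncio.run(watch_liquidations())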
Does anyone have an idea where I should look? Thank you very much!
I want to feed real-time data into AWS Personalize to build a recommendation engine. I've read online resources, and in those guides the training user-interaction data, user data, and item data are provided at the beginning, when the recommendation engine is created.
However, I have an app in which I will gather data, and I want to feed that real-time data into AWS Personalize. I want to know if it is possible to build the recommendation engine without providing any data at first, and then stream real-time data from my app later with the PutEvents, PutItems, and PutUsers APIs from the aws-sdk? I'm quite new to this, so I'm quite confused about this initial step.
I want to know if it is possible to build the recommendation engine without providing any data at first, and then stream real-time data from my app later with the PutEvents, PutItems, and PutUsers APIs from the aws-sdk?
Yes, it is possible. You just need to adjust the sequence of creating resources.
Interaction data is required for all Personalize recipes before a recommender can be created that provides recommendations. However, if you don't have interaction data (or enough data; see quotas and limits) to start with, you can create a dataset group and an interactions dataset, feed interactions to the dataset using the PutEvents API (see recording events page), and then create a domain recommender or custom solution when enough data has been ingested.
The minimum amount of interaction data (and potentially item metadata) required before you can train a model/recommender depends on the recipe that you select. Generally speaking, you will need 1000 interactions across 25 distinct users where each of those users has 2+ interactions. The domain recommenders also require specific event types. Check the docs linked above. The quality and relevance of recommendations will improve as you collect more data and retrain.
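To make the flow above concrete, here is a minimal boto3 sketch of streaming a single interaction once the dataset group, interactions dataset, and an event tracker exist; the tracking ID, user ID, session ID, item ID, and event type are placeholders.

from datetime import datetime, timezone

import boto3

personalize_events = boto3.client("personalize-events")

personalize_events.put_events(
    trackingId="YOUR-EVENT-TRACKER-TRACKING-ID",  # placeholder, returned by CreateEventTracker
    userId="user-42",
    sessionId="session-1",
    eventList=[
        {
            "eventType": "Watch",  # domain recommenders expect specific event types
            "itemId": "item-123",
            "sentAt": datetime.now(timezone.utc),
        }
    ],
)

# The same client also exposes put_items and put_users for item/user metadata.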
I want to get the "entire" Ethereum blockchain data, not just data from a few sets of smart contracts. By data I mean transaction details, including the generated logs.
I can get real-time data using Infura, but it's pretty much impossible to fetch all the old data that way; it would cost too much because I would have to make far too many network requests.
I need the old data because I am trying to build an indexed database out of the "append-only" Ethereum transaction data so that I can easily query it.
To be more precise, I would like to retrieve all NFT (ERC721, ERC1155) transfer transactions and their logs, so that I can run queries such as: all the NFTs owned by a particular wallet, the transfer history of a particular NFT token, and much more.
You can do this as follows:
Run your own node
Query data from your node - locally it is fast
For some data, you might need to run the node in archival mode
You can use the same Web3 / JSON-RPC APIs on a local node as you are using on Infura.
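For example, a minimal web3.py sketch along these lines (assuming a locally synced geth node exposing its default IPC endpoint; the block range is just a placeholder). Swapping the provider for your Infura URL gives exactly the same API:

import os

from web3 import Web3

# Point web3.py at the local geth IPC socket instead of Infura.
w3 = Web3(Web3.IPCProvider(os.path.expanduser("~/.ethereum/geth.ipc")))
# w3 = Web3(Web3.HTTPProvider("https://mainnet.infura.io/v3/<project-id>"))

# keccak256("Transfer(address,address,uint256)") - emitted by both ERC20 and ERC721;
# ERC721 transfers additionally index the tokenId as a third topic.
TRANSFER_TOPIC = Web3.keccak(text="Transfer(address,address,uint256)").hex()

logs = w3.eth.get_logs({
    "fromBlock": 15_000_000,  # placeholder range; scanning the whole chain takes a long time
    "toBlock": 15_000_100,
    "topics": [TRANSFER_TOPIC],
})
print(len(logs), "Transfer logs found")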
Two solutions I have discovered.
Just like @Mikko mentioned, you can run your own node, and it did not seem to be as complex as I had expected. You can search for "geth" and then simply connect that node to your web3 library, just like connecting to Infura.
But I have not tried this, because I found a much better solution.
Google Cloud BigQuery's public datasets include all the old Ethereum data. BigQuery is Google's data warehouse service, where you can use simple SQL to query the data, and new data is added every day. I have already tested some simple queries from its console, and the results were good.
I am planning to fetch all the old data I need from BigQuery, store it in my own database, and afterwards get real-time data from Infura. Now that I don't have to fetch all the old data from Infura, the price becomes very affordable.
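As an illustration, here is a minimal sketch using the google-cloud-bigquery client to pull pre-parsed token transfers from the public crypto_ethereum dataset; the contract address parameter is a placeholder.

from google.cloud import bigquery

# Assumes GCP credentials are configured (pip install google-cloud-bigquery).
client = bigquery.Client()

# token_transfers holds pre-parsed ERC20/ERC721 Transfer events.
query = """
    SELECT block_timestamp, transaction_hash, from_address, to_address, value
    FROM `bigquery-public-data.crypto_ethereum.token_transfers`
    WHERE token_address = @token
    ORDER BY block_timestamp
    LIMIT 100
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[bigquery.ScalarQueryParameter("token", "STRING", "0x...")]  # placeholder NFT contract
)

for row in client.query(query, job_config=job_config).result():
    print(row.block_timestamp, row.transaction_hash, row.from_address, "->", row.to_address)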
You may check this: https://github.com/blockchain-etl/ethereum-etl
It is a Python library for ETL (extract, transform and load) jobs for Ethereum blocks, transactions, ERC20 / ERC721 tokens, transfers, receipts, logs, contracts, and internal transactions.
For example, you may run the CLI command:
> ethereumetl export_token_transfers --start-block 0 --end-block 500000 \
--provider-uri file://$HOME/Library/Ethereum/geth.ipc --output token_transfers.csv
You can export ERC20 and ERC721 transfers for a specific block range, which enables you to query the old data.
Data is also available in Google BigQuery.
Is it possible to retrieve service fee charges that are independent of the SKU, like the Subscription Fee, FBA Inventory Storage Fee, etc., using the Amazon Marketplace API?
I tried the Financial Events API, which returns the service fees in this format:
<ServiceFeeEvent>
  <FeeList>
    <FeeComponent>
      <FeeType>FBADisposalFee</FeeType>
      <FeeAmount>
        <CurrencyAmount>-0.15</CurrencyAmount>
        <CurrencyCode>USD</CurrencyCode>
      </FeeAmount>
    </FeeComponent>
  </FeeList>
</ServiceFeeEvent>
This does not contain data like PostedDate. Are there any other APIs available to get detailed data on service fee amounts?
In case it's useful for someone else, I figured out an approach that kind of works for me, though it's not ideal.
I'm using the Reports API to download the _GET_V2_SETTLEMENT_REPORT_DATA_FLAT_FILE_ report, which has the fees and a posted-date column. Some of Amazon's documentation about it can be found here: http://docs.developer.amazonservices.com/en_US/reports/Reports_ReportType.html
The disadvantage is that it's only generated once every two weeks. The advantage, compared to the Finances API, is that you get the posting date and the specific transaction that the fee came from.
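In case it helps, this is roughly how I slice the downloaded flat file with pandas; apart from posted-date, the column names (transaction-type, amount-type, amount-description, amount) are from memory of the V2 flat-file layout and may need adjusting to whatever header your report actually has.

import pandas as pd

# The settlement report is tab-delimited; read everything as strings first.
report = pd.read_csv("settlement_report.txt", sep="\t", dtype=str)

# Assumed column names - check them against your report's header row.
fee_rows = report[report["amount-description"].str.contains("Fee", na=False)]
print(fee_rows[["posted-date", "transaction-type", "amount-type", "amount-description", "amount"]])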
I am trying to create a model using the Amazon Machine Learning service, but I am facing a problem: my dataset is only 20 KB and contains only 120 rows. As the dataset is very small, it is unlikely that the learning algorithm will learn the data pattern correctly, and my prediction model will end up inaccurate.
Is there any way that Amazon itself can create some data on my behalf, using the data that I have provided, for training?
You're correct that 120 rows of data would not be enough to train a good model, and part of the data is held out for evaluation too, so what you end up with for training will be about 30% less.
Amazon cannot create additional data on your behalf, but if you can collect more data points without the labels (the "correct" answers that are used to train the model), you may find Amazon Mechanical Turk useful for collecting those labels.
As for how to use Mechanical Turk, that I am unsure of and am still learning myself, so I can't help you there.