I'm successfully using Ethers.js to get a token's price on BSC via getReserves:
const nodeRandom = !node ? wssNodes() : node;
const provider = new ethers.providers.WebSocketProvider(nodeRandom);
const pairAddress = await pancake.getPair(token0, token1);
if (pairAddress === "0x0000000000000000000000000000000000000000") {
return {
status: "Pair not found",
};
}
const pairContract = new ethers.Contract(pairAddress, pancakePair, provider);
const reserves = await pairContract.getReserves();
I want to create a price chart for that token, but I'm stuck because I don't know how to get historical price data from BSC.
Does Ethers.js support getting a token's price history, or should we store the prices we fetch in our own database? If the latter, is there any way to build the price chart of a token from its very first block when we don't have that data in our DB?
Any ideas?
You can use the blockTag field of the overrides object (see the docs). It queries the node for the value at a specific block instead of the latest one.
const reserves = await pairContract.getReserves({
blockTag: <blockNumber>
});
Note that it depends on the node provider whether these historical (archive) queries are supported. Most providers support them only on higher-tier plans, or not at all.
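To turn reserves into chart points, you can sample getReserves at a series of historical blocks and derive the price from the reserve ratio. A minimal sketch under stated assumptions: the decimals defaults and the priceFromReserves/fetchHistory helper names are mine, not part of ethers.js, and the loop requires an archive-capable node as noted above.

```javascript
// Hypothetical helper: mid-price of token0 in token1 terms from raw reserves.
// decimals0/decimals1 are the ERC-20 decimals of each token (default 18).
function priceFromReserves(reserve0, reserve1, decimals0 = 18, decimals1 = 18) {
  // Normalize both reserves to whole-token amounts, then take the ratio.
  const r0 = Number(reserve0) / 10 ** decimals0;
  const r1 = Number(reserve1) / 10 ** decimals1;
  return r1 / r0;
}

// Sketch: sample the pair's reserves every `step` blocks.
async function fetchHistory(pairContract, fromBlock, toBlock, step) {
  const points = [];
  for (let b = fromBlock; b <= toBlock; b += step) {
    // Destructure positionally; getReserves returns (reserve0, reserve1, timestamp).
    const [reserve0, reserve1] = await pairContract.getReserves({ blockTag: b });
    points.push({ block: b, price: priceFromReserves(reserve0, reserve1) });
  }
  return points;
}
```

Note that Number() loses precision on very large BigNumbers, which is usually acceptable for charting; for exact values, format with ethers.utils.formatUnits before dividing.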
I am currently learning how to use the AWS pricing SDK. My objective is to get all the prices of AWS virtual machines, since prices can differ from one region to another.
Basically, I am running this code:
AmazonPricingClient client = new(keyId, key, RegionEndpoint.USEast1);
// Development filters to handle a smaller amount of data
GetProductsRequest request = new() {
ServiceCode = "AmazonEC2",
Filters = new() {
new ()
{
Field = "vcpu",
Type = "TERM_MATCH",
Value = "2"
},
new()
{
Field = "currentGeneration",
Type = "TERM_MATCH",
Value = "Yes"
},
new()
{
Field = "regionCode",
Type = "TERM_MATCH",
Value = "eu-west-1"
},
new()
{
Field = "operatingSystem",
Type = "TERM_MATCH",
Value = "Windows"
}
}
};
GetProductsResponse response = await client.GetProductsAsync(request);
Taking into consideration the filters (added to reduce the amount of data while testing the code), I will only get the prices of the matching virtual machines for region eu-west-1.
If I delete this dev filter (for production, for example), I will get the prices for every region, but each time I will also get this part of the returned JSON:
"product":{
"productFamily":"Compute Instance",
"attributes":{
"enhancedNetworkingSupported":"Yes",
"intelTurboAvailable":"No",
"memory":"16 GiB",
"dedicatedEbsThroughput":"Up to 3500 Mbps",
[...]
"operation":"RunInstances:000g",
"availabilityzone":"NA"
},
"sku":"2A56CED7V5PFGAH8"
}
And this part is duplicated for each region.
Is there a way to tell the API that I just want the different prices of a specific virtual machine, using either the request or the response objects?
I may have missed some possibilities offered by the SDK, so feel free to point out anything I can improve in that snippet, good practices, etc.
Thanks!
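I'm not aware of a request-level option that suppresses the duplicated product block. One client-side workaround (sketched here in JavaScript; a C# LINQ equivalent would follow the same shape) is to group the parsed PriceList entries by instance type, keeping the product attributes once with a per-region price map. The entry shape and the getPrice extractor are assumptions based on the JSON shown above:

```javascript
// Client-side workaround sketch: each element of `entries` is assumed to be
// one parsed PriceList JSON string from GetProducts, shaped like the snippet
// above ({ product: { attributes: {...}, sku }, ... }).
// Keeps each product's attributes once and collects prices per region.
function groupByInstanceType(entries, getPrice) {
  const grouped = new Map();
  for (const entry of entries) {
    const attrs = entry.product.attributes;
    if (!grouped.has(attrs.instanceType)) {
      grouped.set(attrs.instanceType, { attributes: attrs, pricesByRegion: {} });
    }
    // getPrice is caller-supplied: how to dig the price out of an entry.
    grouped.get(attrs.instanceType).pricesByRegion[attrs.regionCode] = getPrice(entry);
  }
  return grouped;
}
```

This doesn't reduce the amount of data transferred, only the duplication you keep in memory afterwards.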
I have an embedded Power BI report which was working fine until I changed my database.
I observed that datasets.IsEffectiveIdentityRequired (in the code below) was false earlier; now that it is true, I'm getting an error: {"error":{"code":"InvalidRequest","message":"Creating embed token for accessing dataset 02c90e15-35dd-4036-a525-4f5d158bfade requires roles to be included in provided effective identity"}}
I'm using the standard Embed service code.
// Create a Power BI Client object. It will be used to call Power BI APIs.
using (var client = new PowerBIClient(new Uri(ApiUrl), m_tokenCredentials))
{
// Get a list of reports.
var reports = await client.Reports.GetReportsInGroupAsync(WorkspaceId);
Report report = reports.Value.FirstOrDefault(r => r.Id.Equals(ReportId, StringComparison.InvariantCultureIgnoreCase));
var datasets = await client.Datasets.GetDatasetByIdInGroupAsync(WorkspaceId, report.DatasetId);
m_embedConfig.IsEffectiveIdentityRequired = datasets.IsEffectiveIdentityRequired;
m_embedConfig.IsEffectiveIdentityRolesRequired = datasets.IsEffectiveIdentityRolesRequired;
GenerateTokenRequest generateTokenRequestParameters;
// This is how you create embed token with effective identities
// HERE username IS NULL
if (!string.IsNullOrWhiteSpace(username))
{
var rls = new EffectiveIdentity(username, new List<string> { report.DatasetId });
if (!string.IsNullOrWhiteSpace(roles))
{
var rolesList = new List<string>();
rolesList.AddRange(roles.Split(','));
rls.Roles = rolesList;
}
// Generate Embed Token with effective identities.
generateTokenRequestParameters = new GenerateTokenRequest(accessLevel: "view", identities: new List<EffectiveIdentity> { rls });
}
else
{
// Generate Embed Token for reports without effective identities.
generateTokenRequestParameters = new GenerateTokenRequest(accessLevel: "view");
}
var tokenResponse = await client.Reports.GenerateTokenInGroupAsync(WorkspaceId, report.Id, generateTokenRequestParameters);
}
First, I completely understand that this error occurs because I'm not passing any identity. So, is there any option to disable IsEffectiveIdentityRequired?
Second, how do I set users and roles in Power BI?
--I'm not a PowerBI expert--
IsEffectiveIdentityRequired is a read-only property, so you can't control it and there is no option to disable it.
Depending on the data source you are connecting to, an effective identity may or may not be required.
If IsEffectiveIdentityRequired is true, you need to include an EffectiveIdentity in the GenerateTokenRequest used to generate an embed token. If the data source requires an effective identity and you do not pass one, you will get an error when generating the token. You will also get an error if you pass an incomplete EffectiveIdentity, such as one that is missing roles.
Here is an example of how you can use the IsEffectiveIdentityRequired property to generate an embed token with or without an effective identity depending on if the data source requires it or not.
List<EffectiveIdentity> eil = new List<EffectiveIdentity>();
EffectiveIdentity ef = new EffectiveIdentity();
// UserName
ef.Username = FullADUsername;
// Roles
List<string> Roles = new List<string>();
ef.Roles = Roles;
// Datasets
List<string> _Datasets = new List<string>();
_Datasets.Add(report.DatasetId);
ef.Datasets = _Datasets;
eil.Add(ef);
// Look up the data set of the report and look if we need to pass an Effective Identify
Dataset d = client.Datasets.GetDatasetByIdInGroup(WorkspaceId, report.DatasetId);
if (d.IsEffectiveIdentityRequired == true){
GenerateTokenRequest gtr = new GenerateTokenRequest("View", null, false, eil);
newEmbedToken = client.Reports.GenerateTokenInGroup(WorkspaceId, ReportId, gtr);
}
else
{
GenerateTokenRequest gtr = new GenerateTokenRequest();
newEmbedToken = client.Reports.GenerateTokenInGroup(WorkspaceId, ReportId, gtr);
}
I am trying to set ttl for a loopback model so that the document gets deleted automatically after specified time.
Here is the property I have added:
"ttl": {
"type": "number",
"required": true
}
This is not AccessToken model but a separate model whose documents I want to be deleted after specified time interval.
AccessTokens don't get deleted after their ttl is up; the ttl just invalidates the token for login purposes. MongoDB can delete documents after they've existed for a certain amount of time (via TTL indexes), but LoopBack does not actually use this feature. The script below creates a job which deletes all rows that have expired according to their ttl column.
server/boot/job-delete-expired.js
module.exports = (server) => {
const myModel = server.models.myModel;
if (!myModel) {
throw new Error("My model not found!");
}
const deleteExpiredModels = async () => {
const now = new Date();
const all = await myModel.find();
// A document is expired once its creation time plus its ttl is in the past
// (assuming created is a Date and ttl is stored in seconds)
const expired = all.filter(m => m.created.getTime() + m.ttl * 1000 <= now.getTime());
// Delete them all
await Promise.all(expired.map(e => myModel.destroyById(e.id)));
};
// Execute this every 10 minutes
setInterval(() => deleteExpiredModels(), 10 * 60 * 1000);
};
Disclaimer: this code has no error handling, and setInterval does not wait for promises to resolve. If you're using this in production, consider a while loop with async/await to make sure that only one instance of deleteExpiredModels ever runs at a time.
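A sketch of the sequential pattern that disclaimer suggests; the job, intervalMs and shouldStop parameters are illustrative names, not a LoopBack API. Runs can never overlap because each iteration awaits the previous one before sleeping:

```javascript
// Promise-based sleep helper.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Run `job` repeatedly, waiting for each run to finish before sleeping.
// `shouldStop` lets the caller end the loop (defaults to running forever).
async function runPeriodically(job, intervalMs, shouldStop = () => false) {
  while (!shouldStop()) {
    try {
      await job(); // next run starts only after this one resolves
    } catch (err) {
      console.error("periodic job failed:", err);
    }
    await sleep(intervalMs);
  }
}

// e.g. runPeriodically(deleteExpiredModels, 10 * 60 * 1000);
```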
I was able to solve this as follows:
MyCollection.getDataSource().connector.connect(function(err, db) {
if(!err){
var collection = db.collection('MyCollection');
collection.createIndex( { "expireAt": 1 }, { expireAfterSeconds: 0 } );
}
});
Then for each document, I inserted expireAt, which corresponds to the time the document should expire.
MongoDB automatically deletes documents from the collection at the documents’ expireAt time.
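When inserting such documents, the expireAt value is just a Date offset into the future (the expiryDate helper name is mine, not a LoopBack API; the field name must match the TTL index created above):

```javascript
// Compute the Date at which MongoDB should purge the document.
function expiryDate(ttlSeconds, now = new Date()) {
  return new Date(now.getTime() + ttlSeconds * 1000);
}

// e.g. keep the document for one hour:
// MyCollection.create({ name: "temp", expireAt: expiryDate(3600) });
```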
I used the model.json file to solve this
"indexes":{
"expireAt_1":{
"keys": {"createdOn": 1},
"options":{"expireAfterSeconds": 2592000}
}
}
I used a name for the index (expireAt_1). The keys object lists the document property that holds the date value. The expireAfterSeconds value needs to be set in the options property; in this case, I have set it to 30 days (2592000 seconds) after the createdOn date.
I have entities that look like this:
{
name: "Max",
nicknames: [
"bestuser"
]
}
How can I query by nickname to get the name?
I have created the following index:
indexes:
- kind: users
properties:
- name: name
- name: nicknames
I use the Node.js client library to query by nickname:
db.createQuery('default','users').filter('nicknames', '=', 'bestuser')
The response is only an empty array.
Is there a way to do that?
You need to actually run the query against Datastore, not just create it. I'm not familiar with the Node.js library, but this is the code given on the Google Cloud website:
datastore.runQuery(query).then(results => {
// Task entities found.
const tasks = results[0];
console.log('Tasks:');
tasks.forEach(task => console.log(task));
});
where query would be
const query = db.createQuery('default','users').filter('nicknames', '=', 'bestuser')
Check the documentation at https://cloud.google.com/datastore/docs/concepts/queries#datastore-datastore-run-query-nodejs
The first thing to notice is that you don't need to create an index for this kind of search: no inequalities, no ordering and no projections, so it is unnecessary.
As Reuben mentioned, you've created the query but you didn't run it.
ds.runQuery(query, (err, entities, info) => {
if (err) {
reject(err);
} else {
response.resultStatus = info.moreResults;
response.cursor = info.moreResults == TNoMoreResults ? null : info.endCursor;
resolve(entities);
};
});
In my case, the response structure was built to collect information about the cursor (whether there is more data than I queried, since I limited the query size using limit), but you don't need anything more than resolve(entities).
If you are using the default namespace you need to remove it from your query. Your query needs to be like this:
const query = db.createQuery('users').filter('nicknames', '=', 'bestuser')
I read the entire blob as a string to get the bytes of a binary file here. I imagine you can simply parse the JSON per your requirements.
I'm trying to retrieve data about my load balancers using the AWSSDK.CloudWatch package, but having no luck in actually getting any values out of it. It seems no matter what, the Values property of the MetricData in the response is an empty array.
AmazonCloudWatchClient client = new AmazonCloudWatchClient("MyAccessKeyId", "MySecretAccessKey", Amazon.RegionEndpoint.MyRegion);
GetMetricDataRequest request = new GetMetricDataRequest()
{
StartTime = DateTime.UtcNow.AddHours(-12),
EndTime = DateTime.UtcNow,
MetricDataQueries = new List<MetricDataQuery>()
{
new MetricDataQuery()
{
Id = "MyMetric",
MetricStat = new MetricStat()
{
Metric = new Metric()
{
Namespace = "AWS/ELB",
MetricName = "HealthyHostCount",
Dimensions = new List<Dimension>()
{
new Dimension()
{
Name = "LoadBalancerName",
Value = "MyLoadBalancerName"
}
}
},
Period = 300,
Stat = "Sum",
Unit = "None"
}
}
},
ScanBy = ScanBy.TimestampDescending,
MaxDatapoints = 1000
};
GetMetricDataResponse response = client.GetMetricData(request);
I'm struggling to find any relevant examples of this. I'd prefer to be able to obtain this value per load balancer.
There are many things that could cause your query to return no data. This is how I would approach debugging this:
Was the response 200 OK? If not, something is wrong with the query itself: a required parameter is missing, the credentials are not valid, or the policy does not allow GetMetricData calls.
Is the metric name correct? The full metric name must be correct, including the namespace, metric name and all of the dimensions. CloudWatch does not distinguish between a no-data case and a no-metric case; you will just get no data back. This is a potential issue in your request: if your hosts are in a target group, you may need to specify the target group dimension.
Is the region endpoint correct? Metrics are separated by region and you have to call the correct region endpoint.
Are the credentials from the correct account?
Is the unit correct? If you are not sure about the unit, don't specify it. This is the second thing that could be an issue with your request, this metric could have the unit Count. Try it without specifying the unit.
Is the time range correct? Was the data being published for the time range you are requesting?