BSC Testnet: Truffle Migrate ETIMEDOUT

I need to deploy my smart contract to BSC Testnet, but I always get this error:
Error: PollingBlockTracker - encountered an error while attempting to update latest block:
Error: ETIMEDOUT
I tried every RPC endpoint listed at https://docs.binance.org/smart-chain/developer/rpc.html#rate-limit, yet the result is the same.
As a sanity check, I tried deploying to Ropsten instead, and it succeeded.
Is there a problem with the BSC Testnet RPC endpoints these days?
Here is the relevant snippet from my truffle-config.js:
testnet: {
  provider: () => new HDWalletProvider(mnemonic, `https://data-seed-prebsc-1-s2.binance.org:8545`),
  network_id: 97, // 3 for Ropsten, 97 for BSC testnet
  confirmations: 2,
  timeoutBlocks: 2000,
  skipDryRun: true,
  networkCheckTimeout: 1000000
},
From searching around: some people use a websocket (wss) endpoint, some change the RPC URL, and some add networkCheckTimeout. I tried all of these (except wss, since I don't see one offered for BSC Testnet), but nothing works.
Any suggestions? Thank you.

The issue was fixed when I switched to other endpoints. You can try the ones below.
BSC Testnet RPC endpoints:
https://data-seed-prebsc-1-s1.binance.org:8545/
https://data-seed-prebsc-2-s1.binance.org:8545/
https://data-seed-prebsc-1-s2.binance.org:8545/
https://data-seed-prebsc-2-s2.binance.org:8545/
https://data-seed-prebsc-1-s3.binance.org:8545/
https://data-seed-prebsc-2-s3.binance.org:8545/
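If you want to avoid editing the config by hand every time an endpoint degrades, one option (my own sketch, not part of the original answer) is to list the endpoints in truffle-config.js and pick one via an environment variable:

const HDWalletProvider = require('@truffle/hdwallet-provider');

const mnemonic = process.env.MNEMONIC; // assumed to be supplied via the environment

// The endpoints listed above; override the default with BSC_TESTNET_RPC.
const endpoints = [
  'https://data-seed-prebsc-1-s1.binance.org:8545/',
  'https://data-seed-prebsc-2-s1.binance.org:8545/',
  'https://data-seed-prebsc-1-s2.binance.org:8545/',
  'https://data-seed-prebsc-2-s2.binance.org:8545/',
];
const rpcUrl = process.env.BSC_TESTNET_RPC || endpoints[0];

module.exports = {
  networks: {
    testnet: {
      provider: () => new HDWalletProvider(mnemonic, rpcUrl),
      network_id: 97,
      networkCheckTimeout: 1000000,
      skipDryRun: true,
    },
  },
};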

I searched for more than a week and finally found the answer: besides changing the pollingInterval, you also need to edit the web3-provider-engine module and raise its hardcoded timeout. Note that the module may be installed more than once in your project, so change the value everywhere it appears.
xhr({
  uri: targetUrl,
  method: 'POST',
  headers: {
    'Accept': 'application/json',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify(newPayload),
  rejectUnauthorized: false,
  timeout: 2000, // raise this hardcoded value to something bigger
  // ... rest of the call in web3-provider-engine unchanged
The timeout value is hardcoded; presumably most people nowadays have a good enough Internet connection that it never fires, and only a few of us are affected, which is why the answer was so hard to find. After I changed this value, I never hit the timeout again!
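Since the module can be nested several times under node_modules, here is a small helper sketch (my own addition, hypothetical, not part of the original answer) that lists every installed copy of web3-provider-engine so you don't miss one:

const fs = require('fs');
const path = require('path');

// Recursively collect every web3-provider-engine directory under node_modules.
function findCopies (dir, hits = []) {
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    if (!entry.isDirectory()) continue;
    const full = path.join(dir, entry.name);
    if (entry.name === 'web3-provider-engine') hits.push(full);
    findCopies(full, hits);
  }
  return hits;
}

console.log(findCopies(path.join(process.cwd(), 'node_modules')));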

bsc: {
  networkCheckTimeout: 999999,
  provider: () => new HDWalletProvider(mnemonic, `https://data-seed-prebsc-1-s1.binance.org:8545`),
  network_id: 97, // BSC testnet's id
  gas: 5500000, // gas limit for deployments
  confirmations: 10, // # of confirmations to wait between deployments (default: 0)
  timeoutBlocks: 200, // # of blocks before a deployment times out (minimum/default: 50)
  skipDryRun: true // skip dry run before migrations? (default: false for public nets)
},
Adding the network timeout settings should help.

The problem is that BSC produces blocks so quickly that it exceeds the default number of blocks Truffle is configured to wait for. You can solve this by adding the networkCheckTimeout and timeoutBlocks fields in your network config:
bsc: {
  networkCheckTimeout: 1000000,
  timeoutBlocks: 200
}

The best way I found to avoid this error is to run
truffle compile
manually before migrating, and then pass the --compile-none option when migrating:
truffle migrate --network xxx --compile-none

Related

Deploying to fuji network with hardhat creates HttpProviderError

I am having an issue deploying a contract to the Fuji C-Chain with Hardhat. Here is my hardhat.config.js file:
import { HardhatUserConfig } from 'hardhat/config';

// PRIVATE_KEY is loaded elsewhere (elided in the question).
const config: HardhatUserConfig = {
  networks: {
    fuji: {
      url: 'https://api.avax-test.network/ext/bc/C/rpc',
      chainId: 43113,
      gasPrice: 20000000000,
      accounts: [`0x${PRIVATE_KEY}`],
    },
    avalanche: {
      url: 'https://api.avax.network/ext/bc/C/rpc',
      chainId: 43114,
      gasPrice: 20000000000,
      accounts: [`0x${PRIVATE_KEY}`],
    },
  },
};

export default config;
Here is the command for deploying the contract:
npx hardhat run --network fuji scripts/deploy.ts
I am getting the following error:
ProviderError: HttpProviderError
at HttpProvider.request (E:\SolidityProject\Leveor\nft-platform-script\node_modules\hardhat\src\internal\core\providers\http.ts:78:19)
I also tried a different RPC URL provided by Infura with an API key, but it gave the same error.
How can I resolve this?
The problem was setting gasPrice in the Hardhat config file. On the Fuji network the gas price is calculated automatically, and explicitly setting one causes ProviderError: HttpProviderError (the error message could be better). The same applies to the Celo network. But on Ethereum, Polygon, and Binance Smart Chain you CAN explicitly define the gas price.
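For reference, a minimal sketch of the corrected Fuji entry (written as plain JS for brevity; the URL and chain id come from the question, and the env-var name is my own assumption):

// hardhat.config.js -- gasPrice omitted so Fuji can price transactions itself
module.exports = {
  networks: {
    fuji: {
      url: 'https://api.avax-test.network/ext/bc/C/rpc',
      chainId: 43113,
      // no gasPrice here: Fuji calculates it automatically
      accounts: [`0x${process.env.PRIVATE_KEY}`],
    },
  },
};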

Errors connecting to AWS Keyspaces using a lambda layer

I am intermittently getting the following error when connecting to an AWS keyspace using a Lambda layer:
All host(s) tried for query failed. First host tried, 3.248.244.53:9142: Host considered as DOWN. See innerErrors.
I am trying to query a table in a keyspace from a Node.js Lambda function as follows:
import cassandra from 'cassandra-driver';
import fs from 'fs';

export default class AmazonKeyspace {
  tpmsClient = null;

  constructor () {
    let auth = new cassandra.auth.PlainTextAuthProvider('cass-user-at-xxxxxxxxxx', 'zzzzzzzzz');
    let sslOptions1 = {
      ca: [fs.readFileSync('/opt/utils/AmazonRootCA1.pem', 'utf-8')],
      host: 'cassandra.eu-west-1.amazonaws.com',
      rejectUnauthorized: true
    };
    this.tpmsClient = new cassandra.Client({
      contactPoints: ['cassandra.eu-west-1.amazonaws.com'],
      localDataCenter: 'eu-west-1',
      authProvider: auth,
      sslOptions: sslOptions1,
      keyspace: 'tpms',
      protocolOptions: { port: 9142 }
    });
  }

  getOrganisation = async (orgKey) => {
    const SQL = 'select * FROM organisation where organisation_id=?;';
    return new Promise((resolve, reject) => {
      this.tpmsClient.execute(SQL, [orgKey], { prepare: true }, (err, result) => {
        if (!err?.message) resolve(result.rows);
        else reject(err.message);
      });
    });
  };
}
I am basically following the recommended AWS documentation:
https://docs.aws.amazon.com/keyspaces/latest/devguide/using_nodejs_driver.html
It seems that around 10-20% of the time the lambda function (cassandra driver) cannot connect to the endpoint.
I am pretty familiar with Cassandra (I already use a 6 node cluster that I manage) and don't have any issues with that.
Could this be a timeout or do I need more contact points?
I followed the recommended guides and checked the AWS console for errors, but none are shown.
UPDATE:
I am occasionally (about 1 in 50 calls when I make 5 concurrent calls in parallel) getting the error below:
All host(s) tried for query failed. First host tried, 3.248.244.5:9142: DriverError: Socket was closed
  at Connection.clearAndInvokePending (/opt/node_modules/cassandra-driver/lib/connection.js:265:15)
  at Connection.close (/opt/node_modules/cassandra-driver/lib/connection.js:618:8)
  at TLSSocket.<anonymous> (/opt/node_modules/cassandra-driver/lib/connection.js:93:10)
  at TLSSocket.emit (node:events:525:35)
  at node:net:313:12
  at TCP.done (node:_tls_wrap:587:7)
{ info: 'Cassandra Driver Error', isSocketError: true, coordinator: '3.248.244.5:9142' }
This exception may be caused by throttling on the Keyspaces side, resulting in the driver error you are seeing sporadically.
I would suggest taking a look at this repo, which should help you put measures in place to either prevent this issue or at least reveal the true cause of the exception.
For some of the errors you see in the logs, you will need to investigate Amazon CloudWatch metrics to see whether you have throttling or system errors. I've built this AWS CloudFormation template to deploy a CloudWatch dashboard with all the appropriate metrics; it will provide better observability for your application.
A system error indicates an event that must be resolved by AWS and is often part of normal operations; activities such as timeouts, server faults, or scaling activity can result in server errors. A user error indicates an event that can usually be resolved by the user, such as an invalid query or exceeding a capacity quota. Amazon Keyspaces passes a system error back as a Cassandra ServerError. In most cases this is a transient error, and you can retry your request until it succeeds. With the Cassandra driver's default retry policy, customers can also see NoHostAvailableException or AllNodesFailedException, or messages like yours: "All host(s) tried for query failed". This is a client-side exception thrown once every host in the load balancing policy's query plan has attempted the request.
Take a look at this retry policy for Node.js, which should help resolve your "All hosts failed" exception or pass back the original exception.
The retry policies in the Cassandra drivers are fairly crude and cannot do more sophisticated things like circuit breaker patterns. You may eventually want to use a "fail fast" retry policy for the driver and handle the exceptions in your application code.
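As an illustration, here is a minimal sketch of a custom retry policy for the Node.js cassandra-driver. The class name and retry limit are my own assumptions, not the linked policy:

import cassandra from 'cassandra-driver';

// Sketch: retry transient errors up to maxRetries, trying another host for
// request errors and the same host for timeouts (hypothetical class).
class KeyspacesRetryPolicy extends cassandra.policies.retry.RetryPolicy {
  constructor (maxRetries = 3) {
    super();
    this.maxRetries = maxRetries;
  }
  onRequestError (info, consistency, err) {
    // Try the next host in the query plan until the retry budget runs out.
    if (info.nbRetry < this.maxRetries) return this.retryResult(consistency, false);
    return this.rejectResult();
  }
  onUnavailable (info, consistency, required, alive) {
    if (info.nbRetry < this.maxRetries) return this.retryResult(consistency, false);
    return this.rejectResult();
  }
  onReadTimeout (info, consistency, received, blockFor, isDataPresent) {
    if (info.nbRetry < this.maxRetries) return this.retryResult(consistency, true);
    return this.rejectResult();
  }
  onWriteTimeout (info, consistency, received, blockFor, writeType) {
    if (info.nbRetry < this.maxRetries) return this.retryResult(consistency, true);
    return this.rejectResult();
  }
}

// Usage: add it to the client options from the question, e.g.
// new cassandra.Client({ ..., policies: { retry: new KeyspacesRetryPolicy(3) } });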

AWS API Gateway Slowness

This has me perturbed. I have a basic API Gateway that is supposed to be capped at 10,000 requests per second with 5,000-request bursts. However, when hooked up to Lambdas, the best I can currently hit is ~70 requests/second.
The endpoints I have are basic Lambda proxies created with the Serverless framework (HTTP EDGE).
I know that the lambda itself is not the bottleneck as I have the same issue when I replace the lambda with an empty function. I have 100+ allocated concurrency for the lambda, but the lambda never appears to hit the limit.
functions:
  loadtest:
    handler: loadtest/index.handler
    reservedConcurrency: 200
    events:
      - http: POST load_test
I'm wondering if there's something I'm overlooking here. My test runs for a minute and attempts to hit 200 req/sec (it works fine against other targets, so it's not my bandwidth). The delays grow to as much as 20-30 s at some point, so clearly something is choking up.
If it's a warm up issue - how long am I expected to run such load until everything is running warm?
Any ideas on where to look or additional information that I could share?
[Edit] I am using node12.x and I even tried with this code:
const AWS = require('aws-sdk');
AWS.config.update({ region: '<my-region>' });
var sqs = new AWS.SQS({ apiVersion: '2012-11-05' }); // note: unused in this handler

exports.handler = async (event, context) => {
  return { "status": "ok", ... };
};
The results were basically the same. I'm not sure where the bottleneck is, to be honest. I can try further testing with concurrency on the Lambda side, but going from 100 to 200 had no effect: completed requests clock in at around 70/s for an empty function.
Also, I'm using the loadtest npm package to perform the load test, and this is what the output looks like:
{ totalRequests: 8200,
  totalErrors: 0,
  totalTimeSeconds: 120.00341689999999,
  rps: 68,
  meanLatencyMs: 39080.6,
  maxLatencyMs: 78490,
  minLatencyMs: 427,
  percentiles: { '50': 38327, '90': 70684, '95': 74569, '99': 77679 },
  errorCodes: {},
  instanceIndex: 0 }
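For context, here is a sketch of how such a run can be driven through the loadtest programmatic API; the URL is a placeholder, and the option values simply mirror the test described above:

const loadtest = require('loadtest');

const options = {
  url: 'https://<api-id>.execute-api.<my-region>.amazonaws.com/dev/load_test', // placeholder
  method: 'POST',
  requestsPerSecond: 200, // target rate from the test above
  maxSeconds: 120,
  concurrency: 50,
};

loadtest.loadTest(options, (error, result) => {
  if (error) return console.error('Load test failed:', error);
  console.log(result); // same shape as the output shown above
});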
Here's a picture of how provisioned concurrency looked over that period of time. I ran this over 2 minutes with the target at 200 req/sec.
It appears that this is actually an issue with WSL2 and Node.js. The exact nature of it is still unclear, but it is not an issue with the API Gateway itself. I demonstrated this by running the same test on a MacBook, where everything worked fine and request counts were high.
There are posts suggesting this is an issue with the Node HTTP client and DNS, so perhaps that's a good starting point, but the above question is moot.

How to connect Loopback app to google database with ssl enabled?

I'm attempting to connect a LoopBack app to a Google Cloud SQL database, and I've altered the datasources.json file to match the credentials. However, when I make a GET request in the LoopBack API explorer I get an error. I have not found any docs on how to specify the SSL credentials in datasources.json, and I think this is causing the error.
I've fruitlessly attempted to change datasources.json; below is its current state. I've changed details for privacy, but I'm 100% certain the credentials are correct, as I can make a successful connection with plain JavaScript.
{
  "nameOfModel": {
    "name": "db",
    "connector": "mysql",
    "host": "xx.xxx.x.xxx",
    "port": xxxx,
    "user": "user",
    "password": "password",
    "database": "sql_db",
    "ssl": true,
    "ca": "/server-ca.pem",
    "cert": "/client-cert.pem",
    "key": "/client-key.pem"
  }
}
This is the error the command line returns when I attempt a GET request in the LoopBack API explorer. The "Error: Timeout in connecting after 5000 ms" leads me to believe it's not reading the SSL credentials.
Unhandled error in GET /edd-sales?filter[offset]=0&filter[limit]=0&filter[skip]=0: 500 TypeError: Cannot read property 'name' of undefined
at EddDbDataSource.DataSource.queueInvocation.DataSource.ready (D:\WebstormProjects\EDD-Database\edd-api\node_modules\loopback-datasource-juggler\lib\datasource.js:2577:81)
(node:10176) UnhandledPromiseRejectionWarning: Error: Timeout in connecting after 5000 ms
at Timeout._onTimeout (D:\WebstormProjects\EDD-Database\edd-api\node_modules\loopback-datasource-juggler\lib\datasource.js:2572:10)
at ontimeout (timers.js:498:11)
(node:10176) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 2)
(node:10176) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
Are you sure datasources.json allows you to specify "ssl" connections? Where did you get that information? I have checked the documentation and it doesn't show the "ssl" properties you are using; nor are they specified in the MySQL connector properties.
You have two options:
1. Create the connection without using SSL.
2. Create your own connector, or use an existing one with SSL options implemented. Bear in mind that this might cause issues with the LoopBack framework.
Whichever option you choose, remember to whitelist your IP (the IP from which you are trying to access the database instance). You can do so in the Cloud Console, on the "Connections" tab, under "Public IP" > "Authorized networks". If you don't, it can cause the timeout error you are getting.
Try this. I am using LoopBack 3 and it works well for me. You need to create datasources.local.js in order to properly load the CA files.
datasources.local.js
const fs = require('fs');

module.exports = {
  nameOfModel: {
    name: 'db',
    connector: 'mysql',
    host: 'xx.xxx.x.xxx',
    port: 'xxxx',
    user: 'user',
    password: 'password',
    database: 'sql_db',
    ssl: {
      ca: fs.readFileSync(`${__dirname}/server-ca.pem`),
      cert: fs.readFileSync(`${__dirname}/client-cert.pem`),
      key: fs.readFileSync(`${__dirname}/client-key.pem`),
    },
  },
};
Notice that instead of using ssl: true, you need to use an object with those properties.

Inconsistent AWS "Signature not current" errors from Cloudformation API

I have a Ruby client (fog) that makes calls to the AWS CloudFormation API. The client runs on an AWS EC2 instance. For months the client ran without issue, but in the last two weeks I've been getting random authorization failures because of "Signature not current" errors.
Here are some cherry-picked debug details from excon (the underlying library used by fog to make HTTP calls):
request:
  :headers => {
    "User-Agent" => "fog/1.24.0"
    "x-amz-date" => "20150326T152500Z"
  }
excon.error.response:
  :headers => {
    "Date" => "Thu, 26 Mar 2015 15:19:28 GMT"
  }
ERROR: Fog::AWS::CloudFormation::Error: SignatureDoesNotMatch => Signature not yet current: 20150326T152500Z is still later than 20150326T152429Z (20150326T151929Z + 5 min.)
This looks to me like a time sync error: the CFN API is responding with a 15:19:28 timestamp while the request on the client side (the EC2 instance) carries a time of 15:25:00, just over 5 minutes ahead.
Assuming this is something that needs to be addressed by AWS: any suggestions for a workaround?
Your server has some clock drift that is causing the request signature to be invalid, or at least, not valid yet.
On Linux, check whether the NTP service is running on your system, and start it if it isn't:
service ntp start
service ntp status
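To see the drift directly, here is a quick sketch (my own addition, not from the original answer) that compares the local clock against the Date header of an AWS response:

const https = require('https');

// Compare the local clock against the server time reported by AWS.
https.get('https://cloudformation.us-east-1.amazonaws.com/', (res) => {
  const serverTime = new Date(res.headers.date);
  const driftSeconds = (Date.now() - serverTime.getTime()) / 1000;
  console.log(`Local clock is ${driftSeconds.toFixed(1)}s ahead of AWS.`);
  res.resume(); // drain the response; only the headers matter here
});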