Deploying to Fuji network with Hardhat creates HttpProviderError - blockchain

I am having an issue deploying a contract to the Fuji C-Chain with Hardhat. Here is my hardhat.config.js file:
import { HardhatUserConfig } from "hardhat/config";

// PRIVATE_KEY is loaded elsewhere (e.g. from an environment variable)
const config: HardhatUserConfig = {
  networks: {
    fuji: {
      url: 'https://api.avax-test.network/ext/bc/C/rpc',
      chainId: 43113,
      gasPrice: 20000000000,
      accounts: [`0x${PRIVATE_KEY}`],
    },
    avalanche: {
      url: 'https://api.avax.network/ext/bc/C/rpc',
      chainId: 43114,
      gasPrice: 20000000000,
      accounts: [`0x${PRIVATE_KEY}`],
    },
  },
};

export default config;
Here is the command for deploying the contract:
npx hardhat run --network fuji scripts/deploy.ts
I am getting the following error:
ProviderError: HttpProviderError
at HttpProvider.request (E:\SolidityProject\Leveor\nft-platform-script\node_modules\hardhat\src\internal\core\providers\http.ts:78:19)
I have also tried a different RPC URL provided by Infura with the API key, but it gave the same error.
How can I resolve this?

The problem was setting the gasPrice in the Hardhat config file. On the Fuji network the gas price is calculated automatically, and explicitly setting one causes the error ProviderError: HttpProviderError (the error message could be better). The same applies to the Celo network. But for Ethereum, Polygon, and Binance Smart Chain you CAN explicitly define the gas price.
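For reference, a minimal sketch of the corrected fuji entry (assuming the same RPC URL and PRIVATE_KEY setup as above) simply omits gasPrice:
fuji: {
  url: 'https://api.avax-test.network/ext/bc/C/rpc',
  chainId: 43113,
  // no gasPrice: on Fuji the node estimates it automatically
  accounts: [`0x${PRIVATE_KEY}`],
},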

Related

AWS ElastiCache Redis Cluster `MOVED XXXXX ip:6379` error

I am trying to connect to an AWS ElastiCache Redis cluster and I keep getting this error:
Error MOVED 12218 ip:6379
Following is the code
https://www.npmjs.com/package/redis - redis: ^4.0.1
import {createClient} from "redis";
const client = createClient({url: "redis://xyz.abc.clustercfg.use2.cache.amazonaws.com:6379"});
await client.connect();
console.log("client connected");
console.log(await client.ping());
OUTPUT:
client connected
PONG
But when I do await client.get(key) or await client.set(key, value) I get the MOVED error.
I even followed this https://github.com/redis/node-redis/issues/1782, but I am still getting the same MOVED 12218 ip:6379 error.
I am assuming you are using a cluster-mode-enabled Redis in AWS. I am using redis version "redis": "^4.1.0". If so, you can try the code below:
const redis = require('redis');

// Connect to the cluster configuration endpoint; node-redis then discovers the shards
const client = redis.createCluster({
  rootNodes: [
    {
      url: `redis://${ConfigurationEndpoint}:${port}`,
    },
  ],
  useReplicas: true,
});
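A brief usage sketch (assuming the same ConfigurationEndpoint and port as above) then connects and issues commands exactly as with a single-node client; the cluster client routes each key to the right shard:
await client.connect();
await client.set('greeting', 'hello'); // routed to the correct shard automatically
console.log(await client.get('greeting')); // "hello"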
Just some info: I realized that I had set up a Redis Cluster Helm chart but was connecting with Jedis configured for a Sentinel/standalone Redis setup. Once I switched from Jedis to JedisCluster, the error went away. Now this is with a Java client, so your setup might be different, but it's something to look at.

Using Hardhat to deploy smart contract to local Polygon node

I have followed this tutorial to learn how I can use Hardhat to deploy a smart contract on a Polygon testnet (and it worked just fine).
Now I want to run some tests on my local Polygon blockchain instance, which is running and working fine on my local computer (with 4 nodes). I know it works because I can interact with it via JSON-RPC and gRPC, checking balances, status, etc.
So, in my hardhat.config.js I have this settings:
require("#nomiclabs/hardhat-ethers");
module.exports = {
defaultNetwork: "matic",
networks: {
hardhat: {
},
matic: {
url: "http://localhost:10002"
}
},
solidity: {
version: "0.8.0",
settings: {
optimizer: {
enabled: true,
runs: 200
}
}
},
paths: {
sources: "./contracts",
tests: "./test",
cache: "./cache",
artifacts: "./artifacts"
},
mocha: {
timeout: 20000
}
}
I then compiled and tried to deploy Hardhat's sample script:
$ npx hardhat compile
> Downloading compiler 0.8.0
> Compiled 2 Solidity files successfully
$ npx hardhat run scripts/sample-script.js --network matic
> ProviderError: the method eth_accounts does not exist/is not available
>   at HttpProvider.request (/home/edu/projects/test-hardhat-polygon/node_modules/hardhat/src/internal/core/providers/http.ts:74:19)
>   at GanacheGasMultiplierProvider.request (/home/edu/projects/test-hardhat-polygon/node_modules/hardhat/src/internal/core/providers/gas-providers.ts:312:34)
It seems Hardhat is calling the method eth_accounts which does not exist in my Polygon-Edge local blockchain.
What am I doing wrong?
Thanks in advance
Good question... I ran into your question while troubleshooting the same issue. I'm running polygon-edge server --dev... and couldn't deploy smart contracts with Hardhat or Truffle. An alternative is to use the Remix IDE and your wallet to deploy via Injected Web3, per the project's "Does polygon-edge support smart contract? #411" discussion.
Deploy & Run Transactions within the Remix IDE, with selected environment with deploying via transaction.
Once completed you'll be able to review and approve the transaction within your wallet which in turn deploys the contract via transaction. Pause for a second and be vigilant of scammers, fake remix IDE clones, or incidentally deploying this transaction to a live network using real funds! With that said: how to import a network to Metamask if you have yet to do so. If you need an account with funds, what worked for me was importing into Metamask an account via private key for which I included premined currency. To retrieve the private key I referred to my validator node's data folder contained the private key at $data-dir/consensus/validator.key file.
Regarding the actual error and web3.js: the error message is accurate. If you revisit the polygon-edge docs on JSON-RPC commands, note that the eth_accounts method is missing. This is problematic for the underlying node modules that rely on the web3.eth.getAccounts() call to set up contract deployment, which in turn impacts Truffle and Hardhat.
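One hedged workaround (a sketch, not something I have verified against polygon-edge) is to give Hardhat a private key directly in the network config; as far as I understand, Hardhat then manages that account locally and signs transactions itself instead of asking the node for eth_accounts. Here PRIVATE_KEY is assumed to hold the key of a funded, premined account on the local chain:
matic: {
  url: "http://localhost:10002",
  // With an explicit key, Hardhat answers eth_accounts locally and sends raw transactions
  accounts: [`0x${PRIVATE_KEY}`]
}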

Polygon: Solidity build failed for 'pragma solidity ^0.8.7' with chainlink

On the Polygon chain, the latest Chainlink version is not supported.
If I remove the Chainlink library, it deploys successfully.
Chainlink 0.8 works fine on the Ropsten test network, but on the Mumbai testnet I am not able to deploy the contract.
hardhat.config.js [edit:1]
error log:
remix error log:
You can deploy the contract with the Remix IDE online, using a Web3 connection with MetaMask.
Can you share your hardhat config file or error logs?
This should be inside your Hardhat networks config:
mumbai: {
  url: "https://rpc-mumbai.matic.today",
  // url: API_URL, // or Infura API URL
  accounts: [`0x${PRIVATE_KEY}`],
  gasPrice: 10000000000,
  gasLimit: 9000000
},
Edit: It's failing because it can't estimate the gas limit of the transaction. You can set it manually.
Inside your deploy.js script, set the gasPrice and gasLimit. Depending on whether you are using web3.js or ethers.js, this code will be different. This is an example from another minting function:
FT = await contract.ownerMint(WALLET_ADDRESS, { gasLimit: 285000, gasPrice: ethers.utils.parseUnits('30', 'gwei') });
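For a deployment specifically, a hedged ethers.js sketch (MyContract is a placeholder contract name) passes the same kind of overrides to deploy():
// Hypothetical deploy.js snippet using ethers.js via Hardhat; "MyContract" is a placeholder.
const Factory = await ethers.getContractFactory("MyContract");
const contract = await Factory.deploy({
  gasLimit: 9000000,
  gasPrice: ethers.utils.parseUnits("10", "gwei") // matches the 10 gwei set in the network config
});
await contract.deployed();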
Edit 2: You can always deploy it with Remix online using MetaMask.

BSC Testnet: Truffle Migrate ETIMEDOUT

I need to deploy my smart contract to BSC Testnet
I always get this:
Error: PollingBlockTracker - encountered an error while attempting to update latest block:
Error: ETIMEDOUT
I tried to change the RPC specified here https://docs.binance.org/smart-chain/developer/rpc.html#rate-limit
All of them, yet still the same.
One thing is, I tried deploying it to Ropsten instead, just for fun.
And it succeeded.
Is there any problem with the BSC Testnet RPC nowadays?
Here is the relevant snippet from my truffle-config.js:
testnet: {
  provider: () => new HDWalletProvider(mnemonic, `https://data-seed-prebsc-1-s2.binance.org:8545`),
  network_id: 97, // 3 for ropsten, 97 for bsc test
  confirmations: 2,
  timeoutBlocks: 2000,
  skipDryRun: true,
  networkCheckTimeout: 1000000
},
I searched around: some people use WebSockets (wss), some change the RPC URL, some add networkCheckTimeout.
I tried all of them (except wss, since I don't see it being provided for the BSC Testnet).
But nothing works.
Any suggestions? Thank you.
When I used other endpoints, the issue was fixed. You can try the endpoints below.
BSC Testnet RPC endpoints:
https://data-seed-prebsc-1-s1.binance.org:8545/
https://data-seed-prebsc-2-s1.binance.org:8545/
https://data-seed-prebsc-1-s2.binance.org:8545/
https://data-seed-prebsc-2-s2.binance.org:8545/
https://data-seed-prebsc-1-s3.binance.org:8545/
https://data-seed-prebsc-2-s3.binance.org:8545/
I searched for more than a week and finally found the answer. Not only should you change the pollingInterval, but also do this: in the web3-provider-engine module, change the timeout to a bigger number. Remember that the module may be imported more than once, so change the value everywhere in your project.
xhr({
  uri: targetUrl,
  method: 'POST',
  headers: {
    'Accept': 'application/json',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify(newPayload),
  rejectUnauthorized: false,
  timeout: 2000, // change this value to something bigger
The timeout value is hardcoded; maybe nowadays most people have very good Internet connections and only a few of us suffer from this, so I had to try very hard to find the answer. After I changed this configuration, I never hit the timeout again!
bsc: {
  networkCheckTimeout: 999999,
  provider: () => new HDWalletProvider(mnemonic, `https://data-seed-prebsc-1-s1.binance.org:8545`),
  network_id: 97, // BSC testnet id
  gas: 5500000,
  confirmations: 10, // # of confirmations to wait between deployments (default: 0)
  timeoutBlocks: 200, // # of blocks before a deployment times out (minimum/default: 50)
  skipDryRun: true // Skip dry run before migrations? (default: false for public nets)
},
Adding the network check timeout should help.
The problem is that BSC produces blocks so quickly that it exceeds the default number of blocks Truffle is configured to wait for. You can solve this by adding the networkCheckTimeout and timeoutBlocks fields in your network config:
bsc: {
  networkCheckTimeout: 1000000,
  timeoutBlocks: 200
}
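Merged into the questioner's original testnet entry, that would look roughly like this (a sketch assuming the same mnemonic and HDWalletProvider setup):
testnet: {
  provider: () => new HDWalletProvider(mnemonic, `https://data-seed-prebsc-1-s1.binance.org:8545`),
  network_id: 97,
  confirmations: 2,
  networkCheckTimeout: 1000000, // wait longer while polling the network
  timeoutBlocks: 200, // allow this many blocks before a deployment times out
  skipDryRun: true
},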
The best way I found to avoid this error is to run
truffle compile
manually before migrating.
Then, when migrating, add the --compile-none option:
truffle migrate --network xxx --compile-none

Google.Cloud.Diagnostics.AspNetCore 3.0.0-beta13 Not working on GKE

Purpose
Use Google Cloud Diagnostics on a .net core 2.2 REST API, for Logging, Tracing and Error Reporting in two possible scenarios:
Local execution on Visual Studio 2017
Deployed on a Docker Container and Running on a GCP Kubernetes Engine
Environment details
.NET version: 2.2.0
Package name and version: Google.Cloud.Diagnostics.AspNetCore 3.0.0-beta13
Description
For configuring Google Cloud Diagnostics, two documentation sources were used:
Google.Cloud.Diagnostics.AspNetCore
Enable configuration support for Google.Cloud.Diagnostics.AspNetCore #2435
Based on the above documentation the UseGoogleDiagnostics extension method on IWebHostBuilder was used, as this configures Logging, Tracing and Error Reporting middleware.
According to the 2) link, the following table presents the information needed when using the UseGoogleDiagnostics method:
For local execution => project_id, module_id, and version_id are needed
For GKE => module_id and version_id are needed
The .net core configuration files were used to provide the above information for each deployment:
appsettings.json
{
  "GCP": {
    "ServiceID": "my-service",
    "VersionID": "v1"
  }
}
appsettings.Development.json
{
  "GCP": {
    "ID": "my-id"
  }
}
Basically, the above will render the following configuration:
On Local execution
return WebHost.CreateDefaultBuilder(args)
    .UseGoogleDiagnostics("my-id", "my-service", "v1")
    .UseStartup<Startup>();
On GKE
return WebHost.CreateDefaultBuilder(args)
    .UseGoogleDiagnostics(null, "my-service", "v1")
    .UseStartup<Startup>();
To guarantee I'm using the correct information, I checked two places in the GCP UI:
On the Endpoints listing, I checked the service details:
Service name: my-service Active version: v1
I checked the endpoint logs for a specific API POST endpoint:
{
  insertId: "e6a63a28-1451-4132-ad44-a4447c33a4ac#a1"
  jsonPayload: {…}
  logName: "projects/xxx%2Fendpoints_log"
  receiveTimestamp: "2019-07-11T21:03:34.851569606Z"
  resource: {
    labels: {
      location: "us-central1-a"
      method: "v1.xxx.ApiOCRPost"
      project_id: "my-id"
      service: "my-service"
      version: "v1"
    }
    type: "api"
  }
  severity: "INFO"
  timestamp: "2019-07-11T21:03:27.397632588Z"
}
Am I doing anything wrong, or is it a bug in Google.Cloud.Diagnostics.AspNetCore 3.0.0-beta13?
When executing the service Endpoints, for each specific deployment, Google Cloud Diagnostics behaves differently:
On Local execution (VS2017) => Logging, Tracing and Error Reporting work as expected, everything showing in GCP UI
On GKE Deployment => Logging, Tracing and Error Reporting DO NOT Work, nothing shows in GCP UI
I've tried several variations, hardcoding the values directly in the code, etc., but no matter what I do, Google Cloud Diagnostics does not work when deployed to GKE:
Hardcoding the values directly:
return WebHost.CreateDefaultBuilder(args)
    .UseGoogleDiagnostics(null, "my-service", "v1")
    .UseStartup<Startup>();
Without the "v" in the version:
return WebHost.CreateDefaultBuilder(args)
    .UseGoogleDiagnostics(null, "my-service", "1")
    .UseStartup<Startup>();