Why is transaction marked as invalid? - blockchain

I am trying to create a transaction that sends 1 ether from one account to another. I'm currently running a local, fully synced Parity node on the Volta test network of the EWF (Energy Web Foundation). Connecting to that node with MetaMask and sending some ether works, but whenever I try the same with web3.js running in a Node.js app, the Parity node gives the following warning/output:
2019-10-25 00:56:50 jsonrpc-eventloop-0 TRACE own_tx Importing transaction: PendingTransaction { transaction: SignedTransaction { transaction: UnverifiedTransaction { unsigned: Transaction { nonce: 1, gas_price: 60000000000, gas: 21000, action: Call(0x2fa24fee2643d577d2859e90bc6d9df0e952034c), value: 1000000000000000000, data: [] }, v: 37, r: 44380982720866416681786190614497685349697533052419104293476368619223783280954, s: 3058706309566473993642661190954381582008760336148306221085742371338506386965, hash: 0x31b4f889f5f10e08b9f10c87f953c9dfded5d0ed1983815c3b1b837700f43702 }, sender: 0x0b9b5323069e9f9fb438e89196b121f3d40fd56e, public: Some(0xa3fc6a484716b844f18cef0039444a3188a295811f133324288cb963f3e5a21dd6ee19c91e42fa745b45a3cf876ff04e0fd1d060ccfe1dab9b1d608dda4c3733) }, condition: None }
2019-10-25 00:56:50 jsonrpc-eventloop-0 DEBUG own_tx Imported to the pool (hash 0x31b4f889f5f10e08b9f10c87f953c9dfded5d0ed1983815c3b1b837700f43702)
2019-10-25 00:56:50 jsonrpc-eventloop-0 WARN own_tx Transaction marked invalid (hash 0x31b4f889f5f10e08b9f10c87f953c9dfded5d0ed1983815c3b1b837700f43702)
When I checked the balances of the accounts, nothing had changed. I've tried increasing gasPrice, adding a 'from' key/value pair to txObject, etc. I have also started the Parity node with --no-persistent-txqueue so that it doesn't cache too many transactions, as suggested here. That didn't change anything either, so I still get the same error and the transaction doesn't go through. What is causing this problem and how can I solve it?
// Tx is the transaction class (e.g. from ethereumjs-tx); pk is the sender's private key as a Buffer
web3.eth.getTransactionCount(from, (err, txCount) => {
  const txObject = {
    nonce: web3.utils.toHex(txCount),
    from: from,
    to: to,
    value: web3.utils.toHex(web3.utils.toWei(val, 'ether')),
    gas: web3.utils.toHex(21000),
    gasPrice: web3.utils.toHex(web3.utils.toWei('60', 'gwei'))
  };

  // Sign the transaction
  const tx = new Tx(txObject);
  tx.sign(pk);
  const serializedTx = tx.serialize();
  const raw = '0x' + serializedTx.toString('hex');

  // Broadcast the transaction to the network
  web3.eth.sendSignedTransaction(raw, (err, txHash) => {
    if (txHash === undefined) {
      res.render('sendTransaction', {
        txSuccess: 0,
        blockHash: 'Hash Undefined'
      });
      res.end();
    } else {
      res.render('sendTransaction', {
        txSuccess: 1,
        blockHash: txHash
      });
      res.end();
    }
  });
});
Any suggestion is welcome,
Thanks!

The code looks fine, so it must be something small; you're very close!
What does the error object contain in the web3.eth.sendSignedTransaction callback?
Check your nonce to make sure it has not been used yet. https://volta-explorer.energyweb.org/address/0x0b9b5323069e9f9fb438e89196b121f3d40fd56e/transactions shows a few transactions from the sender address, with "4" being the next nonce value.
Try generating the raw transaction via JavaScript and broadcasting it manually via the command line.
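For the nonce check mentioned above, a minimal sketch (assuming the same web3 instance and from address as in the question): if the pending count is ahead of the confirmed one, an earlier transaction from that sender is still stuck in the queue.
// Hedged sketch: compare the confirmed and pending transaction counts for the sender
web3.eth.getTransactionCount(from, 'latest', (err, confirmed) => {
  web3.eth.getTransactionCount(from, 'pending', (err2, pending) => {
    console.log('confirmed nonce:', confirmed, 'pending nonce:', pending);
  });
});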


How likely is it to miss a Solidity event?

Hi, I am making a finance application that depends on events from the blockchain.
Basically, I update my database based on events I receive using web3.js, and when a user asks, I sign with the private key so that the contract can give the user money.
My only concern is: can I depend on events? Could there be a case where I miss events?
Here is my code for doing so:
const contract = new web3.eth.Contract(abi, contract_address)
const stale_keccak256 = "0x507ac39eb33610191cd8fd54286e91c5cc464c262861643be3978f5a9f18ab02";
const unStake_keccak256 = "0x4ac743692c9ced0a3f0052fb9917c0856b6b12671016afe41b649643a89b1ad5";
const getReward_keccak256 = "0x25c30c62c42b51e4f667b70ef60f1f683c376f6ace28312ed45a40665e01af37";

let userRepository: Repository<UserData> = connection.getRepository(UserData);
let globalRepository: Repository<GlobalStakingInfo> = connection.getRepository(GlobalStakingInfo);
let userStakingRepository: Repository<UserStakingInfo> = connection.getRepository(UserStakingInfo);
let transactionRepository: Repository<Transaction> = connection.getRepository(Transaction);

const topics = []
web3.eth.subscribe('logs', {
    address: contract_address, topics: topics
  },
  function (error: Error, result: Log) {
    if (error) console.log(error)
  }).on("data", async function (log: Log) {
    let response: Boolean = false;
    try {
      response = await SaveTransaction(rpc_url, log.address, log.transactionHash, transactionRepository)
    } catch (e) {
    }
    if (response) {
      try {
        let global_instance: GlobalStakingInfo | null = await globalRepository.findOne({where: {id: 1}})
        if (!global_instance) {
          global_instance = new GlobalStakingInfo()
          global_instance.id = 1;
          global_instance = await globalRepository.save(global_instance);
        }
        if (log.topics[0] === stale_keccak256) {
          await onStake(web3, log, contract, userRepository, globalRepository, userStakingRepository, global_instance);
        } else if (log.topics[0] === unStake_keccak256) {
          await onUnStake(web3, log, contract, userStakingRepository, userRepository, globalRepository, global_instance)
        } else if (log.topics[0] === getReward_keccak256) {
          await onGetReward(web3, log, userRepository)
        }
      } catch (e) {
        console.log("I MADE A BOBO", e)
      }
    }
  }
)
The code works and everything; I am just concerned whether I could miss an event, because finances are involved and people will lose money if missing events is a thing.
Please advise.
You can increase redundancy by adding more instances of the listener connected to other nodes.
And also by polling past logs - again recommended to use a separate node.
Having multiple instances doing practically the same thing will result in multiplication of incoming data, so don't forget to store only unique logs. This might be a bit tricky, because theoretically one transaction ID can produce the same log twice (e.g. through a multicall), so the unique key could be a combination of a transaction ID as well as the log index (unique per block).
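A minimal sketch of that past-log polling plus deduplication idea, assuming the same web3, contract_address and topics as in the question; the transaction hash combined with the log index serves as the unique key:
// Hedged sketch: periodically re-fetch recent logs and process only unseen ones.
// Persisting the last processed block (fromBlock) is left to the caller.
const seen = new Set();
async function pollPastLogs(fromBlock) {
  const logs = await web3.eth.getPastLogs({
    address: contract_address,
    topics: topics,
    fromBlock: fromBlock,
    toBlock: 'latest'
  });
  for (const log of logs) {
    const key = log.transactionHash + '-' + log.logIndex; // unique per block
    if (seen.has(key)) continue;
    seen.add(key);
    // hand the log to the same handler the subscription above uses
  }
}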

ethereum.on: how to get an error if the chain is not added to MetaMask yet

With this method the app listens for chain changes:
ethereum.on('chainChanged', (chainId) => {
})
but if the chain the user is switching to has not been added to MetaMask yet, it throws:
inpage.js:1 MetaMask - RPC Error: Unrecognized chain ID "0x89".
Try adding the chain using wallet_addEthereumChain first. Object
Of course, there is a method to add a new chain to MetaMask, but how can I catch this MetaMask error? try/catch outside ethereum.on gives nothing.
Thanks!
Write a mapping for MetaMask networks:
const NETWORKS = {
  1: "Ethereum Main Network",
  3: "Ropsten Test Network",
  4: "Rinkeby Test Network",
  5: "Goerli Test Network",
  42: "Kovan Test Network",
  56: "Binance Smart Chain",
  1337: "Ganache",
};

const getChainId = async () => {
  const chainId = await web3.eth.getChainId();
  // handle the error here
  if (!chainId) {
    throw new Error(
      "Cannot retrieve an account. Please refresh the browser"
    );
  }
  return NETWORKS[chainId];
};
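A possible way to combine that mapping with the chainChanged listener from the question (a sketch; chainChanged delivers a hex chain ID, so it is parsed before the lookup):
ethereum.on('chainChanged', (chainIdHex) => {
  const name = NETWORKS[parseInt(chainIdHex, 16)];
  if (!name) {
    // chain is not in the mapping (and possibly not added to MetaMask yet)
    console.warn('Unknown chain ' + chainIdHex);
    return;
  }
  console.log('Switched to ' + name);
});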
The simplest approach for me is to add the chain first, before switching; after it has been added successfully, MetaMask will ask the user to switch to the newly added network. Here is sample code to add the BSC network:
export async function addBSCToMetamask() {
  if (typeof window !== 'undefined') {
    window.ethereum.request({
      jsonrpc: '2.0',
      method: 'wallet_addEthereumChain',
      params: [
        {
          chainId: '0x38',
          chainName: 'Binance Smart Chain Mainnet',
          rpcUrls: ['https://bsc-dataseed.binance.org/'],
          nativeCurrency: {
            name: 'BNB',
            symbol: 'BNB',
            decimals: 18
          },
          blockExplorerUrls: ['https://bscscan.com']
        }
      ],
      id: 0
    })
  }
}
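To actually catch the "Unrecognized chain ID" error from the question, the usual pattern (sketched here; error code 4902 is what MetaMask uses for a chain it does not know yet) is to wrap wallet_switchEthereumChain in try/catch and fall back to wallet_addEthereumChain:
async function switchOrAddChain(chainParams) {
  try {
    await window.ethereum.request({
      method: 'wallet_switchEthereumChain',
      params: [{ chainId: chainParams.chainId }]
    });
  } catch (switchError) {
    // 4902 = the requested chain has not been added to MetaMask
    if (switchError.code === 4902) {
      await window.ethereum.request({
        method: 'wallet_addEthereumChain',
        params: [chainParams]
      });
    } else {
      throw switchError;
    }
  }
}
switchOrAddChain can be passed the same params object used in addBSCToMetamask above.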

Lambda trigger is not working as intended with bulk data

I'm using lambda triggers to detect an insertion into a DynamoDB table (Tweets). Once triggered, I want to take the message in the event, and get the sentiment for it using Comprehend. I then want to update a second DynamoDB table (SentimentAnalysis) where I ADD + 1 to a value depending on the sentiment.
This works fine if I manually insert a single item, but I want to be able to use the Twitter API to insert bulk data into my DynamoDB table and have every tweet analysed for its sentiment. The Lambda function works fine if the count specified in the Twitter params is <= 5, but anything above that causes an issue with the update to the SentimentAnalysis table; instead, the trigger keeps repeating itself with no sign of progress or of stopping.
This is my lambda code:
let AWS = require("aws-sdk");
let comprehend = new AWS.Comprehend();
let documentClient = new AWS.DynamoDB.DocumentClient();

exports.handler = (event, context) => {
  event.Records.forEach(record => {
    if (record.eventName == "INSERT") {
      //console.log(JSON.stringify(record.dynamodb.NewImage.tweet.S));
      let params = {
        LanguageCode: "en",
        Text: JSON.stringify(record.dynamodb.NewImage.tweet.S)
      };
      comprehend.detectSentiment(params, (err, data) => {
        if (err) {
          console.log("\nError with call to Comprehend:\n " + JSON.stringify(err));
        } else {
          console.log("\nSuccessful call to Comprehend:\n " + data.Sentiment);
          //when comprehend is successful, update the sentiment analysis data
          //we can use the ADD expression to increment the value of a number
          let sentimentParams = {
            TableName: "SentimentAnalysis",
            Key: {
              city: record.dynamodb.NewImage.city.S,
            },
            UpdateExpression: "ADD " + data.Sentiment.toLowerCase() + " :pr",
            ExpressionAttributeValues: {
              ":pr": 1
            }
          };
          documentClient.update(sentimentParams, (err, data) => {
            if (err) {
              console.error("Unable to read item " + JSON.stringify(sentimentParams.TableName));
            } else {
              console.log("Successful Update: " + JSON.stringify(data));
            }
          });
        }
      });
    }
  });
};
This is the image of a successful call; it works with the first few tweets.
This is the unsuccessful call right after the first image. The request always times out.
The timeout is why it's happening repeatedly. If the Lambda times out or otherwise errs, it will cause the batch to be reprocessed. You need to handle this because the delivery is "at least once". You also need to figure out the cause of the timeout. It might be as simple as using smaller batches, or a more complex solution using Step Functions. You might just be able to increase the timeout on the Lambda.
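One way to make the timeout easier to reason about is to await every Comprehend and DynamoDB call before the handler returns, so no work is left dangling when the batch grows. A minimal sketch of the question's handler rewritten that way, assuming aws-sdk v2 (the .promise() variants of the same calls):
let AWS = require("aws-sdk");
let comprehend = new AWS.Comprehend();
let documentClient = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event) => {
  const inserts = event.Records.filter(r => r.eventName === "INSERT");
  await Promise.all(inserts.map(async record => {
    // detect sentiment for the raw tweet text
    const { Sentiment } = await comprehend.detectSentiment({
      LanguageCode: "en",
      Text: record.dynamodb.NewImage.tweet.S
    }).promise();
    // increment the per-city counter for that sentiment
    await documentClient.update({
      TableName: "SentimentAnalysis",
      Key: { city: record.dynamodb.NewImage.city.S },
      UpdateExpression: "ADD " + Sentiment.toLowerCase() + " :pr",
      ExpressionAttributeValues: { ":pr": 1 }
    }).promise();
  }));
};
If the timeout persists, the batch size on the DynamoDB stream event source mapping can be lowered so each invocation handles fewer records.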

Apollo client writeQuery updates the store, but UI components only update after the second function call

"apollo-cache-inmemory": "^1.6.2",
"apollo-client": "^2.6.3",
I set up a simple subscription with the client.subscribe method and try to update the store with the writeQuery method:
export default class App extends Component<Props> {
  componentDidMount() {
    this.purchaseSubscription = client.subscribe({
      query: PURCHASE_ASSIGNED_SUBSCRIPTION,
      variables: { status: ['INPREPARATION', 'PROCESSED', 'READYFORCOLLECTION', 'ONTHEWAY', 'ATLOCATION'] },
    }).subscribe({
      next: (subscriptionData) => {
        const { cache } = client;
        const prev = cache.readQuery({
          query: MY_PURCHASES,
          variables: { status: ['INPREPARATION', 'PROCESSED', 'READYFORCOLLECTION', 'ONTHEWAY', 'ATLOCATION'] },
        });
        const newPurchase = subscriptionData.data.purchaseAssignedToMe;
        const data = { myPurchases: [...prev.myPurchases, newPurchase] };
        cache.writeQuery({
          query: MY_PURCHASES,
          variables: { status: ['INPREPARATION', 'PROCESSED', 'READYFORCOLLECTION', 'ONTHEWAY', 'ATLOCATION'] },
          data,
        });
      },
      error: (err) => { console.error('err', err) },
    });
  }

  render() {
    return (
      <ApolloProvider client={client}>
        <AppContainer />
      </ApolloProvider>
    );
  }
}
The store gets updated after the call; however, the UI component is only re-rendered on the second publish event.
The UI component is set up the following way:
<Query
  query={MY_PURCHASES}
  variables={{ status: ['INPREPARATION', 'PROCESSED', 'READYFORCOLLECTION', 'ONTHEWAY', 'ATLOCATION'] }}
>
  ...
</Query>
By reading the cache after writeQuery is called, I was able to validate that the store reflects the proper state; however, the UI component only gets updated on every second call.
What am I missing here?
ApolloClient.subscribe's next function is very similar to how updateQueries works in Apollo Client's mutate function, with the exception that cache.writeQuery does not broadcast the changes if it is not called from the Mutation's update function.
SOLUTION: use client.writeQuery(...) instead of cache.writeQuery(...)
Note: The update function receives cache rather than client as its first parameter. This cache is typically an instance of InMemoryCache, as supplied to the ApolloClient constructor when the client was created. In case of the update function, when you call cache.writeQuery, the update internally calls broadcastQueries, so queries listening to the changes will update. However, this behavior of broadcasting changes after cache.writeQuery happens only with the update function. Anywhere else, cache.writeQuery would just write to the cache, and the changes would not be immediately broadcast to the view layer. To avoid this confusion, prefer client.writeQuery when writing to cache.
Source: https://www.apollographql.com/docs/react/essentials/mutations/#updating-the-cache
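Applied to the subscription from the question, the fix is a small change in the next handler (a sketch using the same query and variables as above):
next: (subscriptionData) => {
  const variables = { status: ['INPREPARATION', 'PROCESSED', 'READYFORCOLLECTION', 'ONTHEWAY', 'ATLOCATION'] };
  const prev = client.readQuery({ query: MY_PURCHASES, variables });
  const newPurchase = subscriptionData.data.purchaseAssignedToMe;
  // client.writeQuery (unlike cache.writeQuery) broadcasts the change, so the <Query> component re-renders immediately
  client.writeQuery({
    query: MY_PURCHASES,
    variables,
    data: { myPurchases: [...prev.myPurchases, newPurchase] },
  });
},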

AWS javascript SDK request.js send request function execution time gradually increases

I am using aws-sdk to push data to a Kinesis stream.
I am using PutRecord to achieve real-time data push.
I am observing the same delay with putRecords in the case of batch writes as well.
I have tried this out with 4 records, where I am not crossing any shard limit.
Below is my Node.js HTTP agent configuration. The default maxSockets value is set to Infinity.
Agent {
domain: null,
_events: { free: [Function] },
_eventsCount: 1,
_maxListeners: undefined,
defaultPort: 80,
protocol: 'http:',
options: { path: null },
requests: {},
sockets: {},
freeSockets: {},
keepAliveMsecs: 1000,
keepAlive: false,
maxSockets: Infinity,
maxFreeSockets: 256 }
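For reference, the keepAlive: false in the dump above means every request opens a new connection; if connection setup turns out to be part of the delay, the SDK's agent can be overridden roughly like this (a sketch, not a confirmed fix):
const https = require('https');
const AWS = require('aws-sdk');
// Hedged sketch: reuse TCP/TLS connections across putRecord calls
const agent = new https.Agent({ keepAlive: true, maxSockets: 50 });
AWS.config.update({ httpOptions: { agent: agent } });
const kinesis = new AWS.Kinesis();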
Below is my code.
I am using the following code to trigger the putRecord call:
event.Records.forEach(function(record) {
  var payload = new Buffer(record.kinesis.data, 'base64').toString('ascii');
  // put record request
  evt = transformEvent(payload);
  promises.push(writeRecordToKinesis(kinesis, streamName, evt));
});
The event structure is:
evt = {
  Data: new Buffer(JSON.stringify(payload)),
  PartitionKey: payload.PartitionKey,
  StreamName: streamName,
  SequenceNumberForOrdering: dateInMillis.toString()
};
This event is used in the put request.
function writeRecordToKinesis(kinesis, streamName, evt) {
  console.time('WRITE_TO_KINESIS_EXECUTION_TIME');
  var deferred = Q.defer();
  try {
    kinesis.putRecord(evt, function(err, data) {
      if (err) {
        console.warn('Kinesis putRecord %j', err);
        deferred.reject(err);
      } else {
        console.log(data);
        deferred.resolve(data);
      }
      console.timeEnd('WRITE_TO_KINESIS_EXECUTION_TIME');
    });
  } catch (e) {
    console.error('Error occured while writing data to Kinesis' + e);
    deferred.reject(e);
  }
  return deferred.promise;
}
Below is output for 3 messages.
WRITE_TO_KINESIS_EXECUTION_TIME: 2026ms
WRITE_TO_KINESIS_EXECUTION_TIME: 2971ms
WRITE_TO_KINESIS_EXECUTION_TIME: 3458ms
Here we can see a gradual increase in response time and function execution time.
I have added counters to the aws-sdk request.js class. I can see the same pattern there as well.
Below is the code snippet from the aws-sdk request.js class which executes the put request.
send: function send(callback) {
  console.time('SEND_REQUEST_TO_KINESIS_EXECUTION_TIME');
  if (callback) {
    this.on('complete', function (resp) {
      console.timeEnd('SEND_REQUEST_TO_KINESIS_EXECUTION_TIME');
      callback.call(resp, resp.error, resp.data);
    });
  }
  this.runTo();
  return this.response;
},
Output for send request:
SEND_REQUEST_TO_KINESIS_EXECUTION_TIME: 1751ms
SEND_REQUEST_TO_KINESIS_EXECUTION_TIME: 1816ms
SEND_REQUEST_TO_KINESIS_EXECUTION_TIME: 2761ms
SEND_REQUEST_TO_KINESIS_EXECUTION_TIME: 3248ms
Here you can see it is increasing gradually.
Can anyone please suggest how I can reduce this delay?
3 seconds to push a single record to Kinesis is not acceptable at all.