I have been assigned a task to retrieve the transactions from a block on any blockchain network and create a log file from them, using the Go programming language. I looked at the Ethereum blockchain and tried to do this with the geth client, but it makes me download the whole blockchain, which is more than 100 GB. So my question is: is there any way to access a block on any blockchain, read its transactions, and use them to create a log file? I just need a heads-up. Help appreciated. Thanks.
You can use the Truffle Ganache Ethereum client. Download it from:
http://truffleframework.com/ganache/
I have created Node.js code that reads the transactions from the latest block.
Step 1: Install Node.js and npm if they are not already installed on your machine.
Step 2: Create a new folder "demo" and create a new package.json file. Place the code below in the package.json file:
{
"name": "transactionRead",
"version": "1.0.0",
"description": "Blockchain Transaction Read",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"dependencies": {
"web3": "^0.19.0"
},
"author": "",
"license": "ISC"
}
Step 3: Create an index.js file and place the code below in it:
var Web3 = require('web3');
var fs = require('fs');
//Create a log file to store transaction
fs.writeFile('log.txt', 'Hello Transaction!', function (err) {
if (err) throw err;
console.log('Created!');
});
// create an instance of web3 using the HTTP provider.
// NOTE in mist web3 is already available, so check first if it's available before instantiating
if (typeof web3 !== 'undefined') {
web3 = new Web3(web3.currentProvider);
} else {
// set the provider you want from Web3.providers
web3 = new Web3(new Web3.providers.HttpProvider("http://localhost:7545"));
}
// Watch for new blocks; when one appears, fetch it and log its transactions
var filter = web3.eth.filter('latest', function (error, blockHash) {
if (!error) {
var block = web3.eth.getBlock(blockHash, true);
if (block.transactions.length > 0) {
console.log("found " + block.transactions.length + " transactions in block " + blockHash);
fs.appendFile('log.txt', JSON.stringify(block.transactions), function (err) {
if (err) throw err;
console.log('Updated!');
});
console.log(JSON.stringify(block.transactions));
} else {
console.log("no transaction in block: " + blockHash);
}
}
});
Step 4: Run the $ node index.js command from the command line.
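If you only need the transactions of one specific block instead of watching for new ones, the same web3 0.19 API can also fetch a block directly by number or hash. A minimal sketch (block number 42 is just a placeholder):
var block = web3.eth.getBlock(42, true); // true => return full transaction objects
block.transactions.forEach(function (tx) {
    console.log(tx.hash, tx.from, tx.to);
});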
Let me know if you need any help.
Thanks,
I have been working with AWS for the last month, and I need to know how we can update a Step Function without changing its name.
The way the documentation suggests making changes to a Step Function is to rename the existing one and add the changes to the file. But that will eliminate the logs that have been created in the AWS CLI.
For example, if I replace the following code with something else, I have to change the whole dynamic of the project in order to make the changes appear in the AWS CLI.
Can somebody please provide a solution for this?
The update can be done through the AWS SDK. Follow the code below; it will keep all the changes in the execution logs as well.
let aws = require('aws-sdk');
let roleArn = `roleARN goes here`;
let params = {
  name: stepFunctionName, // placeholder: the state machine name
  roleArn: roleArn,
  definition: JSON.stringify(definitionGoesHere), // placeholder: the state machine definition
};
let stepFunctions = new aws.StepFunctions();
stepFunctions.createStateMachine(params, function (err, data) {
  if (err) {
    // an error occurred
    console.log("error occurred while creating the step function");
    console.log(err, err.stack);
    if (err.code === "StateMachineAlreadyExists" && err.statusCode === 400) {
      let paramsUpdate = {
        stateMachineArn: "stateMachine ARN for the existing stateMachine",
        definition: JSON.stringify(definition),
        loggingConfiguration: {
          includeExecutionData: true,
        },
        roleArn: roleArn,
      };
      stepFunctions.updateStateMachine(
        paramsUpdate,
        function (error, updateData) {
          if (error) {
            console.log("error occurred while updating the step function.");
            console.log("Error", error.stack);
            return;
          }
          console.log("step function updated successfully");
          console.log("response", updateData);
        }
      );
    } else {
      console.log(
        "step function does not exist and the function creation failed in the process."
      );
      console.log("definition", JSON.stringify(definition)); // definition for the stateMachine
    }
  } else console.log(data); // successful response
});
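If you prefer promises over callbacks, the same create-or-update flow can be written with the SDK's .promise() interface. A sketch with the same placeholder name, role ARN, and definition as above:
const aws = require('aws-sdk');
const stepFunctions = new aws.StepFunctions();

async function createOrUpdateStateMachine(name, roleArn, definition) {
  const definitionJson = JSON.stringify(definition);
  try {
    // Try to create the state machine first.
    return await stepFunctions
      .createStateMachine({ name, roleArn, definition: definitionJson })
      .promise();
  } catch (err) {
    if (err.code === 'StateMachineAlreadyExists') {
      // Placeholder ARN: look it up with listStateMachines, or build it
      // from the account, region and name if you already know them.
      return await stepFunctions
        .updateStateMachine({
          stateMachineArn: 'stateMachine ARN for the existing stateMachine',
          roleArn,
          definition: definitionJson,
        })
        .promise();
    }
    throw err;
  }
}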
I tried this, but it didn't work. I got an error: Error when evaluating SSR module /node_modules/cross-fetch/dist/browser-ponyfill.js:
<script lang="ts">
import fetch from 'cross-fetch';
import { ApolloClient, InMemoryCache, HttpLink } from "@apollo/client";
const client = new ApolloClient({
ssrMode: true,
link: new HttpLink({ uri: '/graphql', fetch }),
uri: 'http://localhost:4000/graphql',
cache: new InMemoryCache()
});
</script>
With SvelteKit, the subject of CSR vs. SSR and where data fetching should happen is a bit deeper than with other somewhat "similar" solutions. The guide below should help you connect some of the dots, but a couple of things need to be stated first.
To define a server-side route, create a file with the .js extension anywhere in the src/routes directory tree. This .js file can have all the import statements required without the JS bundles that they reference being sent to the web browser.
The @apollo/client is quite huge, as it contains the react dependency. Instead, you might want to consider importing just @apollo/client/core, even if you're setting up the Apollo Client to be used only on the server side, as the demo below shows. The @apollo/client is not an ESM package. Notice how it's imported below in order for the project to build with the node adapter successfully.
Try going through the following steps.
Create a new SvelteKit app and choose the 'SvelteKit demo app' in the first step of the SvelteKit setup wizard. Answer the "Use TypeScript?" question with N as well as all of the questions afterwards.
npm init svelte@next demo-app
cd demo-app
Modify the package.json accordingly. Optionally check for package updates with npx npm-check-updates -u.
{
"name": "demo-app",
"version": "0.0.1",
"scripts": {
"dev": "svelte-kit dev",
"build": "svelte-kit build --verbose",
"preview": "svelte-kit preview"
},
"devDependencies": {
"#apollo/client": "^3.3.15",
"#sveltejs/adapter-node": "next",
"#sveltejs/kit": "next",
"graphql": "^15.5.0",
"node-fetch": "^2.6.1",
"svelte": "^3.37.0"
},
"type": "module",
"dependencies": {
"#fontsource/fira-mono": "^4.2.2",
"#lukeed/uuid": "^2.0.0",
"cookie": "^0.4.1"
}
}
Modify the svelte.config.js accordingly.
import node from '@sveltejs/adapter-node';
export default {
kit: {
// By default, `npm run build` will create a standard Node app.
// You can create optimized builds for different platforms by
// specifying a different adapter
adapter: node(),
// hydrate the <div id="svelte"> element in src/app.html
target: '#svelte'
}
};
Create the src/lib/Client.js file with the following contents. This is the Apollo Client setup file.
import fetch from 'node-fetch';
import { ApolloClient, HttpLink } from '@apollo/client/core/core.cjs.js';
import { InMemoryCache } from '@apollo/client/cache/cache.cjs.js';
class Client {
constructor() {
if (Client._instance) {
return Client._instance
}
Client._instance = this;
this.client = this.setupClient();
}
setupClient() {
const link = new HttpLink({
uri: 'http://localhost:4000/graphql',
fetch
});
const client = new ApolloClient({
link,
cache: new InMemoryCache()
});
return client;
}
}
export const client = (new Client()).client;
Create the src/routes/qry/test.js file with the following contents. This is the server-side route. In case your GraphQL schema doesn't have the double function, specify a different query, input(s) and output.
import { client } from '$lib/Client.js';
import { gql } from '@apollo/client/core/core.cjs.js';
export const post = async request => {
const { num } = request.body;
try {
const query = gql`
query Doubled($x: Int) {
double(number: $x)
}
`;
const result = await client.query({
query,
variables: { x: num }
});
return {
status: 200,
body: {
nodes: result.data.double
}
}
} catch (err) {
return {
status: 500,
error: 'Error retrieving data'
}
}
}
Add the following to the load function of the routes/todos/index.svelte file, within the <script context="module">...</script> tag.
try {
const res = await fetch('/qry/test', {
method: 'POST',
credentials: 'same-origin',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify({
num: 19
})
});
const data = await res.json();
console.log(data);
} catch (err) {
console.error(err);
}
Finally, execute the npm install and npm run dev commands. Load the site in your web browser and see the server-side route being queried from the client whenever you hover over the TODOS link on the navbar. In the console's network tab, notice how much quicker the response from the test route is on every second and subsequent request, thanks to the Apollo Client instance being a singleton.
Two things to keep in mind when using phaleth's solution above: caching and authenticated requests.
Since the client is used in the endpoint /qry/test.js, the singleton pattern with the caching behavior makes your server stateful. So if A and then B make the same query, B could end up seeing some of A's data.
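If you want to keep the singleton, one way to avoid serving one user's cached data to another is to opt individual queries out of the cache; fetchPolicy is a standard Apollo Client query option, so in the endpoint above the call would become (sketch):
const result = await client.query({
    query,
    variables: { x: num },
    fetchPolicy: 'no-cache' // skip reading from and writing to the shared cache
});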
The same problem arises if you need authorization headers in your query. You would need to set this up in the setupClient method like so:
setupClient(sometoken) {
    ...
    // setContext is imported from '@apollo/client/link/context'
    const authLink = setContext((_, { headers }) => {
        return {
            headers: {
                ...headers,
                authorization: `Bearer ${sometoken}`
            }
        };
    });
    const client = new ApolloClient({
        credentials: 'include',
        link: authLink.concat(link),
        cache: new InMemoryCache()
    });
    return client;
}
But then, with the singleton pattern, this becomes problematic if you have multiple users.
To keep your server stateless, a workaround is to avoid the singleton pattern and create a new Client(sometoken) in the endpoint.
This is not an optimal solution: it recreates the client on each request and basically just erases the cache. But this solves the caching and authorization concerns when you have multiple users.
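A rough sketch of that per-request approach, assuming the Client class from the earlier setup is also exported, has its singleton short-circuit removed, and forwards the token to setupClient:
// src/routes/qry/test.js — a fresh client (and cache) per request keeps the server stateless.
import { Client } from '$lib/Client.js';
import { gql } from '@apollo/client/core/core.cjs.js';

export const post = async request => {
    const token = request.headers.authorization; // assumption: the token arrives in this header
    const client = new Client(token).client;     // new client and cache for this request only
    // ... run client.query(...) exactly as in the earlier endpoint example
};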
We can use GCP Cloud Functions to start and stop GCP instances, but I need to work on scheduled suspend and resume of GCP instances using a Cloud Function and Cloud Scheduler.
From the GCP documentation, I learned that we can do start and stop using the Cloud Functions available below:
https://github.com/GoogleCloudPlatform/nodejs-docs-samples/tree/master/functions/scheduleinstance
Do we have similar Node.js (or other-language) packages available to suspend and resume GCP instances?
If not, can we create our own for suspend/resume?
When I tried one, I got the error below:
TypeError: compute.zone(...).vm(...).resume is not a function
Edit: thanks Chris and Guillaume. After going through your links I have edited my code, and below is my index.js file now.
For some reason when I do
gcloud functions deploy resumeInstancePubSub --trigger-topic resume-instance --runtime nodejs10 --allow-unauthenticated
I always get
Function 'resumeInstancePubSub1' is not defined in the provided module.
resumeInstancePubSub1 2020-09-04 10:57:00.333 Did you specify the correct target function to execute?
I have not worked with Node.js or JS before; I was expecting something similar to the start/stop documentation, which I could make work easily using the git repo below:
https://github.com/GoogleCloudPlatform/nodejs-docs-samples.git
My index.js file:
// BEFORE RUNNING:
// ---------------
// 1. If not already done, enable the Compute Engine API
// and check the quota for your project at
// https://console.developers.google.com/apis/api/compute
// 2. This sample uses Application Default Credentials for authentication.
// If not already done, install the gcloud CLI from
// https://cloud.google.com/sdk and run
// `gcloud beta auth application-default login`.
// For more information, see
// https://developers.google.com/identity/protocols/application-default-credentials
// 3. Install the Node.js client library by running
// `npm install googleapis --save`
const {google} = require('googleapis');
var compute = google.compute('beta');
authorize(function(authClient) {
var request = {
// Project ID for this request.
project: 'my-project', // TODO: Update placeholder value.
// The name of the zone for this request.
zone: 'my-zone', // TODO: Update placeholder value.
// Name of the instance resource to resume.
instance: 'my-instance', // TODO: Update placeholder value.
resource: {
// TODO: Add desired properties to the request body.
},
auth: authClient,
};
exports.resumeInstancePubSub = async (event, context, callback) => {
try {
const payload = _validatePayload(
JSON.parse(Buffer.from(event.data, 'base64').toString())
);
const options = {filter: `labels.${payload.label}`};
const [vms] = await compute.getVMs(options);
await Promise.all(
vms.map(async (instance) => {
if (payload.zone === instance.zone.id) {
const [operation] = await compute
.zone(payload.zone)
.vm(instance.name)
.resume();
// Operation pending
return operation.promise();
}
})
);
// Operation complete. Instance successfully started.
const message = `Successfully started instance(s)`;
console.log(message);
callback(null, message);
} catch (err) {
console.log(err);
callback(err);
}
};
compute.instances.resume(request, function(err, response) {
if (err) {
console.error(err);
return;
}
// TODO: Change code below to process the `response` object:
console.log(JSON.stringify(response, null, 2));
});
});
function authorize(callback) {
google.auth.getClient({
scopes: ['https://www.googleapis.com/auth/cloud-platform']
}).then(client => {
callback(client);
}).catch(err => {
console.error('authentication failed: ', err);
});
}
Here and here is the documentation for the new beta version of the API. You can see that you can suspend an instance like:
compute.instances.suspend(request, function(err, response) {
  if (err) {
    console.error(err);
    return;
  }
  console.log(JSON.stringify(response, null, 2));
});
And you can resume an instance in a similar way:
compute.instances.resume(request, function(err, response) {
  if (err) {
    console.error(err);
    return;
  }
  console.log(JSON.stringify(response, null, 2));
});
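Putting it together, here is a minimal sketch of an index.js for the Pub/Sub-triggered function. Note that exports.resumeInstancePubSub is defined at the top level of the module, not inside a callback, which is what the "Function ... is not defined in the provided module" deploy error is complaining about (the project, zone and instance values are placeholders):
const {google} = require('googleapis');
const compute = google.compute('beta');

// Top-level export so the Cloud Functions runtime can find the function.
exports.resumeInstancePubSub = async (event, context) => {
  const authClient = await google.auth.getClient({
    scopes: ['https://www.googleapis.com/auth/cloud-platform'],
  });
  const res = await compute.instances.resume({
    project: 'my-project',   // TODO: placeholder value
    zone: 'my-zone',         // TODO: placeholder value
    instance: 'my-instance', // TODO: placeholder value
    auth: authClient,
  });
  console.log(JSON.stringify(res.data, null, 2));
};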
GCP recently added a "create schedule" feature to start and stop VM instances based on a configured schedule.
More details can be found at
https://cloud.google.com/compute/docs/instances/schedule-instance-start-stop#managing_instance_schedules
I am trying to develop an Alexa skill that fetches information from a DynamoDB database. In order to do that, I have to import the aws-sdk.
But for some reason, when I import it, my skill stops working. The skill does not even open. My code is hosted in the Alexa Developer Console.
Here's what happens:
In the testing panel, when I input 'Open Cricket Update' (the app name), Alexa's response is, 'There was a problem with the requested skill's response'.
This happens only when I import the aws-sdk.
What am I doing wrong?
index.js
const Alexa = require('ask-sdk-core');
const AWS = require('aws-sdk');
AWS.config.update({region:'us-east-1'});
const table = 'CricketData';
const docClient = new AWS.DynamoDB.DocumentClient();
const LaunchRequestHandler = {
canHandle(handlerInput) {
return Alexa.getRequestType(handlerInput.requestEnvelope) === 'LaunchRequest';
},
handle(handlerInput) {
const speakOutput = 'Hello! Welcome to cricket update.';
return handlerInput.responseBuilder
.speak(speakOutput)
.reprompt(speakOutput)
.getResponse();
}
};
package.json
{
"name": "hello-world",
"version": "1.1.0",
"description": "alexa utility for quickly building skills",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"author": "Amazon Alexa",
"license": "ISC",
"dependencies": {
"ask-sdk-core": "^2.6.0",
"ask-sdk-model": "^1.18.0",
"aws-sdk": "^2.326.0"
}
}
You are missing the exports.handler block at the end of your index.js that "builds" the skill from your handlers, e.g.
exports.handler = Alexa.SkillBuilders.custom()
.addRequestHandlers(LaunchRequestHandler)
.lambda();
A more complete example can be found here
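Once the skill launches, an intent handler can read from the table with the DocumentClient already configured in your index.js. A rough sketch; GetScoreIntent, matchId and score are assumptions about your interaction model and table schema:
const GetScoreIntentHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && Alexa.getIntentName(handlerInput.requestEnvelope) === 'GetScoreIntent';
    },
    async handle(handlerInput) {
        // get() returns a request; .promise() makes it awaitable (aws-sdk v2).
        const data = await docClient.get({
            TableName: table,
            Key: { matchId: 'latest' } // assumption: the partition key is matchId
        }).promise();
        const speakOutput = data.Item ? `The score is ${data.Item.score}.` : 'No score found.';
        return handlerInput.responseBuilder.speak(speakOutput).getResponse();
    }
};
Remember to register it alongside LaunchRequestHandler in the addRequestHandlers call.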
I am using the GCP console on my browser. I have created a function as following:
function listFiles(bucketName) {
// [START storage_list_files]
// Imports the Google Cloud client library
const Storage = require('@google-cloud/storage');
// Creates a client
const storage = new Storage();
storage
.bucket(bucketName)
.getFiles()
.then(results => {
const files = results[0];
console.log('Files:');
files.forEach(file => {
console.log(file.name);
});
})
.catch(err => {
console.error('ERROR:', err);
});
// [END storage_list_files]
}
exports.helloWorld = function helloWorld (req, res) {
if (req.body.message === undefined) {
// This is an error case, as "message" is required
res.status(400).send('No message defined!');
}
else {
// Everything is ok
console.log(req.body.lat);
console.log(req.body.lon);
listFiles("drive-test-demo");
res.status(200).end();
}
}
Literally all I am trying to do right now is list the files inside a bucket, if a certain HTTPS trigger comes through.
my package.json file is as follows:
{
"name": "sample-http",
"version": "0.0.1",
"dependencies": {
"#google-cloud/storage": "1.5.1"
}
}
and I am getting the error "Cannot find module '@google-cloud/storage'".
Most queries I have seen thus far have been resolved by using npm install, but I don't know how to do that, considering that my index.js and package.json files are stored in a zip file inside a gcloud bucket. Any advice on how to solve this would be much appreciated.
Open a console, change directory to your functions project, and type:
npm install --save @google-cloud/storage
That's all!
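One thing to watch: if you later upgrade @google-cloud/storage to 2.x or newer, the client becomes a named export, so the require line in the function changes slightly:
// In @google-cloud/storage 2.x+ the client is a named export.
const {Storage} = require('@google-cloud/storage');
const storage = new Storage();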