Migration ran out of gas - blockchain

Hi, I am using Geth, and when I try to run truffle migrate it gives an error.
My truffle-config.js is below:
development: {
  host: "127.0.0.1", // Localhost (default: none)
  port: 8545,        // Standard Ethereum port (default: none)
  network_id: "4",   // Rinkeby id
  from: "my address",
  gas: 1000
}
When I run the command truffle migrate, I get this error.
Error: Error: Error: *** Deployment Failed ***
"Migrations" ran out of gas (using a value you set in your network
config or deployment parameters.)
* Block limit: 0x50e7c
* Gas sent: 1000
at Object.run (/usr/local/lib/node_modules/truffle/build/webpack:/packages/truffle-migrate/index.js:92:1)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)
Can you help me please?

I solved the error by adding this code to the truffle config file.
compilers: {
  solc: {
    version: "0.5.16",
    settings: {
      optimizer: {
        enabled: true, // Default: false
        runs: 1000     // Default: 200
      }
    }
  }
},

It's exactly what the error says:
ran out of gas (using a value you set in your network
config or deployment parameters.)
gas: 1000 is not enough to deploy your contract.
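A minimal sketch of what a corrected network block could look like. The exact gas value is an assumption, not from the question: anything comfortably under the block limit reported in the error (0x50e7c = 331,388) but well above the deployment cost works.

```javascript
// Hypothetical corrected network config for truffle-config.js.
// The gas value is an assumption -- it just needs to be below the
// block limit (0x50e7c = 331,388) and above the contract's deploy cost.
const developmentNetwork = {
  host: "127.0.0.1",
  port: 8545,
  network_id: "4",      // Rinkeby
  from: "my address",
  gas: 300000,          // plenty for a simple Migrations contract
};

module.exports = { networks: { development: developmentNetwork } };
```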

Related

Unable to create a scheduler using the @google-cloud/scheduler package

I got this error while creating a scheduler. Earlier the code worked locally, but when I deployed it to a VM on GCP itself it started failing and showed this error:
Error: 7 PERMISSION_DENIED: Request had insufficient authentication scopes.
at Object.callErrorFromStatus (/app/node_modules/@grpc/grpc-js/build/src/call.js:31:26)
at Object.onReceiveStatus (/app/node_modules/@grpc/grpc-js/build/src/client.js:180:52)
at Object.onReceiveStatus (/app/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:336:141)
at Object.onReceiveStatus (/app/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:299:181)
at /app/node_modules/@grpc/grpc-js/build/src/call-stream.js:160:78
at processTicksAndRejections (internal/process/task_queues.js:79:11) {
code: 7,
details: 'Request had insufficient authentication scopes.',
metadata: Metadata {
internalRepr: Map {
'google.rpc.errorinfo-bin' => [Array],
'grpc-status-details-bin' => [Array],
'grpc-server-stats-bin' => [Array]
},
options: {}
},
statusDetails: [
ErrorInfo {
metadata: [Object],
reason: 'ACCESS_TOKEN_SCOPE_INSUFFICIENT',
domain: 'googleapis.com'
}
],
reason: 'ACCESS_TOKEN_SCOPE_INSUFFICIENT',
domain: 'googleapis.com',
errorInfoMetadata: {
service: 'cloudscheduler.googleapis.com',
method: 'google.cloud.scheduler.v1.CloudScheduler.CreateJob'
}
}
If you use the default service account on your compute instance, you have to update the scopes: add all of them, or only the Cloud Scheduler one.
If you don't use the default service account, there are no scopes to select (scope selection is a legacy mode, no longer available with newer features).
Note: you have to stop the VM, change the scopes/service account, and then restart the VM.
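The stop/change/restart cycle can be sketched with gcloud. The instance name and zone below are assumptions, not values from the question:

```shell
# Hypothetical example -- instance name "my-vm" and the zone are assumptions.
# Access scopes can only be changed while the instance is stopped.
gcloud compute instances stop my-vm --zone=us-central1-a

# Grant the broad cloud-platform scope (or restrict to what you need).
gcloud compute instances set-service-account my-vm \
  --zone=us-central1-a \
  --scopes=https://www.googleapis.com/auth/cloud-platform

# Restart the VM so the new scopes take effect.
gcloud compute instances start my-vm --zone=us-central1-a
```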

Meteor deploy error (mup): pushing meteor app bundle to server failed

I am trying to deploy a meteor app to an AWS server, but am getting this message:
Started TaskList: Configuring App
[52.41.84.125] - Pushing the Startup Script
nodemiral:sess:52.41.84.125 copy file - src: /
Users/Olivia/.nvm/versions/node/v7.8.0/lib/node_modules/mup/lib/modules/meteor/assets/templates/start.sh, dest: /opt/CanDu/config/start.sh, vars: {"appName":"CanDu","useLocalMongo":0,"port":80,"bind":"0.0.0.0","logConfig":{"opts":{"max-size":"100m","max-file":10}},"docker":{"image":"abernix/meteord:base","imageFrontendServer":"meteorhacks/mup-frontend-server","imagePort":80},"nginxClientUploadLimit":"10M"} +0ms
[52.41.84.125] x Pushing the Startup Script: FAILED Failure
Previously I had been able to deploy using mup, but now I am getting this message. The only major thing I've changed is the Python path in my .noderc. I am also able to SSH into my amazon server directly from the terminal. My mup file is:
module.exports = {
servers: {
one: {
host: '##.##.##.###',
username: 'ec2-user',
pem: '/Users/Olivia/.ssh/oz-pair.pem'
// password:
// or leave blank for authenticate from ssh-agent
}
},
meteor: {
name: 'CanDu',
path: '/Users/Olivia/repos/bene_candu_v2',
servers: {
one: {}
},
buildOptions: {
serverOnly: true,
mobileSettings: {
public: {
"astronomer": {
"appId": "<key>",
"disableUserTracking": false,
"disableRouteTracking": false,
"disableMethodTracking": false
},
"googleMaps": "<key>",
"facebook":{
"permissions":["email","public_profile","user_friends"]
}
},
},
},
env: {
ROOT_URL: 'http://ec2-##-##-##-###.us-west-2.compute.amazonaws.com',
MONGO_URL: 'mongodb://. . .'
},
/*ssl: {
crt: '/opt/keys/server.crt', // this is a bundle of certificates
key: '/opt/keys/server.key', // this is the private key of the certificate
port: 443, // 443 is the default value and it's the standard HTTPS port
upload: false
},*/
docker: {
image: 'abernix/meteord:base'
},
deployCheckWaitTime: 60
}
};
I have checked to make sure there are no trailing commas, and have tried increasing the wait time, etc. The error message I'm getting is pretty unhelpful. Does anyone have any insight? Thank you so much!

Deploying EmberJS to AWS using SSH + RSync

I've managed to deploy a simple todo app onto AWS with S3 using this guide:
http://emberigniter.com/deploy-ember-cli-app-amazon-s3-linux-ssh-rsync/
However, when I attempt to deploy with SSH and Rsync according to the tutorial, I run into the following error:
gzipping **/*.{js,css,json,ico,map,xml,txt,svg,eot,ttf,woff,woff2}
ignoring null
✔ assets/ember-user-app-d41d8cd98f00b204e9800998ecf8427e.css
✔ assets/vendor-d41d8cd98f00b204e9800998ecf8427e.css
✔ assets/ember-user-app-45a9825ab0116a8007bb48645b09f060.js
✔ crossdomain.xml
✔ robots.txt
✔ assets/vendor-d008595752c8e859a04200ceb9a77874.js
gzipped 6 files ok
|
+- upload
| |
| +- rsync
- Uploading using rsync...
- Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at /BuildRoot/Library/Caches/com.apple.xbs/Sources/rsync/rsync-47/rsync/io.c(453) [sender=2.6.9]
The following is my config/deploy.js
module.exports = function(deployTarget) {
var ENV = {
build: {
environment: deployTarget
},
's3-index': {
accessKeyId: "<myKeyID>",
secretAccessKey: "<mySecret>",
bucket: "emberjsft",
region: "ap-southeast-1",
allowOverwrite: true
},
's3': {
accessKeyId: "<myKeyID>",
secretAccessKey: "<mySecret>",
bucket: "emberjsft",
region: "ap-southeast-1"
},
'ssh-index': {
remoteDir: "/var/www/",
username: "ec2-user",
host: "ec2-<elastic-ip>.ap-southeast-1.compute.amazonaws.com",
privateKeyFile: "/Users/imac/MY_AWS_PEMFILE.pem",
allowOverwrite: true
},
rsync: {
dest: "/var/www/",
username: "ec2-user",
host: "ec2-<elastic-ip>.ap-southeast-1.compute.amazonaws.com",
delete: false
}
// include other plugin configuration that applies to all deploy targets here
};
if (deployTarget === 'development') {
ENV.build.environment = 'development';
// configure other plugins for development deploy target here
}
if (deployTarget === 'staging') {
ENV.build.environment = 'production';
// configure other plugins for staging deploy target here
}
if (deployTarget === 'production') {
ENV.build.environment = 'production';
// configure other plugins for production deploy target here
}
// Note: if you need to build some configuration asynchronously, you can return
// a promise that resolves with the ENV object instead of returning the
// ENV object synchronously.
return ENV;
};
How should I resolve this issue?
Thanks
I've just spent the last hour fighting the same issue as you. I was able to mostly fix it by using ssh-add /home/user/.ssh/example-key.pem and removing privateKeyFile.
I still get an error thrown after the transfer ends, but I can confirm all files successfully transferred to my EC2 box despite the error.
deploy.js
module.exports = function (deployTarget) {
var ENV = {
build: {
environment: deployTarget
},
'ssh-index': {
remoteDir: "/var/www/",
username: "ubuntu",
host: "52.xx.xx.xx",
allowOverwrite: true
},
rsync: {
host: "ubuntu@52.xx.xx.xx",
dest: "/var/www/",
recursive: true,
delete: true
}
};
return ENV;
};
In your deploy.js file you need to fill in your own value for accessKeyId: you left a placeholder in place of the actual key. The same goes for secretAccessKey, and for host you need to put your Elastic IP address.
myKeyID and mySecret should live in a .env file and be accessed here as process.env.myKeyID and process.env.mySecret.
It's not good practice to hard-code the keys in the deploy.js file.
Better still would be to read them using Consul.
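A sketch of keeping the credentials out of the file, assuming a .env loaded by the dotenv package (the helper name buildS3Config is hypothetical, not part of ember-cli-deploy):

```javascript
// Sketch: read AWS credentials from the environment instead of hard-coding
// them in config/deploy.js. Assumes something like the dotenv package has
// already loaded a .env file into process.env (an assumption).
// require("dotenv").config();

function buildS3Config(env) {
  return {
    accessKeyId: env.myKeyID,      // from .env, never committed
    secretAccessKey: env.mySecret, // from .env, never committed
    bucket: "emberjsft",
    region: "ap-southeast-1"
  };
}

// In config/deploy.js you would then write:
//   ENV["s3"] = buildS3Config(process.env);
module.exports = buildS3Config;
```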

Rails app deployment with AZK fail on Digital Ocean

I'm trying to push a very simple Rails app to a DigitalOcean droplet. Unfortunately, I'm unable to continue the deployment: I get stuck with a very elusive error message:
info: [agent] get agent status
info: [agent] agent is running: true
info: [agent] get agent status
info: [agent] agent is running: true
info: [agent] get agent status
info: [agent] agent is running: true
info: [agent] get agent status
info: [agent] agent is running: true
info: Connecting to http://192.168.50.4:2375
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
fatal: [default]: FAILED! => {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "/bin/sh: 1: /usr/bin/python: not found\r\n", "msg": "MODULE FAILURE", "parsed": false}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit @playbooks/setup.retry
PLAY RECAP *********************************************************************
default : ok=0 changed=0 unreachable=0 failed=1
Am I the only one who has ever encountered this problem?
Here is my Azkfile too:
/**
* Documentation: http://docs.azk.io/Azkfile.js
*/
// Adds the systems that shape your system
systems({
'apptelier-website': {
// Dependent systems
depends: [],
// More images: http://images.azk.io
image: {docker: 'azukiapp/ruby:2.3.0'},
// Steps to execute before running instances
provision: [
"bundle install --path /azk/bundler"
],
workdir: "/azk/#{manifest.dir}",
shell: "/bin/bash",
command: ["bundle", "exec", "rackup", "config.ru", "--pid", "/tmp/ruby.pid", "--port", "$HTTP_PORT", "--host", "0.0.0.0"],
wait: 20,
mounts: {
'/azk/#{manifest.dir}': sync("."),
'/azk/bundler': persistent("./bundler"),
'/azk/#{manifest.dir}/tmp': persistent("./tmp"),
'/azk/#{manifest.dir}/log': path("./log"),
'/azk/#{manifest.dir}/.bundle': path("./.bundle")
},
scalable: {"default": 1},
http: {
domains: [
'#{env.HOST_DOMAIN}', // used if deployed
'#{env.HOST_IP}', // used if deployed
'#{system.name}.#{azk.default_domain}' // default azk domain
]
},
ports: {
// exports global variables
http: "3000/tcp"
},
envs: {
// Make sure that the PORT value is the same as the one
// in ports/http below, and that it's also the same
// if you're setting it in a .env file
RUBY_ENV: "production",
RAILS_ENV: "production",
RACK_ENV: 'production',
WORKER_RETRY: 1,
BUNDLE_APP_CONFIG: '/azk/bundler',
APP_URL: '#{system.name}.#{azk.default_domain}'
}
},
deploy: {
image: {docker: 'azukiapp/deploy-digitalocean'},
mounts: {
'/azk/deploy/src': path('.'),
'/azk/deploy/.ssh': path('#{env.HOME}/.ssh'), // Required to connect with the remote server
'/azk/deploy/.config': persistent('deploy-config')
},
// This is not a server. Just run it with `azk deploy`
scalable: {default: 0, limit: 0},
envs: {
GIT_REF: 'master',
AZK_RESTART_COMMAND: 'azk restart -Rvv',
BOX_SIZE: '512mb'
}
}
});
Thanks for the help !
Edouard, nice to meet you. I'm from the azk core team, and I'm not sure you're using the latest deployment image.
Please follow these steps to update it:
adocker pull azukiapp/deploy-digitalocean:0.0.7;
Edit the deploy system in your Azkfile and add the tag 0.0.7 to the used deployment image to ensure we're using the latest one. It should be like: image: {docker: 'azukiapp/deploy-digitalocean:0.0.7'};
Next, be sure you have the env DEPLOY_API_TOKEN set in your .env file. If you don't have it set yet, take a look on the Step 7 of the article we've published on DigitalOcean Community Tutorials: https://www.digitalocean.com/community/tutorials/how-to-deploy-a-rails-app-with-azk#step-7-%E2%80%94-obtaining-a-digitalocean-api-token
Finally, re-run the deploy command:
azk deploy clear-cache;
azk deploy
Please let me know if this is enough to solve your problem.

No Ports Assigned to Strongloop App

I am attempting to deploy a strongloop app to a Digitalocean remote box running Strongloop Process Manager. I have gotten as far as successfully running the deploy command as follows:
USER ~/projects/loopback/places-api $ slc deploy http://IPADDRESS deploy
Counting objects: 5215, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (4781/4781), done.
Writing objects: 100% (5215/5215), 7.06 MiB | 4.27 MiB/s, done.
Total 5215 (delta 1130), reused 0 (delta 0)
To http://104.131.66.124:8701/api/services/1/deploy/default
* [new branch] deploy -> deploy
Deployed `deploy` as `placesAPI` to `http://IPADDRESS:8701/`
Next, I check the status of my Strongloop app by running the following command:
slc ctl -C http://IPADDRESS:8701
Service ID: 1
Service Name: placesAPI
Environment variables:
No environment variables defined
Instances:
Version Agent version Debugger version Cluster size Driver metadata
5.1.0 2.0.2 n/a 1 N/A
Processes:
ID PID WID Listening Ports Tracking objects? CPU profiling? Tracing? Debugging?
1.1.1050 1050 0
1.1.2065 2065 49
At this point, I am not able to access my app by visiting IPADDRESS:3001, as the Strongloop documentation would suggest, and no processes in the above app status are listening on port 3001, as would be expected according to the Strongloop documentation.
Comparing my app status to the app status at this stage of deployment shown in the Strongloop documentation, it appears I should have some processes listening on port 3001 which are not running in my app.
Here is the app status shown in the Strongloop documentation:
$ slc ctl -C http://prod.foo.com:7777
Service ID: 1
Service Name: appone
Environment variables:
No environment variables defined
Instances:
Version Agent version Cluster size
4.0.30 1.4.15 4
Processes:
ID PID WID Listening Ports Tracking objects? CPU profiling?
1.1.22555 22555 0
1.1.22741 22741 5 prod.foo.com:3001
1.1.22748 22748 6 prod.foo.com:3001
1.1.22773 22773 7 prod.foo.com:3001
1.1.22793 22793 8 prod.foo.com:3001
Notice the additional processes listening to port 3001.
My question is: how do I get my strongloop app to run and listen to these ports?
If it helps here are my package.json and config.json files:
:::::::::::::::::::::::package.json::::::::::::::::
{
"name": "placesAPI",
"version": "1.0.0",
"main": "server/server.js",
"scripts": {
"start": "node .",
"pretest": "jshint ."
},
"dependencies": {
"body-parser": "^1.9.0",
"compression": "^1.0.3",
"connect-ensure-login": "^0.1.1",
"cookie-parser": "^1.3.2",
"cors": "^2.5.2",
"errorhandler": "^1.1.1",
"express-flash": "0.0.2",
"express-session": "^1.7.6",
"jade": "^1.7.0",
"loopback": "^2.22.0",
"loopback-boot": "^2.6.5",
"loopback-component-explorer": "^2.1.0",
"loopback-component-passport": "^1.5.0",
"loopback-connector-postgresql": "^2.4.0",
"loopback-datasource-juggler": "^2.39.0",
"passport": "^0.3.2",
"passport-facebook": "^1.0.3",
"passport-google-oauth": "^0.2.0",
"passport-local": "^1.0.0",
"passport-oauth2": "^1.1.2",
"passport-twitter": "^1.0.3",
"serve-favicon": "^2.0.1"
},
"devDependencies": {
"jshint": "^2.5.6"
},
"repository": {
"type": "",
"url": ""
},
"description": "placesAPI",
"bundleDependencies": [
"body-parser",
"compression",
"connect-ensure-login",
"cookie-parser",
"cors",
"errorhandler",
"express-flash",
"express-session",
"jade",
"loopback",
"loopback-boot",
"loopback-component-explorer",
"loopback-component-passport",
"loopback-connector-postgresql",
"loopback-datasource-juggler",
"passport",
"passport-facebook",
"passport-oauth2",
"serve-favicon"
]
}
:::::::::::::::::::::::config.json::::::::::::::::
{
"restApiRoot": "/api",
"host": "0.0.0.0",
"port": 3000,
"cookieSecret": "REDACTED",
"remoting": {
"context": {
"enableHttpContext": false
},
"rest": {
"normalizeHttpPath": false,
"xml": false
},
"json": {
"strict": false,
"limit": "100kb"
},
"urlencoded": {
"extended": true,
"limit": "100kb"
},
"cors": false,
"errorHandler": {
"disableStackTrace": false
}
},
"legacyExplorer": false
}
There is also this error in the logs from log-dump:
2015-12-23T22:13:35.876Z pid:2720 worker:84 events.js:142
2015-12-23T22:13:35.882Z pid:2720 worker:84 throw er; // Unhandled 'error' event
2015-12-23T22:13:35.882Z pid:2720 worker:84 ^
2015-12-23T22:13:35.882Z pid:2720 worker:84 Error: connect ECONNREFUSED 127.0.0.1:5432
2015-12-23T22:13:35.882Z pid:2720 worker:84 at Object.exports._errnoException (util.js:856:11)
2015-12-23T22:13:35.882Z pid:2720 worker:84 at exports._exceptionWithHostPort (util.js:879:20)
2015-12-23T22:13:35.883Z pid:2720 worker:84 at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1064:14)
2015-12-23T22:13:35.919Z pid:1106 worker:0 ERROR supervisor worker id 84 (pid 2720) accidental exit with 1
2015-12-23T22:13:38.253Z pid:1106 worker:0 INFO supervisor started worker 85 (pid 2738)
2015-12-23T22:13:38.253Z pid:1106 worker:0 INFO supervisor resized to 1
2015-12-23T22:13:39.858Z pid:2738 worker:85 INFO strong-agent native addon missing, install a compiler
2015-12-23T22:13:39.859Z pid:2738 worker:85 INFO strong-agent v2.0.2 profiling app 'placesAPI' pid '2738'
2015-12-23T22:13:39.890Z pid:2738 worker:85 INFO strong-agent[2738] started profiling agent
2015-12-23T22:13:44.943Z pid:2738 worker:85 INFO strong-agent not profiling, agent metrics requires a valid license.
2015-12-23T22:13:44.944Z pid:2738 worker:85 Please contact sales@strongloop.com for assistance.
2015-12-23T22:13:44.992Z pid:2738 worker:85 Web server listening at: http://0.0.0.0:3001
2015-12-23T22:13:44.997Z pid:2738 worker:85 Browse your REST API at http://0.0.0.0:3001/explorer
2015-12-23T22:13:45.103Z pid:2738 worker:85 Connection fails: { [Error: connect ECONNREFUSED 127.0.0.1:5432]
2015-12-23T22:13:45.104Z pid:2738 worker:85 code: 'ECONNREFUSED',
2015-12-23T22:13:45.104Z pid:2738 worker:85 errno: 'ECONNREFUSED',
2015-12-23T22:13:45.104Z pid:2738 worker:85 syscall: 'connect',
2015-12-23T22:13:45.104Z pid:2738 worker:85 address: '127.0.0.1',
2015-12-23T22:13:45.104Z pid:2738 worker:85 port: 5432 }
2015-12-23T22:13:45.104Z pid:2738 worker:85 It will be retried for the next request.
Just to put this in an answer, from the package.json we can see that you have included the loopback-connector-postgresql and in the log we see an attempted connection to port 5432 which is the default for that DBMS. It's trying to connect on the localhost (127.0.0.1) and my guess is that Postgres is either not installed on your Digital Ocean box, or not running. You'll need to update the config for your DB, or install (and run) the DB on your DO droplet.
If you have different configs for dev vs production then you can set up an environment-specific datasources config file: datasources.production.json for example. In that file you would put your prod config, and in datasources.json you would have your dev (local) config. When using this method, be sure to set the NODE_ENV variable on your DO droplet to production (to match the name of the prod datasources config file).
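A sketch of what such an environment-specific datasource file (datasources.production.json) could look like. All values below (hostname, database name, credentials) are placeholders, not values from the question:

```json
{
  "db": {
    "name": "db",
    "connector": "postgresql",
    "host": "your-db-host.example.com",
    "port": 5432,
    "database": "places",
    "username": "dbuser",
    "password": "CHANGE_ME"
  }
}
```

With NODE_ENV=production set on the droplet, LoopBack reads this file in place of datasources.json, so your local datasources.json can keep pointing at 127.0.0.1.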