Deploying a smart contract using Hardhat WITHOUT a script

Literally everywhere I look (other Stack Overflow posts, the official docs), the way to deploy a smart contract is always by running a script:
npx hardhat run scripts/deploy.js --network ropsten
I am looking for a way to deploy it using only the npm library "hardhat", without actually running a "script" in the terminal.
Does anyone know how?

So, what I did was put defaultNetwork: "<network name>" in the module.exports object inside your hardhat.config.js file.
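For reference, a minimal hardhat.config.js sketch of that idea (the Solidity version, RPC URL, and account key here are placeholder assumptions, not part of the original answer):

module.exports = {
  solidity: "0.8.4",
  // with defaultNetwork set, `npx hardhat run scripts/deploy.js`
  // no longer needs the --network flag
  defaultNetwork: "ropsten",
  networks: {
    ropsten: {
      url: "https://ropsten.infura.io/v3/<project-id>",
      accounts: ["<private-key>"],
    },
  },
};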


Created a pipeline using AWS Copilot; the original push worked, but when I make changes to the code and push them to GitHub, they don't show up

Would appreciate any help with this:
I've followed the guide for AWS Copilot here: https://aws.github.io/copilot-cli/docs/getting-started/first-app-tutorial/ and then the guide for creating a pipeline and connecting it to GitHub here: https://aws.github.io/copilot-cli/docs/concepts/pipelines/. That all appears to have worked, and I can view the React app I'm working on at the URL indicated in AWS.
My problem is that when I make changes to my code and push them to the tracked GitHub branch, the changes don't appear when viewing the app at the URL. However, when I push to GitHub, the pipeline does register that a change has occurred: it indicates that a change has been made and goes through the flow of creating a new build. But whatever I try, the changes don't seem to actually show up.
I assume I'm missing something simple here, and that for some reason Docker is building the app from the original code. But I can't figure out why that would be. Maybe something is weird with my Dockerfile?
My Dockerfile looks like this:
FROM node:16.14
WORKDIR /app
# put locally installed binaries on the PATH
ENV PATH /app/node_modules/.bin:$PATH
# copy the manifests first so the dependency layer is cached between builds
COPY package.json ./
COPY package-lock.json ./
RUN npm i
# copy the application source
COPY . ./
CMD ["npm", "run", "server"]
My understanding of how this should work is that I push new code to GitHub, that code is sent to the AWS pipeline, a new image is generated from it, and that image is used to create a container hosted on ECS. But clearly I am missing something.
copilot deploy does work. I'm unsure whether:

1. the pipeline is building successfully (it does not throw an error in the console) and then just not hosting the result at the same URL as copilot deploy, or
2. the pipeline is hitting an error that just doesn't show up in the pipeline console.

Digging into the logs I find this:
echo "Cloudformation stack and config files were not generated. Please check build logs to see if there was a manifest validation error." 1>&2;
Which seems to point towards the second option. Any suggestions on how to resolve whatever is going on in the container, if that is the problem?
The error suggests that I check the build logs, but these are the build logs. Are there more granular build logs I can examine?
When running containers in ECS, unless your container is already crashing because of an error, the service often won't pick up code changes from your new image unless you force a new deployment. You can do this from the command line with the AWS CLI:
aws ecs update-service --cluster <cluster_name> --service <service_name> --force-new-deployment --profile <aws_profile_name>
Note that the --profile flag is optional if you're using your default AWS CLI configuration profile.
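If you don't know the exact cluster and service names to plug in, the same CLI can list them first:

aws ecs list-clusters
aws ecs list-services --cluster <cluster_name>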

Serverless and AWS deploy issue

I have to update a website on AWS using serverless deploy.
This website was not created by me; it's the first time I've worked with Serverless and AWS solutions.
I have the source code, deploy files, etc., from the last person in charge.
I run a before-deploy.js script to create all the local files and check them to see if the updates went OK. Everything's fine.
But any time I try to deploy using the simple command serverless deploy, it fails, printing this error:
CREATE_FAILED: MainStaticSite (AWS::S3::Bucket)
"mywebsite.com" already exists
I don't really understand this error: I know the website already exists, but I just want to update it.
I tried more specific commands like:
serverless deploy -v --stage production --region eu-west-1
But this one only shows this output :
Framework Core: 3.10.1
Plugin: 6.2.0
SDK: 4.3.2
PS
And it doesn't update the website.
I changed the keys on AWS; maybe it's because of this?
It looks like it doesn't want to overwrite the existing files, but I have no idea why.
If someone has an answer or a lead, thank you :)

AWS cloudformation: How to run cfn-nag locally in Windows

I have a CloudFormation template with all the resources and details for the project.
I have cfn-lint set up locally and it is running perfectly fine. However, when I push code changes, the build fails at the deployment stage because cfn-nag flags some simple issues that could be fixed.
I'm using a Windows machine and I need a way to run cfn-nag locally, so that I can check the template just like with cfn-lint and fix issues locally instead of waiting 40 minutes for the build to reach the deployment stage.
I referred to several posts online and found the two below helpful:
https://stelligent.com/2018/03/23/validating-aws-cloudformation-templates-with-cfn_nag-and-mu/
https://github.com/stelligent/cfn_nag
What is the difference between cfn-nag and cfn-lint, and why is lint not failing on what cfn-nag is complaining about?
The above links have some instructions for Ruby and Homebrew, but I'm using Node.js and felt lost. Please help.
CFN-Nag looks for patterns in AWS CloudFormation templates that may indicate insecure infrastructure. Ex:
- IAM rules that are too permissive (wildcards)
- security group rules that are too permissive (wildcards)
- access logs that aren't enabled
- encryption that isn't enabled
CFN-Lint scans the AWS CloudFormation template by processing a collection of rules, where every rule handles a specific function check or validation of the template. It validates against the AWS CloudFormation resource specification.
This collection of rules can be extended with custom rules using the --append-rules argument.
Ex: whitespace, alignment (YAML), type checks, valid values for resource properties, and other best practices.
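For example, a hedged invocation (the template file and rules-directory names are hypothetical):

cfn-lint template.yaml --append-rules ./my_custom_rules/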
Those two links you provided above have all the information needed, just not directly for a Node.js developer using a Windows machine.
Step 1: Pull the Docker image stelligent/cfn-nag
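That step is just the standard Docker CLI:

docker pull stelligent/cfn-nag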
Step 2: Add a script for cfn-nag to your package.json
Ex:
"scripts": {
  "cfn:nag": "cfn-nag"
}
If you're using docker-compose.yml, add the cfn-nag image details like below:
cfn-nag:
  image: "stelligent/cfn-nag"
  volumes:
    - ./path_of_cfn_file_to_copy:/path_to_copy_to
  command: ${COMMAND:-/path_to_copy_to/cfn_file}
Then just set the script in package.json to run via docker-compose:
"cfn:nag": "docker-compose run --rm cfn-nag"

Trying to debug "502 Bad Gateway" error after deploying React app to GCP

I've deployed a React app via gcloud app deploy. The gcloud app browse command opens a browser which tries to load for a while but then displays "502 Bad Gateway" as the page title. I found the following troubleshooting page:
https://cloud.google.com/endpoints/docs/openapi/troubleshoot-response-errors#gae_errors
The following info on the troubleshooting page appears to be a good match for my scenario:
"An error code 502 with BAD_GATEWAY in the message usually indicates
that App Engine terminated the application because it ran out of
memory. The default App Engine flexible VM only has 1GB of memory,
with only 600MB available for the application container."
But I don't see any "out of memory" error in my logs for this. I think I probably need to ensure that I run gcloud app deploy with a proper app.yaml file. I'm having trouble identifying the minimum valid app.yaml for my React app that would let me be confident gcloud app deploy will have the expected result. I found the following reference, which appears to be a good starting point:
https://cloud.google.com/endpoints/docs/openapi/get-started-app-engine
^^^ This page refers to the following yaml sample code:
https://github.com/GoogleCloudPlatform/java-docs-samples/blob/master/endpoints/getting-started/src/main/appengine/app.yaml
But the URL refers to "java-docs-samples", so I'm not sure this is a valid YAML file for a React app deployment. Can you provide some guidance on this? I'm really just looking for the minimum YAML file that I can use for a successful deployment. This is the structure of the YAML file that I used for my initial gcloud app deploy; the deployment process appeared to indicate success, but I'm not sure if there is any type of fatal flaw here or anything else that may be missing:
runtime: nodejs
env: flex
manual_scaling:
  instances: 1
resources:
  cpu: 1
From what I understand, you just want a minimal known-good app.yaml for React apps, since out of memory seems to be the issue if everything else is correct.
A sample app.yaml for a React app is the following:
# [START runtime]
runtime: nodejs
env: flex
# [END runtime]
# [START handlers]
handlers:
- url: /
  static_files: index.html
  upload: index.html
# [END handlers]
But you need to modify your handlers according to your needs/configuration.
A 502 error sometimes indicates that your app has an issue itself, so it's better to test locally first and make sure your app is working.
Then, for the memory part, you can try specifying an instance type with higher memory. If it still throws the same error, then most likely the issue is within your app or its dependencies.
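For example, a sketch of requesting more memory in the flexible environment's app.yaml (the values are illustrative, not a recommendation):

resources:
  cpu: 1
  memory_gb: 4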
I think there is something about react-scripts start that Google Cloud doesn't like; I've had trouble with this (React app + Google Cloud deployment) twice in completely different environments (one had Docker and one did not), but the first time I never posted anything to Stack Overflow, so I had to go through the pain again :p
Try changing the package.json file so that npm run start does not use react-scripts start.
Note that this overrides the npm run start and npm start commands, so if you use this, you can also update the package.json with another keyword such as local and change your local workflow to run npm run local:
"scripts": {
"start": "serve -s build",
"local": "react-scripts start",
"build": "react-scripts build",
...
},
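Note that serve is not part of react-scripts, so it has to be installed as a dependency, and the app has to be built before start will serve anything:

npm install serve
npm run build
npm start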
A working repo

Running 'git' in AWS lambda

I am trying to run git in AWS Lambda to make a checkout of a repository.
This is my setup:
I am using Node.js 4.3.
I am not using nodegit because I want to use the "--depth=1" parameter, which nodegit does not support.
I have copied the git and ssh executables from the correct AWS AMI and placed them in a "bin" folder in the zip I upload.
I added them to PATH with this:
process.env['PATH'] = process.env['LAMBDA_TASK_ROOT'] + "/bin:" + process.env['PATH'];
The input variables are set like this:
"checkout_url": "git@...",
"branch": "master"
Now I do this (for brevity, I mixed in some pseudo-code):
// download the private deploy key (pseudo-code helper) and lock down its permissions
downloadDeploymentKeyFromS3Sync('/tmp/ssh_key');
fs.chmodSync('/tmp/ssh_key', 0o600);
// point git at that key and skip host-key verification
process.env['GIT_SSH_COMMAND'] = 'ssh -o StrictHostKeyChecking=no -i /tmp/ssh_key';
execSync("git clone --depth=1 " + checkout_url + " --branch " + branch + " /tmp/checkout");
Running this on my local computer using lambda-local, everything works fine! But when I test it in Lambda, I get:
warning: templates not found /usr/share/git-core/templates
PRIV_END: seteuid: Operation not permitted\r
fatal: Could not read from remote repository.
The "warning" is of course, because I did not install git but just copied the binary. Is that a reason why this should not work?
Why is git needing "setuid"? I read that in some shells, that is disabled for security reasons. So it makes sense that it does not work in lambda. Can git somehow be instructed to not "need" this command?
Yep, this is definitely possible; I've created a Lambda Layer that achieves just this. No need to mess with any env variables, it should work out of the box:
https://github.com/lambci/git-lambda-layer
As stated in the README, all you need to do is add a layer with the following ARN:
arn:aws:lambda:<region>:553035198032:layer:git:<version>
(replace <region> and <version>, check README for latest version)
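For example, attaching the layer from the CLI (the function name is a placeholder, and the ARN placeholders still need filling in as above):

aws lambda update-function-configuration \
  --function-name <function_name> \
  --layers arn:aws:lambda:<region>:553035198032:layer:git:<version>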
The issue is that you cannot copy just the git binary. You need a portable version of git, and even with that you're going to have a bad time, because you cannot guarantee that the OS the Lambda function runs on will be compatible with the binary.
Stepping back, I would walk away from this approach completely. I would clone and build the repository elsewhere, package it up, and just download the artifact pretty much the same way you do with downloadDeploymentKeyFromS3Sync.
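A minimal sketch of that alternative, assuming a pre-built tarball of the repository already sits in S3 (the bucket and key names are hypothetical; the aws-sdk is available in the Lambda runtime):

var AWS = require('aws-sdk');
var fs = require('fs');
var execSync = require('child_process').execSync;

function downloadRepoSnapshotSync(callback) {
  var s3 = new AWS.S3();
  // stream the pre-built snapshot to /tmp, the only writable path in Lambda
  var file = fs.createWriteStream('/tmp/checkout.tar.gz');
  s3.getObject({ Bucket: 'my-deploy-bucket', Key: 'checkout.tar.gz' })
    .createReadStream()
    .pipe(file)
    .on('close', function () {
      // unpack it where the git clone used to go
      execSync('mkdir -p /tmp/checkout && tar -xzf /tmp/checkout.tar.gz -C /tmp/checkout');
      callback();
    });
}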
You might consider this a non-answer, but I've found the easiest way to run arbitrary binaries from Lambda is... not to. If I cannot do the work from within a platform-independent, non-binary approach, I integrate Docker into the workflow, managing Docker containers from the Lambda function.
On AWS one way to do this is to use the Elastic Container Service (ECS) to spawn a task that runs git.
If you stand up a Docker Swarm instance or integrate another Docker-API compatible service such as Rackspace Carina or Joyent's Triton, then you could use a project I personally put together specifically for integrating AWS Lambda with Docker: "Dockaless".
Good luck!