Cannot deploy the program to devnet with anchor - blockchain

First, I tried deploying to localhost with anchor deploy and it worked fine. But then I changed the cluster to devnet, updating Anchor.toml and lib.rs with the program address I got after anchor build, and tried anchor deploy --provider.cluster devnet.
I also changed the Solana cluster with solana config set --url https://api.devnet.solana.com, then ran solana program deploy /target/deploy/voter.so. I'm still getting the error with both methods.
Deploying workspace: https://api.devnet.solana.com
Upgrade authority: /home/XXXXXX/.config/solana/id.json
Deploying program "voter"...
Program path: /home/<user>/workspace/voter/target/deploy/voter.so...
Error: Account xxxx is not an upgradeable program or already in use
There was a problem deploying: Output { status: ExitStatus(unix_wait_status(256)), stdout: "", stderr: "" }.

First, check whether devnet is up and running: https://status.solana.com
Then check that you have SOL: solana balance
Usually it's one of those two!
I hope this helps!
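If the balance is low, you can request devnet SOL with an airdrop (note that devnet airdrops are rate-limited and capped per request):
solana airdrop 2 --url https://api.devnet.solana.com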

The error gives the key information: Error: Account xxxx is not an upgradeable program or already in use
Check the address you're trying to deploy to. You may need to switch to a new program address if the current one is already in use by another program or user.
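If the address is taken, one way out is to generate a fresh program keypair and point your project at the new address. A sketch, assuming Anchor's default layout where the keypair file is named after the program:
# generate a new program keypair, overwriting the old one
solana-keygen new -o target/deploy/voter-keypair.json --force
# print the new program address
solana address -k target/deploy/voter-keypair.json
# paste that address into declare_id!(...) in lib.rs and into
# [programs.devnet] in Anchor.toml, then rebuild and redeploy
anchor build
anchor deploy --provider.cluster devnet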

Related

Error: Custom: Invalid blockhash when solana program deploy

Trying to deploy a program to solana devnet.
I was using
solana program deploy ./path/xxxx.so -u devnet
And I got multiple lines of
msg 21AbKBwMcsDR4DciR6Z69X6vrqVj7uHKg2Wf1ap8FB1J
msg 21AbKBwMcsDR4DciR6Z69X6vrqVj7uHKg2Wf1ap8FB1J
msg 21AbKBwMcsDR4DciR6Z69X6vrqVj7uHKg2Wf1ap8FB1J
msg 21AbKBwMcsDR4DciR6Z69X6vrqVj7uHKg2Wf1ap8FB1J
msg 21AbKBwMcsDR4DciR6Z69X6vrqVj7uHKg2Wf1ap8FB1J
It gave me this error in the end
Error: Custom: Invalid blockhash
I tried searching online but didn't find any useful information.
I was able to resolve this error by reverting to earlier versions of solana-cli and anchor-cli:
solana-cli 1.8.0 (src:4a8ff62a; feat:1813598585)
anchor-cli 0.18.2
rustc 1.57.0 (f1edd0429 2021-11-29)
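To pin those versions, something like the following should work (a sketch; the release installer and the cargo-based anchor-cli install were the standard methods at the time):
# pin solana-cli to 1.8.0
sh -c "$(curl -sSfL https://release.solana.com/v1.8.0/install)"
# pin anchor-cli to 0.18.2
cargo install --git https://github.com/project-serum/anchor --tag v0.18.2 anchor-cli --locked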
I also shared the above solution in this related GitHub issue.
One guess I have is that ExitStatus(unix_wait_status(256)) (which was in my error logs) could be a timeout from waiting too long due to slower deployment validation.
I did notice my deployments using later versions (solana-cli 1.9.1, anchor-cli 0.20.1) were significantly slower, especially for larger programs (my small program deployments still worked fine). These were the versions that showed the same "msg" logging as the OP.
UPDATE:
Nowadays, my most foolproof solution is to run the snippet below before deployments:
sh -c "$(curl -sSfL https://release.solana.com/v1.8.13/install)"
For deploying to devnet via Anchor, the error I got:
Error: Custom: Invalid blockhash
There was a problem deploying: Output { status: ExitStatus(unix_wait_status(256)), stdout: "", stderr: "" }.
I have solved this. Here is what I tried:
Local npm package: @project-serum/anchor 0.20.1
Solana program dependencies: anchor-lang 0.20.1, anchor-spl 0.20.1, solana-program 1.9.4
Deploy to devnet: failed
Local npm package: @project-serum/anchor 0.19.0
Solana program dependencies: anchor-lang 0.19.0, anchor-spl 0.19.0, solana-program 1.9.4
Deploy to devnet: successful or failed
Local npm package: @project-serum/anchor 0.18.2
Solana program dependencies: anchor-lang 0.18.2, anchor-spl 0.18.2, solana-program 1.9.4
Deploy to devnet: successful or failed
In all cases, the global environment is the same:
Rust 1.57.0
solana-cli 1.9.4
@project-serum/anchor-cli 0.20.1
It seems that transactions can sometimes be dropped, and program deployments consist of many such transactions. See reference: https://www.brianfriel.xyz/resending-dropped-transactions-on-solana/
This seems to be caused purely by Anchor's version. I had the following versions installed:
Rust 1.59.0
solana-cli 1.9.6
anchor-lang 0.22.1
I couldn't deploy to devnet because of
Error: Custom: Invalid blockhash
There was a problem deploying: Output { status: ExitStatus(unix_wait_status(256)), stdout: "", stderr: "" }.
So I just changed anchor-lang to 0.22.0 and all worked well.
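For example, in the program's Cargo.toml, the pin would look like this (a minimal sketch; include anchor-spl only if you use it):
[dependencies]
anchor-lang = "0.22.0"
anchor-spl = "0.22.0"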
That's very strange, and not the normal output that you should get while deploying a program. Have you tried updating to the latest Solana SDK, which is 1.9.2 at the time of writing? More docs at https://docs.solana.com/cli/install-solana-cli-tools#use-solanas-install-tool
I just tried this on devnet, and got the following output:
$ solana program deploy path/to/program.so -u devnet
Finding leader nodes...
[x/y] Transactions sent...
[x/y] Transactions confirmed. Retrying in z blocks.
Program Id: 6sxk3XpYapcRnpSwRKFS1nGK9MJpm8Jkb9aofnBcG1p8
For Mac M1, these tool versions work (I tried them myself):
Rust 1.57.0
solana-cli 1.8.0
@project-serum/anchor-cli 0.20.1
https://github.com/project-serum/anchor/issues/1157#issuecomment-1065828414
Try solana-cli 1.8.12, as 1.9.* seems to have issues deploying on devnet.
I had a similar error in the Chainlink contract GitHub example. I changed this line in the Execute struct:
pub system_program: Program<'info, System>,
to this:
#[account(address = system_program::ID)]
/// CHECK: this is the system program, validated by the address constraint above
pub system_program: AccountInfo<'info>,
I also had to import system_program:
use anchor_lang::solana_program::system_program;
For me, solana-cli was at 1.9.4 and I just updated with
solana-install update
which downgraded it to 1.8.16, and my program deployed without any problem. Before this I couldn't deploy using anchor test or solana program deploy.
Also, if you upgrade to one of the 1.10.x versions, the issue is fixed.
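If you need to pin an exact release rather than taking whatever update selects, solana-install init accepts a version (using the 1.8.16 release mentioned above as an example):
solana-install init 1.8.16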

AWS OpsWorks setup_failed for Instance - unable to deploy_branch

I've had a remote dashboard running fine for a couple of years (written for me by an external developer). It runs on an EC2 instance and is configured using OpsWorks.
Today it's not working, and I see in OpsWorks that the instance is showing as setup_failed.
According to the logs it fails here:
[2021-07-02T15:00:59+00:00] FATAL: Stacktrace dumped to /var/chef/runs/18bc4301-71c1-4393-bb26-eae958791d5a/local-mode-cache/cache/chef-stacktrace.out
[2021-07-02T15:00:59+00:00] FATAL: Please provide the contents of the stacktrace.out file if you file a bug report
[2021-07-02T15:00:59+00:00] ERROR: deploy_branch[/srv/api] (iparcelbox::deploy-api line 45) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '255'
---- Begin output of git fetch origin ----
STDOUT:
STDERR: error: cannot open .git/FETCH_HEAD: Permission denied
---- End output of git fetch origin ----
Ran git fetch origin returned 255
I've checked the recipe file for iparcelbox::deploy-api and line 45 calls a deploy_branch:
deploy_branch server_path do
  user userName
  group groupName
  repository node[:iparcelbox][:git_url]
  revision node[:iparcelbox][:revision]
  enable_submodules false
  migrate false
  shallow_clone true
  git_ssh_wrapper "/tmp/api_git_wrapper.sh"
  rollback_on_error false
  keep_releases 5
  symlink_before_migrate.clear
  purge_before_symlink purge_dirs
  create_dirs_before_symlink []
  symlinks({})
  action :deploy
end
So as I understand it, the deploy_branch is trying to fetch a git repo, and for some reason it's failing? I've checked my GitHub repository for the source files and I can see an ssh 'Deploy Key' which is showing as used within the last week.
If anyone could give me any suggestions as to what else to try, it would be much appreciated!
I found the answer to this: I thought the issue was permission denied accessing the git repository, but actually it was because the ownership of the destination folder on my instance had been modified. Setting the ownership back to what the Chef recipe specifies, using chown, allowed the setup to complete successfully.
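For example, with the /srv/api path from the error above (the user and group here are placeholders; use whatever the recipe's user and group attributes specify):
# restore ownership of the deploy target recursively
sudo chown -R deployuser:deploygroup /srv/api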

AWS: ERROR: Pre-processing of application version xxx has failed and Some application versions failed to process. Unable to continue deployment

Hi, I am trying to deploy a Node application from Cloud9 to Elastic Beanstalk, but I keep getting the error below.
Starting environment deployment via CodeCommit
--- Waiting for Application Versions to be pre-processed ---
ERROR: Pre-processing of application version app-491a-200623_151654 has failed.
ERROR: Some application versions failed to process. Unable to continue deployment.
I have attached an image of the IAM roles that I have. Any solutions?
Go to your console and open the Elastic Beanstalk console. Go to both applications and environments and delete them. Then, in your terminal, run:
eb init            # follow the instructions
eb create --single # follow the instructions
This should fix the error, which is caused by application versions left in a failed state. If you want to check for those, run:
aws elasticbeanstalk describe-application-versions
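To list only the versions stuck in a failed state, you can filter the output (a sketch using the AWS CLI's --query option; Status is a field of each returned application version):
aws elasticbeanstalk describe-application-versions \
  --query 'ApplicationVersions[?Status==`Failed`].[ApplicationName,VersionLabel,Status]' \
  --output table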
I was searching for this answer as a result of watching a YouTube tutorial for how to pass the AWS Certified Developer Associate exam. If anyone else gets this error as a result of that tutorial, delete the 002_node_command.config file created in the tutorial and commit that change, as that is causing the error to occur.
A failure in the pre-processing phase may be caused by an invalid manifest, configuration, or .ebextensions file.
If you deploy an (invalid) application version using eb deploy with the preprocess option enabled, the details of the error will not be revealed.
You can remove the --process flag and enable the verbose option to improve the error output.
In my case I deployed using this command:
eb deploy -l "XXX" -p
and it can return a failure when I mess around with .ebextensions:
ERROR: Pre-processing of application version xxx has failed.
ERROR: Some application versions failed to process. Unable to continue deployment.
With that output I can't figure out what is wrong,
but deploying without -p (or --process) and adding the -v (verbose) flag:
eb deploy -l "$deployname" -v
returns something more useful:
Uploading: [##################################################] 100% Done...
INFO: Creating AppVersion xxx
ERROR: InvalidParameterValueError - The configuration file .ebextensions/16-my_custom_config_file.config in application version xxx contains invalid YAML or JSON.
YAML exception: Invalid Yaml: while scanning a simple key
in 'reader', line 6, column 1:
(... details of the error ...)
, JSON exception: Invalid JSON: Unexpected character (#) at position 0.. Update the configuration file.
Now I can fix the problem.

trying to debug "502 Bad Gateway" error after deploying react app to gcp?

I've deployed a React app via "gcloud app deploy". The "gcloud app browse" command opens a browser which tries to load for a while but then displays a browser title of "502 Bad Gateway." I found the following troubleshooting page:
https://cloud.google.com/endpoints/docs/openapi/troubleshoot-response-errors#gae_errors
The following info on the troubleshooting page appears to be a good match for my scenario:
"An error code 502 with BAD_GATEWAY in the message usually indicates
that App Engine terminated the application because it ran out of
memory. The default App Engine flexible VM only has 1GB of memory,
with only 600MB available for the application container."
But I don't see any "out of memory" error reference in my logs for this. I think I probably need to ensure that I "gcloud app deploy" with a proper app.yaml file. I'm having problems identifying what is a valid minimum yaml file for my React app for which I can be assured that my "gcloud app deploy" will have the expected result. I found the following reference which appears to be a good starting point:
https://cloud.google.com/endpoints/docs/openapi/get-started-app-engine
^^^ This page refers to the following yaml sample code:
https://github.com/GoogleCloudPlatform/java-docs-samples/blob/master/endpoints/getting-started/src/main/appengine/app.yaml
But the URL refers to "java-docs-samples", so I'm not sure whether this is a valid yaml file for a React app deployment. Can you provide some guidance on this? I'm really just looking for the minimum yaml file I can use for a successful deployment. This is the structure of the yaml file I used for my initial "gcloud app deploy"; the deployment process appeared to indicate success, but I'm not sure whether there is any fatal flaw here or anything else that may be missing:
runtime: nodejs
env: flex
manual_scaling:
  instances: 1
resources:
  cpu: 1
From what I understand, you just want a minimal, known-good app.yaml for React apps, since out-of-memory seems to be the issue if everything else is correct.
A sample app.yaml for react is the following:
# [START runtime]
runtime: nodejs
env: flex
# [END runtime]
# [START handlers]
handlers:
- url: /
  static_files: index.html
  upload: index.html
# [END handlers]
But you need to modify your handlers according to your needs/configuration.
A 502 error sometimes indicates that your app itself has an issue, so it's better to test locally first and make sure your app is working.
Then, for the memory part, you can try specifying an instance type with more memory. If it still throws the same error, then most likely the issue is within your app or its dependencies.
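For instance, on the flexible environment you can raise the container memory in app.yaml (a sketch; the value is illustrative, and memory_gb is the flex-environment resource setting):
resources:
  cpu: 1
  memory_gb: 2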
I think there is something about react-scripts start that Google Cloud doesn't like; I've had trouble with this (React app + Google Cloud deployment) twice in completely different environments (one had Docker and one did not), but the first time I never posted anything to Stack Overflow, so I had to go through the pain again. :p
Try changing the package.json file to not use react-scripts start when you run npm run start.
Note that this will overwrite the npm run start and npm start commands, so if you use this, you can also update the package.json with another keyword such as local and change your local running process to npm run local:
"scripts": {
"start": "serve -s build",
"local": "react-scripts start",
"build": "react-scripts build",
...
},
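Note this assumes the serve package is a dependency and that a production build exists, so something like this before deploying (a sketch):
npm install --save serve   # makes "serve -s build" available in production
npm run build              # produces the build/ directory that serve serves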
A working repo

I am deploying with 'eb deploy' in aws but getting the following error

I am using 'eb deploy' to deploy my commits but am getting this error:
WARNING: You have uncommitted changes.
Creating application version archive "6fea".
Uploading: [##################################################] 100% Done...
INFO: Environment update is starting.
INFO: Deploying new version to instance(s).
ERROR: [Instance: i-10d1f9ec] Command failed on instance. Return code: 126
Output: /bin/sh: ./scripts/update-ftp-dns.sh: /bin/sh^M: bad interpreter: No such file or directory.
container_command 07-update_ftp_dns in .ebextensions/03-vsftpd.config failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
INFO: Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
INFO: New application version was deployed to running EC2 instances.
ERROR: Update environment operation is complete, but with errors. For more information, see troubleshooting documentation.
Please help me to solve this.
The error message is a bit hidden, but it's in there:
Output: /bin/sh: ./scripts/update-ftp-dns.sh: /bin/sh^M: bad interpreter: No such file or directory.
If I had to guess, you have a line break with both a line feed and a carriage return in it. It's treating the carriage return character as if it's part of the name of the executable.
Make sure that you've converted the /scripts/update-ftp-dns.sh script so that it uses Unix line endings only.
See ./configure : /bin/sh^M : bad interpreter
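One common way to do the conversion (dos2unix if it's installed, or sed as a fallback):
# convert CRLF line endings to LF in place
dos2unix scripts/update-ftp-dns.sh
# or, without dos2unix:
sed -i 's/\r$//' scripts/update-ftp-dns.sh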
I had something similar, and the cause was having Git's autocrlf value set to true. What this means is that Git will convert the file to a Windows-formatted one when git checkout is run - which unfortunately means that the Elastic Beanstalk tool tries to upload Windows-formatted files to your Linux server, which will manifest in errors like this.
I fixed it by switching autocrlf to false and committing the relevant file again. Do be aware of the repercussions of this, however.
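For example (the file path is just the one from the question; adjust as needed):
# stop Git converting line endings to CRLF on checkout
git config --global core.autocrlf false
# fix the file itself, then commit it again
sed -i 's/\r$//' scripts/update-ftp-dns.sh
git commit -am "Use Unix line endings"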