Broken gpg key from google kubernetes kubectl - kubectl

So I'm trying to install kubectl from the official repository, but I'm getting an error while adding their key: https://packages.cloud.google.com/apt/doc/apt-key.gpg.
I've tried:
user@machine:~/Downloads$ gpg --verify apt-key.gpg
gpg: verify signatures failed: Unexpected error
Is it a bad key for you as well? I'm using Deepin OS for the first time, so perhaps it's some internal issue?

It turned out my corporate proxy was altering the file, so the command sudo apt-key add apt-key.gpg was failing with 'Unexpected error'. The download site doesn't provide a hash to compare against.
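One way to check whether a proxy has mangled the key (a rough sketch of my own, not part of the original answer) is to inspect what was actually downloaded before handing it to apt-key:
# Download the key and look at what came back; a proxy-injected error page
# shows up as HTML text instead of binary PGP key material.
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg -o apt-key.gpg
file apt-key.gpg        # expect PGP/GPG key material, not ASCII text or HTML
head -c 200 apt-key.gpg # HTML here means the proxy rewrote the response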

yum fails to fetch mirror list 403 Amazon Linux

Edit: this seems to be working now. Discussion here: https://forums.aws.amazon.com/thread.jspa?threadID=344200
I'm finding that Amazon Linux's yum cannot retrieve the mirror list, failing with a 403 error.
Going to http://amazonlinux.default.amazonaws.com/2/core/latest/x86_64/mirror.list in a browser does indeed produce a 403 error.
This is running from local docker environment, so no S3 VPC endpoint is involved.
What can I do about this?
To reproduce:
docker run -it --entrypoint bash amazonlinux:latest
yum update
This produces the following:
bash-4.2# yum update
Loaded plugins: ovl, priorities
Could not retrieve mirrorlist http://amazonlinux.default.amazonaws.com/2/core/latest/x86_64/mirror.list error was
14: HTTP Error 403 - Forbidden
One of the configured repositories failed (Unknown),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:
1. Contact the upstream for the repository and get them to fix the problem.
2. Reconfigure the baseurl/etc. for the repository, to point to a working
((truncated long output))
Cannot find a valid baseurl for repo: amzn2-core/2/x86_64
It would seem the files in AWS's S3 bucket at this location had been removed or access to them revoked.
This has now been resolved by AWS.
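For anyone hitting a similar failure, a quick way to confirm whether the mirror list itself is reachable (a minimal check of my own; it assumes curl is present in the image) is to request it directly from inside the container:
docker run -it --entrypoint bash amazonlinux:latest
# inside the container:
curl -sI http://amazonlinux.default.amazonaws.com/2/core/latest/x86_64/mirror.list
# an "HTTP/1.1 403 Forbidden" response here confirms the problem is on the
# repository side rather than in the local yum configuration.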

Error | enroll the member admin | fabric-ca-client enroll | Amazon Managed Blockchain

I am trying to set up a Hyperledger Fabric blockchain network using Amazon Managed Blockchain, following this guide. To enroll, I used the following command:
fabric-ca-client enroll -u 'https://admin:#D7a22hjjh*9b9@ca.m-zzz.n-zzzz.managedblockchain.us-east-1.amazonaws.com:30002' --tls.certfiles /home/ec2-user/managedblockchain-tls-chain.pem -M /home/ec2-user/admin-msp
I got the following error:
Error: The URL of the fabric CA server is missing the enrollment ID and secret; found 'https://admin:#D7a22613ac75c9b9@ca.m-zzz.n-zzzz.managedblockchain.us-east-1.amazonaws.com:30002' but expecting 'https://<enrollmentID>:<secret>@admin:'
I thought this was due to the # symbol in the password. For testing purposes I removed the # symbol and tried again. I got the following error:
Error: Failed to create keystore directory: mkdir /home/ec2-user/admin-msp: permission denied
When I use sudo, I get the following error:
sudo: fabric-ca-client: command not found
Help me to fix this issue.
What user are you logged in as? (Check with whoami.)
You should be using ec2-user, which has access to the /home/ec2-user/ directory.
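A quick sanity check before retrying (plain shell commands, nothing specific to Managed Blockchain):
whoami                 # should print ec2-user
ls -ld /home/ec2-user  # ec2-user should own its home directory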
You can try manually creating the admin-msp directory before enrolling the admin:
cd ~ && mkdir admin-msp
Then try running your command.
If that doesn't work, use sudo to create the directory and then chown it to be owned by ec2-user:
cd ~
sudo mkdir admin-msp
sudo chown ec2-user ~/admin-msp
Then try your command.
Note that you can also wrap the username/password in quotes:
fabric-ca-client enroll -u 'https://"admin":"#D7a22hjjh*9b9"@ca.m-zzz.n-zzzz.managedblockchain.us-east-1.amazonaws.com:30002' --tls.certfiles /home/ec2-user/managedblockchain-tls-chain.pem -M /home/ec2-user/admin-msp
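Since # starts the fragment part of a URL, another option (not part of the original answer, so treat it as an untested suggestion) is to percent-encode the special characters in the secret instead of removing them:
# '#' becomes %23 and '*' becomes %2A; the secret below is the placeholder
# value from the question, encoded by hand.
fabric-ca-client enroll -u 'https://admin:%23D7a22hjjh%2A9b9@ca.m-zzz.n-zzzz.managedblockchain.us-east-1.amazonaws.com:30002' --tls.certfiles /home/ec2-user/managedblockchain-tls-chain.pem -M /home/ec2-user/admin-msp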

appcfg.py request_logs certificate verify failed (_ssl.c:661)

We've been using appcfg.py request_logs to download GAE logs; every once in a while it throws the error:
httplib2.SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:661)
But after a few retries it works; sometimes it also works after updating gcloud with gcloud components update. We thought it might be some kind of network throttling issue and didn't give it much thought. Lately, though, we're trying to figure out what is causing this.
The full command we use is:
appcfg.py request_logs -A testapp --version=20180321t073239 --severity=0 all_logs.log --append --no_cookies
It seems the error is related to the httplib2 library, but since it is bundled with appcfg.py we're not sure we should tamper with its internals.
Versions:
Python 2.7.13
Google Cloud SDK 196.0.0
app-engine-python 1.9.67
This has become more persistent; I haven't been able to download logs for a few days now, no matter how many times I try.
Looking at the request_logs command, I tried it again, but without the --no_cookies flag, to see what would happen.
appcfg.py request_logs -A testapp --version=20180321t073239 --severity=0 all_logs.log --append
I got the error:
Error 403: --- begin server output ---
You do not have permission to modify this app (app_id=u'e~testapp').
--- end server output ---
This led me to the answer provided here, https://stackoverflow.com/a/34694577/1394228, by @ninjahoahong. It worked for me and the logs were downloaded on the first try, in case someone else faces the same issue.
There's also this Google Group post which I didn't try but seems like it does the same thing.
I'm not sure whether removing the ~/.appcfg_oauth2_tokens file has other side effects; yet to find out.
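A cautious way to test that (my own suggestion, not from the linked answer) is to move the token file aside rather than delete it, so it can be restored if anything else turns out to depend on it:
mv ~/.appcfg_oauth2_tokens ~/.appcfg_oauth2_tokens.bak
# re-run request_logs; appcfg.py should prompt for OAuth again and recreate the file.
# To undo: mv ~/.appcfg_oauth2_tokens.bak ~/.appcfg_oauth2_tokens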
Update:
I also found out that the httplib2 bundled at /Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/httplib2 was version 0.7.5. I upgraded it to 0.11.3 using pip's target-directory option:
sudo pip2 install --upgrade httplib2 -t /Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/httplib2/

AWS Elastic Beanstalk commands return no output

I am very new to the Amazon Web Services and have been trying a learn-by-doing approach with them.
In summary, I was trying to set up Git with the Elastic Beanstalk command line interface for my web app. However, I wanted to use my SSH key pair to authenticate, and in my naivety and ignorance I supplied the SSH key files where the aws-access-id and secret were expected; now I can't get it to work. More details below.
I have my project directory with Git set up so that it works. I then open the git bash window MINGW64 (I am on Windows 10) and attempt to set up eb.
$ eb init
It then tells me that my credentials are not set up and asks me for the aws-access-id and the secret. I had just set up the SSH key pair and tried entering those files; what's the harm in trying? A broken EB setup, it turns out. Now, the instances still seem to run fine, judging by their status on the AWS console website. However, whatever I type into the bash:
$ eb init
$ eb status
$ eb deploy
$
There is no output. Not even an error. It just silently returns to awaiting a new command from me.
When using the --debug option with these commands, a long list of operations is returned, ending with
botocore.parsers.ResponseParserError: Unable to parse response (no element found: line 1, column 0), invalid XML received:
b''
I thought I would be able to log out or something of the sort, so that I could enter the proper credentials I had messed up from the beginning. I restarted the web app from the AWS web interface and restarted my PC. No success.
Thanks in advance.
EDIT:
I also tried reinstalling awscli and awsebcli:
pip uninstall awsebcli
pip uninstall awscli
pip install awscli
pip install awsebcli --upgrade --user
Problem persists, but now there is one output (previously seen only upon --debug option):
$ eb init
ERROR: ResponseParserError - Unable to parse response (no element found: line 1, column 0), invalid XML received:
b''
$
It sounds like you have replaced your AWS credentials in the ~/.aws/credentials and/or ~/.aws/config file(s) with your SSH key. You could fix these files manually, or run aws configure if you have the AWS CLI installed.
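For reference, running aws configure rewrites those files; a correctly populated ~/.aws/credentials looks roughly like this (placeholder values, not real keys):
aws configure
# AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
# AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# Default region name [None]: eu-west-1
# Default output format [None]: json
cat ~/.aws/credentials
# [default]
# aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
# aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx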

How to fix "gpg: decryption failed: secret key not available" when running "lein deploy clojars"?

I've been trying for ages to deploy a library to clojars without having to specify username and password using lein deploy clojars. But I end up with the following error message:
gpg: gpg-agent is not available in this session
gpg: can't query passphrase in batch mode
gpg: Invalid passphrase; please try again ...
gpg: can't query passphrase in batch mode
gpg: Invalid passphrase; please try again ...
gpg: can't query passphrase in batch mode
gpg: decryption failed: secret key not available
Could not decrypt credentials from /Users/johan/.lein/credentials.clj.gpg
nil
See `lein help gpg` for how to install gpg.
No credentials found for clojars
See `lein help deploying` for how to configure credentials to avoid prompts.
My ~/.lein/credentials.clj.gpg looks like this (unencrypted):
{#"https://clojars.org/repo"
 {:username "<username>" :password "<password>"}}
I know that the username and password are correct (they are just copied from 1password).
Running gpg --list-keys gives me:
/Users/myname/.gnupg/pubring.gpg
-------------------------------
pub 2048R/0486A2C5 2010-10-12
uid My Name <myname@somemail.com>
sub 2048R/0617110A 2010-10-12
I've tried specifying both 0486A2C5 and 0617110A in ~/.lein/profiles.clj (:signing {:gpg-key "<key>"}) but it doesn't make any difference.
I've also made sure that use-agent is uncommented in ~/.gnupg/gpg.conf and I've also made sure that gpg-agent is installed on my machine (brew install gpg-agent).
Update 1
Running gpg --list-secret-keys gives me:
/Users/myname/.gnupg/secring.gpg
-------------------------------
sec 2048R/0486A2C5 2010-10-12
uid My Name <myname@somemail.com>
ssb 2048R/0617110A 2010-10-12
Running gpg --quiet --batch --decrypt ~/.lein/credentials.clj.gpg prompts me for my password and then yields the unencrypted results.
Update 2
I'm using gpg 1.4.20 and gpg-agent 2.0.29 (both installed using brew) on Mac OS X El Capitan.
What am I missing!?
I solved this by uninstalling gpg from brew (brew uninstall gpg) and then installing the binaries from GPGTools. I then opened the terminal and created a symbolic link from gpg2 to gpg:
$ ln -s /usr/local/MacGPG2/bin/gpg2 /usr/local/MacGPG2/bin/gpg
Then I added /usr/local/MacGPG2/bin to the PATH in my ~/.profile. When running lein deploy clojars I now get a graphical prompt where I enter the passphrase, and afterwards it successfully signs the release and publishes the artifacts.
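For completeness, the PATH change can be made like this (the path matches the MacGPG2 install location mentioned above; adjust it if GPG Suite is installed elsewhere):
echo 'export PATH="/usr/local/MacGPG2/bin:$PATH"' >> ~/.profile
source ~/.profile
gpg --version   # should now report the GnuPG 2.x binary from MacGPG2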
The easiest way to address this issue is to attempt to decrypt a file using gpg with the --batch flag. I think you'll find that your agent is installed but has not been started, or is perhaps improperly configured. If you can get gpg to work with --batch, then the deploy command will work.
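A concrete way to run that check (the decrypt command is the same one used in the question's Update 1; the agent line is an assumption about a GnuPG 1.4 + standalone gpg-agent setup like the one described above):
# gpg 1.x does not start an agent automatically; launch one and export its
# environment so gpg can talk to it (gpg 2.x handles this on its own):
eval "$(gpg-agent --daemon)"
# then test non-interactive decryption -- roughly what Leiningen does when it
# reads credentials.clj.gpg:
gpg --batch --quiet --decrypt ~/.lein/credentials.clj.gpg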
My experience has been that naming the version of the jar you want to deploy with -SNAPSHOT on the end means that you do not have to set up public/private keys.