When I try to copy a file from my local Windows machine to a Google Compute Engine instance (RHEL 6) using the gcloud command, I get the network error below:
Fatal: Network error: Connection timed out
ERROR: (gcloud.compute.copy-files) [C:\Program Files\Google\Cloud SDK\google-cloud-sdk\bin\sdk\scp.EXE] exited with return code [1].
Here is the command I used -
C:\Program Files\Google\Cloud SDK\java>gcloud compute copy-files --plain test.txt [userid@DEST_instance:~/directory_name] --zone us-central1-f
Could anyone point out what is causing this error?
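A Connection timed out from scp.EXE usually means nothing answered on port 22 at all, so a first thing to check (a diagnostic sketch, reusing the instance name and zone from the command above) is whether plain SSH reaches the instance and whether a firewall rule allows tcp:22:
gcloud compute ssh DEST_instance --zone us-central1-f
gcloud compute firewall-rules list
If the ssh command also times out, the problem is the network path or firewall rather than copy-files itself.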
Related
I am getting a subprocess error when launching an EC2 cluster instance.
The terminal hangs at
Waiting for cluster to enter 'ssh-ready' state
when running
./spark-ec2 --key-pair=ru_spark --identity-file=ru_spark.pem --region=us-east-1 --zone=us-east-1a launch mycluster
Console:
Warning: Permanently added 'ec2-52-87-225-32.compute-1.amazonaws.com,52.87.225.32' (RSA) to the list of known hosts.
Connection to ec2-52-87-225-32.compute-1.amazonaws.com closed.
Warning: Permanently added 'ec2-52-87-225-32.compute-1.amazonaws.com,52.87.225.32' (RSA) to the list of known hosts.
Transferring cluster's SSH key to slaves...
ec2-34-207-153-79.compute-1.amazonaws.com
Warning: Permanently added 'ec2-34-207-153-79.compute-1.amazonaws.com,34.207.153.79' (RSA) to the list of known hosts.
Cloning spark-ec2 scripts from https://github.com/amplab/spark-ec2/tree/branch-1.6 on master...
Warning: Permanently added 'ec2-52-87-225-32.compute-1.amazonaws.com,52.87.225.32' (RSA) to the list of known hosts.
Cloning into 'spark-ec2'...
error: Peer reports incompatible or unsupported protocol version. while accessing https://github.com/amplab/spark-ec2/info/refs?service=git-upload-pack
fatal: HTTP request failed
Connection to ec2-52-87-225-32.compute-1.amazonaws.com closed.
Error executing remote command, retrying after 30 seconds: Command '['ssh', '-o', 'StrictHostKeyChecking=no', '-o', 'UserKnownHostsFile=/dev/null', '-i', 'ru_spark.pem', '-t', '-t', u'root@ec2-52-87-225-32.compute-1.amazonaws.com', 'rm -rf spark-ec2 && git clone https://github.com/amplab/spark-ec2 -b branch-1.6 spark-ec2']' returned non-zero exit status 128
I updated curl's SSL support and changed the file permissions on ru_spark.pem to 400 and then 600, but neither change solved the issue.
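That error message comes from the TLS handshake on the cluster master: GitHub only accepts TLS 1.2 or newer, and the old AMIs used by spark-ec2 ship a curl/NSS stack that cannot speak it. A quick way to confirm (a diagnostic sketch, using the master hostname from the log above) is to run curl against GitHub from the master itself:
ssh -i ru_spark.pem root@ec2-52-87-225-32.compute-1.amazonaws.com "curl -vI https://github.com"
If the handshake fails there too, updating the TLS stack on the instances (for example yum update -y nss curl libcurl) is needed before git clone over https can work.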
I am trying to launch the AWS Replication Agent on CentOS 8.3, and I always get an error during the replication agent installation (python3 aws-replication-installer-init.py ...).
The output of the process shows me:
The installation of the AWS Replication Agent has started.
Identifying volumes for replication.
Identified volume for replication: /dev/sdb of size 7 GiB
Identified volume for replication: /dev/sda of size 11 GiB
All volumes for replication were successfully identified.
Downloading the AWS Replication Agent onto the source server... Finished.
Installing the AWS Replication Agent onto the source server...
Error: Failed Installing the AWS Replication Agent
Installation failed.
If I check aws_replication_agent_installer.log, I can see messages like:
make -C /lib/modules/4.18.0-348.2.1.el8_5.x86_64/build M=/tmp/tmp8mdbz3st/AgentDriver modules
.....................
retcode: 0
Build essentials returned with code None
--- Building software
running: 'which zypper'
retcode: 256
running: 'make'
retcode: 0
running: 'chmod 0770 ./aws-replication-driver-commander'
retcode: 0
running: '/sbin/rmmod aws-replication-driver'
retcode: 256
running: '/sbin/insmod ./aws-replication-driver.ko'
retcode: 256
running: '/sbin/rmmod aws-replication-driver'
retcode: 256
Cannot insert module. Try 0.
running: '/sbin/rmmod aws-replication-driver'
retcode: 256
running: '/sbin/insmod ./aws-replication-driver.ko'
retcode: 256
running: '/sbin/rmmod aws-replication-driver'
retcode: 256
Cannot insert module. Try 1.
............
Cannot insert module. Try 9.
Installation returned with code 2
Installation failed due to unspecified error:
stderr: sh: /var/lib/aws-replication-agent/stopAgent.sh: No such file or directory
which: no zypper in (/sbin:/bin:/usr/sbin:/usr/bin:/sbin:/usr/sbin)
which: no apt-get in (/sbin:/bin:/usr/sbin:/usr/bin:/sbin:/usr/sbin)
which: no zypper in (/sbin:/bin:/usr/sbin:/usr/bin:/sbin:/usr/sbin)
rmmod: ERROR: Module aws_replication_driver is not currently loaded
insmod: ERROR: could not insert module ./aws-replication-driver.ko: Required key not available
rmmod: ERROR: Module aws_replication_driver is not currently loaded
Any idea what is causing this error?
The insmod error Required key not available means that Secure Boot is rejecting the unsigned aws-replication-driver module. Running the command:
mokutil --disable-validation
will allow unsigned kernel modules to be loaded. The change is confirmed on the next boot, where you must enter characters from the password that mokutil asks you to set when you run the command.
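A slightly fuller sketch of that flow (the password prompt is part of mokutil itself):
# Check whether Secure Boot is what is blocking the module
mokutil --sb-state
# Request that signature validation be disabled; mokutil asks you to set a one-time password
sudo mokutil --disable-validation
# Reboot; in the blue MOK Manager screen choose "Change Secure Boot state"
# and enter the requested characters of that password
sudo reboot
After that, insmod of the unsigned aws-replication-driver.ko should no longer fail with Required key not available.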
I referred to this question, but I have a different one:
Get VS Code Python extension to connect to Jupyter running on remote AWS EMR master node
Environment:
The Jupyter Notebook is on AWS EMR.
I access the notebook from the browser through a SOCKS5 proxy. To do so, I have to connect to the work VPN, SSH in with PuTTY using a .ppk file, and set up a tunnel (dynamic port forwarding).
To automate the step above, I am using Plink: c:\stuff\plink.exe -ssh -i c:\stuff\file.ppk -D XXXX user-name@<some_IP_address>
I can successfully enable the proxy in a browser by using msedge.exe --proxy-server="socks5://<address>" or chrome.exe --proxy-server="socks5://<address>"
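Put together, the two pieces look roughly like this (a sketch; 8088 is taken from the proxy-server value that appears in the VS Code logs below, and the host is a placeholder):
c:\stuff\plink.exe -ssh -i c:\stuff\file.ppk -D 8088 user-name@<some_IP_address>
code --proxy-server="socks5://localhost:8088"
The Plink window has to stay open, since it is what keeps the SOCKS5 tunnel alive.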
VS Code Version:
C:\Users\ablaze>code --version
1.55.2
3c4e3df9e89829dce27b7b5c24508306b151f30d
x64
Objective:
How can I access the remote Jupyter Notebook hosted on AWS EMR behind a proxy from Visual Studio Code?
I tried and failed:
Visual Studio Code is built on top of Electron, so it should benefit from all the networking stack capabilities of Chromium.
I used Plink as above to SSH, and then executed the following command in my Windows command prompt: C:\Users\ablaze>code --proxy-server="socks5://<address>" --verbose
Error message:
[17668:0505/104410.116:WARNING:dns_config_service_win.cc(692)] Failed to read DnsConfig.
[main 2021-05-05T14:44:10.224Z] Starting VS Code
[main 2021-05-05T14:44:10.224Z] from: c:\Users\ablaze\AppData\Local\Programs\Microsoft VS Code\resources\app
[main 2021-05-05T14:44:10.224Z] args: {
_: [],
...
'no-proxy-server': false,
'proxy-server': 'socks5://localhost:8088',
...
logsPath: 'C:\\Users\\ablaze\\AppData\\Roaming\\Code\\logs\\20210505T104410'
}
...
[main 2021-05-05T14:44:10.258Z] windowsManager#open pathsToOpen [
{
backupPath: 'C:\\Users\\ablaze\\AppData\\Roaming\\Code\\Backups\\1620224957079',
remoteAuthority: undefined
}
]
To double-check, I also opened the log file C:\Users\ablaze\AppData\Roaming\Code\logs\20210505T104410\main.log
...
[2021-05-05 10:44:40.345] [main] [trace] update#checkForUpdates, state = idle
[2021-05-05 10:44:40.345] [main] [info] update#setState checking for updates
[2021-05-05 10:44:40.345] [main] [trace] RequestService#request https://update.code.visualstudio.com/api/update/win32-x64-user/stable/3c4e3df9e89829dce27b7b5c24508306b151f30d
[2021-05-05 10:44:40.346] [main] [trace] resolveShellEnv(): skipped (Windows)
[2021-05-05 10:44:44.354] [main] [error] Error: net::ERR_PROXY_CONNECTION_FAILED
at SimpleURLLoaderWrapper.<anonymous> (electron/js2c/browser_init.js:109:6508)
at SimpleURLLoaderWrapper.emit (events.js:315:20)
[2021-05-05 10:44:44.354] [main] [info] update#setState idle
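When net::ERR_PROXY_CONNECTION_FAILED shows up, one thing worth ruling out is whether the SOCKS tunnel itself is alive, independent of VS Code (a diagnostic sketch, assuming the tunnel listens on localhost:8088 as in the log above):
curl -I --socks5 localhost:8088 https://github.com
If that fails too, the problem is the Plink tunnel (or the VPN), not VS Code's proxy handling.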
I cannot figure out why this is happening; I can build and run the Docker image locally without any problem.
Recent Events:
2015-05-25 12:57:07 UTC+1000 ERROR Update environment operation is complete, but with errors. For more information, see troubleshooting documentation.
2015-05-25 12:57:07 UTC+1000 INFO New application version was deployed to running EC2 instances.
2015-05-25 12:57:04 UTC+1000 INFO Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
2015-05-25 12:57:04 UTC+1000 ERROR [Instance: i-4775ec9b] Command failed on instance. Return code: 1 Output: (TRUNCATED)... run Docker container: vel="fatal" msg="Error response from daemon: Cannot start container 02c057b331bf3a3d912bf064f1dca3e00c95746b5748c3c4a28a5c6b452ff335: [8] System error: exec: \"bin/app\": permission denied" . Check snapshot logs for details. Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/04run.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
2015-05-25 12:57:03 UTC+1000 ERROR Failed to run Docker container: vel="fatal" msg="Error response from daemon: Cannot start container 02c057b331bf3a3d912bf064f1dca3e00c95746b5748c3c4a28a5c6b452ff335: [8] System error: exec: \"bin/app\": permission denied" . Check snapshot logs for details.
Dockerfile:
FROM java:8u45-jre
MAINTAINER Terence Munro <terry@zenkey.com.au>
ADD ["opt", "/opt"]
WORKDIR /opt/docker
RUN ["chown", "-R", "daemon:daemon", "."]
USER daemon
ENTRYPOINT ["bin/app"]
EXPOSE 9000
Dockerrun.aws.json:
{
"AWSEBDockerrunVersion": "1",
"Ports": [
{
"ContainerPort": "9000"
}
],
"Volumes": []
}
Additional logs as attachment at: https://forums.aws.amazon.com/thread.jspa?threadID=181270
Any help is extremely appreciated.
@nick-humrich's suggestion of trying eb local run worked, and using eb deploy ended up working as well.
I had previously been uploading through the web interface.
Initially, eb deploy was giving me ERROR: TypeError :: data must be a byte string, but I found this issue, which was resolved by uninstalling pyopenssl.
So I don't know why the web interface was giving me permission denied; perhaps something to do with the zip file?
But anyway I'm able to deploy now thank you.
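For reference, the two EB CLI commands involved (assuming eb init has already been run in the project directory):
eb local run    # builds and runs the Docker container locally, the way Elastic Beanstalk would
eb deploy       # packages and deploys the application version from the CLI instead of the web console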
I had a similar problem running Docker on Elastic Beanstalk. When I pointed CMD in the Dockerfile to a shell script (/path/to/my_script.sh), the EB deployment would fail with
/path/to/my_script.sh: Permission denied.
Apparently, even though I had run RUN chmod +x /path/to/my_script.sh during the Docker build, by the time the image was run, the permissions had been changed. Eventually, to make it work I settled on:
CMD ["/bin/bash","-c","chmod +x /path/to/my_script.sh && /path/to/my_script.sh"]
I was trying to run through the NFS example in the Kubernetes codebase on Container Engine, but I couldn't get the shares to mount. Turns out every time the nfs-server pod is launched, the kernel is throwing an error:
Apr 27 00:11:06 k8s-cluster-6-node-1 kernel: [60165.482242] ------------[ cut here ]------------
Apr 27 00:11:06 k8s-cluster-6-node-1 kernel: [60165.483060] WARNING: CPU: 0 PID: 7160 at /build/linux-50mAO0/linux-3.16.7-ckt4/fs/nfsd/nfs4recover.c:1195 nfsd4_umh_cltrack_init+0x4a/0x60 nfsd
Full output here: http://pastebin.com/qLzCFpAa
Any thoughts on how to solve this?
The NFS example doesn't work because GKE (by default) doesn't support running privileged containers, such as the nfs-server. I just tested this with a v0.16.0 cluster and kubectl v0.15.0 (the current gcloud default) and got a nice error message when I tried to start the nfs-server pod:
$ kubectl create -f nfs-server-pod.yaml
Error: Pod "nfs-server" is invalid: spec.containers[0].privileged: forbidden 'true'