I am getting a subprocess error when launching an EC2 cluster instance.
The terminal hangs on
Waiting for cluster to enter 'ssh-ready' state
when running
./spark-ec2 --key-pair=ru_spark --identity-file=ru_spark.pem --region=us-east-1 --zone=us-east-1a launch mycluster
Console:
Warning: Permanently added 'ec2-52-87-225-32.compute-1.amazonaws.com,52.87.225.32' (RSA) to the list of known hosts.
Connection to ec2-52-87-225-32.compute-1.amazonaws.com closed.
Warning: Permanently added 'ec2-52-87-225-32.compute-1.amazonaws.com,52.87.225.32' (RSA) to the list of known hosts.
Transferring cluster's SSH key to slaves...
ec2-34-207-153-79.compute-1.amazonaws.com
Warning: Permanently added 'ec2-34-207-153-79.compute-1.amazonaws.com,34.207.153.79' (RSA) to the list of known hosts.
Cloning spark-ec2 scripts from https://github.com/amplab/spark-ec2/tree/branch-1.6 on master...
Warning: Permanently added 'ec2-52-87-225-32.compute-1.amazonaws.com,52.87.225.32' (RSA) to the list of known hosts.
Cloning into 'spark-ec2'...
error: Peer reports incompatible or unsupported protocol version. while accessing https://github.com/amplab/spark-ec2/info/refs?service=git-upload-pack
fatal: HTTP request failed
Connection to ec2-52-87-225-32.compute-1.amazonaws.com closed.
Error executing remote command, retrying after 30 seconds: Command '['ssh', '-o', 'StrictHostKeyChecking=no', '-o', 'UserKnownHostsFile=/dev/null', '-i', 'ru_spark.pem', '-t', '-t', u'root@ec2-52-87-225-32.compute-1.amazonaws.com', 'rm -rf spark-ec2 && git clone https://github.com/amplab/spark-ec2 -b branch-1.6 spark-ec2']' returned non-zero exit status 128
I updated curl/SSL and changed the file permissions on ru_spark.pem to 400 and then 600, but neither has helped solve the issue.
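Since the failure is the git clone running on the master node itself (the "Peer reports incompatible or unsupported protocol version" error), one way to narrow it down is to SSH in and check whether the master's TLS stack can talk to GitHub at all. A minimal sketch, reusing the master's public DNS name from the log above and assuming the default yum-based spark-ec2 AMI; the package update is only a guess at a remedy, not a confirmed fix:

# Check whether curl on the master can complete a TLS handshake with GitHub.
ssh -i ru_spark.pem root@ec2-52-87-225-32.compute-1.amazonaws.com \
  'curl -sSI https://github.com | head -n 1'

# If the handshake fails, updating the TLS-related packages on the master may help
# (assumption: the AMI uses yum, as the stock spark-ec2 AMIs do).
ssh -i ru_spark.pem root@ec2-52-87-225-32.compute-1.amazonaws.com \
  'yum -y update nss nss-util curl git'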
I am building an .ova file using a base image (in .vmx format) as the source.
The base image (created, as said above, in .vmx format) is built from an Ubuntu 16.04 server using the vmware-iso builder.
Here is my builder configuration:
"builders": [
{
"type": "vmware-vmx",
"vmx_data": {
"memsize": "8192",
"numvcpus": "4"
},
"source_path": "path/to/base.vmx",
The first provisioner that will run is the following:
"provisioners": [
{
"type": "shell",
"inline": [
"sudo apt-get update -y",
"sudo apt-get upgrade -y",
...
However, although I have run this process successfully many times before, it suddenly breaks with the following error:
==> vmware-vmx: Cloning source VM...
==> vmware-vmx: Starting HTTP server on port 8031
==> vmware-vmx: Starting virtual machine...
==> vmware-vmx: Waiting 10s for boot...
==> vmware-vmx: Connecting to VM via VNC (127.0.0.1:5924)
==> vmware-vmx: Typing the boot command over VNC...
==> vmware-vmx: Waiting for SSH to become available...
==> vmware-vmx: Connected to SSH!
==> vmware-vmx: Provisioning with shell script: /tmp/packer-shell747369685
vmware-vmx: Reading package lists...
vmware-vmx: E: Could not get lock /var/lib/apt/lists/lock - open (11: Resource temporarily unavailable)
vmware-vmx: E: Unable to lock directory /var/lib/apt/lists/
==> vmware-vmx: Stopping virtual machine...
==> vmware-vmx: Deleting output directory...
Build 'vmware-vmx' errored: Script exited with non-zero exit status: 100
See "Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?".
The lock is placed while an apt process is running and is removed when that process completes. If a lock is present with no apparent process running, the process may have gotten stuck for some reason.
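In a Packer build this usually means something in the freshly booted guest (cloud-init, unattended-upgrades, a daily apt timer) is still holding the lock when the first provisioner starts. A minimal sketch of a workaround, assuming fuser is available in the base image: wait for the locks to be released at the top of the inline script before calling apt-get.

# Wait until no process holds the apt/dpkg locks, then run the updates.
while sudo fuser /var/lib/apt/lists/lock /var/lib/dpkg/lock >/dev/null 2>&1; do
  echo "waiting for apt/dpkg locks to be released..."
  sleep 5
done
sudo apt-get update -y
sudo apt-get upgrade -y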
Just for testing purposes, what should I exec to make a Docker container fail in AWS Batch with a custom exit code?
I have tried:
exit 137 -> CannotStartContainerError: API error (404): oci runtime error: container_linux.go:262: starting container process caused "exec: \"exit\": executable file not found in $PATH"
Exit 137
bash exit 137 -> bash: exit: No such file or directory
Assuming you are trying this with BusyBox, you can use either of the formats below:
Space delimited: sh -c 'exit 1'
JSON: ["sh","-c","exit 1"]
I am trying to connect to Kinesis using the Erlang library kinetic (https://github.com/AdRoll/kinetic). My development.config has my AWS key and secret in it; however, I am not sure what the metadata_base_url should be or what else I need in order to make it work. Currently I have:
%% -*- erlang -*-
[{kinetic,
  [{args, [
    % All of these values are optional
    % kinetic will get all of the context from the instance
    {metadata_base_url, "https://kinesis.us-east-1.amazonaws.com"},
    {aws_access_key_id, "mykey"},
    {aws_secret_access_key, "mysecret"},
    {iam_role, "kinetic"},
    {lhttpc_opts, [{max_connections, 5000}]}
  ]}]
}].
Below are my results when trying to start it:
kinetic (master) $ make
==> lhttpc (get-deps)
==> jiffy (get-deps)
==> meck (get-deps)
==> kinetic (get-deps)
==> lhttpc (compile)
==> jiffy (compile)
==> meck (compile)
==> kinetic (compile)
Compiled src/kinetic.erl
kinetic (master) $ erl -pa ebin -pa deps/*/ebin -s inets -s crypto -s ssl -s lhttpc -config development -s kinetic
Erlang R16B03-1 (erts-5.10.4) [source] [64-bit] [smp:8:8] [async-threads:10] [hipe] [kernel-poll:false] [dtrace]
Eshell V5.10.4 (abort with ^G)
1>
=INFO REPORT==== 2-Dec-2014::11:51:31 ===
application: kinetic
exited: {{shutdown,
{failed_to_start_child,kinetic_config,
{{badmatch,{error,403}},
[{kinetic_config,new_args,1,
[{file,"src/kinetic_config.erl"},{line,127}]},
{kinetic_config,update_data,1,
[{file,"src/kinetic_config.erl"},{line,42}]},
{kinetic_config,init,1,
[{file,"src/kinetic_config.erl"},{line,55}]},
{gen_server,init_it,6,
[{file,"gen_server.erl"},{line,304}]},
{proc_lib,init_p_do_apply,3,
[{file,"proc_lib.erl"},{line,239}]}]}}},
{kinetic,start,[normal,[]]}}
type: temporary
When I removed the metadata_base_url config:
=ERROR REPORT==== 2-Dec-2014::12:41:30 ===
{failed_connect,[{to_address,{"169.254.169.254",80}},{inet,[inet],etimedout}]}
=INFO REPORT==== 2-Dec-2014::12:41:30 ===
application: kinetic
exited: {{shutdown,
{failed_to_start_child,kinetic_config,
{{badmatch,
{error,
{failed_connect,
[{to_address,{"169.254.169.254",80}},
{inet,[inet],etimedout}]}}},
[{kinetic_config,new_args,1,
[{file,"src/kinetic_config.erl"},{line,127}]},
{kinetic_config,update_data,1,
[{file,"src/kinetic_config.erl"},{line,42}]},
{kinetic_config,init,1,
[{file,"src/kinetic_config.erl"},{line,55}]},
{gen_server,init_it,6,
[{file,"gen_server.erl"},{line,304}]},
{proc_lib,init_p_do_apply,3,
[{file,"proc_lib.erl"},{line,239}]}]}}},
{kinetic,start,[normal,[]]}}
type: temporary
It seems that when running the kinetic application outside of EC2 you need to specify the region in the config:
[{kinetic,
  [{args, [
    {region, "us-east-1"}, %% just an example
    ...
  ]}]
}].
and use a fixed version of kinetic that won't try to discover the region.
A second solution is to set the metadata_base_url option to your own HTTP service, which, on a GET request for "/latest/meta-data/placement/availability-zone", responds with your availability zone (and therefore your region).
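For context, that is the path the real EC2 instance metadata service answers, so a stand-in metadata_base_url would have to mimic this response. A rough check from an actual instance (the zone shown is only an assumed example; the region is the zone minus the trailing letter):

# Query the instance metadata service for the availability zone.
curl http://169.254.169.254/latest/meta-data/placement/availability-zone
# example output: us-east-1a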
I've never used AWS, so some of these statements may be imprecise.
The error message says that the request to https://kinesis.us-east-1.amazonaws.com/latest/meta-data/placement/availability-zone returned error 403: Forbidden. Reading the HTTP documentation, this means the authentication was accepted but you do not have the right to access this resource.
ssh: Could not resolve hostname mitosis.example.com: Name or service not known
fatal: The remote end hung up unexpectedly
*** [deploy:update_code] rolling back
* executing "rm -rf /home/httpd/h.example.com/htdocs/www/releases/20131011050831; true"
servers: ["stagger6.colo.example.com"]
** [deploy:update_code] exception while rolling back: Capistrano::ConnectionError, connection failed for: stagger6.colo.example.com
(SocketError: getaddrinfo: Name or service not known)
Command git ls-remote git@mitosis.example.com:example-site.git master returned status code pid 4571 exit 128
The Capfile is configured incorrectly.
git@mitosis.example.com:example-site.git is placeholder example info.
mitosis.example.com isn't a valid host, so the name lookup fails and the deploy rolls back.
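As a quick sanity check before pointing Capistrano at a repository (the URL below uses a hypothetical placeholder for your real Git host), you can confirm from the deploy machine that the host resolves and the repository is reachable:

# Does the repository host resolve at all?
getent hosts mitosis.example.com || echo "host does not resolve"

# Can the same check Capistrano runs succeed against the real repository URL?
git ls-remote git@your-real-git-host:example-site.git master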