passing Python variable to triple quoted curl command - python-2.7

from subprocess import Popen, PIPE

SIZ=100
imap_cmd="""
curl -s -X GET --insecure -u xxx https://xxxxx/_search?pretty=true -d '{
"from":0,
"size":%SIZ,
"query":{ "match_all": {} },
"_source":["userEmail"]
}' | grep -i userEmail|awk {'print $3'} | cut -d ',' -f1
"""
def run_cmd(cmd):
    p = Popen(cmd, shell=True, stdout=PIPE)
    output = (p.communicate()[0])
    return output
I'm trying to pass the SIZ (Python) variable into the curl command, but the value is not being interpolated when I execute the command. What am I missing here?

It looks like you're trying to use the % formatter in this line,
"size":%SIZ,
try
imap_cmd="""
curl -s -X GET --insecure -u xxx https://xxxxx/_search?pretty=true -d '{
"from":0,
"size":%d,
"query":{ "match_all": {} },
"_source":["userEmail"]
}' | grep -i userEmail|awk {'print $3'} | cut -d ',' -f1
""" % SIZ
Here is more info on formatting strings.
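For completeness, here is how the formatted command could be wired back into run_cmd; this is just a sketch using the same placeholder credentials and host as the question. Note that str.format() would be awkward here, because the JSON body and the awk program contain literal braces that would all have to be doubled, so plain % formatting is the simpler choice:
from subprocess import Popen, PIPE

SIZ = 100

# The single %d below is filled in by the "% SIZ" after the closing quotes.
imap_cmd = """
curl -s -X GET --insecure -u xxx https://xxxxx/_search?pretty=true -d '{
"from":0,
"size":%d,
"query":{ "match_all": {} },
"_source":["userEmail"]
}' | grep -i userEmail | awk {'print $3'} | cut -d ',' -f1
""" % SIZ

def run_cmd(cmd):
    p = Popen(cmd, shell=True, stdout=PIPE)
    return p.communicate()[0]

print(run_cmd(imap_cmd))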

Related

How to download a secured file using curl from s3 bucket using SecretAccessKey and AccessKeyId

I want to download an APK that exists in my private S3 bucket using a curl command. I don't want to use awscli/boto3. I have the SecretAccessKey, SessionToken, Expiration, and AccessKeyId.
I tried the following code:
curl -k -v -L -o url="https://s3-eu-west-1.amazonaws.com" -H "x-amz-security-token: xxxxxxxxxxxxxxxx" -H "Content-Type: application/xml" -X GET https://xyz/test.apk
curl -k -v -L -o url="https://s3-eu-west-1.amazonaws.com" -H "Content-Type: application/xml" -X GET https://xyz/.test.apk?AWSAccessKeyId=xxxxxxxxxxxxx
Here is a script that downloads and uploads files to S3; you have to export the keys, or you can modify the script accordingly.
export AWS_ACCESS_KEY_ID=AKxxx
export AWS_SECRET_ACCESS_KEY=zzzz
Download a file
./s3download.sh get s3://mybucket/myfile.txt myfile.txt
That's it; all you need to pass is the S3 bucket URL along with the file name.
#!/bin/bash
set -eu
s3simple() {
local command="$1"
local url="$2"
local file="${3:--}"
# todo: nice error message if unsupported command?
if [ "${url:0:5}" != "s3://" ]; then
echo "Need an s3 url"
return 1
fi
local path="${url:4}"
if [ -z "${AWS_ACCESS_KEY_ID-}" ]; then
echo "Need AWS_ACCESS_KEY_ID to be set"
return 1
fi
if [ -z "${AWS_SECRET_ACCESS_KEY-}" ]; then
echo "Need AWS_SECRET_ACCESS_KEY to be set"
return 1
fi
local method md5 args
case "$command" in
get)
method="GET"
md5=""
args="-o $file"
;;
put)
method="PUT"
if [ ! -f "$file" ]; then
echo "file not found"
exit 1
fi
md5="$(openssl md5 -binary $file | openssl base64)"
args="-T $file -H Content-MD5:$md5"
;;
*)
echo "Unsupported command"
return 1
esac
local date="$(date -u '+%a, %e %b %Y %H:%M:%S +0000')"
local string_to_sign
printf -v string_to_sign "%s\n%s\n\n%s\n%s" "$method" "$md5" "$date" "$path"
local signature=$(echo -n "$string_to_sign" | openssl sha1 -binary -hmac "${AWS_SECRET_ACCESS_KEY}" | openssl base64)
local authorization="AWS ${AWS_ACCESS_KEY_ID}:${signature}"
curl $args -s -f -H Date:"${date}" -H Authorization:"${authorization}" https://s3.amazonaws.com"${path}"
}
s3simple "$@"
You can find more detail here.
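If you would rather drive the same Signature Version 2 signing from Python, here is a minimal sketch using only the standard library; the bucket and key names are hypothetical, and note that temporary credentials (a session token) would additionally require an x-amz-security-token header included in the string to sign, which neither this sketch nor the script above handles:
import base64, hmac, os, subprocess
from hashlib import sha1
from email.utils import formatdate

access_key = os.environ["AWS_ACCESS_KEY_ID"]
secret_key = os.environ["AWS_SECRET_ACCESS_KEY"]
path = "/mybucket/myfile.txt"     # /<bucket>/<key>, hypothetical

# Signature V2: HMAC-SHA1 over "VERB\nContent-MD5\nContent-Type\nDate\nResource"
# (no x-amz-* headers in this simple GET case).
date = formatdate(usegmt=True)    # e.g. "Tue, 26 Mar 2019 07:19:34 GMT"
string_to_sign = "GET\n\n\n%s\n%s" % (date, path)
signature = base64.b64encode(
    hmac.new(secret_key.encode("utf-8"),
             string_to_sign.encode("utf-8"), sha1).digest()).decode()

# Hand the signed request to curl, mirroring the script above.
subprocess.call([
    "curl", "-s", "-f", "-o", "myfile.txt",
    "-H", "Date: %s" % date,
    "-H", "Authorization: AWS %s:%s" % (access_key, signature),
    "https://s3.amazonaws.com%s" % path,
])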

POST custom metric with curl to cloudwatch

I want to log the results of an API call to CloudWatch.
What I want to do is post consumption metrics to CloudWatch.
My architecture is designed to sit on Elastic Beanstalk, so I prefer to install as little as possible on it.
I am trying to understand whether it is possible to post to CloudWatch with a simple curl POST.
I have this tutorial.
I don't really understand this example.
Can I do it with POST? (The example looks like a GET request.)
And what is the endpoint?
When I tried:
curl -X POST https://monitoring.&api-domain;/doc/2010-08-01/?Action=PutMetricData&Version=2010-08-01&Namespace=TestNamespace&MetricData.member.1.MetricName=buffers&MetricData.member.1.Unit=Bytes&&MetricData.member.1.Dimensions.member.1.Name=InstanceType&MetricData.member.1.Dimensions.member.1.Value=m1.small&AUTHPARAMS
I got this error:
'api-domain' is not recognized as an internal or external command,
operable program or batch file.
'Version' is not recognized as an internal or external command,
operable program or batch file.
'Namespace' is not recognized as an internal or external command,
operable program or batch file.
'MetricData.member.1.MetricName' is not recognized as an internal or external command,
operable program or batch file.
'MetricData.member.1.Unit' is not recognized as an internal or external command,
operable program or batch file.
'MetricData.member.1.Dimensions.member.1.Value' is not recognized as an internal or external command,
operable program or batch file.
'AUTHPARAMS' is not recognized as an internal or external command,
operable program or batch file.
Please don't tell me to use the AWS CLI.
I know I can use it; I want to try not to use it.
Here is the documentation explaining how to make POST requests: https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/making-api-requests.html#CloudWatch-API-requests-using-post-method
Here is a list of endpoints per region: https://docs.aws.amazon.com/general/latest/gr/rande.html#cw_region
The resulting curl request will look something like this:
curl -X POST \
https://monitoring.us-east-1.amazonaws.com \
-H 'Accept: application/json' \
-H 'Authorization: AWS4-HMAC-SHA256 Credential=YOUR_ACCESS_KEY_GOES_HERE/20190326/us-east-1/monitoring/aws4_request, SignedHeaders=accept;content-encoding;content-length;content-type;host;x-amz-date;x-amz-target, Signature=SIGV4_SIGNATURE_GOES_HERE' \
-H 'Content-Encoding: amz-1.0' \
-H 'Content-Length: 141' \
-H 'Content-Type: application/json' \
-H 'X-Amz-Date: 20190326T071934Z' \
-H 'X-Amz-Target: GraniteServiceVersion20100801.PutMetricData' \
-H 'host: monitoring.us-east-1.amazonaws.com' \
-d '{
"Namespace": "StackOverflow",
"MetricData": [
{
"MetricName": "TestMetric",
"Value": 123.0
}
]
}'
Note that the Authorization header in the example above has two placeholders, YOUR_ACCESS_KEY_GOES_HERE and SIGV4_SIGNATURE_GOES_HERE. These are the access key from the credentials you'll be using to sign the request and the signature you'll have to construct using this algorithm: https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html. This is one of the reasons why using the CLI or SDKs is the recommended way of making requests.
Here is a pure bash example that only needs curl and openssl. It is greatly inspired by: https://github.com/riboseinc/aws-authenticating-secgroup-scripts
#!/bin/bash
set -o errexit
set -o pipefail
set -o nounset
function sha256Hash() {
printf "$1" | openssl dgst -sha256 -binary -hex | sed 's/^.* //'
}
function log() {
local message="$1"
local details="$2"
printf "${message}\n" >&2
printf "${details}\n\n" | sed 's/^/ /' >&2
}
function to_hex() {
printf "$1" | od -A n -t x1 | tr -d [:space:]
}
function hmac_sha256() {
printf "$2" | \
openssl dgst -binary -hex -sha256 -mac HMAC -macopt hexkey:"$1" | \
sed 's/^.* //'
}
## http://docs.aws.amazon.com/general/latest/gr/sigv4-create-canonical-request.html
function sig4_create_canonical_request() {
local http_request_method="$1"
local canonical_uri="/${api_uri}"
local canonical_query=""
local header_host="host:${api_host}"
local canonical_headers="${header_host}\n${header_x_amz_date}"
local request_payload=$(sha256Hash "$metricjson")
local canonical_request="${http_request_method}\n${canonical_uri}\n${canonical_query}\n${canonical_headers}\n\n${signed_headers}\n${request_payload}"
printf "$(sha256Hash ${canonical_request})"
}
## http://docs.aws.amazon.com/general/latest/gr/sigv4-create-string-to-sign.html
function sigv4_create_string_to_sign() {
local hashed_canonical_request="$1"
local sts="${algorithm}\n${timestamp}\n${credential_scope}\n${hashed_canonical_request}"
printf "${sts}"
}
## http://docs.aws.amazon.com/general/latest/gr/sigv4-calculate-signature.html
function sigv4_calculate_signature() {
local secret=$(to_hex "AWS4${aws_secret_key}")
local k_date=$(hmac_sha256 "${secret}" "${today}")
local k_region=$(hmac_sha256 "${k_date}" "${aws_region}")
local k_service=$(hmac_sha256 "${k_region}" "${aws_service}")
local k_signing=$(hmac_sha256 "${k_service}" "aws4_request")
local string_to_sign="$1"
local signature=$(hmac_sha256 "${k_signing}" "${string_to_sign}" | sed 's/^.* //')
printf "${signature}"
}
## http://docs.aws.amazon.com/general/latest/gr/sigv4-add-signature-to-request.html#sigv4-add-signature-auth-header
function sigv4_add_signature_to_request() {
local credential="Credential=${aws_access_key}/${credential_scope}"
local s_headers="SignedHeaders=${signed_headers}"
local signature="Signature=$1"
local authorization_header="Authorization: ${algorithm} ${credential}, ${s_headers}, ${signature}"
printf "${authorization_header}"
}
# https://docs.aws.amazon.com/general/latest/gr/sigv4_signing.html
function sign_it() {
local method="$1"
local hashed_canonical_request=$(sig4_create_canonical_request "${method}")
local string_to_sign=$(sigv4_create_string_to_sign "${hashed_canonical_request}")
local signature=$(sigv4_calculate_signature "${string_to_sign}")
local authorization_header=$(sigv4_add_signature_to_request "${signature}")
printf "${authorization_header}"
}
## https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/making-api-requests.html#CloudWatch-API-requests-using-post-method
function put_metric_data() {
local http_method="$1"
local authorization_header=$(sign_it "${http_method}")
local content_length=$(echo -n "$metricjson"|wc -m)
printf "> ${http_method}-ing ${api_url}\n"
curl -i -X ${http_method} "${api_url}" \
-H 'Accept: application/json' \
-H 'Content-Encoding: amz-1.0' \
-H 'Content-Type: application/json' \
-H 'X-Amz-Target: GraniteServiceVersion20100801.PutMetricData' \
-H "${authorization_header}" \
-H "${header_x_amz_date}" \
-H "Content-Length: ${content_length}" \
-d "$metricjson"
}
function main() {
while [ -n "${1-}" ]; do
case "$1" in
-credentials) credentials="$2"; shift ;;
-url) url="$2"; shift ;;
-metricjson) metricjson="$2"; shift ;;
*) echo "Option $1 not recognized"; exit 1 ;;
esac
shift
done
if [ -z "${credentials}" ] || [ -z "${url}" ] || [ -z "${metricjson}" ] ; then
log "sample usage:" "<script> -metricjson <metricjson> -credentials <aws_access_key>:<aws_secret_key> -url <c_url>"
exit 1
fi
local method="POST"
local aws_access_key=$(cut -d':' -f1 <<<"${credentials}")
local aws_secret_key=$(cut -d':' -f2 <<<"${credentials}")
local api_url="${url}"
local timestamp=${timestamp-$(date -u +"%Y%m%dT%H%M%SZ")} # 20171226T112335Z
local today=${today-$(date -u +"%Y%m%d")} # 20171226
local api_host=$(printf ${api_url} | awk -F/ '{print $3}')
local api_uri=$(printf ${api_url} | grep / | cut -d/ -f4-)
local aws_region=$(cut -d'.' -f2 <<<"${api_host}")
local aws_service=$(cut -d'.' -f1 <<<"${api_host}")
local algorithm="AWS4-HMAC-SHA256"
local credential_scope="${today}/${aws_region}/${aws_service}/aws4_request"
local signed_headers="host;x-amz-date"
local header_x_amz_date="x-amz-date:${timestamp}"
put_metric_data "${method}"
}
main "$@"
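For comparison, here is a rough Python equivalent of the same PutMetricData signing flow, using only the standard library. The namespace and metric are the same sample values as above, the access and secret keys are placeholders you must supply, and, as in the bash script, only the host and x-amz-date headers are signed:
import datetime, hashlib, hmac, json
try:
    from urllib.request import Request, urlopen   # Python 3
except ImportError:
    from urllib2 import Request, urlopen           # Python 2

access_key = "YOUR_ACCESS_KEY_GOES_HERE"
secret_key = "YOUR_SECRET_KEY_GOES_HERE"
region, service = "us-east-1", "monitoring"
host = "monitoring.us-east-1.amazonaws.com"
endpoint = "https://" + host + "/"

body = json.dumps({"Namespace": "StackOverflow",
                   "MetricData": [{"MetricName": "TestMetric", "Value": 123.0}]})

def hmac_sha256(key, msg):
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

now = datetime.datetime.utcnow()
amz_date = now.strftime("%Y%m%dT%H%M%SZ")   # e.g. 20190326T071934Z
datestamp = now.strftime("%Y%m%d")          # e.g. 20190326

# Task 1: canonical request (method, URI, query, headers, signed headers, payload hash)
canonical_headers = "host:%s\nx-amz-date:%s\n" % (host, amz_date)
signed_headers = "host;x-amz-date"
payload_hash = hashlib.sha256(body.encode("utf-8")).hexdigest()
canonical_request = "\n".join(["POST", "/", "", canonical_headers, signed_headers, payload_hash])

# Task 2: string to sign
credential_scope = "%s/%s/%s/aws4_request" % (datestamp, region, service)
string_to_sign = "\n".join(["AWS4-HMAC-SHA256", amz_date, credential_scope,
                            hashlib.sha256(canonical_request.encode("utf-8")).hexdigest()])

# Task 3: derive the signing key and compute the signature
k_date = hmac_sha256(("AWS4" + secret_key).encode("utf-8"), datestamp)
k_region = hmac_sha256(k_date, region)
k_service = hmac_sha256(k_region, service)
k_signing = hmac_sha256(k_service, "aws4_request")
signature = hmac.new(k_signing, string_to_sign.encode("utf-8"), hashlib.sha256).hexdigest()

# Task 4: add the signature to the request headers
authorization = ("AWS4-HMAC-SHA256 Credential=%s/%s, SignedHeaders=%s, Signature=%s"
                 % (access_key, credential_scope, signed_headers, signature))

req = Request(endpoint, data=body.encode("utf-8"), headers={
    "Content-Type": "application/json",
    "Content-Encoding": "amz-1.0",
    "X-Amz-Date": amz_date,
    "X-Amz-Target": "GraniteServiceVersion20100801.PutMetricData",
    "Authorization": authorization,
})
print(urlopen(req).read())    # raises an HTTPError if CloudWatch rejects the signature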

How to upload files to S3 using Signature v4

I've been trying this for a couple of days now but I'm getting stuck at the signature calculation. For background, I have an EC2 instance role assigned to the instance and also need to use KMS SSE (server-side encryption) to store data in S3.
So, this is the script I'm using now:
#!/usr/bin/env bash
set -E
export TERM=xterm
#
s3_region='eu-west-1'
s3_bucket='my-s3-file-bucket'
#
bkup_optn='hourly'
data_type='application/octet-stream'
bkup_path="/tmp/backups/${bkup_optn}"
bkup_file="$( ls -t ${bkup_path}|head -n1 )"
timestamp="$( LC_ALL=C date -u "+%Y-%m-%d %H:%M:%S" )"
#
appHost="$( hostname -f )"
thisApp="$( facter -p my_role )"
thisEnv="$( facter -p my_environment )"
upldUri="${thisEnv}/${appHost}/${bkup_optn}/${bkup_file}"
# This doesn't work on OS X
iso_timestamp=$(date -ud "${timestamp}" "+%Y%m%dT%H%M%SZ")
date_scope=$(date -ud "${timestamp}" "+%Y%m%d")
date_header=$(date -ud "${timestamp}" "+%a, %d %h %Y %T %Z")
## AWS instance role
awsMetaUri='http://169.254.169.254/latest/meta-data/iam/security-credentials/'
awsInstRole=$( curl -s ${awsMetaUri} )
awsAccessKey=$( curl -s ${awsMetaUri}${awsInstRole}|awk -F'"' '/AccessKeyId/ {print $4}' )
awsSecretKey=$( curl -s ${awsMetaUri}${awsInstRole}|grep SecretAccessKey|cut -d':' -f2|sed 's/[^0-9A-Za-z/+=]*//g' )
awsSecuToken=$( curl -s ${awsMetaUri}${awsInstRole}|sed -n '/Token/{p;}'|cut -f4 -d'"' )
signedHeader='date;host;x-amz-content-sha256;x-amz-date;x-amz-security-token;x-amz-server-side-encryption;x-amz-server-side-encryption-aws-kms-key-id'
echo -e "awsInstRole => ${awsInstRole}\nawsAccessKey => ${awsAccessKey}\nawsSecretKey => ${awsSecretKey}"
payload_hash()
{
local output=$(shasum -ba 256 "${bkup_path}/${bkup_file}")
echo "${output%% *}"
}
canonical_request()
{
echo "PUT"
echo "/${upldUri}"
echo ""
echo "date:${date_header}"
echo "host:${s3_bucket}.s3.amazonaws.com"
echo "x-amz-security-token:${awsSecuToken}"
echo "x-amz-content-sha256:$(payload_hash)"
echo "x-amz-server-side-encryption:aws:kms"
echo "x-amz-server-side-encryption-aws-kms-key-id:arn:aws:kms:eu-west-1:xxxxxxxx111:key/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx1432"
echo "x-amz-date:${iso_timestamp}"
echo ""
echo "${signedHeader}"
printf "$(payload_hash)"
}
canonical_request_hash()
{
local output=$(canonical_request | shasum -a 256)
echo "${output%% *}"
}
string_to_sign()
{
echo "AWS4-HMAC-SHA256"
echo "${iso_timestamp}"
echo "${date_scope}/${s3_region}/s3/aws4_request"
echo "x-amz-security-token:${awsSecuToken}"
echo "x-amz-server-side-encryption:aws:kms"
echo "x-amz-server-side-encryption-aws-kms-key-id:arn:aws:kms:eu-west-1:xxxxxxxx111:key/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx1432"
printf "$(canonical_request_hash)"
}
signature_key()
{
local secret=$(printf "AWS4${awsSecretKey}" | hex_key)
local date_key=$(printf ${date_scope} | hmac_sha256 "${secret}" | hex_key)
local region_key=$(printf ${s3_region} | hmac_sha256 "${date_key}" | hex_key)
local service_key=$(printf "s3" | hmac_sha256 "${region_key}" | hex_key)
printf "aws4_request" | hmac_sha256 "${service_key}" | hex_key
}
hex_key() {
xxd -p -c 256
}
hmac_sha256() {
local hexkey=$1
openssl dgst -binary -sha256 -mac HMAC -macopt hexkey:${hexkey}
}
signature() {
string_to_sign | hmac_sha256 $(signature_key) | hex_key | sed "s/^.* //"
}
curl \
-T "${bkup_path}/${bkup_file}" \
-H "Authorization: AWS4-HMAC-SHA256 Credential=${awsAccessKey}/${date_scope}/${s3_region}/s3/aws4_request,SignedHeaders=${signedHeader},Signature=$(signature)" \
-H "Date: ${date_header}" \
-H "x-amz-date: ${iso_timestamp}" \
-H "x-amz-security-token:${awsSecuToken}" \
-H "x-amz-content-sha256: $(payload_hash)" \
-H "x-amz-server-side-encryption:aws:kms" \
-H "x-amz-server-side-encryption-aws-kms-key-id:arn:aws:kms:eu-west-1:xxxxxxxx111:key/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx1432" \
"https://${s3_bucket}.s3.amazonaws.com/${upldUri}"
It was written following this doc:
http://docs.aws.amazon.com/general/latest/gr/sigv4-create-canonical-request.html and parts from a sample script on GitHub, which I forgot to bookmark. After the initial bumpy ride, I now keep getting this error:
<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><AWSAccessKeyId>XXXXXXXXXXXXXXX</AWSAccessKeyId><StringToSign>AWS4-HMAC-SHA256
I went through several AWS docs but I cannot figure out why. Can anyone help me out here, please?
-San

Python : Use Popen and Communicate and get the stdout in a loop

I want to execute a bash command and get its output, and I decided to use Popen and communicate() to do this. This job has to be done in a loop. The problem is that in the first round of the loop everything is OK, but in the following iterations I get an error when calling communicate(). I know that communicate() can only be used once, but I create a new subprocess in each iteration, so calling communicate() again should be possible.
Here's the code:
import subprocess

with open('ComSites2.txt') as file:
    for site in file:
        sf = site.split()
        temp = "http://wwww." + sf[0]
        command = "wget --spider -S '%s' 2>&1 | grep 'HTTP/' | awk '{print $2}'" %temp
        print command
        output = subprocess.Popen(command,stdout=subprocess.PIPE, shell=True)
        StatusCode = int(output.communicate()[0])
        if StatusCode == 200:
            print "page found successfully"
        elif StatusCode == 404:
            print "Page not found!!"
        else:
            print "no result got!!"
When I run this I get this output:
wget --spider -S 'http://wwww.ACAServices.com' 2>&1 | grep 'HTTP/' | awk '{print $2}'
page found successfully
wget --spider -S 'http://wwww.ACI.com' 2>&1 | grep 'HTTP/' | awk '{print $2}'
Traceback (most recent call last):
File "/root/AghasiTestCases/URLConnector.py", line 17, in <module>
StatusCode = int(output.communicate()[0])
ValueError: invalid literal for int() with base 10: ''
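The error itself says what went wrong: int() received an empty string, because for the second site the wget pipeline printed nothing (for instance when the host does not resolve; the quadruple "wwww." in the URL also looks like a typo for "www."). A small sketch of one way to guard the loop, keeping the same Popen/communicate pattern (wget must be installed; ComSites2.txt is the same input file):
import subprocess

with open('ComSites2.txt') as f:
    for site in f:
        sf = site.split()
        if not sf:
            continue                              # skip blank lines
        url = "http://www." + sf[0]
        command = ("wget --spider -S '%s' 2>&1 | grep 'HTTP/' | awk '{print $2}'" % url)
        p = subprocess.Popen(command, stdout=subprocess.PIPE, shell=True)
        out = p.communicate()[0].strip()          # empty if wget produced no HTTP headers
        if not out:
            print("no HTTP status returned for %s" % url)
            continue
        status = int(out.split()[-1])             # last code wins if redirects printed several
        if status == 200:
            print("page found successfully")
        elif status == 404:
            print("Page not found!!")
        else:
            print("got status %d" % status)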

Accumulo Overview console not reachable outside of VirtualBox VM

I am running Accumulo 1.5 in an Ubuntu 12.04 VirtualBox VM. I have set the instance.zookeeper.host property in accumulo-site.xml to the VM's IP address, and I can connect to Accumulo and run queries from a remote client machine. From the client machine, I can also use a browser to see the Hadoop NameNode, browse the filesystem, etc. But I cannot connect to the Accumulo Overview page (port 50095) from anywhere other than directly from the Accumulo VM. There is no firewall between the VM and the client, and besides the Accumulo Overview page not being reachable, everything else seems to work fine.
Is there a config setting that I need to change to allow outside access to the Accumulo Overview console?
thanks
I was able to get the Accumulo monitor to bind to all network interfaces by manually applying this patch:
https://git-wip-us.apache.org/repos/asf?p=accumulo.git;a=commit;h=7655de68
In conf/accumulo-env.sh add:
# Should the monitor bind to all network interfaces -- default: false
export ACCUMULO_MONITOR_BIND_ALL="true"
In bin/config.sh add:
# ACCUMULO-1985 provide a way to use the scripts and still bind to all network interfaces
export ACCUMULO_MONITOR_BIND_ALL=${ACCUMULO_MONITOR_BIND_ALL:-"false"}
And modify bin/start-server.sh to match:
SOURCE="${BASH_SOURCE[0]}"
while [ -h "$SOURCE" ]; do # resolve $SOURCE until the file is no longer a symlink
bin="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
SOURCE="$(readlink "$SOURCE")"
[[ $SOURCE != /* ]] && SOURCE="$bin/$SOURCE" # if $SOURCE was a relative symlink, we need to resolve it relative to the path where the symlink file was located
done
bin="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
# Stop: Resolve Script Directory
. "$bin"/config.sh
HOST="$1"
host "$1" >/dev/null 2>/dev/null
if [ $? -ne 0 ]; then
LOGHOST="$1"
else
LOGHOST=$(host "$1" | head -1 | cut -d' ' -f1)
fi
ADDRESS="$1"
SERVICE="$2"
LONGNAME="$3"
if [ -z "$LONGNAME" ]; then
LONGNAME="$2"
fi
SLAVES=$( wc -l < ${ACCUMULO_HOME}/conf/slaves )
IFCONFIG=/sbin/ifconfig
if [ ! -x $IFCONFIG ]; then
IFCONFIG='/bin/netstat -ie'
fi
# ACCUMULO-1985 Allow monitor to bind on all interfaces
if [ ${SERVICE} == "monitor" -a ${ACCUMULO_MONITOR_BIND_ALL} == "true" ]; then
ADDRESS="0.0.0.0"
fi
ip=$($IFCONFIG 2>/dev/null| grep inet[^6] | awk '{print $2}' | sed 's/addr://' | grep -v 0.0.0.0 | grep -v 127.0.0.1 | head -n 1)
if [ $? != 0 ]
then
ip=$(python -c 'import socket as s; print s.gethostbyname(s.getfqdn())')
fi
if [ "$HOST" = "localhost" -o "$HOST" = "`hostname`" -o "$HOST" = "$ip" ]; then
PID=$(ps -ef | egrep ${ACCUMULO_HOME}/.*/accumulo.*.jar | grep "Main $SERVICE" | grep -v grep | awk {'print $2'} | head -1)
else
PID=$($SSH $HOST ps -ef | egrep ${ACCUMULO_HOME}/.*/accumulo.*.jar | grep "Main $SERVICE" | grep -v grep | awk {'print $2'} | head -1)
fi
if [ -z $PID ]; then
echo "Starting $LONGNAME on $HOST"
if [ "$HOST" = "localhost" -o "$HOST" = "`hostname`" -o "$HOST" = "$ip" ]; then
#${bin}/accumulo ${SERVICE} --address $1 >${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.out 2>${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.err &
${bin}/accumulo ${SERVICE} --address ${ADDRESS} >${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.out 2>${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.err &
MAX_FILES_OPEN=$(ulimit -n)
else
#$SSH $HOST "bash -c 'exec nohup ${bin}/accumulo ${SERVICE} --address $1 >${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.out 2>${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.err' &"
$SSH $HOST "bash -c 'exec nohup ${bin}/accumulo ${SERVICE} --address ${ADDRESS} >${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.out 2>${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.err' &"
MAX_FILES_OPEN=$($SSH $HOST "/usr/bin/env bash -c 'ulimit -n'")
fi
if [ -n "$MAX_FILES_OPEN" ] && [ -n "$SLAVES" ] ; then
if [ "$SLAVES" -gt 10 ] && [ "$MAX_FILES_OPEN" -lt 65536 ]; then
echo "WARN : Max files open on $HOST is $MAX_FILES_OPEN, recommend 65536"
fi
fi
else
echo "$HOST : $LONGNAME already running (${PID})"
fi
Check that the monitor is bound to the correct interface, and not the "localhost" loopback interface. You may have to edit the monitors file in Accumulo's configuration directory with the IP/hostname of the correct interface.
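If you want a quick way to confirm from the client machine whether the monitor is listening on a reachable interface, a trivial check along these lines works (the address below is a hypothetical VM IP; 50095 is the default monitor port):
import socket

host, port = "192.168.56.101", 50095   # replace with your Accumulo VM's address
try:
    socket.create_connection((host, port), timeout=3).close()
    print("monitor is reachable at %s:%d" % (host, port))
except (socket.error, socket.timeout) as exc:
    print("monitor not reachable: %s" % exc)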