I installed an official SSL certificate on APIM. Now the Carbon web-app fails to load. Could it be a problem with catalina-server.xml? All the XML files are configured with the new keystore and password.
The only ERROR in wso2carbon.log on start-up is:
TID: [-1] [] [2016-05-10 08:52:45,170] ERROR {org.wso2.carbon.tomcat.ext.internal.CarbonTomcatServiceComponent} - Error while adding the carbon web-app {org.wso2.carbon.tomcat.ext.internal.CarbonTomcatServiceComponent}
org.wso2.carbon.tomcat.CarbonTomcatException: Webapp failed to deploy
at org.wso2.carbon.tomcat.internal.CarbonTomcat.addWebApp(CarbonTomcat.java:302)
at org.wso2.carbon.tomcat.internal.CarbonTomcat.addWebApp(CarbonTomcat.java:185)
at org.wso2.carbon.tomcat.ext.internal.CarbonTomcatServiceComponent.activate(CarbonTomcatServiceComponent.java:59)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.eclipse.equinox.internal.ds.model.ServiceComponent.activate(ServiceComponent.java:260)
at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.activate(ServiceComponentProp.java:146)
at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.build(ServiceComponentProp.java:345)
at org.eclipse.equinox.internal.ds.InstanceProcess.buildComponent(InstanceProcess.java:620)
at org.eclipse.equinox.internal.ds.InstanceProcess.buildComponents(InstanceProcess.java:197)
at org.eclipse.equinox.internal.ds.Resolver.buildNewlySatisfied(Resolver.java:473)
at org.eclipse.equinox.internal.ds.Resolver.enableComponents(Resolver.java:217)
at org.eclipse.equinox.internal.ds.SCRManager.performWork(SCRManager.java:816)
at org.eclipse.equinox.internal.ds.SCRManager$QueuedJob.dispatch(SCRManager.java:783)
at org.eclipse.equinox.internal.ds.WorkThread.run(WorkThread.java:89)
at org.eclipse.equinox.internal.util.impl.tpt.threadpool.Executor.run(Executor.java:70)
Caused by: java.lang.NullPointerException
at org.wso2.carbon.tomcat.internal.CarbonTomcat.addWebApp(CarbonTomcat.java:233)
... 17 more
Java Version : 1.8.0_71
Operating System : Linux 2.6.32-573.18.1.el6.x86_64, amd64
User : xxxx, US-US, Europe/Paris
Thank you
If anyone comes here:
I just spent two days making this work with a public Let's Encrypt certificate.
This procedure was done with the wso2am-2.2.0.zip version; it may not work on other versions.
I installed it in /opt/wso2.
Here is what I did:
These are my variables: the existing JKS path and its passwords.
jks_location="/opt/tomcat/conf/tomcat.jks"
jks_password="changeit"
key_password="changeit"
jks_alias=tomcat
server_name="your public server name"
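Before going further, it can be worth checking that those values actually open the keystore; a quick sanity check with keytool (assuming it is on the PATH):
keytool -list -keystore "$jks_location" -storepass "$jks_password" -alias "$jks_alias"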
I search every file containing the wso2carbon.jks keyword and replace some certificate values, but not the ones concerning the client trust store:
grep -R wso2carbon.jks /opt/wso2/ | cut -d ':' -f1 \
  | grep "\.xml$" | grep -v -e ".b$" -e logs -e migration -e "\.db" | sort -u \
  | while read file ; do
    # p=1 while inside a KeyStore (or dataBridgeConfiguration) block,
    # q=1 once a KeyAlias has been seen in the current block
    awk '{
      # block closing without a KeyAlias: add one before the closing tag
      if(/<\/[Kk]eyStore/ && q==0 && p==1){print "<KeyAlias>'$jks_alias'</KeyAlias>"}
      if(/<\/[Kk]eyStore>/ || /<\/dataBridgeConfiguration>/ ){p=0;q=0}
      if($1~/<[Kk]eyStore/ || /<\/dataBridgeConfiguration>/ ){p=1}
      if(/<KeyAlias/ && p==1){q=1}
      # inside a keystore block, rewrite passwords, aliases and locations;
      # outside, fix the Tomcat connector attributes and the wss parameters
      if(p==1 && /<Password>/){
        print " <Password>'$jks_password'</Password>"
      } else if (p==1 && /<password/){
        print " <password>'$jks_password'</password>"
      } else if (p==1 && /<keyStorePassword/){
        print " <keyStorePassword>'$jks_password'</keyStorePassword>"
      } else if (p==1 && /<KeyPassword/){
        print " <KeyPassword>'$key_password'</KeyPassword>"
      } else if (p==1 && /<KeyAlias/){
        print " <KeyAlias>'$jks_alias'</KeyAlias>"
      } else if (p==1 && /<Location/){
        print " <Location>'$jks_location'</Location>"
      } else if (p==1 && /<location/){
        print " <location>'$jks_location'</location>"
      } else if (p==1 && /<keyStoreLocation/){
        print " <keyStoreLocation>'$jks_location'</keyStoreLocation>"
      } else if (/keystoreFile=.*wso2carbon.jks/){
        print " keystoreFile=\"'$jks_location'\""
      } else if (/keystorePass="wso2carbon"/){
        print " keystorePass=\"'$jks_password'\""
      } else if (/<parameter name="wss.ssl.key.store.file">/){
        print " <parameter name=\"wss.ssl.key.store.file\">'$jks_location'</parameter>"
      } else if (/<parameter name="wss.ssl.key.store.pass"/){
        print " <parameter name=\"wss.ssl.key.store.pass\">'$jks_password'</parameter>"
      } else {print}
    }' "$file" > "$file".t
    echo "$file"
    cp -a "$file".t "$file"
done
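Once it has run, a quick way to list which XML files now point at the new keystore (just a sanity check, mirroring the grep above):
grep -R "$jks_location" /opt/wso2/ | cut -d ':' -f1 | grep "\.xml$" | sort -u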
I change carbon.local.ip in the XML files:
grep -R 'carbon.local.ip' /opt/wso2/ | cut -d ':' -f1 | grep "\.xml$" | sort -u | while read file ; do sed -i -e 's/\${carbon.local.ip}/'$server_name'/g' $file ; done
I add a server name so you don't get redirected to the IP address on the first page:
sed -i '/<ServerURL>local/a<!-- Manual add-->\n <HostName>'$server_name'<\/HostName>\n <MgtHostName>'$server_name'<\/MgtHostName>' /opt/wso2/repository/conf/carbon.xml
I import the root certificate of my certificate chain (I use a public one from Let's Encrypt, but I needed to add it anyway; maybe you won't have to):
keytool -import -alias lets_encrypt_root -file your-root-file.pem -keystore /opt/wso2/repository/resources/security/client-truststore.jks -storepass wso2carbon
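You can verify the import with something like this (wso2carbon is the default trust store password):
keytool -list -keystore /opt/wso2/repository/resources/security/client-truststore.jks -storepass wso2carbon -alias lets_encrypt_root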
If your certificate does not contain "localhost" as an alias, you need to change the HTTPS backend calls from localhost to your real hostname:
grep -i -R "https://localhost:" /opt/wso2/ | grep -v '\.log' | cut -d ':' -f1 | sort -u | grep -v '\.t$' | xargs -I file sed -i -e "s|https://localhost:|https://${server_name}:|g" file
If the awk seems complicated, it's because I'm not an expert. Some tags differ only in letter case (I don't know if that matters, so I left them that way), and, for example, the password tag must only be changed when it is inside the keystore tag.
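Finally, to confirm the served certificate really changed, a quick probe (9443 is the default management console port; adjust if yours differs):
echo | openssl s_client -connect "$server_name":9443 -servername "$server_name" 2>/dev/null | openssl x509 -noout -subject -issuer -enddate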
Hope it helps
I am new to the Google API. I have a script to import all mails into Google Groups, but I cannot get the API to work.
I have my client_id and client_secret,
and then I used this link:
https://accounts.google.com/o/oauth2/auth?client_id=[CLIENTID]&redirect_uri=urn:ietf:wg:oauth:2.0:oob&scope=https://www.googleapis.com/auth/apps.groups.migration&response_type=code
where I replaced [CLIENTID] with my client ID. I can authenticate and get back the auth code, which I then used to run this command:
curl --request POST --data "code=[AUTHCODE]&client_id=[CLIENTID]&client_secret=[CLIENTSECRET]&redirect_uri=urn:ietf:wg:oauth:2.0:oob&grant_type=authorization_code" https://accounts.google.com/o/oauth2/token
This works and shows me the refresh token; however, the script says authentication failed. So I tried to run the command again and it says:
"error": "invalid_grant"
"error_description": "Bad Request"
If I reopen the link above, get a new auth code and run the command again, it works, but only the first time. I am on an NPO Google account and I have activated the trial period.
Can anyone help me out here?
Complete script:
client_id="..."
client_secret="...."
refresh_token="......"
function usage() {
    (
        echo "usage: $0 <group-address> <mbox-dir>"
    ) >&2
    exit 5
}
GROUP="$1"
shift
MBOX_DIR="$1"
shift
[ -z "$GROUP" -o -z "$MBOX_DIR" ] && usage
token=$(curl -s --request POST --data "client_id=$client_id&client_secret=$client_secret&refresh_token=$refresh_token&grant_type=refresh_token" https://accounts.google.com/o/oauth2/token | sed -n "s/^\s*\"access_token\":\s*\"\([^\"]*\)\",$/\1/p")
# create done folder if it doesn't already exist
DONE_FOLDER=$MBOX_DIR/../done
mkdir -p $DONE_FOLDER
i=0
for file in $MBOX_DIR/*; do
    echo "importing $file"
    response=$(curl -s -H"Authorization: Bearer $token" -H'Content-Type: message/rfc822' -X POST "https://www.googleapis.com/upload/groups/v1/groups/$GROUP/archive?uploadType=media" --data-binary @"${file}")
    result=$(echo $response | grep -c "SUCCESS")
    # check to see if it worked
    if [[ $result -eq 0 ]]; then
        echo "upload failed on file $file. please run command again to resume."
        exit 1
    fi
    # it worked! move message to the done folder
    mv $file $DONE_FOLDER/
    ((i=i+1))
    # every 10 messages, check whether the access token is about to expire
    if [[ $i -gt 9 ]]; then
        expires_in=$(curl -s "https://www.googleapis.com/oauth2/v1/tokeninfo?access_token=$token" | sed -n "s/^\s*\"expires_in\":\s*\([0-9]*\),$/\1/p")
        if [[ $expires_in -lt 300 ]]; then
            # refresh token
            echo "Refreshing token..."
            token=$(curl -s --request POST --data "client_id=$client_id&client_secret=$client_secret&refresh_token=$refresh_token&grant_type=refresh_token" https://accounts.google.com/o/oauth2/token | sed -n "s/^\s*\"access_token\":\s*\"\([^\"]*\)\",$/\1/p")
        fi
        i=0
    fi
done
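Usage looks like this (import-mbox.sh is just a hypothetical name for the script above):
./import-mbox.sh my-group@example.com /path/to/mbox-dir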
I want to download an APK that exists in my private S3 bucket using the curl command. I don't want to use the AWS CLI or boto3. I have a SecretAccessKey, SessionToken, Expiration, and AccessKeyId.
I tried the following commands:
curl -k -v -L -o url="https://s3-eu-west-1.amazonaws.com" -H "x-amz-security-token: xxxxxxxxxxxxxxxx" -H "Content-Type: application/xml" -X GET https://xyz/test.apk
curl -k -v -L -o url="https://s3-eu-west-1.amazonaws.com" -H "Content-Type: application/xml" -X GET https://xyz/.test.apk?AWSAccessKeyId=xxxxxxxxxxxxx
Here is a script that downloads files from and uploads files to S3. You have to export the keys, or you can modify the script accordingly:
export AWS_ACCESS_KEY_ID=AKxxx
export AWS_SECRET_ACCESS_KEY=zzzz
Download a file:
./s3download.sh get s3://mybucket/myfile.txt myfile.txt
That's it; all you need to pass is the S3 bucket path along with the file name.
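Upload works the same way with put; for example:
./s3download.sh put s3://mybucket/myfile.txt myfile.txt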
#!/bin/bash
set -eu
s3simple() {
  local command="$1"
  local url="$2"
  local file="${3:--}"
  # todo: nice error message if unsupported command?
  if [ "${url:0:5}" != "s3://" ]; then
    echo "Need an s3 url"
    return 1
  fi
  local path="${url:4}"
  if [ -z "${AWS_ACCESS_KEY_ID-}" ]; then
    echo "Need AWS_ACCESS_KEY_ID to be set"
    return 1
  fi
  if [ -z "${AWS_SECRET_ACCESS_KEY-}" ]; then
    echo "Need AWS_SECRET_ACCESS_KEY to be set"
    return 1
  fi
  local method md5 args
  case "$command" in
    get)
      method="GET"
      md5=""
      args="-o $file"
      ;;
    put)
      method="PUT"
      if [ ! -f "$file" ]; then
        echo "file not found"
        exit 1
      fi
      md5="$(openssl md5 -binary "$file" | openssl base64)"
      args="-T $file -H Content-MD5:$md5"
      ;;
    *)
      echo "Unsupported command"
      return 1
  esac
  # SigV2 string to sign: VERB \n Content-MD5 \n Content-Type (empty) \n Date \n resource
  local date="$(date -u '+%a, %e %b %Y %H:%M:%S +0000')"
  local string_to_sign
  printf -v string_to_sign "%s\n%s\n\n%s\n%s" "$method" "$md5" "$date" "$path"
  local signature=$(echo -n "$string_to_sign" | openssl sha1 -binary -hmac "${AWS_SECRET_ACCESS_KEY}" | openssl base64)
  local authorization="AWS ${AWS_ACCESS_KEY_ID}:${signature}"
  curl $args -s -f -H Date:"${date}" -H Authorization:"${authorization}" https://s3.amazonaws.com"${path}"
}
s3simple "$@"
You can find more detail here
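One caveat, since the question mentions a SessionToken: the signature above is the older SigV2 scheme, and temporary (STS) credentials also need the x-amz-security-token header to be sent and folded into the string to sign. A rough sketch of the extra pieces (my own assumption, not part of the original script; AWS_SESSION_TOKEN would have to be exported too):
# x-amz-* headers belong in the SigV2 string to sign, between Date and the resource
token_header="x-amz-security-token:${AWS_SESSION_TOKEN}"
printf -v string_to_sign "%s\n%s\n\n%s\n%s\n%s" "$method" "$md5" "$date" "$token_header" "$path"
curl $args -s -f -H Date:"${date}" -H "${token_header}" -H Authorization:"${authorization}" https://s3.amazonaws.com"${path}"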
Using the following code to check for expiry of SSL certs:
cat localdomains | xargs -L 1 bash -c 'openssl s_client -connect $0:443 -servername $0 2> /dev/null | openssl x509 -noout -enddate | cut -d = -f 2 | xargs -I {} echo {} $0'
When I run into domains that return no cert, I am trying to wrap my head around how to change the output to something like N/A, instead of trying to evaluate the "cut -d = -f 2" and then the xargs -I.
If you capture the output of your command in a variable, you can then validate it. Assuming this doesn't have to be a one-liner:
#!/bin/bash
while read domain; do
    expiry=$(openssl s_client -connect ${domain}:443 -servername ${domain} 2>/dev/null </dev/null | \
        openssl x509 -noout -enddate 2>&1 | cut -d = -f 2)
    # validate output with date
    if date -d "${expiry}" > /dev/null 2>/dev/null ; then
        echo ${expiry} ${domain}
    else
        echo "N/A" ${domain}
    fi
done
Note a couple of things:
- Redirect /dev/null into stdin of openssl s_client to get the prompt back (see here for technical details).
- Redirect stderr of the second openssl x509 to stdout in order to validate against it.
- You could use grep or sed to validate the output. I found date to be convenient (and if you wanted to use it to reformat the date, it would be extra-convenient).
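For example, normalizing the expiry to an ISO date is a one-line change inside the loop (a sketch, assuming GNU date):
echo "$(date -d "${expiry}" '+%Y-%m-%d')" ${domain}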
I tested this solution by putting it into a file check_cert_expiry.sh:
$ cat localdomains
stackoverflow.com
example.example
google.com
$ cat localdomains | ./check_cert_expiry.sh
Aug 14 12:00:00 2019 GMT stackoverflow.com
N/A example.example
Feb 21 09:37:00 2018 GMT google.com
Cheers!
I've been trying this for a couple of days now but keep getting stuck at the signature calculation. For background, I have an EC2 instance role assigned to the instance and also need to use KMS SSE (Server-Side Encryption) to store data in S3.
So, this is the script I'm using now:
#!/usr/bin/env bash
set -E
export TERM=xterm
#
s3_region='eu-west-1'
s3_bucket='my-s3-file-bucket'
#
bkup_optn='hourly'
data_type='application/octet-stream'
bkup_path="/tmp/backups/${bkup_optn}"
bkup_file="$( ls -t ${bkup_path}|head -n1 )"
timestamp="$( LC_ALL=C date -u "+%Y-%m-%d %H:%M:%S" )"
#
appHost="$( hostname -f )"
thisApp="$( facter -p my_role )"
thisEnv="$( facter -p my_environment )"
upldUri="${thisEnv}/${appHost}/${bkup_optn}/${bkup_file}"
# This doesn't work on OS X
iso_timestamp=$(date -ud "${timestamp}" "+%Y%m%dT%H%M%SZ")
date_scope=$(date -ud "${timestamp}" "+%Y%m%d")
date_header=$(date -ud "${timestamp}" "+%a, %d %h %Y %T %Z")
## AWS instance role
awsMetaUri='http://169.254.169.254/latest/meta-data/iam/security-credentials/'
awsInstRole=$( curl -s ${awsMetaUri} )
awsAccessKey=$( curl -s ${awsMetaUri}${awsInstRole}|awk -F'"' '/AccessKeyId/ {print $4}' )
awsSecretKey=$( curl -s ${awsMetaUri}${awsInstRole}|grep SecretAccessKey|cut -d':' -f2|sed 's/[^0-9A-Za-z/+=]*//g' )
awsSecuToken=$( curl -s ${awsMetaUri}${awsInstRole}|sed -n '/Token/{p;}'|cut -f4 -d'"' )
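# For reference, the metadata URI above returns JSON roughly like the following
# (abridged, values redacted), which the three one-liners above pick apart:
#   { "Code": "Success", "Type": "AWS-HMAC", "AccessKeyId": "ASIA...",
#     "SecretAccessKey": "...", "Token": "...", "Expiration": "2018-01-01T00:00:00Z" }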
signedHeader='date;host;x-amz-content-sha256;x-amz-date;x-amz-security-token;x-amz-server-side-encryption;x-amz-server-side-encryption-aws-kms-key-id'
echo -e "awsInstRole => ${awsInstRole}\nawsAccessKey => ${awsAccessKey}\nawsSecretKey => ${awsSecretKey}"
payload_hash()
{
  local output=$(shasum -ba 256 "${bkup_path}/${bkup_file}")
  echo "${output%% *}"
}

canonical_request()
{
  echo "PUT"
  echo "/${upldUri}"
  echo ""
  echo "date:${date_header}"
  echo "host:${s3_bucket}.s3.amazonaws.com"
  echo "x-amz-security-token:${awsSecuToken}"
  echo "x-amz-content-sha256:$(payload_hash)"
  echo "x-amz-server-side-encryption:aws:kms"
  echo "x-amz-server-side-encryption-aws-kms-key-id:arn:aws:kms:eu-west-1:xxxxxxxx111:key/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx1432"
  echo "x-amz-date:${iso_timestamp}"
  echo ""
  echo "${signedHeader}"
  printf "$(payload_hash)"
}

canonical_request_hash()
{
  local output=$(canonical_request | shasum -a 256)
  echo "${output%% *}"
}

string_to_sign()
{
  echo "AWS4-HMAC-SHA256"
  echo "${iso_timestamp}"
  echo "${date_scope}/${s3_region}/s3/aws4_request"
  echo "x-amz-security-token:${awsSecuToken}"
  echo "x-amz-server-side-encryption:aws:kms"
  echo "x-amz-server-side-encryption-aws-kms-key-id:arn:aws:kms:eu-west-1:xxxxxxxx111:key/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx1432"
  printf "$(canonical_request_hash)"
}

signature_key()
{
  local secret=$(printf "AWS4${awsSecretKey}" | hex_key)
  local date_key=$(printf ${date_scope} | hmac_sha256 "${secret}" | hex_key)
  local region_key=$(printf ${s3_region} | hmac_sha256 "${date_key}" | hex_key)
  local service_key=$(printf "s3" | hmac_sha256 "${region_key}" | hex_key)
  printf "aws4_request" | hmac_sha256 "${service_key}" | hex_key
}

hex_key() {
  xxd -p -c 256
}

hmac_sha256() {
  local hexkey=$1
  openssl dgst -binary -sha256 -mac HMAC -macopt hexkey:${hexkey}
}

signature() {
  string_to_sign | hmac_sha256 $(signature_key) | hex_key | sed "s/^.* //"
}

curl \
  -T "${bkup_path}/${bkup_file}" \
  -H "Authorization: AWS4-HMAC-SHA256 Credential=${awsAccessKey}/${date_scope}/${s3_region}/s3/aws4_request,SignedHeaders=${signedHeader},Signature=$(signature)" \
  -H "Date: ${date_header}" \
  -H "x-amz-date: ${iso_timestamp}" \
  -H "x-amz-security-token:${awsSecuToken}" \
  -H "x-amz-content-sha256: $(payload_hash)" \
  -H "x-amz-server-side-encryption:aws:kms" \
  -H "x-amz-server-side-encryption-aws-kms-key-id:arn:aws:kms:eu-west-1:xxxxxxxx111:key/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx1432" \
  "https://${s3_bucket}.s3.amazonaws.com/${upldUri}"
It was written following this doc:
http://docs.aws.amazon.com/general/latest/gr/sigv4-create-canonical-request.html and parts of a sample script on GitHub, which I forgot to bookmark. After the initial bumpy ride, I now keep getting this error:
<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><AWSAccessKeyId>XXXXXXXXXXXXXXX</AWSAccessKeyId><StringToSign>AWS4-HMAC-SHA256
I went through several AWS docs but I cannot figure out why. Can anyone help me out here, please?
-San
I am running Accumulo 1.5 in an Ubuntu 12.04 VirtualBox VM. I have set the instance.zookeeper.host property in accumulo-site.xml to the VM's IP address, and I can connect to Accumulo and run queries from a remote client machine. From the client machine, I can also use a browser to see the Hadoop NameNode, browse the filesystem, etc. But I cannot connect to the Accumulo Overview page (port 50095) from anywhere other than directly from the Accumulo VM. There is no firewall between the VM and the client, and apart from the Accumulo Overview page not being reachable, everything else seems to work fine.
Is there a config setting that I need to change to allow outside access to the Accumulo Overview console?
thanks
I was able to get the Accumulo monitor to bind to all network interfaces by manually applying this patch:
https://git-wip-us.apache.org/repos/asf?p=accumulo.git;a=commit;h=7655de68
In conf/accumulo-env.sh add:
# Should the monitor bind to all network interfaces -- default: false
export ACCUMULO_MONITOR_BIND_ALL="true"
In bin/config.sh add:
# ACCUMULO-1985 provide a way to use the scripts and still bind to all network interfaces
export ACCUMULO_MONITOR_BIND_ALL=${ACCUMULO_MONITOR_BIND_ALL:-"false"}
And modify bin/start-server.sh to match:
SOURCE="${BASH_SOURCE[0]}"
while [ -h "$SOURCE" ]; do # resolve $SOURCE until the file is no longer a symlink
  bin="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
  SOURCE="$(readlink "$SOURCE")"
  [[ $SOURCE != /* ]] && SOURCE="$bin/$SOURCE" # if $SOURCE was a relative symlink, we need to resolve it relative to the path where the symlink file was located
done
bin="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
# Stop: Resolve Script Directory

. "$bin"/config.sh

HOST="$1"
host "$1" >/dev/null 2>/dev/null
if [ $? -ne 0 ]; then
  LOGHOST="$1"
else
  LOGHOST=$(host "$1" | head -1 | cut -d' ' -f1)
fi
ADDRESS="$1"
SERVICE="$2"
LONGNAME="$3"
if [ -z "$LONGNAME" ]; then
  LONGNAME="$2"
fi
SLAVES=$( wc -l < ${ACCUMULO_HOME}/conf/slaves )

IFCONFIG=/sbin/ifconfig
if [ ! -x $IFCONFIG ]; then
  IFCONFIG='/bin/netstat -ie'
fi

# ACCUMULO-1985 Allow monitor to bind on all interfaces
if [ ${SERVICE} == "monitor" -a ${ACCUMULO_MONITOR_BIND_ALL} == "true" ]; then
  ADDRESS="0.0.0.0"
fi

ip=$($IFCONFIG 2>/dev/null | grep inet[^6] | awk '{print $2}' | sed 's/addr://' | grep -v 0.0.0.0 | grep -v 127.0.0.1 | head -n 1)
if [ $? != 0 ]; then
  ip=$(python -c 'import socket as s; print s.gethostbyname(s.getfqdn())')
fi

if [ "$HOST" = "localhost" -o "$HOST" = "`hostname`" -o "$HOST" = "$ip" ]; then
  PID=$(ps -ef | egrep ${ACCUMULO_HOME}/.*/accumulo.*.jar | grep "Main $SERVICE" | grep -v grep | awk {'print $2'} | head -1)
else
  PID=$($SSH $HOST ps -ef | egrep ${ACCUMULO_HOME}/.*/accumulo.*.jar | grep "Main $SERVICE" | grep -v grep | awk {'print $2'} | head -1)
fi

if [ -z $PID ]; then
  echo "Starting $LONGNAME on $HOST"
  if [ "$HOST" = "localhost" -o "$HOST" = "`hostname`" -o "$HOST" = "$ip" ]; then
    #${bin}/accumulo ${SERVICE} --address $1 >${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.out 2>${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.err &
    ${bin}/accumulo ${SERVICE} --address ${ADDRESS} >${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.out 2>${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.err &
    MAX_FILES_OPEN=$(ulimit -n)
  else
    #$SSH $HOST "bash -c 'exec nohup ${bin}/accumulo ${SERVICE} --address $1 >${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.out 2>${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.err' &"
    $SSH $HOST "bash -c 'exec nohup ${bin}/accumulo ${SERVICE} --address ${ADDRESS} >${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.out 2>${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.err' &"
    MAX_FILES_OPEN=$($SSH $HOST "/usr/bin/env bash -c 'ulimit -n'")
  fi
  if [ -n "$MAX_FILES_OPEN" ] && [ -n "$SLAVES" ] ; then
    if [ "$SLAVES" -gt 10 ] && [ "$MAX_FILES_OPEN" -lt 65536 ]; then
      echo "WARN : Max files open on $HOST is $MAX_FILES_OPEN, recommend 65536"
    fi
  fi
else
  echo "$HOST : $LONGNAME already running (${PID})"
fi
Check that the monitor is bound to the correct interface, and not the "localhost" loopback interface. You may have to edit the monitors file in Accumulo's configuration directory with the IP/hostname of the correct interface.
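Either way, you can check which interface the monitor actually bound to (a quick check; 50095 is the default monitor port):
sudo netstat -tlnp | grep 50095
# expect 0.0.0.0:50095 here rather than 127.0.0.1:50095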