Memurai Developer Sentinel as service won't register - memurai

Has anyone had success in getting Memurai to run as a service as a Sentinel? Following the instructions from their site,
memurai.exe --service-install --service-name "memurai-sentinel" --sentinel memurai-sentinel.conf
results in
sentinel directive while not in sentinel mode
even with the sentinel flag being present. Cutting out the service flags, it will run as a sentinel without issue.
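For reference, the standalone invocation that does work looks something like this (same config file as in the command above):
memurai.exe memurai-sentinel.conf --sentinel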

The Memurai team has just released version 2.0.1, which fixes the issue of Memurai Sentinel running as a service. Please visit https://www.memurai.com/get-memurai to get the latest Developer build.

There should be a fix by the Memurai Team in the next release.
Edit: This has been fixed in the Memurai 2.0.1 release.
In the meantime, here's a workaround that uses a PowerShell script to add a sentinel service on Windows through other means:
Install Memurai.
Copy sentinel.conf to the desired sentinel working directory. The values in the files below assume C:\sentinelconf\sentinel.conf.
Edit sentinel.conf with the desired configuration. The current values assume a Redis instance running on the same machine on the default port.
Run "Windows PowerShell ISE" as an Administrator:
Open sentinel_script.ps1
Change the variables at the beginning of the file to the desired values.
Run the script (F5)
If it results in a "running scripts is disabled on this system." error:
In the interactive shell run (without the quotes): 'Set-ExecutionPolicy RemoteSigned'
Try running the script again (F5)
In the interactive shell run (without the quotes): 'Set-ExecutionPolicy Restricted'
Connect the sentinel using memurai-cli in the command prompt (without the quotes): 'memurai-cli -p 5000'
Issue command to test the sentinel (without the quotes): 'sentinel master mymaster'
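If the sentinel is up, the last command returns the monitored master's details; an abridged, illustrative reply looks roughly like this (remaining fields omitted):
 1) "name"
 2) "mymaster"
 3) "ip"
 4) "127.0.0.1"
 5) "port"
 6) "6379"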
Sample sentinel.conf:
# Copy this file to C:\sentinelconf\sentinel.conf
logfile "C:\sentinelconf\sentinel.log"
port 5000
sentinel monitor mymaster 127.0.0.1 6379 1
sentinel down-after-milliseconds mymaster 60000
sentinel failover-timeout mymaster 180000
sentinel parallel-syncs mymaster 1
The sentinel_script.ps1 file:
# Full path of installed memurai.exe
$MemuraiBinPath="C:\Program Files\Memurai\memurai.exe"
# Full path of sentinel.conf
$SentinelConfPath="C:\sentinelconf\sentinel.conf"
# Full path of sentinel.log
$SentinelLogPath="C:\sentinelconf\sentinel.log"
# Desired name of the service
$SentinelServiceName="Memurai Sentinel"
# Get Network Service credentials
$NetworkServiceCredentials = New-Object -TypeName System.Management.Automation.PSCredential ("NT AUTHORITY\NETWORK SERVICE", (New-Object System.Security.SecureString))
# Create a service to start Memurai in Sentinel mode
New-Service -Name $SentinelServiceName -Credential $NetworkServiceCredentials -BinaryPathName "`"$MemuraiBinPath`" --service-run `"$SentinelConfPath`" --sentinel"
# Create a rule to give the Network Service account access to the paths.
$SentinelAccessRule = New-Object System.Security.AccessControl.FileSystemAccessRule("NT AUTHORITY\NETWORK SERVICE", "FullControl", "ContainerInherit,ObjectInherit", "None", "Allow")
# Get the path where the sentinel.conf file is located and add permissions.
$SentinelConfFolderPath = Split-Path $SentinelConfPath
$ConfPathAccessPermissions = Get-Acl $SentinelConfFolderPath
$ConfPathAccessPermissions.SetAccessRule($SentinelAccessRule)
Set-Acl $SentinelConfFolderPath $ConfPathAccessPermissions
# Get the path where the sentinel.log file is located and add permissions.
$SentinelLogFolderPath = Split-Path $SentinelLogPath
$LogPathAccessPermissions = Get-Acl $SentinelLogFolderPath
$LogPathAccessPermissions.SetAccessRule($SentinelAccessRule)
Set-Acl $SentinelLogFolderPath $LogPathAccessPermissions
# Start the Memurai Sentinel service.
Start-Service $SentinelServiceName
Write-Host "Started the $SentinelServiceName service."

Related

WSO2-AM 3.2.0 - Developer Portal - redirecting to localhost

I'm trying to access the WSO2 Developer Portal sign-in page in a Docker environment, but I'm being automatically redirected to localhost.
I access Developer Portal using this address:
https://192.168.21.120:9443/devportal/apis
When I click the Sign-In button I'm redirected to:
https://localhost:9443/oauth2/authorize?response_type=code&c...
Where can I fix this URL?
Try updating the following configuration in the deployment.toml, setting hostname to the address you actually use to reach the portal (the value below is the default, which is what causes the localhost redirect):
[server]
hostname = "localhost"
You will also have to update the SP as per the doc below.
https://apim.docs.wso2.com/en/latest/troubleshooting/troubleshooting-invalid-callback-error/#
This is how I did it in a Docker container.
Pick the container. We need its id:
docker ps -a
Open a shell in the container (containerid is the actual id):
docker exec -u 0 -it containerid /bin/bash
List the folders in the container and use the API Manager one below:
ll
Find where your deployment.toml is in the API Manager folder:
find wso2amfolder/ -type f -name deployment.toml
Open the file with a terminal editor:
nano routetofile/deployment.toml
Note: routetofile, wso2amfolder and containerid are placeholders, not actual values.
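For convenience, the lookup can also be done in one step from the host; this is just an illustrative variant of the same commands, with containerid still a placeholder:
docker exec -u 0 containerid find / -type f -name deployment.toml 2>/dev/null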

Travis CI Deploy by SSH Script/Hosts issue

I have a Django site which I would like to deploy to a DigitalOcean server every time a branch is merged to master. I have it mostly working and have followed this tutorial.
.travis.yml
language: python
python:
  - '2.7'
env:
  - DJANGO_VERSION=1.10.3
addons:
  ssh_known_hosts: mywebsite.com
git:
  depth: false
before_install:
  - openssl aes-256-cbc -K *removed encryption details* -in travis_rsa.enc -out travis_rsa -d
  - chmod 600 travis_rsa
install:
  - pip install -r backend/requirements.txt
  - pip install -q Django==$DJANGO_VERSION
before_script:
  - cp backend/local.env backend/.env
script: python manage.py test
deploy:
  skip_cleanup: true
  provider: script
  script: "./travis-deploy.sh"
  on:
    all_branches: true
travis-deploy.sh - runs when the travis 'deploy' task calls it
#!/bin/bash
# print outputs and exit on first failure
set -xe
if [ $TRAVIS_BRANCH == "master" ] ; then
    # setup ssh agent, git config and remote
    echo -e "Host mywebsite.com\n\tStrictHostKeyChecking no\n" >> ~/.ssh/config
    eval "$(ssh-agent -s)"
    ssh-add travis_rsa
    git remote add deploy "travis@mywebsite.com:/home/dean/se_dockets"
    git config user.name "Travis CI"
    git config user.email "travis@mywebsite.com"
    git add .
    git status # debug
    git commit -m "Deploy compressed files"
    git push -f deploy HEAD:master
    echo "Git Push Done"
    ssh -i travis_rsa -o UserKnownHostsFile=/dev/null travis@mywebsite.com 'cd /home/dean/se_dockets/backend; echo hello; ./on_update.sh'
else
    echo "No deploy script for branch '$TRAVIS_BRANCH'"
fi
Everything works fine until things get to the 'deploy' stage. I keep getting error messages like:
###########################################################
# WARNING: POSSIBLE DNS SPOOFING DETECTED! #
###########################################################
The ECDSA host key for mywebsite.com has changed,
and the key for the corresponding IP address *REDACTED FOR STACK OVERFLOW*
is unknown. This could either mean that
DNS SPOOFING is happening or the IP address for the host
and its host key have changed at the same time.
###########################################################
# WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! #
###########################################################
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
* REDACTED FOR STACK OVERFLOW *
Please contact your system administrator.
Add correct host key in /home/travis/.ssh/known_hosts to get rid of this message.
Offending RSA key in /home/travis/.ssh/known_hosts:11
remove with: ssh-keygen -f "/home/travis/.ssh/known_hosts" -R mywebsite.com
Password authentication is disabled to avoid man-in-the-middle attacks.
Keyboard-interactive authentication is disabled to avoid man-in-the-middle attacks.
Permission denied (publickey,password).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Script failed with status 128
INTERESTINGLY - if I re-run this job, the 'git push' command will succeed at pushing to the deploy remote (my server). However, the next step in the deploy script, which is to SSH into the server and run some post-update commands, will fail for the same reason (host fingerprint change or something). Or it will ask for the travis@mywebsite.com password (it has none) and hang on the input prompt.
Additionally, when I debug the Travis CI build and use the SSH URL you're given to SSH into the machine Travis CI runs on, I can SSH into my own server from it. However, it takes multiple tries to get around the errors.
So this seems to be an intermittent problem, with state persisting from one build into the next on retries and causing different errors/outcomes.
As you can see in my .yml file and the deploy script, I have attempted to disable various host name checks, added the domain to known hosts, etc., all to no avail.
I know I have things 99% set up correctly as things do mostly succeed when I retry the job a few times.
Anyone seen this before?
Cheers,
Dean

Setting up passwordless ssh failed for all the HAWQ hosts

We have 3 nodes and are trying to set up HDFS and Pivotal HAWQ with Ambari. I have already enabled passwordless SSH for all 3 machines, but when I start the HAWQ service I get the error "Setting up passwordless ssh failed for all the HAWQ hosts". Please help resolve this issue.
On all of your hosts, edit your /etc/ssh/sshd_config file and change "PasswordAuthentication no" to "PasswordAuthentication yes". This can be done with sed too.
sudo sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config
Then restart sshd on all of the hosts:
sudo /etc/init.d/sshd restart
Now you can proceed with the installation of HAWQ. The installation is using a command called gpssh-exkeys. This process uses password authentication to communicate with the hosts so that it can create and exchange keys for the gpadmin account. Once the keys have been exchanged, the gpadmin account no longer needs password authentication.
Also, after the installation is complete, you can revert back and disable password authentication if you like.
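For example, the revert could be the mirror image of the commands above, run on each host:
sudo sed -i 's/PasswordAuthentication yes/PasswordAuthentication no/g' /etc/ssh/sshd_config
sudo /etc/init.d/sshd restart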
Lastly, I've asked the PM for HDB at Pivotal to enhance Ambari to do these steps for you automatically. There is a similar process for iptables being disabled during the installation of Hadoop so this would be like that. Ambari would enable password authentication, install HDB, and then disable password authentication.

Authentication or permission failure, did not have permissions on the remote directory

I am using Ansijet to automate running an Ansible playbook on a button click. The playbook stops the running instances on AWS. If run manually from the command line, the playbook runs well and does its tasks, but when run through the web interface of Ansijet, the following error is encountered:
Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the remote directory. Consider changing the remote temp path in ansible.cfg to a path rooted in "/tmp". Failed command was: mkdir -p $HOME/.ansible/tmp/ansible-tmp-1390414200.76-192986604554742 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1390414200.76-192986604554742 && echo $HOME/.ansible/tmp/ansible-tmp-1390414200.76-192986604554742, exited with result 1:
Following is the ansible.cfg configuration.
# some basic default values...
inventory = /etc/ansible/hosts
#library = /usr/share/my_modules/
remote_tmp = $HOME/.ansible/tmp/
pattern = *
forks = 5
poll_interval = 15
sudo_user = root
#ask_sudo_pass = True
#ask_pass = True
transport = smart
#remote_port = 22
module_lang = C
I tried changing the remote_tmp path to /home/ubuntu/.ansible/tmp
but I am still getting the same error.
By default, Ansible connects to remote servers as the same user that Ansible itself runs as. In the case of Ansijet, it will try to connect to remote servers as whatever user started Ansijet's node.js process. You can override this by specifying remote_user in a playbook or globally in the ansible.cfg file.
Ansible will try to create the temp directory if it doesn't already exist, but will be unable to if that user does not have a home directory or if their home directory permissions do not allow them write access.
I actually changed the temp directory in my ansible.cfg file to point to a location in /tmp which works around these sorts of issues.
remote_tmp = /tmp/.ansible-${USER}/tmp
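As a sketch, both overrides mentioned above can live in ansible.cfg; the ubuntu user name here is just an assumed example:
[defaults]
remote_user = ubuntu
remote_tmp = /tmp/.ansible-${USER}/tmp
The same remote_user key can alternatively be set at the play level inside the playbook.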
I faced the same problem a while ago and solved it like this. One possible cause is that the remote server's /tmp directory is not writable. Run ls -ld /tmp and make sure its output looks something like this:
drwxrwxrwt 7 root root 20480 Feb 4 14:18 /tmp
Here root owns /tmp and it has 1777 permissions.
Also, for me simply
remote_tmp = /tmp
worked well.
Another check is to make sure $HOME is set in the shell you are running under. Ansible runs commands via /bin/sh, not /bin/bash, so make sure $HOME is set in the sh shell.
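A couple of quick checks along those lines (the user and host names are placeholders):
# restore the expected sticky-bit permissions on /tmp if they differ
sudo chmod 1777 /tmp
# confirm $HOME is set when going through sh on the remote host
ssh remote_user@remote_host 'sh -c "echo \$HOME"'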
In my case I needed to log in to the server for the first time and change the default password.
Check the ansible user on the remote/client machine, as this error also occurs when that user's password has expired there.
<*.*.*.*> Failed to connect to the host via ssh: WARNING: Your password has expired.
Password change required but no TTY available.
Actual error :
host_name | UNREACHABLE! => {
"changed": false,
"msg": "Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"` echo /tmp/ansible-$USER `\"&& mkdir /tmp/ansible-$USER/ansible-tmp-1655256382.78-15189-162690599720687 && echo ansible-tmp-1655256382.78-15189-162690599720687=\"` echo /tmp/ansible-$USER/ansible-tmp-1655256382.78-15189-162690599720687 `\" ), exited with result 1",
"unreachable": true
This can happen mainly because there is no home directory for the user on the remote server.
The following steps resolved the issue for me:
Log into the remote server
switch to root
If the user the host (in my case, Ansible) is trying to connect as is linux_user, then run the following commands:
mkdir /home/linux_user
chown linux_user:linux_user /home/linux_user
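The same two steps can be collapsed into a single command if preferred (linux_user is still a placeholder):
install -d -o linux_user -g linux_user /home/linux_user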

How do I add authentication and endpoint to Django Celery Flower Monitoring?

I've been using Flower locally and it seems easy enough to set up and run, but I can't see how I would set it up in a production environment.
In particular, how can I add authentication and how would I define a url to access it?
For custom address, use the --address flag.
For auth, use the --basic_auth flag.
See below:
# celery flower --help
Usage: /usr/local/bin/celery [OPTIONS]
Options:
--address run on the given address
--auth regexp of emails to grant access
--basic_auth colon separated user-password to enable
basic auth
--broker_api inspect broker e.g.
http://guest:guest@localhost:15672/api/
--certfile path to SSL certificate file
--db flower database file (default flower.db)
--debug run in debug mode (default False)
--help show this help information
--inspect inspect workers (default True)
--inspect_timeout inspect timeout (in milliseconds) (default
1000)
--keyfile path to SSL key file
--max_tasks maximum number of tasks to keep in memory
(default 10000) (default 10000)
--persistent enable persistent mode (default False)
--port run on the given port (default 5555)
--url_prefix base url prefix
--xheaders enable support for the 'X-Real-Ip' and
'X-Scheme' headers. (default False)
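Putting those flags together, a production-style invocation could look roughly like this (address, port and credentials are placeholders; add your usual Celery app arguments as needed):
celery flower --address=0.0.0.0 --port=5555 --basic_auth=user1:password1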
You can use https://pypi.org/project/django-revproxy/
This way Flower is hidden behind Django auth, and you don't need a rewrite rule in your webserver.
Original source of this answer: Celery Flower Security in Production