cntlmd not starting under systemd on CentOS 7.1 - centos7

Had a weird error trying to start cntlmd on CentOS 7.1.
systemctl start cntlmd results in the following in the logs (and yes, that is exactly how the message appears in the logs, cut off mid-sentence :)):
systemd: Started SYSV: Cntlm is meant to be given your proxy address and becoming
The weird thing is:
it did run initially after installation.
The exact same config works perfectly on another machine (provisioned with Chef, so it is 100% the same config).
If I run it in the foreground it works, but through systemd it does not.
To "fix" it, I had to manually remove and reinstall, whereupon it worked again.
Anybody seen this error (Google reveals nothing) and know what's going on?

I realised that the /var/run/cntlm directory seemed to be "removed" after every boot. Since /var/run is a tmpfs on CentOS 7, it turns out the /var/run/cntlm directory is simply never created by systemd-tmpfiles on boot (thanks to this SO answer), which then resulted in:
Feb 29 06:13:04 node01 cntlm: Using following NTLM hashes: NTLMv2(1) NT(0) LM(0)
Feb 29 06:13:04 node01 cntlm[10540]: Daemon ready
Feb 29 06:13:04 node01 cntlm[10540]: Changing uid:gid to 996:995 - Success
Feb 29 06:13:04 node01 cntlm[10540]: Error creating a new PID file
because cntlm couldn't write its PID file, since /var/run/cntlm didn't exist.
So, to get systemd-tmpfiles to create the /var/run/cntlm directory on boot, you need to add the following file as /usr/lib/tmpfiles.d/cntlm.conf:
d /run/cntlm 700 cntlm cntlm
Reboot and Bob's your uncle.
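If you'd rather not reboot just to pick up the new config, systemd-tmpfiles can also apply it on the spot; the path below is simply the file created above:
systemd-tmpfiles --create /usr/lib/tmpfiles.d/cntlm.conf
systemctl start cntlmd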

Related

AWS EC2 - All of a sudden lost access due to "No /sbin/init, trying fallback"

My AWS EC2 instance was locked and I lost access to it from December 6th for an unknown reason. It cannot have been an action I took on the instance, because I was overseas on holiday from December 1st and came back on January 1st; I then realised the server had been unreachable since December 6th, and I have had no way to connect to the EC2 instance since.
The EC2 instance runs CentOS 7 with PHP, NGINX and SSHD.
When I checked the system log I saw the following:
[  OK  ] Started Cleanup udevd DB.
[  OK  ] Reached target Switch Root.
Starting Switch Root...
[ 6.058942] systemd-journald[99]: Received SIGTERM from PID 1 (systemd).
[ 6.077915] systemd[1]: No /sbin/init, trying fallback
[ 6.083729] systemd[1]: Failed to execute /bin/sh, giving up: No such file or directory
[ 180.596117] random: crng init done
Any idea on what the issue is would be much appreciated.
In brief, I had to do the following to recover (a rough command sketch follows the list); the root cause was that the disk was completely full:
1) Problem mounting the slaved volume (xfs_admin)
2) Not able to chroot the environment (ln -s)
3) Disk at 100% (df -h), removing var/log files
4) Rebuilt the initramfs (dracut -f)
5) Renamed the etc/fstab
6) Switched the slaved volume back to its original UUID (xfs_admin)
7) Configured GRUB to boot the latest version of the kernel/initramfs
8) Rebuilt the initramfs and GRUB
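Roughly, the recovery looked like the sketch below. Treat it as an outline under assumptions: the broken root volume is attached to a rescue instance as /dev/xvdf1 (the device name is an assumption), the filesystem is XFS, and the files removed from var/log are only examples.
# on the rescue instance, with the broken root volume attached as /dev/xvdf1
xfs_admin -U generate /dev/xvdf1            # give the slaved volume a temporary UUID so XFS will mount it alongside the rescue root
mkdir -p /mnt/broken && mount /dev/xvdf1 /mnt/broken
mount --bind /dev /mnt/broken/dev
mount --bind /proc /mnt/broken/proc
mount --bind /sys /mnt/broken/sys
chroot /mnt/broken /bin/bash
df -h                                       # confirm / is at 100%
rm -f /var/log/*.gz /var/log/messages-*     # example cleanup only; delete whatever is actually safe to remove
dracut -f                                   # rebuild the initramfs (inside a chroot you may need to name the target kernel explicitly)
exit
An alternative to rewriting the UUID with xfs_admin is mounting the volume with -o nouuid.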

Hyperledger Fabric: Peer nodes fail to restart with byfn script when machine is shut down while network is running

I have a Hyperledger Fabric network running on a single AWS instance using the default byfn script.
ERROR: the orderer, cli and CA Docker containers show "Up" status, but the peers show "Exited" status.
The error occurs when:
the byfn network is running and the machine is rebooted (not in my control, but because of some external reason);
the network is left running overnight without shutting the machine down; it shows the same status the next morning.
Error shown:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b0523a7b1730 hyperledger/fabric-tools:latest "/bin/bash" 23 seconds ago Up 21 seconds cli
bfab227eb4df hyperledger/fabric-peer:latest "peer node start" 28 seconds ago Exited (2) 23 seconds ago peer1.org1.example.com
6fd7e818fab3 hyperledger/fabric-peer:latest "peer node start" 28 seconds ago Exited (2) 19 seconds ago peer1.org2.example.com
1287b6d93a23 hyperledger/fabric-peer:latest "peer node start" 28 seconds ago Exited (2) 22 seconds ago peer0.org2.example.com
2684fc905258 hyperledger/fabric-orderer:latest "orderer" 28 seconds ago Up 26 seconds 0.0.0.0:7050->7050/tcp orderer.example.com
93d33b51d352 hyperledger/fabric-peer:latest "peer node start" 28 seconds ago Exited (2) 25 seconds ago peer0.org1.example.com
Attaching docker log: https://hastebin.com/ahuyihubup.cs
Only the peers fail to start up.
Steps I have tried to solve the issue:
docker start $(docker ps -aq), or starting individual peers manually.
byfn down, generate and then up again. Shows the same result as above.
Rolled back to previous versions of the fabric binaries. Same result on 1.1, 1.2 and 1.4. With the older binaries the error does not recur when the network is left running overnight, but it does recur when the machine is restarted.
Used older docker images such as 1.1 and 1.2.
Tried starting up only one peer, orderer and cli.
Changed network name and domain name.
Uninstalled docker, docker-compose and reinstalled.
Changed port numbers of all nodes.
Tried restarting without mounting any volumes.
The only thing that works is reformatting the AWS instance and reinstalling everything from scratch. Also, I am NOT using AWS blockchain template.
Any help would be appreciated. I have been stuck on this issue for a month now.
The error was resolved by adding the following lines to peer-base.yaml:
GODEBUG=netdns=go
dns_search: .
Thanks to @gari-singh for the answer:
https://stackoverflow.com/a/49649678/5248781
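For context, here is roughly where those two lines sit in the byfn peer-base.yaml; the surrounding keys are only a sketch of the stock file and may differ in your copy:
services:
  peer-base:
    image: hyperledger/fabric-peer:latest
    dns_search: .                    # service-level key: stops Docker's embedded DNS from appending the host's search domain
    environment:
      - GODEBUG=netdns=go            # forces the pure-Go DNS resolver inside the peer
      # ...the existing CORE_PEER_* / CORE_VM_* variables stay as they are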

What is wrong with the setup of Hyperledger Fabric?

Because I wanted to install a clean new version of Hyperledger Fabric, I deleted the old Hyperledger directory from a month ago and ran "vagrant destroy".
I ran "vagrant up" and "vagrant ssh" successfully.
"make peer" succeeded, but when I ran "peer", it failed.
When I run "make peer" and "peer" again, the error that pops up is below:
vagrant@ubuntu-1404:/opt/gopath/src/github.com/hyperledger/fabric$ make peer
make: Nothing to be done for `peer'.
vagrant@ubuntu-1404:/opt/gopath/src/github.com/hyperledger/fabric$ peer
No command 'peer' found, did you mean:
Command 'pee' from package 'moreutils' (universe)
Command 'beer' from package 'gerstensaft' (universe)
Command 'peel' from package 'ears' (universe)
Command 'pear' from package 'php-pear' (main)
peer: command not found
vagrant@ubuntu-1404:/opt/gopath/src/github.com/hyperledger/fabric$
vagrant@ubuntu-1404:/opt/gopath/src/github.com/hyperledger/fabric$ cd peer
vagrant@ubuntu-1404:/opt/gopath/src/github.com/hyperledger/fabric/peer$ ls -l
total 60
drwxr-xr-x 1 vagrant vagrant 204 Jun 26 01:16 bin
-rw-r--r-- 1 vagrant vagrant 17342 Jun 25 14:18 core.yaml
-rw-r--r-- 1 vagrant vagrant 35971 Jun 25 14:18 main.go
-rw-r--r-- 1 vagrant vagrant 1137 Jun 23 08:46 main_test.go
The peer binary is located in the ./build/bin/ folder.
For your configuration, the full path is /opt/gopath/src/github.com/hyperledger/fabric/build/bin/.
Let me tell you one thing I observed when I pulled the code from GitHub last week (Thursday, to be exact).
The make command had created the executable in /opt/gopath/src/github.com/hyperledger/fabric/build/bin/, but one nice thing I found was that it had also copied it to /hyperledger/build/bin, and the $PATH variable now included /hyperledger/build/bin as well.
So, to answer your question, you have two options:
1. Retain your current version of the code, navigate into the build/bin folder in the fabric directory and check whether the peer executable is present there. If yes, run it from there (see the snippet below).
2. Pull the latest copy from github.com and run make peer from the fabric directory as usual, but then execute peer from anywhere. :)
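A minimal sketch of option 1, using the paths from the question (putting build/bin on $PATH is just one way to make the binary resolvable):
cd /opt/gopath/src/github.com/hyperledger/fabric
make peer
ls -l build/bin/peer                 # confirm the executable that make produced
export PATH="$PATH:/opt/gopath/src/github.com/hyperledger/fabric/build/bin"
peer --help                          # now runs from any directory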

SSH Connection disconnected

I'm a student from Korea.
First, I'm sorry about my poor English :)
I'm building a web service using AWS + nginx + Django.
I connect to the AWS instance (Ubuntu) over SSH:
Welcome to Ubuntu 14.04.4 LTS (GNU/Linux 3.13.0-74-generic x86_64)
* Documentation: https://help.ubuntu.com/
System information as of Sat Apr 30 07:03:51 UTC 2016
System load: 0.0 Processes: 105
Usage of /: 23.8% of 7.74GB Users logged in: 0
Memory usage: 14% IP address for eth0: 172.31.17.137
Swap usage: 0%
Graph this data and manage this system at:
https://landscape.canonical.com/
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
21 packages can be updated.
17 updates are security updates.
Last login: Sat Apr 30 07:03:52 2016 from 210.103.124.253
pyenv-virtualenv: no virtualenv has been activated.
and then I run
manage.py runserver --settings=abc.settings.production
so that everyone can access my web service!
But after about 30 minutes
the SSH connection breaks by itself
and prints this message:
packet_write_wait: Connection to 52.69.xxx.xxx: Broken pipe
and then nobody can access my web service.
So my web site cannot be accessed when my computer is powered off and there is no SSH connection.
I want everyone to be able to access my web service 24/7.
Please give me a method. Thank you :)
When you want to run a command that continues after your current shell terminates, you should use the nohup command to launch it.
That causes the process to be detached from its initial parent shell so it is not killed when the parent terminates.
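A minimal sketch of what that looks like for the command from the question (the interface/port and log file name are illustrative assumptions):
# nohup keeps the server running after the SSH session and its shell go away
nohup python manage.py runserver 0.0.0.0:8000 --settings=abc.settings.production > runserver.log 2>&1 &
The trailing & puts the process in the background, and its output goes to runserver.log instead of the terminal.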

How to run jetty as a daemon

I've just downloaded Jetty 9 and wanted to run it as a daemon. I've set some options in /etc/default/jetty; here they are:
JETTY_HOME=/opt/jetty
JETTY_ARGS=jetty.port=8080
NO_START=0
JETTY_USER=jetty
JETTY_PID=/opt/jetty/jetty.state
JETTY_LOGS=/var/log/jetty
DEBUG=1
When I run service jetty start I get this:
Starting Jetty: FAILED Sun Apr 13 17:20:25 UTC 2014
Well, what could be wrong? There's no info in the logs; how can I debug this?
Take a look here for some tips. The default script runs start-stop-daemon with the -b flag, which puts everything into a detached process and prevents output from going to the console. Remove -b and add -v (verbose), and you'll probably get some debugging information and an indication of how far it got.
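As a rough illustration (the exact start-stop-daemon invocation in your jetty.sh will differ), the change amounts to something like:
# as shipped: daemonized, so failures never reach the console
start-stop-daemon --start --chuid "$JETTY_USER" --pidfile "$JETTY_PID" -b -m --exec "$JAVA" -- $JAVA_OPTIONS
# while debugging: drop -b and add -v so errors and progress are printed
start-stop-daemon --start --chuid "$JETTY_USER" --pidfile "$JETTY_PID" -v -m --exec "$JAVA" -- $JAVA_OPTIONS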