How to use sys/mount to mount an NFS system? - c++

So, I am trying to connect two Ubuntu computers using an NFS connection.
On the server, I did the following:
Install NFS Server on Ubuntu
$ sudo apt-get install nfs-kernel-server portmap
Export shares over NFS
$ sudo mkdir /opt/share
$ sudo chown nobody:nogroup /opt/share
Edit the NFS server exports configuration file
$ sudo gedit /etc/exports
Add the following settings
/home 192.168.0.0/24(rw,sync,no_root_squash,no_subtree_check)
/opt/share 192.168.0.0/24(rw,sync,no_subtree_check)
Apply the new settings by running the following command. This will export all directories listed in the /etc/exports file.
$ sudo exportfs -a
On the client side, I tried to use the mount() call from sys/mount.h to make the NFS connection:
if(mount(":/mnt/share","/opt/share","nfs",0,"nolock,addr=192.168.0.101") == -1)
{
printf("NFS ERROR: mount failed: %s \n",strerror(errno));
}
else
{
printf("NFS connected\n");
}
But it returns
m@m-ThinkPad-L15-Gen-2:~/Desktop/teste$ sudo ./mountnfs
NFS ERROR: mount failed: Permission denied
Does anybody have any clue what is happening?

According to the documentation:
Linux: the CAP_SYS_ADMIN capability is required to mount file systems.
And according to this page, you can add that capability to your program using something along the lines of the following command (each time after you build it, probably):
sudo setcap CAP_SYS_ADMIN+ep /path/to/your/binary
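If you want to verify from inside the program whether that capability actually took effect at runtime, a minimal sketch using the libcap API might look like the following. This is my own illustration, not part of the original question or answer; it assumes the libcap development package is installed and the binary is linked with -lcap.

#include <sys/capability.h>   // libcap; link with -lcap
#include <stdio.h>

int main()
{
    cap_t caps = cap_get_proc();   // capability state of the current process
    if (caps == NULL)
    {
        perror("cap_get_proc");
        return 1;
    }
    cap_flag_value_t effective = CAP_CLEAR;
    cap_get_flag(caps, CAP_SYS_ADMIN, CAP_EFFECTIVE, &effective);
    printf("CAP_SYS_ADMIN effective: %s\n", effective == CAP_SET ? "yes" : "no");
    cap_free(caps);
    return 0;
}

If this prints "no", the mount() call cannot succeed regardless of its arguments.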
That's the gist of it anyway, but you might need to do a bit more digging to find the optimal solution - in which case you can answer your own question.
Also, what's that leading : doing in your source parameter? Most likely I just don't understand the syntax, but it looks a bit weird to me.
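For what it's worth, an NFS source is normally written as "<server>:<exported path>", so an empty host before the ":" does look wrong. Below is a minimal sketch (my own, with assumed paths) of how the corrected call could look. It assumes that 192.168.0.101 is the server exporting /opt/share as in the /etc/exports above, that an empty local directory /mnt/share exists as the mount point, and that NFSv3 is acceptable; adjust to your setup.

#include <sys/mount.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main()
{
    // "<server>:<exported path>" as the source; a local directory as the target.
    const char *source = "192.168.0.101:/opt/share";
    const char *target = "/mnt/share";
    // The in-kernel NFS client still expects the server address in the data string.
    const char *data   = "nolock,vers=3,addr=192.168.0.101";

    if (mount(source, target, "nfs", 0, data) == -1)
    {
        printf("NFS ERROR: mount failed: %s\n", strerror(errno));
        return 1;
    }
    printf("NFS connected\n");
    return 0;
}

The process still needs CAP_SYS_ADMIN (run it with sudo or after the setcap step above), and the export in /etc/exports must cover the client's address.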

Related

The .Xauthority file does not exist; hence, via a local SSH connection, the display from the GCP compute engine is not working

Below I am explaining all that has been tried and double-checked.
Setup on the local Windows machine:
Xming installed and running.
In ssh_config, ForwardX11 is set to yes.
In the VS Code remote connection config, ForwardX11 is set to yes.
Setup on the GCP compute engine with Debian/Linux 9 and 1 GPU [free tier]:
xauth is installed.
In the sshd_config file, the following is set:
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost no
The SSH server has been restarted to ensure the below settings are read.
From the local workstation I run gcloud compute ssh --ssh-flag="-X" tensorflow-2-vm (the instance name) and the response is:
/usr/bin/xauth: file /home/user/.Xauthority does not exist,
So, I attempted to perform the below on the remote compute engine with instance name - tensorflow-2-vm and user trapti_kalra:
trapti_kalra@tensorflow-2-vm:~$ xauth list
xauth: file /home/trapti_kalra/.Xauthority does not exist
trapti_kalra@tensorflow-2-vm:~$ mv .Xauthority old.Xauthority
mv: cannot stat '.Xauthority': No such file or directory
trapti_kalra@tensorflow-2-vm:~$ touch ~/.Xauthority
trapti_kalra@tensorflow-2-vm:~$ xauth generate :0 . trusted
xauth: (argv):1: unable to open display ":0".
trapti_kalra@tensorflow-2-vm:~$ sudo xauth generate :0 . trusted
xauth: file /root/.Xauthority does not exist
xauth: (argv):1: unable to open display ":0".
So, it looks like something is missing; any help will be appreciated. This was working with an EC2 server before I moved to GCP.
Create a new file: touch ~/.Xauthority
Log out and back in again with your ssh session. (I'm using MobaXterm)
Then it writes the needed entries to it.
You logged into your Linux server over SSH and got the following error:
.Xauthority does not exist
Solution:
Let's go into the /etc/ssh/sshd_config file and remove the # sign at the beginning of the 3 lines below:
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost yes
Then systemctl restart sshd
Log in again and you will not get the error.
There are many solutions to this problem, it can also depend on what machine you originate from. If you come from a Linux box, enabling sshd config options like:
X11Forwarding yes
could be enough.
When you use a MacBook, however, the scenario is different. In that case, you need to install xQuartz with brew:
brew install xquartz
And after this start it:
xQuartz &
After this is done, the xQuartz logo appears in your bar; you can right-click the icon and start a terminal from the Applications menu. Once you have done that, run echo $DISPLAY from this terminal. This should give you the output:
:0
When you have another terminal such as iTerm, you can export this value in that terminal with export DISPLAY=:0. As long as xQuartz is still running, the other terminal should be able to continue to use xQuartz.
After this you can SSH into the remote machine and check if the display variable is set:
$: ssh -Y anldisr#my-remote-machine
$: echo $DISPLAY
localhost:11.0
It took me an hour to figure this out, hope it helps someone. :)
This also happened when I added a new user to the remote machine without giving the user sudo privileges during creation.
To resolve it, I used the root user (or a sudo-privileged user) to grant sudo privileges to the new user. Then exit the new user's session and ssh into your server again.
$ sudo usermod -aG sudo [newUser]

CLion Full Remote Mode with FreeBSD as the remote host

Currently, the Full Remote Mode of CLion only supports Linux as a remote host OS. Is it possible to have a FreeBSD remote host?
Yes, you can!
However, note that I'm recalling these steps retrospectively, so I have probably missed a step or two. Should you encounter any problem, please feel free to leave a comment below.
Rent a FreeBSD server, of course :)
Update your system to the latest release. Otherwise, you may get weird errors like "libdl.so.1" not found when installing packages. The one I'm using is FreeBSD 12.0-RELEASE-p3.
Create a user account. Don't forget to make it a member of wheel, and uncomment the %wheel ALL=(ALL) ALL line in /usr/local/etc/sudoers.
Set up SSH. This step is especially tricky, because we need to use both public-key and password authentication.
Due to a known bug, in some cases, the remote host must use password authentication, or you'll get an error when setting up the toolchain. You can enable password authentication by setting PasswordAuthentication yes in /etc/ssh/sshd_config, followed by a sudo /etc/rc.d/sshd restart.
It appears that CLion synchronizes files between the local and remote host with rsync and SSH. For some reason I cannot explain, this process will hang forever if the host server doesn't support passphrase-less SSH key login. Follow this answer to create an SSH key as an additional way of authentication.
CLion assumes the remote host OS to be Linux, so we must fix some incompatibilities between GNU/Linux and FreeBSD.
Install GNU utilities with sudo pkg install coreutils.
Rename the BSD utility stat with sudo mv /usr/bin/stat /usr/bin/_stat.
Create a "new" file /usr/bin/stat with the content in Snippet 1. This hack exploits the fact that CLion sets the environment variable JETBRAINS_REMOTE_RUN to 1 before running commands on the remote server.
Do sudo chmod a+x /usr/bin/stat to make it executable.
Again, rename the BSD utility ls with sudo mv /bin/ls /bin/_ls.
Create a "new" file /bin/ls with the content in Snippet 2, like before.
Lastly, sudo chmod a+x /bin/ls.
Install the dependencies with sudo pkg install rsync cmake gcc gdb gmake.
Now you can follow the official instructions, and connect to your shiny FreeBSD host!
Snippet 1
#!/bin/sh
if [ -z "$JETBRAINS_REMOTE_RUN" ]
then
    exec "/usr/bin/_stat" "$@"
else
    exec "/usr/local/bin/gnustat" "$@"
fi
Snippet 2
#!/bin/sh
if [ -z "$JETBRAINS_REMOTE_RUN" ]
then
    exec "/bin/_ls" "$@"
else
    exec "/usr/local/bin/gls" "$@"
fi
Additionally, you need to fix one more incompatibility between GNU/Linux and FreeBSD.
Check that gtar is installed; if not, install it with sudo pkg install gtar.
Rename the BSD utility tar with sudo mv /usr/bin/tar /usr/bin/_tar.
Create a "new" file /usr/bin/tar with the content in Snippet 3, like before.
Lastly, sudo chmod a+x /usr/bin/tar
Snippet 3
#!/bin/sh
if [ -z "$JETBRAINS_REMOTE_RUN" ]
then
    exec "/usr/bin/_tar" "$@"
else
    exec "/usr/local/bin/gtar" "$@"
fi
Starting with CLion 2020.1, the instructions regarding gnustat and ls are not relevant anymore, because CLion 2020.1 includes the proper fixes in the jsch-nio library (https://github.com/lucastheisen/jsch-nio/commit/410cf5cbb489114b5da38c7c05237f6417b9125b).
Starting with CLion 2020.2, the tar --dereference option is no longer used, so the instruction regarding gtar (GNU tar) is also not relevant anymore.

How to start the Odoo server automatically when the system is ON

Hi everyone,
How do I start the Odoo server automatically when the system is on?
I searched on Google and found this link: http://www.serpentcs.com/serpentcs-odoo-auto-startup-script-322
I followed each and every step and started the odoo-server:
ps -ax | grep python
5202 ? Sl 0:01 python /home/tejaswini/Odoo_workspace/workspace_8/odoo8/openerp-server --config /etc/odoo-server.conf --logfile /var/log/odoo-server.log
It shows the server path as well, but when I open 0.0.0.0:8069 / localhost:8069 in the browser while it is running, it shows "This site can't be reached".
Please, can anyone help me?
Thanks in advance.
To start a service automatically when the system turns on, you need to register that service as an init script. Try the below command:
sudo update-rc.d <service_name> defaults
In your case,
sudo update-rc.d odoo-server defaults
Hope it will help you.
For the final step we need to install a script which will be used to start up and shut down the server automatically and also run the application as the correct user. There is a script you can use in /opt/odoo/debian/init, but it will need a few small modifications to work with the system installed the way I have described above. Here is the link.
Similar to the configuration file, you need to either copy it or paste the contents of this script to a file in /etc/init.d/ and call it odoo-server. Once it is in the right place you will need to make it executable and owned by root:
sudo chmod 755 /etc/init.d/odoo-server
sudo chown root: /etc/init.d/odoo-server
In the configuration file there’s an entry for the server’s log file. We need to create that directory first so that the server has somewhere to log to and also we must make it writeable by the openerp user:
sudo mkdir /var/log/odoo
sudo chown odoo:root /var/log/odoo
reference

Get files from guest to host in vagrant

Can I retrieve files from my Vagrant machine (guest) and sync them to my host machine?
I know synced folders work the other way around, but I was hoping there is a way to do it in reverse: instead of syncing files from the host machine to the guest machine, retrieve the files from inside the guest machine and have them exposed on the host machine.
Thanks.
Why not just put them in the /vagrant folder of your Vagrant VM? This is a special folder mounted from the host (where the Vagrantfile resides) into the guest.
This way you do not have to worry about doing any other copy operations between hosts.
$ ls
Vagrantfile
$ vagrant ssh
Welcome to Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-42-generic x86_64)
Last login: Wed Oct 12 12:05:53 2016 from 10.0.2.2
vagrant@vagrant-virtualbox:~$ ls /vagrant
Vagrantfile
vagrant@vagrant-virtualbox:~$ cd /vagrant
vagrant@vagrant-virtualbox:/vagrant$ touch hello
vagrant@vagrant-virtualbox:/vagrant$ exit
logout
Connection to 127.0.0.1 closed.
$ ls
Vagrantfile hello
$
Have you tried to scp files between your host and your virtual machine? As I remember, the SSH login and password are "vagrant".
Something like this could do the job:
scp vagrant#<vm_ip>:<path_to_file> <path_to_dest_on_your_host>
Using scp with the private key makes this much easier!
scp -i ./.vagrant/machines/default/virtualbox/private_key -r -P2222 vagrant#127.0.0.1:/home/vagrant/somefolder ./
You may want to try vagrant-rsync-back. I've not tried it yet.
Install Python, then run, e.g.
$ nohup python3.6 -m http.server &
in the output directory. I put this command in a file set to run always during provisioning. This solution required zero system configuration.

macOS - vagrant up failed, /dev/vboxnetctl: no such file or directory

This can be useful: I found this error. The common solution is to reinstall VirtualBox, but there is a better way.
Solution
sudo /Library/StartupItems/VirtualBox/VirtualBox restart
or
sudo /Library/StartupItems/VirtualBox/VirtualBox start
VirtualBox 4.3+
On recent versions, the file (/Library/StartupItems/VirtualBox/VirtualBox) doesn't exist, so you need to use the command below:
sudo launchctl load /Library/LaunchDaemons/org.virtualbox.startup.plist
Error
Screenshot: http://d.pr/i/1Bvi
There was an error while executing VBoxManage, a CLI used by Vagrant for controlling VirtualBox. The command and stderr is shown below
Command: ["hostonlyif", "create"]
Stderr: 0%... Progress state: NS_ERROR_FAILURE VBoxManage: error:
Failed to create the host-only adapter VBoxManage: error:
VBoxNetAdpCtl: Error while adding new interface: failed to open
/dev/vboxnetctl: No such file or directory
VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005),
component HostNetworkInterface, interface IHostNetworkInterface
VBoxManage: error: Context: "int handleCreate(HandlerArg*, int, int*)"
at line 68 of file VBoxManageHostonly.cpp
Vagrant Git issue about the error: https://github.com/mitchellh/vagrant/issues/1671#issuecomment-22304107
I'm running macOS High Sierra 10.13.1 and VirtualBox 5.2.2.
This worked for me:
Grant permission to VirtualBox under System Preferences > Security & Privacy > General (this request is new to macOS High Sierra)
Open Terminal and run: sudo "/Library/Application Support/VirtualBox/LaunchDaemons/VirtualBoxStartup.sh" restart
If your system recently updated the kernel, you may need to rerun the vbox setup again.
If this is the case, you will see the following messages when you run virtualbox start command:
$ sudo /path/to/virtualbox start
WARNING: The vboxdrv kernel module is not loaded. Either there is no module
available for the current kernel (2.6.32-358.23.2.el6.x86_64) or it failed to
load. Please recompile the kernel module and install it by
sudo /etc/init.d/vboxdrv setup
You will not be able to start VMs until this problem is fixed.
This worked for me (macOS Monterey). It reloads all of VirtualBox's kernel extensions.
sudo kmutil load -b org.virtualbox.kext.VBoxUSB
sudo kmutil load -b org.virtualbox.kext.VBoxNetFlt
sudo kmutil load -b org.virtualbox.kext.VBoxNetAdp
sudo kmutil load -b org.virtualbox.kext.VBoxDrv
I had some problems with vbox running on Ubuntu 17.10 when starting a virtual machine with a host-only adapter/bridged network. Looking for an answer, I found numerous commands that are useful when having that kind of problem. Here they are:
VIRTUAL HOST PROBLEMS
failed to open /dev/vboxnetctl
vboxnet0 - this is the bad guy who is causing all the trouble.
VBoxNetAdpCtl: Error while adding new interface: failed to open
/dev/vboxnetctl: No such file or directory.
These commands are not listed in any particular order. They are just generally useful and problem-solving.
1) sudo modprobe vboxdrv
2) sudo modprobe vboxnetadp - (host-only interface)
3) sudo modprobe vboxnetflt - (makes vboxnet0 accessible)
If you have problems with Secure Boot when running the first command, I recommend disabling Secure Boot in the BIOS (or rebooting).
modprobe: FATAL: Module vboxnetftl not found in directory
/lib/modules/4.13.0-21-generic
(bridge networking)
4) sudo apt-get install virtualbox-dkms (extension) -> go to command 1 after this
5) sudo vboxmanage hostonlyif create
These sometimes might work:
I. service --status-all
II. service service_name restart
I tried all the above remedies; a few commands, although they executed, did not work.
Nothing of the sort /Library/StartupItems/Vir* is present on my Mac (El Capitan), and the below command failed:
sudo /Library/StartupItems/VirtualBox/VirtualBox restart
Reinstalling the latest VirtualBox and then running the below command helped me get the VM running:
sudo launchctl load /Library/LaunchDaemons/org.virtualbox.startup.plist
I had a similar problem starting a virtual box on High Sierra.
macOS High Sierra 10.13 introduces a new feature that requires user approval before loading newly-installed third-party kernel extensions (KEXTs). When a request is made to load a KEXT that the user has not yet approved, the load request is denied. Apps or installers that treat a KEXT load failure as a hard error will need to be changed to handle this new case.
To resolve, you must manually approve the KEXT in System Preferences > Security & Privacy.
Here is the Technical Note from Apple:
https://developer.apple.com/library/content/technotes/tn2459/_index.html
I was stuck on this for a while. I kept seeing 'command not found' when trying to run the sudo /Library.. command.
However, this did work for me:
sudo /Library/Application\ Support/VirtualBox/LaunchDaemons/VirtualBoxStartup.sh restart
RUN
$ sudo modprobe vboxdrv
$ sudo modprobe vboxnetadp
$ sudo vboxreload
Thanks folks, it worked for me.
Grant permission to VirtualBox under System Preferences > Security & Privacy > General
Throw /Applications/VirtualBox into the trash
Re-install VirtualBox from your .dmg file
When I get the error...
There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.
Command: ["hostonlyif", "create"]
Stderr: 0%...
Progress state: NS_ERROR_FAILURE
VBoxManage: error: Failed to create the host-only adapter
VBoxManage: error: VBoxNetAdpCtl: Error while adding new interface: failed to open /dev/vboxnetctl: No such file or directory
VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component HostNetworkInterface, interface IHostNetworkInterface
VBoxManage: error: Context: "int handleCreate(HandlerArg*, int, int*)" at line 68 of file VBoxManageHostonly.cpp
The following works for me and returns no errors; I am then able to bring vagrant up successfully:
sudo /Library/StartupItems/VirtualBox/VirtualBox restart