First, my ultimate goal is to cross-compile OpenCV for ARM. I have tried two approaches, but with no success so far.
This question is about using distcc for compiling: running the make command on the target while taking advantage of a beefy server to speed things up.
Basically, the target doesn't seem to be sending jobs to the slave server.
I installed distcc on both machines (apt-get install distcc)
As I understand it, the daemon only needs to run on the slave.
I set up the hosts in /etc/distcc/hosts: that file contains the IPs of both the target (192.168.10.45) and the slave (192.168.10.34).
I run the daemon with
distccd --daemon --allow 192.168.10.45
to allow the target
With ps aux | grep distcc
I can see 32 instances of distccd running.
If I use
netstat -pant | grep distcc
I see the daemon listening
Now, if I tail the log file at /var/log/distccd.log, there is nothing there, and nothing happening
When I run a job on the target with
make -j33 CC=distcc
it seems to run fine, but I see nothing happening on the slave
ufw is disabled, and the two machines can ping and talk to each other via ssh.
What am I missing here?
You must define the list of compilation hosts (through the /etc/distcc/hosts file or through the DISTCC_HOSTS environment variable) on the master (target) machine. Check the host list by running distcc --show-hosts on the master.
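For reference, a minimal setup on the master could look like this (IP addresses taken from the question; adjust them to your network):

# /etc/distcc/hosts on the master (target): list the machines that should receive jobs;
# "localhost" keeps some of the work on the master itself.
192.168.10.34 localhost

# Or, equivalently, via the environment:
export DISTCC_HOSTS="192.168.10.34 localhost"

# Verify what distcc will actually use:
distcc --show-hosts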
Specify distcc as a compiler for C++ as well:
make -j33 CC=distcc CXX=distcc
Did you run:
sudo update-distcc-symlinks
The official installation documentation currently omits this step. I had the same symptoms and had some trouble finding the log, but eventually saw that I had to specify logging in an environment variable:
DISTCCD_OPTS="${DISTCCD_OPTS} --log-file /dev/shm/distccd.log"
Which said:
(dcc_warn_masquerade_whitelist) CRITICAL! /usr/local/lib/distcc not found. You must see up masquerade (see distcc(1)) to list whitelisted compilers or pass --enable-tcp-insecure. To set up masquerade automatically run update-distcc-symlinks.
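For completeness, a sketch of the corresponding commands on the slave (the masquerade directory may be /usr/lib/distcc or /usr/local/lib/distcc, depending on whether distcc was installed from a package or from source):

sudo update-distcc-symlinks
ls /usr/lib/distcc      # the masquerade symlinks (cc, gcc, g++, ...) should now exist
distccd --daemon --allow 192.168.10.45 --log-file /dev/shm/distccd.log      # same log option as above, passed directly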
I am using Code Server within my Cloud Shell. I need to use port 3000 for a specific npm package. Unfortunately, port 3000 is already used by the default editor Theia within Cloud Shell.
I have already tried the following:
sudo kill {{PID of Theia process}} ...but it restarts again immediately
searched for settings within /google/devshell/editor/theia ...but could not find any port settings
sudo netstat -tlnp gives the following output:
Any help is much appreciated.
As mentioned by JShinigami, that issue was resolved here by changing the port of the other application. An alternative way of resolving this issue is as follows:
First, I would recommend resetting your Cloud Shell.
You can refer to this answer to follow the steps on how to kill a process running on a particular port.
Option 1: a one-liner to kill only the process LISTENing on a specific port:
kill -9 $(lsof -t -i:3000 -sTCP:LISTEN)
Option 2: if you have npm installed, you can also run
npx kill-port 3000
I also found this answer on Stack Overflow that may be relevant, as it shows how they were able to kill the process once they obtained its PID.
Could you run the following command:
sudo netstat -tlnp
From the output you will be able to tell which processes are running on which ports. From there you may spot an "auto restart" configuration somewhere that causes the process to reappear even after a kill command.
Found this useful article on ways to list processes running on ports.
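For example, either of these commands (assuming lsof and ss are available, as they are in a standard Cloud Shell) will show which process is listening on port 3000:

sudo lsof -i :3000 -sTCP:LISTEN      # PID and command name of the listener
sudo ss -tlnp | grep ':3000'         # the same information via ss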
It is cloudshelledit that occupies the port.
If you don't need cloudshelledit, you can kill it off.
Note that if you open cloudshelledit again, the process will come back.
I want, when my CentOS 7 server boots, to run
echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse
because I want to reuse open connections.
Systemd is already installed, but my command is not a service, just a one-time command executed at startup.
How can I run this command automatically at startup? Thanks!
Add your command to
/etc/rc.d/rc.local
and it will run at startup.
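A minimal sketch of that approach on CentOS 7 (note that rc.local only runs at boot if it is marked executable):

sudo chmod +x /etc/rc.d/rc.local
echo 'echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse' | sudo tee -a /etc/rc.d/rc.local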
The correct way to make the changes persistent is to edit the file:
/etc/sysctl.conf
After the change, type:
sysctl -p
This will load the changes into the current session. The fact that the settings are in /etc/sysctl.conf will ensure that they load on reboots.
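Concretely, for the setting in the question, the line to add to /etc/sysctl.conf (or to a drop-in file under /etc/sysctl.d/) is:

net.ipv4.tcp_tw_reuse = 1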
I have a standard Debian 8.9 instance on Google Compute Engine (GCE) where my startup script is ignored.
In the custom metadata field, for startup-script, I am trying to run an Rscript (which is used for batch execution of R files), followed by a system shutdown, with the following:
#! /bin/bash
sudo /usr/bin/Rscript /home/myuser/launch_script.R
sudo shutdown -h now
Starting the instance is immediately followed by a shutdown and the Rscript is ignored. Removing the last line to shutdown causes the GCE instance to start, but the Rscript to be ignored. Running just "sudo /usr/bin/Rscript /home/myuser/launch_script.R" from the terminal results in the script being run. It has a chmod of 755, so I don't think this is a permissions issue.
In addition to this problem, I have read elsewhere that logging should happen in /var/log/, but there is nothing there. Instead, I have a bunch of log files (that only contain the start-up script and nothing else) in the root of my instance:
I got in touch with Google cloud support, who gave the following response:
script definition is kept under /var/run/google.startup.script
If the script does not run initially, you can force it manually with:
$ sudo google_metadata_script_runner --script-type startup    # for Debian
# sudo /usr/share/google/run-startup-scripts                  # on Ubuntu and older images
I'm posting this information here, because it is not in their documentation (as of August 2017). I'm not sure how helpful it is, since the google.startup.script didn't exist in my case (using the latest Debian image on GCE), but I did run the other commands.
However, I think my main issues were:
I was using autossh to connect to a remote database, and the startup-script was running before autossh. Building a 40-second delay into the script and running the script as a regular user (not as root) seems to have solved this problem for now (see the sketch after this list). Autossh was being run as the main user, which I think gets loaded before lower-privilege user-defined scripts get loaded.
I was using some gcloud commands from the user account which had its own authentication issues. Running gcloud auth login as the user and ensuring correct permissions on my private key solved this.
Always remember to check the messages and syslog files in /var/log for troubleshooting. This allowed me to see the order of things being loaded at system-boot.
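To illustrate the first point, a hypothetical version of the startup-script with the delay and the user switch built in (the 40-second value, the user name and the paths are just the ones from my question):

#! /bin/bash
# Give other services (e.g. autossh) time to come up before running the R script.
sleep 40
# Run the script as the unprivileged user rather than as root.
su - myuser -c "/usr/bin/Rscript /home/myuser/launch_script.R"
shutdown -h now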
My development environment has already been started after completing all the needed prerequisites:
vagrant up
vagrant ssh
make membersrvc
make peer
But when I try to start the membersrvc by running membersrvc after changing into the folder $GOPATH/src/github.com/hyperledger/fabric, it is not responding!
No response even after one hour!
Any suggestions?
This is exactly how membersrvc is supposed to behave: when you execute the membersrvc command you don't see any output whatsoever. However, you can verify that it is running by opening a separate terminal window and running the command
ps -a | grep membersrvc
Besides, as Sergey Balashevich commented, you also need to make sure that membersrvc is started and running before the peer process will be able to get a valid certificate, which means that you need to start both the membersrvc and peer processes in separate terminal windows simultaneously.
If you want to run all the processes in a single terminal window, you can execute them in the background, e.g. membersrvc > result 2>&1 &, which will start the process and redirect both stdout and stderr to a result file that you can specify. If you don't care about the output at all, you can use /dev/null instead of specifying a file.
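A sketch of that single-terminal approach (the log file names are just examples, and the exact peer subcommand may differ between Fabric versions):

membersrvc > /tmp/membersrvc.log 2>&1 &
sleep 5                                  # give membersrvc a moment to start listening
peer node start > /tmp/peer.log 2>&1 &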
So I'm having some adventures with the vagrant-aws plugin, and I'm now stuck on the issue of syncing folders. This is necessary to provision the machines, which is the ultimate goal. However, running vagrant provision on my machine yields
[root@vagrant-puppet-minimal vagrant]# vagrant provision
[default] Rsyncing folder: /home/vagrant/ => /vagrant
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
mkdir -p '/vagrant'
I'm almost positive the error is caused because ssh-ing manually and running that command yields 'permission denied' (obviously, a non-root user is trying to make a directory in the root directory). I tried ssh-ing as root, but it seems like bad practice (and Amazon doesn't like it). How can I change the folder to be rsynced with vagrant-aws? I can't seem to find the setting for that. Thanks!
Most likely you are running into the known vagrant-aws issue #72: Failing with EC2 Amazon Linux Images.
Edit 3 (Feb 2014): Vagrant 1.4.0 (released Dec 2013) and later versions now support the boolean configuration parameter config.ssh.pty. Set the parameter to true to force Vagrant to use a PTY for provisioning. Vagrant creator Mitchell Hashimoto points out that you must not set config.ssh.pty on the global config, you must set it on the node config directly.
This new setting should fix the problem, and you shouldn't need the workarounds listed below anymore. (But note that I haven't tested it myself yet.) See Vagrant's CHANGELOG for details -- unfortunately the config.ssh.pty option is not yet documented under SSH Settings in the Vagrant docs.
Edit 2: Bad news. It looks as if even a boothook will not be "faster" to run (to update /etc/sudoers.d/ for !requiretty) than Vagrant is trying to rsync. During my testing today I started seeing sporadic "mkdir -p /vagrant" errors again when running vagrant up --no-provision. So we're back to the previous point where the most reliable fix seems to be a custom AMI image that already includes the applied patch to /etc/sudoers.d.
Edit: Looks like I found a more reliable way to fix the problem. Use a boothook to perform the fix. I manually confirmed that a script passed as a boothook is executed before Vagrant's rsync phase starts. So far it has been working reliably for me, and I don't need to create a custom AMI image.
Extra tip: And if you are relying on cloud-config, too, you can create a Mime Multi Part Archive to combine the boothook and the cloud-config. You can get the latest version of the write-mime-multipart helper script from GitHub.
Usage sketch:
$ cd /tmp
$ wget https://raw.github.com/lovelysystems/cloud-init/master/tools/write-mime-multipart
$ chmod +x write-mime-multipart
$ cat boothook.sh
#!/bin/bash
SUDOERS_FILE=/etc/sudoers.d/999-vagrant-cloud-init-requiretty
echo "Defaults:ec2-user !requiretty" > $SUDOERS_FILE
echo "Defaults:root !requiretty" >> $SUDOERS_FILE
chmod 440 $SUDOERS_FILE
$ cat cloud-config
#cloud-config
packages:
- puppet
- git
- python-boto
$ ./write-mime-multipart boothook.sh cloud-config > combined.txt
You can then pass the contents of 'combined.txt' to aws.user_data, for instance via:
aws.user_data = File.read("/tmp/combined.txt")
Sorry for not mentioning this earlier, but I am literally troubleshooting this right now myself. :)
Original answer (see above for a better approach)
TL;DR: The most reliable fix is to "patch" a stock Amazon Linux AMI image, save it and then use the customized AMI image in your Vagrantfile. See below for details.
Background
A potential workaround is described (and linked in the bug report above) at https://github.com/mitchellh/vagrant-aws/pull/70/files. In a nutshell, add the following to your Vagrantfile:
aws.user_data = "#!/bin/bash\necho 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-vagrant-cloud-init-requiretty && chmod 440 /etc/sudoers.d/999-vagrant-cloud-init-requiretty\nyum install -y puppet\n"
Most importantly this will configure the OS to not require a tty for user ec2-user, which seems to be the root of the problem. I /think/ that the additional installation of the puppet package is not required for the actual fix (although Vagrant may use Puppet for provisioning the machine later, depending on how you configured Vagrant).
My experience with the described workaround
I have tried this workaround but Vagrant still occasionally fails with the same error. It might be a "race condition" where Vagrant happens to run its rsync phase faster than cloud-init (which is what aws.user_data is passing information to) can prepare the workaround for #72 on the machine for Vagrant. If Vagrant is faster you will see the same error; if cloud-init is faster it works.
What will work (but requires more effort on your side)
What definitely works is to run the command on a stock Amazon Linux AMI image, and then save the modified image (= create an image snapshot) as a custom AMI image of yours.
# Start an EC2 instance with a stock Amazon Linux AMI image and ssh-connect to it
$ sudo su - root
$ echo 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-vagrant-cloud-init-requiretty
$ chmod 440 /etc/sudoers.d/999-vagrant-cloud-init-requiretty
# Note: Installing puppet is mentioned in the #72 bug report but I /think/ you do not need it
# to fix the described Vagrant problem.
$ yum install -y puppet
You must then use this custom AMI image in your Vagrantfile instead of the stock Amazon one. The obvious drawback is that you are not using a stock Amazon AMI image anymore -- whether this is a concern for you or not depends on your requirements.
What I tried but didn't work out
For the record: I also tried to pass a cloud-config to aws.user_data that included a bootcmd to set !requiretty in the same way as the embedded shell script above. According to the cloud-init docs bootcmd is run "very early" in the startup cycle for an EC2 instance -- the idea being that bootcmd instructions would be run earlier than Vagrant would try to run its rsync phase. But unfortunately I discovered that the bootcmd feature is not implemented in the outdated cloud-init version of current Amazon's Linux AMIs (e.g. ami-05355a6c has cloud-init 0.5.15-69.amzn1 but bootcmd was only introduced in 0.6.1).
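For reference, such a bootcmd-based cloud-config would have looked roughly like this (a sketch of the approach just described; it only works with a cloud-init version that actually implements bootcmd, i.e. 0.6.1 or later):

#cloud-config
# bootcmd runs very early in the boot cycle, before Vagrant's rsync phase.
bootcmd:
 - echo 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-vagrant-cloud-init-requiretty
 - echo 'Defaults:root !requiretty' >> /etc/sudoers.d/999-vagrant-cloud-init-requiretty
 - chmod 440 /etc/sudoers.d/999-vagrant-cloud-init-requiretty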