How to set jetty.home in Ubuntu 16.04 for CKAN - jetty

I'm following the "Installing CKAN from source" guide. At the step where the Jetty service is started with sudo service jetty start, it doesn't work; it prints "Failed to start jetty.service: Unit jetty.service not found".
If instead of that command I use sudo /etc/init.d/jetty8 start, the server starts correctly.
So my guess (not totally sure) is that jetty.home is not set properly.
For what it's worth, I'm using Ubuntu 16.04, running in VirtualBox.
Thanks in advance to anyone who can help me.
P.S.: If additional information is needed, please let me know.

For Ubuntu 16, just run sudo systemctl unmask jetty8 and then sudo service jetty8 start.

If sudo /etc/init.d/jetty8 start works, then you should be able to use
sudo service jetty8 start
(note the use of jetty8 instead of jetty).
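To confirm which service names actually exist on your machine, a quick check along these lines should be enough (a minimal sketch, assuming the systemd-based Ubuntu 16.04 setup described above):
ls /etc/init.d/ | grep -i jetty            # init scripts, e.g. jetty8
systemctl list-unit-files | grep -i jetty  # systemd units, including masked ones
sudo systemctl unmask jetty8               # only needed if the unit shows up as masked
sudo service jetty8 start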


GCP VM not installing nVidia driver properly

I created the VM using the GCP Console in the browser.
While creating the VM, I selected the image "c2-deeplearning-pytorch-1-8-cu110-v20210619-debian-10" and a T4 GPU.
The VM gets created and started, and it shows a green icon in the browser.
Then I try to connect with "gcloud compute ssh"; it asks if I want to install the NVIDIA driver, I answer Y, and then it reports a lock-file error and the driver is not installed:
This VM requires Nvidia drivers to function correctly. Installation
takes ~1 minute. Would you like to install the Nvidia driver? [y/n] y
Installing Nvidia driver. install linux headers:
linux-headers-4.19.0-16-cloud-amd64 E: dpkg was interrupted, you must
manually run 'sudo dpkg --configure -a' to correct the problem.
Nvidia driver installed.
I try to verify whether the driver is installed by running this Python code:
import torch
torch.cuda.is_available()  # returns False
Anybody else faced this issue?
This is the correct way to install the NVIDIA driver on a GCP instance:
cd /
sudo apt purge nvidia-*
Reboot
cd /
sudo wget https://developer.download.nvidia.com/compute/cuda/11.2.2/local_installers/cuda_11.2.2_460.32.03_linux.run
sudo sh cuda_11.2.2_460.32.03_linux.run
Adjust the configuration as the installer presents options in the terminal.
Reboot
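After the final reboot, it is worth confirming that the driver is actually visible before going back to your code; a quick check along these lines (a sketch, assuming python3 and PyTorch from the Deep Learning image are on the PATH) should do:
nvidia-smi                                                    # should list the T4 and the driver version
python3 -c "import torch; print(torch.cuda.is_available())"   # should print True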
The solution to my problem was:
Run sudo dpkg --configure -a manually.
Disconnect from the machine.
Connect again using SSH and select Y again when asked to install the NVIDIA driver.
It works after that.
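In command form, the recovery looks roughly like this (a sketch; the instance and zone names are placeholders for your own VM):
sudo dpkg --configure -a                              # finish the interrupted package configuration
exit                                                  # disconnect from the VM
gcloud compute ssh my-instance --zone=us-central1-a   # reconnect and answer Y to the driver prompt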
Make sure you are running as root. I know this sounds silly, but on the notebook instances the default user is not root, so if you SSH into the instance and run something like gpustat or your own code, you might get errors such as "NVIDIA drivers are not loaded".
If you make sure your user (called jupyter in the default case) is in the sudoers, then all will work fine.
Installing or reinstalling GPU drivers on GCP instances is often complicated. Make sure you actually need to reinstall before you attempt other solutions.
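Before reinstalling anything, a quick way to see whether a working driver is already loaded (a minimal sketch using standard tools) is:
nvidia-smi            # lists the GPU and driver version if the driver is working
lsmod | grep nvidia   # shows whether the kernel module is loaded at all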

Google Material Design Components on Ubuntu Server on Google Cloud

I cannot get Material Design Components to run on my virtual server. I have tried following their "quick start" page and their Material basics (Web 101) course to no avail. I am able to execute most of the steps in either tutorial, but I cannot see the JavaScript apply to the page. What am I doing wrong? I will detail my process below so that someone can hopefully spot my mistake.
First I create a VM instance on the Google Cloud Platform. It is an Ubuntu 18.04 LTS image with 1 CPU, 3.75 GB memory, and HTTP/HTTPS traffic allowed on the firewall.
Then I install Node.js and NPM on the machine.
sudo apt-get update
sudo apt-get install nodejs
sudo apt-get install npm
Then I clone the codelab from GitHub. (following Web 101 in this example)
git clone https://github.com/material-components/material-components-web-codelabs
...and navigate to the pertinent directory.
cd material-components-web-codelabs/mdc-101/starter
In that directory, I install the project's dependencies with NPM.
npm install
The install works just fine, except for one optional dependency, "/chokidar/fsevents", which is apparently only for Mac OS X anyway.
From the same directory, I start the dev server.
npm start
At this point, the tutorial says I should be able to reach the site. It says to navigate to http://localhost:8080/. Since I am installing this on a remote, cloud server, I replace "localhost" with the server's external IP. I invariably get a timeout error from the browser.
Ensure that port 8080 is open and listening inside the VM instance by running the telnet, nmap, or netstat commands:
$ telnet localhost 8080
$ nmap <external-ip-vm-instance>
$ netstat -plant
If it is not listening, then the application was not installed correctly.
Look at the firewall rules in GCP to make sure that the VM instance allows ingress traffic to port 8080.
Since you are running Ubuntu, make sure that the default Ubuntu firewall (ufw) is not blocking port 8080. If it is, allow access to port 8080 by running the following command:
$ sudo ufw allow 8080/tcp
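If the GCP firewall rule is the missing piece, it can also be created from the command line (a sketch; the rule name is an arbitrary placeholder, and the rule below opens the port to the whole internet, which you may want to restrict):
gcloud compute firewall-rules create allow-dev-8080 --direction=INGRESS --allow=tcp:8080 --source-ranges=0.0.0.0/0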

ColdFusion 2016 Installation

I can't run ColdFusion 2016 after installing it. I open a terminal window and type in this command:
/Applications/Coldfusion2016/cfusion/bin/coldfusion start
After that, it prompted me to start it with sudo ./coldfusion start:
You must be the root user to configure the ColdFusion connector.
Start ColdFusion as "sudo ./coldfusion start" to configure the
connector. Once connector has been configured, start ColdFusion as
"./coldfusion start" to run ColdFusion as non-root user".
I did all the steps, however it still failed to run. Can anyone help me with this problem? I'd greatly appreciate your answer.
You literally need to add sudo to the beginning of your start command. macOS requires admin access to start the server.
sudo /Applications/Coldfusion2016/cfusion/bin/coldfusion start
When this runs, you'll be prompted for your admin password before it starts.
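Per the message ColdFusion prints, only the first start (which configures the connector) has to run as root; afterwards it can be started as a normal user. Roughly (a sketch of that flow):
sudo /Applications/Coldfusion2016/cfusion/bin/coldfusion start   # first start: configures the connector, prompts for the admin password
/Applications/Coldfusion2016/cfusion/bin/coldfusion start        # subsequent starts: run as a non-root user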

How to enable opcache in PHP7

Hi, I just installed PHP7 on a CentOS 7 server. I was wondering if I still need to install OPcache on this server, or does it come bundled with PHP7? If it's already bundled, how can I enable it? I tried the config below and restarted nginx and php-fpm, but no luck:
zend_extension=opcache.so
opcache.memory_consumption=128
opcache.interned_strings_buffer=8
opcache.max_accelerated_files=4000
opcache.revalidate_freq=60
opcache.fast_shutdown=1
opcache.enable_cli=1
opcache.enable=1
But when I search for the file opcache.so there are no results. Does that mean I have to install it? And when I check my phpinfo, here's what I get (screenshot not included).
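A quick way to tell whether the Zend OPcache extension is present and loaded at all is to ask the PHP CLI (a sketch; the php-opcache package name at the end is an assumption about the CentOS 7 repositories, so check what your PHP repo actually provides):
php -m | grep -i opcache       # prints "Zend OPcache" if the extension is loaded
php --ini                      # shows which ini files (e.g. an opcache ini) are being read
sudo yum install php-opcache   # if the module is missing, it is usually shipped as a separate package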

Cannot connect icecream (icecc) on Fedora

I can't manage to get an icecc daemon to connect to the local icecc-scheduler from any machine running Fedora 20.
I've had no issues setting this up on 5 different Ubuntu 14.04 machines, and each can run the scheduler without problems. In fact, it appears to work out of the box on Ubuntu with no additional config: simply install and play.
In those cases, on Ubuntu:
sudo apt-get install icecc
sudo service iceccd start
And on one of the machines
sudo service icecc-scheduler start
Then I simply set the path and build like so:
export PATH=/usr/lib/icecc/bin:$PATH
make -j16
This is all that is needed to get the distributed compile working on Ubuntu as far as I can see.
On Fedora, to install and start it I use:
sudo yum install icecream.x86_64
sudo systemctl start iceccd
And I compile with:
export PATH=/usr/libexec/icecc/bin:$PATH
make -j16
This doesn't distribute the compile.
The icemon utility on the scheduler does not show any evidence of the Fedora machine either, and checking the status of the iceccd service gives this error:
Jul 21 09:44:08 Fedora20 iceccd[4642]: [4642] 09:44:08: scheduler not yet found.
So far, the only thing I've tried that might have been the issue is opening up the ports the README lists by adding them to the Zones -> Ports part of Firewall Configuration, but this hasn't helped.
Maybe there is something I need to do on the Ubuntu scheduler and daemons? Has anyone else had any luck with setting up icecream on Fedora 20?
For other future devs who might come here from Google:
To get icecc working I edited the /usr/lib/systemd/system/icecc/iceccd-wrapper file by adding two arguments to the iceccd command.
-s <scheduler> -m <number of jobs>
Then, when running the following command
sudo systemctl start iceccd
the daemon starts up and is seen by the scheduler.
Remember that the ports also need to be open!
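On Fedora that means opening the ports from the icecream README in firewalld; something like the following (a sketch — the port numbers below are the ones commonly documented for icecream, so verify them against the README for your version):
sudo firewall-cmd --permanent --add-port=10245/tcp   # daemon
sudo firewall-cmd --permanent --add-port=8765/tcp    # scheduler
sudo firewall-cmd --permanent --add-port=8765/udp    # scheduler broadcast
sudo firewall-cmd --permanent --add-port=8766/tcp    # scheduler telnet interface
sudo firewall-cmd --reload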
Instead of editing either /usr/lib/systemd/system/icecc/iceccd-wrapper (as proposed by foips) or /usr/lib/systemd/system/iceccd.service itself, I found it more convenient to modify the global icecream settings file /etc/sysconfig/icecream and set
# If the daemon can't find the scheduler by broadcast (e.g. because
# of a firewall) you can specify it.
#
ICECREAM_SCHEDULER_HOST="<scheduler>"
On Ubuntu 20.04 with icecc 1.3.1, the config file is /etc/icecc/icecc.conf and the setting is called ICECC_SCHEDULER_HOST. You need to put the scheduler IP there.
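For example, the relevant part of the Ubuntu config would look something like this (a sketch; the address is a placeholder for your own scheduler, and the daemon has to be restarted afterwards so it picks the setting up):
# /etc/icecc/icecc.conf
ICECC_SCHEDULER_HOST="192.168.1.10"

sudo systemctl restart iceccd   # unit name may vary; on Debian/Ubuntu packages it is typically iceccd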