I have just started my first Symfony project and I'm getting the above error in a more complex situation. To narrow it down I created a fresh project, but the error is there again :/
What I have done:
cd /var/www
sudo symfony new em
cd em
sudo mkdir src/AppBundle/Entity
sudo touch src/AppBundle/Entity/Company.php
sudo chmod -R 777 *
php bin/console doctrine:database:create   # db "symfony"
php bin/console doctrine:generate:entities AppBundle
sudo touch src/AppBundle/Controller/CompanyController.php
sudo chmod -R 777 *   # failed to write to cache
When navigating to http://em.at/app_dev.php/companies I get the error!
The error I get is
Uncaught PHP Exception Doctrine\ORM\ORMException: "Unknown Entity namespace alias 'AppBundle'."
so I ran
php bin/console cache:clear
which resulted in
[Symfony\Component\Filesystem\Exception\IOException]
Failed to remove file "/var/www/em/var/cache/de~/twig/2b/2bef18b12bca6cd9cbd50ce5b194c23fdc85bb73d29114f27fb3793b7f085d8a.php": .
so I deleted them manually
sudo rm -R cache/de~
sudo rm -R cache/dev
but the error persists. Does anyone know what's wrong?
My files:
<?php
// src/AppBundle/Entity/Company.php
namespace AppBundle\Entity;

use Doctrine\ORM\Mapping as ORM;

/**
 * @ORM\Entity
 * @ORM\Table(name="company")
 */
class Company
{
    /**
     * @ORM\Column(type="integer")
     * @ORM\Id
     * @ORM\GeneratedValue(strategy="AUTO")
     */
    private $id;

    /**
     * @ORM\Column(type="string", length=100)
     */
    private $name;
}
and
<?php
// src/AppBundle/Controller/CompanyController.php
namespace AppBundle\Controller;

use AppBundle\Entity\Company;
use Sensio\Bundle\FrameworkExtraBundle\Configuration\Route;
use Symfony\Bundle\FrameworkBundle\Controller\Controller;
use Symfony\Component\HttpFoundation\Request;

class CompanyController extends Controller
{
    /**
     * @Route("/companies", name="company_list")
     */
    public function listAction(Request $request)
    {
        $companies = $this->getDoctrine()->getRepository('Appbundle:Company')->findAll();

        // replace this example code with whatever you need
        return $this->render('company/list.html.twig', [
            'base_dir' => realpath($this->getParameter('kernel.root_dir').'/..').DIRECTORY_SEPARATOR,
            'companies' => $companies,
        ]);
    }
}
Did you set the file permissions on your var folder? From the above messages it looks like you may not have done that. Here's the reference you need:
https://symfony.com/doc/current/setup/file_permissions.html
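If your system supports ACLs, the approach from that page boils down to something like this (a sketch taken from the Symfony docs; the first line guesses the web server user and assumes it runs as one of the usual accounts):
HTTPDUSER=$(ps axo user,comm | grep -E '[a]pache|[h]ttpd|[_]www|[w]ww-data|[n]ginx' | grep -v root | head -1 | cut -d\  -f1)
sudo setfacl -R -m u:"$HTTPDUSER":rwX -m u:$(whoami):rwX var
sudo setfacl -dR -m u:"$HTTPDUSER":rwX -m u:$(whoami):rwX var
That grants the web server and your CLI user write access to var/ without resorting to chmod -R 777.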
Also, just in case run:
php bin/symfony_requirements
to make sure you've met all other requirements.
This will save you a lot of grief.
I am facing the below issue:
WordPress 5.7.2. Trying to install the required extensions for the UNYSON framework, and getting the below error:
Install Extension  Install theme compatible extensions
Downloading the "Shortcodes" extension...
Cannot create temporary directory:
Return to the Extensions page
I tried:
sudo chown -R bitnami:daemon /opt/bitnami/wordpress/wp-content/plugins
Still no luck. Please help.
I was able to fix the error by adding the line
define('WP_CONTENT_DIR', realpath(ABSPATH . './wp-content/'));
under the line
if ( ! defined( 'ABSPATH' ) ) { define( 'ABSPATH', __DIR__ . '/' ); }
in wp-config.php.
The final code looks like this:
if ( ! defined( 'ABSPATH' ) ) { define( 'ABSPATH', __DIR__ . '/' );}
define('WP_CONTENT_DIR', realpath(ABSPATH . './wp-content/'));
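To confirm the constant now points at a real directory, a quick check (assuming WP-CLI is available, as it is on Bitnami images) is:
cd /opt/bitnami/wordpress
wp eval 'echo WP_CONTENT_DIR . PHP_EOL;'   # should print .../wp-content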
Let's reset the WordPress permissions:
sudo chown -R bitnami:daemon /opt/bitnami/wordpress
sudo chown -R bitnami:daemon /bitnami/wordpress/
sudo chmod -R g+w /opt/bitnami/wordpress
sudo chmod -R g+w /bitnami/wordpress/
sudo chmod 644 /bitnami/wordpress/wp-config.php
Can you install the WordPress plugin now?
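If it still fails, it's worth verifying that the ownership and group-write bits actually took effect, for example:
ls -ld /opt/bitnami/wordpress/wp-content   # expect bitnami:daemon with group write
ls -l /bitnami/wordpress/wp-config.php     # expect mode 644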
I solved the problem.
Just go to the Linux command line and open the config file with:
vim /opt/bitnami/wordpress/wp-config.php
Then find this piece of code:
/** Absolute path to the WordPress directory. */
if ( ! defined( 'ABSPATH' ) ) {
    define( 'ABSPATH', __DIR__ . '/' );
}
Press 'i' to insert a new line of code right below that one, and write:
define('WP_CONTENT_DIR',realpath(ABSPATH.'/wp-content/'));
As a result you should have:
/* That's all, stop editing! Happy publishing. */

/** Absolute path to the WordPress directory. */
if ( ! defined( 'ABSPATH' ) ) {
    define( 'ABSPATH', __DIR__ . '/' );
}

define('WP_CONTENT_DIR',realpath(ABSPATH.'/wp-content/'));
Then just press 'Esc' to stop editing, then :w to save (write) the changes, and lastly :q to quit the file.
Now you should be able to run the installation/update successfully.
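A quick way to double-check that the edit landed where you expect:
grep -n "WP_CONTENT_DIR" /opt/bitnami/wordpress/wp-config.php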
I'm not an expert on AWS and I'm trying to fool around with cron jobs. For testing, I had a sample script send me emails every minute. Now I want to change it to once every 10 minutes (*/10 * * * *). These are the container commands I tried, and none of them seems to work.
I am using a config file and a txt file to define the crons.
Config file contents (with various ideas I read from online sources):
container_commands:
  00_remove_old_cron_jobs0:
    command: "rm -fr /etc/cron.d/cron_job"
  01_remove_old_cron_jobs1:
    command: "sudo sed -i 's/empty stuff//g' /etc/cron.d/cron_job"
  02_remove_old_cron_jobs2:
    command: "crontab -r || exit 0"
  03_cron_job:
    command: "cat .ebextensions/cron_job.txt > /etc/cron.d/cron_job && chmod 644 /etc/cron.d/cron_job"
    leader_only: true
cron_job.txt file contents:
# The newline at the end of this file is extremely important. Cron won't run without it.
0 * * * * ec2-user /usr/bin/php -q /var/www/html/cron1.php > /dev/null
0 * * * * ec2-user /usr/bin/php -q /var/www/html/html/cron2.php > /dev/null
*/10 * * * * ec2-user /usr/bin/php -q /var/www/html/cronTestEmailer.php > /dev/null
The test emailer script keeps firing every minute instead of every 10 minutes, and I don't know how to make sure my cron updates are picked up correctly.
You can achieve the same with the following .ebextensions config file.
files:
  "/etc/cron.d/mycron":
    mode: "000644"
    owner: root
    group: root
    content: |
      * * * * * root /usr/local/bin/myscript.sh

  "/usr/local/bin/myscript.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash

      date > /tmp/date
      # Your actual script content

      exit 0

commands:
  remove_old_cron:
    command: "rm -f /etc/cron.d/*.bak"
More details about the config file below:
files: creates the cron entry and the myscript.sh script. If a file with the same name already exists, it first moves the old file to a .bak and then creates the file with the new contents.
commands: deletes all the .bak files.
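Once deployed, you can check that the job was installed and is firing (a sketch; eb ssh assumes you use the EB CLI, and /var/log/cron is where cron logs on Amazon Linux):
eb ssh
sudo cat /etc/cron.d/mycron   # the installed cron entry
cat /tmp/date                 # myscript.sh output, refreshed every minute
sudo tail /var/log/cron       # cron's own log on Amazon Linux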
I'm trying to deploy a Single Node File Server as referred to in the instructions here: https://cloud.google.com/solutions/using-tensorflow-jupyterhub-classrooms
When I follow the instructions, the instance appears to come up OK, but NFS does not appear to be running. When I try to mount from another instance with
sudo mount -t nfs jupyterhub-filer-vm:/jupyterhub /mnt
I get
mount.nfs: Connection timed out
When I inspect the filer instance from the Compute Engine UI (https://console.cloud.google.com/compute/instancesDetail/zones/us-east1-d/instances/jupyterhub-filer-vm), I see
Custom metadata
ADMIN_PASSWORD xxx
ATTACHED_DISKS jupyterhub-filer-vm-jupyterhub
C2D_STATUS DEPLOYMENT_FAILED
ENABLE_NFS enable:True
ENABLE_SMB enable:False
FILE_SYSTEM xfs
STORAGE_POOL_NAME jupyterhub
The documentation suggests
gcloud compute ssh --ssh-flag=-L3000:localhost:3000 --project=workpop-dev --zone us-east1-d jupyterhub-filer-vm
and then accessing localhost:3000 in a browser to access a performance dashboard. The ssh command connects me to the instance, but the browser returns ERR_EMPTY_RESPONSE and in the ssh session I see
channel 4: open failed: connect failed: Connection refused.
Within the ssh session, I try
$ ps -e | grep nfs
and it returns nothing.
$ cat /etc/exports
returns a default file containing only comments.
So I look for the disk with $ sudo find / -name "jupyterhub*" but this returns nothing. Poking around some more, I see /opt/c2d/setup.log, which has the following lines at the end:
VIRTUAL_IP =
+ readonly ZFS_KERNEL_CONFIG=/etc/modprobe.d/zfs.conf
+ ZFS_KERNEL_CONFIG=/etc/modprobe.d/zfs.conf
+ networks=(10.0.0.0/8 127.0.0.1)
+ readonly networks
+ readonly DISK_PREFIX=/dev/disk/by-id/google
+ DISK_PREFIX=/dev/disk/by-id/google
+ readonly DATA_DEVICE=/dev/disk/by-id/google-jupyterhub-filer-vm-data
+ DATA_DEVICE=/dev/disk/by-id/google-jupyterhub-filer-vm-data
+ [[ xfs = \z\f\s ]]
+ [[ -n '' ]]
+ case "${FILE_SYSTEM}" in
+ mkfs.xfs -L jupyterhub /dev/disk/by-id/google-jupyterhub-filer-vm-data
/dev/disk/by-id/google-jupyterhub-filer-vm-data: No such file or directory
Usage: mkfs.xfs
/* blocksize */ [-b log=n|size=num]
/* metadata */ [-m crc=0|1,finobt=0|1]
/* data subvol */ [-d agcount=n,agsize=n,file,name=xxx,size=num,
(sunit=value,swidth=value|su=num,sw=num|noalign),
sectlog=n|sectsize=num
/* force overwrite */ [-f]
/* inode size */ [-i log=n|perblock=n|size=num,maxpct=n,attr=0|1|2,
projid32bit=0|1]
/* no discard */ [-K]
/* log subvol */ [-l agnum=n,internal,size=num,logdev=xxx,version=n
sunit=value|su=num,sectlog=n|sectsize=num,
lazy-count=0|1]
/* label */ [-L label (maximum 12 characters)]
/* naming */ [-n log=n|size=num,version=2|ci,ftype=0|1]
/* no-op info only */ [-N]
/* prototype file */ [-p fname]
/* quiet */ [-q]
/* realtime subvol */ [-r extsize=num,size=num,rtdev=xxx]
/* sectorsize */ [-s log=n|size=num]
/* version */ [-V]
devicename
<devicename> is required unless -d name=xxx is given.
<num> is xxx (bytes), xxxs (sectors), xxxb (fs blocks), xxxk (xxx KiB),
xxxm (xxx MiB), xxxg (xxx GiB), xxxt (xxx TiB) or xxxp (xxx PiB).
<value> is xxx (512 byte blocks).
At this point, I'm convinced that something has gone wrong, but I don't know how to fix it. Can anyone help?
There is an issue with the disk name.
Try it with the default value: Storage Name = data
(It finished the setup for me without an error and localhost:3000 loads correctly. I'm not sure whether it causes errors later in the lab.)
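That matches the setup log above: the script expects a data device suffixed -data (DATA_DEVICE=/dev/disk/by-id/google-jupyterhub-filer-vm-data), while the attached disk is named jupyterhub-filer-vm-jupyterhub (see ATTACHED_DISKS), so mkfs.xfs fails. After redeploying with Storage Name = data, you can re-test from a client instance (a sketch; assumes the export is named after the pool and NFS client tools are installed):
showmount -e jupyterhub-filer-vm            # should list the export now
sudo mount -t nfs jupyterhub-filer-vm:/data /mnt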
Relatively new to running cron jobs in Centos6, I can't seem to get this Python script to execute properly. I would like this script to execute and then email me the output. I have been receiving emails, but they're empty.
So far, in Crontab I've tried entering:
*/10 * * * * cd /home/local/MYCOMPANY/purrone/MyPythonScripts_Dev1 && /usr/bin/python ParserScript_CampusInsiders.py > /var/log/cron`date +\%Y-\%m-\%d-\%H:\%M:\%S`-cron.log 2>&1 ; mailx -s "Feedparser Output" my@email.com
and
*/10 * * * * /home/local/MYCOMPANY/purrone/MyPythonScripts_Dev1/ParserScript_CampusInsiders.py > /var/log/cron`date +\%Y-\%m-\%d-\%H:\%M:\%S`-cron.log 2>&1 ; mailx -s "Feedparser Output" my@email.com
I have run chmod +x on the python script to make the script executable and the Python script has #!/usr/bin/env python at the header. What am I doing wrong here?
The other problem might be that I shouldn't be using the log file? All I see in /var/log/cron (opened with cat) is entries like this, for example (no actual output from the script):
Jul 23 13:20:01 ent-mocdvsmg01 CROND[24681]: (root) CMD (/usr/lib64/sa/sa1 1 1)
Jul 23 13:20:01 ent-mocdvsmg01 CROND[24684]: (MYJOB\purrone) CMD (/home/local/MYCOMPANY/purrone/MyPythonScripts_Dev1/ParserScript_CampusInsiders.py > /var/log/cron`date +\%Y-\%m-\%d-\%H:\%M:\%S`-cron.log 2>&1 ; mailx -s "Feedparser Output" my@email.com)
There is nothing going into your mailx input; it expects the message on stdin. Try running it outside of crontab as a test until it sends a valid email. You could test with:
% echo hello | mailx -s test my@email.com
Note that cron can email you the output of its run. You just need to add a line to the top of crontab like:
MAILTO=you@email.com
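Putting that together with the paths from the question, a minimal crontab could look like this; cron then mails you whatever the script prints, and the mailx pipe and the redirect go away:
MAILTO=my@email.com
*/10 * * * * cd /home/local/MYCOMPANY/purrone/MyPythonScripts_Dev1 && /usr/bin/python ParserScript_CampusInsiders.py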
The solution was to omit the redirect > and instead edit the crontab like this:
*/15 * * * * /home/local/COMPANY/malvin/SilverChalice_CampusInsiders/SilverChalice_Parser.py | tee /home/local/COMPANY/malvin/SilverChalice_CampusInsiders`date +\%Y-\%m-\%d-\%H:\%M:\%S`-cron.log | mailx -s "SilverChalice CampusInsiders" my@email.com
This works because tee writes the script's output to the log file and still passes it through to mailx on stdin, so the mail actually has a body.
I have an AMI which needs a username/password for login via SSH. I want to create new AMIs from it in which I can log in with any newly created key pair.
Any suggestions?
I'm not sure what AMI allows username/password login, but when you create an instance from an AMI, you need to specify a key pair.
That key will be ADDED to the authorized_keys for the default user (ec2-user for Amazon Linux, ubuntu for the Ubuntu AMI, etc).
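For completeness, the key pair is chosen at launch time; with the AWS CLI that looks like this (a sketch with placeholder IDs):
aws ec2 run-instances --image-id ami-12345678 --instance-type t2.micro --key-name my-keypair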
Why don't you just add the user/password to the instance and then build your AMI from there? You can then change your /etc/ssh/sshd_config and permit password logins with PasswordAuthentication yes. By the way, username/password authentication is not recommended for servers in the cloud because of man-in-the-middle attacks (use it at your own risk).
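A minimal sketch of those steps on the instance before you bake the AMI (the user name is just an example, the sed assumes sshd_config contains an explicit "PasswordAuthentication no" line, and the restart command varies by distribution):
# create a user and set its password
sudo useradd -m deploy
sudo passwd deploy
# allow password logins over SSH, then restart the daemon
sudo sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
sudo service sshd restart   # or: sudo systemctl restart sshd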
Not sure if I understand the question fully, but if you want to change the behavior of the instance when it boots up, I suggest you look at tinkering with cloud-init. The configuration on the instance is under /etc/cloud/cloud.cfg. For example, on Ubuntu the default says something like this:
user: ubuntu
disable_root: 1
preserve_hostname: False
...
If you want to change the default user, you can change it there:
user: <myuser>
disable_root: 1
preserve_hostname: False
...
The simplest way to do this is by adding the following snippet to /etc/rc.local or its equivalent.
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local

if [ ! -d /root/.ssh ] ; then
    mkdir -p /root/.ssh
    chmod 0700 /root/.ssh
fi

# Fetch public key using HTTP
curl -f http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key > /tmp/aws-key 2>/dev/null
if [ $? -eq 0 ] ; then
    cat /tmp/aws-key >> /root/.ssh/authorized_keys
    chmod 0600 /root/.ssh/authorized_keys
fi
rm -f /tmp/aws-key

# or fetch public key using the file in the ephemeral store:
if [ -e /mnt/openssh_id.pub ] ; then
    cat /mnt/openssh_id.pub >> /root/.ssh/authorized_keys
    chmod 0600 /root/.ssh/authorized_keys
fi
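One caveat: on many distributions /etc/rc.local must be executable for init to run it, so if you create or edit the file, also run:
sudo chmod +x /etc/rc.local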