Fatal error: require_once(): Failed opening required 'C:\xampp\htdocs/system/startup.php' (include_path='.;N:\New folder\php\PEAR') in N:\New folder\htdocs\index.php on line 23...
This error started showing on my localhost server. I had to uninstall XAMPP from the C: drive, but I took a backup of the htdocs and MySQL folders first. After reinstalling XAMPP on the N: drive and copying the old htdocs and MySQL folders into the new installation, this error appeared.
The second problem I am facing is that MySQL is not starting from the XAMPP control panel, whereas
Apache runs without any issue.
What can be the reason for this error?
OpenCart uses two config.php files: one at the root and another in the admin folder. These files define some global constants, including the paths to the installation folder. When you installed OpenCart, it automatically found the paths and wrote them into both config files.
But when you change the location of your OpenCart installation (moving it on disk, uploading it to a server), these paths are not updated automatically, and you need to do this manually:
// DIR
define('DIR_APPLICATION', '/path/to/public_html/catalog/');
define('DIR_SYSTEM', '/path/to/public_html/system/');
define('DIR_DATABASE', '/path/to/public_html/system/database/');
define('DIR_LANGUAGE', '/path/to/public_html/catalog/language/');
define('DIR_TEMPLATE', '/path/to/public_html/catalog/view/theme/');
define('DIR_CONFIG', '/path/to/public_html/system/config/');
define('DIR_IMAGE', '/path/to/public_html/image/');
define('DIR_CACHE', '/path/to/public_html/system/cache/');
define('DIR_DOWNLOAD', '/path/to/public_html/download/');
define('DIR_LOGS', '/path/to/public_html/system/logs/');
E.g. if you have an old path like this one:
define('DIR_APPLICATION', 'c:/htdocs/catalog/');
and you have now moved the installation to n:/New folder, change all the paths accordingly, e.g.
define('DIR_APPLICATION', 'n:/New folder/htdocs/catalog/');
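If many paths need changing, a single search-and-replace is less error-prone than editing each line by hand. A minimal sketch, assuming a Unix-like shell (e.g. Git Bash on Windows); config.sample here is a throwaway stand-in file, not part of OpenCart:

```shell
# Demo in a throwaway file: rewrite the old XAMPP base path to the new one.
# Against a real install you would run the sed line on both config.php
# and admin/config.php (back them up first).
echo "define('DIR_APPLICATION', 'c:/htdocs/catalog/');" > config.sample
sed -i 's|c:/htdocs|n:/New folder/htdocs|g' config.sample
cat config.sample   # now points at n:/New folder/htdocs/catalog/
```

Using | as the sed delimiter avoids having to escape the / characters in the paths.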
Related
I'm new to OpenCart. I understand the security benefits of renaming the admin folder and renaming all instances of 'admin' in the config file. My question is: if, for example, I install a payment extension that comes with its own admin and catalog folders to be merged, do I also rename the extension's admin folder to reflect the new change?
The .htaccess file isn't an option for now, as my IP address isn't static.
Thanks for your anticipated reply.
Steps to Change OpenCart Admin Dashboard URL & Folder
Log into your hosting account's cPanel, or connect via FTP.
Navigate to the folder containing the “admin” folder. It is usually “public_html” or “/var/www/html”.
Right-click the “admin” folder and choose the “rename” option from the dropdown.
Enter a new name for the “admin” folder. Use an uncommon name which is hard to guess and completely unrelated to your business (e.g. “STA22R1”, “ROCKETSCIENCE74851”).
Now edit config.php inside the renamed folder (formerly /admin/config.php) and replace ALL instances of the word ‘admin’ with the new folder name you chose in the step above.
// HTTP
define('HTTP_SERVER', 'http://test.domain.com/opencart/admin/'); // 'admin' → your new folder name
define('HTTP_CATALOG', 'http://test.domain.com/opencart/');
// HTTPS
define('HTTPS_SERVER', 'http://test.domain.com/opencart/admin/'); // 'admin' → your new folder name
define('HTTPS_CATALOG', 'http://test.domain.com/opencart/');
// DIR
define('DIR_APPLICATION', '/home/userna5/public_html/opencart/admin/'); // 'admin' → your new folder name
define('DIR_SYSTEM', '/home/userna5/public_html/opencart/system/');
define('DIR_DATABASE', '/home/userna5/public_html/opencart/system/database/');
define('DIR_LANGUAGE', '/home/userna5/public_html/opencart/admin/language/'); // 'admin' → your new folder name
define('DIR_TEMPLATE', '/home/userna5/public_html/opencart/admin/view/template/'); // 'admin' → your new folder name
define('DIR_CONFIG', '/home/userna5/public_html/opencart/system/config/');
define('DIR_IMAGE', '/home/userna5/public_html/opencart/image/');
define('DIR_CACHE', '/home/userna5/public_html/opencart/system/cache/');
define('DIR_DOWNLOAD', '/home/userna5/public_html/opencart/download/');
define('DIR_LOGS', '/home/userna5/public_html/opencart/system/logs/');
define('DIR_CATALOG', '/home/userna5/public_html/opencart/catalog/');
// DB
define('DB_DRIVER', 'mysqli');
define('DB_HOSTNAME', 'localhost');
define('DB_USERNAME', 'username_example');
define('DB_PASSWORD', 'password');
define('DB_DATABASE', 'username_example');
define('DB_PREFIX', 'oc_');
If you are using vQmod, you will have to update the pathReplaces.php file, located in the /vqmod directory. The code to rename the ‘admin’ folder should already be there. It would be similar to the code shown below:
$replaces[] = array('~^admin\b~', 'backend');
Replace the word backend with the new folder name you chose in the steps above.
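The whole rename-and-replace can also be done from a shell. A minimal sketch in a scratch directory ("STA22R1" is just the example name from above, and the echoed line stands in for a real config.php):

```shell
# Demo: rename the admin folder, then rewrite 'admin' inside its config.php.
mkdir -p oc-demo/admin
echo "define('HTTP_SERVER', 'http://test.domain.com/opencart/admin/');" > oc-demo/admin/config.php
mv oc-demo/admin oc-demo/STA22R1
sed -i 's/admin/STA22R1/g' oc-demo/STA22R1/config.php
cat oc-demo/STA22R1/config.php   # the URL now ends in /STA22R1/
```

On a real store, check the sed result before overwriting: 'admin' can also appear in places you do not want changed.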
If you renamed the admin folder, another extension you try to install will not work as-is, because the extension looks for the admin folder, which no longer exists in the directory, so you will get an error like "no /admin directory exists". If you instead merge the extension's admin files into your renamed OpenCart admin folder, it depends: if the extension ships an install.xml that references 'admin' paths, it will not work properly; if there is no install.xml, it should work. I hope this answer helps you; if it does, please vote for it.
I have a site running on an Elastic Beanstalk single instance server and want to add automated SSL certificate generation from LetsEncrypt using the AcmePHP library.
The library tries to store the certificates in ~/.acmephp, to which the server responds with an error:
Failed to create "/home/webapp/.acmephp": mkdir(): Permission denied.
The acmephp library doesn't have an option to change the path built in, and rather than fork and recompile the script, I'd like to be able to store the files in the default directory.
Does anyone know how I can give the app permission to create this directory, outside of the web root, or how I can make the server create it automatically and have it be available to the app?
It looks like, since it's being run by the webapp user, when AcmePHP tries to store the certificate under that user's home directory it fails because that directory doesn't exist (AFAIK the webapp user only runs httpd, and it definitely doesn't have a home directory).
A very dirty workaround could be creating that folder manually via the .ebextensions folder in your project. The file would be .ebextensions/create_home.config and it would contain something like this:
files:
  "/tmp/create-home.sh":
    mode: "000755"
    content: |
      #!/usr/bin/env bash
      mkdir -p /home/webapp
      chown webapp:webapp -R /home/webapp

commands:
  01_create:
    command: "/tmp/create-home.sh"
That script is run by the root user; it creates /home/webapp and then changes ownership of the folder to the webapp user and group. Hope it helps.
I moved an OpenCart version 1.5.6.4 installation from one server to another, and after some time I noticed I can't upload images anymore. Whenever I use the image manager to upload images it just gets stuck: the spinner keeps spinning but the images are never uploaded.
I tried changing the folder permissions of the image and data folders to 777 and cleared the cache. I also tried to upload an extension that allows multiple file uploads, hoping that somehow, magically, it would fix the problem; needless to say, it didn't. I haven't found a solution on SO or the OC forums.
My best guess is that the problem lies in the config files.
I ran phpinfo(); and you can check it at http://atelier-faiblesse.ro/info.php.
The admin config contains the following code:
<?php
// HTTP
define('HTTP_SERVER', 'http://atelier-faiblesse.ro/admin/');
define('HTTP_CATALOG', 'http://atelier-faiblesse.ro/');
// HTTPS
define('HTTPS_SERVER', 'http://atelier-faiblesse.ro/admin/');
define('HTTPS_CATALOG', 'http://atelier-faiblesse.ro/');
// DIR
define('DIR_APPLICATION', '/var/www/clients/client9/web72/web/admin/');
define('DIR_SYSTEM', '/var/www/clients/client9/web72/web/system/');
define('DIR_DATABASE', '/var/www/clients/client9/web72/web/system/database/');
define('DIR_LANGUAGE', '/var/www/clients/client9/web72/web/admin/language/');
define('DIR_TEMPLATE', '/var/www/clients/client9/web72/web/admin/view/template/');
define('DIR_CONFIG', '/var/www/clients/client9/web72/web/system/config/');
define('DIR_IMAGE', '/var/www/clients/client9/web72/web/image/');
define('DIR_CACHE', '/var/www/clients/client9/web72/web/system/cache/');
define('DIR_DOWNLOAD', '/var/www/clients/client9/web72/web/download/');
define('DIR_LOGS', '/var/www/clients/client9/web72/web/system/logs/');
define('DIR_CATALOG', '/var/www/clients/client9/web72/web/catalog/');
// DB
define('DB_DRIVER', 'mysqli');
define('DB_HOSTNAME', 'localhost');
define('DB_USERNAME', 'XXXXXXXX');
define('DB_PASSWORD', 'XXXXXXXX');
define('DB_DATABASE', 'XXXXXXXX');
define('DB_PREFIX', 'oc_');
?>
Do you notice any problems in the config file? Or do you know any other reasons file uploading might not work?
Run the commands below over SSH to fix this issue.
chown -R www-data /var/www/clients/client9/web72/web/image
chmod -R 755 /var/www/clients/client9/web72/web/image
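One caveat: www-data is the web-server user on Debian/Ubuntu setups; on other distributions it may be apache or nginx, so adjust the chown accordingly. The -R flag matters because OpenCart writes into subfolders of image/. A small sketch of the chmod step in a scratch directory (demo-image is a stand-in for the real image folder):

```shell
# chmod -R applies the mode to the folder and everything below it,
# so subfolders like image/cache and image/data are fixed as well.
mkdir -p demo-image/cache
chmod -R 755 demo-image
stat -c '%a' demo-image/cache   # 755
```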
I'm working in a Python 2.7 Elastic Beanstalk environment.
I'm trying to use the sources key in an .ebextensions .config file to copy a .tgz archive to a directory in my application root, /opt/python/current/app/utility. I'm doing this because the files in this folder are too big to include in my GitHub repository.
However, it looks like the sources key is processed before the ondeck symbolic link is created to the current bundle directory. So I can't reference /opt/python/ondeck/app in the sources key: doing so creates the folder first, and Beanstalk then errors out when trying to create the ondeck symbolic link.
Here are copies of the .ebextensions/utility.config files I have tried:
sources:
  /opt/python/ondeck/app/utility: http://[bucket].s3.amazonaws.com/utility.tgz
Above successfully copies to /opt/python/ondeck/app/utility but then Beanstalk errors out because it can't create the symbolic link from /opt/python/bundle/x --> /opt/python/ondeck.
sources:
  utility: http://[bucket].s3.amazonaws.com/utility.tgz
Above copies the folder to /utility, directly off the root, alongside /etc.
You can use container_commands instead of sources as it runs after the application has been set up.
With container_commands you won't be able to use sources to automatically get your files and extract them so you will have to use commands such as wget or curl to get your files and untar them afterwards.
Example: curl http://[bucket].s3.amazonaws.com/utility.tgz | tar xz
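Put together in a .ebextensions config, this approach might look like the following (a sketch; [bucket] is the placeholder from the question, utility is the target folder relative to the app staging directory, and the command name 01_get_utility is arbitrary):

```yaml
container_commands:
  01_get_utility:
    command: "mkdir -p utility && curl -s http://[bucket].s3.amazonaws.com/utility.tgz | tar xz -C utility"
```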
In my environment (php) there is no transient ondeck directory and the current directory where my app is eventually deployed is recreated after commands are run.
Therefore, I needed to run a script post deploy. Searching revealed that I can put a script in /opt/elasticbeanstalk/hooks/appdeploy/post/ and it will run after deploy.
So I download/extract my files from S3 to a temporary directory in the simplest way, by using sources. Then I create a file that will copy my files over after the deploy and put it in the post-deploy hook directory.
sources:
  /some/existing/directory: https://s3-us-west-2.amazonaws.com/my-bucket/vendor.zip

files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/99_move_my_files_on_deploy.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      mv /some/existing/directory /var/app/current/where/the/files/belong
I am trying to upload a file to a remote server using the SCP task. I have OpenSSH configured on the remote server in question, and I am using an Amazon EC2 instance running Windows Server 2008 R2 with Cygwin to run the Bamboo build server.
My question is regarding finding the directory I wish to use. I want to upload the entire contents of C:\doc using SCP. The documentation notes that I must use the local path relative to the Bamboo working directory rather than an absolute directory name.
I found by running pwd during the build plan that the working directory is /cygdrive/c/build-dir/CDP-DOC-JOB1. So to get to doc, I can run cd ../../doc. However, when I set my working directory under the SCP configuration as ../../doc/** (using this pattern matching guide), I get the message There were no files to upload. in the log.
C:\doc contains subfolders as well as a textfile in the root directory.
Here is my SCP task configuration, and a look from Cygwin at my directory (screenshots not reproduced here).
You may add a first "script" task running a Windows shell that copies everything from C:\doc to a local directory inside the working directory, and then run the SCP task to copy the contents of this new directory to your remote server:
mkdir doc
xcopy c:\doc .\doc /E /F
Then the pattern for the copy should be /doc/**.