I'm trying to set up Redmine with the following components:
redmine-4.0.7
Rails 5.2.4.2
Phusion Passenger 6.0.7
Apache/2.4.6
mysql Ver 14.14
I expected there to be an initial setup page; instead, I got an `Internal Error` page from http://mydomain/redmine/
I can see the following messages in log/production.log:
Completed 500 Internal Server Error in 21ms (ActiveRecord: 1.5ms)
ActiveRecord::StatementInvalid (Mysql2::Error: Can't find file: './redmine/settings.frm' (errno: 13 - Permission denied): SHOW FULL FIELDS FROM `settings`):
It seems I need ./redmine/settings.frm, but it doesn't exist.
Does anyone know how to create ./redmine/settings.frm and what its content should be?
The error is thrown by your database server (i.e. MySQL). It seems that MySQL does not have the required permission to access the files where it stores the table data.
Usually, those files are handled (i.e. created, updated, and eventually deleted) entirely by MySQL, which requires specific access patterns to ensure consistent data. Because of that, you should strongly avoid manually changing any files under MySQL's control. Instead, you should only use SQL commands to update table structures and table data.
To fix this issue, you need to fix the permissions of your MySQL data files so that MySQL can properly access them. What exactly is required here is unfortunately not simple to explain, since there can be various causes. If you have just set up your MySQL server, it might be best to start entirely fresh.
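For example, on a typical Linux installation, restoring ownership of the data directory to the MySQL service user is often what errno 13 calls for. A minimal sketch, assuming the default /var/lib/mysql data directory and a systemd service named mysqld (verify both on your system; on MariaDB the service is usually mariadb):

    datadir=$(mysql -N -e "SELECT @@datadir")   # confirm the actual data directory first
    sudo chown -R mysql:mysql "$datadir"        # give ownership back to the mysqld user
    sudo restorecon -R "$datadir"               # only on SELinux systems (RHEL/CentOS)
    sudo systemctl restart mysqld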
We have installed WSO2 AM 2.6.0 with IS as KM (5.7). We deployed AM as an active-active all-in-one instance and IS as KM active-active too, following all the directives in the official documentation.
Based on the documentation, we created the following databases with their respective datasources: regdb (registry), carbondb, userdb (user store), mb-store, apimdb.
The issue we have now is on the IS side. We tried several things to check that everything was working correctly, like creating users, checking registry access, etc. We created a user called "test", changed some properties, and after that we proceeded to delete the user.
When we deleted the user, we got the following popup on the IS console:
Checking the logs we find the following:
Caused by: org.postgresql.util.PSQLException: ERROR: relation "cm_receipt" does not exist
Position: 135
TID: [-1234] [] [2020-05-11 09:00:30,062] ERROR {org.wso2.carbon.user.mgt.ui.UserAdminClient} - Error when handling event : POST_DELETE_USER
org.wso2.carbon.user.mgt.stub.UserAdminUserAdminException: UserAdminUserAdminException
We checked the database and the user was deleted correctly, and the IS carbon console is not displaying it any more, so the user was deleted as expected. Digging a little deeper, the delete-user process is trying to access the table "cm_receipt" in carbondb, but the table exists in apimdb.
On the Postgres side, we have this log during the delete:
<2020-05-08 11:49:50.452 -03:172.19.35.21(45740):wso2carbon#carbondb:[12476]:>ERROR: relation "cm_receipt" does not exist at character 135
<2020-05-08 11:49:50.452 -03:172.19.35.21(45740):wso2carbon#carbondb:[12476]:>STATEMENT: SELECT R.CONSENT_RECEIPT_ID, R.LANGUAGE, R.PII_PRINCIPAL_ID, R.PRINCIPAL_TENANT_ID, R.STATE,RS.SP_DISPLAY_NAME,RS.SP_DESCRIPTION FROM CM_RECEIPT R INNER JOIN CM_RECEIPT_SP_ASSOC RS ON R.CONSENT_RECEIPT_ID=RS.CONSENT_RECEIPT_ID WHERE PII_PRINCIPAL_ID LIKE $1 AND PRINCIPAL_TENANT_ID =$2 AND SP_NAME LIKE $3 AND STATE LIKE $4 ORDER BY ID ASC LIMIT $5 OFFSET $6
Does anyone have an idea why this could be happening? Is there a related bug or something?
Thanks!
There could be two reasons for this.
You've forgotten to execute the DB script that contains the consent management tables: /wso2is-5.7.0/dbscripts/consent/postgresql.sql.
Your wso2is-5.7.0/repository/conf/consent-mgt-config.xml configuration file is referring to the wrong datasource.
Solution
Check which datasource the consent-mgt-config.xml file refers to. By default it looks like this:
<ConsentManager xmlns="http://wso2.org/carbon/consent/management" xmlns:svns="http://org.wso2.securevault/configuration">
    <DataSource>
        <!-- Include a data source name (jndiConfigName) from the set of data sources defined in master-datasources.xml -->
        <Name>jdbc/WSO2IdentityDB</Name>
    </DataSource>
</ConsentManager>
Here, it's jdbc/WSO2IdentityDB. Then go to your wso2is-5.7.0/repository/conf/datasources/master-datasources.xml file and check which database that datasource points to. If the mentioned tables are not created in that database, you can execute the above-mentioned postgresql.sql script against it. (If you've already created these tables in a different datasource, you might want to change the datasource defined in the consent-mgt-config.xml file.)
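For example, running the consent script manually against the identity database might look like this. A sketch assuming a PostgreSQL client on the server and a database named wso2is_db owned by wso2carbon (both names are placeholders; substitute whatever your master-datasources.xml entry actually points to):

    # executes the consent-management DDL against the identity database
    psql -h localhost -U wso2carbon -d wso2is_db -f /wso2is-5.7.0/dbscripts/consent/postgresql.sql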
P.S. Never use the -Dsetup argument for automatic execution of database scripts on startup. Always execute the database scripts manually against the database.
P.S. The reason the user deletion itself succeeds is that the consent-removal step runs as a POST_DELETE_USER event. A failure in a POST handler won't affect the action itself.
WSO2 ESB goes into error state on startup.
During startup, the following H2 database error is thrown.
org.h2.jdbc.JdbcSQLException: Row not found when trying to delete from index
Due to some data corruption, the above error occurs. Restarts didn't help.
We need more information... What do you need: to restart and recover all the resources in your installation, or just to restart and keep working?
1. Make a backup copy of all database files in the $CARBON_HOME/databases/ folder.
2. Restart with a clean instance by removing the corrupted database: remove all the H2 files in the $CARBON_HOME/databases/ folder. If you have all your artifacts in $CARBON_HOME/deployment, it should rebuild everything.
WSO2 products ship with an inbuilt H2 database. Though it's sufficient for DEV environments, it's not recommended for production.
For the above error, the H2 DB has been corrupted. To fix it, rename the existing $CARBON_HOME/databases/ folder and create an empty databases folder.
Start the server with the -Dsetup option: ./wso2server.sh -Dsetup
This will recreate a new DB setup and populate the required data.
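Put together, the recovery might look like this. A minimal sketch, assuming the databases folder location given above and the usual bin/ launcher (paths and script names can differ between WSO2 products and versions):

    cd $CARBON_HOME
    mv databases databases.bak          # keep the corrupted H2 files as a backup
    mkdir databases                     # empty folder for the fresh database
    ./bin/wso2server.sh -Dsetup         # recreate the schema and populate seed data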
We are using WSO2CEP version 4.2.0. We are connecting to a MySQL database (version 5.6.34-1 community edition from Oracle) on the back-end with mysql-connector-java-5.1.40.jar. We have set up several connections in the master-datasources.xml, and receive "Connection is healthy" for all connections when testing them in Datasources. When we attempt to use an event publisher that accesses the referenced databases an error appears:
[2017-01-24 17:11:22,178] ERROR {org.wso2.carbon.event.publisher.admin.EventPublisherAdminService} - org.wso2.carbon.event.output.adapter.core.exception.OutputEventAdapterRuntimeException: A mandatory attribute null does not exist
org.wso2.carbon.event.publisher.core.exception.EventPublisherConfigurationException: org.wso2.carbon.event.output.adapter.core.exception.OutputEventAdapterRuntimeException: A mandatory attribute null does not exist
at org.wso2.carbon.event.publisher.core.EventPublisherDeployer.processDeployment(EventPublisherDeployer.java:227)
at org.wso2.carbon.event.publisher.core.EventPublisherDeployer.executeManualDeployment(EventPublisherDeployer.java:249)
.........several lines after this ...............
Our team is kind of at a loss; we have tried things like giving blanket permissions (including DDL) to the database user, trying an old database that "used to work", and swapping out versions of the mysql-connector-java jar.
We found that we had a configuration problem: invalid XML in output-event-adapters.xml was causing the error. Once the bad XML was fixed, the error was gone.
WSO2, please consider addressing error verbosity in your products. The error that was being logged provided no indication that invalid XML might be the cause, and we wasted several hours troubleshooting the problem as a result. We have experienced similar error-verbosity-related issues in other WSO2 products. A simple "could not parse XML" with the file name would have literally saved us several hours this time.
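As a quick sanity check when an opaque error like this appears, a generic well-formedness check of the suspect configuration files can save hours. A sketch assuming xmllint (from libxml2) is available and the file lives under repository/conf:

    # prints nothing on success, or the parse error with a line number on failure
    xmllint --noout repository/conf/output-event-adapters.xml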
We currently have two ColdFusion 10 dedicated servers which we are migrating to a single VPS. We have many scheduled tasks on each. I took each server's neo-cron.xml file and copied the var XML elements from within the struct type='coldfusion.server.ConfigMap' XML element, then pasted them within that element in the neo-cron.xml file on the new server. Afterward I restarted the ColdFusion service, logged into CF Admin, and the tasks all showed up as expected.
My problem is that when I try to update any of the tasks, I get the following error when saving:
An error occured scheduling the task. Unable to store Job :
'SERVERSCHEDULETASK#$%^DEFAULT.job_MAKE CATALOGS (SITE CONTROL)',
because one already exists with this identification
Also, when I try to delete a task, it tells me a task with that name does not exist. So it seems the task information must also be stored elsewhere. My guess: when I try to update a task, the record doesn't exist in that secondary location, so ColdFusion tries to add it as new to the neo-cron.xml file, which fails because it already exists there; and when I try to delete, it doesn't exist in the secondary location, so it says a task with that name does not exist. That is just a guess, though.
Any ideas how I can get this to work without manually re-creating dozens of tasks? From what I've read this should work, but I need to be able to edit the tasks.
Thank you.
After a lot of hair-pulling I was able to figure out the problem. It all boiled down to having parentheses in the scheduled task names. This was causing both the "Unable to store Job : 'SERVERSCHEDULETASK#$%^DEFAULT.job_MAKE CATALOGS (SITE CONTROL)', because one already exists with this identification" error and my inability to delete jobs. I believe it has something to do with encoding the parentheses, because the actual neo-cron.xml name attribute of the var element encodes the name like so:
serverscheduletask#$%^default#$%^MAKE CATALOGS (SITE CONTROL)
Note that this anomaly did not exist on ColdFusion 10, Update 10, but does exist on Update 13. I'm not sure which update broke it, but there you go.
You will have to copy the neo-cron.xml from C:\ColdFusion10\\lib on one server to the other. After that, restart the server to make the changes effective. Log in to CF Admin and check the functionality.
This should work.
Note: Please take a backup of the existing neo-cron.xml before making the changes.
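A minimal sketch of that backup-then-copy step on the destination server, assuming the standard cfusion instance directory and a reachable share on the old server (both are placeholders; use your actual lib path and source location):

    rem back up the existing file, then overwrite it with the old server's copy
    copy C:\ColdFusion10\cfusion\lib\neo-cron.xml C:\ColdFusion10\cfusion\lib\neo-cron.xml.bak
    copy \\oldserver\share\neo-cron.xml C:\ColdFusion10\cfusion\lib\neo-cron.xml
    rem then restart the ColdFusion service and verify the tasks in CF Admin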
The problem:
My C++ application connects to a MySQL server, reads the first/header line of each db export .txt file, builds a CREATE TABLE statement to prepare for the import, and executes it against the database (no problem with that; the table appears just as intended). But when I try to execute LOAD DATA LOCAL INFILE to import the data into the newly created table, I get the error "The used command is not allowed with this MySQL version". Yet this works on the CLI! When I execute the command using mysql -u <user> -p<password> -e "LOAD DATA LOCAL INFILE 'myfile.txt' INTO TABLE mytable FIELDS TERMINATED BY '|' LINES TERMINATED BY '\r\n';" it works flawlessly.
The Situation:
My company gets a large quantity of database exports (160 files / 10 GB of .txt files that are '|' delimited) from our vendors on a monthly basis, and they have to replace the old vendor lists. I am working on a smallish C++ app to deal with it on my work desktop. The application is meant to set up the required tables, import the data, then execute a series of intermediate queries against multiple tables to assemble information in a series of final tables, which is then itself exported and uploaded to the production environment for use in the company's e-commerce website.
My Setup:
Ubuntu 12.04
MySQL Server v. 5.5.29 + MySQL Command Line client
Linux GNU C++ Compiler
libmysqlcppconn is installed and I have the required mysqlconn library linked in.
I have already overcome/tried the following issues/combinations:
1.) I have already discovered (the hard way) that LOAD DATA [LOCAL] INFILE statements must be enabled in the config -- I have the "local-infile" option set in the configuration files for both client and server; see the my.cnf sketch after this list. (Fixed by updating /etc/mysql/my.cnf with "local-infile" entries for the client and server. NOTE: I could have used --local-infile=1 and restarted the mysql server, but this is my local dev environment, so I just wanted it turned on permanently.)
2.) LOAD DATA LOCAL INFILE seems to fail to perform the import (from the CLI) if the target import file does not have execute permissions enabled (fixed with chmod +x target_file.txt)
3.) I am using the MySQL root account in my application code (because it's my localhost, not production, and this particular program will never run on a production server).
4.) I have tried executing my compiled binary program using the sudo command (no change, same error "The used command is not allowed with this MySQL version")
5.) I have tried changing the ownership of the binary file from my normal login to root (no change, same error "The used command is not allowed with this MySQL version")
6.) I know libmysqlcppconn is working because I am able to connect and perform the CREATE TABLE call without a problem, and I can run other queries and execute statements.
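For reference, the /etc/mysql/my.cnf entries mentioned in 1.) look like this (only the relevant sections shown):

    [client]
    local-infile=1

    [mysqld]
    local-infile=1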
What am I missing? Any suggestions? Thanks in advance :)
After much diligent trial and error working with the /etc/mysql/my.cnf file (I know this is a permissions issue because it works on the command line but not from the connector), and after much googling and finding some back-alley tech support posts, I've come to the conclusion that the MySQL C++ connector did not (for whatever reason) implement the ability for developers to enable the local-infile=1 option from the C++ connector.
Apparently some people have been able to hack/fork the MySQL C++ connector to expose the functionality, but no one posted their source code -- they only said it worked. Apparently there is a workaround in the MySQL C API: after you initialize the connection, you would use this:
unsigned int enable_local_infile = 1;
mysql_options(&mysql, MYSQL_OPT_LOCAL_INFILE, &enable_local_infile); /* the option takes a pointer to an unsigned int, not a literal */
which apparently allows the LOAD DATA LOCAL INFILE statements to work with the MySQL C API.
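A minimal end-to-end sketch of that C API workaround, assuming libmysqlclient is installed (compile with g++ load.cpp $(mysql_config --cflags --libs)); the credentials, database, and file names below are placeholders taken from the question, not a definitive setup:

    #include <mysql/mysql.h>
    #include <cstdio>

    int main() {
        MYSQL *conn = mysql_init(nullptr);
        if (conn == nullptr) {
            std::fprintf(stderr, "mysql_init failed\n");
            return 1;
        }

        // Enable LOAD DATA LOCAL INFILE *before* connecting; the option
        // takes a pointer to an unsigned int.
        unsigned int enable_local_infile = 1;
        mysql_options(conn, MYSQL_OPT_LOCAL_INFILE, &enable_local_infile);

        if (mysql_real_connect(conn, "localhost", "root", "password",
                               "mydb", 0, nullptr, 0) == nullptr) {
            std::fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
            mysql_close(conn);
            return 1;
        }

        const char *load_sql =
            "LOAD DATA LOCAL INFILE 'myfile.txt' INTO TABLE mytable "
            "FIELDS TERMINATED BY '|' LINES TERMINATED BY '\\r\\n'";
        if (mysql_query(conn, load_sql) != 0) {
            std::fprintf(stderr, "LOAD DATA failed: %s\n", mysql_error(conn));
        } else {
            std::printf("imported %llu rows\n",
                        (unsigned long long) mysql_affected_rows(conn));
        }

        mysql_close(conn);
        return 0;
    }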
Here are some reference articles that led me to this conclusion:
1.) How can I get the native C API connection structure from MySQL Connector/C++?
2.) Mysql 5.5 LOAD DATA INFILE Permissions
3.) http://osdir.com/ml/db.mysql.c++/2004-04/msg00097.html
Essentially, if you want to use the LOAD DATA LOCAL INFILE functionality from a programmatic connector API, you have to use the MySQL C API or hack/fork the existing MySQL C++ API to expose the connection structure. Or just stick to executing LOAD DATA LOCAL INFILE from the command line :(