Drupal 8: errors in entity/field definitions after uninstalling modules

I uninstalled a few modules.
~/apps/drupal/htdocs/modules$ drush pm-uninstall relaxed
The following extensions will be uninstalled: relaxed
Do you really want to continue? (y/n): y
relaxed was successfully uninstalled. [ok]
bitnami@ip-172-26-15-109:~/apps/drupal/htdocs/modules$ drush pm-uninstall replication
The following extensions will be uninstalled: replication
Do you really want to continue? (y/n): y
replication was successfully uninstalled. [ok]
bitnami@ip-172-26-15-109:~/apps/drupal/htdocs/modules$ drush pm-uninstall multiversion
The following extensions will be uninstalled: multiversion
Do you really want to continue? (y/n): y
multiversion was successfully uninstalled.
bitnami@ip-172-26-15-109:~/apps/drupal/htdocs/modules$ drush pm-uninstall views_rest_feed
The following extensions will be uninstalled: views_rest_feed
Do you really want to continue? (y/n): y
views_rest_feed was successfully uninstalled.
Now I see the following errors. How should I proceed with fixing them?
Entity/field definitions: Mismatched entity and/or field definitions
The following changes were detected in the entity type and field definitions.
Comment
The Comment entity type needs to be updated.
File
The File entity type needs to be updated.
Content
The Revision ID field needs to be installed.
The UUID field needs to be updated.
Shortcut link
The UUID field needs to be updated.
Taxonomy term
The Taxonomy term entity type needs to be updated.
User
The UUID field needs to be updated.
Custom menu link
The Custom menu link entity type needs to be updated.
Running drush entity-updates gives
The following updates are pending:
comment entity type :
The Comment entity type needs to be updated.
file entity type :
The File entity type needs to be updated.
node entity type :
The Revision ID field needs to be installed.
The UUID field needs to be updated.
shortcut entity type :
The UUID field needs to be updated.
taxonomy_term entity type :
The Taxonomy term entity type needs to be updated.
user entity type :
The UUID field needs to be updated.
menu_link_content entity type :
The Custom menu link entity type needs to be updated.
Do you wish to run all pending updates? (y/n): y
Drupal\Core\Entity\EntityStorageException: The SQL storage cannot change the schema for an existing entity type (comment) with data. in [error]
Drupal\Core\Entity\Sql\SqlContentEntityStorageSchema->onEntityTypeUpdate() (line 303 of
/opt/bitnami/apps/drupal/htdocs/core/lib/Drupal/Core/Entity/Sql/SqlContentEntityStorageSchema.php).
Failed: Drupal\Core\Entity\EntityStorageException: !message in [error]
Drupal\Core\Entity\Sql\SqlContentEntityStorageSchema->onEntityTypeUpdate() (line 303 of
/opt/bitnami/apps/drupal/htdocs/core/lib/Drupal/Core/Entity/Sql/SqlContentEntityStorageSchema.php).
Cache rebuild complete. [ok]
Finished performing updates.
drush watchdog-show
ID Date Type Severity Message
147 22/Dec 15:57 php error Drupal\Core\Database\DatabaseExceptionWrapper: SQLSTATE[42S22]: Column not found: 1054 Unknown column
'base_table.vid' in 'field list': SELECT base_table.vid AS vid, base_table.nid AS nid
146 22/Dec 15:57 php error Drupal\Core\Database\DatabaseExceptionWrapper: SQLSTATE[42S22]: Column not found: 1054 Unknown column
'base_table.vid' in 'field list': SELECT base_table.vid AS vid, base_table.nid AS nid
145 22/Dec 15:57 cron notice Cron run completed.
144 22/Dec 15:57 cron notice Execution of update_cron() took 9.97ms.
143 22/Dec 15:57 cron notice Starting execution of update_cron(), execution of system_cron() took 15.01ms.
142 22/Dec 15:57 cron notice Starting execution of system_cron(), execution of search_cron() took 4.15ms.
141 22/Dec 15:57 cron notice Starting execution of search_cron(), execution of node_cron() took 30.37ms.
140 22/Dec 15:57 cron notice Starting execution of node_cron(), execution of history_cron() took 1.82ms.
139 22/Dec 15:57 cron notice Starting execution of history_cron(), execution of file_cron() took 11.11ms.
138 22/Dec 15:57 cron notice Starting execution of file_cron(), execution of field_cron() took 3.38ms.
Also, accessing http://ipaddress/admin/modules/uninstall gives:
The website encountered an unexpected error. Please try again later.
Apache error log shows
[Thu Dec 22 15:06:52.683333 2016] [core:notice] [pid 13291:tid 139826185398080] AH00094: Command line: '/opt/bitnami/apache2/bin/httpd.bin -f /opt/bitnami/apache2/conf/httpd.conf'
[Thu Dec 22 15:06:58.953491 2016] [proxy_fcgi:error] [pid 13300:tid 139825856366336] [client 123.201.127.176:49572] AH01071: Got error 'PHP message: PHP Fatal error: Call to a member function id() on null in /opt/bitnami/apps/drupal/htdocs/modules/multiversion/src/WorkspaceCacheContext.php on line 44\n'
[Thu Dec 22 15:12:39.540744 2016] [proxy_fcgi:error] [pid 13299:tid 139825386374912] [client 123.201.127.176:49808] AH01071: Got error 'PHP message: Uncaught PHP Exception Drupal\\Core\\Database\\DatabaseExceptionWrapper: "SQLSTATE[42S22]: Column not found: 1054 Unknown column 'revision.revision_id' in 'field list': SELECT revision.revision_id AS revision_id, revision.langcode AS langcode, revision.revision_log AS revision_log, base.id AS id, base.type AS type, base.uuid AS uuid, CASE base.revision_id WHEN revision.revision_id THEN 1 ELSE 0 END AS isDefaultRevision\nFROM \n{block_content} base\nINNER JOIN {block_content_revision} revision ON revision.revision_id = base.revision_id; Array\n(\n)\n" at /opt/bitnami/apps/drupal/htdocs/core/lib/Drupal/Core/Database/Connection.php line 671\n'
[Thu Dec 22 15:14:41.556451 2016] [proxy_fcgi:error] [pid 13299:tid 139825428338432] [client 123.201.127.176:49828] AH01071: Got error 'PHP message: Uncaught PHP Exception Drupal\\Core\\Database\\DatabaseExceptionWrapper: "SQLSTATE[42S22]: Column not found: 1054 Unknown column 'revision.revision_id' in 'field list': SELECT revision.revision_id AS revision_id, revision.langcode AS langcode, revision.revision_log AS revision_log, base.id AS id, base.type AS type, base.uuid AS uuid, CASE base.revision_id WHEN revision.revision_id THEN 1 ELSE 0 END AS isDefaultRevision\nFROM \n{block_content} base\nINNER JOIN {block_content_revision} revision ON revision.revision_id = base.revision_id; Array\n(\n)\n" at /opt/bitnami/apps/drupal/htdocs/core/lib/Drupal/Core/Database/Connection.php line 671\n'
I am using an AWS Lightsail Bitnami Drupal instance.

Try going into the database and truncating all tables whose names start with cache_. That is what I would do after facing such an issue. It's possible that the uninstall process crashed before the cache was cleared, so the cache still holds references to fields that were already deleted during the module uninstall.
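The truncation step can be sketched as a small shell helper. The table names assume Drupal's defaults with no table prefix, and the user/database names are placeholders; it is wrapped in a function so nothing runs until you call it.

```shell
# Sketch: truncate all cache_* tables in a Drupal 8 database.
# Assumes default table names with no table prefix; user/db are placeholders.
truncate_drupal_caches() {
  local user="$1" db="$2"
  # List the cache tables, turn each name into a TRUNCATE statement, run them.
  mysql -N -u "$user" -p "$db" -e "SHOW TABLES LIKE 'cache\_%'" \
    | sed 's/^/TRUNCATE TABLE /; s/$/;/' \
    | mysql -u "$user" -p "$db"
}
# usage: truncate_drupal_caches drupaluser drupaldb
# If drush still bootstraps, "drush cr" is the simpler way to rebuild caches.
```

Note that `-p` with no attached value prompts for the password (twice here, once per mysql invocation).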

Related

Wazuh syscheck agent SQL error on centos7: FIM is not working

I have Wazuh v3.13.3 installed on CentOS 7.
syscheck module configuration:
<syscheck>
  <disabled>no</disabled>
  <!-- Frequency that syscheck is executed default every 12 hours -->
  <frequency>43200</frequency>
  <scan_on_start>yes</scan_on_start>
  <alert_new_files>yes</alert_new_files>
  <!-- Directories to check (perform all possible verifications) -->
  <directories check_all="yes">/etc,/usr/bin,/usr/sbin</directories>
  <directories check_all="yes">/bin,/sbin,/boot</directories>
  <directories check_all="yes" realtime="yes">/root</directories>
  <!-- Files/directories to ignore -->
  <ignore>/etc/mtab</ignore>
  <ignore>/etc/hosts.deny</ignore>
  <ignore>/etc/mail/statistics</ignore>
  <ignore>/etc/random-seed</ignore>
  <ignore>/etc/random.seed</ignore>
  <ignore>/etc/adjtime</ignore>
  <ignore>/etc/httpd/logs</ignore>
  <ignore>/etc/utmpx</ignore>
  <ignore>/etc/wtmpx</ignore>
  <ignore>/etc/cups/certs</ignore>
  <ignore>/etc/dumpdates</ignore>
  <ignore>/etc/svc/volatile</ignore>
  <ignore>/sys/kernel/security</ignore>
  <ignore>/sys/kernel/debug</ignore>
  <ignore>/dev/core</ignore>
  <!-- File types to ignore -->
  <ignore type="sregex">^/proc</ignore>
  <ignore type="sregex">.log$|.swp$</ignore>
  <!-- Check the file, but never compute the diff -->
  <nodiff>/etc/ssl/private.key</nodiff>
  <skip_nfs>yes</skip_nfs>
</syscheck>
Adding a new file to the /root directory:
[root@host ossec]# date; echo "date" > ~/newfile.txt
Sat May 7 17:01:48 UTC 2022
agent log messages:
2022/05/07 17:01:48 ossec-syscheckd[26052] fim_db.c:558 at fim_db_exec_simple_wquery(): ERROR: SQL ERROR: cannot commit - no transaction is active
2022/05/07 17:01:48 ossec-syscheckd[26052] fim_db.c:558 at fim_db_exec_simple_wquery(): ERROR: SQL ERROR: cannot commit - no transaction is active
2022/05/07 17:01:48 ossec-syscheckd[26052] fim_db.c:558 at fim_db_exec_simple_wquery(): ERROR: SQL ERROR: cannot commit - no transaction is active
2022/05/07 17:01:48 ossec-syscheckd: ERROR: SQL ERROR: (8)attempt to write a readonly database
2022/05/07 17:01:48 ossec-syscheckd: ERROR: SQL ERROR: (8)attempt to write a readonly database
and I see no messages about the new file in the logs.
The infrastructure is too big to upgrade to Wazuh 4.x.
How can I solve this issue?
Thank you.
The message ERROR: SQL ERROR: (8)attempt to write a readonly database indicates some kind of problem with database permissions, or that the FIM database fim.db does not exist. Please check that the following files on the agent exist and have the following permissions, user, and group:
[drwxr-x--- ossec ossec ] /var/ossec/queue/fim
[drwxr-x--- ossec ossec ] /var/ossec/queue/fim/db
[-rw-rw---- root ossec ] /var/ossec/queue/fim/db/fim.db
[-rw-rw---- root ossec ] /var/ossec/queue/fim/db/fim.db-journal
If the fim.db file does not exist, the agent recreates it when it is restarted.
If the fim/ or fim/db/ directories do not exist, create them with mkdir, assign them the properties specified above [drwxr-x--- ossec ossec], and then restart the agent.
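The repair steps can be sketched as follows, assuming the default Wazuh 3.x layout under /var/ossec; the commands are wrapped in a function so they only run when called (as root on the agent).

```shell
# Sketch: recreate the FIM directories with the ownership and permissions
# listed above (Wazuh 3.x default paths; run as root on the agent).
fix_fim_dirs() {
  local base="${1:-/var/ossec}"
  mkdir -p "$base/queue/fim/db"
  chown ossec:ossec "$base/queue/fim" "$base/queue/fim/db"
  chmod 750 "$base/queue/fim" "$base/queue/fim/db"  # drwxr-x---
  "$base/bin/ossec-control" restart                 # agent recreates fim.db
}
# usage (as root): fix_fim_dirs
```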

FreeIPA DatabaseError on adding a user

I have an IPA server that has been running for over a year now.
Recently, when I try to add a new user via HTTPS or the terminal, it fails with the following error message.
IPA error 4203: DatabaseError
Server is unwilling to perform: Managed Entry Plugin rejected add operation (see errors log).
In the error logs, I see:
[timestamp] [:warn] [pid 2731] [client xxx] failed to set perms (3140) on file (/var/run/ipa/ccaches/user@xxx)!, referer: xxx
[timestamp] [:error] [pid 2727] ipa: INFO: [jsonserver_session] user@xxx: group_find(None, posix=True, version=u'2.230', no_members=True): SUCCESS
[timestamp] [:warn] [pid 2731] [client xxx] failed to set perms (3140) on file (/var/run/ipa/ccaches/user@xxx)!, referer: xxx
[timestamp] [:error] [pid 2726] ipa: INFO: [jsonserver_session] user@xxx: user_add(u'xxx', givenname=u'xxx', sn=u'xxx', userpassword=u'********', version=u'2.230'): DatabaseError
The user is not created, but I have to remove the managed group as described here:
https://www.redhat.com/archives/freeipa-users/2016-August/msg00092.html
before I can try again.
What is going on? Any help is appreciated.
$ cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
$ ipa --version
VERSION: 4.6.4, API_VERSION: 2.230
So I managed to solve the problem.
While experimenting with other settings, I tried to add the user without a private group and got the error message:
Server is unwilling to perform: Automember Plugin update unexpectedly failed.
A quick search showed that this error happens when the user is about to be added to a group that does not exist, which in my case was caused by an outdated automember rule.
After correcting that rule, the user can be added.
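To track down which rule is stale, the ipa CLI can list the automember rules so you can check that each target group still exists. A sketch follows (rule and group names are placeholders); it is wrapped in a function so nothing runs on paste.

```shell
# Sketch: list automember rules targeting user groups and spot rules whose
# target group no longer exists. Rule/group names are placeholders.
audit_automember_rules() {
  ipa automember-find --type=group   # list the rules
  # For each rule name printed above, confirm the group still exists:
  #   ipa group-show <rule-name>
  # Remove (or fix) a rule whose target group is gone:
  #   ipa automember-del <rule-name> --type=group
}
```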

Django refuses to use DB view

I have the following scenario:
I have defined the right view in the database, taking care that the view is named according to the Django conventions.
I have made sure that my model is not managed by Django. The generated migration is accordingly defined with managed=False.
The DB view is working fine by itself.
When triggering the API endpoint, two strange things happen:
the request to the database fails with:
ERROR: relation "consumption_recentconsumption" does not exist at character 673
(I have logging enabled at the postgres level, and copy-pasting the exact same request into a db console client works, without modifications whatsoever)
the request to the DB gets retried many times (more than 30?). Why is this happening? Is there a Django setting to control this? (I am sending the request to the API just once, manually with curl.)
EDIT
This is my model:
class RecentConsumption(models.Model):
    name = models.CharField(max_length=100)
    ...

    class Meta:
        managed = False
This is the SQL statement, as generated by Django and sent to the DB:
SELECT "consumption_recentconsumption"."id", "consumption_recentconsumption"."name", ... FROM "consumption_recentconsumption" LIMIT 21;
As I mentioned, this fails through django, but works fine when run directly against the db.
EDIT2
Logs from postgres, when running the sql directly:
2018-12-13 11:12:02.954 UTC [66] LOG: execute <unnamed>: SAVEPOINT JDBC_SAVEPOINT_4
2018-12-13 11:12:02.955 UTC [66] LOG: execute <unnamed>: SELECT "consumption_recentconsumption"."id", "consumption_recentconsumption"."name", "consumption_recentconsumption"."date", "consumption_recentconsumption"."psc", "consumption_recentconsumption"."material", "consumption_recentconsumption"."system", "consumption_recentconsumption"."env", "consumption_recentconsumption"."objs", "consumption_recentconsumption"."size", "consumption_recentconsumption"."used", "consumption_recentconsumption"."location", "consumption_recentconsumption"."WWN", "consumption_recentconsumption"."hosts", "consumption_recentconsumption"."pool_name", "consumption_recentconsumption"."storage_name", "consumption_recentconsumption"."server" FROM "consumption_recentconsumption" LIMIT 21
2018-12-13 11:12:10.038 UTC [66] LOG: execute <unnamed>: RELEASE SAVEPOINT JDBC_SAVEPOINT_4
Logs from postgres when running through django (repeated more than 30 times):
2018-12-13 11:13:50.782 UTC [75] LOG: statement: SELECT "consumption_recentconsumption"."id", "consumption_recentconsumption"."name", "consumption_recentconsumption"."date", "consumption_recentconsumption"."psc", "consumption_recentconsumption"."material", "consumption_recentconsumption"."system", "consumption_recentconsumption"."env", "consumption_recentconsumption"."objs", "consumption_recentconsumption"."size", "consumption_recentconsumption"."used", "consumption_recentconsumption"."location", "consumption_recentconsumption"."WWN", "consumption_recentconsumption"."hosts", "consumption_recentconsumption"."pool_name", "consumption_recentconsumption"."storage_name", "consumption_recentconsumption"."server" FROM "consumption_recentconsumption" LIMIT 21
2018-12-13 11:13:50.783 UTC [75] ERROR: relation "consumption_recentconsumption" does not exist at character 673
2018-12-13 11:13:50.783 UTC [75] STATEMENT: SELECT "consumption_recentconsumption"."id", "consumption_recentconsumption"."name", "consumption_recentconsumption"."date", "consumption_recentconsumption"."psc", "consumption_recentconsumption"."material", "consumption_recentconsumption"."system", "consumption_recentconsumption"."env", "consumption_recentconsumption"."objs", "consumption_recentconsumption"."size", "consumption_recentconsumption"."used", "consumption_recentconsumption"."location", "consumption_recentconsumption"."WWN", "consumption_recentconsumption"."hosts", "consumption_recentconsumption"."pool_name", "consumption_recentconsumption"."storage_name", "consumption_recentconsumption"."server" FROM "consumption_recentconsumption" LIMIT 21
Answering myself, in case this helps somebody in the future.
I was running the CREATE VIEW command in PyCharm, which seems to wrap all operations in a transaction. That means the view is visible within the DB session in PyCharm (since it reuses that transaction for all requests), but not from outside. The Django app, running in the console, therefore does not see the view.
The immediate fix is to commit the transaction in PyCharm, making the view visible to other sessions.
The better long-term solution is to create the view via a Django migration.
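One way to confirm the diagnosis from the command line: open a fresh psql session (outside PyCharm's transaction) and look for the view. A sketch, with the database name as a placeholder, wrapped in a function so it only runs when called.

```shell
# Sketch: check whether the view is visible to a brand-new session.
# If psql reports "Did not find any relation", the CREATE VIEW run in
# PyCharm was never committed. The database name is a placeholder.
check_view_visible() {
  psql -d "${1:-consumption}" -c '\dv consumption_recentconsumption'
}
# usage: check_view_visible mydb
```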

Configuring fail2ban for Joomla

Hello everyone, I'm trying to configure Joomla with fail2ban, so I created
the file /etc/fail2ban/filter.d/joomla-error.conf
and added the failregex below:
failregex = [[]client <HOST>[]] user .* authentication failure.*
Then I added this section to jail.conf:
[joomla-error]
enabled = true
port = http,https
filter = joomla-error
logpath = /var/log/httpd/domains/jayjezz.com.error.log
maxretry = 5
bantime = 30
The logpath is right, but every time I try to reload the fail2ban service I get:
ERROR NOK: ("No 'host' group in '[[]client <HOST>[]] user .* authentication failure.*'",)
I think something is wrong with my regex. Can someone provide the right regex for:
[Thu Sep 28 17:14:23.932811 2017] [:error] [pid 6673] [client 000.000.000.000:56806] user xxxxx authentication failure, referer: http://jayjezz.com/administrator/index.php
Thank you.
I fixed this by adding a script that changes file permissions inside the Joomla website. Now I cannot log in under /administrator without launching the script first.
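On the regex question itself: fail2ban substitutes <HOST> with a named host group, and a filter can be tested offline with fail2ban-regex before reloading the service. A sketch of a filter file that should match the sample log line above follows; it is left unanchored for simplicity (tighten it for production) and is a suggestion, not a canonical Joomla filter.

```
# /etc/fail2ban/filter.d/joomla-error.conf (sketch)
[Definition]
failregex = \[client <HOST>(:\d+)?\] user \S+ authentication failure
ignoreregex =
```

Verify it against the real log before reloading:
fail2ban-regex /var/log/httpd/domains/jayjezz.com.error.log /etc/fail2ban/filter.d/joomla-error.conf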

Unable to start Sphinx in Rails 4

I have installed Sphinx from source with PostgreSQL support, and then installed the thinking-sphinx gem (3.0.1) in my application (Rails 4.0.3). I configured and generated the Sphinx configuration. Then I added the indices under app/indices and later ran indexing and started Sphinx via rake ts:index and rake ts:start, but I got the errors below. Please let me know how to resolve this.
rake ts:index
Generating configuration to /home/stc/presto/config/development.sphinx.conf
Sphinx 2.1.7-release (rel21-r4638)
Copyright (c) 2001-2014, Andrew Aksyonoff
Copyright (c) 2008-2014, Sphinx Technologies Inc (http://sphinxsearch.com)
using config file '/home/stc/config/development.sphinx.conf'...
FATAL: no indexes found in config file '/home/stc/config/development.sphinx.conf'
rake ts:start
In the log file I can see the errors below:
[Wed Apr 2 10:40:49.834 2014] [14338] Child process 14339 has been forked
[Wed Apr 2 10:40:49.835 2014] [14339] listening on 127.0.0.1:9306
[Wed Apr 2 10:40:49.835 2014] [14339] WARNING: ERROR: index 'collection_core': RT indexes support prefixes and infixes with only dict=keywords - NOT SERVING
[Wed Apr 2 10:40:49.836 2014] [14339] WARNING: ERROR: index 'resource_core': RT indexes support prefixes and infixes with only dict=keywords - NOT SERVING
[Wed Apr 2 10:40:49.836 2014] [14339] WARNING: index 'collection': no such local index 'collection_core' - SKIPPING LOCAL INDEX
[Wed Apr 2 10:40:49.836 2014] [14339] WARNING: index 'collection': no valid local/remote indexes in distributed index - NOT SERVING
[Wed Apr 2 10:40:49.836 2014] [14339] WARNING: index 'resource': no such local index 'resource_core' - SKIPPING LOCAL INDEX
[Wed Apr 2 10:40:49.836 2014] [14339] WARNING: index 'resource': no valid local/remote indexes in distributed index - NOT SERVING
[Wed Apr 2 10:40:49.836 2014] [14339] FATAL: no valid indexes to serve
[Wed Apr 2 10:40:49.836 2014] [14338] Child process 14339 has been finished, exit code 1. Watchdog finishes also. Good bye!
Addressed this on the Thinking Sphinx Google group as well:
You seem to be using real-time indices - which is great - but that means you don’t need to use the ts:index task. The two main tasks that are useful are:
ts:generate - which adds/updates all documents in each real-time index.
ts:regenerate - which stops Sphinx, clears out existing index files, generates the configuration, starts Sphinx and runs ts:generate.
However, you’re also using either min_infix_len or min_prefix_len - and with Sphinx 2.1, the default dict setting doesn’t match what is needed (as the logs detail). So, if you add dict: keywords to the appropriate environments in config/thinking_sphinx.yml and then run ts:regenerate, you should hopefully have a working Sphinx setup.
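The dict change described above is a per-environment setting. A sketch of config/thinking_sphinx.yml follows, assuming min_infix_len was the setting in use (the value 3 is illustrative; keep whatever value you already have):

```
# config/thinking_sphinx.yml (sketch)
development:
  min_infix_len: 3
  dict: keywords
test:
  min_infix_len: 3
  dict: keywords
production:
  min_infix_len: 3
  dict: keywords
```

Then run rake ts:regenerate to rebuild the configuration and indices.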