I used the beberlei doctrine extensions in Zend Framework 2 for MySQL GROUP_CONCAT. After installing them with Composer, I changed my settings as described in this question:
How to implement beberlei doctrine extensions in zend framework 2
'configuration' => array(
    'orm_default' => array(
        'string_functions' => array(
            'GroupConcat' => 'DoctrineExtensions\Query\Mysql\GroupConcat'
        )
    )
),
On my local PC everything is OK, but when I upload my code to my remote server, it shows this error:
Fatal error: Class 'DoctrineExtensions\Query\MySql\GroupConcat' not found in **/www/vendor/doctrine/orm/lib/Doctrine/ORM/Query/Parser.php on line 3389
I updated everything; the code is the same. I ran composer install and composer update on my remote server, so I am sure the code is identical and Composer installed everything.
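One detail worth double-checking (an observation from the error text, not something stated above): the error names DoctrineExtensions\Query\MySql\GroupConcat with a capital S, while the configuration uses Mysql. The autoloader maps the class name to a file path, and Linux filesystems are case-sensitive, so a casing mismatch that happens to resolve on a case-insensitive local machine (Windows, macOS) will fail on the server. A minimal sketch of the registration with the casing the package ships:

'configuration' => array(
    'orm_default' => array(
        'string_functions' => array(
            // the namespace segment is 'Mysql', not 'MySql' -- the autoloader
            // maps this to a file path, which is case-sensitive on Linux
            'GroupConcat' => 'DoctrineExtensions\Query\Mysql\GroupConcat'
        )
    )
),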
I need to query Cloud SQL from Logstash but can't find any examples out there.
Additional Context
I ran the build command for the Postgres JDBC driver:
mvn -P jar-with-dependencies clean package -DskipTests
and provided it as the Logstash JDBC driver (I tried the jar-with-dependencies jar too):
input {
  jdbc {
    jdbc_driver_library => "/Users/gustavollermalylarrain/Documents/proyectos/labs/cloud-sql-jdbc-socket-factory/jdbc/postgres/target/postgres-socket-factory-1.6.4-SNAPSHOT.jar"
    jdbc_driver_class => "org.postgresql.Driver"
    jdbc_connection_string => "jdbc:postgresql:///people?cloudSqlInstance=cosmic-keep-148903:us-central1:llermaly&socketFactory=com.google.cloud.sql.postgres.SocketFactory&user=postgres&password=postgres"
    statement => "SELECT * FROM people;"
    jdbc_user => "postgres"
    jdbc_password => "postgres"
  }
}

output {
  stdout {
    codec => rubydebug {}
  }
}
I'm getting this error:
Error: java.lang.ClassNotFoundException: org.postgresql.Driver. Are you sure you've included the correct jdbc driver in :jdbc_driver_library?
Am I missing something?
The steps to query Cloud SQL from Logstash are:
Build the driver jar
Clone https://github.com/GoogleCloudPlatform/cloud-sql-jdbc-socket-factory and run mvn -P jar-with-dependencies clean package -DskipTests
Copy files
Copy the jar files from jdbc/postgres/target/ to logstash-core/lib/jars. Also download the Postgres JDBC driver and copy its jar to logstash-core/lib/jars as well; a sketch of both copies follows.
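For example (the Logstash install path under /usr/share/logstash and the Postgres driver version are assumptions; adjust them to your setup):

cp jdbc/postgres/target/postgres-socket-factory-1.6.4-SNAPSHOT.jar /usr/share/logstash/logstash-core/lib/jars/
cp postgresql-42.2.5.jar /usr/share/logstash/logstash-core/lib/jars/   # Postgres JDBC driver (version is an example)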
Configure Logstash
The configuration file does not include the jar's path, because Logstash will look in the default folder logstash-core/lib/jars, where you copied the jar files.
input {
  jdbc {
    jdbc_driver_class => "org.postgresql.Driver"
    jdbc_connection_string => "jdbc:postgresql:///people?cloudSqlInstance=cosmic-keep-148903:us-central1:llermaly&socketFactory=com.google.cloud.sql.postgres.SocketFactory&user=postgres&password=postgres"
    statement => "SELECT * FROM people;"
    jdbc_user => "postgres"
    jdbc_password => "postgres"
  }
}

output {
  stdout {
    codec => rubydebug {}
  }
}
jdbc_user and jdbc_password are ignored; the values you provide in the connection string are used instead. With the Postgres Cloud SQL connector you can use either Postgres users or IAM accounts.
Note: You need to run this from a Compute Engine instance to pick up the GCP credentials automatically, or create the environment variables manually.
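Outside Compute Engine, the usual way to supply credentials is the GOOGLE_APPLICATION_CREDENTIALS variable pointing at a service-account key file; a sketch, with an assumed key path and config file name:

export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json
bin/logstash -f cloudsql.conf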
If you want to prevent the database fixtures from being loaded accidentally in the wrong environment, probably the best way is to activate the DoctrineFixturesBundle only in certain environments.
Up to Symfony 3.4 this was done in app/AppKernel.php, as described at https://symfony.com/doc/3.4/best_practices/business-logic.html#data-fixtures
How can this be achieved in Symfony 4 (Symfony Flex), where bundles are being loaded automatically?
In Symfony 4, this can be configured in config/bundles.php, by editing the line
Doctrine\Bundle\FixturesBundle\DoctrineFixturesBundle::class => ['dev' => true, 'test' => true],
See https://symfony.com/doc/4.1/best_practices/business-logic.html#data-fixtures
When you remove the 'dev' => true, part and then try to load the fixtures in the dev environment by running php bin/console doctrine:fixtures:load --env=dev, you will get:
Error thrown while running command "'doctrine:fixtures:load' --env=dev". Message: "There are no commands defined in the "doctrine:fixtures" namespace.
However, loading them in the test environment still works: php bin/console doctrine:fixtures:load --env=test
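For example, to keep the fixtures loadable only in the test environment, the edited entry would look like this (a minimal sketch of the change described above):

// config/bundles.php
return [
    // ...
    Doctrine\Bundle\FixturesBundle\DoctrineFixturesBundle::class => ['test' => true],
];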
I have a working local Drupal 7 site (with a local MySQL database), but now I'm trying to migrate it to Amazon Web Services (which is currently running a test Drupal site successfully). I have all of my Drupal files in a directory mydrupalsite on the EC2 instance, and the database has been added to the remote MySQL server, but I don't know what to do now to configure Apache to serve the new site.
What I have tried so far:
I added the file in /etc/apache2/sites-available (copied from the old instance and changed the directory name where appropriate), following the usual vhost pattern (see the sketch after this list)
I changed the symlinks in /etc/apache2/sites-enabled (deleted the old symlink and added a new one) so that only the new file is linked.
Then I rebooted the server and navigated to the site, but it's taking me to the install page (as though I were starting over).
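For reference, a generic vhost of the kind described above (the ServerName and paths are placeholder assumptions, not my actual values):

<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/mydrupalsite
    <Directory /var/www/mydrupalsite>
        AllowOverride All
        # Apache 2.4 syntax; on Apache 2.2 use "Order allow,deny" and "Allow from all"
        Require all granted
    </Directory>
</VirtualHost>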
Here are the contents of the $databases variable in the settings file on my local machine (with the username changed; the password really is an empty string):
if (!isset($databases)) {
  $databases = array();
}
$databases['default']['default'] = array(
  'driver' => 'mysql',
  'database' => 'naicdrpl',
  'username' => 'mylocalsiteusername',
  'password' => '', // password is an empty string
  'host' => '127.0.0.1',
  'port' => 33067,
);
You can use the Backup and Migrate module for the migration. It is very easy to use.
Zip all files from your Drupal directory, then copy and unzip that archive on the new server.
Back up your database to a file with the Backup and Migrate module.
Install the Drupal site on the new server: run install.php and follow the steps. You will probably need to change settings in the /sites/default/settings.php file.
Go to /admin/modules and enable Backup and Migrate.
Go to /admin/config/system/backup_migrate/restore, upload your backup file, and click the Restore button.
NOTE 1 (database settings):
For the Drupal installation you of course need a database. Just create an empty DB and set up a user for it; also set a password for that DB user and grant them full privileges. In the settings.php file you then change this data:
if (!isset($databases)) {
  $databases = array();
}
$databases['default']['default'] = array(
  'driver' => 'mysql',
  'database' => 'nameofyourDB', // the name of the new empty database
  'username' => 'mylocalsiteDBusername', // the database user's name
  'password' => 'yourpassword', // always set a password for the DB user, for security reasons
  'host' => '127.0.0.1', // your database host (usually localhost)
  'port' => 33067, // your MySQL port (the MySQL default is 3306)
);
Basically, here you set up the Drupal site on the empty database you created on the new server. After that you fill that database using the Backup and Migrate module.
NOTE 2 (settings.php file permissions):
When you migrate a site (in your case, replace the old one with the new), you will want to change the settings.php file, and there can be a small problem with its write permissions. By default you usually cannot change settings.php, so in order to edit or replace that file you need to change the permissions of the file and of the folder it lives in. Without write permissions you can end up with the new site but the old settings.php file (the settings.php from the site you migrate will not overwrite the old one).
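A sketch of the permission change, assuming the standard sites/default location (tighten the permissions again afterwards):

chmod u+w sites/default sites/default/settings.php   # make the folder and file writable
# ...edit or replace settings.php here...
chmod u-w sites/default sites/default/settings.php   # lock them down again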
Hope this helps.
You can do it in the GUI with the Backup and Migrate module https://www.drupal.org/project/backup_migrate
Okay, it appears the issue was that settings.php did not have the needed configuration. Here is the solution that worked for me:
In order to determine what was needed, I created a temporary database in MySQL called temp_db
I commented out the database section of settings.php
I navigated to the site, which started the install process again
I went through the install process using the new temp_db
After completing this process, I went to settings.php and changed the db name to the correct one.
Went to the EC2 instance again and it worked!
Thanks so much to everyone who helped!
I am getting the following error while using the Paperclip gem. I have tried uploading a JPG and a PNG and neither works. It seems like I am getting a validation error. Any help would be awesome, thanks!
Image has contents that are not what they are reported to be
class Listing < ActiveRecord::Base
  has_attached_file :image, :styles => { :medium => "200x", :thumb => "100x100>" }, :default_url => "404.jpg"
  validates_attachment_content_type :image, :content_type => /\Aimage\/.*\Z/
end
If you are developing on Windows 7, you need to manually install file.exe and set its path. Please follow the instructions in this link:
installing file.exe manually.
After installing, configure the environment:
Open config/environments/development.rb
Add the following line: Paperclip.options[:command_path] = 'C:\Program Files (x86)\GnuWin32\bin'
Restart your Rails server
This worked for Windows 8:
1. Download file.exe.
2. Test that it is installed correctly: open cmd and run convert logo: logo.miff, then run imdisplay logo.miff.
You should see a sample logo image pop up on your Windows screen.
From here you can start configuring everything in the Rails app:
Open config/environments/development.rb
Add the following line: Paperclip.options[:command_path] = 'C:\tools\GnuWin32\bin'
If your Rails server is currently running, stop it and then run rails s again. After that you should be ready to go: upload an image in your app.
I'm having an issue where my chef.json attributes in my Vagrantfile seem to be getting ignored/overwritten.
Environment: Mac OS X 10.8 host, Ubuntu 12.04 guest virtualized in VirtualBox 4.18. Using Berkshelf for cookbook dependencies and the Opscode cookbooks for all of the recipes.
The box is spinning up fine, but I'm trying to configure Jetty to look more like it would if I had downloaded it and untarred the archive, rather than the bunch of symlinks from /usr/share/jetty to all over the filesystem that it seems to default to.
Here's the chef portion of my Vagrantfile:
config.vm.provision :chef_solo do |chef|
  chef.json = {
    :java => {
      :install_flavor => "oracle",
      :jdk_version => '7',
      :oracle => {
        :accept_oracle_license_terms => true
      }
    },
    :jetty => {
      :port => '8080',
      :home => '/opt/jetty',
      :config_dir => '/opt/jetty/conf',
      :log_dir => '/opt/jetty/log',
      :context_dir => '/opt/jetty/context',
      :webapp_dir => '/opt/jetty/webapp'
    }
  }

  chef.add_recipe "apt"
  chef.add_recipe "mongodb::default"
  chef.add_recipe "java"
  chef.add_recipe "jetty"
end
Chef seems to be reading the chef.json because I can change Jetty's port in the Vagrantfile.
I've tried to change these attributes in attributes/default.rb of the Jetty cookbook, but that didn't help either.
What am I missing?
Take a look at the block below in jetty/recipes/default.rb:
jetty_pkgs = value_for_platform(
  ["debian","ubuntu"] => {
    "default" => ["jetty","libjetty-extra"]
  },
  ["centos","redhat","fedora"] => {
    "default" => ["jetty6","jetty6-jsp-2.1","jetty6-management"]
  },
  "default" => ["jetty"]
)

jetty_pkgs.each do |pkg|
  package pkg do
    action :install
  end
end
For Debian/Ubuntu, the default recipe installs DEB packages from the official repository instead of doing what you want (downloading the binary from the official website and untarring it into your preferred location).
Because DEB packages have their own layout (run dpkg -L jetty to see their file/directory structure), I reckon that's why your attribute overrides in chef.json did not work.
You can enable debug output to see more information when you run the provisioning again:
VAGRANT_LOG=debug vagrant up
NOTE: You're probably better off writing your own cookbook to download the binary, untar it, set permissions, and do the other setup if you want Jetty installed the way you like ;-)
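A minimal sketch of what such a custom recipe could look like. The node['jetty']['binary_url'] attribute and the jetty user are illustrative assumptions you would define yourself; node['jetty']['home'] reuses the attribute from your chef.json:

# create a service user for Jetty (assumed name; adjust to taste)
user 'jetty' do
  system true
  shell '/bin/false'
end

# download the Jetty tarball (binary_url is a hypothetical attribute you define)
remote_file "#{Chef::Config[:file_cache_path]}/jetty.tar.gz" do
  source node['jetty']['binary_url']
  action :create_if_missing
end

# untar into the home directory set in chef.json, e.g. /opt/jetty
bash 'install_jetty' do
  code <<-EOH
    mkdir -p #{node['jetty']['home']}
    tar xzf #{Chef::Config[:file_cache_path]}/jetty.tar.gz \
        -C #{node['jetty']['home']} --strip-components=1
    chown -R jetty:jetty #{node['jetty']['home']}
  EOH
  not_if { ::File.exist?("#{node['jetty']['home']}/start.jar") }
end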