CloudSQL JDBC Logstash implementation - google-cloud-platform

Question
I need to query CloudSQL from Logstash but can't find any example out there.
Additional Context
I ran the build command for the Postgres socket factory jar:
mvn -P jar-with-dependencies clean package -DskipTests
and provided the resulting jar as the Logstash JDBC driver library (I tried the jar-with-dependencies build too):
input {
  jdbc {
    jdbc_driver_library => "/Users/gustavollermalylarrain/Documents/proyectos/labs/cloud-sql-jdbc-socket-factory/jdbc/postgres/target/postgres-socket-factory-1.6.4-SNAPSHOT.jar"
    jdbc_driver_class => "org.postgresql.Driver"
    jdbc_connection_string => "jdbc:postgresql:///people?cloudSqlInstance=cosmic-keep-148903:us-central1:llermaly&socketFactory=com.google.cloud.sql.postgres.SocketFactory&user=postgres&password=postgres"
    statement => "SELECT * FROM people;"
    jdbc_user => "postgres"
    jdbc_password => "postgres"
  }
}
output {
  stdout {
    codec => rubydebug {
    }
  }
}
I'm getting this error:
Error: java.lang.ClassNotFoundException: org.postgresql.Driver. Are you sure you've included the correct jdbc driver in :jdbc_driver_library?
Am I missing something?

The steps to query Cloud SQL from Logstash are:
Build the socket factory jar:
Clone https://github.com/GoogleCloudPlatform/cloud-sql-jdbc-socket-factory and run mvn -P jar-with-dependencies clean package -DskipTests
Copy files:
Copy the jar files from jdbc/postgres/target/ to logstash-core/lib/jars. Also download the PostgreSQL JDBC driver and copy that jar to logstash-core/lib/jars as well.
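For example, the copy step could look roughly like this (the jar names and the Logstash install path are illustrative; adjust them to your build output and your installation):
cp jdbc/postgres/target/postgres-socket-factory-1.6.4-SNAPSHOT.jar /usr/share/logstash/logstash-core/lib/jars/
cp postgresql-<version>.jar /usr/share/logstash/logstash-core/lib/jars/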
Configure Logstash:
The configuration file does not include the jar's path, because Logstash looks in its default folder logstash-core/lib/jars, where you copied the jar files.
input {
  jdbc {
    jdbc_driver_class => "org.postgresql.Driver"
    jdbc_connection_string => "jdbc:postgresql:///people?cloudSqlInstance=cosmic-keep-148903:us-central1:llermaly&socketFactory=com.google.cloud.sql.postgres.SocketFactory&user=postgres&password=postgres"
    statement => "SELECT * FROM people;"
    jdbc_user => "postgres"
    jdbc_password => "postgres"
  }
}
output {
  stdout {
    codec => rubydebug {
    }
  }
}
jdbc_user and jdbc_password are ignored; the ones you provide in the connection string are used instead. With the Postgres Cloud SQL connector you can use either Postgres users or IAM accounts.
Note: You need to run this from a Compute Engine instance to pick up the GCP credentials automatically, or set the credential environment variables manually.
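Outside Compute Engine, a minimal sketch of the manual approach is to point Application Default Credentials at a service account key before starting Logstash (the key path is a placeholder):
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json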

Related

"puppet agent --test" on client machine isn't getting the manifest from the Puppet master server

Issue
So I have two AWS instances: a Puppet master and a Puppet client. When I run sudo puppet agent --test on my client, the tasks defined in my master's manifest aren't applied to the client instance.
Where I am right now
puppetmaster is installed on the master instance
puppet is installed on client instance
Master just finished signing my client's certificate. No errors were displayed
Master has a /etc/puppet/manifests/site.pp
Client's puppet.conf file has a server=dns_of_master line
My Puppet version is 5.4.0. I'm using the default manifest configuration.
Here's the guide that I'm following: https://www.digitalocean.com/community/tutorials/getting-started-with-puppet-code-manifests-and-modules. The only changes are the site.pp content and that I'm using AWS.
If it helps, here's my AWS instances' AMI: ami-06d51e91cea0dac8d
Details
Here's the content of my master's /etc/puppet/manifests/site.pp:
node default {
  package { 'nginx':
    ensure => installed
  }
  service { 'nginx':
    ensure => running,
    require => Package['nginx']
  }
  file { '/tmp/hello_world':
    ensure => present,
    content => 'Hello, World!'
  }
}
The file has a permission of 777.
Here's the output when I run sudo puppet agent --test. This is after I ran sudo puppet agent --enable:
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Retrieving locales
Info: Caching catalog for my_client_dns
Info: Applying configuration version '1578968015'
Notice: Applied catalog in 0.02 seconds
I have looked at other StackOverflow posts with this issue. I know that my catalog is not getting applied, given the lack of status messages and the quick run time. Unfortunately, those solutions didn't apply to my case:
My site.pp is named correctly and is in the correct path, /etc/puppet/manifests
I didn't touch my master's puppet.conf file
I tried restarting the server with sudo systemctl, but nothing changed
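One additional check that can help in cases like this (a generic diagnostic, not something from the guide) is asking the master which manifest path it is actually serving:
sudo puppet config print manifest --section master
If that prints a path other than /etc/puppet/manifests, the master may be compiling an empty catalog from a different location, which would match the quick, change-free runs shown above.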
So I have fixed the issue. The guide that I was following requires an older version of Ubuntu (16.04, rather than the 18.04 I was using), which needs a different AMI than the one I used to create the instances.

How can I migrate a Drupal site to Amazon Web Services EC2?

I have a working local Drupal 7 site (with a local MySQL database), and now I'm trying to migrate it to Amazon Web Services (which is already running a test Drupal site successfully). I have all of my Drupal files in a directory mydrupalsite on the EC2 instance and the database has been added to the remote MySQL server, but I don't know what to do now to configure Apache to serve the new site.
What I have tried so far:
I added the file in /etc/apache2/sites-available (copied the old instance's file and changed the directory name where appropriate)
I changed the symlinks (deleted the old symlink and added a new one) in /etc/apache2/sites-enabled so that only the new file is pointed to.
Then I rebooted the server and navigated to the site, but it takes me to the install page (as though I were starting over).
Here are the contents of the $databases variable in the settings file on my local machine (with the username changed; the password really is an empty string):
if (!isset($databases)) {
  $databases = array();
}
$databases['default']['default'] = array(
  'driver' => 'mysql',
  'database' => 'naicdrpl',
  'username' => 'mylocalsiteusername',
  'password' => '', // password is an empty string
  'host' => '127.0.0.1',
  'port' => 33067,
);
You can use the Backup and Migrate module for the migration. It is very easy to use.
Zip all the files from your Drupal directory. Copy and unzip that archive on the new server.
Back up your database to a file with the Backup and Migrate module.
Install the Drupal site on the new server: run install.php and follow the steps. You will probably need to change the settings in the /sites/default/settings.php file.
Go to /admin/modules and enable Backup and Migrate.
Go to /admin/config/system/backup_migrate/restore, upload your backup file and click the restore button.
NOTE 1 (database settings):
For the Drupal installation you of course need a database. Just create an empty DB and set up a user for it. You should also set a password for that DB user and give them full privileges. In the settings.php file you then change this data:
if (!isset($databases)) {
  $databases = array();
}
$databases['default']['default'] = array(
  'driver' => 'mysql',
  'database' => 'nameofyourDB', // here you enter the name of the new empty database
  'username' => 'mylocalsiteDBusername', // here you enter the database user's name
  'password' => 'yourpassword', // always set a password for the database user for security reasons
  'host' => '127.0.0.1', // name of your host (usually localhost)
  'port' => 33067, // MySQL port (the default is 3306)
);
Basically, here you set up the Drupal site on the empty database you created on the new server. After that, you fill that database using the Backup and Migrate module.
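If the empty database and its user do not exist yet, the MySQL statements look roughly like this (the database name, user name and password are placeholders matching the settings.php example above):
CREATE DATABASE nameofyourDB;
CREATE USER 'mylocalsiteDBusername'@'localhost' IDENTIFIED BY 'yourpassword';
GRANT ALL PRIVILEGES ON nameofyourDB.* TO 'mylocalsiteDBusername'@'localhost';
FLUSH PRIVILEGES;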
NOTE 2 (settings.php file permissions):
When you migrate a site and (as in your case) replace the old one with the new, you will want to change the settings.php file, and there can be a small problem with the file's write permissions. It is common that by default you can't change settings.php, so in order to edit or replace the file you need to change the permissions of the file and also of the folders it sits in. With no write permissions you can end up with the new site but the old settings.php file (the settings.php from the site you are migrating will not overwrite the old one).
Hope this helps.
You can do it in the GUI with the Backup and Migrate module https://www.drupal.org/project/backup_migrate
Okay, it appears the issue was that settings.php did not have the needed configuration. Here is the solution that worked for me:
In order to determine what was needed, I created a temporary database in MySQL called temp_db
I commented out the database section of settings.php
I navigated to the site which started the install process again
I went through the install process using the new temp_db
After completing this process, I went to settings.php and changed the db name to the correct one.
Went to the EC2 instance again and it worked!
Thanks so much to everyone who helped!

Deploy jetty module on client with puppet

I want to try to install the following module on my clients using puppet.
https://forge.puppetlabs.com/maestrodev/jetty
So I installed the module on my master using:
puppet module install maestrodev-jetty
This seems to have worked:
$ puppet module list
/home/puppetmaster/.puppet/modules
├── maestrodev-jetty (v1.1.2)
├── maestrodev-wget (v1.5.6)
└── puppetlabs-stdlib (v4.3.2)
What I want to do next is install Jetty, set it up, and deploy it on my clients.
I made the following manifest for this:
class { 'jetty':
  version => "9.0.4.v20130625",
  home    => "/opt",
  user    => "jetty",
  group   => "jetty",
}
exec { 'stanbol-war-download':
  command => "wget -0 /opt/jetty/webapps/my.war http://some.url/my.war",
  path    => "/usr/bin/",
  creates => "/opt/jetty/webapps/my.war",
} ->
exec { 'jetty_start':
  command => "java -jar /opt/jetty/my.jar jetty.port=8181 -Xmx2048m -XX:MaxPermSize=512M",
  cwd     => "/opt/jetty",
  path    => "/usr/bin/",
  notify  => Service["jetty"],
  returns => [0, 254],
}
I have been trying for a while but I can't seem to get it installed and running on my clients without getting any sort of error, syntax or otherwise.

Chef Solo Jetty Cookbook Attributes

I'm having an issue where my chef.json attributes in my Vagrantfile seem to be getting ignored/overwritten.
Environment: Mac OS X 10.8 host, Ubuntu 12.04 guest virtualized in VirtualBox 4.18. Using Berkshelf for cookbook dependencies and the Opscode cookbooks for all of the recipes.
The box is spinning up fine, but I'm trying to configure Jetty to look more like it would if I had downloaded and un-tarred the archive, rather than the bunch of symlinks from /usr/share/jetty to locations all over the filesystem that it seems to default to.
Here's the chef portion of my Vagrantfile:
config.vm.provision :chef_solo do |chef|
  chef.json = {
    :java => {
      :install_flavor => "oracle",
      :jdk_version => '7',
      :oracle => {
        :accept_oracle_license_terms => true
      }
    },
    :jetty => {
      :port => '8080',
      :home => '/opt/jetty',
      :config_dir => '/opt/jetty/conf',
      :log_dir => '/opt/jetty/log',
      :context_dir => '/opt/jetty/context',
      :webapp_dir => '/opt/jetty/webapp'
    }
  }
  chef.add_recipe "apt"
  chef.add_recipe "mongodb::default"
  chef.add_recipe "java"
  chef.add_recipe "jetty"
end
Chef seems to be reading the chef.json because I can change Jetty's port in the Vagrantfile.
I've tried to change these attributes in attributes/default.rb of the Jetty cookbook, but that didn't help either.
What am I missing?
Take a look at the block below from jetty/recipes/default.rb:
jetty_pkgs = value_for_platform(
  ["debian","ubuntu"] => {
    "default" => ["jetty","libjetty-extra"]
  },
  ["centos","redhat","fedora"] => {
    "default" => ["jetty6","jetty6-jsp-2.1","jetty6-management"]
  },
  "default" => ["jetty"]
)
jetty_pkgs.each do |pkg|
  package pkg do
    action :install
  end
end
For Debian/Ubuntu, the default recipe uses DEB packages from the official repository instead of what you want (downloading the binary from the official website and untarring it into your preferred location).
Because the DEB packages have their own layout (run dpkg -L jetty to see their file/directory structure), I reckon that's why your attribute overrides in chef.json did not work.
You can enable debug output to see more information when you run provision again:
VAGRANT_LOG=debug vagrant up
NOTE: You're probably better off writing your own cookbook to download the binary, untar it, set permissions, and do any other setup if you want Jetty installed the way you like ;-)
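A minimal sketch of such a recipe, assuming node['jetty']['home'] from your chef.json plus a hypothetical node['jetty']['version'] attribute and a made-up download URL (everything below is illustrative, not part of the Opscode cookbook), could look like this:
# Illustrative recipe: fetch a Jetty distribution tarball and unpack it
# into node['jetty']['home'] instead of installing the distro packages.
jetty_version = node['jetty']['version']   # e.g. '9.0.4.v20130625' (assumed attribute)
jetty_home    = node['jetty']['home']      # e.g. '/opt/jetty'
tarball       = "jetty-distribution-#{jetty_version}.tar.gz"
tarball_url   = "http://example.com/jetty/#{tarball}"   # placeholder URL

group 'jetty' do
  system true
end

user 'jetty' do
  gid 'jetty'
  system true
  shell '/bin/false'
end

remote_file "#{Chef::Config[:file_cache_path]}/#{tarball}" do
  source tarball_url
  mode '0644'
end

directory jetty_home do
  owner 'jetty'
  group 'jetty'
  recursive true
end

execute 'untar-jetty' do
  command "tar xzf #{Chef::Config[:file_cache_path]}/#{tarball} -C #{jetty_home} --strip-components=1 && chown -R jetty:jetty #{jetty_home}"
  not_if { ::File.exist?("#{jetty_home}/start.jar") }
end
From there you could template the Jetty config and start scripts into that directory using the rest of your chef.json attributes.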

Is it possible to download or search application logs in Cloud Foundry

I am new to Cloud Foundry and am trying to find out if there is a way to download log files or search them in Cloud Foundry.
I know that I can open the log files using vmc files, but is there any other way of accessing the logs?
Thanks,
Kinjal
I think the easiest way to do this is using the VMC client library, 'cfoundry'.
The following ruby script connects and downloads the three main logs:
#!/usr/bin/env ruby
require 'rubygems'
require 'cfoundry'

creds = { :username => ARGV[0], :password => ARGV[1] }
app_name = ARGV[2]
files_to_dl = ['logs/staging.log', 'logs/stderr.log', 'logs/stdout.log']

c = CFoundry::Client.new "http://api.cloudfoundry.com"
c.login creds
app = c.app_by_name app_name

files_to_dl.each do |file|
  begin
    content = app.file(file)
    local_path = file.match(/\/([^\/]+)$/)[1]
    File.open(local_path, 'w') { |f| f.write(content) }
  rescue CFoundry::NotFound
    puts "404!"
  end
end
This script assumes you are using the latest version of VMC (older, legacy versions don't use cfoundry) and that you also pass in username, password and application name when calling the script. It will write the contents of the remote files locally.
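For example, if you save the script as download_logs.rb (the file name is arbitrary), you would call it like this:
ruby download_logs.rb you@example.com yourpassword your-app-name
and the three log files are written to the current directory.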