Configuring Blackfire on a base virtual box using Chef - profiling

I'm trying to give Blackfire.io (by Sensiolabs) a try to profile an existing PHP application running on a Vagrant machine (with PHP 5.3) on Mac.
I'm using Chef to provision my machine with Blackfire, but when running "vagrant provision" I get the following error:
default: STDERR: The server ID parameter is not set. Please run
blackfire-agent -register to configure it.
...which I already did.
This is my Vagrantfile:
is_windows = (RbConfig::CONFIG['host_os'] =~ /mswin|mingw|cygwin/)
Vagrant.configure("2") do |config|
  ..
  config.vm.box = "covex/ubuntu1204-x64"
  config.omnibus.chef_version = :latest
  config.vm.provision "chef_solo" do |chef|
    chef.json = {
      :blackfire => {
        :'server-id' => "d4860b49-be67-404b-9fa1-b..",
        :'server-token' => "c412751f30d6c724033d8408e.."
      }
    }
    chef.add_recipe "blackfire"
  end
end
I followed the installation steps on https://blackfire.io/getting-started, except for the Probe paragraph.
Is my Vagrantfile misconfigured so that it can't read the server ID and token? Is "brew install blackfire-php53" needed for this, and if so, is there a way to configure it through my Vagrantfile?

I'm guessing you are using https://supermarket.chef.io/cookbooks/blackfire.
You missed the agent node in the config tree:
{
  "blackfire" => {
    "agent" => {
      "server-id" => "your server-id",
      "server-token" => "your server-token",
    }
  }
}
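Applied to the Vagrantfile from the question, the chef.json block would then look something like this (a sketch assuming the Chef Supermarket blackfire cookbook; server ID and token truncated as in the question):
chef.json = {
  :blackfire => {
    :agent => {
      :'server-id' => "d4860b49-be67-404b-9fa1-b..",
      :'server-token' => "c412751f30d6c724033d8408e.."
    }
  }
}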

Related

Logstash AWS solving code 403 trying to reconnect

I'm trying to push documents from my local machine to an Elasticsearch server in AWS, and when trying to do so I get a 403 error and Logstash keeps trying to establish a connection with the server, like so:
[2021-05-09T11:09:52,707][TRACE][logstash.inputs.file ][main] Registering file input {:path=>["~/home/ubuntu/json_try/json_try.json"]}
[2021-05-09T11:09:52,737][DEBUG][logstash.javapipeline ][main] Shutdown waiting for worker thread {:pipeline_id=>"main", :thread=>"#<Thread:0x5033269f run>"}
[2021-05-09T11:09:53,441][DEBUG][logstash.outputs.amazonelasticsearch][main] Waiting for connectivity to Elasticsearch cluster. Retrying in 4s
[2021-05-09T11:09:56,403][INFO ][logstash.outputs.amazonelasticsearch][main] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>https://my-dom.co:8001/scans, :path=>"/"}
[2021-05-09T11:09:56,461][WARN ][logstash.outputs.amazonelasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://my-dom.co:8001/scans", :error_type=>LogStash::Outputs::AmazonElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '403' contacting Elasticsearch at URL 'https://my-dom.co:8001/scans/'"}
[2021-05-09T11:09:56,849][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2021-05-09T11:09:56,853][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2021-05-09T11:09:57,444][DEBUG][logstash.outputs.amazonelasticsearch][main] Waiting for connectivity to Elasticsearch cluster. Retrying in 8s
.
.
.
I'm using the following Logstash conf file:
input {
  file {
    type => "json"
    path => "~/home/ubuntu/json_try/json_try.json"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
output {
  amazon_es {
    hosts => ["https://my-dom.co/scans"]
    port => 8001
    ssl => true
    region => "us-east-1b"
    index => "snapshot-%{+YYYY.MM.dd}"
  }
}
Also, I've exported the AWS keys for the SSL to work. Is there anything I'm missing in order for the connection to succeed?
I've been able to solve this by using elasticsearch as my output plugin instead of amazon_es.
This requires the cloud_id of the target AWS node, its cloud_auth, and the target index in Elasticsearch for the data to be sent to. So the conf file will look something like this:
input {
  file {
    type => "json"
    path => "~/home/ubuntu/json_try/json_try.json"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
output {
  elasticsearch {
    cloud_id => "node_name:node_hash"
    cloud_auth => "auth_hash"
    index => "snapshot-%{+YYYY.MM.dd}"
  }
}

nixops: how to use local ssh key when deploying on machine with existing nixos (targetEnv is none)?

I have a machine with NixOS (provisioned using Terraform, config), and I want to connect to it using deployment.targetHost = ipAddress and deployment.targetEnv = "none".
But I can't configure nixops to use the /secrets/stage_ssh_key SSH key.
This is not working (it's actually not documented; I found it here: https://github.com/NixOS/nixops/blob/d4e5b779def1fc9e7cf124930d0148e6bd670051/nixops/backends/none.py#L33-L35):
{
  stage =
    { pkgs, ... }:
    {
      deployment.targetHost = (import ./nixos-generated/stage.nix).terraform.ip;
      deployment.targetEnv = "none";
      deployment.none.sshPrivateKey = builtins.readFile ./secrets/stage_ssh_key;
      deployment.none.sshPublicKey = builtins.readFile ./secrets/stage_ssh_key.pub;
      deployment.none.sshPublicKeyDeployed = true;
      environment.systemPackages = with pkgs; [
        file
      ];
    };
}
nixops ssh stage asks for a password; expected: login without a password.
nixops ssh stage -i ./secrets/stage_ssh_key works as expected; no password is asked for.
How to reproduce:
download the repo
rm -rf secrets/*
add AWS keys in secrets/aws.nix:
{
  EC2_ACCESS_KEY = "XXXX";
  EC2_SECRET_KEY = "XXXX";
}
nix-shell
make generate_stage_ssh_key
terraform apply
make nixops_create
nixops deploy asks for a password

Vagrant usbfilter makes the guest machine enter an invalid state

Based on the following instructions:
https://gist.github.com/dergachev/3866825#vagrant-setup
Ubuntu Linaro
uname -a
Linux ken-desktop 3.11.0-18-generic #32-Ubuntu SMP Tue Feb 18 21:11:14 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
cat /proc/version
Linux version 3.11.0-18-generic (buildd@toyol) (gcc version 4.8.1 (Ubuntu/Linaro 4.8.1-10ubuntu8) ) #32-Ubuntu SMP Tue Feb 18 21:11:14 UTC 2014
VirtualBox 4.2
VBoxManage --version
4.2.16_Ubuntur86992
Vagrant 1.5 vagrant_1.5.0_x86_64.deb
In a cookbooks folder I cloned the following chef cookbooks:
git clone git://github.com/opscode-cookbooks/vim.git
git clone git://github.com/opscode-cookbooks/git.git
git clone git://github.com/opscode-cookbooks/apt.git
git clone git://github.com/tiokksar/chef-oh-my-zsh-solo.git
git clone git://github.com/opscode-cookbooks/openssl.git
git clone git://github.com/getaroom/chef-couchbase.git
I also installed this:
https://github.com/dotless-de/vagrant-vbguest
vagrant plugin install vagrant-vbguest
I'm trying to make a nice Vagrantfile that creates a precise64 VM with USB automatically mounted.
But each time I try to add a usbfilter to my VirtualBox VM, I end up with this message:
% vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'hashicorp/precise64'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'hashicorp/precise64' is up to date...
==> default: Setting the name of the VM: smartofficeVM_default_1395303674511_42792
==> default: The cookbook path '/home/ken/smartofficeVM/databags' doesn't exist. Ignoring...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
==> default: Forwarding ports...
default: 3000 => 3000 (adapter 1)
default: 22 => 2222 (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
default: Error: Connection refused. Retrying...
default: Error: Connection refused. Retrying...
default: Error: Connection refused. Retrying...
default: Error: Connection refused. Retrying...
default: Error: Connection refused. Retrying...
The guest machine entered an invalid state while waiting for it
to boot. Valid states are 'starting, running'. The machine is in the
'poweroff' state. Please verify everything is configured
properly and try again.
If the provider you're using has a GUI that comes with it,
it is often helpful to open that and watch the machine, since the
GUI often has more helpful error messages than Vagrant can retrieve.
For example, if you're using VirtualBox, run `vagrant up` while the
VirtualBox GUI is open.
My configuration file is the following:
% cat Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :
# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
# All Vagrant configuration is done here. The most common configuration
# options are documented and commented below. For a complete reference,
# please see the online documentation at vagrantup.com.
# Every Vagrant virtual environment requires a box to build off of.
config.vm.box = "hashicorp/precise64"
# The url from where the 'config.vm.box' box will be fetched if it
# doesn't already exist on the user's system.
# config.vm.box_url = "http://domain.com/path/to/above.box"
# Create a forwarded port mapping which allows access to a specific port
# within the machine from a port on the host machine. In the example below,
# accessing "localhost:8080" will access port 80 on the guest machine.
# config.vm.network "forwarded_port", guest: 80, host: 8080
# Create a private network, which allows host-only access to the machine
# using a specific IP.
# config.vm.network "private_network", ip: "192.168.33.10"
# Create a public network, which generally matched to bridged network.
# Bridged networks make the machine appear as another physical device on
# your network.
config.vm.network "public_network"
# If true, then any SSH connections made will enable agent forwarding.
# Default value: false
# config.ssh.forward_agent = true
# Share an additional folder to the guest VM. The first argument is
# the path on the host to the actual folder. The second argument is
# the path on the guest to mount the folder. And the optional third
# argument is a set of non-required options.
# config.vm.synced_folder "../data", "/vagrant_data"
# Provider-specific configuration so you can fine-tune various
# backing providers for Vagrant. These expose provider-specific options.
# Example for VirtualBox:
#
# config.vm.provider "virtualbox" do |vb|
# # Don't boot with headless mode
# vb.gui = true
#
# # Use VBoxManage to customize the VM. For example to change memory:
# vb.customize ["modifyvm", :id, "--memory", "1024"]
# end
#
# View the documentation for the provider you're using for more
# information on available options.
# Enable provisioning with Puppet stand alone. Puppet manifests
# are contained in a directory path relative to this Vagrantfile.
# You will need to create the manifests directory and a manifest in
# the file hashicorp/precise32.pp in the manifests_path directory.
#
# An example Puppet manifest to provision the message of the day:
#
# # group { "puppet":
# # ensure => "present",
# # }
# #
# # File { owner => 0, group => 0, mode => 0644 }
# #
# # file { '/etc/motd':
# # content => "Welcome to your Vagrant-built virtual machine!
# # Managed by Puppet.\n"
# # }
#
# config.vm.provision "puppet" do |puppet|
# puppet.manifests_path = "manifests"
# puppet.manifest_file = "site.pp"
# end
# Enable provisioning with chef solo, specifying a cookbooks path, roles
# path, and data_bags path (all relative to this Vagrantfile), and adding
# some recipes and/or roles.
#
config.vm.provision "chef_solo" do |chef|
chef.cookbooks_path = "cookbooks"
#chef.roles_path = "../my-recipes/roles"
chef.data_bags_path = "databags"
#chef.add_role "web"
chef.add_recipe "apt"
chef.add_recipe "zsh"
chef.add_recipe "chef-oh-my-zsh-solo"
chef.add_recipe "vim"
chef.add_recipe "git"
chef.add_recipe "openssl"
chef.add_recipe "couchbase::server"
# setup users (from data_bags/users/*.json)
# chef.add_recipe "users::sysadmins" # creates users and sysadmin group
# chef.add_recipe "users"
# chef.add_recipe "users::sysadmin_sudo" # adds %sysadmin group to sudoers
# homesick_agent and its dependencies
# chef.add_recipe "root_ssh_agent::ppid" # maintains agent during 'sudo su root'
# chef.add_recipe "ssh_known_hosts"
# populates /etc/ssh/ssh_known_hosts from data_bags/ssh_known_hosts/*.json
# You may also specify custom JSON attributes:
#chef.json = { :users => "admin" }
chef.json = {
"couchbase" => {
"server"=> {
"password" => "123"
}
}
}
chef.log_level = :debug
end
# Enable provisioning with chef server, specifying the chef server URL,
# and the path to the validation key (relative to this Vagrantfile).
#
# The Opscode Platform uses HTTPS. Substitute your organization for
# ORGNAME in the URL and validation key.
#
# If you have your own Chef Server, use the appropriate URL, which may be
# HTTP instead of HTTPS depending on your configuration. Also change the
# validation key to validation.pem.
#
# config.vm.provision "chef_client" do |chef|
# chef.chef_server_url = "https://api.opscode.com/organizations/ORGNAME"
# chef.validation_key_path = "ORGNAME-validator.pem"
# end
#
# If you're using the Opscode platform, your validator client is
# ORGNAME-validator, replacing ORGNAME with your organization name.
#
# If you have your own Chef Server, the default validation client name is
# chef-validator, unless you changed the configuration.
#
# chef.validation_client_name = "ORGNAME-validator"
end
One detail: if I remove the following lines, it starts properly (but no USB is available):
vb.customize ["modifyvm", :id, "--usb", "on"]
vb.customize ["modifyvm", :id, "--usbehci", "on"]
EDIT
Logs from the VBox.log file:
cat VBox.log
VirtualBox VM 4.2.16_Ubuntu r86992 linux.amd64 (Sep 21 2013 11:46:57) release log
00:00:00.033561 Log opened 2014-03-20T08:21:15.686771000Z
00:00:00.033570 OS Product: Linux
00:00:00.033572 OS Release: 3.11.0-18-generic
00:00:00.033575 OS Version: #32-Ubuntu SMP Tue Feb 18 21:11:14 UTC 2014
00:00:00.033610 DMI Product Name:
00:00:00.033624 DMI Product Version:
00:00:00.033756 Host RAM: 3882MB total, 3328MB available
00:00:00.033763 Executable: /usr/lib/virtualbox/VBoxHeadless
00:00:00.033765 Process ID: 10288
00:00:00.033767 Package type: LINUX_64BITS_GENERIC (OSE)
00:00:00.039722 Installed Extension Packs:
00:00:00.039747 VNC (Version: 4.2.16 r86992; VRDE Module: VBoxVNC)
00:00:00.046777 SUP: Loaded VMMR0.r0 (/usr/lib/virtualbox/VMMR0.r0) at 0xffffffffa0518020 - ModuleInit at ffffffffa052e0f0 and ModuleTerm at ffffffffa052e390
00:00:00.046820 SUP: VMMR0EntryEx located at ffffffffa052f510, VMMR0EntryFast at ffffffffa052f240 and VMMR0EntryInt at ffffffffa052f230
00:00:00.049809 OS type: 'Ubuntu_64'
00:00:00.073143 File system of '/home/ken/VirtualBox VMs/smartofficeVM_default_1395303674511_42792/Snapshots' (snapshots) is unknown
00:00:00.073166 File system of '/home/ken/VirtualBox VMs/smartofficeVM_default_1395303674511_42792/box-disk1.vmdk' is ext4
00:00:00.091096 VMSetError: /build/buildd/virtualbox-4.2.16-dfsg/src/VBox/Main/src-client/ConsoleImpl2.cpp(2300) int Console::configConstructorInner(PVM, util::AutoWriteLock*); rc=VERR_NOT_FOUND
00:00:00.091111 VMSetError: Implementation of the USB 2.0 controller not found!
00:00:00.091113 Because the USB 2.0 controller state is part of the saved VM state, the VM cannot be started. To fix this problem, either install the 'Oracle VM VirtualBox Extension Pack' or disable USB 2.0 support in the VM settings
00:00:00.217513 ERROR [COM]: aRC=NS_ERROR_FAILURE (0x80004005) aIID={db7ab4ca-2a3f-4183-9243-c1208da92392} aComponent={Console} aText={Implementation of the USB 2.0 controller not found!
00:00:00.217535 Because the USB 2.0 controller state is part of the saved VM state, the VM cannot be started. To fix this problem, either install the 'Oracle VM VirtualBox Extension Pack' or disable USB 2.0 support in the VM settings (VERR_NOT_FOUND)}, preserve=false
00:00:00.224473 Power up failed (vrc=VERR_NOT_FOUND, rc=NS_ERROR_FAILURE (0X80004005))
VAGRANT_LOG=debug vagrant up log:
http://pastebin.com/2GMhmy9T
Does anybody have expertise on this topic?
Thank you very much.
SOLUTION: I thought it was already installed when reading "00:00:00.039722 Installed Extension Packs: 00:00:00.039747 VNC (Version: 4.2.16 r86992; VRDE Module: VBoxVNC)", but in fact I had to install the Extension Pack on the host too. It's a bit confusing. Thank you very much. You can add a proper answer, I'll validate it.
The following line in the VBox logs:
00:00:00.217535 Because the USB 2.0 controller state is part of the saved VM state, the VM cannot be started. To fix this problem, either install the 'Oracle VM VirtualBox Extension Pack' or disable USB 2.0 support in the VM settings (VERR_NOT_FOUND)}, preserve=false
highlights that you have to install the VirtualBox Extension Pack in order to fix the issue.
Download and install the VirtualBox Extension Pack from there (matching your VirtualBox version). It should solve your problem.
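For example, on the Ubuntu host from the question this could be done roughly as follows (the exact file name and build number are assumptions based on the 4.2.16 r86992 version shown in the logs, so adjust to your installed version):
# Download the Extension Pack matching the installed VirtualBox version (assumed file name)
wget http://download.virtualbox.org/virtualbox/4.2.16/Oracle_VM_VirtualBox_Extension_Pack-4.2.16-86992.vbox-extpack
# Install it on the host
sudo VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-4.2.16-86992.vbox-extpack
# Verify that "Oracle VM VirtualBox Extension Pack" is now listed
VBoxManage list extpacks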

Logstash, EC2 and elasticsearch

I have two Elasticsearch nodes set up in EC2 and am trying to use Logstash with them. I get this error when I run Logstash:
log4j, [2014-02-24T10:45:32.722] WARN: org.elasticsearch.discovery.zen.ping.unicast: [Ishihara, Shirow] failed to send ping to [[#zen_unicast_1#][inet[/10.110.65.91:9300]]]
org.elasticsearch.transport.RemoteTransportException: Failed to deserialize exception response from stream
Caused by: org.elasticsearch.transport.TransportSerializationException: Failed to deserialize exception response from stream
at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:169)
at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:123)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
That's a snippet of it.
Here is the conf file I am using with logstash:
input {
  redis {
    host => "10.110.65.91"
    # these settings should match the output of the agent
    data_type => "list"
    key => "logstash"
    # We use the 'json' codec here because we expect to read
    # json events from redis.
    codec => json
  }
}
output {
  stdout { debug => true debug_format => "json" }
  elasticsearch {
    host => "10.110.65.91"
    cluster => searchbuild
  }
}
I'm running Logstash on .91 (I have a second terminal window open). Am I missing something?
I had to change "elasticsearch" to "elasticsearch_http".
Fixed.
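For reference, a minimal sketch of what the adjusted output section could look like with that change (host value taken from the question; anything beyond host is an assumption about the old elasticsearch_http plugin, so check your Logstash version's docs):
output {
  stdout { debug => true debug_format => "json" }
  elasticsearch_http {
    host => "10.110.65.91"  # talks to Elasticsearch over HTTP (port 9200 by default)
  }
}
Since the HTTP output does not join the Elasticsearch cluster as a node, the cluster setting should not be needed here.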

Gradle: How do I stop Jetty if a JMeter test fails

How do I stop Jetty if a JMeter test fails?
My Gradle script:
apply plugin: 'jetty'
apply plugin: 'jmeter'
jmeterRun {
    doFirst() {
        jettyRunWar.httpPort = 8080 // Port for test
        println "Starting Jetty on port:" + jettyRunWar.httpPort
        jettyRunWar.daemon = true
        jettyRunWar.execute()
    }
    doLast() {
        println "Stopping Jetty"
        jettyStopWar.stopPort = 8091 // Port for stop signal
        jettyStopWar.stopKey = 'stopKey'
        jettyStopWar.execute()
    }
    jmeterTestFiles = [
        file("src/test/jmeter/Tests.jmx")
    ]
}
You can use the method finalizedBy to ensure that Jetty is stopped no matter whether JMeter runs successfully or fails.
jmeterRun {
    dependsOn jettyRunWar
    finalizedBy jettyStopWar
}
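Putting it together with the script from the question, a minimal sketch could look like this (task names, ports, and the stop key are taken from the question; configuring them directly on the tasks instead of inside doFirst/doLast is an assumption):
apply plugin: 'jetty'
apply plugin: 'jmeter'

jettyRunWar {
    httpPort = 8080      // Port for test
    stopPort = 8091      // Port for stop signal
    stopKey = 'stopKey'
    daemon = true
}

jettyStopWar {
    stopPort = 8091
    stopKey = 'stopKey'
}

jmeterRun {
    dependsOn jettyRunWar      // start Jetty before the JMeter run
    finalizedBy jettyStopWar   // stop Jetty even if the JMeter run fails
    jmeterTestFiles = [
        file("src/test/jmeter/Tests.jmx")
    ]
}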
Try the below settings:
In doFirst():
    jettyRunWar.stopPort = 8090
    jettyRunWar.stopKey = 'stopKey'
In doLast():
    jettyStop.stopPort = 8090
    jettyStop.stopKey = 'stopKey'
Not sure if it's a bug related to this link or whether you just need to specify a stopPort for Jetty to listen on.
I was having problems stopping Jetty after running the jettyRunWar task in IntelliJ, but having those four settings in my build.gradle allowed me to stop Jetty by running the jettyStop task.