Permission error when trying to connect AWS to docker using SSH authentication - amazon-web-services

I'm not sure why, but I've been looking everywhere and have tried about 20 different things today to fix this, with no luck.
I'm trying to use SSH authentication to link my website (a Django app running inside Docker) with an Amazon AWS EC2 instance.
The error is really frustrating and doesn't change no matter what I do.
This is the error (I've removed the DNS):
ssh ec2-user@ec2-[DNS]-eu-west-2.compute.amazonaws.com: Permission denied (publickey,gssapi-keyex,gssapi-with-mic)
I've changed the sshd config file to this (still no luck; this is the result of four different tutorials all saying different things):
# $OpenBSD: sshd_config,v 1.104 2021/07/02 05:11:21 dtucker Exp $
# This is the sshd server system-wide configuration file. See
# sshd_config(5) for more information.
# This sshd was compiled with PATH=/bin:/usr/sbin:/sbin:/usr/bin
# The strategy used for options in the default sshd_config shipped with
# OpenSSH is to specify options with their default value where
# possible, but leave them commented. Uncommented options override the
# default value.
#Port 22
#AddressFamily any
#ListenAddress 0.0.0.0
#ListenAddress ::
#HostKey /etc/ssh/ssh_host_rsa_key
#HostKey /etc/ssh/ssh_host_ecdsa_key
#HostKey /etc/ssh/ssh_host_ed25519_key
# Ciphers and keying
#RekeyLimit default none
# Logging
#SyslogFacility AUTH
#LogLevel INFO
# Authentication:
#LoginGraceTime 2m
PermitRootLogin no
#StrictModes yes
#MaxAuthTries 6
#MaxSessions 10
PubkeyAuthentication yes
# The default is to check both .ssh/authorized_keys and .ssh/authorized_keys2
# but this is overridden so installations will only check .ssh/authorized_keys
AuthorizedKeysFile .ssh/authorized_keys
#AuthorizedPrincipalsFile none
#AuthorizedKeysCommand none
#AuthorizedKeysCommandUser nobody
# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts
#HostbasedAuthentication no
# Change to yes if you don't trust ~/.ssh/known_hosts for
# HostbasedAuthentication
#IgnoreUserKnownHosts no
# Don't read the user's ~/.rhosts and ~/.shosts files
#IgnoreRhosts yes
# To disable tunneled clear text passwords, change to no here!
PasswordAuthentication yes
#PermitEmptyPasswords no
# Change to no to disable s/key passwords
KbdInteractiveAuthentication no
# Kerberos options
#KerberosAuthentication no
#KerberosOrLocalPasswd yes
#KerberosTicketCleanup yes
#KerberosGetAFSToken no
# GSSAPI options
GSSAPIAuthentication yes
GSSAPICleanupCredentials no
# Set this to 'yes' to enable PAM authentication, account processing,
# and session processing. If this is enabled, PAM authentication will
# be allowed through the KbdInteractiveAuthentication and
# PasswordAuthentication. Depending on your PAM configuration,
# PAM authentication via KbdInteractiveAuthentication may bypass
# the setting of "PermitRootLogin without-password".
# If you just want the PAM account and session checks to run without
# PAM authentication, then enable this but set PasswordAuthentication
# and KbdInteractiveAuthentication to 'no'.
UsePAM yes
#AllowAgentForwarding yes
#AllowTcpForwarding yes
#GatewayPorts no
#X11Forwarding no
#X11DisplayOffset 10
#X11UseLocalhost yes
#PermitTTY yes
#PrintMotd yes
#PrintLastLog yes
#TCPKeepAlive yes
#PermitUserEnvironment no
#Compression delayed
#ClientAliveInterval 0
#ClientAliveCountMax 3
#UseDNS no
#PidFile /etc/ssh/sshd.pid
#MaxStartups 10:30:100
#PermitTunnel no
#ChrootDirectory none
#VersionAddendum none
# no default banner path
#Banner none
# override default of no subsystems
Subsystem sftp /usr/lib/ssh/sftp-server
# Example of overriding settings on a per-user basis
#Match User anoncvs
# X11Forwarding no
# AllowTcpForwarding no
# PermitTTY no
# ForceCommand cvs server
I've truly run out of ideas on this one. Any help would be greatly appreciated, as this is the final step in a three-month personal programming project that seems to never end.
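For what it's worth, this error usually points at the key the client offers rather than at sshd_config. A minimal sketch of the client-side checks, assuming a hypothetical key file mykey.pem (the one downloaded when the key pair was created) and the same placeholder DNS name:

chmod 400 mykey.pem
ssh -v -i mykey.pem ec2-user@ec2-[DNS]-eu-west-2.compute.amazonaws.com

On the instance itself, the key and directory permissions have to be strict, or sshd (with the default StrictModes yes) refuses to use them:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys

The -v output lists every key the client tries, which usually shows whether the right key is being offered at all.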

Related

Failed to determine the health of the cluster when setting the initial password in Elasticsearch

I tried to install Elasticsearch on an AWS EC2 instance.
I tried to set the initial password for Elasticsearch with the following command and got this error message:
/usr/share/elasticsearch/bin
$ ./elasticsearch-reset-password -u elasticsearch
ERROR: Failed to determine the health of the cluster.
Here is my elasticsearch.yml file
======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:#
#cluster.name: aaaaaaa
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
##node.name: ${HOSTNAME}
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
# Path to log files:
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
#network.bind_host: 0.0.0.0
#network.publish_host: ${HOSTNAME}
network.host: ["0.0.0.0"]
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: - ${HOSTNAME}
discovery.type: single-node
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: - ${HOSTNAME}
#
# For more information, consult the discovery and cluster formation module documentation.
#
# --------------------------------- Readiness ----------------------------------
#
# Enable an unauthenticated TCP readiness endpoint on localhost
#
#readiness.port: 9399
#
# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
#action.destructive_requires_name: false
#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically
# generated to configure Elasticsearch security features on 22-10-2022 08:48:06
#
# --------------------------------------------------------------------------------
# Enable security features
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
enabled: true
keystore.path: certs/http.p12
# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
enabled: true
verification_mode: certificate
keystore.path: certs/transport.p12
truststore.path: certs/transport.p12
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
# cluster.initial_master_nodes: ["aaaaaaa"]
# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: 0.0.0.0
# Allow other nodes to join the cluster from anywhere
# Connections are encrypted and mutually authenticated
#transport.host: 0.0.0.0
#----------------------- END SECURITY AUTO CONFIGURATION -------------------------
This error means that your cluster is not ready yet when you try to change the cluster password.
First, configure your cluster and make sure it is healthy, and only then change the password.
The default username and password for Elasticsearch are "elastic" and "changeme".
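A minimal way to check that the node is actually up before retrying the reset tool, assuming it runs locally with the auto-generated self-signed HTTP certificate (-k skips certificate verification; curl will prompt for the elastic password):

sudo systemctl status elasticsearch
curl -k -u elastic https://localhost:9200/_cluster/health?pretty

If the service is not running at all, the startup errors under /var/log/elasticsearch/ usually say why; with bootstrap.memory_lock: true, a common culprit is a memlock limit that has not been raised.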

Log Kong API logs to syslog

I use the syslog plugin that ships with the Kong API gateway, and I have the following plugin configuration:
{ "api_id": "some_id",
"id": "some_id",
"created_at": 4544444,
"enabled": true,
"name": "syslog",
"config":
{ "client_errors_severity": "info",
"server_errors_severity": "info",
"successful_severity": "info",
"log_level": "emerg"
} }
I use CentOS 7, and I have the following conf file (/etc/rsyslog.conf):
# rsyslog configuration file
# For more information see /usr/share/doc/rsyslog-*/rsyslog_conf.html
# If you experience problems, see http://www.rsyslog.com/doc/troubleshoot.html
#### MODULES ####
# The imjournal module below is now used as a message source instead of imuxsock.
$ModLoad imuxsock # provides support for local system logging (e.g. via logger command)
$ModLoad imjournal # provides access to the systemd journal
#$ModLoad imklog # reads kernel messages (the same are read from journald)
#$ModLoad immark # provides --MARK-- message capability
# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514
# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514
#### GLOBAL DIRECTIVES ####
# Where to place auxiliary files
$WorkDirectory /var/lib/rsyslog
# Use default timestamp format
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
$template precise,"%syslogpriority%,%syslogfacility%,%timegenerated%,%HOSTNAME%,%syslogtag%,%msg%\n"
$ActionFileDefaultTemplate precise
# File syncing capability is disabled by default. This feature is usually not required,
# not useful and an extreme performance hit
#$ActionFileEnableSync on
# Include all config files in /etc/rsyslog.d/
$IncludeConfig /etc/rsyslog.d/*.conf
# Turn off message reception via local log socket;
# local messages are retrieved through imjournal now.
$OmitLocalLogging on
# File to store the position in the journal
$IMJournalStateFile imjournal.state
#### RULES ####
# Log all kernel messages to the console.
# Logging much else clutters up the screen.
#kern.* /dev/console
# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none /var/log/messages
# The authpriv file has restricted access.
authpriv.* /var/log/secure
# Log all the mail messages in one place.
mail.* -/var/log/maillog
# Log cron stuff
cron.* /var/log/cron
# Everybody gets emergency messages
*.emerg :omusrmsg:*
# Save news errors of level crit and higher in a special file.
uucp,news.crit /var/log/spooler
# Save boot messages also to boot.log
local7.* /var/log/boot.log
# ### begin forwarding rule ###
# The statement between the begin ... end define a SINGLE forwarding
# rule. They belong together, do NOT split them. If you create multiple
# forwarding rules, duplicate the whole block!
# Remote Logging (we use TCP for reliable delivery)
#
# An on-disk queue is created for this action. If the remote host is
# down, messages are spooled to disk and sent when it is up again.
#$ActionQueueFileName fwdRule1 # unique name prefix for spool files
#$ActionQueueMaxDiskSpace 1g # 1gb space limit (use as much as possible)
#$ActionQueueSaveOnShutdown on # save messages to disk on shutdown
#$ActionQueueType LinkedList # run asynchronously
#$ActionResumeRetryCount -1 # infinite retries if host is down
# remote host is: name/ip:port, e.g. 192.168.0.1:514, port optional
#*.* ##remote-host:514
# ### end of the forwarding rule ###
The syslog daemon is running on the CentOS 7 machine, and it is configured with a logging level severity (info) the same as or lower than the configured config.log_level (emerg).
Even so, I am unable to see any logs in
/var/log/messages
even though the syslog daemon is configured with a logging level severity the same as or lower than config.log_level, which should be enough for proper logging.
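A quick way to check that rsyslog itself is receiving and writing messages, independently of Kong, is to send a test message by hand (the message text is arbitrary):

logger -p user.info "kong syslog test"
sudo tail -n 5 /var/log/messages

If the test message shows up but Kong's entries do not, the problem is likely on the Kong side; if it does not show up, rsyslog or its rules are the problem.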

How to configure OpenSMTPD with Amazon SES?

Amazon has instructions for Postfix and Sendmail, but not OpenSMTPD, so I'm adding them here.
Tested with OpenBSD 5.8
Verify your domain and a sender in AWS SES console. Save your SMTP Settings.
Set up the SMTP authentication details in the mail secrets database (replacing $smtpUsername:$smtpPassword with the values from step 1)
# touch /etc/mail/secrets
# chmod 640 /etc/mail/secrets
# chown root:_smtpd /etc/mail/secrets
# echo "ses $smtpUsername:$smtpPassword" >> /etc/mail/secrets
# makemap /etc/mail/secrets
Configure OpenSMTPD:
# nano /etc/mail/smtpd.conf
listen on lo0
table aliases db:/etc/mail/aliases.db
table secrets db:/etc/mail/secrets.db
accept for local alias <aliases> deliver to mbox
accept from local for any relay via tls+auth://ses@email-smtp.us-east-1.amazonaws.com auth <secrets>
Restart OpenSMTPD:
# rcctl restart smtpd
Test it:
# sendmail -v -f verified-sender@verified-domain.com to@example.com
Subject: test subject
test body
^D
Errors?
Watch your line breaks in smtpd.conf.
# smtpd -n checks for syntax errors in smtpd.conf
Try port 587 if your machine is blocking port 25 (add :587 to the end of the AWS URL in smtpd.conf).
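For that last point, the port-587 variant of the relay line would look roughly like this (same relay host as above, only the port appended):

accept from local for any relay via tls+auth://ses@email-smtp.us-east-1.amazonaws.com:587 auth <secrets>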

Vagrant usbfilter makes the guest machine enter an invalid state

Based on the following instructions:
https://gist.github.com/dergachev/3866825#vagrant-setup
Ubuntu Linaro
uname -a
Linux ken-desktop 3.11.0-18-generic #32-Ubuntu SMP Tue Feb 18 21:11:14 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
cat /proc/version
Linux version 3.11.0-18-generic (buildd@toyol) (gcc version 4.8.1 (Ubuntu/Linaro 4.8.1-10ubuntu8) ) #32-Ubuntu SMP Tue Feb 18 21:11:14 UTC 2014
Virtualbox-4.2
VBoxManage --version
4.2.16_Ubuntur86992
Vagrant 1.5 vagrant_1.5.0_x86_64.deb
In a cookbooks folder I cloned the following chef cookbooks:
git clone git://github.com/opscode-cookbooks/vim.git
git clone git://github.com/opscode-cookbooks/git.git
git clone git://github.com/opscode-cookbooks/apt.git
git clone git://github.com/tiokksar/chef-oh-my-zsh-solo.git
git clone git://github.com/opscode-cookbooks/openssl.git
git clone git://github.com/getaroom/chef-couchbase.git
I also installed this:
https://github.com/dotless-de/vagrant-vbguest
vagrant plugin install vagrant-vbguest
I'm trying to make a nice Vagrantfile that creates a precise64 VM with USB automatically mounted.
But each time I try to add a usbfilter to my VirtualBox VM, I end up with this message:
% vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'hashicorp/precise64'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'hashicorp/precise64' is up to date...
==> default: Setting the name of the VM: smartofficeVM_default_1395303674511_42792
==> default: The cookbook path '/home/ken/smartofficeVM/databags' doesn't exist. Ignoring...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
==> default: Forwarding ports...
default: 3000 => 3000 (adapter 1)
default: 22 => 2222 (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
default: Error: Connection refused. Retrying...
default: Error: Connection refused. Retrying...
default: Error: Connection refused. Retrying...
default: Error: Connection refused. Retrying...
default: Error: Connection refused. Retrying...
The guest machine entered an invalid state while waiting for it
to boot. Valid states are 'starting, running'. The machine is in the
'poweroff' state. Please verify everything is configured
properly and try again.
If the provider you're using has a GUI that comes with it,
it is often helpful to open that and watch the machine, since the
GUI often has more helpful error messages than Vagrant can retrieve.
For example, if you're using VirtualBox, run `vagrant up` while the
VirtualBox GUI is open.
my configuration file is the following:
% cat Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :
# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
# All Vagrant configuration is done here. The most common configuration
# options are documented and commented below. For a complete reference,
# please see the online documentation at vagrantup.com.
# Every Vagrant virtual environment requires a box to build off of.
config.vm.box = "hashicorp/precise64"
# The url from where the 'config.vm.box' box will be fetched if it
# doesn't already exist on the user's system.
# config.vm.box_url = "http://domain.com/path/to/above.box"
# Create a forwarded port mapping which allows access to a specific port
# within the machine from a port on the host machine. In the example below,
# accessing "localhost:8080" will access port 80 on the guest machine.
# config.vm.network "forwarded_port", guest: 80, host: 8080
# Create a private network, which allows host-only access to the machine
# using a specific IP.
# config.vm.network "private_network", ip: "192.168.33.10"
# Create a public network, which generally matched to bridged network.
# Bridged networks make the machine appear as another physical device on
# your network.
config.vm.network "public_network"
# If true, then any SSH connections made will enable agent forwarding.
# Default value: false
# config.ssh.forward_agent = true
# Share an additional folder to the guest VM. The first argument is
# the path on the host to the actual folder. The second argument is
# the path on the guest to mount the folder. And the optional third
# argument is a set of non-required options.
# config.vm.synced_folder "../data", "/vagrant_data"
# Provider-specific configuration so you can fine-tune various
# backing providers for Vagrant. These expose provider-specific options.
# Example for VirtualBox:
#
# config.vm.provider "virtualbox" do |vb|
# # Don't boot with headless mode
# vb.gui = true
#
# # Use VBoxManage to customize the VM. For example to change memory:
# vb.customize ["modifyvm", :id, "--memory", "1024"]
# end
#
# View the documentation for the provider you're using for more
# information on available options.
# Enable provisioning with Puppet stand alone. Puppet manifests
# are contained in a directory path relative to this Vagrantfile.
# You will need to create the manifests directory and a manifest in
# the file hashicorp/precise32.pp in the manifests_path directory.
#
# An example Puppet manifest to provision the message of the day:
#
# # group { "puppet":
# # ensure => "present",
# # }
# #
# # File { owner => 0, group => 0, mode => 0644 }
# #
# # file { '/etc/motd':
# # content => "Welcome to your Vagrant-built virtual machine!
# # Managed by Puppet.\n"
# # }
#
# config.vm.provision "puppet" do |puppet|
# puppet.manifests_path = "manifests"
# puppet.manifest_file = "site.pp"
# end
# Enable provisioning with chef solo, specifying a cookbooks path, roles
# path, and data_bags path (all relative to this Vagrantfile), and adding
# some recipes and/or roles.
#
config.vm.provision "chef_solo" do |chef|
chef.cookbooks_path = "cookbooks"
#chef.roles_path = "../my-recipes/roles"
chef.data_bags_path = "databags"
#chef.add_role "web"
chef.add_recipe "apt"
chef.add_recipe "zsh"
chef.add_recipe "chef-oh-my-zsh-solo"
chef.add_recipe "vim"
chef.add_recipe "git"
chef.add_recipe "openssl"
chef.add_recipe "couchbase::server"
# setup users (from data_bags/users/*.json)
# chef.add_recipe "users::sysadmins" # creates users and sysadmin group
# chef.add_recipe "users"
# chef.add_recipe "users::sysadmin_sudo" # adds %sysadmin group to sudoers
# homesick_agent and its dependencies
# chef.add_recipe "root_ssh_agent::ppid" # maintains agent during 'sudo su root'
# chef.add_recipe "ssh_known_hosts"
# populates /etc/ssh/ssh_known_hosts from data_bags/ssh_known_hosts/*.json
# You may also specify custom JSON attributes:
#chef.json = { :users => "admin" }
chef.json = {
"couchbase" => {
"server"=> {
"password" => "123"
}
}
}
chef.log_level = :debug
end
# Enable provisioning with chef server, specifying the chef server URL,
# and the path to the validation key (relative to this Vagrantfile).
#
# The Opscode Platform uses HTTPS. Substitute your organization for
# ORGNAME in the URL and validation key.
#
# If you have your own Chef Server, use the appropriate URL, which may be
# HTTP instead of HTTPS depending on your configuration. Also change the
# validation key to validation.pem.
#
# config.vm.provision "chef_client" do |chef|
# chef.chef_server_url = "https://api.opscode.com/organizations/ORGNAME"
# chef.validation_key_path = "ORGNAME-validator.pem"
# end
#
# If you're using the Opscode platform, your validator client is
# ORGNAME-validator, replacing ORGNAME with your organization name.
#
# If you have your own Chef Server, the default validation client name is
# chef-validator, unless you changed the configuration.
#
# chef.validation_client_name = "ORGNAME-validator"
end
One detail: if I remove the following lines, it starts properly (but no USB is available):
vb.customize ["modifyvm", :id, "--usb", "on"]
vb.customize ["modifyvm", :id, "--usbehci", "on"]
EDIT
Logs from the VBox.log file:
cat VBox.log
VirtualBox VM 4.2.16_Ubuntu r86992 linux.amd64 (Sep 21 2013 11:46:57) release log
00:00:00.033561 Log opened 2014-03-20T08:21:15.686771000Z
00:00:00.033570 OS Product: Linux
00:00:00.033572 OS Release: 3.11.0-18-generic
00:00:00.033575 OS Version: #32-Ubuntu SMP Tue Feb 18 21:11:14 UTC 2014
00:00:00.033610 DMI Product Name:
00:00:00.033624 DMI Product Version:
00:00:00.033756 Host RAM: 3882MB total, 3328MB available
00:00:00.033763 Executable: /usr/lib/virtualbox/VBoxHeadless
00:00:00.033765 Process ID: 10288
00:00:00.033767 Package type: LINUX_64BITS_GENERIC (OSE)
00:00:00.039722 Installed Extension Packs:
00:00:00.039747 VNC (Version: 4.2.16 r86992; VRDE Module: VBoxVNC)
00:00:00.046777 SUP: Loaded VMMR0.r0 (/usr/lib/virtualbox/VMMR0.r0) at 0xffffffffa0518020 - ModuleInit at ffffffffa052e0f0 and ModuleTerm at ffffffffa052e390
00:00:00.046820 SUP: VMMR0EntryEx located at ffffffffa052f510, VMMR0EntryFast at ffffffffa052f240 and VMMR0EntryInt at ffffffffa052f230
00:00:00.049809 OS type: 'Ubuntu_64'
00:00:00.073143 File system of '/home/ken/VirtualBox VMs/smartofficeVM_default_1395303674511_42792/Snapshots' (snapshots) is unknown
00:00:00.073166 File system of '/home/ken/VirtualBox VMs/smartofficeVM_default_1395303674511_42792/box-disk1.vmdk' is ext4
00:00:00.091096 VMSetError: /build/buildd/virtualbox-4.2.16-dfsg/src/VBox/Main/src-client/ConsoleImpl2.cpp(2300) int Console::configConstructorInner(PVM, util::AutoWriteLock*); rc=VERR_NOT_FOUND
00:00:00.091111 VMSetError: Implementation of the USB 2.0 controller not found!
00:00:00.091113 Because the USB 2.0 controller state is part of the saved VM state, the VM cannot be started. To fix this problem, either install the 'Oracle VM VirtualBox Extension Pack' or disable USB 2.0 support in the VM settings
00:00:00.217513 ERROR [COM]: aRC=NS_ERROR_FAILURE (0x80004005) aIID={db7ab4ca-2a3f-4183-9243-c1208da92392} aComponent={Console} aText={Implementation of the USB 2.0 controller not found!
00:00:00.217535 Because the USB 2.0 controller state is part of the saved VM state, the VM cannot be started. To fix this problem, either install the 'Oracle VM VirtualBox Extension Pack' or disable USB 2.0 support in the VM settings (VERR_NOT_FOUND)}, preserve=false
00:00:00.224473 Power up failed (vrc=VERR_NOT_FOUND, rc=NS_ERROR_FAILURE (0X80004005))
VAGRANT_LOG=debug vagrant up log:
http://pastebin.com/2GMhmy9T
Does anybody have expertise on this topic?
Thank you very much.
SOLUTION: I thought it was already installed when reading: 00:00:00.039722 Installed Extension Packs: 00:00:00.039747 VNC (Version: 4.2.16 r86992; VRDE Module: VBoxVNC). But in fact I have to install the Extension Pack on the host too. It's a bit confusing. Thank you very much. You can add a proper answer and I'll accept it.
The following line in VBox logs:
00:00:00.217535 Because the USB 2.0 controller state is part of the saved VM state, the VM cannot be started. To fix this problem, either install the 'Oracle VM VirtualBox Extension Pack' or disable USB 2.0 support in the VM settings (VERR_NOT_FOUND)}, preserve=false
highlights that you have to install the VirtualBox Extension Pack in order to fix the issue.
Download and install the VirtualBox Extension Pack that matches your VirtualBox version; it should solve your problem.
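A minimal sketch of installing the matching pack from the command line on the host; the download URL pattern is an assumption, and the version must match the output of VBoxManage --version:

wget https://download.virtualbox.org/virtualbox/4.2.16/Oracle_VM_VirtualBox_Extension_Pack-4.2.16.vbox-extpack
sudo VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-4.2.16.vbox-extpack
VBoxManage list extpacks

The last command should now list the Oracle VM VirtualBox Extension Pack alongside the VNC pack shown in the log.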

Need to filter out valid IP addresses using regex

I have a RADIUS client configuration file at /etc/raddb/server, and I want to get the valid IP addresses from it while skipping commented lines. So I'm using:
grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' /etc/raddb/server
127.0.0.1
192.168.0.147
But I want to ignore 127.0.0.1, which is commented out with #. How can I do this?
The /etc/raddb/server file is as follows:
cat /etc/raddb/server
# pam_radius_auth configuration file. Copy to: /etc/raddb/server
#
# For proper security, this file SHOULD have permissions 0600,
# that is readable by root, and NO ONE else. If anyone other than
# root can read this file, then they can spoof responses from the server!
#
# There are 3 fields per line in this file. There may be multiple
# lines. Blank lines or lines beginning with '#' are treated as
# comments, and are ignored. The fields are:
#
# server[:port] secret [timeout]
#
# the port name or number is optional. The default port name is
# "radius", and is looked up from /etc/services The timeout field is
# optional. The default timeout is 3 seconds.
#
# If multiple RADIUS server lines exist, they are tried in order. The
# first server to return success or failure causes the module to return
# success or failure. Only if a server fails to response is it skipped,
# and the next server in turn is used.
#
# The timeout field controls how many seconds the module waits before
# deciding that the server has failed to respond.
#
# server[:port] shared_secret timeout (s)
#127.0.0.1 secret 1
#other-server other-secret 3
192.168.0.147:1812 testing123 1
#
# having localhost in your radius configuration is a Good Thing.
#
# See the INSTALL file for pam.conf hints.
Try anchoring the pattern to the start of the line so that commented entries are skipped:
grep -o '^[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' /etc/raddb/server
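If the goal is to skip every commented line rather than rely on the address starting in column one, a variant that filters out comments first should also work:

grep -v '^[[:space:]]*#' /etc/raddb/server | grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}'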