I have a database server that runs in a VirtualBox VM. I can start my development only after the database VM has booted up. How can I check whether the database server has booted successfully? I am well aware of the command VBoxManage showvminfo,
but it shows the State as 'running' even while the VM is still booting.
Is there a way to check the booting status?
I suspect not, as even the Vagrant project seems to use a constantly-retrying SSH connection with a large timeout to determine if the machine is ready.
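If you only need a ready signal for development, the same idea works from a shell script: poll a service that only comes up once the guest has finished booting, such as SSH or the database port itself. A minimal sketch, assuming netcat is installed and the guest's port is reachable at 127.0.0.1:2222 through a port forward (adjust host, port, and timeout to your setup):
#!/bin/sh
# Poll the forwarded guest port until it accepts connections, or give up after ~5 minutes.
for i in $(seq 1 60); do
    if nc -z 127.0.0.1 2222 2>/dev/null; then
        echo "VM is ready"
        exit 0
    fi
    sleep 5
done
echo "Timed out waiting for the VM" >&2
exit 1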
I have a web application running on the Laravel 5.2 framework, with the session driver set to redis, and the following AWS setup.
Instance-1: Runs the web application, with the Redis configuration in the .env file as follows:
Redis-host: aws-private-ip-of-instance-2
Redis-password: NULL
Redis-port: 6379
Instance-2: Runs the Redis server with the following configuration:
Bind aws-private-ip-of-instance-2 and 127.0.0.1
Working directory /var/lib/redis with 775 permissions; owner and group are redis.
RDB snapshot name dump.rdb with 660 permissions; owner and group are redis.
NOTE: The AWS inbound rule for port 6379 is configured for Instance-2.
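For reference, in a Laravel 5.2 .env file the Redis settings described above would normally map to these keys (names taken from Laravel's default config/database.php; the host value is the same placeholder used above):
SESSION_DRIVER=redis
REDIS_HOST=aws-private-ip-of-instance-2
REDIS_PASSWORD=null
REDIS_PORT=6379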
Everything works fine until Redis tries to write the data to the RDB file. The following error shows on the front end:
MISCONF Redis is configured to save RDB snapshots, but is currently
not able to persist on disk. Commands that may modify the data set are
disabled. Please check Redis logs for details about the error.
Meanwhile, the Redis server logs show the following:
4873:M 23 Sep 10:08:15.028 * 1 changes in 900 seconds. Saving...
4873:M 23 Sep 10:08:15.028 * Background saving started by pid 7392
7392:C 23 Sep 10:08:15.028 # Failed opening .rdb for saving: Read-only file system
4873:M 23 Sep 10:08:15.128 # Background saving error
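For anyone debugging the same error, the directory Redis is trying to write to and the status of the last background save can be checked directly with redis-cli (run on the Redis box itself):
redis-cli CONFIG GET dir
redis-cli CONFIG GET dbfilename
redis-cli INFO persistence | grep rdb_last_bgsave_status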
Things I have tried
Added vm.overcommit_memory = 1 to /etc/sysctl.conf, as suggested in the Redis administration blog.
Changed the path of the dump.rdb file to the /tmp folder and changed its permissions to 777.
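For the sysctl change, note that editing /etc/sysctl.conf alone only takes effect after a reload or reboot; it can be applied and verified immediately like this:
# Apply the setting without a reboot
sudo sysctl vm.overcommit_memory=1
# Verify the current value
cat /proc/sys/vm/overcommit_memory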
This other Stack Exchange thread might help, since you are using a custom /tmp dir for data:
The simple way to do this is to run systemctl edit redis. This will create an override drop-in file /etc/systemd/system/redis.service.d/override.conf, in which you can place your changes (and the proper section):
[Service]
ReadWriteDirectories=-/my/custom/data/dir
You may also create that directory and place files ending in .conf in it manually. But do not leave the directory empty, as this will disable the service.
In either case, run systemctl daemon-reload and you are ready to restart your service.
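For completeness, the reload-and-restart step is just the following (the unit may be named redis-server rather than redis, depending on the distribution):
sudo systemctl daemon-reload
sudo systemctl restart redis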
Many threads also point to filesystem inconsistency as the root cause. Since you are using EC2, check this AWS forums post:
To fix this, you will have to:
Stop the instance
Detach the root volume of your instance
Attach the volume as a data volume to any running Linux instance in the same availability zone
Perform a filesystem check (fsck) on the volume and fix the issues (see the sketch after this list)
Detach the volume and attach it back to your instance as its root volume
Boot the instance back up and verify that the volume mounts successfully
As a last resort, terminate the instance if possible.
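A minimal sketch of the filesystem check on the helper instance; the device name /dev/xvdf1 is only an example and depends on how the volume was attached:
# Identify the attached volume and its partitions
lsblk
# Check and repair the filesystem (the volume must not be mounted)
sudo fsck -y /dev/xvdf1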
Hope it helps!
Well, it is very embarrassing to post an answer to my own question, which was a really stupid mistake. But I hope new folks here learn from my mistake too.
The first thing I did was enable detailed logs for redis-server in the /etc/redis/redis.conf file by changing the loglevel option to debug.
Observing the logs, I understood that my Redis port 6379 was open to everyone on the internet.
From the logs I saw that someone else's server was connecting to my Redis server and making it a slave of theirs. And since my Redis server was configured so that a slave is read-only, when I tried to write to my Redis server it threw the read-only error.
After applying a firewall rule for the Redis server port, I have not encountered this issue anymore.
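For the firewall step, on AWS the cleanest fix is to restrict the security group inbound rule for port 6379 to the application server's private IP. As a local alternative, a ufw rule on the Redis box would look roughly like this (the source IP is a placeholder for Instance-1's private IP):
# Allow Redis connections only from the application server's private IP
sudo ufw allow from 10.0.0.10 to any port 6379 proto tcp
# With ufw's default incoming policy set to deny, all other sources are blocked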
I have installed the latest versions of VirtualBox (v5.2.6) and Vagrant (v2.0.1) on a Windows machine with an Intel Core i5-4210U processor @ 1.70GHz (2.40GHz). I have added the Homestead box by running the command:
vagrant box add laravel/homestead
But on running vagrant up, it runs fine until this point:
Timed out while waiting for the machine to boot. This means that
Vagrant was unable to communicate with the guest machine within the
configured ("config.vm.boot_timeout" value) time period.
If you look above, you should be able to see the error(s) that Vagrant
had when attempting to connect to the machine. These errors are
usually good hints as to what may be wrong.
If you're using a custom box, make sure that networking is properly
working and you're able to connect to the machine. It is a common
problem that networking isn't setup properly in these boxes. Verify
that authentication configurations are also setup properly, as well.
If the box appears to be booting properly, you may want to increase
the timeout ("config.vm.boot_timeout") value.
On running vagrant ssh-config I get:
Host homestead-7
HostName 127.0.0.1
User vagrant
Port 2222
UserKnownHostsFile /dev/null
StrictHostKeyChecking no
PasswordAuthentication no
IdentityFile C:/Users/MyUser/.vagrant.d/insecure_private_key
IdentitiesOnly yes
LogLevel FATAL
ForwardAgent yes
I have looked around and tried different things, among others uninstalling and reinstalling both Vagrant and VirtualBox, but couldn't find a solution. How can I get this working?
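For reference, the timeout the error message mentions is a Vagrantfile setting; a minimal sketch showing it, along with enabling the VirtualBox GUI so boot errors are visible (the values are illustrative only):
Vagrant.configure("2") do |config|
  # Give a slow machine more time before Vagrant stops waiting for SSH
  config.vm.boot_timeout = 600
  config.vm.provider "virtualbox" do |vb|
    # Show the VirtualBox GUI so the guest's boot messages can be watched
    vb.gui = true
  end
end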
I made an Ubuntu 16.04 VM with one network adapter set to "bridged" and a second adapter set to "internal network". The ClickHouse DBMS is installed with default settings.
A test call within the slave Ubuntu, curl 'http://localhost:8123/', returns Ok.
But the same call from the master Windows host returns nothing :(
Telnet and a browser from the master OS on http://assignedIP:8123 return ERR_CONNECTION_REFUSED.
At the same time, pings from the master OS to the slave Ubuntu and in the opposite direction are successful.
How do I set up the VM's network properly so that ClickHouse can be reached on port 8123 from the master OS?
Found it!
I had to change
<listen_host>::1</listen_host>
to
<listen_host>::</listen_host>
in /etc/clickhouse-server/config.xml
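After that change, restarting ClickHouse and checking the listening sockets should confirm it (assuming the default clickhouse-server service name from the packages):
sudo service clickhouse-server restart
# The server should now listen on all interfaces, not just loopback
sudo ss -ltnp | grep 8123
# And the call from the master OS should succeed
curl 'http://assignedIP:8123/'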
Could you please check the Ubuntu firewall status? If it is up, bring it down and try.
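For example, with ufw (Ubuntu's default firewall front end):
# Show whether the firewall is active
sudo ufw status
# Temporarily disable it for testing
sudo ufw disable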
I've set up OpsCenter on one of my Cassandra cluster nodes. After installation, when setting up my cluster, I tried installing the DataStax agent on all the cluster nodes via the UI, but it failed. So I had to install the agents manually.
After manually installing the agents, the node on which OpsCenter is installed is able to connect, but the other nodes are not. It still says, "2 agents failed to connect". What could be the issue?
PS: My Cassandra cluster is set up on AWS on Ubuntu.
My agent.log file looks like this:
ERROR [os-metrics-9] 2015-07-27 07:04:43,390 Long os-stats collector failed: Cannot run program "iostat": error=2, No such file or directory
ERROR [os-metrics-7] 2015-07-27 07:04:43,391 Long os-stats collector failed: Cannot run program "iostat": error=2, No such file or directory
ERROR [os-metrics-8] 2015-07-27 07:04:53,391 Long os-stats collector failed: Cannot run program "iostat": error=2, No such file or directory
ERROR [os-metrics-3] 2015-07-27 07:04:53,392 Long os-stats collector failed: Cannot run program "iostat": error=2, No such file or directory
ERROR [StompConnection receiver] 2015-07-27 07:05:02,946 failed connecting to **.**.**.**:61620:java.net.ConnectException: Connection timed out
You have to set the stomp_interface in address.yaml, like:
stomp_interface: <ip-address>
After an agent restart, it should connect.
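With the packaged install, the restart is typically (assuming the standard datastax-agent service name):
sudo service datastax-agent restart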
As your agent has been able to connect from the same box where OpsCenter is installed, it sounds like:
You might not have configured your firewall properly. Please try disabling the firewall on all your boxes.
You may have multiple interfaces, and the C* installation picked up an undesired interface. So run the ifconfig or ip command on all of your instances and check against the C* yaml (see the sketch after this list).
About the iostat failure message: you have not installed the sysstat package. It seems you did not install the dependencies as part of the DSE install.
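A rough sketch of the interface check on each node; the cassandra.yaml path varies by install (e.g. /etc/cassandra/ or /etc/dse/cassandra/):
# List the addresses actually configured on this node
ip addr show
# Compare them with the addresses Cassandra was told to use
grep -E 'listen_address|rpc_address' /etc/cassandra/cassandra.yaml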
The agents use iostat to collect some information from disks. If they can't find it you will get that error, but it just means some OS metrics will be missing (likely a lot of disk and CPU metrics).
These are some useful configuration options in the conf/address.yaml file to keep in mind when starting the agent manually:
# A name for the node to use as a label throughout OpsCenter.
alias:
# Reachable IP address of the opscenterd machine. The connection made will be on stomp_port. Internal IP in this case.
stomp_interface:
# Port for the agent's HTTP service (default: 61621).
#api_port: 61621
# The stomp_port used by opscenterd. Must match the 'incoming_port' in opscenter.conf.
stomp_port: 61620
# The IP used to identify the node.
local_interface: 100.73.158.44
# The IP that the agent HTTP server listens on.
agent_rpc_interface:
# Host used to connect to the local JMX server.
jmx_host: 100.73.158.44
# Whether or not to use SSL communication between the agent and opscenterd.
use_ssl: 1
To solve the "Cannot run program 'iostat'" error, do this:
sudo apt-get install sysstat
I'm trying to install the enterprise edition of Neo4j on an existing EC2 (Amazon Linux) instance. So far I've:
run wget "link to enterprise"
untarred the file
renamed and moved the folder to NEO4J_HOME
Then I went into the config file neo4j.properties to make the following changes:
# Enable shell server so that remote clients can connect via Neo4j shell.
remote_shell_enabled=true
# The network interface IP the shell will listen on (use 0.0.0.0 for all interfaces)
remote_shell_host=127.0.0.1
# The port the shell will listen on, default is 1337
remote_shell_port=1337
EDITED: Christophe Willemsen pointed out that for my original error I had forgotten to restart the server at that point, but I was still unable to access the web server while it was running. So to make it clearer, I've edited the rest of the post:
I went to neo4j-server.properties and uncommented:
org.neo4j.server.webserver.address=0.0.0.0
And started the server:
NEO4J_HOME/bin/neo4j start
WARNING: Max 1024 open files allowed, minimum of 40 000 recommended. See the Neo4j manual.
Using additional JVM arguments: -server -XX:+DisableExplicitGC -Dorg.neo4j.server.properties=conf/neo4j-server.properties -Djava.util.logging.config.file=conf/logging.properties -Dlog4j.configuration=file:conf/log4j.properties -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:-OmitStackTraceInFastThrow
Starting Neo4j Server...WARNING: not changing user
process [28557]... waiting for server to be ready..... OK.
http://localhost:7474/ is ready.
checking the status:
NEO4J_HOME/bin/neo4j status
Neo4j Server is running at pid 28557
I can run the shell, but when I go to localhost:7474 I still cannot connect.
Any help would be appreciated. The only tutorials or help I've found assumed I was starting from scratch with a new instance. If someone could provide some instructions for installing, or fix my configuration, that would be great.
Thanks!
You have to edit neo4j-server.properties and uncomment the line with:
org.neo4j.server.webserver.address=0.0.0.0
That way the db listens on an external interface, not just localhost. You also have to open the port (7474) in your firewall rules.
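On EC2 that usually means adding an inbound rule for TCP 7474 to the instance's security group; if a local firewall is also active on the instance, an iptables rule along these lines would be needed as well (sketch only):
# Accept inbound connections to the Neo4j web interface
sudo iptables -A INPUT -p tcp --dport 7474 -j ACCEPT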
Make sure to secure access to the db though:
http://neo4j.com/docs/stable/security-server.html