Bitnami RedMine mysql service not restarting - redmine

I have a Redmine project management system set up on my Windows server, backed by a MySQL database.
I had some issues when trying to restart the redmineMySQL service, and following an article on Stack Overflow I deleted the service from the registry and recreated it with 'sc create redmineMYSQL binpath= "D:\Bitnami\redmine-2.6.0-1\mysql\bin\mysqld.exe"'.
Now when I try to restart the service I get the error "Error 1053: The service did not respond to the start or control request in a timely fashion".
While the service is restarting, MySQL is up and available to Redmine for around 20-25 seconds, but once the error message appears MySQL goes down again.

ImagePath looks like this:
"C:\BitNami\REDMIN~1.2-0\apache2\bin\httpd.exe" -k runservice
It is of type REG_EXPAND_SZ, under HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\redmineApache.
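You can sanity-check what the recreated MySQL service actually points at the same way (using the service name created above):
reg query "HKLM\SYSTEM\CurrentControlSet\Services\redmineMYSQL" /v ImagePath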
As for mysql:
This is how the default serviceinstall.bat looks, from Bitnami/redmine-2.3.2-0/mysql/scripts:
@echo off
rem -- Check if argument is INSTALL or REMOVE
if not ""%1"" == ""INSTALL"" goto remove
"C:\BitNami\redmine-2.4.2-0\mysql\bin\mysqld.exe" --install "redmineMySQL" --defaults-file="C:\BitNami\redmine-2.4.2-0\mysql\my.ini"
net start "redmineMySQL" >NUL
goto end
:remove
rem -- STOP SERVICES BEFORE REMOVING
net stop "redmineMySQL" >NUL
"C:\BitNami\redmine-2.4.2-0\mysql\bin\mysqld.exe" --remove "redmineMySQL"
:end
exit
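Given that script, a plausible fix is to remove the hand-made service and let mysqld register itself, passing --defaults-file the way serviceinstall.bat does. Error 1053 usually means the binary never reports back to the Service Control Manager, which fits a mysqld registered without its service arguments: it runs for a few seconds, then the SCM gives up and kills it. A sketch against the paths from the question (the my.ini location is an assumption based on the usual Bitnami layout):
sc stop redmineMYSQL
sc delete redmineMYSQL
rem Let mysqld create the service entry itself, with its config wired in
"D:\Bitnami\redmine-2.6.0-1\mysql\bin\mysqld.exe" --install redmineMYSQL --defaults-file="D:\Bitnami\redmine-2.6.0-1\mysql\my.ini"
net start redmineMYSQL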

How can I use a batch file to run Vagrant commands, but leave me connected to Vagrant once finished?

Here's what I have in the batch file so far:
rem Bring the VM up, then start the dev server inside it
set "root=C:\Users\esohlberg\lwebsite"
cd /d "%root%"
vagrant up
vagrant ssh -- -t "source lw/bin/activate && cd /vagrant/; ./manage.py runserver 0.0.0.0:8000"
cmd /k
Once Vagrant is up, I activate a virtualenv, cd into the right place, and run a server. Executing this takes me all the way to the server running, where I can see
System check identified no issues (0 silenced).
August 24, 2018 - 12:33:12
Django version 2.0.3, using settings 'lwebsite.settings'
Starting development server at http://0.0.0.0:8000/
Quit the server with CONTROL-C.
However, as soon as I quit with CONTROL-C, I see
Connection to 127.0.0.1 closed.
and I'm no longer in Vagrant. Is it possible to set up the commands in such a way that once the server is quit, I stay in the /vagrant/ directory with the connection still up and Vagrant's virtualenv still active? This would allow me to manage the site or run the server again with less hassle.
I've already looked at https://www.vagrantup.com/docs/provisioning/shell.html, but the examples seem to show commands executed only during provisioning, which I don't want to do every single time I execute this file.
I created three .bat files: one to start Vagrant, one to stop it, and one to connect over SSH.
For starting Vagrant (I also added this to auto-run when Windows starts):
cd /d "C:\Homestead"
start vagrant up
For stopping Vagrant:
@echo off
cd /d "C:\Homestead"
start vagrant halt
And for connecting to the local server over SSH:
cd /d "C:\Homestead"
vagrant ssh
"I've already looked at https://www.vagrantup.com/docs/provisioning/shell.html, but the examples seem to show commands executed only during provisioning, which I don't want to do every single time I execute this file."
You've been looking in the right place. What you want is to run the command when the VM starts, so you can do the following:
config.vm.provision "shell", :inline => "source lw/bin/activate && cd /vagrant/; ./manage.py runserver 0.0.0.0:8000", :run => 'always', :privileged => false
That will make sure that every time you call vagrant up from your host, it starts your Python server (as the vagrant user).
This way you do not need the script anymore and can just start your VM normally with vagrant up.
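For context, here is roughly where that line sits in a complete Vagrantfile (a sketch; the box name and port forward are assumptions, not from the question):
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"                             # assumed box
  config.vm.network "forwarded_port", guest: 8000, host: 8000   # reach the dev server from the host

  # :run => 'always' makes this fire on every `vagrant up`, not only on first provision
  config.vm.provision "shell",
    :inline => "source lw/bin/activate && cd /vagrant/; ./manage.py runserver 0.0.0.0:8000",
    :run => 'always', :privileged => false
end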

URL Rewrite 2.0 installation fails on Docker

I'm trying to get URL Rewrite 2.0 installed using this Dockerfile:
FROM microsoft/aspnet:4.6.2
WORKDIR /inetpub/wwwroot
COPY obj/Docker/publish .
ADD https://download.microsoft.com/download/C/9/E/C9E8180D-4E51-40A6-A9BF-776990D8BCA9/rewrite_amd64.msi /install/rewrite_amd64.msi
RUN net start MSIServer
RUN msiexec.exe /i c:\install\rewrite_amd64.msi /quiet /passive /qn /L*v "C:\package.log"
When I build the container image, I see this error message:
The Windows Installer Service could not be accessed. This can occur if the Windows Installer is not correctly installed. Contact your support personnel for assistance.
Looking at package.log after running the container, I see this:
MSI (c) (30:A4) [08:32:10:438]: Failed to connect to server. Error: 0x80040150
MSI (c) (30:A4) [08:32:10:438]: Note: 1: 2774 2: 0x80040150: 2774 2: 0x80040150
Executing net start msiserver on the running container returns a message that the service is already started, and Google says 0x80040150 could be a problem reading the registry.
Is it expected that installing URL Rewrite this way should work, or do I need to elevate permissions somehow?
Update: Running the same msiexec command on the running container successfully installs URL Rewrite.
I finally figured it out thanks to this article. Running msiexec from PowerShell with the appropriate switches works. Oddly, PowerShell threw "Unable to connect to the remote server" when I tried to also download the MSI with it, so I resorted to using ADD.
Here's the relevant portion of my Dockerfile:
# The RUN below uses PowerShell syntax; the default shell in this image is cmd,
# so switch it explicitly (assumed; the answer's full Dockerfile may already do this)
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop';"]
WORKDIR /install
ADD https://download.microsoft.com/download/C/9/E/C9E8180D-4E51-40A6-A9BF-776990D8BCA9/rewrite_amd64.msi rewrite_amd64.msi
RUN Write-Host 'Installing URL Rewrite' ; \
Start-Process msiexec.exe -ArgumentList '/i', 'rewrite_amd64.msi', '/quiet', '/norestart' -NoNewWindow -Wait
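To make the build fail loudly if the MSI silently does nothing, a follow-up check along these lines can help (a sketch; it assumes URL Rewrite drops rewrite.dll into inetsrv, which is where it normally lands):
RUN if (!(Test-Path 'C:\Windows\System32\inetsrv\rewrite.dll')) { throw 'URL Rewrite install failed' }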

Cloudfoundry (Bosh-lite) app push is timeout

I'm trying CF on day 1. I deployed a local Cloud Foundry on my Mac with BOSH Lite, with no issues in doing so, and also added the MySQL buildpack without any issue. But when I try to push the app it takes forever and fails. After a few tries it succeeded once, but the app failed to start with a timeout. So to increase the timeout I re-pushed the app with this command:
cf push pong_matcher_spring -t 180 -p /DEV/github/cloudfoundry-samples/pong_matcher_spring/target/pong-matcher-spring-1.0.0.BUILD-SNAPSHOT.jar -m 256M -i 1 -n app1
The app never gets pushed. Please see the log below:
————————————————————————————————————————————
cf push pong_matcher_spring -t 180 -p /DEV/github/cloudfoundry-samples/pong_matcher_spring/target/pong-matcher-spring-1.0.0.BUILD-SNAPSHOT.jar -m 256M -i 1 -n app1
Using manifest file /DEV/github/cloudfoundry-samples/pong_matcher_spring/manifest.yml
Creating app pong_matcher_spring in org scientia / space development as admin...
OK
Creating route app1.bosh-lite.com...
OK
Binding app1.bosh-lite.com to pong_matcher_spring...
OK
Uploading pong_matcher_spring...
Uploading app files from: /DEV/github/cloudfoundry-samples/pong_matcher_spring/target/pong-matcher-spring-1.0.0.BUILD-SNAPSHOT.jar
Uploading 798.3K, 116 files
Done uploading
OK
Binding service mysql to app pong_matcher_spring in org scientia / space development as admin...
OK
Starting app pong_matcher_spring in org scientia / space development as admin...
-----> Downloaded app package (23M)
FAILED
StagingError
TIP: use 'cf logs pong_matcher_spring --recent' for more information
————————————————————————————————————————————
I could not find anything in the job logs apart from these messages.
I suspect there is something wrong with the network. Any help is appreciated.
Restarting the Vagrant VM solved the issue.
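For anyone hitting the same thing, the restart amounts to something like this (a sketch, assuming a stock bosh-lite checkout; bin/add-route restores the host routes into the bosh-lite subnet):
cd ~/workspace/bosh-lite   # wherever bosh-lite is cloned (assumed path)
vagrant halt
vagrant up
bin/add-route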

xcodebuild running tests headless?

As we all know by now, the only way to run tests on iOS is by using the simulator. My problem is that we are running Jenkins and the iOS builds run on a slave (via SSH); as a result, xcodebuild can't start the simulator (since it runs headless). I've read somewhere that it should be possible to get this to work with SimLauncher (gem sim_launcher), but I can't find any info on how to set this up with xcodebuild. Any pointers are welcome.
Headless and xcodebuild do not mix well. Please consider this alternative:
You can configure the slave node to launch via JNLP (Web Start). I use a bash script with the .command extension as a login item (System Preferences -> Users -> Login Items) with the following contents:
#!/bin/bash
slave_url="https://gardner.company.com/jenkins/jnlpJars/slave.jar"
max_attempts=40 # ten minutes

# Fetch slave.jar, retrying until the download succeeds and the jar is intact
curl -fO "${slave_url}" >>slave.log
rc=$?
while [ $rc -ne 0 -a $max_attempts -gt 0 ]; do
    echo "Waiting to try again. curl returned $rc"
    sleep 5
    curl -fO "${slave_url}" >>slave.log
    rc=$?
    if [ $rc -eq 0 ]; then
        zip -T slave.jar   # verify the downloaded jar is a valid archive
        rc=$?
    fi
    let max_attempts-=1
done

# Connect to Jenkins as a JNLP slave; running from the logged-in GUI session
# is what lets builds launch the iOS Simulator
java -jar slave.jar -jnlpUrl https://gardner.company.com/jenkins/computer/buildmachine/slave-agent.jnlp -secret YOUR_SECRET_KEY
The build user is set to log in automatically. You can see the arguments to the slave.jar app by executing:
gardner:~ buildmachine$ java -jar slave.jar --help
"--help" is not a valid option
java -jar slave.jar [options...]
-auth user:pass : If your Jenkins is security-enabled, specify
a valid user name and password.
-connectTo HOST:PORT : make a TCP connection to the given host and
port, then start communication.
-cp (-classpath) PATH : add the given classpath elements to the
system classloader.
-jar-cache DIR : Cache directory that stores jar files sent
from the master
-jnlpCredentials USER:PASSWORD : HTTP BASIC AUTH header to pass in for making
HTTP requests.
-jnlpUrl URL : instead of talking to the master via
stdin/stdout, emulate a JNLP client by
making a TCP connection to the master.
Connection parameters are obtained by
parsing the JNLP file.
-noReconnect : Doesn't try to reconnect when a communication
fail, and exit instead
-proxyCredentials USER:PASSWORD : HTTP BASIC AUTH header to pass in for making
HTTP authenticated proxy requests.
-secret HEX_SECRET : Slave connection secret to use instead of
-jnlpCredentials.
-slaveLog FILE : create local slave error log
-tcp FILE : instead of talking to the master via
stdin/stdout, listens to a random local
port, write that port number to the given
file, then wait for the master to connect to
that port.
-text : encode communication with the master with
base64. Useful for running slave over 8-bit
unsafe protocol like telnet
gardner:~ buildmachine$
For a discussion about OSX slaves and how the master is launched please see this Jenkins bug: https://issues.jenkins-ci.org/browse/JENKINS-21237
Erik - I ended up doing the items documented here:
Essentially:
The first problem is that you have to have the user that runs the builds also logged in to the console on that Mac build machine. It needs to be able to pop up the simulator, and will fail if you don't have a user logged in, as it can't do this entirely headless without a display.
Secondly, the Xcode developer tools require elevated privileges in order to execute all of the tasks for the unit tests. Sometimes you may miss seeing it, but without these privileges the Simulator will give you an authentication prompt that never clears.
A first solution to this (on Mavericks) is to run:
sudo security authorizationdb write system.privilege.taskport allow
This will eliminate one class of these authentication popups. You’ll also need to run:
sudo DevToolsSecurity --enable
Per Apple’s man page on this tool:
On normal user systems, the first time in a given login session that any such Apple-code-signed debugger or performance analysis tools are used to examine one of the user's processes, the user is queried for an administrator password for authorization. Use the DevToolsSecurity tool to change the authorization policies, such that a user who is a member of either the admin group or the _developer group does not need to enter an additional password to use the Apple-code-signed debugger or performance analysis tools.
The only issue is that these same things seem to have broken once I upgraded to Xcode 6. Back to the drawing board...

AWS EMR Impala daemon issue

I've just created an EMR cluster and am trying to create my first Impala table. I'm getting this error: "This Impala daemon is not ready to accept user requests. Status: Waiting for catalog update from the StateStore." Any suggestions? I did everything as documented by Amazon.
[ip-10-72-69-85.ec2.internal:21000] > connect localhost;
Connected to localhost:21000
Server version: impalad version 1.2.1 RELEASE (build d0bf3eae1df0f437bb4d0e44649293756ccdc76c)
[localhost:21000] > show tables;
Query: show tables
ERROR: AnalysisException: This Impala daemon is not ready to accept user requests. Status: Waiting for catalog update from the StateStore.
[localhost:21000] >
I had the same error - after much trouble I found a simple solution:
A. Check that the impala-state-store and impala-catalog daemons are running:
sudo service impala-state-store status
sudo service impala-catalog status
If they are not running, check the logs and be sure to start them.
B. If they are running, simply type in your impala-shell:
invalidate metadata;
This command will update your catalog from the state store.
Then, you are ready to start!
Run the following commands in the given order and reopen the Impala browser:
sudo /etc/init.d/hive-metastore start
sudo /etc/init.d/impala-state-store start
And
sudo /etc/init.d/impala-catalog start
sudo /etc/init.d/impala-server start
I actually found that the solution to this problem might be to just wait. I had this problem and had restarted everything Impala-related with no luck. I even tried stopping all the Impala services and starting them in the recommended order (statestore first). Nothing helped, but after being left alone for a while it started to work. I'm not sure exactly how long that took, but it was more than 5 minutes and less than an hour.
I would first recommend you check the logs at /mnt/var/log/apps. The error is likely related to the state-store, which can be restarted with the command below.
sudo service impala-state-store restart
I ran into the same error. The tutorial skipped a couple of steps: once in an impala-shell, create a database, then use that database, then create a table.
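A minimal sequence in impala-shell looks like this (the database and table names are made up):
[localhost:21000] > create database testdb;
[localhost:21000] > use testdb;
[localhost:21000] > create table pings (id INT, msg STRING);
[localhost:21000] > show tables;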