I am trying to add a start/stop/restart recipe for redis-server.
Logged in to the remote server via ssh, I can run
service redis-server restart
but when I add this to deploy.rake:
%w[start stop restart].each do |command|
  desc "#{command} Redis server."
  task command do
    on roles(:app) do
      execute "service redis-server #{command}"
    end
  end
end
I get an error on restart
DEBUG [8410afb7] Command: service redis-server restart
DEBUG [8410afb7] Stopping redis-server:
DEBUG [8410afb7] redis-server.
DEBUG [8410afb7] Starting redis-server:
DEBUG [8410afb7] touch:
DEBUG [8410afb7] cannot touch ‘/var/run/redis/redis-server.pid’:
Permission denied
which is obvious, as /var/run is root:root...
How can I solve it? Should I install redis-server in my home directory instead (if that's even possible)?
Thanks for any suggestions.
It turns out I should write:
execute :sudo, "service redis-server #{command}"
in my deploy.rake recipe.
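Note that `execute :sudo, ...` will hang or fail if sudo prompts for a password on the server. A minimal sudoers sketch that allows just these commands without a password (the username `deploy` and the `/usr/sbin/service` path are assumptions; adjust to your setup):

```shell
# /etc/sudoers.d/redis-deploy — edit with: sudo visudo -f /etc/sudoers.d/redis-deploy
# Allow the deploy user (assumed name) to manage redis-server without a password
deploy ALL=(root) NOPASSWD: /usr/sbin/service redis-server start, \
                            /usr/sbin/service redis-server stop, \
                            /usr/sbin/service redis-server restart
```

Limiting the rule to these three exact commands keeps the deploy user from gaining general root access.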
I defined a simple Docker image here: https://github.com/htool-ddm/htool_testing_environments/blob/master/ubuntu/Dockerfile, where I define a user with
ARG USER=mpi
ENV USER ${USER}
ENV USER_HOME /home/${USER}
RUN useradd -s /bin/bash --user-group --system --create-home --no-log-init ${USER}
and when I used this image as a devcontainer with
"image": "pierremarchand/htool_testing_environments:ubuntu_gcc_openmpi",
"workspaceFolder": "/home/mpi",
"workspaceMount": "source=${localWorkspaceFolder},target=/home/mpi/,type=bind",
I get the following error:
[Error - 4:48:52 PM] cpptools client: couldn't create connection to server.
Launching server using command /home/mpi/.vscode-server/extensions/ms-vscode.cpptools-1.13.9-linux-x64/bin/cpptools failed. Error: spawn /home/mpi/.vscode-server/extensions/ms-vscode.cpptools-1.13.9-linux-x64/bin/cpptools EACCES
I guess this is a permission issue, because it works when running the devcontainer as root (using "remoteUser": "root"). Is there an issue in the way I defined my Docker image, or in the way I define my devcontainer?
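Not an authoritative answer, but one common cause matches these symptoms: bind-mounting the host folder over the whole of /home/mpi means ~/.vscode-server is created inside the host mount (with host ownership, and possibly on a noexec filesystem), which can produce exactly this EACCES on spawn. A sketch of a devcontainer.json that mounts into a subdirectory instead, leaving the home directory itself inside the container (the "workspace" subfolder name and the "remoteUser" entry are assumptions):

```json
{
  "image": "pierremarchand/htool_testing_environments:ubuntu_gcc_openmpi",
  "workspaceFolder": "/home/mpi/workspace",
  "workspaceMount": "source=${localWorkspaceFolder},target=/home/mpi/workspace,type=bind",
  "remoteUser": "mpi"
}
```

With this layout, the VS Code server installs under the container's own /home/mpi/.vscode-server rather than inside the bind mount.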
Anchor deploy
Deploying workspace: http://localhost:8899
Upgrade authority: /home/tomcatzy/.config/solana/id.json
Deploying program "basic-1"...
Program path: /home/tomcatzy/projects/anchor/examples/tutorial/basic-1/target/deploy/basic_1.so...
Error: RPC request error: cluster version query failed: error sending request for url (http://localhost:8899/): error trying to connect: tcp connect error: Connection refused (os error 111)
There was a problem deploying: Output { status: ExitStatus(unix_wait_status(256)), stdout: "", stderr: "" }.
solana config set --url http://localhost:8899 (Is this enough to start the localhost ?)
solana-keygen new
solana-test-validator
It seems strange that after a successful anchor build I can't do an anchor deploy with the solana commands run above.
If I by any means need to run npm init, where should I do it?
I tried the above and a keypair was generated: keyname_1-keypair.json. The build was successful, but the deploy was not!
I'm wondering why not?
Hopefully someone can guide me to get it to succeed...
In a separate window / terminal, you need to run solana-test-validator so that the tools can talk to your local network. The error you're seeing on deployment is due to an error on connecting to that network.
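Put together, the working order looks roughly like this (a sketch; the getHealth RPC probe is just one optional way to confirm the validator is reachable):

```shell
# Terminal 1: start the local test validator and leave it running
solana-test-validator

# Terminal 2: point the CLI at it, fund a key, then deploy
solana config set --url http://localhost:8899
solana-keygen new    # only needed once, to create id.json
solana airdrop 2     # test SOL to pay the deployment fee

# optional sanity check that the validator is up before deploying:
curl -s http://localhost:8899 -X POST -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"getHealth"}'

anchor deploy
```

The key point is that solana config set only tells the tools where to connect; it does not start anything, so the validator must already be running in its own terminal.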
By following the Solana docs and doing this
sudo $(command -v solana-sys-tuner) --user $(whoami) > sys-tuner.log 2>&1 &
https://docs.solana.com/running-validator/validator-start#system-tuning
The test-ledger folder is created and a sys-tuner.log file is created, but it's 0 bytes...
Then I run solana-test-validator in a separate terminal and the other solana config commands in another terminal, and I get the following results ->
You can deploy on-chain programs with the Solana tools.
To deploy a program, you will need the location of the program's shared object.
This path is produced when you run anchor build in the command line.
Run solana program deploy <PROGRAM_FILEPATH>.
Successful deployment will return the program id of your program.
I created a rabbitmq cluster on two instances on EC2. My django app uses celery for async tasks which in turn uses RabbitMQ for message queue.
Whenever I start celery with the command:
python manage.py celery worker --loglevel=INFO
OR
python manage.py celeryd --loglevel=INFO
I keep getting following error message related to remote RabbitMQ:
[2015-05-19 08:58:47,307: ERROR/MainProcess] consumer: Cannot connect to amqp://myuser:**@<ip-address>:25672/myvhost/: Socket closed.
Trying again in 2.00 seconds...
I set permissions using:
sudo rabbitmqctl set_permissions -p myvhost myuser ".*" ".*" ".*"
and then restarted rabbitmq-server on both the cluster nodes. However, it didn't help.
In the log file, I see a few entries like below:
=INFO REPORT==== 19-May-2015::08:14:41 ===
accepting AMQP connection <0.1981.0> (<ip-address>:38471 -> <ip-address>:5672)
=ERROR REPORT==== 19-May-2015::08:14:44 ===
closing AMQP connection <0.1981.0> (<ip-address>:38471 -> <ip-address>:5672):
{handshake_error,opening,0,
{amqp_error,access_refused,
"access to vhost 'myvhost' refused for user 'myuser'",
'connection.open'}}
The file /usr/local/etc/rabbitmq/rabbitmq-env.conf contains an entry for NODE_IP_ADDRESS to bind it only to localhost. Removing the NODE_IP_ADDRESS entry from the config binds the port to all network interfaces.
Source: https://superuser.com/questions/464311/open-port-5672-tcp-for-access-to-rabbitmq-on-mac
Turns out I had not created the appropriate configuration files. In my case (Ubuntu 14.04), I had to create the two configuration files below:
$ cat /etc/rabbitmq/rabbitmq-env.conf
RABBITMQ_NODE_IP_ADDRESS=<ip_of_ec2_instance>
<ip_of_ec2_instance> has to be the internal IP that EC2 uses, not the public IP that one uses to ssh into the instance. It can be obtained using the ip a command.
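As an illustration of pulling the bare address out of ip a output (the interface name and address below are made up):

```shell
# Sample line as printed by `ip a` for the instance's private interface (made-up values)
ip_line='inet 172.31.5.10/20 brd 172.31.15.255 scope global eth0'
# Field 2 is "addr/prefix"; strip the prefix length to get the bare IP
private_ip=$(echo "$ip_line" | awk '{print $2}' | cut -d/ -f1)
echo "$private_ip"
```

On EC2 the same address is also available from the instance metadata endpoint: curl http://169.254.169.254/latest/meta-data/local-ipv4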
$ cat /etc/rabbitmq/rabbitmq.config
[
  {mnesia, [{dump_log_write_threshold, 1000}]},
  {rabbit, [{tcp_listeners, [25672]},
            {loopback_users, []}]}
].
I think the {tcp_listeners, [25672]} entry was the most important piece of configuration that I was missing. (Both rabbit settings can be combined into a single {rabbit, [...]} list, which is the form the RabbitMQ docs use.)
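After restarting, it's worth confirming the broker actually picked up the non-default port (a sketch; the exact status output format varies by RabbitMQ version):

```shell
sudo service rabbitmq-server restart
# status prints a listeners section in most 3.x versions
sudo rabbitmqctl status | grep -i listener
# or check the listening socket directly
netstat -tln | grep 25672
```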
Thanks @dgil for the initial troubleshooting help.
The question has been answered, but I'm just leaving notes on a similar issue I faced in case anybody else finds them useful.
I have a Flask app running on EC2 with AMQP as the broker on port 5672 and EC2 ElastiCache memcached as the backend. The AMQP broker had trouble picking up tasks that were getting fired, so I resolved it as follows.
Assuming you have rabbitmq-server installed (sudo apt-get install rabbitmq-server), add the user and set its permissions:
sudo rabbitmqctl add_user username password
sudo rabbitmqctl set_permissions username ".*" ".*" ".*"
Then restart the server: sudo service rabbitmq-server restart
In your Flask app, for the celery configuration:
broker_url = 'amqp://username:password@localhost:5672//' (set as above)
backend = 'cache+memcached://(ec2 cache url):11211/'
(The cache+memcached:// prefix tripped me up; without it I kept getting an import error (cannot import module).)
Open up the port 5672 on your ec2 instance in the security group.
Now if you fire up your celery worker, it should pick up the tasks that get fired and store the results on your memcached server.
I have the following config file for Upstart, and it starts the Flask server fine, but whenever there is an exception in the app the log file doesn't have the exception information.
start on runlevel [2345]
stop on runlevel [06]
respawn
script
  cd /var/www/binary-fission/server
  export BF_CONFIG=config/staging.py
  exec uwsgi --http 0.0.0.0:5000 --wsgi-file server.py --callable app --master --threads 2 --processes 4 --logto /var/log/binary-fission/server.log
end script
However, if I run the same uwsgi command manually without Upstart, the exception is logged.
How do I make Upstart+uwsgi log the exception from a Flask application?
It turned out that turning on the "PROPAGATE_EXCEPTIONS" option in the Flask configuration file (config/staging.py) fixed the issue. That configuration file turns "DEBUG" off, which turns off "PROPAGATE_EXCEPTIONS" at the same time.
When I ran the uwsgi command manually, I didn't specify the configuration file, and my Flask app fell back to the default configuration with "DEBUG" on.
I got this when I ran cap production git:check. (I took out my real IP address and user name.)
DEBUG [4d1face0] Running /usr/bin/env git ls-remote -h foo@114.***.***.***:~/git/deepot.git on 114.***.***.***
DEBUG [4d1face0] Command: ( GIT_ASKPASS=/bin/echo GIT_SSH=/tmp/deepot/git-ssh.sh /usr/bin/env git ls-remote -h foo@114.***.***.***:~/git/deepot.git )
DEBUG [4d1face0] Error reading response length from authentication socket.
DEBUG [4d1face0] Permission denied (publickey,password).
DEBUG [4d1face0] fatal: The remote end hung up unexpectedly
DEBUG [4d1face0] Finished in 0.715 seconds with exit status 128 (failed).
Below is my deploy file...
set :user, 'foo'
set :domain, '114.***.***.***'
set :application, 'deepot'
set :repo_url, 'foo@114.***.***.***:~/git/deepot.git'
set :deploy_to, "/home/#{fetch(:user)}/local/#{fetch(:application)}"
set :linked_files, %w{config/database.yml config/bmap.yml config/cloudinary.yml config/environments/development.rb config/environments/production.rb}
Below is my production.rb...
role :app, %w{foo@114.***.***.***}
role :web, %w{foo@114.***.***.***}
role :db, %w{foo@114.***.***.***}
server '114.***.***.***', user: 'foo', roles: %w{web app}, ssh_options: {keys: %w{/c/Users/Administrator/.ssh/id_rsa}, auth_methods: %w(publickey)}
I can successfully ssh onto my foo@114.***.***.*** without entering any password using git bash. (I am on a Windows 7 machine and the deployment server is Ubuntu 12.04.)
Any help will be appreciated. Thanks!
Try generating a new private/public key pair and provide a passphrase for it. If that works, the problem is that your current key doesn't use a passphrase.
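Another thing worth trying: "Error reading response length from authentication socket." in the debug output refers to the ssh-agent socket rather than the key itself, so restarting the agent in the same git-bash session Capistrano runs from may help. A sketch, assuming the key path from the question:

```shell
# Start a fresh agent in this shell and load the key Capistrano is configured to use
eval "$(ssh-agent -s)"
ssh-add /c/Users/Administrator/.ssh/id_rsa
# Confirm the key is loaded, then re-run the failing check
ssh-add -l
cap production git:check
```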