I am trying to deploy a Clojure application on OpenShift using the clojure-cartridge, created by running the following command:
rhc app create myapp http://cartreflect-claytondev.rhcloud.com/github/openshift-cartridges/clojure-cartridge
I can run the application locally using lein run and, looking at http://localhost:8080/, it works as expected. But when I run it from OpenShift I get: Service Temporarily Unavailable.
When I do rhc tail I get:
Downloading Leiningen to /var/lib/openshift/54a1a338fcf933fb93000106/clojure//home/self-installs/leiningen-2.5.0-standalone.jar now...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 14.2M 100 14.2M 0 0 18.6M 0 --:--:-- --:--:-- --:--:-- 25.5M
Could not transfer artifact lein-ring:lein-ring:pom:0.7.5 from/to clojars (https://clojars.org/repo/): Specified destination directory cannot be created: /.m2/repository/lein-ring/lein-ring/0.7.5
This could be due to a typo in :dependencies or network issues.
If you are behind a proxy, try setting the 'http_proxy' environment variable.
I am new to both Clojure and OpenShift, so I could have missed or misunderstood something obvious. Any ideas on what is going wrong?
I don't know anything about OpenShift, though this error:
Specified destination directory cannot be created: /.m2/repository/lein-ring/lein-ring/0.7.5
is a strong hint that the $HOME environment variable is not available in OpenShift. lein writes files to $HOME/.m2/repository/..., so if $HOME were unset it would result in the error above. It looks like OpenShift allows you to fix this:
Setting Custom Environment Variables
Set one or more environment variables for an application with the following command:
$ rhc env set <Variable>=<Value> <Variable2>=<Value2> -a App_Name
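For this case, a hedged sketch of the workaround: point $HOME at the gear's writable data directory. The gear path below is a placeholder, not taken from the question; on OpenShift v2 gears the writable space lives under app-root/data:
$ rhc env set HOME=/var/lib/openshift/<gear-id>/app-root/data -a myapp
$ rhc app restart -a myapp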
I am new to Google cloud services and I am trying to set up an automated build for my production system that requires downloading a large file.
I would like to download a file from a dedicated Google Storage bucket inside the Docker build process. To do so, I have added the following line to my Dockerfile:
RUN curl https://storage.cloud.google.com/[bucketname]/[filename] -o [filename]
Since files from this bucket shouldn't be publicly accessible, I disabled object-level permissions and granted the member [ProjectID]@cloudbuild.gserviceaccount.com the Storage Object Viewer role.
But when the Dockerfile script runs, the downloaded file is empty:
Step 7/9 : RUN curl https://storage.cloud.google.com/[bucketname]/[filename] -o [filename]
---> Running in 5d1a5a1bbe87
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
Removing intermediate container 5d1a5a1bbe87
---> 42938a9cc8d1
Step 8/9 : RUN ls -l [filename]
---> Running in 34ac112051a1
-rw-r--r-- 1 root root 0 Jun 15 00:37 [filename]
This link works perfectly well if I log in to the Google console and access it through my browser.
I tried changing the permission settings, and ended up granting the Cloud Build account the Storage Legacy Bucket Reader, Storage Legacy Object Reader, and Storage Object Viewer roles together, without much success.
I am obviously doing something wrong, but it's not clear to me whether:
This link format is only valid in the console and I should use another URL to get this file
The permission configuration is wrong
I still have to perform some HTTP authorization through curl
I am overlooking something else.
Thanks for your help :)
After long research and trial and error, I managed to find a good way to do it. Here is the recipe for those who may need to reproduce a similar setup, and for my future self.
Create a service account dedicated to this task in console.cloud.google.com under IAM & Admin (you can also use Cloud Shell: https://cloud.google.com/iam/docs/creating-managing-service-accounts)
Generate a key for this service account (you can use the web console, or type gcloud iam service-accounts keys create ~/key.json --iam-account [SA-NAME]@[PROJECT-ID].iam.gserviceaccount.com in Cloud Shell: https://cloud.google.com/iam/docs/creating-managing-service-account-keys#iam-service-account-keys-create-gcloud)
Add this key file to your repository. Install the Google Cloud SDK in your Docker image so that you can log in (right, you are going to install a whole SDK just to log in...):
RUN curl https://sdk.cloud.google.com | bash > /dev/null
ENV PATH="${PATH}:/root/google-cloud-sdk/bin"
Now, in a shell script run via RUN ./myscript.sh from your Dockerfile, you will add the following steps (see the consolidated sketch after these steps):
Activate your service account with gcloud auth activate-service-account [ACCOUNT] --key-file=~/key.json (https://cloud.google.com/sdk/gcloud/reference/auth/activate-service-account)
You can generate an authentication token associated with this account with TOKEN=`gcloud auth print-access-token [ACCOUNT]` (https://cloud.google.com/sdk/gcloud/reference/auth/print-access-token)
Finally, inside the script (so no RUN prefix is needed), you can add the curl command, using the token from the previous step:
curl -L -H "Authorization: Bearer ${TOKEN}" "https://www.googleapis.com/storage/v1/b/[bucketname]/o/[objectname]?alt=media" -o [filename]
I did not use gsutil cp gs://bucketname/bucketfile ./ because Python 2 was not available in my Docker image and Python 3 wasn't supported by Google's tooling at the time.
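Putting the steps together, a minimal sketch of myscript.sh (bracketed values are placeholders; it assumes key.json was copied into the image root, e.g. via COPY key.json /key.json in the Dockerfile):
#!/bin/bash
set -e
# authenticate as the service account whose key is baked into the image
gcloud auth activate-service-account --key-file=/key.json
# obtain a short-lived OAuth access token for that account
TOKEN=$(gcloud auth print-access-token)
# download the object through the JSON API; alt=media returns the raw content
curl -f -L -H "Authorization: Bearer ${TOKEN}" "https://www.googleapis.com/storage/v1/b/[bucketname]/o/[objectname]?alt=media" -o [filename]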
Congratulate yourself with a chocolate cake or a sugary treat. (:
Bonus: If, like me, your Docker build times out, you have to add a cloudbuild.yaml next to your Dockerfile. Here is a generic file I use for my builds:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'build', '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME:$BUILD_ID', '.' ]
images:
- 'gcr.io/$PROJECT_ID/$REPO_NAME:$BUILD_ID'
timeout: 900s
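If you use this file, submit the build through it rather than the plain Dockerfile build, running from the directory that contains both files:
$ gcloud builds submit --config cloudbuild.yaml .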
I'm running Redis server 2.8.17 on a Debian server 8.5. I'm using Redis as a session store for a Django 1.8.4 application.
I haven't changed the software configuration on my server for a couple of months and everything was working just fine until a week ago when Django began raising the following error:
MISCONF Redis is configured to save RDB snapshots but is currently not able to persist to disk. Commands that may modify the data set are disabled. Please check Redis logs for details...
I checked the redis log and saw this happening about once a second:
1 changes in 900 seconds. Saving...
Background saving started by pid 22213
Failed opening .rdb for saving: Permission denied
Background saving error
I've read these two SO questions 1, 2 but they haven't helped me find the problem.
ps shows that user "redis" is running the server:
redis 26769 ... /usr/bin/redis-server *:6379
I checked my config file for the redis file name and path:
grep ^dir /etc/redis/redis.conf =>
dir /var/lib/redis
grep ^dbfilename /etc/redis/redis.conf =>
dbfilename dump.rdb
The permissions on /var/lib/redis are 755 and it's owned by redis:redis.
The permissions on /var/lib/redis/dump.rdb are 644 and it's owned by redis:redis too.
I also ran strace on the server process:
ps -C redis-server # pid = 26769
sudo strace -p 26769 -o /tmp/strace.out
But when I examine the output, I don't see any errors. In particular I don't see a "Permission denied" error as I would expect.
Also, /var/lib/redis is not an NFS directory.
Does anyone know what else could be causing this? I'd hate to have to stop using Redis. I know I can run CONFIG SET stop-writes-on-bgsave-error no, but that doesn't solve the underlying problem.
This is now happening on a daily basis and the only way I can stop the error is to restart the Redis server.
Thanks.
I just had a similar issue. Despite my config file being correct, when I checked the actual dbfilename and dir via redis-cli, they were incorrect.
Run redis-cli and then
CONFIG GET dbfilename, which should return something like
1) "dbfilename"
2) "dump.rdb"
1) is just the key and 2) the value. Similarly, running CONFIG GET dir should return something like
1) "dir"
2) "/var/lib/redis"
Confirm that these are correct and, if not, set them with CONFIG SET dir /correct/path (and likewise CONFIG SET dbfilename for the filename).
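For example, with the values from the question, a correcting session looks like:
$ redis-cli
127.0.0.1:6379> CONFIG SET dir /var/lib/redis
OK
127.0.0.1:6379> CONFIG SET dbfilename dump.rdb
OK
127.0.0.1:6379> BGSAVE
Background saving started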
Hope this helps!
If you have moved Redis to a new mounted volume such as /mnt/data-01:
sudo vim /etc/systemd/system/redis.service
Set ReadWriteDirectories=-/mnt/data-01
sudo mkdir /mnt/data-01/redis
Set chown and chmod on the new Redis data dir and rdb file so they match the originals:
The permissions on /var/lib/redis are 755 and it's owned by redis:redis
The permissions on /var/lib/redis/dump.rdb are 644 and it's owned by redis:redis
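A sketch of those commands for the new location (assuming the /mnt/data-01/redis directory created above; reload systemd afterwards so the unit change takes effect):
sudo chown -R redis:redis /mnt/data-01/redis
sudo chmod 755 /mnt/data-01/redis
sudo chmod 644 /mnt/data-01/redis/dump.rdb
sudo systemctl daemon-reload
sudo systemctl restart redis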
Switch configurations while redis is running
$ redis-cli
127.0.0.1:6379> CONFIG SET dir /data/tmp
127.0.0.1:6379> CONFIG SET dbfilename temp.rdb
127.0.0.1:6379> BGSAVE
Tail the Redis log (e.g. /var/log/redis/redis-server.log) to verify the save succeeded
Start Redis Server in a directory where Redis has write permissions
The answers above will definitely solve your problem, but here's what's actually going on:
The default location for storing the dump.rdb file is ./ (denoting the current directory). You can verify this in your redis.conf file. Therefore, the directory from which you start the redis server is where the dump.rdb file will be created and updated.
Since you say your redis server has been working fine for a while and this just started happening, it seems you have started running the redis server in a directory where redis does not have the correct permissions to create the dump.rdb file.
To make matters worse, redis will probably also not allow you to shut down the server until it is able to create the rdb file, to ensure the proper saving of data.
To solve this problem, you must go into the active redis client environment using redis-cli and update the dir key and set its value to your project folder or any folder where non-root has permissions to save. Then run BGSAVE to invoke the creation of the dump.rdb file.
CONFIG SET dir "/hardcoded/path/to/your/project/folder"
BGSAVE
(Now, if you need to save the dump.rdb file in the directory that you started the server in, then you will need to change permissions for the directory so that redis can write to it. You can search stackoverflow for how to do that).
You should now be able to shut down the redis server. Note that we hardcoded the path. Hardcoding is rarely a good practice, and I highly recommend starting the redis server from your project directory and changing the dir key back to ./:
CONFIG SET dir "./"
BGSAVE
That way when you need redis for another project, the dump file will be created in your current project's directory and not in the hardcoded path's project directory.
You can resolve this problem by going into redis-cli.
Type redis-cli in the terminal.
Then run config set stop-writes-on-bgsave-error no, which resolved my problem.
Note that this only silences the error by disabling the safety check; Redis will still fail to persist snapshots until the underlying save problem is fixed. Hope it resolves your problem.
Up to Redis 3.2 it shipped with pretty insane defaults which opened the port to the public. In combination with the CONFIG SET instruction, anybody could easily change your Redis config from outside. If the error starts appearing after some time, someone has probably changed your config.
On your local machine check that
telnet SERVER_IP REDIS_PORT
is denied. Otherwise check your config; you should have the setting
bind 127.0.0.1
enabled.
Depending on the user that runs Redis, you should also check for damage that the intruder may have done.
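A quick sketch of both checks (6379 assumed as the default port; requirepass shown as optional extra hardening):
telnet SERVER_IP 6379                                 # from another machine; this should fail
grep -E '^(bind|requirepass)' /etc/redis/redis.conf   # on the server; expect bind 127.0.0.1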
I am trying CF on day one. I deployed local Cloud Foundry on a Mac with BOSH Lite, with no issues in doing so, and also added the MySQL buildpack without any issue. But when I try to push the app, it takes forever and fails. After a few tries it succeeded once, but the app failed to start with a timeout. So, to increase the timeout, I re-pushed the app with this command:
cf push pong_matcher_spring -t 180 -p /DEV/github/cloudfoundry-samples/pong_matcher_spring/target/pong-matcher-spring-1.0.0.BUILD-SNAPSHOT.jar -m 256M -i 1 -n app1
The app never gets pushed. Please see the log below:
————————————————————————————————————————————
cf push pong_matcher_spring -t 180 -p /DEV/github/cloudfoundry-samples/pong_matcher_spring/target/pong-matcher-spring-1.0.0.BUILD-SNAPSHOT.jar -m 256M -i 1 -n app1
Using manifest file /DEV/github/cloudfoundry-samples/pong_matcher_spring/manifest.yml
Creating app pong_matcher_spring in org scientia / space development as admin...
OK
Creating route app1.bosh-lite.com...
OK
Binding app1.bosh-lite.com to pong_matcher_spring...
OK
Uploading pong_matcher_spring...
Uploading app files from: /DEV/github/cloudfoundry-samples/pong_matcher_spring/target/pong-matcher-spring-1.0.0.BUILD-SNAPSHOT.jar
Uploading 798.3K, 116 files
Done uploading
OK
Binding service mysql to app pong_matcher_spring in org scientia / space development as admin...
OK
Starting app pong_matcher_spring in org scientia / space development as admin...
-----> Downloaded app package (23M)
FAILED
StagingError
TIP: use 'cf logs pong_matcher_spring --recent' for more information
————————————————————————————————————————————
I could not find anything in the job logs apart from these messages.
I suspect there is something wrong with the network. Any help is appreciated.
Restarting the Vagrant VM solved the issue.
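A minimal sketch of that fix, assuming the standard bosh-lite Vagrant workflow:
$ cd bosh-lite      # wherever your bosh-lite checkout lives
$ vagrant reload    # halts and restarts the VM
Then re-run the cf push command above.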
I am attempting to deploy some changes to a loopback app running on a remote Ubuntu box on top of strong-pm.
The changes that I make locally are not being reflected in what gets deployed to the server. Here are the commands I execute:
$ slc build
$ slc deploy http://IPADDRESS deploy
to which I get a successful deploy message that looks like this:
peter@peters-MacBook-Pro ~/Desktop/projects/www/places-api master slc deploy http://IPADDRESS deploy
Counting objects: 5740, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (5207/5207), done.
Writing objects: 100% (5740/5740), 7.14 MiB | 2.80 MiB/s, done.
Total 5740 (delta 1555), reused 150 (delta 75)
To http://IPADDRESS:8701/api/services/1/deploy/default
* [new branch] deploy -> deploy
Deployed `deploy` as `placesAPI` to `http://IPADDRESS:8701/`
Checking the deployed files on the server here :
/var/lib/strong-pm/svc/1/work
I can see that the changes I made to the local app are not reflected in what has just been deployed to the server.
In order to check that the changes are reflected in the build, I checked out the deploy git repository, like so:
git checkout deploy
Inspecting the files here, I can see that the changes I made are present.
Does anyone know why the changes are not reflected in what is deployed to the server?
I know this is an old post, but for anyone getting this issue: I just encountered the same problem.
Finally I used slc arc and tried to build from there.
Make sure that the "Fully qualified path to archive" has a correct value.
It should be something like
../project-1.0.0.tgz
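For reference, a hedged sketch of launching Arc (assuming slc is installed and run from the project root; slc arc opens the Arc UI in your browser, where the Build section lives):
$ cd ~/Desktop/projects/www/places-api
$ slc arc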
I am simply unable to install Leiningen on Debian Linux:
> lein
Downloading Leiningen to /home/debianaut/.lein/self-installs/leiningen-2.4.3-standalone.jar now...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 355 100 355 0 0 240 0 0:00:01 0:00:01 --:--:-- 375
100 14.2M 100 14.2M 0 0 51565 0 0:04:48 0:04:48 --:--:-- 41059
Failed to download https://github.com/technomancy/leiningen/releases/download/2.4.3/leiningen-2.4.3-standalone.jar
It's possible your HTTP client's certificate store does not have the
correct certificate authority needed. This is often caused by an
out-of-date version of libssl. Either upgrade it or set HTTP_CLIENT
to turn off certificate checks:
export HTTP_CLIENT="wget --no-check-certificate -O" # or
export HTTP_CLIENT="curl --insecure -f -L -o"
It's also possible that you're behind a firewall and haven't yet
set HTTP_PROXY and HTTPS_PROXY.
I tried setting HTTP_CLIENT but still got the same error. The version I read from the lein script is 2.4.3.
I also experienced this error. This is what I did in Ubuntu 15.04, with Leiningen 2.5.2.
Save the lein file into the ~/bin directory (create it if it doesn't exist).
Change the permissions of the lein file to make it executable (chmod 755 ~/bin/lein)
Open lein with a text editor
On line 116, change .jar to .zip, so that it should be LEIN_JAR="$LEIN_HOME/self-installs/leiningen-$LEIN_VERSION-standalone.zip"
Download Leiningen 2.5.2 from GitHub.
Put the zip file leiningen-2.5.2-standalone.zip into ~/.lein/self-installs (do not unzip it; create the directory if it doesn't exist. This is a hidden directory; in GNOME Files, hit Ctrl+H to see it)
To initiate your first project: lein new MyFirstLeinProject
Voilà.
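Condensed into a shell sketch (assumes ~/bin is on your PATH, and that the 2.5.2 release URL follows the usual GitHub releases pattern):
$ mkdir -p ~/bin ~/.lein/self-installs
$ chmod 755 ~/bin/lein                                     # after saving the lein script into ~/bin
$ sed -i 's/standalone\.jar/standalone.zip/' ~/bin/lein    # blunt version of the line-116 edit above
$ wget -P ~/.lein/self-installs https://github.com/technomancy/leiningen/releases/download/2.5.2/leiningen-2.5.2-standalone.zip
$ lein new MyFirstLeinProject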
I had the same problem using leiningen 2.1.3 on Mac OS X 10.8.5 (Mountain Lion). That script tried to download https://leiningen.s3.amazonaws.com/downloads/leiningen-2.1.3-standalone.jar
Eventually I went back to leiningen.org and fetched the current lein script from https://raw.githubusercontent.com/technomancy/leiningen/stable/bin/lein
It worked well. The resource downloaded was: https://github.com/technomancy/leiningen/releases/download/2.5.1/leiningen-2.5.1-standalone.zip