Please help, I'm having trouble getting the Divio app to work.
When I press "set up project", it gives me this:
*
Creating workspace
cloning project repository
Cloning into '/c/Users/Ubisoft/Documents/iloveit'...
Bad owner or permissions on /home/divio/.ssh/config
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
There was an error trying to run a command. This is most likely
not an issue with divio-cli, but the called program itself.
Try checking the output of the command above.
The command was:
git clone git@git.divio.com:iloveit.git /c/Users/Ubisoft/Documents/iloveit
*
and in Windows PowerShell it gives me this:
Creating workspace
cloning project repository
Cloning into '/c/Users/Ubisoft/Documents/iloveit'...
Bad owner or permissions on /home/divio/.ssh/config
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
------------------------------------------------------------------------------------------------------------------------
There was an error trying to run a command. This is most likely
not an issue with divio-cli, but the called program itself.
Try checking the output of the command above.
The command was:
git clone git@git.divio.com:iloveit.git /c/Users/Ubisoft/Documents/iloveit
divio@app-1.0.0 /c/Users/Ubisoft/Documents
$
I also tried this from a virtual macOS machine and got this message:
https://i.stack.imgur.com/QccvY.png
I also tried to mess around with creating SSH keys, but it didn't work out.
Can someone provide a step-by-step explanation of how to make this wonderful app work?
In the Windows examples you show, I see:
Bad owner or permissions on /home/divio/.ssh/config
I am not sure how that has happened, but that is what is preventing your local environment from providing the expected key to the Divio Control Panel.
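If you can open a shell in the environment that actually owns that file (exactly where the Divio app keeps it on Windows is an assumption on my part), OpenSSH will usually accept the config again once it is owned by the current user and not writable by anyone else, something like:
$ chown $(whoami) ~/.ssh/config
$ chmod 600 ~/.ssh/config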
In the Macintosh example, the environment doesn't have a key that the Control Panel knows about.
You will need to add the key (probably from ~/.ssh/id_rsa.pub) to https://control.divio.com/account/ssh-keys/. If you don't already have a key in the Macintosh environment, you will need to set one up.
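If you need to create a key first, a minimal sketch (the email is just a comment label, and the default file names are assumed) is:
$ ssh-keygen -t rsa -b 4096 -C "you@example.com"
$ cat ~/.ssh/id_rsa.pub    # paste this output into the Control Panel page linked above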
For anyone who may face this issue: on my macOS virtual machine I managed to find a solution. It looks like my Internet Service Provider was blocking port 22 or something like that. I used a VPN, and without any further hassle with SSH I got a different result (a quick way to check the port-22 theory is sketched after the log below). It looks like it is working now; the project hasn't finished being created yet, but it is promising:
Creating workspace
cloning project repository
Cloning into '/Users/johnwick/Documents/best-project'...
Locking the website...
remote: Counting objects: 785, done.
remote: Compressing objects: 100% (739/739), done.
Unlocking the website...(385/785), 1.05 MiB | 524.00 KiB/s
remote: Total 785 (delta 112), reused 0 (delta 0)
Receiving objects: 100% (785/785), 1.77 MiB | 448.00 KiB/s, done.
Resolving deltas: 100% (112/112), done.
Checking out files: 100% (615/615), done.
downloading remote docker images
Pulling db ... done
Pulling web ... done
building local docker images
db uses an image, skipping
Building web
Step 1/7 : FROM divio/base:4.15-py3.6-slim-stretch
4.15-py3.6-slim-stretch: Pulling from divio/base
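If you want to confirm that port 22 is actually blocked before reaching for a VPN, a quick check against the host from the clone URL above (these commands are my suggestion, not Divio-specific tooling) is:
$ nc -vz git.divio.com 22      # "succeeded" means the port is reachable
$ ssh -T git@git.divio.com     # the server may refuse the login, but a hang or timeout here points at a blocked port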
Hi, this is my first experience trying to deploy a Python app to the cloud using CF (Cloud Foundry). I am having issues deploying my app; I would sincerely appreciate it if anyone could help me or point me in the right direction to solve the issue.
The main problem is that the app I am trying to deploy is large due to a lot of Python dependencies. The size of my app directory itself is 200 KB. The first error I observed was: staging fails due to "Failed to upload payload for droplet". I think the reason is that once all the Python dependencies from the requirements.txt file are downloaded and the droplet is finally created, its size is too large to upload: the droplet is 982.3 MB.
The first solution I tried was vendoring the app, where I created a vendor directory containing all the Python dependencies, but the size of the vendor directory was greater than 1 GB, which made the upload exceed the 1 GB limit and led to a failure uploading the app files.
The second solution I am working on is to upload all the installed Python libraries to an object store (in my case an S3 bucket which is bound to my app) and then download the dependencies folder, called Pypackages, into the app's root directory, /home/vcap/app, so that /home/vcap/app/Pypackages exists before my app starts on the cloud. But I haven't managed to do it successfully yet. I have included a Python script in my app directory which downloads files from the S3 bucket successfully. (I have put the correct absolute path for the download in the downloadS3.py script, i.e. /home/vcap/app/Pypackages.) I want to run this script with "python downloadS3.py" as a one-off task. First I tried the solution here: Can I have multiple commands run in a manifest.yml file?
and although I can see via '$ cf tasks my-app-name' that the status of the task is SUCCEEDED, /home/vcap/app/Pypackages does not exist.
I also tried to run the one-off task with the steps below:
1-
$ cf push -c 'python downloadS3.py && sleep infinity' -i 1 --no-route
2-
$ cf push -c 'null'
I have printed the contents of /home/vcap/app from my app, i.e. when the app is started and I enter the URL in my browser (I don't know what the right way is to see the contents of the root directory). Anyway, the problem is that Pypackages is not downloaded to the correct root directory. I am not sure whether I am running the one-off task the wrong way or whether there is a better solution to make my app work.
I appreciate any help!
Diego Cells stage apps and upload the droplet to the blobstore via the Cloud Controller. The maximum file size that can be uploaded is configurable at Ops Manager > TAS for VMs > Application Developer Control > Maximum File Upload Size (MB); the default is 1024 MB. This seems to be the restriction you are hitting, so see if you can get it increased with your admin's help.
Tasks run in their own containers, so they are possibly not an option. I think the Python buildpack collects and installs the packages before creating the droplet, so I don't think copying packages directly into the /app directory will be of much help.
If you have data files, then you can use a .profile file and do some scripting to copy them from S3 or a server/NFS location into the /app directory, something like:
wget http://s3.location.com/data_files
cp data_files /home/vcap/app/
But if all of these are packages and increasing the size limit is not feasible, then you may need to look at breaking up the app.
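To flesh out the .profile idea above (the bucket URL, file names and target folder are placeholders, not anything from your setup), a script committed to the app root could look roughly like:
#!/bin/bash
# .profile runs inside the app container before the start command,
# so anything copied here is in place when the app boots.
wget -q http://s3.location.com/data_files.tar.gz -O /tmp/data_files.tar.gz
mkdir -p /home/vcap/app/data
tar -xzf /tmp/data_files.tar.gz -C /home/vcap/app/data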
I have deployed two Rails apps to Digital Ocean (Ubuntu 18.04) with Passenger and Nginx.
Both apps were built on Rails 5.2.2 with Ruby 2.5.1, and the second app has all the same gems at the same versions. While the first app runs fine, the second will not launch.
The last useful line of the Passenger log says:
[ E 2020-08-06 22:41:56.6186 30885/T1i age/Cor/App/Implementation.cpp:221 ]: Could not spawn process for application /var/www/html/AppName_Prod/current: The application encountered the following error: ActiveSupport::MessageEncryptor::InvalidMessage (ActiveSupport::MessageEncryptor::InvalidMessage)
I know this is something to do with the master.key file, but that is present and contains the correct key. I'm not using environment variables to store the master keys; they are in the master.key file inside each app's directory structure.
I've read every SO post I could find on this and none have solved my issue.
Any suggestions for getting these two apps (and more) to work on the same droplet?
I'm all out of ideas.
Thank you for any help you can offer.
For anyone who might have the same issue, it was a bit deceptive.
I had tried rails credentials:edit and it didn't fix the issue, but I found that the app's containing folder was owned by user:user, whereas my other app was owned by user:root.
When I changed this, everything started to work.
I hope it helps someone because I didn't find this info anywhere online and it was a lot of trial and error.
Use ls -l to list the current owner of folders in the current working directory, so you can compare them.
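As a concrete sketch (the path and the user:root owner come from the posts above; substitute your own user and app directory):
$ ls -l /var/www/html                                  # compare the owners of the two app folders
$ sudo chown -R user:root /var/www/html/AppName_Prod
$ sudo service nginx restart                           # restart so Passenger picks the change up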
For me, this turned out to be somewhat complicated. I had provisioned my server using Ansible, which has a task to copy the Nginx conf. After provisioning the server, I changed RAILS_MASTER_KEY.
It turns out that my Ansible task does not rewrite the Nginx conf if it already exists on the server (the file is not compared, I guess). So although I updated RAILS_MASTER_KEY in my Ansible playbook (and it was even getting copied across to the server's environment variables!), it was not accessible to Rails through Passenger, because Passenger does not pass on the user's environment variables.
Whew!
To fix this (and create a snowflake server in the process...) I manually logged into the server and updated RAILS_MASTER_KEY to my new value in the Nginx passenger_env_var.
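For reference, the manual fix looks roughly like this (the conf path is a guess for your setup; passenger_env_var is Passenger's directive for injecting environment variables into the app):
$ sudo nano /etc/nginx/sites-enabled/myapp.conf
#   inside the app's server block:
#     passenger_env_var RAILS_MASTER_KEY <the new key value>;
$ sudo service nginx restart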
I am migrating a Django application from OpenShift v2 to v3. (In case you don't know, Red Hat is shutting down v2 on September 30th, see: https://blog.openshift.com/migrate-to-v3-v2-eol/)
So, I am following this blog post to help me: https://blog.openshift.com/migrating-django-applications-openshift-3/ . I am new to all the Docker / Kubernetes concepts the new version is built upon.
I was able to make some progress: I managed to get a successful build of my app. Yet it crashes at deployment time:
---> Running application from script (app.sh) ...
/usr/libexec/s2i/run: line 42: /opt/app-root/src/app.sh: Permission denied
Indeed, app.sh has lost its x permission. I logged into the failing container with oc debug and can see it:
$ oc debug dc/<my app>
(app-root)sh-4.2$ ls -l /opt/app-root/src/app.sh
-rw-rw-r--. 1 default root 127 Sep 6 21:20 /opt/app-root/src/app.sh
The blog post states "Ensure that the app.sh file is executable by running chmod +x app.sh.", which I did in my local repo. Anyway, I want to do it again directly in the pod, but it doesn't work:
(app-root)sh-4.2$ chmod +x /opt/app-root/src/app.sh
chmod: changing permissions of ‘/opt/app-root/src/app.sh’: Operation not permitted
So, how can I set the x permission on app.sh? Thank you.
Without looking into more details, any S2I builder image will gladly use your custom supplied run script to start the application in an alternative way.
Create .s2i/bin/ (mind the dot) in your source code directory, place the run script into it and rebuild the app in OpenShift - it will automatically use your custom run script upon deployment.
This is the preferred way of starting applications using custom commands in OpenShift.
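A minimal sketch of such a run script (the path comes from your error output; note that this run script itself has to be committed with its executable bit set, which the last answer below shows how to do with Git):
#!/bin/bash
# .s2i/bin/run -- replaces the stock run script; invoking app.sh through
# sh sidesteps the missing executable bit on the file itself.
exec sh /opt/app-root/src/app.sh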
Regarding your immediate problem, there is a very simple reason why you cannot change the permissions of the script: you were trying to modify the permissions in the deployed pod, not the builder pod. Deployed pods run using different UIDs, usually somewhere in the range of 100000000, which definitely do not match the file ownership as generated by the build. Hence the "Operation not permitted" error.
The root cause of your problem (app.sh losing executable permissions) must be in the way the build process installs those files, and indeed looking at the /usr/libexec/s2i/assemble script in the base image does seem to reveal the culprit. The last two lines are:
# set permissions for any installed artifacts
fix-permissions /opt/app-root
If you wanted to change this part of the build instead of using a custom run script, I suggest you then create .s2i/bin/assemble in your project's source code and make it look sort of like this:
#!/bin/bash
echo "Running stock build:"
${STI_SCRIPTS_PATH}/assemble
echo "Fixing the mess:"
chmod 755 /opt/app-root/src/app.sh
This will fix whatever the stock build process does to file permissions, and will do it using the same UID as the rest of the build, so file ownership shouldn't be an issue.
As I stumbled upon this issue myself, I've found a way to resolve it.
You have to make your app.sh file executable and push it to your repo as such.
If Git does not track this modification (as was the case for me), you have to use git update-index --chmod=+x app.sh for it to work.
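To verify that the bit really went in before pushing (these are plain Git commands, nothing OpenShift-specific):
$ git update-index --chmod=+x app.sh
$ git ls-files --stage app.sh      # should now show mode 100755 instead of 100644
$ git commit -m "Make app.sh executable" && git push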
I have installed a local instance of the Read the Docs server, but any time I try to build a GitHub repository the build gets stuck in the Triggered state.
There are no errors or exceptions, just regular info messages:
[25/Apr/2017 14:21:11] INFO [readthedocs.projects.utils:81] Running: 'ln -nsf /var/www/my-project/user_builds/test1/rtd-builds/latest /var/www/my-project/public_web_root/test1/en/latest' [/var/www/my-project]
[25/Apr/2017 14:21:11] INFO [readthedocs.projects.tasks:844] (Build) [test1:] Updating static metadata
Any idea what could be causing this issue?
So I had this problem, and there seem to be a lot of different things that could cause it; I've seen various postings about it on different forums, but none of the solutions posted helped me. The only posting I have bookmarked is this GitHub issue.
For me, I found that the documentation would build if I ran the command python manage.py runserver 0.0.0.0:8000, but it would be stuck in the Triggered state if I used my computer's IP address. The solution was to use the above command but to add the following to readthedocs/settings/local_settings.py:
import os
# Set this to the root domain where this RTD installation will be running
PRODUCTION_DOMAIN = os.getenv('RTD_PRODUCTION_DOMAIN', '10.x.x.x:8000')
# Enable private Git doc repositories
ALLOW_PRIVATE_REPOS = True
Best of luck.
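As a usage note (10.x.x.x is still a placeholder; substitute the machine's real IP), the server would then be started with something like:
$ RTD_PRODUCTION_DOMAIN=10.x.x.x:8000 python manage.py runserver 0.0.0.0:8000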
I am attempting to deploy some changes to a loopback app running on a remote Ubuntu box on top of strong-pm.
The changes that I make locally are not being reflected in what gets deployed to the server. Here are the commands I execute:
$ slc build
$ slc deploy http://IPADDRESS deploy
to which I get a successful deploy message which looks like this:
peter#peters-MacBook-Pro ~/Desktop/projects/www/places-api master slc deploy http://PADDRESS deploy
Counting objects: 5740, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (5207/5207), done.
Writing objects: 100% (5740/5740), 7.14 MiB | 2.80 MiB/s, done.
Total 5740 (delta 1555), reused 150 (delta 75)
To http://PADDRESS:8701/api/services/1/deploy/default
* [new branch] deploy -> deploy
Deployed `deploy` as `placesAPI` to `http://IPADDRESS:8701/`
Checking the deployed files on the server here:
/var/lib/strong-pm/svc/1/work
I can see that the changes I made to the local app are not reflected in what has just been deployed to the server.
In order to check that the changes are reflected in the build, I checked out the deploy git repository, like so:
git checkout deploy
Inspecting the files here, I can see that the changes I made are present.
**Does anyone know why the changes are not reflected in what is deployed to the server?**
I know this is an old post, but for anyone getting this issue: I just encountered the same problem.
In the end I used slc arc and tried to Build from there.
Make sure that the "Fully qualified path to archive" field has the correct value.
It should be something like
../project-1.0.0.tgz
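If you prefer the plain CLI over Arc, the equivalent idea (the flags here are from memory of the StrongLoop docs, so treat them as an assumption and check slc build --help) is to pack the build into a tarball and then deploy that archive explicitly:
$ slc build --pack                                   # writes something like ../project-1.0.0.tgz
$ slc deploy http://IPADDRESS ../project-1.0.0.tgz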