Moodle on Heroku using S3 Bucket

I'm trying to set up Moodle on a free Dyno on Heroku.
I started my project with the stable version 3.10 of Moodle on a new repository.
I've created the Heroku app and connected my GitHub repository to it.
I've created an S3 bucket that allows read and write access to objects from anyone.
In config.php:
I changed the database settings (to point to the free Postgres database that I also created),
I've set wwwroot to my Heroku app (http://myherokuappexample.herokuapp.com/),
I've set dataroot to my S3 bucket endpoint (https://mys3bucketnamexample.s3.eu-north-1.amazonaws.com/moodledata/).
I created the moodledata folder, added an image to it, and I was able to access it from the browser (for example, https://mys3bucketnamexample.s3.eu-north-1.amazonaws.com/moodledata/ola.png).
BUT I'm getting "Fatal error: $CFG->dataroot is not configured properly, directory does not exist or is not accessible! Exiting." when the app is deployed to Heroku.
What am I doing wrong?

Related

Deploying Flask App with sqlite db in AWS Elastic Beanstalk

I have the tutorial app from the Flask docs, modified for my use.
I was able to deploy my Flask app and create the database via container_commands (flask init-db).
But when I try to write something to the database from the web browser, it throws the exception
"sqlite3.OperationalError: attempt to write a readonly database"
It seems the problem is write permission on the SQLite file, but the file was created at the time of deploying the application. Any help?
Normally, when you deploy in your own production environment (not the cloud), the SQLite database is created in the venv/var/app-instance folder. How do I access this folder on AWS EB?
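For reference, the tutorial's app factory derives that path from app.instance_path, so a quick sketch like the following (assuming the tutorial's flaskr package name; yours may differ) prints where the SQLite file actually ends up on the EB instance:

# Sketch: the Flask tutorial's app factory sets DATABASE from app.instance_path,
# so printing both on the server shows where the SQLite file really lives.
# "flaskr" is the tutorial package name and is an assumption here.
from flaskr import create_app

app = create_app()
print("instance path:", app.instance_path)
print("database file:", app.config["DATABASE"])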
It looks like a file write permission issue to me as well.
Here is how I would troubleshoot it:
Log in to the instance via SSH, give write permission to the file, and see if that resolves the issue.
If it does resolve the issue, I would automate that change using .ebextensions.
Hope this helps you move forward.
Also, the reference below shows how to change file permissions.
Reference: python sqlite3 OperationalError: attempt to write a readonly database
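A minimal sketch of that fix in Python, assuming the deployed app lives under /opt/python/current/app and the database sits in its instance folder (both paths are assumptions); note that SQLite also needs write access to the containing directory so it can create its journal file:

# Sketch: run on the EB instance (e.g. over SSH) to loosen permissions on the
# SQLite file. The path below is an assumption based on where EB puts the
# deployed app and where the Flask tutorial creates its database.
import os
import stat

DB_PATH = "/opt/python/current/app/instance/flaskr.sqlite"  # assumed location


def make_writable(path):
    """Add user/group write bits to the file and its parent directory
    (SQLite creates a journal file next to the database)."""
    for target in (path, os.path.dirname(path)):
        mode = os.stat(target).st_mode
        os.chmod(target, mode | stat.S_IWUSR | stat.S_IWGRP)
        print(f"{target}: mode is now {oct(os.stat(target).st_mode & 0o777)}")


if __name__ == "__main__":
    make_writable(DB_PATH)

If that resolves it, the same change can be reapplied on every deploy from a container_commands entry in .ebextensions, as the answer suggests.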

How to download updated sqlite Database from AWS EB? (Django app)

I have deployed my Django app on AWS Elastic Beanstalk, using the SQLite database. The site has been running for a while.
Now I have made some changes to the website and want to deploy again, while keeping the changes the database went through (from form inputs).
But I am not able to find a way to download the updated database or any files. The only version of the app I am able to download is the one I already uploaded when I first deployed.
How can I download the current running app version and deploy with the same database?
You may use scp to connect to the remote server and download all your files.
As commented, you'd be better off using dumpdata and then transferring that file (instead of the DB or the full site).
However, if your remote system's Python version is not the same as your Django site's, then dumpdata may not work (this was my case), so I downloaded the whole site folder back to local and then used dumpdata.
AWS Elastic Beanstalk keeps a Python app's files in /opt/python/current/app; just use scp -r to download the whole site to your local machine.
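If the manage.py route is awkward on the instance, the same export can be driven from a small Python script via Django's call_command; a rough sketch (the backup.json name is just an example), run with DJANGO_SETTINGS_MODULE pointing at your settings:

# Sketch: export the data with dumpdata from a script instead of
# "python manage.py dumpdata". Run it on the EB instance, copy backup.json
# back with scp, run your migrations locally, then load it with loaddata.
import django
from django.core.management import call_command

django.setup()  # expects DJANGO_SETTINGS_MODULE to be set

with open("backup.json", "w") as fh:
    # contenttypes and auth.permission are excluded to avoid clashes on load
    call_command("dumpdata", exclude=["contenttypes", "auth.permission"], stdout=fh)

# Locally, after "python manage.py migrate":
#   call_command("loaddata", "backup.json")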
I believe your SQLite DB file is being packaged from your local machine during eb deploy. If that is the case, then all you need to do is not include the db.sqlite file in your deployments.
Move the db.sqlite file to S3 or an EBS persistent volume so that it persists.
Use .ebextensions to run DB migrations in AWS on top of that persistent DB file, just the way Django recommends.
To be on the safe side, you can SSH into your EB environment (which is an EC2 instance) and download the db.sqlite file, just in case.
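As a rough illustration of the "move it somewhere persistent" part, a settings.py sketch that points Django at a SQLite file outside the app bundle; the /data mount point is purely an assumption about where your persistent volume is attached:

# settings.py sketch: keep the SQLite file outside the deployed bundle so
# "eb deploy" never ships a stale copy over the live database. The directory
# below is an assumption; on EB it could be a mounted EBS volume.
import os

PERSISTENT_DATA_DIR = os.environ.get("PERSISTENT_DATA_DIR", "/data")

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": os.path.join(PERSISTENT_DATA_DIR, "db.sqlite3"),
    }
}

Combined with adding db.sqlite3 to .gitignore (or .ebignore), the deployment bundle then carries no database at all, and the .ebextensions migration step runs against the persistent copy.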

AWS EC2 RHEL7 Instance permission problem

I have set all the folders in my /var/www/upload directory to 755, but my PHP application hosted on that server is still not performing file uploads. Please tell me how to fix this permission problem so that PHP file uploads work correctly.
It does not show any error; the file is just not moved to the intended folder.

Deploy Django App on Azure: only displays default app, even after deployment

I am trying to deploy a webapp to Azure. I am following these directions https://azure.microsoft.com/en-us/documentation/articles/web-sites-python-create-deploy-django-app/
First step, I created a webapp (Django) on the portal.
Then it says to follow the directions to configure continuous deployment using Git in Azure App Service. This should apparently lead to my having a local directory of Django files. https://azure.microsoft.com/en-us/documentation/articles/web-sites-publish-source-control/
So I follow those directions, installing Git, creating a local repository, adding a webpage, enabling web app repository, deploying.
The web portal now shows that I have deployed ('active' deployment). However, when I go to the web app URL, what's showing is NOT what I deployed, but rather what I guess is the default Django app with its URLs (login, logout, contacts).
So then I create an actual Django app in my local directory (instead of the static index.html from the directions). I commit and push it to Azure. It shows as being deployed.
The result is the same as before: the default web app is showing.
So what I'm missing is the connection between my local repository and what's actually showing. Is there some way to pull the Azure default app into my local repository? (Once it's there, I'll be able to change it as I see fit.)
Things are working as expected, but you ended up overwriting the Django app in your first Git commit. The Continuous Deployment instructions as written are generic to any deployment, even a blank Web App.
All you need to do is git clone your repo after you've initialized your local Git repo on the Azure Web App. You've already gone through most of these steps, but I'll include them here for others who may be looking for this answer.
After you create the Django Web App from the Azure Marketplace/Gallery, scroll down to set up continuous deployment.
Choose Local Git repo.
Notice that you now have a Git Clone URL in both your Quickstart Essentials info and under All Settings >> Properties. Go ahead and copy this URL.
If you haven't already done so, you may need to set or reset your Deployment Credentials. You'll find this under All Settings. These will be your Git and FTP credentials. Note that these are actually the credentials for your Microsoft account, not just this one Web App.
You already have Git installed from your first attempt. You should now be able to navigate to the folder you want to clone the repo into and run:
git clone <your_git_clone_url>
After you type in your password, you'll have a cloned repo of the Django Web App on your local system. cd into the directory and start working from there. Once you have changes, git add ., git commit, and git push them back to the repo in Azure to see your changes there.

How To Stop Overwriting Directories When Pushing to Heroku

This seems simple enough, but I can't find a fix. I have a Django app on Heroku, which I'm using as a staging environment. My app uses a media directory for images and files uploaded from the admin pages.
When I git push heroku master, it destroys this directory even though it's not added to the repo.
How can I stop this?
Heroku dynos don't have a persistent filesystem.
On git push, your code is compiled into a slug and then distributed to a dyno that your application runs in. See https://devcenter.heroku.com/articles/slug-compiler for details.
If you want persistent storage, you should integrate your application with AWS S3 (Simple Storage Service) or some other storage service.
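For a Django app, the common choice is the django-storages S3 backend; a minimal settings sketch, where the bucket and region names are placeholders and the credentials are assumed to come from environment variables (set with heroku config:set):

# settings.py sketch for storing uploaded media on S3 via django-storages
# (pip install django-storages boto3). Bucket and region are placeholders.
INSTALLED_APPS = [
    # ... your existing apps ...
    "storages",
]

# On Django 4.2+ the STORAGES setting replaces DEFAULT_FILE_STORAGE.
DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
AWS_STORAGE_BUCKET_NAME = "my-staging-media-bucket"
AWS_S3_REGION_NAME = "eu-west-1"
# AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY are read from the environment.

MEDIA_URL = f"https://{AWS_STORAGE_BUCKET_NAME}.s3.amazonaws.com/"

With that in place, FileField/ImageField uploads from the admin go to the bucket instead of the dyno's local filesystem, so they survive the next git push.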