Where is the heroku database?

I am trying to host my PHP application on Heroku cloud services. This is my first ever try with any Git client; following the procedure described in the Heroku documentation, I am done pushing my files to the repo.
But now the one place where I am totally lost is: where is the Heroku database, and how can I configure it?
I went through myapp > Resources, where it says that 5 MB of database can be used for free; the only clickable link there is the 5 MB label, but even that does not take me anywhere.
But where is the control panel for that database, where I can run SQL to configure it and find its name, username, etc. (maybe an interface like phpMyAdmin)?
Kindly guide me through this.
Thank you.

There is no "control panel" for the Heroku database. As for "where is it", there is a SHARED_DATABASE_URL environment variable of the form:
$ heroku config | grep DATABASE
SHARED_DATABASE_URL => postgres://username:password@host:port/database_name
In your PHP code, you can get this like so:
$database_url = getenv('SHARED_DATABASE_URL');
You may need to do some parsing of that URL to get it into a format that your PHP database API needs (it's been a while since I wrote any PHP).
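For example, splitting the URL into the pieces a database API typically wants might look like this (sketched in Python for brevity; PHP's parse_url behaves much the same, and every value below is made up):

from urllib.parse import urlparse

# A made-up SHARED_DATABASE_URL value
url = urlparse('postgres://myuser:mypass@ec2-something:5432/mydb')

user = url.username              # 'myuser'
password = url.password         # 'mypass'
host = url.hostname             # 'ec2-something'
port = url.port                 # 5432
database = url.path.lstrip('/') # 'mydb'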
As for "how do I configure my database", either from the command line, e.g.
$ heroku run php
or, assuming your code has some ORM-like features, invoke those to set up the database schema, or use Heroku's db:push command, e.g.:
$ heroku db:push [URL_TO_MY_LOCAL_SOURCE_DATABASE]

I was looking for something like phpMyAdmin for Heroku databases, and I found the Adminium add-on, which works in a similar way.
Much easier than the console.

Heroku will automatically set up your access to the database.
You may use taps to push and pull data between your development machine and Heroku. See http://devcenter.heroku.com/articles/taps
Alternatively, you may use pgbackups - http://devcenter.heroku.com/articles/pgbackups
Heroku recommends pgbackups as the most complete way of handling your database data (as described on the taps page).

Usually what you push to Heroku is the production side of the application, so it has a separate database that uses the same schema you designed, once it has been migrated over.
So all your data will need re-entering through the Heroku app, which can be found at:
'app name'.herokuapp.com

Related

Manage sqlite database with git

I have this small project that specifies sqlite as the database choice.
For this particular project, the framework is Django, and the server is hosted on Heroku. In order for the database to work, it must be set up with migration commands and credentials whenever the project is deployed to continuous integration tools or a development site.
The question is that many of these environments do not actually use the my_project.sqlite3 file that comes with the source repository, which we version control with git. How do I incorporate changes into the deployed database? Is a script that sets up the database suitable for this scenario? Meanwhile, it is worth noting that there are security credentials that should not appear in a script unencrypted, which makes the situation tricky.
that many of these environments do not actually use the my_project.sqlite3 file that comes with the source repository
If your deployment platform does not support your chosen database, then your development environment should probably be moved to one of the databases they do support. It is possible to run different databases in development and production, but it just seems like a source of headaches.
I have found a number of articles that state that Heroku just doesn't support SQLite in production and instead recommends Postgres.
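In practice that means running Postgres in every environment and keeping the connection details out of the code. A minimal settings sketch, assuming the third-party dj-database-url package and a DATABASE_URL environment variable (the SQLite fallback is only for quick local experiments):

# settings.py
import dj_database_url

# On Heroku, DATABASE_URL is provided for you; locally you can export
# your own, or fall back to the repo's SQLite file for experiments.
DATABASES = {
    'default': dj_database_url.config(default='sqlite:///my_project.sqlite3')
}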
How do I incorporate changes into the deployed database? Is a script that sets up the database suitable for this scenario?
I assume that you are just extracting data from one database to give to another, so yes, as long as that script is a one-time batch operation each time the code is updated, it should be fine. You will want something else if you are adding/manipulating data in production and then exporting it to your git repo.
Meanwhile, it is worth noting that there are security credentials that should not appear in a script unencrypted
An environment variable should solve that. You set your host machine to have environment variables with your credentials and then just retrieve them within the script. You are looking to have something like this:
import os

# Set environment vars (in practice you set these on the host machine,
# e.g. in a shell profile, rather than hard-coding them in the script)
os.environ['USER'] = 'username'
os.environ['PASSWORD'] = 'password'

# Get environment vars
USER = os.getenv('USER')
PASSWORD = os.environ.get('PASSWORD')
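Putting the two answers together, the one-time setup script might read the credentials from the environment, hand them to Django, and run the migrations. A sketch, assuming settings.py reads DATABASE_URL (the host and database name are made up):

import os
import subprocess

# Credentials come from the host environment, never from the repo
# (the same USER/PASSWORD variables as above).
user = os.getenv('USER')
password = os.environ.get('PASSWORD')

# Hand them to Django as DATABASE_URL; the host and database name here
# are hypothetical, and settings.py is assumed to read this variable.
os.environ['DATABASE_URL'] = (
    'postgres://%s:%s@localhost:5432/my_project' % (user, password)
)

# The one-time batch step after each code update: apply migrations.
subprocess.check_call(['python', 'manage.py', 'migrate'])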

wagtail cms content deploy to production

I am studying the popular Django CMS framework wagtail and have come to a question: how do you deploy your developed content, like pages/documents/images, to production environments?
I am puzzled because this content (like pages) is saved in the database; essentially it is just database table rows, not a resource in the git repo. So if I develop a simple website in dev, when I come to deploy to prod it's not as simple as a git push. What is the best practice for this?
I read some code from Torchbox; there are some database-dump and record-pulling tasks using Fabric, but I am not sure if that's the preferred way, and I can't fully understand them.
Or, if it's a production site, is the assumption that everyone adds content there and prod is the source of truth, so there is no need for "content deployment" at all, only schema changes via South migrations and other static resources?
Please help if anyone has experience with this and can provide guidance.
Thanks
On our (Torchbox) sites, all content entry usually happens on the production site, so we don't need to push any database content as part of our regular deployments. Many of our sites have tens or even hundreds of editors, so it would be almost impossible to synchronise the content across multiple installations of the site.
Whenever we need to transfer content from one installation to another (for example, deploying the production site for the first time, or pulling a snapshot of the live site to help with development), we use the PostgreSQL pg_dump command to make a SQL dump of the complete database, then restore it at the destination using the psql command. Tools like Fabric can be used to automate this, but this isn't essential.
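A minimal script automating that dump-and-restore cycle might look like the following sketch (the connection URLs and dump path are assumptions about your setup):

import subprocess

SOURCE_DB = 'postgres://user@live-host:5432/mysite'    # hypothetical
DEST_DB = 'postgres://user@localhost:5432/mysite_dev'  # hypothetical
DUMP_FILE = 'mysite.sql'

# Make a plain-SQL dump of the complete source database.
with open(DUMP_FILE, 'w') as out:
    subprocess.check_call(['pg_dump', SOURCE_DB], stdout=out)

# Restore it at the destination with psql.
with open(DUMP_FILE) as dump:
    subprocess.check_call(['psql', DEST_DB], stdin=dump)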

How to work with a local development server and deploy to a production server in django?

I want to work locally on my Django (1.7) project and regularly deploy updates to a production server. How would you do this? I have not found anything about it in the docs, which confuses me, because it seems like many people would want to do this and there should be some kind of standard solution. Or am I getting the whole workflow wrong?
I should note that I'm not expecting a step-by-step guide. I am just trying to understand the concept.
Assuming you already have your deployment server set up, and all you need to do is push code to your server, then you can just use git as a form of deployment.
Digital Ocean has a good tutorial at this link https://www.digitalocean.com/community/tutorials/how-to-set-up-automatic-deployment-with-git-with-a-vps
Push sources to a git repository from a dev machine.
Pull sources on the production server. Restart uWSGI/whatever.
There is no standard way of doing this, so no, it cannot be included with Django or be thoroughly described in the docs.
If you're using a PaaS, how you deploy depends on the PaaS. Ditto for a container like Docker: you must follow the rules of that particular container.
If you're old-school and can ssh into a server you can rsync a snapshot of the code to the correct place after everything else is taken care of: database, ports, webserver setup etc. That's what I do, and I control stuff with bash scripts utilizing a makefile.
REMOTEHOST=user@yourbox
REMOTEPATH=yourpath
REMOTE=$REMOTEHOST:$REMOTEPATH
make rsync REMOTE_URI=$REMOTE
ssh $REMOTEHOST make -C $REMOTEPATH deploy
My "deploy"-action is a monster but might be as easy as something that touches the wsgi-file used in order to reload the site. My medium complex ones cleans out stale files, run collectstatic and then reloads the site. The really complex ones creates a timestamped virtualenv, cloned database and remote code tree, a new server-setup that points to this, runs connection tests on the remote and if they succeed, switches the main site to point to the new versioned site, then emails me the version that is now in production, with the git hash and timestamp.
Lots of good solutions. Heroku has a good tutorial: https://devcenter.heroku.com/articles/getting-started-with-django
Check out a general guide for deploying to multiple PaaS providers here: http://www.paascheatsheet.com

Should I have my Postgres directory right next to my project folder? If so, how?

I'm trying to develop a Django website with Heroku. Having no previous experience with databases (except the sqlite3 one from the tutorial), it seemed like a good idea to me to have the following file structure:
Projects
'-MySite
|-MySite
'-MyDB
I'm finding it hard to figure out how to do this, as the psql commands prefer to put the databases in some obscure directory instead. Perhaps it's not such a good idea?
Eventually I want to be able to test and develop my site (it'll be just a blog for a while, I'm still learning) locally (i.e. add a post, play with the CSS) and sync with the Heroku repository, but I also want to be able to add posts via the website itself occasionally.
The underlying data files (MyDB) have nothing to do with your project files and should not be under your project.
EDIT
Added two ways to sync your local database with the database on the Heroku server:
1) export-import
This is the most simple way, do the following steps every now and then:
make an export on the Heroku server by using the pg_dump utility
download the dump file
import the dump into your local database by using the psql utility
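On Heroku you typically cannot run pg_dump on the server yourself, so steps 1 and 2 were usually done through the pgbackups add-on and the old Heroku toolbelt. A sketch of the whole cycle (this assumes the pgbackups add-on is installed; the local database name is made up):

import subprocess

# 1) Make an export on the Heroku server.
subprocess.check_call(['heroku', 'pgbackups:capture'])

# 2) Download the dump file via its temporary URL.
url = subprocess.check_output(['heroku', 'pgbackups:url']).decode().strip()
subprocess.check_call(['curl', '-o', 'latest.dump', url])

# 3) Import the dump into your local database; pgbackups dumps are in
#    pg_restore's custom format (the local database name is hypothetical).
subprocess.check_call(['pg_restore', '--clean', '--no-acl', '--no-owner',
                       '-d', 'my_local_db', 'latest.dump'])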
2) replication
A more sophisticated way of keeping your local db in sync all the time is replication. It is used in professional environments and it is probably overkill for you at the moment. You can read more about it here: http://www.postgresql.org/docs/9.1/static/high-availability.html

How to ensure database changes can be easily moved over DVCS using django

Overview
I'm building a website in Django. I need to allow people to begin adding flatpages and setting some settings in the admin. These changes should be definitive, since that information comes from the client. However, I'm also developing the backend, and as such am creating and migrating tables. I push these changes to the hub.
Tools
django
git
south
postgres
Problem
How can I ensure that I get the database changes from the online site down to me on my lappy, and also push my database changes up to the live site, so that a minimum of co-ordination is needed? I am familiar with git hooks, so that option is in play.
Addendum:
I guess I know which tables can be modified via the admin. There should not be much overlap really. As I consider further, the danger really is me pushing data that would overwrite something they have done.
Thanks.
For getting your schema changes up to the server, just use South carefully. If you modify any table they might have data in, make sure you write both a schema migration and, as necessary, a data migration to preserve the sense of their data.
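For example, a South data migration that backfills a new column so their existing rows keep making sense might look like this sketch (the app, model and field names are hypothetical, and the frozen models dict South appends is omitted):

# Generated with ./manage.py datamigration flatpages backfill_slug
from south.v2 import DataMigration


class Migration(DataMigration):

    def forwards(self, orm):
        # Preserve the client's admin-entered rows: derive the new
        # (hypothetical) slug field from existing data.
        for page in orm['flatpages.FlatPage'].objects.all():
            page.slug = page.url.strip('/').replace('/', '-')
            page.save()

    def backwards(self, orm):
        orm['flatpages.FlatPage'].objects.update(slug='')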
For getting their updated data back down to you (which doesn't seem critical, but it might be nice to work with up-to-date test data as you're developing), I generally just use Django fixtures and the dumpdata and loaddata commands. It's easy enough to dump a fixture and commit it to your repo, then run loaddata on your end.
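That dump/load cycle can itself be scripted; a sketch using Django's management API, to be run inside a configured Django environment (the app label and fixture file name are hypothetical):

from django.core.management import call_command

# On the server: dump the admin-edited app(s) to a fixture file.
with open('live_content.json', 'w') as out:
    call_command('dumpdata', 'flatpages', indent=2, stdout=out)

# Commit live_content.json to the repo, then on your laptop:
call_command('loaddata', 'live_content.json')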
You could try using git hooks to automate some of this, but if you want automation I do recommend trying something like Fabric instead. Much of this stuff doesn't need to be run every single time you push/pull (in particular, I usually wouldn't want to dump a new data fixture that frequently).
You should probably take a look at South:
http://south.aeracode.org/
It seems to me that you could probably create a git hook that triggers off South if you are doing some sort of continuous integration system.
Otherwise, every time you do a push you will have to manually execute the migration steps yourself. Don't forget to put up the "site is under maintenance" message. ;)
I recommend that you use mk-table-sync to pull changes from the live server to your laptop.
mk-table-sync takes a lot of parameters, so you can automate this process using Fabric. You would basically create a Fabric function that executes mk-table-sync on each table that you want to pull from the server.
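Such a fabfile might look like the following sketch (Fabric 1 style; the table list and the mk-table-sync DSN options are assumptions about your setup):

# fabfile.py
from fabric.api import local

TABLES = ['blog_post', 'blog_comment']  # tables to pull from live

def pull_tables():
    """Sync each table from the live server down to this machine."""
    for table in TABLES:
        # DSNs are illustrative; check the mk-table-sync docs for the
        # exact h=/D=/t= options your setup needs.
        local('mk-table-sync --execute '
              'h=live.example.com,D=mydb,t=%s '
              'h=localhost,D=mydb' % table)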
This means that you cannot make database changes yourself, because they will be overwritten by the pull.
The only changes that you would make to the live database are through South: you would push the code to the server and then run migrate to update the database schema.