I create a new deployment for every new customer on the Heroku platform.
However, when my app gets deployed to Heroku, some default data should be loaded. Each time I update, the existing data should remain.
What is my best option? Thanks!
I'm using the Hasura migration guide to sync two servers - DEV and PROD.
Previously we transferred the changes manually (as in 'using the UI to copy all the changes'), so the databases are now about 90% similar.
We decided to set up proper migrations, but based on my tests, doing an initial sync requires a 'clean slate'.
Example of the problem:
We have a users table on both DEV and PROD. On DEV there is an additional field, age.
We do
1. hasura migrate create --init (on DEV)
2. hasura migrate apply --endpoint PRODUCTION
We get the error: relation "users" already exists.
The question is - how can we sync the DBs without cleaning PROD first?
You're currently getting that error because migrate apply is trying to execute statements against tables which already exist.
If you use the --skip-execution flag, you can mark all of your relevant migrations as already applied in the PRODUCTION environment, and then run migrate apply as usual to apply the new migration.
More information is available in the CLI documentation:
https://hasura.io/docs/latest/graphql/core/hasura-cli/hasura_migrate_apply.html
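For example, the flow might look something like this (a sketch only; the version number is a placeholder for your initial migration's timestamp):
hasura migrate apply --skip-execution --version <initial-migration-version> --endpoint PRODUCTION
hasura migrate apply --endpoint PRODUCTION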
After re-reading the question, to clarify: creating the initial migration using create --init will create a snapshot of your database as it is now (it won't diff between STAGING and PRODUCTION).
To migrate this between STAGING and PRODUCTION, you'd need to manually edit the generated initial migration so it matches both staging and prod, and then manually create an incremental migration to bring PRODUCTION in line with STAGING.
After this, if you're working with the Hasura Console through the CLI (using https://hasura.io/docs/latest/graphql/core/hasura-cli/hasura_console.html), it will automatically create future incremental migrations for you in the migrations directory.
As an aside, you can also create resilient migrations manually using IF NOT EXISTS (Hasura doesn't generate these automatically, but you can edit the generated SQL migration files).
For example:
ALTER TABLE users
ADD COLUMN IF NOT EXISTS age INT
Edit 2: One other tool I came across which may be helpful is Migra (for Postgres, outside of Hasura). It can help with diffing your dev and production databases to create the initial migration state: https://github.com/djrobstep/migra
It's a bit buried, but the section on migrations covers this scenario (you haven't been tracking/creating migrations and now need to initialize them for the first time):
https://hasura.io/docs/latest/graphql/core/migrations/migrations-setup.html#step-3-initialize-the-migrations-and-metadata-as-per-your-current-state
Hope this helps =)
I'm currently developing a screener web app on Heroku. The data is saved in a CSV file (because it is a screener, it takes a lot of time to run if it calls the API every time). Right now, I update it manually by saving the data locally and then pushing it to Heroku.
Is there a way to set up a cron job so that the CSV data updates once a day?
Heroku dynos have an ephemeral filesystem. Any changes you make will be lost when the dynos restart. This happens frequently (at least once per day).
For this reason, file-based data stores are not recommended on Heroku. If you move your data into a client-server database (Heroku's own Postgres service would be a good fit, or you can pick something else), you could easily update it whenever you like.
The Heroku Scheduler can be used to run tasks daily. Write a script to update your database table and schedule it to run whenever you like.
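If it helps, here's a minimal sketch of such a script (assumptions on my part: the data ends up in a Heroku Postgres table called prices, the DATABASE_URL config var comes from the Postgres add-on, and requests/psycopg2 are installed; the source URL and column names are placeholders):

import os
import psycopg2
import requests

DATA_URL = "https://example.com/prices.csv"  # placeholder for wherever the screener data comes from

def main():
    lines = requests.get(DATA_URL, timeout=30).text.splitlines()
    conn = psycopg2.connect(os.environ["DATABASE_URL"])  # set automatically by the Postgres add-on
    with conn, conn.cursor() as cur:  # commits on success, rolls back on error
        for line in lines[1:]:  # skip the CSV header row
            symbol, price = line.split(",")[:2]
            cur.execute("INSERT INTO prices (symbol, price) VALUES (%s, %s)", (symbol, price))
    conn.close()

if __name__ == "__main__":
    main()

Commit a script along those lines to your repo, then add a daily job in the Heroku Scheduler add-on that runs it (e.g. python update_prices.py, or whatever you name the file).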
We are using the awesome GitLab CI/CD workflow and have been satisfied with the process. A lot of Merge Requests can happen every day, and we want to make sure that our application is updated in real time whenever our pipeline jobs succeed.
For instance, our master branch is deployed to staging whenever a Merge Request is accepted. Here is our example deploy_staging job in .gitlab-ci.yml:
deploy_staging:
  type: deploy
  script:
    - yarn install
    - node_modules/ember-cli/bin/ember deploy staging --activate
  environment:
    name: staging
  only:
    - master
Since Ember is a Single Page Application, once a new deployment is shipped and available, the running Ember app doesn't recognize the new changes; we need to refresh the page to pick them up.
The other downside is that we can't afford to force a refresh while an end user is in the middle of a transaction. So my thought is to show a notification prompting the user to refresh the page, similar to mobile apps: when updates are available, the user just follows the link and applies the update manually.
Now this problem is narrowed down to this:
How can we send a signal to the running Ember application so we can show a notification prompting a page refresh whenever updates are available (after a successful CI/CD delivery)?
For this you'll want service workers :)
Service Workers are usually how most other sites notify about updates.
For Ember, setting them up is fairly simple: we have ember-service-worker to get your caching and manifest going, and then ember-service-worker-update-notify for automatic notification of asset updates.
Though, there is a PR here: https://github.com/topaxi/ember-service-worker-update-notify/pull/3 to notify about updates in a more automated way -- the current behavior only notifies about an update upon a refresh and load of cached assets.
I recently opened that PR because I think #pollingInterval={{5000}} would be the ideal interval to check for updates: every 5 seconds, we see if there is an update.
Hope this helps!
I have had a Django site deployed on a GoDaddy Django "droplet" for a while, and users have been using the site to keep their records.
Now that GoDaddy is discontinuing the service, I would like to migrate the entire site with all records intact to DigitalOcean.
How does one go about doing this?
Here's what I'd do:
dump my data into a file (see dumpdata; example commands below)
stop the server
remove all .pyc files
copy paste the whole folder of the website to the destination server
restore data using loaddata
run the server
I did this 4 times without any problems (dev to test environment, test to pre-prod and pre-prod to prod).
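For reference, steps 1 and 5 are just the built-in management commands; the extra flags below are optional but help avoid clashes with rows Django recreates on its own (content types and permissions):
python manage.py dumpdata --natural-foreign --exclude contenttypes --exclude auth.permission > data.json
Then, on the destination server, after running migrate:
python manage.py loaddata data.json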
The reply by Oliver is good for small sites, but big companies (or any, really) won't be happy if you stop the server; also, some records/sales might come in between the time you dump the data and the time you stop the server.
I think this would be better but I'm unsure:
make a new database and keep it in sync with the old one: whenever the old DB changes, the changes should be synced to the new DB (I know, for example, that Postgres has triggers/notifications that can fire on creations/updates; this is the hardest step and needs quite a lot of research to pull off, since both databases must stay in sync)
deploy the new (copied) site next to the new database, connected to it
modify the DNS records to point to the new server; traffic will organically move to the new server. At some point you might have people making database updates on both servers, so it is important that the sync goes both ways
take the first site's server down, cut the database syncing, and remove the old site and old database
remove the pipe/method that was letting you sync the new DB with the old one
I am building a small financial web app with Django. The app requires that the database has a complete history of prices, regardless of whether someone is currently using the app. These prices are freely available online.
The way I am currently handling this is by simultaneously running a separate Python script (outside of Django) which downloads the price data and records it in the Django database using the sqlite3 module.
My plan for deployment is to run the app on an AWS EC2 instance, change the permissions of the folder where the db file resides, and separately run the download script.
Is this a good way to deploy this sort of app? What are the downsides?
Is there a better way to handle the asynchronous downloads and the deployment? (PythonAnywhere?)
You can write the daemon code and follow this approach to push data to the DB as soon as you get it from the Internet. Since your daemon would run independently of Django, you'd need to take care of data-synchronisation issues as well. One possible solution is to use a DateTimeField in your Django model with auto_now_add=True, which will give you an idea of when the data was entered into the DB. Hope this helps you or someone else looking for a similar answer.
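For what it's worth, here is a minimal sketch of that approach (the project name, app name, model fields, polling interval, and fetch logic are all placeholder assumptions on my part; the point is just the django.setup() call plus auto_now_add):

# standalone daemon that writes through the Django ORM
import os
import time

import django

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")  # placeholder project name
django.setup()

from prices.models import Price  # placeholder app and model

# prices/models.py, for reference:
# class Price(models.Model):
#     symbol = models.CharField(max_length=16)
#     value = models.DecimalField(max_digits=12, decimal_places=4)
#     fetched_at = models.DateTimeField(auto_now_add=True)  # records when the row was inserted

def fetch_latest_prices():
    # placeholder: download the freely available prices here and return (symbol, value) pairs
    return []

while True:
    for symbol, value in fetch_latest_prices():
        Price.objects.create(symbol=symbol, value=value)
    time.sleep(900)  # poll every 15 minutes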