bin/console without working DB - doctrine-orm

I would like to be able to run Symfony's php bin/console without a configured DBAL connection.
I want to run some non-DB-related commands on CI without a database.
Is it somehow possible?
Thanks.

As suggested by Cerad in his comment, you should remove the DoctrineBundle from AppKernel.php.
If you do need that bundle in other contexts for your app (e.g. accessing it from a browser), then you could define a customized environment (e.g. console) and enable the bundle only in the other environments (prod, dev, and test are the defaults). See https://symfony.com/doc/current/configuration/environments.html

Related

Env vars and Docker differences between dev, staging, and prod

Although my specific example involves Django, Docker, and Heroku, I believe these are pretty general testing/QA questions.
I have a dockerized Django app tested in dev with Selenium, confirming that my static files are being served correctly from my local folder (EXPECTED_ROOT = '/staticfiles/'). This app is deployed to Heroku and I can see (visually and in the dev tools) that the static files are being pulled in from CloudFront correctly as well. I want to formalize this with the same test I'm using in dev. My first question is about whether and how environment variables are used for tests:
Do I add for example EXPECTED_ROOT = 'https://<somehash>.cloudfront.net/' as an env var to Heroku and use it in the Selenium test?
Also, to run this test in staging I would need to install Firefox in my Docker image like I do in dev. Perhaps this is OK in staging, but in prod I believe I should be aiming for the smallest image possible. So the question is about differences between staging and prod:
Do I keep Firefox in my staging image, run the tests, and then send to production a replica of that Dockerfile, but now without Firefox?
Any help is appreciated.
The idea of Config Vars is to set up configuration variables that differ from environment to environment. Having said that, you are in control of the environment and can define what you need.
I personally would use a different approach: create a test that is independent of the environment (for example, instead of testing the expected root, I would confirm that a given DIV ID, or some other element, is found).
This would be enough to confirm the test is successful and the functionality works as expected.
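A minimal sketch of such an environment-independent check, assuming Selenium 4, a headless Firefox with geckodriver available, and a hypothetical SITE_URL environment variable and site-header element id, could look roughly like this:
import os
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.firefox.options import Options

options = Options()
options.add_argument("--headless")  # no display needed inside a container
driver = webdriver.Firefox(options=options)
try:
    driver.get(os.environ.get("SITE_URL", "http://localhost:8000"))
    # assert on page content rather than on where the static files come from,
    # so the same test passes in dev, staging, and production
    header = driver.find_element(By.ID, "site-header")  # hypothetical element id
    assert header.is_displayed()
finally:
    driver.quit()
Pointing SITE_URL at the right host is then the only per-environment difference.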
The production Dockerfile indeed does not need Selenium and can be different from the one from staging.

Django on Azure Webapp: Run command on deployment

I am deploying a Django app to Azure Webapp, which does everything automatically. I have set it up so that when I push to a specific GitHub branch, it is deployed and everything works. If I have to run a migration, I must log in via SSH and run it manually (which is not perfect, but I can accept it).
However, I need to use django-background-tasks, which needs to have a command running constantly, listening for new tasks. I can't find a way to run this on every deployment. I found some documentation, but most of it seems to be for Node apps. For example, following some (outdated) tutorials, I logged into {myapp}.scm.azurewebsites.net but I didn't find any "Download deployment scripts", which seemed to be the proper way to do it.
Is there a way to set up some commands to run on deployment (without changing my current setup of deploying directly from GitHub using GitHub Actions)? Or do I have to do it manually?
Well, in case someone is looking for how to do this, I found the solution.
Create a startup file (a shell script), put the gunicorn instruction on the first line, and add your own custom commands after it.
It is explained here: https://learn.microsoft.com/en-us/azure/developer/python/tutorial-deploy-app-service-on-linux-04#create-a-startup-file

Running Django's createsuperuser in Google Cloud Run

I'm trying to run a Django app on Google Cloud Run. The site itself works nicely, and I can run migrations and collect static assets via a startup script. The one thing I cannot figure out is how to create a superuser. This requires interactively typing in a password, or at least setting it via a Django shell. I currently cannot figure out how to do this and it seems like it might not be possible, which would make Cloud Run unusable for Django. Has anyone been able to achieve this, or have a sustainable workaround? Thanks!
Instead of the Django shell, use the API to create the superuser. Once you have the script, make it part of the container build process.
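A minimal sketch of such a script, assuming the default User model, hypothetical DJANGO_SU_NAME / DJANGO_SU_EMAIL / DJANGO_SU_PASSWORD environment variables, and DJANGO_SETTINGS_MODULE already set, could look like this:
import os
import django
django.setup()  # requires DJANGO_SETTINGS_MODULE to be set
from django.contrib.auth import get_user_model

User = get_user_model()
username = os.environ["DJANGO_SU_NAME"]
# only create the superuser on the first run, so the script stays idempotent
if not User.objects.filter(username=username).exists():
    User.objects.create_superuser(
        username=username,
        email=os.environ["DJANGO_SU_EMAIL"],
        password=os.environ["DJANGO_SU_PASSWORD"],
    )
Running it from the same startup script that handles migrations and collectstatic creates the superuser on deploy instead of interactively.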

Django command: does it require the Django server to be running?

I have introduced a new Django command which I can run from a cron job. This is particularly helpful because it gives access to the ORM.
To run this Django command, does the Django server need to be running?
No, the Django server is a separate process, completely independent from your custom command.
If you are using a virtualenv (and if you aren't, you probably should), keep in mind that you must source the virtualenv or use the Python interpreter inside it in order for the management command to run properly.
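For illustration, a minimal sketch of such a cron-friendly management command (the app name and command name are hypothetical, and the management/commands directories need __init__.py files) could look like this:
# myapp/management/commands/report_user_count.py
from django.contrib.auth import get_user_model
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Print the number of registered users (ORM access without the web server)."

    def handle(self, *args, **options):
        count = get_user_model().objects.count()
        self.stdout.write(f"users: {count}")
cron then calls it through manage.py with the virtualenv's interpreter, for example */30 * * * * /path/to/venv/bin/python /path/to/project/manage.py report_user_count, and no runserver process is involved.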

Pydev + Django workflow. Local (test) + remote synchronization. Using git with Django

I'm new to Django and my very first project is my blog. I wonder how Django developers who use Pydev normally synchronize with their remote hosting server when updating their sites?
I would also like to know how you combine the use of git with a Django project. Should I just make a repository for the entire project?
At my company we've got an entire git repository for each project, including the Django sources that are put on the PYTHONPATH for each project, making Django versions project dependent. The folder structure is something like:
/.git
/projectname/app1
/projectname/app2
/projectname/manage.py
/django-lib/django/...
As django-lib is not a Python module, we include both / and /django-lib in the PYTHONPATH. If your project is becoming large, you might want to consider using git submodules on your apps.
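As an illustration only, that path setup could be done at the top of manage.py roughly like this (the layout above is assumed):
import os
import sys

# repository root, assuming this file lives at /projectname/manage.py
REPO_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.insert(0, os.path.join(REPO_ROOT, "django-lib"))  # vendored Django checkout
sys.path.insert(0, REPO_ROOT)                              # the project's own apps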
We've also set up several servers to support the developers. There's a testing server running a central testing database and a setup including Apache with WSGI to make testing on a real server possible, which is sometimes a bit different from the local manage.py the developers use before committing their changes.
The testing server is updated from the master branch of our git repository. We've made several scripts to allow all developers to do this without letting them log in to the server via SSH, but that is just during pre-release. After release, that server will become our staging server, and we'll remove all scripts from it to make it just like our production server.
Every developer has set up their local project to make sure that it communicates with the central testing database, which contains some test data. I myself push my changes from the command line, but you could also use EGit for this.
When we've got a release, we put it in a separate branch, called 'release' (obviously), and the production server will pull only from that branch. This is done via SSH, but I don't really know what your server setup looks like, so I guess that last step is entirely up to you.
I hope that this has helped you a bit. I won't say that this is the best workflow possible, but it works for us and you should figure out what works for you.
Most experienced Django developers use pip (or distribute) and virtualenv to deal with all the Python packages you might need for your Django projects (including Django itself).
Personally, all I keep in my project's git repository is a bunch of segregated requirements lists generated by pip:
. ~/Dev/environs/$PROJECT_NAME/bin/activate   # activate the project's virtualenv
pip freeze > ./docs/requirements/main.list    # snapshot installed packages into a tracked requirements list
I'm fairly sure most Django developers would be familiar with Fabric, which I use for the following (a minimal fabfile sketch follows the list):
streamlining local interaction with git,
pushing to our central repository,
pulling from our production or test server,
touching the wsgi file on the relevant server,
and pretty much any other kind of task you might find yourself using an SSH terminal session for.
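A fabfile sketch along those lines, assuming Fabric 2.x and hypothetical host and path names, might look like this:
from fabric import task

@task
def deploy(c):
    # pull the latest release branch on the server and reload the WSGI process
    with c.cd("/srv/myproject"):          # hypothetical project path on the server
        c.run("git pull origin release")
        c.run("touch myproject/wsgi.py")  # touching the wsgi file triggers a reload
It would be invoked with something like fab -H user@server deploy.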
For those cases where I need to make changes to someone else's Django application in order to make it work or suit our purposes, I:
fork it on github,
clone from my forked repo
make the changes
push it up to my own repo
and provide merge requests to the original repo owner
This way, I have a repo that I can keep pulling from via pip requirements lists until the original application owner gets their own repo updated.
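For example (names hypothetical), a requirements entry like -e git+https://github.com/myuser/someapp.git@mybranch#egg=someapp keeps installing from the fork until the upstream repo catches up.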