I have a weird problem with Django-Kronos.
I've got it running successfully on my local machine and on our development server. However, on the production server, I can't get kronos to acknowledge my cron.py file. When I run installtasks, it runs but says "0 tasks installed". I've also tried running the tasks manually and kronos tells me the task doesn't exist.
We use git to push everything through to the server, so all the files and the structures are identical between the three locations. I've also checked and the cron.py file exists and has exactly the same content as the working servers.
The only differences between the servers are that the production server is running Postgres (SQLite on the dev server) and Ubuntu 12.10, whereas the dev server is on 12.04.
Kronos is functioning properly, but it's not picking up our cron.py file for some reason.
Anyone got any ideas?
Well, unfortunately, our solution was to scrap Django-Kronos altogether and create a custom management command which we're running from the crontab.
This happens when one of the imports you are trying to make fails: your production system is probably missing a Python package that is imported in your cron.py.
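A quick way to confirm this is to import the module by hand on the production server, which surfaces the hidden ImportError (myapp is a placeholder for your own app name):
python manage.py shell
>>> import myapp.cron  # raises ImportError if a dependency is missing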
I'm trying to install Superset on my Ubuntu 20.04 local machine using docker-compose. When I run sudo docker-compose -f docker-compose-non-dev.yml up, I get several errors; the process keeps erroring and never finishes, so I aborted it. Can you please tell me what the problem is?
The errors I get during Init Step 1/4 [Starting] -- Applying DB migrations are:
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedTable) relation "logs" does not exist
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedTable) relation "ab_permission_view_role" does not exist
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedTable) relation "report_schedule" does not exist
I had the same issue on macOS, and similar issues have been reported on the GitHub issues page as well, but not everyone could reproduce them.
There is a possibility that something may have gone wrong in the first run.
Try running docker-compose down -v and then run docker-compose up again.
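With the compose file from the question, that would be as follows (the -v flag also removes the named volumes, so the metadata database is re-initialised from scratch on the next run):
docker-compose -f docker-compose-non-dev.yml down -v
docker-compose -f docker-compose-non-dev.yml up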
If the above fails, try upgrading your Docker installation. Installing a new version solved my problem.
I had the same issue (macOS Monterey): I already had a Docker container running Postgres for one of my apps, so when Superset started, it was looking at that instance of Postgres, which obviously didn't have the appropriate databases/tables/views/etc.
So just stopping that other instance and restarting the Superset containers fixed the errors and properly started Superset. #embarrassed #oops
I experienced the same issue, but these errors came at the end of a long string of cascading errors, and they occurred consistently across all runs.
Looking at the first error, it seems the initialisation script does not wait for PostgreSQL to be ready and starts transacting right away; if the first transactions fail, many others fail subsequently. In my case the database needed a few more seconds to be ready, so I added a sleep 60 at the beginning of docker/docker-bootstrap.sh to give PostgreSQL time to start before the other services begin working.
I then deleted the previously-created Docker volumes and ran docker-compose -f docker-compose-non-dev.yml up again, and now everything works fine.
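For reference, the change is just a delay at the top of the bootstrap script; 60 seconds is an arbitrary value that worked for me, and a more robust fix would poll PostgreSQL until it accepts connections:
#!/usr/bin/env bash
# docker/docker-bootstrap.sh
sleep 60  # crude wait so PostgreSQL finishes starting before migrations run
# ... rest of the original script unchanged ...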
It appears that Flask is already installed in PyCharm; I even created a blog post about how to do it, with pictures.
Nonetheless, it has been a nightmare getting it to execute and finally connect on the web.
What am I missing? I did it all through PyCharm.
Yes, in order to install Flask and do web apps, you must pay for PyCharm Pro. This is true whether you have a Windows laptop or desktop. I am not certain about other programs where you can do Python web apps for free, but I will research that.
We developed a Flask web app and want to deploy it on IIS. During development, we started the app via flask run, which launches a single instance of our app. On IIS, however, we observed (via the Task Manager) that our app runs multiple instances concurrently.
The problem is that our app is not designed to run in parallel. For example, our app reads a file from the file system and keeps it in memory for efficiency. This optimization is correct only if it is guaranteed that no other process changes the content of the file.
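A minimal sketch of the kind of optimisation meant here (the file name and function are made up for illustration); the module-level cache is only correct while no other process rewrites the file:

import json

# Read the file once at startup and keep it in memory.
with open("lookup_data.json") as f:
    _CACHE = json.load(f)

def get_entry(key):
    # Served from memory: a second IIS worker process would hold its own
    # copy and would not see changes written by another process.
    return _CACHE.get(key)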
Is there a way to prevent IIS from starting multiple instances?
In IIS, you can go to the FastCGI Settings, where you can see all the applications used by websites on your server. In the "Max Instances" column, the script you are talking about is probably set to 0 (or some value greater than 1), meaning it can be started multiple times. Limiting this to 1 will solve your problem.
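If you prefer the command line over the GUI, the same setting can be changed with appcmd using something like the line below; the interpreter path is a placeholder for whatever fullPath your FastCGI application actually uses:
%windir%\system32\inetsrv\appcmd set config -section:system.webServer/fastCgi "/[fullPath='C:\Python39\python.exe'].maxInstances:1" /commit:apphost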
You could use the code below to run only one instance of a program:
from tendo import singleton

me = singleton.SingleInstance()  # will sys.exit(-1) if another instance is running
The command to install it:
pip install tendo
Reference links:
Check to see if python script is running
How to make only one instance of the same script running at any time
https://github.com/pycontribs/tendo/blob/master/tendo/singleton.py
We have an app that works perfectly fine on production but is very slow on the dev machines.
Django==2.2.4
I'm using Ubuntu 20.04 but other devs are using macOS and even Windows.
Our production server is very small compared to the dev laptops (and yet the app runs very slowly in every dev environment; we are 5 developers).
The app makes several requests, since it's a single-page application that uses Django REST Framework and React.js on the front-end.
We have tried different databases locally (currently PostgreSQL; we also tried MySQL and sqlite3), with Docker and without, but it does not change the performance.
Each individual request takes a few seconds to execute, but when they all go together the thing gets very slow; as more requests are executed, the performance starts to drop.
The app takes between 2 and 3 minutes to load in the dev environment, while in any production or staging environment it takes a little over 10 seconds.
I also tried disabling DEBUG in the back-end and front-end; nothing changes.
It is my opinion that one of the causes is that the dev server is single-threaded and does not process a request until the previous one has finished.
This makes the dev environment very hard to work with.
I've seen alternatives (plugins) to make the dev server multi-threaded, but those solutions do not work with the latest versions of Django.
What alternatives could we try to improve this?
Looks like posting this question helped me think of an alternative: using gunicorn in the dev environment really helps.
I installed it with
pip install gunicorn
And then executed it using this:
venv/bin/gunicorn be-app.wsgi --access-logfile - --workers 2 --bind localhost:8000
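(As a side note, the gunicorn docs suggest (2 × number of cores) + 1 as a starting point for --workers; 2 is plenty for a dev machine.)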
Of course, if I want to access the static and media files I'll have to set up a local nginx, but it's not a big deal.
I am planning on moving from EmberJS to Ember CLI, though I have a small problem. Is it possible to run only the file watcher instead of ember serve, which runs a local server? Since I run my PHP backend on Google App Engine, I already have a local Python HTTP server running on localhost:8080 and don't need another one on localhost:4200.
If I don't run ember serve, my local changes in the development environment won't get updated. Is there a better way of doing this? Is it possible to use assets in the app folder when running in the development environment, and use the dist folder for staging/live environments?
As mentioned in the guide, you can use the build command with the --watch flag.
ember build --watch
That will keep rebuilding your changes but not actually run the server.
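If you want the rebuilt files to land somewhere other than dist/, the watcher also accepts an output path (the directory here is only an example):
ember build --watch --output-path ../backend/public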
As for your second question:
Is it possible to use assets in the app folder when running in development environment? and use dist folder for staging/live environments?
I don't believe so. You can change the output-path property in your .ember-cli config file, but you can't have one that's specific to a certain environment. You could always write a quick script to move the files though. :)
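For example, a .ember-cli file that redirects every build into another folder would look like this (the path is a placeholder):
{
  "output-path": "../backend/static"
}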