Django email log file "sent" growing

I have a file named sent that keeps growing on my staging server. It stores a log of all emails sent, and I don't know what's causing this behaviour; it doesn't look like a feature of the Django framework (1.8).
I can't find any reference to this log file in the source code. It's an issue for several reasons, including the fact that it causes the server to run out of disk space.
Maybe I'm missing something obvious in the Django config. I don't see how to fix this issue besides running a cron task to delete that file regularly. I'm open to better ideas.
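The only workaround I can think of for now is a small cron-driven cleanup script along these lines (the path and the size threshold are made up, since I still don't know what is writing the file):

    import os

    SENT_LOG = "/srv/myapp/logs/sent"   # hypothetical location of the growing file
    MAX_BYTES = 50 * 1024 * 1024        # truncate once it grows past ~50 MB

    def truncate_sent_log():
        try:
            size = os.path.getsize(SENT_LOG)
        except OSError:
            return  # nothing to do if the file does not exist
        if size > MAX_BYTES:
            # Truncate in place rather than unlinking, so whatever process holds
            # the file open keeps writing to the same inode and the space is freed.
            with open(SENT_LOG, "w"):
                pass

    if __name__ == "__main__":
        truncate_sent_log()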

Related

ColdFusion 2018 scheduled tasks not working

We have recently begun migrating to ColdFusion 2018 Enterprise, but have found that the scheduled tasks do not work. Although the relevant cfm file works if run in the browser on the same server, if we try to run it as a scheduled task then it does not work (although it will say it has run successfully on the screen).
The log file just contains a single line for each run:
Information","DefaultQuartzScheduler_Worker-5","11/20/20","12:48:18","","Task default.takename triggered."
From what I understand, there should be additional lines for the HTTP request etc., but there are not.
We have tried various usernames and passwords, including admin accounts, to make sure it is not a permissions issue, but nothing seems to make any difference.
We have also tried outputting to a file, but nothing ever populates the file, although it does update the file's modified date with the date/time the task ran (or creates a new file if necessary).
Does anyone have any experience with this type of problem?
This ended up being an IIS permissions issue. We resolved it by enabling anonymous authentication both for the directory that contains the relevant cfm files and for the "jakarta" directory, which I believe ColdFusion uses for some integration requirements. Scheduled tasks then ran as expected.

How to test the claim-config.xml file in WSO2 Identity Server

WSO2 documentation states that claims are read from the claim-config.xml file only once (https://docs.wso2.com/display/IS570/Adding+Claim+Mapping):
"The claims configured in <IS_HOME>/repository/conf/claim-config.xml file get
applied only when you start the product for the first time, or for any newly
created tenants. With the first startup, claim dialects and claims will be
loaded from the file and persisted in the database. Any consecutive updates to
the file will not be picked up and claim dialects and claims will be loaded
from the database."
The documentation makes it seem like you only have "one chance" to see how your claim-config.xml works. I'm in the process of developing and debugging the file, though - is there a way to force WSO2 to read the claim-config.xml file again, or to delete the relevant data from the database so that claim-config.xml is read again?
I'd like to avoid completely uninstalling the product and reinstalling every time I want to observe a change I made to the claim-config.xml file.
Things I have tried:
Completely deleting the database files (WSO2CARBON_DB.h2.db) from \repository\database. This prevented the WSO2 server from starting up.
Deleting the entries from the IDN_CLAIM table from the H2 database. This started the server, but I wasn't able to login.
"Completely deleting the database files (WSO2CARBON_DB.h2.db) from \repository\database. This prevented the WSO2 server from starting up."
If you are okay with completely resetting the databases, you can delete the above files. As @senthalan wrote in the comments, you then need to start the server with the '-Dsetup' flag. It recreates the DB, repopulates the configuration, and starts the server.
sh wso2server.sh -Dsetup

Heroku ephemeral storage, Sendgrid, and attachments

On occasion I need to send emails with attachments to users of my site. I am using SendGrid and python-sendgrid 0.1.4 to do the send. Email sending is queued through Redis.
Here's the issue -- where do I put the attachment, which is currently generated as part of the web process? I tried putting it in /tmp, which didn't work -- presumably because the file was deleted when the web process shut down and was no longer available when the worker process came by? I tried /app/media, which also didn't work -- I think because /app/media is read-only (though, oddly, I did not get any errors attempting to write to this directory)?
I think the answer may be that I have to refactor my code to generate the attachment in the same process as the email is sent, but as that is a pretty significant refactor, I thought I'd ask the community first. Thanks!
Heroku's /tmp directories are unique to each dyno. So your Web Dyno saves a file in its /tmp directory, then your worker looks in its /tmp directory and cannot find it.
The best option is likely refactoring your code (that way you aren't clogging up your Web Dyno's resources creating and writing files to disk). However, if you really want to avoid it, you could store your files temporarily on S3 [tutorial] or some other external storage mechanism.
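A rough sketch of that temporary-S3 hand-off with boto3 (the bucket name, key, and local paths are placeholders, not anything your app already defines):

    import boto3

    BUCKET = "myapp-email-attachments"  # placeholder bucket name
    s3 = boto3.client("s3")

    def stash_attachment(local_path, key):
        """Web process: upload the freshly generated attachment to S3."""
        s3.upload_file(local_path, BUCKET, key)
        return key

    def fetch_attachment(key, local_path):
        """Worker process: pull the attachment back down before sending."""
        s3.download_file(BUCKET, key, local_path)
        return local_path

The web dyno would enqueue the S3 key along with the rest of the email job, and the worker dyno downloads the file into its own /tmp just before handing it to SendGrid.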
You always need to use external storage, for example S3, to store files that need to be available to every server instance/dyno.
Also useful to know if you don't want to store those attachments forever: you can attach a lifecycle rule to your S3 bucket that will automatically delete a file once it's older than x days.
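That sort of lifecycle rule can also be created from code; a sketch with boto3, where the bucket name, prefix, and the seven-day window are placeholders:

    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket="myapp-email-attachments",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "expire-stale-attachments",
                    "Filter": {"Prefix": "attachments/"},
                    "Status": "Enabled",
                    "Expiration": {"Days": 7},  # delete objects older than 7 days
                }
            ]
        },
    )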

Business Process "Observer" application

My client is requesting to be notified any time one of their business processes fails for any reason. I had the idea of writing a separate application that would run as an "observer" and check the various parts of the process.
An example would be that a daily file was generated and uploaded to an FTP location. The "Observer" might have the following "tests" :
Connect to the FTP
Go to folder where file should exist
Find file with naming convention
Verify create date of file
Failure of any step will send an alert email and also log to a report (both, in case either the database or email is down).
My question is.... Are there any products out there that do something close to this? I'd rather buy if there is something robust out there. If not, this almost seems like a unit test platform... Anything out there for testing I could potentially repurpose?
As an FYI, we are a Microsoft/Windows based shop.
Thx in advance!
You could even use a Continuous Integration framework for this. They normally monitor source code repositories and build and test things, but they could be used for this as well.
For instance, Hudson, Jenkins and CruiseControl.NET are a few open source ones that are good and can easily be set up for something like this. Just change the monitoring of a repository to either the filesystem or FTP, and write a small script which checks what you need. Everything else comes for free with the framework, i.e. email, a web interface for monitoring, and running things.
Just an idea.
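For what it's worth, a minimal sketch of the kind of check script that could be dropped into such a framework, using only the Python standard library (the host, credentials, folder, filename pattern, and alert address are all placeholders):

    import fnmatch
    import smtplib
    from datetime import datetime, timedelta
    from email.mime.text import MIMEText
    from ftplib import FTP, all_errors

    FTP_HOST = "ftp.example.com"
    FTP_USER = "observer"
    FTP_PASS = "secret"
    REMOTE_DIR = "/daily-exports"
    FILE_PATTERN = "export_*.csv"
    ALERT_TO = "ops@example.com"

    def alert(message):
        """Send the alert email and also append to a local report file."""
        with open("observer-report.log", "a") as report:
            report.write("%s %s\n" % (datetime.now().isoformat(), message))
        msg = MIMEText(message)
        msg["Subject"] = "Business process check failed"
        msg["From"] = "observer@example.com"
        msg["To"] = ALERT_TO
        smtplib.SMTP("localhost").sendmail(msg["From"], [ALERT_TO], msg.as_string())

    def check_daily_export():
        try:
            ftp = FTP(FTP_HOST)                 # 1. connect to the FTP server
            ftp.login(FTP_USER, FTP_PASS)
            ftp.cwd(REMOTE_DIR)                 # 2. go to the expected folder
        except all_errors as exc:
            return alert("FTP step failed: %s" % exc)

        matches = [n for n in ftp.nlst() if fnmatch.fnmatch(n, FILE_PATTERN)]
        if not matches:                         # 3. find file by naming convention
            return alert("No file matching %s in %s" % (FILE_PATTERN, REMOTE_DIR))

        # 4. verify the file is recent; MDTM returns the modification time as
        # '213 YYYYMMDDHHMMSS' (used here as a stand-in for the create date).
        stamp = ftp.sendcmd("MDTM " + matches[-1]).split()[-1]
        modified = datetime.strptime(stamp[:14], "%Y%m%d%H%M%S")
        if datetime.utcnow() - modified > timedelta(days=1):
            alert("File %s is older than one day" % matches[-1])

    if __name__ == "__main__":
        check_daily_export()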

Django action after file upload

We have an extensive existing codebase and we've added load-balanced servers with a single master server to the equation now. There are various apps that contain models with uploaded files and images which all work fine... However, this raises the obvious problem of the rsync delay. Rsync is in the crontab and set to run every minute but this still means there's a potential 59 second wait between content being created and it actually existing on the webservers.
What I would like, is to be able to register some kind of 'post file changed' handler that triggers rsync whenever a new file is uploaded. I can't find anything of the sort though! Django has file upload handlers, but these appear to only deal with the actual upload stream, not the file as it is saved to the filesystem thereafter.
The best approach I can see is to create simple extensions to FileField, FieldFile, ImageField and ImageFieldFile as part of my project and hook into the save and delete methods in the FileField. Essentially, to create custom File and Image fields with this behaviour added. This isn't massively complicated to do, but it doesn't seem like the most elegant solution to me. I'll need to teach South about my new fields, update every model that is affected, and then create hordes of South migrations (which I'm pretty sure will clash with some code we have pending).
I'm also looking into creating a custom Storage class for the project, but I'm nervous about this having far-reaching effects on other pieces of code.
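For reference, the custom Storage idea I mean is roughly this (the class name and the rsync destination are made up):

    import subprocess

    from django.core.files.storage import FileSystemStorage

    class RsyncOnWriteStorage(FileSystemStorage):
        """FileSystemStorage that pushes each saved file out to the web servers."""

        RSYNC_DEST = "webserver:/srv/media/"  # hypothetical destination

        def _save(self, name, content):
            name = super(RsyncOnWriteStorage, self)._save(name, content)
            # Fire-and-forget rsync of the single file that was just written.
            subprocess.Popen(["rsync", "-az", self.path(name), self.RSYNC_DEST])
            return name

        def delete(self, name):
            super(RsyncOnWriteStorage, self).delete(name)
            # A corresponding remote delete would still be needed here.

Each affected field would then have to be pointed at it explicitly, e.g. models.ImageField(storage=RsyncOnWriteStorage(), upload_to='avatars/'), which at least avoids subclassing the field classes themselves.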
I can't believe no-one has come across this issue before, is there a canonical approach?
Thanks very much!
If you want to tackle this problem from the server side (e.g. a solution similar to rsync) and you're running Linux, you might want to check out lsyncd:
http://code.google.com/p/lsyncd/
lsyncd uses inotify in the Linux kernel to watch directories and invokes rsync as soon as files are modified. It's fairly simple to drop in.