The problem is that I want to add some constraints when using post-review; for example, if my Python file does not conform to PEP 8, I want the review request to be automatically refused. How can I do this?
If the requirement only applies on your machine, you can write a script that checks all the required conditions and then calls post-review. If you want to enforce this for all users, you can either distribute the script to all the clients or add these checks on the server.
There is no built-in way to do this with post-review.
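If you go the client-side route, a small wrapper script can gate the call to post-review. Here is a minimal sketch, assuming the pep8 command-line checker and post-review are both on the PATH and that the files to check are passed as arguments:

```python
#!/usr/bin/env python
# Wrapper that refuses to post a review if any given file violates PEP 8.
import subprocess
import sys

def main():
    files = sys.argv[1:]
    # The pep8 tool exits non-zero when it finds style violations
    if files and subprocess.call(["pep8"] + files) != 0:
        sys.exit("PEP 8 violations found; review request refused.")
    # Everything is clean: hand off to post-review
    sys.exit(subprocess.call(["post-review"]))

if __name__ == "__main__":
    main()
```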
Is it possible to have Django running on a server and have one Django application communicate with another Python process, say one that I developed, fetching a response from it or even making it perform a particular action?
It can be synchronous or asynchronous; I have some idea of the asynchronous case, where a package like hendrix, crossbar.io, or even celery could be used. But I don't understand what this kind of inter-communication would be called, or how I should plan the architecture for it.
I have the following two situations going around my head, and I'm seeking a plan for them:
1. Say I have Django and an e-mail sender built with Python's smtplib. A user making a request to a view would make Django execute the module I developed for sending an e-mail to a particular user (via an SMTP server from Google/Gmail). It could be synchronous or asynchronous.
OR
2. I have Django (some application) and I want it to communicate with some server I maintain, say to make that server execute some code or just fetch a file (if it is an FTP server). Is 'microservices' the appropriate term for this situation? Or is there another term or workaround here?
Your first solution would be called an installable Python module, just like any package you install with pip. You can keep this as a separate module if you need your code to be reusable across multiple (or future) projects.
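For the first case, the reusable piece could be as small as a module like this (a minimal sketch using the standard library's smtplib; the addresses and credentials are placeholders):

```python
# mailer.py -- a tiny reusable e-mail module; credentials are placeholders
import smtplib
from email.message import EmailMessage

def send_mail(sender, password, recipient, subject, body):
    """Send a plain-text e-mail through Gmail's SMTP server."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    # SMTP over SSL on port 465, as Gmail expects
    with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
        server.login(sender, password)
        server.send_message(msg)
```

A Django view can then simply do `from mailer import send_mail` and call it, either synchronously or from an asynchronous task.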
Your second solution would be a microservice. This requires setting your small module up as a service, which could expose a REST API to communicate with and make it do whatever you intend.
If your question is "what is the right approach", then it depends on your use case. If this is just some reusable code that you don't want to repeat over and over throughout your project, make it a separate module. If it is a service that you expect other services to use and rely on, make it a microservice; you can use a microframework such as Flask for an easier and faster setup (see the sketch below). Otherwise, if it's code that you will use once and that serves a single piece of functionality in your application, just write it and keep it there.
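For the second case, a Flask microservice can be as small as this (a sketch; the endpoint name and the work it does are made up for illustration):

```python
# service.py -- a minimal Flask microservice exposing one REST endpoint
from flask import Flask, jsonify

app = Flask(__name__)

def do_the_work():
    # Placeholder for whatever the service is actually responsible for
    return "done"

@app.route("/run-task", methods=["POST"])
def run_task():
    # Execute the service's job, then report the outcome as JSON
    result = do_the_work()
    return jsonify({"status": "ok", "result": result})

if __name__ == "__main__":
    app.run(port=5000)
```

Your Django application would then call it over HTTP, for example with `requests.post("http://service-host:5000/run-task")`.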
There are no rules or standards on which approach should be taken. I personally judge things depending on the use case.
Hope this helps!
I am running Apache Superset at the following address:
http://superset.example.com:8088
That gets redirected to:
http://superset.example.com:8088/superset/welcome
Ideally, users would get redirected to:
http://superset.example.com:8088/welcome
How can that be accomplished? I would also like it to run on port 80 so that the port doesn't need to be specified, but I haven't been able to do that either.
This issue covers what you're talking about:
https://github.com/apache/incubator-superset/issues/985
which led to this closed PR:
https://github.com/apache/incubator-superset/pull/1866
You can try to reopen the PR and finish it, or you can try configuring nginx as suggested there.
I found it very frustrating to set up a base URL for Superset. If you want to save some time, I condensed a couple of comments into a working example here: https://github.com/komoot/superset-reverse-nginx-example
Below is how I eventually made it run on an endpoint other than '/'. My use case, though, was to make it work on AWS Lambda in a serverless environment.
Here is what I did to make it work:
1. In config.py I added another configuration variable and used it in the places where a redirect or appbuilder.add_link was used (a sketch of the idea appears after this list).
2. In the templates folder there are places where '/superset/' is used directly, so even after the first step the templates were not rendering correctly; I had to change the templates as well. (For now I have hard-coded this; I still need to make it configurable.)
3. In the front-end I added a file called config.ts and used this config wherever the front-end performed a redirect. This fixed all my front-end links.
4. The only thing remaining for me was the "Upload CSV to Database" link. When we click this link and submit the data, writes fail because Lambda doesn't allow them; I tried writing to /tmp, but since we don't know whether the next request will be served by the same Lambda instance, this is still an open issue. I plan to fix it by writing the files to S3 instead of a local folder; I am still figuring out how to do that.
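As an illustration of the first step, the idea looks roughly like this (BASE_PATH is a name I made up, not an official Superset setting, and the import path is hypothetical; Superset itself is Flask-based, hence the flask redirect):

```python
# config.py (sketch; BASE_PATH is a made-up configuration variable)
BASE_PATH = "/analytics"

# Then, wherever a redirect or appbuilder.add_link was hard-coded
# to "/superset/...", build the link from the configurable base path:
from flask import redirect

from config import BASE_PATH  # hypothetical import path

def welcome():
    # Redirect to the welcome page under the configurable base path
    return redirect(BASE_PATH + "/welcome")
```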
No more nginx or other layers, and we don't even need gunicorn in this setup.
Thanks
My application (Qt, MSVC2010) requires constant updates, both to the code (the executable file itself) and to the data (files to be used by the customer).
The main issue is that not every user has the right to download the whole set of updates, so I need a way to send each user only the appropriate files.
I decided to do something like this:
1. Client: send user ID
2. Server: check the user ID in the database and send the appropriate updates
3. Client: receive the updates
At this stage I'm not focusing on security issues (authentication, encryption); I'd just like to know whether there is any ready-made solution I could use or whether I have to code this myself. Even a partial solution would be of great help.
I'm not aware of any server-side application that can handle this kind of situation, but I must admit this is really not my field.
Last point: I need to avoid any web-based solution (user logging in to a website, PHP and so on) for a very long list of reasons.
Thank you!
I don't know if it's really an answer; I can just describe how I implemented a very similar design in a simple way some time ago.
1) The client has version information built in (through an .rc file) and user credentials.
2) The client accesses a central database, checking whether there is a URL for its current version and credentials, essentially: select the URL from the updates table where the credentials match and the version is greater than the current one (see the sketch after this list).
3) The client fetches the updates as a single zip file from that URL over HTTP/FTP using standard Qt classes. If you need a custom protocol you might want to implement some logic on top of it.
4) The client updates itself based on the received data.
5) The client notifies the server that the update completed, so we know what is installed where.
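A sketch of the lookup in step 2, here in Python with sqlite3 purely for illustration (the table and column names, updates, user_id, version, url, are assumptions):

```python
# Step-2 lookup sketch; schema names are assumptions made for illustration.
import sqlite3

def find_update_url(db_path, user_id, current_version):
    """Return the URL of the newest update this user is allowed to fetch."""
    conn = sqlite3.connect(db_path)
    try:
        row = conn.execute(
            "SELECT url FROM updates"
            " WHERE user_id = ? AND version > ?"
            " ORDER BY version DESC LIMIT 1",
            (user_id, current_version),
        ).fetchone()
        return row[0] if row else None
    finally:
        conn.close()
```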
It's really a very simple skeleton with a lot of limitations, but it solved everything I needed in that project perfectly. You can deploy an update for a particular user without affecting others.
How can I make Django execute something automatically at a particular time?
For example, my Django application has to upload files over FTP to remote servers at predefined times. The FTP server addresses, usernames, passwords, times, days, and frequencies are defined in a Django model.
I want to run a file upload automatically based on the values stored in the model.
One way to do this is to write a Python script and add it to the crontab. This script would run every minute and check the time values defined in the model.
The other thing I can roughly think of is Django signals, though I'm not sure they can handle this. Is there a way to generate signals at predefined times? (I haven't read about them in depth yet.)
Just for the record: there is also celery, which lets you schedule messages for future dispatch. It is, however, a different beast than cron, as it requires/uses RabbitMQ and is meant for message queues.
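For illustration, dispatching a one-off task in the future with celery looks roughly like this (a sketch; send_report and the broker URL are placeholders):

```python
# tasks.py -- sketch of scheduling a future task with celery
from celery import Celery

app = Celery("tasks", broker="amqp://localhost")  # RabbitMQ as the broker

@app.task
def send_report(report_id):
    print("sending report", report_id)

# Enqueue the task so a worker runs it roughly ten minutes from now
send_report.apply_async(args=[42], countdown=600)
```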
I have been thinking about this recently and have found django-cron, which seems as though it would do what you want.
Edit: Also, if you are not specifically looking for a Django-based solution, I have recently used scheduler.py, a small single-file script which works well and is simple to use.
I've had really good experiences with django-chronograph.
You need to set up one crontab task: calling the chronograph management command, which then runs your other custom management commands based on an admin-tweakable schedule.
The problem you're describing is best solved using cron, not Django directly. Since it seems you need to store data about your FTP uploads in your database (using Django to access it for logs, graphs, or whatever), you can write a Python script that uses Django and runs via cron.
James Bennett wrote a great article on how to do this which you can read in full here: http://www.b-list.org/weblog/2007/sep/22/standalone-django-scripts/
The gist of it is that you can write standalone Django scripts that cron can launch and run periodically, and these scripts can make full use of your Django database, models, and anything else. This gives you the flexibility to run whatever code you need and populate your database, while not trying to make Django do something it wasn't meant to do (Django is a web framework, and is event-driven, not time-driven).
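Here is a minimal sketch of such a standalone script, using the modern django.setup() bootstrap rather than the older recipe in the article (the project name "myproject" and the UploadSchedule model are hypothetical):

```python
# standalone_upload.py -- sketch of a cron-driven standalone Django script
import os

import django

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
django.setup()  # must happen before any model imports

from django.utils import timezone
from uploads.models import UploadSchedule  # hypothetical app and model

def run_due_uploads():
    """Perform the FTP upload for every schedule row that is due."""
    for schedule in UploadSchedule.objects.filter(next_run__lte=timezone.now()):
        schedule.do_ftp_upload()          # placeholder for your ftplib code
        schedule.next_run = schedule.compute_next_run()
        schedule.save()

if __name__ == "__main__":
    run_due_uploads()

# Example crontab entry, running the script every minute:
# * * * * * /usr/bin/python /path/to/standalone_upload.py
```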
Best of luck!
I'm trying to set up synchronization between Unfuddle and my local server so that when there is an SVN commit on Unfuddle, my local server's code automatically gets updated. Unfuddle provides a callback, and I can receive that callback when a commit is made.
Now, my web server runs as the user called "apache", but all the existing files are owned by a "development" user. This is the FTP user my team uses to update code every day. I want my callback handler to write the files as the "development" user.
These are the options I've tried or considered so far to make this work:
1. I can't use sudo -u development, as that would mean letting the apache user run sudo.
2. Changing file permissions to 777 works, but obviously that's not something I want to do.
3. The only option I can think of now is to run a standalone Python HTTP server as the "development" user and make that the callback URL. That way I can update the local working copy as the "development" user, and my colleagues can keep working as they did before.
Are there any other ways of doing this?
Thanks!
For personal use, I'd go with the Python HTTP server approach. But if you're happy with Apache: I imagine a script will then run to fetch things from the Unfuddle SVN into your local copy; in that case, try mod_suexec, which runs scripts as a different user.
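For the standalone-server route, the skeleton could look like this with a current Python (a sketch; the port and working-copy path are placeholders, and Unfuddle's actual callback payload is not parsed here). The key point is that the process is started as the "development" user, so the svn update runs with that user's permissions:

```python
# callback_server.py -- minimal callback listener; start it as "development"
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

WORKING_COPY = "/var/www/project"  # hypothetical path owned by "development"

class CallbackHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Drain the request body (the commit payload is ignored in this sketch)
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        # Update the working copy; runs as whoever started this process
        subprocess.call(["svn", "update", WORKING_COPY])
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8081), CallbackHandler).serve_forever()
```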
Have you considered putting the two users in the same group and giving write permission to the group?