I'm working on a pip package for our company that we can hopefully install in our applications, giving us a set of invoke tasks that we can run locally on our machines. So far, creating a task is really simple, but I want to know: is it possible to have invoke find tasks inside installed packages?
Thanks!
From invoke you can certainly use the Collection class to aggregate the tasks that you find. I'm not sure there is any auto-detection, though.
github: invoke/collection.py
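For what it's worth, here is a minimal sketch of a project-local tasks.py that pulls tasks in from an installed package; the module name mycompany_tasks is made up for illustration, and it is assumed to contain @task-decorated functions.

from invoke import Collection

import mycompany_tasks  # hypothetical pip-installed module of @task functions

# invoke looks for a root namespace called "ns" (or "namespace") in tasks.py
ns = Collection()
ns.add_collection(Collection.from_module(mycompany_tasks), name="company")

With the package installed, its tasks then show up as invoke company.<task-name>.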
I want to deploy my Wagtail project (a CMS based on Django) onto an AWS Lambda function. The best option seems to be using Zappa.
Wagtail needs OpenCV installed to support all of its features.
As you might know, just running pip install opencv-python is not enough, because OpenCV needs some OS-level packages to be installed. So before running pip install opencv-python, one has to install some packages (yum install ...) on the Amazon Linux image in which the Lambda environment runs.
The only solution that came to my mind is using Lambda layers to properly install OpenCV.
But I'm not sure whether it's possible to use Lambda layers with projects deployed by Zappa.
Any kind of help and sharing experiences would be really appreciated!
There is an open pull request that is ready to merge, but needs additional user testing.
The older project has a pull request claiming that layer support has been merged.
Feel free to try it out and let the maintainers know so documentation can be updated.
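If you do try it, the setting would presumably end up in zappa_settings.json looking roughly like the snippet below. Note that the "layers" key and the ARN shown are assumptions based on that pull request rather than documented behaviour of a released Zappa version, and the bucket and settings values are placeholders.

{
    "production": {
        "django_settings": "myproject.settings",
        "s3_bucket": "my-zappa-bucket",
        "layers": ["arn:aws:lambda:us-east-1:123456789012:layer:opencv:1"]
    }
}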
I want to run an API call every 5 minutes in the app, and generate a notification whenever new data is added to the database through the API. To check for this, I need to run the task in the background so the application behaves in real time.
How can I achieve this task in Django?
I am facing multiple errors while using django-background-tasks, so I don't want to use it. Celery is not compatible with Windows. Is there any other option that also has a tutorial available?
As you mentioned in another question of yours here,
you previously installed django-background-task instead of django-background-tasks (notice the difference between task and tasks). That is why you got ImportError: cannot import name 'background' from 'background_task'.
Later you installed the proper one, i.e. django-background-tasks, and then you uninstalled django-background-task, which also removed some of the dependencies of django-background-tasks.
SOLUTION: Uninstall both packages first, then run pip install django-background-tasks.
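Once django-background-tasks is installed cleanly (with 'background_task' in INSTALLED_APPS and migrations applied), a repeating poll could look roughly like this; the function body and the 5-minute interval are placeholders for your own API call and notification logic.

# tasks.py in one of your Django apps
from background_task import background

@background(schedule=0)
def poll_api():
    # call your API here and create a notification for any new records
    ...

# queue it once (e.g. from a management command or the Django shell),
# repeating every 5 minutes (300 seconds):
poll_api(repeat=300)

A worker process started with python manage.py process_tasks then executes it in the background.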
I am a scientist who is exploring the use of Dask on Amazon Web Services. I have some experience with Dask, but none with AWS. I have a few large custom task graphs to execute, and a few colleagues who may want to do the same if I can show them how. I believe that I should be using Kubernetes with Helm because I fall into the "Try out Dask for the first time on a cloud-based system like Amazon, Google, or Microsoft Azure" category.
I also fall into the "Dynamically create a personal and ephemeral deployment for interactive use" category. Should I be trying native Dask-Kubernetes instead of Helm? It seems simpler, but it's hard to judge the trade-offs.
In either case, how do you provide Dask workers a uniform environment that includes your own Python packages (not on any package index)? The solution I've found suggests that packages need to be on a pip or conda index.
Thanks for any help!
Use Helm or Dask-Kubernetes?
You can use either. Generally starting with Helm is simpler.
How to include custom packages
You can install custom software using pip or conda. It doesn't need to be on PyPI or the Anaconda default channel; you can point pip or conda at other sources. Here is an example installing software with pip straight from GitHub:
pip install git+https://github.com/username/repository@branch
For small custom files you can also use the Client.upload_file method.
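For example, with an already-connected dask.distributed client (the scheduler address and file name below are placeholders):

from dask.distributed import Client

client = Client("tcp://scheduler-address:8786")  # placeholder address
client.upload_file("my_module.py")               # ships the file to all current workers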
I'm new to Python, but I have a requirement at my workplace. Another programmer is developing a project in Python with the Django framework, and my task is to find a way for this project to be run on any computer.
So I'm talking about something like Composer for PHP. I need the easiest way, on Debian, to run a command in the terminal that will find a kind of "composer.json" file in the project, read all the required software, modules, and libraries, and install them step by step on the PC.
Any ideas on how to do this in the easiest way? Thanks.
Since you have not mentioned a virtual environment, I'll assume you have already set one up and activated it. First, capture all the installed libraries into a requirements.txt file in your project directory by typing the command below:
pip freeze > requirements.txt
When you need to set up the project on another system, just run this:
pip install -r requirements.txt
All the dependencies will be installed in your project environment.
Using pip freeze > requirements.txt is an anti-pattern. It does some things right, such as pinning version numbers, but it can lead to problems with orphan packages later.
I use pip-tools: that way only your top-level dependencies go into a requirements.in file, and then I use pip-sync to keep my local environment in sync.
There's a lot more information on pip best practices in my 2016 PyCon UK talk, "Avoiding the 'left pad' problem: How to secure your pip install process".
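As a rough sketch of that pip-tools workflow (the package name is just an example):

pip install pip-tools
echo "Django" > requirements.in   # top-level dependencies only
pip-compile requirements.in       # writes a fully pinned requirements.txt
pip-sync requirements.txt         # makes your virtualenv match it exactly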
In Python there is the requirements.txt file, which keeps track of the libraries used in the project. You should also use virtualenv.
This is a Django and Python and maybe just a general web development question.
What's the difference between using virtualenv vs. Vagrant vs. VirtualBox, etc.?
I'm kinda confused as to when to use which one :/ I've been using virtualenv this whole time, creating new virtual environments for different projects...
Is this the right way to do it?
One virtualenv per project?
I'm not really sure when and where Vagrant comes into play... Am I supposed to set up Vagrant and then use virtualenv?
This is probably a silly question, but... if I were working on this project with other people, would they too have to set up a virtualenv just to collaborate?
Wouldn't it make more sense for us all to work on our local machines and then push to the main branch? I'm just kinda confused... I feel like I'm doing it all wrong...
Thanks for the replies everybody!
Virtualenv sets up a local sandbox for you to install Python modules into.
Vagrant is an automation tool for creating Virtual Machines.
VirtualBox is a free, open source environment for running virtual machines, like those created by Vagrant.
Virtualenv is really about all you'll need to do sandboxed development on your local machine. We use Vagrant at my work to automate the creation of VMs. This way new developers coming on to a project have basically zero configuration to do in order to start working.
If you're collaborating with other devs, they don't need to do any of the above to work on your Django project, but if there's a lot of configuration involved that can't be done with pip and a requirements.txt, then you might look at Vagrant to ease some of that automation.
But you are correct in your assumption that you can all just work on a local branch and push back to the repo. Everything else is just icing.
Virtualenv is a Python construct that holds a specific set of packages, separate from your system packages. The version of Python and the packages that came with your OS, or that you installed separately, are the "system packages".
Virtualbox is totally different -- it's a VM, an entire operating system in a box.
I'm not familiar with Vagrant.
All you need is virtualenv. Create a new virtualenv for each project (they're very lightweight!). You need to do this because the whole point of virtualenv is to isolate the exact packages, and versions of those packages, that you need for your project. Then activate the virtualenv and use pip install to install the packages you need, presumably starting with Django itself.
Once you have all the packages you need, use pip freeze > requirements.txt to create a file called requirements.txt that records all of the packages you've decided to use.
When other people collaborate on your project, they can start a virtualenv, pull your code into it, and run pip install -r requirements.txt to replicate your environment. They can even modify requirements.txt, push that back to you via your version control system, and you can run pip install -r requirements.txt yourself to modify your environment to match their changes.
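In command form, that whole workflow is roughly the following (the environment name and package are just examples):

virtualenv env                  # or: python -m venv env
source env/bin/activate
pip install Django              # plus whatever else the project needs
pip freeze > requirements.txt   # commit this file to version control
pip install -r requirements.txt # collaborators run this to replicate the environment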
This is all essential because without virtualenv, the problem of, for instance, having one project on your computer that requires Django 1.4 and one that requires Django 1.5 becomes very complicated.
Virtualenv is not an entire operating system in a box, just a python environment, so even if you are using it, you are still working on your local machine.
We use virtualenv and a Ubuntu virtual machine. Here's why:
virtualenv allows us to have isolated Python environments on a given operating system instance
Using the Ubuntu desktop in a virtual machine for our Python development mimics what it will look like when deployed on the server, which is also Ubuntu. This means that we understand precisely the external OS package dependencies and configuration. You don't get this easily when you use OS X or Windows for development and Linux for deployment.
One important point is that a virtual machine is portable. You can take a snapshot and deploy it elsewhere easily. With the Vagrant and Ansible combination, you can automate a remote deployment.