How can I call Django's manage.py from GNOME Builder?

I have GNOME Builder 3.24.1 installed on Ubuntu 17.04. I have a functional Django project and an associated virtualenv. (Django 1.11, Python 3)
How can I configure Builder, so that when I click Run it invokes manage.py runserver in the virtualenv? (Ideally I'd like to be able to run other manage.py functions too, like manage.py collectstatic.)

This is not really possible, as GNOME Builder is tightly integrated with Flatpak. As far as I know, the host-system build system only supports auto-detected run targets, and only one of those.
However, if you create a Flatpak JSON manifest, you can set the command to be run in the command variable of the manifest - though it probably won't do everything you want, since the application then runs in a Flatpak sandbox.
Setup
To do that, you can create a new Python GNOME application called djangoproj with GNOME Builder. This will generate a project that uses the Meson build system and an org.gnome.djangoproj.json. The next step would be to remove the GNOME application parts - or you can just ignore them and add your Django dependencies.
Add the required modules before the native modules. For just Django this is:
[…]
"modules" : [
{
"name": "python3-Django",
"buildsystem": "simple",
"build-commands": [
"pip3 install --no-index --find-links=\"file://${PWD}\" --prefix=${FLATPAK_DEST} Django"
],
"sources": [
{
"type": "file",
"url": "https://pypi.python.org/packages/1b/50/4cdc62fc0753595fc16c8f722a89740f487c6e5670c644eb8983946777be/pytz-2018.3.tar.gz",
"sha256": "410bcd1d6409026fbaa65d9ed33bf6dd8b1e94a499e32168acfc7b332e4095c0"
},
{
"type": "file",
"url": "https://pypi.python.org/packages/54/59/4987ae4a4a8be8507af1b213e75a449c05939ab1e0f62b5e90ccea2b51c3/Django-2.0.3.tar.gz",
"sha256": "769f212ffd5762f72c764fa648fca3b7f7dd4ec27407198b68e7c4abf4609fd0"
}
]
},
{
"name" : "djangoproj",
"buildsystem" : "meson",
[…]
If you have additional dependencies, there is a handy tool that generates the necessary JSON for them: https://github.com/flatpak/flatpak-builder-tools/tree/master/pip
Now you can add the Django project files using the host system.
django-admin startproject sample
Meson needs to know about the new files, so add subdir('sample') to the root meson.build and create new meson.build files in the subdirectories. The meson.build in the sample directory looks like this for me; for the sample/sample directory you'd need to adjust the moduledir and the djangoproj_sources.
pkgdatadir = join_paths(get_option('prefix'), get_option('datadir'), meson.project_name())
moduledir = join_paths(pkgdatadir, 'djangoproj')
python3 = import('python3')
conf = configuration_data()
conf.set('PYTHON', python3.find_python().path())
conf.set('VERSION', meson.project_version())
conf.set('localedir', join_paths(get_option('prefix'), get_option('localedir')))
conf.set('pkgdatadir', pkgdatadir)
subdir('sample')
djangoproj_sources = [
  'manage.py',
]
install_data(djangoproj_sources, install_dir: moduledir)
Now you can set the command in org.gnome.Djangoproj.json to bash; after pressing launch, an interactive shell appears in the window where the program's logs would otherwise show up. There you can explore your newly created Flatpak, with Django included under the /app/ directory. If you want to run the Django app, you'd do:
$ python3 /app/share/djangoproj2/djangoproj2/manage.py runserver
You can also write this command in the command variable of the JSON file to launch it directly when pressing the "play" button.
All the other manage.py commands work too - however, keep in mind that the environment is a Flatpak and is recreated on every rebuild, so nothing that needs to persist can be saved in the Flatpak directory.

Related

Server-side AssemblyScript: How to read a file?

I'd like to write some server-side AssemblyScript that uses the WASI interface to read a file and process the contents.
I know that AssemblyScript and the ByteCode Alliance have recently had a falling out over the "openness" of the WASI standard, but I was hoping that they would still play nicely together...
I've found several AssemblyScript tools/libraries that appear to bridge this gap, and the one that seems the simplest to use is as-wasi. After following the installation instructions, I'm just trying to run the little demo app.
All the VSCode design-time errors have disappeared, but the AssemblyScript compiler still barfs at the initial import statement.
import "wasi"
import { Console, Environ } from "as-wasi/assembly";
// Create an environ instance
let env = new Environ();
// Get the HOME Environment variable
let home = env.get("HOME")!;
// Log the HOME string to stdout
Console.log(home);
Running npm run asbuild gives:
$ npm run asbuild
> file_reader@1.0.0 asbuild
> npm run asbuild:debug && npm run asbuild:release
> file_reader@1.0.0 asbuild:debug
> asc assembly/index.ts --target debug
ERROR TS6054: File '~lib/wasi.ts' not found.
:
1 │ import "wasi"
│ ~~~~~~
└─ in assembly/index.ts(1,8)
FAILURE 1 parse error(s)
The file ~lib/wasi.ts does not exist, and creating it as a symlink pointing to the index.ts in the ./node_modules/as-wasi/assembly/ directory makes no difference.
Since the library is called as-wasi and not wasi, I've tried importing as-wasi, but this also fails.
I've also tried adapting tsconfig.json to include
{
    "extends": "assemblyscript/std/assembly.json",
    "include": [
        "../node_modules/as-wasi/assembly/*.ts",
        "./**/*.ts"
    ]
}
But this also has no effect.
What is causing asc to think that the required library should be in the directory called ~lib/ and how should I point it to the correct place?
Thanks
Your question sent me down a bit of a rabbit hole, but I think I solved it.
So, apparently, after the wasi schism, AssemblyScript added the wasi-shim repository, which you have to install as well:
npm install --save @assemblyscript/wasi-shim
The import "wasi" is no longer necessary after version 0.20 of AssemblyScript according to the same page, so you have to remove that import entirely. Also, be sure to add the extends to your asconfig.json, as recommended in the same wasi-shim page. Mine looks like this:
{
    "extends": "./node_modules/@assemblyscript/wasi-shim/asconfig.json",
    "targets": {
        "debug": {
            "outFile": "build/debug.wasm",
            "textFile": "build/debug.wat",
            "sourceMap": true,
            "debug": true
        },
        "release": {
            "outFile": "build/release.wasm",
            "textFile": "build/release.wat",
            "sourceMap": true,
            "optimizeLevel": 3,
            "shrinkLevel": 0,
            "converge": false,
            "noAssert": false
        }
    },
    "options": {
        "bindings": "esm"
    }
}
It is just the generated original asconfig.json plus that extends.
Now things got interesting. I got a compilation error:
ERROR TS2300: Duplicate identifier 'wasi_abort'.
:
1100 │ export function wasi_abort(
│ ~~~~~~~~~~
└─ in ~lib/as-wasi/assembly/as-wasi.ts(1100,17)
:
19 │ export function wasi_abort(
│ ~~~~~~~~~~
└─ in ~lib/wasi_internal.ts(19,17)
So I investigated, and it seems that as-wasi was exporting a symbol with the same name as one exported by wasi-shim. No biggie: I went into node_modules/as-wasi/ and renamed that function to as_wasi_abort. I also did this for the references to the function, namely three instances found in the package.json from as-wasi:
{
    "asbuild:untouched": "asc assembly/index.ts -b build/untouched.wasm -t build/untouched.wat --use abort=as_wasi_abort --debug",
    "asbuild:small": "asc assembly/index.ts -b build/optimized.wasm -t build/optimized.wat --use abort=as_wasi_abort -O3z",
    "asbuild:optimized": "asc assembly/index.ts -b build/optimized.wasm -t build/optimized.wat --use abort=as_wasi_abort -O3"
}
Having done all this, the package compiled and the example from Wasm By Example finally worked.
Your code should compile now, and I will try to make a pull request to all the places necessary so that the examples are updated, the code in as-wasi is updated, and so that nobody has to go through this again. Please comment if there are further problems.
Edit: It seems that I was right about the wasi_abort function being a problem. It has actually been removed in the as-wasi repo, but the npm package is outdated. I asked in my pull request for it to be updated.

Django - weird debug output when using migrations management commands after installing matplotlib

I'm running GeoDjango in a Docker container. I have added additional libraries via pip in the Dockerfile, and am now experiencing unwanted console output whenever I invoke any of the migration commands, e.g. manage.py showmigrations/makemigrations/migrate.
The output is as follows:
user@host:/src$ ./manage.py showmigrations
CONFIGDIR=/home/django/.config/matplotlib
(private) matplotlib data path: /usr/local/lib/python3.7/site-packages/matplotlib/mpl-data
matplotlib data path: /usr/local/lib/python3.7/site-packages/matplotlib/mpl-data
loaded rc file /usr/local/lib/python3.7/site-packages/matplotlib/mpl-data/matplotlibrc
matplotlib version 3.2.1
interactive is False
platform is linux
loaded modules: ['sys', 'builtins', '_frozen_importlib', '_imp', '_thread', '_warnings', '_weakref', 'zipimport', '_frozen_importlib_external', '_io', 'marshal', 'posix', 'encodings', 'codecs', '_codecs', ...
Comprehensive module listing snipped; it continues:
Using fontManager instance from /home/django/.cache/matplotlib/fontlist-v310.json
Loaded backend qt5agg version unknown.
Loaded backend tkagg version unknown.
Loaded backend agg version unknown.
Loaded backend agg version unknown.
Found GEOS DLL: <CDLL '/usr/local/lib/python3.7/site-packages/shapely/.libs/libgeos_c-5031f9ac.so.1.13.1', handle 5608f64e4c40 at 0x7f22a5aaaf10>, using it.
Trying `CDLL(libc.so.6)`
Library path: 'libc.so.6'
DLL: <CDLL 'libc.so.6', handle 7f22c4809000 at 0x7f22aef3b650>
GDAL_DATA not found in environment, set to '/usr/local/lib/python3.7/site-packages/fiona/gdal_data'.
PROJ data files are available at built-in paths
Entering env context: <fiona.env.Env object at 0x7f22a0798450>
Starting outermost env
No GDAL environment exists
New GDAL environment <fiona._env.GDALEnv object at 0x7f22a0798490> created
Logging error handler pushed.
All drivers registered.
GDAL_DATA found in environment: '/usr/local/lib/python3.7/site-packages/fiona/gdal_data'.
PROJ data files are available at built-in paths
Started GDALEnv <fiona._env.GDALEnv object at 0x7f22a0798490>.
Updated existing <fiona._env.GDALEnv object at 0x7f22a0798490> with options {}
Entered env context: <fiona.env.Env object at 0x7f22a0798450>
Exiting env context: <fiona.env.Env object at 0x7f22a0798450>
Cleared existing <fiona._env.GDALEnv object at 0x7f22a0798490> options
Stopping GDALEnv <fiona._env.GDALEnv object at 0x7f22a0798490>.
Error handler popped.
Stopped GDALEnv <fiona._env.GDALEnv object at 0x7f22a0798490>.
Exiting outermost env
Exited env context: <fiona.env.Env object at 0x7f22a0798450>
Could not import boto3, continuing with reduced functionality.
PROJ data files are available at built-in paths
Finally, the normal migrations output is displayed:
admin
[X] 0001_initial
[X] 0002_logentry_remove_auto_add
[snipped ...]
user@host:/src$
It's on a production system with gunicorn, NOT running in DEBUG mode. On the development system, with the same libraries set up but in DEBUG mode, the output is normal. The dev and production Dockerfiles are almost identical, both based on python:3.7.4-buster.
At first glance, it looks like a "chatty" library is printing all that when it is loaded? I'm not sure whether something is broken or whether this is normal. There are no signs of problems in the gunicorn error log. This also seems to affect only the migration commands, not other manage.py commands.
Any hints appreciated!
Looking at the source for Fiona we find
log.debug("Entering env context: %r", self)
Since that's a debug level message, it wouldn't be visible by default.
This is a clue that logging has been configured to log debug-level messages.
This could happen via your Django config (which uses logging.dictConfig() under the hood), or by e.g. some module having run logging.basicConfig(level=logging.DEBUG).
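For illustration, a LOGGING dict in settings.py along these lines (a hypothetical sketch, not taken from the question) would be enough to surface DEBUG records from every third-party library during management commands:
# settings.py - hypothetical example of a logging config that produces this kind of output
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {"console": {"class": "logging.StreamHandler"}},
    # a DEBUG-level root logger forwards fiona/matplotlib debug records to the console
    "root": {"handlers": ["console"], "level": "DEBUG"},
}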
Also, given how the .pth files that some packages use to hook themselves up to your import path work (they're just Python), this could inadvertently happen even if the package itself isn't imported.
I'd suggest removing some of those packages one by one until you find which one causes extra chattiness.
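If removing packages one at a time is tedious, a quick look at the root logger from a ./manage.py shell session in the affected container can confirm the diagnosis first (a small diagnostic sketch, not part of the original answer):
# run inside `./manage.py shell` on the affected system
import logging

root = logging.getLogger()
print(logging.getLevelName(root.level))  # DEBUG here means something lowered the root level
print(root.handlers)  # an unexpected StreamHandler usually points to a stray basicConfig()/dictConfig() call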

How to provide sys.path in python script to docker file

I have added a sys.path entry
sys.path.append("C:\\Program Files\\FME\\fmeobjects\\python27")
in my Python script, which works well when I run the script. I am now trying to dockerize the script. My Dockerfile is:
FROM python:2.7-alpine
ADD test1.py /
CMD [ "python", "./test1.py" ]
It builds the image, but running the image gives this error:
Traceback (most recent call last):
  File "./test1.py", line 17, in <module>
    import fmeobjects
ImportError: No module named fmeobjects
It seems like your script cannot import fmeobjects because it is outside the container. Try adding the fmeobjects module to the directory you ADD.
What does test1.py do?
If fmeobjects is a package / module, you need to add it to the environment of the image, as mentioned above.
You can also set up a distutils/setuptools package for it and pip install it in the image.
Effectively, as currently constructed, you're trying to import a package in your script that does not exist because it has not been installed.
Even for small standalone applications, using the standard distribution tools streamlines this process significantly. This is doubly true if you have colleagues that might have different usernames, directory layouts, or even operating systems. Don't manually edit sys.path in your script.
You should write a setup.py file that uses the setuptools library. Complete documentation is in the setuptools docs, but a minimal example might look like:
#!/usr/bin/env python
from setuptools import setup, find_packages

setup(
    name="fmeobjects",
    version="0.1",
    packages=find_packages(),
    entry_points={
        'console_scripts': [
            'fmeobjects = fmeobjects.main:main'
        ]
    }
)
For development use, create a virtual environment and install your package in it.
virtualenv vpy
. vpy/bin/activate
pip install -e .
The . activate line sets some additional environment variables for you, including adding the virtual environment to your $PATH. (source is an equivalent vendor extension that works in some shells; . is part of the standard and works even in minimal shells like what you get in Alpine or Busybox installations.) You can now run fmeobjects at the shell prompt, which will call the main() function in fmeobjects/main.py (see the entry_points declaration).
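For reference, the console_scripts entry point above only requires that a main() callable exists in fmeobjects/main.py; a minimal, hypothetical layout (the module body is illustrative, not from the original answer) could be:
# fmeobjects/__init__.py can stay empty; find_packages() needs it to detect the package
# fmeobjects/main.py
def main():
    # real logic would go here; the console script just calls this function
    print("fmeobjects command-line entry point")

if __name__ == "__main__":
    main()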
You have a couple of options of how to install this in Docker. Probably the most straightforward is to simply import your source tree and install it. Since Docker containers provide isolated filesystems and generally do only one thing, there's not much point in supporting an isolated Python installation within that; just install your package into the global Python.
FROM python:2.7
WORKDIR /usr/src/app
COPY . .
RUN pip install .
CMD ["fmeobjects"]
(If your virtual environment is in your source tree, you can add vpy to a .dockerignore file to cause it to not be copied, saving time and space.)

Nix Gradle dist - Failed to load native library 'libnative-platform.so' for Linux amd64

I am trying to build a Freeplane derivation based on Freemind, see: https://github.com/razvan-panda/nixpkgs/blob/freeplane/pkgs/applications/misc/freeplane/default.nix
{ stdenv, fetchurl, jdk, jre, gradle }:

stdenv.mkDerivation rec {
  name = "freeplane-${version}";
  version = "1.6.13";

  src = fetchurl {
    url = "mirror://sourceforge/project/freeplane/freeplane%20stable/freeplane_src-${version}.tar.gz";
    sha256 = "0aabn6lqh2fdgdnfjg3j1rjq0bn4d1947l6ar2fycpj3jy9g3ccp";
  };

  buildInputs = [ jdk gradle ];

  buildPhase = "gradle dist";

  installPhase = ''
    mkdir -p $out/{bin,nix-support}
    cp -r ../bin/dist $out/nix-support
    sed -i 's/which/type -p/' $out/nix-support/dist/freeplane.sh
    cat >$out/bin/freeplane <<EOF
    #! /bin/sh
    JAVA_HOME=${jre} $out/nix-support/dist/freeplane.sh
    EOF
    chmod +x $out/{bin/freeplane,nix-support/dist/freeplane.sh}
  '';

  meta = with stdenv.lib; {
    description = "Mind-mapping software";
    homepage = https://www.freeplane.org/wiki/index.php/Home;
    license = licenses.gpl2Plus;
    platforms = platforms.linux;
  };
}
During the gradle build step it is throwing the following error:
building path(s) ‘/nix/store/9dc1x2aya5p8xj4lq9jl0xjnf08n7g6l-freeplane-1.6.13’
unpacking sources
unpacking source archive /nix/store/c0j5hgpfs0agh3xdnpx4qjy82aqkiidv-freeplane_src-1.6.13.tar.gz
source root is freeplane-1.6.13
setting SOURCE_DATE_EPOCH to timestamp 1517769626 of file freeplane-1.6.13/gitinfo.txt
patching sources
configuring
no configure script, doing nothing
building
FAILURE: Build failed with an exception.
What went wrong: Failed to load native library 'libnative-platform.so' for Linux amd64.
Try: Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
builder for ‘/nix/store/id4vfk3r6fd4zpyb15dq9xfghf342qaa-freeplane-1.6.13.drv’ failed with exit code 1
error: build of ‘/nix/store/id4vfk3r6fd4zpyb15dq9xfghf342qaa-freeplane-1.6.13.drv’ failed
Running gradle dist from the terminal works fine. I'm guessing that maybe one of the globally installed Nix packages provides a fix for the issue, and it isn't visible during the build.
I searched a lot but couldn't find any working solution. For example, removing the ~/.gradle folders didn't help.
Update
To reproduce the issue just git clone https://github.com/razvan-panda/nixpkgs, checkout the freeplane branch and run nix-build -A freeplane in the root of the repository.
Link to GitHub issue
Maybe you just don't have permission for the folder/file:
sudo chmod 777 yourFolderPath
You can also run sudo chmod 777 yourFolderPath/* (everything inside the folder).
The folder will then no longer be locked, and you can use it normally.
(At least that worked for me...)
Example:
sudo chmod 777 Ruby/
Now that's OK.
To fix the error "What went wrong: Failed to load native library 'libnative-platform.so' for Linux amd64.", do the following:
Check whether your Gradle cache folder (~user/.gradle/native) exists at all.
Check whether the file in question, i.e. libnative-platform.so, exists in that ~user/.gradle/native directory.
Check that the folders ~user/.gradle and ~/.gradle/native and the file ~/.gradle/native/libnative-platform.so have valid permissions (they should not be read-only; running chmod -R 755 ~/.gradle is enough).
If you don't see the native folder at all, or if your native folder seems corrupted, run your Gradle task (e.g. gradle clean build) with the -g or --gradle-user-home option and pass it a new value.
For example, if you run mkdir /tmp/newG_H_Folder; gradle clean build -g /tmp/newG_H_Folder, you'll see that Gradle populates all the folders/files it needs (even before running any task or option) in this new Gradle home folder (i.e. the /tmp/newG_H_Folder/.gradle directory).
From this folder you can copy just the native folder into your user's ~/.gradle folder (back up the existing native folder in ~/.gradle first if you want to), or copy the whole .gradle folder to your ~ (home directory).
Then rerun your Gradle task and it won't error out anymore.
The Gradle docs say:
https://docs.gradle.org/current/userguide/command_line_interface.html
-g, --gradle-user-home
Specifies the Gradle user home directory. The default is the .gradle directory in the user’s home directory.

Sublime Text 2 build with Grunt v0.4.0

I'm trying to set up a Sublime build process to run Grunt (v0.4).
This is my build snippet:
{
    "cmd": ["grunt", "--no-color"],
    "selector": ["Gruntfile.js"],
    "path": "/usr/local/bin",
    "working_dir": "${project_path}",
    "osx": {
        "cmd": ["grunt", "--no-color"]
    }
}
When I hit Command-B I get the following error:
grunt-cli: The grunt command line interface. (v0.1.6)
Fatal error: Unable to find local grunt.
If you're seeing this message, either a Gruntfile wasn't found or grunt
hasn't been installed locally to your project. For more information about
installing and configuring grunt, please see the Getting Started guide:
http://gruntjs.com/getting-started
[Finished in 0.2s with exit code 99]
When I run grunt from the terminal everything is working.
Any ideas?
It's a "bug" of Sublime Text. When you hit Ctrl+B, it will call the build command with the first open folder as the working directory. So if you haven't opened the folder, it cannot find the build file (Makefile, or in your case Gruntfile).
So in order to build successfully, you need to put your Gruntfile in the folder as the working directory, and then open the folder in Sublime Text and hit Ctrl+B.
Before running GruntJS, you need to install it with:
npm install grunt --save-dev
In the new version of GruntJS, 'grunt' is not installed globally, so in every project you can use different versions of GruntJS.
Enter which grunt from the command line, then put the full path in the "cmd" section - for example, "cmd": ["/opt/local/bin/grunt", "--no-color"]. The $PATH for ST2 can be different from the one in your CLI environment.