py2app - preserve directory structure - python-2.7

I would like to make a Mac OS X app for distributing my program.
The problem is that my current script puts "images/Diagram.png" and the other files under the "images" folder into "Resources", but not into "Resources/images" as expected.
What can I change in this part of setup.py so that the PNG files end up under "images"?
import sys
from setuptools import setup

mainscript, ico, icns = "aproxim.py", "sum.ico", "sum.icns"
files = ["images/Diagram.png", "images/document.png", "images/Exit.png", "images/floppydisc.png",
         "images/folder.png", "images/Info.png", "images/settings.png"]
scripts = ["app_settings_dialog", "app_window", "approximator", "approximator2", "data_format",
           "easy_excel", "functions", "main_window", "settings_dialog",
           "util_data", "util_parameters"]
description = "Approximator is a program for experimental data approximation and interpolation (20 dependencies to select from)"
common_options = dict(name="Approximator", version="1.7", description=description)
if sys.platform == "darwin":
    setup(
        setup_requires=["py2app"],
        app=[mainscript],
        options=dict(py2app=dict(includes=scripts, resources=files, iconfile=icns)),
        **common_options)

It turned out to be easy, and I hope it is useful for other Python developers: py2app's resources option accepts (destination, file list) tuples:
resources = [("images", files), ("", [ico])]
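For context, here is a self-contained sketch of the tuple form; the names mirror the question above and are placeholders, and nothing in this fragment exercises py2app itself:

```python
# Minimal sketch of py2app's (destination, file list) resource form.
# Paths and names are placeholders taken from the question above.
files = ["images/Diagram.png", "images/folder.png"]
ico = "sum.ico"

# Each entry is (destination folder inside Resources, list of files);
# "" means the top level of Resources.
resources = [("images", files), ("", [ico])]

py2app_options = dict(resources=resources, iconfile="sum.icns")
```

With this, the PNGs land in Resources/images while the .ico stays at the top of Resources.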


Browsing through multiple audio files for playback on Asterisk dialplan

I'm trying to make a voicemail system on Asterisk 16, FreePBX 16 and CentOS 7 that lets people browse and choose from a list of prerecorded audio files. When the caller enters the menu to select the audio files, they're told how to browse through the different files. In the /var/lib/asterisk/sounds/ directory, the files are currently named 1 to 3 (1.wav, 2.wav, etc.); the caller presses 1 to go to the previous file, 2 to go to the next file, and 3 to select the file they're currently listening to. The extensions logic looks like this:
[prerecorded]
exten = s,1,NoOp(Pre-recorded messages)
 same = n,Set(LOOP=0)
 same = n,Set(FILE=1)
 same = n(timeout),Wait(1)
 same = n,Playback(browsing_tutorial)
 same = n(loop),NoOp(Loop)
 same = n,Background(${FILE})
 same = n,WaitExten(5)
exten = 1,1,NoOp(Previous file)
 same = n,Set(FILE=$[ ${FILE} - 1 ])
 same = n,GoToIf($[ ${FILE} = 0 ]?:s,loop)
 same = n,Playback(first_file)
 same = n,Set(FILE=1)
 same = n,GoTo(s,loop)
exten = 2,1,NoOp(Next file)
 same = n,Set(FILE=$[ ${FILE} + 1 ])
 same = n,GoToIf($[ ${FILE} = 4 ]?:s,loop)
 same = n,Playback(last_file)
 same = n,Set(FILE=3)
 same = n,GoTo(s,loop)
exten = #,1,NoOp(Repeat)
 same = n,GoTo(s,1)
exten = t,1,NoOp(No input)
 same = n,Set(LOOP=$[ ${LOOP} + 1 ])
 same = n,GoToIf($[ ${LOOP} > 2 ]?:s,timeout)
 same = n,HangUp()
Doing it this way lets me browse through the files, but it requires editing the extensions every time I add or remove any prerecorded files (which will happen often). If there's any way I can do this without needing to edit the extensions, that would be great.
You can write your own application using Asterisk's native C/C++ interface, if you are skilled enough; see app_voicemail.c.
You can also use the AGI or ARI interface and control the dialplan from a scripting language.
Another option is voicemail storage in a database and a DB-driven application using func_odbc.
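To make the file list dynamic, the AGI route could look roughly like the sketch below: a script enumerates the numbered .wav files at call time, so adding or removing recordings needs no dialplan edits. The directory, variable name, and AGI wiring here are assumptions, not a tested voicemail menu.

```python
#!/usr/bin/env python3
"""Hypothetical AGI helper sketch: discover numbered prompts at call time."""
import os
import re
import sys

def numbered_files(directory):
    """Return the sorted numeric basenames of files named like '7.wav'."""
    nums = []
    for name in os.listdir(directory):
        m = re.fullmatch(r"(\d+)\.wav", name)
        if m:
            nums.append(int(m.group(1)))
    return sorted(nums)

def agi_main(directory="/var/lib/asterisk/sounds"):
    """Minimal AGI exchange: consume the env block, export the file count."""
    while sys.stdin.readline().strip():  # AGI env lines end with a blank line
        pass
    count = len(numbered_files(directory))
    sys.stdout.write(f"SET VARIABLE FILECOUNT {count}\n")
    sys.stdout.flush()
    sys.stdin.readline()  # discard Asterisk's "200 result=..." reply

# In the dialplan the bounds checks could then use the exported variable,
# e.g. (hypothetical):
#   same = n,AGI(list_prompts.py)
#   same = n,GoToIf($[ ${FILE} > ${FILECOUNT} ]?:s,loop)
```

The dialplan keeps the same browse/select keys; only the upper bound comes from the script instead of being hard-coded.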

Dynamic task generators

I am evaluating waf to build an existing project that has tasks similar to this:
1. preprocessing phase: datafile => TransformationTask => library name list
2. for each library name:
2.1 import files from repository
2.2 build library
The library list depends on the preprocessing task and is naturally not known in advance.
How can this be achieved with waf?
You have to generate a file containing the library list with a first task. Another task then takes the output of the first as an input and processes it to generate what you need.
It is essentially the example given in §11.4.2 of the waf book; you have to replace the compiler-output parsing with the parsing of your library description file. Copy the example and change the run method in mytool.py like this:
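Stripped of waf specifics, this produce-then-consume pattern can be sketched in plain Python (file names and the "build" step are placeholders for illustration):

```python
# Illustration only: the two-phase dependency pattern outside waf.
# A first "task" derives the library list and persists it; a second
# "task" consumes the list and performs one build step per library.
import json

def preprocess(datafile_text, list_path):
    """Task 1: derive the library list from the datafile and persist it."""
    libs = [line.strip() for line in datafile_text.splitlines() if line.strip()]
    with open(list_path, "w") as f:
        json.dump(libs, f)

def build_libraries(list_path):
    """Task 2: read the generated list and act on each entry."""
    with open(list_path) as f:
        libs = json.load(f)
    return [f"built {name}" for name in libs]
```

In waf, the same dependency is expressed by making the first task's output node the second task generator's input.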
from waflib import Task

class src2c(Task.Task):
    color = 'PINK'
    quiet = True
    before = ['cstlib']

    def run(self):
        libnode = self.inputs[0]
        libinfo = libnode.read_json()
        name = libinfo['name']
        files = [f"repo/{file}" for file in libinfo['files']]
        taskgen = self.generator
        # library name
        taskgen.link_task.outputs = []
        taskgen.link_task.add_target(name)
        # library source files
        nodes = [taskgen.path.make_node(f) for f in files]
        # update discovered dependencies
        taskgen.bld.raw_deps[self.uid()] = [self.signature()] + nodes
        # g_lock: a module-level threading.Lock guarding task creation
        with g_lock:
            self.add_c_tasks(nodes)

    # cf. waf book §11.4.2
    def add_c_tasks(self, lst):
        ...

    # cf. waf book §11.4.2
    def runnable_status(self):
        ...
In the wscript, I simulate the datafile transformation with a copy:
def options(opt):
    opt.load("compiler_c")

def configure(cnf):
    cnf.load("compiler_c")
    cnf.load("mytool", tooldir=".")

def build(bld):
    bld(source="libs.json", target="libs.src", features="subst")
    bld(source="libs.src", features=["c", "cstlib"])
With a simple libs.json:
{
    "name": "mylib2",
    "files": ["f1.c", "f2.c"]
}
And files repo/f1.c and repo/f2.c like void f1(){} and void f2(){}.

How to download a file in Bazel from a BUILD file?

Is there a way to download a file in Bazel directly from a BUILD file? I know I can probably use wget and enable networking, but I'm looking for a solution that would work with bazel fetch.
I have a bunch of files to download that are going to be consumed by just a single package. It feels wrong to use the standard approach of adding an http_file() rule in the WORKSPACE at the monorepo root: it would be decoupled from the package and it would pollute a totally unrelated file.
Create a download.bzl and load it in your WORKSPACE file.
WORKSPACE:
load("//my_project/my_sub_project:download.bzl", "download_files")
download_files()
download.bzl:
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

BUILD_FILE_CONTENT_some_3d_model = """
filegroup(
    name = "some_3d_model",
    srcs = [
        "BMW_315_DA2.obj",
    ],
    visibility = ["//visibility:public"],
)
"""

def download_files():
    http_archive(
        name = "some_3d_model",
        build_file_content = BUILD_FILE_CONTENT_some_3d_model,
        #sha256 = "...",
        urls = ["https://vertexwahn.de/lfs/v1/some_3d_model.zip"],
    )
And in the BUILD file of the package that consumes the archive (copy_file comes from bazel_skylib):
load("@bazel_skylib//rules:copy_file.bzl", "copy_file")

copy_file(
    name = "copy_resources_some_3d_model",
    src = "@some_3d_model",
    out = "my/destination/path/some_file.obj",
)

AWS Python script (Rekognition) working when called from command line, but not automatically

I am working on a script which uploads pictures to S3 and then adds each picture to a Rekognition collection. When I run this script from the command line everything works perfectly. However, when the system executes it automatically (the script runs whenever a new file is added to the specified upload folder), the Rekognition portion of the code does not run. Everything up through os.remove works fine automatically, but I can't get the images added to the collection. After days of messing with the code I am looking for some help; please let me know if I am missing something here.
After some debugging, it is client = boto3.client('rekognition') which for some reason does not allow the script to run. Any thoughts on why that would be?
import boto3
import os

# Get the file names of newly uploaded pictures
path = "/var/www/html/upload/webcam-capture/"
files = os.listdir(path)
for name in files:
    # Split up the file name for proper processing
    components = name.split('-')
    schoolName = components[0]
    imageType = components[1]
    idNumber = components[2]
    # Upload the file to the bucket with the Python SDK
    s3 = boto3.resource('s3')
    s3.meta.client.upload_file(path + name, schoolName + '-' + imageType + '-' + 'media.XXX.school', idNumber)
    # Delete the file from the webcam-capture temp folder
    os.remove(path + name)
    # Add the face to the facial recognition collection
    collection_id = schoolName + '-' + imageType
    bucket = collection_id + '-media.XXX.school'
    client = boto3.client('rekognition')
    response = client.index_faces(
        CollectionId=collection_id,
        Image={'S3Object': {'Bucket': bucket, 'Name': idNumber}},
        MaxFaces=1,
        QualityFilter="AUTO",
        DetectionAttributes=['ALL'])

Crossplatform building Boost with SCons

I tried hard but couldn't find an example of using SCons (or any build system, for that matter) to build with both gcc and MSVC against the Boost libraries.
Currently my SConstruct looks like:
env = Environment()
env.Object(Glob('*.cpp'))
env.Program(target='test', source=Glob('*.o'), LIBS=['boost_filesystem-mt', 'boost_system-mt', 'boost_program_options-mt'])
This works on Linux but not with Visual C++, which, starting with 2010, no longer lets you specify global include directories.
You'll need something like:
import os
env = Environment()

is_windows = env['PLATFORM'] == 'win32'  # detect the target platform
if is_windows:
    boost_prefix = "path_to_boost"
else:
    boost_prefix = "/usr"  # or wherever you installed boost

sources = env.Glob("*.cpp")
env.Append(CPPPATH=[os.path.join(boost_prefix, "include")])
env.Append(LIBPATH=[os.path.join(boost_prefix, "lib")])
app = env.Program(target="test", source=sources, LIBS=[...])
env.Default(app)
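The prefix selection can also be factored into a small helper, which keeps the SConstruct itself platform-agnostic; a sketch (the default prefixes are placeholders for your actual Boost install):

```python
import os
import sys

def boost_paths(win_prefix="C:/Boost", unix_prefix="/usr"):
    """Pick Boost include/lib dirs for the current platform.

    The default prefixes are placeholders; pass your real install paths.
    """
    prefix = win_prefix if sys.platform == "win32" else unix_prefix
    return os.path.join(prefix, "include"), os.path.join(prefix, "lib")
```

In the SConstruct, the pair would then feed env.Append(CPPPATH=[inc], LIBPATH=[lib]).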