I have several microcontroller projects for home automation. Each of my nodes has a version number that is set manually in the code. This version number is reported during the node's startup so I know which code is running.
Sometimes I forget to change the version number after modifying the code, so I need an automatic solution.
I have an idea for a solution:
create a file (version.h): #define BUILDNO xxx
include it in the relevant C files
auto-increment xxx before every build
Can this be implemented? Or is there another solution with a similar result?
I have done some research based on the answers to my question. PlatformIO can run custom scripts before compiling. Here is the process to generate a build number and include it in your project code:
Create a Python script in the project folder: buildscript_versioning.py
FILENAME_BUILDNO = 'versioning'
FILENAME_VERSION_H = 'include/version.h'
version = 'v0.1.'

import datetime

build_no = 0
try:
    with open(FILENAME_BUILDNO) as f:
        build_no = int(f.readline()) + 1
except:
    print('Starting build number from 1..')
    build_no = 1
with open(FILENAME_BUILDNO, 'w+') as f:
    f.write(str(build_no))
    print('Build number: {}'.format(build_no))

hf = """
#ifndef BUILD_NUMBER
#define BUILD_NUMBER "{}"
#endif
#ifndef VERSION
#define VERSION "{} - {}"
#endif
#ifndef VERSION_SHORT
#define VERSION_SHORT "{}"
#endif
""".format(build_no, version+str(build_no), datetime.datetime.now(), version+str(build_no))
with open(FILENAME_VERSION_H, 'w+') as f:
    f.write(hf)
Add these lines to platformio.ini:
extra_scripts =
    pre:buildscript_versioning.py
Building your project will run the script. Two files will be created:
versioning: a simple text file to store the last build number
include/version.h: header file to be included
Now you can add this line to your C code:
#include <version.h>
I started a gitlab repository with some documentation here: https://gitlab.com/pvojnisek/buildnumber-for-platformio/tree/master
Further ideas are welcome!
I solved this problem with git describe and PlatformIO's advanced scripting.
First off, the project I used this on relies heavily on git tags for version control. In my opinion, manually tracking version numbers in multiple places is a pain; it should all be based on the git tags. This also makes CI easy, since I never forget to update the version in some file somewhere; everything just refers to the git tags (regex-protecting tags on GitHub/GitLab helps too).
Using git describe, we can inject a commit description into the PIO build.
Here is an example:
platformio.ini
[env:my_env]
platform = teensy
board = teensy40
framework = arduino
extra_scripts =
    pre:auto_firmware_version.py
auto_firmware_version.py
import subprocess

Import("env")

def get_firmware_specifier_build_flag():
    ret = subprocess.run(["git", "describe"], stdout=subprocess.PIPE, text=True)  # Uses only annotated tags
    # ret = subprocess.run(["git", "describe", "--tags"], stdout=subprocess.PIPE, text=True)  # Uses any tags
    build_version = ret.stdout.strip()
    build_flag = "-D AUTO_VERSION=\\\"" + build_version + "\\\""
    print("Firmware Revision: " + build_version)
    return (build_flag)

env.Append(
    BUILD_FLAGS=[get_firmware_specifier_build_flag()]
)
main.cpp
#include <Arduino.h>

void setup() {
    Serial.begin(115200);
    Serial.print("Firmware Version: ");
    Serial.println(AUTO_VERSION); // Use the preprocessor directive
    // OR //
    char firmware_char_array[] = AUTO_VERSION;
    Serial.write(firmware_char_array, sizeof(firmware_char_array) - 1); // as a char array, excluding the trailing '\0'
}

void loop() {
    // Loop
}
With this configuration, you get the firmware version as a string literal. You can use it however you want, since it is handled by the preprocessor, not the compiler.
This, for example, will print the tag that the commit is aligned with:
v1.2.3
or, if there isn't a tag at the commit, the relation to the latest tag:
v1.2.3-13-gabc1234
└────┤ └┤ └─────┴─ Short commit Hash (not the g)
     │  └─ Distance from tag
     └─ Latest Tag in Git
You can customize this string however you like in the Python script. For example:
build_version = "My_Project_Firmware-" + ret.stdout.strip() + "-" + env['PIOENV'].upper()
would produce:
My_Project_Firmware-v1.2.3-13-gabc1234-MY_ENV
I use env['PIOENV'] to distinguish between different build environments, which is useful if you have regular builds and debug builds.
This is mostly a copy of a PlatformIO forum post:
https://community.platformio.org/t/how-to-build-got-revision-into-binary-for-version-output/15380/5?u=awbmilne
You'll have to rely on pre-build programs when using C or C++ (Arduino). You have to add a pre-build program which updates a file with a simple:
#define VERSION "1.0.0"
Your automatic increment program needs to store the current version somewhere (preferably inside version.h itself, so it won't get out of sync) and read, increment, and store it on each compilation.
You can use a solution like this one from vurdalakov or this one on cplusadd.blogspot.com, which uses Makefiles.
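As a rough illustration (an untested sketch of my own; the include/version.h path and the bump-only-the-last-component policy are assumptions, not part of the linked solutions), such a pre-build script could look like this:
# bump_version.py -- hypothetical pre-build step: read, increment, and rewrite version.h
import re

VERSION_H = 'include/version.h'  # assumed location

with open(VERSION_H) as f:
    match = re.search(r'#define VERSION "(\d+)\.(\d+)\.(\d+)"', f.read())

major, minor, patch = (int(g) for g in match.groups())
patch += 1  # bump the last component on every compile

with open(VERSION_H, 'w') as f:
    f.write('#define VERSION "%d.%d.%d"\n' % (major, minor, patch))
print('Version bumped to %d.%d.%d' % (major, minor, patch))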
For my use case I did not necessarily need a number that always increases by exactly one; any kind of ascending number is fine, so using a Unix timestamp worked well for me.
In your platformio.ini:
[common]
firmware_version = '"0.1.0+${UNIX_TIME}"'
[env:release]
build_flags =
    -D FIRMWARE_VERSION=${common.firmware_version}
This gives you a macro definition in the following format:
#define FIRMWARE_VERSION "0.1.0+1615469592"
I like your solution to the problem, but wouldn't a version number based on the source code be more useful? PlatformIO has a section on dynamic build variables with an example that pulls a hash from the git source revision:
https://docs.platformio.org/en/latest/projectconf/section_env_build.html#id4 (scroll down to the Dynamic Build Variables section)
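As a rough sketch of that approach (the helper name git_rev_macro.py and the exact flag quoting are my assumptions, not the docs' exact example), platformio.ini would contain build_flags = !python git_rev_macro.py, and the helper would print the macro definition:
# git_rev_macro.py -- hypothetical helper; whatever it prints is appended to build_flags
import subprocess

revision = (
    subprocess.check_output(["git", "rev-parse", "--short", "HEAD"])
    .decode("utf-8")
    .strip()
)
# Quoting of the -D flag may need tweaking for your shell/toolchain.
print("-DGIT_REV='\"%s\"'" % revision)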
autoinc-semver is also a good solution for this.
You just have to include the version.h file in your code, which looks as follows:
#define _VERSION_MAJOR 0
#define _VERSION_MINOR 0
#define _VERSION_PATCH 1
#define _VERSION_BUILD 41
#define _VERSION_DATE 04-07-2020
#define _VERSION_TIME 14:40:18
#define _VERSION_ONLY 0.0.1
#define _VERSION_NOBUILD 0.0.1 (04-07-2020)
#define _VERSION 0.0.1+41 (04-07-2020)
Then you just have to add the semver-build.bat ./version.h or semver-build.sh ./version.h command as a pre-build step in your compiler environment.
I wasn't happy with any of the non-git answers, so here's a solution that is easier to customise. It increments the build number by default.
Make the file src/version.h:
#define VERSION "0.0.0+0"
Put this in a source file
#include "version.h"
Make a script autoincrement.py in the project root
import sys, re, datetime

PATH_VERSION = './src/version.h'
MAJOR, MINOR, PATCH, BUILD = 0, 1, 2, 3

# Read
with open(PATH_VERSION, 'r') as reader:
    # Find "MAJOR.MINOR.PATCH+BUILD" from the first line
    line = re.search(r'"([^"]*)"', reader.readline()).group()[1:-1]
    # Extract old values for MAJOR.MINOR.PATCH+BUILD
    versions = re.split(r'\.|\+', line)
    # Increment value
    versions[BUILD] = int(versions[BUILD]) + 1

# Write
with open(PATH_VERSION, 'w') as writer:
    time = datetime.datetime.now()
    datestamp = time.strftime('%Y-%m-%d')
    timestamp = time.strftime('%H:%M')
    version = '%s.%s.%s+%d' % (versions[MAJOR], versions[MINOR], versions[PATCH], versions[BUILD])
    versionFull = version + ' %s %s' % (datestamp, timestamp)
    writer.writelines([
        '#define VERSION "%s"' % version,
        '\n#define VERSION_MAJOR %s' % versions[MAJOR],
        '\n#define VERSION_MINOR %s' % versions[MINOR],
        '\n#define VERSION_PATCH %s' % versions[PATCH],
        '\n#define VERSION_BUILD %s' % versions[BUILD],
        '\n#define VERSION_DATE "%s"' % datestamp,
        '\n#define VERSION_TIME "%s"' % timestamp,
        '\n#define VERSION_FULL "%s"' % versionFull
    ])
    print('Release: ' + version)
In platformio.ini, add this to your device's environment:
extra_scripts = post:autoincrement.py
It'll then generate something like the below in version.h:
#define VERSION "0.0.1+68"
#define VERSION_MAJOR 0
#define VERSION_MINOR 0
#define VERSION_PATCH 1
#define VERSION_BUILD 68
#define VERSION_DATE "2022-11-27"
#define VERSION_TIME "23:35"
#define VERSION_FULL "0.0.1+68 2022-11-27 23:35"
Right now, I have a really dumb pretty-print script which does a little git-fu to find files to format (unconditionally) and then runs those through clang-format -i. This approach has several shortcomings:
There are certain files which are enormous and take forever to pretty print.
The pretty printing is always done, regardless of whether the underlying file actually changed.
In the past, I was able to do things with CMake that had several nice properties which I would like to reproduce in bazel:
Only ever build code after it has gone through linting / pretty printing / etc.
Only lint / pretty print / etc. stuff that has changed
Pretty print stuff regardless of whether it is under version control
In CMake-land, I used this strategy, inspired by SCons proxy-target trickery:
Introduce a dummy target (e.g. source -> source.formatted). The action associated with this target does two things: a) run clang-format -i source, b) output/touch a file called source.formatted (this guarantees that for reasonable file systems, if source.formatted is newer than source, source doesn't need to be reformatted)
Add a dummy target (target_name.aggregated_formatted) which aggregates all the .formatted files corresponding to a particular library / executable target's sources
Make library / executable targets depend on target_name.aggregated_formatted as a pre-build step
Any help would be greatly appreciated.
@abergmeier is right. Let's take it one step further by implementing the macro and its components.
We'll use the C++ stage 1 tutorial in bazelbuild/examples.
Let's first mess up hello-world.cc:
#include <ctime>
#include <string>
#include <iostream>
std::string get_greet(const std::string& who) {
return "Hello " + who;
}
void print_localtime() {
std::time_t result =
std::time(nullptr);
std::cout << std::asctime(std::localtime(&result));
}
int main(int argc, char** argv) {
std::string who = "world";
if (argc > 1) {who = argv[1];}
std::cout << get_greet(who) << std::endl;
print_localtime();
return 0;
}
This is the BUILD file:
cc_binary(
    name = "hello-world",
    srcs = ["hello-world.cc"],
)
Since cc_binary doesn't know anything about clang-format or linting in general, let's create a macro called clang_formatted_cc_binary and replace cc_binary with it. The BUILD file now looks like this:
load(":clang_format.bzl", "clang_formatted_cc_binary")
clang_formatted_cc_binary(
    name = "hello-world",
    srcs = ["hello-world.cc"],
)
Next, create a file called clang_format.bzl with a macro named clang_formatted_cc_binary that's just a wrapper around native.cc_binary:
# In clang_format.bzl
def clang_formatted_cc_binary(**kwargs):
    native.cc_binary(**kwargs)
At this point, you can build the cc_binary target, but it's not running clang-format yet. We'll need to add an intermediary rule to do that in clang_formatted_cc_binary which we'll call clang_format_srcs:
def clang_formatted_cc_binary(name, srcs, **kwargs):
    # Using a filegroup for code cleanliness
    native.filegroup(
        name = name + "_unformatted_srcs",
        srcs = srcs,
    )
    clang_format_srcs(
        name = name + "_formatted_srcs",
        srcs = [name + "_unformatted_srcs"],
    )
    native.cc_binary(
        name = name,
        srcs = [name + "_formatted_srcs"],
        **kwargs
    )
Note that we have replaced the native.cc_binary's sources with the formatted files, but kept the name to allow for in-place replacements of cc_binary -> clang_formatted_cc_binary in BUILD files.
Finally, we'll write the implementation of the clang_format_srcs rule, in the same clang_format.bzl file:
def _clang_format_srcs_impl(ctx):
    formatted_files = []
    for unformatted_file in ctx.files.srcs:
        formatted_file = ctx.actions.declare_file("formatted_" + unformatted_file.basename)
        formatted_files += [formatted_file]
        ctx.actions.run_shell(
            inputs = [unformatted_file],
            outputs = [formatted_file],
            progress_message = "Running clang-format on %s" % unformatted_file.short_path,
            command = "clang-format %s > %s" % (unformatted_file.path, formatted_file.path),
        )
    return struct(files = depset(formatted_files))

clang_format_srcs = rule(
    attrs = {
        "srcs": attr.label_list(allow_files = True),
    },
    implementation = _clang_format_srcs_impl,
)
This rule goes through every file in the target's srcs attribute, declaring a "dummy" output file with the formatted_ prefix, and running clang-format on the unformatted file to produce the dummy output.
Now if you run bazel build :hello-world, Bazel will run the actions in clang_format_srcs before running the cc_binary compilation actions on the formatted files. We can prove this by running bazel build with the --subcommands flag:
$ bazel build //main:hello-world --subcommands
..
SUBCOMMAND: # //main:hello-world_formatted_srcs [action 'Running clang-format on main/hello-world.cc']
..
SUBCOMMAND: # //main:hello-world [action 'Compiling main/formatted_hello-world.cc']
..
SUBCOMMAND: # //main:hello-world [action 'Linking main/hello-world']
..
Looking at the contents of formatted_hello-world.cc, it looks like clang-format did its job:
#include <ctime>
#include <string>
#include <iostream>
std::string get_greet(const std::string& who) { return "Hello " + who; }
void print_localtime() {
  std::time_t result = std::time(nullptr);
  std::cout << std::asctime(std::localtime(&result));
}
int main(int argc, char** argv) {
  std::string who = "world";
  if (argc > 1) {
    who = argv[1];
  }
  std::cout << get_greet(who) << std::endl;
  print_localtime();
  return 0;
}
If all you want are the formatted sources without compiling them, you can build the target with the _formatted_srcs suffix from clang_format_srcs directly:
$ bazel build //main:hello-world_formatted_srcs
INFO: Analysed target //main:hello-world_formatted_srcs (0 packages loaded).
INFO: Found 1 target...
Target //main:hello-world_formatted_srcs up-to-date:
bazel-bin/main/formatted_hello-world.cc
INFO: Elapsed time: 0.247s, Critical Path: 0.00s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
You might be able to use aspects for that. I'm not certain; a Bazel dev will probably point it out if it is indeed possible.
If you are familiar with Rules and Actions and the like, the quick and dirty way (which is similar to the CMake hackery) is to write a macro. For cc_library, for example, you would do:
def clean_cc_library(name, srcs, **kwargs):
    lint_sources(
        name = "%s_linted" % name,
        srcs = srcs,
    )
    pretty_print_sources(
        name = "%s_pretty" % name,
        srcs = ["%s_linted" % name],
    )
    return native.cc_library(
        name = name,
        srcs = ["%s_pretty" % name],
        **kwargs
    )
Then you of course need to replace every cc_library with clean_cc_library. lint_sources and pretty_print_sources are rules that you have to implement yourself; they need to produce the list of cleaned-up files.
@abergmeier mentions that you might be able to use aspects. You can, and I've made a prototype of a general linting system that leverages aspects so that BUILD files do not need to be modified to use macros like clang_formatted_cc_library in place of the core rules.
The basic idea is to have a bazel build step that is a pure function f(linter, sources) -> linted_sources_diff and a subsequent bazel run step that takes those diffs and applies them back to your source code to fix lint errors.
The prototype implementation is available at https://github.com/thundergolfer/bazel-linting-system.
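To illustrate the second half of that idea (purely a sketch of my own, not the prototype's actual code), the bazel run step could be as simple as feeding each diff produced by the build step to the standard patch tool:
# apply_lint_diffs.py -- rough sketch; assumes the build step emits unified diffs
# with workspace-relative paths.
import subprocess
import sys

def apply_diff(diff_path):
    with open(diff_path, 'rb') as f:
        # -p0 keeps the paths exactly as they appear in the diff.
        subprocess.check_call(['patch', '-p0'], stdin=f)

if __name__ == '__main__':
    for path in sys.argv[1:]:
        print('Applying %s' % path)
        apply_diff(path)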
Years ago, when compiling with GCC, the following defines in a #included .h file could be pre-processed for use in Info.plist:
#define MAJORVERSION 2
#define MINORVERSION 6
#define MAINTVERSION 4
<key>CFBundleShortVersionString</key> <string>MAJORVERSION.MINORVERSION.MAINTVERSION</string>
...which would turn into "2.6.4". That worked because GCC supported the "-traditional" flag. (see Tech Note TN2175 Info.plist files in Xcode Using the C Preprocessor, under "Eliminating whitespace between tokens in the macro expansion process")
However, fast-forward to 2016, and Clang 7.0.2 (Xcode 7.2.1) apparently does not support either "-traditional" or "-traditional-cpp" (or does not support them properly), yielding this string:
"2 . 6 . 4"
(see Bug 12035 - Preprocessor inserts spaces in macro expansions, comment 4)
Because there are so many different variations (CFBundleShortVersionString, CFBundleVersion, CFBundleGetInfoString), it would be nice to work around this Clang problem, define these once, and concatenate/stringify the pieces together. What is the commonly accepted pattern for doing this now? (I'm presently building on macOS, but the same pattern would work for iOS.)
Here is the Python script I use to increment my build number whenever a source code change is detected, and to update one or more Info.plist files within the project.
It was created to solve the issue raised in this question I asked a while back.
You need to create a buildnum.ver file in the source tree that looks like this:
version 1.0
build 1
(You will need to manually increment the version when certain project milestones are reached, but the build number is incremented automatically.)
NOTE: the .ver file must be located in the root of the source tree (see SourceDir below), as this script looks for modified files in that directory. If any are found, the build number is incremented. Modified means source files changed after the .ver file was last updated.
Then create a new Xcode target to run an external build tool and run something like:
tools/bump_buildnum.py SourceDir/buildnum.ver SourceDir/Info.plist
(make it run in ${PROJECT_DIR})
and then make all the actual Xcode targets dependent upon this target, so it runs before any of them are built.
#!/usr/bin/env python
#
# Bump build number in Info.plist files if a source file has changed.
#
# usage: bump_buildnum.py buildnum.ver Info.plist [ ... Info.plist ]
#
# andy#trojanfoe.com, 2014.
#
import sys, os, subprocess, re

def read_verfile(name):
    version = None
    build = None
    verfile = open(name, "r")
    for line in verfile:
        match = re.match(r"^version\s+(\S+)", line)
        if match:
            version = match.group(1).rstrip()
        match = re.match(r"^build\s+(\S+)", line)
        if match:
            build = int(match.group(1).rstrip())
    verfile.close()
    return (version, build)

def write_verfile(name, version, build):
    verfile = open(name, "w")
    verfile.write("version {0}\n".format(version))
    verfile.write("build {0}\n".format(build))
    verfile.close()
    return True

def set_plist_version(plistname, version, build):
    if not os.path.exists(plistname):
        print("{0} does not exist".format(plistname))
        return False
    plistbuddy = '/usr/libexec/Plistbuddy'
    if not os.path.exists(plistbuddy):
        print("{0} does not exist".format(plistbuddy))
        return False
    cmdline = [plistbuddy,
               "-c", "Set CFBundleShortVersionString {0}".format(version),
               "-c", "Set CFBundleVersion {0}".format(build),
               plistname]
    if subprocess.call(cmdline) != 0:
        print("Failed to update {0}".format(plistname))
        return False
    print("Updated {0} with v{1} ({2})".format(plistname, version, build))
    return True

def should_bump(vername, dirname):
    verstat = os.stat(vername)
    allnames = []
    for dirname, dirnames, filenames in os.walk(dirname):
        for filename in filenames:
            allnames.append(os.path.join(dirname, filename))
    for filename in allnames:
        filestat = os.stat(filename)
        if filestat.st_mtime > verstat.st_mtime:
            print("{0} is newer than {1}".format(filename, vername))
            return True
    return False

def upver(vername):
    (version, build) = read_verfile(vername)
    if version == None or build == None:
        print("Failed to read version/build from {0}".format(vername))
        return (None, None)
    # Bump the version number if any files in the same directory as the version file
    # have changed, including sub-directories.
    srcdir = os.path.dirname(vername)
    bump = should_bump(vername, srcdir)
    if bump:
        build += 1
        print("Incremented to build {0}".format(build))
        write_verfile(vername, version, build)
        print("Written {0}".format(vername))
    else:
        print("Staying at build {0}".format(build))
    return (version, build)

if __name__ == "__main__":
    if os.environ.has_key('ACTION') and os.environ['ACTION'] == 'clean':
        print("{0}: Not running while cleaning".format(sys.argv[0]))
        sys.exit(0)
    if len(sys.argv) < 3:
        print("Usage: {0} buildnum.ver Info.plist [... Info.plist]".format(sys.argv[0]))
        sys.exit(1)
    vername = sys.argv[1]
    (version, build) = upver(vername)
    if version == None or build == None:
        sys.exit(2)
    for i in range(2, len(sys.argv)):
        plistname = sys.argv[i]
        set_plist_version(plistname, version, build)
    sys.exit(0)
First, I would like to clarify what each key is meant to do:
CFBundleShortVersionString
A string describing the released version of an app, using semantic versioning. This string will be displayed in the App Store description.
CFBundleVersion
A string specifying the build version (released or unreleased). It is a string, but Apple recommends using numbers instead.
CFBundleGetInfoString
Seems to be deprecated, as it is no longer listed in the Information Property List Key Reference.
During development, CFBundleShortVersionString isn't changed that often, and I normally set it manually in Xcode. The only string I change regularly is CFBundleVersion, because you can't submit a new build to iTunes Connect/TestFlight if CFBundleVersion hasn't changed.
To change the value, I use a Rake task with PlistBuddy to write a time stamp (year, month, day, hour, and minute) to CFBundleVersion:
desc "Bump bundle version"
task :bump_bundle_version do
bundle_version = Time.now.strftime "%Y%m%d%H%M"
sh %Q{/usr/libexec/PlistBuddy -c "Set CFBundleVersion #{bundle_version}" "DemoApp/DemoApp-Info.plist"}
end
You can use PlistBuddy, if you need to automate CFBundleShortVersionString as well.
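If you prefer Python over Rake, a roughly equivalent sketch (the plist path and version values here are only examples) would be:
# Hypothetical Python counterpart of the Rake task, setting both keys via PlistBuddy.
import subprocess
import time

plist = "DemoApp/DemoApp-Info.plist"          # example path
short_version = "1.2.3"                        # bumped manually per release
bundle_version = time.strftime("%Y%m%d%H%M")   # time-stamped build number

subprocess.check_call([
    "/usr/libexec/PlistBuddy",
    "-c", "Set CFBundleShortVersionString %s" % short_version,
    "-c", "Set CFBundleVersion %s" % bundle_version,
    plist,
])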
Normally I try to avoid the use of macros, so I actually don't know how to use them beyond the most basic ones, but I'm trying to do some meta-manipulation, so I assume macros are needed.
I have an enum listing various log entries and their respective IDs, e.g.
enum LogID
{
    LOG_ID_ITEM1=0,
    LOG_ID_ITEM2,
    LOG_ID_ITEM3=10,
    ...
};
which is used within my program when writing data to the log file. Note that they will not, in general, be in any order.
I do most of my log file post-processing in Matlab, so I'd like to write the same variable names and values to a file for Matlab to load, e.g. a file looking like:
LOG_ID_ITEM1=0;
LOG_ID_ITEM2=1;
LOG_ID_ITEM3=10;
...
I have no idea how to go about doing this, but it seems like it shouldn't be too complicated. If it helps, I am using C++11.
Edit:
For clarification, I'm not looking for the macro itself to write the file. I want a way to store the enum element names and values as strings and ints somehow, so I can then use a regular C++ function to write everything to a file. I'm thinking the macro might then be used to build up the strings and values into vectors? Does that work? If so, how?
I agree with Adam Burry that a separate script is likely best for this. Not sure which languages you're familiar with, but here's a quick Python script that'll do the job:
#!/usr/bin/python
'''Makes a .m file from an enum in a C++ source file.'''
from __future__ import print_function
import sys
import re

def parse_cmd_line():
    '''Gets a filename from the first command line argument.'''
    if len(sys.argv) != 2:
        sys.stderr.write('Usage: enummaker [cppfilename]\n')
        sys.exit(1)
    return sys.argv[1]

def make_m_file(cpp_file, m_file):
    '''Makes an .m file from enumerations in a .cpp file.'''
    in_enum = False
    enum_val = 0
    lines = cpp_file.readlines()
    for line in lines:
        if in_enum:
            # Currently processing an enumeration
            if '}' in line:
                # Encountered a closing brace, so stop
                # processing and reset value counter
                in_enum = False
                enum_val = 0
            else:
                # No closing brace, so process line
                if '=' in line:
                    # If a value is supplied, use it
                    ev_string = re.match(r'[^=]*=(\d+)', line)
                    enum_val = int(ev_string.group(1))
                # Write output line to file
                e_out = re.match(r'[^=\n,]+', line)
                m_file.write(e_out.group(0).strip() + '=' +
                             str(enum_val) + ';\n')
                enum_val += 1
        else:
            # Not currently processing an enum,
            # so check for an enum definition
            enumstart = re.match(r'enum \w+ {', line)
            if enumstart:
                in_enum = True

def main():
    '''Main function.'''
    # Get file names
    cpp_name = parse_cmd_line()
    m_name = cpp_name.replace('cpp', 'm')
    print('Converting ' + cpp_name + ' to ' + m_name + '...')
    # Open the files
    try:
        cpp_file = open(cpp_name, 'r')
    except IOError:
        print("Couldn't open " + cpp_name + ' for reading.')
        sys.exit(1)
    try:
        m_file = open(m_name, 'w')
    except IOError:
        print("Couldn't open " + m_name + ' for writing.')
        sys.exit(1)
    # Translate the cpp file
    make_m_file(cpp_file, m_file)
    # Finish
    print("Done.")
    cpp_file.close()
    m_file.close()

if __name__ == '__main__':
    main()
Running ./enummaker.py testenum.cpp on the following file of that name:
/* Random code here */
enum LogID {
    LOG_ID_ITEM1=0,
    LOG_ID_ITEM2,
    LOG_ID_ITEM3=10,
    LOG_ID_ITEM4
};
/* More random code here */
enum Stuff {
    STUFF_ONE,
    STUFF_TWO,
    STUFF_THREE=99,
    STUFF_FOUR,
    STUFF_FIVE
};
/* Yet more random code here */
produces a file testenum.m containing the following:
LOG_ID_ITEM1=0;
LOG_ID_ITEM2=1;
LOG_ID_ITEM3=10;
LOG_ID_ITEM4=11;
STUFF_ONE=0;
STUFF_TWO=1;
STUFF_THREE=99;
STUFF_FOUR=100;
STUFF_FIVE=101;
This script assumes that the closing brace of an enum block is always on a separate line, that the first identifier is defined on the line following the opening brace, that there are no blank lines between the braces, that enum appears at the start of a line, and that there is no space between the = and the number. It's easy enough to modify the script to overcome these limitations. You could have your makefile run this automatically.
Have you considered "going the other way"? It usually makes more sense to maintain your data definitions in a (text) file, then as part of your build process you can generate a C++ header and include it. Python and mako is a good tool for doing this.
I'm trying to execute the following code in Python 2.7 on Windows 7. The purpose of the code is to back up the specified folder to a specified target folder, using the naming pattern given.
However, I'm not able to get it to work. The output has always been 'Backup FAILED'.
Please advise on how I can resolve this and get the code working.
Thanks.
Code :
backup_ver1.py
import os
import time
import sys
sys.path.append('C:\Python27\GnuWin32\bin')
source = 'C:\New'
target_dir = 'E:\Backup'
target = target_dir + os.sep + time.strftime('%Y%m%d%H%M%S') + '.zip'
zip_command = "zip -qr {0} {1}".format(target,''.join(source))
print('This is a program for backing up files')
print(zip_command)
if os.system(zip_command)==0:
    print('Successful backup to', target)
else:
    print('Backup FAILED')
See if escaping the \'s helps:
source = 'C:\\New'
target_dir = 'E:\\Backup'
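Equivalently, raw string literals side-step the escaping problem (a sketch; the rest of the script stays the same):
# Raw strings keep backslashes literal ('\b' in 'GnuWin32\bin' would otherwise be a backspace).
import sys
sys.path.append(r'C:\Python27\GnuWin32\bin')
source = r'C:\New'
target_dir = r'E:\Backup'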
I'm working on a game project. I use Python 2.7.2 for scripting. My application works fine with a non-Unicode path to the .exe, but it can't load scripts from a Unicode path using
boost::python::import (import_path.c_str());
I tried this example
5.3. Pure Embedding http://docs.python.org/extending/embedding.html#embedding-python-in-c
It also can't handle a Unicode path. I linked Python as a DLL.
Please explain how to handle such a path.
boost::python::import needs a std::string, so chances are that import_path is missing some characters.
Do you have to work on multiple platforms? On Windows, you could call GetShortPathName to retrieve the 8.3 filename and use that to load your DLL (see the ctypes sketch after the quick test below).
You can make a quick test:
Rename your extension to "JaiDéjàTestéÇaEtJaiDétestéÇa.pyd".
At the command line, type dir /x *.pyd to get the short file name (JAIDJT~1.PYD on my computer)
Use the short name to load your extension.
The file name above is French for "I already tested this and I didn't like it". It is a rhyme that takes the edge off working with Unicode ;)
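For reference, a small ctypes sketch of the GetShortPathName call mentioned above (Windows only; assumes 8.3 short-name generation is enabled on the volume, and the example path is made up):
# Resolve a long Unicode path to its 8.3 short form via the Win32 API.
import ctypes

def short_path(long_path):
    buf = ctypes.create_unicode_buffer(260)  # MAX_PATH
    if ctypes.windll.kernel32.GetShortPathNameW(long_path, buf, len(buf)):
        return buf.value
    return long_path  # fall back to the original path on failure

print(short_path(u'C:\\scripts\\JaiDéjàTestéÇaEtJaiDétestéÇa.pyd'))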
This isn't really an answer that will suit your needs, but maybe it will give you something to go on.
I ran into a very similar problem with Python. In my case, my application is a pure Python application. I noticed as well that if my application was installed to a directory with a path string that could not be encoded in MBCS (what Python converts to internally for imports, at least in Python prior to 3.2 as far as I understand), the Python interpreter would fail, claiming no module of that name existed.
What I had to do was write an Import Hook to trick it into loading those files anyway.
Here's what I came up with:
import imp, os, sys

class UnicodeImporter(object):
    def find_module(self, fullname, path=None):
        if isinstance(fullname, unicode):
            fullname = fullname.replace(u'.', u'\\')
            exts = (u'.pyc', u'.pyo', u'.py')
        else:
            fullname = fullname.replace('.', '\\')
            exts = ('.pyc', '.pyo', '.py')
        if os.path.exists(fullname) and os.path.isdir(fullname):
            return self
        for ext in exts:
            if os.path.exists(fullname + ext):
                return self

    def load_module(self, fullname):
        if fullname in sys.modules:
            return sys.modules[fullname]
        else:
            sys.modules[fullname] = imp.new_module(fullname)
        if isinstance(fullname, unicode):
            filename = fullname.replace(u'.', u'\\')
            ext = u'.py'
            initfile = u'__init__'
        else:
            filename = fullname.replace('.', '\\')
            ext = '.py'
            initfile = '__init__'
        if os.path.exists(filename + ext):
            try:
                with open(filename + ext, 'U') as fp:
                    mod = imp.load_source(fullname, filename + ext, fp)
                    sys.modules[fullname] = mod
                    mod.__loader__ = self
                    return mod
            except:
                print 'fail', filename + ext
                raise
        mod = sys.modules[fullname]
        mod.__loader__ = self
        mod.__file__ = os.path.join(os.getcwd(), filename)
        mod.__path__ = [filename]
        # init file
        initfile = os.path.join(filename, initfile + ext)
        if os.path.exists(initfile):
            with open(initfile, 'U') as fp:
                code = fp.read()
            exec code in mod.__dict__
        return mod

sys.meta_path = [UnicodeImporter()]
I still run into two issues when using this:
Double-clicking on the launcher file (a .pyw file) in Windows Explorer does not work when the application is installed in a trouble directory. I believe this has to do with how Windows file associations pass the arguments to pythonw.exe (my guess is Windows passes the full path string, which includes the non-encodable character, as the argument to the exe). If I create a batch file and have it call the Python executable with just the file name of my launcher, and ensure it's launched from the same directory, it launches fine. Again, I'm betting this is because now I can use a relative path as the argument for python.exe and avoid those trouble characters in the path.
Packaging my application using py2exe, the resulting exe will not run if placed in one of these trouble paths. I think this has to do with the zipimporter module, which is unfortunately a compiled Python module, so I cannot easily modify it (I would have to recompile, etc.).