Import a Go module created using SWIG (C++)

I just picked up SWIG in an attempt to port a large library written in C++ to Go. The build and install went okay, and now I'd like to test it by using it in another project I'm working on. I built the module in a separate swig directory at the root of the library directory, and pushed a fork of the library with the SWIG changes to my own repo. The structure looks like this:
.
├── bin
├── buildfiles
├── doc
├── GoPro
├── internal
├── lib
├── libraw
├── m4
├── object
├── RawSpeed
├── RawSpeed3
├── samples
├── src
└── swig
    ├── go.mod
    ├── libraw.go
    ├── libraw.i
    ├── libraw_c_api.cxx
    └── libraw_wrap.c
The module name in the swig/go.mod file is github.com/MRHT-SRProject/LibRawGo.
I tried to include the module in another project, but it failed with the error module github.com/MRHT-SRProject/LibRawGo@latest found (v0.0.0-20221005050554-bc562f90d08d), but does not contain package github.com/MRHT-SRProject/LibRawGo. I am assuming it has to do with the module being a subfolder of the project, but I'm not really sure.

It turns out that Go expects a subfolder within the repository whose name matches the module name declared in swig/go.mod. By renaming the swig directory to the name given to the module, I got it working.
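As a sketch of the consuming side once the path resolves (LibrawInit is a hypothetical name, standing in for whatever the SWIG-generated wrapper actually exports):

package main

import (
    libraw "github.com/MRHT-SRProject/LibRawGo"
)

func main() {
    // Hypothetical call into the SWIG-generated binding; substitute
    // an identifier the wrapper really defines.
    handle := libraw.LibrawInit(0)
    _ = handle
}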

Related

Is this a good directory structure to maintain a large C++ project?

I have been trying to figure out a good, maintainable directory structure for large C++ projects. During my search I came across this resource: link. If I loosely follow the structure stated in that document, I get something similar to the following:
.
├── CMakeLists.txt
├── build
│   ├── Executable-OutputA
│   └── Library-OutputA
├── cmake
│   └── *.cmake
├── docs
├── include
│   └── *.h
├── lib
│   ├── LibraryA
│   │   ├── *.cpp
│   │   └── *.h
│   └── LibraryB
│       ├── *.cpp
│       └── *.h
├── src
│   ├── ExecutableA
│   │   ├── *.cpp
│   │   └── *.h
│   └── ExecutableB
│       ├── *.cpp
│       └── *.h
├── tests
└── third_party
    ├── External-ProjectA
    └── External-ProjectB
build: holds the executables and libraries generated by the project
cmake: holds all the CMake modules/packages that the project may require
docs: holds documentation files, typically Doxygen output
include: holds public header files (might not be needed, not sure)
lib: holds all the libraries the user creates, with their respective source and header files
src: holds all the executable projects the user makes, with their respective headers and source files
tests: files to test both the executables and libraries
third_party: any third-party projects, libraries, etc., usually downloaded or cloned from online
I believe this is an appropriate structure for large projects, but I do not have much experience with projects that produce more than 3 or 4 targets. I want to ask the community for feedback: do you agree with the structure laid out above, or do you have better suggestions?
Edit: I have not been able to find many posts detailing multiple target outputs as well as third-party dependencies for large projects.
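For concreteness, a minimal top-level CMakeLists.txt wiring that layout together might look like the following (a sketch; target and directory names are placeholders, not prescriptions):

cmake_minimum_required(VERSION 3.16)
project(ExampleProject LANGUAGES CXX)

# Make helper modules in cmake/ visible to include() and find_package()
list(APPEND CMAKE_MODULE_PATH "${CMAKE_SOURCE_DIR}/cmake")

# Route build outputs into the build/ area described above
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY "${CMAKE_BINARY_DIR}/bin")
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY "${CMAKE_BINARY_DIR}/lib")
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY "${CMAKE_BINARY_DIR}/lib")

add_subdirectory(third_party)  # external dependencies first
add_subdirectory(lib)          # LibraryA, LibraryB, ...
add_subdirectory(src)          # ExecutableA, ExecutableB, ...

enable_testing()
add_subdirectory(tests)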

Adding headers in a Modern CMake way in a large project

I'm working on a project that encompasses an SDL-based game framework, an editor, and a game. The directory structure is shaping up to be something like this (edited for brevity)
├── CMake
├── Libraries
├── Resources
└── Source
    ├── Editor
    ├── Framework
    │   └── Source
    │       ├── AI
    │       │   ├── Private
    │       │   └── Public
    │       ├── Game
    │       │   ├── Private
    │       │   └── Public
    │       ├── Graphics
    │       │   ├── Private
    │       │   └── Public
    │       └── Networking
    ├── Game
    │   └── Source
    │       └── Runtime
    │           └── Launch
    │               ├── Private
    │               └── Public
    └── Server
My add_executable command is run in Game/Source/Runtime/Launch/Private, and it depends on files from the other modules.
According to some CMake how-tos and the accepted response for the following question, the headers for a given target should be included in the add_executable call in order for them to be listed as dependencies in the makefile.
How to properly add include directories with CMake
My question is, what is the cleanest way to accomplish this, given the abundance of header files and directories in my project? I can't imagine that the best practice would be to maintain a huge list of files directly in the add_executable call, but I could be wrong.
I was thinking that each directory's CMakeLists.txt could be responsible for adding to a variable that would eventually be used in add_executable, but distributing that intent through so many files seems like poor practice. Is there a better way?
You can follow exactly the pattern in the link you sent. For each library (I assume each one is a CMake library/target), use:
target_include_directories(${target} PRIVATE ${CMAKE_CURRENT_SOURCE_DIR}/Private PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/Public)
By doing this, you tell CMake that the Private folders are only for the current target, while the Public include directories also propagate to any other target that links against it.
Now, you just need to do:
target_link_libraries(${target} PRIVATE Framework_Source)
Here Framework_Source is the name of a target, and ${target} is the name of the target you are currently building.
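Putting it together for one module, the per-library CMakeLists.txt could look like this (a sketch; the target and file names are illustrative, not taken from the actual project):

# Source/Framework/Source/AI/CMakeLists.txt
add_library(Framework_AI
    Private/Pathfinding.cpp
    Public/Pathfinding.h
)

target_include_directories(Framework_AI
    PRIVATE ${CMAKE_CURRENT_SOURCE_DIR}/Private
    PUBLIC  ${CMAKE_CURRENT_SOURCE_DIR}/Public
)

# The executable lists only its own sources; the Public include
# directories of Framework_AI arrive transitively through the link.
add_executable(Game Private/Main.cpp)
target_link_libraries(Game PRIVATE Framework_AI)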

Importing modules in python script and figuring out layout of directory/package

I am very new to Python, and I have been reading around trying to figure this out, but I am still struggling.
I am trying to import one script from another so the code can talk to each other. I've tried importing as suggested by the Python documentation:
from SomePackage.somefile import object
The actual directory looks like this:
Foo/
├── bin
├── README.txt
├── setup.py
├── development.ini
└── SomePackage
    ├── somefile
    │   ├── __init__.py
    │   └── object.py
    └── __init__.py
Do I need to import differently in my .py file? Each package has an empty __init__.py file. Should I move where my program is located?
Any help is truly appreciated!!
I figured out the issue! I needed to create a setup.py file that declares the packages and modules:
from setuptools import setup

setup(
    # ...
    packages=['somepackage', 'somepackage.someotherpackage'],
    py_modules=['somepackage.module.file'],  # module path, without the .py extension
    scripts=['bin/somescript-cmd'],
    # more options...
)
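With a correct setup.py in place, installing the package in development mode makes the imports resolve from anywhere; either command below should work, depending on your tooling:

pip install -e .
# or, equivalently, with older tooling:
python setup.py develop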

Merging an independent Git repo with another Git repo that is a conduit with Subversion: avoiding duplication when merging

I am happily developing a Django project in my own Git repo on my localhost. I am creating branches, committing and merging happily. The path is something like:
/path/to/git/django/
and the structure is:
project
├── README
├── REQUIREMENTS
├── __init__.py
├── fabfile.py
├── app1
├── manage.py
├── app2
├── app3
├── app4
└── project
The rest of my development team still use Subversion, which is one giant repo with multiple projects. When I am working with that on my localhost I am still using Git (via git-svn). The path is something like
/path/to/giant-svn-repo/
Projects live under this like:
giant-svn-repo
|── project1
|── project2
|── project3
└── project4
When I want to work with the latest changes from the remote svn repo I just do a git-svn rebase. Then for new features I create a new branch, develop, commit, checkout master, merge branch with master, delete branch, and then a final git-svn dcommit. Cool. Everything works well.
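In commands, that cycle looks roughly like this (the branch name is illustrative):
> git svn rebase
> git checkout -b some-feature
> # ...develop and commit...
> git checkout master
> git merge some-feature
> git branch -d some-feature
> git svn dcommit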
These two repositories (let's call them git-django and git-svn) are completely independent right now.
Now I want to add git-django into the git-svn repo as a new project (i.e. in a child directory called djangoproject). I have this working pretty well, using the following workflow:
1. cd into the git-svn repo
2. Create a new branch in the git-svn repo
3. Make a new directory to host my django project
4. Add a new remote that links to my original Django project
5. Merge the remote into my local directory
6. Read-tree with the prefix of the relative path djangoproject so the codebase lands in the correct location relative to the root of the git-svn repo
7. Commit the changes so everything gets dumped into the correct place
From the command line this looks like:
> cd /path/to/giant-svn-repo
> git checkout -b my_django_project
> mkdir /path/to/giant-svn-repo/djangoproject
> git remote add -f local_django_dev /path/to/git/django/project
> git merge -s ours --no-commit local_django_dev/master
> git read-tree --prefix=djangoproject -u local_django_dev/master
> git commit -m 'Merged local_django_dev into subdirectory djangoproject'
This works, but in addition to the contents of the django git repo ending up in /path/to/giant-svn-repo/djangoproject, they also appear in the root of the repository tree!
project
├── README
├── REQUIREMENTS
├── __init__.py
├── fabfile.py
├── djangoproject
│   ├── README
│   ├── REQUIREMENTS
│   ├── __init__.py
│   ├── fabfile.py
│   ├── app1
│   ├── manage.py
│   ├── app2
│   ├── app3
│   ├── app4
│   └── project
├── app1
├── manage.py
├── app2
├── app3
├── app4
└── project
I seem to have polluted the parent directory where all the projects of the giant-svn-repo are located.
Is there any way I can stop this happening?
(BTW this has all been done in a test directory structure - I haven't corrupted anything yet. Just trying to figure out the best way to do it)
I am sure it is just (re)defining one more argument to either git merge, git read-tree or git commit but I am pretty much at my limit of git kung-fu.
Thanks in advance.
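One approach worth testing, as a sketch rather than a confirmed fix: newer versions of Git bundle the git subtree helper, which wraps this whole remote-add/merge/read-tree sequence in a single command and may avoid the root-level duplication:
> cd /path/to/giant-svn-repo
> git checkout -b my_django_project
> git subtree add --prefix=djangoproject /path/to/git/django/project master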

How can I correctly set DJANGO_SETTINGS_MODULE for my Django project (I am using virtualenv)?

I am having some trouble setting the DJANGO_SETTINGS_MODULE for my Django project.
I have a directory at ~/dev/django-project. In this directory I have a virtual environment which I have set up with virtualenv, and also a django project called "blossom" with an app within it called "onora". Running tree -L 3 from ~/dev/django-project/ shows me the following:
.
├── Procfile
├── blossom
│   ├── __init__.py
│   ├── __init__.pyc
│   ├── fixtures
│   │   └── initial_data_test.yaml
│   ├── manage.py
│   ├── onora
│   │   ├── __init__.py
│   │   ├── __init__.pyc
│   │   ├── admin.py
│   │   ├── admin.pyc
│   │   ├── models.py
│   │   ├── models.pyc
│   │   ├── tests.py
│   │   └── views.py
│   ├── settings.py
│   ├── settings.pyc
│   ├── sqlite3-database
│   ├── urls.py
│   └── urls.pyc
├── blossom-sqlite3-db2
├── requirements.txt
└── virtual_environment
    ├── bin
    │   ├── activate
    │   ├── activate.csh
    │   ├── activate.fish
    │   ├── activate_this.py
    │   ├── django-admin.py
    │   ├── easy_install
    │   ├── easy_install-2.7
    │   ├── gunicorn
    │   ├── gunicorn_django
    │   ├── gunicorn_paster
    │   ├── pip
    │   ├── pip-2.7
    │   ├── python
    │   └── python2.7 -> python
    ├── include
    │   └── python2.7 -> /System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7
    └── lib
        └── python2.7
I am trying to dump my data from the database with the command
django-admin.py dumpdata
My approach is to cd into ~/dev/django-project, run source virtual_environment/bin/activate, and then run django-admin.py dumpdata.
However, I am getting the following error:
ImportError: Settings cannot be imported, because environment variable DJANGO_SETTINGS_MODULE is undefined.
I did some googling and found this page: https://docs.djangoproject.com/en/dev/topics/settings/#designating-the-settings
which tells me that
When you use Django, you have to tell it which settings you're using.
Do this by using an environment variable, DJANGO_SETTINGS_MODULE. The
value of DJANGO_SETTINGS_MODULE should be in Python path syntax, e.g.
mysite.settings. Note that the settings module should be on the Python
import search path.
Following a suggestion at Setting DJANGO_SETTINGS_MODULE under virtualenv? I appended the lines
export DJANGO_SETTINGS_MODULE="blossom.settings"
echo $DJANGO_SETTINGS_MODULE
to virtual_environment/bin/activate. Now, when I run the activate command in order to activate the virtual environment, I get output reading:
DJANGO_SETTINGS_MODULE set to blossom.settings
This looks good to me, but now the problem I have is that running
django-admin.py dumpdata
returns the following error:
ImportError: Could not import settings 'blossom.settings' (Is it on sys.path?): No module named blossom.settings
What am I doing wrong? How can I check the sys.path? How is this supposed to work?
Thanks.
Don't run django-admin.py for anything other than the initial project creation. For everything after that, use manage.py, which takes care of finding the settings.
I just encountered the same error, and eventually managed to work out what was going on (the big clue was (Is it on sys.path?) in the ImportError).
You need to add your project directory to PYTHONPATH; this is what the documentation means by
Note that the settings module should be on the Python import search path.
To do so, run
$ export PYTHONPATH=$PYTHONPATH:$PWD
from the ~/dev/django-project directory before you run django-admin.py.
You can add this command (replacing $PWD with the actual path to your project, i.e. ~/dev/django-project) to your virtualenv's source script. If you choose to advance to virtualenvwrapper at some point (which is designed for this kind of situation), you can add the export PY... line to the auto-generated postactivate hook script.
mkdjangovirtualenv automates this even further, adding the appropriate entry to the Python path for you, but I have not tested it myself.
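For concreteness, a postactivate hook matching the layout above would look like this (a sketch; adjust the paths to your setup):

# $VIRTUAL_ENV/bin/postactivate
export PYTHONPATH=$PYTHONPATH:$HOME/dev/django-project
export DJANGO_SETTINGS_MODULE=blossom.settings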
On a Unix-like machine you can simply define an alias like this in your .bashrc and use the alias instead of typing everything each time:
alias cool='source /path_to_ve/bin/activate; export DJANGO_SETTINGS_MODULE=django_settings_folder.settings; cd path_to_django_project; export PYTHONPATH=$PYTHONPATH:$PWD'
My favourite alternative is passing the settings file as a runtime parameter to manage.py, in Python package syntax, e.g.:
python manage.py runserver --settings folder.filename
More info in the Django docs.
I know there are plenty of answers, but this one worked for me, just for the record.
Navigate to your .virtual_env folder where all the virtual environments are.
Go to the environment folder specific to your project.
Append export DJANGO_SETTINGS_MODULE=<django_project>.settings
or export DJANGO_SETTINGS_MODULE=<django_project>.settings.local if you are using a separate settings file stored in a settings folder.
Yet another way to deal with this issue is to use the python-dotenv package and include PYTHONPATH and DJANGO_SETTINGS_MODULE in the .env file along with your other environment variables. Then modify your manage.py and wsgi.py to load them as stated in the instructions.
from dotenv import load_dotenv
load_dotenv()
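A sketch of what that looks like at the top of manage.py (only the dotenv lines are additions; the rest is the stock entry point):

#!/usr/bin/env python
import sys

from dotenv import load_dotenv

# Load DJANGO_SETTINGS_MODULE (and anything else in .env) into the
# environment before Django looks it up.
load_dotenv()

if __name__ == "__main__":
    from django.core.management import execute_from_command_line

    execute_from_command_line(sys.argv)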
I had a similar error while working on a Windows machine. My problem was using the wrong debug configuration. Use Python: Django as your debug config option.
First, ensure you've exported/set DJANGO_SETTINGS_MODULE correctly as described here.