I'm creating a boilerplate project for working with the Foundation framework using gulp and browserify with the following file structure while following this handy guide:
.
├── build
│   ├── js
│   │   └── bundle.js
│   ├── stylesheets
│   │   ├── main.css
│   │   └── main.min.css
│   └── index.html
├── node_modules
│   └── [gulp, gulp-watch, etc...]
├── src
│   └── js
│       └── main.js
├── sass
│   ├── _main_settings.scss [for overriding default Foundation settings]
│   └── main.scss
├── templates
│   └── index.jade
├── bower.json
├── gulpfile.js
└── package.json
Should I bower install foundation in the root folder, or would it make more sense to put it inside the src folder? I've gotten the SCSS working by following this guide; however, I'm not sure how to use browserify to bundle up the Foundation JS so that I can just import one neat compiled file in index.html. In addition, I'd like to include Modernizr separately, in the head section rather than at the end of the body.
If it helps, here is my gulpfile for reference.
I decided to use npm to install Foundation rather than bower; either way, I left bower_components / node_modules in the root folder rather than in build.
browserify-shim is necessary to get browserify to play nice with Foundation, as was answered here.
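For reference, the shim wiring lives in package.json. This is only a rough sketch of what mine ended up looking like; the exact path to foundation.js (and whether it comes from node_modules or bower_components) depends on how you installed Foundation:
package.json (excerpt):
{
  "browser": {
    "foundation": "./node_modules/foundation-sites/js/foundation.js"
  },
  "browserify-shim": {
    "foundation": { "exports": null, "depends": ["jquery:jQuery"] }
  },
  "browserify": {
    "transform": ["browserify-shim"]
  }
}
src/js/main.js:
var $ = require('jquery');
require('foundation');       // resolved through the "browser" alias above
$(document).foundation();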
I ended up making a Yeoman generator to scaffold out a project with Foundation, Gulp and Browserify, feel free to check it out: https://github.com/dougmacklin/generator-foundation-browserify
I just picked up SWIG in an attempt to port a large library written in C++ to Go. The build and install went okay, and now I'd like to test it by using it in another project I'm working on. I built the module in a separate swig directory at the root of the library directory, and pushed a fork of the library with the SWIG changes to my own repo. The structure then looks like this:
.
├── bin
├── buildfiles
├── doc
├── GoPro
├── internal
├── lib
├── libraw
├── m4
├── object
├── RawSpeed
├── RawSpeed3
├── samples
├── src
└── swig
    ├── go.mod
    ├── libraw.go
    ├── libraw.i
    ├── libraw_c_api.cxx
    └── libraw_wrap.c
The module name in the swig/go.mod file is github.com/MRHT-SRProject/LibRawGo.
I tried to include the module in another project, but it failed with the error module github.com/MRHT-SRProject/LibRawGo@latest found (v0.0.0-20221005050554-bc562f90d08d), but does not contain package github.com/MRHT-SRProject/LibRawGo. I am assuming it has to do with the module being a sub-folder of the project, but I'm not really sure.
It turns out that Go expects a sub-folder within the module with the same name as the package/module defined for the SWIG bindings. By renaming the swig directory to the name given to the module, I got it working.
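To make the rule concrete with hypothetical names (not the actual LibRaw fork): if the bindings' go.mod declares module github.com/example/LibRawGo, the consuming project can only import paths that map onto real directories inside that module, for example:
// main.go in the consuming project (illustration only); this import resolves
// only if the fetched module contains .go files in the directory the path names,
// i.e. next to go.mod for the bare module path.
package main

import (
	_ "github.com/example/LibRawGo" // blank import, just to show path resolution
)

func main() {}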
I have been struggling to deploy my Golang app to AWS EB for a couple of days.
I am trying to deploy my app to an EB server (Preconfigured Docker - Go 1.4 running on 64bit Debian/2.9.2) using the eb CLI, via the command eb deploy in my app folder.
After a couple of seconds, I get an error message saying that my app wasn't deployed because of an error.
Looking at eb-activity.log, here is what I can see:
/var/log/eb-activity.log
-------------------------------------
Fetching https://golang.org/x/crypto?go-get=1
Parsing meta tags from https://golang.org/x/crypto?go-get=1 (status code 200)
golang.org/x/crypto (download)
Fetching https://golang.org/x/sys/unix?go-get=1
Parsing meta tags from https://golang.org/x/sys/unix?go-get=1 (status code 200)
get "golang.org/x/sys/unix": found meta tag main.metaImport{Prefix:"golang.org/x/sys", VCS:"git", RepoRoot:"https://go.googlesource.com/sys"} at https://golang.org/x/sys/unix?go-get=1
get "golang.org/x/sys/unix": verifying non-authoritative meta tag
Fetching https://golang.org/x/sys?go-get=1
Parsing meta tags from https://golang.org/x/sys?go-get=1 (status code 200)
golang.org/x/sys (download)
github.com/randomuser/private-repo (download)
# cd .; git clone https://github.com/randomuser/private-repo /go/src/github.com/randomuser/private-repo
Cloning into '/go/src/github.com/randomuser/private-repo'...
fatal: could not read Username for 'https://github.com': No such device or address
package github.com/Sirupsen/logrus
imports golang.org/x/crypto/ssh/terminal
imports golang.org/x/sys/unix
imports github.com/randomuser/private-repo/apis: exit status 128
package github.com/Sirupsen/logrus
imports golang.org/x/crypto/ssh/terminal
imports golang.org/x/sys/unix
imports github.com/randomuser/private-repo/app
imports github.com/randomuser/private-repo/app
imports github.com/randomuser/private-repo/app: cannot find package "github.com/randomuser/private-repo/app" in any of:
/usr/src/go/src/github.com/randomuser/private-repo/app (from $GOROOT)
/go/src/github.com/randomuser/private-repo/app (from $GOPATH)
I suppose there is an issue when the server tries to install the app: it seems to be trying to fetch my private repo from GitHub...
I referenced my app's sub-packages as github.com/randomuser/private-repo/subpackage, which I suppose is why it behaves like that.
Is there a way to deploy all my code, forcing my private repo to be populated within the GOROOT src/github.com/randomuser/private-repo/ so the server doesn't have to try to get it?
I didn't find any proper example in the Amazon docs (for multi-package apps) nor on GitHub.
Am I missing anything? Is there a better solution?
On a side note, I also tried to deploy my compiled binary directly (creating a folder with only the binary in it, zipping it and uploading it to the EB environment), but that didn't work either... Maybe this option requires yet another env config (if so, which one?).
Thanks for your help :)
Configuration
A Golang app with the following folders:
├── Dockerfile
├── server.go
├── Gopkg.lock
├── Gopkg.toml
├── Makefile
├── apis
│ ├── auth.go
│ ├── auth_test.go
│ ├── ...
├── app
│ ├── config.go
│ ├── init.go
│ ├── logger.go
│ ├── scope.go
│ ├── transactional.go
│ └── version.go
├── config
│ ├── dev.app.yaml
│ ├── errors.yaml
│ └── prod.app.yaml
├── daos
│ ├── auth.go
│ ├── auth_test.go
│ ├── ...
├── errors
│ ├── api_error.go
│ ├── api_error_test.go
│ ├── errors.go
│ ├── errors_test.go
│ ├── template.go
│ └── template_test.go
├── models
│ ├── identity.go
│ ├── ...
├── services
│ ├── auth.go
│ ├── auth_test.go
│ ├── ...
└── util
    ├── paginated_list.go
    └── paginated_list_test.go
Here is the content of my server.go
package main
import (
"flag"
"fmt"
"net/http"
"github.com/jinzhu/gorm"
_ "github.com/jinzhu/gorm/dialects/mysql"
"github.com/randomuser/private-repo/apis"
"github.com/randomuser/private-repo/app"
"github.com/randomuser/private-repo/daos"
"github.com/randomuser/private-repo/errors"
"github.com/randomuser/private-repo/services"
)
func main() {
// getting env from command line
// env is either prod, preprod or dev
// by default, env is prod
env := flag.String("env", "prod", "environment: prod, preprod or dev")
flag.Parse()
...
router.To("GET,HEAD", "/ping", func(c *routing.Context) error {
c.Abort() // skip all other middlewares/handlers
return c.Write("OK " + app.Version)
})
...
// Serve on port 5000
My Dockerfile content:
FROM golang:1.4.2-onbuild
ADD . /go/src/github.com/randomuser/private-repo
RUN go install github.com/randomuser/private-repo
EXPOSE 5000
ENTRYPOINT /go/bin/private-repo
I finally managed to make it work.
So I created a brand new EB app (without Docker).
I then figured out that, somehow, my app wasn't able to retrieve the env vars set in the console...
So what I did was force the env variables to be passed to my app at startup from my production config file, using the build.sh script below:
#!/bin/bash -xe
# See http://tldp.org/LDP/abs/html/options.html
# -x -> Print each command to stdout before executing it, expand commands
# -e -> Abort script at first error, when a command exits with non-zero status
# (except in until or while loops, if-tests, list constructs)
# $GOPATH isn't set by default, nor do we have a usable Go workspace :'(
GOPATH="/var/app/current"
APP_BUILD_DIR="$GOPATH/src/to-be-defined" # We will build the app here
APP_STAGING_DIR="/var/app/staging" # Current directory
DEP_VERSION="v0.3.2" # Use specific version for stability
ENV_VAR_PREFIX="TO_BE_DEFINED_"
# Install dep, a Go dependency management tool, if not already installed or if
# the version does not match.
if ! hash dep 2> /dev/null ||\
[[ $(dep version | awk 'NR==2{print $3}') != "$DEP_VERSION" ]]; then
# /usr/local/bin is expected to be on $PATH.
curl -L \
-s https://github.com/golang/dep/releases/download/$DEP_VERSION/dep-linux-amd64 \
-o /usr/local/bin/dep
chmod +x /usr/local/bin/dep
fi
# Remove the $APP_BUILD_DIR just in case it was left behind in a failed build.
rm -rf $APP_BUILD_DIR
# Setup the application directory
mkdir -p $APP_BUILD_DIR
# mv all files to $APP_BUILD_DIR
# https://superuser.com/questions/62141/how-to-move-all-files-from-current-directory-to-upper-directory
mv * .[^.]* $APP_BUILD_DIR
cd $APP_BUILD_DIR
# Pull in dependencies into vendor/.
dep ensure
# Build the binary with jsoniter tag.
go build -o application -tags=jsoniter .
# Modify permissions to make the binary executable.
chmod +x application
# Move the binary back to staging dir.
# Along with the configuration files.
mkdir $APP_STAGING_DIR/bin
# By default, `bin/application` is executed. This way, a Procfile isn't needed.
mv application $APP_STAGING_DIR/bin
cp -r config $APP_STAGING_DIR
# TODO: Fix the viper not working with env var
# Generate prod config from env variables
/opt/elasticbeanstalk/bin/get-config environment --output YAML | sed s/${ENV_VAR_PREFIX}//g > $APP_STAGING_DIR/config/prod.app.yaml
# Copy .ebextensions back to staging directory.
# cp -r .ebextensions $APP_STAGING_DIR
# Clean up.
rm -rf $APP_BUILD_DIR
echo "Build successful!!"
My build.sh file is called by Elastic Beanstalk using this Buildfile:
make: ./build.sh
Et voilà! Everything works properly now :)
I am building a console application that goes out to GitHub via Octokit and fetches all the matching readme.md files. I then save these .md files to the _posts folder in my Jekyll project.
I used the jekyll build command at the _posts directory level. It created a _site folder that contains only .md files and no .html. What am I missing here?
jekyll build shouldn't be run inside the _posts folder; it should be executed at the root level, i.e., typically where your _config.yml is.
.
├── 404.html
├── about.md
├── _config.yml
├── Gemfile
├── Gemfile.lock
├── index.md
└── _posts
└── 2017-08-06-welcome-to-jekyll.markdown
Then your generated website will be located at /_site.
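As a rough sketch (the path is a placeholder), the build is run from that root folder:
cd /path/to/your-jekyll-site   # the folder that contains _config.yml
jekyll build                   # or: bundle exec jekyll build; the output goes to ./_site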
I am happily developing a Django project in my own Git repo on my localhost, creating branches, committing and merging as I go. The path is something like:
/path/to/git/django/
and the structure is:
project
├── README
├── REQUIREMENTS
├── __init__.py
├── fabfile.py
├── app1
├── manage.py
├── app2
├── app3
├── app4
└── project
The rest of my development team still uses Subversion, which is one giant repo with multiple projects. When I am working with that on my localhost I still use Git (via git-svn). The path is something like:
/path/to/giant-svn-repo/
Projects live under this like:
giant-svn-repo
├── project1
├── project2
├── project3
└── project4
When I want to work with the latest changes from the remote svn repo I just do a git-svn rebase. Then for new features I create a new branch, develop, commit, checkout master, merge branch with master, delete branch, and then a final git-svn dcommit. Cool. Everything works well.
These two repositories (let's call them git-django and git-svn) are completely independent right now.
Now I want to add git-django into the git-svn repo as a new project (i.e., in a child directory called djangoproject). I have this working pretty well, using the following workflow:
cd into git-svn repo
Create a new branch in the git-svn repo
Make a new directory to host my django project
Add a new remote that links to my original Django project
Merge the remote into my local directory
Read-tree with the prefix of relative path djangoproject so it puts the codebase into the correct location based on the root of git-svn repo
Commit the changes so everything gets dumped into the correct place
From the command line this looks like:
> cd /path/to/giant-svn-repo
> git checkout -b my_django_project
> mkdir /path/to/giant-svn-repo/djangoproject
> git remote add -f local_django_dev /path/to/git/django/project
> git merge -s ours --no-commit local_django_dev/master
> git read-tree --prefix=djangoproject -u local_django_dev/master
> git commit -m 'Merged local_django_dev into subdirectory djangoproject'
This works, but in addition to the contents of the Django git repo being in /path/to/giant-svn-repo/djangoproject, it is also in the main root of the repository tree!
project
├── README
├── REQUIREMENTS
├── __init__.py
├── fabfile.py
├── djangoproject
│ ├── README
│ ├── REQUIREMENTS
│ ├── __init__.py
│ ├── fabfile.py
│ ├── app1
│ ├── manage.py
│ ├── app2
│ ├── app3
│ ├── app4
│ └── project
├── app1
├── manage.py
├── app2
├── app3
├── app4
└── project
I seem to have polluted the parent directory where all the projects of the giant-svn-repo are located.
Is there any way I can stop this happening?
(BTW this has all been done in a test directory structure - I haven't corrupted anything yet. Just trying to figure out the best way to do it)
I am sure it is just a matter of (re)defining one more argument to either git merge, git read-tree or git commit, but I am pretty much at the limit of my git kung-fu.
Thanks in advance.
I am having some trouble setting the DJANGO_SETTINGS_MODULE for my Django project.
I have a directory at ~/dev/django-project. In this directory I have a virtual environment which I have set up with virtualenv, and also a django project called "blossom" with an app within it called "onora". Running tree -L 3 from ~/dev/django-project/ shows me the following:
.
├── Procfile
├── blossom
│ ├── __init__.py
│ ├── __init__.pyc
│ ├── fixtures
│ │ └── initial_data_test.yaml
│ ├── manage.py
│ ├── onora
│ │ ├── __init__.py
│ │ ├── __init__.pyc
│ │ ├── admin.py
│ │ ├── admin.pyc
│ │ ├── models.py
│ │ ├── models.pyc
│ │ ├── tests.py
│ │ └── views.py
│ ├── settings.py
│ ├── settings.pyc
│ ├── sqlite3-database
│ ├── urls.py
│ └── urls.pyc
├── blossom-sqlite3-db2
├── requirements.txt
└── virtual_environment
├── bin
│ ├── activate
│ ├── activate.csh
│ ├── activate.fish
│ ├── activate_this.py
│ ├── django-admin.py
│ ├── easy_install
│ ├── easy_install-2.7
│ ├── gunicorn
│ ├── gunicorn_django
│ ├── gunicorn_paster
│ ├── pip
│ ├── pip-2.7
│ ├── python
│ └── python2.7 -> python
├── include
│ └── python2.7 -> /System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7
└── lib
└── python2.7
I am trying to dump my data from the database with the command
django-admin.py dumpdata
My approach is to cd into ~/dev/django-project, run source virtual_environment/bin/activate, and then run django-admin.py dumpdata.
However, I am getting the following error:
ImportError: Settings cannot be imported, because environment variable DJANGO_SETTINGS_MODULE is undefined.
I did some googling and found this page: https://docs.djangoproject.com/en/dev/topics/settings/#designating-the-settings
which tells me that
When you use Django, you have to tell it which settings you're using.
Do this by using an environment variable, DJANGO_SETTINGS_MODULE. The
value of DJANGO_SETTINGS_MODULE should be in Python path syntax, e.g.
mysite.settings. Note that the settings module should be on the Python
import search path.
Following a suggestion at Setting DJANGO_SETTINGS_MODULE under virtualenv? I appended the lines
export DJANGO_SETTINGS_MODULE="blossom.settings"
echo $DJANGO_SETTINGS_MODULE
to virtual_environment/bin/activate. Now, when I run the activate command in order to activate the virtual environment, I get output reading:
DJANGO_SETTINGS_MODULE set to blossom.settings
This looks good to me, but now the problem I have is that running
django-admin.py dumpdata
returns the following error:
ImportError: Could not import settings 'blossom.settings' (Is it on sys.path?): No module named blossom.settings
What am I doing wrong? How can I check the sys.path? How is this supposed to work?
Thanks.
Don't run django-admin.py for anything other than the initial project creation. For everything after that, use manage.py, which takes care of finding the settings.
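For example, with your layout that means something like:
cd ~/dev/django-project
source virtual_environment/bin/activate
cd blossom                     # the folder that contains manage.py
python manage.py dumpdata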
I just encountered the same error, and eventually managed to work out what was going on (the big clue was (Is it on sys.path?) in the ImportError).
You need to add your project directory to PYTHONPATH — this is what the documentation means by
Note that the settings module should be on the Python import search path.
To do so, run
$ export PYTHONPATH=$PYTHONPATH:$PWD
from the ~/dev/django-project directory before you run django-admin.py.
You can add this command (replacing $PWD with the actual path to your project, i.e. ~/dev/django-project) to your virtualenv's source script. If you choose to advance to virtualenvwrapper at some point (which is designed for this kind of situation), you can add the export PY... line to the auto-generated postactivate hook script.
mkdjangovirtualenv automates this even further, adding the appropriate entry to the Python path for you, but I have not tested it myself.
On a Unix-like machine you can simply alias the virtualenv activation like this, and use the alias instead of typing it all every time:
.bashrc
alias cool='source /path_to_ve/bin/activate; export DJANGO_SETTINGS_MODULE=django_settings_folder.settings; cd path_to_django_project; export PYTHONPATH=$PYTHONPATH:$PWD'
My favourite alternative is passing the settings file as a runtime parameter to manage.py, using Python package syntax, e.g.:
python manage.py runserver --settings folder.filename
More info in the Django docs.
I know there are plenty of answers, but this one worked for me, just for the record.
Navigate to your .virtual_env folder, where all the virtual environments are.
Go to the environment folder specific to your project.
Append export DJANGO_SETTINGS_MODULE=<django_project>.settings to the environment's bin/activate script,
or export DJANGO_SETTINGS_MODULE=<django_project>.settings.local if you are using a separate settings file stored in a settings folder.
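For example (the environment path and project name here are placeholders), the line can be appended to the activate script like this:
echo 'export DJANGO_SETTINGS_MODULE=myproject.settings' >> ~/.virtual_env/myproject/bin/activate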
Yet another way to deal with this issue is to use the python-dotenv package and include PYTHONPATH and DJANGO_SETTINGS_MODULE in the .env file along with your other environment variables. Then modify your manage.py and wsgi.py to load them as stated in the instructions.
from dotenv import load_dotenv
load_dotenv()
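A minimal sketch of what the manage.py change can look like, assuming a .env file next to it defines DJANGO_SETTINGS_MODULE (and PYTHONPATH if you need it):
#!/usr/bin/env python
# manage.py (sketch): load .env before Django looks up DJANGO_SETTINGS_MODULE
import sys

from dotenv import load_dotenv

if __name__ == "__main__":
    load_dotenv()  # reads .env from the working directory into os.environ
    from django.core.management import execute_from_command_line
    execute_from_command_line(sys.argv)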
I had a similar error while working on a Windows machine. My problem was using the wrong debug configuration. Use Python: Django as your debug config option.
First, ensure you've exported/set DJANGO_SETTINGS_MODULE correctly, as described here.