Go build: "Cannot find package" (even though GOPATH is set) - build

Even though I have GOPATH properly set, I still can't get "go build" or "go run" to find my own packages. What am I doing wrong?
$ echo $GOROOT
/usr/local/go
$ echo $GOPATH
/home/mitchell/go
$ cat ~/main.go
package main
import "foobar"
func main() { }
$ cat /home/mitchell/go/src/foobar.go
package foobar
$ go build main.go
main.go:3:8: import "foobar": cannot find package

It does not work because your foobar.go source file is not in a directory called foobar. go build and go install try to match directories, not source files.
Set $GOPATH to a valid directory, e.g. export GOPATH="$HOME/go"
Move foobar.go to $GOPATH/src/foobar/foobar.go and building should work just fine.
Additional recommended steps:
Add $GOPATH/bin to your $PATH by: PATH="$GOPATH/bin:$PATH"
Move main.go to a subfolder of $GOPATH/src, e.g. $GOPATH/src/test
go install test should now create an executable in $GOPATH/bin that can be called by typing test into your terminal.
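For reference, a minimal sketch of the corrected layout (the exported function name Hello is only an illustration, not part of the original question):
$GOPATH/src/foobar/foobar.go:
package foobar

// Hello is exported (capitalized), so other packages can call foobar.Hello().
func Hello() string { return "hello from foobar" }
$GOPATH/src/test/main.go:
package main

import (
    "fmt"

    "foobar"
)

func main() {
    fmt.Println(foobar.Hello())
}
Running go install test (or go build from inside $GOPATH/src/test) should now succeed.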

Although the accepted answer is still correct about needing to match directories with package names, you really should migrate to Go modules instead of using GOPATH. New users who encounter this problem may be confused by the mentions of GOPATH (as I was), which are now outdated. So, I will try to clear up this issue and provide guidance on preventing it when using Go modules.
If you're already familiar with Go modules and are experiencing this issue, skip down to my more specific sections below that cover some of the Go conventions that are easy to overlook or forget.
This guide teaches about Go modules: https://golang.org/doc/code.html
Project organization with Go modules
Once you migrate to Go modules, as mentioned in that article, organize the project code as described:
A repository contains one or more modules. A module is a collection of
related Go packages that are released together. A Go repository
typically contains only one module, located at the root of the
repository. A file named go.mod there declares the module path: the
import path prefix for all packages within the module. The module
contains the packages in the directory containing its go.mod file as
well as subdirectories of that directory, up to the next subdirectory
containing another go.mod file (if any).
Each module's path not only serves as an import path prefix for its
packages, but also indicates where the go command should look to
download it. For example, in order to download the module
golang.org/x/tools, the go command would consult the repository
indicated by https://golang.org/x/tools (described more here).
An import path is a string used to import a package. A package's
import path is its module path joined with its subdirectory within the
module. For example, the module github.com/google/go-cmp contains a
package in the directory cmp/. That package's import path is
github.com/google/go-cmp/cmp. Packages in the standard library do not
have a module path prefix.
You can initialize your module like this:
$ go mod init github.com/mitchell/foo-app
Your code doesn't need to be located on github.com for it to build. However, it's a best practice to structure your modules as if they will eventually be published.
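That command creates a go.mod file at the root of your project; a minimal sketch of its contents (the Go version line is just an example):
module github.com/mitchell/foo-app

go 1.14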
Understanding what happens when trying to get a package
There's a great article here that talks about what happens when you try to get a package or module: https://medium.com/rungo/anatomy-of-modules-in-go-c8274d215c16
It discusses where the package is stored and will help you understand why you might be getting this error if you're already using Go modules.
Ensure the imported function has been exported
Note that if you're having trouble accessing a function from another file, you need to ensure that you've exported your function. As described in the first link I provided, a function must begin with an upper-case letter to be exported and made available for importing into other packages.
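A minimal sketch (the package and function names here are only illustrative):
utils/utils.go:
package utils

// MyFunction starts with an upper-case letter, so it is exported and can be
// called from other packages as utils.MyFunction().
func MyFunction() string { return "exported" }

// helper starts with a lower-case letter, so it is unexported and only
// visible inside the utils package.
func helper() string { return "unexported" }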
Names of directories
Another critical detail (as was mentioned in the accepted answer) is that names of directories are what define the names of your packages. (Your package names need to match their directory names.) You can see examples of this here: https://medium.com/rungo/everything-you-need-to-know-about-packages-in-go-b8bac62b74cc
With that said, the file containing your main method (i.e., the entry point of your application) is sort of exempt from this requirement.
As an example, I had problems with my imports when using a structure like this:
/my-app
├── go.mod
└── /src
    ├── main.go
    └── /utils
        └── utils.go
I was unable to import the code in utils into my main package.
However, once I put main.go into its own subdirectory, as shown below, my imports worked just fine:
/my-app
├── go.mod
└── /src
    ├── /app
    │   └── main.go
    └── /utils
        └── utils.go
In that example, my go.mod file looks like this:
module git.mydomain.com/path/to/repo/my-app
go 1.14
When I saved main.go after adding a reference to utils.MyFunction(), my IDE automatically pulled in the reference to my package like this:
import "git.mydomain.com/path/to/repo/my-app/src/my-app"
(I'm using VS Code with the Golang extension.)
Notice that the import path included the subdirectory to the package.
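For illustration, main.go in that layout might then look roughly like this (utils.MyFunction() is just the placeholder name used above):
src/app/main.go:
package main

import "git.mydomain.com/path/to/repo/my-app/src/utils"

func main() {
    utils.MyFunction()
}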
Dealing with a private repo
If the code is part of a private repo, you need to run a git command to enable access; otherwise, you can encounter other errors. This article explains how to do that for private GitHub, Bitbucket, and GitLab repos: https://medium.com/cloud-native-the-gathering/go-modules-with-private-git-repositories-dfe795068db4
This issue is also discussed here: What's the proper way to "go get" a private repository?
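As a rough sketch of what that setup typically involves (the host name is a placeholder; adjust it to your private server):
go env -w GOPRIVATE=git.mydomain.com
git config --global url."git@git.mydomain.com:".insteadOf "https://git.mydomain.com/"
The first command tells the go tool not to use the public module proxy and checksum database for that host; the second makes git fetch it over SSH instead of HTTPS.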

I solved this problem by setting GO111MODULE to off:
go env -w GO111MODULE=off
Note: setting GO111MODULE=off will turn off the latest GO Modules feature.
Reference: Why is GO111MODULE everywhere, and everything about Go Modules (updated with Go 1.17)
GO111MODULE with Go 1.16
As of Go 1.16, the default behavior is GO111MODULE=on, meaning that if
you want to keep using the old GOPATH way, you will have to force Go
not to use the Go Modules feature:
export GO111MODULE=off

In recent Go versions (1.14 onwards), when the module has a vendor directory and its go.mod declares go 1.14 or later, the go command defaults to -mod=vendor, so we have to run go mod vendor before building or running.
So after doing go mod vendor, if we try to build, we won't face this issue.
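In practice that is roughly (run from the module root):
go mod vendor   # copies the module's dependencies into ./vendor
go build ./...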

Edit: since you meant GOPATH, see fasmat's answer (upvoted)
As mentioned in "How do I make go find my package?", you need to put a package xxx in a directory xxx.
See the Go language spec:
package math
A set of files sharing the same PackageName form the implementation of a package.
An implementation may require that all source files for a package inhabit the same directory.
The Code organization mentions:
When building a program that imports the package "widget" the go command looks for src/pkg/widget inside the Go root, and then—if the package source isn't found there—it searches for src/widget inside each workspace in order.
(a "workspace" is a path entry in your GOPATH: that variable can reference multiple paths for your 'src, bin, pkg' to be)
(Original answer)
You should also set GOPATH to ~/go, not to GOROOT, as illustrated in "How to Write Go Code".
The Go path is used to resolve import statements. It is implemented by and documented in the go/build package.
The GOPATH environment variable lists places to look for Go code.
On Unix, the value is a colon-separated string.
On Windows, the value is a semicolon-separated string.
On Plan 9, the value is a list.
That is different from GOROOT:
The Go binary distributions assume they will be installed in /usr/local/go (or c:\Go under Windows), but it is possible to install them in a different location.
If you do this, you will need to set the GOROOT environment variable to that directory when using the Go tools.
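For example, in a Unix shell (paths here are illustrative; adjust them to your installation and workspace):
export GOROOT=/usr/local/go   # only needed for a non-default install location
export GOPATH=$HOME/go        # or a colon-separated list, e.g. $HOME/go:$HOME/work
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin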

TL;DR: Follow Go conventions! (lesson learned the hard way) Check for old Go versions, remove them, and install the latest.
For me the solution was different. I worked on a shared Linux server, and after verifying my GOPATH and other environment variables several times it still didn't work. I encountered several errors, including 'Cannot find package' and 'unrecognized import path'. After reinstalling by the instructions on golang.org (including the uninstall part), I still encountered problems.
It took me some time to realize that there was still an old version that hadn't been uninstalled (running go version, then which go again... DAHH), which got me to this question and finally solved it.

Running go env -w GO111MODULE=auto worked for me

Without editing GOPATH or anything else, the following just worked in my case:
/app
├── main.go
└── /utils
    └── utils.go
Import packages where needed. This can be unintuitive, because the import path isn't relative to the app path; you need to add the module name (app) to the package path too:
main.go:
package main

import (
    "app/utils"
)
Being in app directory, run:
go mod init app
go get <package/xxx>
go build main.go / go run main.go
You should be good to go.
In my environment, GOPATH = /home/go and the app path = /home/projects/app.
Create a proper go.mod and go.sum with go mod init app (delete the old ones first).
After that, resolve all dependencies (such as missing packages) with go get github.com/example/package.
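For completeness, a rough sketch of the other files in that layout (the Go version and the function name are illustrative assumptions, not from the original answer):
go.mod:
module app

go 1.16
utils/utils.go:
package utils

// SomeHelper is exported (capitalized), so main can call utils.SomeHelper().
func SomeHelper() string { return "hello from utils" }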

In simple words, you can solve the import problem even with GO111MODULE=on by using the following import syntax:
import <your_module_name>/<package_name>
your_module_name -> the module name, found on the first line of the module's go.mod file.
example: github.com/nikhilg-hub/todo/ToDoBackend
package_name -> the path to your package within the module.
example: orm
So the import statement would look like:
import "github.com/nikhilg-hub/todo/ToDoBackend/orm"
In my view, we need to specify the module name plus the package path because the same package name may be needed in two or more different modules.
Note: even if you are importing a package from the same module, you still need to specify the full import path as above.

If you have a valid $GOROOT and $GOPATH but are developing outside of them, you might get this error if the package (yours or someone else's) hasn't been downloaded.
If that's the case, try go get -d (-d flag prevents installation) to ensure the package is downloaded before you run, build or install.
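For example (the package path is just a placeholder):
go get -d github.com/example/somepkg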

GOROOT should be set to your installation directory (/usr/local/go).
GOPATH should be set to your working directory (something like /home/username/project_folder).
GOPATH should not be set to GOROOT as your own project may need to install packages, and it's not recommended to have those packages in the Go installation folder. Check out this link for more.

For me, none of the above solutions worked, but my Go version was not the latest one. I downloaded the latest version and replaced the older one on my macOS installation, and after that it worked perfectly.

I had a similar problem when building a Docker image:
[1/3] STEP 9/9: RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -o manager main.go
api/v1alpha1/XXX.go:5:2: cannot find package "." in:
/workspace/client/YYY/YYY.go
This only appeared when building the Dockerfile, building locally worked fine.
The problem turned out to be a missing statement in my Dockerfile:
COPY client/ client/

I do not understand why this happens; we should be able to import a file from wherever it sits in the tree, but I have discovered that if a package is nested more than one level deep, this throws an error.
package main

import (
    "fmt"
    "indexer/source/testPackage3" // this will show a GOROOT error.
    "indexer/testPackage"
    "indexer/testPackage2"
)

func main() {
    fmt.Println("Agile content indexer -")
    fmt.Println(testPackage.Greeting())
    fmt.Println(testPackage2.Greeting())
    fmt.Println(testPackage3.Greeting())
}
├── testPackage2
│   ├── entry2.go
│   └── source
│       └── entry3.go
To conclude, I just want to tell you that the entry3.go file does not work when imported into my main file (main.go in this case). I do not understand why, so I have simply chosen to use only one level of folder depth for the packages I need to export.
entry.go and entry2.go work perfectly when imported, but entry3.go does not.
In addition, both the directory and the name of the package must be the same so that they work properly when importing them.

Have you tried adding the absolute directory of Go to your PATH?
export PATH=$PATH:/directory/to/go/

Related

Include C++ library in Bazel project

I'm currently messing around with Google's Mediapipe, which uses Bazel as a build tool. The folder has the following structure:
mediapipe
├ mediapipe
| └ examples
|   └ desktop
|     └ hand_tracking
|       └ BUILD
├ calculators
| └ tensor
|   ├ tensor_to_landmarks_calculator.cc
|   └ BUILD
└ WORKSPACE
There are a bunch of other files in there as well, but they are rather irrelevant to this problem. They can be found in the git repo linked above if you need them.
I'm at a stage where I can build and run the hand_tracking example without any problems. Now, I want to include the cereal library in the build, so that I can use #include <cereal/archives/binary.hpp> from within tensors_to_landmarks_calculator.cc. The cereal library is located at C:\\cereal, but can be moved to other locations if it simplifies the process.
Basically, I'm looking for the Bazel equivalent of adding a path to Additional Include Directories in Visual Studio.
How would I need to modify the WORKSPACE and BUILD files in order to include the library in my project, assuming they are in a default state?
Unfortunately, this official doc page only covers one-file libraries, and other implementations kept giving me File could not be found errors at build time.
Thanks in advance!
First you have to tell Bazel about the code living "outside" the
workspace area. It needs to know how to find it, how to build it, and
what to call it, etc. These are known as remote repositories. They
can be local to your disk (outside the Bazel workspace area), or
actually remote on another machine or server, like github. The
important thing is it must be described to Bazel with enough
information that it can use.
As most third party code does not come with BUILD.bazel files, you may
need to provide one yourself and tell Bazel "use this as if it was a
build file found in that code."
For a local directory outside your bazel project
Add a repository rule like this to your WORKSPACE file:
# This could go in your WORKSPACE file
# (But prefer the http_archive solution below)
new_local_repository(
    name = "cereal",
    build_file = "//third_party:cereal.BUILD.bazel",
    path = "<path-to-directory>",
)
("new_local_repository" is built-in to bazel)
Somewhere under your Bazel WORKSPACE area you'll also need to make a cereal.BUILD.bazel file and export it from the package. I chose a directory called //third_party, but you can put it anywhere else and name it anything else, as long as the repository rule provides a proper bazel label for it. The contents might look like this:
# contents of //third_party/cereal.BUILD.bazel
cc_library(
    name = "cereal-lib",
    srcs = glob(["**/*.hpp"]),
    includes = ["include"],
    visibility = ["//visibility:public"],
)
Bazel will pretend this was the BUILD file that "came with" the remote repository, even though it's actually local to your repo. When Bazel fetches this remote repository code it copies it, and the BUILD file you provide, into its external area for caching, building, etc.
To make //third_party:cereal.BUILD.bazel a valid target in your directory, add a BUILD.bazel file to that directory:
# contents of //third_party/BUILD.bazel
exports_files([
    "cereal.BUILD.bazel",
])
Without exporting it, you won't be able to refer to the buildfile from your repository rule.
Local disk repositories aren't very portable since people may have
different versions installed and it's not very hermetic (making it
hard to share caches of builds with others), and it requires they put
them in the same place, and that kind of setup can be problematic. It
also will fail when you mix operating systems, etc, if you refer to it as "C:..."
Downloading a tarball of the library from github, for example
A better way is to download a fixed version from github, for example,
and let Bazel manage it for you in its external area:
# http_archive must be loaded before use in the WORKSPACE file
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "cereal",
    sha256 = "329ea3e3130b026c03a4acc50e168e7daff4e6e661bc6a7dfec0d77b570851d5",
    urls = ["https://github.com/USCiLab/cereal/archive/refs/tags/v1.3.0.tar.gz"],
    build_file = "//third_party:cereal.BUILD.bazel",
)
The sha256 is important: Bazel downloads the archive, computes its sha256, compares it to what you specified, and caches it. In the future, it won't re-download it if the local file's sha matches.
Notice that it again says build_file = "//third_party:cereal.BUILD.bazel"; all the same things from new_local_repository above apply here. Make sure you provide the build file for it to use, and export it from where you put it.
To test that the remote repository is set up OK, on the command line issue:
bazel fetch @cereal//:cereal-lib
I sometimes have to clear it out to make it try again, if my rule isn't quite right, but the "bad" version sticks around.
bazel clean --expunge
will remove it, but might be overkill.
Finally
We have:
defined a remote repository called @cereal
defined a target in it called cereal-lib
the target is thus @cereal//:cereal-lib
To use it
Go to the package where you would like to include cereal, and add a
dependency on this repository to the rule that builds the c++ code that would like to use cereal. That is, in your case, the BUILD rule that causes tensor_to_landmarks_calculator.cc to get built, add:
deps = [
    "@cereal//:cereal-lib",
    ...
]
And then in your c++ code:
#include "cereal/cereal.hpp"
That should do it.

Import package within sub-package [duplicate]

Assume I have the following files,
pkg/
pkg/__init__.py
pkg/main.py # import string
pkg/string.py # print("Package's string module imported")
Now, if I run main.py, it says "Package's string module imported".
This makes sense and it works as per this statement in this link:
"it will first look in the package's directory"
Assume I modified the file structure slightly (added a core directory):
pkg/
pkg/__init__.py
pkg/core/__init__.py
pkg/core/main.py # import string
pkg/string.py # print("Package's string module imported")
Now, if I run python core/main.py, it loads the built-in string module.
In the second case too, if it has to comply with the statement "it will first look in the package's directory" shouldn't it load the local string.py because pkg is the "package directory"?
My sense of the term "package directory" is specifically the root folder of a collection of folders with __init__.py. So in this case, pkg is the "package directory". It is applicable to main.py and also files in sub-directories like core/main.py because it is part of this "package".
Is this technically correct?
PS: What follows after # in the code snippet is the actual content of the file (with no leading spaces).
Packages are directories with a __init__.py file, yes, and are loaded as a module when found on the module search path. So pkg is only a package that you can import and treat as a package if the parent directory is on the module search path.
But by running the pkg/core/main.py file as a script, Python added the pkg/core directory to the module search path, not the parent directory of pkg. You do have a __init__.py file on your module search path now, but that's not what defines a package. You merely have a __main__ module, there is no package relationship to anything else, and you can't rely on implicit relative imports.
You have three options:
Do not run files inside packages as scripts. Put a script file outside of your package, and have that import your package as needed. You could put it next to the pkg directory, or make sure the pkg directory is first installed into a directory already on the module search path, or by having your script calculate the right path to add to sys.path.
Use the -m command line switch to run a module as if it is a script. If you use python -m pkg.core Python will look for a __main__.py file and run that as a script. The -m switch will add the current working directory to your module search path, so you can use that command when you are in the right working directory and everything will work. Or have your package installed in a directory already on the module search path.
Have your script add the right directory to the module search path (based on os.path.abspath(__file__) to get a path to the current file). Take into account that your script is always named __main__, and importing pkg.core.main would add a second, independent module object; you'd have two separate namespaces.
I also strongly advise against using implicit relative imports. You can easily mask top-level modules and packages by adding a nested package or module with the same name. pkg/time.py would be found before the standard-library time module if you tried to use import time inside the pkg package. Instead, use the Python 3 model of explicit relative module references; add from __future__ import absolute_import to all your files, and then use from . import <name> to be explicit as to where your module is being imported from.

Where is settings.py supposed to be with Eve?

I keep getting an exception stating
DOMAIN dictionary missing or wrong.
My folder structure looks like this
.
└──myapp
├── app.py
├── app.wsgi
└── settings.py
So Eve is saying that the DOMAIN dictionary cannot be found, even though it's defined in the settings file. I'm running this app in a virtualenv and I have my apache virtual host pointed to the wsgi file.
Hit this too, and after a bit of rooting around this is the conclusion I've reached.
Eve has, to me, an unintuitive method of finding the settings file. You can see it in the code. It does the following:
Gets the name of the settings file from the environment variable 'EVE_SETTINGS' if set, otherwise from the string passed as the settings= keyword argument to the Eve() constructor, default 'settings.py'.
Looks in the directory of sys.argv[0] to see if the file is there. It seems odd to me that this is useful enough to be the first option.
Walks every directory in sys.path to try and find a settings file. Which means if any module you have installed in your environment has a file named settings.py anywhere in it, Eve will try to use that.
If Eve doesn't find one, then it doesn't use one.
So, my problem was that I was running Eve from a virtualenv that also had django installed. Django has (at least one) sample 'settings.py' file in it. So Eve was trying to use that. And it was crashing. Thanks Eve.
Also note that it doesn't look in the current directory for a settings file at all (unless it's in your PYTHONPATH). This seems a bit nuts to me.
I have raised an issue on Eve's github page.

django dev on mac having to explicitly name full path

After a long time away from an app I wrote in Django and didn't complete, I've come back to it on a new Mac.
I'm struggling to get the code to refer to the apps and the files within them without the explicit path. For instance:
from myproject.app.file import object
Whereas I remember not having to use myproject every time.
Is this something that has changed? I seem to remember being able to add to the path in manage.py, which is called every time you run the dev server, but this hasn't worked this time.
sys.path.append('/path/to/myproject')
Should that fix the issue I'm having?
I started with a simple answer and it grew into more details on how to add subdirectories of your project to python path. Maybe a bit off-topic, but it could be useful to you so I'm pushing the post button anyway.
I usually have a bunch of small re-usable apps of mine I keep inside my project tree, because I don't want them to grow into independent modules. My project tree will look like this:
manage.py
myproject/apps
myproject/libs
myproject/settings
...
Still, by default, Django only adds the project root to python path. Yet it makes no sense in my opinion to have apps load modules with full path:
from myproject.apps.author.models import Author
from myproject.libs.rest_filters import filters
That's both way too verbose, and it breaks reusability as I only use absolute imports. Not to mention if I someday build an actual python package out of some of the libs, it will break.
So, I took the following steps. I added the relevant folders to the path:
# in manage.py
import os
import sys

root = os.path.dirname(__file__)
sys.path.append(os.path.realpath(os.path.join(root, 'myproject', 'apps')))
sys.path.append(os.path.realpath(os.path.join(root, 'myproject', 'libs')))
But you must ensure those packages cannot be loaded from the root of the project, or you will have odd issues as python would load another copy of the module. For instance, isinstance(libs.foo.bar(), myproject.libs.foo.bar) == False
It's not hard though: just remove __init__.py from the folders you add to the path. That will make sure they cannot be descended into from the project.
Also, Django's discover runner will not descend into those paths unless you specify them manually. That may be fine with you (if every module has its own test suite). Or you can extend the runner, so it knows about this: sample code.

Do I need to update the path when installing a module?

So I'm trying to install a module, specifically xlutils. I've read through the resources that I've linked at the bottom, but none of those resources have allowed me to successfully install and import the module. I'm running Windows 8 and using Python 2.7.
I downloaded the .tar.gz file containing xlutils, and unpacked it to C:\Python which was then a .tar file, so I unpacked that to the same folder. This created a folder, xlutils, which looked like it contained what I need. I also read somewhere that these should be stored in site-packages, so I moved it there.
But when I run import commands, they don't work, just tell me the module couldn't be found. When I look at the path browser, it doesn't see the folder, but I'm certain it's in there. That leads me to wonder, do I need to do something to manually update what the path browser can view?
Note that I've also already tried going to the command line, navigating to the folder containing the module, and typing python setup.py install but that just tells me that the term "python" is not found. In general, my command line always does this though. Usually I have to type .\python instead to run Python from the command line, but I also tried doing that here (i.e navigating to the folder and typing .\python setup.py install but it still says the same thing).
Also note that I can import numpy and scipy just fine, and I can see them in the path browser--not sure why those work while this one doesn't.
Resources I've already read but that haven't solved my problem:
(... Well, I tried providing the resources I've already viewed, but can't so many links with such low reputation. But basically, I've read the first ten links on a Google search and two or three past Stack questions and answers.)
Solutions I see:
You can use the absolute path C:\Python27\python.exe setup.py install
You can add the Python directory C:\Python27\ to your path variable before running python setup.py install