I have two Go files with different build constraints in their headers.
constants_production.go:
// +build production,!staging

package main

const (
    URL = "production"
)
constants_staging.go:
// +build staging,!production

package main

const (
    URL = "staging"
)
main.go:
package main

import "fmt"

func main() {
    fmt.Println(URL)
}
When I do a go install -tags "staging", sometimes it prints production and sometimes it prints staging. Similarly, when I do go install -tags "production",...
How do I get a consistent output on every build? How do I make it print staging when I specify staging as a build flag? How do I make it print production when I specify production as a build flag? Am I doing something wrong here?
go build and go install will not rebuild the package (binary) if it looks like nothing has changed -- and it's not sensitive to changes in command-line build tags.
One way to see this is to add -v to print the packages as they are built:
$ go install -v -tags "staging"
my/server
$ go install -v -tags "production"
(no output)
You can force a rebuild by adding the -a flag, which tends to be overkill:
$ go install -a -v -tags "production"
my/server
...or by touching a server source file before the build:
$ touch main.go
$ go install -tags "staging"
...or manually remove the binary before the build:
$ rm .../bin/server
$ go install -tags "production"
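Note: Go 1.10 and later key the build cache on build flags such as -tags, so a current toolchain rebuilds whenever the tags change and none of these workarounds should be needed; both of the following should then print the package name and give consistent output:
$ go install -v -tags "staging"
my/server
$ go install -v -tags "production"
my/server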
Related
I'm trying to use Docker on Windows to create a GitLab Runner to build a C++ application. It works so far, but I guess there are better approaches. Here's what I did:
Here's my initial Dockerfile:
# escape=`
# (the escape directive is required for the backtick line continuations below)
FROM mcr.microsoft.com/windows/servercore:2004
# Restore the default Windows shell for correct batch processing.
SHELL ["cmd", "/S", "/C"]
# Download the Build Tools bootstrapper.
ADD https://aka.ms/vs/16/release/vs_buildtools.exe C:\TEMP\vs_buildtools.exe
# Install Build Tools with the Microsoft.VisualStudio.Workload.VCTools workload and the components needed for the C++/CMake build.
RUN C:\TEMP\vs_buildtools.exe --quiet --wait --norestart --nocache `
--installPath C:\BuildTools `
--add Microsoft.VisualStudio.Workload.VCTools `
--add Microsoft.VisualStudio.Component.VC.Tools.x86.x64 `
--add Microsoft.VisualStudio.Component.VC.CMake.Project `
--add Microsoft.VisualStudio.Component.Windows10SDK.19041 `
--locale en-US `
|| IF "%ERRORLEVEL%"=="3010" EXIT 0
# Define the entry point for the docker container.
# This entry point starts the developer command prompt and launches the PowerShell shell.
ENTRYPOINT ["cmd","/k", "C:\\BuildTools\\VC\\Auxiliary\\Build\\vcvars64.bat", "&&", "powershell.exe", "-NoLogo", "-ExecutionPolicy", "Bypass"]
And my .gitlab-ci.yml looks like this:
build Docker Windows:
  image: buildtools2019_core
  stage: build
  tags:
    - win-docker
  script:
    - mkdir build
    - cd build
    - cmake -DCMAKE_BUILD_TYPE=Release -DenableWarnings=ON -G Ninja -DCMAKE_MAKE_PROGRAM=Ninja ../
    - ninja
This works so far and everything builds correctly. The main problem, however, is that if the build fails, the job succeeds anyway. I suspect that my entrypoint is wrong: powershell is executed inside of cmd, and only the exit code of cmd is checked, which always signals success.
So I tried to use powershell directly as the entrypoint. I need to set the environment variables via vcvars64.bat, but that is not trivial. I tried to execute the "Developer PowerShell for VS 2019" shortcut, but I can't execute the link in the entrypoint directly, and the link looks like this:
"C:\Windows\SysWOW64\WindowsPowerShell\v1.0\powershell.exe -noe -c "&{Import-Module """C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\Common7\Tools\Microsoft.VisualStudio.DevShell.dll"""; Enter-VsDevShell 6f66c5f6}"
I don't quite understand what it does, and the hash also varies from installation to installation. Simply using this as the entrypoint didn't work either.
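As far as I can tell, that command imports the Developer Shell module and enters the developer environment for the VS instance identified by that hash. The cmdlet also appears to accept the install path instead of the instance id, which would avoid the per-installation hash; an untested sketch:
Import-Module "C:\BuildTools\Common7\Tools\Microsoft.VisualStudio.DevShell.dll"
Enter-VsDevShell -VsInstallPath "C:\BuildTools" -SkipAutomaticLocation -DevCmdArguments "-arch=x64"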
I then tried to use the Invoke-Environment Script taken from "https://github.com/nightroman/PowerShelf/blob/master/Invoke-Environment.ps1". This allows me to execute the .bat file from powershell like this:
Invoke-Environment C:\\BuildTools\\VC\\Auxiliary\\Build\\vcvars64.bat
But to do this I need to add this function to my profile, as far as I understood. I did this by copying it to "C:\Windows\system32\WindowsPowerShell\v1.0\profile.ps1" so that it would be accessible by all users.
In my Docker file I added:
COPY Invoke-Environment.ps1 C:\Windows\system32\WindowsPowerShell\v1.0\profile.ps1
and replaced the entrypoint with:
ENTRYPOINT ["C:\\Windows\\system32\\WindowsPowerShell\\v1.0\\powershell.exe", "-NoExit", "-NoLogo", "-ExecutionPolicy", "Bypass", "Invoke-Environment C:\\BuildTools\\VC\\Auxiliary\\Build\\vcvars64.bat"]
But that didn't initialize the environment variables correctly, and Invoke-Environment is not found by the gitlab-runner. My last resort was to write a small script (Init64.ps1) that defines the Invoke-Environment function and calls it with vcvars64.bat:
function Invoke-Environment {
    param
    (
        # Any cmd shell command, normally a configuration batch file.
        [Parameter(Mandatory=$true)]
        [string] $Command
    )

    $Command = "`"" + $Command + "`""
    # Run the command in cmd.exe, discard its own output, then dump the
    # resulting environment with 'set' and import every VAR=VALUE line
    # into the current PowerShell session.
    cmd /c "$Command > nul 2>&1 && set" | . { process {
        if ($_ -match '^([^=]+)=(.*)') {
            [System.Environment]::SetEnvironmentVariable($matches[1], $matches[2])
        }
    }}
}

Invoke-Environment C:\BuildTools\VC\Auxiliary\Build\vcvars64.bat
I copied this into the image via:
COPY Init64.ps1 Init64.ps1
and used this entrypoint:
ENTRYPOINT ["C:\\Windows\\system32\\WindowsPowerShell\\v1.0\\powershell.exe"]
In my build script I then need to call it manually to set up the variables:
build Docker Windows:
  image: buildtools2019_core
  stage: build
  tags:
    - win-docker
  script:
    - C:\Init64.ps1
    - mkdir build
    - cd build
    - cmake -DCMAKE_BUILD_TYPE=Release -DenableWarnings=ON -G Ninja -DCMAKE_MAKE_PROGRAM=Ninja ../
    - ninja
Now everything works as intended: the build runs and the job only succeeds if the build succeeds.
However, I would prefer to set up my environment in the entrypoint so that I don't have to do this in my build script.
Is there a better way to do this? Also feel free to suggest any improvements I could make.
OK, after some struggling, here is my entry.bat that correctly loads the environment and propagates the error level/return value:
REM Load the environment
call C:\BuildTools\VC\Auxiliary\Build\vcvars64.bat
REM If there are no parameters, start an interactive cmd
IF [%1] == [] GOTO NOCALL
REM If there are parameters, run them via cmd /S /C so it exits directly
cmd /S /C "%*"
exit %errorlevel%
:NOCALL
cmd
exit %errorlevel%
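For completeness, the Dockerfile then points the entrypoint at this batch file; something like the following (copying it to C:\entry.bat is my choice, and the cmd wrapper is there because a .bat is not directly executable in exec form):
COPY entry.bat C:\entry.bat
ENTRYPOINT ["cmd", "/S", "/C", "C:\\entry.bat"]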
The question
I'm trying to write a package to install libmonome, a toolkit for using the monome hardware (OSC controllers with LEDs, mostly for music).
My efforts are based on these instructions for building libmonome from source. Note that those instructions use waf, not make.
My hello-world packages that use make build successfully. But my libmonome package, which tries to use the Python-based waf tool analogously, is not building:
[jeff#jbb-dell:~/nix/jbb-config/custom-packages/libmonome]$ nix-build
these derivations will be built:
/nix/store/kkf4c8l0njqdapnm2qgk6ffmybmafrpv-libmonome.drv
building '/nix/store/kkf4c8l0njqdapnm2qgk6ffmybmafrpv-libmonome.drv'...
Error: Cannot unpack waf lib into /nix/store/dk3pnwg7z9q14f4yj35y2kaqdmahnhhh-libmonome/.waf-2.0.17-6b308e91b5eb321c61bd5469cd6d43aa
Move waf in a writable directory
builder for '/nix/store/kkf4c8l0njqdapnm2qgk6ffmybmafrpv-libmonome.drv' failed with exit code 1
error: build of '/nix/store/kkf4c8l0njqdapnm2qgk6ffmybmafrpv-libmonome.drv' failed
[jeff#jbb-dell:~/nix/jbb-config/custom-packages/libmonome]$
Some maybe-helpful, maybe-redundant information
If you're cloning the repo, note that you'll need to fetch submodules to get all the libmonome code. Here's one way to do that:
git clone --recurse-submodules https://github.com/JeffreyBenjaminBrown/nixos-experiments
git checkout c20581f839f8e0fb39b2762baeea7d0a7ab10783
I already put absolute links to my code above, but in case you would rather see that code on this page, here is my default.nix file:
{...}:
with (import <nixpkgs> {});
derivation {
  name = "libmonome";
  builder = "${bash}/bin/bash";
  args = [ ./builder.sh ];
  buildInputs = [
    git
    coreutils
    liblo
    python2
  ];
  # I would like to use fetchGit but haven't gotten it to work.
  # src = fetchGit {
  #   url = "https://github.com/monome/libmonome.git";
  # };
  repo = ./libmonome;
  system = builtins.currentSystem;
}
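As an aside on the commented-out fetchGit: builtins.fetchGit takes a plain url attribute and optionally a rev to pin a commit; an untested sketch (the rev value is a placeholder):
src = builtins.fetchGit {
  url = "https://github.com/monome/libmonome.git";
  # rev = "<commit hash to pin>";
};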
and here is the builder.sh script it calls:
set -e # Exit the build on any error
unset PATH # because it's initially set to a non-existent path
for p in $buildInputs; do
    export PATH=$p/bin${PATH:+:}$PATH
done
cd $repo
# I've tried with python2 and python3
python2 ./waf configure
python2 ./waf
sudo python2 ./waf install
I am trying to build a Freeplane derivation based on the FreeMind one; see https://github.com/razvan-panda/nixpkgs/blob/freeplane/pkgs/applications/misc/freeplane/default.nix
{ stdenv, fetchurl, jdk, jre, gradle }:

stdenv.mkDerivation rec {
  name = "freeplane-${version}";
  version = "1.6.13";

  src = fetchurl {
    url = "mirror://sourceforge/project/freeplane/freeplane%20stable/freeplane_src-${version}.tar.gz";
    sha256 = "0aabn6lqh2fdgdnfjg3j1rjq0bn4d1947l6ar2fycpj3jy9g3ccp";
  };

  buildInputs = [ jdk gradle ];

  buildPhase = "gradle dist";

  installPhase = ''
    mkdir -p $out/{bin,nix-support}
    cp -r ../bin/dist $out/nix-support
    sed -i 's/which/type -p/' $out/nix-support/dist/freeplane.sh

    cat >$out/bin/freeplane <<EOF
    #! /bin/sh
    JAVA_HOME=${jre} $out/nix-support/dist/freeplane.sh
    EOF
    chmod +x $out/{bin/freeplane,nix-support/dist/freeplane.sh}
  '';

  meta = with stdenv.lib; {
    description = "Mind-mapping software";
    homepage = https://www.freeplane.org/wiki/index.php/Home;
    license = licenses.gpl2Plus;
    platforms = platforms.linux;
  };
}
During the gradle build step it is throwing the following error:
building path(s) ‘/nix/store/9dc1x2aya5p8xj4lq9jl0xjnf08n7g6l-freeplane-1.6.13’
unpacking sources
unpacking source archive /nix/store/c0j5hgpfs0agh3xdnpx4qjy82aqkiidv-freeplane_src-1.6.13.tar.gz
source root is freeplane-1.6.13
setting SOURCE_DATE_EPOCH to timestamp 1517769626 of file freeplane-1.6.13/gitinfo.txt
patching sources
configuring
no configure script, doing nothing
building

FAILURE: Build failed with an exception.

* What went wrong:
Failed to load native library 'libnative-platform.so' for Linux amd64.

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.

builder for ‘/nix/store/id4vfk3r6fd4zpyb15dq9xfghf342qaa-freeplane-1.6.13.drv’ failed with exit code 1
error: build of ‘/nix/store/id4vfk3r6fd4zpyb15dq9xfghf342qaa-freeplane-1.6.13.drv’ failed
Running gradle dist from the terminal works fine. I'm guessing that maybe one of the globally installed Nix packages provides a fix for the issue and is not visible during the build.
I searched a lot but couldn't find any working solution. For example, removing the ~/.gradle folders didn't help.
Update
To reproduce the issue, just git clone https://github.com/razvan-panda/nixpkgs, check out the freeplane branch and run nix-build -A freeplane in the root of the repository.
Link to GitHub issue
Maybe you just don't have permission for the folder/file:
sudo chmod 777 yourFolderPath
You can also do sudo chmod 777 yourFolderPath/* (everything inside the folder).
The folder will then no longer be locked and you can use it normally.
(At least that is how I succeeded.)
Example:
sudo chmod 777 Ruby/
Now it works.
To fix this error (What went wrong: Failed to load native library 'libnative-platform.so' for Linux amd64.), do the following:
Check whether your Gradle cache folder (~/.gradle/native) exists at all.
Check whether the file in question, libnative-platform.so, exists in that directory.
Check that ~/.gradle, ~/.gradle/native and ~/.gradle/native/libnative-platform.so have valid permissions (they should not be read-only; running chmod -R 755 ~/.gradle is enough).
If you don't see the native folder at all, or if it seems corrupted, run your Gradle task (e.g. gradle clean build) with the -g (--gradle-user-home) option and pass it a fresh directory (condensed commands below).
For example, if I run mkdir /tmp/newG_H_Folder; gradle clean build -g /tmp/newG_H_Folder, Gradle will populate that new Gradle home folder (/tmp/newG_H_Folder) with all the folders/files it needs before running any task.
From that folder, copy just the native folder into your ~/.gradle folder (back up the existing native folder first if you want to), or copy the whole contents into your home directory's .gradle.
Then rerun your Gradle task and it won't error out anymore.
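Condensed, the recovery looks like this (same /tmp/newG_H_Folder example as above):
mkdir /tmp/newG_H_Folder
gradle clean build -g /tmp/newG_H_Folder     # populate a fresh Gradle user home
mv ~/.gradle/native ~/.gradle/native.bak     # optional backup of the old folder
cp -r /tmp/newG_H_Folder/native ~/.gradle/   # copy the regenerated native folder back
gradle clean build                           # rerun against the repaired ~/.gradle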
The Gradle docs say:
https://docs.gradle.org/current/userguide/command_line_interface.html
-g, --gradle-user-home
Specifies the Gradle user home directory. The default is the .gradle directory in the user’s home directory.
I installed Go with Homebrew and it usually works. I am following the tutorial here on creating a serverless API in Go. When I try to run the unit tests, I get the following error:
# _/Users/pro/Documents/Code/Go/ServerLess
main_test.go:6:2: cannot find package "github.com/strechr/testify/assert" in any of:
/usr/local/Cellar/go/1.9.2/libexec/src/github.com/strechr/testify/assert (from $GOROOT)
/Users/pro/go/src/github.com/strechr/testify/assert (from $GOPATH)
FAIL _/Users/pro/Documents/Code/Go/ServerLess [setup failed]
Pros-MBP:ServerLess Santi$ echo $GOROOT
I have installed the test library with: go get github.com/stretchr/testify
I would appreciate it if anyone could point me in the right direction.
Also confusing: when I run echo $GOPATH it doesn't return anything; the same goes for echo $GOROOT.
Some things to try/verify:
As JimB notes, starting with Go 1.8 the GOPATH env var is now optional and has default values: https://rakyll.org/default-gopath/
While you don't need to set it, the directory does need to have the Go workspace structure: https://golang.org/doc/code.html#Workspaces
Once that is created, create your source file in something like: $GOPATH/src/github.com/DataKid/sample/main.go
cd into that directory, and re-run the go get commands:
go get -u -v github.com/stretchr/testify
go get -u -v github.com/aws/aws-lambda-go/lambda
Then try running the test command again: go test -v
The -v option is for verbose output, and the -u option ensures you download the latest package versions (https://golang.org/cmd/go/#hdr-Download_and_install_packages_and_dependencies).
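For instance, with the default GOPATH from the link above (the DataKid/sample path is just this answer's placeholder):
$ go env GOPATH
/Users/pro/go
$ mkdir -p /Users/pro/go/src/github.com/DataKid/sample
$ cd /Users/pro/go/src/github.com/DataKid/sample
$ go get -u -v github.com/stretchr/testify
$ go get -u -v github.com/aws/aws-lambda-go/lambda
$ go test -v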
I'm running Jenkins on a Linux host. I'm automating the build of a C++ application. In order to build the application I need to use version 4.7 of g++, which includes support for C++11. In order to use this version of g++ I run the following command at a command prompt:
exec /usr/bin/scl enable devtoolset-1.1 bash
So I created an "Execute shell" build step and put in the following commands, which build the C++ application properly when run at a command prompt:
exec /usr/bin/scl enable devtoolset-1.1 bash
libtoolize
autoreconf --force --install
./configure --prefix=/home/tomcat/.jenkins/workspace/project
make
make install
cd procs
./makem.sh /home/tomcat/.jenkins/workspace/project
The problem is that Jenkins will not run any of the commands after exec /usr/bin/scl enable devtoolset-1.1 bash; instead it just runs the exec command, terminates, and marks the build as successful.
Any ideas on how I can re-structure the above so that Jenkins will run all the commands?
Thanks!
At the beginning of your "Execute shell" script, execute source /opt/rh/devtoolset-1.1/enable to enable the devtoolset "inside" your shell.
Which gives:
source /opt/rh/devtoolset-1.1/enable
libtoolize
autoreconf --force --install
./configure --prefix=/home/tomcat/.jenkins/workspace/project
make
make install
cd procs
./makem.sh /home/tomcat/.jenkins/workspace/project
I needed to look up what scl actually does.
Examples
scl enable example 'less --version'
runs command 'less --version' in the environment with collection 'example' enabled
scl enable foo bar bash
runs bash instance with foo and bar Software Collections enabled
So what you are doing is running a bash shell. I guess the bash shell returns immediately, since you are in non-interactive mode. exec runs the command in place of the current shell, without creating a new one; that means when the newly opened bash ends, it also ends your original shell prematurely, so the later build steps never run. I would suggest putting all your build steps into a bash script (e.g. run_my_build.sh) and calling it in the following way:
exec /usr/bin/scl enable devtoolset-1.1 run_my_build.sh
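For example, run_my_build.sh could just wrap the build steps from the question; it must be executable (chmod +x) and reachable by path. The set -e is my addition so that a failing step fails the build:
#!/bin/bash
set -e  # abort on the first failing command so Jenkins sees the error

libtoolize
autoreconf --force --install
./configure --prefix=/home/tomcat/.jenkins/workspace/project
make
make install
cd procs
./makem.sh /home/tomcat/.jenkins/workspace/project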
This kind of thing is normally done in find commands, but it may work here. Rather than running two or three processes, you run one sh that executes multiple things, like this:
exec sh -c "thing1; thing2; thing3"
If you require each step to succeed before the next step, replace the semi-colons with double ampersands:
exec sh -c "thing1 && thing2 && thing3"
I have no idea which of your steps you wish to run together, so I hope you can adapt the concept to fit your needs.
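Adapted to the scl usage from the docs quoted above, that could look like this (untested; chain only the steps you need):
exec /usr/bin/scl enable devtoolset-1.1 "libtoolize && autoreconf --force --install && ./configure --prefix=/home/tomcat/.jenkins/workspace/project && make && make install"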
Or you can put the whole lot into a script and exec that.