GitHub Actions job cancelled for no obvious reason

I'm trying to set up a GitHub Actions workflow for a project of mine. The project is private at the moment, since I want it in an RC state before the first release. As it is close to ready, I intended to set up an automated build now, but I'm seeing some strange behavior. The project is a simple C# library, so the .yml file is quite simple:
name: .NET Core Desktop

on: [push, pull_request]

jobs:
  build:
    strategy:
      matrix:
        configuration: [Debug, Release]
    runs-on: windows-latest
    env:
      Solution_Name: Replacement.sln       # Replace with your solution name, e.g. MyWpfApp.sln.
      Test_Project_Path: UnitTest.csproj   # Replace with the path to your test project, e.g. MyWpfApp.Tests\MyWpfApp.Tests.csproj.
    steps:
      - name: Checkout
        uses: actions/checkout@v3
        with:
          fetch-depth: 0

      # Run build
      - name: Run build
        run: ./build.cmd --target pack --configuration ${{ matrix.configuration }}
Sometimes, one of the builds (Debug or Release, or both) fails with an entry such as
Terminate batch job (Y/N)?
Error: The operation was canceled.
in the log. I certainly did not cancel the build in any way. This can happen after about three minutes.
Am I running into the GitHub build time limit (and how would I know)? Or is there something else wrong that I'm missing?
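Nothing in the excerpt pins down the cause, but one common source of exactly this pair of messages is the matrix itself: with the default fail-fast: true, the first matrix job to fail makes GitHub cancel its sibling, and the cancelled job reports "Error: The operation was canceled." (The hosted-runner limit of six hours per job is far above three minutes, so a time limit is unlikely.) A hedged way to rule this in or out is to disable fail-fast and see whether one configuration shows a real error while the other completes:

jobs:
  build:
    strategy:
      # Assumption: letting both configurations run to completion separates a
      # genuine failure from a cancellation cascade. fail-fast defaults to true.
      fail-fast: false
      matrix:
        configuration: [Debug, Release]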

Related

GoogleCloudPlatform Buildpacks failed to build

I haven't changed anything in my project recently, but the last time I tried to deploy it, I received this error in the logs: ERROR: Could not build wheels for pyarrow, which is required to install pyproject.toml-based projects
See the full log here: log-d20114fe-3eeb-4a8d-8926-3a971882894c.txt
This is my requirements.txt:
requirements.txt
It seems like an issue with the dependencies of the snowflake-connector-python package, but I am not really sure what would have caused this. I see in the logs:
-- Running cmake for pyarrow
Step #0 - "Buildpack": cmake -DPYTHON_EXECUTABLE=/layers/google.python.runtime/python/bin/python3 -DPython3_EXECUTABLE=/layers/google.python.runtime/python/bin/python3 "" -DPYARROW_BUILD_CUDA=off -DPYARROW_BUILD_FLIGHT=off -DPYARROW_BUILD_GANDIVA=off -DPYARROW_BUILD_DATASET=off -DPYARROW_BUILD_ORC=off -DPYARROW_BUILD_PARQUET=off -DPYARROW_BUILD_PARQUET_ENCRYPTION=off -DPYARROW_BUILD_PLASMA=off -DPYARROW_BUILD_S3=off -DPYARROW_BUILD_HDFS=off -DPYARROW_USE_TENSORFLOW=off -DPYARROW_BUNDLE_ARROW_CPP=off -DPYARROW_BUNDLE_BOOST=off -DPYARROW_GENERATE_COVERAGE=off -DPYARROW_BOOST_USE_SHARED=on -DPYARROW_PARQUET_USE_SHARED=on -DCMAKE_BUILD_TYPE=release /tmp/pip-install-w1g_50oc/pyarrow_4a54282bee5f4c3c8399d3428e4134e6
Step #0 - "Buildpack": error: command 'cmake' failed: No such file or directory
This makes me think CMake is the problem, but I tried explicitly adding CMake to my requirements file and got the same result.
I also looked at the last successful build; it was running Python 3.10.8, while the first one that failed was running 3.11. How can I change which Python version Cloud Build uses? I am using the cloudbuild.yaml file instead of Docker.
Figured it out! The issue was that I was not specifying a Python version in Cloud Build, so it was defaulting to 3.11, which did not yet have pyarrow support. I ended up setting the version in the Cloud Build YAML file to 3.10.8 like so:
steps:
  - name: gcr.io/k8s-skaffold/pack
    env:
      - GOOGLE_ENTRYPOINT=$_ENTRYPOINT
      - GOOGLE_RUNTIME_VERSION=$_RUNTIME_VERSION
    args:
      - build
      - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - '--builder=gcr.io/buildpacks/builder:v1'
      - '--network=cloudbuild'
      - '--path=.'
      - '--env=GOOGLE_ENTRYPOINT'
      - '--env=GOOGLE_RUNTIME_VERSION'
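For completeness, a hedged usage sketch of how the _RUNTIME_VERSION substitution above might be supplied at submit time, rather than hard-coded in a trigger (the config filename is the conventional one, an assumption here):

# User-defined substitutions (leading underscore) are passed via --substitutions;
# this is what GOOGLE_RUNTIME_VERSION ultimately receives.
gcloud builds submit --config=cloudbuild.yaml \
  --substitutions=_RUNTIME_VERSION=3.10.8

The _ENTRYPOINT substitution would be passed the same way. Pinning _RUNTIME_VERSION is what keeps the buildpack from defaulting to the latest Python.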
For those of you who are using a Procfile:
I was able to fix this by creating a .python-version file that contains just the version number needed, e.g.:
3.10.4

Using MPI on a GitHub Action

I'm working on a C++ project which uses MPI and OpenMP for some parallel work. The thing is that I'd like to include a GitHub Action in the repo which compiles the code and runs the tests after each push to master. I have already used Travis CI as a CI tool and it worked perfectly (both compiling and running the tests), but for some reason I cannot configure the action properly. Every time I push to master, the following error appears when the compile stage reaches a file which uses MPI:
/home/runner//.file.cpp:14:10: fatal error: mpi.h: No such file or directory
14 | #include <mpi.h>
| ^~~~~~~
The action.yml file is as follows:
name: CMake

on:
  push:
    branches:
      - master
      - develop
  pull_request:

env:
  # Customize the CMake build type here (Release, Debug, RelWithDebInfo, etc.)
  BUILD_TYPE: Release

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: egor-tensin/setup-gcc@v1
        with:
          version: 10
          platform: x64
      - uses: mpi4py/setup-mpi@v1
        with:
          mpi: 'mpich'
      - name: Compile
        run: |
          cmake -B ${{github.workspace}}/build -DCMAKE_BUILD_TYPE=${{env.BUILD_TYPE}}
          export CC=cc
          export CXX=CC
          cmake --build ${{github.workspace}}/build --config ${{env.BUILD_TYPE}} --target all -- -j 14
      - name: Run Tests
        run: ./${{github.workspace}}/bin/tests
I have tried changing the configuration several times using other MPI implementations, but the problem always appears.
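No accepted fix appears in this excerpt, but as a hedged sketch of the usual remedy: plain cc/g++ does not know where mpi.h lives, and the exports above run only after CMake has already configured the build. Assuming mpi4py/setup-mpi puts the MPICH compiler wrappers (mpicc/mpicxx) on the PATH, selecting them before the configure step usually lets the header resolve:

      - name: Compile
        run: |
          # Assumption: use the MPI compiler wrappers, and export them *before*
          # the configure step so CMake records them for the whole build.
          export CC=mpicc
          export CXX=mpicxx
          cmake -B ${{github.workspace}}/build -DCMAKE_BUILD_TYPE=${{env.BUILD_TYPE}}
          cmake --build ${{github.workspace}}/build --config ${{env.BUILD_TYPE}} --target all -- -j 14

An alternative with the same effect is find_package(MPI REQUIRED) in CMakeLists.txt plus linking against MPI::MPI_CXX, which avoids touching CC/CXX at all.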

DotNetCoreCLI@2 pack task ignores version suffix directive

I'm creating my first Azure build pipeline for a .NET Core 2.1 solution. I've had success with DotNetCoreCLI@2 for all of my steps except one: the pack step.
This works, and is currently what I have resorted to:
- script: |
    dotnet pack src/MyProject/MyProject.csproj --version-suffix $(VersionSuffix) --configuration $(BuildConfiguration) --no-restore --no-build --output $(Build.ArtifactStagingDirectory)
  displayName: 'dotnet pack [$(BuildConfiguration)]'
This does not work, in that it ignores the --version-suffix directive:
- task: DotNetCoreCLI@2
  inputs:
    command: 'pack'
    # packagesToPack: '**/*.csproj; **/!*Test*.csproj' - TODO pack all projects, except test projects
    packagesToPack: 'src/MyProject/MyProject.csproj'
    arguments: '--version-suffix $(VersionSuffix) --configuration $(BuildConfiguration) --no-restore --no-build --output $(Build.ArtifactStagingDirectory)'
  displayName: 'dotnet pack [$(BuildConfiguration)]'
(I've left one of my TODOs in there as a side quest)
Also, the version prefix resides in the csproj file:
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
    <PackageId>MyProject</PackageId>
    <Authors>Me</Authors>
    <Description>A description</Description>
    <VersionPrefix>0.1.0</VersionPrefix>
    <IsPackable>true</IsPackable>
  </PropertyGroup>
</Project>
When I use dotnet pack directly, I see a NuGet package with the full version (i.e. <prefix>-<suffix>) as I expect; e.g. 0.1.0-190813.02.abcdef.
When I use the DotNetCoreCLI@2 task, the version is limited to the version prefix; e.g. 0.1.0.
What have I missed? Ideally I would like the pipeline yaml file to be consistent.
No, you did not miss anything. This behavior is by design for DotNetCoreCLI@2.
When you check that task in the classic editor, without YAML, you can see there is no Arguments option; there are Pack options instead, and we could use those options to define the package version.
Besides, the .NET Core CLI task documentation describes the arguments input as applying to only some commands when the task is used in YAML: the --version-suffix argument is not accepted for pack, so we need to use the custom command, which is the method you are using now.
So, you are on the right track and do not need to worry about it.
Hope this helps.
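As a hedged sketch of that custom-command route, keeping the task while still passing --version-suffix (project path and variables are taken from the question above):

- task: DotNetCoreCLI@2
  displayName: 'dotnet pack [$(BuildConfiguration)]'
  inputs:
    # 'custom' forwards the arguments string verbatim to the dotnet CLI,
    # so pack actually receives --version-suffix, unlike command: 'pack'.
    command: 'custom'
    custom: 'pack'
    arguments: 'src/MyProject/MyProject.csproj --version-suffix $(VersionSuffix) --configuration $(BuildConfiguration) --no-restore --no-build --output $(Build.ArtifactStagingDirectory)'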
I also stumbled onto this problem and found out that DotNetCoreCLI is not going to help me with version suffixes, per @Leo's answer.
A way to overcome this problem is to pack your project with a PowerShell task. A simple example:
- powershell: 'dotnet pack -o $(build.artifactstagingdirectory) --no-build --no-restore -c ${{ parameters.configuration }} ${{ parameters.projectPath }}'
The way I fixed it was to use the NuGet command:
- task: NuGetCommand@2
  displayName: 'Pack Test'
  inputs:
    command: 'pack'
    packagesToPack: '**/test.nuspec'
    versioningScheme: 'byBuildNumber'
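A hedged usage note on byBuildNumber: it takes the package version from the run's build number, so the pipeline's name property (which sets that number) has to produce something version-shaped. A hypothetical format, not from the answer above:

# Assumption: the build number doubles as the package version here,
# so the format must parse as a version.
name: 0.1.$(Rev:r)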

How to fail safe if a failure occurs in one of the layers during Dockerfile multi-stage builds

I am writing a Dockerfile, using the multi-stage build concept to add layers.
One of the FROM layers is docker-sonarqube-scanner, which pushes coverage reports to the Sonar server.
Dockerfile execution fails if this layer fails.
I want this layer to be fail-safe; that is, if no coverage report or directory exists, SonarQube should fail silently and the image-building process should continue.
FROM hub.docker.com/st/docker-sonarqube-scanner:${scannerVersion} as sonar
ARG sonarProjKey
ARG sonarOpts
COPY --from=test /root/app /root/app
WORKDIR /root/app
RUN sonar-scanner --debug -Dsonar.projectKey=${sonarProjKey} ${sonarOpts}
where sonarProjKey is the name of the project and sonarOpts holds the sonar options.
If the above layer fails, image building should continue.
I found a simple fix. Use a semicolon as the instruction conjunction and append true to the RUN command, like this:
RUN sonar-scanner --debug -Dsonar.projectKey=${sonarProjKey} ${sonarOpts} ; true
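Because the semicolon sequences the commands unconditionally, this RUN step now succeeds no matter what the scanner returns. A hedged variant that matches the "no coverage report" case more directly, with a hypothetical report path standing in for wherever the reports actually land:

# Assumption: reports land in /root/app/coverage; skip the scan when absent,
# and use '|| true' so only a scanner failure (not every command) is masked.
RUN if [ -d /root/app/coverage ]; then \
      sonar-scanner --debug -Dsonar.projectKey=${sonarProjKey} ${sonarOpts} || true ; \
    fi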

Getting an error while building a project: com.bea.util.jam.internal.javadoc.JavadocClassloadingException

I am getting an error while trying to build a Java project in TeamCity. The same project builds and executes well on my local machine. I recently pushed changes to this project on GitLab. This is my first time working with GitLab and TeamCity together; other projects have no issues during build. I am unable to understand what is causing this error:
[15:58:54][Step 1/1] compile.earCommons (4s)
[15:58:54][compile.earCommons] echo
[15:58:54][compile.earCommons] echo
[15:58:54][compile.earCommons] wlcompile (4s)
[15:58:59][wlcompile]
com.bea.util.jam.internal.javadoc.JavadocClassloadingException: An error
has occurred while invoking javadoc to inspect your source
files. This may be due to the fact that $JAVA_HOME/lib/tools.jar does
not seem to be in your system classloader. One common case in which
this happens is when using the 'ant' tool, which uses a special
context classloader to load classes from tools.jar.
This situation elicits what is believed to be a javadoc bug in the initial
release of JDK 1.6. Javadoc attempts to use its own context classloader
to load tools.jar but ignores one that may have already been set, which leads
to some classes being loaded into two different classloaders. The
telltale sign of this problem is a javadoc error message saying that
'languageVersion() must return LanguageVersion' - you might see this
message in your process' output.
This will hopefully be fixed in a later release of JDK 1.6; if a new
version of 1.6 has become available, you might be able to solve this
by simply upgrading to the latest JDK.
Alternatively, you can work around it by simply including
$JAVA_HOME/lib/tools.jar in the java -classpath
parameter. If you are running ant, you will need to modify the standard
ant script to include tools.jar in the -classpath.
[15:58:59][Step 1/1] Process exited with code 1
[15:58:59][Step 1/1] Ant output
[15:59:10][Step 1/1] Process exited with code 1 (Step: Ant)
[15:58:59][Step 1/1] Step Ant failed
Update:
Build Step: Ant
Step 1:
Runner type: Ant (Runner for Ant build.xml files)
Execute: If all previous steps finished successfully
build.xml file: \ant\build.xml
Working directory: same as checkout directory
Targets: none specified
Ant home path: C:\apache-ant-1.7.0
Additional Ant command line parameters: -lib c:\WebLogic\12.1.2\wlserver\server\lib\javaee.jar;c:\WebLogic\12.1.2\wlserver\server\lib\weblogic.jar;c:\WebLogic\12.1.2\wlserver\server\lib\webservices.jar
JDK home path: c:\Program Files\Java\jdk1.7.0_80
JVM command line parameters: not specified
Reduce test failure feedback time: OFF
Java code coverage: disabled
Docker Settings
Docker Image: unset
I'd appreciate any help in this regard.
I found there was a character encoding issue with one of the files, which prevented the compiler from loading the Java classes. Once that was fixed, the build worked fine.
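As a hedged postscript (the paths and attributes below are hypothetical, not from the build.xml in question): declaring the source encoding explicitly in the Ant build is one way to keep a stray file encoding from breaking compilation on a different machine:

<!-- Assumption: sources live under src; pin the encoding so javac does not
     fall back to the platform default on the build agent. -->
<javac srcdir="src" destdir="build/classes" encoding="UTF-8" includeantruntime="false"/>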