Travis CI with Swift Framework - swift3

I'm not sure if Travis CI is behaving like it should, I feel like it's "overreacting". It's testing when I open a PR, when I merge the PR and when I (for example) edit the README.md and push to master.
This is my Travis CI config (.travis.yml):
language: objective-c
xcode_workspace: {name}
xcode_scheme: {name}Tests
xcode_sdk: iphonesimulator10.0
osx_image: xcode8.3
branches:
  only:
    - master
before_install:
  - pod repo update
script:
  - xcodebuild -workspace {name}.xcworkspace -scheme {name} -sdk iphonesimulator ONLY_ACTIVE_ARCH=NO
  - xcodebuild test -workspace {name}.xcworkspace -scheme {name}Tests -sdk iphonesimulator ONLY_ACTIVE_ARCH=NO -destination 'platform=iOS Simulator,name=iPhone 6s Plus,OS=10.0'

It is not "overreacting" :p
Actually, from my own somewhat newbie experience, this is its default behaviour, and it is justified. When you open a PR, it tests whether the branch you want to merge breaks anything, which is normal.
Once the merge has happened, it tests that nothing was broken by the merge, which may be what you call overreacting. But that actually happened to me once; I can't give you the reason why, it was quite a long time ago.
The Travis CI documentation explains how to limit jobs to branches, but judging from your .travis.yml I guess you already know that. Just in case, it is this part. What you may not be aware of is the possibility to skip a build by specifying
[ci skip] or [skip ci]
in the git commit message. Here is the doc reference.
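For example, a documentation-only change like the one you describe could be pushed without triggering a build (hypothetical commit message):
git commit -m "Update README [ci skip]"
git push origin master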
I haven't ever heard of a way to prevent Travis from testing both before and after a PR.
Hope this helps.
Edit: When viewing your last build on travis-ci.org, at the top right you have the "More options" button. Click on it and select Settings. There you can choose whether you want to Build branch updates and/or Build pull request updates. Take a look a bit further down this settings page: you can also decide to auto-cancel builds when there is a newer one in the queue, e.g. when you realize you made a mistake in your last push and make a new push to fix it.

Related

Rider can't resolve symbol for UnityTest although the tests run (OSX, M1)

I recently started having this issue where, although my tests run properly, Rider forgets the reference to the [UnityTest] attribute. It shows up as an error in Rider, but not in Unity. Although the namespace is already imported, after I apply a quick action to fix the reference the errors disappear, but only until the next save.
Mac OS: Ventura 13.0.1
Rider version: 2022.2.4
Unity version: 2021.3.15f1
I've attached a gif to show exactly what is happening.
Do you work with assembly definitions?
In that case you will need to add it there as a dependency.
I suspect Rider's quick action basically does this within the .csproj files, but with the next save Unity recompiles and overwrites the .csproj files according to the assembly definitions.
The test assembly definition should then look something like the sketch below.
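A minimal test .asmdef along these lines, as a sketch (the assembly name Tests is hypothetical, and the exact set of fields depends on your Unity version):
{
    "name": "Tests",
    "references": [
        "UnityEngine.TestRunner",
        "UnityEditor.TestRunner"
    ],
    "overrideReferences": true,
    "precompiledReferences": [
        "nunit.framework.dll"
    ],
    "autoReferenced": false,
    "defineConstraints": [
        "UNITY_INCLUDE_TESTS"
    ]
}
With the TestRunner references declared here, the .csproj files Unity regenerates on each recompile keep the reference, so [UnityTest] stays resolved in Rider.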
I finally found the answer in this post. I'm not sure what exactly went wrong and got cached, but the solution was to invalidate the cache by calling this action:
File | Invalidate Caches | Invalidate and Restart
Hope this will help others!

Running SonarQube analysis scan - SonarSource build-wrapper

I'm new to running SonarQube scans and I get this error message in the log in Jenkins:
16:17:39 16:17:36.926 ERROR - The only way to get an accurate analysis of your C/C++/Objective-C project is by using the SonarSource build-wrapper. If for any reason, the use of the build-wrapper is not possible on your project, you can bypass it with the help of the "sonar.cfamily.build-wrapper-output.bypass=true" property. By using that property, you'll switch to an "at best" mode that could result in false-positives and false-negatives.
Can someone please advise where I can find and run this SonarSource build-wrapper?
Thanks a lot for your help!
To solve this issue, download the Build Wrapper directly from your SonarQube server, so that its version exactly matches the version of your plugin:
The Build Wrapper for Linux can be downloaded from your server at
http://localhost:9000/static/cpp/build-wrapper-linux-x86.zip
Unzip the downloaded Build Wrapper.
Add it to your PATH, because it's just more convenient:
export PATH=$PATH:/path/where/you/unzip
Once that is done, run your build through the wrapper (generic form first, then a concrete example):
build-wrapper-linux-x86-64 --out-dir <dir-name> <build-command>
build-wrapper-linux-x86-64 --out-dir build_output make clean all
Once all this is done, add the following line to your sonar-project.properties file. Note that <dir-name> is the same directory we specified in the previous command.
sonar.cfamily.build-wrapper-output=<dir-name>
and then you can run the sonar scanner command.
sonar-scanner
This will run the analysis against your code. For more details, you can check this link.
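Putting the pieces together, a minimal sonar-project.properties might look like this sketch (project key, name, and source directory are hypothetical):
sonar.projectKey=my-cpp-project
sonar.projectName=My C++ Project
sonar.sources=src
sonar.cfamily.build-wrapper-output=build_output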
I contacted support; it turned out this was caused by the missing argument sonar.cfamily.build-wrapper-output in the scanner begin command.
Build wrapper downloads:
Linux: https://sonarcloud.io/static/cpp/build-wrapper-linux-x86.zip
macOS: https://sonarcloud.io/static/cpp/build-wrapper-macosx-x86.zip
Windows: https://sonarcloud.io/static/cpp/build-wrapper-win-x86.zip
Some links covering how to run the build wrapper:
https://docs.sonarqube.org/latest/analysis/languages/cfamily/
https://blog.sonarsource.com/with-great-power-comes-great-configuration/

ember-cli-eslint error build passes

In an Ember application, I'm trying to switch from jshint to eslint. I've followed the readme for https://github.com/ember-cli/ember-cli-eslint. Everything seems to work when I run "ember test", except when there is an eslint error the build will still pass. I see the eslint error in the console output, but the build will say "ok" when it is done and all the unit tests have passed. How do I get the build to fail if there is an eslint error?
It looks like the default approach to this changed in 1.5.0 of ember-cli-eslint, as described in this issue and the corresponding PR: https://github.com/ember-cli/ember-cli-eslint/issues/66
It looks like you have the option of swapping in a testGenerator function of your own choosing, though, so you should be able to pass one in to replace the default behavior if you want to. It might make upgrades to the addon a bit more fragile, though; a rough sketch follows.
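This sketch assumes the testGenerator(relativePath, errors) hook from the addon's README; the exact API may differ between versions, so treat it as a starting point rather than a drop-in:
/* ember-cli-build.js */
var EmberApp = require('ember-cli/lib/broccoli/ember-app');

module.exports = function(defaults) {
  var app = new EmberApp(defaults, {
    eslint: {
      // Generate one QUnit test per linted file; the test fails whenever
      // ESLint reported errors for that file, which makes `ember test` fail.
      testGenerator: function(relativePath, errors) {
        var pass = !errors || errors.length === 0;
        return (
          "import { module, test } from 'qunit';\n" +
          "module('ESLint | " + relativePath + "');\n" +
          "test('should pass ESLint', function(assert) {\n" +
          "  assert.ok(" + pass + ", '" + relativePath + " should pass ESLint');\n" +
          "});\n"
        );
      }
    }
  });
  return app.toTree();
};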

Not getting any test results with xunitmultiprocess

I am running tests through Jenkins on a windows box. In my "Execute Windows Batch command" portion of the project configuration I have the following command:
nosetests --nocapture --with-xunitmp --eval-attr "%APPLICATION% and priority<=%PRIORITY% and smoketest and not dev" --processes=4 --process-timeout=2000
The post build actions have "Publish JUnit test result report" with the Test report XMLs path being:
trunk\automation\selenium\src\nosetests.xml
When I do a test run, the nosetests.xml file is created, however it is empty, and I am not getting any Test Results for the build.
I am not really sure what is wrong here.
EDIT 1
I ran the tests with just --with-xunit and REM'd out the --processes options, and got test results. Does anyone know of problems with xunitmp not working in a Windows environment?
EDIT 2
I uninstalled and reinstalled nose and nose_xunitmp, to no avail.
The nosetests plugin for parallelizing tests and the plugin for producing XML output are incompatible. Enabling them at the same time will produce exactly the result you got.
If you want to keep using nosetests, you need to execute tests sequentially or find other means of parallelizing them, e.g. by executing multiple nosetests commands in parallel, which is what I do at work (see the sketch below).
Alternatively, you can use another test runner such as nose2 or py.test, which do not have this limitation.
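As a sketch of the multiple-parallel-commands approach (suite paths and result file names are hypothetical; shown for a POSIX shell, though the same idea works from a Windows batch file):
# Run two independent suites in parallel, each writing its own xunit file
nosetests tests/suite_a --with-xunit --xunit-file=nosetests_a.xml &
nosetests tests/suite_b --with-xunit --xunit-file=nosetests_b.xml &
wait  # block until both runs have finished
Jenkins' "Publish JUnit test result report" accepts a glob, so a pattern like trunk/automation/selenium/src/nosetests_*.xml would pick up both result files.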
Apparently the problem is indeed Windows and how it handles threads. We attempted several test runs outside of our Windows Jenkins server and they did not work either. Stupid Windows.

Teamcity running build steps even when tests fail

I am having problems with Teamcity, where it is proceeding to run build steps even if the previous ones were unsuccessful.
The final step of my Build configuration deploys my site, which I do not want it to do if any of my tests fail.
Each build step is set to only execute if all previous steps were successful.
In the Build Failure Conditions tab, I have checked the following options under Fail build if:
- build process exit code is not zero
- at least one test failed
- an out-of-memory or crash is detected (Java only)
This doesn't work: even when tests fail, TeamCity deploys my site. Why?
I even tried to add an additional build failure condition that looks for specific text in the build log (namely "Test Run Failed.").
When viewing a completed test in the overview page, you can see the error message against the latest build:
"Test Run Failed." text appeared in build log
But it still deploys it anyway.
Does anyone know how to fix this? It appears the issue has been open for a long time, here.
Apparently there is a workaround:
"So far we do not consider this feature as very important as there is an obvious workaround: the script can check the necessary condition and not produce the artifacts as configured in TeamCity. E.g. a script can move the artifacts from a temporary directory to the directory specified in TeamCity as 'publish artifacts from' just before the finish, and only in case the build operations were successful."
But it is not clear to me exactly how to do that, and it doesn't sound like the best solution either. Any help appreciated.
Edit: I was also able to work around the problem with a snapshot dependency: I set up a separate 'deploy' build configuration that depends on the test build, and now it doesn't run if tests fail.
This was useful for setting the dependency up.
This is a known problem as of TeamCity 7.1 (cf. http://youtrack.jetbrains.com/issue/TW-17002) which has been fixed in TeamCity 8.x+ (see this answer).
TeamCity distinguishes between a failed build and a failed build step. While a failing unit test will fail the build as a whole, unfortunately TeamCity still considers the test step itself successful because it did not return a non-zero error code. As a result, subsequent steps will continue running.
A variety of workarounds have been proposed, but I've found they either require non-trivial setup or compromise on the testing experience in TeamCity.
However, after reviewing a suggestion from @arex1337, we found an easy way to get TeamCity to do what we want. Just add an extra PowerShell build step after your existing test step that contains the following inline script (replacing YOUR_TEAMCITY_HOSTNAME with your actual TeamCity host/domain):
# Ask the TeamCity REST API for the overall status of the current build
$request = [System.Net.WebRequest]::Create("http://YOUR_TEAMCITY_HOSTNAME/guestAuth/app/rest/builds/%teamcity.build.id%")
$xml = [xml](new-object System.IO.StreamReader $request.GetResponse().GetResponseStream()).ReadToEnd()
# Pull the status attribute off the <build> element
Microsoft.PowerShell.Utility\Select-Xml $xml -XPath "/build" | % { $status = $_.Node.status }
if ($status -eq "FAILURE") {
    # Throwing makes this step exit non-zero, so subsequent steps are skipped
    throw "Failing this step because the build itself is considered failed. This is our way to work around the fact that TeamCity incorrectly considers a test step to be successful even if there are test failures. See http://youtrack.jetbrains.com/issue/TW-17002"
}
This inline PowerShell script just uses the TeamCity REST API to ask whether or not the build itself, as a whole, is considered failed (the variable %teamcity.build.id% will be replaced by TeamCity with the actual build id when the step is executed). If the build as a whole is considered failed (say, due to a test failure), the script throws an error, causing the process to return a non-zero exit code, which in turn causes the individual build step to be considered unsuccessful. At that point, subsequent steps can be prevented from running.
Note that this script uses guestAuth, which requires the TeamCity guest account to be enabled. Alternatively, you can use httpAuth instead, but you'll need to update the script to include a TeamCity username and password (e.g. http://USERNAME:PASSWORD@YOUR_TEAMCITY_HOSTNAME/httpAuth/app/rest/builds/%teamcity.build.id%).
So, with this additional step in place, all subsequent steps set to execute "Only if all previous steps were successful" will be skipped if there are any previous unit test failures. We're using this to prevent automated deployment when any of our NUnit tests fail, until JetBrains fixes the problem.
Thanks to @arex1337 for the idea.
Just to prevent confusion: this issue is fixed in TeamCity v8.x, so we don't need those workarounds now.
You can specify the step execution policy via the Execute step option:
Only if build status is successful - before starting the step, the build agent requests the build status from the server, and skips the step if the status is failed.
https://confluence.jetbrains.com/display/TCD8/Configuring+Build+Steps
Of course you need to fail the build if at least one unit test failed:
https://confluence.jetbrains.com/display/TCD8/Build+Failure+Conditions
On the Build Failure Conditions page, in the Fail build if area, specify when TeamCity should fail builds:
at least one test failed: check this option to mark the build as failed if at least one test fails.
This is (as you have found) a known issue with TeamCity; there is a set of linked issues in their issue tracker. The issue is hopefully scheduled to be resolved in the next release of TeamCity (version 8.x).
In the meantime, the way we resolved the issue (for version 6.5.5) was to download the test results file as part of the later steps. This was then parsed to check for any test failures, returning an error code and hence failing the build properly (performing any cleanup we needed as part of that failure), which would probably work for you.
A TeamCity build failure does not mean that the build process will stop; TeamCity will still publish the artifacts if your build produces the output files it expects. The failure only updates the build status accordingly.
But you can very well stop the build process by modifying your build script to abort on test-case failure. If you are using MSBuild, ContinueOnError="false" will do that; a sketch follows.
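A minimal MSBuild sketch (the target name and test command are hypothetical; ContinueOnError="false" is already the default for most tasks, but spelling it out documents the intent):
<Target Name="RunTests">
  <!-- Exec fails the build when the command exits non-zero; with
       ContinueOnError="false", later targets and build steps never run -->
  <Exec Command="nunit-console.exe MyProject.Tests.dll" ContinueOnError="false" />
</Target>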