starting position
A tasks.json that defines a working build task (currently the only one; it is set as the default, works fine, and can also be triggered with [CTRL]+[SHIFT]+[B]).
It triggers an external command (batch/shell script) and passes some parameters. Excerpt:
{
    "label": "sample task",
    "windows": {
        "command": "${workspaceFolder}\\procedures_win\\doStuff.bat"
    },
    "linux": {
        "command": "${workspaceFolder}/procedures/doStuff.sh"
    },
    "type": "shell",
    "args": [
        "fixedParm",
        "${fileBasename}"
    ]
}
(The complete task is much longer, mainly because of the number of problemMatcher entries, and it also contains target-specific environment settings; 100+ lines.)
goal
Create a second task, "sample task (check-only)", that is completely identical but passes one extra argument, "check-only", to the script.
options?
Is it possible to "extend" the given task, "overriding" only the args?
If not: Is it possible to have a task actually run (not merely depend on) another task and set an environment variable that the original task could then read as "${env:someValue}" (and that would resolve either to the empty string or to the requested "check-only")?
As a last resort one could define five tasks instead of two: (1) a task nearly identical to the current one, but taking an input from an external command/file; (2+3) meta tasks depending on (4+5); (4+5) commands that create a file ${workspaceFolder}/.taskmode containing either nothing or "check-only".
question
What does a working solution look like without installing extensions?
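(Not part of the original question, just an illustrative sketch.) One extension-free mechanism that avoids duplicating the 100+ line task definition is a task input variable: the single task takes its last argument from a pickString input, and the user picks either the empty string or "check-only" when the task runs. The input id checkMode is made up for the sketch, and it assumes the script tolerates an empty trailing argument (and that an empty pickString option is acceptable):

{
    "label": "sample task",
    "type": "shell",
    "windows": {
        "command": "${workspaceFolder}\\procedures_win\\doStuff.bat"
    },
    "linux": {
        "command": "${workspaceFolder}/procedures/doStuff.sh"
    },
    "args": [
        "fixedParm",
        "${fileBasename}",
        // hypothetical input id; resolves to "" or "check-only" at run time
        "${input:checkMode}"
    ]
}

plus, at the top level of tasks.json:

"inputs": [
    {
        "id": "checkMode",
        "type": "pickString",
        "description": "Extra mode argument for doStuff",
        "options": [ "", "check-only" ],
        "default": ""
    }
]

The trade-off is that the task now prompts for the mode on every run, including via [CTRL]+[SHIFT]+[B].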
Related
I want to override some settings for specific files.
For example, instead of creating a .prettierrc file at the root of my project, I want to be able to define some global overrides for all files ending in .sol in my VS Code settings.json.
{
    "overrides": [
        {
            "files": "*.sol",
            "options": {
                "printWidth": 80,
                "tabWidth": 2,
                "useTabs": true,
                "singleQuote": false,
                "bracketSpacing": false,
                "explicitTypes": "never"
            }
        }
    ]
}
I would like to add the above to my global settings in VS Code.
Prettier doesn't support global overrides intentionally
I was trying to do the same thing as you, and realized after researching the issue that it's intentionally unsupported.
From the docs:
Prettier intentionally doesn’t support any kind of global configuration. This is to make sure that when a project is copied to another computer, Prettier’s behavior stays the same. Otherwise, Prettier wouldn’t be able to guarantee that everybody in a team gets the same consistent results.
Also see this response in a closed issue on GitHub:
Prettier doesn't support global configuration. It's supposed to be configured per project.
What causes the folder name under '_work' to change on a Private Agent?
We are currently using _work/10/s etc. It has used this for the last few builds, but what would cause it to step up to using /11?
I should say we are still in the early days of using VSTS, hence why there are so few builds.
I get the feeling that it is either because we didn't initially perform any cleaning of the work directory (we do now), or because it changes when we change the build definition. Both sound plausible.
Each build definition gets its own folder. This allows for total isolation of source code and build outputs.
You should never rely on hard-coding this path; you can reference a build's working directory with the $(System.DefaultWorkingDirectory) variable.
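As an aside (my own sketch, not part of the original answer): inside a script step the same location is exposed as an environment variable, with the dots replaced by underscores. A tiny Node example, assuming the script runs as a pipeline step:

// print the build's working directory instead of hard-coding a path like _work/10/s
const workDir = process.env.SYSTEM_DEFAULTWORKINGDIRECTORY;
console.log('working directory for this build:', workDir);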
There is a SourceRootMapping folder in the working folder; it contains a Mappings.json file and, per build definition, a SourceFolder.json (SourceRootMapping\{guid folder}\{build definition id folder}\SourceFolder.json).
Mappings.json:
{
    "lastBuildFolderCreatedOn": "05/16/2018 13:20:06 +08:00",
    "lastBuildFolderNumber": 2
}
A SourceFolder.json:
{
    "build_artifactstagingdirectory": "1\\a",
    "agent_builddirectory": "1",
    "collectionUrl": "https://XXX.visualstudio.com/",
    "definitionName": "a",
    "fileFormatVersion": 3,
    "lastRunOn": "05/16/2018 13:18:06 +08:00",
    "repositoryType": "TfsGit",
    "lastMaintenanceAttemptedOn": "",
    "lastMaintenanceCompletedOn": "",
    "build_sourcesdirectory": "1\\s",
    "common_testresultsdirectory": "1\\TestResults",
    "collectionId": "21136b22-dbe8-4fae-a111-3f8c5b0fed9b",
    "definitionId": "285",
    "hashKey": "d2545895fec8eea22c60ecc24f6593a986106b80",
    "repositoryUrl": "https://starain.visualstudio.com/Scrum2017/_git/cppbase",
    "system": "build"
}
So it's easy to see that the VSTS agent increments the folder number according to Mappings.json, while each SourceFolder.json maps a build definition to its corresponding working folder.
I have an npm task in which I watch my TypeScript application for changes, compile it, and then run tests automatically. I'm trying to have Visual Studio Code warn me in the Problems tab whenever a test fails.
While I've managed to achieve that, whenever I fix the code so that the tests pass again, the warning remains in the Problems tab. This is quite bothersome, since I would get lots of false positives and might overlook actual test failures. I wonder if there is a way to flush the contents of the Problems tab every time my tests are executed?
Here's my tasks.json file:
{
    "version": "0.1.0",
    "command": "npm",
    "isShellCommand": true,
    "showOutput": "silent",
    "suppressTaskName": true,
    "tasks": [
        {
            "taskName": "start",
            "args": ["start"],
            "isBackground": true,
            "problemMatcher": {
                "fileLocation": ["relative", "${workspaceRoot}"],
                "pattern": [
                    // Omitted for brevity
                ],
                "watching": {
                    "activeOnStart": true,
                    "beginsPattern": "\\[1\\] Starting 'test'\\.\\.\\.",
                    "endsPattern": ".*Finished 'test' after.*"
                }
            }
        }
    ]
}
Thanks!
While this doesn't directly answer my question, I'm sharing this in case someone stumbles upon the same issue.
Right now I'm using node-tdd, which was a bit difficult to set up since its Windows support is limited, but it seems to fit the bill just fine.
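Unrelated to node-tdd, but possibly useful to later readers (my own sketch, not from the original post): in the current 2.0.0 task schema the "watching" section is called "background", and the rest of the matcher carries over largely unchanged. Roughly, keeping the original patterns:

{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "start",
            "type": "shell",
            "command": "npm start",
            "isBackground": true,
            "problemMatcher": {
                "fileLocation": ["relative", "${workspaceFolder}"],
                "pattern": [
                    // Omitted for brevity
                ],
                "background": {
                    "activeOnStart": true,
                    "beginsPattern": "\\[1\\] Starting 'test'\\.\\.\\.",
                    "endsPattern": ".*Finished 'test' after.*"
                }
            }
        }
    ]
}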
CUDA is an API provided by NVIDIA that lets C/C++ code use the GPU for certain kinds of work; I don't know exactly what that work is and would like to, but from what I've seen the gains are remarkable. Note that CUDA only works on NVIDIA GPUs.
There is an existing module for Node.js, but it only supports the 64-bit version of Windows, even though CUDA exists for the 32-bit version as well, so the only thing missing is a binding/extension from Node.js to CUDA in C++. There is also no sign of documentation for that module anywhere on GitHub or the internet, and the last commits were something like half a year or more ago.
If it's at all possible, it would be great: Node.js would be able to use the GPU for operations, taking it to a whole new level for web and other applications. Also, given Node's parallel nature, it fits perfectly with the GPU's parallel nature.
Suppose no such module exists right now. What are my choices?
It's been done already by someone else: http://www.cs.cmu.edu/afs/cs/academic/class/15418-s12/www/competition/r2jitu.com/418/final_report.pdf
Here is a binding.gyp file that will build a node extension from three source files:
hello.cpp, goodby.cu, and goodby1.cu
{
    ## for windows, be sure to do node-gyp rebuild -msvs_version=2013,
    ## and run under a msvs shell
    ## for all targets
    'conditions': [
        [ 'OS=="win"', {'variables': {'obj': 'obj'}},
          {'variables': {'obj': 'o'}}]],
    "targets": [
        {
            "target_name": "hello",
            "sources": [ "hello.cpp", "goodby.cu", "goodby1.cu",],
            'rules': [{
                'extension': 'cu',
                'inputs': ['<(RULE_INPUT_PATH)'],
                'outputs': [ '<(INTERMEDIATE_DIR)/<(RULE_INPUT_ROOT).<(obj)'],
                'conditions': [
                    [ 'OS=="win"',
                      {'rule_name': 'cuda on windows',
                       'message': "compile cuda file on windows",
                       'process_outputs_as_sources': 0,
                       'action': ['nvcc -c <(_inputs) -o <(_outputs)'],
                      },
                      {'rule_name': 'cuda on linux',
                       'message': "compile cuda file on linux",
                       'process_outputs_as_sources': 1,
                       'action': ['nvcc','-Xcompiler','-fpic','-c',
                                  '<#(_inputs)','-o','<#(_outputs)'],
                      }]]}],
            'conditions': [
                [ 'OS=="mac"', {
                    'libraries': ['-framework CUDA'],
                    'include_dirs': ['/usr/local/include'],
                    'library_dirs': ['/usr/local/lib'],
                }],
                [ 'OS=="linux"', {
                    'libraries': ['-lcuda', '-lcudart'],
                    'include_dirs': ['/usr/local/include'],
                    'library_dirs': ['/usr/local/lib', '/usr/local/cuda/lib64'],
                }],
                [ 'OS=="win"', {
                    'conditions': [
                        ['target_arch=="x64"',
                         {
                             'variables': { 'arch': 'x64' }
                         }, {
                             'variables': { 'arch': 'Win32' }
                         }
                        ],
                    ],
                    'variables': {
                        'cuda_root%': '$(CUDA_PATH)'
                    },
                    'libraries': [
                        '-l<(cuda_root)/lib/<(arch)/cuda.lib',
                        '-l<(cuda_root)/lib/<(arch)/cudart.lib',
                    ],
                    "include_dirs": [
                        "<(cuda_root)/include",
                    ],
                }, {
                    "include_dirs": [
                        "/usr/local/cuda/include"
                    ],
                }]
            ]
        }
    ]
}
The proper way to do this is to use the NVIDIA CUDA Toolkit to write your CUDA app in C++ and then invoke it as a separate process from Node. This way you can get the most from CUDA and draw on the power of Node for controlling that process.
For example, if you have a CUDA application and you want to scale it to, say, 32 computers, you would write the application in fast C or C++ and then use Node to push it to all the PCs in the cluster and handle communication with each remote process over the network. Node shines in this area. Once each CUDA app instance finishes its job, you join all the data with Node and present it to the user.
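A minimal sketch of that pattern (purely illustrative; the binary name, arguments, and output format are made up):

// run a prebuilt CUDA program as a child process from Node and collect its output
const { execFile } = require('child_process');

// './my_cuda_app' and its arguments are placeholders for your compiled CUDA binary
execFile('./my_cuda_app', ['--input', 'data.bin'], (err, stdout, stderr) => {
    if (err) {
        console.error('CUDA process failed:', stderr || err);
        return;
    }
    // aggregate or forward the results in Node, e.g. parse whatever the CUDA app prints
    console.log('CUDA process output:', stdout);
});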
The most natural way to hook up CUDA and Node.js would be through an "addon", which allows you to expose C++ code to your JavaScript programs running on Node.
Node itself is a C++ app built on top of the V8 JavaScript engine, and addons are a way for you to write C++ libraries that can be used from JavaScript in the same sort of way that Node's own libraries are.
From the outside, an addon just looks like a module. The C++ gets compiled into a dynamic library and is then exposed to Node like any other module.
e.g. my-addon.cc -> (compile) -> my-addon.dylib -> (node-gyp) -> my-addon.node -> var myFoo = require('my-addon').foo()
From inside the addon, you use the V8 and Node APIs to interface with the JavaScript environment, and access CUDA with the normal C++ APIs.
There are a lot of moving parts down at this level. Something as simple as passing a value from one side to the other means you need to worry about both C++ memory management and the JavaScript garbage collector while you wrap/unwrap JavaScript values to and from the appropriate C++ types.
The good news is that most of the issues are manageable individually, with great docs and supporting libraries abounding; e.g. nan will get a skeleton addon running in no time, and on the CUDA side you're talking about their normal C++ interface, with truckloads of docs and tutorials.
I want to build my Dojo JavaScript code, which I have carefully structured into packages, into a single JavaScript file. I'm a little confused as to how to do it.
For now I have this:
var profile = {
    ...
    layers: {
        'app': {
            include: [
                'dojo/module1',
                'dojo/module2',
                ...,
                'dojo/moduleN',
                'package2/module1',
                'package2/module2',
                ...,
                'package2/moduleN'
            ]
        }
    }
    ...
};
Do I really have to manually add all the modules to the app layer? Can't I just say "all", or better yet, "all referenced"? I don't want to include the dojo/something module if I don't use it. Also, in my release folder, that's all I would like to have: one file.
So - can this even be achieved? Clean Dojo automatic build of only referenced modules into a single (minified and obfuscated of course) JavaScript file?
Take a look at the examples in the Layers section of this build tutorial:
It’s also possible to create a custom build of dojo.js; this is particularly relevant when using AMD, since by default (for backwards compatibility), the dojo/main module is added automatically by the build system to dojo.js, which wastes space by loading modules that your code may not actually use. In order to create a custom build of dojo.js, you simply define it as a separate layer, setting both customBase and boot to true:
var profile = {
    layers: {
        "dojo/dojo": {
            include: [ "dojo/dojo", "app/main" ],
            customBase: true,
            boot: true
        }
    }
};
You can include an entire "app" in a single layer by including the root of that app (or module). Note that if a module in that app is not explicitly required by that app, it would have to be included manually. See the second example in the Layers section in the above tutorial for an illustration of that.
You can also define packages to include in your layers, if you want to change or customize the layout of your project:
packages: [
    {name:'dojo', location:'other/dojotoolkit/location/dojo'},
    /* ... */
],
layers: {
    'dojo/dojo': { include: ['dojo/dojo'] },
    /* ... */
}
You don't have to specify all the modules if the module you add already has dependencies on others. For example, if you include 'app/MainApplication' in a layer, the builder will include all the modules that app/MainApplication depends on. If your MainApplication.js touches everything in your project, everything will be included.
During the build of a layer, Dojo parses the require() and define() calls in every module and builds the dependency tree from them. NLS resources are also included.
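To illustrate (the module and layer names are invented for the example): a layer that lists only the application's root module and lets the builder pull in everything it requires could be as small as:

var profile = {
    layers: {
        'app/app': {
            include: [ 'app/MainApplication' ]
        }
    }
};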
In your profile, you should name your layer as a file in an existing package. In my build, it caused errors when I named a layer with a single word. You should write:
var profile = {
    layers: {
        'existingPackage/fileName': {
            ...
        }
    }
};
If you want to have exactly one file, you have to include 'dojo/dojo' in your layer and specify the customBase and boot flags.
Dojo always builds every package before building layers. You will always have dojo and dijit folders in your release directory, containing minified versions of the Dojo files.
Just copy the layer file you need and delete everything else.