CUDA is an NVIDIA-provided API that lets C/C++ code use the GPU for certain kinds of computation. I don't know exactly what those computations are and would like to find out, but from what I've seen the gains are remarkable. Note that CUDA only works on NVIDIA GPUs.
There does exist a CUDA module for Node.js, but it only supports the 64-bit version of Windows, even though CUDA itself exists for the 32-bit version as well, so the only thing missing is a Node.js binding/extension to CUDA in C++. And there is no sign of documentation for that module anywhere on GitHub or the wider internet; the last commits were a year and a half or more ago.
If this is at all possible, it would be great: Node.js would be able to use the GPU for operations, taking web apps and other applications to a whole new level. Also, given the parallel nature of Node.js, it seems a natural fit for the GPU's parallel nature.
Suppose no such module exists right now. What are my options?
It's been done already by someone else: http://www.cs.cmu.edu/afs/cs/academic/class/15418-s12/www/competition/r2jitu.com/418/final_report.pdf
Here is a binding.gyp file that will build a node extension from three source files:
hello.cpp, goodby.cu, and goodby1.cu
{
  # For Windows, be sure to do node-gyp rebuild -msvs_version=2013,
  # and run under an MSVS shell.
  # For all targets:
  'conditions': [
    ['OS=="win"', {'variables': {'obj': 'obj'}},
                  {'variables': {'obj': 'o'}}]],
  'targets': [
    {
      'target_name': 'hello',
      'sources': ['hello.cpp', 'goodby.cu', 'goodby1.cu'],
      'rules': [{
        'extension': 'cu',
        'inputs': ['<(RULE_INPUT_PATH)'],
        'outputs': ['<(INTERMEDIATE_DIR)/<(RULE_INPUT_ROOT).<(obj)'],
        'conditions': [
          ['OS=="win"',
            {'rule_name': 'cuda on windows',
             'message': 'compile cuda file on windows',
             'process_outputs_as_sources': 0,
             'action': ['nvcc -c <(_inputs) -o <(_outputs)'],
            },
            {'rule_name': 'cuda on linux',
             'message': 'compile cuda file on linux',
             'process_outputs_as_sources': 1,
             'action': ['nvcc', '-Xcompiler', '-fpic', '-c',
                        '<@(_inputs)', '-o', '<@(_outputs)'],
            }]],
      }],
      'conditions': [
        ['OS=="mac"', {
          'libraries': ['-framework CUDA'],
          'include_dirs': ['/usr/local/include'],
          'library_dirs': ['/usr/local/lib'],
        }],
        ['OS=="linux"', {
          'libraries': ['-lcuda', '-lcudart'],
          'include_dirs': ['/usr/local/include'],
          'library_dirs': ['/usr/local/lib', '/usr/local/cuda/lib64'],
        }],
        ['OS=="win"', {
          'conditions': [
            ['target_arch=="x64"',
              {'variables': {'arch': 'x64'}},
              {'variables': {'arch': 'Win32'}}],
          ],
          'variables': {
            'cuda_root%': '$(CUDA_PATH)',
          },
          'libraries': [
            '-l<(cuda_root)/lib/<(arch)/cuda.lib',
            '-l<(cuda_root)/lib/<(arch)/cudart.lib',
          ],
          'include_dirs': [
            '<(cuda_root)/include',
          ],
        }, {
          'include_dirs': [
            '/usr/local/cuda/include',
          ],
        }],
      ],
    },
  ],
}
The proper way to do this is to use the NVIDIA CUDA Toolkit to write your CUDA app in C++, and then invoke it as a separate process from Node. This way you can get the most out of CUDA and draw on the power of Node for controlling that process.
For example, if you have a CUDA application and you want to scale it to, say, 32 computers, you would write the application in fast C or C++ and then use Node to push it to all the PCs in the cluster and handle communication with each remote process over the network. Node shines in this area. Once each CUDA app instance finishes its job, you join all the data with Node and present it to the user.
The most natural way to hook up CUDA and Node.js would be through an "addon", which allows you to expose C++ code to the JavaScript programs running on Node.
Node itself is a C++ app built on top of the V8 JavaScript engine, and addons are a way for you to write C++ libraries that can be used by JavaScript code in the same sort of way that Node's own libraries are.
From the outside, an addon just looks like a module. The C++ gets compiled into a dynamic library and then exposed to Node like any other module.
e.g. my-addon.cc -> (compile) -> my-addon.dylib -> (node-gyp) -> my-addon.node -> var myFoo = require('my-addon').foo()
From inside the addon, you use the V8 and Node APIs to interface with the JavaScript environment, and access CUDA with the normal C++ APIs.
There are a lot of moving parts down at this level. Something as simple as passing a value from one side to the other means you need to worry about both C++ memory management and the JavaScript garbage collector while you wrap/unwrap JavaScript values to and from the appropriate C++ types.
The good news is that most of the issues are fine individually, with great docs and supporting libraries abounding. E.g. nan will get a skeleton addon running in no time, and on the CUDA side you're talking about their normal C++ interface, with truckloads of docs and tutorials.
I want to override some settings for specific files.
For example, instead of creating a .prettierrc file at the root of my project, I want to be able to define some global overrides to all files ending with .sol in my settings.json of VS Code.
{
  "overrides": [
    {
      "files": "*.sol",
      "options": {
        "printWidth": 80,
        "tabWidth": 2,
        "useTabs": true,
        "singleQuote": false,
        "bracketSpacing": false,
        "explicitTypes": "never"
      }
    }
  ]
}
I would like to add the above to my global settings in VS Code.
Prettier doesn't support global overrides intentionally
I was trying to do the same thing as you, and realized after researching the issue that it's intentionally unsupported.
From the docs:
Prettier intentionally doesn’t support any kind of global configuration. This is to make sure that when a project is copied to another computer, Prettier’s behavior stays the same. Otherwise, Prettier wouldn’t be able to guarantee that everybody in a team gets the same consistent results.
Also see this response in a closed issue on GitHub:
Prettier doesn't support global configuration. It's supposed to be configured per project.
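So the supported route is a per-project config file rather than a global override in settings.json: the overrides block from the question goes verbatim into a `.prettierrc` at the project root (a config sketch, not VS Code settings):

```json
{
  "overrides": [
    {
      "files": "*.sol",
      "options": {
        "printWidth": 80,
        "tabWidth": 2,
        "useTabs": true,
        "singleQuote": false,
        "bracketSpacing": false,
        "explicitTypes": "never"
      }
    }
  ]
}
```

Committing that file to each repository gives every teammate the same results, which is exactly the guarantee the Prettier docs cite for refusing global config.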
With the IDA Pro decompiler, I was looking for a way to trace the address of a function when the file changes.
For example, I have a .so (ELF) file, version 1.0, with a function called Writer_Starting at address 0x3DA224.
After a while, the owner updated the .so (ELF) file to version 1.1, and the function's address changed to 0x3DA228.
Is there any way to automatically find all the changed function addresses by comparing against the old version of the same file?
Or a way to look up a specific address, e.g. I supply the old address 0x3DA224 and it finds the new one, 0x3DA228?
You can create signatures of the functions in v1.0 with a tool like IDB2PAT, then match those signatures in v1.1 with IDA's FLIRT functionality. But a function's body must remain completely unchanged for this to work.
Visual Studio Code has a shortcut (an Emmet abbreviation, if that's what it's called) that creates the basic structure of an HTML document: typing an exclamation mark and pressing Tab brings in all the boilerplate we need when starting a web application.
I am looking for a similar shortcut in Visual Studio Community 2019 that inserts the starting code for a basic application in C.
For instance:
#include <stdio.h>
#include <stdlib.h>

int main() {
    return 0;
}
P.S.: I know that not every application uses the same starting template, or even has one. But as a new learner I just write the structure above every time, so for me it is a kind of starting template.
This page explains how to create your own snippets
These are the steps on macOS:
Select User Snippets under Code > Preferences
Select the C language
Create the snippet in the JSON file
A snippet generating the basic structure of a program could look like this:
{
  "Skeleton": {
    "prefix": "main",
    "body": [
      "#include <stdio.h>",
      "#include <stdlib.h>",
      "",
      "int main(void)",
      "{",
      "\t${1:code}",
      "\treturn 0;",
      "}",
      ""
    ],
    "description": "Dummy C program skeleton"
  }
}
There isn't an out-of-the-box shortcut for C snippets.
However, you can create your own snippets or install this extension from the market.
Name: C Snippets
Id: harry-ross-software.c-snippets
Description: Snippets for the C programming language
Version: 1.3.0
Publisher: Harry Ross Software
VS Marketplace Link: https://marketplace.visualstudio.com/items?itemName=Harry-Ross-Software.c-snippets
You can check all available snippets by pressing Ctrl+Space.
I have a Windows C++ DLL that provides some functions, like add(1, 2), but I don't have the source code for it. Is it possible to call functions in this DLL from Node.js, I mean, from the web side over HTTP? If it is possible, what should I do?
Did you check out the ffi nodejs library? https://github.com/node-ffi/node-ffi
var ffi = require('ffi');
var libm = ffi.Library('libm', {
'ceil': [ 'double', [ 'double' ] ]
});
libm.ceil(1.5); // 2
https://github.com/node-ffi/node-ffi was indeed a good solution, but it has not been maintained since 2019.
The new version is:
https://github.com/node-ffi-napi/node-ffi-napi
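Mapping the libm example onto the question's DLL would look something like the sketch below. "mylib" and the add() signature are assumptions about your DLL, and the actual require is left in a comment so the sketch doesn't depend on the native ffi-napi module being installed:

```javascript
// Hypothetical: suppose mylib.dll exports `int add(int a, int b)`.
// ffi-napi describes each function as [returnType, [argTypes...]],
// the same shape used in the libm example above.
const signatures = {
  add: ['int', ['int', 'int']],
};

// With ffi-napi installed (npm install ffi-napi), binding is one call:
//   const ffi = require('ffi-napi');
//   const mylib = ffi.Library('mylib', signatures);
//   mylib.add(1, 2); // calls into the DLL

console.log(Object.keys(signatures)); // functions we would expose
```

From there, exposing it "through HTTP" is ordinary Node work: call the bound function inside an http request handler and serialize the result.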
I want to build my Dojo JavaScript code that I have carefully structured into packages into a single JavaScript file. I'm a little confused as to how to do it.
For now I have this:
var profile = {
    ...
    layers: {
        'app': {
            include: [
                'dojo/module1',
                'dojo/module2',
                ...,
                'dojo/moduleN',
                'package2/module1',
                'package2/module2',
                ...,
                'package2/moduleN'
            ]
        }
    }
    ...
};
Do I really have to manually add all the modules to the app layer? Can't I just say "all", or better yet, "all referenced"? I don't want to include the dojo/something module if I don't use it. Also, in my release folder that's all I would like to have: one file.
So - can this even be achieved? Clean Dojo automatic build of only referenced modules into a single (minified and obfuscated of course) JavaScript file?
Take a look at the examples in the Layers section of this build tutorial:
It’s also possible to create a custom build of dojo.js; this is particularly relevant when using AMD, since by default (for backwards compatibility), the dojo/main module is added automatically by the build system to dojo.js, which wastes space by loading modules that your code may not actually use. In order to create a custom build of dojo.js, you simply define it as a separate layer, setting both customBase and boot to true:
var profile = {
    layers: {
        "dojo/dojo": {
            include: [ "dojo/dojo", "app/main" ],
            customBase: true,
            boot: true
        }
    }
};
You can include an entire "app" in a single layer by including the root of that app (or module). Note that if a module in that app is not explicitly required by that app, it would have to be included manually. See the second example in the Layers section in the above tutorial for an illustration of that.
You can also define packages to include in your layers, if you want to change or customize the layout of your project:
packages: [
    {name: 'dojo', location: 'other/dojotoolkit/location/dojo'},
    /* ... */
],
layers: {
    'dojo/dojo': { include: ['dojo/dojo'] },
    /* ... */
}
You don't have to specify all the modules if the module you add already has dependencies on others. For example, if you include 'app/MainApplication' in a layer, the builder will include all the modules that app/MainApplication depends on. If your MainApplication.js touches everything in your project, everything will be included.
During the build of a layer, Dojo parses the require() and define() calls in every module and builds the dependency tree from them. NLS resources are also included.
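For illustration, here is the kind of module the builder traces. The define() function below is a minimal stand-in for the real AMD define() the Dojo loader provides, just so the sketch is self-contained, and the module names are hypothetical:

```javascript
// Minimal stand-in for the AMD define() that the Dojo loader supplies,
// so this sketch runs on its own.
function define(deps, factory) {
  return { deps, factory };
}

// app/MainApplication.js (hypothetical): the builder reads the dependency
// array below and pulls each listed module, plus that module's own
// dependencies, into the layer automatically.
const MainApplication = define(['dojo/dom', 'dojo/on'], function (dom, on) {
  return { start: function () {} };
});

console.log(MainApplication.deps); // the modules the builder would trace
```

Anything not reachable through such a dependency array (e.g. modules loaded with a computed name at runtime) is what you would have to list in the layer's include by hand.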
You should name your layer after a file in an existing package; in my build, naming a layer with a single word caused errors. So write:
var profile = {
    layers: {
        'existingPackage/fileName': {
            ...
        }
    }
};
If you want exactly one file, you have to include 'dojo/dojo' in your layer and specify the customBase and boot flags.
Dojo always builds every package before building layers, so you will always have dojo and dijit folders in your release directory, containing minified versions of the Dojo files.
Just copy the layer file you need and delete everything else.