JRules is showing all sorts of errors in Rule Team Server, but works fine in Rule Studio - business-rules

I have a small rule which runs properly in Rule Studio when I test it with DVS. I then deployed the rule to Rule Team Server, and when I opened the same rule in RTS, it showed me all sorts of errors! I am sure the error is not in the code; it is some issue or bug in IBM ILOG JRules Rule Team Server. Do any experts know about this bug? I am using a trial version of IBM JRules v7.1.
Please see the attached image below, which shows the error in RTS.

Sounds like the BOM is not deployed :)
If JRules is complaining about every single word in the rule, then there is something wrong with the rule project: either the reference to the BOM, or the BOM itself if it is contained in the same rule project.
Double-check and, if needed, redeploy the BOM.
The only "bug" is that the deployment wizard allows you to deploy a rule project on RTS even if the referenced BOM is not deployed on RTS. This is a regression "bug", because I think 6.x disallowed this.
I used 7.1.0 on my laptop for more than a year, and it worked like a charm.
Hope it helps

My answer sounds like a repetition of Damien's; however, I would mention:
1) Please get the BOM rule project published to Rule Team Server.
2) Please deploy your other rule project that has a dependency on it.

You might have to edit the project dependencies (under Manage Project), and delete the project dependencies (be sure to save after the delete) and then add them again. If that does not work, also try emptying the cache. See the suggestions in
Compile errors on rules in Decision Center and Rule Team Server (RTS)


Xcode failed to load @rpath/libvulkan.1.dylib [duplicate]

I am trying to run a Swift app on my iPhone 4s. It works fine on the simulator, and my friend can successfully run it on his iPhone 4s. I have iOS 8 and the official release of Xcode 6.
I have tried:
Restarting Xcode, iPhone, computer
Cleaning & rebuilding
Revoking and creating a new certificate/provisioning profile
Runpath Search Paths is $(inherited) @executable_path/Frameworks
Embedded Content Contains Swift Code is 'Yes'
Code Signing Identity is developer
Below is the error in its entirety:
dyld: Library not loaded: @rpath/libswiftCore.dylib
Referenced from: /private/var/mobile/Containers/Bundle/Application/LONGSERIALNUMBER/AppName.app/AppName
Reason: no suitable image found. Did find:
/private/var/mobile/Containers/Bundle/Application/LONGSERIALNUMBER/AppName.app/Frameworks/libswiftCore.dylib: mmap() error 1 at
address=0x008A1000, size=0x001A4000 segment=__TEXT in Segment::map() mapping
/private/var/mobile/Containers/Bundle/Application/LONGSERIALNUMBER/APPLICATION_NAME/Frameworks/libswiftCore.dylib
For me none of the previous solutions worked. We discovered that there is an "Always Embed Swift Standard Libraries" flag in the Build Settings that needs to be set to YES. It was NO by default!
Build Settings > Always Embed Swift Standard Libraries
After setting this, clean the project before building again.
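For reference, the underlying build setting is ALWAYS_EMBED_SWIFT_STANDARD_LIBRARIES, so the same change can also be expressed outside the UI, e.g. in an xcconfig file or on the xcodebuild command line (a sketch only; "MyApp" is a placeholder scheme name):
ALWAYS_EMBED_SWIFT_STANDARD_LIBRARIES = YES
xcodebuild -scheme MyApp ALWAYS_EMBED_SWIFT_STANDARD_LIBRARIES=YES clean build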
For keen readers, some explanation:
The most important part is:
set the Embedded Content Contains Swift Code (EMBEDDED_CONTENT_CONTAINS_SWIFT) build setting to YES in your app as shown in Figure 2. This build setting, which specifies whether a target's product has embedded content with Swift code, tells Xcode to embed Swift standard libraries in your app when set to YES.
The flag was formerly called Embedded Content Contains Swift Code
Surprisingly enough, all I did was "Clean" my project (Shift+Cmd+K) and it worked. It did seem to be related to the certificate though.
I started getting this error when I removed:
@executable_path/Frameworks
from Runpath Search Paths in my build settings. Replacing it fixed everything up again (thank goodness for source control!)
I don't know how it got there, but it appears to be needed for a binary to find its embedded Swift runtime.
For the device, you also need to add the dynamic framework to the Embedded binaries section in the General tab of the project.
In Xcode 8 the option for Embedded Content Contains Swift Code option is no longer available.
It has been renamed to "Always Embed Swift Standard Libraries = YES"
Xcode 13 here (13.1 with react-native).
I created a clean react-native project and saw /usr/lib/swift as an entry in Runpath Search Paths there.
After adding that to my project, it finally ran without crashing!
Nothing that was suggested before helped.
I think it's a bug when certificates are generated directly from Xcode. To resolve (at least in Xcode 6.1 / 6A1052d):
go to the Apple Developer website where certificates are managed: https://developer.apple.com/account/ios/certificate/certificateList.action
select your certificate(s) (which should show "Managed by Xcode" under "Status") and "Revoke" it
follow instructions here to manually generate a new certificate: https://developer.apple.com/library/ios/documentation/IDEs/Conceptual/AppDistributionGuide/MaintainingCertificates/MaintainingCertificates.html#//apple_ref/doc/uid/TP40012582-CH31-SW32
go to Xcode > Preferences > Accounts > [your Apple ID] > double-click your team name > hit refresh button to update certificates and provisioning profiles
I was having this issue with running my Swift tests (but not my app). It turns out that the test target needed more than @executable_path/Frameworks in its Runpath Search Paths build setting. Setting the Runpath Search Paths to the following worked a charm for me:
$(inherited)
@executable_path/Frameworks
@loader_path/Frameworks
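If you keep your settings in an xcconfig file rather than in the Xcode UI, I believe the equivalent line (the underlying setting name is LD_RUNPATH_SEARCH_PATHS) would look roughly like this:
LD_RUNPATH_SEARCH_PATHS = $(inherited) @executable_path/Frameworks @loader_path/Frameworks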
OK, sharing here another cause of this error. It took me a few hours to sort this out.
In my case the trust policy of my certificate in Keychain Access was Always Trust, changing it back to defaults solved the problem.
In order to open the certificate settings window double click the certificate in the Keychain Access list of certificates.
This issue occurs again in Xcode 10.2. You must download and install the following package from Apple. It provides Swift 5 Runtime Support for Command Line Tools.
https://support.apple.com/kb/DL1998?locale=en_US
You have to set the Runpath Search Paths to @executable_path/Frameworks as shown in the following screenshot of Build Settings:
If you have any embedded frameworks written in Swift, then you can set the build option Embedded Content Contains Swift Code to YES.
I think Apple has already summarized it under Swift app crashes when trying to reference Swift library libswiftCore.dylib
Cited from Technical Q&A QA1886:
Swift app crashes when trying to reference Swift library
libswiftCore.dylib.
Q: What can I do about the libswiftCore.dylib loading error in my
device's console that happens when I try to run my Swift language app?
A: To correct this problem, you will need to sign your app using code
signing certificates with the Subject Organizational Unit (OU) set to
your Team ID. All Enterprise and standard iOS developer certificates
that are created after iOS 8 was released have the new Team ID field
in the proper place to allow Swift language apps to run.
Usually this error appears in the device's console log with a message
similar to one of the following:
[....] [deny-mmap] mapped file has no team identifier and is not a platform binary:
/private/var/mobile/Containers/Bundle/Application/5D8FB2F7-1083-4564-94B2-0CB7DC75C9D1/YourAppNameHere.app/Frameworks/libswiftCore.dylib
Dyld Error Message:
Library not loaded: @rpath/libswiftCore.dylib
Exception Type: EXC_BREAKPOINT (SIGTRAP)
Exception Codes: 0x0000000000000001, 0x0000000120021088
Triggered by Thread: 0
Referenced from: /private/var/mobile/Containers/Bundle/Application/C3DCD586-2A40-4C7C-AA2B-64EDAE8339E2/TestApp.app/TestApp
Reason: no suitable image found. Did find:
/private/var/mobile/Containers/Bundle/Application/C3DCD586-2A40-4C7C-AA2B-64EDAE8339E2/TestApp.app/Frameworks/libswiftCore.dylib: mmap() error 1 at address=0x1001D8000, size=0x00194000 segment=__TEXT in Segment::map() mapping /private/var/mobile/Containers/Bundle/Application/C3DCD586-2A40-4C7C-AA2B-64EDAE8339E2/TestApp.app/Frameworks/libswiftCore.dylib
Dyld Version: 353.5
The new certificates are needed when building an archive and packaging
your app. Even if you have one of the new certificates, just resigning
an existing swift app archive won’t work. If it was built with a
pre-iOS 8 certificate, you will need to build another archive.
Important: Please use caution if you need to revoke and set up a new
Enterprise Distribution certificate. If you are an in-house Enterprise
developer you will need to be careful that you do not revoke a
distribution certificate that was used to sign an app any one of your
Enterprise employees is still using as any apps that were signed with
that enterprise distribution certificate will stop working
immediately. The above only applies to Enterprise Distribution
certificates. Development certs are safe to revoke for
enterprise/standard iOS developers.
As the AirSign guys state, the problem stems from the missing OU attribute in the subject field of the In-House certificate.
Subject: UID=269J2W3P2L, CN=iPhone Distribution: Company Name, OU=269J2W3P2L, O=Company Name, C=FR
I had an enterprise development certificate; creating a new one solved the issue.
Let's say project P imports a custom library L; then you must add L under
P -> Build Phases -> Embed Frameworks -> +. That worked for me.
This error message can also be caused when upgrading Xcode (and subsequently to a new version of Swift) and your project uses a framework built/compiled with an older/previous version of Swift.
In this case rebuilding the framework and re-adding it will fix the problem.
The easiest and most easily overlooked fix: clean and rebuild.
This solved the issue after I had tried the answers above and they did not work.
I was having the same problem after moving to a new Mac, and after hours of trying all the suggested answers in the questions, none of them worked for me.
The solution for me was installing this missing certificate.
http://developer.apple.com/certificationauthority/AppleWWDRCA.cer
Found the answer here.
https://stackoverflow.com/a/14495100/976628
Change Copy Pods Resources for the target from:
"${SRCROOT}/Pods/Target Support Files/Pods-Wishlist/Pods-Wishlist-resources.sh"
to:
"${SRCROOT}/Pods/Target Support Files/Pods-Wishlist/Pods-Wishlist-frameworks.sh"
I solved it by deleting the derived data, and this time it worked correctly. Tried with Xcode 7.3.1 GM.
After having tried out everything, I finally found out that the build does not always include every detail again and again, maybe to speed up the process...
In order to ensure WHOLE packaging before running on a device, do a Clean first: Shift-Cmd-K.
Then build with: Cmd-B.
After that, run it on your device.
Easy.
Kind regards to all you nice guys in this place!
In my case, it was just the name of my target:
I had renamed it like this: MyApp.something, and the same issue appeared.
But I saw in the Build Settings window that my product module name had been changed to MyApp-something.
So, I removed the dot from my target name (MyAppSomething) and the issue was gone.
For me, having tried everything with no success, what worked was to remove @executable_path/Frameworks from the Packaging section (I don't know how it came to be in there in the first place).
We had a Unity project that creates an Xcode project that includes libraries that use Swift.
We tried each and every reasonable suggestion on this thread.
Nothing worked. The code runs fine on new devices and crashes on iOS <= 12.
It seems that Swift is so smart that even if you set "ALWAYS_EMBED_SWIFT_STANDARD_LIBRARIES" = "YES" it does not include the Swift libraries.
What actually solved the problem for us was to include a dummy Swift file in the project.
The file must contain calls to the Dispatch and Foundation libraries.
Apparently this hints mighty Xcode to force-include the libraries, but this time for real.
Here is the dummy file we added that made it work:
import Dispatch
import Foundation
class ForceSwiftInclusion {
    init() {
        // Force dispatch library.
        DispatchQueue.main.async {
            print("something")
        }

        // Force foundation library.
        let uuid = UUID().uuidString
        print("\(uuid)")
    }
}
For Unity, also add project.AddBuildProperty(target, "SWIFT_VERSION", "Swift 5"); to your post-processing step for creating the Xcode project, as in the sketch below.
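For reference, here is a minimal sketch of such a Unity post-processing step, assuming the UnityEditor.iOS.Xcode API; the class name and the "Unity-iPhone" target name are placeholders, and the exact target-lookup call varies between Unity versions:

using UnityEditor;
using UnityEditor.Callbacks;
using UnityEditor.iOS.Xcode;

public static class SwiftBuildPostProcessor
{
    [PostProcessBuild]
    public static void OnPostProcessBuild(BuildTarget buildTarget, string pathToBuiltProject)
    {
        if (buildTarget != BuildTarget.iOS) return;

        // Load the Xcode project that Unity just generated.
        string projPath = PBXProject.GetPBXProjectPath(pathToBuiltProject);
        var project = new PBXProject();
        project.ReadFromFile(projPath);

        // Look up the main target (name is an assumption) and set the Swift version.
        string target = project.TargetGuidByName("Unity-iPhone");
        project.AddBuildProperty(target, "SWIFT_VERSION", "Swift 5");

        project.WriteToFile(projPath);
    }
}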
What worked for me in Xcode 11 was going to General -> Frameworks, Libraries, and Embedded Content and changing the "Embed" option for the framework in question to "Embed & Sign"
None of the solutions worked for me. Restarting the phone fixed it. Strange but it worked.
None of these solutions seemed to work, but when I changed the trust setting of the Apple Worldwide Developer Relations cert to Use System Defaults, it worked. I have included the steps and screenshots in the link below.
I would encourage you to log a ticket in the Apple bug reporter, as mentioned here, since Apple really should fix this massive error:
I had the same issue for Xcode 13+ when creating a release build, and had to waste time troubleshooting it. Finally I was able to fix the issue with the following step.
I added a new entry for Release in Runpath Search Paths in Build Settings -> Linking:
/usr/lib/swift
After adding that, I could run my app without crashing!
Xcode 7.2, iOS 9.2 on one device, 9.0 on the other. Both had the error. No idea what changed to cause it, but the WWDR solutions above were correct for me. Install that cert and the problem is solved.
https://forums.developer.apple.com/message/43547
https://forums.developer.apple.com/message/84846
There are lots of answers here, but maybe mine will help someone.
I had the same issue: my app works fine on the Simulator but crashed on a device as soon as I launched the app, giving the error above. I tried all the answers and solutions. In my case, my project has multiple targets: I created a duplicate target B from target A. Target B works fine while target A crashes. I am using different image assets for each target. After searching and googling, I found something which might help someone.
The app stopped crashing when I changed the names of the launch image assets for both apps, e.g. target A's launch image asset is named LaunchImage A and target B's is named LaunchImage B, and assigned them properly in the General tab of each target. My apps now work fine.
For me, building a macOS command-line Swift app that depended on 3rd-party Swift libs (e.g. SQLite), none of the above solutions seemed to work. What did work was directly adding the following path to my Runpath Search Paths in the Build Settings:
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/swift/macosx/
Doing that did give a warning at runtime saying that Xcode had found 2 versions of libswiftCore, which makes sense. Except that not including that line resulted in Xcode not finding any versions of libswiftCore.
Anyway, that'll do for me even if it doesn't seem right; my app is just a utility that I'm not intending to distribute, and at least it runs now!
I have multiple versions of Xcode installed at the same time. The framework was built with a newer version of Xcode. The app that I tried to compile was built with an older version of Xcode. When I cleaned and compiled both the framework and the app with the same version of Xcode, things worked.

Visual Studio removes backslash from path generated by CMake [duplicate]

We just made a move from storing all files locally to a network drive. The problem is that this is where my VS projects are also stored now. (No versioning system yet; working on that.) I know I have heard of problems with doing this in the past, but never heard of a work-around. Is there a work-around?
So my VS is installed locally and the files are on a network drive. How can I get this to work?
EDIT: I know what SHOULD be done, but is there a band-aid I can put on right now to fix this and maintain the network drive?
EDIT 2: I am sure I am not understanding something, but Bob King has the right idea. I'll work with the lead web developer when he gets back into the office to figure out a temporary solution until we get some sort of version control setup. Thanks for the ideas.
While we do use source control, we also run all our projects from network drives (not shared directories; private directories on network drives). The network drives are backed up nightly and also use Volume Shadow Copy, so if you need to revert to something before it made its way to SC, you can.
To get projects to run correctly with the right permission, follow these steps.
Basically, you've just got to map the shared directory to a drive, and then grant permission, based on that Url, to all code. Say you map to "N:\", then use "N:\*" as your Url pattern. It isn't obvious you need to wildcard, but you do.
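For example, a sketch of the CasPol call for a share mapped to N: (mirroring the commands quoted further down in this thread; adjust the Framework folder to the .NET version you actually target):
C:\Windows\Microsoft.NET\Framework\v2.0.50727\CasPol.exe -m -ag 1 -url file://N:/* FullTrust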
The question is rather generic so I'll give an answer to one issue I was facing.
I run Visual Studio 2010 using a Parallels virtual machine on my Mac while keeping all my projects on the Mac side via a network share. Visual Studio, however, wouldn't load the project's assembly files from there. Trying to set the rights using "caspol" alone didn't help in my case.
What finally worked for me to allow Visual Studio to load assemblies from a network share was to edit the file
"C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe.config" (assuming a default installation).
In the XML "<runtime>" section you have to add
<loadFromRemoteSources enabled="true"/>
You may have to change the permissions on that file to allow write access. Save the file. Restart Visual Studio.
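For context, the relevant fragment of devenv.exe.config ends up looking roughly like this (only the loadFromRemoteSources element is added; everything else in the file stays as it was):
<configuration>
  <runtime>
    <loadFromRemoteSources enabled="true"/>
    <!-- existing runtime elements stay here -->
  </runtime>
</configuration>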
In the interests of actually answering the question, I copied this comment from jcarle.com:
Trusting Network Shares with Visual Studio 2010 / .NET Framework v4.0
January 20, 2011, 4:10 pm
If you are like me and you store all your code on a server, you will have likely learned about trusting a network share using CasPol.exe. However, when moving from Visual Studio 2008 (.NET Framework 2.0/3.0/3.5) over to Visual Studio 2010 (.NET Framework 4.0), you may find yourself scratching your head.
If you are used to using the Visual Studio Command Prompt to quickly get to CasPol, you may find that some of your projects will not seem to respect your new FullTrust settings. The reason is that, unless you are carefully paying attention, the Visual Studio Command Prompt defaults to adding the .NET Framework 4.0 folder to its path. If your project is still running under .NET Framework 2.0/3.0/3.5, it will require setting CasPol for those versions as well. Just a note, I have also personally had more success with using 1 as a code group instead of 1.2.
To trust a network share for all versions of the .NET Framework, simply call CasPol for each version using the full path as below:
C:\Windows\Microsoft.NET\Framework\v2.0.50727\CasPol -m -ag 1 -url file://YourSharePath* FullTrust
C:\Windows\Microsoft.NET\Framework\v4.0.30319\CasPol -m -ag 1 -url file://YourSharePath* FullTrust
I would not recommend doing that if you have (or even if you don't have) multiple people who are working on the projects. You're just asking for trouble.
If you're the only one working on it, on the other hand, you'll avoid much of the trouble. Performance is going to go out the window, though. As far as how to get it to work, you just open the solution file from VS. You'll likely run into security issues, but you can correct that using CasPol. As I said, though, performance is going to be terrible. Again, not recommended at all.
Do yourself and your team a favor and install SVN or some other form of source control and put the code in there ASAP.
EDIT: I'll partially retract my comments. Bob King explains below the reason they run VS projects from a network drive and it makes sense. I would say unless you're doing it for a specific reason like Bob, stay away from it. Otherwise, get your ducks in a row before setting up such a development environment.
So I was having a similar issue. Visual Studio wouldn't recognize a network location I had mapped to a drive letter for anything. The funny thing is, it worked for a day. I set up my project and began working on it and had no issues. Then I shut down, and the next day nothing worked. I couldn't read/write files in code, output my executables or anything. My project is local but my output was intended to go up on the network.
Anyway, the problem is probably about the administrator context, but one way to fix it, which I found while digging around online, is to get Visual Studio to browse to the drive in question somehow. There are plenty of ways to do this, and then VS will magically be able to recognize mapped drive letters. My solution is to go to the Debug Output Location in the Project Properties, click Browse, go to my previously made output location on my network drive, and voila!
I wanted to put this up because I spent half a day trying to figure this out and figured it might save someone else some time. Thanks much and good luck!
Erik
I understand this is an older thread, but this was the best thread I found when looking to solve a similar issue: I had Visual Studio 2013 on a virtual machine (running Win 8.1) and the code on the host machine (Win 7). Although I could open the solution, I could not compile. All of the other answers here relate to older software, so I am adding this answer to update this frequently found question with the solution that worked for me.
Here's what I did: I made a registry entry to be able to use a UNC path as the current directory.
WARNING: Using Registry Editor incorrectly can cause serious, system-wide problems that may require you to reinstall Windows NT to correct them. Microsoft cannot guarantee that any problems resulting from the use of Registry Editor can be solved. Use this tool at your own risk.
Under the registry path:
HKEY_CURRENT_USER\Software\Microsoft\Command Processor
add the value DisableUNCCheck as a REG_DWORD and set the value to 0x1 (hex).
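Equivalently, the same value can be added from a command prompt (a sketch of the identical change):
reg add "HKCU\Software\Microsoft\Command Processor" /v DisableUNCCheck /t REG_DWORD /d 1 /f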
WARNING: If you enable this feature and start a Console that has a current directory of an UNC name, start applications from that Console, and then close the Console, it could cause problems in the applications started from that Console.
Found this information at link: http://support.microsoft.com/kb/156276
How about we rephrase this into a question that everyone can answer? I have the exact same problem as the original poster.
I have a copy of VB 2008 (recently upgraded from VB6). If I store my solutions on the backed-up network drive, it won't run a single thing, ever. It gives "partially trusted caller" errors for accessing a module, even when "AllowPartiallyTrustedCallers" is set in the assembly. If I store the files on my (not backed-up) C:, then it runs wonderfully, until I put it on the share drive for everyone to use, and I'm back to the same problem.
This isn't a big request. I just want to be able to put a solution and executable on the share drive and run it without an absurd amount of nonsense about security. I shouldn't have to cram all my work into form files.
-Edit: I found out why it was ignoring the AllowPartiallyTrustedCallers attribute: I'm trying to reference ADODB, which doesn't allow partially trusted callers. So, no network executable can access a database? What does Microsoft have against intranets anyway?
I was facing the same issue just recently, so this answer is more for the sake of keeping track of my own knowledge. Anyway, should someone find it useful, below are the issue and the solution.
Issue:
.NET 4.0 projects, SVN repo, checkout folders on local drives, referenced assemblies built by the build server and available on a network drive. Visual Studio on Windows 7 is able to add the reference but unable to build the projects.
Solution:
Since .NET 4.0 no longer automatically provides a sandbox for network assemblies, you have to make them fully trusted via a machine.config update. http://msdn.microsoft.com/en-us/library/dd409252.aspx
I had a similar problem with opening Visual Studio projects on a network drive, and I fixed it by creating a symbolic link on my local C:\ drive that points to the UNC directory
e.g.
mklink /D "C:\Users\Self\Documents" "\\domain.net\users\self\My Documents"
then you can just open the project using the C:\Users\Self\Documents\ path, instead of the UNC path
(You have to be careful, because Visual Studio will automatically redirect you to the '\\domain.net..' path if you double click the symlink when you're browsing for the project. I had to copy paste the 'C:\Users\' path to get it to open with the drive letter path)
Don't do it. If you have source control (versioning), you do not want your files on a network drive. It totally bypasses all you want to achieve by using source control, because once your files are on a network drive, anyone can modify them .... even while you're currently building your project. Ka-boooom!
PS: this sounds like a typical case of over-engineering to me.
Are you having any specific problems?
If you allow more than one person to open the solution, your first problem will be that the .NCB file (Intellisense) will be locked exclusively and only one user will be able to browse the class tree. And of course you have the potential for one user's changes to overwrite the other user's changes.
You should be warned that some features in Visual Studio will refuse to work with a network drive.
For example, the MDF file of a SQL Express user instance must be located on a local drive.
For another example, if you use UNC paths, you have to make sure they are short enough.
I found this helpful while trying to use VC11 with Parallels running on a Mac:
http://social.msdn.microsoft.com/Forums/en-US/toolsforwinapps/thread/2ffdcb01-c511-4961-834b-afd5f2fbb8e1, and specifically:
1) You can switch from local debugging to remote debugging and set the machine name as 'localhost'. This will do a remote deployment on your local machine (thus not using the project's directory). You don't need to install the Remote Debugger tools, nor start msvsmon for this to work on localhost.
In case this helps anyone else, I had to do the steps outlined here to add the network share location to Windows intranet zone. In particular, I was having trouble with Visual Studio hanging on load when opening a solution on a network share (i.e. using VMware Fusion and opening a solution from my Mac's hard drive). I also had problems with PostSharp running in this scenario.
If I understand you correctly, your Visual Studio project files are stored on the network drive and you are running them from there. This is what I do, and I don't have any problems. You will need to make sure that you have set the security policy. You can use CasPol to do this, or go via the Control Panel - Administrative Tools menu.
"How can I get this to work?"
You have a couple choices:
Choice A:
1. Move all files back to your local hard drive
2. Implement some type of backup software on your machine
3. Test said backup solution
4. keep on coding
Choice B:
1. Get a copy of one of the FREE source control products and implement it.
2. Make sure it's being backed up
3. Test it
Choice C:
Use one of the many ONLINE source control repositories available. Google, SourceForge, CodePlex, something.
Well, my question would be why you are asking this. Is it not working when you store it on a network drive? I haven't tried this myself, and one problem I could envision would be that .NET code running from a network drive (i.e. from the bin\Debug directory, also located on the network drive) would run in sandbox mode, unless you mess around with CasPol (or use 3.5 SP1, which I hear has removed that obstacle).
If you have specific problems, ask about them. Never ask "Why is doing X not working?".
You're not saying whether it's just one person or multiple persons accessing the same remote drive, but I'm assuming it's just one for each network directory. Is this correct? If not, no, there is no band-aid. Get version control and move the files back to a local disk.

VSCode "go to definition" not working

I installed Visual Studio Code 1.1 with the C/C++ extension,
opened my C++ project and tried to use "Go to definition" in vain.
The "Go to definition" is not working at all.
Example, go to definition of a class member:
int i = m_myVar;
(I opened a simpler project with one file and it was working for this one)
In the end, what I want is good indexing of my big project; is there a way to install IntelliSense?
I had the same issue: F12, Ctrl + Click, and right-click "Go To Definition" weren't working.
The fix for me was:
Go to Extensions
Click "Disable All Installed Extensions"
Close and Reopen VS Code
Back to Extensions and "Enable All Extensions"
Essentially enable/disable all extensions fixed the issue.
I recently came across this same issue and after trying all of the suggested solutions I could find with no success, I found this article:
https://code.visualstudio.com/docs/setup/linux#_visual-studio-code-is-unable-to-watch-for-file-changes-in-this-large-workspace-error-enospc
Basically my project grew too large and VS code was no longer able to track all files, which messed up the "go to definition" functionality.
After following the steps on the link to increase the maximum number of files to be tracked, the issue was resolved.
The correction is pretty simple (tested on Ubuntu 18.04):
Add this line:
fs.inotify.max_user_watches=524288
to the end of the file /etc/sysctl.conf
After saving, run the following command:
sudo sysctl -p
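If you prefer a one-liner, appending the setting and reloading can be done in one go (same value as above):
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
sudo sysctl -p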
Hopefully this will be useful to someone else, this has been bothering me for the last few days.
I had a similar problem, except with Python, and Google searches for solutions kept bringing me back to this post, so I figured I'd post my solution here in the hope that it might help other people.
I was working on a remote cluster through VSCode Remote and was getting similar errors to the original question (all 'go to ___' functionality was unavailable, and I was even getting a 'too large to track' error). I thought I had to increase the number of watches, which didn't end up helping.
All I needed to do was install a python interpreter on the remote VScode server. This fixed my problem.
I believe vscode 1.1 (well, 1.1.1 actually) + the C++ extension (cpptools) is as much Intellisense as we can get for now.
You should load your big project with the "open folder" function to make vscode know about the other files.
https://blogs.msdn.microsoft.com/vcblog/2016/03/31/cc-extension-for-visual-studio-code/ warns about letting the indexing finish first (red icon in lower right corner during indexing) and mentions the current limitations on the source code parsing.
It wasn't working on my laptop either, after installing a few VSCode extensions. I decided to close and re-open VSCode with administrator permissions, and suddenly it sorted itself out.
I have been trying to fix this for a long time. In the end, what worked for me was simply reinstalling VSCode, then installing the latest C/C++ extension (v0.18.1). Then, in your .vscode/c_cpp_properties.json file, under includePath, add your include folder which has all your header files.
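As an illustration only (the exact fields depend on your platform and extension version, and the include folder name here is a placeholder), the includePath part of c_cpp_properties.json looks something like this:
{
    "configurations": [
        {
            "name": "Linux",
            "includePath": [
                "${workspaceFolder}/**",
                "${workspaceFolder}/include"
            ]
        }
    ],
    "version": 4
}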
I tried the methods mentioned in this thread; none of them seemed to work for me. A simple solution that worked: I closed the current workspace, created a new workspace, added the folders I required (the same as in the old workspace), and saved the new workspace. I waited a couple of minutes for indexing, and IntelliSense is able to find definitions now.
I am using VSCode 1.52.1 on Ubuntu 20.04.
In my case, for whatever reason, c_cpp_properties.json had become set to Disabled in ~/.config/Code/User/settings.json.
Manually changing it to Enabled solved the problem.
Fixed mine by UNCHECKING C_Cpp > Default > Limit Symbols To Included Headers
Your mileage may vary. Good luck!
Have you saved your workspace? Or did you just open a folder with File->Open Folder? This question already has many answers, but none of them address this case, which was my issue.
The question is not specific enough for me to know if you are having the exact same symptoms as my case.
If:
You have not saved your workspace. vscode doesn't say "(workspace)" at the top of the window.
None of the goto functions are working, but instead report: "No ___ found for ____"
The tag parser database icon in the bottom right is always there but only reports "Parsing open files", rather than telling you how many files have been parsed.
Then:
Try saving your workspace.
If you have multiple versions of a language on your PC, specify the exact version you are using in VSCode (in my case, I am using Python, so I must point the Python interpreter in VS Code to the right version).
If you cannot do that at all, then uninstall all the other versions you don't use; when you next go to VS Code, it will ask which version to use, and since you will have only one version, selecting it will activate "Go To Definition".
I was having a similar issue with Java on Ubuntu 20.04 using OpenJDK version 11 (openjdk-11-jdk in apt). At first I didn't have the JRE installed, so I installed it, and it still didn't work.
Afterwards, I went to the Ctrl + Shift + P menu and then to Java: Configure Java Runtime; there I saw in the Java Tooling Runtime tab that /usr/lib/jvm/java-11-openjdk-amd64 was selected. I changed it to /usr/lib/jvm/java-1.11.0-openjdk-amd64 just to see if it would work, and after a restart it did. I'm not sure why this is, but I hope it may help someone else.
For Python, ensure your code analysis settings are correct. In my case the languageServer setting was accidentally set to 'None'. Reverting it to 'Default' or 'Pylance' did the trick.
Just to inform, if none of the above works:
In my case I was using the Kite extension in VS Code; I just disabled it and it worked. I think the Kite extension was blocking this feature.
OS: Linux Ubuntu 22.04
If you encountered the following error:
"The .NET Core SDK cannot be located. .NET Core debugging will not be enabled. Make sure the .NET Core SDK is installed and is on the path."
then VSCode is normally unable to locate the .NET SDK and you need to set the path manually:
sudo ln -s /snap/dotnet-sdk/current/dotnet /usr/local/bin/dotnet
Then restart OmniSharp and restart VSCode.
No need to do anything. Just close and re-open. It will work.
I also faced a similar problem. On my macOS, Cmd + click is used for 'go to definition', and it suddenly stopped working. If that is the case, please follow these steps:
restart VS Code
restart your PC
uninstall all extensions and reinstall them again, followed by a PC restart
I had a similar issue with the C/C++ extension installed. I solved it by downloading an older version of the extension and then upgrading to the latest version. Somehow that solved the problem...

NetBeans always compiling from the beginning

I have a quite big C++ project in NetBeans. It takes about 3 minutes to compile (with -j5 mode enabled).
I'm using my VM server (FreeBSD) hosted on Windows 8 and using the SFTP option to compile.
Everything is working like a charm except that it looks like NetBeans is always doing a clean while compiling (no clean messages appear in the output console, though!). It's really annoying to wait 3 minutes for each change I make to my source code.
My friend had a similar issue some time ago; it was related to the NetBeans timestamp files (different time settings on the local and remote VM machine). In my case the VM machine's time setting is the same as on my PC.
I am currently running NetBeans version 7.3.1 (because later and latest versions have some odd SFTP issue and don't work correctly). I've also tried the latest beta build, as well as earlier versions, and it doesn't seem to solve my problem.
What's the problem? I will appreciate every solution.
There is an excellent article, "Make Dependency Checking", on this topic by the NetBeans team which is worth reading to understand this behaviour.
NetBeans internally uses the make utility for dependency checking, as defined in the Makefile. When we create a new project in NetBeans, it enables the "Full rebuild" feature. This leads to this particular behaviour.
However, if we want to avoid this, we can change this setting to "Incremental rebuild".
For complete information and to understand its consequences, please refer to the above article from the NetBeans team.

Is it possible to use a file for build configurations in TeamCity like TravisCI?

One great feature of TravisCI is the ability to add a configuration file to your repository which describes how Travis should run your build. Is something similar possible with TeamCity?
I've done some pretty heavy configuration for enterprise TeamCity servers so I'm fairly well versed in it, but I've never seen or heard of this type of configuration and so far as I can tell it's not in the docs. I'd be okay with a plugin as well, assuming it's stable.
It seems this feature was requested some time ago and is being tested in a newer version of TeamCity.
Full discussion: http://youtrack.jetbrains.com/issue/TW-2806
Relevant comment: http://youtrack.jetbrains.com/issue/TW-2806#comment=27-781368
Try out the EAP; this feature is already there.
The EAP is currently free: http://confluence.jetbrains.com/display/TW/TeamCity+EAP+(Latest)