Eclipse CDT: Run/debug settings don't see system environment variables - C++

I have a setup where I use my own makefiles to build a C++ project from within Eclipse. I launch Eclipse from a shell on Unix.
I was able to set up the build without a problem because the environment variables (inherited from the shell) were listed in the build configuration's variable list.
But the run/debug settings seem to see only the native Eclipse variables; the system environment variables are not included in the variable list I get within the run/debug configurations.
What am I missing? I want to run a script before launching Eclipse to set an environment variable that Eclipse can inherit and use to find the path to the executable. I can pass this to the build configuration, but not to the run/debug configuration.
I don't want to put an absolute path because I am part of a development team and I want others who are in different branches/streams to reuse my setup seamlessly.
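One workaround, since run/debug configurations inherit the environment of the Eclipse process itself, is to launch Eclipse through a small wrapper script so the variable is guaranteed to be set before Eclipse starts. A minimal sketch, where MY_PROJECT_BIN and its value are hypothetical names used only for illustration:

#!/bin/sh
# export the per-branch location of the built executable, then launch Eclipse
export MY_PROJECT_BIN="$HOME/work/mybranch/out/bin"
exec eclipse "$@"

If your CDT version supports string substitution in launch configurations, you may then be able to refer to the value as ${env_var:MY_PROJECT_BIN} in the run/debug configuration instead of hard-coding a path.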

Related

Should I move windows project folder into WSL?

I'm trying to set up a work environment on a new machine and I am a bit confused about how best to proceed.
I've set up a new Windows machine and have WSL2 set up; I plan on using that with VS Code for my development environment.
I have a previous Django project that I want to continue working on, stored in a folder on a thumb drive.
Do I move the [windows] project folder into the linux folder system and everything is magically ready to go?
Will my previous virtual environment in the existing folder still work or do I need to start a new one?
Is it better to just start a new folder via linux terminal and pull the project from github?
I haven't installed pip, python, or django on the windows OR linux side just yet either.
Any other things to look out for while setting this up would be really appreciated. I'm trying to avoid headaches later by getting it all set-up correctly now!
I would pull it from GitHub and make sure you have the correct settings for line endings, since they differ between Windows and Linux. Just let Git manage these, though:
https://docs.github.com/en/get-started/getting-started-with-git/configuring-git-to-handle-line-endings
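On the WSL side that typically comes down to a single setting, roughly what the linked page recommends for Linux (a hedged example):

git config --global core.autocrlf input

This converts any CRLF to LF when you commit and leaves files untouched on checkout, so the repository stays LF-only.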
Some other suggestions:
Use a version manager in Linux to manage your Python versions, something like pyenv or asdf. It will make life easier. (There is a short sketch of this setup after these suggestions.)
Make sure to always create a virtual environment for everything, and don't pip install anything into your main Python. (I use direnv for virtualenv management.)
The single exception to the previous suggestion is pipx, which I do install in the main Python and then use to install CLI tools such as black, isort, pip-tools, etc.
Configure VS Code to use the pipx-installed versions of black, flake8, etc. for linting purposes.
If you're using Docker, enable the WSL integration for your WSL flavour (probably Ubuntu). Note that Docker Desktop needs to be started before your WSL session.
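To make those suggestions concrete, here is a rough sketch of the setup on the WSL side; the Python version, project path, and tool list are only placeholders:

# pipx in the "main" Python, for global CLI tools only
python3 -m pip install --user pipx    # or: sudo apt install pipx on newer Ubuntu
pipx install black
pipx install isort

# manage Python versions with pyenv
pyenv install 3.11.8
cd ~/projects/myapp
pyenv local 3.11.8

# per-project virtual environment; never pip install into the main Python
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt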

C++ Remote development IDE supporting shell scripts

We have a legacy C++ web application project which does not have a makefile; instead there are a number of shell scripts which read parameters from a txt file and build the whole project, or individual files along with their dependencies.
Is there a way we can use an IDE (NetBeans, Eclipse, etc.) that supports C++ remote development on Linux (the target) to build the project by running these shell scripts rather than make?
Is there any way we can modify the build process to allow us to use these existing scripts?
Cheers,
Ash
Most IDEs support "custom" targets or projects where you could write some custom commands to be executed when building. You could use that to have a script call the remote commands through SSH or similar techniques.
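As a rough illustration (the host, user, paths, and script names here are all hypothetical), the custom build command could just be a one-line wrapper that runs the legacy scripts on the target over SSH:

#!/bin/sh
# invoke the existing legacy build scripts on the remote Linux target
ssh builduser@build-target 'cd /opt/legacy-webapp && ./build_all.sh build_params.txt'

Pointing the IDE's custom/external build command at a wrapper like this leaves the existing scripts untouched.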

How can I configure KDevelop to build before executing?

In most IDEs (e.g. Visual Studio, all the Java IDEs, etc.) the sources are built by default when the user chooses to run or debug the application. How can I configure KDevelop to do the same?
Go to the Run->Launch Configurations menu item.
There you can configure a launch configuration, with arguments, working directory, etc.
At the bottom, there is a dependencies block, where you can specify a build target that should be run before the execution of the launch configuration.

How to change Gradle download location

I am starting to learn Gradle.
However, when I am building Spring with Gradle, it downloads the dependency jars to
C:\Users\UserName\.gradle
Is there any way I can tell Gradle to download the dependency jars to a specific location?
Just like I can specify the repository location in Maven.
System information:
Windows 7 64bit
Gradle version 1.0
You can set the GRADLE_USER_HOME environment variable, the gradle.user.home system property, or the --gradle-user-home command-line parameter.
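For example, either per invocation or persistently; the D:\gradle-home location is only a placeholder. For a single build:

gradle build --gradle-user-home=D:/gradle-home

Or persistently on Windows (open a new command prompt afterwards; on Unix you would use export in your shell profile instead):

setx GRADLE_USER_HOME "D:\gradle-home"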
In Android Studio, go to File > Settings > Build, Execution, Deployment > Gradle > Service directory path and choose the directory you want.
In Eclipse, you can also go to Window -> Preferences -> Gradle and change the directory there.
You can add the following line to your gradle.properties:
systemProp.gradle.user.home=/tmp/changed-gradle
Steps:
Set the GRADLE_USER_HOME environment variable to the new path.
In Android Studio, go to File > Settings > Build, Execution, Deployment > Gradle > Service directory path and choose the directory you want.
Restart the PC (an important step that, surprisingly, no one has mentioned).
If you are using the Gradle plugin in Eclipse and trying to import a Gradle project, then your Gradle home is set to
C:\Users\UserName\.gradle
In some cases importing the build model will not work because of a permission issue on your user directory.
In that case you can copy your .gradle directory from
C:\Users\UserName\.gradle
paste it into some directory where you have full permissions, and import the project.
In my case I moved my .gradle dir to the Z: drive, then imported the project, built the model, and it worked.
If you want to run Gradle tasks through the IDE:
You can also set the path in IntelliJ under File > Settings:
Then restart the IDE.
If you want to run Gradle tasks through the command line, you have to set GRADLE_USER_HOME in the system environment variables. Then restart the PC, as others have said.

Tomcat + Hudson and testing a Django Application

I'm using Hudson for the expected purpose of testing our Django application. In initial testing, I would deploy Hudson using the war method:
java -jar hudson.war
This worked great. However, we wanted to run the Hudson instance on Tomcat for stability and better flexibility for security.
However, now that Hudson is running on Tomcat, it does not seem to recognize previously-available Python tools like virtualenv. Here's the output from a test:
+ bash ./config/testsuite/hudson-build.sh
./config/testsuite/hudson-build.sh: line 5: virtualenv: command not found
./config/testsuite/hudson-build.sh: line 6: ./ve/bin/activate: No such file or directory
./config/testsuite/hudson-build.sh: line 7: pip: command not found
virtualenv and pip were both installed using sudo easy_install. Where are they?
virtualenv: /usr/local/bin/virtualenv
pip: /usr/local/bin/pip
Hudson now runs under the tomcat6 user. If I su into the tomcat6 user and check for virtualenv, it recognizes it. Thus, I am at a loss as to why it doesn't recognize it there.
I tried removing the commands from the script and placing them line by line into the shell execute box in Hudson, and still the same issue.
Any ideas? Cheers.
You can configure your environment variables globally via Manage Hudson ->
Environment Variables or per machine via Machine -> Configure ->
Environment Variables (or per build with the Setenv plugin). It sounds like
you may need to set the PATH and PYTHONPATH appropriately; at least that's the
simple solution.
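If the job runs a shell build step, the quickest fix is often to make the PATH explicit at the top of that script. A small sketch, assuming the /usr/local/bin locations shown in the question and a hypothetical requirements file:

#!/bin/bash
# make the easy_install'ed tools visible under the tomcat6 user's environment
export PATH=/usr/local/bin:$PATH
virtualenv ve
. ./ve/bin/activate
pip install -r requirements.txt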
Edited to add: I feel as though the following is a bit of a rant, though not really directed at you or your situation. I think that you already have the right mindset here since you're using virtualenv and pip in the first place -- and it's not unreasonable for you to say, "we expect our build machines to have virtualenv and pip installed in /usr/local," and be done with it. Take the rest as you will...
While the PATH is a simple thing to set up, having different build
environments (or relying on a user's environment) is an integration "smell".
If you depend on a certain environment in your build, then you should either
verify the environment or explicitly set it up as part of the build. I put
environment setup in the build scripts rather than in Hudson.
Maybe your only assumption is that virtualenv and pip are in the PATH (because
those are good tools for managing other dependencies), but build
assumptions tend to grow and get forgotten (until you need to set up a new
machine or user). I find it useful to either have explicit checks, or refer to
explicit executable paths that are part of my defined build environment. It is
especially useful to have an explicitly defined environment when you have
legacy builds or if you depend on specific versions of your build tools.
As part of builds where I've had environment problems (especially on Windows with Cygwin), I print the environment as the first build step. (But I tend to be a little paranoid, er, proactive.)
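A hedged example of what that first build step can look like (the tool names are just the ones from this question):

# dump the environment, then fail fast if the expected tools are missing
env | sort
command -v virtualenv || { echo "virtualenv not on PATH"; exit 1; }
command -v pip || { echo "pip not on PATH"; exit 1; }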
I don't mean to sound so preachy, I'm just trying to share my perspective.
Just to add to Dave Bacher's comment:
If you set your PATH in .profile, it is most likely not executed when running Tomcat. The .profile (or whatever the name is on your system) is only executed when you have a login shell. To set the necessary environment variables, you have to use a different set of files. Sometimes they are called .env, and they exist at the global and user level. In my environment (AIX), the user-level .env file can have a different name (the name is set in the ENV variable, either in the global environment file (e.g. /etc/environment) or by a parameter when starting the shell).
Disclaimer: This is for the IBM AIX ksh, but should be the same for ksh on other systems.
P.S. I just found a nice explanation for .profile and .env from the HP site. Notice that they speak of a login shell (!) when they speak about the execution of the .profile file.
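For the Tomcat case specifically, rather than relying on shell startup files at all, one common option is a setenv script that Tomcat's catalina.sh sources on startup if it exists; a hedged sketch (the PYTHONPATH value is purely illustrative):

#!/bin/sh
# $CATALINA_BASE/bin/setenv.sh - picked up by catalina.sh when Tomcat starts
export PATH=/usr/local/bin:$PATH
export PYTHONPATH=/usr/local/lib/python2.6/site-packages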