How to rename bazelisk to bazel

I am currently trying, without great success, to build tensorflow from source.
As suggested here: https://www.tensorflow.org/install/source, I tried to do so by installing Bazelisk. Unfortunately, that didn't work, because the configure script cannot find bazel when only a bazelisk binary is installed.
This issue: https://github.com/bazelbuild/bazelisk/issues/122 suggests aliasing bazelisk or renaming the binary to "bazel" somewhere on the PATH.
As described in the issue above, aliasing did not work for configure.py.
My next step would be to rename the binary, but unfortunately I was not able to figure out how that renaming works under Linux.
I did add the following line to the .profile in my home directory:
export PATH=$PATH:$(go env GOPATH)/bin
As I understand it, this adds the directory containing the Bazelisk binary to my PATH, but I am not sure how the renaming would work in this situation.
Would it be possible to explain how I could proceed?

Download the bazelisk binary from the releases page and save the file as bazel in a directory somewhere in your $PATH.
For example, if you have export PATH=$PATH:$HOME/bin in your .profile/.bashrc/.bash_profile, store the bazelisk binary as $HOME/bin/bazel.
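A minimal sketch of that, assuming you install Bazelisk from a GitHub release (the version and URL below are examples; pick the current one from the releases page):
mkdir -p "$HOME/bin"
# download a bazelisk release and save it under the name "bazel"
curl -L https://github.com/bazelbuild/bazelisk/releases/download/v1.19.0/bazelisk-linux-amd64 -o "$HOME/bin/bazel"
chmod +x "$HOME/bin/bazel"
# make sure $HOME/bin is on the PATH, e.g. in ~/.profile:
export PATH="$PATH:$HOME/bin"
bazel version   # runs Bazelisk, which fetches and runs the matching Bazel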

You have two more options:
sudo ln -s /usr/local/bin/bazelisk /usr/local/bin/bazel, which creates a symlink to bazelisk (personally I prefer this, because it is more explicit)
alias bazel='bazelisk' in your ~/.zshrc, ~/.bashrc or ~/.profile. This also works well, but there could be issues if you want to run tools like vim-bazel.

Related

Github Actions path does not update

Right now, I'm trying to build a tool from source and use it to build a C++ project. I'm able to extract the tar file (gcc-arm-none-eabi). But when I try to add it to the PATH (using $GITHUB_PATH, not the deprecated add-path), the change doesn't apply in my next action and I can't build the project. The error states that it can't find the gcc-arm-none-eabi toolchain, which means it never made it onto the PATH.
Here's the script for the entrypoint of the first action (make is run in the next action to allow the PATH change to apply):
echo "Downloading ARM Toolchain"
# The one from apt isn't updated so I have to build from source
curl -L https://developer.arm.com/-/media/Files/downloads/gnu-rm/10-2020q4/gcc-arm-none-eabi-10-2020-q4-major-x86_64-linux.tar.bz2 -o gcc-arm-none-eabi.tar.bz2
tar -xjf gcc-arm-none-eabi.tar.bz2
echo "/github/workspace/gcc-arm-none-eabi-10-2020-q4-major/bin" >> $GITHUB_PATH
I can't even debug by seeing what's in the path because running echo $(PATH) just says that PATH cannot be found. What should I do?
First, PATH is not a command, and $(PATH) tries to run it as one. If you want to print its value, use something like echo "${PATH}" or echo "$PATH".
Then, if you want to add a value to an existing environment variable, it would be something like
export PATH="${PATH}:/github/workspace/gcc-arm-none-eabi-10-2020-q4-major/bin"
EDIT: exporting PATH this way is not a valid way to make something visible to later steps in GitHub Actions; the $GITHUB_PATH approach shown in the question is the correct one. For more details: https://docs.github.com/en/free-pro-team#latest/actions/reference/workflow-commands-for-github-actions#adding-a-system-path . Thanks to Benjamin W. for pointing this out in the comments.
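To make the difference concrete, here is a minimal sketch (the toolchain path comes from the question; splitting the work across steps like this is an assumption):
# within a single step, export only affects the rest of that step's shell
export PATH="${PATH}:/github/workspace/gcc-arm-none-eabi-10-2020-q4-major/bin"
arm-none-eabi-gcc --version    # works in this step only
# to make the directory visible to later steps, append it to $GITHUB_PATH instead
echo "/github/workspace/gcc-arm-none-eabi-10-2020-q4-major/bin" >> "$GITHUB_PATH"
# a later step can then verify with:
echo "$PATH"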
Finally, I think it would be a better fit to use a Docker image that already contains that kind of dependency (you could easily write your own Dockerfile if such an image doesn't already exist). GitHub Actions is designed to use Docker (or OCI container) images that contain the dependencies you need to perform your build actions. You should take a look here: https://docs.github.com/en/free-pro-team#latest/actions/creating-actions/dockerfile-support-for-github-actions

How to use SSTATE_DUPWHITELIST variable in yocto

I'll try to explain it as simply as I can. I tried to include and build package "A" in my Yocto image, but package A depends on libftdi and ftdi-eeprom, and "ftdi-eeprom" in turn depends on "libftdi".
In the newer versions of "libftdi" the tarball also includes the ftdi-eeprom sources, and building libftdi builds both packages. However, because of the way package "A" is configured, I need two separate recipes, one for each dependency.
Long story short, I wrote the two BitBake recipes as best I could and successfully built "libftdi". Now when I run the "ftdi-eeprom" recipe, it wants to populate the sysroot with some files that libftdi already installed there. That is where the error occurs... duplicates!
Apparently I need to set the SSTATE_DUPWHITELIST variable to declare that these duplicate files may safely replace the old ones in the image (this overwrite must happen). Can someone please help me with configuring SSTATE_DUPWHITELIST? I am not that experienced with Yocto.
Errors that I get on screen are uploaded in Dropbox
Thanks in advance!
The answer is to not use SSTATE_DUPWHITELIST for this at all. Instead, in the libftdi recipe's do_install (or do_install_append, if the recipe itself doesn't define its own do_install) you should delete the duplicate files from within ${D} and then they won't get staged and the error won't occur.
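A minimal sketch of that approach (the file names are hypothetical; delete whatever paths the sstate error actually reports as duplicated):
# in the libftdi recipe, leave the eeprom tool to the ftdi-eeprom recipe
do_install_append() {
    # hypothetical duplicates -- replace with the paths from the error message
    rm -f ${D}${bindir}/ftdi_eeprom
    rm -rf ${D}${docdir}/ftdi_eeprom
}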
I got it to work by using:
SSTATE_DUPWHITELIST = "/"
Don't forget the quotes. Here's my .bb excerpt:
SSTATE_DUPWHITELIST = "/"
DEPENDS = ""

do_unpack() {
    mkdir -pv ${S}
    tar xvf ${DL_DIR}/${FILENAME}.tar -C ${S}
}

do_install() {
    install -d -m 755 ${D}${includedir}
    install -m 644 ${S}/${MYPATH}/inc/myHeader1.h ${D}${includedir}
    install -m 644 ${S}/${MYPATH}/inc/myHeader2.h ${D}${includedir}
    install -m 644 ${S}/${MYPATH}/inc/myHeader3.h ${D}${includedir}
}
I managed to solve this problem by adding the SSTATE_DUPWHITELIST to the bitbake recipe of the package as follows:
SSTATE_DUPWHITELIST = "${TMPDIR}/PATH/TO/THE/FILES"
I added the absolute paths of all six or seven conflicting files to the list. I did that because they were basically coming from the same source, so it was safe to do. Correct me if there is a better way, though.
Hope this helps someone!

How to install 2 OpenCV versions on one Ubuntu machine and how to activate one at a time for compilation?

I have installed two versions of OpenCV on my Ubuntu 12.04 machine, one in /usr/local/ (OpenCV 3.0.0) and another in /usr/ (OpenCV 2.4.9).
To activate a particular version I am using these commands in the terminal.
Example: to activate OpenCV 2.4.9,
sudo sh -c 'echo "/usr/" > /etc/ld.so.conf.d/opencv.conf' (shell script)
sudo ldconfig
export PKG_CONFIG_PATH=/usr/lib/pkgconfig
After executing these commands the reported version changes; I checked with pkg-config --modversion opencv.
Then I compiled my code and checked the linked libraries with the ldd command: it lists the OpenCV 3.0.0 libraries, not 2.4.9.
Please help with the correct way of switching OpenCV versions.
Thanks in advance
Thank you. I found a solution for this problem, but I am not sure whether the solution I found is the correct way or not. It is working fine for me, though.
When we install two versions of OpenCV in different locations, we end up with two opencv.pc files, each at {path}/lib/pkgconfig/opencv.pc.
In the above example, OpenCV 2.4.9's opencv.pc file is at /usr/lib/pkgconfig/opencv.pc,
and OpenCV 3.0.0's opencv.pc file is at /usr/local/lib/pkgconfig/opencv.pc.
When we compile code, it searches both locations for an opencv.pc configuration file and uses whichever it finds first, ignoring the other.
So if we want to compile code against a particular version, we need to remove the other version's opencv.pc file.
If you want to use OpenCV 2.4.9, remove (or rename) opencv.pc in OpenCV 3.0.0's lib/pkgconfig/ location. Likewise, to activate OpenCV 3.0.0 again, put its opencv.pc back in its lib/pkgconfig/ location and remove OpenCV 2.4.9's opencv.pc from /usr/lib/pkgconfig/.
If somebody knows a better way to do this, please comment.
You can still keep both versions installed and append the path of the version you want to use to your environment path.
If you don't know how to change system path check this ( How to permanently set $PATH on Linux? )
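A minimal per-shell sketch of that idea, using the install locations from the question (which variables your build actually honours is an assumption):
# select OpenCV 2.4.9 (installed under /usr) for this shell only
export PKG_CONFIG_PATH=/usr/lib/pkgconfig
pkg-config --modversion opencv               # should now report 2.4.9
g++ main.cpp $(pkg-config --cflags --libs opencv) -o app
# make sure the matching libraries are also found at run time
export LD_LIBRARY_PATH=/usr/lib:$LD_LIBRARY_PATH
ldd ./app | grep -i opencv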

"Missing separate debuginfos" in non-root account

I have the same problem as reported here:
Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.47.el6_2.9.i686 libgcc-4.4.6-3.el6.i686 libstdc++-4.4.6-3.el6.i686
However, I am not the root user, so I can't just run debuginfo-install .... I was wondering if there's a relatively easy way for me to get these libraries and add a path to them in my home directory without using a root account.
There is a way, though I'm not sure I would call it easy. The essential idea is to install the files in your $HOME and then tell gdb how to find them.
The steps are like:
Download the RPMs.
Install them somewhere in $HOME. Sometimes you can do this with rpm -i --prefix=..., though I don't know if that will work for debuginfo RPMs. You can always extract the files from an RPM using cpio. Be sure to preserve the directory names.
In gdb, use set debug-file-directory to tell gdb to look in your new directory. You can put multiple directories here by separating them with the path separator (: on Linux).
Some more fiddling with source directories (see dir) might be needed after this.
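A sketch of those steps, assuming a glibc debuginfo RPM matching the error message (the package file name and /home/yourname are placeholders):
mkdir -p ~/debuginfo && cd ~/debuginfo
# extract the RPM without root, preserving directory names
rpm2cpio glibc-debuginfo-2.12-1.47.el6_2.9.i686.rpm | cpio -idmv
# the debug files end up under ~/debuginfo/usr/lib/debug
Then inside gdb:
(gdb) set debug-file-directory /home/yourname/debuginfo/usr/lib/debug:/usr/lib/debug
(gdb) show debug-file-directory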
It's maybe worth noting that you normally don't actually need system debuginfo.

Using %{buildroot} in a SPEC file

I'm creating a simple RPM installer; I just have to copy files to a directory structure I create in the %install step.
The %install step is fine: I create the folder /opt/company/application/ with the command mkdir -p %{buildroot}/opt/company/%{name} and then copy the files and subdirectories from my package into it. I've tried installing it and it works.
The doubt I have comes when uninstalling. I want to remove the folder /opt/company/application/, and I thought you're supposed to use %{buildroot} whenever you reference the install location, because my understanding is that the user might have a different structure and you can't assume that rmdir /opt/company/%{name}/ will work. Using that command in the %postun section successfully deletes the directories, whereas using rmdir %{buildroot}/opt/company/%{name} doesn't delete the folders.
My question is: shouldn't you be using %{buildroot} in %postun in order to get the proper install location? If that's not the case, why?
Don't worry about it. If you claim the directory as your own in the %files section, RPM will handle it for you.
FYI, %{buildroot} probably won't exist on the target machine.
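A minimal sketch of what that looks like in the spec file (the directory layout is the one from the question; the source file names are hypothetical):
%install
mkdir -p %{buildroot}/opt/company/%{name}
cp -r files/* %{buildroot}/opt/company/%{name}/

%files
# claiming the directory means RPM installs it and removes it
# (together with its listed contents) on uninstall; no %postun is needed
/opt/company/%{name}/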