I have a project which uses Yocto to build libraries, including GStreamer. I found out that I need to patch a GStreamer element, so I am creating a new bitbake recipe with a patch.
I usually have to run bitbake with the image name as a parameter, which rebuilds the whole Yocto image (and takes quite a long time):
MACHINE=some_machine nice bitbake yocto-etc-etc
How do I rebuild just the part I need and not the whole image?
I have heard about devtool, but I am not sure how to use it.
You can pass different commands to bitbake depending on what you need.
To remove the temporary build output:
bitbake -c clean gstreamer
To remove the build output and the sstate cache (this is the one I use most often):
bitbake -c cleansstate gstreamer
To remove the downloads as well, so that the next build starts again from do_fetch:
bitbake -c cleanall gstreamer
Once you have run whichever of these clean tasks suits you, you can simply build the recipe:
bitbake gstreamer
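Putting it together, a typical clean-and-rebuild cycle looks like this (the recipe name is an example; as another answer notes, in recent releases the main GStreamer recipe is called gstreamer1.0):
MACHINE=some_machine nice bitbake -c cleansstate gstreamer1.0
MACHINE=some_machine nice bitbake gstreamer1.0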
Certainly, this is easy to do. Just specify the recipe you want to build instead of the image name, for example if it was the main gstreamer recipe you had changed (which at least in current versions is called gstreamer1.0):
MACHINE=some-machine bitbake gstreamer1.0
Note that the name expected on the command line is always a recipe name or something from PROVIDES in a recipe, and not a runtime package name.
Regarding devtool, it can certainly put you into an environment where you can more easily make changes to the source for a recipe and generate patches from them, but the actual building part we are discussing here doesn't really change. You can find more information on how to use devtool in the Yocto Project Development Manual.
You can also use the clean tasks directly:
clean: removes all output files for a target
cleanall: removes all output files, the shared state cache, and the downloaded source files for a target
bitbake -c clean <recipe>
bitbake -c cleanall <recipe>
First, create a patch for the GStreamer source using quilt, diff, etc.
Put the patch into your meta layer and include it via SRC_URI += "file://xxxx.patch".
Make sure you have added the FILESEXTRAPATHS_prepend variable in the recipe's bbappend file.
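A minimal bbappend sketch for this (the patch file name is a placeholder, and the patch itself is assumed to live in a files/ directory next to the bbappend):
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI += "file://xxxx.patch"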
Then do a cleansstate of the recipe:
bitbake gstreamer -c cleansstate
Then execute the do_patch task and check that the patch has been applied properly:
bitbake gstreamer -c patch
Then do the full build of the component, followed by building the final image.
You can also launch just the tasks you are interested in. For example, if you want to apply only the patch, you can do something like:
# Apply the patch you previously added to the SRC_URI variable
MACHINE=some_machine nice bitbake -c patch gstreamer
# Compile the recipe
MACHINE=some_machine nice bitbake -c compile gstreamer
# If more tasks are necessary, launch them in the same way
Now you can take the generated package, copy it to your board (e.g. via ssh or serial (zmodem)), test it, and repeat until you like the result; then regenerate the image:
for i in clean cleanall cleansstate;do bitbake -c ${i} gstreamer;done
MACHINE=some_machine nice bitbake yocto-etc-etc
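As a sketch of copying the freshly built package to the board over ssh (the deploy path and package format depend on your configuration; ipk packages and opkg are assumed here):
scp tmp/deploy/ipk/*/gstreamer*.ipk root@<board-ip>:/tmp/
# then, on the board:
opkg install /tmp/gstreamer*.ipk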
You can build any specific recipe by providing the recipe name to the bitbake command.
For instance, if you want to build the gstreamer recipe at
poky/meta/recipes-multimedia/gstreamer1.0_1.16.3.bb
you can use the following command:
MACHINE=<your-machine-name> bitbake gstreamer1.0
Note that the recipe name (which also ends up in PROVIDES) is taken from the .bb file name, excluding the characters after the underscore (those are the version); so gstreamer1.0_1.16.3.bb gives the recipe name gstreamer1.0.
Additional suggestions
If you want to make experimental changes to the source and recompile after each small change, you can do that by navigating to the work directory:
cd build/tmp/work/armv5e-poky-linux-gnueabi/gstreamer1.0/1.16.3-r0/
Here you can apply your changes in the source directory and use ./temp/run.do_compile to compile, which takes far less time than a full build.
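Put together, the quick edit-and-compile loop looks roughly like this (the work directory path is the example above and will differ on your machine and recipe version):
cd build/tmp/work/armv5e-poky-linux-gnueabi/gstreamer1.0/1.16.3-r0/
# edit files under the unpacked source directory, then:
./temp/run.do_compile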
I have recently started using Yocto. I'm looking for a way to include an altered package in the final build image. I have described the scenario below.
I'm working on RDK, which is a Yocto-based system for an STB (set-top box) emulator. I have already built the complete system once. Now I'm making changes in a particular module, and to see their effect in the final image I rebuilt that module (at this point I learned that bitbake doesn't work like make, where you change something and it takes care of the rest so that your package is recompiled and included in the final image/binary). I used bitbake -c cleansstate <module_name>, then bitbake <module_name> to rebuild the package.
The next step was to get it into the final image, but for that I had to go through the same pain again: bitbake -c cleansstate <image_name>, then bitbake <image_name> to rebuild the image.
Basically, only one package has changed, yet to include it in the final image I have to create the complete image again, which is a very time-consuming process!
Is there any way I can reduce this build time and still get the altered package into the final image?
NOTE: I'm not looking for optimization options; I know about the BB_NUMBER_THREADS and PARALLEL_MAKE settings in local.conf. The question is just whether we can add a package to the final image without regenerating all of the image's dependencies, as described in the scenario.
Assuming by "making changes" you mean modifying the underlying code, I would suggest using devtool modify - this will set up a local source tree for the recipe where you can make your changes, and each time you make a change and then run bitbake on the recipe or something that depends upon it (such as your image) it will rebuild it including your changes. Basic steps:
devtool modify <recipe>
Make your changes within the source tree that is set up
bitbake <recipe> or bitbake <image>
Test the result; loop back to step 2 if you need to make further changes
devtool finish <recipe> to write your changes back as patches against the recipe
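A concrete sketch of that loop (the recipe and layer names are only examples):
devtool modify gstreamer1.0
# edit the sources that devtool checks out under workspace/sources/gstreamer1.0/
bitbake gstreamer1.0        # or bitbake <image>
# test, repeat as needed, then write the changes back as patches:
devtool finish gstreamer1.0 meta-mylayer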
It happened to me that after adding a recipe at meta/recipes-extended/myrecipe_0.0.1.bb
I was able to build my new recipe with the command
bitbake myrecipe
but the binaries never got included in the rootfs image when running
bitbake core-image-minimal
To add the output of my recipe to the output images, I've added the following to my ${BUILDDIR}/conf/local.conf file:
IMAGE_INSTALL_append = " myrecipe"
Having used CMake, I've become used to out-of-source builds, which are encouraged with CMake. How can out-of-source builds be done with Cargo?
Using in-source builds again feels like a step backwards:
Development tools need to be configured to ignore paths, sometimes across multiple plugins and development tools - especially when using Vim or Emacs.
Some tools can't easily be configured to hide build files. While dotfiles are typically hidden, such tools will still show Cargo.lock and target/, and worse, recursively expose their contents.
Deleting untracked files to remove everything outside version control (typically to clean up editor temp files or some test output) can backfire if you forgot to add a new file to version control and don't check the file list carefully before deleting.
Dependencies are downloaded into your source path, sometimes adding *.rs files in the target directory as part of building indirect dependencies, so operating on all *.rs files may accidentally pick up files which aren't in a hidden directory and might not be ignored even after development tools have been configured.
While it's possible to work around all these issues, I'd rather just have an external build path and keep the source directory pristine.
You can specify the directory of the target/ folder either via configuration file (key build.target-dir) or environment variable (CARGO_TARGET_DIR). Here is an example using a configuration file:
Suppose you want to have a directory ~/work/ in which you want to save the Cargo project (~/work/foo/) and next to it the target directory (~/work/my-target/).
$ cd ~/work
$ cargo new --bin foo
$ mkdir .cargo
$ $EDITOR .cargo/config
Then insert the following into the configuration file:
[build]
target-dir = "./my-target"
If you then build in your normal Cargo project directory:
$ cd foo
$ cargo build
You will notice that there is no target/ dir, but everything is in ~/work/my-target/.
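Alternatively, for a one-off build you can skip the configuration file and use the environment variable mentioned above, for example:
CARGO_TARGET_DIR=../my-target cargo build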
However, the Cargo.lock is still saved inside the Cargo project directory, but that kinda makes sense. For executables, you should check the Cargo.lock file into your git! For libraries, you shouldn't. I guess having to ignore one file is better than having to ignore an entire folder.
Lastly, there are a few caveats to changing the target-dir, which are listed in the PR which introduced the feature.
While useful, manually setting this up isn't all that convenient. I wanted to be able to build multiple crates within a source tree, with all of them out-of-source, something that a ../target-dir configuration option wouldn't achieve.
Helper utility for convenient out-of-source builds
Using the environment variable, I've written a small utility to wrap cargo so it automatically builds out-of-source, supporting crates both at the top level and in subdirectories of the source tree.
Thanks to Lukas for pointing out CARGO_TARGET_DIR and target-dir configuration option.
What I really wanted was a dynamic CARGO_TARGET_DIR that changes relative to where I am.
This bash alias puts all builds in a mirrored directory structure, e.g. instead of putting target into ~/mydir/myproj it puts it into ~/rustbuild/mydir/myproj:
alias cargo='CARGO_TARGET_DIR=$(echo $PWD | sed "s|$HOME|$HOME/rustbuild|g") cargo'
You could also make your rustbuild directory hidden.
I'm a beginner with the Yocto Project.
I would really like to know how to build the *.bb files which I added.
I added a .bb file (dlt-daemon) at meta-/meta-*/recipes-expends/dlt-daemon/dlt-daemon_v2.14.1.bb.
However, whenever I try to build the image (bitbake core-image-weston), it isn't built.
I tried to build the *.bb file on its own (bitbake -b ******/*.bb -c compile), but there is no output file in the rootfs. (I found the output files at build/tmp/work/arch****/dlt-daemon/2.14.1-r0/build/*****.)
I'm not sure why it doesn't work.
How can I build the *.bb files which I added?
Preferably, you should add your own recipes in your own layer.
Nevertheless, just adding a recipe (i.e. a .bb file) won't add it to any rootfs. If you can run
bitbake your-recipe
without getting any errors, your recipe is working as it should (there could still be some issues if you're not installing any files, etc.). You can confirm that it's working by looking at the logs for the different tasks (in ${WORKDIR}/<arch>/recipe-name/recipe-version/temp/).
Still, being able to build your recipe isn't enough for what you want. For the application in question to appear in your rootfs, you need to add it to your image. Temporarily, you can add the following line to your conf/local.conf:
IMAGE_INSTALL_append = " <package-name>"
Note the leading space. To make it permanent, you should add the <package-name> to IMAGE_INSTALL directly in your image recipe.
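In the image recipe itself, that could look like this (the image file name is an example, and dlt-daemon is the package from the question):
# in my-image.bb
IMAGE_INSTALL += "dlt-daemon"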
Open your local.conf file and add the line below (e.g. for a recipe hello.bb). Note the space before hello; this is what adds the package to your rootfs image:
IMAGE_INSTALL_append = " hello"
Then build your rootfs using bitbake core-image-minimal.
I would like to edit an existing piece of software to add a new source file (Source.cpp).
But I can't manage the build process (it seems to use automake and looks very complicated).
The software (iperf 2: https://sourceforge.net/projects/iperf2/files/?source=navbar) is compiled using the classic ./configure, make, then make install.
If I just add the file to the corresponding source and include directories, I get this error message:
Settings.cpp:(.text+0x969) : undefined reference to ...
It looks like the makefile isn't able to produce the output file associated with my new source file (Source.cpp). So, I probably need to indicate it manually somewhere.
I searched a bit in the project files and it seemed that the file to edit was: "Makefile.am".
I added my source file to the iperf_SOURCES variable in that file, but it didn't work.
Could you help me find the file where I need to declare my new source file? It seems a pretty standard build scheme, but I have never used automake-based software and this one seems very complicated.
Thank you in advance
This project is built with the autotools, as you already figured out.
The makefiles are built by automake. It takes its input from files that usually have an .am file name extension.
The iperf program is built by the makefile generated from src/Makefile.am. This is indicated by:
bin_PROGRAMS = iperf
All source files of a binary to be built (actually this is a simplification, but it holds in this case) are listed in the corresponding name_SOURCES variable, in this case iperf_SOURCES. Just add your source file to the end of that list, like so (keeping the existing formatting):
iperf_SOURCES = \
Client.cpp \
# lines omitted
tcp_window_size.c \
my_new_file.c
Now, to reflect this change in any future generated src/Makefile you need to run automake. This will modify src/Makefile.in, which is a template used by config.status at the end of configure to generate the actual makefile.
Running automake can happen in various ways:
If you already have makefiles that were generated by an earlier configure run, these should take care of rebuilding themselves. This seems to fail sometimes, though!
You could run automake (in the top-level directory) by hand. I've never done this, as there is a better solution:
Run autoreconf --install (possibly add --force to the arguments) in the top level directory. This will regenerate the entire build system, calling all needed programs such as autoheader, autoconf and of course automake. This is my favorite solution.
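For that last option, the command, run in the top-level source directory, is simply:
autoreconf --install   # optionally add --force, as mentioned above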
The latter two options require calling configure again, IMO ideally doing an out-of-source build:
# in top level dir
mkdir build
cd build
../configure # arguments
make # should now also compile and link your new source file
In relatively big projects which use plain old make, building the project even when nothing has changed takes a few tens of seconds, especially with many executions of make -C, each of which has new-process overhead.
The obvious solution to this problem is a build tool based on an inotify-like feature of the OS. It would watch for file changes and, based on that list, compile only the affected files.
Is there such machinery out there? Bonus points for open source projects.
You mean like Tup?
From the home page:
"Tup is a file-based build system - it inputs a list of file changes and a directed acyclic graph (DAG), then processes the DAG to execute the appropriate commands required to update dependent files. The DAG is stored in an SQLite database. By default, the list of file changes is generated by scanning the filesystem. Alternatively, the list can be provided up front by running the included file monitor daemon."
I am just wondering whether it is stat()ing the files that takes so long. To check this, here is a small SystemTap script I wrote to measure the time it takes to stat() files:
# count-calls.stp
global calls, times

probe kernel.function(@1) {
    # record entry time of the probed function (name given as the first script argument)
    times[probefunc()] = gettimeofday_ns()
}

probe kernel.function(@1).return {
    # on return, accumulate the time spent in the function
    now = gettimeofday_ns()
    delta = now - times[probefunc()]
    calls[probefunc()] <<< delta
}
And then use it like this:
$ stap -c "make -rC ~/src/prj -j8 -k" ~/tmp/count-calls.stp sys_newstat
make: Entering directory `/home/user/src/prj'
make: Nothing to be done for `all'.
make: Leaving directory `/home/user/src/prj'
calls["sys_newstat"] #count=8318 #min=684 #max=910667 #sum=26952500 #avg=3240
The project I ran it upon has 4593 source files and it takes ~27msec (26952500nsec above) for make to stat all the files along with the corresponding .d files. I am using non-recursive make though.
If you're using OS X, you can use fswatch:
https://github.com/alandipert/fswatch
Here's how to use fswatch to watch for changes to a file and then run make if it detects any:
fswatch -o anyFile | xargs -n1 -I{} make
You can run fswatch from inside a makefile like this:
watch: $(FILE)
	fswatch -o $^ | xargs -n1 -I{} make
(Of course, $(FILE) is defined inside the makefile.)
make can now watch for changes in the file like this:
> make watch
You can watch another file like this:
> make watch FILE=anotherFile
Install inotify-tools and write a few lines of bash to invoke make when certain directories are updated, for example:
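A minimal sketch, assuming your sources live under src/ (adjust the paths and events to your project):
while inotifywait -r -e modify,create,delete src/; do
    make
done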
As a side note, recursive make scales badly and is error prone. Prefer non-recursive make.
The change-dependency you describe is already part of Make, but Make is flexible enough that it can be used in an inefficient way. If the slowness really is caused by the recursion (make -C commands) -- which it probably is -- then you should reduce the recursion. (You could try putting in your own conditional logic to decide whether to execute make -C, but that would be a very inelegant solution.)
Roughly speaking, if your makefiles look like this
# main makefile
foo:
	make -C bar baz
and this
# makefile in bar/
baz: quartz
	do something
you can change them to this:
# main makefile
foo: bar/quartz
	cd bar && do something
There are many details to get right, but now if bar/quartz has not been changed, the foo rule will not run.