Debugging Yocto Builds

How to inspect BitBake output, read task logs, and troubleshoot common recipe and build failures

One of the most important Yocto skills is learning how to debug a build calmly and methodically.

When a build fails, the real cause is often earlier in the process than the final error message suggests. A compile failure might really come from a bad patch. A packaging failure might really come from files being installed into the wrong directory. A missing dependency might show up only once a later task runs.

The goal is not to memorise every error. It is to build a routine that helps you find the real cause quickly.

Start with the Failing Task

The first question is which task actually failed.

BitBake usually tells you this near the end of the output.

Common examples are:

  • do_fetch
  • do_patch
  • do_configure
  • do_compile
  • do_install
  • do_package
  • do_rootfs

Each task points to a different category of problem.

For example:

  • a do_fetch failure often means a bad source URL, missing credentials, or incorrect SRCREV
  • a do_patch failure usually points to patch drift or the wrong source tree
  • a do_compile failure often means missing dependencies, wrong build flags, or upstream source errors
  • a do_install failure usually means paths, permissions, or install logic are wrong
  • a do_rootfs failure often means package selection or dependency resolution is broken

Where BitBake Writes Debug Information

Most useful debugging information lives in the recipe work directory.

You will usually find it under a path like:

tmp/work/<machine>/<recipe>/<version>/

Important places inside that directory often include:

temp/

Task logs and generated run scripts.

the source tree

The unpacked and possibly patched source that BitBake is actually building. This is the directory the S variable points to, often named git/ or <recipe>-<version>/.

image/

Files staged by do_install() before packaging.

packages-split/

The per-package output after packaging logic has run.

If you are not sure what BitBake really did, this directory is often where the answer lives.
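
If you are not sure where that directory is for a particular recipe, BitBake can tell you. A quick sketch, using "mytool" as a placeholder recipe name:

```shell
# Print the exact work directory BitBake is using for this recipe:
bitbake -e mytool | grep '^WORKDIR='

# Then look around in it, for example at the task logs:
ls tmp/work/*/mytool/*/temp/
```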

The Most Useful Log Files

Inside temp/, you will usually find files such as:

log.do_compile
run.do_compile
log.do_install
run.do_install

These two file types are especially useful:

log.*

Shows the output produced while the task ran.

run.*

Shows the generated shell script or command sequence BitBake used for the task.

The short error message at the end of a build is often only a summary. The real details are usually in log.do_*.
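
In practice that means searching the full task log rather than trusting the summary. A sketch, with an illustrative work-directory path:

```shell
# Find the first real error in the compile log; the summary BitBake
# prints at the end is often several steps removed from it:
grep -n -m1 -i 'error' tmp/work/*/mytool/*/temp/log.do_compile

# Then read the surrounding context:
less tmp/work/*/mytool/*/temp/log.do_compile
```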

Use Task-Specific Commands

You do not always need to rerun the whole build.

If you want to focus on one stage, you can run a specific task:

bitbake -c fetch mytool
bitbake -c patch mytool
bitbake -c configure mytool
bitbake -c compile mytool
bitbake -c install mytool

This is often faster and clearer than repeatedly running a full image build while you are still isolating the problem.

It also lets you match the task that failed to the logs that matter.
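
Two related options are worth knowing while iterating (again with "mytool" as a placeholder). BitBake skips tasks it considers up to date, so -f forces a rerun, and cleansstate discards the recipe's previous results:

```shell
# Force one task to run again even though BitBake thinks it is current:
bitbake -c compile -f mytool

# Wipe the recipe's build output and shared-state entries, then rebuild:
bitbake -c cleansstate mytool
bitbake mytool
```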

Use bitbake -e to Understand Final Metadata

Many Yocto problems are really metadata problems.

When that happens, one of the best tools is:

bitbake -e mytool

This prints the expanded environment for the recipe.

It helps you answer questions like:

  • what is the final value of S?
  • what does SRC_URI expand to?
  • which override changed DEPENDS?
  • what did a class add to do_install()?
  • what is the final package list?

The output is large, but it is extremely useful when you suspect the recipe is not being interpreted the way you expect.
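
Rather than reading the whole dump, it is usually filtered. A sketch:

```shell
# Check the final values of a few suspect variables:
bitbake -e mytool | grep -E '^(S|SRC_URI|DEPENDS)='

# The comment lines directly above each assignment record which
# configuration files and overrides set or modified the variable:
bitbake -e mytool | less
```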

A Practical Debugging Routine

When a build fails, a reliable sequence is:

  1. Identify the failing task.
  2. Read the corresponding log.do_* file.
  3. Check the generated run.do_* file if you need to see the exact commands.
  4. Inspect the recipe work directory to see the real source tree or staged files.
  5. Use bitbake -e if you suspect a metadata or override problem.
  6. Change the smallest thing that matches the evidence.

This prevents guesswork.

A lot of Yocto debugging becomes easier once you stop treating the system as a black box and start treating it as:

  • metadata
  • task execution
  • staged output
  • packaged results

Common Failure Types

It helps to recognise the most common failure categories.

Fetch failures

These often come from:

  • incorrect URLs
  • wrong branch names
  • invalid SRCREV
  • missing authentication
  • unavailable upstream servers

Typical checks:

  • verify SRC_URI
  • verify SRCREV
  • confirm that the protocol and branch are correct
  • run bitbake -c fetch <recipe>
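
For comparison, a healthy git-based fetch configuration typically looks something like this (the URL, branch, and revision below are placeholders):

```
# Hypothetical recipe fragment: the protocol, branch, and pinned
# revision must all agree with what actually exists upstream.
SRC_URI = "git://example.com/mytool.git;protocol=https;branch=main"
SRCREV = "0123456789abcdef0123456789abcdef01234567"
```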

Patch failures

These usually mean:

  • the patch no longer matches the source
  • the patch is for the wrong version
  • the patch path in SRC_URI is wrong
  • the source tree is not what you thought it was

Typical checks:

  • inspect log.do_patch
  • inspect the unpacked source tree
  • confirm the patch file is actually being picked up
  • check whether S points at the directory you expected
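
Concretely, that usually means looking at the log and the source tree side by side:

```shell
# See which patch failed and at which hunk:
cat tmp/work/*/mytool/*/temp/log.do_patch

# Confirm where S actually points before judging the patch wrong:
bitbake -e mytool | grep '^S='
```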

Configure failures

These often come from:

  • the wrong build class being used
  • missing configure dependencies
  • bad configure flags
  • building in the wrong source or build directory

Typical checks:

  • inspect log.do_configure
  • confirm whether the project uses Autotools, CMake, Meson, or something else
  • inspect S and related variables with bitbake -e
  • check variables such as EXTRA_OECONF
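
If the project is Autotools-based, the relevant recipe fragment often looks like this (the flag shown is hypothetical):

```
# Hypothetical recipe fragment for an Autotools project:
inherit autotools

# Extra arguments passed to ./configure by the autotools class:
EXTRA_OECONF = "--disable-docs"
```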

Compile failures

These often come from:

  • missing headers or libraries
  • missing build-time dependencies in DEPENDS
  • incorrect compiler flags
  • source code errors

Typical checks:

  • inspect log.do_compile
  • look for missing include files or linker symbols
  • check whether required dependencies are in DEPENDS
  • decide whether the issue is in your metadata or upstream source
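
As a concrete case, a compile log complaining about a missing header such as zlib.h usually points at DEPENDS rather than the source. A sketch (zlib is just an example dependency):

```
# Build-time dependency: makes zlib's headers and libraries
# visible to this recipe's compiler and linker.
DEPENDS += "zlib"
```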

Install failures

These often come from:

  • missing install targets
  • wrong paths in do_install()
  • forgetting to create directories before copying files
  • staging files outside ${D}

Typical checks:

  • inspect log.do_install
  • inspect the staged output under the work directory
  • check that files are installed into ${D}${bindir}, ${D}${sysconfdir}, and similar paths
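
A minimal hand-written do_install() that follows those rules might look like this ("mytool" is a placeholder binary name):

```
do_install() {
    # Create the target directory inside the staging area first...
    install -d ${D}${bindir}
    # ...then install the binary with explicit permissions.
    install -m 0755 mytool ${D}${bindir}/mytool
}
```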

Packaging failures

These often come from:

  • files being installed to unexpected paths
  • package contents not matching FILES:*
  • runtime dependencies not being satisfied
  • package name assumptions being wrong

Typical checks:

  • inspect packages-split/
  • check FILES:${PN} and related package variables
  • confirm which packages the recipe actually generated
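
Two concrete tools for those checks (run inside the build environment; "mytool" is a placeholder):

```shell
# List the files each generated package actually contains:
oe-pkgdata-util list-pkg-files mytool

# If a needed file sits in a non-standard path, extend the main
# package's contents in the recipe, for example:
#   FILES:${PN} += "${datadir}/mytool/extra.conf"
```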

Root filesystem failures

These often come from:

  • requesting packages that do not exist
  • dependency conflicts between packages
  • missing runtime providers
  • image configuration problems

Typical checks:

  • inspect the do_rootfs log
  • confirm that the package name is correct
  • check whether the package was actually emitted by the recipe
  • verify image package selections such as IMAGE_INSTALL
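
For example, selecting the package in conf/local.conf looks like this (note the leading space in the appended value):

```
# local.conf fragment: install the package into every image built here.
# The leading space matters because :append does not insert one.
IMAGE_INSTALL:append = " mytool"
```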

Distinguish Metadata Problems from Source Problems

One of the most useful debugging skills is learning where the bug actually lives.

Ask yourself:

  • is BitBake building the wrong source?
  • is the source correct, but the build flags are wrong?
  • is the software building correctly, but the install paths are wrong?
  • is the recipe correct, but the image is not selecting the right package?

This keeps you from fixing the wrong layer of the problem.

For example:

  • if the compiler cannot find a header, the fix may be DEPENDS
  • if a patch will not apply, the fix may be the patch itself
  • if a service file is missing from the final image, the fix may be packaging or image selection rather than compilation

Inspect What Was Actually Installed

If a package builds but the final result is wrong, look at what the recipe staged and packaged.

Two especially useful questions are:

  1. Did do_install() put the files in the right place?
  2. Did packaging put those files into the package you expected?

That is why directories like image/ and packages-split/ are so useful.

They let you see whether the problem happened:

  • before packaging
  • during packaging
  • or later when the image was assembled

Do Not Debug by Rewriting Everything

A common trap is making large recipe changes before understanding the evidence.

For example:

  • replacing an entire inherited task when only one extra file needs to be installed
  • changing classes before checking whether the source tree matches the expected build system
  • editing several variables at once and then losing track of what fixed the issue

Small, evidence-based changes are usually much easier to reason about.

Example Debug Flow

Suppose a recipe builds successfully, but the final image does not contain the expected executable.

A sensible path would be:

  1. Check whether do_install() staged the file under ${D}.
  2. Inspect packages-split/ to see whether it ended up in ${PN} or another package.
  3. Check FILES:${PN} if the file is in a non-standard location.
  4. Confirm that the image installs the correct package name, rather than assuming it matches the recipe name.
  5. Rebuild the affected task or image component and verify the result again.

This is much faster than treating the problem as a full build failure.
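
The same flow can be sketched as commands, assuming a recipe called "mytool" that should ship a binary in /usr/bin and an image such as core-image-minimal (both placeholders):

```shell
# 1. Was the file staged by do_install?
ls tmp/work/*/mytool/*/image/usr/bin/

# 2. Which package did it end up in?
ls tmp/work/*/mytool/*/packages-split/*/usr/bin/

# 3. Do the packaging rules cover its location?
bitbake -e mytool | grep '^FILES:mytool='

# 4. Does the image actually install that package?
bitbake -e core-image-minimal | grep '^IMAGE_INSTALL='

# 5. Rebuild and verify.
bitbake core-image-minimal
```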

Summary

Debugging Yocto builds is mostly about following the evidence.

If you can:

  • identify the failing task
  • read log.do_* and run.do_*
  • inspect the work directory
  • use bitbake -e to understand final metadata
  • separate metadata issues from source issues

then you can solve most build problems much more confidently.

Quick quiz: debugging builds

A few checks on the evidence-based debugging workflow.

Question 1: A build ends with a compile error, but you suspect the real cause happened earlier. What is the first disciplined step?
Question 2: You need to know whether overrides or appends changed a variable in a way the recipe author did not expect. Which tool is the best fit?
Question 3: A package builds, but its executable is missing from the final image. Which sequence best matches the workflow from this lesson?