
I have a code analysis tool that I'd like to run for each cc_library (and cc_binary, implied for the rest of this question). The tool has a CLI interface taking:

  • A tool project file
    • Compiler specifics, such as type sizes, built-ins, macros etc.
    • Files to analyze
      • File path, includes, defines
    • Rules to (not) apply
  • Files to add to the project
  • Options for synchronizing files with build data
    • JSON compilation database
    • Parse build log
  • Analyze and generate analysis report

I've been looking at how to integrate this into Bazel so that the files to analyze AND the associated includes and defines are updated automatically, and so that any analysis result is properly cached. Generating a JSON compilation database (using a third-party lib) and parsing the build log both require separate runs and updating the source tree; for this question I consider those workarounds I'm trying to remove.

What I've tried so far is using aspects, adding an analysis aspect to any library. The general idea is having a base project file holding library-invariant configuration, appending the cc_library files to analyze, and finally triggering an analysis that generates the report. But I'm having trouble executing this, and I'm not sure it's even possible.
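For reference, this is a minimal sketch of how I wire the aspect up (names and paths are placeholders):

```python
# Sketch: register the aspect implemented below; attr_aspects = ["deps"]
# makes it propagate to dependencies recursively.
print_aspect = aspect(
    implementation = _print_aspect_impl,
    attr_aspects = ["deps"],
)
```

Invoked with something like `bazel build //pkg:lib --aspects=//tools:analysis.bzl%print_aspect`.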

This is my aspect implementation so far, trying to iterate through cc_library attributes and target compilation context:

def _print_aspect_impl(target, ctx):
    # Make sure the rule has a srcs attribute
    if hasattr(ctx.rule.attr, 'srcs'):
        # Iterate through the files
        for src in ctx.rule.attr.srcs:
            for f in src.files.to_list():
                if f.path.endswith(".c"):
                    print("file: ")
                    print(f.path)
                    print("includes: ")
                    print(target[CcInfo].compilation_context.includes)
                    print("quote_includes: ")
                    print(target[CcInfo].compilation_context.quote_includes)
                    print("system_includes: ")
                    print(target[CcInfo].compilation_context.system_includes)
                    print("defines: ")
                    print(ctx.rule.attr.defines)
                    print("local_defines: ")
                    print(ctx.rule.attr.local_defines)
                    print("") # empty line to separate file prints
    return []

What I cannot figure out is how to get ALL includes and defines used when compiling the library:

  • From libraries depended upon, recursively
    • copts, defines, includes
  • From the toolchain
    • features, cxx_builtin_include_directories

Questions:

  • How do I get the missing flags, continuing on presented technique?
  • Can I somehow retrieve the compile action command string?
    • Appended to analysis project using the build log API
  • Some other solution entirely?
    • Perhaps there is something one can do with cc_toolchain instead of aspects...

1 Answer


Aspects are the right tool to do that. The information you're looking for is contained in the providers, fragments, and toolchains of the cc_* rules the aspect has access to. Specifically, CcInfo has the target-specific pieces, the cpp fragment has the pieces configured from command-line flags, and CcToolchainInfo has the parts from the toolchain.

CcInfo in target tells you if the current target has that provider, and target[CcInfo] accesses it.
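In other words, a guard at the top of the implementation skips rules the aspect visits that don't provide it (a small sketch; the function name is a placeholder):

```python
def _analysis_aspect_impl(target, ctx):
    # Skip rules without CcInfo; the aspect may visit non-cc rules
    # while propagating along deps.
    if CcInfo not in target:
        return []
    compilation_context = target[CcInfo].compilation_context
    # ... use compilation_context ...
    return []
```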

The rules_cc my_c_compile example is where I usually look for pulling out a complete compiler command based on a CcInfo. Something like this should work from the aspect:

load("@rules_cc//cc:action_names.bzl", "C_COMPILE_ACTION_NAME")
load("@rules_cc//cc:toolchain_utils.bzl", "find_cpp_toolchain")

[in the impl]:
    cc_toolchain = find_cpp_toolchain(ctx)
    feature_configuration = cc_common.configure_features(
        ctx = ctx,
        cc_toolchain = cc_toolchain,
        requested_features = ctx.features,
        unsupported_features = ctx.disabled_features,
    )
    c_compiler_path = cc_common.get_tool_for_action(
        feature_configuration = feature_configuration,
        action_name = C_COMPILE_ACTION_NAME,
    )

[in the loop]
    c_compile_variables = cc_common.create_compile_variables(
        feature_configuration = feature_configuration,
        cc_toolchain = cc_toolchain,
        user_compile_flags = ctx.fragments.cpp.copts + ctx.fragments.cpp.conlyopts,
        source_file = src.path,
    )
    command_line = cc_common.get_memory_inefficient_command_line(
        feature_configuration = feature_configuration,
        action_name = C_COMPILE_ACTION_NAME,
        variables = c_compile_variables,
    )
    env = cc_common.get_environment_variables(
        feature_configuration = feature_configuration,
        action_name = C_COMPILE_ACTION_NAME,
        variables = c_compile_variables,
    )

That example only handles C files (not C++); you'll have to change the action names and which parts of the fragment it uses accordingly.
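For C++ sources, a hedged variant of the loop above would swap in the C++ action name and the C++ parts of the fragment:

```python
load("@rules_cc//cc:action_names.bzl", "CPP_COMPILE_ACTION_NAME")

# In the loop: same pattern as the C case, with cxxopts instead of conlyopts.
cxx_compile_variables = cc_common.create_compile_variables(
    feature_configuration = feature_configuration,
    cc_toolchain = cc_toolchain,
    user_compile_flags = ctx.fragments.cpp.copts + ctx.fragments.cpp.cxxopts,
    source_file = src.path,
)
command_line = cc_common.get_memory_inefficient_command_line(
    feature_configuration = feature_configuration,
    action_name = CPP_COMPILE_ACTION_NAME,
    variables = cxx_compile_variables,
)
```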

You have to add toolchains = ["@bazel_tools//tools/cpp:toolchain_type"] and fragments = ["cpp"] to the aspect declaration to use those. Also see the note in find_cc_toolchain.bzl about the _cc_toolchain attr if you're using legacy toolchain resolution.
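Putting it together, the aspect declaration would look roughly like this (a sketch; the names are mine, and the _cc_toolchain attr is only needed under legacy toolchain resolution):

```python
analysis_aspect = aspect(
    implementation = _analysis_aspect_impl,
    attr_aspects = ["deps"],
    # Needed for ctx.fragments.cpp and the toolchain lookup:
    fragments = ["cpp"],
    toolchains = ["@bazel_tools//tools/cpp:toolchain_type"],
    attrs = {
        # Only needed for legacy (--crosstool-top) toolchain resolution:
        "_cc_toolchain": attr.label(
            default = Label("@bazel_tools//tools/cpp:current_cc_toolchain"),
        ),
    },
)
```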

The information coming from the rules and the toolchain is already structured. Depending on what your analysis tool wants, it might make more sense to extract it directly instead of generating a full command line. Most of the provider, fragment, and toolchain APIs are well-documented if you want to look at those directly.
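For instance, the includes and defines exported by transitive deps are already merged into the target's compilation context, so a sketch of direct extraction could look like this (note that local_defines and copts are not exported, so those still come from ctx.rule.attr on each visited target):

```python
cc_ctx = target[CcInfo].compilation_context
# These depsets already contain the exported includes/defines of
# transitive deps, merged by the cc_* rules themselves.
defines = cc_ctx.defines.to_list()
include_dirs = (
    cc_ctx.includes.to_list()
    + cc_ctx.quote_includes.to_list()
    + cc_ctx.system_includes.to_list()
)
```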

You might pass required_providers = [CcInfo] to aspect to limit propagation to rules which include it, depending on how you want to manage propagation of your aspect.

The Integrating with C++ Rules documentation page also has some more info.


3 Comments

find_cpp_toolchain doesn't seem to work for aspects when using --crosstool-top. print(ctx.rule.attr) returns (among other things) _cc_toolchain = <target //toolchains:cc_toolchain_suite>, _cc_toolchain_type = Label("@bazel_tools//tools/cpp:toolchain_type"), meaning the _cc_toolchain is there, only not in ctx.attr as expected by find_cpp_toolchain. Leaning towards replacing --crosstool-top with --platforms, but perhaps you know a better way.
Platforms have other advantages, and are the plan for the future, so I'd go that route if you can. However, I think ctx.attr._cc_toolchain should exist for find_cpp_toolchain if you add it to the aspect's attrs (not the rule it's attached to, but the aspect itself)?
Using an if hasattr(ctx.rule.attr, "_cc_toolchain") ... else find_cpp_toolchain() in the aspect implementation seems to work. It should arguably be added to find_cpp_toolchain() itself (under an if hasattr(ctx, "rule")). But yeah, platforms is the way.
