
Writing a Lint Check

Updated March 14, 2014: Added section on type resolution for Java parsing, based on new features added in the most recent version of lint.

This document provides a brief introduction to writing a lint check. It is by no means a complete tutorial, but hopefully adds some useful commentary and hints on top of the API. (You may also want to take a look at the Writing Custom Lint Rules document for additional details and tips.)

To write a lint check, you'll want to
  1. Realize that the API is not final, so be prepared for the likely possibility of needing to adapt your code to future changes
  2. Skim through the Lint API project (lint/libs/lint_api)
  3. Skim through some of the existing Lint Checks (lint/libs/lint_checks) to get a sense for how lint checks are written. There are over 80 checks now so there's a good chance there's a similar check you can adapt.
Make sure you are using the ADT 17 codebase; ADT 17 has changed a lot from ADT 16 so you'd be wasting a lot of time if you based your work on ADT 16 at this point.

Issues versus Detectors

Note that lint distinguishes between "issues" and "detectors". An issue is a type of problem you want to find and show to the user. An issue has an associated description, fuller explanation, category, priority, etc. Issues are exposed to the user. In the Eclipse integration for example, you can open the Lint preference dialog to see all the various issues, and you can disable issues, or change the severity of a given issue (for example, to mark an issue as an Error instead of a Warning, or to set its severity to "Ignore").

An issue is just data. There's a single, final Issue class; you don't subclass it, you simply instantiate a new type of issue, and register it with the IssueRegistry.

What you want to implement is a "Detector". A detector is responsible for scanning through code, finding issue instances and reporting them. Note that a detector can report more than one type of issue. This allows you to have different severities for different types of issues, and the user has finer-grained control over what they want to see.

As an example, the UnusedResourceDetector will search through all the resources in the project, and all the usages of those resources, and report resources that are unused. The detector reports two separate types of issues: UnusedResources and UnusedIds. Some users want to keep "unused" ids because they don't add much overhead (other than fields in the R class) and because they serve a documentation purpose in the layouts etc. Therefore, by having a separate issue for this, users can disable this issue and still look for other types of issues (and indeed UnusedIds is disabled by default).


The Scope enum lists many "parts" of an Android project:
  • Resource files
  • Java source files
  • Class files
  • Proguard configuration files
  • Manifest file
etc.  An issue will state the scope required to analyze the code. For example, a check which just looks for bugs in the manifest file can simply state that its scope is Scope.MANIFEST. This is used in several ways by the lint infrastructure. For one thing it is used to limit the number of detectors it invokes for a given file type. For another, it is used to support per-file linting; if you are editing a single file in Eclipse and you hit Ctrl-S, lint will rerun analysis on that single file for all the detectors that have a scope limited to that file only.

The scope flags also impact the interfaces a detector is expected to implement. (API Note: The Scope class is one of the areas which will likely be changed a bit soon.)

Detector Interfaces

Most detectors implement one or more of the following interfaces:
  • Detector.XmlScanner
  • Detector.JavaScanner
  • Detector.ClassScanner
A detector which has scope={Manifest}, for example, will implement the XmlScanner interface.

These scanning interfaces are interfaces rather than classes because it's not unusual for a detector to implement more than one. Take the ApiChecker for example. It implements both the ClassScanner interface (in order to analyze .class files for API calls), and the XmlScanner (in order to analyze layout files, since <GridLayout> implies a call to the GridLayout constructor.)

Scanning XML Files

To analyze an XML file, you could just override the visitDocument method. It will be called once per XML file, passing you the XML DOM model, which you can then iterate over and analyze however you want.

However, most rules are typically interested in a particular tag or attribute, or a set of tags or attributes, or a combination of both.

To make scanning fast, a detector can specify which elements and attributes it is interested in.  Just implement getApplicableElements and/or getApplicableAttributes, returning a list of string tag or attribute names. Then, implement visitElement and/or visitAttribute. These methods will now be called for each occurrence of the given elements and attributes.

(The reason it works this way is that internally, at the beginning of scanning  a project, the lint infrastructure will create a multimap from tag names to a list of interested detectors, and similarly for attributes. That way, when it analyzes each and every XML file, it can simply do a single iteration through the model, and for each tag and attribute look up to see if it has any interested detectors, and if so dispatch to them. This means that if you add a new detector which looks for a particular tag name, you are not making every single file check slightly slower; your detector will only be called if that element actually occurs.)
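The dispatch idea described above can be sketched with plain collections. This is a self-contained illustration, not lint's real code; the class and interface names here are invented for the example:

```java
import java.util.*;

public class TagDispatch {
    // Hypothetical stand-in for a detector interested in specific tags
    interface ElementVisitor {
        void visitElement(String tag);
    }

    // tag name -> the detectors interested in that tag (the "multimap")
    private final Map<String, List<ElementVisitor>> byTag = new HashMap<>();

    void register(String tag, ElementVisitor visitor) {
        byTag.computeIfAbsent(tag, k -> new ArrayList<>()).add(visitor);
    }

    // One pass over the document: each tag is looked up once, and only
    // the interested detectors are invoked for it
    void dispatch(List<String> tagsInDocument) {
        for (String tag : tagsInDocument) {
            List<ElementVisitor> interested = byTag.get(tag);
            if (interested != null) {
                for (ElementVisitor v : interested) {
                    v.visitElement(tag);
                }
            }
        }
    }

    public static void main(String[] args) {
        TagDispatch d = new TagDispatch();
        List<String> seen = new ArrayList<>();
        d.register("GridLayout", tag -> seen.add("api-check:" + tag));
        d.dispatch(Arrays.asList("LinearLayout", "GridLayout", "TextView"));
        System.out.println(seen); // [api-check:GridLayout]
    }
}
```

Note how registering a detector for "GridLayout" adds no cost to documents that never contain that tag, which is the property the paragraph above describes.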

There is a special "ALL" constant you can return from getApplicableAttributes and getApplicableElements, which lets your detector be called for all elements or attributes. This is for example used by the PxUsageDetector which checks whether any attribute values use the dimension "px" as the suffix in the XML attribute value.

(One tip on XML scanning: org.w3c.dom.Element.getAttribute() is supposed to never return null; for a nonexistent attribute it is supposed to return "". However, there are hard to reproduce but clear stacktraces showing that Eclipse sometimes returns null, so many detectors try to be defensive about this and check for null even though it's not supposed to be necessary).
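Here's a small self-contained sketch of this kind of XML scanning using the JDK's own DOM parser. It combines a PxUsageDetector-style "px" check with the defensive null check mentioned above; the class and method names are invented for illustration and this is not lint's actual implementation:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.*;
import java.io.ByteArrayInputStream;
import java.util.*;

public class PxScan {
    // Return all attribute values anywhere in the document that end in "px"
    static List<String> findPxValues(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
            List<String> hits = new ArrayList<>();
            visit(doc.getDocumentElement(), hits);
            return hits;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    private static void visit(Element element, List<String> hits) {
        NamedNodeMap attrs = element.getAttributes();
        for (int i = 0; i < attrs.getLength(); i++) {
            String value = attrs.item(i).getNodeValue();
            // Defensive null check, even though the DOM contract says
            // attribute values are never null
            if (value != null && value.endsWith("px")) {
                hits.add(value);
            }
        }
        NodeList children = element.getChildNodes();
        for (int i = 0; i < children.getLength(); i++) {
            Node child = children.item(i);
            if (child instanceof Element) {
                visit((Element) child, hits);
            }
        }
    }

    public static void main(String[] args) {
        String layout = "<LinearLayout android:padding=\"5px\">"
                + "<TextView android:textSize=\"14sp\"/></LinearLayout>";
        System.out.println(findPxValues(layout)); // [5px]
    }
}
```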

Reporting Errors

If your detector identifies a problem, it just needs to call report() on the context object (which is passed into each of the detector methods).

In addition to listing the Issue it is reporting, it needs to provide a location, a "scope node", and a message.

The location is self explanatory: it points to where the error occurred. For XML and Java source files this is easy: just pass the corresponding XML DOM or Parse AST tree node to the context.getLocation method, which will create a location with the right file name and offsets corresponding to the given node. If your error pertains to an attribute, pass the attribute rather than the surrounding element so the error is pinpointed more precisely. For class files it's a bit harder; see the ClassContext class for some useful utility methods, and of course examine some of the existing class based detectors.

The "scope node" is the nearest AST/XML node surrounding the error. This is usually the same as the node you create a location from. This is used by the Lint infrastructure to support "suppress" annotations. For example, in Java files, the user can add @SuppressLint("Id") on some syntactic element surrounding the error location. Lint will search outwards from the scope node you're passing for an error to see if the error is suppressed.

In some cases you may want to check explicitly whether the error is suppressed yourself, either because the computation is really expensive and it's likely to have been suppressed, or because there are multiple possible suppress locations. For example, in the case of a consistency error (say translation consistency), perhaps the suppress attribute has been defined on the "other" location that this location is inconsistent with.

Note that the Location class (and the Location.Handle) contains a "client data" field; this is used by some detectors to stash the scope node temporarily.

Storing State

Many errors can be found easily: if such and such attribute is set, report it as an error. But many errors require more complicated computations: you need to check multiple pieces of data, spread over multiple files.

The way you typically do this is to use the before/after file hooks, and the before/after project hooks. The detector class defines before and after callbacks for each scanned file and each project. Many detectors set up some data structures, and populate them as each file is scanned.

Then, in the afterProject hook, they walk through all the data and compute the errors.

One challenge here is that by the time you have gotten to the end of the project, you can't easily compute locations for the errors you have found. There are a couple of solutions to this:
  1. Store location handles along the way. A location handle is a "light weight" handle on a location. Creating a real location involves some computation, since it needs to compute offset, line and column numbers. In some cases, you may not know yet that something is an error, but you want to be able to get its location on the off chance that it is an error. In that case, create a location handle instead (both the Java and XML parser offer createHandle methods). When you get to the code which needs to create an error, call handle.resolve() which will produce a fullblown location.
  2. Gather the exact location in a second pass. This is described in the Multi-pass section below.
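A location handle can be sketched as follows. This is an illustrative stand-in for lint's real Handle and Location classes, showing why creating a handle is cheap (it just captures an offset) while resolve() does the line/column work:

```java
public class LocationHandles {
    // A "location" with file, line and column; line/column are relatively
    // expensive to compute from a raw character offset
    record Location(String file, int line, int column) {}

    // A lightweight handle: remembers only file + offset, and computes
    // line/column numbers when (and only when) resolve() is called
    static class Handle {
        private final String file;
        private final String contents;
        private final int offset;

        Handle(String file, String contents, int offset) {
            this.file = file;
            this.contents = contents;
            this.offset = offset;
        }

        Location resolve() {
            int line = 1, column = 1;
            for (int i = 0; i < offset; i++) {
                if (contents.charAt(i) == '\n') { line++; column = 1; }
                else { column++; }
            }
            return new Location(file, line, column);
        }
    }

    public static void main(String[] args) {
        String source = "line one\nline two\nline three\n";
        // Cheap to create for every candidate, even ones that turn out fine
        Handle handle = new Handle("Foo.java", source, source.indexOf("two"));
        // Resolved only for the candidates that are actually errors
        System.out.println(handle.resolve());
        // Location[file=Foo.java, line=2, column=6]
    }
}
```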

Multiple Passes

Lint processes the files in the project in a deliberate and defined order:
  1. Manifest file
  2. Resource files, alphabetically (first alphabetically by resource folder, then alphabetically within each folder)
  3. Java source files
  4. Java class files, alphabetically (but outer classes before inner classes, even though Foo$Bar.class is alphabetically earlier than Foo.class)
  5. Proguard file
This means that you can count on layout files being processed before value files (since "layout" < "values"), and the default values folder being processed before a particular translation (since "values" < "values-de"), etc.
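One way to get the "outer classes before inner classes" ordering from item 4 is to compare class file names with the ".class" extension stripped; this comparator is an illustration of the ordering described above, not necessarily how lint implements it:

```java
import java.util.*;

public class ClassFileOrder {
    // With the extension included, "Foo$Bar.class" sorts before "Foo.class"
    // because '$' < '.'; on the bare stems, "Foo" is a prefix of "Foo$Bar"
    // and therefore sorts first, putting outer classes before inner ones.
    static final Comparator<String> OUTER_FIRST = Comparator.comparing(
            (String name) -> name.endsWith(".class")
                    ? name.substring(0, name.length() - ".class".length())
                    : name);

    public static void main(String[] args) {
        List<String> files = new ArrayList<>(
                Arrays.asList("Foo$Bar.class", "Foo.class", "Bar.class"));
        files.sort(OUTER_FIRST);
        System.out.println(files); // [Bar.class, Foo.class, Foo$Bar.class]
    }
}
```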

Often, you can store some information you may need later in a data structure, and then consult it when you get to the relevant file type.

However, that's not always practical. Take the UnusedResourceDetector for example. It needs locations for every single resource, every single string, attribute, layout view etc on the off chance that the resource is unused. Instead of storing all of that information, it uses the "multi-phase" support in lint.

A detector can indicate that it is interested in another processing phase. Only detectors that request another phase are included in a subsequent phase, and they can only use the same or a narrower scope. And the lint infrastructure can and will limit the number of phases in case a detector is improperly written and keeps "recursing".

The unused resource detector will at the end of analyzing a project know which resources are unused. If (and only if) there are unused resources, it will request another pass. In the second pass it simply looks for occurrences of the resources it knows to be unused (stored in a map), and it then records accurate locations for these. At the end of the second phase it reports errors (using the new locations) for all the unused resources.
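The two-phase flow can be sketched with plain collections. The data here is fake and the structure is heavily simplified; the real detector works on actual project files, and lint itself drives the phases:

```java
import java.util.*;

public class UnusedSketch {
    static List<String> analyze(Map<String, List<String>> declarations,
                                Set<String> referenced) {
        // Phase 1: a single pass computes the set of unused resources
        // without storing any location information
        Set<String> unused = new TreeSet<>();
        declarations.values().forEach(unused::addAll);
        unused.removeAll(referenced);

        // Phase 2: requested only when something is unused; "re-scan" the
        // files, recording positions only for the known-unused names
        List<String> reports = new ArrayList<>();
        if (!unused.isEmpty()) {
            declarations.forEach((file, names) -> {
                for (String name : names) {
                    if (unused.contains(name)) {
                        reports.add(file + ": unused resource " + name);
                    }
                }
            });
        }
        Collections.sort(reports);
        return reports;
    }

    public static void main(String[] args) {
        // file -> resource names declared in it (a stand-in for parsing)
        System.out.println(analyze(
                Map.of("values/strings.xml", List.of("app_name", "old_title"),
                       "layout/main.xml", List.of("main_layout")),
                Set.of("app_name", "main_layout")));
        // [values/strings.xml: unused resource old_title]
    }
}
```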

Whether you should be using multiple passes or storing extra data up front and using location handles is up to you; I'd say it depends on the likelihood of the error, the amount of data you'd need to store, etc.

Analyzing Java Code

If you want to analyze Java code, you have two options:
  1. Analyzing the .java files by implementing JavaScanner
  2. Analyzing the .class files by implementing a ClassScanner
There are pros and cons with each.

Analyzing the Java source code lets you
  • easily get accurate location information for the error: the AST node contains exact position information
  • access information which is only in the source file, such as resource constants: when a Java file is compiled into bytecode, the constant is inlined, so there is no record that the integer 0x1123123123 corresponds to a field in the R class. As another example, annotations on variable declarations are only available in the source file.
  • work with an AST structure that represents the source code, so it's easy to do things like "find the if-statement surrounding this call" or "get the expression which computes the first argument to this method call" and so on.
However, the parse tree does not contain resolved types* (note: this limitation has just been lifted; see "Type Resolution" below). This is a limitation of the parse tree library we're using; javac and Eclipse ECJ can produce the information, but it's not exposed in the AST, and of course when run from the command line lint is not using either of those two parsers; it's using a third one. Analyzing the bytecode is better for other types of analysis. The API checker for example can do a more accurate job by analyzing the bytecode, where fields are already inlined, string concatenations are already performed, and flow analysis is in general easier at the bytecode level.

(*: There are some facilities in Lombok to resolve types based on examining the import statements. None of the detectors are using this (yet), so I'm not sure if everything necessary is surfaced through the Lint/Java API yet.)

Type Resolution

In the next version of lint (Tools 27, Gradle plugin 0.9.2+, Android Studio 0.5.3, and ADT 27), Java AST parse tree detectors can resolve both types and declarations. This was just added to lint, and offers new APIs where you can ask for the resolved type, and the resolved declaration, of a given AST node. It is implemented in both IntelliJ and the command line lint variants (e.g. the lint script as well as the Gradle plugin). It is not yet implemented in Eclipse ADT, but should be soon since the port will be easy (the command line version is using the Eclipse Java compiler to do its type attribution, so it should be straightforward to migrate into the Eclipse ADT plugin).

Analyzing Java Source Files

Lint uses the lombok.ast API to represent ASTs, as well as its facility to map existing parse trees into this form. When lint is running inside Eclipse, the ECJ compiler's parse trees are converted into lombok.ast ASTs. This lets us write a single Java detector for lint and have it work across the command line tools and IDE integrations.

To analyze Java source files, your detector should implement JavaScanner. There are several methods you can override from Detector.

The usual way is to implement the "createJavaVisitor" method. You should return an AST Visitor which will be invoked on each AST; here you can visit anything from class declarations to method invocations to identifiers and specific keywords.

If you know you just need to visit one or two types of AST nodes, use the getApplicableNodeTypes method to specify those exact node types. Now your visitor will be called only for those specific nodes, and just like for the XML Visitor, this allows a more efficient shared single pass through the ASTs where it precomputes a multimap of detectors interested in each node type.

There are two special facilities:
  • getApplicableMethods: If you override this method, you can specify a set of method calls you are interested in. This saves you the trouble of finding the method calls in the AST, and more importantly you don't have to implement a visitor: you simply implement the visitMethod call. The StringFormatDetector for example uses this to look for calls to String#format, where it can see if the arguments match what was expected from the string definitions.
  • appliesToResourceRefs: If you return true from this method, lint will invoke the visitResourceReference method on your detector. This lets you register an interest in resource references (R.layout.main, etc.) without having to write a visitor.

Analyzing Java Classes

To analyze byte code, your detector should implement the ClassScanner. Lint uses the ASM library to process .class files. It will operate in two stages. First, it will skim all the class files (without reading the method bodies etc) to compute a "super class" map for all the classes found in the libraries used by the project as well as within the project itself. A class detector can ask lint during its own analysis for the super class of any given class. The API checker for example uses this to handle virtual methods, so if a class Foo extends Activity, if it sees a virtual dispatch to method "foo", it can walk up the parent chain to see whether this is an inherited method and to get its API level.

Once lint has the superclass map, it processes each class in turn, and produces a ClassNode (a "DOM" for the .class file), which is then passed to each ClassScanner. The detectors can then use these ClassNodes to analyze the bytecode as necessary. See some of the existing detectors for examples.
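The super class lookup described above boils down to walking a name-to-name map. Here's an illustrative sketch with made-up class names and a plain map standing in for the real API metadata; the actual implementation works on ASM's internal names and lint's database:

```java
import java.util.*;

public class SuperclassMap {
    // internal name -> superclass internal name, as lint's first skim over
    // the .class files would record it (entries here are illustrative)
    static final Map<String, String> SUPER = Map.of(
            "foo/MyActivity", "android/app/Activity",
            "android/app/Activity", "android/content/Context",
            "android/content/Context", "java/lang/Object");

    // Walk up the parent chain to find the class that declares a method;
    // 'declares' is a stand-in for real API metadata
    static String findDeclaringClass(String owner, String method,
                                     Map<String, Set<String>> declares) {
        for (String c = owner; c != null; c = SUPER.get(c)) {
            if (declares.getOrDefault(c, Set.of()).contains(method)) {
                return c;
            }
        }
        return null; // not found anywhere in the chain
    }

    public static void main(String[] args) {
        Map<String, Set<String>> declares = Map.of(
                "android/app/Activity", Set.of("onCreate"));
        // A virtual call on foo/MyActivity resolves to the inherited method
        System.out.println(
                findDeclaringClass("foo/MyActivity", "onCreate", declares));
        // android/app/Activity
    }
}
```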

Incremental Lint

Some tools, such as the Eclipse integration of lint, allow lint to be run "incrementally". For example, in Eclipse, whenever you use the UI builder, or whenever you save an XML file or a Java file, Eclipse will run lint in an incremental mode where it only analyzes the current file, and updates all the issues in that file.

However, note that it cannot do certain kinds of checks by looking at just a single file. For example, to determine if a resource is unused, it needs to both look at the declaration (for example a drawable .png file) as well as all the Java code to make sure nobody references that drawable.

The way this is handled by the lint infrastructure is via the scope attribute of the issues. Certain scopes refer to a single file -- such as Scope.RESOURCE_FILE, Scope.JAVA_FILE or Scope.CLASS_FILE. However, lint can only do incremental analysis of a given issue if its scope includes ONLY that single file scope. There are certain types of issues which apply to multiple different scopes, such as the one reported by the ApiDetector, which can analyze both .xml files and .class files. However, each file can be analyzed independently. For that reason, an issue has a second, optional type of scope: analysis scopes. Each analysis scope is a scope set that the issue can be analyzed in. Here's how the ApiDetector issue is registered:
    /** Accessing an unsupported API */
    public static final Issue UNSUPPORTED = Issue.create("NewApi", //$NON-NLS-1$
            "Finds API accesses to APIs that are not supported in all targeted API versions",

            "This check scans through all the Android API calls in the application and " +
            "warns about any calls that are not available on *all* versions targeted " +
            "by this application (according to its minimum SDK attribute in the manifest).\n" +
            "\n" +
            "If your code is *deliberately* accessing newer APIs, and you have ensured " +
            "(e.g. with conditional execution) that this code will only ever be called on a " +
            "supported platform, then you can annotate your class or method with the " +
            "@TargetApi annotation specifying the local minimum SDK to apply, such as" +
            "@TargetApi(11), such that this check considers 11 rather than your manifest " +
            "file's minimum SDK as the required API level.",
            EnumSet.of(Scope.CLASS_FILE, Scope.RESOURCE_FILE))
            .addAnalysisScope(Scope.RESOURCE_FILE_SCOPE)
            .addAnalysisScope(Scope.CLASS_FILE_SCOPE);

Look at the last two lines -- this is adding both a resource file, and a class file, as independent scopes that can be analyzed incrementally for this issue.

Any issue which requires more scopes than is available for the current analysis will be skipped during incremental lint analysis.
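The scope check itself is just a set-containment test. Here's an illustrative sketch with a trimmed-down stand-in for lint's Scope enum (not the real class, and not lint's actual decision code, which also consults the analysis scopes):

```java
import java.util.EnumSet;

public class IncrementalScope {
    // A trimmed-down stand-in for lint's Scope enum
    enum Scope { MANIFEST, RESOURCE_FILE, JAVA_FILE, CLASS_FILE }

    // An issue can run in the current analysis if the scope being analyzed
    // covers everything the issue needs
    static boolean canRun(EnumSet<Scope> issueScope, EnumSet<Scope> analyzing) {
        return analyzing.containsAll(issueScope);
    }

    public static void main(String[] args) {
        EnumSet<Scope> singleResourceFile = EnumSet.of(Scope.RESOURCE_FILE);
        // A manifest-only check is skipped while editing a resource file
        System.out.println(canRun(EnumSet.of(Scope.MANIFEST),
                singleResourceFile)); // false
        // A per-resource-file check runs incrementally
        System.out.println(canRun(EnumSet.of(Scope.RESOURCE_FILE),
                singleResourceFile)); // true
    }
}
```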

Unit Tests

Writing unit tests for lint is easy. Take a look at some of the existing examples. You typically extend AbstractCheckTest, and override the getDetector() method to return a new instance of your detector class.

You then call (from each test) the lintProject() method, passing a string which represents the expected error output, as well as a list of source files to use as data files in an Android project created on the fly for the unit test.

You typically don't know the error output when you're writing the test. Just put a blank expected string, run the test, and when the test fails, double click on it in Eclipse and it will show you a diff; copy the actual output and paste it into your expected string in the test -- assuming of course that the actual output is what you consider correct.

The test data files referenced in the lintProject call are relative to sdk/lint/libs/lint_checks/tests/src/com/android/tools/lint/checks/data/. Note that you don't want to check in .java or .class files there because it will cause these files to be considered part of the lint project itself. Instead, name them with the suffix .txt or .data. Then, in the file names you pass to lintProject, you can use the special syntax foo=>bar to rename the file on the fly. For example, the ApiDetectorTest uses this syntax to map test data files under apicheck/ into src/foo/bar/ in the project created for the test.

Why Doesn't My Detector Work?

Make sure you've added it to the BuiltinIssueRegistry class! Also make sure your detector has a public default constructor (such that it can be instantiated), and that it has the right scope.