Applications often present a way for a user to select a number of values. These values are then assembled into an IN() clause containing the list of values to be matched, as in the following example:
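The original example is not shown here; a minimal sketch of such a query, using a hypothetical Orders table and customer IDs chosen only for illustration, might look like this:

```sql
-- Match rows against a hard-coded list of values.
-- Table and column names are illustrative, not from the original article.
SELECT OrderID, CustomerID, OrderDate
FROM Orders
WHERE CustomerID IN ('ALFKI', 'BONAP', 'CHOPS');
```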
But as the list of items to match grows longer, the query takes SQL longer to execute. Against a large number of rows this is especially problematic, because there is no way to take advantage of the index; the result is a table scan in which each row is compared to each value in the list. The values in the IN() clause typically come from a front-end application whose user makes a run-time selection. (If that weren't the case, we could rewrite the query to optimize it.)

A better approach is to build a table from the list of values and join that table to the target table. This enables SQL to take advantage of the index, which significantly improves performance. But how do we build the temporary table to hold the values? There are several ways, but the one I like uses a user-defined function (UDF) that parses a delimited string and returns a table in which each item in the string becomes a row. The UDF looks like this:
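The author's original UDF is not reproduced here; the following is a sketch of a typical T-SQL multi-statement table-valued function for this task. The function name, parameter names, and column sizes are assumptions:

```sql
-- Splits a delimited string into a table, one row per item.
-- dbo.SplitList, @List, @Delimiter, and the varchar sizes are illustrative.
CREATE FUNCTION dbo.SplitList
(
    @List      varchar(8000),
    @Delimiter char(1) = ','
)
RETURNS @Items TABLE (Item varchar(100))
AS
BEGIN
    DECLARE @Pos int;

    -- Append a trailing delimiter so the last item is captured by the loop.
    SET @List = LTRIM(RTRIM(@List)) + @Delimiter;
    SET @Pos  = CHARINDEX(@Delimiter, @List);

    WHILE @Pos > 0
    BEGIN
        -- Everything before the delimiter is the next item.
        INSERT INTO @Items (Item)
        VALUES (LTRIM(RTRIM(LEFT(@List, @Pos - 1))));

        -- Discard the item just captured, then find the next delimiter.
        SET @List = SUBSTRING(@List, @Pos + 1, LEN(@List));
        SET @Pos  = CHARINDEX(@Delimiter, @List);
    END;

    RETURN;
END;
```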
You would execute this function, passing it a comma-delimited string, like this:
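Assuming the sketch function above (dbo.SplitList is an illustrative name), the call might look like this:

```sql
-- Returns one row per comma-separated item: '17', '42', '256', '1089'.
SELECT Item
FROM dbo.SplitList('17,42,256,1089', ',');
```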
Now you can join this table to any other table, view, or table UDF, and get maximum performance with minimal disk reads.
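As a sketch of that join, again using the hypothetical Orders table and dbo.SplitList function from the examples above, the original IN() query can be rewritten so the optimizer can seek on the CustomerID index:

```sql
-- Join the parsed list to the target table instead of using IN().
-- Names are illustrative, not from the original article.
SELECT o.OrderID, o.CustomerID, o.OrderDate
FROM Orders AS o
INNER JOIN dbo.SplitList('ALFKI,BONAP,CHOPS', ',') AS s
    ON o.CustomerID = s.Item;
```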