
2 posts tagged with "performance"


James Xu · 5 min read

What is LIKE?

LIKE is a very useful SQL operator used for string pattern matching. The following examples of LIKE usage are from the Presto documentation:
SELECT * FROM (VALUES ('abc'), ('bcd'), ('cde')) AS t (name)
WHERE name LIKE '%b%'
--returns 'abc' and 'bcd'

SELECT * FROM (VALUES ('abc'), ('bcd'), ('cde')) AS t (name)
WHERE name LIKE '_b%'
--returns 'abc'

SELECT * FROM (VALUES ('a_c'), ('_cd'), ('cde')) AS t (name)
WHERE name LIKE '%#_%' ESCAPE '#'
--returns 'a_c' and '_cd'

These examples show the basic usage of LIKE:

  • Use % to match zero or more characters.
  • Use _ to match exactly one character.
  • If we need to match % and _ literally, we can specify an escape char to escape them.

When we use Velox as the backend to evaluate Presto queries, the LIKE operation is translated into a Velox function call, e.g. name LIKE '%b%' is translated to like(name, '%b%'). Internally, Velox converts the pattern string into a regular expression and then uses the regular expression library RE2 to do the pattern matching. RE2 is a very good regular expression library: it is fast and safe, which gives the Velox LIKE function good performance. But some commonly used simple patterns can be handled with plain C++ string functions instead of a regex. For example, the pattern hello% matches inputs that start with hello, which can be implemented by a direct memory comparison of the prefix bytes ('hello' in this case) of the input:

// Match the first 'length' characters of string 'input' against the prefix pattern.
bool matchPrefixPattern(
    StringView input,
    const std::string& pattern,
    size_t length) {
  return input.size() >= length &&
      std::memcmp(input.data(), pattern.data(), length) == 0;
}

It is much faster than using RE2: benchmarks show it gives us a 750x speedup. We can do similar optimizations for some other patterns:

  • %hello: matches inputs that end with hello. It can be optimized by a direct memory comparison of the suffix bytes of the input.
  • %hello%: matches inputs that contain hello. It can be optimized by using std::string_view::find to check whether the input contains hello. A minimal sketch of both follows.
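
Below is a minimal sketch of these two fast paths, using std::string_view for the input; it illustrates the idea and is not the actual Velox code.

#include <cstring>
#include <string>
#include <string_view>

// Sketch of the '%hello' fast path: compare the last 'length' bytes of the
// input against the last 'length' bytes of the pattern.
bool matchSuffixPattern(
    std::string_view input,
    const std::string& pattern,
    size_t length) {
  return input.size() >= length &&
      std::memcmp(
          input.data() + input.size() - length,
          pattern.data() + pattern.size() - length,
          length) == 0;
}

// Sketch of the '%hello%' fast path: a plain substring search is enough.
bool matchSubstringPattern(std::string_view input, const std::string& fixed) {
  return input.find(fixed) != std::string_view::npos;
}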

These simple patterns are straightforward to optimize. There are some more relaxed patterns that are not so straightforward:

  • hello_velox%: matches inputs that start with 'hello', followed by any single character, then 'velox'.
  • %hello_velox: matches inputs that end with 'hello', followed by any single character, then 'velox'.
  • %hello_velox%: matches inputs that contain 'hello' and 'velox' with exactly one character between them.

Although these patterns look similar to the previous ones, they are not so straightforward to optimize: _ here matches any single character, so we cannot simply use a memory comparison to do the matching. And if the user's input is not pure ASCII, _ might match more than one byte, which makes the implementation even more complex. Also note that the above patterns are just for illustrative purposes; actual patterns can be more complex, e.g. h_e_l_l_o, so a trivial algorithm will not work.

Optimizing Relaxed Patterns

We optimized these patterns as follows. First, we split the pattern into a list of sub-patterns, e.g. hello_velox% is split into the sub-patterns hello, _, velox, and %. Because there is a % at the end, we classify it as a kRelaxedPrefix pattern, which means we need to do prefix matching, but not trivial prefix matching: we need to match three sub-patterns:

  • kLiteralString: hello
  • kSingleCharWildcard: _
  • kLiteralString: velox
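
As an illustration, the sub-pattern metadata that the snippets below reference could look roughly like this; the names and fields are a hypothetical sketch, not necessarily the actual Velox definitions.

// Hypothetical sketch of the sub-pattern metadata; actual Velox types may differ.
enum class SubPatternKind { kLiteralString, kSingleCharWildcard };

struct SubPattern {
  SubPatternKind kind;
  size_t start;  // Offset of this sub-pattern within the fixed pattern.
  size_t length; // Number of characters this sub-pattern covers.
};

// For 'hello_velox%' the parsed kRelaxedPrefix sub-patterns would roughly be:
//   {kLiteralString, 0, 5}       -> "hello"
//   {kSingleCharWildcard, 5, 1}  -> "_"
//   {kLiteralString, 6, 5}       -> "velox"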

For kLiteralString we simply do a memory comparison:

if (subPattern.kind == SubPatternKind::kLiteralString &&
    std::memcmp(
        input.data() + start + subPattern.start,
        patternMetadata.fixedPattern().data() + subPattern.start,
        subPattern.length) != 0) {
  return false;
}

Note that since it is a memory comparison, it handles both pure ASCII inputs and inputs that contain Unicode characters.

Matching _ is more complex because Unicode inputs contain variable-length, multi-byte characters. Fortunately, there is an existing library that provides Unicode-related operations: utf8proc. It provides functions that tell us whether a byte in the input is the start of a character, how many bytes the current character consists of, etc. So to match a sequence of _, our algorithm is:

if (subPattern.kind == SubPatternKind::kSingleCharWildcard) {
  // Match every single char wildcard.
  for (auto i = 0; i < subPattern.length; i++) {
    if (cursor >= input.size()) {
      return false;
    }

    auto numBytes = unicodeCharLength(input.data() + cursor);
    cursor += numBytes;
  }
}

Here:

  • cursor is the index in the input we are trying to match.
  • unicodeCharLength is a function that wraps a utf8proc call to determine how many bytes the current character consists of.

So the logic is basically: repeatedly compute the size of the current character and skip it.
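
As an illustration, unicodeCharLength could be a thin wrapper over utf8proc_iterate, which decodes one code point and reports how many bytes it consumed. This is a sketch under that assumption, not the exact Velox helper; unlike the snippet above, it also takes the remaining byte count so the utf8proc call is bounded.

#include <utf8proc.h>

#include <cstddef>

// Sketch: return the byte length of the UTF-8 character starting at 'data',
// treating invalid bytes as a single-byte character.
size_t unicodeCharLength(const char* data, size_t remainingBytes) {
  utf8proc_int32_t codePoint;
  utf8proc_ssize_t bytes = utf8proc_iterate(
      reinterpret_cast<const utf8proc_uint8_t*>(data),
      static_cast<utf8proc_ssize_t>(remainingBytes),
      &codePoint);
  return bytes > 0 ? static_cast<size_t>(bytes) : 1;
}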

This logic is not that complex, but note that it is wasteful for pure ASCII input. In pure ASCII input every character is one byte, so to match a sequence of _ we don't need to compute the size of each character in a for-loop. In fact, for pure ASCII input we don't need to explicitly match _ at all. We can use the following logic instead:

for (const auto& subPattern : patternMetadata.subPatterns()) {
  if (subPattern.kind == SubPatternKind::kLiteralString &&
      std::memcmp(
          input.data() + start + subPattern.start,
          patternMetadata.fixedPattern().data() + subPattern.start,
          subPattern.length) != 0) {
    return false;
  }
}

It only matches the kLiteralString sub-patterns at the right positions in the input; _ is automatically matched (actually skipped), so there is no need to match it explicitly. With this optimization we get a 40x speedup for kRelaxedPrefix patterns and a 100x speedup for kRelaxedSuffix patterns.

Thank you Maria Basmanova for spending a lot of time reviewing the code.

Laith Sakka · 4 min read

One of the queries shadowed internally at Meta was much slower in Velox compared to Presto (2 CPU days vs. 4.5 CPU hours). Initial investigation identified that the overhead is related to casting empty strings inside a try_cast.

In this blog post I summarize my learnings from investigating and optimizing try_cast.


Start and end results

Initial benchmark:

name                                 total time
try_cast(empty_string_col as int)    4.88s
try_cast(valid_string_col as int)    2.15ms

The difference between casting a valid and an invalid input is huge (>1000x), although ideally casting an invalid string should just set a null and should not be that expensive.

Benchmark results after optimization:

name                                 total time
try_cast(empty_string_col as int)    1.24ms
try_cast(valid_string_col as int)    2.15ms

Sources of regression

The investigation revealed several factors that contributed to the huge gap; they are summarized below along with their approximate significance.

Error logging overhead.

Whenever a VeloxUserError was thrown, an error log used to be generated. However, those errors are expected to either (1) get converted to null if thrown from within a try, or (2) show up to the user otherwise. Hence, there is no need for that expensive logging.

Moreover, each failing row used to generate two log messages because VELOX_USER_FAIL was called twice. Disabling logging for user errors saved 2.6s of the 4.88s.

Throwing overhead.

Each time a row was cast, four exceptions were thrown:

  1. From within the Folly library.
  2. From Cast in Conversions.h, which caught the exception thrown by Folly, converted it to a Velox exception, and threw it.
  3. From the castToPrimitive function, which caught the exception and threw a new exception with more context.
  4. Finally, a fourth throw came from applyToSelectedNoThrow, which caught the exception and called toVeloxException to check the exception type and re-throw.

Those were addressed and avoided as follows:

  1. When the input is an empty string, avoid calling Folly by directly checking whether the input is empty (see the sketch after this list).
  2. Remove the catch and re-throw from Conversions.h.
  3. Introduce setVeloxExceptionError, which can be used to set the error directly in the evalContext without throwing (it does not call toVeloxException).
  4. Optimize applyToSelectedNoThrow to call setVeloxExceptionError if it catches a Velox exception.
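
As a rough illustration of (1), the empty-string fast path can skip Folly's throwing conversion entirely. This is a minimal sketch, not the actual Velox cast code; the function name, the use of std::optional, and the use of folly::tryTo for the non-empty path are assumptions.

#include <folly/Conv.h>

#include <cstdint>
#include <optional>

// Sketch: an empty input becomes a null (for try_cast) without ever entering
// Folly's throwing conversion path; non-empty inputs use the non-throwing API.
std::optional<int32_t> tryCastToInt32(folly::StringPiece input) {
  if (input.empty()) {
    return std::nullopt;
  }
  auto result = folly::tryTo<int32_t>(input);
  return result.hasValue() ? std::optional<int32_t>(result.value())
                           : std::nullopt;
}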

With all those changes, throwing exceptions is completely avoided when casting empty strings. This takes the runtime down to 382.07ms, but it's still much higher than 2.15ms.

Velox exception construction overhead.

Constructing a Velox exception is expensive, even when there is no throw at all! Luckily, this can be avoided for try_cast, since the output can be set to null directly without having to use the errorVector to track errors. With this change the benchmark runtime goes down to 1.24ms.


Follow-up tasks

After all the changes, we have the following performance numbers for other patterns of similar expressions (much better than before, but they can still be optimized a lot).

name                                    total time    iters/sec
try_cast(empty_string_col as int)       1.24ms        808.79
try_cast(invalid_string_col as int)     393.61ms      2.54
try(cast(empty_string_col as int))      375.82ms      2.66
try(cast(invalid_string_col as int))    767.74ms      1.30

All of these can be optimized to have the same runtime cost as the first expression, 1.24ms.

To do that, two things are needed:

1) Tracking errors for try should not require constructing exceptions

The way errors are tracked when evaluating a try expression is by setting values in an ErrorVector, which is a vector of VeloxException pointers. This forces the construction of a Velox exception for each row, but that is not needed for try expressions, since only the row numbers, not the actual errors, need to be tracked so they can eventually be converted to nulls.

This can be changed so that errors are tracked using a selectivity vector. It's worth noting that for other expressions, such as conjunct expressions, this tracking is needed, hence we need to distinguish between the two cases.

This would help optimize any try(x) expression where x throws for a large number of rows.
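
A rough sketch of the idea, with a plain bitmask standing in for the selectivity vector (the names here are hypothetical, not actual Velox types):

#include <cstdint>
#include <vector>

// Sketch: for 'try', remember only which rows failed instead of constructing
// a VeloxException per row; marked rows are later turned into nulls.
struct TryErrorRows {
  std::vector<bool> failed; // Stand-in for a per-row selectivity vector.

  void markFailed(int32_t row) {
    if (row >= static_cast<int32_t>(failed.size())) {
      failed.resize(row + 1, false);
    }
    failed[row] = true;
  }

  bool hasFailed(int32_t row) const {
    return row < static_cast<int32_t>(failed.size()) && failed[row];
  }
};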

2) Use a throw-free cast library

Avoiding throwing and instead returning a boolean should allow us to directly set null in try_cast and avoid constructing exceptions completely.

While this is done now for empty strings, it's not done for all other types of errors. Folly provides a non-throwing API, folly::tryTo, that can be tried for that purpose. According to the Folly documentation: "On the error path, you can expect tryTo to be roughly three orders of magnitude faster than the throwing to and to completely avoid any lock contention arising from stack unwinding."
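
As a quick illustration (not the actual Velox cast code), folly::tryTo returns an Expected instead of throwing on invalid input, so try_cast could set a null directly on the error path:

#include <folly/Conv.h>

#include <cstdint>

void example() {
  auto parsed = folly::tryTo<int32_t>(folly::StringPiece("not a number"));
  if (parsed.hasError()) {
    // Record the failing row (or set null for try_cast) without any throw.
  } else {
    int32_t value = parsed.value();
    (void)value; // Use the converted value.
  }
}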