Monday, 2 May 2022

Tuning Levenshtein Distance

On 2022-01-19 I released an improved version of Text::Levenshtein::BV. BV means Bit Vector, because it uses a bit-parallel algorithm with a runtime complexity of O(ceil(m/w)*n), where w is the width of a CPU register. It's 20 times faster than the popular Text::Levenshtein, which uses the traditional (simple) algorithm with O(m*n) complexity, measured on strings of length 10.
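
For strings that fit into a single machine word, the core of such a bit-parallel algorithm (Hyyrö's variant of Myers' bit-vector algorithm) can be sketched in pure Perl. A minimal sketch for illustration only, not the module's actual implementation, limited to first strings of at most 62 characters so the addition below stays within a 64-bit Perl integer:

use strict;
use warnings;

sub dist_bv {
    my ($s1, $s2) = @_;
    my @a = split //, $s1;
    my $m = scalar @a;
    die "first string must have 1..62 chars" if $m < 1 || $m > 62;

    # bitmask per character: which positions of $s1 hold that character
    my %peq;
    for my $i (0 .. $m - 1) {
        $peq{ $a[$i] } = ($peq{ $a[$i] } // 0) | (1 << $i);
    }

    my $mask = (1 << $m) - 1;
    my $high = 1 << ($m - 1);              # bit of the last row
    my ($Pv, $Mv, $score) = ($mask, 0, $m);

    for my $c (split //, $s2) {            # n iterations of O(1) word ops
        my $Eq = $peq{$c} // 0;
        my $Xv = $Eq | $Mv;
        my $Xh = ((($Eq & $Pv) + $Pv) ^ $Pv) | $Eq;
        my $Ph = $Mv | (~($Xh | $Pv) & $mask);
        my $Mh = $Pv & $Xh;
        $score++ if $Ph & $high;
        $score-- if $Mh & $high;
        $Ph = (($Ph << 1) | 1) & $mask;
        $Mh = ($Mh << 1) & $mask;
        $Pv = $Mh | (~($Xv | $Ph) & $mask);
        $Mv = $Ph & $Xv;
    }
    return $score;
}

print dist_bv('kitten', 'sitting'), "\n";   # prints 3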

Last summer I developed a variant with the simple algorithm named Levenshtein::Simple, because working on arrays is more flexible than working on strings. A string is easy to split into an array of characters, and the same module can then be used for arrays of characters, graphemes, words, or lines.
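
A minimal sketch of the splitting (\X matches a grapheme cluster):

my $string = "the quick brown fox";

my @chars     = split //, $string;      # characters
my @graphemes = $string =~ /\X/g;       # grapheme clusters
my @words     = split ' ', $string;     # words
my @lines     = split /\n/, $string;    # lines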

Another reason is that the Levenshtein alignment (shortest edit script) is more important than just the distance. Most modules on CPAN calculate only the distance as a similarity measure, which is easier and faster with LCS.
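
As an illustration, the shortest edit script for kitten -> sitting (distance 3) looks like this:

k -> s   substitute
i    i   match
t    t   match
t    t   match
e -> i   substitute
n    n   match
  -> g   insert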

It was a surprise that my Levenshtein::Simple, without any tuning, was 50% faster than Text::Levenshtein. During development of an XS implementation I then tuned Levenshtein::Simple until it beat Text::Fuzzy::PP.

These are the results for the pure Perl (PP) implementations, measured as rate per second:

                         N=10
Text::WagnerFischer     4,931
Text::Levenshtein       6,339
Text::Fuzzy::PP        11,164
Levenshtein::Simple    27,926
Text::Levenshtein::BV 118,153
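
For reference, such a rate table can be produced with the core Benchmark module. A minimal sketch; the two length-10 strings are made up, and only Text::Levenshtein is shown here, the other modules plug in the same way (their call styles differ, see the respective docs):

use strict;
use warnings;
use Benchmark qw(cmpthese);
use Text::Levenshtein qw(distance fastdistance);

# hypothetical pair of length-10 strings
my ($s1, $s2) = ('Glastonbur', 'Gladstonbu');

cmpthese(-2, {
    'distance'     => sub { distance($s1, $s2) },
    'fastdistance' => sub { fastdistance($s1, $s2) },
});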

There are many XS versions on CPAN for calculating Levenshtein distance, but only three of them work without problems. They all use the simple algorithm. Text::Levenshtein::Flexible uses the C implementation from PostgreSQL, but has tricky code. Perl strings at the XS interface can be bytes or UTF-8. In C, UTF-8 can be processed either by iterating over the UTF-8 bytes directly or by converting UTF-8 to 32-bit integers (numerical code points) first. Working with integers is faster, even including the conversion.

Measured via Perl, the benchmarks are as follows (TL::BVXS is mine; 'simple' is my implementation of the simple algorithm, just for fun; 'uni' means Unicode strings, 'bytes' means byte strings):

                            N=10
Text::Levenshtein::XS    347,539
TL::Flexible           2,725,258
Text::Fuzzy            3,026,401
TL::BVXS_simple        6,056,132
TL::BVXS_uni           8,495,407
TL::BVXS_bytes        12,743,110


The same simple algorithm, also implemented in C, is 2 times faster than PostgreSQL's. BV is 4 times faster (on longer strings the difference is larger). Compared to Text::Levenshtein it's 2,000 times faster.

In pure C, without the XS interface, distance on bytes works at a rate of 25 million/second; on code points it is ~10% slower.

Optimising for fun.

Monday, 25 April 2022

Benchmark UTF-8 decoding in Perl

In Perl there are two modules on CPAN for UTF-8 decoding:

- Encode

- Unicode::UTF8

Unicode::UTF8 claims to be faster. I was interested in how fast it is compared to my own implementation in pure C, which decodes UTF-8 (1-6 bytes, needing up to 31 bits) into 32-bit code points.
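
A minimal sketch of such a comparison with the core Benchmark module (only the two CPAN modules are shown; the call into my own XS code is omitted here):

use strict;
use warnings;
use utf8;
use Benchmark qw(cmpthese);
use Encode ();
use Unicode::UTF8 ();

# UTF-8 octets of the first test string below (11 chars, 13 bytes)
my $octets = Encode::encode('UTF-8', 'Chſerſplzon');

cmpthese(-2, {
    'Encode'        => sub { Encode::decode('UTF-8', $octets) },
    'Unicode::UTF8' => sub { Unicode::UTF8::decode_utf8($octets) },
});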

UTF-8 decoding became a bottleneck and is worth a closer look. New SIMD implementations can validate up to 4 GB of UTF-8 per second with AVX512.

That's the result of a quick benchmark:

$octets: Chſerſplzon (chars: 11, bytes: 13)
                    Rate        Encode Unicode::UTF8      TL::BVXS
Encode         1927529/s            --          -83%          -89%
Unicode::UTF8 11143773/s          478%            --          -36%
TL::BVXS      17311395/s          798%           55%            --

$octets: राज्यराज्य (chars: 9, bytes: 27)
                    Rate        Encode Unicode::UTF8      TL::BVXS
Encode         1592888/s            --          -83%          -90%
Unicode::UTF8  9466311/s          494%            --          -42%
TL::BVXS      16287053/s          922%           72%            --

Mine is the fastest with ~215 MB/s, but still far away from the SIMD solutions. In my use case the decoding consumes 35% of the execution time. But SIMD would not help much for short strings.

Trial: port some code to "use standard"

Just wanted to know how it feels to adapt code to use standard; 

I selected a small, unpublished module: Levenshtein::Simple. It's in the range of 530 lines of code, not counting tests. A typical case of reinventing a better wheel, because Text::Levenshtein does not support arrays as input and misses an alignment method, which I needed in a simple and robust way.

Here are the changes as a diff:

- 49 changes, most of them explicit syntax for dereferencing.

- 2 barewords needed quoting: STDOUT and STDERR. But why were they there? They are not needed at all.

- The nested ternary needed explicit braces. Now it's more readable.

- 1 label not allowed as a bareword. That's a bug in Guacamole; the label was a relic.

- 2 invalid characters in the POD, from copy and paste of a reference. use standard; slurps the source as UTF-8 and it breaks. That's undocumented and the second bug.

In summary: simple changes, and IMHO the code is more readable now. A nice development tool, but not ready for production.

Tuesday, 19 April 2022

Unicode, UTF-8, utf8 and Perl-XS revisited

Are you sure you know everything about Unicode in Perl? You never can.

Imagine doing everything right in your code: you use a UTF-8 boilerplate, you use Test::More::UTF8, and then you get a 'wide character' warning and the test fails.

What happened?

Boiling it down to diagnostic code, it has something to do with the utf8-flag:

#!perl

use strict;
use warnings;
use utf8;

binmode(STDOUT,":encoding(UTF-8)");
binmode(STDERR,":encoding(UTF-8)");

my $ascii  = 'abc';
my $latin1 = 'äöü';
my $uni    = 'Chſ';

print '$ascii:  ',$ascii,' utf8::is_utf8($ascii): ',utf8::is_utf8($ascii),"\n";
print '$latin1: ',$latin1,' utf8::is_utf8($latin1): ',utf8::is_utf8($latin1),"\n";
print '$uni:    ',$uni,' utf8::is_utf8($uni): ',utf8::is_utf8($uni),"\n";

my $file = 'utf8-flag-ascii.txt';

my $ascii_file;
open(my $in, "<:encoding(UTF-8)", $file) or die "Cannot open file: $file";
while (my $line = <$in>) {
    chomp($line);
    $ascii_file = $line;
}
close($in);

print '$ascii_file: ',$ascii_file,
    ' utf8::is_utf8($ascii_file): ',utf8::is_utf8($ascii_file),"\n";

This prints:

# with 'use utf8;'

$ascii:  abc utf8::is_utf8($ascii):
$latin1: äöü utf8::is_utf8($latin1): 1
$uni:    Chſ utf8::is_utf8($uni): 1
$ascii_file: abc utf8::is_utf8($ascii_file): 1

Without use utf8:

# without 'use utf8;'

$ascii:  abc utf8::is_utf8($ascii):
$latin1: äöü utf8::is_utf8($latin1):
$uni:    Chſ utf8::is_utf8($uni):
$ascii_file: abc utf8::is_utf8($ascii_file): 1

That's known and expected: Perl doesn't set the utf8-flag under use utf8 for string literals in the ASCII range.

Let's have a look at the XS interface, as we want to process the strings in C:

int
dist_any (SV *s1, SV *s2)
{
    int dist;

    STRLEN m;
    STRLEN n;
    /* NOTE:
        SvPVbyte would downgrade (undocumented and destructive)
        SvPVutf8 would upgrade (also destructive)
    */
    unsigned char *a = (unsigned char*)SvPV (s1, m);
    unsigned char *b = (unsigned char*)SvPV (s2, n);

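    /* If either argument carries the UTF8 flag, compare both as
       code points decoded from UTF-8; otherwise compare raw bytes. */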
    if (SvUTF8 (s1) || SvUTF8 (s2) ) {
        dist = dist_utf8_ucs (a, m, b, n);
    }
    else {
        dist = dist_bytes (a, m, b, n);
    }
    return dist;
}

With two strings involved we have a problem if one has the utf8-flag set and the other not. With an AND-constraint 'SvUTF8 (s1) && SvUTF8 (s2)' both would be treated as bytes, even in the case that one is UTF-8 and the other a string literal in the ASCII range (which is also valid UTF-8). In combination with SvPVbyte the UTF-8 string would be downgraded; that caused the 'wide character' warning, because SvPVbyte changes the original string. The new OR-based code above is not strictly correct either, because it could treat some other encoding as UTF-8. But I decided to be harsh and let users who ignore best practices run into their own problems.

The inconsistent treatment shows up when we decode strings as UTF-8 from a file: then the utf8-flag is set even for strings in the ASCII range. A brute-force comparison of an English and a German dictionary, 50,000 words each, resulted in 292,898,337 calls and 102 seconds of runtime. That's a rate of ~3 M/sec and slow, because it always needs to decode UTF-8, even though 100% of the English words are in the ASCII range. The nice lesson: with a small change in the code the decoding got faster, with an overall acceleration of 13% via Perl and 19% in pure C. But the decoding still consumes 35% of the runtime.
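
One way to speed up such a mixed run on the Perl side (a hypothetical pre-pass, not the change described above) is to clear the utf8-flag on pure-ASCII words before the loop, so the XS code takes the faster byte branch:

use strict;
use warnings;

# words as if decoded from a UTF-8 dictionary file: they carry the
# utf8 flag even when they are pure ASCII
my @english_words = map { my $w = $_; utf8::upgrade($w); $w } qw(cat dog mouse);

# pre-pass: downgrade pure-ASCII words to byte strings
for my $word (@english_words) {
    utf8::downgrade($word) if $word !~ /[^\x00-\x7F]/;
}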

Tuesday, 29 March 2022

First impression of "use standard"


The idea of the Perl feature standard is to check whether Perl code conforms to a reduced and easier-to-parse syntax.

If you like writing explicit syntax, your code is already very close to standard.

It's part of Guacamole and uses Marpa::R2 as the parser. Looking into the source, there is a large definition of the Perl syntax. Guacamole fails many tests on CPAN Testers, mainly on older Perl versions; this seems to be because Guacamole itself uses postfix dereferencing.

Installation via cpanm worked without problems. Same for Marpa::R2.

For a quick try I opened one of the test scripts in my editor, 356 lines of code.

The first modification is to activate the feature:
use standard;
Next we start the script:

$ perl t/10_basic_distance.t 
File 't/10_basic_distance.t' does not pass Standard Perl.
Parser says:
> Error in SLIF parse: No lexeme found at line 10, column 16
> * String before error: gs;\nuse utf8;\n\nuse standard;\n\nbinmode( STDOUT
> * The error was at line 10, column 16, and at character 0x002c ',', ...
> * here: , ":encoding(UTF-8)");\nbinmode( 'STDERR', ":encod
> Marpa::R2 exception 
    at ~/perl5/perlbrew/[...]/5.32.0/Guacamole.pm line 2103.
> 
> Failed to parse past: STDOUT (char 17, length 1), 
    expected LParen,OpArrow,PackageSep 
    at ~/perl5/perlbrew/[...]/5.32.0/Guacamole.pm line 2119.

Hmm, looks a little hard to read. But we can find the spot by line number and position.

Seems it doesn't like STDOUT as a bareword parameter. That's against the documentation, which allows STDOUT as a bareword. No problem, just quote it:

-binmode(STDOUT,":encoding(UTF-8)");
-binmode(STDERR,":encoding(UTF-8)");
+use standard;
+
+binmode( 'STDOUT', ":encoding(UTF-8)");
+binmode( 'STDERR', ":encoding(UTF-8)");

Next run, next problem. It finds 3 "forgotten" dereferences without curly brackets. That's fine and exactly what's expected:
-  for my $example (@$examples1) {
+  for my $example (@{$examples1}) {
It also wants explicit syntax on subroutine calls:
-done_testing;
+done_testing();
In total 6 of 356 lines had to change.

In summary it does a good job and is fast, especially compared to Perl::Critic. But the start time increases to 0.727s, compared to 0.069s without it, on a system with an SSD. Better to remove it before releasing to production.

Maybe a better way is to wrap the module in a small test script which reads all Perl files of a source tree and checks their quality.
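
A possible shape for that, as a sketch only: slurp every Perl file and feed it to Guacamole directly. This assumes Guacamole->parse() dies on source that is not Standard Perl, as its synopsis suggests; the file selection is hypothetical:

use strict;
use warnings;
use Test::More;
use File::Find ();
use Guacamole ();    # the parser behind 'use standard'

my @dirs = grep { -d } qw(lib t);
plan skip_all => 'no lib/ or t/ directory' unless @dirs;

# collect all Perl files of the source tree
my @files;
File::Find::find(
    sub { push @files, $File::Find::name if /\.(?:pm|pl|t)\z/ },
    @dirs,
);

for my $file (@files) {
    open my $fh, '<:encoding(UTF-8)', $file or die "Cannot open $file: $!";
    my $source = do { local $/; <$fh> };
    close $fh;

    my $ok = eval { Guacamole->parse($source); 1 };
    ok($ok, "$file passes Standard Perl") or diag($@);
}

done_testing();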