Are you sure you know everything about Unicode in Perl? You never can.
Imagine doing everything right in your code: you use a UTF-8 boilerplate, you use Test::More::UTF8, and then you get a 'wide character' warning and the test fails.
What happened?
Boiled down to diagnostic code, it has something to do with the utf8 flag:
#!perl
use strict;
use warnings;
use utf8;
binmode(STDOUT,":encoding(UTF-8)");
binmode(STDERR,":encoding(UTF-8)");
my $ascii = 'abc';
my $latin1 = 'äöü';
my $uni = 'Chſ';
print '$ascii: ',$ascii,' utf8::is_utf8($ascii): ',utf8::is_utf8($ascii),"\n";
print '$latin1: ',$latin1,' utf8::is_utf8($latin1): ',utf8::is_utf8($latin1),"\n";
print '$uni: ',$uni,' utf8::is_utf8($uni): ',utf8::is_utf8($uni),"\n";
my $file = 'utf8-flag-ascii.txt';
my $ascii_file;
open(my $in,"<:encoding cannot="" die="" file:="" file="" line="<$in" my="" open="" or="" while="">) {
chomp($line);
$ascii_file = $line;
}
close($in);
print '$ascii_file: ',$ascii_file,
' utf8::is_utf8($ascii_file): ',utf8::is_utf8($ascii_file),"\n";
This prints:
# with 'use utf8;'
$ascii: abc utf8::is_utf8($ascii):
$latin1: äöü utf8::is_utf8($latin1): 1
$uni: Chſ utf8::is_utf8($uni): 1
$ascii_file: abc utf8::is_utf8($ascii_file): 1
Without use utf8:
# without 'use utf8;'
$ascii: abc utf8::is_utf8($ascii):
$latin1: äöü utf8::is_utf8($latin1):
$uni: Chſ utf8::is_utf8($uni):
$ascii_file: abc utf8::is_utf8($ascii_file): 1
That's known and expected: under use utf8, Perl doesn't set the utf8 flag for string literals in the ASCII range.
Let's have a look at the XS interface, where the strings are processed in C:
int
dist_any (SV *s1, SV *s2)
{
    int dist;
    STRLEN m;
    STRLEN n;

    /* NOTE:
       SvPVbyte would downgrade (undocumented and destructive)
       SvPVutf8 would upgrade (also destructive)
    */
    unsigned char *a = (unsigned char*)SvPV (s1, m);
    unsigned char *b = (unsigned char*)SvPV (s2, n);

    if (SvUTF8 (s1) || SvUTF8 (s2)) {
        dist = dist_utf8_ucs (a, m, b, n);
    }
    else {
        dist = dist_bytes (a, m, b, n);
    }
    return dist;
}
With two strings involved we have a problem if one has the utf8 flag set and the other not. With a constraint of 'SvUTF8 (s1) && SvUTF8 (s2)' both would be treated as bytes, even in the case that one is UTF-8 and the other a string literal in the ASCII range (which is also valid UTF-8). In combination with SvPVbyte the UTF-8 string would be downgraded, and that caused the 'wide character' warning, because SvPVbyte changes the original string. The new code above is not strictly correct either, because it could treat some other encoding as UTF-8. But I decided to be harsh and let users who don't follow best practices run into their own problems.
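For illustration, a minimal sketch of how such a problematic variant could look, assuming the '&&' constraint together with SvPVbyte; dist_any_old is an illustrative name, and this is a reconstruction of the pattern, not the actual old code:

/* sketch only -- assumes the usual XS headers (EXTERN.h, perl.h, XSUB.h);
   dist_utf8_ucs and dist_bytes are the helpers from the code above */
int
dist_any_old (SV *s1, SV *s2)
{
    int dist;
    STRLEN m;
    STRLEN n;

    /* with '&&' a mixed pair -- one UTF8-flagged string and one
       unflagged ASCII literal -- falls into the byte branch */
    if (SvUTF8 (s1) && SvUTF8 (s2)) {
        unsigned char *a = (unsigned char*)SvPVutf8 (s1, m);
        unsigned char *b = (unsigned char*)SvPVutf8 (s2, n);
        dist = dist_utf8_ucs (a, m, b, n);
    }
    else {
        /* SvPVbyte downgrades in place: Latin-1 range characters are
           silently rewritten in the caller's scalar, characters above
           0xFF make it die with a 'Wide character' error */
        unsigned char *a = (unsigned char*)SvPVbyte (s1, m);
        unsigned char *b = (unsigned char*)SvPVbyte (s2, n);
        dist = dist_bytes (a, m, b, n);
    }
    return dist;
}

That is exactly the combination that turns one flagged argument plus one ASCII literal into a destructive downgrade of the caller's string.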
The inconsistent treatment appears if we decode from a file as UTF-8: then the utf8 flag is also set for strings in the ASCII range. A brute-force comparison of an English and a German dictionary, 50,000 words each, resulted in 292,898,337 calls, and the script needed 102 seconds of runtime. That's a rate of ~3 M/sec., which is slow, because it always needs to decode UTF-8, even though 100% of the English words are in the ASCII range. The nice lesson: with a small change in the code the decoding got faster, giving an overall acceleration of 13% via Perl and 19% in pure C. But the decoding still consumes 35% of the runtime.
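The small change itself is not shown here, but a typical way to speed up such a decode loop is an ASCII fast path: bytes below 0x80 are complete code points, so the multi-byte machinery can be skipped for them. A sketch of this idea, where to_ucs is an illustrative name (not a function of the module) and the assumption is that the distance routine works on an array of code points:

/* sketch only -- assumes the usual XS headers for U8, UV, STRLEN
   and utf8_to_uvchr_buf; 'out' must have room for 'len' code points */
static STRLEN
to_ucs (const unsigned char *s, STRLEN len, UV *out)
{
    STRLEN i = 0;
    STRLEN k = 0;

    while (i < len) {
        if (s[i] < 0x80) {
            /* ASCII fast path: one byte is one code point */
            out[k++] = s[i];
            i++;
        }
        else {
            /* multi-byte sequence: use the full UTF-8 decoder */
            STRLEN seqlen;
            out[k++] = utf8_to_uvchr_buf (s + i, s + len, &seqlen);
            i += seqlen;
        }
    }
    return k;   /* number of code points written */
}

With dictionaries where most words are pure ASCII, almost every character takes the cheap branch.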