Not having used either of these libraries I can't say for certain, but one possibility is that they don't use the same encoding for the signature. For DSA/ECDSA there are two main formats: IEEE P1363 (used by Windows) and DER (used by OpenSSL).
The "Windows" format is to have a preset size (determined by Q for DSA and P for ECDSA (Windows doesn't support Char-2, but if it did it'd probably be M for Char-2 ECDSA)). Then both r
and s
are left-padded with 0
until they meet that length.
In the too-small-to-be-legal example of r = 0x305 and s = 0x810522, with sizeof(Q) being 3 bytes:
// r
000305
// s
810522
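
Producing that layout is just fixed-width left-padding and concatenation. A minimal sketch in TypeScript (the helper names and the 3-byte size are only for this toy example):

```typescript
// Left-pad a big-endian unsigned integer to `size` bytes (the IEEE P1363 rule).
function padLeft(value: Uint8Array, size: number): Uint8Array {
  if (value.length > size) {
    throw new Error("value is larger than the fixed field size");
  }
  const out = new Uint8Array(size); // zero-filled by default
  out.set(value, size - value.length);
  return out;
}

// r || s, each padded to the size of Q (DSA) / P (ECDSA).
function toP1363(r: Uint8Array, s: Uint8Array, size: number): Uint8Array {
  const out = new Uint8Array(size * 2);
  out.set(padLeft(r, size), 0);
  out.set(padLeft(s, size), size);
  return out;
}

// The toy values from above: r = 0x0305, s = 0x810522, size = 3.
const p1363Sig = toP1363(
  new Uint8Array([0x03, 0x05]),
  new Uint8Array([0x81, 0x05, 0x22]),
  3,
);
// -> 00 03 05 81 05 22
```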
For the "OpenSSL" format it is encoded under the rules of DER as SEQUENCE(INTEGER(r), INTEGER(s)), which looks like
// SEQUENCE
30
// (length of payload)
0A
// INTEGER(r)
02
// (length of payload)
02
// note the leading 0x00 is omitted
0305
// INTEGER(s)
02
// (length of payload)
04
// INTEGER is a signed type, but this is a positive number whose top bit is set,
// so a 0x00 is inserted to keep the sign bit clear.
00810522
or, compactly:
- Windows:
000305810522
- OpenSSL:
300A02020305020400810522
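
The DER side follows mechanically from the rules above: strip any fixed-width 0x00 padding, re-add a single 0x00 if the top bit ends up set, and wrap everything in INTEGER and SEQUENCE headers. A sketch under the assumption that all lengths stay below 128 bytes (so the single-byte DER length form is enough); the helper names are mine:

```typescript
// Encode one unsigned big-endian value as a DER INTEGER.
function derInteger(value: Uint8Array): Uint8Array {
  // Strip any leading 0x00 padding bytes, keeping at least one byte.
  let start = 0;
  while (start < value.length - 1 && value[start] === 0x00) {
    start++;
  }
  let body = value.slice(start);
  // INTEGER is signed: if the top bit is set, prepend 0x00 to keep it positive.
  if (body[0] & 0x80) {
    const padded = new Uint8Array(body.length + 1);
    padded.set(body, 1);
    body = padded;
  }
  // 0x02 = INTEGER tag; assumes body.length < 128 (short-form length).
  return new Uint8Array([0x02, body.length, ...body]);
}

// SEQUENCE(INTEGER(r), INTEGER(s)); again assumes the payload is < 128 bytes.
function toDer(r: Uint8Array, s: Uint8Array): Uint8Array {
  const rInt = derInteger(r);
  const sInt = derInteger(s);
  return new Uint8Array([0x30, rInt.length + sInt.length, ...rInt, ...sInt]);
}

// Toy values again: r = 00 03 05, s = 81 05 22.
const derSig = toDer(
  new Uint8Array([0x00, 0x03, 0x05]),
  new Uint8Array([0x81, 0x05, 0x22]),
);
// -> 30 0A 02 02 03 05 02 04 00 81 05 22
```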
The "Windows" format is always even, always the same length. The "OpenSSL" format is usually about 6 bytes bigger, but can gain or lose a byte in the middle; so it's sometimes even, sometimes odd.
Base64-decoding your sig64 value shows that it is using the DER encoding. Generate a couple of signatures with WebCrypto; if any of them don't start with 0x30, then you have the IEEE-P1363-vs-DER problem.
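
As a quick check, something like this sketch (atob is the browser's Base64 decoder; sig64 mirrors the variable from the question) will tell you which shape you're looking at:

```typescript
// Decode the Base64 signature and look at the first byte.
// DER signatures start with the SEQUENCE tag 0x30; run this over a few
// freshly generated signatures, and if any return false they are raw
// IEEE P1363 (r || s) instead.
function looksLikeDer(sig64: string): boolean {
  const bytes = Uint8Array.from(atob(sig64), (c) => c.charCodeAt(0));
  return bytes.length > 0 && bytes[0] === 0x30;
}
```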