#! /usr/bin/env perl
# Copyright 2010-2020 The OpenSSL Project Authors. All Rights Reserved.
#
# Licensed under the OpenSSL license (the "License").  You may not use
# this file except in compliance with the License.  You can obtain a copy
# in the file LICENSE in the source distribution or at
# https://www.openssl.org/source/license.html

#
# ====================================================================
# Written by Andy Polyakov <appro@openssl.org> for the OpenSSL
# project. The module is, however, dual licensed under OpenSSL and
# CRYPTOGAMS licenses depending on where you obtain it. For further
# details see http://www.openssl.org/~appro/cryptogams/.
# ====================================================================
#
# March, May, June 2010
#
# The module implements the "4-bit" GCM GHASH function and the underlying
# single multiplication operation in GF(2^128). "4-bit" means that it
# uses a 256-byte per-key table [+64/128 bytes fixed table]. It has two
# code paths: vanilla x86 and vanilla SSE. The former is executed on
# 486 and Pentium, the latter on all others. SSE GHASH features the so-called
# "528B" variant of the "4-bit" method, utilizing an additional 256+16 bytes
# of per-key storage [+512 bytes shared table]. Performance results
# are for the streamed GHASH subroutine and are expressed in cycles per
# processed byte, less is better:
#
#		gcc 2.95.3(*)	SSE assembler	x86 assembler
#
# Pentium	105/111(**)	-		50
# PIII		68 /75		12.2		24
# P4		125/125		17.8		84(***)
# Opteron	66 /70		10.1		30
# Core2		54 /67		8.4		18
# Atom		105/105		16.8		53
# VIA Nano	69 /71		13.0		27
#
# (*)	gcc 3.4.x was observed to generate a few percent slower code,
#	which is one of the reasons why the 2.95.3 results were chosen;
#	another reason is the lack of 3.4.x results for older CPUs;
#	comparison with SSE results is not completely fair, because C
#	results are for the vanilla "256B" implementation, while
#	assembler results are for "528B";-)
# (**)	second number is the result for code compiled with the -fPIC flag,
#	which is actually more relevant, because assembler code is
#	position-independent;
# (***)	see comment in non-MMX routine for further details;
#
# To summarize, it's >2-5 times faster than gcc-generated code. To
# anchor it to something else, SHA1 assembler processes one byte in
# ~7 cycles on contemporary x86 cores. As for the choice of MMX/SSE
# in particular, see comment at the end of the file...

# May 2010
#
# Add PCLMULQDQ version performing at 2.10 cycles per processed byte.
# The question is how close is it to the theoretical limit? The pclmulqdq
# instruction latency appears to be 14 cycles and there can't be more
# than 2 of them executing at any given time. This means that a single
# Karatsuba multiplication would take 28 cycles *plus* a few cycles for
# pre- and post-processing. Then the multiplication has to be followed by
# modulo-reduction. Given that the aggregated reduction method [see
# "Carry-less Multiplication and Its Usage for Computing the GCM Mode"
# white paper by Intel] allows you to perform reduction only once in
# a while, we can assume that asymptotic performance can be estimated
# as (28+Tmod/Naggr)/16, where Tmod is the time to perform reduction
# and Naggr is the aggregation factor.
#
# Before we proceed to this implementation, let's have a closer look at
# the best-performing code suggested by Intel in their white paper.
# By tracing inter-register dependencies, Tmod is estimated as ~19
# cycles and the Naggr chosen by Intel is 4, resulting in 2.05 cycles per
# processed byte. As implied, this is quite an optimistic estimate,
# because it does not account for Karatsuba pre- and post-processing,
# which for a single multiplication is ~5 cycles. Unfortunately Intel
# does not provide performance data for GHASH alone. But benchmarking
# AES_GCM_encrypt ripped out of Fig. 15 of the white paper with aadt
# alone resulted in 2.46 cycles per byte out of a 16KB buffer. Note that
# the result accounts even for pre-computing of the powers of the hash
# key H, but their contribution is negligible at 16KB buffer size.
# Moving on to the implementation in question. Tmod is estimated as
# ~13 cycles and Naggr is 2, giving asymptotic performance of ...
# 2.16. How is it possible that measured performance is better than
# the optimistic theoretical estimate? There is one thing Intel failed
# to recognize. By serializing GHASH with CTR in the same subroutine,
# the former's performance is indeed limited by the above
# (Tmul + Tmod/Naggr) equation. But if the GHASH procedure is detached,
# the modulo-reduction can be interleaved with Naggr-1 multiplications
# at instruction level and under ideal conditions even disappear from
# the equation. So the optimistic theoretical estimate for this
# implementation is ... 28/16=1.75, and not 2.16. Well, that's probably
# way too optimistic, at least for such a small Naggr. I'd argue that
# (28+Tproc/Naggr)/16, where Tproc is the time required for Karatsuba
# pre- and post-processing, is a more realistic estimate. In this case
# it gives ... (28+5/2)/16 ~= 1.91 cycles. Or in other words, depending
# on how well we can interleave reduction and one of the two
# multiplications, the performance should be between 1.91 and 2.16. As
# already mentioned, this implementation processes one byte out of an
# 8KB buffer in 2.10 cycles, while the x86_64 counterpart does so in
# 2.02. x86_64 performance is better, because the larger register
# bank allows reduction and multiplication to be interleaved better.
#
# Does it make sense to increase Naggr? To start with, it's virtually
# impossible in 32-bit mode, because of the limited register bank
# capacity. Otherwise the improvement has to be weighed against slower
# setup, as well as code size and complexity increase. As even the
# optimistic estimate doesn't promise a 30% performance improvement,
# there are currently no plans to increase Naggr.
#
# Special thanks to David Woodhouse for providing access to a
# Westmere-based system on behalf of Intel Open Source Technology Centre.

# January 2010
#
# Tweaked to optimize transitions between integer and FP operations
# on the same XMM register, the PCLMULQDQ subroutine was measured to
# process one byte in 2.07 cycles on Sandy Bridge, and in 2.12 on
# Westmere. The minor regression on Westmere is outweighed by the ~15%
# improvement on Sandy Bridge. Strangely enough, an attempt to modify
# the 64-bit code in a similar manner resulted in almost 20% degradation
# on Sandy Bridge, where the original 64-bit code processes one byte in
# 1.95 cycles.

#####################################################################
# For reference, AMD Bulldozer processes one byte in 1.98 cycles in
# 32-bit mode and 1.89 in 64-bit.

# February 2013
#
# Overhaul: aggregate Karatsuba post-processing, improve ILP in
# reduction_alg9. Resulting performance is 1.96 cycles per byte on
# Westmere, 1.95 - on Sandy/Ivy Bridge, 1.76 - on Bulldozer.

$0 =~ m/(.*[\/\\])[^\/\\]+$/; $dir=$1;
push(@INC,"${dir}","${dir}../../perlasm");
require "x86asm.pl";

$output=pop;
open STDOUT,">$output";

&asm_init($ARGV[0],$x86only = $ARGV[$#ARGV] eq "386");

$sse2=0;
for (@ARGV) { $sse2=1 if (/-DOPENSSL_IA32_SSE2/); }

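# Illustrative invocation (an assumption following from the argument
# handling above, not taken from the build system): the perlasm flavour
# comes first, the output file last, e.g.
#
#	perl ghash-x86.pl elf -DOPENSSL_IA32_SSE2 ghash-x86.S
#
# Passing "386" as the last argument before the output file restricts
# the result to the vanilla-x86 code paths.
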
($Zhh,$Zhl,$Zlh,$Zll) = ("ebp","edx","ecx","ebx");
$inp  = "edi";
$Htbl = "esi";

$unroll = 0;	# Affects x86 loop. Folded loop performs ~7% worse
		# than unrolled, which has to be weighed against
		# 2.5x x86-specific code size reduction.

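# For orientation, a rough sketch of what one step of the "4-bit"
# algorithm below computes per input nibble (reflected bit order,
# cf. gcm_gmult_4bit in gcm128.c; pseudocode, names illustrative):
#
#	rem  = Z.lo & 0xf;		# nibble shifted out
#	Z    = Z >> 4;			# 128-bit shift, done with shrd below
#	Z.hi ^= rem_4bit[rem];		# fold the shifted-out bits back in
#	Z    ^= Htable[next nibble of Xi];
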
sub x86_loop {
    my $off = shift;
    my $rem = "eax";

	&mov	($Zhh,&DWP(4,$Htbl,$Zll));
	&mov	($Zhl,&DWP(0,$Htbl,$Zll));
	&mov	($Zlh,&DWP(12,$Htbl,$Zll));
	&mov	($Zll,&DWP(8,$Htbl,$Zll));
	&xor	($rem,$rem);	# avoid partial register stalls on PIII

	# shrd practically kills P4, 2.5x deterioration, but P4 has
	# an MMX code-path to execute. shrd runs a tad faster [than twice
	# the shifts, moves and ors] on pre-MMX Pentium (as well as
	# PIII and Core2), *but* minimizes code size, spares a register
	# and thus allows the loop to be folded...
	if (!$unroll) {
	my $cnt = $inp;
	&mov	($cnt,15);
	&jmp	(&label("x86_loop"));
	&set_label("x86_loop",16);
	    for($i=1;$i<=2;$i++) {
		&mov	(&LB($rem),&LB($Zll));
		&shrd	($Zll,$Zlh,4);
		&and	(&LB($rem),0xf);
		&shrd	($Zlh,$Zhl,4);
		&shrd	($Zhl,$Zhh,4);
		&shr	($Zhh,4);
		&xor	($Zhh,&DWP($off+16,"esp",$rem,4));

		&mov	(&LB($rem),&BP($off,"esp",$cnt));
		if ($i&1) {
			&and	(&LB($rem),0xf0);
		} else {
			&shl	(&LB($rem),4);
		}

		&xor	($Zll,&DWP(8,$Htbl,$rem));
		&xor	($Zlh,&DWP(12,$Htbl,$rem));
		&xor	($Zhl,&DWP(0,$Htbl,$rem));
		&xor	($Zhh,&DWP(4,$Htbl,$rem));

		if ($i&1) {
			&dec	($cnt);
			&js	(&label("x86_break"));
		} else {
			&jmp	(&label("x86_loop"));
		}
	    }
	&set_label("x86_break",16);
	} else {
	    for($i=1;$i<32;$i++) {
		&comment($i);
		&mov	(&LB($rem),&LB($Zll));
		&shrd	($Zll,$Zlh,4);
		&and	(&LB($rem),0xf);
		&shrd	($Zlh,$Zhl,4);
		&shrd	($Zhl,$Zhh,4);
		&shr	($Zhh,4);
		&xor	($Zhh,&DWP($off+16,"esp",$rem,4));

		if ($i&1) {
			&mov	(&LB($rem),&BP($off+15-($i>>1),"esp"));
			&and	(&LB($rem),0xf0);
		} else {
			&mov	(&LB($rem),&BP($off+15-($i>>1),"esp"));
			&shl	(&LB($rem),4);
		}

		&xor	($Zll,&DWP(8,$Htbl,$rem));
		&xor	($Zlh,&DWP(12,$Htbl,$rem));
		&xor	($Zhl,&DWP(0,$Htbl,$rem));
		&xor	($Zhh,&DWP(4,$Htbl,$rem));
	    }
	}
	&bswap	($Zll);
	&bswap	($Zlh);
	&bswap	($Zhl);
	if (!$x86only) {
		&bswap	($Zhh);
	} else {
		&mov	("eax",$Zhh);
		&bswap	("eax");
		&mov	($Zhh,"eax");
	}
}

if ($unroll) {
    &function_begin_B("_x86_gmult_4bit_inner");
	&x86_loop(4);
	&ret	();
    &function_end_B("_x86_gmult_4bit_inner");
}

sub deposit_rem_4bit {
    my $bias = shift;

	&mov	(&DWP($bias+0, "esp"),0x0000<<16);
	&mov	(&DWP($bias+4, "esp"),0x1C20<<16);
	&mov	(&DWP($bias+8, "esp"),0x3840<<16);
	&mov	(&DWP($bias+12,"esp"),0x2460<<16);
	&mov	(&DWP($bias+16,"esp"),0x7080<<16);
	&mov	(&DWP($bias+20,"esp"),0x6CA0<<16);
	&mov	(&DWP($bias+24,"esp"),0x48C0<<16);
	&mov	(&DWP($bias+28,"esp"),0x54E0<<16);
	&mov	(&DWP($bias+32,"esp"),0xE100<<16);
	&mov	(&DWP($bias+36,"esp"),0xFD20<<16);
	&mov	(&DWP($bias+40,"esp"),0xD940<<16);
	&mov	(&DWP($bias+44,"esp"),0xC560<<16);
	&mov	(&DWP($bias+48,"esp"),0x9180<<16);
	&mov	(&DWP($bias+52,"esp"),0x8DA0<<16);
	&mov	(&DWP($bias+56,"esp"),0xA9C0<<16);
	&mov	(&DWP($bias+60,"esp"),0xB5E0<<16);
}
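
# The 16 constants deposited above are the rem_4bit[] reduction table
# (cf. gcm128.c): entry i is the XOR of 0x1C2<<(4+b) over the set bits
# b of i, 0x1C2 being the GHASH reduction constant also used by the
# CLMUL path below (0x1c2_polynomial). A derivation sketch, shown for
# illustration only and not used by this module:
#
#	my @rem_4bit = map { my ($i,$v) = ($_,0);
#	    $v ^= 0x1C2<<(4+$_) for grep { ($i>>$_)&1 } (0..3);
#	    $v } (0..15);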

$suffix = $x86only ? "" : "_x86";

&function_begin("gcm_gmult_4bit".$suffix);
	&stack_push(16+4+1);			# +1 for stack alignment
	&mov	($inp,&wparam(0));		# load Xi
	&mov	($Htbl,&wparam(1));		# load Htable

	&mov	($Zhh,&DWP(0,$inp));		# load Xi[16]
	&mov	($Zhl,&DWP(4,$inp));
	&mov	($Zlh,&DWP(8,$inp));
	&mov	($Zll,&DWP(12,$inp));

	&deposit_rem_4bit(16);

	&mov	(&DWP(0,"esp"),$Zhh);		# copy Xi[16] on stack
	&mov	(&DWP(4,"esp"),$Zhl);
	&mov	(&DWP(8,"esp"),$Zlh);
	&mov	(&DWP(12,"esp"),$Zll);
	&shr	($Zll,20);
	&and	($Zll,0xf0);

	if ($unroll) {
		&call	("_x86_gmult_4bit_inner");
	} else {
		&x86_loop(0);
		&mov	($inp,&wparam(0));
	}

	&mov	(&DWP(12,$inp),$Zll);
	&mov	(&DWP(8,$inp),$Zlh);
	&mov	(&DWP(4,$inp),$Zhl);
	&mov	(&DWP(0,$inp),$Zhh);
	&stack_pop(16+4+1);
&function_end("gcm_gmult_4bit".$suffix);

&function_begin("gcm_ghash_4bit".$suffix);
	&stack_push(16+4+1);			# +1 for 64-bit alignment
	&mov	($Zll,&wparam(0));		# load Xi
	&mov	($Htbl,&wparam(1));		# load Htable
	&mov	($inp,&wparam(2));		# load in
	&mov	("ecx",&wparam(3));		# load len
	&add	("ecx",$inp);
	&mov	(&wparam(3),"ecx");

	&mov	($Zhh,&DWP(0,$Zll));		# load Xi[16]
	&mov	($Zhl,&DWP(4,$Zll));
	&mov	($Zlh,&DWP(8,$Zll));
	&mov	($Zll,&DWP(12,$Zll));

	&deposit_rem_4bit(16);

    &set_label("x86_outer_loop",16);
	&xor	($Zll,&DWP(12,$inp));		# xor with input
	&xor	($Zlh,&DWP(8,$inp));
	&xor	($Zhl,&DWP(4,$inp));
	&xor	($Zhh,&DWP(0,$inp));
	&mov	(&DWP(12,"esp"),$Zll);		# dump it on stack
	&mov	(&DWP(8,"esp"),$Zlh);
	&mov	(&DWP(4,"esp"),$Zhl);
	&mov	(&DWP(0,"esp"),$Zhh);

	&shr	($Zll,20);
	&and	($Zll,0xf0);

	if ($unroll) {
		&call	("_x86_gmult_4bit_inner");
	} else {
		&x86_loop(0);
		&mov	($inp,&wparam(2));
	}
	&lea	($inp,&DWP(16,$inp));
	&cmp	($inp,&wparam(3));
	&mov	(&wparam(2),$inp)	if (!$unroll);
	&jb	(&label("x86_outer_loop"));

	&mov	($inp,&wparam(0));	# load Xi
	&mov	(&DWP(12,$inp),$Zll);
	&mov	(&DWP(8,$inp),$Zlh);
	&mov	(&DWP(4,$inp),$Zhl);
	&mov	(&DWP(0,$inp),$Zhh);
	&stack_pop(16+4+1);
&function_end("gcm_ghash_4bit".$suffix);

if (!$x86only) {{{

&static_label("rem_4bit");

if (!$sse2) {{	# pure-MMX "May" version...

$S=12;		# shift factor for rem_4bit

&function_begin_B("_mmx_gmult_4bit_inner");
# The MMX version performs 3.5 times better on P4 (see comment in non-MMX
# routine for further details), 100% better on Opteron, ~70% better
# on Core2 and PIII... In other words, the effort is considered to be
# well spent... Since the initial release the loop has been unrolled
# in order to "liberate" the register previously used as loop counter.
# Instead it's used to optimize the critical path in 'Z.hi ^=
# rem_4bit[Z.lo&0xf]'. The path involves a move of Z.lo from MMX to an
# integer register, an effective address calculation and finally a
# merge of the value into Z.hi. The reference to rem_4bit is scheduled
# so late that I had to shift the rem_4bit elements right by 4 (>>4).
# This resulted in a 20-45% improvement on contemporary µ-archs.
{
    my $cnt;
    my $rem_4bit = "eax";
    my @rem = ($Zhh,$Zll);
    my $nhi = $Zhl;
    my $nlo = $Zlh;

    my ($Zlo,$Zhi) = ("mm0","mm1");
    my $tmp = "mm2";

	&xor	($nlo,$nlo);	# avoid partial register stalls on PIII
	&mov	($nhi,$Zll);
	&mov	(&LB($nlo),&LB($nhi));
	&shl	(&LB($nlo),4);
	&and	($nhi,0xf0);
	&movq	($Zlo,&QWP(8,$Htbl,$nlo));
	&movq	($Zhi,&QWP(0,$Htbl,$nlo));
	&movd	($rem[0],$Zlo);

	for ($cnt=28;$cnt>=-2;$cnt--) {
	    my $odd = $cnt&1;
	    my $nix = $odd ? $nlo : $nhi;

		&shl	(&LB($nlo),4)			if ($odd);
		&psrlq	($Zlo,4);
		&movq	($tmp,$Zhi);
		&psrlq	($Zhi,4);
		&pxor	($Zlo,&QWP(8,$Htbl,$nix));
		&mov	(&LB($nlo),&BP($cnt/2,$inp))	if (!$odd && $cnt>=0);
		&psllq	($tmp,60);
		&and	($nhi,0xf0)			if ($odd);
		&pxor	($Zhi,&QWP(0,$rem_4bit,$rem[1],8)) if ($cnt<28);
		&and	($rem[0],0xf);
		&pxor	($Zhi,&QWP(0,$Htbl,$nix));
		&mov	($nhi,$nlo)			if (!$odd && $cnt>=0);
		&movd	($rem[1],$Zlo);
		&pxor	($Zlo,$tmp);

		push	(@rem,shift(@rem));		# "rotate" registers
	}

	&mov	($inp,&DWP(4,$rem_4bit,$rem[1],8));	# last rem_4bit[rem]

	&psrlq	($Zlo,32);	# lower part of Zlo is already there
	&movd	($Zhl,$Zhi);
	&psrlq	($Zhi,32);
	&movd	($Zlh,$Zlo);
	&movd	($Zhh,$Zhi);
	&shl	($inp,4);	# compensate for rem_4bit[i] being >>4

	&bswap	($Zll);
	&bswap	($Zhl);
	&bswap	($Zlh);
	&xor	($Zhh,$inp);
	&bswap	($Zhh);

	&ret	();
}
&function_end_B("_mmx_gmult_4bit_inner");

&function_begin("gcm_gmult_4bit_mmx");
	&mov	($inp,&wparam(0));	# load Xi
	&mov	($Htbl,&wparam(1));	# load Htable

	&call	(&label("pic_point"));
	&set_label("pic_point");
	&blindpop("eax");
	&lea	("eax",&DWP(&label("rem_4bit")."-".&label("pic_point"),"eax"));

	&movz	($Zll,&BP(15,$inp));

	&call	("_mmx_gmult_4bit_inner");

	&mov	($inp,&wparam(0));	# load Xi
	&emms	();
	&mov	(&DWP(12,$inp),$Zll);
	&mov	(&DWP(4,$inp),$Zhl);
	&mov	(&DWP(8,$inp),$Zlh);
	&mov	(&DWP(0,$inp),$Zhh);
&function_end("gcm_gmult_4bit_mmx");

# Streamed version performs 20% better on P4, 7% on Opteron,
# 10% on Core2 and PIII...
&function_begin("gcm_ghash_4bit_mmx");
	&mov	($Zhh,&wparam(0));	# load Xi
	&mov	($Htbl,&wparam(1));	# load Htable
	&mov	($inp,&wparam(2));	# load in
	&mov	($Zlh,&wparam(3));	# load len

	&call	(&label("pic_point"));
	&set_label("pic_point");
	&blindpop("eax");
	&lea	("eax",&DWP(&label("rem_4bit")."-".&label("pic_point"),"eax"));

	&add	($Zlh,$inp);
	&mov	(&wparam(3),$Zlh);	# len to point at the end of input
	&stack_push(4+1);		# +1 for stack alignment

	&mov	($Zll,&DWP(12,$Zhh));	# load Xi[16]
	&mov	($Zhl,&DWP(4,$Zhh));
	&mov	($Zlh,&DWP(8,$Zhh));
	&mov	($Zhh,&DWP(0,$Zhh));
	&jmp	(&label("mmx_outer_loop"));

    &set_label("mmx_outer_loop",16);
	&xor	($Zll,&DWP(12,$inp));
	&xor	($Zhl,&DWP(4,$inp));
	&xor	($Zlh,&DWP(8,$inp));
	&xor	($Zhh,&DWP(0,$inp));
	&mov	(&wparam(2),$inp);
	&mov	(&DWP(12,"esp"),$Zll);
	&mov	(&DWP(4,"esp"),$Zhl);
	&mov	(&DWP(8,"esp"),$Zlh);
	&mov	(&DWP(0,"esp"),$Zhh);

	&mov	($inp,"esp");
	&shr	($Zll,24);

	&call	("_mmx_gmult_4bit_inner");

	&mov	($inp,&wparam(2));
	&lea	($inp,&DWP(16,$inp));
	&cmp	($inp,&wparam(3));
	&jb	(&label("mmx_outer_loop"));

	&mov	($inp,&wparam(0));	# load Xi
	&emms	();
	&mov	(&DWP(12,$inp),$Zll);
	&mov	(&DWP(4,$inp),$Zhl);
	&mov	(&DWP(8,$inp),$Zlh);
	&mov	(&DWP(0,$inp),$Zhh);

	&stack_pop(4+1);
&function_end("gcm_ghash_4bit_mmx");

}} else {{	# "June" MMX version...
		# ... has slower "April" gcm_gmult_4bit_mmx with folded
		# loop. This is done to conserve code size...
$S=16;		# shift factor for rem_4bit

sub mmx_loop() {
# The MMX version performs 2.8 times better on P4 (see comment in non-MMX
# routine for further details), 40% better on Opteron and Core2, 50%
# better on PIII... In other words, the effort is considered to be
# well spent...
    my $inp = shift;
    my $rem_4bit = shift;
    my $cnt = $Zhh;
    my $nhi = $Zhl;
    my $nlo = $Zlh;
    my $rem = $Zll;

    my ($Zlo,$Zhi) = ("mm0","mm1");
    my $tmp = "mm2";

	&xor	($nlo,$nlo);	# avoid partial register stalls on PIII
	&mov	($nhi,$Zll);
	&mov	(&LB($nlo),&LB($nhi));
	&mov	($cnt,14);
	&shl	(&LB($nlo),4);
	&and	($nhi,0xf0);
	&movq	($Zlo,&QWP(8,$Htbl,$nlo));
	&movq	($Zhi,&QWP(0,$Htbl,$nlo));
	&movd	($rem,$Zlo);
	&jmp	(&label("mmx_loop"));

    &set_label("mmx_loop",16);
	&psrlq	($Zlo,4);
	&and	($rem,0xf);
	&movq	($tmp,$Zhi);
	&psrlq	($Zhi,4);
	&pxor	($Zlo,&QWP(8,$Htbl,$nhi));
	&mov	(&LB($nlo),&BP(0,$inp,$cnt));
	&psllq	($tmp,60);
	&pxor	($Zhi,&QWP(0,$rem_4bit,$rem,8));
	&dec	($cnt);
	&movd	($rem,$Zlo);
	&pxor	($Zhi,&QWP(0,$Htbl,$nhi));
	&mov	($nhi,$nlo);
	&pxor	($Zlo,$tmp);
	&js	(&label("mmx_break"));

	&shl	(&LB($nlo),4);
	&and	($rem,0xf);
	&psrlq	($Zlo,4);
	&and	($nhi,0xf0);
	&movq	($tmp,$Zhi);
	&psrlq	($Zhi,4);
	&pxor	($Zlo,&QWP(8,$Htbl,$nlo));
	&psllq	($tmp,60);
	&pxor	($Zhi,&QWP(0,$rem_4bit,$rem,8));
	&movd	($rem,$Zlo);
	&pxor	($Zhi,&QWP(0,$Htbl,$nlo));
	&pxor	($Zlo,$tmp);
	&jmp	(&label("mmx_loop"));

    &set_label("mmx_break",16);
	&shl	(&LB($nlo),4);
	&and	($rem,0xf);
	&psrlq	($Zlo,4);
	&and	($nhi,0xf0);
	&movq	($tmp,$Zhi);
	&psrlq	($Zhi,4);
	&pxor	($Zlo,&QWP(8,$Htbl,$nlo));
	&psllq	($tmp,60);
	&pxor	($Zhi,&QWP(0,$rem_4bit,$rem,8));
	&movd	($rem,$Zlo);
	&pxor	($Zhi,&QWP(0,$Htbl,$nlo));
	&pxor	($Zlo,$tmp);

	&psrlq	($Zlo,4);
	&and	($rem,0xf);
	&movq	($tmp,$Zhi);
	&psrlq	($Zhi,4);
	&pxor	($Zlo,&QWP(8,$Htbl,$nhi));
	&psllq	($tmp,60);
	&pxor	($Zhi,&QWP(0,$rem_4bit,$rem,8));
	&movd	($rem,$Zlo);
	&pxor	($Zhi,&QWP(0,$Htbl,$nhi));
	&pxor	($Zlo,$tmp);

	&psrlq	($Zlo,32);	# lower part of Zlo is already there
	&movd	($Zhl,$Zhi);
	&psrlq	($Zhi,32);
	&movd	($Zlh,$Zlo);
	&movd	($Zhh,$Zhi);

	&bswap	($Zll);
	&bswap	($Zhl);
	&bswap	($Zlh);
	&bswap	($Zhh);
}

&function_begin("gcm_gmult_4bit_mmx");
	&mov	($inp,&wparam(0));	# load Xi
	&mov	($Htbl,&wparam(1));	# load Htable

	&call	(&label("pic_point"));
	&set_label("pic_point");
	&blindpop("eax");
	&lea	("eax",&DWP(&label("rem_4bit")."-".&label("pic_point"),"eax"));

	&movz	($Zll,&BP(15,$inp));

	&mmx_loop($inp,"eax");

	&emms	();
	&mov	(&DWP(12,$inp),$Zll);
	&mov	(&DWP(4,$inp),$Zhl);
	&mov	(&DWP(8,$inp),$Zlh);
	&mov	(&DWP(0,$inp),$Zhh);
&function_end("gcm_gmult_4bit_mmx");

######################################################################
# The subroutine below is the "528B" variant of the "4-bit" GCM GHASH
# function (see gcm128.c for details). It provides a further 20-40%
# performance improvement over the above-mentioned "May" version.

&static_label("rem_8bit");

&function_begin("gcm_ghash_4bit_mmx");
{ my ($Zlo,$Zhi) = ("mm7","mm6");
  my $rem_8bit = "esi";
  my $Htbl = "ebx";

    # parameter block
    &mov	("eax",&wparam(0));		# Xi
    &mov	("ebx",&wparam(1));		# Htable
    &mov	("ecx",&wparam(2));		# inp
    &mov	("edx",&wparam(3));		# len
    &mov	("ebp","esp");			# original %esp
    &call	(&label("pic_point"));
    &set_label	("pic_point");
    &blindpop	($rem_8bit);
    &lea	($rem_8bit,&DWP(&label("rem_8bit")."-".&label("pic_point"),$rem_8bit));

    &sub	("esp",512+16+16);		# allocate stack frame...
    &and	("esp",-64);			# ...and align it
    &sub	("esp",16);			# place for (u8)(H[]<<4)

    &add	("edx","ecx");			# pointer to the end of input
    &mov	(&DWP(528+16+0,"esp"),"eax");	# save Xi
    &mov	(&DWP(528+16+8,"esp"),"edx");	# save inp+len
    &mov	(&DWP(528+16+12,"esp"),"ebp");	# save original %esp

    { my @lo  = ("mm0","mm1","mm2");
      my @hi  = ("mm3","mm4","mm5");
      my @tmp = ("mm6","mm7");
      my ($off1,$off2,$i) = (0,0,);

      &add	($Htbl,128);			# optimize for size
      &lea	("edi",&DWP(16+128,"esp"));
      &lea	("ebp",&DWP(16+256+128,"esp"));

      # decompose Htable (low and high parts are kept separately),
      # generate Htable[]>>4, (u8)(Htable[]<<4), save to stack...
      for ($i=0;$i<18;$i++) {

	&mov	("edx",&DWP(16*$i+8-128,$Htbl))		if ($i<16);
	&movq	($lo[0],&QWP(16*$i+8-128,$Htbl))	if ($i<16);
	&psllq	($tmp[1],60)				if ($i>1);
	&movq	($hi[0],&QWP(16*$i+0-128,$Htbl))	if ($i<16);
	&por	($lo[2],$tmp[1])			if ($i>1);
	&movq	(&QWP($off1-128,"edi"),$lo[1])		if ($i>0 && $i<17);
	&psrlq	($lo[1],4)				if ($i>0 && $i<17);
	&movq	(&QWP($off1,"edi"),$hi[1])		if ($i>0 && $i<17);
	&movq	($tmp[0],$hi[1])			if ($i>0 && $i<17);
	&movq	(&QWP($off2-128,"ebp"),$lo[2])		if ($i>1);
	&psrlq	($hi[1],4)				if ($i>0 && $i<17);
	&movq	(&QWP($off2,"ebp"),$hi[2])		if ($i>1);
	&shl	("edx",4)				if ($i<16);
	&mov	(&BP($i,"esp"),&LB("edx"))		if ($i<16);

	unshift	(@lo,pop(@lo));			# "rotate" registers
	unshift	(@hi,pop(@hi));
	unshift	(@tmp,pop(@tmp));
	$off1 += 8	if ($i>0);
	$off2 += 8	if ($i>1);
      }
    }

    &movq	($Zhi,&QWP(0,"eax"));
    &mov	("ebx",&DWP(8,"eax"));
    &mov	("edx",&DWP(12,"eax"));		# load Xi

&set_label("outer",16);
  { my $nlo = "eax";
    my $dat = "edx";
    my @nhi = ("edi","ebp");
    my @rem = ("ebx","ecx");
    my @red = ("mm0","mm1","mm2");
    my $tmp = "mm3";

    &xor	($dat,&DWP(12,"ecx"));		# merge input data
    &xor	("ebx",&DWP(8,"ecx"));
    &pxor	($Zhi,&QWP(0,"ecx"));
    &lea	("ecx",&DWP(16,"ecx"));		# inp+=16
    #&mov	(&DWP(528+12,"esp"),$dat);	# save inp^Xi
    &mov	(&DWP(528+8,"esp"),"ebx");
    &movq	(&QWP(528+0,"esp"),$Zhi);
    &mov	(&DWP(528+16+4,"esp"),"ecx");	# save inp

    &xor	($nlo,$nlo);
    &rol	($dat,8);
    &mov	(&LB($nlo),&LB($dat));
    &mov	($nhi[1],$nlo);
    &and	(&LB($nlo),0x0f);
    &shr	($nhi[1],4);
    &pxor	($red[0],$red[0]);
    &rol	($dat,8);			# next byte
    &pxor	($red[1],$red[1]);
    &pxor	($red[2],$red[2]);

    # Just like in the "May" version, modulo-schedule the critical path
    # in 'Z.hi ^= rem_8bit[Z.lo&0xff^((u8)H[nhi]<<4)]<<48'. The final
    # 'pxor' is scheduled so late that rem_8bit[] has to be shifted
    # *right* by 16, which is why the last argument to pinsrw is 2,
    # which corresponds to <<32=<<48>>16...
    for ($j=11,$i=0;$i<15;$i++) {

      if ($i>0) {
	&pxor	($Zlo,&QWP(16,"esp",$nlo,8));		# Z^=H[nlo]
	&rol	($dat,8);				# next byte
	&pxor	($Zhi,&QWP(16+128,"esp",$nlo,8));

	&pxor	($Zlo,$tmp);
	&pxor	($Zhi,&QWP(16+256+128,"esp",$nhi[0],8));
	&xor	(&LB($rem[1]),&BP(0,"esp",$nhi[0]));	# rem^(H[nhi]<<4)
      } else {
	&movq	($Zlo,&QWP(16,"esp",$nlo,8));
	&movq	($Zhi,&QWP(16+128,"esp",$nlo,8));
      }

	&mov	(&LB($nlo),&LB($dat));
	&mov	($dat,&DWP(528+$j,"esp"))		if (--$j%4==0);

	&movd	($rem[0],$Zlo);
	&movz	($rem[1],&LB($rem[1]))			if ($i>0);
	&psrlq	($Zlo,8);				# Z>>=8

	&movq	($tmp,$Zhi);
	&mov	($nhi[0],$nlo);
	&psrlq	($Zhi,8);

	&pxor	($Zlo,&QWP(16+256+0,"esp",$nhi[1],8));	# Z^=H[nhi]>>4
	&and	(&LB($nlo),0x0f);
	&psllq	($tmp,56);

	&pxor	($Zhi,$red[1])				if ($i>1);
	&shr	($nhi[0],4);
	&pinsrw	($red[0],&WP(0,$rem_8bit,$rem[1],2),2)	if ($i>0);

	unshift	(@red,pop(@red));			# "rotate" registers
	unshift	(@rem,pop(@rem));
	unshift	(@nhi,pop(@nhi));
    }

    &pxor	($Zlo,&QWP(16,"esp",$nlo,8));		# Z^=H[nlo]
    &pxor	($Zhi,&QWP(16+128,"esp",$nlo,8));
    &xor	(&LB($rem[1]),&BP(0,"esp",$nhi[0]));	# rem^(H[nhi]<<4)

    &pxor	($Zlo,$tmp);
    &pxor	($Zhi,&QWP(16+256+128,"esp",$nhi[0],8));
    &movz	($rem[1],&LB($rem[1]));

    &pxor	($red[2],$red[2]);			# clear 2nd word
    &psllq	($red[1],4);

    &movd	($rem[0],$Zlo);
    &psrlq	($Zlo,4);				# Z>>=4

    &movq	($tmp,$Zhi);
    &psrlq	($Zhi,4);
    &shl	($rem[0],4);				# rem<<4

    &pxor	($Zlo,&QWP(16,"esp",$nhi[1],8));	# Z^=H[nhi]
    &psllq	($tmp,60);
    &movz	($rem[0],&LB($rem[0]));

    &pxor	($Zlo,$tmp);
    &pxor	($Zhi,&QWP(16+128,"esp",$nhi[1],8));

    &pinsrw	($red[0],&WP(0,$rem_8bit,$rem[1],2),2);
    &pxor	($Zhi,$red[1]);

    &movd	($dat,$Zlo);
    &pinsrw	($red[2],&WP(0,$rem_8bit,$rem[0],2),3);	# last is <<48

    &psllq	($red[0],12);				# correct by <<16>>4
    &pxor	($Zhi,$red[0]);
    &psrlq	($Zlo,32);
    &pxor	($Zhi,$red[2]);

    &mov	("ecx",&DWP(528+16+4,"esp"));	# restore inp
    &movd	("ebx",$Zlo);
    &movq	($tmp,$Zhi);			# 01234567
    &psllw	($Zhi,8);			# 1.3.5.7.
    &psrlw	($tmp,8);			# .0.2.4.6
    &por	($Zhi,$tmp);			# 10325476
    &bswap	($dat);
    &pshufw	($Zhi,$Zhi,0b00011011);		# 76543210
    &bswap	("ebx");

    &cmp	("ecx",&DWP(528+16+8,"esp"));	# are we done?
    &jne	(&label("outer"));
  }

    &mov	("eax",&DWP(528+16+0,"esp"));	# restore Xi
    &mov	(&DWP(12,"eax"),"edx");
    &mov	(&DWP(8,"eax"),"ebx");
    &movq	(&QWP(0,"eax"),$Zhi);

    &mov	("esp",&DWP(528+16+12,"esp"));	# restore original %esp
    &emms	();
}
&function_end("gcm_ghash_4bit_mmx");
}}

if ($sse2) {{
######################################################################
# PCLMULQDQ version.

$Xip="eax";
$Htbl="edx";
$const="ecx";
$inp="esi";
$len="ebx";

($Xi,$Xhi)=("xmm0","xmm1");	$Hkey="xmm2";
($T1,$T2,$T3)=("xmm3","xmm4","xmm5");
($Xn,$Xhn)=("xmm6","xmm7");

&static_label("bswap");

sub clmul64x64_T2 {	# minimal "register" pressure
my ($Xhi,$Xi,$Hkey,$HK)=@_;

	&movdqa		($Xhi,$Xi);		#
	&pshufd		($T1,$Xi,0b01001110);
	&pshufd		($T2,$Hkey,0b01001110)	if (!defined($HK));
	&pxor		($T1,$Xi);		#
	&pxor		($T2,$Hkey)		if (!defined($HK));
			$HK=$T2			if (!defined($HK));

	&pclmulqdq	($Xi,$Hkey,0x00);	#######
	&pclmulqdq	($Xhi,$Hkey,0x11);	#######
	&pclmulqdq	($T1,$HK,0x00);		#######
	&xorps		($T1,$Xi);		#
	&xorps		($T1,$Xhi);		#

	&movdqa		($T2,$T1);		#
	&psrldq		($T1,8);
	&pslldq		($T2,8);		#
	&pxor		($Xhi,$T1);
	&pxor		($Xi,$T2);		#
}
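
# The three pclmulqdq above implement one Karatsuba multiplication in
# GF(2)[x] (a sketch of the identity being used; X=Xh:Xl and H=Hh:Hl
# are 64-bit halves):
#
#	X*H = (Xh*Hh)<<128 ^ (Xl*Hl)
#	    ^ ((Xh^Xl)*(Hh^Hl) ^ Xh*Hh ^ Xl*Hl)<<64
#
# i.e. three 64x64 carry-less multiplications instead of four, at the
# cost of the XOR pre-/post-processing; the middle term is then split
# between $Xi and $Xhi by the psrldq/pslldq pair.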

sub clmul64x64_T3 {
# Even though this subroutine offers visually better ILP, it
# was empirically found to be a tad slower than the version above.
# At least in gcm_ghash_clmul context. But it's just as well,
# because loop modulo-scheduling is possible only thanks to
# minimized "register" pressure...
my ($Xhi,$Xi,$Hkey)=@_;

	&movdqa		($T1,$Xi);		#
	&movdqa		($Xhi,$Xi);
	&pclmulqdq	($Xi,$Hkey,0x00);	#######
	&pclmulqdq	($Xhi,$Hkey,0x11);	#######
	&pshufd		($T2,$T1,0b01001110);	#
	&pshufd		($T3,$Hkey,0b01001110);
	&pxor		($T2,$T1);		#
	&pxor		($T3,$Hkey);
	&pclmulqdq	($T2,$T3,0x00);		#######
	&pxor		($T2,$Xi);		#
	&pxor		($T2,$Xhi);		#

	&movdqa		($T3,$T2);		#
	&psrldq		($T2,8);
	&pslldq		($T3,8);		#
	&pxor		($Xhi,$T2);
	&pxor		($Xi,$T3);		#
}

if (1) {		# Algorithm 9 with <<1 twist.
			# Reduction is shorter and uses only two
			# temporary registers, which makes it a better
			# candidate for interleaving with 64x64
			# multiplication. The pre-modulo-scheduled loop
			# was found to be ~20% faster than Algorithm 5
			# below. Algorithm 9 was therefore chosen for
			# further optimization...

sub reduction_alg9 {	# 17/11 times faster than Intel version
my ($Xhi,$Xi) = @_;

	# 1st phase
	&movdqa		($T2,$Xi);		#
	&movdqa		($T1,$Xi);
	&psllq		($Xi,5);
	&pxor		($T1,$Xi);		#
	&psllq		($Xi,1);
	&pxor		($Xi,$T1);		#
	&psllq		($Xi,57);		#
	&movdqa		($T1,$Xi);		#
	&pslldq		($Xi,8);
	&psrldq		($T1,8);		#
	&pxor		($Xi,$T2);
	&pxor		($Xhi,$T1);		#

	# 2nd phase
	&movdqa		($T2,$Xi);
	&psrlq		($Xi,1);
	&pxor		($Xhi,$T2);		#
	&pxor		($T2,$Xi);
	&psrlq		($Xi,5);
	&pxor		($Xi,$T2);		#
	&psrlq		($Xi,1);		#
	&pxor		($Xi,$Xhi)		#
}
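
# A note on the shift counts above (an interpretation, not taken from
# the original commentary): the 5,1,57 chain of the 1st phase amounts
# to computing Xi<<57 ^ Xi<<62 ^ Xi<<63, and the 1,5,1 chain of the
# 2nd phase to folding in Xi>>1 ^ Xi>>2 ^ Xi>>7; together they reduce
# the 256-bit product modulo the bit-reflected GHASH polynomial
# x^128+x^7+x^2+x+1 (cf. 0x1c2_polynomial below).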

&function_begin_B("gcm_init_clmul");
	&mov		($Htbl,&wparam(0));
	&mov		($Xip,&wparam(1));

	&call		(&label("pic"));
&set_label("pic");
	&blindpop	($const);
	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu		($Hkey,&QWP(0,$Xip));
	&pshufd		($Hkey,$Hkey,0b01001110);# dword swap

	# <<1 twist
	&pshufd		($T2,$Hkey,0b11111111);	# broadcast uppermost dword
	&movdqa		($T1,$Hkey);
	&psllq		($Hkey,1);
	&pxor		($T3,$T3);		#
	&psrlq		($T1,63);
	&pcmpgtd	($T3,$T2);		# broadcast carry bit
	&pslldq		($T1,8);
	&por		($Hkey,$T1);		# H<<=1

	# magic reduction
	&pand		($T3,&QWP(16,$const));	# 0x1c2_polynomial
	&pxor		($Hkey,$T3);		# if(carry) H^=0x1c2_polynomial

	# calculate H^2
	&movdqa		($Xi,$Hkey);
	&clmul64x64_T2	($Xhi,$Xi,$Hkey);
	&reduction_alg9	($Xhi,$Xi);

	&pshufd		($T1,$Hkey,0b01001110);
	&pshufd		($T2,$Xi,0b01001110);
	&pxor		($T1,$Hkey);		# Karatsuba pre-processing
	&movdqu		(&QWP(0,$Htbl),$Hkey);	# save H
	&pxor		($T2,$Xi);		# Karatsuba pre-processing
	&movdqu		(&QWP(16,$Htbl),$Xi);	# save H^2
	&palignr	($T2,$T1,8);		# low part is H.lo^H.hi
	&movdqu		(&QWP(32,$Htbl),$T2);	# save Karatsuba "salt"

	&ret		();
&function_end_B("gcm_init_clmul");

&function_begin_B("gcm_gmult_clmul");
	&mov		($Xip,&wparam(0));
	&mov		($Htbl,&wparam(1));

	&call		(&label("pic"));
&set_label("pic");
	&blindpop	($const);
	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu		($Xi,&QWP(0,$Xip));
	&movdqa		($T3,&QWP(0,$const));
	&movups		($Hkey,&QWP(0,$Htbl));
	&pshufb		($Xi,$T3);
	&movups		($T2,&QWP(32,$Htbl));

	&clmul64x64_T2	($Xhi,$Xi,$Hkey,$T2);
	&reduction_alg9	($Xhi,$Xi);

	&pshufb		($Xi,$T3);
	&movdqu		(&QWP(0,$Xip),$Xi);

	&ret	();
&function_end_B("gcm_gmult_clmul");

&function_begin("gcm_ghash_clmul");
	&mov		($Xip,&wparam(0));
	&mov		($Htbl,&wparam(1));
	&mov		($inp,&wparam(2));
	&mov		($len,&wparam(3));

	&call		(&label("pic"));
&set_label("pic");
	&blindpop	($const);
	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu		($Xi,&QWP(0,$Xip));
	&movdqa		($T3,&QWP(0,$const));
	&movdqu		($Hkey,&QWP(0,$Htbl));
	&pshufb		($Xi,$T3);

	&sub		($len,0x10);
	&jz		(&label("odd_tail"));

	#######
	# Xi+2 = [H*(Ii+1 + Xi+1)] mod P =
	#	[(H*Ii+1) + (H*Xi+1)] mod P =
	#	[(H*Ii+1) + H^2*(Ii+Xi)] mod P
	#
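	# i.e. the loop consumes two input blocks per iteration: the
	# newer one is multiplied by H, the accumulated one by H^2, and
	# the modulo-reduction is amortized over both (this is the
	# Naggr=2 aggregated reduction discussed at the top of the file).
	#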
	&movdqu		($T1,&QWP(0,$inp));	# Ii
	&movdqu		($Xn,&QWP(16,$inp));	# Ii+1
	&pshufb		($T1,$T3);
	&pshufb		($Xn,$T3);
	&movdqu		($T3,&QWP(32,$Htbl));
	&pxor		($Xi,$T1);		# Ii+Xi

	&pshufd		($T1,$Xn,0b01001110);	# H*Ii+1
	&movdqa		($Xhn,$Xn);
	&pxor		($T1,$Xn);		#
	&lea		($inp,&DWP(32,$inp));	# i+=2

	&pclmulqdq	($Xn,$Hkey,0x00);	#######
	&pclmulqdq	($Xhn,$Hkey,0x11);	#######
	&pclmulqdq	($T1,$T3,0x00);		#######
	&movups		($Hkey,&QWP(16,$Htbl));	# load H^2
	&nop		();

	&sub		($len,0x20);
	&jbe		(&label("even_tail"));
	&jmp		(&label("mod_loop"));

&set_label("mod_loop",32);
	&pshufd		($T2,$Xi,0b01001110);	# H^2*(Ii+Xi)
	&movdqa		($Xhi,$Xi);
	&pxor		($T2,$Xi);		#
	&nop		();

	&pclmulqdq	($Xi,$Hkey,0x00);	#######
	&pclmulqdq	($Xhi,$Hkey,0x11);	#######
	&pclmulqdq	($T2,$T3,0x10);		#######
	&movups		($Hkey,&QWP(0,$Htbl));	# load H

	&xorps		($Xi,$Xn);		# (H*Ii+1) + H^2*(Ii+Xi)
	&movdqa		($T3,&QWP(0,$const));
	&xorps		($Xhi,$Xhn);
	 &movdqu	($Xhn,&QWP(0,$inp));	# Ii
	&pxor		($T1,$Xi);		# aggregated Karatsuba post-processing
	 &movdqu	($Xn,&QWP(16,$inp));	# Ii+1
	&pxor		($T1,$Xhi);		#

	 &pshufb	($Xhn,$T3);
	&pxor		($T2,$T1);		#

	&movdqa		($T1,$T2);		#
	&psrldq		($T2,8);
	&pslldq		($T1,8);		#
	&pxor		($Xhi,$T2);
	&pxor		($Xi,$T1);		#
	 &pshufb	($Xn,$T3);
	 &pxor		($Xhi,$Xhn);		# "Ii+Xi", consume early

	&movdqa		($Xhn,$Xn);		#&clmul64x64_TX	($Xhn,$Xn,$Hkey); H*Ii+1
	  &movdqa	($T2,$Xi);		#&reduction_alg9($Xhi,$Xi); 1st phase
	  &movdqa	($T1,$Xi);
	  &psllq	($Xi,5);
	  &pxor		($T1,$Xi);		#
	  &psllq	($Xi,1);
	  &pxor		($Xi,$T1);		#
	&pclmulqdq	($Xn,$Hkey,0x00);	#######
	&movups		($T3,&QWP(32,$Htbl));
	  &psllq	($Xi,57);		#
	  &movdqa	($T1,$Xi);		#
	  &pslldq	($Xi,8);
	  &psrldq	($T1,8);		#
	  &pxor		($Xi,$T2);
	  &pxor		($Xhi,$T1);		#
	&pshufd		($T1,$Xhn,0b01001110);
	  &movdqa	($T2,$Xi);		# 2nd phase
	  &psrlq	($Xi,1);
	&pxor		($T1,$Xhn);
	  &pxor		($Xhi,$T2);		#
	&pclmulqdq	($Xhn,$Hkey,0x11);	#######
	&movups		($Hkey,&QWP(16,$Htbl));	# load H^2
	  &pxor		($T2,$Xi);
	  &psrlq	($Xi,5);
	  &pxor		($Xi,$T2);		#
	  &psrlq	($Xi,1);		#
	  &pxor		($Xi,$Xhi)		#
	&pclmulqdq	($T1,$T3,0x00);		#######

	&lea		($inp,&DWP(32,$inp));
	&sub		($len,0x20);
	&ja		(&label("mod_loop"));

&set_label("even_tail");
	&pshufd		($T2,$Xi,0b01001110);	# H^2*(Ii+Xi)
	&movdqa		($Xhi,$Xi);
	&pxor		($T2,$Xi);		#

	&pclmulqdq	($Xi,$Hkey,0x00);	#######
	&pclmulqdq	($Xhi,$Hkey,0x11);	#######
	&pclmulqdq	($T2,$T3,0x10);		#######
	&movdqa		($T3,&QWP(0,$const));

	&xorps		($Xi,$Xn);		# (H*Ii+1) + H^2*(Ii+Xi)
	&xorps		($Xhi,$Xhn);
	&pxor		($T1,$Xi);		# aggregated Karatsuba post-processing
	&pxor		($T1,$Xhi);		#

	&pxor		($T2,$T1);		#

	&movdqa		($T1,$T2);		#
	&psrldq		($T2,8);
	&pslldq		($T1,8);		#
	&pxor		($Xhi,$T2);
	&pxor		($Xi,$T1);		#

	&reduction_alg9	($Xhi,$Xi);

	&test		($len,$len);
	&jnz		(&label("done"));

	&movups		($Hkey,&QWP(0,$Htbl));	# load H
&set_label("odd_tail");
	&movdqu		($T1,&QWP(0,$inp));	# Ii
	&pshufb		($T1,$T3);
	&pxor		($Xi,$T1);		# Ii+Xi

	&clmul64x64_T2	($Xhi,$Xi,$Hkey);	# H*(Ii+Xi)
	&reduction_alg9	($Xhi,$Xi);

&set_label("done");
	&pshufb		($Xi,$T3);
	&movdqu		(&QWP(0,$Xip),$Xi);
&function_end("gcm_ghash_clmul");

} else {		# Algorithm 5. Kept for reference purposes.

sub reduction_alg5 {	# 19/16 times faster than Intel version
my ($Xhi,$Xi)=@_;

	# <<1
	&movdqa		($T1,$Xi);		#
	&movdqa		($T2,$Xhi);
	&pslld		($Xi,1);
	&pslld		($Xhi,1);		#
	&psrld		($T1,31);
	&psrld		($T2,31);		#
	&movdqa		($T3,$T1);
	&pslldq		($T1,4);
	&psrldq		($T3,12);		#
	&pslldq		($T2,4);
	&por		($Xhi,$T3);		#
	&por		($Xi,$T1);
	&por		($Xhi,$T2);		#

	# 1st phase
	&movdqa		($T1,$Xi);
	&movdqa		($T2,$Xi);
	&movdqa		($T3,$Xi);		#
	&pslld		($T1,31);
	&pslld		($T2,30);
	&pslld		($Xi,25);		#
	&pxor		($T1,$T2);
	&pxor		($T1,$Xi);		#
	&movdqa		($T2,$T1);		#
	&pslldq		($T1,12);
	&psrldq		($T2,4);		#
	&pxor		($T3,$T1);

	# 2nd phase
	&pxor		($Xhi,$T3);		#
	&movdqa		($Xi,$T3);
	&movdqa		($T1,$T3);
	&psrld		($Xi,1);		#
	&psrld		($T1,2);
	&psrld		($T3,7);		#
	&pxor		($Xi,$T1);
	&pxor		($Xhi,$T2);
	&pxor		($Xi,$T3);		#
	&pxor		($Xi,$Xhi);		#
}

&function_begin_B("gcm_init_clmul");
	&mov		($Htbl,&wparam(0));
	&mov		($Xip,&wparam(1));

	&call		(&label("pic"));
&set_label("pic");
	&blindpop	($const);
	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu		($Hkey,&QWP(0,$Xip));
	&pshufd		($Hkey,$Hkey,0b01001110);# dword swap

	# calculate H^2
	&movdqa		($Xi,$Hkey);
	&clmul64x64_T3	($Xhi,$Xi,$Hkey);
	&reduction_alg5	($Xhi,$Xi);

	&movdqu		(&QWP(0,$Htbl),$Hkey);	# save H
	&movdqu		(&QWP(16,$Htbl),$Xi);	# save H^2

	&ret		();
&function_end_B("gcm_init_clmul");

&function_begin_B("gcm_gmult_clmul");
	&mov		($Xip,&wparam(0));
	&mov		($Htbl,&wparam(1));

	&call		(&label("pic"));
&set_label("pic");
	&blindpop	($const);
	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu		($Xi,&QWP(0,$Xip));
	&movdqa		($Xn,&QWP(0,$const));
	&movdqu		($Hkey,&QWP(0,$Htbl));
	&pshufb		($Xi,$Xn);

	&clmul64x64_T3	($Xhi,$Xi,$Hkey);
	&reduction_alg5	($Xhi,$Xi);

	&pshufb		($Xi,$Xn);
	&movdqu		(&QWP(0,$Xip),$Xi);

	&ret	();
&function_end_B("gcm_gmult_clmul");

&function_begin("gcm_ghash_clmul");
	&mov		($Xip,&wparam(0));
	&mov		($Htbl,&wparam(1));
	&mov		($inp,&wparam(2));
	&mov		($len,&wparam(3));

	&call		(&label("pic"));
&set_label("pic");
	&blindpop	($const);
	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu		($Xi,&QWP(0,$Xip));
	&movdqa		($T3,&QWP(0,$const));
	&movdqu		($Hkey,&QWP(0,$Htbl));
	&pshufb		($Xi,$T3);

	&sub		($len,0x10);
	&jz		(&label("odd_tail"));

	#######
	# Xi+2 = [H*(Ii+1 + Xi+1)] mod P =
	#	[(H*Ii+1) + (H*Xi+1)] mod P =
	#	[(H*Ii+1) + H^2*(Ii+Xi)] mod P
	#
	&movdqu		($T1,&QWP(0,$inp));	# Ii
	&movdqu		($Xn,&QWP(16,$inp));	# Ii+1
	&pshufb		($T1,$T3);
	&pshufb		($Xn,$T3);
	&pxor		($Xi,$T1);		# Ii+Xi

	&clmul64x64_T3	($Xhn,$Xn,$Hkey);	# H*Ii+1
	&movdqu		($Hkey,&QWP(16,$Htbl));	# load H^2

	&sub		($len,0x20);
	&lea		($inp,&DWP(32,$inp));	# i+=2
	&jbe		(&label("even_tail"));

&set_label("mod_loop");
	&clmul64x64_T3	($Xhi,$Xi,$Hkey);	# H^2*(Ii+Xi)
	&movdqu		($Hkey,&QWP(0,$Htbl));	# load H

	&pxor		($Xi,$Xn);		# (H*Ii+1) + H^2*(Ii+Xi)
	&pxor		($Xhi,$Xhn);

	&reduction_alg5	($Xhi,$Xi);

	#######
	&movdqa		($T3,&QWP(0,$const));
	&movdqu		($T1,&QWP(0,$inp));	# Ii
	&movdqu		($Xn,&QWP(16,$inp));	# Ii+1
	&pshufb		($T1,$T3);
	&pshufb		($Xn,$T3);
	&pxor		($Xi,$T1);		# Ii+Xi

	&clmul64x64_T3	($Xhn,$Xn,$Hkey);	# H*Ii+1
	&movdqu		($Hkey,&QWP(16,$Htbl));	# load H^2

	&sub		($len,0x20);
	&lea		($inp,&DWP(32,$inp));
	&ja		(&label("mod_loop"));

&set_label("even_tail");
	&clmul64x64_T3	($Xhi,$Xi,$Hkey);	# H^2*(Ii+Xi)

	&pxor		($Xi,$Xn);		# (H*Ii+1) + H^2*(Ii+Xi)
	&pxor		($Xhi,$Xhn);

	&reduction_alg5	($Xhi,$Xi);

	&movdqa		($T3,&QWP(0,$const));
	&test		($len,$len);
	&jnz		(&label("done"));

	&movdqu		($Hkey,&QWP(0,$Htbl));	# load H
&set_label("odd_tail");
	&movdqu		($T1,&QWP(0,$inp));	# Ii
	&pshufb		($T1,$T3);
	&pxor		($Xi,$T1);		# Ii+Xi

	&clmul64x64_T3	($Xhi,$Xi,$Hkey);	# H*(Ii+Xi)
	&reduction_alg5	($Xhi,$Xi);

	&movdqa		($T3,&QWP(0,$const));
&set_label("done");
	&pshufb		($Xi,$T3);
	&movdqu		(&QWP(0,$Xip),$Xi);
&function_end("gcm_ghash_clmul");

}

&set_label("bswap",64);
	&data_byte(15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0);
	&data_byte(1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0xc2);	# 0x1c2_polynomial
&set_label("rem_8bit",64);
	&data_short(0x0000,0x01C2,0x0384,0x0246,0x0708,0x06CA,0x048C,0x054E);
	&data_short(0x0E10,0x0FD2,0x0D94,0x0C56,0x0918,0x08DA,0x0A9C,0x0B5E);
	&data_short(0x1C20,0x1DE2,0x1FA4,0x1E66,0x1B28,0x1AEA,0x18AC,0x196E);
	&data_short(0x1230,0x13F2,0x11B4,0x1076,0x1538,0x14FA,0x16BC,0x177E);
	&data_short(0x3840,0x3982,0x3BC4,0x3A06,0x3F48,0x3E8A,0x3CCC,0x3D0E);
	&data_short(0x3650,0x3792,0x35D4,0x3416,0x3158,0x309A,0x32DC,0x331E);
	&data_short(0x2460,0x25A2,0x27E4,0x2626,0x2368,0x22AA,0x20EC,0x212E);
	&data_short(0x2A70,0x2BB2,0x29F4,0x2836,0x2D78,0x2CBA,0x2EFC,0x2F3E);
	&data_short(0x7080,0x7142,0x7304,0x72C6,0x7788,0x764A,0x740C,0x75CE);
	&data_short(0x7E90,0x7F52,0x7D14,0x7CD6,0x7998,0x785A,0x7A1C,0x7BDE);
	&data_short(0x6CA0,0x6D62,0x6F24,0x6EE6,0x6BA8,0x6A6A,0x682C,0x69EE);
	&data_short(0x62B0,0x6372,0x6134,0x60F6,0x65B8,0x647A,0x663C,0x67FE);
	&data_short(0x48C0,0x4902,0x4B44,0x4A86,0x4FC8,0x4E0A,0x4C4C,0x4D8E);
	&data_short(0x46D0,0x4712,0x4554,0x4496,0x41D8,0x401A,0x425C,0x439E);
	&data_short(0x54E0,0x5522,0x5764,0x56A6,0x53E8,0x522A,0x506C,0x51AE);
	&data_short(0x5AF0,0x5B32,0x5974,0x58B6,0x5DF8,0x5C3A,0x5E7C,0x5FBE);
	&data_short(0xE100,0xE0C2,0xE284,0xE346,0xE608,0xE7CA,0xE58C,0xE44E);
	&data_short(0xEF10,0xEED2,0xEC94,0xED56,0xE818,0xE9DA,0xEB9C,0xEA5E);
	&data_short(0xFD20,0xFCE2,0xFEA4,0xFF66,0xFA28,0xFBEA,0xF9AC,0xF86E);
	&data_short(0xF330,0xF2F2,0xF0B4,0xF176,0xF438,0xF5FA,0xF7BC,0xF67E);
	&data_short(0xD940,0xD882,0xDAC4,0xDB06,0xDE48,0xDF8A,0xDDCC,0xDC0E);
	&data_short(0xD750,0xD692,0xD4D4,0xD516,0xD058,0xD19A,0xD3DC,0xD21E);
	&data_short(0xC560,0xC4A2,0xC6E4,0xC726,0xC268,0xC3AA,0xC1EC,0xC02E);
	&data_short(0xCB70,0xCAB2,0xC8F4,0xC936,0xCC78,0xCDBA,0xCFFC,0xCE3E);
	&data_short(0x9180,0x9042,0x9204,0x93C6,0x9688,0x974A,0x950C,0x94CE);
	&data_short(0x9F90,0x9E52,0x9C14,0x9DD6,0x9898,0x995A,0x9B1C,0x9ADE);
	&data_short(0x8DA0,0x8C62,0x8E24,0x8FE6,0x8AA8,0x8B6A,0x892C,0x88EE);
	&data_short(0x83B0,0x8272,0x8034,0x81F6,0x84B8,0x857A,0x873C,0x86FE);
	&data_short(0xA9C0,0xA802,0xAA44,0xAB86,0xAEC8,0xAF0A,0xAD4C,0xAC8E);
	&data_short(0xA7D0,0xA612,0xA454,0xA596,0xA0D8,0xA11A,0xA35C,0xA29E);
	&data_short(0xB5E0,0xB422,0xB664,0xB7A6,0xB2E8,0xB32A,0xB16C,0xB0AE);
	&data_short(0xBBF0,0xBA32,0xB874,0xB9B6,0xBCF8,0xBD3A,0xBF7C,0xBEBE);
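
# As with rem_4bit, entry i of rem_8bit[] is the XOR of 0x1C2<<b over
# the set bits b of i. A derivation sketch (illustrative only, the
# table above is what actually gets assembled):
#
#	my @rem_8bit = map { my ($i,$v) = ($_,0);
#	    $v ^= 0x1C2<<$_ for grep { ($i>>$_)&1 } (0..7);
#	    $v } (0..255);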
}}	# $sse2

&set_label("rem_4bit",64);
	&data_word(0,0x0000<<$S,0,0x1C20<<$S,0,0x3840<<$S,0,0x2460<<$S);
	&data_word(0,0x7080<<$S,0,0x6CA0<<$S,0,0x48C0<<$S,0,0x54E0<<$S);
	&data_word(0,0xE100<<$S,0,0xFD20<<$S,0,0xD940<<$S,0,0xC560<<$S);
	&data_word(0,0x9180<<$S,0,0x8DA0<<$S,0,0xA9C0<<$S,0,0xB5E0<<$S);
}}}	# !$x86only

&asciz("GHASH for x86, CRYPTOGAMS by <appro\@openssl.org>");
&asm_finish();

close STDOUT or die "error closing STDOUT: $!";

# A question was raised about the choice of vanilla MMX. Or rather, why
# wasn't SSE2 chosen instead? In addition to the fact that MMX runs on
# legacy CPUs such as PIII, the "4-bit" MMX version was observed to
# provide better performance than the *corresponding* SSE2 one even on
# contemporary CPUs. The SSE2 results were provided by Peter-Michael
# Hager. He maintains an SSE2 implementation featuring a full range of
# lookup-table sizes, but with per-invocation lookup table setup. The
# latter means that the table size is chosen depending on how much data
# is to be hashed in each call: more data, larger table. The best
# reported result for Core2 is ~4 cycles per processed byte out of a
# 64KB block. This number accounts even for the 64KB table setup
# overhead. As discussed in gcm128.c, we choose to be more conservative
# with respect to lookup table sizes, but how do the results compare?
# The minimalistic "256B" MMX version delivers ~11 cycles on the same
# platform. As also discussed in gcm128.c, the next in line "8-bit
# Shoup's" or "4KB" method should deliver twice the performance of the
# "256B" one, in other words not worse than ~6 cycles per byte. It
# should also be noted that in the SSE2 case the improvement can be
# "super-linear," i.e. more than twice, mostly because >>8 maps to a
# single instruction on an SSE2 register. This is unlike the "4-bit"
# case, where >>4 maps to the same number of instructions in both the
# MMX and SSE2 cases. The bottom line is that the switch to SSE2 is
# considered to be justifiable only in case we choose to implement the
# "8-bit" method...