
Searched refs:S64 (Results 1 – 17 of 17) sorted by relevance

/third_party/skia/third_party/externals/swiftshader/third_party/llvm-10.0/llvm/lib/Target/AMDGPU/
AMDGPULegalizerInfo.cpp
176 const LLT S64 = LLT::scalar(64); in AMDGPULegalizerInfo() local
236 S32, S64 in AMDGPULegalizerInfo()
240 S32, S64, S16 in AMDGPULegalizerInfo()
244 S32, S64, S16, V2S16 in AMDGPULegalizerInfo()
253 .legalFor({S32, S64, V2S16, V4S16, S1, S128, S256}) in AMDGPULegalizerInfo()
278 .legalFor({S32, S64}) in AMDGPULegalizerInfo()
279 .clampScalar(0, S32, S64) in AMDGPULegalizerInfo()
291 .legalFor({S32, S1, S64, V2S32, S16, V2S16, V4S16}) in AMDGPULegalizerInfo()
292 .clampScalar(0, S32, S64) in AMDGPULegalizerInfo()
314 .legalFor({S32, S64, S16}) in AMDGPULegalizerInfo()
[all …]
/third_party/openGLES/extensions/NV/
NV_shader_atomic_int64.txt
133 and ATOMS comes from NV_compute_program5.) "S64" should be allowed as a
144 use the "S64" storage modifier with the atomic operations "MIN",
149 (Add "U64" and/or "S64" opcode modifiers to the table in "Section 2.X.8.Z:
157 ADD U32, S32, U64, S64 compute a sum
158 MIN U32, S32, U64, S64 compute minimum
159 MAX U32, S32, U64, S64 compute maximum
160 AND U32, S32, U64, S64 compute bit-wise AND
161 OR U32, S32, U64, S64 compute bit-wise OR
162 XOR U32, S32, U64, S64 compute bit-wise XOR
163 EXCH U32, S32, U64, S64 exchange memory with operand
[all …]
NV_gpu_program5_mem_extended.txt
136 "S32X2", "S32X4", "S64", "S64X2", "S64X4", "U8", "U8X2", "U8X4", "U16",
270 four. For F32X2, F64, S32X2, S64, U32X2, U64, S16X4, and U16X4, the
NV_gpu_program5.txt
355 | "S64"
830 S64 Fixed-point operation, signed 64-bit operands or
845 S64 Access one 64-bit signed integer value
906 "S64" modifiers are precision-specific data type modifiers that specify
909 component, respectively. The "F64", "U64", and "S64" modifiers are
928 "F64X2", "F64X4", "S8", "S16", "S32", "S32X2", "S32X4", "S64", "S64X2",
972 "S8", "S16", "S32", "S64", "U8", "U16", "U32", and "U64" storage modifiers
1160 case S64:
1266 case S64:
1328 four. For F32X2, F64, S32X2, S64, U32X2, and U64, the offset must be a
[all …]
/third_party/skia/third_party/externals/opengl-registry/extensions/NV/
NV_shader_atomic_int64.txt
133 and ATOMS comes from NV_compute_program5.) "S64" should be allowed as a
144 use the "S64" storage modifier with the atomic operations "MIN",
149 (Add "U64" and/or "S64" opcode modifiers to the table in "Section 2.X.8.Z:
157 ADD U32, S32, U64, S64 compute a sum
158 MIN U32, S32, U64, S64 compute minimum
159 MAX U32, S32, U64, S64 compute maximum
160 AND U32, S32, U64, S64 compute bit-wise AND
161 OR U32, S32, U64, S64 compute bit-wise OR
162 XOR U32, S32, U64, S64 compute bit-wise XOR
163 EXCH U32, S32, U64, S64 exchange memory with operand
[all …]
NV_gpu_program5_mem_extended.txt
136 "S32X2", "S32X4", "S64", "S64X2", "S64X4", "U8", "U8X2", "U8X4", "U16",
270 four. For F32X2, F64, S32X2, S64, U32X2, U64, S16X4, and U16X4, the
NV_gpu_program5.txt
355 | "S64"
830 S64 Fixed-point operation, signed 64-bit operands or
845 S64 Access one 64-bit signed integer value
906 "S64" modifiers are precision-specific data type modifiers that specify
909 component, respectively. The "F64", "U64", and "S64" modifiers are
928 "F64X2", "F64X4", "S8", "S16", "S32", "S32X2", "S32X4", "S64", "S64X2",
972 "S8", "S16", "S32", "S64", "U8", "U16", "U32", and "U64" storage modifiers
1160 case S64:
1266 case S64:
1328 four. For F32X2, F64, S32X2, S64, U32X2, and U64, the offset must be a
[all …]
/third_party/skia/third_party/externals/swiftshader/third_party/llvm-10.0/llvm/lib/Target/Mips/MCTargetDesc/
MipsABIFlagsSection.cpp
27 case FpABIKind::S64: in getFpABIValue()
43 case FpABIKind::S64: in getFpABIString()
MipsABIFlagsSection.h
23 enum class FpABIKind { ANY, XX, S32, S64, SOFT }; enumerator
181 FpABI = FpABIKind::S64; in setFpAbiFromPredicates()
186 FpABI = FpABIKind::S64; in setFpAbiFromPredicates()
/third_party/skia/third_party/externals/swiftshader/third_party/llvm-10.0/llvm/lib/CodeGen/GlobalISel/
LegalizerHelper.cpp
3842 const LLT S64 = LLT::scalar(64); in lowerU64ToF32BitOps() local
3846 assert(MRI.getType(Src) == S64 && MRI.getType(Dst) == S32); in lowerU64ToF32BitOps()
3859 auto Zero64 = MIRBuilder.buildConstant(S64, 0); in lowerU64ToF32BitOps()
3869 auto Mask0 = MIRBuilder.buildConstant(S64, (-1ULL) >> 1); in lowerU64ToF32BitOps()
3870 auto ShlLZ = MIRBuilder.buildShl(S64, Src, LZ); in lowerU64ToF32BitOps()
3872 auto U = MIRBuilder.buildAnd(S64, ShlLZ, Mask0); in lowerU64ToF32BitOps()
3874 auto Mask1 = MIRBuilder.buildConstant(S64, 0xffffffffffULL); in lowerU64ToF32BitOps()
3875 auto T = MIRBuilder.buildAnd(S64, U, Mask1); in lowerU64ToF32BitOps()
3877 auto UShl = MIRBuilder.buildLShr(S64, U, MIRBuilder.buildConstant(S64, 40)); in lowerU64ToF32BitOps()
3881 auto C = MIRBuilder.buildConstant(S64, 0x8000000000ULL); in lowerU64ToF32BitOps()
[all …]
/third_party/lz4/programs/
util.h
69 typedef int64_t S64; typedef
77 typedef signed long long S64;
/third_party/vixl/src/aarch32/
instructions-aarch32.cc
463 case S64: in GetName()
instructions-aarch32.h
265 S64 = kDataTypeS | 64, enumerator
assembler-aarch32.cc
134 case S64: in Dt_L_imm6_1()
171 case S64: in Dt_L_imm6_2()
256 case S64: in Dt_imm6_1()
289 case S64: in Dt_imm6_2()
465 case S64: in Dt_op_size_3()
927 case S64: in Dt_U_size_3()
1280 case S64: in Dt_size_14()
disasm-aarch32.cc
119 return S64; in Dt_L_imm6_1_Decode()
135 if (type_value == 0x1) return S64; in Dt_L_imm6_2_Decode()
184 return S64; in Dt_imm6_1_Decode()
198 if (type_value == 0x1) return S64; in Dt_imm6_2_Decode()
301 return S64; in Dt_op_size_3_Decode()
590 return S64; in Dt_U_size_3_Decode()
802 return S64; in Dt_size_14_Decode()
/third_party/skia/third_party/externals/swiftshader/third_party/llvm-10.0/llvm/lib/Target/Mips/
MipsInstrFPU.td
102 // S64 - single precision in 32 64bit fp registers (In64BitMode)
/third_party/skia/third_party/externals/swiftshader/third_party/llvm-10.0/llvm/lib/Target/Mips/AsmParser/
MipsAsmParser.cpp
8301 FpABI = MipsABIFlagsSection::FpABIKind::S64; in parseFpABIValue()