Module core::arch::aarch64
Platform-specific intrinsics for the aarch64 platform.
See the module documentation for more details.
Structs
| APSR | Experimental (AArch64) Application Program Status Register |
| ISH | Experimental (AArch64) Inner Shareable is the required shareability domain, reads and writes are the required access types |
| ISHST | Experimental (AArch64) Inner Shareable is the required shareability domain, writes are the required access type |
| NSH | Experimental (AArch64) Non-shareable is the required shareability domain, reads and writes are the required access types |
| NSHST | Experimental (AArch64) Non-shareable is the required shareability domain, writes are the required access type |
| OSH | Experimental (AArch64) Outer Shareable is the required shareability domain, reads and writes are the required access types |
| OSHST | Experimental (AArch64) Outer Shareable is the required shareability domain, writes are the required access type |
| ST | Experimental (AArch64) Full system is the required shareability domain, writes are the required access type |
| SY | Experimental (AArch64) Full system is the required shareability domain, reads and writes are the required access types |
| float32x2_t | Experimental (AArch64) ARM-specific 64-bit wide vector of two packed `f32` |
| float32x4_t | Experimental (AArch64) ARM-specific 128-bit wide vector of four packed `f32` |
| float64x1_t | Experimental (AArch64) ARM-specific 64-bit wide vector of one packed `f64` |
| float64x2_t | Experimental (AArch64) ARM-specific 128-bit wide vector of two packed `f64` |
| int16x2_t | Experimental (AArch64) ARM-specific 32-bit wide vector of two packed `i16` |
| int16x4_t | Experimental (AArch64) ARM-specific 64-bit wide vector of four packed `i16` |
| int16x8_t | Experimental (AArch64) ARM-specific 128-bit wide vector of eight packed `i16` |
| int32x2_t | Experimental (AArch64) ARM-specific 64-bit wide vector of two packed `i32` |
| int32x4_t | Experimental (AArch64) ARM-specific 128-bit wide vector of four packed `i32` |
| int64x1_t | Experimental (AArch64) ARM-specific 64-bit wide vector of one packed `i64` |
| int64x2_t | Experimental (AArch64) ARM-specific 128-bit wide vector of two packed `i64` |
| int8x4_t | Experimental (AArch64) ARM-specific 32-bit wide vector of four packed `i8` |
| int8x8_t | Experimental (AArch64) ARM-specific 64-bit wide vector of eight packed `i8` |
| int8x16_t | Experimental (AArch64) ARM-specific 128-bit wide vector of sixteen packed `i8` |
| int8x16x2_t | Experimental (AArch64) ARM-specific type containing two `int8x16_t` vectors |
| int8x16x3_t | Experimental (AArch64) ARM-specific type containing three `int8x16_t` vectors |
| int8x16x4_t | Experimental (AArch64) ARM-specific type containing four `int8x16_t` vectors |
| int8x8x2_t | Experimental (AArch64) ARM-specific type containing two `int8x8_t` vectors |
| int8x8x3_t | Experimental (AArch64) ARM-specific type containing three `int8x8_t` vectors |
| int8x8x4_t | Experimental (AArch64) ARM-specific type containing four `int8x8_t` vectors |
| poly64_t | Experimental (AArch64) ARM-specific 64-bit wide vector of one packed `p64` |
| poly128_t | Experimental (AArch64) ARM-specific 128-bit wide vector of one packed `p128` |
| poly16x4_t | Experimental (AArch64) ARM-specific 64-bit wide vector of four packed `p16` |
| poly16x8_t | Experimental (AArch64) ARM-specific 128-bit wide vector of eight packed `p16` |
| poly64x1_t | Experimental (AArch64) ARM-specific 64-bit wide vector of one packed `p64` |
| poly64x2_t | Experimental (AArch64) ARM-specific 128-bit wide vector of two packed `p64` |
| poly8x8_t | Experimental (AArch64) ARM-specific 64-bit wide polynomial vector of eight packed `p8` |
| poly8x16_t | Experimental (AArch64) ARM-specific 128-bit wide vector of sixteen packed `p8` |
| poly8x16x2_t | Experimental (AArch64) ARM-specific type containing two `poly8x16_t` vectors |
| poly8x16x3_t | Experimental (AArch64) ARM-specific type containing three `poly8x16_t` vectors |
| poly8x16x4_t | Experimental (AArch64) ARM-specific type containing four `poly8x16_t` vectors |
| poly8x8x2_t | Experimental (AArch64) ARM-specific type containing two `poly8x8_t` vectors |
| poly8x8x3_t | Experimental (AArch64) ARM-specific type containing three `poly8x8_t` vectors |
| poly8x8x4_t | Experimental (AArch64) ARM-specific type containing four `poly8x8_t` vectors |
| uint16x2_t | Experimental (AArch64) ARM-specific 32-bit wide vector of two packed `u16` |
| uint16x4_t | Experimental (AArch64) ARM-specific 64-bit wide vector of four packed `u16` |
| uint16x8_t | Experimental (AArch64) ARM-specific 128-bit wide vector of eight packed `u16` |
| uint32x2_t | Experimental (AArch64) ARM-specific 64-bit wide vector of two packed `u32` |
| uint32x4_t | Experimental (AArch64) ARM-specific 128-bit wide vector of four packed `u32` |
| uint64x1_t | Experimental (AArch64) ARM-specific 64-bit wide vector of one packed `u64` |
| uint64x2_t | Experimental (AArch64) ARM-specific 128-bit wide vector of two packed `u64` |
| uint8x4_t | Experimental (AArch64) ARM-specific 32-bit wide vector of four packed `u8` |
| uint8x8_t | Experimental (AArch64) ARM-specific 64-bit wide vector of eight packed `u8` |
| uint8x16_t | Experimental (AArch64) ARM-specific 128-bit wide vector of sixteen packed `u8` |
| uint8x16x2_t | Experimental (AArch64) ARM-specific type containing two `uint8x16_t` vectors |
| uint8x16x3_t | Experimental (AArch64) ARM-specific type containing three `uint8x16_t` vectors |
| uint8x16x4_t | Experimental (AArch64) ARM-specific type containing four `uint8x16_t` vectors |
| uint8x8x2_t | Experimental (AArch64) ARM-specific type containing two `uint8x8_t` vectors |
| uint8x8x3_t | Experimental (AArch64) ARM-specific type containing three `uint8x8_t` vectors |
| uint8x8x4_t | Experimental (AArch64) ARM-specific type containing four `uint8x8_t` vectors |
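The widths in the table above follow directly from lane count times element size. A minimal sketch using plain arrays as stand-ins (the arrays illustrate width and lane layout only; they are not guaranteed ABI-compatible with the opaque vector types):

```rust
// Plain arrays used to check the widths the type names encode.
use core::mem::size_of;

fn main() {
    // float32x2_t: 64-bit wide, two f32 lanes.
    assert_eq!(size_of::<[f32; 2]>(), 8);
    // int8x16_t: 128-bit wide, sixteen i8 lanes.
    assert_eq!(size_of::<[i8; 16]>(), 16);
    // uint16x8_t: 128-bit wide, eight u16 lanes.
    assert_eq!(size_of::<[u16; 8]>(), 16);
    // int8x16x3_t: three int8x16_t vectors, 48 bytes of payload.
    assert_eq!(size_of::<[[i8; 16]; 3]>(), 48);
}
```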
Constants
| _TMFAILURE_CNCL | Experimental (AArch64) Transaction executed a TCANCEL instruction. |
| _TMFAILURE_DBG | Experimental (AArch64) Transaction aborted due to a debug trap. |
| _TMFAILURE_ERR | Experimental (AArch64) Transaction aborted because a non-permissible operation was attempted. |
| _TMFAILURE_IMP | Experimental (AArch64) Fallback error type for any other reason. |
| _TMFAILURE_INT | Experimental (AArch64) Transaction failed from interrupt. |
| _TMFAILURE_MEM | Experimental (AArch64) Transaction aborted because a conflict occurred. |
| _TMFAILURE_NEST | Experimental (AArch64) Transaction aborted because the transactional nesting level was exceeded. |
| _TMFAILURE_REASON | Experimental (AArch64) Extraction mask for the failure reason. |
| _TMFAILURE_RTRY | Experimental (AArch64) Transaction retry is possible. |
| _TMFAILURE_SIZE | Experimental (AArch64) Transaction aborted because the read or write set limit was exceeded. |
| _TMFAILURE_TRIVIAL | Experimental (AArch64) Indicates a TRIVIAL version of TM is available. |
| _TMSTART_SUCCESS | Experimental (AArch64) Transaction successfully started. |
Functions
| __breakpoint⚠ | Experimental (AArch64) Inserts a breakpoint instruction. |
| __clrex⚠ | Experimental (AArch64) Removes the exclusive lock created by LDREX. |
| __crc32d⚠ | Experimental (AArch64, crc) CRC32 single round checksum for quad words (64 bits). |
| __crc32cd⚠ | Experimental (AArch64, crc) CRC32-C single round checksum for quad words (64 bits). |
| __crc32b⚠ | Experimental (crc, v8, AArch64) CRC32 single round checksum for bytes (8 bits). |
| __crc32h⚠ | Experimental (crc, v8, AArch64) CRC32 single round checksum for half words (16 bits). |
| __crc32w⚠ | Experimental (crc, v8, AArch64) CRC32 single round checksum for words (32 bits). |
| __crc32cb⚠ | Experimental (crc, v8, AArch64) CRC32-C single round checksum for bytes (8 bits). |
| __crc32ch⚠ | Experimental (crc, v8, AArch64) CRC32-C single round checksum for half words (16 bits). |
| __crc32cw⚠ | Experimental (crc, v8, AArch64) CRC32-C single round checksum for words (32 bits). |
| __dbg⚠ | Experimental (AArch64) Generates a DBG instruction. |
| __dmb⚠ | Experimental (AArch64) Generates a DMB (data memory barrier) instruction or equivalent CP15 instruction. |
| __dsb⚠ | Experimental (AArch64) Generates a DSB (data synchronization barrier) instruction or equivalent CP15 instruction. |
| __isb⚠ | Experimental (AArch64) Generates an ISB (instruction synchronization barrier) instruction or equivalent CP15 instruction. |
| __ldrex⚠ | Experimental (AArch64) Executes an exclusive LDR instruction for a 32-bit value. |
| __ldrexb⚠ | Experimental (AArch64) Executes an exclusive LDR instruction for an 8-bit value. |
| __ldrexh⚠ | Experimental (AArch64) Executes an exclusive LDR instruction for a 16-bit value. |
| __nop⚠ | Experimental (AArch64) Generates an unspecified no-op instruction. |
| __qadd⚠ | Experimental (AArch64) Signed saturating addition. |
| __qadd8⚠ | Experimental (AArch64) Saturating four 8-bit integer additions. |
| __qadd16⚠ | Experimental (AArch64) Saturating two 16-bit integer additions. |
| __qasx⚠ | Experimental (AArch64) Saturating 16-bit add and subtract with exchange (inserts a QASX instruction). |
| __qdbl⚠ | Experimental (AArch64) Signed saturating doubling (inserts a QADD instruction). |
| __qsax⚠ | Experimental (AArch64) Saturating 16-bit subtract and add with exchange (inserts a QSAX instruction). |
| __qsub⚠ | Experimental (AArch64) Signed saturating subtraction. |
| __qsub8⚠ | Experimental (AArch64) Saturating four 8-bit integer subtractions. |
| __qsub16⚠ | Experimental (AArch64) Saturating two 16-bit integer subtractions. |
| __rsr⚠ | Experimental (AArch64) Reads a 32-bit system register. |
| __rsrp⚠ | Experimental (AArch64) Reads a system register containing an address. |
| __sadd8⚠ | Experimental (AArch64) Parallel byte-wise signed addition (inserts a SADD8 instruction). |
| __sadd16⚠ | Experimental (AArch64) Parallel halfword-wise signed addition (inserts a SADD16 instruction). |
| __sasx⚠ | Experimental (AArch64) Signed 16-bit add and subtract with exchange (inserts a SASX instruction). |
| __sel⚠ | Experimental (AArch64) Selects bytes from each operand according to the APSR GE flags. |
| __sev⚠ | Experimental (AArch64) Generates a SEV (send a global event) hint instruction. |
| __shadd8⚠ | Experimental (AArch64) Signed halving parallel byte-wise addition. |
| __shadd16⚠ | Experimental (AArch64) Signed halving parallel halfword-wise addition. |
| __shsub8⚠ | Experimental (AArch64) Signed halving parallel byte-wise subtraction. |
| __shsub16⚠ | Experimental (AArch64) Signed halving parallel halfword-wise subtraction. |
| __smlabb⚠ | Experimental (AArch64) Inserts a SMLABB instruction. |
| __smlabt⚠ | Experimental (AArch64) Inserts a SMLABT instruction. |
| __smlad⚠ | Experimental (AArch64) Dual 16-bit Signed Multiply with Addition of products and 32-bit accumulation. |
| __smlatb⚠ | Experimental (AArch64) Inserts a SMLATB instruction. |
| __smlatt⚠ | Experimental (AArch64) Inserts a SMLATT instruction. |
| __smlawb⚠ | Experimental (AArch64) Inserts a SMLAWB instruction. |
| __smlawt⚠ | Experimental (AArch64) Inserts a SMLAWT instruction. |
| __smlsd⚠ | Experimental (AArch64) Dual 16-bit Signed Multiply with Subtraction of products and 32-bit accumulation and overflow detection. |
| __smuad⚠ | Experimental (AArch64) Signed Dual Multiply Add. |
| __smuadx⚠ | Experimental (AArch64) Signed Dual Multiply Add Reversed. |
| __smulbb⚠ | Experimental (AArch64) Inserts a SMULBB instruction. |
| __smulbt⚠ | Experimental (AArch64) Inserts a SMULBT instruction. |
| __smultb⚠ | Experimental (AArch64) Inserts a SMULTB instruction. |
| __smultt⚠ | Experimental (AArch64) Inserts a SMULTT instruction. |
| __smulwb⚠ | Experimental (AArch64) Inserts a SMULWB instruction. |
| __smulwt⚠ | Experimental (AArch64) Inserts a SMULWT instruction. |
| __smusd⚠ | Experimental (AArch64) Signed Dual Multiply Subtract. |
| __smusdx⚠ | Experimental (AArch64) Signed Dual Multiply Subtract Reversed. |
| __ssub8⚠ | Experimental (AArch64) Inserts a SSUB8 instruction (parallel byte-wise subtraction). |
| __strex⚠ | Experimental (AArch64) Executes an exclusive STR instruction for 32-bit values. |
| __strexb⚠ | Experimental (AArch64) Executes an exclusive STR instruction for 8-bit values. |
| __strexh⚠ | Experimental (AArch64) Executes an exclusive STR instruction for 16-bit values. |
| __tcancel⚠ | Experimental (AArch64, tme) Cancels the current transaction and discards all state modifications that were performed transactionally. |
| __tcommit⚠ | Experimental (AArch64, tme) Commits the current transaction. For a nested transaction, the only effect is that the transactional nesting depth is decreased. For an outer transaction, the state modifications performed transactionally are committed to the architectural state. |
| __tstart⚠ | Experimental (AArch64, tme) Starts a new transaction. When the transaction starts successfully the return value is 0. If the transaction fails, all state modifications are discarded and a cause of the failure is encoded in the return value. |
| __ttest⚠ | Experimental (AArch64, tme) Tests whether execution is inside a transaction. If no transaction is currently executing, the return value is 0. Otherwise, this intrinsic returns the depth of the transaction. |
| __usad8⚠ | Experimental (AArch64) Sum of 8-bit absolute differences. |
| __usada8⚠ | Experimental (AArch64) Sum of 8-bit absolute differences and constant. |
| __usub8⚠ | Experimental (AArch64) Inserts a USUB8 instruction (parallel byte-wise subtraction). |
| __wfe⚠ | Experimental (AArch64) Generates a WFE (wait for event) hint instruction, or nothing. |
| __wfi⚠ | Experimental (AArch64) Generates a WFI (wait for interrupt) hint instruction, or nothing. |
| __wsr⚠ | Experimental (AArch64) Writes a 32-bit system register. |
| __wsrp⚠ | Experimental (AArch64) Writes a system register containing an address. |
| __yield⚠ | Experimental (AArch64) Generates a YIELD hint instruction. |
| _cls_u32⚠ | Experimental (AArch64) Counts the leading (most significant) bits that are set. |
| _cls_u64⚠ | Experimental (AArch64) Counts the leading (most significant) bits that are set. |
| _clz_u8⚠ | Experimental (AArch64, v7) Count Leading Zeros. |
| _clz_u16⚠ | Experimental (AArch64, v7) Count Leading Zeros. |
| _clz_u32⚠ | Experimental (AArch64, v7) Count Leading Zeros. |
| _clz_u64⚠ | Experimental (AArch64) Count Leading Zeros. |
| _rbit_u32⚠ | Experimental (AArch64, v7) Reverse the bit order. |
| _rbit_u64⚠ | Experimental (AArch64) Reverse the bit order. |
| _rev_u16⚠ | Experimental (AArch64) Reverse the order of the bytes. |
| _rev_u32⚠ | Experimental (AArch64) Reverse the order of the bytes. |
| _rev_u64⚠ | Experimental (AArch64) Reverse the order of the bytes. |
| brk⚠ | Experimental (AArch64) Generates the BRK trap instruction. |
| udf⚠ | Experimental (AArch64) Generates the UDF trap instruction. |
| vadd_f32⚠ | Experimental (neon, v7, AArch64) Vector add. |
| vadd_f64⚠ | Experimental (AArch64, neon) Vector add. |
| vadd_s8⚠ | Experimental (neon, v7, AArch64) Vector add. |
| vadd_s16⚠ | Experimental (neon, v7, AArch64) Vector add. |
| vadd_s32⚠ | Experimental (neon, v7, AArch64) Vector add. |
| vadd_u8⚠ | Experimental (neon, v7, AArch64) Vector add. |
| vadd_u16⚠ | Experimental (neon, v7, AArch64) Vector add. |
| vadd_u32⚠ | Experimental (neon, v7, AArch64) Vector add. |
| vaddd_s64⚠ | Experimental (AArch64, neon) Vector add. |
| vaddd_u64⚠ | Experimental (AArch64, neon) Vector add. |
| vaddl_s8⚠ | Experimental (neon, v7, AArch64) Vector long add. |
| vaddl_s16⚠ | Experimental (neon, v7, AArch64) Vector long add. |
| vaddl_s32⚠ | Experimental (neon, v7, AArch64) Vector long add. |
| vaddl_u8⚠ | Experimental (neon, v7, AArch64) Vector long add. |
| vaddl_u16⚠ | Experimental (neon, v7, AArch64) Vector long add. |
| vaddl_u32⚠ | Experimental (neon, v7, AArch64) Vector long add. |
| vaddq_f32⚠ | Experimental (neon, v7, AArch64) Vector add. |
| vaddq_f64⚠ | Experimental (AArch64, neon) Vector add. |
| vaddq_s8⚠ | Experimental (neon, v7, AArch64) Vector add. |
| vaddq_s16⚠ | Experimental (neon, v7, AArch64) Vector add. |
| vaddq_s32⚠ | Experimental (neon, v7, AArch64) Vector add. |
| vaddq_s64⚠ | Experimental (neon, v7, AArch64) Vector add. |
| vaddq_u8⚠ | Experimental (neon, v7, AArch64) Vector add. |
| vaddq_u16⚠ | Experimental (neon, v7, AArch64) Vector add. |
| vaddq_u32⚠ | Experimental (neon, v7, AArch64) Vector add. |
| vaddq_u64⚠ | Experimental (neon, v7, AArch64) Vector add. |
| vaesdq_u8⚠ | Experimental (AArch64, crypto) AES single round decryption. |
| vaeseq_u8⚠ | Experimental (AArch64, crypto) AES single round encryption. |
| vaesimcq_u8⚠ | Experimental (AArch64, crypto) AES inverse mix columns. |
| vaesmcq_u8⚠ | Experimental (AArch64, crypto) AES mix columns. |
| vand_s8⚠ | Experimental (neon, v7, AArch64) Vector bitwise and. |
| vand_s16⚠ | Experimental (neon, v7, AArch64) Vector bitwise and. |
| vand_s32⚠ | Experimental (neon, v7, AArch64) Vector bitwise and. |
| vand_s64⚠ | Experimental (neon, v7, AArch64) Vector bitwise and. |
| vand_u8⚠ | Experimental (neon, v7, AArch64) Vector bitwise and. |
| vand_u16⚠ | Experimental (neon, v7, AArch64) Vector bitwise and. |
| vand_u32⚠ | Experimental (neon, v7, AArch64) Vector bitwise and. |
| vand_u64⚠ | Experimental (neon, v7, AArch64) Vector bitwise and. |
| vandq_s8⚠ | Experimental (neon, v7, AArch64) Vector bitwise and. |
| vandq_s16⚠ | Experimental (neon, v7, AArch64) Vector bitwise and. |
| vandq_s32⚠ | Experimental (neon, v7, AArch64) Vector bitwise and. |
| vandq_s64⚠ | Experimental (neon, v7, AArch64) Vector bitwise and. |
| vandq_u8⚠ | Experimental (neon, v7, AArch64) Vector bitwise and. |
| vandq_u16⚠ | Experimental (neon, v7, AArch64) Vector bitwise and. |
| vandq_u32⚠ | Experimental (neon, v7, AArch64) Vector bitwise and. |
| vandq_u64⚠ | Experimental (neon, v7, AArch64) Vector bitwise and. |
| vceq_f32⚠ | Experimental (neon, v7, AArch64) Floating-point compare equal. |
| vceq_f64⚠ | Experimental (AArch64, neon) Floating-point compare equal. |
| vceq_p64⚠ | Experimental (AArch64, neon) Compare bitwise Equal (vector). |
| vceq_s8⚠ | Experimental (neon, v7, AArch64) Compare bitwise Equal (vector). |
| vceq_s16⚠ | Experimental (neon, v7, AArch64) Compare bitwise Equal (vector). |
| vceq_s32⚠ | Experimental (neon, v7, AArch64) Compare bitwise Equal (vector). |
| vceq_s64⚠ | Experimental (AArch64, neon) Compare bitwise Equal (vector). |
| vceq_u8⚠ | Experimental (neon, v7, AArch64) Compare bitwise Equal (vector). |
| vceq_u16⚠ | Experimental (neon, v7, AArch64) Compare bitwise Equal (vector). |
| vceq_u32⚠ | Experimental (neon, v7, AArch64) Compare bitwise Equal (vector). |
| vceq_u64⚠ | Experimental (AArch64, neon) Compare bitwise Equal (vector). |
| vceqq_f32⚠ | Experimental (neon, v7, AArch64) Floating-point compare equal. |
| vceqq_f64⚠ | Experimental (AArch64, neon) Floating-point compare equal. |
| vceqq_p64⚠ | Experimental (AArch64, neon) Compare bitwise Equal (vector). |
| vceqq_s8⚠ | Experimental (neon, v7, AArch64) Compare bitwise Equal (vector). |
| vceqq_s16⚠ | Experimental (neon, v7, AArch64) Compare bitwise Equal (vector). |
| vceqq_s32⚠ | Experimental (neon, v7, AArch64) Compare bitwise Equal (vector). |
| vceqq_s64⚠ | Experimental (AArch64, neon) Compare bitwise Equal (vector). |
| vceqq_u8⚠ | Experimental (neon, v7, AArch64) Compare bitwise Equal (vector). |
| vceqq_u16⚠ | Experimental (neon, v7, AArch64) Compare bitwise Equal (vector). |
| vceqq_u32⚠ | Experimental (neon, v7, AArch64) Compare bitwise Equal (vector). |
| vceqq_u64⚠ | Experimental (AArch64, neon) Compare bitwise Equal (vector). |
| vcge_f32⚠ | Experimental (neon, v7, AArch64) Floating-point compare greater than or equal. |
| vcge_f64⚠ | Experimental (AArch64, neon) Floating-point compare greater than or equal. |
| vcge_s8⚠ | Experimental (neon, v7, AArch64) Compare signed greater than or equal. |
| vcge_s16⚠ | Experimental (neon, v7, AArch64) Compare signed greater than or equal. |
| vcge_s32⚠ | Experimental (neon, v7, AArch64) Compare signed greater than or equal. |
| vcge_s64⚠ | Experimental (AArch64, neon) Compare signed greater than or equal. |
| vcge_u8⚠ | Experimental (neon, v7, AArch64) Compare unsigned greater than or equal. |
| vcge_u16⚠ | Experimental (neon, v7, AArch64) Compare unsigned greater than or equal. |
| vcge_u32⚠ | Experimental (neon, v7, AArch64) Compare unsigned greater than or equal. |
| vcge_u64⚠ | Experimental (AArch64, neon) Compare unsigned greater than or equal. |
| vcgeq_f32⚠ | Experimental (neon, v7, AArch64) Floating-point compare greater than or equal. |
| vcgeq_f64⚠ | Experimental (AArch64, neon) Floating-point compare greater than or equal. |
| vcgeq_s8⚠ | Experimental (neon, v7, AArch64) Compare signed greater than or equal. |
| vcgeq_s16⚠ | Experimental (neon, v7, AArch64) Compare signed greater than or equal. |
| vcgeq_s32⚠ | Experimental (neon, v7, AArch64) Compare signed greater than or equal. |
| vcgeq_s64⚠ | Experimental (AArch64, neon) Compare signed greater than or equal. |
| vcgeq_u8⚠ | Experimental (neon, v7, AArch64) Compare unsigned greater than or equal. |
| vcgeq_u16⚠ | Experimental (neon, v7, AArch64) Compare unsigned greater than or equal. |
| vcgeq_u32⚠ | Experimental (neon, v7, AArch64) Compare unsigned greater than or equal. |
| vcgeq_u64⚠ | Experimental (AArch64, neon) Compare unsigned greater than or equal. |
| vcgt_f32⚠ | Experimental (neon, v7, AArch64) Floating-point compare greater than. |
| vcgt_f64⚠ | Experimental (AArch64, neon) Floating-point compare greater than. |
| vcgt_s8⚠ | Experimental (neon, v7, AArch64) Compare signed greater than. |
| vcgt_s16⚠ | Experimental (neon, v7, AArch64) Compare signed greater than. |
| vcgt_s32⚠ | Experimental (neon, v7, AArch64) Compare signed greater than. |
| vcgt_s64⚠ | Experimental (AArch64, neon) Compare signed greater than. |
| vcgt_u8⚠ | Experimental (neon, v7, AArch64) Compare unsigned higher. |
| vcgt_u16⚠ | Experimental (neon, v7, AArch64) Compare unsigned higher. |
| vcgt_u32⚠ | Experimental (neon, v7, AArch64) Compare unsigned higher. |
| vcgt_u64⚠ | Experimental (AArch64, neon) Compare unsigned higher. |
| vcgtq_f32⚠ | Experimental (neon, v7, AArch64) Floating-point compare greater than. |
| vcgtq_f64⚠ | Experimental (AArch64, neon) Floating-point compare greater than. |
| vcgtq_s8⚠ | Experimental (neon, v7, AArch64) Compare signed greater than. |
| vcgtq_s16⚠ | Experimental (neon, v7, AArch64) Compare signed greater than. |
| vcgtq_s32⚠ | Experimental (neon, v7, AArch64) Compare signed greater than. |
| vcgtq_s64⚠ | Experimental (AArch64, neon) Compare signed greater than. |
| vcgtq_u8⚠ | Experimental (neon, v7, AArch64) Compare unsigned higher. |
| vcgtq_u16⚠ | Experimental (neon, v7, AArch64) Compare unsigned higher. |
| vcgtq_u32⚠ | Experimental (neon, v7, AArch64) Compare unsigned higher. |
| vcgtq_u64⚠ | Experimental (AArch64, neon) Compare unsigned higher. |
| vcle_f32⚠ | Experimental (neon, v7, AArch64) Floating-point compare less than or equal. |
| vcle_f64⚠ | Experimental (AArch64, neon) Floating-point compare less than or equal. |
| vcle_s8⚠ | Experimental (neon, v7, AArch64) Compare signed less than or equal. |
| vcle_s16⚠ | Experimental (neon, v7, AArch64) Compare signed less than or equal. |
| vcle_s32⚠ | Experimental (neon, v7, AArch64) Compare signed less than or equal. |
| vcle_s64⚠ | Experimental (AArch64, neon) Compare signed less than or equal. |
| vcle_u8⚠ | Experimental (neon, v7, AArch64) Compare unsigned less than or equal. |
| vcle_u16⚠ | Experimental (neon, v7, AArch64) Compare unsigned less than or equal. |
| vcle_u32⚠ | Experimental (neon, v7, AArch64) Compare unsigned less than or equal. |
| vcle_u64⚠ | Experimental (AArch64, neon) Compare unsigned less than or equal. |
| vcleq_f32⚠ | Experimental (neon, v7, AArch64) Floating-point compare less than or equal. |
| vcleq_f64⚠ | Experimental (AArch64, neon) Floating-point compare less than or equal. |
| vcleq_s8⚠ | Experimental (neon, v7, AArch64) Compare signed less than or equal. |
| vcleq_s16⚠ | Experimental (neon, v7, AArch64) Compare signed less than or equal. |
| vcleq_s32⚠ | Experimental (neon, v7, AArch64) Compare signed less than or equal. |
| vcleq_s64⚠ | Experimental (AArch64, neon) Compare signed less than or equal. |
| vcleq_u8⚠ | Experimental (neon, v7, AArch64) Compare unsigned less than or equal. |
| vcleq_u16⚠ | Experimental (neon, v7, AArch64) Compare unsigned less than or equal. |
| vcleq_u32⚠ | Experimental (neon, v7, AArch64) Compare unsigned less than or equal. |
| vcleq_u64⚠ | Experimental (AArch64, neon) Compare unsigned less than or equal. |
| vclt_f32⚠ | Experimental (neon, v7, AArch64) Floating-point compare less than. |
| vclt_f64⚠ | Experimental (AArch64, neon) Floating-point compare less than. |
| vclt_s8⚠ | Experimental (neon, v7, AArch64) Compare signed less than. |
| vclt_s16⚠ | Experimental (neon, v7, AArch64) Compare signed less than. |
| vclt_s32⚠ | Experimental (neon, v7, AArch64) Compare signed less than. |
| vclt_s64⚠ | Experimental (AArch64, neon) Compare signed less than. |
| vclt_u8⚠ | Experimental (neon, v7, AArch64) Compare unsigned less than. |
| vclt_u16⚠ | Experimental (neon, v7, AArch64) Compare unsigned less than. |
| vclt_u32⚠ | Experimental (neon, v7, AArch64) Compare unsigned less than. |
| vclt_u64⚠ | Experimental (AArch64, neon) Compare unsigned less than. |
| vcltq_f32⚠ | Experimental (neon, v7, AArch64) Floating-point compare less than. |
| vcltq_f64⚠ | Experimental (AArch64, neon) Floating-point compare less than. |
| vcltq_s8⚠ | Experimental (neon, v7, AArch64) Compare signed less than. |
| vcltq_s16⚠ | Experimental (neon, v7, AArch64) Compare signed less than. |
| vcltq_s32⚠ | Experimental (neon, v7, AArch64) Compare signed less than. |
| vcltq_s64⚠ | Experimental (AArch64, neon) Compare signed less than. |
| vcltq_u8⚠ | Experimental (neon, v7, AArch64) Compare unsigned less than. |
| vcltq_u16⚠ | Experimental (neon, v7, AArch64) Compare unsigned less than. |
| vcltq_u32⚠ | Experimental (neon, v7, AArch64) Compare unsigned less than. |
| vcltq_u64⚠ | Experimental (AArch64, neon) Compare unsigned less than. |
| vcombine_f32⚠ | Experimental (AArch64, neon) Vector combine. |
| vcombine_f64⚠ | Experimental (AArch64, neon) Vector combine. |
| vcombine_p8⚠ | Experimental (AArch64, neon) Vector combine. |
| vcombine_p16⚠ | Experimental (AArch64, neon) Vector combine. |
| vcombine_p64⚠ | Experimental (AArch64, neon) Vector combine. |
| vcombine_s8⚠ | Experimental (AArch64, neon) Vector combine. |
| vcombine_s16⚠ | Experimental (AArch64, neon) Vector combine. |
| vcombine_s32⚠ | Experimental (AArch64, neon) Vector combine. |
| vcombine_s64⚠ | Experimental (AArch64, neon) Vector combine. |
| vcombine_u8⚠ | Experimental (AArch64, neon) Vector combine. |
| vcombine_u16⚠ | Experimental (AArch64, neon) Vector combine. |
| vcombine_u32⚠ | Experimental (AArch64, neon) Vector combine. |
| vcombine_u64⚠ | Experimental (AArch64, neon) Vector combine. |
| vdupq_n_s8⚠ | Experimental (neon, v7, AArch64) Duplicate vector element to vector or scalar. |
| vdupq_n_u8⚠ | Experimental (neon, v7, AArch64) Duplicate vector element to vector or scalar. |
| veor_s8⚠ | Experimental (neon, v7, AArch64) Vector bitwise exclusive or (vector). |
| veor_s16⚠ | Experimental (neon, v7, AArch64) Vector bitwise exclusive or (vector). |
| veor_s32⚠ | Experimental (neon, v7, AArch64) Vector bitwise exclusive or (vector). |
| veor_s64⚠ | Experimental (neon, v7, AArch64) Vector bitwise exclusive or (vector). |
| veor_u8⚠ | Experimental (neon, v7, AArch64) Vector bitwise exclusive or (vector). |
| veor_u16⚠ | Experimental (neon, v7, AArch64) Vector bitwise exclusive or (vector). |
| veor_u32⚠ | Experimental (neon, v7, AArch64) Vector bitwise exclusive or (vector). |
| veor_u64⚠ | Experimental (neon, v7, AArch64) Vector bitwise exclusive or (vector). |
| veorq_s8⚠ | Experimental (neon, v7, AArch64) Vector bitwise exclusive or (vector). |
| veorq_s16⚠ | Experimental (neon, v7, AArch64) Vector bitwise exclusive or (vector). |
| veorq_s32⚠ | Experimental (neon, v7, AArch64) Vector bitwise exclusive or (vector). |
| veorq_s64⚠ | Experimental (neon, v7, AArch64) Vector bitwise exclusive or (vector). |
| veorq_u8⚠ | Experimental (neon, v7, AArch64) Vector bitwise exclusive or (vector). |
| veorq_u16⚠ | Experimental (neon, v7, AArch64) Vector bitwise exclusive or (vector). |
| veorq_u32⚠ | Experimental (neon, v7, AArch64) Vector bitwise exclusive or (vector). |
| veorq_u64⚠ | Experimental (neon, v7, AArch64) Vector bitwise exclusive or (vector). |
| vextq_s8⚠ | Experimental (neon, v7, AArch64) Extract vector from pair of vectors. |
| vextq_u8⚠ | Experimental (neon, v7, AArch64) Extract vector from pair of vectors. |
| vget_lane_u8⚠ | Experimental (neon, v7, AArch64) Move vector element to general-purpose register. |
| vget_lane_u64⚠ | Experimental (neon, v7, AArch64) Move vector element to general-purpose register. |
| vgetq_lane_u16⚠ | Experimental (neon, v7, AArch64) Move vector element to general-purpose register. |
| vgetq_lane_u32⚠ | Experimental (neon, v7, AArch64) Move vector element to general-purpose register. |
| vgetq_lane_u64⚠ | Experimental (neon, v7, AArch64) Move vector element to general-purpose register. |
| vhadd_s8⚠ | Experimental (neon, v7, AArch64) Halving add. |
| vhadd_s16⚠ | Experimental (neon, v7, AArch64) Halving add. |
| vhadd_s32⚠ | Experimental (neon, v7, AArch64) Halving add. |
| vhadd_u8⚠ | Experimental (neon, v7, AArch64) Halving add. |
| vhadd_u16⚠ | Experimental (neon, v7, AArch64) Halving add. |
| vhadd_u32⚠ | Experimental (neon, v7, AArch64) Halving add. |
| vhaddq_s8⚠ | Experimental (neon, v7, AArch64) Halving add. |
| vhaddq_s16⚠ | Experimental (neon, v7, AArch64) Halving add. |
| vhaddq_s32⚠ | Experimental (neon, v7, AArch64) Halving add. |
| vhaddq_u8⚠ | Experimental (neon, v7, AArch64) Halving add. |
| vhaddq_u16⚠ | Experimental (neon, v7, AArch64) Halving add. |
| vhaddq_u32⚠ | Experimental (neon, v7, AArch64) Halving add. |
| vhsub_s8⚠ | Experimental (neon, v7, AArch64) Signed halving subtract. |
| vhsub_s16⚠ | Experimental (neon, v7, AArch64) Signed halving subtract. |
| vhsub_s32⚠ | Experimental (neon, v7, AArch64) Signed halving subtract. |
| vhsub_u8⚠ | Experimental (neon, v7, AArch64) Unsigned halving subtract. |
| vhsub_u16⚠ | Experimental (neon, v7, AArch64) Unsigned halving subtract. |
| vhsub_u32⚠ | Experimental (neon, v7, AArch64) Unsigned halving subtract. |
| vhsubq_s8⚠ | Experimental (neon, v7, AArch64) Signed halving subtract. |
| vhsubq_s16⚠ | Experimental (neon, v7, AArch64) Signed halving subtract. |
| vhsubq_s32⚠ | Experimental (neon, v7, AArch64) Signed halving subtract. |
| vhsubq_u8⚠ | Experimental (neon, v7, AArch64) Unsigned halving subtract. |
| vhsubq_u16⚠ | Experimental (neon, v7, AArch64) Unsigned halving subtract. |
| vhsubq_u32⚠ | Experimental (neon, v7, AArch64) Unsigned halving subtract. |
| vld1q_s8⚠ | Experimentalneon and v7 and AArch64Load multiple single-element structures to one, two, three, or four registers |
| vld1q_u8⚠ | Experimentalneon and v7 and AArch64Load multiple single-element structures to one, two, three, or four registers |
| vmaxv_f32⚠ | ExperimentalAArch64 and neonHorizontal vector max. |
| vmaxv_s8⚠ | ExperimentalAArch64 and neonHorizontal vector max. |
| vmaxv_s16⚠ | ExperimentalAArch64 and neonHorizontal vector max. |
| vmaxv_s32⚠ | ExperimentalAArch64 and neonHorizontal vector max. |
| vmaxv_u8⚠ | ExperimentalAArch64 and neonHorizontal vector max. |
| vmaxv_u16⚠ | ExperimentalAArch64 and neonHorizontal vector max. |
| vmaxv_u32⚠ | ExperimentalAArch64 and neonHorizontal vector max. |
| vmaxvq_f32⚠ | ExperimentalAArch64 and neonHorizontal vector max. |
| vmaxvq_f64⚠ | ExperimentalAArch64 and neonHorizontal vector max. |
| vmaxvq_s8⚠ | ExperimentalAArch64 and neonHorizontal vector max. |
| vmaxvq_s16⚠ | ExperimentalAArch64 and neonHorizontal vector max. |
| vmaxvq_s32⚠ | ExperimentalAArch64 and neonHorizontal vector max. |
| vmaxvq_u8⚠ | ExperimentalAArch64 and neonHorizontal vector max. |
| vmaxvq_u16⚠ | ExperimentalAArch64 and neonHorizontal vector max. |
| vmaxvq_u32⚠ | ExperimentalAArch64 and neonHorizontal vector max. |
| vminv_f32⚠ | ExperimentalAArch64 and neonHorizontal vector min. |
| vminv_s8⚠ | ExperimentalAArch64 and neonHorizontal vector min. |
| vminv_s16⚠ | ExperimentalAArch64 and neonHorizontal vector min. |
| vminv_s32⚠ | ExperimentalAArch64 and neonHorizontal vector min. |
| vminv_u8⚠ | ExperimentalAArch64 and neonHorizontal vector min. |
| vminv_u16⚠ | ExperimentalAArch64 and neonHorizontal vector min. |
| vminv_u32⚠ | ExperimentalAArch64 and neonHorizontal vector min. |
| vminvq_f32⚠ | ExperimentalAArch64 and neonHorizontal vector min. |
| vminvq_f64⚠ | ExperimentalAArch64 and neonHorizontal vector min. |
| vminvq_s8⚠ | ExperimentalAArch64 and neonHorizontal vector min. |
| vminvq_s16⚠ | ExperimentalAArch64 and neonHorizontal vector min. |
| vminvq_s32⚠ | ExperimentalAArch64 and neonHorizontal vector min. |
| vminvq_u8⚠ | ExperimentalAArch64 and neonHorizontal vector min. |
| vminvq_u16⚠ | ExperimentalAArch64 and neonHorizontal vector min. |
| vminvq_u32⚠ | ExperimentalAArch64 and neonHorizontal vector min. |
| vmovl_s8⚠ | Experimental (neon and v7 and AArch64) Vector long move. |
| vmovl_s16⚠ | Experimental (neon and v7 and AArch64) Vector long move. |
| vmovl_s32⚠ | Experimental (neon and v7 and AArch64) Vector long move. |
| vmovl_u8⚠ | Experimental (neon and v7 and AArch64) Vector long move. |
| vmovl_u16⚠ | Experimental (neon and v7 and AArch64) Vector long move. |
| vmovl_u32⚠ | Experimental (neon and v7 and AArch64) Vector long move. |
| vmovn_s16⚠ | Experimental (neon and v7 and AArch64) Vector narrow integer. |
| vmovn_s32⚠ | Experimental (neon and v7 and AArch64) Vector narrow integer. |
| vmovn_s64⚠ | Experimental (neon and v7 and AArch64) Vector narrow integer. |
| vmovn_u16⚠ | Experimental (neon and v7 and AArch64) Vector narrow integer. |
| vmovn_u32⚠ | Experimental (neon and v7 and AArch64) Vector narrow integer. |
| vmovn_u64⚠ | Experimental (neon and v7 and AArch64) Vector narrow integer. |
| vmovq_n_u8⚠ | Experimental (neon and v7 and AArch64) Duplicate vector element to vector or scalar |
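"Long move" widens each lane to twice its width: sign extension for the signed variants, zero extension for the unsigned ones. A scalar sketch (the helper name is illustrative):

```rust
// Scalar sketch of vmovl_s8: widen each signed 8-bit lane to 16 bits
// via sign extension. Not the intrinsic itself, just its per-lane math.
fn movl_s8(lanes: &[i8]) -> Vec<i16> {
    lanes.iter().map(|&x| x as i16).collect()
}

fn main() {
    // Sign extension preserves the numeric value of each lane.
    assert_eq!(movl_s8(&[-1, 127, -128]), vec![-1, 127, -128]);
}
```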
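"Narrow integer" is the inverse direction: each lane keeps only its low half, a truncating cast. A scalar sketch (illustrative helper):

```rust
// Scalar sketch of vmovn_s16: keep the low 8 bits of each 16-bit lane.
// Rust's `as` cast to a narrower integer type truncates the same way.
fn movn_s16(lanes: &[i16]) -> Vec<i8> {
    lanes.iter().map(|&x| x as i8).collect()
}

fn main() {
    // 0x1234 narrows to its low byte 0x34; 300 (0x012C) narrows to 0x2C = 44.
    assert_eq!(movn_s16(&[0x1234, 300]), vec![0x34, 44]);
}
```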
| vmul_f32⚠ | Experimental (neon and v7 and AArch64) Multiply |
| vmul_f64⚠ | Experimental (AArch64 and neon) Multiply |
| vmul_s8⚠ | Experimental (neon and v7 and AArch64) Multiply |
| vmul_s16⚠ | Experimental (neon and v7 and AArch64) Multiply |
| vmul_s32⚠ | Experimental (neon and v7 and AArch64) Multiply |
| vmul_u8⚠ | Experimental (neon and v7 and AArch64) Multiply |
| vmul_u16⚠ | Experimental (neon and v7 and AArch64) Multiply |
| vmul_u32⚠ | Experimental (neon and v7 and AArch64) Multiply |
| vmull_p64⚠ | Experimental (AArch64 and neon) Polynomial multiply long |
| vmulq_f32⚠ | Experimental (neon and v7 and AArch64) Multiply |
| vmulq_f64⚠ | Experimental (AArch64 and neon) Multiply |
| vmulq_s8⚠ | Experimental (neon and v7 and AArch64) Multiply |
| vmulq_s16⚠ | Experimental (neon and v7 and AArch64) Multiply |
| vmulq_s32⚠ | Experimental (neon and v7 and AArch64) Multiply |
| vmulq_u8⚠ | Experimental (neon and v7 and AArch64) Multiply |
| vmulq_u16⚠ | Experimental (neon and v7 and AArch64) Multiply |
| vmulq_u32⚠ | Experimental (neon and v7 and AArch64) Multiply |
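Unlike the plain lane-wise multiplies above it, `vmull_p64` is a carry-less (polynomial, GF(2)) multiply: partial products are combined with XOR instead of addition, producing a 128-bit result from two 64-bit operands. A scalar sketch (illustrative helper):

```rust
// Scalar sketch of vmull_p64: carry-less multiply of two 64-bit values,
// treating each as a polynomial over GF(2). XOR replaces addition.
fn pmull(a: u64, b: u64) -> u128 {
    let mut acc: u128 = 0;
    for i in 0..64 {
        if (b >> i) & 1 == 1 {
            acc ^= (a as u128) << i; // shift-and-XOR: no carries propagate
        }
    }
    acc
}

fn main() {
    // (x + 1)(x + 1) = x^2 + 1 over GF(2): 0b11 * 0b11 = 0b101, not 9.
    assert_eq!(pmull(0b11, 0b11), 0b101);
}
```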
| vmvn_p8⚠ | Experimental (neon and v7 and AArch64) Vector bitwise not. |
| vmvn_s8⚠ | Experimental (neon and v7 and AArch64) Vector bitwise not. |
| vmvn_s16⚠ | Experimental (neon and v7 and AArch64) Vector bitwise not. |
| vmvn_s32⚠ | Experimental (neon and v7 and AArch64) Vector bitwise not. |
| vmvn_u8⚠ | Experimental (neon and v7 and AArch64) Vector bitwise not. |
| vmvn_u16⚠ | Experimental (neon and v7 and AArch64) Vector bitwise not. |
| vmvn_u32⚠ | Experimental (neon and v7 and AArch64) Vector bitwise not. |
| vmvnq_p8⚠ | Experimental (neon and v7 and AArch64) Vector bitwise not. |
| vmvnq_s8⚠ | Experimental (neon and v7 and AArch64) Vector bitwise not. |
| vmvnq_s16⚠ | Experimental (neon and v7 and AArch64) Vector bitwise not. |
| vmvnq_s32⚠ | Experimental (neon and v7 and AArch64) Vector bitwise not. |
| vmvnq_u8⚠ | Experimental (neon and v7 and AArch64) Vector bitwise not. |
| vmvnq_u16⚠ | Experimental (neon and v7 and AArch64) Vector bitwise not. |
| vmvnq_u32⚠ | Experimental (neon and v7 and AArch64) Vector bitwise not. |
| vorr_s8⚠ | Experimental (neon and v7 and AArch64) Vector bitwise or (immediate, inclusive) |
| vorr_s16⚠ | Experimental (neon and v7 and AArch64) Vector bitwise or (immediate, inclusive) |
| vorr_s32⚠ | Experimental (neon and v7 and AArch64) Vector bitwise or (immediate, inclusive) |
| vorr_s64⚠ | Experimental (neon and v7 and AArch64) Vector bitwise or (immediate, inclusive) |
| vorr_u8⚠ | Experimental (neon and v7 and AArch64) Vector bitwise or (immediate, inclusive) |
| vorr_u16⚠ | Experimental (neon and v7 and AArch64) Vector bitwise or (immediate, inclusive) |
| vorr_u32⚠ | Experimental (neon and v7 and AArch64) Vector bitwise or (immediate, inclusive) |
| vorr_u64⚠ | Experimental (neon and v7 and AArch64) Vector bitwise or (immediate, inclusive) |
| vorrq_s8⚠ | Experimental (neon and v7 and AArch64) Vector bitwise or (immediate, inclusive) |
| vorrq_s16⚠ | Experimental (neon and v7 and AArch64) Vector bitwise or (immediate, inclusive) |
| vorrq_s32⚠ | Experimental (neon and v7 and AArch64) Vector bitwise or (immediate, inclusive) |
| vorrq_s64⚠ | Experimental (neon and v7 and AArch64) Vector bitwise or (immediate, inclusive) |
| vorrq_u8⚠ | Experimental (neon and v7 and AArch64) Vector bitwise or (immediate, inclusive) |
| vorrq_u16⚠ | Experimental (neon and v7 and AArch64) Vector bitwise or (immediate, inclusive) |
| vorrq_u32⚠ | Experimental (neon and v7 and AArch64) Vector bitwise or (immediate, inclusive) |
| vorrq_u64⚠ | Experimental (neon and v7 and AArch64) Vector bitwise or (immediate, inclusive) |
| vpaddq_u8⚠ | Experimental (AArch64 and neon) Add pairwise |
| vpmax_f32⚠ | Experimental (neon and v7 and AArch64) Folding maximum of adjacent pairs |
| vpmax_s8⚠ | Experimental (neon and v7 and AArch64) Folding maximum of adjacent pairs |
| vpmax_s16⚠ | Experimental (neon and v7 and AArch64) Folding maximum of adjacent pairs |
| vpmax_s32⚠ | Experimental (neon and v7 and AArch64) Folding maximum of adjacent pairs |
| vpmax_u8⚠ | Experimental (neon and v7 and AArch64) Folding maximum of adjacent pairs |
| vpmax_u16⚠ | Experimental (neon and v7 and AArch64) Folding maximum of adjacent pairs |
| vpmax_u32⚠ | Experimental (neon and v7 and AArch64) Folding maximum of adjacent pairs |
| vpmaxq_f32⚠ | Experimental (AArch64 and neon) Folding maximum of adjacent pairs |
| vpmaxq_f64⚠ | Experimental (AArch64 and neon) Folding maximum of adjacent pairs |
| vpmaxq_s8⚠ | Experimental (AArch64 and neon) Folding maximum of adjacent pairs |
| vpmaxq_s16⚠ | Experimental (AArch64 and neon) Folding maximum of adjacent pairs |
| vpmaxq_s32⚠ | Experimental (AArch64 and neon) Folding maximum of adjacent pairs |
| vpmaxq_u8⚠ | Experimental (AArch64 and neon) Folding maximum of adjacent pairs |
| vpmaxq_u16⚠ | Experimental (AArch64 and neon) Folding maximum of adjacent pairs |
| vpmaxq_u32⚠ | Experimental (AArch64 and neon) Folding maximum of adjacent pairs |
| vpmin_f32⚠ | Experimental (neon and v7 and AArch64) Folding minimum of adjacent pairs |
| vpmin_s8⚠ | Experimental (neon and v7 and AArch64) Folding minimum of adjacent pairs |
| vpmin_s16⚠ | Experimental (neon and v7 and AArch64) Folding minimum of adjacent pairs |
| vpmin_s32⚠ | Experimental (neon and v7 and AArch64) Folding minimum of adjacent pairs |
| vpmin_u8⚠ | Experimental (neon and v7 and AArch64) Folding minimum of adjacent pairs |
| vpmin_u16⚠ | Experimental (neon and v7 and AArch64) Folding minimum of adjacent pairs |
| vpmin_u32⚠ | Experimental (neon and v7 and AArch64) Folding minimum of adjacent pairs |
| vpminq_f32⚠ | Experimental (AArch64 and neon) Folding minimum of adjacent pairs |
| vpminq_f64⚠ | Experimental (AArch64 and neon) Folding minimum of adjacent pairs |
| vpminq_s8⚠ | Experimental (AArch64 and neon) Folding minimum of adjacent pairs |
| vpminq_s16⚠ | Experimental (AArch64 and neon) Folding minimum of adjacent pairs |
| vpminq_s32⚠ | Experimental (AArch64 and neon) Folding minimum of adjacent pairs |
| vpminq_u8⚠ | Experimental (AArch64 and neon) Folding minimum of adjacent pairs |
| vpminq_u16⚠ | Experimental (AArch64 and neon) Folding minimum of adjacent pairs |
| vpminq_u32⚠ | Experimental (AArch64 and neon) Folding minimum of adjacent pairs |
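The "folding" pairwise operations take the max (or min) of each adjacent pair drawn from the two inputs laid end to end, halving the lane count per input. A scalar sketch (illustrative helper):

```rust
// Scalar sketch of vpmax_u8: max of each adjacent pair, with the pairs
// of `a` followed by the pairs of `b`. Illustrative helper only.
fn pmax(a: &[u8], b: &[u8]) -> Vec<u8> {
    a.chunks(2)
        .chain(b.chunks(2))
        .map(|pair| pair[0].max(pair[1]))
        .collect()
}

fn main() {
    // Pairs: (1,4) (3,2) from a, then (9,0) (5,6) from b.
    assert_eq!(pmax(&[1, 4, 3, 2], &[9, 0, 5, 6]), vec![4, 3, 9, 6]);
}
```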
| vqadd_s8⚠ | Experimental (neon and v7 and AArch64) Saturating add |
| vqadd_s16⚠ | Experimental (neon and v7 and AArch64) Saturating add |
| vqadd_s32⚠ | Experimental (neon and v7 and AArch64) Saturating add |
| vqadd_u8⚠ | Experimental (neon and v7 and AArch64) Saturating add |
| vqadd_u16⚠ | Experimental (neon and v7 and AArch64) Saturating add |
| vqadd_u32⚠ | Experimental (neon and v7 and AArch64) Saturating add |
| vqaddq_s8⚠ | Experimental (neon and v7 and AArch64) Saturating add |
| vqaddq_s16⚠ | Experimental (neon and v7 and AArch64) Saturating add |
| vqaddq_s32⚠ | Experimental (neon and v7 and AArch64) Saturating add |
| vqaddq_u8⚠ | Experimental (neon and v7 and AArch64) Saturating add |
| vqaddq_u16⚠ | Experimental (neon and v7 and AArch64) Saturating add |
| vqaddq_u32⚠ | Experimental (neon and v7 and AArch64) Saturating add |
| vqmovn_u64⚠ | Experimental (neon and v7 and AArch64) Unsigned saturating extract narrow. |
| vqsub_s8⚠ | Experimental (neon and v7 and AArch64) Saturating subtract |
| vqsub_s16⚠ | Experimental (neon and v7 and AArch64) Saturating subtract |
| vqsub_s32⚠ | Experimental (neon and v7 and AArch64) Saturating subtract |
| vqsub_u8⚠ | Experimental (neon and v7 and AArch64) Saturating subtract |
| vqsub_u16⚠ | Experimental (neon and v7 and AArch64) Saturating subtract |
| vqsub_u32⚠ | Experimental (neon and v7 and AArch64) Saturating subtract |
| vqsubq_s8⚠ | Experimental (neon and v7 and AArch64) Saturating subtract |
| vqsubq_s16⚠ | Experimental (neon and v7 and AArch64) Saturating subtract |
| vqsubq_s32⚠ | Experimental (neon and v7 and AArch64) Saturating subtract |
| vqsubq_u8⚠ | Experimental (neon and v7 and AArch64) Saturating subtract |
| vqsubq_u16⚠ | Experimental (neon and v7 and AArch64) Saturating subtract |
| vqsubq_u32⚠ | Experimental (neon and v7 and AArch64) Saturating subtract |
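Per lane, the `vqadd*`/`vqsub*` family clamps at the type's bounds instead of wrapping, matching Rust's scalar `saturating_add`/`saturating_sub`:

```rust
fn main() {
    // vqadd_u8 per-lane semantics: clamp at u8::MAX instead of wrapping.
    assert_eq!(250u8.saturating_add(10), 255);
    // vqsub_u8 per-lane semantics: clamp at zero.
    assert_eq!(5u8.saturating_sub(10), 0);
    // Signed variants clamp at i8::MIN / i8::MAX.
    assert_eq!(120i8.saturating_add(10), 127);
    assert_eq!((-120i8).saturating_sub(10), -128);
}
```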
| vqtbl1_p8⚠ | Experimental (AArch64 and neon) Table look-up |
| vqtbl1_s8⚠ | Experimental (AArch64 and neon) Table look-up |
| vqtbl1_u8⚠ | Experimental (AArch64 and neon) Table look-up |
| vqtbl1q_p8⚠ | Experimental (AArch64 and neon) Table look-up |
| vqtbl1q_s8⚠ | Experimental (AArch64 and neon) Table look-up |
| vqtbl1q_u8⚠ | Experimental (AArch64 and neon) Table look-up |
| vqtbl2_p8⚠ | Experimental (AArch64 and neon) Table look-up |
| vqtbl2_s8⚠ | Experimental (AArch64 and neon) Table look-up |
| vqtbl2_u8⚠ | Experimental (AArch64 and neon) Table look-up |
| vqtbl2q_p8⚠ | Experimental (AArch64 and neon) Table look-up |
| vqtbl2q_s8⚠ | Experimental (AArch64 and neon) Table look-up |
| vqtbl2q_u8⚠ | Experimental (AArch64 and neon) Table look-up |
| vqtbl3_p8⚠ | Experimental (AArch64 and neon) Table look-up |
| vqtbl3_s8⚠ | Experimental (AArch64 and neon) Table look-up |
| vqtbl3_u8⚠ | Experimental (AArch64 and neon) Table look-up |
| vqtbl3q_p8⚠ | Experimental (AArch64 and neon) Table look-up |
| vqtbl3q_s8⚠ | Experimental (AArch64 and neon) Table look-up |
| vqtbl3q_u8⚠ | Experimental (AArch64 and neon) Table look-up |
| vqtbl4_p8⚠ | Experimental (AArch64 and neon) Table look-up |
| vqtbl4_s8⚠ | Experimental (AArch64 and neon) Table look-up |
| vqtbl4_u8⚠ | Experimental (AArch64 and neon) Table look-up |
| vqtbl4q_p8⚠ | Experimental (AArch64 and neon) Table look-up |
| vqtbl4q_s8⚠ | Experimental (AArch64 and neon) Table look-up |
| vqtbl4q_u8⚠ | Experimental (AArch64 and neon) Table look-up |
| vqtbx1_p8⚠ | Experimental (AArch64 and neon) Extended table look-up |
| vqtbx1_s8⚠ | Experimental (AArch64 and neon) Extended table look-up |
| vqtbx1_u8⚠ | Experimental (AArch64 and neon) Extended table look-up |
| vqtbx1q_p8⚠ | Experimental (AArch64 and neon) Extended table look-up |
| vqtbx1q_s8⚠ | Experimental (AArch64 and neon) Extended table look-up |
| vqtbx1q_u8⚠ | Experimental (AArch64 and neon) Extended table look-up |
| vqtbx2_p8⚠ | Experimental (AArch64 and neon) Extended table look-up |
| vqtbx2_s8⚠ | Experimental (AArch64 and neon) Extended table look-up |
| vqtbx2_u8⚠ | Experimental (AArch64 and neon) Extended table look-up |
| vqtbx2q_p8⚠ | Experimental (AArch64 and neon) Extended table look-up |
| vqtbx2q_s8⚠ | Experimental (AArch64 and neon) Extended table look-up |
| vqtbx2q_u8⚠ | Experimental (AArch64 and neon) Extended table look-up |
| vqtbx3_p8⚠ | Experimental (AArch64 and neon) Extended table look-up |
| vqtbx3_s8⚠ | Experimental (AArch64 and neon) Extended table look-up |
| vqtbx3_u8⚠ | Experimental (AArch64 and neon) Extended table look-up |
| vqtbx3q_p8⚠ | Experimental (AArch64 and neon) Extended table look-up |
| vqtbx3q_s8⚠ | Experimental (AArch64 and neon) Extended table look-up |
| vqtbx3q_u8⚠ | Experimental (AArch64 and neon) Extended table look-up |
| vqtbx4_p8⚠ | Experimental (AArch64 and neon) Extended table look-up |
| vqtbx4_s8⚠ | Experimental (AArch64 and neon) Extended table look-up |
| vqtbx4_u8⚠ | Experimental (AArch64 and neon) Extended table look-up |
| vqtbx4q_p8⚠ | Experimental (AArch64 and neon) Extended table look-up |
| vqtbx4q_s8⚠ | Experimental (AArch64 and neon) Extended table look-up |
| vqtbx4q_u8⚠ | Experimental (AArch64 and neon) Extended table look-up |
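In the plain `vqtbl*` look-up, each byte of the control vector indexes into the table (the 2/3/4 variants simply concatenate more table registers), and an out-of-range index yields 0. A scalar sketch (illustrative helper):

```rust
// Scalar sketch of vqtbl1_u8: each index byte selects a table byte;
// an out-of-range index produces 0. Illustrative helper only.
fn tbl(table: &[u8; 16], idx: &[u8]) -> Vec<u8> {
    idx.iter()
        .map(|&i| if (i as usize) < table.len() { table[i as usize] } else { 0 })
        .collect()
}

fn main() {
    let t: [u8; 16] = core::array::from_fn(|i| (i * 10) as u8);
    // Index 200 is out of range for a 16-byte table, so that lane is 0.
    assert_eq!(tbl(&t, &[0, 3, 15, 200]), vec![0, 30, 150, 0]);
}
```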
| vreinterpret_u64_u32⚠ | Experimental (neon and v7 and AArch64) Vector reinterpret cast operation |
| vreinterpretq_s8_u8⚠ | Experimental (neon and v7 and AArch64) Vector reinterpret cast operation |
| vreinterpretq_u16_u8⚠ | Experimental (neon and v7 and AArch64) Vector reinterpret cast operation |
| vreinterpretq_u32_u8⚠ | Experimental (neon and v7 and AArch64) Vector reinterpret cast operation |
| vreinterpretq_u64_u8⚠ | Experimental (neon and v7 and AArch64) Vector reinterpret cast operation |
| vreinterpretq_u8_s8⚠ | Experimental (neon and v7 and AArch64) Vector reinterpret cast operation |
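A reinterpret cast changes only how the register's bits are viewed as lanes; no bits move and no conversion happens. The little-endian lane grouping can be sketched with a byte-level cast:

```rust
fn main() {
    // Sketch of vreinterpretq_u32_u8 on one 32-bit lane: four u8 lanes
    // are viewed as a single u32, little-endian, bits unchanged.
    let bytes = [0x78u8, 0x56, 0x34, 0x12];
    assert_eq!(u32::from_le_bytes(bytes), 0x1234_5678);
    // Round-tripping back recovers the original byte lanes.
    assert_eq!(0x1234_5678u32.to_le_bytes(), bytes);
}
```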
| vrhadd_s8⚠ | Experimental (neon and v7 and AArch64) Rounding halving add |
| vrhadd_s16⚠ | Experimental (neon and v7 and AArch64) Rounding halving add |
| vrhadd_s32⚠ | Experimental (neon and v7 and AArch64) Rounding halving add |
| vrhadd_u8⚠ | Experimental (neon and v7 and AArch64) Rounding halving add |
| vrhadd_u16⚠ | Experimental (neon and v7 and AArch64) Rounding halving add |
| vrhadd_u32⚠ | Experimental (neon and v7 and AArch64) Rounding halving add |
| vrhaddq_s8⚠ | Experimental (neon and v7 and AArch64) Rounding halving add |
| vrhaddq_s16⚠ | Experimental (neon and v7 and AArch64) Rounding halving add |
| vrhaddq_s32⚠ | Experimental (neon and v7 and AArch64) Rounding halving add |
| vrhaddq_u8⚠ | Experimental (neon and v7 and AArch64) Rounding halving add |
| vrhaddq_u16⚠ | Experimental (neon and v7 and AArch64) Rounding halving add |
| vrhaddq_u32⚠ | Experimental (neon and v7 and AArch64) Rounding halving add |
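"Rounding halving add" computes `(a + b + 1) >> 1` per lane, with the intermediate sum held at double width so it cannot overflow. A scalar sketch (illustrative helper):

```rust
// Scalar sketch of vrhadd_u8: the average of a and b, rounded up on
// ties, computed in u16 so the sum a + b + 1 cannot overflow.
fn rhadd_u8(a: u8, b: u8) -> u8 {
    ((a as u16 + b as u16 + 1) >> 1) as u8
}

fn main() {
    assert_eq!(rhadd_u8(1, 2), 2);     // (1 + 2 + 1) >> 1 = 2: rounds up
    assert_eq!(rhadd_u8(255, 255), 255); // wide intermediate avoids overflow
}
```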
| vrsqrte_f32⚠ | Experimental (AArch64 and neon) Reciprocal square-root estimate. |
| vsha1cq_u32⚠ | Experimental (AArch64 and crypto) SHA1 hash update accelerator, choose. |
| vsha1h_u32⚠ | Experimental (AArch64 and crypto) SHA1 fixed rotate. |
| vsha1mq_u32⚠ | Experimental (AArch64 and crypto) SHA1 hash update accelerator, majority. |
| vsha1pq_u32⚠ | Experimental (AArch64 and crypto) SHA1 hash update accelerator, parity. |
| vsha1su0q_u32⚠ | Experimental (AArch64 and crypto) SHA1 schedule update accelerator, first part. |
| vsha1su1q_u32⚠ | Experimental (AArch64 and crypto) SHA1 schedule update accelerator, second part. |
| vsha256h2q_u32⚠ | Experimental (AArch64 and crypto) SHA256 hash update accelerator, upper part. |
| vsha256hq_u32⚠ | Experimental (AArch64 and crypto) SHA256 hash update accelerator. |
| vsha256su0q_u32⚠ | Experimental (AArch64 and crypto) SHA256 schedule update accelerator, first part. |
| vsha256su1q_u32⚠ | Experimental (AArch64 and crypto) SHA256 schedule update accelerator, second part. |
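The simplest of the crypto helpers, `vsha1h_u32`, is SHA1's fixed rotate: a 32-bit rotate right by 2 (equivalently, rotate left by 30), the `S^30` step of the SHA1 round function. In plain Rust:

```rust
fn main() {
    // SHA1 fixed rotate: rotate right by 2 == rotate left by 30 on u32.
    let x: u32 = 0x8000_0001;
    assert_eq!(x.rotate_right(2), x.rotate_left(30));
    // Rotation, unlike a shift, wraps bits around the word.
    assert_eq!(0b100u32.rotate_right(2), 1);
    assert_eq!(1u32.rotate_right(2), 0x4000_0000);
}
```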
| vshlq_n_u8⚠ | Experimental (neon and v7 and AArch64) Shift left |
| vshrq_n_u8⚠ | Experimental (neon and v7 and AArch64) Unsigned shift right |
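Both shifts take the shift amount as a compile-time immediate (`_n_`) and apply it to every lane; the per-lane semantics match Rust's scalar shifts:

```rust
fn main() {
    // vshlq_n_u8::<2> per-lane semantics: shift each byte left by 2.
    assert_eq!(0b0001_0110u8 << 2, 0b0101_1000);
    // vshrq_n_u8::<2> per-lane semantics: unsigned (logical) right shift.
    assert_eq!(0b0101_1000u8 >> 2, 0b0001_0110);
}
```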
| vsub_f32⚠ | Experimental (neon and v7 and AArch64) Subtract |
| vsub_f64⚠ | Experimental (AArch64 and neon) Subtract |
| vsub_s8⚠ | Experimental (neon and v7 and AArch64) Subtract |
| vsub_s16⚠ | Experimental (neon and v7 and AArch64) Subtract |
| vsub_s32⚠ | Experimental (neon and v7 and AArch64) Subtract |
| vsub_s64⚠ | Experimental (neon and v7 and AArch64) Subtract |
| vsub_u8⚠ | Experimental (neon and v7 and AArch64) Subtract |
| vsub_u16⚠ | Experimental (neon and v7 and AArch64) Subtract |
| vsub_u32⚠ | Experimental (neon and v7 and AArch64) Subtract |
| vsub_u64⚠ | Experimental (neon and v7 and AArch64) Subtract |
| vsubq_f32⚠ | Experimental (neon and v7 and AArch64) Subtract |
| vsubq_f64⚠ | Experimental (AArch64 and neon) Subtract |
| vsubq_s8⚠ | Experimental (neon and v7 and AArch64) Subtract |
| vsubq_s16⚠ | Experimental (neon and v7 and AArch64) Subtract |
| vsubq_s32⚠ | Experimental (neon and v7 and AArch64) Subtract |
| vsubq_s64⚠ | Experimental (neon and v7 and AArch64) Subtract |
| vsubq_u8⚠ | Experimental (neon and v7 and AArch64) Subtract |
| vsubq_u16⚠ | Experimental (neon and v7 and AArch64) Subtract |
| vsubq_u32⚠ | Experimental (neon and v7 and AArch64) Subtract |
| vsubq_u64⚠ | Experimental (neon and v7 and AArch64) Subtract |
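Unlike the saturating `vqsub*` family above, the integer `vsub*` variants wrap on overflow (modular arithmetic), matching Rust's `wrapping_sub`:

```rust
fn main() {
    // vsub_u8 per-lane semantics: modular subtraction, no clamping.
    assert_eq!(5u8.wrapping_sub(10), 251); // 5 - 10 mod 256
    // Contrast with the saturating vqsub_u8, which clamps at zero.
    assert_eq!(5u8.saturating_sub(10), 0);
}
```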
| vtbl1_p8⚠ | Experimental (AArch64 and neon, v7) Table look-up |
| vtbl1_s8⚠ | Experimental (AArch64 and neon, v7) Table look-up |
| vtbl1_u8⚠ | Experimental (AArch64 and neon, v7) Table look-up |
| vtbl2_p8⚠ | Experimental (AArch64 and neon, v7) Table look-up |
| vtbl2_s8⚠ | Experimental (AArch64 and neon, v7) Table look-up |
| vtbl2_u8⚠ | Experimental (AArch64 and neon, v7) Table look-up |
| vtbl3_p8⚠ | Experimental (AArch64 and neon, v7) Table look-up |
| vtbl3_s8⚠ | Experimental (AArch64 and neon, v7) Table look-up |
| vtbl3_u8⚠ | Experimental (AArch64 and neon, v7) Table look-up |
| vtbl4_p8⚠ | Experimental (AArch64 and neon, v7) Table look-up |
| vtbl4_s8⚠ | Experimental (AArch64 and neon, v7) Table look-up |
| vtbl4_u8⚠ | Experimental (AArch64 and neon, v7) Table look-up |
| vtbx1_p8⚠ | Experimental (AArch64 and neon, v7) Extended table look-up |
| vtbx1_s8⚠ | Experimental (AArch64 and neon, v7) Extended table look-up |
| vtbx1_u8⚠ | Experimental (AArch64 and neon, v7) Extended table look-up |
| vtbx2_p8⚠ | Experimental (AArch64 and neon, v7) Extended table look-up |
| vtbx2_s8⚠ | Experimental (AArch64 and neon, v7) Extended table look-up |
| vtbx2_u8⚠ | Experimental (AArch64 and neon, v7) Extended table look-up |
| vtbx3_p8⚠ | Experimental (AArch64 and neon, v7) Extended table look-up |
| vtbx3_s8⚠ | Experimental (AArch64 and neon, v7) Extended table look-up |
| vtbx3_u8⚠ | Experimental (AArch64 and neon, v7) Extended table look-up |
| vtbx4_p8⚠ | Experimental (AArch64 and neon, v7) Extended table look-up |
| vtbx4_s8⚠ | Experimental (AArch64 and neon, v7) Extended table look-up |
| vtbx4_u8⚠ | Experimental (AArch64 and neon, v7) Extended table look-up |
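The "extended" (`vtbx*`/`vqtbx*`) variants differ from the plain look-up in one detail: for an out-of-range index, the lane keeps the corresponding element of the destination vector instead of becoming 0. A scalar sketch (illustrative helper):

```rust
// Scalar sketch of the extended look-up: in-range indices read the
// table; out-of-range indices keep the destination's element.
// Illustrative helper, not the intrinsic itself.
fn tbx(dest: &[u8], table: &[u8], idx: &[u8]) -> Vec<u8> {
    idx.iter()
        .enumerate()
        .map(|(lane, &i)| {
            if (i as usize) < table.len() {
                table[i as usize]
            } else {
                dest[lane] // vtbl would produce 0 here instead
            }
        })
        .collect()
}

fn main() {
    // Index 99 is out of range: lane 1 keeps the destination value 7.
    assert_eq!(tbx(&[7, 7, 7], &[10, 20, 30], &[1, 99, 2]), vec![20, 7, 30]);
}
```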