Module core::arch::x86 (since Rust 1.27.0)
Platform-specific intrinsics for the x86 platform.
See the module documentation for more details.
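The intrinsics in this module are all `unsafe` and gated on CPU features. A minimal sketch of the usual calling pattern (the function and names below are illustrative, not part of this module): gate code on the target architecture, mark the function with `#[target_feature]`, and verify support at runtime with `is_x86_feature_detected!` before calling it.

```rust
#[cfg(target_arch = "x86")]
use std::arch::x86::*;
#[cfg(target_arch = "x86_64")]
use std::arch::x86_64::*;

// Hypothetical helper: broadcast x and y into vectors and add lane-wise.
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
#[target_feature(enable = "sse2")]
unsafe fn add_four(x: i32, y: i32) -> [i32; 4] {
    // _mm_set1_epi32 broadcasts a scalar; _mm_add_epi32 adds lane-wise.
    let sum = _mm_add_epi32(_mm_set1_epi32(x), _mm_set1_epi32(y));
    let mut out = [0i32; 4];
    // Unaligned store of the vector into the output array.
    _mm_storeu_si128(out.as_mut_ptr() as *mut __m128i, sum);
    out
}

fn add_four_checked(x: i32, y: i32) -> Option<[i32; 4]> {
    #[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
    {
        // Runtime check makes the unsafe call sound on this CPU.
        if is_x86_feature_detected!("sse2") {
            return Some(unsafe { add_four(x, y) });
        }
    }
    #[allow(unreachable_code)]
    None
}

fn main() {
    println!("{:?}", add_four_checked(3, 4));
}
```

Returning `Option` keeps the example portable: on a non-x86 target or a CPU without SSE2 it degrades to `None` instead of executing an unsupported instruction.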
Structs
| CpuidResult | x86 Result of the `cpuid` instruction. |
| __m128 | x86 128-bit wide set of four `f32` types, x86-specific |
| __m128d | x86 128-bit wide set of two `f64` types, x86-specific |
| __m128i | x86 128-bit wide integer vector type, x86-specific |
| __m256 | x86 256-bit wide set of eight `f32` types, x86-specific |
| __m256d | x86 256-bit wide set of four `f64` types, x86-specific |
| __m256i | x86 256-bit wide integer vector type, x86-specific |
| __m512 | Experimental x86 512-bit wide set of sixteen `f32` types, x86-specific |
| __m512d | Experimental x86 512-bit wide set of eight `f64` types, x86-specific |
| __m512i | Experimental x86 512-bit wide integer vector type, x86-specific |
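The `CpuidResult` struct above pairs with the `__cpuid` function listed further down: leaf 0 returns the maximum supported leaf in `eax` and the CPU vendor string ("GenuineIntel", "AuthenticAMD", ...) split across `ebx`, `edx`, `ecx`, in that order. A hedged sketch (the `vendor_string` helper is illustrative, not part of this module):

```rust
fn vendor_string() -> Option<String> {
    #[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
    {
        #[cfg(target_arch = "x86")]
        use std::arch::x86::__cpuid;
        #[cfg(target_arch = "x86_64")]
        use std::arch::x86_64::__cpuid;

        // cpuid is available on every modern x86 CPU; the call is unsafe
        // only because it is an architecture-specific instruction.
        let r = unsafe { __cpuid(0) };
        // The 12-byte vendor string lives in ebx, edx, ecx (little-endian).
        let mut bytes = Vec::with_capacity(12);
        for reg in [r.ebx, r.edx, r.ecx] {
            bytes.extend_from_slice(&reg.to_le_bytes());
        }
        return String::from_utf8(bytes).ok();
    }
    #[allow(unreachable_code)]
    None
}

fn main() {
    println!("{:?}", vendor_string());
}
```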
Constants
| _CMP_EQ_OQ | x86 Equal (ordered, non-signaling) |
| _CMP_EQ_OS | x86 Equal (ordered, signaling) |
| _CMP_EQ_UQ | x86 Equal (unordered, non-signaling) |
| _CMP_EQ_US | x86 Equal (unordered, signaling) |
| _CMP_FALSE_OQ | x86 False (ordered, non-signaling) |
| _CMP_FALSE_OS | x86 False (ordered, signaling) |
| _CMP_GE_OQ | x86 Greater-than-or-equal (ordered, non-signaling) |
| _CMP_GE_OS | x86 Greater-than-or-equal (ordered, signaling) |
| _CMP_GT_OQ | x86 Greater-than (ordered, non-signaling) |
| _CMP_GT_OS | x86 Greater-than (ordered, signaling) |
| _CMP_LE_OQ | x86 Less-than-or-equal (ordered, non-signaling) |
| _CMP_LE_OS | x86 Less-than-or-equal (ordered, signaling) |
| _CMP_LT_OQ | x86 Less-than (ordered, non-signaling) |
| _CMP_LT_OS | x86 Less-than (ordered, signaling) |
| _CMP_NEQ_OQ | x86 Not-equal (ordered, non-signaling) |
| _CMP_NEQ_OS | x86 Not-equal (ordered, signaling) |
| _CMP_NEQ_UQ | x86 Not-equal (unordered, non-signaling) |
| _CMP_NEQ_US | x86 Not-equal (unordered, signaling) |
| _CMP_NGE_UQ | x86 Not-greater-than-or-equal (unordered, non-signaling) |
| _CMP_NGE_US | x86 Not-greater-than-or-equal (unordered, signaling) |
| _CMP_NGT_UQ | x86 Not-greater-than (unordered, non-signaling) |
| _CMP_NGT_US | x86 Not-greater-than (unordered, signaling) |
| _CMP_NLE_UQ | x86 Not-less-than-or-equal (unordered, non-signaling) |
| _CMP_NLE_US | x86 Not-less-than-or-equal (unordered, signaling) |
| _CMP_NLT_UQ | x86 Not-less-than (unordered, non-signaling) |
| _CMP_NLT_US | x86 Not-less-than (unordered, signaling) |
| _CMP_ORD_Q | x86 Ordered (non-signaling) |
| _CMP_ORD_S | x86 Ordered (signaling) |
| _CMP_TRUE_UQ | x86 True (unordered, non-signaling) |
| _CMP_TRUE_US | x86 True (unordered, signaling) |
| _CMP_UNORD_Q | x86 Unordered (non-signaling) |
| _CMP_UNORD_S | x86 Unordered (signaling) |
| _MM_EXCEPT_DENORM | x86 See `_mm_setcsr` |
| _MM_EXCEPT_DIV_ZERO | x86 See `_mm_setcsr` |
| _MM_EXCEPT_INEXACT | x86 See `_mm_setcsr` |
| _MM_EXCEPT_INVALID | x86 See `_mm_setcsr` |
| _MM_EXCEPT_MASK | x86 See `_mm_setcsr` |
| _MM_EXCEPT_OVERFLOW | x86 See `_mm_setcsr` |
| _MM_EXCEPT_UNDERFLOW | x86 See `_mm_setcsr` |
| _MM_FLUSH_ZERO_MASK | x86 See `_mm_setcsr` |
| _MM_FLUSH_ZERO_OFF | x86 See `_mm_setcsr` |
| _MM_FLUSH_ZERO_ON | x86 See `_mm_setcsr` |
| _MM_FROUND_CEIL | x86 round up and do not suppress exceptions |
| _MM_FROUND_CUR_DIRECTION | x86 use MXCSR.RC; see `_mm_setcsr` |
| _MM_FROUND_FLOOR | x86 round down and do not suppress exceptions |
| _MM_FROUND_NEARBYINT | x86 use MXCSR.RC and suppress exceptions; see `_mm_setcsr` |
| _MM_FROUND_NINT | x86 round to nearest and do not suppress exceptions |
| _MM_FROUND_NO_EXC | x86 suppress exceptions |
| _MM_FROUND_RAISE_EXC | x86 do not suppress exceptions |
| _MM_FROUND_RINT | x86 use MXCSR.RC and do not suppress exceptions; see `_mm_setcsr` |
| _MM_FROUND_TO_NEAREST_INT | x86 round to nearest |
| _MM_FROUND_TO_NEG_INF | x86 round down |
| _MM_FROUND_TO_POS_INF | x86 round up |
| _MM_FROUND_TO_ZERO | x86 truncate |
| _MM_FROUND_TRUNC | x86 truncate and do not suppress exceptions |
| _MM_HINT_NTA | x86 See `_mm_prefetch` |
| _MM_HINT_T0 | x86 See `_mm_prefetch` |
| _MM_HINT_T1 | x86 See `_mm_prefetch` |
| _MM_HINT_T2 | x86 See `_mm_prefetch` |
| _MM_MASK_DENORM | x86 See `_mm_setcsr` |
| _MM_MASK_DIV_ZERO | x86 See `_mm_setcsr` |
| _MM_MASK_INEXACT | x86 See `_mm_setcsr` |
| _MM_MASK_INVALID | x86 See `_mm_setcsr` |
| _MM_MASK_MASK | x86 See `_mm_setcsr` |
| _MM_MASK_OVERFLOW | x86 See `_mm_setcsr` |
| _MM_MASK_UNDERFLOW | x86 See `_mm_setcsr` |
| _MM_ROUND_DOWN | x86 See `_mm_setcsr` |
| _MM_ROUND_MASK | x86 See `_mm_setcsr` |
| _MM_ROUND_NEAREST | x86 See `_mm_setcsr` |
| _MM_ROUND_TOWARD_ZERO | x86 See `_mm_setcsr` |
| _MM_ROUND_UP | x86 See `_mm_setcsr` |
| _SIDD_BIT_MASK | x86 Mask only: return the bit mask |
| _SIDD_CMP_EQUAL_ANY | x86 For each character in `a`, find if it is in `b` (Default) |
| _SIDD_CMP_EQUAL_EACH | x86 The strings defined by `a` and `b` are equal |
| _SIDD_CMP_EQUAL_ORDERED | x86 Search for the defined substring in the target |
| _SIDD_CMP_RANGES | x86 For each character `c` in `a`, determine whether `b[0] <= c <= b[1]` or `b[2] <= c <= b[3]`, etc. |
| _SIDD_LEAST_SIGNIFICANT | x86 Index only: return the least significant bit (Default) |
| _SIDD_MASKED_NEGATIVE_POLARITY | x86 Negates results only before the end of the string |
| _SIDD_MASKED_POSITIVE_POLARITY | x86 Do not negate results before the end of the string |
| _SIDD_MOST_SIGNIFICANT | x86 Index only: return the most significant bit |
| _SIDD_NEGATIVE_POLARITY | x86 Negates results |
| _SIDD_POSITIVE_POLARITY | x86 Do not negate results (Default) |
| _SIDD_SBYTE_OPS | x86 String contains signed 8-bit characters |
| _SIDD_SWORD_OPS | x86 String contains signed 16-bit characters |
| _SIDD_UBYTE_OPS | x86 String contains unsigned 8-bit characters (Default) |
| _SIDD_UNIT_MASK | x86 Mask only: return the byte mask |
| _SIDD_UWORD_OPS | x86 String contains unsigned 16-bit characters |
| _XCR_XFEATURE_ENABLED_MASK | x86 `XFEATURE_ENABLED_MASK` for `XCR` |
| _MM_CMPINT_EQ | Experimentalx86 Equal |
| _MM_CMPINT_FALSE | Experimentalx86 False |
| _MM_CMPINT_LE | Experimentalx86 Less-than-or-equal |
| _MM_CMPINT_LT | Experimentalx86 Less-than |
| _MM_CMPINT_NE | Experimentalx86 Not-equal |
| _MM_CMPINT_NLE | Experimentalx86 Not less-than-or-equal |
| _MM_CMPINT_NLT | Experimentalx86 Not less-than |
| _MM_CMPINT_TRUE | Experimentalx86 True |
| _MM_MANT_NORM_1_2 | Experimentalx86 interval [1, 2) |
| _MM_MANT_NORM_P5_1 | Experimentalx86 interval [0.5, 1) |
| _MM_MANT_NORM_P5_2 | Experimentalx86 interval [0.5, 2) |
| _MM_MANT_NORM_P75_1P5 | Experimentalx86 interval [0.75, 1.5) |
| _MM_MANT_SIGN_NAN | Experimentalx86 DEST = NaN if sign(SRC) = 1 |
| _MM_MANT_SIGN_SRC | Experimentalx86 sign = sign(SRC) |
| _MM_MANT_SIGN_ZERO | Experimentalx86 sign = 0 |
| _MM_PERM_AAAA | Experimentalx86 |
| _MM_PERM_AAAB | Experimentalx86 |
| _MM_PERM_AAAC | Experimentalx86 |
| _MM_PERM_AAAD | Experimentalx86 |
| _MM_PERM_AABA | Experimentalx86 |
| _MM_PERM_AABB | Experimentalx86 |
| _MM_PERM_AABC | Experimentalx86 |
| _MM_PERM_AABD | Experimentalx86 |
| _MM_PERM_AACA | Experimentalx86 |
| _MM_PERM_AACB | Experimentalx86 |
| _MM_PERM_AACC | Experimentalx86 |
| _MM_PERM_AACD | Experimentalx86 |
| _MM_PERM_AADA | Experimentalx86 |
| _MM_PERM_AADB | Experimentalx86 |
| _MM_PERM_AADC | Experimentalx86 |
| _MM_PERM_AADD | Experimentalx86 |
| _MM_PERM_ABAA | Experimentalx86 |
| _MM_PERM_ABAB | Experimentalx86 |
| _MM_PERM_ABAC | Experimentalx86 |
| _MM_PERM_ABAD | Experimentalx86 |
| _MM_PERM_ABBA | Experimentalx86 |
| _MM_PERM_ABBB | Experimentalx86 |
| _MM_PERM_ABBC | Experimentalx86 |
| _MM_PERM_ABBD | Experimentalx86 |
| _MM_PERM_ABCA | Experimentalx86 |
| _MM_PERM_ABCB | Experimentalx86 |
| _MM_PERM_ABCC | Experimentalx86 |
| _MM_PERM_ABCD | Experimentalx86 |
| _MM_PERM_ABDA | Experimentalx86 |
| _MM_PERM_ABDB | Experimentalx86 |
| _MM_PERM_ABDC | Experimentalx86 |
| _MM_PERM_ABDD | Experimentalx86 |
| _MM_PERM_ACAA | Experimentalx86 |
| _MM_PERM_ACAB | Experimentalx86 |
| _MM_PERM_ACAC | Experimentalx86 |
| _MM_PERM_ACAD | Experimentalx86 |
| _MM_PERM_ACBA | Experimentalx86 |
| _MM_PERM_ACBB | Experimentalx86 |
| _MM_PERM_ACBC | Experimentalx86 |
| _MM_PERM_ACBD | Experimentalx86 |
| _MM_PERM_ACCA | Experimentalx86 |
| _MM_PERM_ACCB | Experimentalx86 |
| _MM_PERM_ACCC | Experimentalx86 |
| _MM_PERM_ACCD | Experimentalx86 |
| _MM_PERM_ACDA | Experimentalx86 |
| _MM_PERM_ACDB | Experimentalx86 |
| _MM_PERM_ACDC | Experimentalx86 |
| _MM_PERM_ACDD | Experimentalx86 |
| _MM_PERM_ADAA | Experimentalx86 |
| _MM_PERM_ADAB | Experimentalx86 |
| _MM_PERM_ADAC | Experimentalx86 |
| _MM_PERM_ADAD | Experimentalx86 |
| _MM_PERM_ADBA | Experimentalx86 |
| _MM_PERM_ADBB | Experimentalx86 |
| _MM_PERM_ADBC | Experimentalx86 |
| _MM_PERM_ADBD | Experimentalx86 |
| _MM_PERM_ADCA | Experimentalx86 |
| _MM_PERM_ADCB | Experimentalx86 |
| _MM_PERM_ADCC | Experimentalx86 |
| _MM_PERM_ADCD | Experimentalx86 |
| _MM_PERM_ADDA | Experimentalx86 |
| _MM_PERM_ADDB | Experimentalx86 |
| _MM_PERM_ADDC | Experimentalx86 |
| _MM_PERM_ADDD | Experimentalx86 |
| _MM_PERM_BAAA | Experimentalx86 |
| _MM_PERM_BAAB | Experimentalx86 |
| _MM_PERM_BAAC | Experimentalx86 |
| _MM_PERM_BAAD | Experimentalx86 |
| _MM_PERM_BABA | Experimentalx86 |
| _MM_PERM_BABB | Experimentalx86 |
| _MM_PERM_BABC | Experimentalx86 |
| _MM_PERM_BABD | Experimentalx86 |
| _MM_PERM_BACA | Experimentalx86 |
| _MM_PERM_BACB | Experimentalx86 |
| _MM_PERM_BACC | Experimentalx86 |
| _MM_PERM_BACD | Experimentalx86 |
| _MM_PERM_BADA | Experimentalx86 |
| _MM_PERM_BADB | Experimentalx86 |
| _MM_PERM_BADC | Experimentalx86 |
| _MM_PERM_BADD | Experimentalx86 |
| _MM_PERM_BBAA | Experimentalx86 |
| _MM_PERM_BBAB | Experimentalx86 |
| _MM_PERM_BBAC | Experimentalx86 |
| _MM_PERM_BBAD | Experimentalx86 |
| _MM_PERM_BBBA | Experimentalx86 |
| _MM_PERM_BBBB | Experimentalx86 |
| _MM_PERM_BBBC | Experimentalx86 |
| _MM_PERM_BBBD | Experimentalx86 |
| _MM_PERM_BBCA | Experimentalx86 |
| _MM_PERM_BBCB | Experimentalx86 |
| _MM_PERM_BBCC | Experimentalx86 |
| _MM_PERM_BBCD | Experimentalx86 |
| _MM_PERM_BBDA | Experimentalx86 |
| _MM_PERM_BBDB | Experimentalx86 |
| _MM_PERM_BBDC | Experimentalx86 |
| _MM_PERM_BBDD | Experimentalx86 |
| _MM_PERM_BCAA | Experimentalx86 |
| _MM_PERM_BCAB | Experimentalx86 |
| _MM_PERM_BCAC | Experimentalx86 |
| _MM_PERM_BCAD | Experimentalx86 |
| _MM_PERM_BCBA | Experimentalx86 |
| _MM_PERM_BCBB | Experimentalx86 |
| _MM_PERM_BCBC | Experimentalx86 |
| _MM_PERM_BCBD | Experimentalx86 |
| _MM_PERM_BCCA | Experimentalx86 |
| _MM_PERM_BCCB | Experimentalx86 |
| _MM_PERM_BCCC | Experimentalx86 |
| _MM_PERM_BCCD | Experimentalx86 |
| _MM_PERM_BCDA | Experimentalx86 |
| _MM_PERM_BCDB | Experimentalx86 |
| _MM_PERM_BCDC | Experimentalx86 |
| _MM_PERM_BCDD | Experimentalx86 |
| _MM_PERM_BDAA | Experimentalx86 |
| _MM_PERM_BDAB | Experimentalx86 |
| _MM_PERM_BDAC | Experimentalx86 |
| _MM_PERM_BDAD | Experimentalx86 |
| _MM_PERM_BDBA | Experimentalx86 |
| _MM_PERM_BDBB | Experimentalx86 |
| _MM_PERM_BDBC | Experimentalx86 |
| _MM_PERM_BDBD | Experimentalx86 |
| _MM_PERM_BDCA | Experimentalx86 |
| _MM_PERM_BDCB | Experimentalx86 |
| _MM_PERM_BDCC | Experimentalx86 |
| _MM_PERM_BDCD | Experimentalx86 |
| _MM_PERM_BDDA | Experimentalx86 |
| _MM_PERM_BDDB | Experimentalx86 |
| _MM_PERM_BDDC | Experimentalx86 |
| _MM_PERM_BDDD | Experimentalx86 |
| _MM_PERM_CAAA | Experimentalx86 |
| _MM_PERM_CAAB | Experimentalx86 |
| _MM_PERM_CAAC | Experimentalx86 |
| _MM_PERM_CAAD | Experimentalx86 |
| _MM_PERM_CABA | Experimentalx86 |
| _MM_PERM_CABB | Experimentalx86 |
| _MM_PERM_CABC | Experimentalx86 |
| _MM_PERM_CABD | Experimentalx86 |
| _MM_PERM_CACA | Experimentalx86 |
| _MM_PERM_CACB | Experimentalx86 |
| _MM_PERM_CACC | Experimentalx86 |
| _MM_PERM_CACD | Experimentalx86 |
| _MM_PERM_CADA | Experimentalx86 |
| _MM_PERM_CADB | Experimentalx86 |
| _MM_PERM_CADC | Experimentalx86 |
| _MM_PERM_CADD | Experimentalx86 |
| _MM_PERM_CBAA | Experimentalx86 |
| _MM_PERM_CBAB | Experimentalx86 |
| _MM_PERM_CBAC | Experimentalx86 |
| _MM_PERM_CBAD | Experimentalx86 |
| _MM_PERM_CBBA | Experimentalx86 |
| _MM_PERM_CBBB | Experimentalx86 |
| _MM_PERM_CBBC | Experimentalx86 |
| _MM_PERM_CBBD | Experimentalx86 |
| _MM_PERM_CBCA | Experimentalx86 |
| _MM_PERM_CBCB | Experimentalx86 |
| _MM_PERM_CBCC | Experimentalx86 |
| _MM_PERM_CBCD | Experimentalx86 |
| _MM_PERM_CBDA | Experimentalx86 |
| _MM_PERM_CBDB | Experimentalx86 |
| _MM_PERM_CBDC | Experimentalx86 |
| _MM_PERM_CBDD | Experimentalx86 |
| _MM_PERM_CCAA | Experimentalx86 |
| _MM_PERM_CCAB | Experimentalx86 |
| _MM_PERM_CCAC | Experimentalx86 |
| _MM_PERM_CCAD | Experimentalx86 |
| _MM_PERM_CCBA | Experimentalx86 |
| _MM_PERM_CCBB | Experimentalx86 |
| _MM_PERM_CCBC | Experimentalx86 |
| _MM_PERM_CCBD | Experimentalx86 |
| _MM_PERM_CCCA | Experimentalx86 |
| _MM_PERM_CCCB | Experimentalx86 |
| _MM_PERM_CCCC | Experimentalx86 |
| _MM_PERM_CCCD | Experimentalx86 |
| _MM_PERM_CCDA | Experimentalx86 |
| _MM_PERM_CCDB | Experimentalx86 |
| _MM_PERM_CCDC | Experimentalx86 |
| _MM_PERM_CCDD | Experimentalx86 |
| _MM_PERM_CDAA | Experimentalx86 |
| _MM_PERM_CDAB | Experimentalx86 |
| _MM_PERM_CDAC | Experimentalx86 |
| _MM_PERM_CDAD | Experimentalx86 |
| _MM_PERM_CDBA | Experimentalx86 |
| _MM_PERM_CDBB | Experimentalx86 |
| _MM_PERM_CDBC | Experimentalx86 |
| _MM_PERM_CDBD | Experimentalx86 |
| _MM_PERM_CDCA | Experimentalx86 |
| _MM_PERM_CDCB | Experimentalx86 |
| _MM_PERM_CDCC | Experimentalx86 |
| _MM_PERM_CDCD | Experimentalx86 |
| _MM_PERM_CDDA | Experimentalx86 |
| _MM_PERM_CDDB | Experimentalx86 |
| _MM_PERM_CDDC | Experimentalx86 |
| _MM_PERM_CDDD | Experimentalx86 |
| _MM_PERM_DAAA | Experimentalx86 |
| _MM_PERM_DAAB | Experimentalx86 |
| _MM_PERM_DAAC | Experimentalx86 |
| _MM_PERM_DAAD | Experimentalx86 |
| _MM_PERM_DABA | Experimentalx86 |
| _MM_PERM_DABB | Experimentalx86 |
| _MM_PERM_DABC | Experimentalx86 |
| _MM_PERM_DABD | Experimentalx86 |
| _MM_PERM_DACA | Experimentalx86 |
| _MM_PERM_DACB | Experimentalx86 |
| _MM_PERM_DACC | Experimentalx86 |
| _MM_PERM_DACD | Experimentalx86 |
| _MM_PERM_DADA | Experimentalx86 |
| _MM_PERM_DADB | Experimentalx86 |
| _MM_PERM_DADC | Experimentalx86 |
| _MM_PERM_DADD | Experimentalx86 |
| _MM_PERM_DBAA | Experimentalx86 |
| _MM_PERM_DBAB | Experimentalx86 |
| _MM_PERM_DBAC | Experimentalx86 |
| _MM_PERM_DBAD | Experimentalx86 |
| _MM_PERM_DBBA | Experimentalx86 |
| _MM_PERM_DBBB | Experimentalx86 |
| _MM_PERM_DBBC | Experimentalx86 |
| _MM_PERM_DBBD | Experimentalx86 |
| _MM_PERM_DBCA | Experimentalx86 |
| _MM_PERM_DBCB | Experimentalx86 |
| _MM_PERM_DBCC | Experimentalx86 |
| _MM_PERM_DBCD | Experimentalx86 |
| _MM_PERM_DBDA | Experimentalx86 |
| _MM_PERM_DBDB | Experimentalx86 |
| _MM_PERM_DBDC | Experimentalx86 |
| _MM_PERM_DBDD | Experimentalx86 |
| _MM_PERM_DCAA | Experimentalx86 |
| _MM_PERM_DCAB | Experimentalx86 |
| _MM_PERM_DCAC | Experimentalx86 |
| _MM_PERM_DCAD | Experimentalx86 |
| _MM_PERM_DCBA | Experimentalx86 |
| _MM_PERM_DCBB | Experimentalx86 |
| _MM_PERM_DCBC | Experimentalx86 |
| _MM_PERM_DCBD | Experimentalx86 |
| _MM_PERM_DCCA | Experimentalx86 |
| _MM_PERM_DCCB | Experimentalx86 |
| _MM_PERM_DCCC | Experimentalx86 |
| _MM_PERM_DCCD | Experimentalx86 |
| _MM_PERM_DCDA | Experimentalx86 |
| _MM_PERM_DCDB | Experimentalx86 |
| _MM_PERM_DCDC | Experimentalx86 |
| _MM_PERM_DCDD | Experimentalx86 |
| _MM_PERM_DDAA | Experimentalx86 |
| _MM_PERM_DDAB | Experimentalx86 |
| _MM_PERM_DDAC | Experimentalx86 |
| _MM_PERM_DDAD | Experimentalx86 |
| _MM_PERM_DDBA | Experimentalx86 |
| _MM_PERM_DDBB | Experimentalx86 |
| _MM_PERM_DDBC | Experimentalx86 |
| _MM_PERM_DDBD | Experimentalx86 |
| _MM_PERM_DDCA | Experimentalx86 |
| _MM_PERM_DDCB | Experimentalx86 |
| _MM_PERM_DDCC | Experimentalx86 |
| _MM_PERM_DDCD | Experimentalx86 |
| _MM_PERM_DDDA | Experimentalx86 |
| _MM_PERM_DDDB | Experimentalx86 |
| _MM_PERM_DDDC | Experimentalx86 |
| _MM_PERM_DDDD | Experimentalx86 |
| _XABORT_CAPACITY | Experimentalx86 Transaction abort due to the transaction using too much memory. |
| _XABORT_CONFLICT | Experimentalx86 Transaction abort due to a memory conflict with another thread. |
| _XABORT_DEBUG | Experimentalx86 Transaction abort due to a debug trap. |
| _XABORT_EXPLICIT | Experimental x86 Transaction explicitly aborted with xabort. The parameter passed to xabort is available with `_xabort_code()` |
| _XABORT_NESTED | Experimental x86 Transaction abort in an inner nested transaction. |
| _XABORT_RETRY | Experimentalx86 Transaction retry is possible. |
| _XBEGIN_STARTED | Experimentalx86 Transaction successfully started. |
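Many of the constants above exist only to be passed to a comparison or rounding intrinsic. As one hedged example, `_CMP_LT_OQ` selects the ordered, non-signaling less-than predicate for `_mm256_cmp_ps`; note that in the Rust 1.27-era API shown on this page the predicate was the intrinsic's last argument, while in current Rust it is a const generic parameter, which is the form used here (the `lt_mask` helper is illustrative):

```rust
fn lt_mask() -> Option<i32> {
    #[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
    {
        #[cfg(target_arch = "x86")]
        use std::arch::x86::*;
        #[cfg(target_arch = "x86_64")]
        use std::arch::x86_64::*;

        if is_x86_feature_detected!("avx") {
            unsafe {
                let a = _mm256_set1_ps(2.0);
                let b = _mm256_setr_ps(1.0, 3.0, 1.0, 3.0, 1.0, 3.0, 1.0, 3.0);
                // a < b (ordered, non-signaling) is true in lanes 1, 3, 5, 7.
                let m = _mm256_cmp_ps::<_CMP_LT_OQ>(a, b);
                // movemask packs the sign bit of each lane into an i32.
                return Some(_mm256_movemask_ps(m));
            }
        }
    }
    #[allow(unreachable_code)]
    None
}

fn main() {
    println!("{:?}", lt_mask());
}
```

The "O"/"U" suffix letters pick ordered vs. unordered NaN handling and "Q"/"S" pick quiet vs. signaling behavior, so `_CMP_LT_OQ` returns false (without raising an exception) when either operand is NaN.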
Functions
| _MM_GET_EXCEPTION_MASK⚠ | x86 and sse See `_mm_setcsr` |
| _MM_GET_EXCEPTION_STATE⚠ | x86 and sse See `_mm_setcsr` |
| _MM_GET_FLUSH_ZERO_MODE⚠ | x86 and sse See `_mm_setcsr` |
| _MM_GET_ROUNDING_MODE⚠ | x86 and sse See `_mm_setcsr` |
| _MM_SET_EXCEPTION_MASK⚠ | x86 and sse See `_mm_setcsr` |
| _MM_SET_EXCEPTION_STATE⚠ | x86 and sse See `_mm_setcsr` |
| _MM_SET_FLUSH_ZERO_MODE⚠ | x86 and sse See `_mm_setcsr` |
| _MM_SET_ROUNDING_MODE⚠ | x86 and sse See `_mm_setcsr` |
| _MM_TRANSPOSE4_PS⚠ | x86 and sse Transpose the 4x4 matrix formed by 4 rows of `__m128` in place. |
| __cpuid⚠ | x86 See `__cpuid_count` |
| __cpuid_count⚠ | x86 Returns the result of the `cpuid` instruction for a given `leaf` (`EAX`) and `sub_leaf` (`ECX`) |
| __get_cpuid_max⚠ | x86 Returns the highest-supported leaf (`EAX`) and sub-leaf (`ECX`) `cpuid` values |
| __rdtscp⚠ | x86 Reads the current value of the processor’s time-stamp counter and the `IA32_TSC_AUX MSR` |
| _addcarry_u32⚠ | x86 Adds unsigned 32-bit integers `a` and `b` with unsigned 8-bit carry-in `c_in` (carry flag), returning the carry-out and storing the sum in `out` |
| _addcarryx_u32⚠ | x86 and adx Adds unsigned 32-bit integers `a` and `b` with unsigned 8-bit carry-in `c_in` (carry or overflow flag), returning the carry-out and storing the sum in `out` |
| _andn_u32⚠ | x86 and bmi1 Bitwise logical `AND` of inverted `a` with `b` |
| _bextr2_u32⚠ | x86 and bmi1 Extracts bits of `a` specified by `control` into the least significant bits of the result |
| _bextr_u32⚠ | x86 and bmi1 Extracts bits in range [`start`, `start` + `length`) from `a` into the least significant bits of the result |
| _blcfill_u32⚠ | x86 and tbm Clears all bits below the least significant zero bit of `x` |
| _blcfill_u64⚠ | x86 and tbm Clears all bits below the least significant zero bit of `x` |
| _blci_u32⚠ | x86 and tbm Sets all bits of `x` to 1 except for the least significant zero bit |
| _blci_u64⚠ | x86 and tbm Sets all bits of `x` to 1 except for the least significant zero bit |
| _blcic_u32⚠ | x86 and tbm Sets the least significant zero bit of `x` and clears all other bits |
| _blcic_u64⚠ | x86 and tbm Sets the least significant zero bit of `x` and clears all other bits |
| _blcmsk_u32⚠ | x86 and tbm Sets the least significant zero bit of `x` and clears all bits above that bit |
| _blcmsk_u64⚠ | x86 and tbm Sets the least significant zero bit of `x` and clears all bits above that bit |
| _blcs_u32⚠ | x86 and tbm Sets the least significant zero bit of `x` |
| _blcs_u64⚠ | x86 and tbm Sets the least significant zero bit of `x` |
| _blsfill_u32⚠ | x86 and tbm Sets all bits of `x` below the least significant one |
| _blsfill_u64⚠ | x86 and tbm Sets all bits of `x` below the least significant one |
| _blsi_u32⚠ | x86 and bmi1 Extracts lowest set isolated bit. |
| _blsic_u32⚠ | x86 and tbm Clears least significant bit and sets all other bits. |
| _blsic_u64⚠ | x86 and tbm Clears least significant bit and sets all other bits. |
| _blsmsk_u32⚠ | x86 and bmi1 Gets mask up to lowest set bit. |
| _blsr_u32⚠ | x86 and bmi1 Resets the lowest set bit of `x` |
| _bswap⚠ | x86 Returns an integer with the reversed byte order of `x` |
| _bzhi_u32⚠ | x86 and bmi2 Zeroes higher bits of `a` >= `index` |
| _fxrstor⚠ | x86 and fxsr Restores the `XMM`, `MMX`, `MXCSR`, and x87 FPU registers from the 512-byte-long 16-byte-aligned memory region `mem_addr` |
| _fxsave⚠ | x86 and fxsr Saves the x87 FPU, `MMX` technology, `XMM`, and `MXCSR` registers to the 512-byte-long 16-byte-aligned memory region `mem_addr` |
| _lzcnt_u32⚠ | x86 and lzcnt Counts the leading most significant zero bits. |
| _mm256_abs_epi8⚠ | x86 and avx2 Computes the absolute values of packed 8-bit integers in `a` |
| _mm256_abs_epi16⚠ | x86 and avx2 Computes the absolute values of packed 16-bit integers in `a` |
| _mm256_abs_epi32⚠ | x86 and avx2 Computes the absolute values of packed 32-bit integers in `a` |
| _mm256_add_epi8⚠ | x86 and avx2 Adds packed 8-bit integers in `a` and `b` |
| _mm256_add_epi16⚠ | x86 and avx2 Adds packed 16-bit integers in `a` and `b` |
| _mm256_add_epi32⚠ | x86 and avx2 Adds packed 32-bit integers in `a` and `b` |
| _mm256_add_epi64⚠ | x86 and avx2 Adds packed 64-bit integers in `a` and `b` |
| _mm256_add_pd⚠ | x86 and avx Adds packed double-precision (64-bit) floating-point elements in `a` and `b` |
| _mm256_add_ps⚠ | x86 and avx Adds packed single-precision (32-bit) floating-point elements in `a` and `b` |
| _mm256_adds_epi8⚠ | x86 and avx2 Adds packed 8-bit integers in `a` and `b` using saturation |
| _mm256_adds_epi16⚠ | x86 and avx2 Adds packed 16-bit integers in `a` and `b` using saturation |
| _mm256_adds_epu8⚠ | x86 and avx2 Adds packed unsigned 8-bit integers in `a` and `b` using saturation |
| _mm256_adds_epu16⚠ | x86 and avx2 Adds packed unsigned 16-bit integers in `a` and `b` using saturation |
| _mm256_addsub_pd⚠ | x86 and avx Alternatively adds and subtracts packed double-precision (64-bit) floating-point elements in `a` to/from packed elements in `b` |
| _mm256_addsub_ps⚠ | x86 and avx Alternatively adds and subtracts packed single-precision (32-bit) floating-point elements in `a` to/from packed elements in `b` |
| _mm256_alignr_epi8⚠ | x86 and avx2 Concatenates pairs of 16-byte blocks in `a` and `b` into a 32-byte temporary result, shifts the result right by `n` bytes, and returns the low 16 bytes |
| _mm256_and_pd⚠ | x86 and avx Computes the bitwise AND of packed double-precision (64-bit) floating-point elements in `a` and `b` |
| _mm256_and_ps⚠ | x86 and avx Computes the bitwise AND of packed single-precision (32-bit) floating-point elements in `a` and `b` |
| _mm256_and_si256⚠ | x86 and avx2 Computes the bitwise AND of 256 bits (representing integer data) in `a` and `b` |
| _mm256_andnot_pd⚠ | x86 and avx Computes the bitwise NOT of packed double-precision (64-bit) floating-point elements in `a`, and then AND with `b` |
| _mm256_andnot_ps⚠ | x86 and avx Computes the bitwise NOT of packed single-precision (32-bit) floating-point elements in `a`, and then AND with `b` |
| _mm256_andnot_si256⚠ | x86 and avx2 Computes the bitwise NOT of 256 bits (representing integer data) in `a` and then AND with `b` |
| _mm256_avg_epu8⚠ | x86 and avx2 Averages packed unsigned 8-bit integers in `a` and `b` |
| _mm256_avg_epu16⚠ | x86 and avx2 Averages packed unsigned 16-bit integers in `a` and `b` |
| _mm256_blend_epi16⚠ | x86 and avx2 Blends packed 16-bit integers from `a` and `b` using control mask `imm8` |
| _mm256_blend_epi32⚠ | x86 and avx2 Blends packed 32-bit integers from `a` and `b` using control mask `imm8` |
| _mm256_blend_pd⚠ | x86 and avx Blends packed double-precision (64-bit) floating-point elements from `a` and `b` using control mask `imm8` |
| _mm256_blend_ps⚠ | x86 and avx Blends packed single-precision (32-bit) floating-point elements from `a` and `b` using control mask `imm8` |
| _mm256_blendv_epi8⚠ | x86 and avx2 Blends packed 8-bit integers from `a` and `b` using `mask` |
| _mm256_blendv_pd⚠ | x86 and avx Blends packed double-precision (64-bit) floating-point elements from `a` and `b` using `c` as a mask |
| _mm256_blendv_ps⚠ | x86 and avx Blends packed single-precision (32-bit) floating-point elements from `a` and `b` using `c` as a mask |
| _mm256_broadcast_pd⚠ | x86 and avx Broadcasts 128 bits from memory (composed of 2 packed double-precision (64-bit) floating-point elements) to all elements of the returned vector. |
| _mm256_broadcast_ps⚠ | x86 and avx Broadcasts 128 bits from memory (composed of 4 packed single-precision (32-bit) floating-point elements) to all elements of the returned vector. |
| _mm256_broadcast_sd⚠ | x86 and avx Broadcasts a double-precision (64-bit) floating-point element from memory to all elements of the returned vector. |
| _mm256_broadcast_ss⚠ | x86 and avx Broadcasts a single-precision (32-bit) floating-point element from memory to all elements of the returned vector. |
| _mm256_broadcastb_epi8⚠ | x86 and avx2 Broadcasts the low packed 8-bit integer from `a` to all elements of the 256-bit returned value |
| _mm256_broadcastd_epi32⚠ | x86 and avx2 Broadcasts the low packed 32-bit integer from `a` to all elements of the 256-bit returned value |
| _mm256_broadcastq_epi64⚠ | x86 and avx2 Broadcasts the low packed 64-bit integer from `a` to all elements of the 256-bit returned value |
| _mm256_broadcastsd_pd⚠ | x86 and avx2 Broadcasts the low double-precision (64-bit) floating-point element from `a` to all elements of the 256-bit returned value |
| _mm256_broadcastsi128_si256⚠ | x86 and avx2 Broadcasts 128 bits of integer data from `a` to all 128-bit lanes in the 256-bit returned value. |
| _mm256_broadcastss_ps⚠ | x86 and avx2 Broadcasts the low single-precision (32-bit) floating-point element from `a` to all elements of the 256-bit returned value |
| _mm256_broadcastw_epi16⚠ | x86 and avx2 Broadcasts the low packed 16-bit integer from `a` to all elements of the 256-bit returned value |
| _mm256_bslli_epi128⚠ | x86 and avx2 Shifts 128-bit lanes in `a` left by `imm8` bytes while shifting in zeros |
| _mm256_bsrli_epi128⚠ | x86 and avx2 Shifts 128-bit lanes in `a` right by `imm8` bytes while shifting in zeros |
| _mm256_castpd128_pd256⚠ | x86 and avx Casts vector of type __m128d to type __m256d; the upper 128 bits of the result are undefined. |
| _mm256_castpd256_pd128⚠ | x86 and avx Casts vector of type __m256d to type __m128d. |
| _mm256_castpd_ps⚠ | x86 and avx Casts vector of type __m256d to type __m256. |
| _mm256_castpd_si256⚠ | x86 and avx Casts vector of type __m256d to type __m256i. |
| _mm256_castps128_ps256⚠ | x86 and avx Casts vector of type __m128 to type __m256; the upper 128 bits of the result are undefined. |
| _mm256_castps256_ps128⚠ | x86 and avx Casts vector of type __m256 to type __m128. |
| _mm256_castps_pd⚠ | x86 and avx Casts vector of type __m256 to type __m256d. |
| _mm256_castps_si256⚠ | x86 and avx Casts vector of type __m256 to type __m256i. |
| _mm256_castsi128_si256⚠ | x86 and avx Casts vector of type __m128i to type __m256i; the upper 128 bits of the result are undefined. |
| _mm256_castsi256_pd⚠ | x86 and avx Casts vector of type __m256i to type __m256d. |
| _mm256_castsi256_ps⚠ | x86 and avx Casts vector of type __m256i to type __m256. |
| _mm256_castsi256_si128⚠ | x86 and avx Casts vector of type __m256i to type __m128i. |
| _mm256_ceil_pd⚠ | x86 and avx Rounds packed double-precision (64-bit) floating-point elements in `a` toward positive infinity |
| _mm256_ceil_ps⚠ | x86 and avx Rounds packed single-precision (32-bit) floating-point elements in `a` toward positive infinity |
| _mm256_cmp_pd⚠ | x86 and avx Compares packed double-precision (64-bit) floating-point elements in `a` and `b` based on the comparison operand specified by `imm8` |
| _mm256_cmp_ps⚠ | x86 and avx Compares packed single-precision (32-bit) floating-point elements in `a` and `b` based on the comparison operand specified by `imm8` |
| _mm256_cmpeq_epi8⚠ | x86 and avx2 Compares packed 8-bit integers in `a` and `b` for equality |
| _mm256_cmpeq_epi16⚠ | x86 and avx2 Compares packed 16-bit integers in `a` and `b` for equality |
| _mm256_cmpeq_epi32⚠ | x86 and avx2 Compares packed 32-bit integers in `a` and `b` for equality |
| _mm256_cmpeq_epi64⚠ | x86 and avx2 Compares packed 64-bit integers in `a` and `b` for equality |
| _mm256_cmpgt_epi8⚠ | x86 and avx2 Compares packed 8-bit integers in `a` and `b` for greater-than |
| _mm256_cmpgt_epi16⚠ | x86 and avx2 Compares packed 16-bit integers in `a` and `b` for greater-than |
| _mm256_cmpgt_epi32⚠ | x86 and avx2 Compares packed 32-bit integers in `a` and `b` for greater-than |
| _mm256_cmpgt_epi64⚠ | x86 and avx2 Compares packed 64-bit integers in `a` and `b` for greater-than |
| _mm256_cvtepi8_epi16⚠ | x86 and avx2 Sign-extends 8-bit integers to 16-bit integers. |
| _mm256_cvtepi8_epi32⚠ | x86 and avx2 Sign-extends 8-bit integers to 32-bit integers. |
| _mm256_cvtepi8_epi64⚠ | x86 and avx2 Sign-extends 8-bit integers to 64-bit integers. |
| _mm256_cvtepi16_epi32⚠ | x86 and avx2 Sign-extends 16-bit integers to 32-bit integers. |
| _mm256_cvtepi16_epi64⚠ | x86 and avx2 Sign-extends 16-bit integers to 64-bit integers. |
| _mm256_cvtepi32_epi64⚠ | x86 and avx2 Sign-extends 32-bit integers to 64-bit integers. |
| _mm256_cvtepi32_pd⚠ | x86 and avx Converts packed 32-bit integers in `a` to packed double-precision (64-bit) floating-point elements |
| _mm256_cvtepi32_ps⚠ | x86 and avx Converts packed 32-bit integers in `a` to packed single-precision (32-bit) floating-point elements |
| _mm256_cvtepu8_epi16⚠ | x86 and avx2 Zero-extends unsigned 8-bit integers in `a` to 16-bit integers |
| _mm256_cvtepu8_epi32⚠ | x86 and avx2 Zero-extends the lower eight unsigned 8-bit integers in `a` to 32-bit integers |
| _mm256_cvtepu8_epi64⚠ | x86 and avx2 Zero-extends the lower four unsigned 8-bit integers in `a` to 64-bit integers |
| _mm256_cvtepu16_epi32⚠ | x86 and avx2 Zero-extends packed unsigned 16-bit integers in `a` to packed 32-bit integers |
| _mm256_cvtepu16_epi64⚠ | x86 and avx2 Zero-extends the lower four unsigned 16-bit integers in `a` to 64-bit integers |
| _mm256_cvtepu32_epi64⚠ | x86 and avx2 Zero-extends unsigned 32-bit integers in `a` to 64-bit integers |
| _mm256_cvtpd_epi32⚠ | x86 and avx Converts packed double-precision (64-bit) floating-point elements in `a` to packed 32-bit integers |
| _mm256_cvtpd_ps⚠ | x86 and avx Converts packed double-precision (64-bit) floating-point elements in `a` to packed single-precision (32-bit) floating-point elements |
| _mm256_cvtps_epi32⚠ | x86 and avx Converts packed single-precision (32-bit) floating-point elements in `a` to packed 32-bit integers |
| _mm256_cvtps_pd⚠ | x86 and avx Converts packed single-precision (32-bit) floating-point elements in `a` to packed double-precision (64-bit) floating-point elements |
| _mm256_cvtsd_f64⚠ | x86 and avx2 Returns the first element of the input vector of `[4 x double]` |
| _mm256_cvtsi256_si32⚠ | x86 and avx2 Returns the first element of the input vector of `[8 x i32]` |
| _mm256_cvtss_f32⚠ | x86 and avx Returns the first element of the input vector of `[8 x float]` |
| _mm256_cvttpd_epi32⚠ | x86 and avx Converts packed double-precision (64-bit) floating-point elements in `a` to packed 32-bit integers with truncation |
| _mm256_cvttps_epi32⚠ | x86 and avx Converts packed single-precision (32-bit) floating-point elements in `a` to packed 32-bit integers with truncation |
| _mm256_div_pd⚠ | x86 and avx Computes the division of each of the 4 packed 64-bit floating-point elements in `a` by the corresponding packed elements in `b` |
| _mm256_div_ps⚠ | x86 and avx Computes the division of each of the 8 packed 32-bit floating-point elements in `a` by the corresponding packed elements in `b` |
| _mm256_dp_ps⚠ | x86 and avx Conditionally multiplies the packed single-precision (32-bit) floating-point elements in `a` and `b` using the high 4 bits in `imm8`, sums the four products, and conditionally returns the sum using the low 4 bits of `imm8` |
| _mm256_extract_epi8⚠ | x86 and avx2 Extracts an 8-bit integer from `a`, selected with `imm8` |
| _mm256_extract_epi16⚠ | x86 and avx2 Extracts a 16-bit integer from `a`, selected with `imm8` |
| _mm256_extract_epi32⚠ | x86 and avx2 Extracts a 32-bit integer from `a`, selected with `imm8` |
| _mm256_extractf128_pd⚠ | x86 and avx Extracts 128 bits (composed of 2 packed double-precision (64-bit) floating-point elements) from `a`, selected with `imm8` |
| _mm256_extractf128_ps⚠ | x86 and avx Extracts 128 bits (composed of 4 packed single-precision (32-bit) floating-point elements) from `a`, selected with `imm8` |
| _mm256_extractf128_si256⚠ | x86 and avx Extracts 128 bits (composed of integer data) from `a`, selected with `imm8` |
| _mm256_extracti128_si256⚠ | x86 and avx2 Extracts 128 bits (of integer data) from `a`, selected with `imm8` |
| _mm256_floor_pd⚠ | x86 and avx Rounds packed double-precision (64-bit) floating-point elements in `a` toward negative infinity |
| _mm256_floor_ps⚠ | x86 and avx Rounds packed single-precision (32-bit) floating-point elements in `a` toward negative infinity |
| _mm256_fmadd_pd⚠ | x86 and fma Multiplies packed double-precision (64-bit) floating-point elements in `a` and `b`, and adds the intermediate result to packed elements in `c` |
| _mm256_fmadd_ps⚠ | x86 and fma Multiplies packed single-precision (32-bit) floating-point elements in `a` and `b`, and adds the intermediate result to packed elements in `c` |
| _mm256_fmaddsub_pd⚠ | x86 and fma Multiplies packed double-precision (64-bit) floating-point elements in `a` and `b`, and alternatively adds and subtracts packed elements in `c` to/from the intermediate result |
| _mm256_fmaddsub_ps⚠ | x86 and fma Multiplies packed single-precision (32-bit) floating-point elements in `a` and `b`, and alternatively adds and subtracts packed elements in `c` to/from the intermediate result |
| _mm256_fmsub_pd⚠ | x86 and fma Multiplies packed double-precision (64-bit) floating-point elements in `a` and `b`, and subtracts packed elements in `c` from the intermediate result |
| _mm256_fmsub_ps⚠ | x86 and fma Multiplies packed single-precision (32-bit) floating-point elements in `a` and `b`, and subtracts packed elements in `c` from the intermediate result |
| _mm256_fmsubadd_pd⚠ | x86 and fma Multiplies packed double-precision (64-bit) floating-point elements in `a` and `b`, and alternatively subtracts and adds packed elements in `c` from/to the intermediate result |
| _mm256_fmsubadd_ps⚠ | x86 and fma Multiplies packed single-precision (32-bit) floating-point elements in `a` and `b`, and alternatively subtracts and adds packed elements in `c` from/to the intermediate result |
| _mm256_fnmadd_pd⚠ | x86 and fma Multiplies packed double-precision (64-bit) floating-point elements in `a` and `b`, and adds the negated intermediate result to packed elements in `c` |
| _mm256_fnmadd_ps⚠ | x86 and fma Multiplies packed single-precision (32-bit) floating-point elements in `a` and `b`, and adds the negated intermediate result to packed elements in `c` |
| _mm256_fnmsub_pd⚠ | x86 and fma Multiplies packed double-precision (64-bit) floating-point elements in `a` and `b`, and subtracts packed elements in `c` from the negated intermediate result |
| _mm256_fnmsub_ps⚠ | x86 and fma Multiplies packed single-precision (32-bit) floating-point elements in `a` and `b`, and subtracts packed elements in `c` from the negated intermediate result |
| _mm256_hadd_epi16⚠ | x86 and avx2 Horizontally adds adjacent pairs of 16-bit integers in `a` and `b` |
| _mm256_hadd_epi32⚠ | x86 and avx2 Horizontally adds adjacent pairs of 32-bit integers in `a` and `b` |
| _mm256_hadd_pd⚠ | x86 and avx Horizontal addition of adjacent pairs in the two packed vectors of 4 64-bit floating points `a` and `b` |
| _mm256_hadd_ps⚠ | x86 and avx Horizontal addition of adjacent pairs in the two packed vectors of 8 32-bit floating points `a` and `b` |
| _mm256_hadds_epi16⚠ | x86 and avx2 Horizontally adds adjacent pairs of 16-bit integers in `a` and `b` using saturation |
| _mm256_hsub_epi16⚠ | x86 and avx2 Horizontally subtracts adjacent pairs of 16-bit integers in `a` and `b` |
| _mm256_hsub_epi32⚠ | x86 and avx2 Horizontally subtracts adjacent pairs of 32-bit integers in `a` and `b` |
| _mm256_hsub_pd⚠ | x86 and avx Horizontal subtraction of adjacent pairs in the two packed vectors of 4 64-bit floating points `a` and `b` |
| _mm256_hsub_ps⚠ | x86 and avx Horizontal subtraction of adjacent pairs in the two packed vectors of 8 32-bit floating points `a` and `b` |
| _mm256_hsubs_epi16⚠ | x86 and avx2 Horizontally subtracts adjacent pairs of 16-bit integers in `a` and `b` using saturation |
| _mm256_i32gather_epi32⚠ | x86 and avx2 Returns values from `slice` at offsets determined by `offsets * scale` |
| _mm256_i32gather_epi64⚠ | x86 and avx2 Returns values from `slice` at offsets determined by `offsets * scale` |
| _mm256_i32gather_pd⚠ | x86 and avx2 Returns values from `slice` at offsets determined by `offsets * scale` |
| _mm256_i32gather_ps⚠ | x86 and avx2 Returns values from `slice` at offsets determined by `offsets * scale` |
| _mm256_i64gather_epi32⚠ | x86 and avx2 Returns values from `slice` at offsets determined by `offsets * scale` |
| _mm256_i64gather_epi64⚠ | x86 and avx2 Returns values from `slice` at offsets determined by `offsets * scale` |
| _mm256_i64gather_pd⚠ | x86 and avx2 Returns values from `slice` at offsets determined by `offsets * scale` |
| _mm256_i64gather_ps⚠ | x86 and avx2 Returns values from `slice` at offsets determined by `offsets * scale` |
| _mm256_insert_epi8⚠ | x86 and avxCopies |
| _mm256_insert_epi16⚠ | x86 and avxCopies |
| _mm256_insert_epi32⚠ | x86 and avxCopies |
| _mm256_insertf128_pd⚠ | x86 and avxCopies |
| _mm256_insertf128_ps⚠ | x86 and avxCopies |
| _mm256_insertf128_si256⚠ | x86 and avxCopies |
| _mm256_inserti128_si256⚠ | x86 and avx2Copies |
| _mm256_lddqu_si256⚠ | x86 and avxLoads 256-bits of integer data from unaligned memory into result.
This intrinsic may perform better than |
| _mm256_load_pd⚠ | x86 and avxLoads 256-bits (composed of 4 packed double-precision (64-bit)
floating-point elements) from memory into result.
|
| _mm256_load_ps⚠ | x86 and avxLoads 256-bits (composed of 8 packed single-precision (32-bit)
floating-point elements) from memory into result.
|
| _mm256_load_si256⚠ | x86 and avxLoads 256-bits of integer data from memory into result.
|
| _mm256_loadu2_m128⚠ | x86 and avx,sseLoads two 128-bit values (composed of 4 packed single-precision (32-bit)
floating-point elements) from memory, and combine them into a 256-bit
value.
|
| _mm256_loadu2_m128d⚠ | x86 and avx,sse2Loads two 128-bit values (composed of 2 packed double-precision (64-bit)
floating-point elements) from memory, and combine them into a 256-bit
value.
|
| _mm256_loadu2_m128i⚠ | x86 and avx,sse2Loads two 128-bit values (composed of integer data) from memory, and combine
them into a 256-bit value.
|
| _mm256_loadu_pd⚠ | x86 and avxLoads 256-bits (composed of 4 packed double-precision (64-bit)
floating-point elements) from memory into result.
|
| _mm256_loadu_ps⚠ | x86 and avxLoads 256-bits (composed of 8 packed single-precision (32-bit)
floating-point elements) from memory into result.
|
| _mm256_loadu_si256⚠ | x86 and avxLoads 256-bits of integer data from memory into result.
|
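Since this index lists only signatures, a minimal usage sketch may help. The example below (the function name and structure are illustrative, not part of the API) round-trips eight `f32`s through a 256-bit register with the unaligned load/store pair; `_mm256_loadu_ps` has no alignment requirement, unlike `_mm256_load_ps`, and runtime feature detection guards the `unsafe` intrinsic calls.

```rust
/// Doubles eight f32s with AVX when available, assuming an x86_64 host;
/// falls back to scalar code elsewhere. Illustrative helper, not std API.
fn double_f32x8(input: &[f32; 8]) -> [f32; 8] {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("avx") {
            unsafe {
                use std::arch::x86_64::*;
                // Unaligned load: valid for any readable 256-bit span.
                let v = _mm256_loadu_ps(input.as_ptr());
                let doubled = _mm256_add_ps(v, v);
                let mut out = [0.0f32; 8];
                _mm256_storeu_ps(out.as_mut_ptr(), doubled);
                return out;
            }
        }
    }
    // Scalar fallback for non-AVX targets.
    let mut out = *input;
    for x in out.iter_mut() {
        *x += *x;
    }
    out
}
```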
| _mm256_madd_epi16⚠ | x86 and avx2Multiplies packed signed 16-bit integers in |
| _mm256_maddubs_epi16⚠ | x86 and avx2Vertically multiplies each unsigned 8-bit integer from |
| _mm256_mask_i32gather_epi32⚠ | x86 and avx2Returns values from |
| _mm256_mask_i32gather_epi64⚠ | x86 and avx2Returns values from |
| _mm256_mask_i32gather_pd⚠ | x86 and avx2Returns values from |
| _mm256_mask_i32gather_ps⚠ | x86 and avx2Returns values from |
| _mm256_mask_i64gather_epi32⚠ | x86 and avx2Returns values from |
| _mm256_mask_i64gather_epi64⚠ | x86 and avx2Returns values from |
| _mm256_mask_i64gather_pd⚠ | x86 and avx2Returns values from |
| _mm256_mask_i64gather_ps⚠ | x86 and avx2Returns values from |
| _mm256_maskload_epi32⚠ | x86 and avx2Loads packed 32-bit integers from memory pointed by |
| _mm256_maskload_epi64⚠ | x86 and avx2Loads packed 64-bit integers from memory pointed by |
| _mm256_maskload_pd⚠ | x86 and avxLoads packed double-precision (64-bit) floating-point elements from memory
into result using |
| _mm256_maskload_ps⚠ | x86 and avxLoads packed single-precision (32-bit) floating-point elements from memory
into result using |
| _mm256_maskstore_epi32⚠ | x86 and avx2Stores packed 32-bit integers from |
| _mm256_maskstore_epi64⚠ | x86 and avx2Stores packed 64-bit integers from |
| _mm256_maskstore_pd⚠ | x86 and avxStores packed double-precision (64-bit) floating-point elements from |
| _mm256_maskstore_ps⚠ | x86 and avxStores packed single-precision (32-bit) floating-point elements from |
| _mm256_max_epi8⚠ | x86 and avx2Compares packed 8-bit integers in |
| _mm256_max_epi16⚠ | x86 and avx2Compares packed 16-bit integers in |
| _mm256_max_epi32⚠ | x86 and avx2Compares packed 32-bit integers in |
| _mm256_max_epu8⚠ | x86 and avx2Compares packed unsigned 8-bit integers in |
| _mm256_max_epu16⚠ | x86 and avx2Compares packed unsigned 16-bit integers in |
| _mm256_max_epu32⚠ | x86 and avx2Compares packed unsigned 32-bit integers in |
| _mm256_max_pd⚠ | x86 and avxCompares packed double-precision (64-bit) floating-point elements
in |
| _mm256_max_ps⚠ | x86 and avxCompares packed single-precision (32-bit) floating-point elements in |
| _mm256_min_epi8⚠ | x86 and avx2Compares packed 8-bit integers in |
| _mm256_min_epi16⚠ | x86 and avx2Compares packed 16-bit integers in |
| _mm256_min_epi32⚠ | x86 and avx2Compares packed 32-bit integers in |
| _mm256_min_epu8⚠ | x86 and avx2Compares packed unsigned 8-bit integers in |
| _mm256_min_epu16⚠ | x86 and avx2Compares packed unsigned 16-bit integers in |
| _mm256_min_epu32⚠ | x86 and avx2Compares packed unsigned 32-bit integers in |
| _mm256_min_pd⚠ | x86 and avxCompares packed double-precision (64-bit) floating-point elements
in |
| _mm256_min_ps⚠ | x86 and avxCompares packed single-precision (32-bit) floating-point elements in |
| _mm256_movedup_pd⚠ | x86 and avxDuplicates even-indexed double-precision (64-bit) floating-point elements
from |
| _mm256_movehdup_ps⚠ | x86 and avxDuplicates odd-indexed single-precision (32-bit) floating-point elements
from |
| _mm256_moveldup_ps⚠ | x86 and avxDuplicates even-indexed single-precision (32-bit) floating-point elements
from |
| _mm256_movemask_epi8⚠ | x86 and avx2Creates a mask from the most significant bit of each 8-bit element in |
| _mm256_movemask_pd⚠ | x86 and avxSets each bit of the returned mask based on the most significant bit of the
corresponding packed double-precision (64-bit) floating-point element in
|
| _mm256_movemask_ps⚠ | x86 and avxSets each bit of the returned mask based on the most significant bit of the
corresponding packed single-precision (32-bit) floating-point element in
|
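As a sketch of how the movemask family is typically used (the wrapper function is hypothetical, assuming an x86_64 host): `_mm256_movemask_pd` packs the sign bit of each 64-bit lane into the low four bits of a scalar, giving a quick "which lanes are negative" test.

```rust
/// Returns a 4-bit mask of which f64 lanes have their sign bit set.
/// Illustrative helper; scalar fallback when AVX is unavailable.
fn negative_lanes(x: [f64; 4]) -> i32 {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("avx") {
            unsafe {
                use std::arch::x86_64::*;
                let v = _mm256_loadu_pd(x.as_ptr());
                // Bit i of the result is the sign bit of lane i.
                return _mm256_movemask_pd(v);
            }
        }
    }
    // Scalar fallback: collect sign bits by hand.
    x.iter().enumerate().fold(0, |m, (i, &v)| {
        if v.is_sign_negative() { m | (1 << i) } else { m }
    })
}
```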
| _mm256_mpsadbw_epu8⚠ | x86 and avx2Computes the sum of absolute differences (SADs) of quadruplets of unsigned
8-bit integers in |
| _mm256_mul_epi32⚠ | x86 and avx2Multiplies the low 32-bit integers from each packed 64-bit element in
|
| _mm256_mul_epu32⚠ | x86 and avx2Multiplies the low unsigned 32-bit integers from each packed 64-bit
element in |
| _mm256_mul_pd⚠ | x86 and avxMultiplies packed double-precision (64-bit) floating-point elements
in |
| _mm256_mul_ps⚠ | x86 and avxMultiplies packed single-precision (32-bit) floating-point elements in |
| _mm256_mulhi_epi16⚠ | x86 and avx2Multiplies the packed 16-bit integers in |
| _mm256_mulhi_epu16⚠ | x86 and avx2Multiplies the packed unsigned 16-bit integers in |
| _mm256_mulhrs_epi16⚠ | x86 and avx2Multiplies packed 16-bit integers in |
| _mm256_mullo_epi16⚠ | x86 and avx2Multiplies the packed 16-bit integers in |
| _mm256_mullo_epi32⚠ | x86 and avx2Multiplies the packed 32-bit integers in |
| _mm256_or_pd⚠ | x86 and avxComputes the bitwise OR of packed double-precision (64-bit) floating-point
elements in |
| _mm256_or_ps⚠ | x86 and avxComputes the bitwise OR of packed single-precision (32-bit) floating-point
elements in |
| _mm256_or_si256⚠ | x86 and avx2Computes the bitwise OR of 256 bits (representing integer data) in |
| _mm256_packs_epi16⚠ | x86 and avx2Converts packed 16-bit integers from |
| _mm256_packs_epi32⚠ | x86 and avx2Converts packed 32-bit integers from |
| _mm256_packus_epi16⚠ | x86 and avx2Converts packed 16-bit integers from |
| _mm256_packus_epi32⚠ | x86 and avx2Converts packed 32-bit integers from |
| _mm256_permute2f128_pd⚠ | x86 and avxShuffles 256 bits (composed of 4 packed double-precision (64-bit)
floating-point elements) selected by |
| _mm256_permute2f128_ps⚠ | x86 and avxShuffles 256 bits (composed of 8 packed single-precision (32-bit)
floating-point elements) selected by |
| _mm256_permute2f128_si256⚠ | x86 and avxShuffles 128-bits (composed of integer data) selected by |
| _mm256_permute2x128_si256⚠ | x86 and avx2Shuffles 128-bits of integer data selected by |
| _mm256_permute4x64_epi64⚠ | x86 and avx2Permutes 64-bit integers from |
| _mm256_permute4x64_pd⚠ | x86 and avx2Shuffles 64-bit floating-point elements in |
| _mm256_permute_pd⚠ | x86 and avxShuffles double-precision (64-bit) floating-point elements in |
| _mm256_permute_ps⚠ | x86 and avxShuffles single-precision (32-bit) floating-point elements in |
| _mm256_permutevar8x32_epi32⚠ | x86 and avx2Permutes packed 32-bit integers from |
| _mm256_permutevar8x32_ps⚠ | x86 and avx2Shuffles eight 32-bit floating-point elements in |
| _mm256_permutevar_pd⚠ | x86 and avxShuffles double-precision (64-bit) floating-point elements in |
| _mm256_permutevar_ps⚠ | x86 and avxShuffles single-precision (32-bit) floating-point elements in |
| _mm256_rcp_ps⚠ | x86 and avxComputes the approximate reciprocal of packed single-precision (32-bit)
floating-point elements in |
| _mm256_round_pd⚠ | x86 and avxRounds packed double-precision (64-bit) floating-point elements in |
| _mm256_round_ps⚠ | x86 and avxRounds packed single-precision (32-bit) floating-point elements in |
| _mm256_rsqrt_ps⚠ | x86 and avxComputes the approximate reciprocal square root of packed single-precision
(32-bit) floating-point elements in |
| _mm256_sad_epu8⚠ | x86 and avx2Computes the absolute differences of packed unsigned 8-bit integers in |
| _mm256_set1_epi8⚠ | x86 and avxBroadcasts 8-bit integer |
| _mm256_set1_epi16⚠ | x86 and avxBroadcasts 16-bit integer |
| _mm256_set1_epi32⚠ | x86 and avxBroadcasts 32-bit integer |
| _mm256_set1_epi64x⚠ | x86 and avxBroadcasts 64-bit integer |
| _mm256_set1_pd⚠ | x86 and avxBroadcasts double-precision (64-bit) floating-point value |
| _mm256_set1_ps⚠ | x86 and avxBroadcasts single-precision (32-bit) floating-point value |
| _mm256_set_epi8⚠ | x86 and avxSets packed 8-bit integers in returned vector with the supplied values in reverse order. |
| _mm256_set_epi16⚠ | x86 and avxSets packed 16-bit integers in returned vector with the supplied values. |
| _mm256_set_epi32⚠ | x86 and avxSets packed 32-bit integers in returned vector with the supplied values. |
| _mm256_set_epi64x⚠ | x86 and avxSets packed 64-bit integers in returned vector with the supplied values. |
| _mm256_set_m128⚠ | x86 and avxSets packed __m256 returned vector with the supplied values. |
| _mm256_set_m128d⚠ | x86 and avxSets packed __m256d returned vector with the supplied values. |
| _mm256_set_m128i⚠ | x86 and avxSets packed __m256i returned vector with the supplied values. |
| _mm256_set_pd⚠ | x86 and avxSets packed double-precision (64-bit) floating-point elements in returned vector with the supplied values. |
| _mm256_set_ps⚠ | x86 and avxSets packed single-precision (32-bit) floating-point elements in returned vector with the supplied values. |
| _mm256_setr_epi8⚠ | x86 and avxSets packed 8-bit integers in returned vector with the supplied values in reverse order. |
| _mm256_setr_epi16⚠ | x86 and avxSets packed 16-bit integers in returned vector with the supplied values in reverse order. |
| _mm256_setr_epi32⚠ | x86 and avxSets packed 32-bit integers in returned vector with the supplied values in reverse order. |
| _mm256_setr_epi64x⚠ | x86 and avxSets packed 64-bit integers in returned vector with the supplied values in reverse order. |
| _mm256_setr_m128⚠ | x86 and avxSets packed __m256 returned vector with the supplied values. |
| _mm256_setr_m128d⚠ | x86 and avxSets packed __m256d returned vector with the supplied values. |
| _mm256_setr_m128i⚠ | x86 and avxSets packed __m256i returned vector with the supplied values. |
| _mm256_setr_pd⚠ | x86 and avxSets packed double-precision (64-bit) floating-point elements in returned vector with the supplied values in reverse order. |
| _mm256_setr_ps⚠ | x86 and avxSets packed single-precision (32-bit) floating-point elements in returned vector with the supplied values in reverse order. |
| _mm256_setzero_pd⚠ | x86 and avxReturns vector of type __m256d with all elements set to zero. |
| _mm256_setzero_ps⚠ | x86 and avxReturns vector of type __m256 with all elements set to zero. |
| _mm256_setzero_si256⚠ | x86 and avxReturns vector of type __m256i with all elements set to zero. |
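The `set`/`setr` pairs above differ only in argument order, a common source of confusion. A small sketch (assuming an x86_64 host; the helper name is illustrative) showing that `_mm256_set_ps` takes elements highest-lane-first while `_mm256_setr_ps` takes them in memory (low-to-high) order:

```rust
/// Demonstrates that `set` (highest lane first) and `setr` (memory order)
/// build the same vector when the argument lists are reversed.
fn set_orders_agree() -> bool {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("avx") {
            unsafe {
                use std::arch::x86_64::*;
                // `set` lists elements from the highest lane down...
                let a = _mm256_set_ps(7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0, 0.0);
                // ...while `setr` ("reverse") lists them low-to-high.
                let b = _mm256_setr_ps(0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0);
                let (mut x, mut y) = ([0.0f32; 8], [0.0f32; 8]);
                _mm256_storeu_ps(x.as_mut_ptr(), a);
                _mm256_storeu_ps(y.as_mut_ptr(), b);
                return x == y;
            }
        }
    }
    true // vacuously true where AVX is unavailable
}
```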
| _mm256_shuffle_epi8⚠ | x86 and avx2Shuffles bytes from |
| _mm256_shuffle_epi32⚠ | x86 and avx2Shuffles 32-bit integers in 128-bit lanes of |
| _mm256_shuffle_pd⚠ | x86 and avxShuffles double-precision (64-bit) floating-point elements within 128-bit
lanes using the control in |
| _mm256_shuffle_ps⚠ | x86 and avxShuffles single-precision (32-bit) floating-point elements in |
| _mm256_shufflehi_epi16⚠ | x86 and avx2Shuffles 16-bit integers in the high 64 bits of 128-bit lanes of |
| _mm256_shufflelo_epi16⚠ | x86 and avx2Shuffles 16-bit integers in the low 64 bits of 128-bit lanes of |
| _mm256_sign_epi8⚠ | x86 and avx2Negates packed 8-bit integers in |
| _mm256_sign_epi16⚠ | x86 and avx2Negates packed 16-bit integers in |
| _mm256_sign_epi32⚠ | x86 and avx2Negates packed 32-bit integers in |
| _mm256_sll_epi16⚠ | x86 and avx2Shifts packed 16-bit integers in |
| _mm256_sll_epi32⚠ | x86 and avx2Shifts packed 32-bit integers in |
| _mm256_sll_epi64⚠ | x86 and avx2Shifts packed 64-bit integers in |
| _mm256_slli_epi16⚠ | x86 and avx2Shifts packed 16-bit integers in |
| _mm256_slli_epi32⚠ | x86 and avx2Shifts packed 32-bit integers in |
| _mm256_slli_epi64⚠ | x86 and avx2Shifts packed 64-bit integers in |
| _mm256_slli_si256⚠ | x86 and avx2Shifts 128-bit lanes in |
| _mm256_sllv_epi32⚠ | x86 and avx2Shifts packed 32-bit integers in |
| _mm256_sllv_epi64⚠ | x86 and avx2Shifts packed 64-bit integers in |
| _mm256_sqrt_pd⚠ | x86 and avxReturns the square root of packed double-precision (64-bit) floating-point elements in |
| _mm256_sqrt_ps⚠ | x86 and avxReturns the square root of packed single-precision (32-bit) floating-point elements in |
| _mm256_sra_epi16⚠ | x86 and avx2Shifts packed 16-bit integers in |
| _mm256_sra_epi32⚠ | x86 and avx2Shifts packed 32-bit integers in |
| _mm256_srai_epi16⚠ | x86 and avx2Shifts packed 16-bit integers in |
| _mm256_srai_epi32⚠ | x86 and avx2Shifts packed 32-bit integers in |
| _mm256_srav_epi32⚠ | x86 and avx2Shifts packed 32-bit integers in |
| _mm256_srl_epi16⚠ | x86 and avx2Shifts packed 16-bit integers in |
| _mm256_srl_epi32⚠ | x86 and avx2Shifts packed 32-bit integers in |
| _mm256_srl_epi64⚠ | x86 and avx2Shifts packed 64-bit integers in |
| _mm256_srli_epi16⚠ | x86 and avx2Shifts packed 16-bit integers in |
| _mm256_srli_epi32⚠ | x86 and avx2Shifts packed 32-bit integers in |
| _mm256_srli_epi64⚠ | x86 and avx2Shifts packed 64-bit integers in |
| _mm256_srli_si256⚠ | x86 and avx2Shifts 128-bit lanes in |
| _mm256_srlv_epi32⚠ | x86 and avx2Shifts packed 32-bit integers in |
| _mm256_srlv_epi64⚠ | x86 and avx2Shifts packed 64-bit integers in |
| _mm256_store_pd⚠ | x86 and avxStores 256-bits (composed of 4 packed double-precision (64-bit)
floating-point elements) from |
| _mm256_store_ps⚠ | x86 and avxStores 256-bits (composed of 8 packed single-precision (32-bit)
floating-point elements) from |
| _mm256_store_si256⚠ | x86 and avxStores 256-bits of integer data from |
| _mm256_storeu2_m128⚠ | x86 and avx,sseStores the high and low 128-bit halves (each composed of 4 packed
single-precision (32-bit) floating-point elements) from |
| _mm256_storeu2_m128d⚠ | x86 and avx,sse2Stores the high and low 128-bit halves (each composed of 2 packed
double-precision (64-bit) floating-point elements) from |
| _mm256_storeu2_m128i⚠ | x86 and avx,sse2Stores the high and low 128-bit halves (each composed of integer data) from
|
| _mm256_storeu_pd⚠ | x86 and avxStores 256-bits (composed of 4 packed double-precision (64-bit)
floating-point elements) from |
| _mm256_storeu_ps⚠ | x86 and avxStores 256-bits (composed of 8 packed single-precision (32-bit)
floating-point elements) from |
| _mm256_storeu_si256⚠ | x86 and avxStores 256-bits of integer data from |
| _mm256_stream_pd⚠ | x86 and avxMoves double-precision values from a 256-bit vector of |
| _mm256_stream_ps⚠ | x86 and avxMoves single-precision floating-point values from a 256-bit vector
of |
| _mm256_stream_si256⚠ | x86 and avxMoves integer data from a 256-bit integer vector to a 32-byte aligned memory location. To minimize caching, the data is flagged as non-temporal (unlikely to be used again soon). |
| _mm256_sub_epi8⚠ | x86 and avx2Subtracts packed 8-bit integers in |
| _mm256_sub_epi16⚠ | x86 and avx2Subtracts packed 16-bit integers in |
| _mm256_sub_epi32⚠ | x86 and avx2Subtracts packed 32-bit integers in |
| _mm256_sub_epi64⚠ | x86 and avx2Subtracts packed 64-bit integers in |
| _mm256_sub_pd⚠ | x86 and avxSubtracts packed double-precision (64-bit) floating-point elements in |
| _mm256_sub_ps⚠ | x86 and avxSubtracts packed single-precision (32-bit) floating-point elements in |
| _mm256_subs_epi8⚠ | x86 and avx2Subtracts packed 8-bit integers in |
| _mm256_subs_epi16⚠ | x86 and avx2Subtracts packed 16-bit integers in |
| _mm256_subs_epu8⚠ | x86 and avx2Subtracts packed unsigned 8-bit integers in |
| _mm256_subs_epu16⚠ | x86 and avx2Subtracts packed unsigned 16-bit integers in |
| _mm256_testc_pd⚠ | x86 and avxComputes the bitwise AND of 256 bits (representing double-precision (64-bit)
floating-point elements) in |
| _mm256_testc_ps⚠ | x86 and avxComputes the bitwise AND of 256 bits (representing single-precision (32-bit)
floating-point elements) in |
| _mm256_testc_si256⚠ | x86 and avxComputes the bitwise AND of 256 bits (representing integer data) in |
| _mm256_testnzc_pd⚠ | x86 and avxComputes the bitwise AND of 256 bits (representing double-precision (64-bit)
floating-point elements) in |
| _mm256_testnzc_ps⚠ | x86 and avxComputes the bitwise AND of 256 bits (representing single-precision (32-bit)
floating-point elements) in |
| _mm256_testnzc_si256⚠ | x86 and avxComputes the bitwise AND of 256 bits (representing integer data) in |
| _mm256_testz_pd⚠ | x86 and avxComputes the bitwise AND of 256 bits (representing double-precision (64-bit)
floating-point elements) in |
| _mm256_testz_ps⚠ | x86 and avxComputes the bitwise AND of 256 bits (representing single-precision (32-bit)
floating-point elements) in |
| _mm256_testz_si256⚠ | x86 and avxComputes the bitwise AND of 256 bits (representing integer data) in |
| _mm256_undefined_pd⚠ | x86 and avxReturns vector of type |
| _mm256_undefined_ps⚠ | x86 and avxReturns vector of type |
| _mm256_undefined_si256⚠ | x86 and avxReturns vector of type __m256i with undefined elements. |
| _mm256_unpackhi_epi8⚠ | x86 and avx2Unpacks and interleaves 8-bit integers from the high half of each
128-bit lane in |
| _mm256_unpackhi_epi16⚠ | x86 and avx2Unpacks and interleaves 16-bit integers from the high half of each
128-bit lane of |
| _mm256_unpackhi_epi32⚠ | x86 and avx2Unpacks and interleaves 32-bit integers from the high half of each
128-bit lane of |
| _mm256_unpackhi_epi64⚠ | x86 and avx2Unpacks and interleaves 64-bit integers from the high half of each
128-bit lane of |
| _mm256_unpackhi_pd⚠ | x86 and avxUnpacks and interleaves double-precision (64-bit) floating-point elements
from the high half of each 128-bit lane in |
| _mm256_unpackhi_ps⚠ | x86 and avxUnpacks and interleaves single-precision (32-bit) floating-point elements
from the high half of each 128-bit lane in |
| _mm256_unpacklo_epi8⚠ | x86 and avx2Unpacks and interleaves 8-bit integers from the low half of each
128-bit lane of |
| _mm256_unpacklo_epi16⚠ | x86 and avx2Unpacks and interleaves 16-bit integers from the low half of each
128-bit lane of |
| _mm256_unpacklo_epi32⚠ | x86 and avx2Unpacks and interleaves 32-bit integers from the low half of each
128-bit lane of |
| _mm256_unpacklo_epi64⚠ | x86 and avx2Unpacks and interleaves 64-bit integers from the low half of each
128-bit lane of |
| _mm256_unpacklo_pd⚠ | x86 and avxUnpacks and interleaves double-precision (64-bit) floating-point elements
from the low half of each 128-bit lane in |
| _mm256_unpacklo_ps⚠ | x86 and avxUnpacks and interleaves single-precision (32-bit) floating-point elements
from the low half of each 128-bit lane in |
| _mm256_xor_pd⚠ | x86 and avxComputes the bitwise XOR of packed double-precision (64-bit) floating-point
elements in |
| _mm256_xor_ps⚠ | x86 and avxComputes the bitwise XOR of packed single-precision (32-bit) floating-point
elements in |
| _mm256_xor_si256⚠ | x86 and avx2Computes the bitwise XOR of 256 bits (representing integer data)
in |
| _mm256_zeroall⚠ | x86 and avxZeroes the contents of all XMM or YMM registers. |
| _mm256_zeroupper⚠ | x86 and avxZeroes the upper 128 bits of all YMM registers; the lower 128 bits of the registers are unmodified. |
| _mm256_zextpd128_pd256⚠ | x86 and avx,sse2Constructs a 256-bit floating-point vector of |
| _mm256_zextps128_ps256⚠ | x86 and avx,sseConstructs a 256-bit floating-point vector of |
| _mm256_zextsi128_si256⚠ | x86 and avx,sse2Constructs a 256-bit integer vector from a 128-bit integer vector. The lower 128 bits contain the value of the source vector. The upper 128 bits are set to zero. |
| _mm512_storeu_ps⚠ | x86 and avx512fStores 512-bits (composed of 16 packed single-precision (32-bit)
floating-point elements) from |
| _mm_abs_epi8⚠ | x86 and ssse3Computes the absolute value of packed 8-bit signed integers in |
| _mm_abs_epi16⚠ | x86 and ssse3Computes the absolute value of each of the packed 16-bit signed integers in
|
| _mm_abs_epi32⚠ | x86 and ssse3Computes the absolute value of each of the packed 32-bit signed integers in
|
| _mm_add_epi8⚠ | x86 and sse2Adds packed 8-bit integers in |
| _mm_add_epi16⚠ | x86 and sse2Adds packed 16-bit integers in |
| _mm_add_epi32⚠ | x86 and sse2Adds packed 32-bit integers in |
| _mm_add_epi64⚠ | x86 and sse2Adds packed 64-bit integers in |
| _mm_add_pd⚠ | x86 and sse2Adds packed double-precision (64-bit) floating-point elements in |
| _mm_add_ps⚠ | x86 and sseAdds __m128 vectors. |
| _mm_add_sd⚠ | x86 and sse2Returns a new vector with the low element of |
| _mm_add_ss⚠ | x86 and sseAdds the first component of |
| _mm_adds_epi8⚠ | x86 and sse2Adds packed 8-bit integers in |
| _mm_adds_epi16⚠ | x86 and sse2Adds packed 16-bit integers in |
| _mm_adds_epu8⚠ | x86 and sse2Adds packed unsigned 8-bit integers in |
| _mm_adds_epu16⚠ | x86 and sse2Adds packed unsigned 16-bit integers in |
| _mm_addsub_pd⚠ | x86 and sse3Alternately adds and subtracts packed double-precision (64-bit)
floating-point elements in |
| _mm_addsub_ps⚠ | x86 and sse3Alternately adds and subtracts packed single-precision (32-bit)
floating-point elements in |
| _mm_aesdec_si128⚠ | x86 and aesPerforms one round of an AES decryption flow on data (state) in |
| _mm_aesdeclast_si128⚠ | x86 and aesPerforms the last round of an AES decryption flow on data (state) in |
| _mm_aesenc_si128⚠ | x86 and aesPerforms one round of an AES encryption flow on data (state) in |
| _mm_aesenclast_si128⚠ | x86 and aesPerforms the last round of an AES encryption flow on data (state) in |
| _mm_aesimc_si128⚠ | x86 and aesPerforms the |
| _mm_aeskeygenassist_si128⚠ | x86 and aesAssist in expanding the AES cipher key. |
| _mm_alignr_epi8⚠ | x86 and ssse3Concatenates 16-byte blocks in |
| _mm_and_pd⚠ | x86 and sse2Computes the bitwise AND of packed double-precision (64-bit) floating-point
elements in |
| _mm_and_ps⚠ | x86 and sseBitwise AND of packed single-precision (32-bit) floating-point elements. |
| _mm_and_si128⚠ | x86 and sse2Computes the bitwise AND of 128 bits (representing integer data) in |
| _mm_andnot_pd⚠ | x86 and sse2Computes the bitwise NOT of |
| _mm_andnot_ps⚠ | x86 and sseBitwise AND-NOT of packed single-precision (32-bit) floating-point elements. |
| _mm_andnot_si128⚠ | x86 and sse2Computes the bitwise NOT of 128 bits (representing integer data) in |
| _mm_avg_epu8⚠ | x86 and sse2Averages packed unsigned 8-bit integers in |
| _mm_avg_epu16⚠ | x86 and sse2Averages packed unsigned 16-bit integers in |
| _mm_blend_epi16⚠ | x86 and sse4.1Blends packed 16-bit integers from |
| _mm_blend_epi32⚠ | x86 and avx2Blends packed 32-bit integers from |
| _mm_blend_pd⚠ | x86 and sse4.1Blends packed double-precision (64-bit) floating-point elements from |
| _mm_blend_ps⚠ | x86 and sse4.1Blends packed single-precision (32-bit) floating-point elements from |
| _mm_blendv_epi8⚠ | x86 and sse4.1Blends packed 8-bit integers from |
| _mm_blendv_pd⚠ | x86 and sse4.1Blends packed double-precision (64-bit) floating-point elements from |
| _mm_blendv_ps⚠ | x86 and sse4.1Blends packed single-precision (32-bit) floating-point elements from |
| _mm_broadcast_ss⚠ | x86 and avxBroadcasts a single-precision (32-bit) floating-point element from memory to all elements of the returned vector. |
| _mm_broadcastb_epi8⚠ | x86 and avx2Broadcasts the low packed 8-bit integer from |
| _mm_broadcastd_epi32⚠ | x86 and avx2Broadcasts the low packed 32-bit integer from |
| _mm_broadcastq_epi64⚠ | x86 and avx2Broadcasts the low packed 64-bit integer from |
| _mm_broadcastsd_pd⚠ | x86 and avx2Broadcasts the low double-precision (64-bit) floating-point element
from |
| _mm_broadcastss_ps⚠ | x86 and avx2Broadcasts the low single-precision (32-bit) floating-point element
from |
| _mm_broadcastw_epi16⚠ | x86 and avx2Broadcasts the low packed 16-bit integer from a to all elements of the 128-bit returned value |
| _mm_bslli_si128⚠ | x86 and sse2Shifts |
| _mm_bsrli_si128⚠ | x86 and sse2Shifts |
| _mm_castpd_ps⚠ | x86 and sse2Casts a 128-bit floating-point vector of |
| _mm_castpd_si128⚠ | x86 and sse2Casts a 128-bit floating-point vector of |
| _mm_castps_pd⚠ | x86 and sse2Casts a 128-bit floating-point vector of |
| _mm_castps_si128⚠ | x86 and sse2Casts a 128-bit floating-point vector of |
| _mm_castsi128_pd⚠ | x86 and sse2Casts a 128-bit integer vector into a 128-bit floating-point vector
of |
| _mm_castsi128_ps⚠ | x86 and sse2Casts a 128-bit integer vector into a 128-bit floating-point vector
of |
| _mm_ceil_pd⚠ | x86 and sse4.1Rounds the packed double-precision (64-bit) floating-point elements in |
| _mm_ceil_ps⚠ | x86 and sse4.1Rounds the packed single-precision (32-bit) floating-point elements in |
| _mm_ceil_sd⚠ | x86 and sse4.1Rounds the lower double-precision (64-bit) floating-point element in |
| _mm_ceil_ss⚠ | x86 and sse4.1Rounds the lower single-precision (32-bit) floating-point element in |
| _mm_clflush⚠ | x86 and sse2Invalidates and flushes the cache line that contains |
| _mm_clmulepi64_si128⚠ | x86 and pclmulqdqPerforms a carry-less multiplication of two 64-bit polynomials over the finite field GF(2^k). |
| _mm_cmp_pd⚠ | x86 and avx,sse2Compares packed double-precision (64-bit) floating-point
elements in |
| _mm_cmp_ps⚠ | x86 and avx,sseCompares packed single-precision (32-bit) floating-point
elements in |
| _mm_cmp_sd⚠ | x86 and avx,sse2Compares the lower double-precision (64-bit) floating-point element in
|
| _mm_cmp_ss⚠ | x86 and avx,sseCompares the lower single-precision (32-bit) floating-point element in
|
| _mm_cmpeq_epi8⚠ | x86 and sse2Compares packed 8-bit integers in |
| _mm_cmpeq_epi16⚠ | x86 and sse2Compares packed 16-bit integers in |
| _mm_cmpeq_epi32⚠ | x86 and sse2Compares packed 32-bit integers in |
| _mm_cmpeq_epi64⚠ | x86 and sse4.1Compares packed 64-bit integers in |
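The packed integer comparisons return all-ones in matching lanes and all-zeros otherwise, and are commonly combined with a movemask to get a scalar result. A sketch assuming an x86_64 host with SSE2 (the helper name and fallback are illustrative):

```rust
/// Compares two i32x4 vectors for lane equality; each matching 32-bit
/// lane contributes 4 set bits (one per byte) to the returned mask.
fn eq_mask(a: [i32; 4], b: [i32; 4]) -> i32 {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("sse2") {
            unsafe {
                use std::arch::x86_64::*;
                let va = _mm_loadu_si128(a.as_ptr() as *const __m128i);
                let vb = _mm_loadu_si128(b.as_ptr() as *const __m128i);
                // Matching lanes become 0xFFFFFFFF, others 0.
                let eq = _mm_cmpeq_epi32(va, vb);
                // Pack each byte's top bit into a 16-bit scalar mask.
                return _mm_movemask_epi8(eq);
            }
        }
    }
    // Scalar fallback: 4 mask bits per matching 32-bit lane.
    let mut m = 0;
    for i in 0..4 {
        if a[i] == b[i] {
            m |= 0xF << (4 * i);
        }
    }
    m
}
```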
| _mm_cmpeq_pd⚠ | x86 and sse2Compares corresponding elements in |
| _mm_cmpeq_ps⚠ | x86 and sseCompares each of the four floats in |
| _mm_cmpeq_sd⚠ | x86 and sse2Returns a new vector with the low element of |
| _mm_cmpeq_ss⚠ | x86 and sseCompares the lowest |
| _mm_cmpestra⚠ | x86 and sse4.2Compares packed strings in |
| _mm_cmpestrc⚠ | x86 and sse4.2Compares packed strings in |
| _mm_cmpestri⚠ | x86 and sse4.2Compares packed strings |
| _mm_cmpestrm⚠ | x86 and sse4.2Compares packed strings in |
| _mm_cmpestro⚠ | x86 and sse4.2Compares packed strings in |
| _mm_cmpestrs⚠ | x86 and sse4.2Compares packed strings in |
| _mm_cmpestrz⚠ | x86 and sse4.2Compares packed strings in |
| _mm_cmpge_pd⚠ | x86 and sse2Compares corresponding elements in |
| _mm_cmpge_ps⚠ | x86 and sseCompares each of the four floats in |
| _mm_cmpge_sd⚠ | x86 and sse2Returns a new vector with the low element of |
| _mm_cmpge_ss⚠ | x86 and sseCompares the lowest |
| _mm_cmpgt_epi8⚠ | x86 and sse2Compares packed 8-bit integers in |
| _mm_cmpgt_epi16⚠ | x86 and sse2Compares packed 16-bit integers in |
| _mm_cmpgt_epi32⚠ | x86 and sse2Compares packed 32-bit integers in |
| _mm_cmpgt_epi64⚠ | x86 and sse4.2Compares packed 64-bit integers in |
| _mm_cmpgt_pd⚠ | x86 and sse2Compares corresponding elements in |
| _mm_cmpgt_ps⚠ | x86 and sseCompares each of the four floats in |
| _mm_cmpgt_sd⚠ | x86 and sse2Returns a new vector with the low element of |
| _mm_cmpgt_ss⚠ | x86 and sseCompares the lowest |
| _mm_cmpistra⚠ | x86 and sse4.2Compares packed strings with implicit lengths in |
| _mm_cmpistrc⚠ | x86 and sse4.2Compares packed strings with implicit lengths in |
| _mm_cmpistri⚠ | x86 and sse4.2Compares packed strings with implicit lengths in |
| _mm_cmpistrm⚠ | x86 and sse4.2Compares packed strings with implicit lengths in |
| _mm_cmpistro⚠ | x86 and sse4.2Compares packed strings with implicit lengths in |
| _mm_cmpistrs⚠ | x86 and sse4.2Compares packed strings with implicit lengths in |
| _mm_cmpistrz⚠ | x86 and sse4.2Compares packed strings with implicit lengths in |
| _mm_cmple_pd⚠ | x86 and sse2Compares corresponding elements in |
| _mm_cmple_ps⚠ | x86 and sseCompares each of the four floats in |
| _mm_cmple_sd⚠ | x86 and sse2Returns a new vector with the low element of |
| _mm_cmple_ss⚠ | x86 and sseCompares the lowest |
| _mm_cmplt_epi8⚠ | x86 and sse2Compares packed 8-bit integers in |
| _mm_cmplt_epi16⚠ | x86 and sse2Compares packed 16-bit integers in |
| _mm_cmplt_epi32⚠ | x86 and sse2Compares packed 32-bit integers in |
| _mm_cmplt_pd⚠ | x86 and sse2Compares corresponding elements in |
| _mm_cmplt_ps⚠ | x86 and sseCompares each of the four floats in |
| _mm_cmplt_sd⚠ | x86 and sse2Returns a new vector with the low element of |
| _mm_cmplt_ss⚠ | x86 and sseCompares the lowest |
| _mm_cmpneq_pd⚠ | x86 and sse2Compares corresponding elements in |
| _mm_cmpneq_ps⚠ | x86 and sseCompares each of the four floats in |
| _mm_cmpneq_sd⚠ | x86 and sse2Returns a new vector with the low element of |
| _mm_cmpneq_ss⚠ | x86 and sseCompares the lowest |
| _mm_cmpnge_pd⚠ | x86 and sse2Compares corresponding elements in |
| _mm_cmpnge_ps⚠ | x86 and sseCompares each of the four floats in |
| _mm_cmpnge_sd⚠ | x86 and sse2Returns a new vector with the low element of |
| _mm_cmpnge_ss⚠ | x86 and sseCompares the lowest |
| _mm_cmpngt_pd⚠ | x86 and sse2Compares corresponding elements in |
| _mm_cmpngt_ps⚠ | x86 and sseCompares each of the four floats in |
| _mm_cmpngt_sd⚠ | x86 and sse2Returns a new vector with the low element of |
| _mm_cmpngt_ss⚠ | x86 and sseCompares the lowest |
| _mm_cmpnle_pd⚠ | x86 and sse2Compares corresponding elements in |
| _mm_cmpnle_ps⚠ | x86 and sseCompares each of the four floats in |
| _mm_cmpnle_sd⚠ | x86 and sse2Returns a new vector with the low element of |
| _mm_cmpnle_ss⚠ | x86 and sseCompares the lowest |
| _mm_cmpnlt_pd⚠ | x86 and sse2Compares corresponding elements in |
| _mm_cmpnlt_ps⚠ | x86 and sseCompares each of the four floats in |
| _mm_cmpnlt_sd⚠ | x86 and sse2Returns a new vector with the low element of |
| _mm_cmpnlt_ss⚠ | x86 and sseCompares the lowest |
| _mm_cmpord_pd⚠ | x86 and sse2Compares corresponding elements in |
| _mm_cmpord_ps⚠ | x86 and sseCompares each of the four floats in |
| _mm_cmpord_sd⚠ | x86 and sse2Returns a new vector with the low element of |
| _mm_cmpord_ss⚠ | x86 and sseChecks if the lowest |
| _mm_cmpunord_pd⚠ | x86 and sse2Compares corresponding elements in |
| _mm_cmpunord_ps⚠ | x86 and sseCompares each of the four floats in |
| _mm_cmpunord_sd⚠ | x86 and sse2Returns a new vector with the low element of |
| _mm_cmpunord_ss⚠ | x86 and sseChecks if the lowest |
| _mm_comieq_sd⚠ | x86 and sse2Compares the lower element of |
| _mm_comieq_ss⚠ | x86 and sseCompares two 32-bit floats from the low-order bits of |
| _mm_comige_sd⚠ | x86 and sse2Compares the lower element of |
| _mm_comige_ss⚠ | x86 and sseCompares two 32-bit floats from the low-order bits of |
| _mm_comigt_sd⚠ | x86 and sse2Compares the lower element of |
| _mm_comigt_ss⚠ | x86 and sseCompares two 32-bit floats from the low-order bits of |
| _mm_comile_sd⚠ | x86 and sse2Compares the lower element of |
| _mm_comile_ss⚠ | x86 and sseCompares two 32-bit floats from the low-order bits of |
| _mm_comilt_sd⚠ | x86 and sse2Compares the lower element of |
| _mm_comilt_ss⚠ | x86 and sseCompares two 32-bit floats from the low-order bits of |
| _mm_comineq_sd⚠ | x86 and sse2Compares the lower element of |
| _mm_comineq_ss⚠ | x86 and sseCompares two 32-bit floats from the low-order bits of |
| _mm_crc32_u8⚠ | x86 and sse4.2Starting with the initial value in |
| _mm_crc32_u16⚠ | x86 and sse4.2Starting with the initial value in |
| _mm_crc32_u32⚠ | x86 and sse4.2Starting with the initial value in |
| _mm_cvt_si2ss⚠ | x86 and sseAlias for |
| _mm_cvt_ss2si⚠ | x86 and sseAlias for |
| _mm_cvtepi8_epi16⚠ | x86 and sse4.1Sign extend packed 8-bit integers in |
| _mm_cvtepi8_epi32⚠ | x86 and sse4.1Sign extend packed 8-bit integers in |
| _mm_cvtepi8_epi64⚠ | x86 and sse4.1Sign extend packed 8-bit integers in the low 8 bytes of |
| _mm_cvtepi16_epi32⚠ | x86 and sse4.1Sign extend packed 16-bit integers in |
| _mm_cvtepi16_epi64⚠ | x86 and sse4.1Sign extend packed 16-bit integers in |
| _mm_cvtepi32_epi64⚠ | x86 and sse4.1Sign extend packed 32-bit integers in |
| _mm_cvtepi32_pd⚠ | x86 and sse2Converts the lower two packed 32-bit integers in |
| _mm_cvtepi32_ps⚠ | x86 and sse2Converts packed 32-bit integers in |
| _mm_cvtepu8_epi16⚠ | x86 and sse4.1Zero-extends packed unsigned 8-bit integers in |
| _mm_cvtepu8_epi32⚠ | x86 and sse4.1Zero-extends packed unsigned 8-bit integers in |
| _mm_cvtepu8_epi64⚠ | x86 and sse4.1Zero-extends packed unsigned 8-bit integers in |
| _mm_cvtepu16_epi32⚠ | x86 and sse4.1Zero-extends packed unsigned 16-bit integers in |
| _mm_cvtepu16_epi64⚠ | x86 and sse4.1Zero-extends packed unsigned 16-bit integers in |
| _mm_cvtepu32_epi64⚠ | x86 and sse4.1Zero-extends packed unsigned 32-bit integers in |
| _mm_cvtpd_epi32⚠ | x86 and sse2Converts packed double-precision (64-bit) floating-point elements in |
| _mm_cvtpd_ps⚠ | x86 and sse2Converts packed double-precision (64-bit) floating-point elements in |
| _mm_cvtps_epi32⚠ | x86 and sse2Converts packed single-precision (32-bit) floating-point elements in |
| _mm_cvtps_pd⚠ | x86 and sse2Converts packed single-precision (32-bit) floating-point elements in |
| _mm_cvtsd_f64⚠ | x86 and sse2Returns the lower double-precision (64-bit) floating-point element of |
| _mm_cvtsd_si32⚠ | x86 and sse2Converts the lower double-precision (64-bit) floating-point element in a to a 32-bit integer. |
| _mm_cvtsd_ss⚠ | x86 and sse2Converts the lower double-precision (64-bit) floating-point element in |
| _mm_cvtsi32_sd⚠ | x86 and sse2Returns |
| _mm_cvtsi32_si128⚠ | x86 and sse2Returns a vector whose lowest element is |
| _mm_cvtsi32_ss⚠ | x86 and sseConverts a 32-bit integer to a 32-bit float. The result vector is the input
vector |
| _mm_cvtsi128_si32⚠ | x86 and sse2Returns the lowest element of |
| _mm_cvtss_f32⚠ | x86 and sseExtracts the lowest 32-bit float from the input vector. |
| _mm_cvtss_sd⚠ | x86 and sse2Converts the lower single-precision (32-bit) floating-point element in |
| _mm_cvtss_si32⚠ | x86 and sseConverts the lowest 32-bit float in the input vector to a 32-bit integer. |
| _mm_cvtt_ss2si⚠ | x86 and sseAlias for |
| _mm_cvttpd_epi32⚠ | x86 and sse2Converts packed double-precision (64-bit) floating-point elements in |
| _mm_cvttps_epi32⚠ | x86 and sse2Converts packed single-precision (32-bit) floating-point elements in |
| _mm_cvttsd_si32⚠ | x86 and sse2Converts the lower double-precision (64-bit) floating-point element in |
| _mm_cvttss_si32⚠ | x86 and sseConverts the lowest 32-bit float in the input vector to a 32-bit integer with truncation. |
| _mm_div_pd⚠ | x86 and sse2Divide packed double-precision (64-bit) floating-point elements in |
| _mm_div_ps⚠ | x86 and sseDivides __m128 vectors. |
| _mm_div_sd⚠ | x86 and sse2Returns a new vector with the low element of |
| _mm_div_ss⚠ | x86 and sseDivides the first component of |
| _mm_dp_pd⚠ | x86 and sse4.1Returns the dot product of two __m128d vectors. |
| _mm_dp_ps⚠ | x86 and sse4.1Returns the dot product of two __m128 vectors. |
| _mm_extract_epi8⚠ | x86 and sse4.1Extracts an 8-bit integer from |
| _mm_extract_epi16⚠ | x86 and sse2Returns the |
| _mm_extract_epi32⚠ | x86 and sse4.1Extracts a 32-bit integer from |
| _mm_extract_ps⚠ | x86 and sse4.1Extracts a single-precision (32-bit) floating-point element from |
| _mm_extract_si64⚠ | x86 and sse4aExtracts the bit range specified by |
| _mm_floor_pd⚠ | x86 and sse4.1Rounds the packed double-precision (64-bit) floating-point elements in |
| _mm_floor_ps⚠ | x86 and sse4.1Rounds the packed single-precision (32-bit) floating-point elements in |
| _mm_floor_sd⚠ | x86 and sse4.1Rounds the lower double-precision (64-bit) floating-point element in |
| _mm_floor_ss⚠ | x86 and sse4.1Rounds the lower single-precision (32-bit) floating-point element in |
| _mm_fmadd_pd⚠ | x86 and fmaMultiplies packed double-precision (64-bit) floating-point elements in |
| _mm_fmadd_ps⚠ | x86 and fmaMultiplies packed single-precision (32-bit) floating-point elements in |
| _mm_fmadd_sd⚠ | x86 and fmaMultiplies the lower double-precision (64-bit) floating-point elements in
|
| _mm_fmadd_ss⚠ | x86 and fmaMultiplies the lower single-precision (32-bit) floating-point elements in
|
| _mm_fmaddsub_pd⚠ | x86 and fmaMultiplies packed double-precision (64-bit) floating-point elements in |
| _mm_fmaddsub_ps⚠ | x86 and fmaMultiplies packed single-precision (32-bit) floating-point elements in |
| _mm_fmsub_pd⚠ | x86 and fmaMultiplies packed double-precision (64-bit) floating-point elements in |
| _mm_fmsub_ps⚠ | x86 and fmaMultiplies packed single-precision (32-bit) floating-point elements in |
| _mm_fmsub_sd⚠ | x86 and fmaMultiplies the lower double-precision (64-bit) floating-point elements in
|
| _mm_fmsub_ss⚠ | x86 and fmaMultiplies the lower single-precision (32-bit) floating-point elements in
|
| _mm_fmsubadd_pd⚠ | x86 and fmaMultiplies packed double-precision (64-bit) floating-point elements in |
| _mm_fmsubadd_ps⚠ | x86 and fmaMultiplies packed single-precision (32-bit) floating-point elements in |
| _mm_fnmadd_pd⚠ | x86 and fmaMultiplies packed double-precision (64-bit) floating-point elements in |
| _mm_fnmadd_ps⚠ | x86 and fmaMultiplies packed single-precision (32-bit) floating-point elements in |
| _mm_fnmadd_sd⚠ | x86 and fmaMultiplies the lower double-precision (64-bit) floating-point elements in
|
| _mm_fnmadd_ss⚠ | x86 and fmaMultiplies the lower single-precision (32-bit) floating-point elements in
|
| _mm_fnmsub_pd⚠ | x86 and fmaMultiplies packed double-precision (64-bit) floating-point elements in |
| _mm_fnmsub_ps⚠ | x86 and fmaMultiplies packed single-precision (32-bit) floating-point elements in |
| _mm_fnmsub_sd⚠ | x86 and fmaMultiplies the lower double-precision (64-bit) floating-point elements in
|
| _mm_fnmsub_ss⚠ | x86 and fmaMultiplies the lower single-precision (32-bit) floating-point elements in
|
| _mm_getcsr⚠ | x86 and sseGets the unsigned 32-bit value of the MXCSR control and status register. |
| _mm_hadd_epi16⚠ | x86 and ssse3Horizontally adds the adjacent pairs of values contained in 2 packed
128-bit vectors of |
| _mm_hadd_epi32⚠ | x86 and ssse3Horizontally adds the adjacent pairs of values contained in 2 packed
128-bit vectors of |
| _mm_hadd_pd⚠ | x86 and sse3Horizontally adds adjacent pairs of double-precision (64-bit)
floating-point elements in |
| _mm_hadd_ps⚠ | x86 and sse3Horizontally adds adjacent pairs of single-precision (32-bit)
floating-point elements in |
| _mm_hadds_epi16⚠ | x86 and ssse3Horizontally adds the adjacent pairs of values contained in 2 packed
128-bit vectors of |
| _mm_hsub_epi16⚠ | x86 and ssse3Horizontally subtracts the adjacent pairs of values contained in 2
packed 128-bit vectors of |
| _mm_hsub_epi32⚠ | x86 and ssse3Horizontally subtracts the adjacent pairs of values contained in 2
packed 128-bit vectors of |
| _mm_hsub_pd⚠ | x86 and sse3Horizontally subtracts adjacent pairs of double-precision (64-bit)
floating-point elements in |
| _mm_hsub_ps⚠ | x86 and sse3Horizontally subtracts adjacent pairs of single-precision (32-bit)
floating-point elements in |
| _mm_hsubs_epi16⚠ | x86 and ssse3Horizontally subtracts the adjacent pairs of values contained in 2
packed 128-bit vectors of |
| _mm_i32gather_epi32⚠ | x86 and avx2Returns values from |
| _mm_i32gather_epi64⚠ | x86 and avx2Returns values from |
| _mm_i32gather_pd⚠ | x86 and avx2Returns values from |
| _mm_i32gather_ps⚠ | x86 and avx2Returns values from |
| _mm_i64gather_epi32⚠ | x86 and avx2Returns values from |
| _mm_i64gather_epi64⚠ | x86 and avx2Returns values from |
| _mm_i64gather_pd⚠ | x86 and avx2Returns values from |
| _mm_i64gather_ps⚠ | x86 and avx2Returns values from |
| _mm_insert_epi8⚠ | x86 and sse4.1Returns a copy of |
| _mm_insert_epi16⚠ | x86 and sse2Returns a new vector where the |
| _mm_insert_epi32⚠ | x86 and sse4.1Returns a copy of |
| _mm_insert_ps⚠ | x86 and sse4.1Select a single value in |
| _mm_insert_si64⚠ | x86 and sse4aInserts the |
| _mm_lddqu_si128⚠ | x86 and sse3Loads 128-bits of integer data from unaligned memory.
This intrinsic may perform better than |
| _mm_lfence⚠ | x86 and sse2Performs a serializing operation on all load-from-memory instructions that were issued prior to this instruction. |
| _mm_load1_pd⚠ | x86 and sse2Loads a double-precision (64-bit) floating-point element from memory into both elements of the returned vector. |
| _mm_load1_ps⚠ | x86 and sseConstruct a |
| _mm_load_pd⚠ | x86 and sse2Loads 128-bits (composed of 2 packed double-precision (64-bit)
floating-point elements) from memory into the returned vector.
|
| _mm_load_pd1⚠ | x86 and sse2Loads a double-precision (64-bit) floating-point element from memory into both elements of the returned vector. |
| _mm_load_ps⚠ | x86 and sseLoads four |
| _mm_load_ps1⚠ | x86 and sseAlias for |
| _mm_load_sd⚠ | x86 and sse2Loads a 64-bit double-precision value to the low element of a 128-bit floating-point vector and clears the upper element. |
| _mm_load_si128⚠ | x86 and sse2Loads 128-bits of integer data from memory into a new vector. |
| _mm_load_ss⚠ | x86 and sseConstruct a |
| _mm_loaddup_pd⚠ | x86 and sse3Loads a double-precision (64-bit) floating-point element from memory into both elements of the returned vector. |
| _mm_loadh_pd⚠ | x86 and sse2Loads a double-precision value into the high-order bits of a 128-bit
vector of |
| _mm_loadl_epi64⚠ | x86 and sse2Loads a 64-bit integer from memory into the first element of the returned vector. |
| _mm_loadl_pd⚠ | x86 and sse2Loads a double-precision value into the low-order bits of a 128-bit
vector of |
| _mm_loadr_pd⚠ | x86 and sse2Loads 2 double-precision (64-bit) floating-point elements from memory into
the returned vector in reverse order. |
| _mm_loadr_ps⚠ | x86 and sseLoads four |
| _mm_loadu_pd⚠ | x86 and sse2Loads 128-bits (composed of 2 packed double-precision (64-bit)
floating-point elements) from memory into the returned vector.
|
| _mm_loadu_ps⚠ | x86 and sseLoads four |
| _mm_loadu_si64⚠ | x86 and sseLoads unaligned 64-bits of integer data from memory into a new vector. |
| _mm_loadu_si128⚠ | x86 and sse2Loads 128-bits of integer data from memory into a new vector. |
| _mm_madd_epi16⚠ | x86 and sse2Multiplies and then horizontally adds signed 16-bit integers in |
| _mm_maddubs_epi16⚠ | x86 and ssse3Multiplies corresponding pairs of packed 8-bit unsigned integer values contained in the first source operand and packed 8-bit signed integer values contained in the second source operand, adds pairs of contiguous products with signed saturation, and writes the 16-bit sums to the corresponding bits in the destination. |
| _mm_mask_i32gather_epi32⚠ | x86 and avx2Returns values from |
| _mm_mask_i32gather_epi64⚠ | x86 and avx2Returns values from |
| _mm_mask_i32gather_pd⚠ | x86 and avx2Returns values from |
| _mm_mask_i32gather_ps⚠ | x86 and avx2Returns values from |
| _mm_mask_i64gather_epi32⚠ | x86 and avx2Returns values from |
| _mm_mask_i64gather_epi64⚠ | x86 and avx2Returns values from |
| _mm_mask_i64gather_pd⚠ | x86 and avx2Returns values from |
| _mm_mask_i64gather_ps⚠ | x86 and avx2Returns values from |
| _mm_maskload_epi32⚠ | x86 and avx2Loads packed 32-bit integers from memory pointed to by |
| _mm_maskload_epi64⚠ | x86 and avx2Loads packed 64-bit integers from memory pointed to by |
| _mm_maskload_pd⚠ | x86 and avxLoads packed double-precision (64-bit) floating-point elements from memory
into result using |
| _mm_maskload_ps⚠ | x86 and avxLoads packed single-precision (32-bit) floating-point elements from memory
into result using |
| _mm_maskmoveu_si128⚠ | x86 and sse2Conditionally stores 8-bit integer elements from |
| _mm_maskstore_epi32⚠ | x86 and avx2Stores packed 32-bit integers from |
| _mm_maskstore_epi64⚠ | x86 and avx2Stores packed 64-bit integers from |
| _mm_maskstore_pd⚠ | x86 and avxStores packed double-precision (64-bit) floating-point elements from |
| _mm_maskstore_ps⚠ | x86 and avxStores packed single-precision (32-bit) floating-point elements from |
| _mm_max_epi8⚠ | x86 and sse4.1Compares packed 8-bit integers in |
| _mm_max_epi16⚠ | x86 and sse2Compares packed 16-bit integers in |
| _mm_max_epi32⚠ | x86 and sse4.1Compares packed 32-bit integers in |
| _mm_max_epu8⚠ | x86 and sse2Compares packed unsigned 8-bit integers in |
| _mm_max_epu16⚠ | x86 and sse4.1Compares packed unsigned 16-bit integers in |
| _mm_max_epu32⚠ | x86 and sse4.1Compares packed unsigned 32-bit integers in |
| _mm_max_pd⚠ | x86 and sse2Returns a new vector with the maximum values from corresponding elements in
|
| _mm_max_ps⚠ | x86 and sseCompares packed single-precision (32-bit) floating-point elements in |
| _mm_max_sd⚠ | x86 and sse2Returns a new vector with the low element of |
| _mm_max_ss⚠ | x86 and sseCompares the first single-precision (32-bit) floating-point element of |
| _mm_mfence⚠ | x86 and sse2Performs a serializing operation on all load-from-memory and store-to-memory instructions that were issued prior to this instruction. |
| _mm_min_epi8⚠ | x86 and sse4.1Compares packed 8-bit integers in |
| _mm_min_epi16⚠ | x86 and sse2Compares packed 16-bit integers in |
| _mm_min_epi32⚠ | x86 and sse4.1Compares packed 32-bit integers in |
| _mm_min_epu8⚠ | x86 and sse2Compares packed unsigned 8-bit integers in |
| _mm_min_epu16⚠ | x86 and sse4.1Compares packed unsigned 16-bit integers in |
| _mm_min_epu32⚠ | x86 and sse4.1Compares packed unsigned 32-bit integers in |
| _mm_min_pd⚠ | x86 and sse2Returns a new vector with the minimum values from corresponding elements in
|
| _mm_min_ps⚠ | x86 and sseCompares packed single-precision (32-bit) floating-point elements in |
| _mm_min_sd⚠ | x86 and sse2Returns a new vector with the low element of |
| _mm_min_ss⚠ | x86 and sseCompares the first single-precision (32-bit) floating-point element of |
| _mm_minpos_epu16⚠ | x86 and sse4.1Finds the minimum unsigned 16-bit element in the 128-bit __m128i vector, returning a vector containing its value in its first position, and its index in its second position; all other elements are set to zero. |
| _mm_move_epi64⚠ | x86 and sse2Returns a vector where the low element is extracted from |
| _mm_move_sd⚠ | x86 and sse2Constructs a 128-bit floating-point vector of |
| _mm_move_ss⚠ | x86 and sseReturns a |
| _mm_movedup_pd⚠ | x86 and sse3Duplicates the low double-precision (64-bit) floating-point element
from |
| _mm_movehdup_ps⚠ | x86 and sse3Duplicates odd-indexed single-precision (32-bit) floating-point elements
from |
| _mm_movehl_ps⚠ | x86 and sseCombines the higher half of |
| _mm_moveldup_ps⚠ | x86 and sse3Duplicates even-indexed single-precision (32-bit) floating-point elements
from |
| _mm_movelh_ps⚠ | x86 and sseCombines the lower half of |
| _mm_movemask_epi8⚠ | x86 and sse2Returns a mask of the most significant bit of each element in |
| _mm_movemask_pd⚠ | x86 and sse2Returns a mask of the most significant bit of each element in |
| _mm_movemask_ps⚠ | x86 and sseReturns a mask of the most significant bit of each element in |
| _mm_mpsadbw_epu8⚠ | x86 and sse4.1Subtracts 8-bit unsigned integer values and computes the absolute values of the differences; sums of the absolute differences are then returned according to the bit fields in the immediate operand. |
| _mm_mul_epi32⚠ | x86 and sse4.1Multiplies the low 32-bit integers from each packed 64-bit
element in |
| _mm_mul_epu32⚠ | x86 and sse2Multiplies the low unsigned 32-bit integers from each packed 64-bit element
in |
| _mm_mul_pd⚠ | x86 and sse2Multiplies packed double-precision (64-bit) floating-point elements in |
| _mm_mul_ps⚠ | x86 and sseMultiplies __m128 vectors. |
| _mm_mul_sd⚠ | x86 and sse2Returns a new vector with the low element of |
| _mm_mul_ss⚠ | x86 and sseMultiplies the first component of |
| _mm_mulhi_epi16⚠ | x86 and sse2Multiplies the packed 16-bit integers in |
| _mm_mulhi_epu16⚠ | x86 and sse2Multiplies the packed unsigned 16-bit integers in |
| _mm_mulhrs_epi16⚠ | x86 and ssse3Multiplies packed 16-bit signed integer values, truncates the 32-bit
product to the 18 most significant bits by right-shifting, rounds the
truncated value by adding 1, and writes bits |
| _mm_mullo_epi16⚠ | x86 and sse2Multiplies the packed 16-bit integers in |
| _mm_mullo_epi32⚠ | x86 and sse4.1Multiplies the packed 32-bit integers in |
| _mm_or_pd⚠ | x86 and sse2Computes the bitwise OR of |
| _mm_or_ps⚠ | x86 and sseBitwise OR of packed single-precision (32-bit) floating-point elements. |
| _mm_or_si128⚠ | x86 and sse2Computes the bitwise OR of 128 bits (representing integer data) in |
| _mm_packs_epi16⚠ | x86 and sse2Converts packed 16-bit integers from |
| _mm_packs_epi32⚠ | x86 and sse2Converts packed 32-bit integers from |
| _mm_packus_epi16⚠ | x86 and sse2Converts packed 16-bit integers from |
| _mm_packus_epi32⚠ | x86 and sse4.1Converts packed 32-bit integers from |
| _mm_pause⚠ | x86 Provides a hint to the processor that the code sequence is a spin-wait loop. |
| _mm_permute_pd⚠ | x86 and avx,sse2Shuffles double-precision (64-bit) floating-point elements in |
| _mm_permute_ps⚠ | x86 and avx,sseShuffles single-precision (32-bit) floating-point elements in |
| _mm_permutevar_pd⚠ | x86 and avxShuffles double-precision (64-bit) floating-point elements in |
| _mm_permutevar_ps⚠ | x86 and avxShuffles single-precision (32-bit) floating-point elements in |
| _mm_prefetch⚠ | x86 and sseFetches the cache line that contains address |
| _mm_rcp_ps⚠ | x86 and sseReturns the approximate reciprocal of packed single-precision (32-bit)
floating-point elements in |
| _mm_rcp_ss⚠ | x86 and sseReturns the approximate reciprocal of the first single-precision
(32-bit) floating-point element in |
| _mm_round_pd⚠ | x86 and sse4.1Rounds the packed double-precision (64-bit) floating-point elements in |
| _mm_round_ps⚠ | x86 and sse4.1Rounds the packed single-precision (32-bit) floating-point elements in |
| _mm_round_sd⚠ | x86 and sse4.1Rounds the lower double-precision (64-bit) floating-point element in |
| _mm_round_ss⚠ | x86 and sse4.1Rounds the lower single-precision (32-bit) floating-point element in |
| _mm_rsqrt_ps⚠ | x86 and sseReturns the approximate reciprocal square root of packed single-precision
(32-bit) floating-point elements in |
| _mm_rsqrt_ss⚠ | x86 and sseReturns the approximate reciprocal square root of the first single-precision
(32-bit) floating-point element in |
| _mm_sad_epu8⚠ | x86 and sse2Sums the absolute differences of packed unsigned 8-bit integers. |
| _mm_set1_epi8⚠ | x86 and sse2Broadcasts 8-bit integer |
| _mm_set1_epi16⚠ | x86 and sse2Broadcasts 16-bit integer |
| _mm_set1_epi32⚠ | x86 and sse2Broadcasts 32-bit integer |
| _mm_set1_epi64x⚠ | x86 and sse2Broadcasts 64-bit integer |
| _mm_set1_pd⚠ | x86 and sse2Broadcasts double-precision (64-bit) floating-point value a to all elements of the return value. |
| _mm_set1_ps⚠ | x86 and sseConstruct a |
| _mm_set_epi8⚠ | x86 and sse2Sets packed 8-bit integers with the supplied values. |
| _mm_set_epi16⚠ | x86 and sse2Sets packed 16-bit integers with the supplied values. |
| _mm_set_epi32⚠ | x86 and sse2Sets packed 32-bit integers with the supplied values. |
| _mm_set_epi64x⚠ | x86 and sse2Sets packed 64-bit integers with the supplied values, from highest to lowest. |
| _mm_set_pd⚠ | x86 and sse2Sets packed double-precision (64-bit) floating-point elements in the return value with the supplied values. |
| _mm_set_pd1⚠ | x86 and sse2Broadcasts double-precision (64-bit) floating-point value a to all elements of the return value. |
| _mm_set_ps⚠ | x86 and sseConstruct a |
| _mm_set_ps1⚠ | x86 and sseAlias for |
| _mm_set_sd⚠ | x86 and sse2Copies double-precision (64-bit) floating-point element |
| _mm_set_ss⚠ | x86 and sseConstruct a |
| _mm_setcsr⚠ | x86 and sseSets the MXCSR register with the 32-bit unsigned integer value. |
| _mm_setr_epi8⚠ | x86 and sse2Sets packed 8-bit integers with the supplied values in reverse order. |
| _mm_setr_epi16⚠ | x86 and sse2Sets packed 16-bit integers with the supplied values in reverse order. |
| _mm_setr_epi32⚠ | x86 and sse2Sets packed 32-bit integers with the supplied values in reverse order. |
| _mm_setr_pd⚠ | x86 and sse2Sets packed double-precision (64-bit) floating-point elements in the return value with the supplied values in reverse order. |
| _mm_setr_ps⚠ | x86 and sseConstruct a |
| _mm_setzero_pd⚠ | x86 and sse2Returns packed double-precision (64-bit) floating-point elements with all zeros. |
| _mm_setzero_ps⚠ | x86 and sseConstruct a |
| _mm_setzero_si128⚠ | x86 and sse2Returns a vector with all elements set to zero. |
| _mm_sfence⚠ | x86 and ssePerforms a serializing operation on all store-to-memory instructions that were issued prior to this instruction. |
| _mm_sha1msg1_epu32⚠ | x86 and shaPerforms an intermediate calculation for the next four SHA1 message values
(unsigned 32-bit integers) using previous message values from |
| _mm_sha1msg2_epu32⚠ | x86 and shaPerforms the final calculation for the next four SHA1 message values
(unsigned 32-bit integers) using the intermediate result in |
| _mm_sha1nexte_epu32⚠ | x86 and shaCalculate SHA1 state variable E after four rounds of operation from the
current SHA1 state variable |
| _mm_sha1rnds4_epu32⚠ | x86 and shaPerforms four rounds of SHA1 operation using an initial SHA1 state (A,B,C,D)
from |
| _mm_sha256msg1_epu32⚠ | x86 and shaPerforms an intermediate calculation for the next four SHA256 message values
(unsigned 32-bit integers) using previous message values from |
| _mm_sha256msg2_epu32⚠ | x86 and shaPerforms the final calculation for the next four SHA256 message values
(unsigned 32-bit integers) using previous message values from |
| _mm_sha256rnds2_epu32⚠ | x86 and shaPerforms 2 rounds of SHA256 operation using an initial SHA256 state
(C,D,G,H) from |
| _mm_shuffle_epi8⚠ | x86 and ssse3Shuffles bytes from |
| _mm_shuffle_epi32⚠ | x86 and sse2Shuffles 32-bit integers in |
| _mm_shuffle_pd⚠ | x86 and sse2Constructs a 128-bit floating-point vector of |
| _mm_shuffle_ps⚠ | x86 and sseShuffles packed single-precision (32-bit) floating-point elements in |
| _mm_shufflehi_epi16⚠ | x86 and sse2Shuffles 16-bit integers in the high 64 bits of |
| _mm_shufflelo_epi16⚠ | x86 and sse2Shuffles 16-bit integers in the low 64 bits of |
| _mm_sign_epi8⚠ | x86 and ssse3Negates packed 8-bit integers in |
| _mm_sign_epi16⚠ | x86 and ssse3Negates packed 16-bit integers in |
| _mm_sign_epi32⚠ | x86 and ssse3Negates packed 32-bit integers in |
| _mm_sll_epi16⚠ | x86 and sse2Shifts packed 16-bit integers in |
| _mm_sll_epi32⚠ | x86 and sse2Shifts packed 32-bit integers in |
| _mm_sll_epi64⚠ | x86 and sse2Shifts packed 64-bit integers in |
| _mm_slli_epi16⚠ | x86 and sse2Shifts packed 16-bit integers in |
| _mm_slli_epi32⚠ | x86 and sse2Shifts packed 32-bit integers in |
| _mm_slli_epi64⚠ | x86 and sse2Shifts packed 64-bit integers in |
| _mm_slli_si128⚠ | x86 and sse2Shifts |
| _mm_sllv_epi32⚠ | x86 and avx2Shifts packed 32-bit integers in |
| _mm_sllv_epi64⚠ | x86 and avx2Shifts packed 64-bit integers in |
| _mm_sqrt_pd⚠ | x86 and sse2Returns a new vector with the square root of each of the values in |
| _mm_sqrt_ps⚠ | x86 and sseReturns the square root of packed single-precision (32-bit) floating-point
elements in |
| _mm_sqrt_sd⚠ | x86 and sse2Returns a new vector with the low element of |
| _mm_sqrt_ss⚠ | x86 and sseReturns the square root of the first single-precision (32-bit)
floating-point element in |
| _mm_sra_epi16⚠ | x86 and sse2Shifts packed 16-bit integers in |
| _mm_sra_epi32⚠ | x86 and sse2Shifts packed 32-bit integers in |
| _mm_srai_epi16⚠ | x86 and sse2Shifts packed 16-bit integers in |
| _mm_srai_epi32⚠ | x86 and sse2Shifts packed 32-bit integers in |
| _mm_srav_epi32⚠ | x86 and avx2Shifts packed 32-bit integers in |
| _mm_srl_epi16⚠ | x86 and sse2Shifts packed 16-bit integers in |
| _mm_srl_epi32⚠ | x86 and sse2Shifts packed 32-bit integers in |
| _mm_srl_epi64⚠ | x86 and sse2Shifts packed 64-bit integers in |
| _mm_srli_epi16⚠ | x86 and sse2Shifts packed 16-bit integers in |
| _mm_srli_epi32⚠ | x86 and sse2Shifts packed 32-bit integers in |
| _mm_srli_epi64⚠ | x86 and sse2Shifts packed 64-bit integers in |
| _mm_srli_si128⚠ | x86 and sse2Shifts |
| _mm_srlv_epi32⚠ | x86 and avx2Shifts packed 32-bit integers in |
| _mm_srlv_epi64⚠ | x86 and avx2Shifts packed 64-bit integers in |
| _mm_store1_pd⚠ | x86 and sse2Stores the lower double-precision (64-bit) floating-point element from |
| _mm_store1_ps⚠ | x86 and sseStores the lowest 32-bit float of |
| _mm_store_pd⚠ | x86 and sse2Stores 128-bits (composed of 2 packed double-precision (64-bit)
floating-point elements) from |
| _mm_store_pd1⚠ | x86 and sse2Stores the lower double-precision (64-bit) floating-point element from |
| _mm_store_ps⚠ | x86 and sseStores four 32-bit floats into aligned memory. |
| _mm_store_ps1⚠ | x86 and sseAlias for |
| _mm_store_sd⚠ | x86 and sse2Stores the lower 64 bits of a 128-bit vector of |
| _mm_store_si128⚠ | x86 and sse2Stores 128-bits of integer data from |
| _mm_store_ss⚠ | x86 and sseStores the lowest 32-bit float of |
| _mm_storeh_pd⚠ | x86 and sse2Stores the upper 64 bits of a 128-bit vector of |
| _mm_storel_epi64⚠ | x86 and sse2Stores the lower 64-bit integer |
| _mm_storel_pd⚠ | x86 and sse2Stores the lower 64 bits of a 128-bit vector of |
| _mm_storer_pd⚠ | x86 and sse2Stores 2 double-precision (64-bit) floating-point elements from |
| _mm_storer_ps⚠ | x86 and sseStores four 32-bit floats into aligned memory in reverse order. |
| _mm_storeu_pd⚠ | x86 and sse2Stores 128-bits (composed of 2 packed double-precision (64-bit)
floating-point elements) from |
| _mm_storeu_ps⚠ | x86 and sseStores four 32-bit floats into memory. There are no restrictions on memory
alignment. For aligned memory |
| _mm_storeu_si128⚠ | x86 and sse2Stores 128-bits of integer data from |
| _mm_stream_pd⚠ | x86 and sse2Stores a 128-bit floating point vector of |
| _mm_stream_ps⚠ | x86 and sseStores |
| _mm_stream_sd⚠ | x86 and sse4aNon-temporal store of |
| _mm_stream_si32⚠ | x86 and sse2Stores a 32-bit integer value in the specified memory location. To minimize caching, the data is flagged as non-temporal (unlikely to be used again soon). |
| _mm_stream_si128⚠ | x86 and sse2Stores a 128-bit integer vector to a 128-bit aligned memory location. To minimize caching, the data is flagged as non-temporal (unlikely to be used again soon). |
| _mm_stream_ss⚠ | x86 and sse4aNon-temporal store of |
| _mm_sub_epi8⚠ | x86 and sse2Subtracts packed 8-bit integers in |
| _mm_sub_epi16⚠ | x86 and sse2Subtracts packed 16-bit integers in |
| _mm_sub_epi32⚠ | x86 and sse2Subtracts packed 32-bit integers in |
| _mm_sub_epi64⚠ | x86 and sse2Subtracts packed 64-bit integers in |
| _mm_sub_pd⚠ | x86 and sse2Subtracts packed double-precision (64-bit) floating-point elements in |
| _mm_sub_ps⚠ | x86 and sseSubtracts __m128 vectors. |
| _mm_sub_sd⚠ | x86 and sse2Returns a new vector with the low element of |
| _mm_sub_ss⚠ | x86 and sseSubtracts the first component of |
| _mm_subs_epi8⚠ | x86 and sse2Subtracts packed 8-bit integers in |
| _mm_subs_epi16⚠ | x86 and sse2Subtracts packed 16-bit integers in |
| _mm_subs_epu8⚠ | x86 and sse2Subtracts packed unsigned 8-bit integers in |
| _mm_subs_epu16⚠ | x86 and sse2Subtracts packed unsigned 16-bit integers in |
| _mm_test_all_ones⚠ | x86 and sse4.1Tests whether the specified bits in |
| _mm_test_all_zeros⚠ | x86 and sse4.1Tests whether the specified bits in a 128-bit integer vector are all zeros. |
| _mm_test_mix_ones_zeros⚠ | x86 and sse4.1Tests whether the specified bits in a 128-bit integer vector are neither all zeros nor all ones. |
| _mm_testc_pd⚠ | x86 and avxComputes the bitwise AND of 128 bits (representing double-precision (64-bit)
floating-point elements) in |
| _mm_testc_ps⚠ | x86 and avxComputes the bitwise AND of 128 bits (representing single-precision (32-bit)
floating-point elements) in |
| _mm_testc_si128⚠ | x86 and sse4.1Tests whether the specified bits in a 128-bit integer vector are all ones. |
| _mm_testnzc_pd⚠ | x86 and avxComputes the bitwise AND of 128 bits (representing double-precision (64-bit)
floating-point elements) in |
| _mm_testnzc_ps⚠ | x86 and avxComputes the bitwise AND of 128 bits (representing single-precision (32-bit)
floating-point elements) in |
| _mm_testnzc_si128⚠ | x86 and sse4.1Tests whether the specified bits in a 128-bit integer vector are neither all zeros nor all ones. |
| _mm_testz_pd⚠ | x86 and avxComputes the bitwise AND of 128 bits (representing double-precision (64-bit)
floating-point elements) in |
| _mm_testz_ps⚠ | x86 and avxComputes the bitwise AND of 128 bits (representing single-precision (32-bit)
floating-point elements) in |
| _mm_testz_si128⚠ | x86 and sse4.1Tests whether the specified bits in a 128-bit integer vector are all zeros. |
| _mm_tzcnt_32⚠ | x86 and bmi1Counts the number of trailing least significant zero bits. |
| _mm_ucomieq_sd⚠ | x86 and sse2Compares the lower element of |
| _mm_ucomieq_ss⚠ | x86 and sseCompares two 32-bit floats from the low-order bits of |
| _mm_ucomige_sd⚠ | x86 and sse2Compares the lower element of |
| _mm_ucomige_ss⚠ | x86 and sseCompares two 32-bit floats from the low-order bits of |
| _mm_ucomigt_sd⚠ | x86 and sse2Compares the lower element of |
| _mm_ucomigt_ss⚠ | x86 and sseCompares two 32-bit floats from the low-order bits of |
| _mm_ucomile_sd⚠ | x86 and sse2Compares the lower element of |
| _mm_ucomile_ss⚠ | x86 and sseCompares two 32-bit floats from the low-order bits of |
| _mm_ucomilt_sd⚠ | x86 and sse2Compares the lower element of |
| _mm_ucomilt_ss⚠ | x86 and sseCompares two 32-bit floats from the low-order bits of |
| _mm_ucomineq_sd⚠ | x86 and sse2Compares the lower element of |
| _mm_ucomineq_ss⚠ | x86 and sseCompares two 32-bit floats from the low-order bits of |
| _mm_undefined_pd⚠ | x86 and sse2Returns vector of type __m128d with undefined elements. |
| _mm_undefined_ps⚠ | x86 and sseReturns vector of type __m128 with undefined elements. |
| _mm_undefined_si128⚠ | x86 and sse2Returns vector of type __m128i with undefined elements. |
| _mm_unpackhi_epi8⚠ | x86 and sse2Unpacks and interleaves 8-bit integers from the high half of |
| _mm_unpackhi_epi16⚠ | x86 and sse2Unpacks and interleaves 16-bit integers from the high half of |
| _mm_unpackhi_epi32⚠ | x86 and sse2Unpacks and interleaves 32-bit integers from the high half of |
| _mm_unpackhi_epi64⚠ | x86 and sse2Unpacks and interleaves 64-bit integers from the high half of |
| _mm_unpackhi_pd⚠ | x86 and sse2The resulting |
| _mm_unpackhi_ps⚠ | x86 and sseUnpacks and interleaves single-precision (32-bit) floating-point elements
from the higher half of |
| _mm_unpacklo_epi8⚠ | x86 and sse2Unpacks and interleaves 8-bit integers from the low half of |
| _mm_unpacklo_epi16⚠ | x86 and sse2Unpacks and interleaves 16-bit integers from the low half of |
| _mm_unpacklo_epi32⚠ | x86 and sse2Unpacks and interleaves 32-bit integers from the low half of |
| _mm_unpacklo_epi64⚠ | x86 and sse2Unpacks and interleaves 64-bit integers from the low half of |
| _mm_unpacklo_pd⚠ | x86 and sse2The resulting |
| _mm_unpacklo_ps⚠ | x86 and sseUnpacks and interleaves single-precision (32-bit) floating-point elements
from the lower half of |
| _mm_xor_pd⚠ | x86 and sse2Computes the bitwise XOR of |
| _mm_xor_ps⚠ | x86 and sseBitwise exclusive OR of packed single-precision (32-bit) floating-point elements. |
| _mm_xor_si128⚠ | x86 and sse2Computes the bitwise XOR of 128 bits (representing integer data) in |
| _mulx_u32⚠ | x86 and bmi2Unsigned multiply without affecting flags. |
| _pdep_u32⚠ | x86 and bmi2Scatters contiguous low-order bits of |
| _pext_u32⚠ | x86 and bmi2Gathers the bits of |
| _popcnt32⚠ | x86 and popcntCounts the bits that are set. |
| _rdrand16_step⚠ | x86 and rdrandRead a hardware generated 16-bit random value and store the result in val. Returns 1 if a random value was generated, and 0 otherwise. |
| _rdrand32_step⚠ | x86 and rdrandRead a hardware generated 32-bit random value and store the result in val. Returns 1 if a random value was generated, and 0 otherwise. |
| _rdseed16_step⚠ | x86 and rdseedRead a 16-bit NIST SP800-90B and SP800-90C compliant random value and store in val. Return 1 if a random value was generated, and 0 otherwise. |
| _rdseed32_step⚠ | x86 and rdseedRead a 32-bit NIST SP800-90B and SP800-90C compliant random value and store in val. Return 1 if a random value was generated, and 0 otherwise. |
| _rdtsc⚠ | x86 Reads the current value of the processor’s time-stamp counter. |
| _subborrow_u32⚠ | x86 Subtracts unsigned 32-bit integer `b` and a borrow-in from `a`, storing the result in `out` and returning the borrow-out. |
| _t1mskc_u32⚠ | x86 and tbmClears all bits below the least significant zero of `x` and sets all other bits. |
| _t1mskc_u64⚠ | x86 and tbmClears all bits below the least significant zero of `x` and sets all other bits. |
| _tzcnt_u32⚠ | x86 and bmi1Counts the number of trailing least significant zero bits. |
| _tzmsk_u32⚠ | x86 and tbmSets all bits below the least significant one of `x` and clears all other bits. |
| _tzmsk_u64⚠ | x86 and tbmSets all bits below the least significant one of `x` and clears all other bits. |
| _xgetbv⚠ | x86 and xsaveReads the contents of the extended control register (`XCR`) specified in `xcr_no`. |
| _xrstor⚠ | x86 and xsavePerforms a full or partial restore of the enabled processor states using the state information stored in memory at `mem_addr`. |
| _xrstors⚠ | x86 and xsave,xsavesPerforms a full or partial restore of the enabled processor states using the state information stored in memory at `mem_addr`. |
| _xsave⚠ | x86 and xsavePerforms a full or partial save of the enabled processor states to memory at `mem_addr`. |
| _xsavec⚠ | x86 and xsave,xsavecPerforms a full or partial save of the enabled processor states to memory at `mem_addr`. |
| _xsaveopt⚠ | x86 and xsave,xsaveoptPerforms a full or partial save of the enabled processor states to memory at `mem_addr`. |
| _xsaves⚠ | x86 and xsave,xsavesPerforms a full or partial save of the enabled processor states to memory at `mem_addr`. |
| _xsetbv⚠ | x86 and xsaveCopies 64 bits from `val` to the extended control register (`XCR`) specified by `a`. |
| _MM_SHUFFLE | Experimentalx86 A utility function for creating masks to use with Intel shuffle and permute intrinsics. |
| _bittest⚠ | Experimentalx86 Returns the bit in position `b` of the memory addressed by `p`. |
| _bittestandcomplement⚠ | Experimentalx86 Returns the bit in position `b` of the memory addressed by `p`, then inverts that bit. |
| _bittestandreset⚠ | Experimentalx86 Returns the bit in position `b` of the memory addressed by `p`, then resets that bit to 0. |
| _bittestandset⚠ | Experimentalx86 Returns the bit in position `b` of the memory addressed by `p`, then sets that bit to 1. |
| _kand_mask16⚠ | Experimentalx86 and avx512fCompute the bitwise AND of 16-bit masks a and b, and store the result in k. |
| _kandn_mask16⚠ | Experimentalx86 and avx512fCompute the bitwise NOT of 16-bit masks a and then AND with b, and store the result in k. |
| _knot_mask16⚠ | Experimentalx86 and avx512fCompute the bitwise NOT of 16-bit mask a, and store the result in k. |
| _kor_mask16⚠ | Experimentalx86 and avx512fCompute the bitwise OR of 16-bit masks a and b, and store the result in k. |
| _kxnor_mask16⚠ | Experimentalx86 and avx512fCompute the bitwise XNOR of 16-bit masks a and b, and store the result in k. |
| _kxor_mask16⚠ | Experimentalx86 and avx512fCompute the bitwise XOR of 16-bit masks a and b, and store the result in k. |
| _mm256_cvtph_ps⚠ | Experimentalx86 and f16cConverts the 8 x 16-bit half-precision float values in the 128-bit vector `a` into 8 x 32-bit float values stored in a 256-bit wide vector. |
| _mm256_cvtps_ph⚠ | Experimentalx86 and f16cConverts the 8 x 32-bit float values in the 256-bit vector `a` into 8 x 16-bit half-precision float values stored in a 128-bit wide vector. |
| _mm256_madd52hi_epu64⚠ | Experimentalx86 and avx512ifma,avx512vlMultiply packed unsigned 52-bit integers in each 64-bit element of `b` and `c` to form a 104-bit intermediate result. Add the high 52-bit unsigned integer from the intermediate result with the corresponding unsigned 64-bit integer in `a`, and store the results in `dst`. |
| _mm256_madd52lo_epu64⚠ | Experimentalx86 and avx512ifma,avx512vlMultiply packed unsigned 52-bit integers in each 64-bit element of `b` and `c` to form a 104-bit intermediate result. Add the low 52-bit unsigned integer from the intermediate result with the corresponding unsigned 64-bit integer in `a`, and store the results in `dst`. |
| _mm512_abs_epi32⚠ | Experimentalx86 and avx512fComputes the absolute values of packed 32-bit integers in a, and store the results in dst. |
| _mm512_abs_epi64⚠ | Experimentalx86 and avx512fCompute the absolute value of packed signed 64-bit integers in a, and store the unsigned results in dst. |
| _mm512_abs_pd⚠ | Experimentalx86 and avx512fFinds the absolute value of each packed double-precision (64-bit) floating-point element in v2, storing the results in dst. |
| _mm512_abs_ps⚠ | Experimentalx86 and avx512fFinds the absolute value of each packed single-precision (32-bit) floating-point element in v2, storing the results in dst. |
| _mm512_add_epi32⚠ | Experimentalx86 and avx512fAdd packed 32-bit integers in a and b, and store the results in dst. |
| _mm512_add_epi64⚠ | Experimentalx86 and avx512fAdd packed 64-bit integers in a and b, and store the results in dst. |
| _mm512_add_pd⚠ | Experimentalx86 and avx512fAdd packed double-precision (64-bit) floating-point elements in a and b, and store the results in dst. |
| _mm512_add_ps⚠ | Experimentalx86 and avx512fAdd packed single-precision (32-bit) floating-point elements in a and b, and store the results in dst. |
| _mm512_add_round_pd⚠ | Experimentalx86 and avx512fAdd packed double-precision (64-bit) floating-point elements in a and b, and store the results in dst. |
| _mm512_add_round_ps⚠ | Experimentalx86 and avx512fAdd packed single-precision (32-bit) floating-point elements in a and b, and store the results in dst. |
| _mm512_and_epi32⚠ | Experimentalx86 and avx512fCompute the bitwise AND of 512 bits (composed of packed 32-bit integers) in a and b, and store the results in dst. |
| _mm512_and_epi64⚠ | Experimentalx86 and avx512fCompute the bitwise AND of 512 bits (composed of packed 64-bit integers) in a and b, and store the results in dst. |
| _mm512_and_si512⚠ | Experimentalx86 and avx512fCompute the bitwise AND of 512 bits (representing integer data) in a and b, and store the result in dst. |
| _mm512_cmp_epi32_mask⚠ | Experimentalx86 and avx512fCompare packed signed 32-bit integers in a and b based on the comparison operand specified by op. |
| _mm512_cmp_epi64_mask⚠ | Experimentalx86 and avx512fCompare packed signed 64-bit integers in a and b based on the comparison operand specified by op. |
| _mm512_cmp_epu32_mask⚠ | Experimentalx86 and avx512fCompare packed unsigned 32-bit integers in a and b based on the comparison operand specified by op. |
| _mm512_cmp_epu64_mask⚠ | Experimentalx86 and avx512fCompare packed unsigned 64-bit integers in a and b based on the comparison operand specified by op. |
| _mm512_cmp_pd_mask⚠ | Experimentalx86 and avx512fCompare packed double-precision (64-bit) floating-point elements in a and b based on the comparison operand specified by op. |
| _mm512_cmp_ps_mask⚠ | Experimentalx86 and avx512fCompare packed single-precision (32-bit) floating-point elements in a and b based on the comparison operand specified by op. |
| _mm512_cmp_round_pd_mask⚠ | Experimentalx86 and avx512fCompare packed double-precision (64-bit) floating-point elements in a and b based on the comparison operand specified by op. |
| _mm512_cmp_round_ps_mask⚠ | Experimentalx86 and avx512fCompare packed single-precision (32-bit) floating-point elements in a and b based on the comparison operand specified by op. |
| _mm512_cmpeq_epi32_mask⚠ | Experimentalx86 and avx512fCompare packed signed 32-bit integers in a and b for equality, and store the results in a mask vector. |
| _mm512_cmpeq_epi64_mask⚠ | Experimentalx86 and avx512fCompare packed signed 64-bit integers in a and b for equality, and store the results in a mask vector. |
| _mm512_cmpeq_epu32_mask⚠ | Experimentalx86 and avx512fCompare packed unsigned 32-bit integers in a and b for equality, and store the results in a mask vector. |
| _mm512_cmpeq_epu64_mask⚠ | Experimentalx86 and avx512fCompare packed unsigned 64-bit integers in a and b for equality, and store the results in a mask vector. |
| _mm512_cmpeq_pd_mask⚠ | Experimentalx86 and avx512fCompare packed double-precision (64-bit) floating-point elements in a and b for equality, and store the results in a mask vector. |
| _mm512_cmpeq_ps_mask⚠ | Experimentalx86 and avx512fCompare packed single-precision (32-bit) floating-point elements in a and b for equality, and store the results in a mask vector. |
| _mm512_cmpge_epi32_mask⚠ | Experimentalx86 and avx512fCompare packed signed 32-bit integers in a and b for greater-than-or-equal, and store the results in a mask vector. |
| _mm512_cmpge_epi64_mask⚠ | Experimentalx86 and avx512fCompare packed signed 64-bit integers in a and b for greater-than-or-equal, and store the results in a mask vector. |
| _mm512_cmpge_epu32_mask⚠ | Experimentalx86 and avx512fCompare packed unsigned 32-bit integers in a and b for greater-than-or-equal, and store the results in a mask vector. |
| _mm512_cmpge_epu64_mask⚠ | Experimentalx86 and avx512fCompare packed unsigned 64-bit integers in a and b for greater-than-or-equal, and store the results in a mask vector. |
| _mm512_cmpgt_epi32_mask⚠ | Experimentalx86 and avx512fCompare packed signed 32-bit integers in a and b for greater-than, and store the results in a mask vector. |
| _mm512_cmpgt_epi64_mask⚠ | Experimentalx86 and avx512fCompare packed signed 64-bit integers in a and b for greater-than, and store the results in a mask vector. |
| _mm512_cmpgt_epu32_mask⚠ | Experimentalx86 and avx512fCompare packed unsigned 32-bit integers in a and b for greater-than, and store the results in a mask vector. |
| _mm512_cmpgt_epu64_mask⚠ | Experimentalx86 and avx512fCompare packed unsigned 64-bit integers in a and b for greater-than, and store the results in a mask vector. |
| _mm512_cmple_epi32_mask⚠ | Experimentalx86 and avx512fCompare packed signed 32-bit integers in a and b for less-than-or-equal, and store the results in a mask vector. |
| _mm512_cmple_epi64_mask⚠ | Experimentalx86 and avx512fCompare packed signed 64-bit integers in a and b for less-than-or-equal, and store the results in a mask vector. |
| _mm512_cmple_epu32_mask⚠ | Experimentalx86 and avx512fCompare packed unsigned 32-bit integers in a and b for less-than-or-equal, and store the results in a mask vector. |
| _mm512_cmple_epu64_mask⚠ | Experimentalx86 and avx512fCompare packed unsigned 64-bit integers in a and b for less-than-or-equal, and store the results in a mask vector. |
| _mm512_cmple_pd_mask⚠ | Experimentalx86 and avx512fCompare packed double-precision (64-bit) floating-point elements in a and b for less-than-or-equal, and store the results in a mask vector. |
| _mm512_cmple_ps_mask⚠ | Experimentalx86 and avx512fCompare packed single-precision (32-bit) floating-point elements in a and b for less-than-or-equal, and store the results in a mask vector. |
| _mm512_cmplt_epi32_mask⚠ | Experimentalx86 and avx512fCompare packed signed 32-bit integers in a and b for less-than, and store the results in a mask vector. |
| _mm512_cmplt_epi64_mask⚠ | Experimentalx86 and avx512fCompare packed signed 64-bit integers in a and b for less-than, and store the results in a mask vector. |
| _mm512_cmplt_epu32_mask⚠ | Experimentalx86 and avx512fCompare packed unsigned 32-bit integers in a and b for less-than, and store the results in a mask vector. |
| _mm512_cmplt_epu64_mask⚠ | Experimentalx86 and avx512fCompare packed unsigned 64-bit integers in a and b for less-than, and store the results in a mask vector. |
| _mm512_cmplt_pd_mask⚠ | Experimentalx86 and avx512fCompare packed double-precision (64-bit) floating-point elements in a and b for less-than, and store the results in a mask vector. |
| _mm512_cmplt_ps_mask⚠ | Experimentalx86 and avx512fCompare packed single-precision (32-bit) floating-point elements in a and b for less-than, and store the results in a mask vector. |
| _mm512_cmpneq_epi32_mask⚠ | Experimentalx86 and avx512fCompare packed signed 32-bit integers in a and b for inequality, and store the results in a mask vector. |
| _mm512_cmpneq_epi64_mask⚠ | Experimentalx86 and avx512fCompare packed signed 64-bit integers in a and b for inequality, and store the results in a mask vector. |
| _mm512_cmpneq_epu32_mask⚠ | Experimentalx86 and avx512fCompare packed unsigned 32-bit integers in a and b for inequality, and store the results in a mask vector. |
| _mm512_cmpneq_epu64_mask⚠ | Experimentalx86 and avx512fCompare packed unsigned 64-bit integers in a and b for inequality, and store the results in a mask vector. |
| _mm512_cmpneq_pd_mask⚠ | Experimentalx86 and avx512fCompare packed double-precision (64-bit) floating-point elements in a and b for inequality, and store the results in a mask vector. |
| _mm512_cmpneq_ps_mask⚠ | Experimentalx86 and avx512fCompare packed single-precision (32-bit) floating-point elements in a and b for inequality, and store the results in a mask vector. |
| _mm512_cmpnle_pd_mask⚠ | Experimentalx86 and avx512fCompare packed double-precision (64-bit) floating-point elements in a and b for not-less-than-or-equal, and store the results in a mask vector. |
| _mm512_cmpnle_ps_mask⚠ | Experimentalx86 and avx512fCompare packed single-precision (32-bit) floating-point elements in a and b for not-less-than-or-equal, and store the results in a mask vector. |
| _mm512_cmpnlt_pd_mask⚠ | Experimentalx86 and avx512fCompare packed double-precision (64-bit) floating-point elements in a and b for not-less-than, and store the results in a mask vector. |
| _mm512_cmpnlt_ps_mask⚠ | Experimentalx86 and avx512fCompare packed single-precision (32-bit) floating-point elements in a and b for not-less-than, and store the results in a mask vector. |
| _mm512_cmpord_pd_mask⚠ | Experimentalx86 and avx512fCompare packed double-precision (64-bit) floating-point elements in a and b to see if neither is NaN, and store the results in a mask vector. |
| _mm512_cmpord_ps_mask⚠ | Experimentalx86 and avx512fCompare packed single-precision (32-bit) floating-point elements in a and b to see if neither is NaN, and store the results in a mask vector. |
| _mm512_cmpunord_pd_mask⚠ | Experimentalx86 and avx512fCompare packed double-precision (64-bit) floating-point elements in a and b to see if either is NaN, and store the results in a mask vector. |
| _mm512_cmpunord_ps_mask⚠ | Experimentalx86 and avx512fCompare packed single-precision (32-bit) floating-point elements in a and b to see if either is NaN, and store the results in a mask vector. |
| _mm512_cvt_roundps_epi32⚠ | Experimentalx86 and avx512fConvert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers, and store the results in dst. |
| _mm512_cvt_roundps_epu32⚠ | Experimentalx86 and avx512fConvert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers, and store the results in dst. |
| _mm512_cvt_roundps_pd⚠ | Experimentalx86 and avx512fConvert packed single-precision (32-bit) floating-point elements in a to packed double-precision (64-bit) floating-point elements, and store the results in dst. Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_cvtps_epi32⚠ | Experimentalx86 and avx512fConvert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers, and store the results in dst. |
| _mm512_cvtps_epu32⚠ | Experimentalx86 and avx512fConvert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers, and store the results in dst. |
| _mm512_cvtps_pd⚠ | Experimentalx86 and avx512fConvert packed single-precision (32-bit) floating-point elements in a to packed double-precision (64-bit) floating-point elements, and store the results in dst. |
| _mm512_cvtt_roundpd_epi32⚠ | Experimentalx86 and avx512fConvert packed double-precision (64-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst. Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_cvtt_roundpd_epu32⚠ | Experimentalx86 and avx512fConvert packed double-precision (64-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst. Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_cvtt_roundps_epi32⚠ | Experimentalx86 and avx512fConvert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst. Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_cvtt_roundps_epu32⚠ | Experimentalx86 and avx512fConvert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst. Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_cvttpd_epi32⚠ | Experimentalx86 and avx512fConvert packed double-precision (64-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst. |
| _mm512_cvttpd_epu32⚠ | Experimentalx86 and avx512fConvert packed double-precision (64-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst. |
| _mm512_cvttps_epi32⚠ | Experimentalx86 and avx512fConvert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst. |
| _mm512_cvttps_epu32⚠ | Experimentalx86 and avx512fConvert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst. |
| _mm512_div_pd⚠ | Experimentalx86 and avx512fDivide packed double-precision (64-bit) floating-point elements in a by packed elements in b, and store the results in dst. |
| _mm512_div_ps⚠ | Experimentalx86 and avx512fDivide packed single-precision (32-bit) floating-point elements in a by packed elements in b, and store the results in dst. |
| _mm512_div_round_pd⚠ | Experimentalx86 and avx512fDivide packed double-precision (64-bit) floating-point elements in a by packed elements in b, and store the results in dst. |
| _mm512_div_round_ps⚠ | Experimentalx86 and avx512fDivide packed single-precision (32-bit) floating-point elements in a by packed elements in b, and store the results in dst. |
| _mm512_extractf32x4_ps⚠ | Experimentalx86 and avx512fExtract 128 bits (composed of 4 packed single-precision (32-bit) floating-point elements) from a, selected with imm8, and store the result in dst. |
| _mm512_fmadd_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst. |
| _mm512_fmadd_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst. |
| _mm512_fmadd_round_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst. |
| _mm512_fmadd_round_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst. |
| _mm512_fmaddsub_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst. |
| _mm512_fmaddsub_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst. |
| _mm512_fmaddsub_round_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst. |
| _mm512_fmaddsub_round_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst. |
| _mm512_fmsub_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst. |
| _mm512_fmsub_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst. |
| _mm512_fmsub_round_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst. |
| _mm512_fmsub_round_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst. |
| _mm512_fmsubadd_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst. |
| _mm512_fmsubadd_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst. |
| _mm512_fmsubadd_round_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst. |
| _mm512_fmsubadd_round_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst. |
| _mm512_fnmadd_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst. |
| _mm512_fnmadd_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst. |
| _mm512_fnmadd_round_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst. |
| _mm512_fnmadd_round_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst. |
| _mm512_fnmsub_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst. |
| _mm512_fnmsub_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst. |
| _mm512_fnmsub_round_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst. |
| _mm512_fnmsub_round_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst. |
| _mm512_getexp_pd⚠ | Experimentalx86 and avx512fConvert the exponent of each packed double-precision (64-bit) floating-point element in a to a double-precision (64-bit) floating-point number representing the integer exponent, and store the results in dst. This intrinsic essentially calculates floor(log2(x)) for each element. |
| _mm512_getexp_ps⚠ | Experimentalx86 and avx512fConvert the exponent of each packed single-precision (32-bit) floating-point element in a to a single-precision (32-bit) floating-point number representing the integer exponent, and store the results in dst. This intrinsic essentially calculates floor(log2(x)) for each element. |
| _mm512_getexp_round_pd⚠ | Experimentalx86 and avx512fConvert the exponent of each packed double-precision (64-bit) floating-point element in a to a double-precision (64-bit) floating-point number representing the integer exponent, and store the results in dst. This intrinsic essentially calculates floor(log2(x)) for each element. Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_getexp_round_ps⚠ | Experimentalx86 and avx512fConvert the exponent of each packed single-precision (32-bit) floating-point element in a to a single-precision (32-bit) floating-point number representing the integer exponent, and store the results in dst. This intrinsic essentially calculates floor(log2(x)) for each element. Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_getmant_pd⚠ | Experimentalx86 and avx512fNormalize the mantissas of packed double-precision (64-bit) floating-point elements in a, and store the results in dst. This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign. The mantissa is normalized to the interval specified by interv, which can take the following values: _MM_MANT_NORM_1_2 // interval [1, 2) _MM_MANT_NORM_p5_2 // interval [0.5, 2) _MM_MANT_NORM_p5_1 // interval [0.5, 1) _MM_MANT_NORM_p75_1p5 // interval [0.75, 1.5) The sign is determined by sc which can take the following values: _MM_MANT_SIGN_src // sign = sign(src) _MM_MANT_SIGN_zero // sign = 0 _MM_MANT_SIGN_nan // dst = NaN if sign(src) = 1 |
| _mm512_getmant_ps⚠ | Experimentalx86 and avx512fNormalize the mantissas of packed single-precision (32-bit) floating-point elements in a, and store the results in dst. This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign. The mantissa is normalized to the interval specified by interv, which can take the following values: _MM_MANT_NORM_1_2 // interval [1, 2) _MM_MANT_NORM_p5_2 // interval [0.5, 2) _MM_MANT_NORM_p5_1 // interval [0.5, 1) _MM_MANT_NORM_p75_1p5 // interval [0.75, 1.5) The sign is determined by sc which can take the following values: _MM_MANT_SIGN_src // sign = sign(src) _MM_MANT_SIGN_zero // sign = 0 _MM_MANT_SIGN_nan // dst = NaN if sign(src) = 1 |
| _mm512_getmant_round_pd⚠ | Experimentalx86 and avx512fNormalize the mantissas of packed double-precision (64-bit) floating-point elements in a, and store the results in dst. This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign. The mantissa is normalized to the interval specified by interv, which can take the following values: _MM_MANT_NORM_1_2 // interval [1, 2) _MM_MANT_NORM_p5_2 // interval [0.5, 2) _MM_MANT_NORM_p5_1 // interval [0.5, 1) _MM_MANT_NORM_p75_1p5 // interval [0.75, 1.5) The sign is determined by sc which can take the following values: _MM_MANT_SIGN_src // sign = sign(src) _MM_MANT_SIGN_zero // sign = 0 _MM_MANT_SIGN_nan // dst = NaN if sign(src) = 1 Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_getmant_round_ps⚠ | Experimentalx86 and avx512fNormalize the mantissas of packed single-precision (32-bit) floating-point elements in a, and store the results in dst. This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign. The mantissa is normalized to the interval specified by interv, which can take the following values: _MM_MANT_NORM_1_2 // interval [1, 2) _MM_MANT_NORM_p5_2 // interval [0.5, 2) _MM_MANT_NORM_p5_1 // interval [0.5, 1) _MM_MANT_NORM_p75_1p5 // interval [0.75, 1.5) The sign is determined by sc which can take the following values: _MM_MANT_SIGN_src // sign = sign(src) _MM_MANT_SIGN_zero // sign = 0 _MM_MANT_SIGN_nan // dst = NaN if sign(src) = 1 Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_i32gather_epi32⚠ | Experimentalx86 and avx512fGather 32-bit integers from memory using 32-bit indices. |
| _mm512_i32gather_epi64⚠ | Experimentalx86 and avx512fGather 64-bit integers from memory using 32-bit indices. |
| _mm512_i32gather_pd⚠ | Experimentalx86 and avx512fGather double-precision (64-bit) floating-point elements from memory using 32-bit indices. |
| _mm512_i32gather_ps⚠ | Experimentalx86 and avx512fGather single-precision (32-bit) floating-point elements from memory using 32-bit indices. |
| _mm512_i32scatter_epi32⚠ | Experimentalx86 and avx512fScatter 32-bit integers from src into memory using 32-bit indices. |
| _mm512_i32scatter_epi64⚠ | Experimentalx86 and avx512fScatter 64-bit integers from src into memory using 32-bit indices. |
| _mm512_i32scatter_pd⚠ | Experimentalx86 and avx512fScatter double-precision (64-bit) floating-point elements from src into memory using 32-bit indices. |
| _mm512_i32scatter_ps⚠ | Experimentalx86 and avx512fScatter single-precision (32-bit) floating-point elements from src into memory using 32-bit indices. |
| _mm512_i64gather_epi32⚠ | Experimentalx86 and avx512fGather 32-bit integers from memory using 64-bit indices. |
| _mm512_i64gather_epi64⚠ | Experimentalx86 and avx512fGather 64-bit integers from memory using 64-bit indices. |
| _mm512_i64gather_pd⚠ | Experimentalx86 and avx512fGather double-precision (64-bit) floating-point elements from memory using 64-bit indices. |
| _mm512_i64gather_ps⚠ | Experimentalx86 and avx512fGather single-precision (32-bit) floating-point elements from memory using 64-bit indices. |
| _mm512_i64scatter_epi32⚠ | Experimentalx86 and avx512fScatter 32-bit integers from src into memory using 64-bit indices. |
| _mm512_i64scatter_epi64⚠ | Experimentalx86 and avx512fScatter 64-bit integers from src into memory using 64-bit indices. |
| _mm512_i64scatter_pd⚠ | Experimentalx86 and avx512fScatter double-precision (64-bit) floating-point elements from src into memory using 64-bit indices. |
| _mm512_i64scatter_ps⚠ | Experimentalx86 and avx512fScatter single-precision (32-bit) floating-point elements from src into memory using 64-bit indices. |
| _mm512_kand⚠ | Experimentalx86 and avx512fCompute the bitwise AND of 16-bit masks a and b, and store the result in k. |
| _mm512_kandn⚠ | Experimentalx86 and avx512fCompute the bitwise NOT of 16-bit masks a and then AND with b, and store the result in k. |
| _mm512_kmov⚠ | Experimentalx86 and avx512fCopy 16-bit mask a to k. |
| _mm512_knot⚠ | Experimentalx86 and avx512fCompute the bitwise NOT of 16-bit mask a, and store the result in k. |
| _mm512_kor⚠ | Experimentalx86 and avx512fCompute the bitwise OR of 16-bit masks a and b, and store the result in k. |
| _mm512_kxnor⚠ | Experimentalx86 and avx512fCompute the bitwise XNOR of 16-bit masks a and b, and store the result in k. |
| _mm512_kxor⚠ | Experimentalx86 and avx512fCompute the bitwise XOR of 16-bit masks a and b, and store the result in k. |
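The 16-bit mask operations above (`_mm512_kand`, `_mm512_kandn`, and so on) are ordinary bitwise operations on a mask register with one bit per vector lane. A sketch of their semantics modeled on a plain `u16` (the function names are illustrative stand-ins for the intrinsics):

```rust
// Models of the avx512f 16-bit mask operations as plain u16 bitwise ops.

fn kand(a: u16, b: u16) -> u16 { a & b }
fn kandn(a: u16, b: u16) -> u16 { !a & b } // NOT a, then AND with b
fn knot(a: u16) -> u16 { !a }
fn kxnor(a: u16, b: u16) -> u16 { !(a ^ b) }

fn main() {
    // Lanes 3 and 2 set vs. lanes 3 and 1 set:
    assert_eq!(kand(0b1100, 0b1010), 0b1000);  // both set only in lane 3
    assert_eq!(kandn(0b1100, 0b1010), 0b0010); // set in b but not in a
    assert_eq!(knot(0), 0xFFFF);               // all 16 lanes selected
    // XNOR: lanes where a and b agree (low four lanes shown).
    assert_eq!(kxnor(0b1100, 0b1010) & 0b1111, 0b1001);
}
```

These masks are what the `_mm512_cmp*_mask` intrinsics above produce: bit i of the mask is the comparison result for lane i.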
| _mm512_loadu_pd⚠ | Experimentalx86 and avx512fLoads 512-bits (composed of 8 packed double-precision (64-bit) floating-point elements) from memory into result. `mem_addr` does not need to be aligned on any particular boundary. |
| _mm512_loadu_ps⚠ | Experimentalx86 and avx512fLoads 512-bits (composed of 16 packed single-precision (32-bit) floating-point elements) from memory into result. `mem_addr` does not need to be aligned on any particular boundary. |
| _mm512_madd52hi_epu64⚠ | Experimentalx86 and avx512ifmaMultiply packed unsigned 52-bit integers in each 64-bit element of `b` and `c` to form a 104-bit intermediate result. Add the high 52-bit unsigned integer from the intermediate result with the corresponding unsigned 64-bit integer in `a`, and store the results in `dst`. |
| _mm512_madd52lo_epu64⚠ | Experimentalx86 and avx512ifmaMultiply packed unsigned 52-bit integers in each 64-bit element of `b` and `c` to form a 104-bit intermediate result. Add the low 52-bit unsigned integer from the intermediate result with the corresponding unsigned 64-bit integer in `a`, and store the results in `dst`. |
| _mm512_mask2_permutex2var_epi32⚠ | Experimentalx86 and avx512fShuffle 32-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from idx when the corresponding mask bit is not set). |
| _mm512_mask2_permutex2var_epi64⚠ | Experimentalx86 and avx512fShuffle 64-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from idx when the corresponding mask bit is not set). |
| _mm512_mask2_permutex2var_pd⚠ | Experimentalx86 and avx512fShuffle double-precision (64-bit) floating-point elements in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from idx when the corresponding mask bit is not set) |
| _mm512_mask2_permutex2var_ps⚠ | Experimentalx86 and avx512fShuffle single-precision (32-bit) floating-point elements in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from idx when the corresponding mask bit is not set). |
| _mm512_mask3_fmadd_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set). |
| _mm512_mask3_fmadd_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set). |
| _mm512_mask3_fmadd_round_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set). |
| _mm512_mask3_fmadd_round_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set). |
| _mm512_mask3_fmaddsub_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set). |
| _mm512_mask3_fmaddsub_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set). |
| _mm512_mask3_fmaddsub_round_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set). |
| _mm512_mask3_fmaddsub_round_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set). |
| _mm512_mask3_fmsub_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set). |
| _mm512_mask3_fmsub_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set). |
| _mm512_mask3_fmsub_round_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set). |
| _mm512_mask3_fmsub_round_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set). |
| _mm512_mask3_fmsubadd_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set). |
| _mm512_mask3_fmsubadd_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set). |
| _mm512_mask3_fmsubadd_round_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set). |
| _mm512_mask3_fmsubadd_round_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set). |
| _mm512_mask3_fnmadd_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set). |
| _mm512_mask3_fnmadd_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set). |
| _mm512_mask3_fnmadd_round_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set). |
| _mm512_mask3_fnmadd_round_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set). |
| _mm512_mask3_fnmsub_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set). |
| _mm512_mask3_fnmsub_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set). |
| _mm512_mask3_fnmsub_round_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set). |
| _mm512_mask3_fnmsub_round_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set). |
| _mm512_mask_abs_epi32⚠ | Experimentalx86 and avx512fCompute the absolute value of packed signed 32-bit integers in a, and store the unsigned results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_abs_epi64⚠ | Experimentalx86 and avx512fCompute the absolute value of packed signed 64-bit integers in a, and store the unsigned results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_abs_pd⚠ | Experimentalx86 and avx512fFinds the absolute value of each packed double-precision (64-bit) floating-point element in v2, storing the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_abs_ps⚠ | Experimentalx86 and avx512fFinds the absolute value of each packed single-precision (32-bit) floating-point element in v2, storing the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_add_epi32⚠ | Experimentalx86 and avx512fAdd packed 32-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_add_epi64⚠ | Experimentalx86 and avx512fAdd packed 64-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_add_pd⚠ | Experimentalx86 and avx512fAdd packed double-precision (64-bit) floating-point elements in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_add_ps⚠ | Experimentalx86 and avx512fAdd packed single-precision (32-bit) floating-point elements in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_add_round_pd⚠ | Experimentalx86 and avx512fAdd packed double-precision (64-bit) floating-point elements in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_add_round_ps⚠ | Experimentalx86 and avx512fAdd packed single-precision (32-bit) floating-point elements in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_and_epi32⚠ | Experimentalx86 and avx512fPerforms element-by-element bitwise AND between packed 32-bit integer elements of v2 and v3, storing the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_and_epi64⚠ | Experimentalx86 and avx512fCompute the bitwise AND of packed 64-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_cmp_epi32_mask⚠ | Experimentalx86 and avx512fCompare packed signed 32-bit integers in a and b based on the comparison operand specified by op, using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmp_epi64_mask⚠ | Experimentalx86 and avx512fCompare packed signed 64-bit integers in a and b based on the comparison operand specified by op, using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmp_epu32_mask⚠ | Experimentalx86 and avx512fCompare packed unsigned 32-bit integers in a and b based on the comparison operand specified by op, using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmp_epu64_mask⚠ | Experimentalx86 and avx512fCompare packed unsigned 64-bit integers in a and b based on the comparison operand specified by op, using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmp_pd_mask⚠ | Experimentalx86 and avx512fCompare packed double-precision (64-bit) floating-point elements in a and b based on the comparison operand specified by op, using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmp_ps_mask⚠ | Experimentalx86 and avx512fCompare packed single-precision (32-bit) floating-point elements in a and b based on the comparison operand specified by op, using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmp_round_pd_mask⚠ | Experimentalx86 and avx512fCompare packed double-precision (64-bit) floating-point elements in a and b based on the comparison operand specified by op, using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmp_round_ps_mask⚠ | Experimentalx86 and avx512fCompare packed single-precision (32-bit) floating-point elements in a and b based on the comparison operand specified by op, using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmpeq_epi32_mask⚠ | Experimentalx86 and avx512fCompare packed signed 32-bit integers in a and b for equality, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmpeq_epi64_mask⚠ | Experimentalx86 and avx512fCompare packed signed 64-bit integers in a and b for equality, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmpeq_epu32_mask⚠ | Experimentalx86 and avx512fCompare packed unsigned 32-bit integers in a and b for equality, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmpeq_epu64_mask⚠ | Experimentalx86 and avx512fCompare packed unsigned 64-bit integers in a and b for equality, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmpeq_pd_mask⚠ | Experimentalx86 and avx512fCompare packed double-precision (64-bit) floating-point elements in a and b for equality, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmpeq_ps_mask⚠ | Experimentalx86 and avx512fCompare packed single-precision (32-bit) floating-point elements in a and b for equality, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmpge_epi32_mask⚠ | Experimentalx86 and avx512fCompare packed signed 32-bit integers in a and b for greater-than-or-equal, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmpge_epi64_mask⚠ | Experimentalx86 and avx512fCompare packed signed 64-bit integers in a and b for greater-than-or-equal, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmpge_epu32_mask⚠ | Experimentalx86 and avx512fCompare packed unsigned 32-bit integers in a and b for greater-than-or-equal, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmpge_epu64_mask⚠ | Experimentalx86 and avx512fCompare packed unsigned 64-bit integers in a and b for greater-than-or-equal, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmpgt_epi32_mask⚠ | Experimentalx86 and avx512fCompare packed signed 32-bit integers in a and b for greater-than, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmpgt_epi64_mask⚠ | Experimentalx86 and avx512fCompare packed signed 64-bit integers in a and b for greater-than, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmpgt_epu32_mask⚠ | Experimentalx86 and avx512fCompare packed unsigned 32-bit integers in a and b for greater-than, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmpgt_epu64_mask⚠ | Experimentalx86 and avx512fCompare packed unsigned 64-bit integers in a and b for greater-than, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmple_epi32_mask⚠ | Experimentalx86 and avx512fCompare packed signed 32-bit integers in a and b for less-than-or-equal, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmple_epi64_mask⚠ | Experimentalx86 and avx512fCompare packed signed 64-bit integers in a and b for less-than-or-equal, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmple_epu32_mask⚠ | Experimentalx86 and avx512fCompare packed unsigned 32-bit integers in a and b for less-than-or-equal, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmple_epu64_mask⚠ | Experimentalx86 and avx512fCompare packed unsigned 64-bit integers in a and b for less-than-or-equal, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmple_pd_mask⚠ | Experimentalx86 and avx512fCompare packed double-precision (64-bit) floating-point elements in a and b for less-than-or-equal, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmple_ps_mask⚠ | Experimentalx86 and avx512fCompare packed single-precision (32-bit) floating-point elements in a and b for less-than-or-equal, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmplt_epi32_mask⚠ | Experimentalx86 and avx512fCompare packed signed 32-bit integers in a and b for less-than, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmplt_epi64_mask⚠ | Experimentalx86 and avx512fCompare packed signed 64-bit integers in a and b for less-than, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmplt_epu32_mask⚠ | Experimentalx86 and avx512fCompare packed unsigned 32-bit integers in a and b for less-than, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmplt_epu64_mask⚠ | Experimentalx86 and avx512fCompare packed unsigned 64-bit integers in a and b for less-than, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmplt_pd_mask⚠ | Experimentalx86 and avx512fCompare packed double-precision (64-bit) floating-point elements in a and b for less-than, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmplt_ps_mask⚠ | Experimentalx86 and avx512fCompare packed single-precision (32-bit) floating-point elements in a and b for less-than, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmpneq_epi32_mask⚠ | Experimentalx86 and avx512fCompare packed signed 32-bit integers in a and b for inequality, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmpneq_epi64_mask⚠ | Experimentalx86 and avx512fCompare packed signed 64-bit integers in a and b for inequality, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmpneq_epu32_mask⚠ | Experimentalx86 and avx512fCompare packed unsigned 32-bit integers in a and b for inequality, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmpneq_epu64_mask⚠ | Experimentalx86 and avx512fCompare packed unsigned 64-bit integers in a and b for inequality, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmpneq_pd_mask⚠ | Experimentalx86 and avx512fCompare packed double-precision (64-bit) floating-point elements in a and b for inequality, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmpneq_ps_mask⚠ | Experimentalx86 and avx512fCompare packed single-precision (32-bit) floating-point elements in a and b for inequality, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmpnle_pd_mask⚠ | Experimentalx86 and avx512fCompare packed double-precision (64-bit) floating-point elements in a and b for not-less-than-or-equal, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmpnle_ps_mask⚠ | Experimentalx86 and avx512fCompare packed single-precision (32-bit) floating-point elements in a and b for not-less-than-or-equal, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmpnlt_pd_mask⚠ | Experimentalx86 and avx512fCompare packed double-precision (64-bit) floating-point elements in a and b for not-less-than, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmpnlt_ps_mask⚠ | Experimentalx86 and avx512fCompare packed single-precision (32-bit) floating-point elements in a and b for not-less-than, and store the results in a mask vector k using zeromask m (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_mask_cmpord_pd_mask⚠ | Experimentalx86 and avx512fCompare packed double-precision (64-bit) floating-point elements in a and b to see if neither is NaN, and store the results in a mask vector. |
| _mm512_mask_cmpord_ps_mask⚠ | Experimentalx86 and avx512fCompare packed single-precision (32-bit) floating-point elements in a and b to see if neither is NaN, and store the results in a mask vector. |
| _mm512_mask_cmpunord_pd_mask⚠ | Experimentalx86 and avx512fCompare packed double-precision (64-bit) floating-point elements in a and b to see if either is NaN, and store the results in a mask vector. |
| _mm512_mask_cmpunord_ps_mask⚠ | Experimentalx86 and avx512fCompare packed single-precision (32-bit) floating-point elements in a and b to see if either is NaN, and store the results in a mask vector. |
| _mm512_mask_cvt_roundps_epi32⚠ | Experimentalx86 and avx512fConvert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_cvt_roundps_epu32⚠ | Experimentalx86 and avx512fConvert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_cvt_roundps_pd⚠ | Experimentalx86 and avx512fConvert packed single-precision (32-bit) floating-point elements in a to packed double-precision (64-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_mask_cvtps_epi32⚠ | Experimentalx86 and avx512fConvert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_cvtps_epu32⚠ | Experimentalx86 and avx512fConvert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_cvtps_pd⚠ | Experimentalx86 and avx512fConvert packed single-precision (32-bit) floating-point elements in a to packed double-precision (64-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_cvtt_roundpd_epi32⚠ | Experimentalx86 and avx512fConvert packed double-precision (64-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_mask_cvtt_roundpd_epu32⚠ | Experimentalx86 and avx512fConvert packed double-precision (64-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_mask_cvtt_roundps_epi32⚠ | Experimentalx86 and avx512fConvert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_mask_cvtt_roundps_epu32⚠ | Experimentalx86 and avx512fConvert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_mask_cvttpd_epi32⚠ | Experimentalx86 and avx512fConvert packed double-precision (64-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_cvttpd_epu32⚠ | Experimentalx86 and avx512fConvert packed double-precision (64-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_cvttps_epi32⚠ | Experimentalx86 and avx512fConvert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_cvttps_epu32⚠ | Experimentalx86 and avx512fConvert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_div_pd⚠ | Experimentalx86 and avx512fDivide packed double-precision (64-bit) floating-point elements in a by packed elements in b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_div_ps⚠ | Experimentalx86 and avx512fDivide packed single-precision (32-bit) floating-point elements in a by packed elements in b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_div_round_pd⚠ | Experimentalx86 and avx512fDivide packed double-precision (64-bit) floating-point elements in a by packed elements in b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_div_round_ps⚠ | Experimentalx86 and avx512fDivide packed single-precision (32-bit) floating-point elements in a by packed elements in b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_fmadd_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set). |
| _mm512_mask_fmadd_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set). |
| _mm512_mask_fmadd_round_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set). |
| _mm512_mask_fmadd_round_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set). |
| _mm512_mask_fmaddsub_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set). |
| _mm512_mask_fmaddsub_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set). |
| _mm512_mask_fmaddsub_round_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set). |
| _mm512_mask_fmaddsub_round_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set). |
| _mm512_mask_fmsub_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set). |
| _mm512_mask_fmsub_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set). |
| _mm512_mask_fmsub_round_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set). |
| _mm512_mask_fmsub_round_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set). |
| _mm512_mask_fmsubadd_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set). |
| _mm512_mask_fmsubadd_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set). |
| _mm512_mask_fmsubadd_round_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set). |
| _mm512_mask_fmsubadd_round_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set). |
| _mm512_mask_fnmadd_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set). |
| _mm512_mask_fnmadd_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set). |
| _mm512_mask_fnmadd_round_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set). |
| _mm512_mask_fnmadd_round_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set). |
| _mm512_mask_fnmsub_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set). |
| _mm512_mask_fnmsub_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set). |
| _mm512_mask_fnmsub_round_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set). |
| _mm512_mask_fnmsub_round_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set). |
| _mm512_mask_getexp_pd⚠ | Experimentalx86 and avx512fConvert the exponent of each packed double-precision (64-bit) floating-point element in a to a double-precision (64-bit) floating-point number representing the integer exponent, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). This intrinsic essentially calculates floor(log2(x)) for each element. |
| _mm512_mask_getexp_ps⚠ | Experimentalx86 and avx512fConvert the exponent of each packed single-precision (32-bit) floating-point element in a to a single-precision (32-bit) floating-point number representing the integer exponent, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). This intrinsic essentially calculates floor(log2(x)) for each element. |
| _mm512_mask_getexp_round_pd⚠ | Experimentalx86 and avx512fConvert the exponent of each packed double-precision (64-bit) floating-point element in a to a double-precision (64-bit) floating-point number representing the integer exponent, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). This intrinsic essentially calculates floor(log2(x)) for each element. Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_mask_getexp_round_ps⚠ | Experimentalx86 and avx512fConvert the exponent of each packed single-precision (32-bit) floating-point element in a to a single-precision (32-bit) floating-point number representing the integer exponent, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). This intrinsic essentially calculates floor(log2(x)) for each element. Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_mask_getmant_pd⚠ | Experimentalx86 and avx512fNormalize the mantissas of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign. The mantissa is normalized to the interval specified by interv, which can take the following values: _MM_MANT_NORM_1_2 // interval [1, 2) _MM_MANT_NORM_p5_2 // interval [0.5, 2) _MM_MANT_NORM_p5_1 // interval [0.5, 1) _MM_MANT_NORM_p75_1p5 // interval [0.75, 1.5) The sign is determined by sc which can take the following values: _MM_MANT_SIGN_src // sign = sign(src) _MM_MANT_SIGN_zero // sign = 0 _MM_MANT_SIGN_nan // dst = NaN if sign(src) = 1 |
| _mm512_mask_getmant_ps⚠ | Experimentalx86 and avx512fNormalize the mantissas of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign. The mantissa is normalized to the interval specified by interv, which can take the following values: _MM_MANT_NORM_1_2 // interval [1, 2) _MM_MANT_NORM_p5_2 // interval [0.5, 2) _MM_MANT_NORM_p5_1 // interval [0.5, 1) _MM_MANT_NORM_p75_1p5 // interval [0.75, 1.5) The sign is determined by sc which can take the following values: _MM_MANT_SIGN_src // sign = sign(src) _MM_MANT_SIGN_zero // sign = 0 _MM_MANT_SIGN_nan // dst = NaN if sign(src) = 1 |
| _mm512_mask_getmant_round_pd⚠ | Experimentalx86 and avx512fNormalize the mantissas of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign. The mantissa is normalized to the interval specified by interv, which can take the following values: _MM_MANT_NORM_1_2 // interval [1, 2) _MM_MANT_NORM_p5_2 // interval [0.5, 2) _MM_MANT_NORM_p5_1 // interval [0.5, 1) _MM_MANT_NORM_p75_1p5 // interval [0.75, 1.5) The sign is determined by sc which can take the following values: _MM_MANT_SIGN_src // sign = sign(src) _MM_MANT_SIGN_zero // sign = 0 _MM_MANT_SIGN_nan // dst = NaN if sign(src) = 1 Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_mask_getmant_round_ps⚠ | Experimentalx86 and avx512fNormalize the mantissas of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign. The mantissa is normalized to the interval specified by interv, which can take the following values: _MM_MANT_NORM_1_2 // interval [1, 2) _MM_MANT_NORM_p5_2 // interval [0.5, 2) _MM_MANT_NORM_p5_1 // interval [0.5, 1) _MM_MANT_NORM_p75_1p5 // interval [0.75, 1.5) The sign is determined by sc which can take the following values: _MM_MANT_SIGN_src // sign = sign(src) _MM_MANT_SIGN_zero // sign = 0 _MM_MANT_SIGN_nan // dst = NaN if sign(src) = 1 Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_mask_i32gather_epi32⚠ | Experimentalx86 and avx512fGather 32-bit integers from memory using 32-bit indices. |
| _mm512_mask_i32gather_epi64⚠ | Experimentalx86 and avx512fGather 64-bit integers from memory using 32-bit indices. |
| _mm512_mask_i32gather_pd⚠ | Experimentalx86 and avx512fGather double-precision (64-bit) floating-point elements from memory using 32-bit indices. |
| _mm512_mask_i32gather_ps⚠ | Experimentalx86 and avx512fGather single-precision (32-bit) floating-point elements from memory using 32-bit indices. |
| _mm512_mask_i32scatter_epi32⚠ | Experimentalx86 and avx512fScatter 32-bit integers from src into memory using 32-bit indices. |
| _mm512_mask_i32scatter_epi64⚠ | Experimentalx86 and avx512fScatter 64-bit integers from src into memory using 32-bit indices. |
| _mm512_mask_i32scatter_pd⚠ | Experimentalx86 and avx512fScatter double-precision (64-bit) floating-point elements from src into memory using 32-bit indices. |
| _mm512_mask_i32scatter_ps⚠ | Experimentalx86 and avx512fScatter single-precision (32-bit) floating-point elements from src into memory using 32-bit indices. |
| _mm512_mask_i64gather_epi32⚠ | Experimentalx86 and avx512fGather 32-bit integers from memory using 64-bit indices. |
| _mm512_mask_i64gather_epi64⚠ | Experimentalx86 and avx512fGather 64-bit integers from memory using 64-bit indices. |
| _mm512_mask_i64gather_pd⚠ | Experimentalx86 and avx512fGather double-precision (64-bit) floating-point elements from memory using 64-bit indices. |
| _mm512_mask_i64gather_ps⚠ | Experimentalx86 and avx512fGather single-precision (32-bit) floating-point elements from memory using 64-bit indices. |
| _mm512_mask_i64scatter_epi32⚠ | Experimentalx86 and avx512fScatter 32-bit integers from src into memory using 64-bit indices. |
| _mm512_mask_i64scatter_epi64⚠ | Experimentalx86 and avx512fScatter 64-bit integers from src into memory using 64-bit indices. |
| _mm512_mask_i64scatter_pd⚠ | Experimentalx86 and avx512fScatter double-precision (64-bit) floating-point elements from src into memory using 64-bit indices. |
| _mm512_mask_i64scatter_ps⚠ | Experimentalx86 and avx512fScatter single-precision (32-bit) floating-point elements from src into memory using 64-bit indices. |
| _mm512_mask_max_epi32⚠ | Experimentalx86 and avx512fCompare packed signed 32-bit integers in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_max_epi64⚠ | Experimentalx86 and avx512fCompare packed signed 64-bit integers in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_max_epu32⚠ | Experimentalx86 and avx512fCompare packed unsigned 32-bit integers in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_max_epu64⚠ | Experimentalx86 and avx512fCompare packed unsigned 64-bit integers in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_max_pd⚠ | Experimentalx86 and avx512fCompare packed double-precision (64-bit) floating-point elements in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_max_ps⚠ | Experimentalx86 and avx512fCompare packed single-precision (32-bit) floating-point elements in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_max_round_pd⚠ | Experimentalx86 and avx512fCompare packed double-precision (64-bit) floating-point elements in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_mask_max_round_ps⚠ | Experimentalx86 and avx512fCompare packed single-precision (32-bit) floating-point elements in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_mask_min_epi32⚠ | Experimentalx86 and avx512fCompare packed signed 32-bit integers in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_min_epi64⚠ | Experimentalx86 and avx512fCompare packed signed 64-bit integers in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_min_epu32⚠ | Experimentalx86 and avx512fCompare packed unsigned 32-bit integers in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_min_epu64⚠ | Experimentalx86 and avx512fCompare packed unsigned 64-bit integers in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_min_pd⚠ | Experimentalx86 and avx512fCompare packed double-precision (64-bit) floating-point elements in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_min_ps⚠ | Experimentalx86 and avx512fCompare packed single-precision (32-bit) floating-point elements in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_min_round_pd⚠ | Experimentalx86 and avx512fCompare packed double-precision (64-bit) floating-point elements in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_mask_min_round_ps⚠ | Experimentalx86 and avx512fCompare packed single-precision (32-bit) floating-point elements in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_mask_movedup_pd⚠ | Experimentalx86 and avx512fDuplicate even-indexed double-precision (64-bit) floating-point elements from a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_movehdup_ps⚠ | Experimentalx86 and avx512fDuplicate odd-indexed single-precision (32-bit) floating-point elements from a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_moveldup_ps⚠ | Experimentalx86 and avx512fDuplicate even-indexed single-precision (32-bit) floating-point elements from a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_mul_epi32⚠ | Experimentalx86 and avx512fMultiply the low signed 32-bit integers from each packed 64-bit element in a and b, and store the signed 64-bit results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_mul_epu32⚠ | Experimentalx86 and avx512fMultiply the low unsigned 32-bit integers from each packed 64-bit element in a and b, and store the unsigned 64-bit results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_mul_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_mul_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_mul_round_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_mul_round_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_mullo_epi32⚠ | Experimentalx86 and avx512fMultiply the packed 32-bit integers in a and b, producing intermediate 64-bit integers, and store the low 32 bits of the intermediate integers in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_mullox_epi64⚠ | Experimentalx86 and avx512fMultiply packed 64-bit integers in a and b, and store the low 64 bits of each result in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_or_epi32⚠ | Experimentalx86 and avx512fCompute the bitwise OR of packed 32-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_or_epi64⚠ | Experimentalx86 and avx512fCompute the bitwise OR of packed 64-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_permute_pd⚠ | Experimentalx86 and avx512fShuffle double-precision (64-bit) floating-point elements in a within 128-bit lanes using the control in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_permute_ps⚠ | Experimentalx86 and avx512fShuffle single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_permutevar_epi32⚠ | Experimentalx86 and avx512fShuffle 32-bit integers in a across lanes using the corresponding index in idx, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Note that this intrinsic shuffles across 128-bit lanes, unlike past intrinsics that use the permutevar name. This intrinsic is identical to _mm512_mask_permutexvar_epi32, and it is recommended that you use that intrinsic name. |
| _mm512_mask_permutevar_pd⚠ | Experimentalx86 and avx512fShuffle double-precision (64-bit) floating-point elements in a within 128-bit lanes using the control in b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_permutevar_ps⚠ | Experimentalx86 and avx512fShuffle single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_permutex2var_epi32⚠ | Experimentalx86 and avx512fShuffle 32-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set). |
| _mm512_mask_permutex2var_epi64⚠ | Experimentalx86 and avx512fShuffle 64-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set). |
| _mm512_mask_permutex2var_pd⚠ | Experimentalx86 and avx512fShuffle double-precision (64-bit) floating-point elements in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set). |
| _mm512_mask_permutex2var_ps⚠ | Experimentalx86 and avx512fShuffle single-precision (32-bit) floating-point elements in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set). |
| _mm512_mask_permutex_epi64⚠ | Experimentalx86 and avx512fShuffle 64-bit integers in a within 256-bit lanes using the control in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_permutex_pd⚠ | Experimentalx86 and avx512fShuffle double-precision (64-bit) floating-point elements in a within 256-bit lanes using the control in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_permutexvar_epi32⚠ | Experimentalx86 and avx512fShuffle 32-bit integers in a across lanes using the corresponding index in idx, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_permutexvar_epi64⚠ | Experimentalx86 and avx512fShuffle 64-bit integers in a across lanes using the corresponding index in idx, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_permutexvar_pd⚠ | Experimentalx86 and avx512fShuffle double-precision (64-bit) floating-point elements in a across lanes using the corresponding index in idx, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_permutexvar_ps⚠ | Experimentalx86 and avx512fShuffle single-precision (32-bit) floating-point elements in a across lanes using the corresponding index in idx, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_rcp14_pd⚠ | Experimentalx86 and avx512fCompute the approximate reciprocal of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). The maximum relative error for this approximation is less than 2^-14. |
| _mm512_mask_rcp14_ps⚠ | Experimentalx86 and avx512fCompute the approximate reciprocal of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). The maximum relative error for this approximation is less than 2^-14. |
| _mm512_mask_rol_epi32⚠ | Experimentalx86 and avx512fRotate the bits in each packed 32-bit integer in a to the left by the number of bits specified in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_rol_epi64⚠ | Experimentalx86 and avx512fRotate the bits in each packed 64-bit integer in a to the left by the number of bits specified in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_rolv_epi32⚠ | Experimentalx86 and avx512fRotate the bits in each packed 32-bit integer in a to the left by the number of bits specified in the corresponding element of b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_rolv_epi64⚠ | Experimentalx86 and avx512fRotate the bits in each packed 64-bit integer in a to the left by the number of bits specified in the corresponding element of b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_ror_epi32⚠ | Experimentalx86 and avx512fRotate the bits in each packed 32-bit integer in a to the right by the number of bits specified in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_ror_epi64⚠ | Experimentalx86 and avx512fRotate the bits in each packed 64-bit integer in a to the right by the number of bits specified in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_rorv_epi32⚠ | Experimentalx86 and avx512fRotate the bits in each packed 32-bit integer in a to the right by the number of bits specified in the corresponding element of b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_rorv_epi64⚠ | Experimentalx86 and avx512fRotate the bits in each packed 64-bit integer in a to the right by the number of bits specified in the corresponding element of b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_rsqrt14_pd⚠ | Experimentalx86 and avx512fCompute the approximate reciprocal square root of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). The maximum relative error for this approximation is less than 2^-14. |
| _mm512_mask_rsqrt14_ps⚠ | Experimentalx86 and avx512fCompute the approximate reciprocal square root of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). The maximum relative error for this approximation is less than 2^-14. |
| _mm512_mask_shuffle_epi32⚠ | Experimentalx86 and avx512fShuffle 32-bit integers in a within 128-bit lanes using the control in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_shuffle_f32x4⚠ | Experimentalx86 and avx512fShuffle 128-bits (composed of 4 single-precision (32-bit) floating-point elements) selected by imm8 from a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_shuffle_f64x2⚠ | Experimentalx86 and avx512fShuffle 128-bits (composed of 2 double-precision (64-bit) floating-point elements) selected by imm8 from a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_shuffle_i32x4⚠ | Experimentalx86 and avx512fShuffle 128-bits (composed of 4 32-bit integers) selected by imm8 from a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_shuffle_i64x2⚠ | Experimentalx86 and avx512fShuffle 128-bits (composed of 2 64-bit integers) selected by imm8 from a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_shuffle_pd⚠ | Experimentalx86 and avx512fShuffle double-precision (64-bit) floating-point elements within 128-bit lanes using the control in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_shuffle_ps⚠ | Experimentalx86 and avx512fShuffle single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_sll_epi32⚠ | Experimentalx86 and avx512fShift packed 32-bit integers in a left by count while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_sll_epi64⚠ | Experimentalx86 and avx512fShift packed 64-bit integers in a left by count while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_slli_epi32⚠ | Experimentalx86 and avx512fShift packed 32-bit integers in a left by imm8 while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_slli_epi64⚠ | Experimentalx86 and avx512fShift packed 64-bit integers in a left by imm8 while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_sllv_epi32⚠ | Experimentalx86 and avx512fShift packed 32-bit integers in a left by the amount specified by the corresponding element in count while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_sllv_epi64⚠ | Experimentalx86 and avx512fShift packed 64-bit integers in a left by the amount specified by the corresponding element in count while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_sqrt_pd⚠ | Experimentalx86 and avx512fCompute the square root of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_sqrt_ps⚠ | Experimentalx86 and avx512fCompute the square root of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_sqrt_round_pd⚠ | Experimentalx86 and avx512fCompute the square root of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Rounding is done according to the rounding parameter. |
| _mm512_mask_sqrt_round_ps⚠ | Experimentalx86 and avx512fCompute the square root of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Rounding is done according to the rounding parameter. |
| _mm512_mask_sra_epi32⚠ | Experimentalx86 and avx512fShift packed 32-bit integers in a right by count while shifting in sign bits, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_sra_epi64⚠ | Experimentalx86 and avx512fShift packed 64-bit integers in a right by count while shifting in sign bits, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_srai_epi32⚠ | Experimentalx86 and avx512fShift packed 32-bit integers in a right by imm8 while shifting in sign bits, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_srai_epi64⚠ | Experimentalx86 and avx512fShift packed 64-bit integers in a right by imm8 while shifting in sign bits, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_srav_epi32⚠ | Experimentalx86 and avx512fShift packed 32-bit integers in a right by the amount specified by the corresponding element in count while shifting in sign bits, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_srav_epi64⚠ | Experimentalx86 and avx512fShift packed 64-bit integers in a right by the amount specified by the corresponding element in count while shifting in sign bits, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_srl_epi32⚠ | Experimentalx86 and avx512fShift packed 32-bit integers in a right by count while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_srl_epi64⚠ | Experimentalx86 and avx512fShift packed 64-bit integers in a right by count while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_srli_epi32⚠ | Experimentalx86 and avx512fShift packed 32-bit integers in a right by imm8 while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_srli_epi64⚠ | Experimentalx86 and avx512fShift packed 64-bit integers in a right by imm8 while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_srlv_epi32⚠ | Experimentalx86 and avx512fShift packed 32-bit integers in a right by the amount specified by the corresponding element in count while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_srlv_epi64⚠ | Experimentalx86 and avx512fShift packed 64-bit integers in a right by the amount specified by the corresponding element in count while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_sub_epi32⚠ | Experimentalx86 and avx512fSubtract packed 32-bit integers in b from packed 32-bit integers in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_sub_epi64⚠ | Experimentalx86 and avx512fSubtract packed 64-bit integers in b from packed 64-bit integers in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_sub_pd⚠ | Experimentalx86 and avx512fSubtract packed double-precision (64-bit) floating-point elements in b from packed double-precision (64-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_sub_ps⚠ | Experimentalx86 and avx512fSubtract packed single-precision (32-bit) floating-point elements in b from packed single-precision (32-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_sub_round_pd⚠ | Experimentalx86 and avx512fSubtract packed double-precision (64-bit) floating-point elements in b from packed double-precision (64-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Rounding is done according to the rounding parameter. |
| _mm512_mask_sub_round_ps⚠ | Experimentalx86 and avx512fSubtract packed single-precision (32-bit) floating-point elements in b from packed single-precision (32-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Rounding is done according to the rounding parameter. |
| _mm512_mask_xor_epi32⚠ | Experimentalx86 and avx512fCompute the bitwise XOR of packed 32-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_mask_xor_epi64⚠ | Experimentalx86 and avx512fCompute the bitwise XOR of packed 64-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). |
| _mm512_maskz_abs_epi32⚠ | Experimentalx86 and avx512fCompute the absolute value of packed signed 32-bit integers in a, and store the unsigned results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_abs_epi64⚠ | Experimentalx86 and avx512fCompute the absolute value of packed signed 64-bit integers in a, and store the unsigned results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_add_epi32⚠ | Experimentalx86 and avx512fAdd packed 32-bit integers in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_add_epi64⚠ | Experimentalx86 and avx512fAdd packed 64-bit integers in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_add_pd⚠ | Experimentalx86 and avx512fAdd packed double-precision (64-bit) floating-point elements in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_add_ps⚠ | Experimentalx86 and avx512fAdd packed single-precision (32-bit) floating-point elements in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_add_round_pd⚠ | Experimentalx86 and avx512fAdd packed double-precision (64-bit) floating-point elements in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Rounding is done according to the rounding parameter. |
| _mm512_maskz_add_round_ps⚠ | Experimentalx86 and avx512fAdd packed single-precision (32-bit) floating-point elements in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Rounding is done according to the rounding parameter. |
| _mm512_maskz_and_epi32⚠ | Experimentalx86 and avx512fCompute the bitwise AND of packed 32-bit integers in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_and_epi64⚠ | Experimentalx86 and avx512fCompute the bitwise AND of packed 64-bit integers in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_cvt_roundps_epi32⚠ | Experimentalx86 and avx512fConvert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Rounding is done according to the rounding parameter. |
| _mm512_maskz_cvt_roundps_epu32⚠ | Experimentalx86 and avx512fConvert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Rounding is done according to the rounding parameter. |
| _mm512_maskz_cvt_roundps_pd⚠ | Experimentalx86 and avx512fConvert packed single-precision (32-bit) floating-point elements in a to packed double-precision (64-bit) floating-point elements, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_maskz_cvtps_epi32⚠ | Experimentalx86 and avx512fConvert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_cvtps_epu32⚠ | Experimentalx86 and avx512fConvert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_cvtps_pd⚠ | Experimentalx86 and avx512fConvert packed single-precision (32-bit) floating-point elements in a to packed double-precision (64-bit) floating-point elements, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_cvtt_roundpd_epi32⚠ | Experimentalx86 and avx512fConvert packed double-precision (64-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_maskz_cvtt_roundpd_epu32⚠ | Experimentalx86 and avx512fConvert packed double-precision (64-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_maskz_cvtt_roundps_epi32⚠ | Experimentalx86 and avx512fConvert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_maskz_cvtt_roundps_epu32⚠ | Experimentalx86 and avx512fConvert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_maskz_cvttpd_epi32⚠ | Experimentalx86 and avx512fConvert packed double-precision (64-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_cvttpd_epu32⚠ | Experimentalx86 and avx512fConvert packed double-precision (64-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_cvttps_epi32⚠ | Experimentalx86 and avx512fConvert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_cvttps_epu32⚠ | Experimentalx86 and avx512fConvert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_div_pd⚠ | Experimentalx86 and avx512fDivide packed double-precision (64-bit) floating-point elements in a by packed elements in b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_div_ps⚠ | Experimentalx86 and avx512fDivide packed single-precision (32-bit) floating-point elements in a by packed elements in b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_div_round_pd⚠ | Experimentalx86 and avx512fDivide packed double-precision (64-bit) floating-point elements in a by packed elements in b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Rounding is done according to the rounding parameter. |
| _mm512_maskz_div_round_ps⚠ | Experimentalx86 and avx512fDivide packed single-precision (32-bit) floating-point elements in a by packed elements in b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Rounding is done according to the rounding parameter. |
| _mm512_maskz_fmadd_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_fmadd_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_fmadd_round_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Rounding is done according to the rounding parameter. |
| _mm512_maskz_fmadd_round_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Rounding is done according to the rounding parameter. |
| _mm512_maskz_fmaddsub_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_fmaddsub_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_fmaddsub_round_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Rounding is done according to the rounding parameter. |
| _mm512_maskz_fmaddsub_round_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Rounding is done according to the rounding parameter. |
| _mm512_maskz_fmsub_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_fmsub_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_fmsub_round_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Rounding is done according to the rounding parameter. |
| _mm512_maskz_fmsub_round_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Rounding is done according to the rounding parameter. |
| _mm512_maskz_fmsubadd_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_fmsubadd_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_fmsubadd_round_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Rounding is done according to the rounding parameter. |
| _mm512_maskz_fmsubadd_round_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Rounding is done according to the rounding parameter. |
| _mm512_maskz_fnmadd_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_fnmadd_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_fnmadd_round_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Rounding is done according to the rounding parameter. |
| _mm512_maskz_fnmadd_round_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Rounding is done according to the rounding parameter. |
| _mm512_maskz_fnmsub_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_fnmsub_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_fnmsub_round_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Rounding is done according to the rounding parameter. |
| _mm512_maskz_fnmsub_round_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Rounding is done according to the rounding parameter. |
| _mm512_maskz_getexp_pd⚠ | Experimentalx86 and avx512fConvert the exponent of each packed double-precision (64-bit) floating-point element in a to a double-precision (64-bit) floating-point number representing the integer exponent, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). This intrinsic essentially calculates floor(log2(x)) for each element. |
| _mm512_maskz_getexp_ps⚠ | Experimentalx86 and avx512fConvert the exponent of each packed single-precision (32-bit) floating-point element in a to a single-precision (32-bit) floating-point number representing the integer exponent, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). This intrinsic essentially calculates floor(log2(x)) for each element. |
| _mm512_maskz_getexp_round_pd⚠ | Experimentalx86 and avx512fConvert the exponent of each packed double-precision (64-bit) floating-point element in a to a double-precision (64-bit) floating-point number representing the integer exponent, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). This intrinsic essentially calculates floor(log2(x)) for each element. Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_maskz_getexp_round_ps⚠ | Experimentalx86 and avx512fConvert the exponent of each packed single-precision (32-bit) floating-point element in a to a single-precision (32-bit) floating-point number representing the integer exponent, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). This intrinsic essentially calculates floor(log2(x)) for each element. Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_maskz_getmant_pd⚠ | Experimentalx86 and avx512fNormalize the mantissas of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign. The mantissa is normalized to the interval specified by interv, which can take the following values: _MM_MANT_NORM_1_2 // interval [1, 2) _MM_MANT_NORM_p5_2 // interval [0.5, 2) _MM_MANT_NORM_p5_1 // interval [0.5, 1) _MM_MANT_NORM_p75_1p5 // interval [0.75, 1.5) The sign is determined by sc which can take the following values: _MM_MANT_SIGN_src // sign = sign(src) _MM_MANT_SIGN_zero // sign = 0 _MM_MANT_SIGN_nan // dst = NaN if sign(src) = 1 |
| _mm512_maskz_getmant_ps⚠ | Experimentalx86 and avx512fNormalize the mantissas of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign. The mantissa is normalized to the interval specified by interv, which can take the following values: _MM_MANT_NORM_1_2 // interval [1, 2) _MM_MANT_NORM_p5_2 // interval [0.5, 2) _MM_MANT_NORM_p5_1 // interval [0.5, 1) _MM_MANT_NORM_p75_1p5 // interval [0.75, 1.5) The sign is determined by sc which can take the following values: _MM_MANT_SIGN_src // sign = sign(src) _MM_MANT_SIGN_zero // sign = 0 _MM_MANT_SIGN_nan // dst = NaN if sign(src) = 1 |
| _mm512_maskz_getmant_round_pd⚠ | Experimentalx86 and avx512fNormalize the mantissas of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign. The mantissa is normalized to the interval specified by interv, which can take the following values: _MM_MANT_NORM_1_2 // interval [1, 2) _MM_MANT_NORM_p5_2 // interval [0.5, 2) _MM_MANT_NORM_p5_1 // interval [0.5, 1) _MM_MANT_NORM_p75_1p5 // interval [0.75, 1.5) The sign is determined by sc which can take the following values: _MM_MANT_SIGN_src // sign = sign(src) _MM_MANT_SIGN_zero // sign = 0 _MM_MANT_SIGN_nan // dst = NaN if sign(src) = 1 Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_maskz_getmant_round_ps⚠ | Experimentalx86 and avx512fNormalize the mantissas of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign. The mantissa is normalized to the interval specified by interv, which can take the following values: _MM_MANT_NORM_1_2 // interval [1, 2) _MM_MANT_NORM_p5_2 // interval [0.5, 2) _MM_MANT_NORM_p5_1 // interval [0.5, 1) _MM_MANT_NORM_p75_1p5 // interval [0.75, 1.5) The sign is determined by sc which can take the following values: _MM_MANT_SIGN_src // sign = sign(src) _MM_MANT_SIGN_zero // sign = 0 _MM_MANT_SIGN_nan // dst = NaN if sign(src) = 1 Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_maskz_max_epi32⚠ | Experimentalx86 and avx512fCompare packed signed 32-bit integers in a and b, and store packed maximum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_max_epi64⚠ | Experimentalx86 and avx512fCompare packed signed 64-bit integers in a and b, and store packed maximum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_max_epu32⚠ | Experimentalx86 and avx512fCompare packed unsigned 32-bit integers in a and b, and store packed maximum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_max_epu64⚠ | Experimentalx86 and avx512fCompare packed unsigned 64-bit integers in a and b, and store packed maximum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_max_pd⚠ | Experimentalx86 and avx512fCompare packed double-precision (64-bit) floating-point elements in a and b, and store packed maximum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_max_ps⚠ | Experimentalx86 and avx512fCompare packed single-precision (32-bit) floating-point elements in a and b, and store packed maximum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_max_round_pd⚠ | Experimentalx86 and avx512fCompare packed double-precision (64-bit) floating-point elements in a and b, and store packed maximum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_maskz_max_round_ps⚠ | Experimentalx86 and avx512fCompare packed single-precision (32-bit) floating-point elements in a and b, and store packed maximum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_maskz_min_epi32⚠ | Experimentalx86 and avx512fCompare packed signed 32-bit integers in a and b, and store packed minimum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_min_epi64⚠ | Experimentalx86 and avx512fCompare packed signed 64-bit integers in a and b, and store packed minimum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_min_epu32⚠ | Experimentalx86 and avx512fCompare packed unsigned 32-bit integers in a and b, and store packed minimum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_min_epu64⚠ | Experimentalx86 and avx512fCompare packed unsigned 64-bit integers in a and b, and store packed minimum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_min_pd⚠ | Experimentalx86 and avx512fCompare packed double-precision (64-bit) floating-point elements in a and b, and store packed minimum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_min_ps⚠ | Experimentalx86 and avx512fCompare packed single-precision (32-bit) floating-point elements in a and b, and store packed minimum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_min_round_pd⚠ | Experimentalx86 and avx512fCompare packed double-precision (64-bit) floating-point elements in a and b, and store packed minimum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_maskz_min_round_ps⚠ | Experimentalx86 and avx512fCompare packed single-precision (32-bit) floating-point elements in a and b, and store packed minimum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_maskz_movedup_pd⚠ | Experimentalx86 and avx512fDuplicate even-indexed double-precision (64-bit) floating-point elements from a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_movehdup_ps⚠ | Experimentalx86 and avx512fDuplicate odd-indexed single-precision (32-bit) floating-point elements from a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_moveldup_ps⚠ | Experimentalx86 and avx512fDuplicate even-indexed single-precision (32-bit) floating-point elements from a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_mul_epi32⚠ | Experimentalx86 and avx512fMultiply the low signed 32-bit integers from each packed 64-bit element in a and b, and store the signed 64-bit results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_mul_epu32⚠ | Experimentalx86 and avx512fMultiply the low unsigned 32-bit integers from each packed 64-bit element in a and b, and store the unsigned 64-bit results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_mul_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_mul_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_mul_round_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Rounding is done according to the rounding parameter. |
| _mm512_maskz_mul_round_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Rounding is done according to the rounding parameter. |
| _mm512_maskz_mullo_epi32⚠ | Experimentalx86 and avx512fMultiply the packed 32-bit integers in a and b, producing intermediate 64-bit integers, and store the low 32 bits of the intermediate integers in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_or_epi32⚠ | Experimentalx86 and avx512fCompute the bitwise OR of packed 32-bit integers in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_or_epi64⚠ | Experimentalx86 and avx512fCompute the bitwise OR of packed 64-bit integers in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_permute_pd⚠ | Experimentalx86 and avx512fShuffle double-precision (64-bit) floating-point elements in a within 128-bit lanes using the control in imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_permute_ps⚠ | Experimentalx86 and avx512fShuffle single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_permutevar_pd⚠ | Experimentalx86 and avx512fShuffle double-precision (64-bit) floating-point elements in a within 128-bit lanes using the control in b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_permutevar_ps⚠ | Experimentalx86 and avx512fShuffle single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_permutex2var_epi32⚠ | Experimentalx86 and avx512fShuffle 32-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_permutex2var_epi64⚠ | Experimentalx86 and avx512fShuffle 64-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_permutex2var_pd⚠ | Experimentalx86 and avx512fShuffle double-precision (64-bit) floating-point elements in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_permutex2var_ps⚠ | Experimentalx86 and avx512fShuffle single-precision (32-bit) floating-point elements in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_permutex_epi64⚠ | Experimentalx86 and avx512fShuffle 64-bit integers in a within 256-bit lanes using the control in imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_permutex_pd⚠ | Experimentalx86 and avx512fShuffle double-precision (64-bit) floating-point elements in a within 256-bit lanes using the control in imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_permutexvar_epi32⚠ | Experimentalx86 and avx512fShuffle 32-bit integers in a across lanes using the corresponding index in idx, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_permutexvar_epi64⚠ | Experimentalx86 and avx512fShuffle 64-bit integers in a across lanes using the corresponding index in idx, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_permutexvar_pd⚠ | Experimentalx86 and avx512fShuffle double-precision (64-bit) floating-point elements in a across lanes using the corresponding index in idx, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_permutexvar_ps⚠ | Experimentalx86 and avx512fShuffle single-precision (32-bit) floating-point elements in a across lanes using the corresponding index in idx, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_rcp14_pd⚠ | Experimentalx86 and avx512fCompute the approximate reciprocal of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). The maximum relative error for this approximation is less than 2^-14. |
| _mm512_maskz_rcp14_ps⚠ | Experimentalx86 and avx512fCompute the approximate reciprocal of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). The maximum relative error for this approximation is less than 2^-14. |
| _mm512_maskz_rol_epi32⚠ | Experimentalx86 and avx512fRotate the bits in each packed 32-bit integer in a to the left by the number of bits specified in imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_rol_epi64⚠ | Experimentalx86 and avx512fRotate the bits in each packed 64-bit integer in a to the left by the number of bits specified in imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_rolv_epi32⚠ | Experimentalx86 and avx512fRotate the bits in each packed 32-bit integer in a to the left by the number of bits specified in the corresponding element of b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_rolv_epi64⚠ | Experimentalx86 and avx512fRotate the bits in each packed 64-bit integer in a to the left by the number of bits specified in the corresponding element of b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_ror_epi32⚠ | Experimentalx86 and avx512fRotate the bits in each packed 32-bit integer in a to the right by the number of bits specified in imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_ror_epi64⚠ | Experimentalx86 and avx512fRotate the bits in each packed 64-bit integer in a to the right by the number of bits specified in imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_rorv_epi32⚠ | Experimentalx86 and avx512fRotate the bits in each packed 32-bit integer in a to the right by the number of bits specified in the corresponding element of b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_rorv_epi64⚠ | Experimentalx86 and avx512fRotate the bits in each packed 64-bit integer in a to the right by the number of bits specified in the corresponding element of b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_rsqrt14_pd⚠ | Experimentalx86 and avx512fCompute the approximate reciprocal square root of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). The maximum relative error for this approximation is less than 2^-14. |
| _mm512_maskz_rsqrt14_ps⚠ | Experimentalx86 and avx512fCompute the approximate reciprocal square root of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). The maximum relative error for this approximation is less than 2^-14. |
| _mm512_maskz_shuffle_epi32⚠ | Experimentalx86 and avx512fShuffle 32-bit integers in a within 128-bit lanes using the control in imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_shuffle_f32x4⚠ | Experimentalx86 and avx512fShuffle 128-bits (composed of 4 single-precision (32-bit) floating-point elements) selected by imm8 from a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_shuffle_f64x2⚠ | Experimentalx86 and avx512fShuffle 128-bits (composed of 2 double-precision (64-bit) floating-point elements) selected by imm8 from a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_shuffle_i32x4⚠ | Experimentalx86 and avx512fShuffle 128-bits (composed of 4 32-bit integers) selected by imm8 from a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_shuffle_i64x2⚠ | Experimentalx86 and avx512fShuffle 128-bits (composed of 2 64-bit integers) selected by imm8 from a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_shuffle_pd⚠ | Experimentalx86 and avx512fShuffle double-precision (64-bit) floating-point elements within 128-bit lanes using the control in imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_shuffle_ps⚠ | Experimentalx86 and avx512fShuffle single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_sll_epi32⚠ | Experimentalx86 and avx512fShift packed 32-bit integers in a left by count while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_sll_epi64⚠ | Experimentalx86 and avx512fShift packed 64-bit integers in a left by count while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_slli_epi32⚠ | Experimentalx86 and avx512fShift packed 32-bit integers in a left by imm8 while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_slli_epi64⚠ | Experimentalx86 and avx512fShift packed 64-bit integers in a left by imm8 while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_sllv_epi32⚠ | Experimentalx86 and avx512fShift packed 32-bit integers in a left by the amount specified by the corresponding element in count while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_sllv_epi64⚠ | Experimentalx86 and avx512fShift packed 64-bit integers in a left by the amount specified by the corresponding element in count while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_sqrt_pd⚠ | Experimentalx86 and avx512fCompute the square root of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_sqrt_ps⚠ | Experimentalx86 and avx512fCompute the square root of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_sqrt_round_pd⚠ | Experimentalx86 and avx512fCompute the square root of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Rounding is done according to the rounding parameter. |
| _mm512_maskz_sqrt_round_ps⚠ | Experimentalx86 and avx512fCompute the square root of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Rounding is done according to the rounding parameter. |
| _mm512_maskz_sra_epi32⚠ | Experimentalx86 and avx512fShift packed 32-bit integers in a right by count while shifting in sign bits, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_sra_epi64⚠ | Experimentalx86 and avx512fShift packed 64-bit integers in a right by count while shifting in sign bits, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_srai_epi32⚠ | Experimentalx86 and avx512fShift packed 32-bit integers in a right by imm8 while shifting in sign bits, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_srai_epi64⚠ | Experimentalx86 and avx512fShift packed 64-bit integers in a right by imm8 while shifting in sign bits, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_srav_epi32⚠ | Experimentalx86 and avx512fShift packed 32-bit integers in a right by the amount specified by the corresponding element in count while shifting in sign bits, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_srav_epi64⚠ | Experimentalx86 and avx512fShift packed 64-bit integers in a right by the amount specified by the corresponding element in count while shifting in sign bits, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_srl_epi32⚠ | Experimentalx86 and avx512fShift packed 32-bit integers in a right by count while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_srl_epi64⚠ | Experimentalx86 and avx512fShift packed 64-bit integers in a right by count while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_srli_epi32⚠ | Experimentalx86 and avx512fShift packed 32-bit integers in a right by imm8 while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_srli_epi64⚠ | Experimentalx86 and avx512fShift packed 64-bit integers in a right by imm8 while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_srlv_epi32⚠ | Experimentalx86 and avx512fShift packed 32-bit integers in a right by the amount specified by the corresponding element in count while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_srlv_epi64⚠ | Experimentalx86 and avx512fShift packed 64-bit integers in a right by the amount specified by the corresponding element in count while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_sub_epi32⚠ | Experimentalx86 and avx512fSubtract packed 32-bit integers in b from packed 32-bit integers in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_sub_epi64⚠ | Experimentalx86 and avx512fSubtract packed 64-bit integers in b from packed 64-bit integers in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_sub_pd⚠ | Experimentalx86 and avx512fSubtract packed double-precision (64-bit) floating-point elements in b from packed double-precision (64-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_sub_ps⚠ | Experimentalx86 and avx512fSubtract packed single-precision (32-bit) floating-point elements in b from packed single-precision (32-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_sub_round_pd⚠ | Experimentalx86 and avx512fSubtract packed double-precision (64-bit) floating-point elements in b from packed double-precision (64-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Rounding is done according to the rounding parameter. |
| _mm512_maskz_sub_round_ps⚠ | Experimentalx86 and avx512fSubtract packed single-precision (32-bit) floating-point elements in b from packed single-precision (32-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Rounding is done according to the rounding parameter. |
| _mm512_maskz_xor_epi32⚠ | Experimentalx86 and avx512fCompute the bitwise XOR of packed 32-bit integers in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_maskz_xor_epi64⚠ | Experimentalx86 and avx512fCompute the bitwise XOR of packed 64-bit integers in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). |
| _mm512_max_epi32⚠ | Experimentalx86 and avx512fCompare packed signed 32-bit integers in a and b, and store packed maximum values in dst. |
| _mm512_max_epi64⚠ | Experimentalx86 and avx512fCompare packed signed 64-bit integers in a and b, and store packed maximum values in dst. |
| _mm512_max_epu32⚠ | Experimentalx86 and avx512fCompare packed unsigned 32-bit integers in a and b, and store packed maximum values in dst. |
| _mm512_max_epu64⚠ | Experimentalx86 and avx512fCompare packed unsigned 64-bit integers in a and b, and store packed maximum values in dst. |
| _mm512_max_pd⚠ | Experimentalx86 and avx512fCompare packed double-precision (64-bit) floating-point elements in a and b, and store packed maximum values in dst. |
| _mm512_max_ps⚠ | Experimentalx86 and avx512fCompare packed single-precision (32-bit) floating-point elements in a and b, and store packed maximum values in dst. |
| _mm512_max_round_pd⚠ | Experimentalx86 and avx512fCompare packed double-precision (64-bit) floating-point elements in a and b, and store packed maximum values in dst. Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_max_round_ps⚠ | Experimentalx86 and avx512fCompare packed single-precision (32-bit) floating-point elements in a and b, and store packed maximum values in dst. Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_min_epi32⚠ | Experimentalx86 and avx512fCompare packed signed 32-bit integers in a and b, and store packed minimum values in dst. |
| _mm512_min_epi64⚠ | Experimentalx86 and avx512fCompare packed signed 64-bit integers in a and b, and store packed minimum values in dst. |
| _mm512_min_epu32⚠ | Experimentalx86 and avx512fCompare packed unsigned 32-bit integers in a and b, and store packed minimum values in dst. |
| _mm512_min_epu64⚠ | Experimentalx86 and avx512fCompare packed unsigned 64-bit integers in a and b, and store packed minimum values in dst. |
| _mm512_min_pd⚠ | Experimentalx86 and avx512fCompare packed double-precision (64-bit) floating-point elements in a and b, and store packed minimum values in dst. |
| _mm512_min_ps⚠ | Experimentalx86 and avx512fCompare packed single-precision (32-bit) floating-point elements in a and b, and store packed minimum values in dst. |
| _mm512_min_round_pd⚠ | Experimentalx86 and avx512fCompare packed double-precision (64-bit) floating-point elements in a and b, and store packed minimum values in dst. Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_min_round_ps⚠ | Experimentalx86 and avx512fCompare packed single-precision (32-bit) floating-point elements in a and b, and store packed minimum values in dst. Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm512_movedup_pd⚠ | Experimentalx86 and avx512fDuplicate even-indexed double-precision (64-bit) floating-point elements from a, and store the results in dst. |
| _mm512_movehdup_ps⚠ | Experimentalx86 and avx512fDuplicate odd-indexed single-precision (32-bit) floating-point elements from a, and store the results in dst. |
| _mm512_moveldup_ps⚠ | Experimentalx86 and avx512fDuplicate even-indexed single-precision (32-bit) floating-point elements from a, and store the results in dst. |
| _mm512_mul_epi32⚠ | Experimentalx86 and avx512fMultiply the low signed 32-bit integers from each packed 64-bit element in a and b, and store the signed 64-bit results in dst. |
| _mm512_mul_epu32⚠ | Experimentalx86 and avx512fMultiply the low unsigned 32-bit integers from each packed 64-bit element in a and b, and store the unsigned 64-bit results in dst. |
| _mm512_mul_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, and store the results in dst. |
| _mm512_mul_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, and store the results in dst. |
| _mm512_mul_round_pd⚠ | Experimentalx86 and avx512fMultiply packed double-precision (64-bit) floating-point elements in a and b, and store the results in dst. Rounding is done according to the rounding parameter. |
| _mm512_mul_round_ps⚠ | Experimentalx86 and avx512fMultiply packed single-precision (32-bit) floating-point elements in a and b, and store the results in dst. Rounding is done according to the rounding parameter. |
| _mm512_mullo_epi32⚠ | Experimentalx86 and avx512fMultiply the packed 32-bit integers in a and b, producing intermediate 64-bit integers, and store the low 32 bits of the intermediate integers in dst. |
| _mm512_mullox_epi64⚠ | Experimentalx86 and avx512fMultiplies elements in packed 64-bit integer vectors a and b together, storing the lower 64 bits of the result in dst. |
| _mm512_or_epi32⚠ | Experimentalx86 and avx512fCompute the bitwise OR of packed 32-bit integers in a and b, and store the results in dst. |
| _mm512_or_epi64⚠ | Experimentalx86 and avx512fCompute the bitwise OR of packed 64-bit integers in a and b, and store the results in dst. |
| _mm512_or_si512⚠ | Experimentalx86 and avx512fCompute the bitwise OR of 512 bits (representing integer data) in a and b, and store the result in dst. |
| _mm512_permute_pd⚠ | Experimentalx86 and avx512fShuffle double-precision (64-bit) floating-point elements in a within 128-bit lanes using the control in imm8, and store the results in dst. |
| _mm512_permute_ps⚠ | Experimentalx86 and avx512fShuffle single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in imm8, and store the results in dst. |
| _mm512_permutevar_epi32⚠ | Experimentalx86 and avx512fShuffle 32-bit integers in a across lanes using the corresponding index in idx, and store the results in dst. Note that this intrinsic shuffles across 128-bit lanes, unlike past intrinsics that use the permutevar name. This intrinsic is identical to _mm512_permutexvar_epi32, and it is recommended that you use that intrinsic name. |
| _mm512_permutevar_pd⚠ | Experimentalx86 and avx512fShuffle double-precision (64-bit) floating-point elements in a within 128-bit lanes using the control in b, and store the results in dst. |
| _mm512_permutevar_ps⚠ | Experimentalx86 and avx512fShuffle single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in b, and store the results in dst. |
| _mm512_permutex2var_epi32⚠ | Experimentalx86 and avx512fShuffle 32-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst. |
| _mm512_permutex2var_epi64⚠ | Experimentalx86 and avx512fShuffle 64-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst. |
| _mm512_permutex2var_pd⚠ | Experimentalx86 and avx512fShuffle double-precision (64-bit) floating-point elements in a and b across lanes using the corresponding selector and index in idx, and store the results in dst. |
| _mm512_permutex2var_ps⚠ | Experimentalx86 and avx512fShuffle single-precision (32-bit) floating-point elements in a and b across lanes using the corresponding selector and index in idx, and store the results in dst. |
| _mm512_permutex_epi64⚠ | Experimentalx86 and avx512fShuffle 64-bit integers in a within 256-bit lanes using the control in imm8, and store the results in dst. |
| _mm512_permutex_pd⚠ | Experimentalx86 and avx512fShuffle double-precision (64-bit) floating-point elements in a within 256-bit lanes using the control in imm8, and store the results in dst. |
| _mm512_permutexvar_epi32⚠ | Experimentalx86 and avx512fShuffle 32-bit integers in a across lanes using the corresponding index in idx, and store the results in dst. |
| _mm512_permutexvar_epi64⚠ | Experimentalx86 and avx512fShuffle 64-bit integers in a across lanes using the corresponding index in idx, and store the results in dst. |
| _mm512_permutexvar_pd⚠ | Experimentalx86 and avx512fShuffle double-precision (64-bit) floating-point elements in a across lanes using the corresponding index in idx, and store the results in dst. |
| _mm512_permutexvar_ps⚠ | Experimentalx86 and avx512fShuffle single-precision (32-bit) floating-point elements in a across lanes using the corresponding index in idx, and store the results in dst. |
| _mm512_rcp14_pd⚠ | Experimentalx86 and avx512fCompute the approximate reciprocal of packed double-precision (64-bit) floating-point elements in a, and store the results in dst. The maximum relative error for this approximation is less than 2^-14. |
| _mm512_rcp14_ps⚠ | Experimentalx86 and avx512fCompute the approximate reciprocal of packed single-precision (32-bit) floating-point elements in a, and store the results in dst. The maximum relative error for this approximation is less than 2^-14. |
| _mm512_rol_epi32⚠ | Experimentalx86 and avx512fRotate the bits in each packed 32-bit integer in a to the left by the number of bits specified in imm8, and store the results in dst. |
| _mm512_rol_epi64⚠ | Experimentalx86 and avx512fRotate the bits in each packed 64-bit integer in a to the left by the number of bits specified in imm8, and store the results in dst. |
| _mm512_rolv_epi32⚠ | Experimentalx86 and avx512fRotate the bits in each packed 32-bit integer in a to the left by the number of bits specified in the corresponding element of b, and store the results in dst. |
| _mm512_rolv_epi64⚠ | Experimentalx86 and avx512fRotate the bits in each packed 64-bit integer in a to the left by the number of bits specified in the corresponding element of b, and store the results in dst. |
| _mm512_ror_epi32⚠ | Experimentalx86 and avx512fRotate the bits in each packed 32-bit integer in a to the right by the number of bits specified in imm8, and store the results in dst. |
| _mm512_ror_epi64⚠ | Experimentalx86 and avx512fRotate the bits in each packed 64-bit integer in a to the right by the number of bits specified in imm8, and store the results in dst. |
| _mm512_rorv_epi32⚠ | Experimentalx86 and avx512fRotate the bits in each packed 32-bit integer in a to the right by the number of bits specified in the corresponding element of b, and store the results in dst. |
| _mm512_rorv_epi64⚠ | Experimentalx86 and avx512fRotate the bits in each packed 64-bit integer in a to the right by the number of bits specified in the corresponding element of b, and store the results in dst. |
| _mm512_rsqrt14_pd⚠ | Experimentalx86 and avx512fCompute the approximate reciprocal square root of packed double-precision (64-bit) floating-point elements in a, and store the results in dst. The maximum relative error for this approximation is less than 2^-14. |
| _mm512_rsqrt14_ps⚠ | Experimentalx86 and avx512fCompute the approximate reciprocal square root of packed single-precision (32-bit) floating-point elements in a, and store the results in dst. The maximum relative error for this approximation is less than 2^-14. |
| _mm512_set1_epi32⚠ | Experimentalx86 and avx512fBroadcast 32-bit integer a to all elements of dst. |
| _mm512_set1_epi64⚠ | Experimentalx86 and avx512fBroadcast 64-bit integer a to all elements of dst. |
| _mm512_set1_pd⚠ | Experimentalx86 and avx512fBroadcast 64-bit float a to all elements of dst. |
| _mm512_set1_ps⚠ | Experimentalx86 and avx512fBroadcast 32-bit float a to all elements of dst. |
| _mm512_set_epi32⚠ | Experimentalx86 and avx512fSets packed 32-bit integers in dst with the supplied values. |
| _mm512_set_pd⚠ | Experimentalx86 and avx512fSets packed double-precision (64-bit) floating-point elements in dst with the supplied values. |
| _mm512_set_ps⚠ | Experimentalx86 and avx512fSets packed single-precision (32-bit) floating-point elements in dst with the supplied values. |
| _mm512_setr_epi32⚠ | Experimentalx86 and avx512fSets packed 32-bit integers in dst with the supplied values in reverse order. |
| _mm512_setr_pd⚠ | Experimentalx86 and avx512fSets packed double-precision (64-bit) floating-point elements in dst with the supplied values in reverse order. |
| _mm512_setr_ps⚠ | Experimentalx86 and avx512fSets packed single-precision (32-bit) floating-point elements in dst with the supplied values in reverse order. |
| _mm512_setzero_pd⚠ | Experimentalx86 and avx512fReturns vector of type __m512d with all elements set to zero. |
| _mm512_setzero_ps⚠ | Experimentalx86 and avx512fReturns vector of type __m512 with all elements set to zero. |
| _mm512_setzero_si512⚠ | Experimentalx86 and avx512fReturns vector of type __m512i with all elements set to zero. |
| _mm512_shuffle_epi32⚠ | Experimentalx86 and avx512fShuffle 32-bit integers in a within 128-bit lanes using the control in imm8, and store the results in dst. |
| _mm512_shuffle_f32x4⚠ | Experimentalx86 and avx512fShuffle 128-bits (composed of 4 single-precision (32-bit) floating-point elements) selected by imm8 from a and b, and store the results in dst. |
| _mm512_shuffle_f64x2⚠ | Experimentalx86 and avx512fShuffle 128-bits (composed of 2 double-precision (64-bit) floating-point elements) selected by imm8 from a and b, and store the results in dst. |
| _mm512_shuffle_i32x4⚠ | Experimentalx86 and avx512fShuffle 128-bits (composed of 4 32-bit integers) selected by imm8 from a and b, and store the results in dst. |
| _mm512_shuffle_i64x2⚠ | Experimentalx86 and avx512fShuffle 128-bits (composed of 2 64-bit integers) selected by imm8 from a and b, and store the results in dst. |
| _mm512_shuffle_pd⚠ | Experimentalx86 and avx512fShuffle double-precision (64-bit) floating-point elements within 128-bit lanes using the control in imm8, and store the results in dst. |
| _mm512_shuffle_ps⚠ | Experimentalx86 and avx512fShuffle single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in imm8, and store the results in dst. |
| _mm512_sll_epi32⚠ | Experimentalx86 and avx512fShift packed 32-bit integers in a left by count while shifting in zeros, and store the results in dst. |
| _mm512_sll_epi64⚠ | Experimentalx86 and avx512fShift packed 64-bit integers in a left by count while shifting in zeros, and store the results in dst. |
| _mm512_slli_epi32⚠ | Experimentalx86 and avx512fShift packed 32-bit integers in a left by imm8 while shifting in zeros, and store the results in dst. |
| _mm512_slli_epi64⚠ | Experimentalx86 and avx512fShift packed 64-bit integers in a left by imm8 while shifting in zeros, and store the results in dst. |
| _mm512_sllv_epi32⚠ | Experimentalx86 and avx512fShift packed 32-bit integers in a left by the amount specified by the corresponding element in count while shifting in zeros, and store the results in dst. |
| _mm512_sllv_epi64⚠ | Experimentalx86 and avx512fShift packed 64-bit integers in a left by the amount specified by the corresponding element in count while shifting in zeros, and store the results in dst. |
| _mm512_sqrt_pd⚠ | Experimentalx86 and avx512fCompute the square root of packed double-precision (64-bit) floating-point elements in a, and store the results in dst. |
| _mm512_sqrt_ps⚠ | Experimentalx86 and avx512fCompute the square root of packed single-precision (32-bit) floating-point elements in a, and store the results in dst. |
| _mm512_sqrt_round_pd⚠ | Experimentalx86 and avx512fCompute the square root of packed double-precision (64-bit) floating-point elements in a, and store the results in dst. Rounding is done according to the rounding parameter. |
| _mm512_sqrt_round_ps⚠ | Experimentalx86 and avx512fCompute the square root of packed single-precision (32-bit) floating-point elements in a, and store the results in dst. Rounding is done according to the rounding parameter. |
| _mm512_sra_epi32⚠ | Experimentalx86 and avx512fShift packed 32-bit integers in a right by count while shifting in sign bits, and store the results in dst. |
| _mm512_sra_epi64⚠ | Experimentalx86 and avx512fShift packed 64-bit integers in a right by count while shifting in sign bits, and store the results in dst. |
| _mm512_srai_epi32⚠ | Experimentalx86 and avx512fShift packed 32-bit integers in a right by imm8 while shifting in sign bits, and store the results in dst. |
| _mm512_srai_epi64⚠ | Experimentalx86 and avx512fShift packed 64-bit integers in a right by imm8 while shifting in sign bits, and store the results in dst. |
| _mm512_srav_epi32⚠ | Experimentalx86 and avx512fShift packed 32-bit integers in a right by the amount specified by the corresponding element in count while shifting in sign bits, and store the results in dst. |
| _mm512_srav_epi64⚠ | Experimentalx86 and avx512fShift packed 64-bit integers in a right by the amount specified by the corresponding element in count while shifting in sign bits, and store the results in dst. |
| _mm512_srl_epi32⚠ | Experimentalx86 and avx512fShift packed 32-bit integers in a right by count while shifting in zeros, and store the results in dst. |
| _mm512_srl_epi64⚠ | Experimentalx86 and avx512fShift packed 64-bit integers in a right by count while shifting in zeros, and store the results in dst. |
| _mm512_srli_epi32⚠ | Experimentalx86 and avx512fShift packed 32-bit integers in a right by imm8 while shifting in zeros, and store the results in dst. |
| _mm512_srli_epi64⚠ | Experimentalx86 and avx512fShift packed 64-bit integers in a right by imm8 while shifting in zeros, and store the results in dst. |
| _mm512_srlv_epi32⚠ | Experimentalx86 and avx512fShift packed 32-bit integers in a right by the amount specified by the corresponding element in count while shifting in zeros, and store the results in dst. |
| _mm512_srlv_epi64⚠ | Experimentalx86 and avx512fShift packed 64-bit integers in a right by the amount specified by the corresponding element in count while shifting in zeros, and store the results in dst. |
| _mm512_storeu_pd⚠ | Experimentalx86 and avx512fStores 512-bits (composed of 8 packed double-precision (64-bit) floating-point elements) from a into memory. mem_addr does not need to be aligned on any particular boundary. |
| _mm512_sub_epi32⚠ | Experimentalx86 and avx512fSubtract packed 32-bit integers in b from packed 32-bit integers in a, and store the results in dst. |
| _mm512_sub_epi64⚠ | Experimentalx86 and avx512fSubtract packed 64-bit integers in b from packed 64-bit integers in a, and store the results in dst. |
| _mm512_sub_pd⚠ | Experimentalx86 and avx512fSubtract packed double-precision (64-bit) floating-point elements in b from packed double-precision (64-bit) floating-point elements in a, and store the results in dst. |
| _mm512_sub_ps⚠ | Experimentalx86 and avx512fSubtract packed single-precision (32-bit) floating-point elements in b from packed single-precision (32-bit) floating-point elements in a, and store the results in dst. |
| _mm512_sub_round_pd⚠ | Experimentalx86 and avx512fSubtract packed double-precision (64-bit) floating-point elements in b from packed double-precision (64-bit) floating-point elements in a, and store the results in dst. Rounding is done according to the rounding parameter. |
| _mm512_sub_round_ps⚠ | Experimentalx86 and avx512fSubtract packed single-precision (32-bit) floating-point elements in b from packed single-precision (32-bit) floating-point elements in a, and store the results in dst. Rounding is done according to the rounding parameter. |
| _mm512_undefined_pd⚠ | Experimentalx86 and avx512fReturns vector of type __m512d with undefined elements. |
| _mm512_undefined_ps⚠ | Experimentalx86 and avx512fReturns vector of type __m512 with undefined elements. |
| _mm512_xor_epi32⚠ | Experimentalx86 and avx512fCompute the bitwise XOR of packed 32-bit integers in a and b, and store the results in dst. |
| _mm512_xor_epi64⚠ | Experimentalx86 and avx512fCompute the bitwise XOR of packed 64-bit integers in a and b, and store the results in dst. |
| _mm512_xor_si512⚠ | Experimentalx86 and avx512fCompute the bitwise XOR of 512 bits (representing integer data) in a and b, and store the result in dst. |
| _mm_cmp_round_sd_mask⚠ | Experimentalx86 and avx512fCompare the lower double-precision (64-bit) floating-point element in a and b based on the comparison operand specified by imm8, and store the result in a mask vector. Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm_cmp_round_ss_mask⚠ | Experimentalx86 and avx512fCompare the lower single-precision (32-bit) floating-point element in a and b based on the comparison operand specified by imm8, and store the result in a mask vector. Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm_cmp_sd_mask⚠ | Experimentalx86 and avx512fCompare the lower double-precision (64-bit) floating-point element in a and b based on the comparison operand specified by imm8, and store the result in a mask vector. |
| _mm_cmp_ss_mask⚠ | Experimentalx86 and avx512fCompare the lower single-precision (32-bit) floating-point element in a and b based on the comparison operand specified by imm8, and store the result in a mask vector. |
| _mm_cvtph_ps⚠ | Experimentalx86 and f16cConverts the 4 x 16-bit half-precision float values in the lowest 64-bit of the 128-bit vector a into 4 x 32-bit float values stored in a 128-bit wide vector. |
| _mm_cvtps_ph⚠ | Experimentalx86 and f16cConverts the 4 x 32-bit float values in the 128-bit vector a into 4 x 16-bit half-precision float values stored in the lowest 64-bit of a 128-bit vector. |
| _mm_madd52hi_epu64⚠ | Experimentalx86 and avx512ifma,avx512vlMultiply packed unsigned 52-bit integers in each 64-bit element of b and c to form a 104-bit intermediate result. Add the high 52-bit unsigned integer from the intermediate result with the corresponding unsigned 64-bit integer in a, and store the results in dst. |
| _mm_madd52lo_epu64⚠ | Experimentalx86 and avx512ifma,avx512vlMultiply packed unsigned 52-bit integers in each 64-bit element of b and c to form a 104-bit intermediate result. Add the low 52-bit unsigned integer from the intermediate result with the corresponding unsigned 64-bit integer in a, and store the results in dst. |
| _mm_mask_cmp_round_sd_mask⚠ | Experimentalx86 and avx512fCompare the lower double-precision (64-bit) floating-point element in a and b based on the comparison operand specified by imm8, and store the result in a mask vector using zeromask m (the element is zeroed out when mask bit 0 is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm_mask_cmp_round_ss_mask⚠ | Experimentalx86 and avx512fCompare the lower single-precision (32-bit) floating-point element in a and b based on the comparison operand specified by imm8, and store the result in a mask vector using zeromask m (the element is zeroed out when mask bit 0 is not set). Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter. |
| _mm_mask_cmp_sd_mask⚠ | Experimentalx86 and avx512fCompare the lower double-precision (64-bit) floating-point element in a and b based on the comparison operand specified by imm8, and store the result in a mask vector using zeromask m (the element is zeroed out when mask bit 0 is not set). |
| _mm_mask_cmp_ss_mask⚠ | Experimentalx86 and avx512fCompare the lower single-precision (32-bit) floating-point element in a and b based on the comparison operand specified by imm8, and store the result in a mask vector using zeromask m (the element is zeroed out when mask bit 0 is not set). |
| _xabort⚠ | Experimentalx86 and rtmForces a restricted transactional memory (RTM) region to abort. |
| _xabort_code | Experimentalx86 Retrieves the parameter passed to _xabort when _xbegin's status has the _XABORT_EXPLICIT flag set. |
| _xbegin⚠ | Experimentalx86 and rtmSpecifies the start of a restricted transactional memory (RTM) code region and returns a value indicating status. |
| _xend⚠ | Experimentalx86 and rtmSpecifies the end of a restricted transactional memory (RTM) code region. |
| _xtest⚠ | Experimentalx86 and rtmQueries whether the processor is executing in a transactional region identified by restricted transactional memory (RTM) or hardware lock elision (HLE). |
| has_cpuid | Experimentalx86 Does the host support the cpuid instruction? |
| ud2⚠ | Experimentalx86 Generates the trap instruction UD2. |
Type Definitions
| _MM_CMPINT_ENUM | Experimentalx86 The _MM_CMPINT_ENUM type used to specify comparison operations in AVX-512 intrinsics. |
| _MM_MANTISSA_NORM_ENUM | Experimentalx86 The _MM_MANTISSA_NORM_ENUM type used to specify mantissa normalization in AVX-512 intrinsics. |
| _MM_MANTISSA_SIGN_ENUM | Experimentalx86 The _MM_MANTISSA_SIGN_ENUM type used to specify mantissa sign handling in AVX-512 intrinsics. |
| _MM_PERM_ENUM | Experimentalx86 The _MM_PERM_ENUM type used to specify shuffle patterns in AVX-512 intrinsics. |
| __mmask8 | Experimentalx86 The __mmask8 type used in AVX-512 intrinsics, an 8-bit mask. |
| __mmask16 | Experimentalx86 The __mmask16 type used in AVX-512 intrinsics, a 16-bit mask. |
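The mask types are plain integers with one bit per vector lane, so ordinary bit operations apply to them. A minimal sketch, assuming only that __mmask16 is defined as u16 (u16 is used directly so the snippet compiles on any target):

```rust
// __mmask16 is a plain 16-bit integer: bit i selects or reports lane i.
// u16 stands in for the alias here so this runs without the arch module.
fn main() {
    let k: u16 = 0b0000_0000_1010_0101; // lanes 0, 2, 5 and 7 set
    assert_eq!(k.count_ones(), 4);      // four lanes selected
    assert_ne!(k & (1 << 2), 0);        // lane 2 is set
    assert_eq!(k & (1 << 3), 0);        // lane 3 is clear
}
```

Masked intrinsics such as _mm_mask_cmp_sd_mask above consume and produce these values directly, so mask results can be combined with &, |, and ! before feeding them back into further masked operations.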