initial commit

Change-Id: I32a6db1a17988d9df8ff69aa1672dbf08b108e8a
Author: Mark Charney
Date: 2016-12-16 16:09:38 -05:00
Commit: ffd94e705c
3166 changed files with 152815 additions and 0 deletions

.gitignore (new file, +17 lines)

@@ -0,0 +1,17 @@
*.o
*.obj
*.exe
*.pyc
*~
*#
obj*
.buildid
xed2-install*
xed-install*
.#*
TAGS
/VS10/xed.vcxproj.user
/VS10/xed.sdf
kits/*
logs/*
.developer

LICENSE (new file, +178 lines)

@@ -0,0 +1,178 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS

README.md (new file, +18 lines)

@@ -0,0 +1,18 @@
# Intel X86 Encoder Decoder (Intel XED)
## Doxygen API manual and source build manual:
https://intelxed.github.io
## Bugs:
### Intel-internal users and developers:
http://mjc.intel.com
### Everyone else:
https://github.com/intelxed/xed/issues/new

VERSION (new file, +1 line)

@@ -0,0 +1 @@
7.54.0-35-gb441a09


@@ -0,0 +1,91 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
#
#
#
# ***** GENERATED FILE -- DO NOT EDIT! *****
# ***** GENERATED FILE -- DO NOT EDIT! *****
# ***** GENERATED FILE -- DO NOT EDIT! *****
#
#
#
EVEX_INSTRUCTIONS()::
# EMITTING V4FMADDPS (V4FMADDPS-512-1)
{
ICLASS: V4FMADDPS
CPL: 3
CATEGORY: AVX512_4FMAPS
EXTENSION: AVX512EVEX
ISA_SET: AVX512_4FMAPS_512
EXCEPTIONS: AVX512-E2
REAL_OPCODE: Y
ATTRIBUTES: MEMORY_FAULT_SUPPRESSION MULTISOURCE4 DISP8_TUPLE1_4X MXCSR MASKOP_EVEX
PATTERN: EVV 0x9A VF2 V0F38 MOD[mm] MOD!=3 REG[rrr] RM[nnn] BCRC=0 MODRM() VL512 W0 ESIZE_32_BITS() NELEM_TUPLE1_4X()
OPERANDS: REG0=ZMM_R3():rw:zf32 REG1=MASK1():r:mskw:TXT=ZEROSTR REG2=ZMM_N3():r:zf32:MULTISOURCE4 MEM0:r:dq:f32
IFORM: V4FMADDPS_ZMMf32_MASKmskw_ZMMf32_MEMf32_AVX512
}
# EMITTING V4FMADDSS (V4FMADDSS-128-1)
{
ICLASS: V4FMADDSS
CPL: 3
CATEGORY: AVX512_4FMAPS
EXTENSION: AVX512EVEX
ISA_SET: AVX512_4FMAPS_SCALAR
EXCEPTIONS: AVX512-E2
REAL_OPCODE: Y
ATTRIBUTES: DISP8_TUPLE1_4X MXCSR MULTISOURCE4 MEMORY_FAULT_SUPPRESSION MASKOP_EVEX SIMD_SCALAR
PATTERN: EVV 0x9B VF2 V0F38 MOD[mm] MOD!=3 REG[rrr] RM[nnn] BCRC=0 MODRM() W0 ESIZE_32_BITS() NELEM_TUPLE1_4X()
OPERANDS: REG0=XMM_R3():rw:dq:f32 REG1=MASK1():r:mskw:TXT=ZEROSTR REG2=XMM_N3():r:dq:f32:MULTISOURCE4 MEM0:r:dq:f32
IFORM: V4FMADDSS_XMMf32_MASKmskw_XMMf32_MEMf32_AVX512
}
# EMITTING V4FNMADDPS (V4FNMADDPS-512-1)
{
ICLASS: V4FNMADDPS
CPL: 3
CATEGORY: AVX512_4FMAPS
EXTENSION: AVX512EVEX
ISA_SET: AVX512_4FMAPS_512
EXCEPTIONS: AVX512-E2
REAL_OPCODE: Y
ATTRIBUTES: MEMORY_FAULT_SUPPRESSION MULTISOURCE4 DISP8_TUPLE1_4X MXCSR MASKOP_EVEX
PATTERN: EVV 0xAA VF2 V0F38 MOD[mm] MOD!=3 REG[rrr] RM[nnn] BCRC=0 MODRM() VL512 W0 ESIZE_32_BITS() NELEM_TUPLE1_4X()
OPERANDS: REG0=ZMM_R3():rw:zf32 REG1=MASK1():r:mskw:TXT=ZEROSTR REG2=ZMM_N3():r:zf32:MULTISOURCE4 MEM0:r:dq:f32
IFORM: V4FNMADDPS_ZMMf32_MASKmskw_ZMMf32_MEMf32_AVX512
}
# EMITTING V4FNMADDSS (V4FNMADDSS-128-1)
{
ICLASS: V4FNMADDSS
CPL: 3
CATEGORY: AVX512_4FMAPS
EXTENSION: AVX512EVEX
ISA_SET: AVX512_4FMAPS_SCALAR
EXCEPTIONS: AVX512-E2
REAL_OPCODE: Y
ATTRIBUTES: DISP8_TUPLE1_4X MXCSR MULTISOURCE4 MEMORY_FAULT_SUPPRESSION MASKOP_EVEX SIMD_SCALAR
PATTERN: EVV 0xAB VF2 V0F38 MOD[mm] MOD!=3 REG[rrr] RM[nnn] BCRC=0 MODRM() W0 ESIZE_32_BITS() NELEM_TUPLE1_4X()
OPERANDS: REG0=XMM_R3():rw:dq:f32 REG1=MASK1():r:mskw:TXT=ZEROSTR REG2=XMM_N3():r:dq:f32:MULTISOURCE4 MEM0:r:dq:f32
IFORM: V4FNMADDSS_XMMf32_MASKmskw_XMMf32_MEMf32_AVX512
}
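
The brace-delimited records above (ICLASS, CATEGORY, ISA_SET, PATTERN, OPERANDS, IFORM, and so on) are the source format XED's table generators consume. As a reading aid only, here is a toy Python sketch of a parser for this shape of data; the helper name and dict layout are mine, not part of XED's tooling:

import re

def read_xed_records(text):
    """Parse brace-delimited KEY: VALUE records like the ones above."""
    records = []
    for block in re.findall(r"\{(.*?)\}", text, re.S):
        rec = {}
        for line in block.strip().splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition(":")
            # PATTERN/OPERANDS may repeat within one record, so keep lists.
            rec.setdefault(key.strip(), []).append(value.strip())
        records.append(rec)
    return records

# read_xed_records(file_text)[0]["ICLASS"]  ->  ["V4FMADDPS"]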


@@ -0,0 +1,19 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
XED_ISA_SET_AVX512_4FMAPS_512: avx512_4fmaps.7.0.edx.3
XED_ISA_SET_AVX512_4FMAPS_SCALAR: avx512_4fmaps.7.0.edx.3
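
The two lines above use XED's cpuid record syntax: an ISA-set name, then feature-name.leaf.subleaf.register.bit, so both 4FMAPS ISA sets are gated on CPUID leaf 7, subleaf 0, EDX bit 3. A small sketch of decomposing such a line (field labels are mine):

def parse_cpuid_record(line):
    isa_set, _, spec = line.partition(":")
    feature, leaf, subleaf, reg, bit = spec.strip().split(".")
    return {"isa_set": isa_set.strip(), "feature": feature,
            "leaf": int(leaf), "subleaf": int(subleaf),
            "register": reg.upper(), "bit": int(bit)}

parse_cpuid_record("XED_ISA_SET_AVX512_4FMAPS_512: avx512_4fmaps.7.0.edx.3")
# -> CPUID leaf 7, subleaf 0, EDX bit 3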


@@ -0,0 +1,24 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
dec-instructions: 4fmaps-512-isa.xed.txt
enc-instructions: 4fmaps-512-isa.xed.txt
cpuid: cpuid.xed.txt


@@ -0,0 +1,59 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
#
#
#
# ***** GENERATED FILE -- DO NOT EDIT! *****
# ***** GENERATED FILE -- DO NOT EDIT! *****
# ***** GENERATED FILE -- DO NOT EDIT! *****
#
#
#
EVEX_INSTRUCTIONS()::
# EMITTING VP4DPWSSD (VP4DPWSSD-512-1)
{
ICLASS: VP4DPWSSD
CPL: 3
CATEGORY: AVX512_4VNNIW
EXTENSION: AVX512EVEX
ISA_SET: AVX512_4VNNIW_512
EXCEPTIONS: AVX512-E4
REAL_OPCODE: Y
ATTRIBUTES: MEMORY_FAULT_SUPPRESSION MULTISOURCE4 DISP8_TUPLE1_4X MASKOP_EVEX
PATTERN: EVV 0x52 VF2 V0F38 MOD[mm] MOD!=3 REG[rrr] RM[nnn] BCRC=0 MODRM() VL512 W0 ESIZE_32_BITS() NELEM_TUPLE1_4X()
OPERANDS: REG0=ZMM_R3():rw:zi32 REG1=MASK1():r:mskw:TXT=ZEROSTR REG2=ZMM_N3():r:zi16:MULTISOURCE4 MEM0:r:dq:u32
IFORM: VP4DPWSSD_ZMMi32_MASKmskw_ZMMi16_MEMu32_AVX512
}
# EMITTING VP4DPWSSDS (VP4DPWSSDS-512-1)
{
ICLASS: VP4DPWSSDS
CPL: 3
CATEGORY: AVX512_4VNNIW
EXTENSION: AVX512EVEX
ISA_SET: AVX512_4VNNIW_512
EXCEPTIONS: AVX512-E4
REAL_OPCODE: Y
ATTRIBUTES: MEMORY_FAULT_SUPPRESSION MULTISOURCE4 DISP8_TUPLE1_4X MASKOP_EVEX
PATTERN: EVV 0x53 VF2 V0F38 MOD[mm] MOD!=3 REG[rrr] RM[nnn] BCRC=0 MODRM() VL512 W0 ESIZE_32_BITS() NELEM_TUPLE1_4X()
OPERANDS: REG0=ZMM_R3():rw:zi32 REG1=MASK1():r:mskw:TXT=ZEROSTR REG2=ZMM_N3():r:zi16:MULTISOURCE4 MEM0:r:dq:u32
IFORM: VP4DPWSSDS_ZMMi32_MASKmskw_ZMMi16_MEMu32_AVX512
}


@@ -0,0 +1,18 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
XED_ISA_SET_AVX512_4VNNIW_512: avx512_4vnniw.7.0.edx.2


@@ -0,0 +1,24 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
dec-instructions: 4vnniw-512-isa.xed.txt
enc-instructions: 4vnniw-512-isa.xed.txt
cpuid: cpuid.xed.txt


@@ -0,0 +1,582 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
AVX_INSTRUCTIONS()::
{
ICLASS: VFMADDSUBPS
CPL: 3
CATEGORY: FMA4
ISA_SET: FMA4
EXTENSION: FMA4
ATTRIBUTES: MXCSR
PATTERN: VV1 0x5C V66 W0 VL128 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 MEM0:r:dq:f32 REG2=XMM_SE():r:dq:f32
PATTERN: VV1 0x5C V66 W0 VL128 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 REG2=XMM_B():r:dq:f32 REG3=XMM_SE():r:dq:f32
PATTERN: VV1 0x5C V66 W1 VL128 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 REG2=XMM_SE():r:dq:f32 MEM0:r:dq:f32
PATTERN: VV1 0x5C V66 W1 VL128 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 REG2=XMM_SE():r:dq:f32 REG3=XMM_B():r:dq:f32
PATTERN: VV1 0x5C V66 W0 VL256 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f32 REG1=YMM_N():r:qq:f32 MEM0:r:qq:f32 REG2=YMM_SE():r:qq:f32
PATTERN: VV1 0x5C V66 W0 VL256 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f32 REG1=YMM_N():r:qq:f32 REG2=YMM_B():r:qq:f32 REG3=YMM_SE():r:qq:f32
PATTERN: VV1 0x5C V66 W1 VL256 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f32 REG1=YMM_N():r:qq:f32 REG2=YMM_SE():r:qq:f32 MEM0:r:qq:f32
PATTERN: VV1 0x5C V66 W1 VL256 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f32 REG1=YMM_N():r:qq:f32 REG2=YMM_SE():r:qq:f32 REG3=YMM_B():r:qq:f32
}
{
ICLASS: VFMADDSUBPD
CPL: 3
CATEGORY: FMA4
ISA_SET: FMA4
EXTENSION: FMA4
ATTRIBUTES: MXCSR
PATTERN: VV1 0x5D V66 W0 VL128 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 MEM0:r:dq:f64 REG2=XMM_SE():r:dq:f64
PATTERN: VV1 0x5D V66 W0 VL128 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 REG2=XMM_B():r:dq:f64 REG3=XMM_SE():r:dq:f64
PATTERN: VV1 0x5D V66 W1 VL128 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 REG2=XMM_SE():r:dq:f64 MEM0:r:dq:f64
PATTERN: VV1 0x5D V66 W1 VL128 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 REG2=XMM_SE():r:dq:f64 REG3=XMM_B():r:dq:f64
PATTERN: VV1 0x5D V66 W0 VL256 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f64 REG1=YMM_N():r:qq:f64 MEM0:r:qq:f64 REG2=YMM_SE():r:qq:f64
PATTERN: VV1 0x5D V66 W0 VL256 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f64 REG1=YMM_N():r:qq:f64 REG2=YMM_B():r:qq:f64 REG3=YMM_SE():r:qq:f64
PATTERN: VV1 0x5D V66 W1 VL256 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f64 REG1=YMM_N():r:qq:f64 REG2=YMM_SE():r:qq:f64 MEM0:r:qq:f64
PATTERN: VV1 0x5D V66 W1 VL256 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f64 REG1=YMM_N():r:qq:f64 REG2=YMM_SE():r:qq:f64 REG3=YMM_B():r:qq:f64
}
{
ICLASS: VFMSUBADDPS
CPL: 3
CATEGORY: FMA4
ISA_SET: FMA4
EXTENSION: FMA4
ATTRIBUTES: MXCSR
PATTERN: VV1 0x5E V66 W0 VL128 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 MEM0:r:dq:f32 REG2=XMM_SE():r:dq:f32
PATTERN: VV1 0x5E V66 W0 VL128 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 REG2=XMM_B():r:dq:f32 REG3=XMM_SE():r:dq:f32
PATTERN: VV1 0x5E V66 W1 VL128 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 REG2=XMM_SE():r:dq:f32 MEM0:r:dq:f32
PATTERN: VV1 0x5E V66 W1 VL128 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 REG2=XMM_SE():r:dq:f32 REG3=XMM_B():r:dq:f32
PATTERN: VV1 0x5E V66 W0 VL256 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f32 REG1=YMM_N():r:qq:f32 MEM0:r:qq:f32 REG2=YMM_SE():r:qq:f32
PATTERN: VV1 0x5E V66 W0 VL256 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f32 REG1=YMM_N():r:qq:f32 REG2=YMM_B():r:qq:f32 REG3=YMM_SE():r:qq:f32
PATTERN: VV1 0x5E V66 W1 VL256 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f32 REG1=YMM_N():r:qq:f32 REG2=YMM_SE():r:qq:f32 MEM0:r:qq:f32
PATTERN: VV1 0x5E V66 W1 VL256 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f32 REG1=YMM_N():r:qq:f32 REG2=YMM_SE():r:qq:f32 REG3=YMM_B():r:qq:f32
}
{
ICLASS: VFMSUBADDPD
CPL: 3
CATEGORY: FMA4
ISA_SET: FMA4
EXTENSION: FMA4
ATTRIBUTES: MXCSR
PATTERN: VV1 0x5F V66 W0 VL128 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 MEM0:r:dq:f64 REG2=XMM_SE():r:dq:f64
PATTERN: VV1 0x5F V66 W0 VL128 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 REG2=XMM_B():r:dq:f64 REG3=XMM_SE():r:dq:f64
PATTERN: VV1 0x5F V66 W1 VL128 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 REG2=XMM_SE():r:dq:f64 MEM0:r:dq:f64
PATTERN: VV1 0x5F V66 W1 VL128 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 REG2=XMM_SE():r:dq:f64 REG3=XMM_B():r:dq:f64
PATTERN: VV1 0x5F V66 W0 VL256 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f64 REG1=YMM_N():r:qq:f64 MEM0:r:qq:f64 REG2=YMM_SE():r:qq:f64
PATTERN: VV1 0x5F V66 W0 VL256 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f64 REG1=YMM_N():r:qq:f64 REG2=YMM_B():r:qq:f64 REG3=YMM_SE():r:qq:f64
PATTERN: VV1 0x5F V66 W1 VL256 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f64 REG1=YMM_N():r:qq:f64 REG2=YMM_SE():r:qq:f64 MEM0:r:qq:f64
PATTERN: VV1 0x5F V66 W1 VL256 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f64 REG1=YMM_N():r:qq:f64 REG2=YMM_SE():r:qq:f64 REG3=YMM_B():r:qq:f64
}
{
ICLASS: VFMADDPS
CPL: 3
CATEGORY: FMA4
ISA_SET: FMA4
EXTENSION: FMA4
ATTRIBUTES: MXCSR
PATTERN: VV1 0x68 V66 W0 VL128 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 MEM0:r:dq:f32 REG2=XMM_SE():r:dq:f32
PATTERN: VV1 0x68 V66 W0 VL128 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 REG2=XMM_B():r:dq:f32 REG3=XMM_SE():r:dq:f32
PATTERN: VV1 0x68 V66 W1 VL128 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 REG2=XMM_SE():r:dq:f32 MEM0:r:dq:f32
PATTERN: VV1 0x68 V66 W1 VL128 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 REG2=XMM_SE():r:dq:f32 REG3=XMM_B():r:dq:f32
PATTERN: VV1 0x68 V66 W0 VL256 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f32 REG1=YMM_N():r:qq:f32 MEM0:r:qq:f32 REG2=YMM_SE():r:qq:f32
PATTERN: VV1 0x68 V66 W0 VL256 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f32 REG1=YMM_N():r:qq:f32 REG2=YMM_B():r:qq:f32 REG3=YMM_SE():r:qq:f32
PATTERN: VV1 0x68 V66 W1 VL256 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f32 REG1=YMM_N():r:qq:f32 REG2=YMM_SE():r:qq:f32 MEM0:r:qq:f32
PATTERN: VV1 0x68 V66 W1 VL256 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f32 REG1=YMM_N():r:qq:f32 REG2=YMM_SE():r:qq:f32 REG3=YMM_B():r:qq:f32
}
{
ICLASS: VFMADDPD
CPL: 3
CATEGORY: FMA4
ISA_SET: FMA4
EXTENSION: FMA4
ATTRIBUTES: MXCSR
PATTERN: VV1 0x69 V66 W0 VL128 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 MEM0:r:dq:f64 REG2=XMM_SE():r:dq:f64
PATTERN: VV1 0x69 V66 W0 VL128 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 REG2=XMM_B():r:dq:f64 REG3=XMM_SE():r:dq:f64
PATTERN: VV1 0x69 V66 W1 VL128 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 REG2=XMM_SE():r:dq:f64 MEM0:r:dq:f64
PATTERN: VV1 0x69 V66 W1 VL128 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 REG2=XMM_SE():r:dq:f64 REG3=XMM_B():r:dq:f64
PATTERN: VV1 0x69 V66 W0 VL256 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f64 REG1=YMM_N():r:qq:f64 MEM0:r:qq:f64 REG2=YMM_SE():r:qq:f64
PATTERN: VV1 0x69 V66 W0 VL256 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f64 REG1=YMM_N():r:qq:f64 REG2=YMM_B():r:qq:f64 REG3=YMM_SE():r:qq:f64
PATTERN: VV1 0x69 V66 W1 VL256 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f64 REG1=YMM_N():r:qq:f64 REG2=YMM_SE():r:qq:f64 MEM0:r:qq:f64
PATTERN: VV1 0x69 V66 W1 VL256 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f64 REG1=YMM_N():r:qq:f64 REG2=YMM_SE():r:qq:f64 REG3=YMM_B():r:qq:f64
}
{
ICLASS: VFMADDSS
CPL: 3
CATEGORY: FMA4
ISA_SET: FMA4
EXTENSION: FMA4
ATTRIBUTES: SIMD_SCALAR MXCSR
PATTERN: VV1 0x6A V66 W0 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 MEM0:r:d:f32 REG2=XMM_SE():r:dq:f32
PATTERN: VV1 0x6A V66 W0 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 REG2=XMM_B():r:d:f32 REG3=XMM_SE():r:dq:f32
PATTERN: VV1 0x6A V66 W1 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 REG2=XMM_SE():r:dq:f32 MEM0:r:d:f32
PATTERN: VV1 0x6A V66 W1 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 REG2=XMM_SE():r:dq:f32 REG3=XMM_B():r:d:f32
}
{
ICLASS: VFMADDSD
CPL: 3
CATEGORY: FMA4
ISA_SET: FMA4
EXTENSION: FMA4
ATTRIBUTES: SIMD_SCALAR MXCSR
PATTERN: VV1 0x6B V66 W0 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 MEM0:r:q:f64 REG2=XMM_SE():r:dq:f64
PATTERN: VV1 0x6B V66 W0 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 REG2=XMM_B():r:q:f64 REG3=XMM_SE():r:dq:f64
PATTERN: VV1 0x6B V66 W1 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 REG2=XMM_SE():r:dq:f64 MEM0:r:q:f64
PATTERN: VV1 0x6B V66 W1 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 REG2=XMM_SE():r:dq:f64 REG3=XMM_B():r:q:f64
}
{
ICLASS: VFMSUBPS
CPL: 3
CATEGORY: FMA4
ISA_SET: FMA4
EXTENSION: FMA4
ATTRIBUTES: MXCSR
PATTERN: VV1 0x6C V66 W0 VL128 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 MEM0:r:dq:f32 REG2=XMM_SE():r:dq:f32
PATTERN: VV1 0x6C V66 W0 VL128 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 REG2=XMM_B():r:dq:f32 REG3=XMM_SE():r:dq:f32
PATTERN: VV1 0x6C V66 W1 VL128 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 REG2=XMM_SE():r:dq:f32 MEM0:r:dq:f32
PATTERN: VV1 0x6C V66 W1 VL128 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 REG2=XMM_SE():r:dq:f32 REG3=XMM_B():r:dq:f32
PATTERN: VV1 0x6C V66 W0 VL256 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f32 REG1=YMM_N():r:qq:f32 MEM0:r:qq:f32 REG2=YMM_SE():r:qq:f32
PATTERN: VV1 0x6C V66 W0 VL256 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f32 REG1=YMM_N():r:qq:f32 REG2=YMM_B():r:qq:f32 REG3=YMM_SE():r:qq:f32
PATTERN: VV1 0x6C V66 W1 VL256 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f32 REG1=YMM_N():r:qq:f32 REG2=YMM_SE():r:qq:f32 MEM0:r:qq:f32
PATTERN: VV1 0x6C V66 W1 VL256 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f32 REG1=YMM_N():r:qq:f32 REG2=YMM_SE():r:qq:f32 REG3=YMM_B():r:qq:f32
}
{
ICLASS: VFMSUBPD
CPL: 3
CATEGORY: FMA4
ISA_SET: FMA4
EXTENSION: FMA4
ATTRIBUTES: MXCSR
PATTERN: VV1 0x6D V66 W0 VL128 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 MEM0:r:dq:f64 REG2=XMM_SE():r:dq:f64
PATTERN: VV1 0x6D V66 W0 VL128 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 REG2=XMM_B():r:dq:f64 REG3=XMM_SE():r:dq:f64
PATTERN: VV1 0x6D V66 W1 VL128 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 REG2=XMM_SE():r:dq:f64 MEM0:r:dq:f64
PATTERN: VV1 0x6D V66 W1 VL128 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 REG2=XMM_SE():r:dq:f64 REG3=XMM_B():r:dq:f64
PATTERN: VV1 0x6D V66 W0 VL256 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f64 REG1=YMM_N():r:qq:f64 MEM0:r:qq:f64 REG2=YMM_SE():r:qq:f64
PATTERN: VV1 0x6D V66 W0 VL256 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f64 REG1=YMM_N():r:qq:f64 REG2=YMM_B():r:qq:f64 REG3=YMM_SE():r:qq:f64
PATTERN: VV1 0x6D V66 W1 VL256 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f64 REG1=YMM_N():r:qq:f64 REG2=YMM_SE():r:qq:f64 MEM0:r:qq:f64
PATTERN: VV1 0x6D V66 W1 VL256 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f64 REG1=YMM_N():r:qq:f64 REG2=YMM_SE():r:qq:f64 REG3=YMM_B():r:qq:f64
}
{
ICLASS: VFMSUBSS
CPL: 3
CATEGORY: FMA4
ISA_SET: FMA4
EXTENSION: FMA4
ATTRIBUTES: SIMD_SCALAR MXCSR
PATTERN: VV1 0x6E V66 W0 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 MEM0:r:d:f32 REG2=XMM_SE():r:dq:f32
PATTERN: VV1 0x6E V66 W0 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 REG2=XMM_B():r:d:f32 REG3=XMM_SE():r:dq:f32
PATTERN: VV1 0x6E V66 W1 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 REG2=XMM_SE():r:dq:f32 MEM0:r:d:f32
PATTERN: VV1 0x6E V66 W1 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 REG2=XMM_SE():r:dq:f32 REG3=XMM_B():r:d:f32
}
{
ICLASS: VFMSUBSD
CPL: 3
CATEGORY: FMA4
ISA_SET: FMA4
EXTENSION: FMA4
ATTRIBUTES: SIMD_SCALAR MXCSR
PATTERN: VV1 0x6F V66 W0 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 MEM0:r:q:f64 REG2=XMM_SE():r:dq:f64
PATTERN: VV1 0x6F V66 W0 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 REG2=XMM_B():r:q:f64 REG3=XMM_SE():r:dq:f64
PATTERN: VV1 0x6F V66 W1 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 REG2=XMM_SE():r:dq:f64 MEM0:r:q:f64
PATTERN: VV1 0x6F V66 W1 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 REG2=XMM_SE():r:dq:f64 REG3=XMM_B():r:q:f64
}
{
ICLASS: VFNMADDPS
CPL: 3
CATEGORY: FMA4
ISA_SET: FMA4
EXTENSION: FMA4
ATTRIBUTES: MXCSR
PATTERN: VV1 0x78 V66 W0 VL128 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 MEM0:r:dq:f32 REG2=XMM_SE():r:dq:f32
PATTERN: VV1 0x78 V66 W0 VL128 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 REG2=XMM_B():r:dq:f32 REG3=XMM_SE():r:dq:f32
PATTERN: VV1 0x78 V66 W1 VL128 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 REG2=XMM_SE():r:dq:f32 MEM0:r:dq:f32
PATTERN: VV1 0x78 V66 W1 VL128 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 REG2=XMM_SE():r:dq:f32 REG3=XMM_B():r:dq:f32
PATTERN: VV1 0x78 V66 W0 VL256 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f32 REG1=YMM_N():r:qq:f32 MEM0:r:qq:f32 REG2=YMM_SE():r:qq:f32
PATTERN: VV1 0x78 V66 W0 VL256 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f32 REG1=YMM_N():r:qq:f32 REG2=YMM_B():r:qq:f32 REG3=YMM_SE():r:qq:f32
PATTERN: VV1 0x78 V66 W1 VL256 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f32 REG1=YMM_N():r:qq:f32 REG2=YMM_SE():r:qq:f32 MEM0:r:qq:f32
PATTERN: VV1 0x78 V66 W1 VL256 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f32 REG1=YMM_N():r:qq:f32 REG2=YMM_SE():r:qq:f32 REG3=YMM_B():r:qq:f32
}
{
ICLASS: VFNMADDPD
CPL: 3
CATEGORY: FMA4
ISA_SET: FMA4
EXTENSION: FMA4
ATTRIBUTES: MXCSR
PATTERN: VV1 0x79 V66 W0 VL128 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 MEM0:r:dq:f64 REG2=XMM_SE():r:dq:f64
PATTERN: VV1 0x79 V66 W0 VL128 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 REG2=XMM_B():r:dq:f64 REG3=XMM_SE():r:dq:f64
PATTERN: VV1 0x79 V66 W1 VL128 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 REG2=XMM_SE():r:dq:f64 MEM0:r:dq:f64
PATTERN: VV1 0x79 V66 W1 VL128 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 REG2=XMM_SE():r:dq:f64 REG3=XMM_B():r:dq:f64
PATTERN: VV1 0x79 V66 W0 VL256 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f64 REG1=YMM_N():r:qq:f64 MEM0:r:qq:f64 REG2=YMM_SE():r:qq:f64
PATTERN: VV1 0x79 V66 W0 VL256 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f64 REG1=YMM_N():r:qq:f64 REG2=YMM_B():r:qq:f64 REG3=YMM_SE():r:qq:f64
PATTERN: VV1 0x79 V66 W1 VL256 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f64 REG1=YMM_N():r:qq:f64 REG2=YMM_SE():r:qq:f64 MEM0:r:qq:f64
PATTERN: VV1 0x79 V66 W1 VL256 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f64 REG1=YMM_N():r:qq:f64 REG2=YMM_SE():r:qq:f64 REG3=YMM_B():r:qq:f64
}
{
ICLASS: VFNMADDSS
CPL: 3
CATEGORY: FMA4
ISA_SET: FMA4
EXTENSION: FMA4
ATTRIBUTES: SIMD_SCALAR MXCSR
PATTERN: VV1 0x7A V66 W0 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 MEM0:r:d:f32 REG2=XMM_SE():r:dq:f32
PATTERN: VV1 0x7A V66 W0 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 REG2=XMM_B():r:d:f32 REG3=XMM_SE():r:dq:f32
PATTERN: VV1 0x7A V66 W1 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 REG2=XMM_SE():r:dq:f32 MEM0:r:d:f32
PATTERN: VV1 0x7A V66 W1 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 REG2=XMM_SE():r:dq:f32 REG3=XMM_B():r:d:f32
}
{
ICLASS: VFNMADDSD
CPL: 3
CATEGORY: FMA4
ISA_SET: FMA4
EXTENSION: FMA4
ATTRIBUTES: SIMD_SCALAR MXCSR
PATTERN: VV1 0x7B V66 W0 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 MEM0:r:q:f64 REG2=XMM_SE():r:dq:f64
PATTERN: VV1 0x7B V66 W0 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 REG2=XMM_B():r:q:f64 REG3=XMM_SE():r:dq:f64
PATTERN: VV1 0x7B V66 W1 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 REG2=XMM_SE():r:dq:f64 MEM0:r:q:f64
PATTERN: VV1 0x7B V66 W1 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 REG2=XMM_SE():r:dq:f64 REG3=XMM_B():r:q:f64
}
{
ICLASS: VFNMSUBPS
CPL: 3
CATEGORY: FMA4
ISA_SET: FMA4
EXTENSION: FMA4
ATTRIBUTES: MXCSR
PATTERN: VV1 0x7C V66 W0 VL128 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 MEM0:r:dq:f32 REG2=XMM_SE():r:dq:f32
PATTERN: VV1 0x7C V66 W0 VL128 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 REG2=XMM_B():r:dq:f32 REG3=XMM_SE():r:dq:f32
PATTERN: VV1 0x7C V66 W1 VL128 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 REG2=XMM_SE():r:dq:f32 MEM0:r:dq:f32
PATTERN: VV1 0x7C V66 W1 VL128 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 REG2=XMM_SE():r:dq:f32 REG3=XMM_B():r:dq:f32
PATTERN: VV1 0x7C V66 W0 VL256 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f32 REG1=YMM_N():r:qq:f32 MEM0:r:qq:f32 REG2=YMM_SE():r:qq:f32
PATTERN: VV1 0x7C V66 W0 VL256 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f32 REG1=YMM_N():r:qq:f32 REG2=YMM_B():r:qq:f32 REG3=YMM_SE():r:qq:f32
PATTERN: VV1 0x7C V66 W1 VL256 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f32 REG1=YMM_N():r:qq:f32 REG2=YMM_SE():r:qq:f32 MEM0:r:qq:f32
PATTERN: VV1 0x7C V66 W1 VL256 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f32 REG1=YMM_N():r:qq:f32 REG2=YMM_SE():r:qq:f32 REG3=YMM_B():r:qq:f32
}
{
ICLASS: VFNMSUBPD
CPL: 3
CATEGORY: FMA4
ISA_SET: FMA4
EXTENSION: FMA4
ATTRIBUTES: MXCSR
PATTERN: VV1 0x7D V66 W0 VL128 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 MEM0:r:dq:f64 REG2=XMM_SE():r:dq:f64
PATTERN: VV1 0x7D V66 W0 VL128 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 REG2=XMM_B():r:dq:f64 REG3=XMM_SE():r:dq:f64
PATTERN: VV1 0x7D V66 W1 VL128 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 REG2=XMM_SE():r:dq:f64 MEM0:r:dq:f64
PATTERN: VV1 0x7D V66 W1 VL128 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 REG2=XMM_SE():r:dq:f64 REG3=XMM_B():r:dq:f64
PATTERN: VV1 0x7D V66 W0 VL256 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f64 REG1=YMM_N():r:qq:f64 MEM0:r:qq:f64 REG2=YMM_SE():r:qq:f64
PATTERN: VV1 0x7D V66 W0 VL256 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f64 REG1=YMM_N():r:qq:f64 REG2=YMM_B():r:qq:f64 REG3=YMM_SE():r:qq:f64
PATTERN: VV1 0x7D V66 W1 VL256 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f64 REG1=YMM_N():r:qq:f64 REG2=YMM_SE():r:qq:f64 MEM0:r:qq:f64
PATTERN: VV1 0x7D V66 W1 VL256 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=YMM_R():w:qq:f64 REG1=YMM_N():r:qq:f64 REG2=YMM_SE():r:qq:f64 REG3=YMM_B():r:qq:f64
}
{
ICLASS: VFNMSUBSS
CPL: 3
CATEGORY: FMA4
ISA_SET: FMA4
EXTENSION: FMA4
ATTRIBUTES: SIMD_SCALAR MXCSR
PATTERN: VV1 0x7E V66 W0 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 MEM0:r:d:f32 REG2=XMM_SE():r:dq:f32
PATTERN: VV1 0x7E V66 W0 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 REG2=XMM_B():r:d:f32 REG3=XMM_SE():r:dq:f32
PATTERN: VV1 0x7E V66 W1 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 REG2=XMM_SE():r:dq:f32 MEM0:r:d:f32
PATTERN: VV1 0x7E V66 W1 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 REG2=XMM_SE():r:dq:f32 REG3=XMM_B():r:d:f32
}
{
ICLASS: VFNMSUBSD
CPL: 3
CATEGORY: FMA4
ISA_SET: FMA4
EXTENSION: FMA4
ATTRIBUTES: SIMD_SCALAR MXCSR
PATTERN: VV1 0x7F V66 W0 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 MEM0:r:q:f64 REG2=XMM_SE():r:dq:f64
PATTERN: VV1 0x7F V66 W0 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 REG2=XMM_B():r:q:f64 REG3=XMM_SE():r:dq:f64
PATTERN: VV1 0x7F V66 W1 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 REG2=XMM_SE():r:dq:f64 MEM0:r:q:f64
PATTERN: VV1 0x7F V66 W1 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS: REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 REG2=XMM_SE():r:dq:f64 REG3=XMM_B():r:q:f64
}
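
One regularity worth noting in the FMA4 records above (my summary of the PATTERN/OPERANDS pairs, not taken from separate documentation): VEX.W selects which source slot the ModRM-specified operand occupies. With W0 the rm/memory operand is the second source and the register selected by imm8[7:4] (XMM_SE/YMM_SE) is the third; with W1 the two swap. A tiny sketch in that notation:

def fma4_sources(w, vvvv_reg, rm_or_mem, se_reg):
    # Returns (src1, src2, src3) in instruction operand order, per the
    # OPERANDS lines above; se_reg is the register encoded in imm8[7:4].
    return (vvvv_reg, rm_or_mem, se_reg) if w == 0 else (vvvv_reg, se_reg, rm_or_mem)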


@@ -0,0 +1,97 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
AVX_INSTRUCTIONS()::
{
ICLASS : VPERMIL2PS
CPL : 3
CATEGORY : XOP
EXTENSION : XOP
ISA_SET : XOP
# 128b W0
PATTERN : VV1 0x48 VL128 V66 V0F3A W0 MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS : REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 MEM0:r:dq:f32 REG2=XMM_SE():r:dq:f32 IMM0:r:b
PATTERN : VV1 0x48 VL128 V66 V0F3A W0 MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS : REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 REG2=XMM_B():r:dq:f32 REG3=XMM_SE():r:dq:f32 IMM0:r:b
# 256b W0
PATTERN : VV1 0x48 VL256 V66 V0F3A W0 MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS : REG0=YMM_R():w:qq:f32 REG1=YMM_N():r:qq:f32 MEM0:r:qq:f32 REG2=YMM_SE():r:qq:f32 IMM0:r:b
PATTERN : VV1 0x48 VL256 V66 V0F3A W0 MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS : REG0=YMM_R():w:qq:f32 REG1=YMM_N():r:qq:f32 REG2=YMM_B():r:qq:f32 REG3=YMM_SE():r:qq:f32 IMM0:r:b
# 128b W1
PATTERN : VV1 0x48 VL128 V66 V0F3A W1 MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS : REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 REG2=XMM_SE():r:dq:f32 MEM0:r:dq:f32 IMM0:r:b
PATTERN : VV1 0x48 VL128 V66 V0F3A W1 MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS : REG0=XMM_R():w:dq:f32 REG1=XMM_N():r:dq:f32 REG2=XMM_SE():r:dq:f32 REG3=XMM_B():r:dq:f32 IMM0:r:b
# 256b W1
PATTERN : VV1 0x48 VL256 V66 V0F3A W1 MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS : REG0=YMM_R():w:qq:f32 REG1=YMM_N():r:qq:f32 REG2=YMM_SE():r:qq:f32 MEM0:r:qq:f32 IMM0:r:b
PATTERN : VV1 0x48 VL256 V66 V0F3A W1 MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS : REG0=YMM_R():w:qq:f32 REG1=YMM_N():r:qq:f32 REG2=YMM_SE():r:qq:f32 REG3=YMM_B():r:qq:f32 IMM0:r:b
}
{
ICLASS : VPERMIL2PD
CPL : 3
CATEGORY : XOP
EXTENSION : XOP
ISA_SET : XOP
# 128b W0
PATTERN : VV1 0x49 VL128 V66 V0F3A W0 MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS : REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 MEM0:r:dq:f64 REG2=XMM_SE():r:dq:f64 IMM0:r:b
PATTERN : VV1 0x49 VL128 V66 V0F3A W0 MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS : REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 REG2=XMM_B():r:dq:f64 REG3=XMM_SE():r:dq:f64 IMM0:r:b
# 256b W0
PATTERN : VV1 0x49 VL256 V66 V0F3A W0 MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS : REG0=YMM_R():w:qq:f64 REG1=YMM_N():r:qq:f64 MEM0:r:qq:f64 REG2=YMM_SE():r:qq:f64 IMM0:r:b
PATTERN : VV1 0x49 VL256 V66 V0F3A W0 MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS : REG0=YMM_R():w:qq:f64 REG1=YMM_N():r:qq:f64 REG2=YMM_B():r:qq:f64 REG3=YMM_SE():r:qq:f64 IMM0:r:b
# 128b W1
PATTERN : VV1 0x49 VL128 V66 V0F3A W1 MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS : REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 REG2=XMM_SE():r:dq:f64 MEM0:r:dq:f64 IMM0:r:b
PATTERN : VV1 0x49 VL128 V66 V0F3A W1 MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS : REG0=XMM_R():w:dq:f64 REG1=XMM_N():r:dq:f64 REG2=XMM_SE():r:dq:f64 REG3=XMM_B():r:dq:f64 IMM0:r:b
# 256b W1
PATTERN : VV1 0x49 VL256 V66 V0F3A W1 MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() SE_IMM8()
OPERANDS : REG0=YMM_R():w:qq:f64 REG1=YMM_N():r:qq:f64 REG2=YMM_SE():r:qq:f64 MEM0:r:qq:f64 IMM0:r:b
PATTERN : VV1 0x49 VL256 V66 V0F3A W1 MOD[0b11] MOD=3 REG[rrr] RM[nnn] SE_IMM8()
OPERANDS : REG0=YMM_R():w:qq:f64 REG1=YMM_N():r:qq:f64 REG2=YMM_SE():r:qq:f64 REG3=YMM_B():r:qq:f64 IMM0:r:b
}


@@ -0,0 +1,24 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
AVX_SPLITTER()::
VEXVALID=3 XOP_INSTRUCTIONS() |
EVEX_SPLITTER()::
VEXVALID=3 XOP_INSTRUCTIONS() |


@@ -0,0 +1,62 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
######## XOP #################################
SEQUENCE XOP_ENC_BIND
XOP_TYPE_ENC_BIND
VEX_REXR_ENC_BIND
XOP_REXXB_ENC_BIND
XOP_MAP_ENC_BIND
VEX_REG_ENC_BIND
VEX_ESCVL_ENC_BIND
SEQUENCE XOP_ENC_EMIT
XOP_TYPE_ENC_EMIT
VEX_REXR_ENC_EMIT
XOP_REXXB_ENC_EMIT
XOP_MAP_ENC_EMIT
VEX_REG_ENC_EMIT
VEX_ESCVL_ENC_EMIT
##############################################
VEXED_REX()::
VEXVALID=3 -> XOP_ENC()
XOP_TYPE_ENC()::
XMAP8 -> 0x8F
XMAP9 -> 0x8F
XMAPA -> 0x8F
otherwise -> error
XOP_MAP_ENC()::
XMAP8 REXW[w] -> 0b0_1000 w
XMAP9 REXW[w] -> 0b0_1001 w
XMAPA REXW[w] -> 0b0_1010 w
otherwise -> error
XOP_REXXB_ENC()::
mode64 REXX=0 REXB=0 -> 0b11
mode64 REXX=1 REXB=0 -> 0b01
mode64 REXX=0 REXB=1 -> 0b10
mode64 REXX=1 REXB=1 -> 0b00
not64 REXX=0 REXB=0 -> 0b11
not64 REXX=1 REXB=0 -> error
not64 REXX=0 REXB=1 -> error
not64 REXX=1 REXB=1 -> error
otherwise -> nothing
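
For orientation (a sketch under my own naming, not XED's encoder code): the bind/emit sequences above assemble the three-byte XOP prefix in order. XOP_TYPE_ENC emits the 0x8F escape; VEX_REXR_ENC, XOP_REXXB_ENC, and XOP_MAP_ENC then contribute 1+2+5 bits, the inverted R, X, B bits plus the 5-bit map selector (8, 9, or 0xA), which together form the next byte; the remaining nonterminals fill in W and the rest of the register-specifier, length, and prefix bits. The inversion is visible in the table: REXX=0 REXB=0 emits 0b11.

def xop_prefix_byte1(rexr, rexx, rexb, xmap):
    # Second XOP prefix byte: ~R ~X ~B in bits 7..5, map id (8/9/0xA) in bits 4..0.
    inv = lambda b: (~b) & 1
    return (inv(rexr) << 7) | (inv(rexx) << 6) | (inv(rexb) << 5) | (xmap & 0x1F)

assert xop_prefix_byte1(0, 0, 0, 8) == 0xE8  # follows the 0x8F escape byte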

File diff suppressed because it is too large.


@@ -0,0 +1,20 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
XED_ISA_SET_XOP: n/a
XED_ISA_SET_TBM: n/a
XED_ISA_SET_FMA4: n/a


@@ -0,0 +1,30 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
state:xop-state-bits.txt
dec-patterns:amd-xop-dec.txt
dec-instructions:amd-xop-isa.txt
enc-instructions:amd-xop-isa.txt
dec-instructions:amd-fma4-isa.txt
enc-instructions:amd-fma4-isa.txt
dec-instructions:amd-vpermil2-isa.txt
enc-instructions:amd-vpermil2-isa.txt
enc-patterns:amd-xop-enc.txt
cpuid : cpuid.xed.txt


@@ -0,0 +1,23 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
XMAP8 MAP=8
XMAP9 MAP=9
XMAPA MAP=10
XOPV VEXVALID=3


@@ -0,0 +1,86 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
AVX_INSTRUCTIONS()::
{
ICLASS : VAESKEYGENASSIST
EXCEPTIONS: avx-type-4
CPL : 3
CATEGORY : AES
EXTENSION : AVXAES
PATTERN : VV1 0xDF VL128 V66 V0F3A NOVSR MOD[0b11] MOD=3 REG[rrr] RM[nnn] UIMM8()
OPERANDS : REG0=XMM_R():w:dq REG1=XMM_B():r:dq IMM0:r:b
PATTERN : VV1 0xDF VL128 V66 V0F3A NOVSR MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() UIMM8()
OPERANDS : REG0=XMM_R():w:dq MEM0:r:dq IMM0:r:b
}
{
ICLASS : VAESENC
EXCEPTIONS: avx-type-4
CPL : 3
CATEGORY : AES
EXTENSION : AVXAES
PATTERN : VV1 0xDC V66 V0F38 MOD[0b11] MOD=3 REG[rrr] RM[nnn] VL128
OPERANDS : REG0=XMM_R():w:dq REG1=XMM_N():r:dq REG2=XMM_B():r:dq
PATTERN : VV1 0xDC V66 V0F38 MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() VL128
OPERANDS : REG0=XMM_R():w:dq REG1=XMM_N():r:dq MEM0:r:dq
}
{
ICLASS : VAESENCLAST
EXCEPTIONS: avx-type-4
CPL : 3
CATEGORY : AES
EXTENSION : AVXAES
PATTERN : VV1 0xDD V66 V0F38 MOD[0b11] MOD=3 REG[rrr] RM[nnn] VL128
OPERANDS : REG0=XMM_R():w:dq REG1=XMM_N():r:dq REG2=XMM_B():r:dq
PATTERN : VV1 0xDD V66 V0F38 MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() VL128
OPERANDS : REG0=XMM_R():w:dq REG1=XMM_N():r:dq MEM0:r:dq
}
{
ICLASS : VAESDEC
EXCEPTIONS: avx-type-4
CPL : 3
CATEGORY : AES
EXTENSION : AVXAES
PATTERN : VV1 0xDE V66 V0F38 MOD[0b11] MOD=3 REG[rrr] RM[nnn] VL128
OPERANDS : REG0=XMM_R():w:dq REG1=XMM_N():r:dq REG2=XMM_B():r:dq
PATTERN : VV1 0xDE V66 V0F38 MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() VL128
OPERANDS : REG0=XMM_R():w:dq REG1=XMM_N():r:dq MEM0:r:dq
}
{
ICLASS : VAESDECLAST
EXCEPTIONS: avx-type-4
CPL : 3
CATEGORY : AES
EXTENSION : AVXAES
PATTERN : VV1 0xDF V66 V0F38 MOD[0b11] MOD=3 REG[rrr] RM[nnn] VL128
OPERANDS : REG0=XMM_R():w:dq REG1=XMM_N():r:dq REG2=XMM_B():r:dq
PATTERN : VV1 0xDF V66 V0F38 MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() VL128
OPERANDS : REG0=XMM_R():w:dq REG1=XMM_N():r:dq MEM0:r:dq
}
{
ICLASS : VAESIMC
EXCEPTIONS: avx-type-4
CPL : 3
CATEGORY : AES
EXTENSION : AVXAES
PATTERN : VV1 0xDB VL128 V66 V0F38 NOVSR MOD[0b11] MOD=3 REG[rrr] RM[nnn]
OPERANDS : REG0=XMM_R():w:dq REG1=XMM_B():r:dq
PATTERN : VV1 0xDB VL128 V66 V0F38 NOVSR MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM()
OPERANDS : REG0=XMM_R():w:dq MEM0:r:dq
}


@@ -0,0 +1,18 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
SANDYBRIDGE: ALL_OF(WESTMERE) AVX AVXAES XSAVE XSAVEOPT
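
The chip-model line above says a chip's ISA-set list is its predecessor's list (ALL_OF(WESTMERE)) plus the newly added sets. A sketch of how such lines expand; the WESTMERE entry below is an illustrative subset only, since XED's real chip database lists every chip and set:

CHIP_DB = {
    "WESTMERE": ["I86", "SSE42", "AES", "PCLMULQDQ"],   # illustrative subset
    "SANDYBRIDGE": ["ALL_OF(WESTMERE)", "AVX", "AVXAES", "XSAVE", "XSAVEOPT"],
}

def isa_sets(chip):
    out = []
    for token in CHIP_DB[chip]:
        if token.startswith("ALL_OF(") and token.endswith(")"):
            out.extend(isa_sets(token[7:-1]))
        else:
            out.append(token)
    return out

isa_sets("SANDYBRIDGE")  # Westmere's sets plus AVX, AVXAES, XSAVE, XSAVEOPT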


@@ -0,0 +1,28 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
# ==== ==== ========= ========== ==============
# default
# name type bit-width visibility behavior
# ==== ==== ========= ========== ==============
VEXDEST3 SCALAR xed_bits_t 1 SUPPRESSED NOPRINT INTERNAL DO EO
VEXDEST210 SCALAR xed_bits_t 3 SUPPRESSED NOPRINT INTERNAL DO EO
VL SCALAR xed_bits_t 2 SUPPRESSED NOPRINT INTERNAL DO EO
VEX_PREFIX SCALAR xed_bits_t 2 SUPPRESSED NOPRINT INTERNAL DO EO # VEX.PP
VEX_C4 SCALAR xed_bits_t 1 SUPPRESSED NOPRINT INTERNAL DO EO # ENCONLY
BCAST SCALAR xed_bits_t 5 SUPPRESSED NOPRINT INTERNAL DO EO

File diff suppressed because it is too large.


@@ -0,0 +1,19 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
SE_IMM8()::
true ESRC[ssss] UIMM0[dddd] -> ssss_dddd

19
datafiles/avx/avx-imm.txt Normal file
View File


@@ -0,0 +1,19 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
SE_IMM8()::
UIMM0[ssss_uuuu] | IMM_WIDTH=8 ESRC=ssss
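This decode rule captures an 8-bit immediate and copies its upper nibble into ESRC, which the XMM_SE()/YMM_SE() lookups later turn into an extra source register (the register-in-immediate style used by instructions such as VBLENDVPS/VBLENDVPD/VPBLENDVB). A minimal sketch of the split, not XED code:

def split_se_imm8(imm8: int):
    esrc = (imm8 >> 4) & 0xF  # ssss: selects the extra XMM/YMM source
    uimm = imm8 & 0xF         # uuuu: remaining immediate bits
    return esrc, uimm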

View File


@@ -0,0 +1,24 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
#AVX_SPLITTER()::
#VEXVALID=0 -> INSTRUCTIONS()
#VEXVALID=1 -> AVX_INSTRUCTIONS()

View File


@@ -0,0 +1,20 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
AVX_SPLITTER()::
VEXVALID=0 INSTRUCTIONS() |
VEXVALID=1 AVX_INSTRUCTIONS() |

4402
datafiles/avx/avx-isa.txt Normal file

File diff suppressed because it is too large

View File


@@ -0,0 +1,54 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
AVX_INSTRUCTIONS()::
{
ICLASS : VMOVNTDQ
EXCEPTIONS: avx-type-1
CPL : 3
CATEGORY : DATAXFER
EXTENSION : AVX
ATTRIBUTES : REQUIRES_ALIGNMENT NOTSX
PATTERN : VV1 0xE7 V66 V0F VL256 NOVSR MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM()
OPERANDS : MEM0:w:qq:i32 REG0=YMM_R():r:qq:i32
}
{
ICLASS : VMOVNTPD
EXCEPTIONS: avx-type-1
CPL : 3
CATEGORY : DATAXFER
EXTENSION : AVX
ATTRIBUTES : REQUIRES_ALIGNMENT NOTSX
PATTERN : VV1 0x2B V66 V0F VL256 NOVSR MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM()
OPERANDS : MEM0:w:qq:f64 REG0=YMM_R():r:qq:f64
}
{
ICLASS : VMOVNTPS
EXCEPTIONS: avx-type-1
CPL : 3
CATEGORY : DATAXFER
EXTENSION : AVX
ATTRIBUTES : REQUIRES_ALIGNMENT NOTSX
PATTERN : VV1 0x2B VNP V0F VL256 NOVSR MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM()
OPERANDS : MEM0:w:qq:f32 REG0=YMM_R():r:qq:f32
}

View File


@@ -0,0 +1,36 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
#
#code XTYPE width16 width32 width64 (if only one width is presented, it is for all widths)
#
qq i32 32
yub u8 32
yuw u16 32
yud u32 32
yuq u64 32
y128 u128 32
yb i8 32
yw i16 32
yd i32 32
yq i64 32
yps f32 32
ypd f64 32

View File


@@ -0,0 +1,29 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
AVX_INSTRUCTIONS()::
{
ICLASS : VPCLMULQDQ
EXCEPTIONS: avx-type-4
CPL : 3
CATEGORY : AVX
EXTENSION : AVX
PATTERN : VV1 0x44 V66 V0F3A MOD[0b11] MOD=3 REG[rrr] RM[nnn] VL128 UIMM8()
OPERANDS : REG0=XMM_R():w:dq:u128 REG1=XMM_N():r:dq:u64 REG2=XMM_B():r:dq:u64 IMM0:r:b
PATTERN : VV1 0x44 V66 V0F3A MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() VL128 UIMM8()
OPERANDS : REG0=XMM_R():w:dq:u128 REG1=XMM_N():r:dq:u64 MEM0:r:dq:u64 IMM0:r:b
}

View File


@@ -0,0 +1,18 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
32 ymmword y

View File


@@ -0,0 +1,235 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
xed_reg_enum_t XMM_SE()::
mode16 | OUTREG=XMM_SE32()
mode32 | OUTREG=XMM_SE32()
mode64 | OUTREG=XMM_SE64()
xed_reg_enum_t XMM_SE64()::
ESRC=0x0 | OUTREG=XED_REG_XMM0
ESRC=0x1 | OUTREG=XED_REG_XMM1
ESRC=0x2 | OUTREG=XED_REG_XMM2
ESRC=0x3 | OUTREG=XED_REG_XMM3
ESRC=0x4 | OUTREG=XED_REG_XMM4
ESRC=0x5 | OUTREG=XED_REG_XMM5
ESRC=0x6 | OUTREG=XED_REG_XMM6
ESRC=0x7 | OUTREG=XED_REG_XMM7
ESRC=0x8 | OUTREG=XED_REG_XMM8
ESRC=0x9 | OUTREG=XED_REG_XMM9
ESRC=0xA | OUTREG=XED_REG_XMM10
ESRC=0xB | OUTREG=XED_REG_XMM11
ESRC=0xC | OUTREG=XED_REG_XMM12
ESRC=0xD | OUTREG=XED_REG_XMM13
ESRC=0xE | OUTREG=XED_REG_XMM14
ESRC=0xF | OUTREG=XED_REG_XMM15
xed_reg_enum_t XMM_SE32()::
ESRC=0 | OUTREG=XED_REG_XMM0 enc
ESRC=1 | OUTREG=XED_REG_XMM1 enc
ESRC=2 | OUTREG=XED_REG_XMM2 enc
ESRC=3 | OUTREG=XED_REG_XMM3 enc
ESRC=4 | OUTREG=XED_REG_XMM4 enc
ESRC=5 | OUTREG=XED_REG_XMM5 enc
ESRC=6 | OUTREG=XED_REG_XMM6 enc
ESRC=7 | OUTREG=XED_REG_XMM7 enc
# ignoring the high bit in non64b modes. Really just 0...7
ESRC=0x8 | OUTREG=XED_REG_XMM0
ESRC=0x9 | OUTREG=XED_REG_XMM1
ESRC=0xA | OUTREG=XED_REG_XMM2
ESRC=0xB | OUTREG=XED_REG_XMM3
ESRC=0xC | OUTREG=XED_REG_XMM4
ESRC=0xD | OUTREG=XED_REG_XMM5
ESRC=0xE | OUTREG=XED_REG_XMM6
ESRC=0xF | OUTREG=XED_REG_XMM7
xed_reg_enum_t YMM_SE()::
mode16 | OUTREG=YMM_SE32()
mode32 | OUTREG=YMM_SE32()
mode64 | OUTREG=YMM_SE64()
xed_reg_enum_t YMM_SE64()::
ESRC=0x0 | OUTREG=XED_REG_YMM0
ESRC=0x1 | OUTREG=XED_REG_YMM1
ESRC=0x2 | OUTREG=XED_REG_YMM2
ESRC=0x3 | OUTREG=XED_REG_YMM3
ESRC=0x4 | OUTREG=XED_REG_YMM4
ESRC=0x5 | OUTREG=XED_REG_YMM5
ESRC=0x6 | OUTREG=XED_REG_YMM6
ESRC=0x7 | OUTREG=XED_REG_YMM7
ESRC=0x8 | OUTREG=XED_REG_YMM8
ESRC=0x9 | OUTREG=XED_REG_YMM9
ESRC=0xA | OUTREG=XED_REG_YMM10
ESRC=0xB | OUTREG=XED_REG_YMM11
ESRC=0xC | OUTREG=XED_REG_YMM12
ESRC=0xD | OUTREG=XED_REG_YMM13
ESRC=0xE | OUTREG=XED_REG_YMM14
ESRC=0xF | OUTREG=XED_REG_YMM15
xed_reg_enum_t YMM_SE32()::
ESRC=0 | OUTREG=XED_REG_YMM0 enc
ESRC=1 | OUTREG=XED_REG_YMM1 enc
ESRC=2 | OUTREG=XED_REG_YMM2 enc
ESRC=3 | OUTREG=XED_REG_YMM3 enc
ESRC=4 | OUTREG=XED_REG_YMM4 enc
ESRC=5 | OUTREG=XED_REG_YMM5 enc
ESRC=6 | OUTREG=XED_REG_YMM6 enc
ESRC=7 | OUTREG=XED_REG_YMM7 enc
# ignoring the high bit in non64b modes. Really just 0...7
ESRC=0x8 | OUTREG=XED_REG_YMM0
ESRC=0x9 | OUTREG=XED_REG_YMM1
ESRC=0xA | OUTREG=XED_REG_YMM2
ESRC=0xB | OUTREG=XED_REG_YMM3
ESRC=0xC | OUTREG=XED_REG_YMM4
ESRC=0xD | OUTREG=XED_REG_YMM5
ESRC=0xE | OUTREG=XED_REG_YMM6
ESRC=0xF | OUTREG=XED_REG_YMM7
xed_reg_enum_t XMM_N()::
mode16 | OUTREG=XMM_N_32():
mode32 | OUTREG=XMM_N_32():
mode64 | OUTREG=XMM_N_64():
xed_reg_enum_t XMM_N_32()::
VEXDEST210=7 | OUTREG=XED_REG_XMM0
VEXDEST210=6 | OUTREG=XED_REG_XMM1
VEXDEST210=5 | OUTREG=XED_REG_XMM2
VEXDEST210=4 | OUTREG=XED_REG_XMM3
VEXDEST210=3 | OUTREG=XED_REG_XMM4
VEXDEST210=2 | OUTREG=XED_REG_XMM5
VEXDEST210=1 | OUTREG=XED_REG_XMM6
VEXDEST210=0 | OUTREG=XED_REG_XMM7
xed_reg_enum_t XMM_N_64()::
VEXDEST3=1 VEXDEST210=7 | OUTREG=XED_REG_XMM0
VEXDEST3=1 VEXDEST210=6 | OUTREG=XED_REG_XMM1
VEXDEST3=1 VEXDEST210=5 | OUTREG=XED_REG_XMM2
VEXDEST3=1 VEXDEST210=4 | OUTREG=XED_REG_XMM3
VEXDEST3=1 VEXDEST210=3 | OUTREG=XED_REG_XMM4
VEXDEST3=1 VEXDEST210=2 | OUTREG=XED_REG_XMM5
VEXDEST3=1 VEXDEST210=1 | OUTREG=XED_REG_XMM6
VEXDEST3=1 VEXDEST210=0 | OUTREG=XED_REG_XMM7
VEXDEST3=0 VEXDEST210=7 | OUTREG=XED_REG_XMM8
VEXDEST3=0 VEXDEST210=6 | OUTREG=XED_REG_XMM9
VEXDEST3=0 VEXDEST210=5 | OUTREG=XED_REG_XMM10
VEXDEST3=0 VEXDEST210=4 | OUTREG=XED_REG_XMM11
VEXDEST3=0 VEXDEST210=3 | OUTREG=XED_REG_XMM12
VEXDEST3=0 VEXDEST210=2 | OUTREG=XED_REG_XMM13
VEXDEST3=0 VEXDEST210=1 | OUTREG=XED_REG_XMM14
VEXDEST3=0 VEXDEST210=0 | OUTREG=XED_REG_XMM15
xed_reg_enum_t YMM_N()::
mode16 | OUTREG=YMM_N_32():
mode32 | OUTREG=YMM_N_32():
mode64 | OUTREG=YMM_N_64():
xed_reg_enum_t YMM_N_32()::
VEXDEST210=7 | OUTREG=XED_REG_YMM0
VEXDEST210=6 | OUTREG=XED_REG_YMM1
VEXDEST210=5 | OUTREG=XED_REG_YMM2
VEXDEST210=4 | OUTREG=XED_REG_YMM3
VEXDEST210=3 | OUTREG=XED_REG_YMM4
VEXDEST210=2 | OUTREG=XED_REG_YMM5
VEXDEST210=1 | OUTREG=XED_REG_YMM6
VEXDEST210=0 | OUTREG=XED_REG_YMM7
xed_reg_enum_t YMM_N_64()::
VEXDEST3=1 VEXDEST210=7 | OUTREG=XED_REG_YMM0
VEXDEST3=1 VEXDEST210=6 | OUTREG=XED_REG_YMM1
VEXDEST3=1 VEXDEST210=5 | OUTREG=XED_REG_YMM2
VEXDEST3=1 VEXDEST210=4 | OUTREG=XED_REG_YMM3
VEXDEST3=1 VEXDEST210=3 | OUTREG=XED_REG_YMM4
VEXDEST3=1 VEXDEST210=2 | OUTREG=XED_REG_YMM5
VEXDEST3=1 VEXDEST210=1 | OUTREG=XED_REG_YMM6
VEXDEST3=1 VEXDEST210=0 | OUTREG=XED_REG_YMM7
VEXDEST3=0 VEXDEST210=7 | OUTREG=XED_REG_YMM8
VEXDEST3=0 VEXDEST210=6 | OUTREG=XED_REG_YMM9
VEXDEST3=0 VEXDEST210=5 | OUTREG=XED_REG_YMM10
VEXDEST3=0 VEXDEST210=4 | OUTREG=XED_REG_YMM11
VEXDEST3=0 VEXDEST210=3 | OUTREG=XED_REG_YMM12
VEXDEST3=0 VEXDEST210=2 | OUTREG=XED_REG_YMM13
VEXDEST3=0 VEXDEST210=1 | OUTREG=XED_REG_YMM14
VEXDEST3=0 VEXDEST210=0 | OUTREG=XED_REG_YMM15
xed_reg_enum_t YMM_R()::
mode16 | OUTREG=YMM_R_32():
mode32 | OUTREG=YMM_R_32():
mode64 | OUTREG=YMM_R_64():
xed_reg_enum_t YMM_R_32()::
REG=0 | OUTREG=XED_REG_YMM0
REG=1 | OUTREG=XED_REG_YMM1
REG=2 | OUTREG=XED_REG_YMM2
REG=3 | OUTREG=XED_REG_YMM3
REG=4 | OUTREG=XED_REG_YMM4
REG=5 | OUTREG=XED_REG_YMM5
REG=6 | OUTREG=XED_REG_YMM6
REG=7 | OUTREG=XED_REG_YMM7
xed_reg_enum_t YMM_R_64()::
REXR=0 REG=0 | OUTREG=XED_REG_YMM0
REXR=0 REG=1 | OUTREG=XED_REG_YMM1
REXR=0 REG=2 | OUTREG=XED_REG_YMM2
REXR=0 REG=3 | OUTREG=XED_REG_YMM3
REXR=0 REG=4 | OUTREG=XED_REG_YMM4
REXR=0 REG=5 | OUTREG=XED_REG_YMM5
REXR=0 REG=6 | OUTREG=XED_REG_YMM6
REXR=0 REG=7 | OUTREG=XED_REG_YMM7
REXR=1 REG=0 | OUTREG=XED_REG_YMM8
REXR=1 REG=1 | OUTREG=XED_REG_YMM9
REXR=1 REG=2 | OUTREG=XED_REG_YMM10
REXR=1 REG=3 | OUTREG=XED_REG_YMM11
REXR=1 REG=4 | OUTREG=XED_REG_YMM12
REXR=1 REG=5 | OUTREG=XED_REG_YMM13
REXR=1 REG=6 | OUTREG=XED_REG_YMM14
REXR=1 REG=7 | OUTREG=XED_REG_YMM15
xed_reg_enum_t YMM_B()::
mode16 | OUTREG=YMM_B_32():
mode32 | OUTREG=YMM_B_32():
mode64 | OUTREG=YMM_B_64():
xed_reg_enum_t YMM_B_32()::
RM=0 | OUTREG=XED_REG_YMM0
RM=1 | OUTREG=XED_REG_YMM1
RM=2 | OUTREG=XED_REG_YMM2
RM=3 | OUTREG=XED_REG_YMM3
RM=4 | OUTREG=XED_REG_YMM4
RM=5 | OUTREG=XED_REG_YMM5
RM=6 | OUTREG=XED_REG_YMM6
RM=7 | OUTREG=XED_REG_YMM7
xed_reg_enum_t YMM_B_64()::
REXB=0 RM=0 | OUTREG=XED_REG_YMM0
REXB=0 RM=1 | OUTREG=XED_REG_YMM1
REXB=0 RM=2 | OUTREG=XED_REG_YMM2
REXB=0 RM=3 | OUTREG=XED_REG_YMM3
REXB=0 RM=4 | OUTREG=XED_REG_YMM4
REXB=0 RM=5 | OUTREG=XED_REG_YMM5
REXB=0 RM=6 | OUTREG=XED_REG_YMM6
REXB=0 RM=7 | OUTREG=XED_REG_YMM7
REXB=1 RM=0 | OUTREG=XED_REG_YMM8
REXB=1 RM=1 | OUTREG=XED_REG_YMM9
REXB=1 RM=2 | OUTREG=XED_REG_YMM10
REXB=1 RM=3 | OUTREG=XED_REG_YMM11
REXB=1 RM=4 | OUTREG=XED_REG_YMM12
REXB=1 RM=5 | OUTREG=XED_REG_YMM13
REXB=1 RM=6 | OUTREG=XED_REG_YMM14
REXB=1 RM=7 | OUTREG=XED_REG_YMM15
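The XMM_N/YMM_N tables above encode the fact that VEX.vvvv is stored inverted: the register number is the one's complement of the {VEXDEST3, VEXDEST210} bits, while the R/B tables combine REXR/REXB with the ModRM reg/rm fields directly. A minimal sketch of the vvvv lookup (not XED code):

def xmm_n_64(vexdest3: int, vexdest210: int) -> str:
    vvvv = (vexdest3 << 3) | vexdest210
    return "xmm%d" % ((~vvvv) & 0xF)   # one's complement of vvvv

assert xmm_n_64(1, 7) == "xmm0"    # matches the first XMM_N_64() row
assert xmm_n_64(0, 0) == "xmm15"   # matches the last row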

View File


@@ -0,0 +1,59 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
XMM0 xmm 128 YMM0 0
XMM1 xmm 128 YMM1 1
XMM2 xmm 128 YMM2 2
XMM3 xmm 128 YMM3 3
XMM4 xmm 128 YMM4 4
XMM5 xmm 128 YMM5 5
XMM6 xmm 128 YMM6 6
XMM7 xmm 128 YMM7 7
XMM8 xmm 128 YMM8 8
XMM9 xmm 128 YMM9 9
XMM10 xmm 128 YMM10 10
XMM11 xmm 128 YMM11 11
XMM12 xmm 128 YMM12 12
XMM13 xmm 128 YMM13 13
XMM14 xmm 128 YMM14 14
XMM15 xmm 128 YMM15 15
YMM0 ymm 256 YMM0 0
YMM1 ymm 256 YMM1 1
YMM2 ymm 256 YMM2 2
YMM3 ymm 256 YMM3 3
YMM4 ymm 256 YMM4 4
YMM5 ymm 256 YMM5 5
YMM6 ymm 256 YMM6 6
YMM7 ymm 256 YMM7 7
YMM8 ymm 256 YMM8 8
YMM9 ymm 256 YMM9 9
YMM10 ymm 256 YMM10 10
YMM11 ymm 256 YMM11 11
YMM12 ymm 256 YMM12 12
YMM13 ymm 256 YMM13 13
YMM14 ymm 256 YMM14 14
YMM15 ymm 256 YMM15 15
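Each row of the register table gives the register name, its class, its width in bits, its largest enclosing register, and its encoding ordinal. A one-line parse of a row, as an illustration only:

name, regclass, width_bits, enclosing, ordinal = "XMM9 xmm 128 YMM9 9".split()
assert (enclosing, int(width_bits), int(ordinal)) == ("YMM9", 128, 9)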

View File


@@ -0,0 +1,21 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
ISA()::
PREFIXES() OSZ_NONTERM() ASZ_NONTERM() AVX_SPLITTER() |

View File


@@ -0,0 +1,41 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
VL128 VL=0
VL256 VL=1
VV1 VEXVALID=1
VV0 VEXVALID=0
VMAP0 MAP=0
V0F MAP=1
V0F38 MAP=2
V0F3A MAP=3
VNP VEX_PREFIX=0
V66 VEX_PREFIX=1
VF2 VEX_PREFIX=2
VF3 VEX_PREFIX=3
# No VEX-SPECIFIED-REGISTER
NOVSR VEXDEST3=0b1 VEXDEST210=0b111
EMX_BROADCAST_1TO4_32 BCAST=10 # 128
EMX_BROADCAST_1TO4_64 BCAST=13 # 256
EMX_BROADCAST_1TO8_32 BCAST=3 # 256
EMX_BROADCAST_2TO4_64 BCAST=20 # 256
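These entries are macro-like shorthands: each name used in a PATTERN expands to one or more field bindings. So a pattern fragment such as "VV1 0xDE V66 V0F38 ... VL128" constrains the decoder state roughly as sketched below (illustration only):

pattern_constraints = {
    "VEXVALID": 1,    # VV1
    "VEX_PREFIX": 1,  # V66 (the 66 refining prefix)
    "MAP": 2,         # V0F38
    "VL": 0,          # VL128
}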

117
datafiles/avx/avx-vex-enc.txt Executable file
View File


@@ -0,0 +1,117 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
# These bind the operand deciders that control the encoding
SEQUENCE ISA_BINDINGS
FIXUP_EOSZ_ENC_BIND()
FIXUP_EASZ_ENC_BIND()
ASZ_NONTERM_BIND()
INSTRUCTIONS_BIND() # not calling tree splitter! AVX instructions must set VEXVALID=1
OSZ_NONTERM_ENC_BIND() # OSZ must be after the instructions so that DF64 is bound (and before any prefixes obviously)
PREFIX_ENC_BIND()
VEXED_REX_BIND()
# These emit the bits and bytes that make up the encoding
SEQUENCE ISA_EMIT
PREFIX_ENC_EMIT()
VEXED_REX_EMIT()
INSTRUCTIONS_EMIT()
VEXED_REX()::
VEXVALID=0 -> REX_PREFIX_ENC()
VEXVALID=1 -> NEWVEX_ENC()
#################################################
SEQUENCE NEWVEX_ENC_BIND
VEX_TYPE_ENC_BIND
VEX_REXR_ENC_BIND
VEX_REXXB_ENC_BIND
VEX_MAP_ENC_BIND
VEX_REG_ENC_BIND
VEX_ESCVL_ENC_BIND
SEQUENCE NEWVEX_ENC_EMIT
VEX_TYPE_ENC_EMIT
VEX_REXR_ENC_EMIT
VEX_REXXB_ENC_EMIT
VEX_MAP_ENC_EMIT
VEX_REG_ENC_EMIT
VEX_ESCVL_ENC_EMIT
##############################################
VEX_TYPE_ENC()::
REXX=1 -> 0xC4 VEX_C4=1
REXB=1 -> 0xC4 VEX_C4=1
MAP=0 -> 0xC4 VEX_C4=1
MAP=2 -> 0xC4 VEX_C4=1
MAP=3 -> 0xC4 VEX_C4=1
REXW=1 -> 0xC4 VEX_C4=1
otherwise -> 0xC5 VEX_C4=0
VEX_REXR_ENC()::
mode64 REXR=1 -> 0b0
mode64 REXR=0 -> 0b1
not64 REXR=1 -> error
not64 REXR=0 -> 0b1
VEX_REXXB_ENC()::
mode64 VEX_C4=1 REXX=0 REXB=0 -> 0b11
mode64 VEX_C4=1 REXX=1 REXB=0 -> 0b01
mode64 VEX_C4=1 REXX=0 REXB=1 -> 0b10
mode64 VEX_C4=1 REXX=1 REXB=1 -> 0b00
not64 VEX_C4=1 REXX=0 REXB=0 -> 0b11
not64 VEX_C4=1 REXX=1 REXB=0 -> error
not64 VEX_C4=1 REXX=0 REXB=1 -> error
not64 VEX_C4=1 REXX=1 REXB=1 -> error
otherwise -> nothing
# also emits W
VEX_MAP_ENC()::
VEX_C4=1 MAP=0 REXW[w] -> 0b0_0000 w
VEX_C4=1 MAP=1 REXW[w] -> 0b0_0001 w
VEX_C4=1 MAP=2 REXW[w] -> 0b0_0010 w
VEX_C4=1 MAP=3 REXW[w] -> 0b0_0011 w
otherwise -> nothing
# for VEX C5, VEXDEST3 MUST be 1 in 32b mode
VEX_REG_ENC()::
mode64 VEXDEST3[u] VEXDEST210[ddd] -> u_ddd
not64 VEXDEST3[u] VEXDEST210[ddd] -> 1_ddd
# FOR VEX'ed instructions, I need to turn off the normal REX prefix
# encoder. Ideally, I could use field names other than REX{WRXB},
# but the register lookup functions need those names. I can get away
# with using different names for the f2/f3/66 refining legacy prefixes
# since they are only referenced by the AVX instructions.
VEX_ESCVL_ENC()::
VL128 VNP -> 0b000
VL128 V66 -> 0b001
VL128 VF3 -> 0b010
VL128 VF2 -> 0b011
VL256 VNP -> 0b100
VL256 V66 -> 0b101
VL256 VF3 -> 0b110
VL256 VF2 -> 0b111
##############################################################################
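Taken together, VEX_TYPE_ENC / VEX_REXR_ENC / VEX_REXXB_ENC / VEX_MAP_ENC / VEX_REG_ENC / VEX_ESCVL_ENC emit either a 2-byte (C5) or 3-byte (C4) VEX prefix: C4 is forced whenever X, B or W is needed or the map is not the 0F map. A compact sketch of the byte layout these rules produce, assuming already-computed field values (not the XED encoder):

def encode_vex_prefix(rexr, rexx, rexb, rexw, map_id, vexdest3, vexdest210, vl, pp):
    vvvv = (vexdest3 << 3) | vexdest210          # already the inverted value
    if rexx or rexb or rexw or map_id != 1:      # VEX_TYPE_ENC: need C4
        b1 = ((1 - rexr) << 7) | ((1 - rexx) << 6) | ((1 - rexb) << 5) | map_id
        b2 = (rexw << 7) | (vvvv << 3) | (vl << 2) | pp
        return bytes([0xC4, b1, b2])
    b1 = ((1 - rexr) << 7) | (vvvv << 3) | (vl << 2) | pp
    return bytes([0xC5, b1])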

28
datafiles/avx/avx-vex.txt Normal file
View File


@@ -0,0 +1,28 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
# FOR VEX'ed instructions, I need to turn off the normal REX prefix
# encoder. Ideally, I could use field names other than REX{WRXB},
# but the register lookup functions need those names. I can get away
# with using different names for the f2/f3/66 refining legacy prefixes
# since they are only referenced by the AVX instructions.

View File


@@ -0,0 +1,21 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
XED_ISA_SET_AVX: avx.1.0.ecx.28
XED_ISA_SET_AVXAES: aes.1.0.ecx.25 avx.1.0.ecx.28
XED_ISA_SET_FMA: fma.1.0.ecx.12 avx.1.0.ecx.28
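Each record maps an ISA set to the CPUID bits that must all be set for it to be usable; the token format is feature.leaf.subleaf.register.bit (e.g. avx.1.0.ecx.28 is CPUID leaf 1, subleaf 0, ECX bit 28). A small parser sketch, not the XED reader:

def parse_cpuid_record(line: str):
    isa_set, _, rest = line.partition(":")
    bits = []
    for token in rest.split():
        feature, leaf, subleaf, reg, bit = token.split(".")
        bits.append((feature, int(leaf), int(subleaf), reg, int(bit)))
    return isa_set.strip(), bits

_, bits = parse_cpuid_record("XED_ISA_SET_AVXAES: aes.1.0.ecx.25 avx.1.0.ecx.28")
assert bits[0] == ("aes", 1, 0, "ecx", 25)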

View File


@@ -0,0 +1,21 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
dec-instructions:avx-fma-isa.txt
enc-instructions:avx-fma-isa.txt

49
datafiles/avx/files.cfg Normal file
View File


@@ -0,0 +1,49 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
add:dec-spine:avx-spine.txt:2
dec-instructions:avx-isa.txt
enc-instructions:avx-isa.txt
dec-instructions:avx-movnt-store.txt
enc-instructions:avx-movnt-store.txt
dec-instructions:avx-aes-isa.txt
enc-instructions:avx-aes-isa.txt
dec-instructions:avx-pclmul-isa.txt
enc-instructions:avx-pclmul-isa.txt
state:avx-state-bits.txt
widths:avx-operand-width.txt
pointer-names:avx-pointer-width.txt
registers:avx-regs.txt
dec-patterns:avx-reg-table.txt
enc-dec-patterns:avx-reg-table.txt
fields:avx-fields.txt
#
dec-patterns:avx-isa-supp.txt
enc-patterns:avx-isa-supp-enc.txt
#
dec-patterns:avx-vex.txt
dec-patterns:avx-imm.txt
#
enc-patterns:avx-vex-enc.txt
enc-patterns:avx-imm-enc.txt
chip-models:avx-chips.txt
cpuid : cpuid.xed.txt

View File


@@ -0,0 +1,28 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
AVX512_FUTURE: ALL_OF(SKYLAKE_SERVER) SHA \
AVX512IFMA_128 \
AVX512IFMA_256 \
AVX512IFMA_512 \
AVX512VBMI_128 \
AVX512VBMI_256 \
AVX512VBMI_512

View File


@@ -0,0 +1,20 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
chip-models:avx512-future-chips.txt

View File


@@ -0,0 +1,29 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
XED_ISA_SET_AVX512BW_128: avx512bw.7.0.ebx.30 avx512vl.7.0.ebx.31
XED_ISA_SET_AVX512BW_128N: avx512bw.7.0.ebx.30
XED_ISA_SET_AVX512BW_256: avx512bw.7.0.ebx.30 avx512vl.7.0.ebx.31
XED_ISA_SET_AVX512BW_512: avx512bw.7.0.ebx.30
XED_ISA_SET_AVX512BW_KOP: avx512bw.7.0.ebx.30
XED_ISA_SET_AVX512DQ_128: avx512dq.7.0.ebx.17 avx512vl.7.0.ebx.31
XED_ISA_SET_AVX512DQ_128N: avx512dq.7.0.ebx.17
XED_ISA_SET_AVX512DQ_256: avx512dq.7.0.ebx.17 avx512vl.7.0.ebx.31
XED_ISA_SET_AVX512DQ_512: avx512dq.7.0.ebx.17
XED_ISA_SET_AVX512DQ_KOP: avx512dq.7.0.ebx.17
XED_ISA_SET_AVX512DQ_SCALAR: avx512dq.7.0.ebx.17

View File


@@ -0,0 +1,23 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
state:skx-state-bits.txt
dec-instructions: skx-isa.xed.txt
enc-instructions: skx-isa.xed.txt
cpuid : cpuid.xed.txt

File diff suppressed because it is too large

View File


@@ -0,0 +1,24 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
EMX_BROADCAST_1TO2_8 BCAST=23
EMX_BROADCAST_1TO4_8 BCAST=24
EMX_BROADCAST_1TO8_8 BCAST=25
EMX_BROADCAST_1TO2_16 BCAST=26
EMX_BROADCAST_1TO4_16 BCAST=27

View File


@@ -0,0 +1,20 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
XED_ISA_SET_AVX512CD_128: avx512cd.7.0.ebx.28 avx512vl.7.0.ebx.31
XED_ISA_SET_AVX512CD_256: avx512cd.7.0.ebx.28 avx512vl.7.0.ebx.31
XED_ISA_SET_AVX512CD_512: avx512cd.7.0.ebx.28

View File


@@ -0,0 +1,24 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
dec-instructions: vconflict-isa.xed.txt
enc-instructions: vconflict-isa.xed.txt
cpuid : cpuid.xed.txt

View File


@@ -0,0 +1,177 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
#
#
#
# ***** GENERATED FILE -- DO NOT EDIT! *****
# ***** GENERATED FILE -- DO NOT EDIT! *****
# ***** GENERATED FILE -- DO NOT EDIT! *****
#
#
#
EVEX_INSTRUCTIONS()::
# EMITTING VPBROADCASTMB2Q (VPBROADCASTMB2Q-512-1)
{
ICLASS: VPBROADCASTMB2Q
CPL: 3
CATEGORY: BROADCAST
EXTENSION: AVX512EVEX
ISA_SET: AVX512CD_512
EXCEPTIONS: AVX512-E6NF
REAL_OPCODE: Y
PATTERN: EVV 0x2A VF3 V0F38 MOD[0b11] MOD=3 BCRC=0 REG[rrr] RM[nnn] VL512 W1 NOEVSR ZEROING=0 MASK=0
OPERANDS: REG0=ZMM_R3():w:zu64 REG1=MASK_B():r:mskw:u64 EMX_BROADCAST_1TO8_8
IFORM: VPBROADCASTMB2Q_ZMMu64_MASKu64_AVX512CD
}
# EMITTING VPBROADCASTMW2D (VPBROADCASTMW2D-512-1)
{
ICLASS: VPBROADCASTMW2D
CPL: 3
CATEGORY: BROADCAST
EXTENSION: AVX512EVEX
ISA_SET: AVX512CD_512
EXCEPTIONS: AVX512-E6NF
REAL_OPCODE: Y
PATTERN: EVV 0x3A VF3 V0F38 MOD[0b11] MOD=3 BCRC=0 REG[rrr] RM[nnn] VL512 W0 NOEVSR ZEROING=0 MASK=0
OPERANDS: REG0=ZMM_R3():w:zu32 REG1=MASK_B():r:mskw:u32 EMX_BROADCAST_1TO16_16
IFORM: VPBROADCASTMW2D_ZMMu32_MASKu32_AVX512CD
}
# EMITTING VPCONFLICTD (VPCONFLICTD-512-1)
{
ICLASS: VPCONFLICTD
CPL: 3
CATEGORY: CONFLICT
EXTENSION: AVX512EVEX
ISA_SET: AVX512CD_512
EXCEPTIONS: AVX512-E4
REAL_OPCODE: Y
ATTRIBUTES: MASKOP_EVEX
PATTERN: EVV 0xC4 V66 V0F38 MOD[0b11] MOD=3 BCRC=0 REG[rrr] RM[nnn] VL512 W0 NOEVSR
OPERANDS: REG0=ZMM_R3():w:zu32 REG1=MASK1():r:mskw:TXT=ZEROSTR REG2=ZMM_B3():r:zu32
IFORM: VPCONFLICTD_ZMMu32_MASKmskw_ZMMu32_AVX512CD
}
{
ICLASS: VPCONFLICTD
CPL: 3
CATEGORY: CONFLICT
EXTENSION: AVX512EVEX
ISA_SET: AVX512CD_512
EXCEPTIONS: AVX512-E4
REAL_OPCODE: Y
ATTRIBUTES: MASKOP_EVEX DISP8_FULL BROADCAST_ENABLED
PATTERN: EVV 0xC4 V66 V0F38 MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() VL512 W0 NOEVSR ESIZE_32_BITS() NELEM_FULL()
OPERANDS: REG0=ZMM_R3():w:zu32 REG1=MASK1():r:mskw:TXT=ZEROSTR MEM0:r:vv:u32:TXT=BCASTSTR
IFORM: VPCONFLICTD_ZMMu32_MASKmskw_MEMu32_AVX512CD
}
# EMITTING VPCONFLICTQ (VPCONFLICTQ-512-1)
{
ICLASS: VPCONFLICTQ
CPL: 3
CATEGORY: CONFLICT
EXTENSION: AVX512EVEX
ISA_SET: AVX512CD_512
EXCEPTIONS: AVX512-E4
REAL_OPCODE: Y
ATTRIBUTES: MASKOP_EVEX
PATTERN: EVV 0xC4 V66 V0F38 MOD[0b11] MOD=3 BCRC=0 REG[rrr] RM[nnn] VL512 W1 NOEVSR
OPERANDS: REG0=ZMM_R3():w:zu64 REG1=MASK1():r:mskw:TXT=ZEROSTR REG2=ZMM_B3():r:zu64
IFORM: VPCONFLICTQ_ZMMu64_MASKmskw_ZMMu64_AVX512CD
}
{
ICLASS: VPCONFLICTQ
CPL: 3
CATEGORY: CONFLICT
EXTENSION: AVX512EVEX
ISA_SET: AVX512CD_512
EXCEPTIONS: AVX512-E4
REAL_OPCODE: Y
ATTRIBUTES: MASKOP_EVEX DISP8_FULL BROADCAST_ENABLED
PATTERN: EVV 0xC4 V66 V0F38 MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() VL512 W1 NOEVSR ESIZE_64_BITS() NELEM_FULL()
OPERANDS: REG0=ZMM_R3():w:zu64 REG1=MASK1():r:mskw:TXT=ZEROSTR MEM0:r:vv:u64:TXT=BCASTSTR
IFORM: VPCONFLICTQ_ZMMu64_MASKmskw_MEMu64_AVX512CD
}
# EMITTING VPLZCNTD (VPLZCNTD-512-1)
{
ICLASS: VPLZCNTD
CPL: 3
CATEGORY: CONFLICT
EXTENSION: AVX512EVEX
ISA_SET: AVX512CD_512
EXCEPTIONS: AVX512-E4
REAL_OPCODE: Y
ATTRIBUTES: MASKOP_EVEX
PATTERN: EVV 0x44 V66 V0F38 MOD[0b11] MOD=3 BCRC=0 REG[rrr] RM[nnn] VL512 W0 NOEVSR
OPERANDS: REG0=ZMM_R3():w:zu32 REG1=MASK1():r:mskw:TXT=ZEROSTR REG2=ZMM_B3():r:zu32
IFORM: VPLZCNTD_ZMMu32_MASKmskw_ZMMu32_AVX512CD
}
{
ICLASS: VPLZCNTD
CPL: 3
CATEGORY: CONFLICT
EXTENSION: AVX512EVEX
ISA_SET: AVX512CD_512
EXCEPTIONS: AVX512-E4
REAL_OPCODE: Y
ATTRIBUTES: MEMORY_FAULT_SUPPRESSION MASKOP_EVEX DISP8_FULL BROADCAST_ENABLED
PATTERN: EVV 0x44 V66 V0F38 MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() VL512 W0 NOEVSR ESIZE_32_BITS() NELEM_FULL()
OPERANDS: REG0=ZMM_R3():w:zu32 REG1=MASK1():r:mskw:TXT=ZEROSTR MEM0:r:vv:u32:TXT=BCASTSTR
IFORM: VPLZCNTD_ZMMu32_MASKmskw_MEMu32_AVX512CD
}
# EMITTING VPLZCNTQ (VPLZCNTQ-512-1)
{
ICLASS: VPLZCNTQ
CPL: 3
CATEGORY: CONFLICT
EXTENSION: AVX512EVEX
ISA_SET: AVX512CD_512
EXCEPTIONS: AVX512-E4
REAL_OPCODE: Y
ATTRIBUTES: MASKOP_EVEX
PATTERN: EVV 0x44 V66 V0F38 MOD[0b11] MOD=3 BCRC=0 REG[rrr] RM[nnn] VL512 W1 NOEVSR
OPERANDS: REG0=ZMM_R3():w:zu64 REG1=MASK1():r:mskw:TXT=ZEROSTR REG2=ZMM_B3():r:zu64
IFORM: VPLZCNTQ_ZMMu64_MASKmskw_ZMMu64_AVX512CD
}
{
ICLASS: VPLZCNTQ
CPL: 3
CATEGORY: CONFLICT
EXTENSION: AVX512EVEX
ISA_SET: AVX512CD_512
EXCEPTIONS: AVX512-E4
REAL_OPCODE: Y
ATTRIBUTES: MEMORY_FAULT_SUPPRESSION MASKOP_EVEX DISP8_FULL BROADCAST_ENABLED
PATTERN: EVV 0x44 V66 V0F38 MOD[mm] MOD!=3 REG[rrr] RM[nnn] MODRM() VL512 W1 NOEVSR ESIZE_64_BITS() NELEM_FULL()
OPERANDS: REG0=ZMM_R3():w:zu64 REG1=MASK1():r:mskw:TXT=ZEROSTR MEM0:r:vv:u64:TXT=BCASTSTR
IFORM: VPLZCNTQ_ZMMu64_MASKmskw_MEMu64_AVX512CD
}

View File


@@ -0,0 +1,18 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
32 ymmword y

View File


@@ -0,0 +1,189 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
UISA_VMODRM_ZMM()::
MOD=0b00 UISA_VSIB_ZMM() |
MOD=0b01 UISA_VSIB_ZMM() MEMDISP8() |
MOD=0b10 UISA_VSIB_ZMM() MEMDISP32() |
UISA_VMODRM_YMM()::
MOD=0b00 UISA_VSIB_YMM() |
MOD=0b01 UISA_VSIB_YMM() MEMDISP8() |
MOD=0b10 UISA_VSIB_YMM() MEMDISP32() |
UISA_VMODRM_XMM()::
MOD=0b00 UISA_VSIB_XMM() |
MOD=0b01 UISA_VSIB_XMM() MEMDISP8() |
MOD=0b10 UISA_VSIB_XMM() MEMDISP32() |
UISA_VSIB_ZMM()::
SIBSCALE[0b00] SIBINDEX[iii] SIBBASE[bbb] UISA_VSIB_BASE() | INDEX=UISA_VSIB_INDEX_ZMM() SCALE=1
SIBSCALE[0b01] SIBINDEX[iii] SIBBASE[bbb] UISA_VSIB_BASE() | INDEX=UISA_VSIB_INDEX_ZMM() SCALE=2
SIBSCALE[0b10] SIBINDEX[iii] SIBBASE[bbb] UISA_VSIB_BASE() | INDEX=UISA_VSIB_INDEX_ZMM() SCALE=4
SIBSCALE[0b11] SIBINDEX[iii] SIBBASE[bbb] UISA_VSIB_BASE() | INDEX=UISA_VSIB_INDEX_ZMM() SCALE=8
UISA_VSIB_YMM()::
SIBSCALE[0b00] SIBINDEX[iii] SIBBASE[bbb] UISA_VSIB_BASE() | INDEX=UISA_VSIB_INDEX_YMM() SCALE=1
SIBSCALE[0b01] SIBINDEX[iii] SIBBASE[bbb] UISA_VSIB_BASE() | INDEX=UISA_VSIB_INDEX_YMM() SCALE=2
SIBSCALE[0b10] SIBINDEX[iii] SIBBASE[bbb] UISA_VSIB_BASE() | INDEX=UISA_VSIB_INDEX_YMM() SCALE=4
SIBSCALE[0b11] SIBINDEX[iii] SIBBASE[bbb] UISA_VSIB_BASE() | INDEX=UISA_VSIB_INDEX_YMM() SCALE=8
UISA_VSIB_XMM()::
SIBSCALE[0b00] SIBINDEX[iii] SIBBASE[bbb] UISA_VSIB_BASE() | INDEX=UISA_VSIB_INDEX_XMM() SCALE=1
SIBSCALE[0b01] SIBINDEX[iii] SIBBASE[bbb] UISA_VSIB_BASE() | INDEX=UISA_VSIB_INDEX_XMM() SCALE=2
SIBSCALE[0b10] SIBINDEX[iii] SIBBASE[bbb] UISA_VSIB_BASE() | INDEX=UISA_VSIB_INDEX_XMM() SCALE=4
SIBSCALE[0b11] SIBINDEX[iii] SIBBASE[bbb] UISA_VSIB_BASE() | INDEX=UISA_VSIB_INDEX_XMM() SCALE=8
xed_reg_enum_t UISA_VSIB_INDEX_ZMM()::
VEXDEST4=0 REXX=0 SIBINDEX=0 | OUTREG=XED_REG_ZMM0
VEXDEST4=0 REXX=0 SIBINDEX=1 | OUTREG=XED_REG_ZMM1
VEXDEST4=0 REXX=0 SIBINDEX=2 | OUTREG=XED_REG_ZMM2
VEXDEST4=0 REXX=0 SIBINDEX=3 | OUTREG=XED_REG_ZMM3
VEXDEST4=0 REXX=0 SIBINDEX=4 | OUTREG=XED_REG_ZMM4
VEXDEST4=0 REXX=0 SIBINDEX=5 | OUTREG=XED_REG_ZMM5
VEXDEST4=0 REXX=0 SIBINDEX=6 | OUTREG=XED_REG_ZMM6
VEXDEST4=0 REXX=0 SIBINDEX=7 | OUTREG=XED_REG_ZMM7
VEXDEST4=0 REXX=1 SIBINDEX=0 | OUTREG=XED_REG_ZMM8
VEXDEST4=0 REXX=1 SIBINDEX=1 | OUTREG=XED_REG_ZMM9
VEXDEST4=0 REXX=1 SIBINDEX=2 | OUTREG=XED_REG_ZMM10
VEXDEST4=0 REXX=1 SIBINDEX=3 | OUTREG=XED_REG_ZMM11
VEXDEST4=0 REXX=1 SIBINDEX=4 | OUTREG=XED_REG_ZMM12
VEXDEST4=0 REXX=1 SIBINDEX=5 | OUTREG=XED_REG_ZMM13
VEXDEST4=0 REXX=1 SIBINDEX=6 | OUTREG=XED_REG_ZMM14
VEXDEST4=0 REXX=1 SIBINDEX=7 | OUTREG=XED_REG_ZMM15
VEXDEST4=1 REXX=0 SIBINDEX=0 | OUTREG=XED_REG_ZMM16
VEXDEST4=1 REXX=0 SIBINDEX=1 | OUTREG=XED_REG_ZMM17
VEXDEST4=1 REXX=0 SIBINDEX=2 | OUTREG=XED_REG_ZMM18
VEXDEST4=1 REXX=0 SIBINDEX=3 | OUTREG=XED_REG_ZMM19
VEXDEST4=1 REXX=0 SIBINDEX=4 | OUTREG=XED_REG_ZMM20
VEXDEST4=1 REXX=0 SIBINDEX=5 | OUTREG=XED_REG_ZMM21
VEXDEST4=1 REXX=0 SIBINDEX=6 | OUTREG=XED_REG_ZMM22
VEXDEST4=1 REXX=0 SIBINDEX=7 | OUTREG=XED_REG_ZMM23
VEXDEST4=1 REXX=1 SIBINDEX=0 | OUTREG=XED_REG_ZMM24
VEXDEST4=1 REXX=1 SIBINDEX=1 | OUTREG=XED_REG_ZMM25
VEXDEST4=1 REXX=1 SIBINDEX=2 | OUTREG=XED_REG_ZMM26
VEXDEST4=1 REXX=1 SIBINDEX=3 | OUTREG=XED_REG_ZMM27
VEXDEST4=1 REXX=1 SIBINDEX=4 | OUTREG=XED_REG_ZMM28
VEXDEST4=1 REXX=1 SIBINDEX=5 | OUTREG=XED_REG_ZMM29
VEXDEST4=1 REXX=1 SIBINDEX=6 | OUTREG=XED_REG_ZMM30
VEXDEST4=1 REXX=1 SIBINDEX=7 | OUTREG=XED_REG_ZMM31
xed_reg_enum_t UISA_VSIB_INDEX_YMM()::
VEXDEST4=0 REXX=0 SIBINDEX=0 | OUTREG=XED_REG_YMM0
VEXDEST4=0 REXX=0 SIBINDEX=1 | OUTREG=XED_REG_YMM1
VEXDEST4=0 REXX=0 SIBINDEX=2 | OUTREG=XED_REG_YMM2
VEXDEST4=0 REXX=0 SIBINDEX=3 | OUTREG=XED_REG_YMM3
VEXDEST4=0 REXX=0 SIBINDEX=4 | OUTREG=XED_REG_YMM4
VEXDEST4=0 REXX=0 SIBINDEX=5 | OUTREG=XED_REG_YMM5
VEXDEST4=0 REXX=0 SIBINDEX=6 | OUTREG=XED_REG_YMM6
VEXDEST4=0 REXX=0 SIBINDEX=7 | OUTREG=XED_REG_YMM7
VEXDEST4=0 REXX=1 SIBINDEX=0 | OUTREG=XED_REG_YMM8
VEXDEST4=0 REXX=1 SIBINDEX=1 | OUTREG=XED_REG_YMM9
VEXDEST4=0 REXX=1 SIBINDEX=2 | OUTREG=XED_REG_YMM10
VEXDEST4=0 REXX=1 SIBINDEX=3 | OUTREG=XED_REG_YMM11
VEXDEST4=0 REXX=1 SIBINDEX=4 | OUTREG=XED_REG_YMM12
VEXDEST4=0 REXX=1 SIBINDEX=5 | OUTREG=XED_REG_YMM13
VEXDEST4=0 REXX=1 SIBINDEX=6 | OUTREG=XED_REG_YMM14
VEXDEST4=0 REXX=1 SIBINDEX=7 | OUTREG=XED_REG_YMM15
VEXDEST4=1 REXX=0 SIBINDEX=0 | OUTREG=XED_REG_YMM16
VEXDEST4=1 REXX=0 SIBINDEX=1 | OUTREG=XED_REG_YMM17
VEXDEST4=1 REXX=0 SIBINDEX=2 | OUTREG=XED_REG_YMM18
VEXDEST4=1 REXX=0 SIBINDEX=3 | OUTREG=XED_REG_YMM19
VEXDEST4=1 REXX=0 SIBINDEX=4 | OUTREG=XED_REG_YMM20
VEXDEST4=1 REXX=0 SIBINDEX=5 | OUTREG=XED_REG_YMM21
VEXDEST4=1 REXX=0 SIBINDEX=6 | OUTREG=XED_REG_YMM22
VEXDEST4=1 REXX=0 SIBINDEX=7 | OUTREG=XED_REG_YMM23
VEXDEST4=1 REXX=1 SIBINDEX=0 | OUTREG=XED_REG_YMM24
VEXDEST4=1 REXX=1 SIBINDEX=1 | OUTREG=XED_REG_YMM25
VEXDEST4=1 REXX=1 SIBINDEX=2 | OUTREG=XED_REG_YMM26
VEXDEST4=1 REXX=1 SIBINDEX=3 | OUTREG=XED_REG_YMM27
VEXDEST4=1 REXX=1 SIBINDEX=4 | OUTREG=XED_REG_YMM28
VEXDEST4=1 REXX=1 SIBINDEX=5 | OUTREG=XED_REG_YMM29
VEXDEST4=1 REXX=1 SIBINDEX=6 | OUTREG=XED_REG_YMM30
VEXDEST4=1 REXX=1 SIBINDEX=7 | OUTREG=XED_REG_YMM31
xed_reg_enum_t UISA_VSIB_INDEX_XMM()::
VEXDEST4=0 REXX=0 SIBINDEX=0 | OUTREG=XED_REG_XMM0
VEXDEST4=0 REXX=0 SIBINDEX=1 | OUTREG=XED_REG_XMM1
VEXDEST4=0 REXX=0 SIBINDEX=2 | OUTREG=XED_REG_XMM2
VEXDEST4=0 REXX=0 SIBINDEX=3 | OUTREG=XED_REG_XMM3
VEXDEST4=0 REXX=0 SIBINDEX=4 | OUTREG=XED_REG_XMM4
VEXDEST4=0 REXX=0 SIBINDEX=5 | OUTREG=XED_REG_XMM5
VEXDEST4=0 REXX=0 SIBINDEX=6 | OUTREG=XED_REG_XMM6
VEXDEST4=0 REXX=0 SIBINDEX=7 | OUTREG=XED_REG_XMM7
VEXDEST4=0 REXX=1 SIBINDEX=0 | OUTREG=XED_REG_XMM8
VEXDEST4=0 REXX=1 SIBINDEX=1 | OUTREG=XED_REG_XMM9
VEXDEST4=0 REXX=1 SIBINDEX=2 | OUTREG=XED_REG_XMM10
VEXDEST4=0 REXX=1 SIBINDEX=3 | OUTREG=XED_REG_XMM11
VEXDEST4=0 REXX=1 SIBINDEX=4 | OUTREG=XED_REG_XMM12
VEXDEST4=0 REXX=1 SIBINDEX=5 | OUTREG=XED_REG_XMM13
VEXDEST4=0 REXX=1 SIBINDEX=6 | OUTREG=XED_REG_XMM14
VEXDEST4=0 REXX=1 SIBINDEX=7 | OUTREG=XED_REG_XMM15
VEXDEST4=1 REXX=0 SIBINDEX=0 | OUTREG=XED_REG_XMM16
VEXDEST4=1 REXX=0 SIBINDEX=1 | OUTREG=XED_REG_XMM17
VEXDEST4=1 REXX=0 SIBINDEX=2 | OUTREG=XED_REG_XMM18
VEXDEST4=1 REXX=0 SIBINDEX=3 | OUTREG=XED_REG_XMM19
VEXDEST4=1 REXX=0 SIBINDEX=4 | OUTREG=XED_REG_XMM20
VEXDEST4=1 REXX=0 SIBINDEX=5 | OUTREG=XED_REG_XMM21
VEXDEST4=1 REXX=0 SIBINDEX=6 | OUTREG=XED_REG_XMM22
VEXDEST4=1 REXX=0 SIBINDEX=7 | OUTREG=XED_REG_XMM23
VEXDEST4=1 REXX=1 SIBINDEX=0 | OUTREG=XED_REG_XMM24
VEXDEST4=1 REXX=1 SIBINDEX=1 | OUTREG=XED_REG_XMM25
VEXDEST4=1 REXX=1 SIBINDEX=2 | OUTREG=XED_REG_XMM26
VEXDEST4=1 REXX=1 SIBINDEX=3 | OUTREG=XED_REG_XMM27
VEXDEST4=1 REXX=1 SIBINDEX=4 | OUTREG=XED_REG_XMM28
VEXDEST4=1 REXX=1 SIBINDEX=5 | OUTREG=XED_REG_XMM29
VEXDEST4=1 REXX=1 SIBINDEX=6 | OUTREG=XED_REG_XMM30
VEXDEST4=1 REXX=1 SIBINDEX=7 | OUTREG=XED_REG_XMM31
UISA_VSIB_BASE()::
REXB=0 SIBBASE=0 | BASE0=ArAX() SEG0=FINAL_DSEG()
REXB=0 SIBBASE=1 | BASE0=ArCX() SEG0=FINAL_DSEG()
REXB=0 SIBBASE=2 | BASE0=ArDX() SEG0=FINAL_DSEG()
REXB=0 SIBBASE=3 | BASE0=ArBX() SEG0=FINAL_DSEG()
REXB=0 SIBBASE=4 | BASE0=ArSP() SEG0=FINAL_SSEG()
# FIXME: BASE ISA IS CONSIDERABLY MORE COMPLICATED FOR DISP8 and DISP32
REXB=0 SIBBASE=5 MOD=0 MEMDISP32() | BASE0=XED_REG_INVALID SEG0=FINAL_DSEG()
REXB=0 SIBBASE=5 MOD!=0 | BASE0=ArBP() SEG0=FINAL_SSEG()
REXB=0 SIBBASE=6 | BASE0=ArSI() SEG0=FINAL_DSEG()
REXB=0 SIBBASE=7 | BASE0=ArDI() SEG0=FINAL_DSEG()
REXB=1 SIBBASE=0 | BASE0=Ar8() SEG0=FINAL_DSEG()
REXB=1 SIBBASE=1 | BASE0=Ar9() SEG0=FINAL_DSEG()
REXB=1 SIBBASE=2 | BASE0=Ar10() SEG0=FINAL_DSEG()
REXB=1 SIBBASE=3 | BASE0=Ar11() SEG0=FINAL_DSEG()
REXB=1 SIBBASE=4 | BASE0=Ar12() SEG0=FINAL_DSEG()
# FIXME: BASE ISA IS CONSIDERABLY MORE COMPLICATED FOR DISP8 and DISP32
REXB=1 SIBBASE=5 MOD=0 MEMDISP32() | BASE0=XED_REG_INVALID SEG0=FINAL_DSEG()
REXB=1 SIBBASE=5 MOD!=0 | BASE0=Ar13() SEG0=FINAL_DSEG()
REXB=1 SIBBASE=6 | BASE0=Ar14() SEG0=FINAL_DSEG()
REXB=1 SIBBASE=7 | BASE0=Ar15() SEG0=FINAL_DSEG()
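The three UISA_VSIB_INDEX_* tables above all follow the same rule: the index register number is the 5-bit concatenation {VEXDEST4, REXX, SIBINDEX}, which is how registers 16-31 become reachable as gather/scatter indices. A minimal sketch (not XED code):

def vsib_index(vexdest4: int, rexx: int, sibindex: int, prefix: str = "zmm") -> str:
    return "%s%d" % (prefix, (vexdest4 << 4) | (rexx << 3) | sibindex)

assert vsib_index(0, 1, 2) == "zmm10"      # matches the ZMM table above
assert vsib_index(1, 1, 7, "xmm") == "xmm31"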

View File


@@ -0,0 +1,167 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
SEQUENCE UISA_VMODRM_ZMM_BIND
VMODRM_MOD_ENCODE_BIND() # FROM HSW
VSIB_ENC_BASE_BIND() # FROM HSW
UISA_ENC_INDEX_ZMM_BIND()
VSIB_ENC_SCALE_BIND() # FROM HSW
VSIB_ENC_BIND() # FROM HSW
SEGMENT_DEFAULT_ENCODE_BIND() # FROM BASE ISA
SEGMENT_ENCODE_BIND() # FROM BASE ISA
DISP_NT_BIND() # FROM BASE ISA
SEQUENCE UISA_VMODRM_YMM_BIND
VMODRM_MOD_ENCODE_BIND() # FROM HSW
VSIB_ENC_BASE_BIND() # FROM HSW
UISA_ENC_INDEX_YMM_BIND()
VSIB_ENC_SCALE_BIND() # FROM HSW
VSIB_ENC_BIND() # FROM HSW
SEGMENT_DEFAULT_ENCODE_BIND() # FROM BASE ISA
SEGMENT_ENCODE_BIND() # FROM BASE ISA
DISP_NT_BIND() # FROM BASE ISA
SEQUENCE UISA_VMODRM_XMM_BIND
VMODRM_MOD_ENCODE_BIND() # FROM HSW
VSIB_ENC_BASE_BIND() # FROM HSW
UISA_ENC_INDEX_XMM_BIND()
VSIB_ENC_SCALE_BIND() # FROM HSW
VSIB_ENC_BIND() # FROM HSW
SEGMENT_DEFAULT_ENCODE_BIND() # FROM BASE ISA
SEGMENT_ENCODE_BIND() # FROM BASE ISA
DISP_NT_BIND() # FROM BASE ISA
# For now, ignoring the difference in x/y/zmm for the index register. Could
# split these.
SEQUENCE UISA_VMODRM_ZMM_EMIT
VSIB_ENC_EMIT()
DISP_NT_EMIT()
SEQUENCE UISA_VMODRM_YMM_EMIT
VSIB_ENC_EMIT()
DISP_NT_EMIT()
SEQUENCE UISA_VMODRM_XMM_EMIT
VSIB_ENC_EMIT()
DISP_NT_EMIT()
######################################
UISA_ENC_INDEX_ZMM()::
INDEX=XED_REG_ZMM0 -> VEXDEST4=0 REXX=0 SIBINDEX=0
INDEX=XED_REG_ZMM1 -> VEXDEST4=0 REXX=0 SIBINDEX=1
INDEX=XED_REG_ZMM2 -> VEXDEST4=0 REXX=0 SIBINDEX=2
INDEX=XED_REG_ZMM3 -> VEXDEST4=0 REXX=0 SIBINDEX=3
INDEX=XED_REG_ZMM4 -> VEXDEST4=0 REXX=0 SIBINDEX=4
INDEX=XED_REG_ZMM5 -> VEXDEST4=0 REXX=0 SIBINDEX=5
INDEX=XED_REG_ZMM6 -> VEXDEST4=0 REXX=0 SIBINDEX=6
INDEX=XED_REG_ZMM7 -> VEXDEST4=0 REXX=0 SIBINDEX=7
INDEX=XED_REG_ZMM8 -> VEXDEST4=0 REXX=1 SIBINDEX=0
INDEX=XED_REG_ZMM9 -> VEXDEST4=0 REXX=1 SIBINDEX=1
INDEX=XED_REG_ZMM10 -> VEXDEST4=0 REXX=1 SIBINDEX=2
INDEX=XED_REG_ZMM11 -> VEXDEST4=0 REXX=1 SIBINDEX=3
INDEX=XED_REG_ZMM12 -> VEXDEST4=0 REXX=1 SIBINDEX=4
INDEX=XED_REG_ZMM13 -> VEXDEST4=0 REXX=1 SIBINDEX=5
INDEX=XED_REG_ZMM14 -> VEXDEST4=0 REXX=1 SIBINDEX=6
INDEX=XED_REG_ZMM15 -> VEXDEST4=0 REXX=1 SIBINDEX=7
INDEX=XED_REG_ZMM16 -> VEXDEST4=1 REXX=0 SIBINDEX=0
INDEX=XED_REG_ZMM17 -> VEXDEST4=1 REXX=0 SIBINDEX=1
INDEX=XED_REG_ZMM18 -> VEXDEST4=1 REXX=0 SIBINDEX=2
INDEX=XED_REG_ZMM19 -> VEXDEST4=1 REXX=0 SIBINDEX=3
INDEX=XED_REG_ZMM20 -> VEXDEST4=1 REXX=0 SIBINDEX=4
INDEX=XED_REG_ZMM21 -> VEXDEST4=1 REXX=0 SIBINDEX=5
INDEX=XED_REG_ZMM22 -> VEXDEST4=1 REXX=0 SIBINDEX=6
INDEX=XED_REG_ZMM23 -> VEXDEST4=1 REXX=0 SIBINDEX=7
INDEX=XED_REG_ZMM24 -> VEXDEST4=1 REXX=1 SIBINDEX=0
INDEX=XED_REG_ZMM25 -> VEXDEST4=1 REXX=1 SIBINDEX=1
INDEX=XED_REG_ZMM26 -> VEXDEST4=1 REXX=1 SIBINDEX=2
INDEX=XED_REG_ZMM27 -> VEXDEST4=1 REXX=1 SIBINDEX=3
INDEX=XED_REG_ZMM28 -> VEXDEST4=1 REXX=1 SIBINDEX=4
INDEX=XED_REG_ZMM29 -> VEXDEST4=1 REXX=1 SIBINDEX=5
INDEX=XED_REG_ZMM30 -> VEXDEST4=1 REXX=1 SIBINDEX=6
INDEX=XED_REG_ZMM31 -> VEXDEST4=1 REXX=1 SIBINDEX=7
UISA_ENC_INDEX_YMM()::
INDEX=XED_REG_YMM0 -> VEXDEST4=0 REXX=0 SIBINDEX=0
INDEX=XED_REG_YMM1 -> VEXDEST4=0 REXX=0 SIBINDEX=1
INDEX=XED_REG_YMM2 -> VEXDEST4=0 REXX=0 SIBINDEX=2
INDEX=XED_REG_YMM3 -> VEXDEST4=0 REXX=0 SIBINDEX=3
INDEX=XED_REG_YMM4 -> VEXDEST4=0 REXX=0 SIBINDEX=4
INDEX=XED_REG_YMM5 -> VEXDEST4=0 REXX=0 SIBINDEX=5
INDEX=XED_REG_YMM6 -> VEXDEST4=0 REXX=0 SIBINDEX=6
INDEX=XED_REG_YMM7 -> VEXDEST4=0 REXX=0 SIBINDEX=7
INDEX=XED_REG_YMM8 -> VEXDEST4=0 REXX=1 SIBINDEX=0
INDEX=XED_REG_YMM9 -> VEXDEST4=0 REXX=1 SIBINDEX=1
INDEX=XED_REG_YMM10 -> VEXDEST4=0 REXX=1 SIBINDEX=2
INDEX=XED_REG_YMM11 -> VEXDEST4=0 REXX=1 SIBINDEX=3
INDEX=XED_REG_YMM12 -> VEXDEST4=0 REXX=1 SIBINDEX=4
INDEX=XED_REG_YMM13 -> VEXDEST4=0 REXX=1 SIBINDEX=5
INDEX=XED_REG_YMM14 -> VEXDEST4=0 REXX=1 SIBINDEX=6
INDEX=XED_REG_YMM15 -> VEXDEST4=0 REXX=1 SIBINDEX=7
INDEX=XED_REG_YMM16 -> VEXDEST4=1 REXX=0 SIBINDEX=0
INDEX=XED_REG_YMM17 -> VEXDEST4=1 REXX=0 SIBINDEX=1
INDEX=XED_REG_YMM18 -> VEXDEST4=1 REXX=0 SIBINDEX=2
INDEX=XED_REG_YMM19 -> VEXDEST4=1 REXX=0 SIBINDEX=3
INDEX=XED_REG_YMM20 -> VEXDEST4=1 REXX=0 SIBINDEX=4
INDEX=XED_REG_YMM21 -> VEXDEST4=1 REXX=0 SIBINDEX=5
INDEX=XED_REG_YMM22 -> VEXDEST4=1 REXX=0 SIBINDEX=6
INDEX=XED_REG_YMM23 -> VEXDEST4=1 REXX=0 SIBINDEX=7
INDEX=XED_REG_YMM24 -> VEXDEST4=1 REXX=1 SIBINDEX=0
INDEX=XED_REG_YMM25 -> VEXDEST4=1 REXX=1 SIBINDEX=1
INDEX=XED_REG_YMM26 -> VEXDEST4=1 REXX=1 SIBINDEX=2
INDEX=XED_REG_YMM27 -> VEXDEST4=1 REXX=1 SIBINDEX=3
INDEX=XED_REG_YMM28 -> VEXDEST4=1 REXX=1 SIBINDEX=4
INDEX=XED_REG_YMM29 -> VEXDEST4=1 REXX=1 SIBINDEX=5
INDEX=XED_REG_YMM30 -> VEXDEST4=1 REXX=1 SIBINDEX=6
INDEX=XED_REG_YMM31 -> VEXDEST4=1 REXX=1 SIBINDEX=7
UISA_ENC_INDEX_XMM()::
INDEX=XED_REG_XMM0 -> VEXDEST4=0 REXX=0 SIBINDEX=0
INDEX=XED_REG_XMM1 -> VEXDEST4=0 REXX=0 SIBINDEX=1
INDEX=XED_REG_XMM2 -> VEXDEST4=0 REXX=0 SIBINDEX=2
INDEX=XED_REG_XMM3 -> VEXDEST4=0 REXX=0 SIBINDEX=3
INDEX=XED_REG_XMM4 -> VEXDEST4=0 REXX=0 SIBINDEX=4
INDEX=XED_REG_XMM5 -> VEXDEST4=0 REXX=0 SIBINDEX=5
INDEX=XED_REG_XMM6 -> VEXDEST4=0 REXX=0 SIBINDEX=6
INDEX=XED_REG_XMM7 -> VEXDEST4=0 REXX=0 SIBINDEX=7
INDEX=XED_REG_XMM8 -> VEXDEST4=0 REXX=1 SIBINDEX=0
INDEX=XED_REG_XMM9 -> VEXDEST4=0 REXX=1 SIBINDEX=1
INDEX=XED_REG_XMM10 -> VEXDEST4=0 REXX=1 SIBINDEX=2
INDEX=XED_REG_XMM11 -> VEXDEST4=0 REXX=1 SIBINDEX=3
INDEX=XED_REG_XMM12 -> VEXDEST4=0 REXX=1 SIBINDEX=4
INDEX=XED_REG_XMM13 -> VEXDEST4=0 REXX=1 SIBINDEX=5
INDEX=XED_REG_XMM14 -> VEXDEST4=0 REXX=1 SIBINDEX=6
INDEX=XED_REG_XMM15 -> VEXDEST4=0 REXX=1 SIBINDEX=7
INDEX=XED_REG_XMM16 -> VEXDEST4=1 REXX=0 SIBINDEX=0
INDEX=XED_REG_XMM17 -> VEXDEST4=1 REXX=0 SIBINDEX=1
INDEX=XED_REG_XMM18 -> VEXDEST4=1 REXX=0 SIBINDEX=2
INDEX=XED_REG_XMM19 -> VEXDEST4=1 REXX=0 SIBINDEX=3
INDEX=XED_REG_XMM20 -> VEXDEST4=1 REXX=0 SIBINDEX=4
INDEX=XED_REG_XMM21 -> VEXDEST4=1 REXX=0 SIBINDEX=5
INDEX=XED_REG_XMM22 -> VEXDEST4=1 REXX=0 SIBINDEX=6
INDEX=XED_REG_XMM23 -> VEXDEST4=1 REXX=0 SIBINDEX=7
INDEX=XED_REG_XMM24 -> VEXDEST4=1 REXX=1 SIBINDEX=0
INDEX=XED_REG_XMM25 -> VEXDEST4=1 REXX=1 SIBINDEX=1
INDEX=XED_REG_XMM26 -> VEXDEST4=1 REXX=1 SIBINDEX=2
INDEX=XED_REG_XMM27 -> VEXDEST4=1 REXX=1 SIBINDEX=3
INDEX=XED_REG_XMM28 -> VEXDEST4=1 REXX=1 SIBINDEX=4
INDEX=XED_REG_XMM29 -> VEXDEST4=1 REXX=1 SIBINDEX=5
INDEX=XED_REG_XMM30 -> VEXDEST4=1 REXX=1 SIBINDEX=6
INDEX=XED_REG_XMM31 -> VEXDEST4=1 REXX=1 SIBINDEX=7

View File


@@ -0,0 +1,119 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
ESIZE_128_BITS()::
otherwise -> nothing
ESIZE_64_BITS()::
otherwise -> nothing
ESIZE_32_BITS()::
otherwise -> nothing
ESIZE_16_BITS()::
otherwise -> nothing
ESIZE_8_BITS()::
otherwise -> nothing
ESIZE_4_BITS()::
otherwise -> nothing
ESIZE_2_BITS()::
otherwise -> nothing
ESIZE_1_BITS()::
otherwise -> nothing
NELEM_MOVDDUP()::
otherwise -> nothing
NELEM_FULLMEM()::
otherwise -> nothing
NELEM_HALFMEM()::
otherwise -> nothing
NELEM_QUARTERMEM()::
otherwise -> nothing
NELEM_EIGHTHMEM()::
otherwise -> nothing
NELEM_GPR_READER_BYTE()::
otherwise -> nothing
NELEM_GPR_READER_WORD()::
otherwise -> nothing
NELEM_GPR_WRITER_LDOP_D()::
otherwise -> nothing
NELEM_GPR_WRITER_LDOP_Q()::
otherwise -> nothing
NELEM_GPR_WRITER_STORE_BYTE()::
otherwise -> nothing
NELEM_GPR_WRITER_STORE_WORD()::
otherwise -> nothing
NELEM_TUPLE1_BYTE()::
otherwise -> nothing
NELEM_TUPLE1_WORD()::
otherwise -> nothing
NELEM_SCALAR()::
otherwise -> nothing
NELEM_TUPLE1_SUBDWORD()::
otherwise -> nothing
NELEM_GPR_READER()::
otherwise -> nothing
NELEM_GPR_READER_SUBDWORD()::
otherwise -> nothing
NELEM_GPR_WRITER_LDOP()::
otherwise -> nothing
NELEM_GPR_WRITER_STORE()::
otherwise -> nothing
NELEM_GPR_WRITER_STORE_SUBDWORD()::
otherwise -> nothing
NELEM_MEM128()::
BCAST!=0 -> error
otherwise -> BCRC=0
# TUPLE1,2,4,8, FULL and HALF
NELEM_TUPLE1()::
otherwise -> nothing
NELEM_GSCAT()::
otherwise -> nothing
NELEM_TUPLE2()::
otherwise -> nothing
NELEM_TUPLE4()::
otherwise -> nothing
NELEM_TUPLE8()::
otherwise -> nothing
# these have broadcasting
NELEM_FULL()::
BCAST!=0 -> BCRC=1
otherwise -> BCRC=0
NELEM_HALF()::
BCAST!=0 -> BCRC=1
otherwise -> BCRC=0
FIX_ROUND_LEN512()::
otherwise -> nothing
FIX_ROUND_LEN128()::
otherwise -> nothing

View File


@@ -0,0 +1,331 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
#The "MEM" suffix on tuples means NO BROADCAST ALLOWED
# SET THE ELEMENT SIZE DURING DECODE -- using spreadsheet InputSize field
# FIXME: fix and use 'otherwise' instead of REX=0!
ESIZE_128_BITS()::
REX=0 | ELEMENT_SIZE=128
ESIZE_64_BITS()::
REX=0 | ELEMENT_SIZE=64
ESIZE_32_BITS()::
REX=0 | ELEMENT_SIZE=32
ESIZE_16_BITS()::
REX=0 | ELEMENT_SIZE=16
ESIZE_8_BITS()::
REX=0 | ELEMENT_SIZE=8
ESIZE_4_BITS()::
REX=0 | ELEMENT_SIZE=4
ESIZE_2_BITS()::
REX=0 | ELEMENT_SIZE=2
ESIZE_1_BITS()::
REX=0 | ELEMENT_SIZE=1
# eightmem is a 8B reference
# quartermem is a 16B reference
# halfmem is a 32B reference
# fullmem is a 64B reference
# legacy movddup references 64b when doing a 128b VL
# but acts like fullmem for 256/512.
NELEM_MOVDDUP()::
ELEMENT_SIZE=64 VL128 | NELEM=1
ELEMENT_SIZE=64 VL256 | NELEM=4
ELEMENT_SIZE=64 VL512 | NELEM=8
# element size is in bits...
NELEM_FULLMEM():: # updated 2011-02-18
ELEMENT_SIZE=1 VL512 | NELEM=512
ELEMENT_SIZE=2 VL512 | NELEM=256
ELEMENT_SIZE=4 VL512 | NELEM=128
ELEMENT_SIZE=8 VL512 | NELEM=64
ELEMENT_SIZE=16 VL512 | NELEM=32
ELEMENT_SIZE=32 VL512 | NELEM=16
ELEMENT_SIZE=64 VL512 | NELEM=8
ELEMENT_SIZE=128 VL512 | NELEM=4
ELEMENT_SIZE=256 VL512 | NELEM=2
ELEMENT_SIZE=512 VL512 | NELEM=1
ELEMENT_SIZE=1 VL256 | NELEM=256
ELEMENT_SIZE=2 VL256 | NELEM=128
ELEMENT_SIZE=4 VL256 | NELEM=64
ELEMENT_SIZE=8 VL256 | NELEM=32
ELEMENT_SIZE=16 VL256 | NELEM=16
ELEMENT_SIZE=32 VL256 | NELEM=8
ELEMENT_SIZE=64 VL256 | NELEM=4
ELEMENT_SIZE=128 VL256 | NELEM=2
ELEMENT_SIZE=256 VL256 | NELEM=1
ELEMENT_SIZE=512 VL256 | error
ELEMENT_SIZE=1 VL128 | NELEM=128
ELEMENT_SIZE=2 VL128 | NELEM=64
ELEMENT_SIZE=4 VL128 | NELEM=32
ELEMENT_SIZE=8 VL128 | NELEM=16
ELEMENT_SIZE=16 VL128 | NELEM=8
ELEMENT_SIZE=32 VL128 | NELEM=4
ELEMENT_SIZE=64 VL128 | NELEM=2
ELEMENT_SIZE=128 VL128 | NELEM=1
ELEMENT_SIZE=256 VL128 | error
ELEMENT_SIZE=512 VL128 | error
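NELEM_FULLMEM() simply tabulates NELEM = vector-length-in-bits / ELEMENT_SIZE, with the impossible combinations marked as errors; the HALF/QUARTER/EIGHTH variants that follow divide the memory-reference size by 2, 4 and 8 first. Equivalent arithmetic as a sketch (not XED code):

VL_BITS = {"VL128": 128, "VL256": 256, "VL512": 512}

def nelem_fullmem(vl: str, element_size: int) -> int:
    bits = VL_BITS[vl]
    if element_size > bits:
        raise ValueError("error row in the table")
    return bits // element_size

assert nelem_fullmem("VL512", 32) == 16
assert nelem_fullmem("VL128", 8) == 16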
NELEM_HALFMEM():: # 32B/256b reference updated 2011-02-18
ELEMENT_SIZE=1 VL512 | NELEM=256
ELEMENT_SIZE=2 VL512 | NELEM=128
ELEMENT_SIZE=4 VL512 | NELEM=64
ELEMENT_SIZE=8 VL512 | NELEM=32
ELEMENT_SIZE=16 VL512 | NELEM=16
ELEMENT_SIZE=32 VL512 | NELEM=8
ELEMENT_SIZE=64 VL512 | NELEM=4
ELEMENT_SIZE=128 VL512 | NELEM=2
ELEMENT_SIZE=256 VL512 | NELEM=1
ELEMENT_SIZE=512 VL512 | error
ELEMENT_SIZE=1 VL256 | NELEM=128
ELEMENT_SIZE=2 VL256 | NELEM=64
ELEMENT_SIZE=4 VL256 | NELEM=32
ELEMENT_SIZE=8 VL256 | NELEM=16
ELEMENT_SIZE=16 VL256 | NELEM=8
ELEMENT_SIZE=32 VL256 | NELEM=4
ELEMENT_SIZE=64 VL256 | NELEM=2
ELEMENT_SIZE=128 VL256 | NELEM=1
ELEMENT_SIZE=256 VL256 | error
ELEMENT_SIZE=512 VL256 | error
ELEMENT_SIZE=1 VL128 | NELEM=64
ELEMENT_SIZE=2 VL128 | NELEM=32
ELEMENT_SIZE=4 VL128 | NELEM=16
ELEMENT_SIZE=8 VL128 | NELEM=8
ELEMENT_SIZE=16 VL128 | NELEM=4
ELEMENT_SIZE=32 VL128 | NELEM=2
ELEMENT_SIZE=64 VL128 | NELEM=1
ELEMENT_SIZE=128 VL128 | error
ELEMENT_SIZE=256 VL128 | error
ELEMENT_SIZE=512 VL128 | error
NELEM_QUARTERMEM():: # 16B/128b reference updated 2011-02-18
ELEMENT_SIZE=1 VL512 | NELEM=128
ELEMENT_SIZE=2 VL512 | NELEM=64
ELEMENT_SIZE=4 VL512 | NELEM=32
ELEMENT_SIZE=8 VL512 | NELEM=16
ELEMENT_SIZE=16 VL512 | NELEM=8
ELEMENT_SIZE=32 VL512 | NELEM=4
ELEMENT_SIZE=64 VL512 | NELEM=2
ELEMENT_SIZE=128 VL512 | NELEM=1
ELEMENT_SIZE=256 VL512 | error
ELEMENT_SIZE=512 VL512 | error
ELEMENT_SIZE=1 VL256 | NELEM=64
ELEMENT_SIZE=2 VL256 | NELEM=32
ELEMENT_SIZE=4 VL256 | NELEM=16
ELEMENT_SIZE=8 VL256 | NELEM=8
ELEMENT_SIZE=16 VL256 | NELEM=4
ELEMENT_SIZE=32 VL256 | NELEM=2
ELEMENT_SIZE=64 VL256 | NELEM=1
ELEMENT_SIZE=128 VL256 | error
ELEMENT_SIZE=256 VL256 | error
ELEMENT_SIZE=512 VL256 | error
ELEMENT_SIZE=1 VL128 | NELEM=32
ELEMENT_SIZE=2 VL128 | NELEM=16
ELEMENT_SIZE=4 VL128 | NELEM=8
ELEMENT_SIZE=8 VL128 | NELEM=4
ELEMENT_SIZE=16 VL128 | NELEM=2
ELEMENT_SIZE=32 VL128 | NELEM=1
ELEMENT_SIZE=64 VL128 | error
ELEMENT_SIZE=128 VL128 | error
ELEMENT_SIZE=256 VL128 | error
ELEMENT_SIZE=512 VL128 | error
NELEM_EIGHTHMEM():: # 8B/64b reference updated 2011-02-18
ELEMENT_SIZE=1 VL512 | NELEM=64
ELEMENT_SIZE=2 VL512 | NELEM=32
ELEMENT_SIZE=4 VL512 | NELEM=16
ELEMENT_SIZE=8 VL512 | NELEM=8
ELEMENT_SIZE=16 VL512 | NELEM=4
ELEMENT_SIZE=32 VL512 | NELEM=2
ELEMENT_SIZE=64 VL512 | NELEM=1
ELEMENT_SIZE=128 VL512 | error
ELEMENT_SIZE=256 VL512 | error
ELEMENT_SIZE=512 VL512 | error
ELEMENT_SIZE=1 VL256 | NELEM=32
ELEMENT_SIZE=2 VL256 | NELEM=16
ELEMENT_SIZE=4 VL256 | NELEM=8
ELEMENT_SIZE=8 VL256 | NELEM=4
ELEMENT_SIZE=16 VL256 | NELEM=2
ELEMENT_SIZE=32 VL256 | NELEM=1
ELEMENT_SIZE=64 VL256 | error
ELEMENT_SIZE=128 VL256 | error
ELEMENT_SIZE=256 VL256 | error
ELEMENT_SIZE=512 VL256 | error
ELEMENT_SIZE=1 VL128 | NELEM=16
ELEMENT_SIZE=2 VL128 | NELEM=8
ELEMENT_SIZE=4 VL128 | NELEM=4
ELEMENT_SIZE=8 VL128 | NELEM=2
ELEMENT_SIZE=16 VL128 | NELEM=1
ELEMENT_SIZE=32 VL128 | error
ELEMENT_SIZE=64 VL128 | error
ELEMENT_SIZE=128 VL128 | error
ELEMENT_SIZE=256 VL128 | error
ELEMENT_SIZE=512 VL128 | error
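The FULLMEM/HALFMEM/QUARTERMEM/EIGHTHMEM tables above all follow one rule: NELEM = (VL / divisor) / ELEMENT_SIZE, with divisor 1, 2, 4, or 8. A hedged restatement (hypothetical helper, only to make the pattern explicit):
/* e.g. HALFMEM at VL512 with 32-bit elements: (512/2)/32 = 8, matching the table. */
static unsigned nelem_fraction(unsigned vl_bits, unsigned divisor, unsigned element_size_bits)
{
    return (vl_bits / divisor) / element_size_bits;  /* a result of 0 corresponds to an "error" row */
}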
NELEM_GPR_READER_BYTE()::
VL128 | NELEM=1
VL256 | NELEM=1
VL512 | NELEM=1
NELEM_GPR_READER_WORD()::
VL128 | NELEM=1
VL256 | NELEM=1
VL512 | NELEM=1
NELEM_GPR_WRITER_LDOP_D()::
VL128 | NELEM=1
VL256 | NELEM=1
VL512 | NELEM=1
NELEM_GPR_WRITER_LDOP_Q()::
VL128 | NELEM=1
VL256 | NELEM=1
VL512 | NELEM=1
NELEM_GPR_WRITER_STORE_BYTE()::
VL128 | NELEM=1
VL256 | NELEM=1
VL512 | NELEM=1
NELEM_GPR_WRITER_STORE_WORD()::
VL128 | NELEM=1
VL256 | NELEM=1
VL512 | NELEM=1
NELEM_TUPLE1_BYTE()::
VL128 | NELEM=1
VL256 | NELEM=1
VL512 | NELEM=1
NELEM_TUPLE1_WORD()::
VL128 | NELEM=1
VL256 | NELEM=1
VL512 | NELEM=1
NELEM_SCALAR():: # same as tuple1 updated 2011-02-18
VL128 | NELEM=1
VL256 | NELEM=1
VL512 | NELEM=1
NELEM_TUPLE1_SUBDWORD()::
VL128 | NELEM=1
VL256 | NELEM=1
VL512 | NELEM=1
NELEM_GPR_READER()::
VL128 | NELEM=1
VL256 | NELEM=1
VL512 | NELEM=1
NELEM_GPR_READER_SUBDWORD()::
VL128 | NELEM=1
VL256 | NELEM=1
VL512 | NELEM=1
NELEM_GPR_WRITER_LDOP()::
VL128 | NELEM=1
VL256 | NELEM=1
VL512 | NELEM=1
NELEM_GPR_WRITER_STORE()::
VL128 | NELEM=1
VL256 | NELEM=1
VL512 | NELEM=1
NELEM_GPR_WRITER_STORE_SUBDWORD()::
VL128 | NELEM=1
VL256 | NELEM=1
VL512 | NELEM=1
# TUPLE1,2,4,8, FULL and HALF
NELEM_TUPLE1():: #updated 2011-02-18
VL128 | NELEM=1
VL256 | NELEM=1
VL512 | NELEM=1
NELEM_GSCAT()::
VL128 | NELEM=1
VL256 | NELEM=1
VL512 | NELEM=1
NELEM_TUPLE2():: #updated 2011-02-18
VL128 | NELEM=2
VL256 | NELEM=2
VL512 | NELEM=2
NELEM_TUPLE4():: #updated 2011-02-18
VL128 | NELEM=4
VL256 | NELEM=4
VL512 | NELEM=4
NELEM_TUPLE8():: # updated 2011-02-18
VL128 | NELEM=8
VL256 | NELEM=8
VL512 | NELEM=8
NELEM_MEM128():: # element_size=64 always!! SPECIAL updated 2011-02-18
BCRC=0b0 | ELEMENT_SIZE=64 NELEM=2
BCRC=0b1 | error
NELEM_FULL()::
BCRC=0b0 ELEMENT_SIZE=16 VL512 | NELEM=32
BCRC=0b1 ELEMENT_SIZE=16 VL512 | NELEM=1 EMX_BROADCAST_1TO32_16
BCRC=0b0 ELEMENT_SIZE=32 VL512 | NELEM=16
BCRC=0b1 ELEMENT_SIZE=32 VL512 | NELEM=1 EMX_BROADCAST_1TO16_32
BCRC=0b0 ELEMENT_SIZE=64 VL512 | NELEM=8
BCRC=0b1 ELEMENT_SIZE=64 VL512 | NELEM=1 EMX_BROADCAST_1TO8_64
BCRC=0b0 ELEMENT_SIZE=16 VL256 | NELEM=16
BCRC=0b1 ELEMENT_SIZE=16 VL256 | NELEM=1 EMX_BROADCAST_1TO16_16
BCRC=0b0 ELEMENT_SIZE=32 VL256 | NELEM=8
BCRC=0b1 ELEMENT_SIZE=32 VL256 | NELEM=1 EMX_BROADCAST_1TO8_32
BCRC=0b0 ELEMENT_SIZE=64 VL256 | NELEM=4
BCRC=0b1 ELEMENT_SIZE=64 VL256 | NELEM=1 EMX_BROADCAST_1TO4_64
BCRC=0b0 ELEMENT_SIZE=16 VL128 | NELEM=8
BCRC=0b1 ELEMENT_SIZE=16 VL128 | NELEM=1 EMX_BROADCAST_1TO8_16
BCRC=0b0 ELEMENT_SIZE=32 VL128 | NELEM=4
BCRC=0b1 ELEMENT_SIZE=32 VL128 | NELEM=1 EMX_BROADCAST_1TO4_32
BCRC=0b0 ELEMENT_SIZE=64 VL128 | NELEM=2
BCRC=0b1 ELEMENT_SIZE=64 VL128 | NELEM=1 EMX_BROADCAST_1TO2_64
# 512b=64B=16DW=8QW -> Half = 256b=32B=8DW=4QW
# 256b=32B=8DW=4QW -> Half = 128b=16B=4DW=2QW
# 128b=16B=4DW=2QW -> Half = 64b=8B=2DW=1QW
NELEM_HALF():: # updated 2011-02-18
BCRC=0b0 ELEMENT_SIZE=32 VL512 | NELEM=8
BCRC=0b1 ELEMENT_SIZE=32 VL512 | NELEM=1 EMX_BROADCAST_1TO8_32
BCRC=0b0 ELEMENT_SIZE=32 VL256 | NELEM=4
BCRC=0b1 ELEMENT_SIZE=32 VL256 | NELEM=1 EMX_BROADCAST_1TO4_32
BCRC=0b0 ELEMENT_SIZE=32 VL128 | NELEM=2
BCRC=0b1 ELEMENT_SIZE=32 VL128 | NELEM=1 EMX_BROADCAST_1TO2_32
# For reg/reg ops with rounding control, we have to avoid having the
# RC bits mess up the vector length, so we pin the length here.
FIX_ROUND_LEN512()::
mode16 | VL512
mode32 | VL512
mode64 | VL512
FIX_ROUND_LEN128()::
mode16 | VL128
mode32 | VL128
mode64 | VL128
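These fixups exist because, for register-register forms with EVEX.b set, the L'L bits carry the rounding control instead of the vector length, so the length has to be pinned to the instruction's static VL. A hedged decode-side sketch (hypothetical helper, not XED's API):
static unsigned effective_vl_bits(unsigned llrc, unsigned evex_b,
                                  unsigned modrm_is_reg, unsigned static_vl_bits)
{
    if (evex_b && modrm_is_reg)
        return static_vl_bits;   /* L'L holds the rounding mode, not the length */
    return 128u << llrc;         /* otherwise L'L selects 128/256/512 */
}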

View File

@@ -0,0 +1,46 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
AVX512_ROUND()::
LLRC=0b00 | ROUNDC=1 SAE=1
LLRC=0b01 | ROUNDC=2 SAE=1
LLRC=0b10 | ROUNDC=3 SAE=1
LLRC=0b11 | ROUNDC=4 SAE=1
SAE()::
BCRC=1 | SAE=1
BCRC=0 | error
# NEWKEY: VEXPFX_OP == 0x62
# NEWKEY: MBITS --> REXR, REXX (complemented MBITS)
# NEWKEY: BRR -> REXB, REXRR (complemented BRR bits)
# NEWKEY: EVMAP -> V0F, V0F38, V0F3A or error
# NEWKEY: REXW
# NEWKEY: VEXDEST3
# NEWKEY: VEXDEST210
# NEWKEY: UBIT
# NEWKEY: VEXPP_OP -> VNP/V66/VF3/VF2 recoding
# NEWKEY: confirm no refining prefix or rex prefix
# NEWKEY: set VEXVALID=2
# NEWKEY: ZEROING[z]
# NEWKEY: LLRCDECODE()-> LLRC -> VL128,256,512 or error
# NEWKEY: BCRC[b]
# NEWKEY: VEXDEST4P[p]
# NEWKEY: VEXDEST4_INVERT() <<<< invert VEXDEST4
# NEWKEY: MASK[aaa]
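As a hedged companion to the decode steps listed above, a sketch of the three EVEX payload bytes that follow the 0x62 escape (struct and field names are hypothetical; R/X/B/R', vvvv, and V' are stored complemented, as noted above):
#include <stdint.h>
typedef struct {
    unsigned rexr, rexx, rexb, rexrr;             /* byte 1: inverted R, X, B, R'        */
    unsigned map;                                  /* byte 1: low bits select 0F/0F38/0F3A */
    unsigned rexw, vexdest, ubit, pp;              /* byte 2: W, inverted vvvv, U, pp      */
    unsigned zeroing, llrc, bcrc, vexdest4, mask;  /* byte 3: z, L'L, b, V', aaa           */
} evex_payload_t;
static void evex_unpack(const uint8_t p[3], evex_payload_t* e)
{
    e->rexr    = (p[0] >> 7) & 1;  e->rexx     = (p[0] >> 6) & 1;
    e->rexb    = (p[0] >> 5) & 1;  e->rexrr    = (p[0] >> 4) & 1;
    e->map     =  p[0] & 0xF;
    e->rexw    = (p[1] >> 7) & 1;  e->vexdest  = (p[1] >> 3) & 0xF;
    e->ubit    = (p[1] >> 2) & 1;  e->pp       =  p[1] & 0x3;
    e->zeroing = (p[2] >> 7) & 1;  e->llrc     = (p[2] >> 5) & 0x3;
    e->bcrc    = (p[2] >> 4) & 1;  e->vexdest4 = (p[2] >> 3) & 1;
    e->mask    =  p[2] & 0x7;
}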

View File

@@ -0,0 +1,162 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
# These bind the operand deciders that control the encoding
SEQUENCE ISA_BINDINGS
FIXUP_EOSZ_ENC_BIND()
FIXUP_EASZ_ENC_BIND()
ASZ_NONTERM_BIND()
INSTRUCTIONS_BIND() # not calling tree splitter! GSSE instructions must set VEXVALID=1
OSZ_NONTERM_ENC_BIND() # OSZ must be after the instructions so that DF64 is bound (and before any prefixes obviously)
PREFIX_ENC_BIND()
VEXED_REX_BIND()
# These emit the bits and bytes that make up the encoding
SEQUENCE ISA_EMIT
PREFIX_ENC_EMIT()
VEXED_REX_EMIT()
INSTRUCTIONS_EMIT()
VEXED_REX()::
VEXVALID=2 -> EVEX_ENC()
#################################################
SEQUENCE EVEX_ENC_BIND
# R, X, B, R', map (byte 1)
# W, vvvv, U, pp (byte 2)
# z, LL/RC, b, V', aaa (byte 3)
EVEX_62_REXR_ENC_BIND
EVEX_REXX_ENC_BIND
EVEX_REXB_ENC_BIND
EVEX_REXRR_ENC_BIND
EVEX_MAP_ENC_BIND
EVEX_REXW_VVVV_ENC_BIND
EVEX_UPP_ENC_BIND
EVEX_LL_ENC_BIND
AVX512_EVEX_BYTE3_ENC_BIND
SEQUENCE EVEX_ENC_EMIT
EVEX_62_REXR_ENC_EMIT
EVEX_REXX_ENC_EMIT
EVEX_REXB_ENC_EMIT
EVEX_REXRR_ENC_EMIT
EVEX_MAP_ENC_EMIT
EVEX_REXW_VVVV_ENC_EMIT
EVEX_UPP_ENC_EMIT
EVEX_LL_ENC_EMIT
AVX512_EVEX_BYTE3_ENC_EMIT
EVEX_62_REXR_ENC()::
mode64 REXR=1 -> 0x62 0b0
mode64 REXR=0 -> 0x62 0b1
mode32 REXR=1 -> error
mode32 REXR=0 -> 0x62 0b1
EVEX_REXX_ENC()::
mode64 REXX=1 -> 0b0
mode64 REXX=0 -> 0b1
mode32 REXX=1 -> error
mode32 REXX=0 -> 0b1
EVEX_REXB_ENC()::
mode64 REXB=1 -> 0b0
mode64 REXB=0 -> 0b1
mode32 REXB=1 -> error
mode32 REXB=0 -> 0b1
EVEX_REXRR_ENC()::
mode64 REXRR=1 -> 0b0
mode64 REXRR=0 -> 0b1
mode32 REXRR=1 -> error
mode32 REXRR=0 -> 0b1
EVEX_MAP_ENC()::
MAP=0 -> 0b0000
MAP=1 -> 0b0001
MAP=2 -> 0b0010
MAP=3 -> 0b0011
EVEX_REXW_VVVV_ENC()::
true REXW[w] VEXDEST3[u] VEXDEST210[ddd] -> w u_ddd
# emit the EVEX.U=1 with the EVEX.pp field
EVEX_UPP_ENC()::
VNP -> 0b100
V66 -> 0b101
VF3 -> 0b110
VF2 -> 0b111
EVEX_LL_ENC()::
ROUNDC=0 SAE=0 VL128 -> LLRC=0
ROUNDC=0 SAE=0 VL256 -> LLRC=1
ROUNDC=0 SAE=0 VL512 -> LLRC=2
# scalars (XED has scalars as VL128)
ROUNDC=0 SAE=1 VL128 -> LLRC=0 BCRC=1 # sae only, no rounding
ROUNDC=1 SAE=1 VL128 -> LLRC=0 BCRC=1 # rounding only supported with sae
ROUNDC=2 SAE=1 VL128 -> LLRC=1 BCRC=1 # rounding only supported with sae
ROUNDC=3 SAE=1 VL128 -> LLRC=2 BCRC=1 # rounding only supported with sae
ROUNDC=4 SAE=1 VL128 -> LLRC=3 BCRC=1 # rounding only supported with sae
# everything else (must be VL512)
ROUNDC=0 SAE=1 VL512 -> LLRC=0 BCRC=1 # sae only, no rounding
ROUNDC=1 SAE=1 VL512 -> LLRC=0 BCRC=1 # rounding only supported with sae
ROUNDC=2 SAE=1 VL512 -> LLRC=1 BCRC=1 # rounding only supported with sae
ROUNDC=3 SAE=1 VL512 -> LLRC=2 BCRC=1 # rounding only supported with sae
ROUNDC=4 SAE=1 VL512 -> LLRC=3 BCRC=1 # rounding only supported with sae
AVX512_EVEX_BYTE3_ENC()::
ZEROING[z] LLRC[nn] BCRC[b] VEXDEST4=0 MASK[aaa] -> z_nn_b 0b1 aaa
ZEROING[z] LLRC[nn] BCRC[b] VEXDEST4=1 MASK[aaa] -> z_nn_b 0b0 aaa
#################################################
SEQUENCE NEWVEX3_ENC_BIND
VEX_TYPE_ENC_BIND
VEX_REXR_ENC_BIND
VEX_REXXB_ENC_BIND
VEX_MAP_ENC_BIND
VEX_REG_ENC_BIND
VEX_ESCVL_ENC_BIND
SEQUENCE NEWVEX3_ENC_EMIT
VEX_TYPE_ENC_EMIT
VEX_REXR_ENC_EMIT
VEX_REXXB_ENC_EMIT
VEX_MAP_ENC_EMIT
VEX_REG_ENC_EMIT
VEX_ESCVL_ENC_EMIT
##############################################################################
AVX512_ROUND()::
ROUNDC=1 -> LLRC=0 BCRC=1
ROUNDC=2 -> LLRC=1 BCRC=1
ROUNDC=3 -> LLRC=2 BCRC=1
ROUNDC=4 -> LLRC=3 BCRC=1
SAE()::
SAE=1 -> BCRC=1
SAE=0 -> BCRC=0
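For reference, the ROUNDC values used in these encode rules correspond to the rounding-mode suffixes defined later in this commit's conversion table; a hedged lookup (hypothetical table, not XED's API):
static const char* const roundc_suffix[5] = {
    "",           /* ROUNDC=0: no embedded rounding */
    "{rne-sae}",  /* ROUNDC=1 -> LLRC=0 */
    "{rd-sae}",   /* ROUNDC=2 -> LLRC=1 */
    "{ru-sae}",   /* ROUNDC=3 -> LLRC=2 */
    "{rz-sae}",   /* ROUNDC=4 -> LLRC=3 */
};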

View File

@@ -0,0 +1,38 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
#
ZEROING SCALAR xed_bits_t 1 SUPPRESSED NOPRINT INTERNAL DO EO
LLRC SCALAR xed_bits_t 2 SUPPRESSED NOPRINT INTERNAL DO EO
BCRC SCALAR xed_bits_t 1 SUPPRESSED NOPRINT INTERNAL DO EO
REXRR SCALAR xed_bits_t 1 SUPPRESSED NOPRINT INTERNAL DO EO
VEXDEST4 SCALAR xed_bits_t 1 SUPPRESSED NOPRINT INTERNAL DO EO
MASK SCALAR xed_bits_t 3 SUPPRESSED NOPRINT INTERNAL DO EO
ROUNDC SCALAR xed_bits_t 3 SUPPRESSED NOPRINT INTERNAL DO EI
SAE SCALAR xed_bits_t 1 SUPPRESSED NOPRINT INTERNAL DO EI
# this is required for KNC's disp8 C-code override file
# (for their unaligned memop support).
NO_SCALE_DISP8 SCALAR xed_bits_t 1 SUPPRESSED NOPRINT INTERNAL DO EO
EVEXRR SCALAR xed_bits_t 1 SUPPRESSED NOPRINT INTERNAL DO EO
UBIT SCALAR xed_bits_t 1 SUPPRESSED NOPRINT INTERNAL DO EO

File diff suppressed because it is too large

View File

@@ -0,0 +1,21 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
#Filename #priority, largest wins
#cur_dir is current file's directory #(e.g. 4 wins over 0)
%(cur_dir)s/ild/include/avx512-ild-getters.h 4

View File

@@ -0,0 +1,25 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
K0 mask 64 K0 0
K1 mask 64 K1 1
K2 mask 64 K2 2
K3 mask 64 K3 3
K4 mask 64 K4 4
K5 mask 64 K5 5
K6 mask 64 K6 6
K7 mask 64 K7 7

View File

@@ -0,0 +1,54 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
#
#code XTYPE width16 width32 width64 (if only one width is presented, it is for all widths)
#
vv var 0 # relies on nelem * elem_size
zv var 0 # relies on nelem * elem_size
wrd u16 16bits
mskw i1 64bits # FIXME: bad name
zmskw i1 512bits
zf32 f32 512bits
zf64 f64 512bits
zb i8 512bits
zw i16 512bits
zd i32 512bits
zq i64 512bits
zub u8 512bits
zuw u16 512bits
zud u32 512bits
zuq u64 512bits
# alternative names...
zi8 i8 512bits
zi16 i16 512bits
zi32 i32 512bits
zi64 i64 512bits
zu8 u8 512bits
zu16 u16 512bits
zu32 u32 512bits
zu64 u64 512bits
zu128 u128 512bits

View File

@@ -0,0 +1,19 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
64 zmmword z

View File

@@ -0,0 +1,26 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
xed_reg_enum_t GPRm_B()::
mode64 | OUTREG=GPR64_B()
mode32 | OUTREG=GPR32_B()
mode16 | OUTREG=GPR32_B()
xed_reg_enum_t GPRm_R()::
mode64 | OUTREG=GPR64_R()
mode32 | OUTREG=GPR32_R()
mode16 | OUTREG=GPR32_R()

View File

@@ -0,0 +1,75 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
# FIXME: the rest of this file is common w/KNC. Split it out to avoid
# duplication
xed_reg_enum_t MASK1()::
MASK=0x0 | OUTREG=XED_REG_K0
MASK=0x1 | OUTREG=XED_REG_K1
MASK=0x2 | OUTREG=XED_REG_K2
MASK=0x3 | OUTREG=XED_REG_K3
MASK=0x4 | OUTREG=XED_REG_K4
MASK=0x5 | OUTREG=XED_REG_K5
MASK=0x6 | OUTREG=XED_REG_K6
MASK=0x7 | OUTREG=XED_REG_K7
xed_reg_enum_t MASKNOT0()::
MASK=0x0 | OUTREG=XED_REG_ERROR
MASK=0x1 | OUTREG=XED_REG_K1
MASK=0x2 | OUTREG=XED_REG_K2
MASK=0x3 | OUTREG=XED_REG_K3
MASK=0x4 | OUTREG=XED_REG_K4
MASK=0x5 | OUTREG=XED_REG_K5
MASK=0x6 | OUTREG=XED_REG_K6
MASK=0x7 | OUTREG=XED_REG_K7
# used for compares in EVEX
xed_reg_enum_t MASK_R()::
REXRR=0 REXR=0 REG=0x0 | OUTREG=XED_REG_K0
REXRR=0 REXR=0 REG=0x1 | OUTREG=XED_REG_K1
REXRR=0 REXR=0 REG=0x2 | OUTREG=XED_REG_K2
REXRR=0 REXR=0 REG=0x3 | OUTREG=XED_REG_K3
REXRR=0 REXR=0 REG=0x4 | OUTREG=XED_REG_K4
REXRR=0 REXR=0 REG=0x5 | OUTREG=XED_REG_K5
REXRR=0 REXR=0 REG=0x6 | OUTREG=XED_REG_K6
REXRR=0 REXR=0 REG=0x7 | OUTREG=XED_REG_K7
# only used in VEX space for K-mask ops
xed_reg_enum_t MASK_B()::
REXB=0 RM=0x0 | OUTREG=XED_REG_K0
REXB=0 RM=0x1 | OUTREG=XED_REG_K1
REXB=0 RM=0x2 | OUTREG=XED_REG_K2
REXB=0 RM=0x3 | OUTREG=XED_REG_K3
REXB=0 RM=0x4 | OUTREG=XED_REG_K4
REXB=0 RM=0x5 | OUTREG=XED_REG_K5
REXB=0 RM=0x6 | OUTREG=XED_REG_K6
REXB=0 RM=0x7 | OUTREG=XED_REG_K7
# only used in VEX space for K-mask ops
# stored inverted
xed_reg_enum_t MASK_N()::
VEXDEST3=1 VEXDEST210=0x0 | OUTREG=XED_REG_K7
VEXDEST3=1 VEXDEST210=0x1 | OUTREG=XED_REG_K6
VEXDEST3=1 VEXDEST210=0x2 | OUTREG=XED_REG_K5
VEXDEST3=1 VEXDEST210=0x3 | OUTREG=XED_REG_K4
VEXDEST3=1 VEXDEST210=0x4 | OUTREG=XED_REG_K3
VEXDEST3=1 VEXDEST210=0x5 | OUTREG=XED_REG_K2
VEXDEST3=1 VEXDEST210=0x6 | OUTREG=XED_REG_K1
VEXDEST3=1 VEXDEST210=0x7 | OUTREG=XED_REG_K0

View File

@@ -0,0 +1,172 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
xed_reg_enum_t XMM_B3()::
mode16 | OUTREG=XMM_B3_32()
mode32 | OUTREG=XMM_B3_32()
mode64 | OUTREG=XMM_B3_64()
xed_reg_enum_t XMM_B3_32()::
RM=0 | OUTREG=XED_REG_XMM0
RM=1 | OUTREG=XED_REG_XMM1
RM=2 | OUTREG=XED_REG_XMM2
RM=3 | OUTREG=XED_REG_XMM3
RM=4 | OUTREG=XED_REG_XMM4
RM=5 | OUTREG=XED_REG_XMM5
RM=6 | OUTREG=XED_REG_XMM6
RM=7 | OUTREG=XED_REG_XMM7
xed_reg_enum_t XMM_B3_64()::
REXX=0 REXB=0 RM=0 | OUTREG=XED_REG_XMM0
REXX=0 REXB=0 RM=1 | OUTREG=XED_REG_XMM1
REXX=0 REXB=0 RM=2 | OUTREG=XED_REG_XMM2
REXX=0 REXB=0 RM=3 | OUTREG=XED_REG_XMM3
REXX=0 REXB=0 RM=4 | OUTREG=XED_REG_XMM4
REXX=0 REXB=0 RM=5 | OUTREG=XED_REG_XMM5
REXX=0 REXB=0 RM=6 | OUTREG=XED_REG_XMM6
REXX=0 REXB=0 RM=7 | OUTREG=XED_REG_XMM7
REXX=0 REXB=1 RM=0 | OUTREG=XED_REG_XMM8
REXX=0 REXB=1 RM=1 | OUTREG=XED_REG_XMM9
REXX=0 REXB=1 RM=2 | OUTREG=XED_REG_XMM10
REXX=0 REXB=1 RM=3 | OUTREG=XED_REG_XMM11
REXX=0 REXB=1 RM=4 | OUTREG=XED_REG_XMM12
REXX=0 REXB=1 RM=5 | OUTREG=XED_REG_XMM13
REXX=0 REXB=1 RM=6 | OUTREG=XED_REG_XMM14
REXX=0 REXB=1 RM=7 | OUTREG=XED_REG_XMM15
REXX=1 REXB=0 RM=0 | OUTREG=XED_REG_XMM16
REXX=1 REXB=0 RM=1 | OUTREG=XED_REG_XMM17
REXX=1 REXB=0 RM=2 | OUTREG=XED_REG_XMM18
REXX=1 REXB=0 RM=3 | OUTREG=XED_REG_XMM19
REXX=1 REXB=0 RM=4 | OUTREG=XED_REG_XMM20
REXX=1 REXB=0 RM=5 | OUTREG=XED_REG_XMM21
REXX=1 REXB=0 RM=6 | OUTREG=XED_REG_XMM22
REXX=1 REXB=0 RM=7 | OUTREG=XED_REG_XMM23
REXX=1 REXB=1 RM=0 | OUTREG=XED_REG_XMM24
REXX=1 REXB=1 RM=1 | OUTREG=XED_REG_XMM25
REXX=1 REXB=1 RM=2 | OUTREG=XED_REG_XMM26
REXX=1 REXB=1 RM=3 | OUTREG=XED_REG_XMM27
REXX=1 REXB=1 RM=4 | OUTREG=XED_REG_XMM28
REXX=1 REXB=1 RM=5 | OUTREG=XED_REG_XMM29
REXX=1 REXB=1 RM=6 | OUTREG=XED_REG_XMM30
REXX=1 REXB=1 RM=7 | OUTREG=XED_REG_XMM31
xed_reg_enum_t YMM_B3()::
mode16 | OUTREG=YMM_B3_32()
mode32 | OUTREG=YMM_B3_32()
mode64 | OUTREG=YMM_B3_64()
xed_reg_enum_t YMM_B3_32()::
RM=0 | OUTREG=XED_REG_YMM0
RM=1 | OUTREG=XED_REG_YMM1
RM=2 | OUTREG=XED_REG_YMM2
RM=3 | OUTREG=XED_REG_YMM3
RM=4 | OUTREG=XED_REG_YMM4
RM=5 | OUTREG=XED_REG_YMM5
RM=6 | OUTREG=XED_REG_YMM6
RM=7 | OUTREG=XED_REG_YMM7
xed_reg_enum_t YMM_B3_64()::
REXX=0 REXB=0 RM=0 | OUTREG=XED_REG_YMM0
REXX=0 REXB=0 RM=1 | OUTREG=XED_REG_YMM1
REXX=0 REXB=0 RM=2 | OUTREG=XED_REG_YMM2
REXX=0 REXB=0 RM=3 | OUTREG=XED_REG_YMM3
REXX=0 REXB=0 RM=4 | OUTREG=XED_REG_YMM4
REXX=0 REXB=0 RM=5 | OUTREG=XED_REG_YMM5
REXX=0 REXB=0 RM=6 | OUTREG=XED_REG_YMM6
REXX=0 REXB=0 RM=7 | OUTREG=XED_REG_YMM7
REXX=0 REXB=1 RM=0 | OUTREG=XED_REG_YMM8
REXX=0 REXB=1 RM=1 | OUTREG=XED_REG_YMM9
REXX=0 REXB=1 RM=2 | OUTREG=XED_REG_YMM10
REXX=0 REXB=1 RM=3 | OUTREG=XED_REG_YMM11
REXX=0 REXB=1 RM=4 | OUTREG=XED_REG_YMM12
REXX=0 REXB=1 RM=5 | OUTREG=XED_REG_YMM13
REXX=0 REXB=1 RM=6 | OUTREG=XED_REG_YMM14
REXX=0 REXB=1 RM=7 | OUTREG=XED_REG_YMM15
REXX=1 REXB=0 RM=0 | OUTREG=XED_REG_YMM16
REXX=1 REXB=0 RM=1 | OUTREG=XED_REG_YMM17
REXX=1 REXB=0 RM=2 | OUTREG=XED_REG_YMM18
REXX=1 REXB=0 RM=3 | OUTREG=XED_REG_YMM19
REXX=1 REXB=0 RM=4 | OUTREG=XED_REG_YMM20
REXX=1 REXB=0 RM=5 | OUTREG=XED_REG_YMM21
REXX=1 REXB=0 RM=6 | OUTREG=XED_REG_YMM22
REXX=1 REXB=0 RM=7 | OUTREG=XED_REG_YMM23
REXX=1 REXB=1 RM=0 | OUTREG=XED_REG_YMM24
REXX=1 REXB=1 RM=1 | OUTREG=XED_REG_YMM25
REXX=1 REXB=1 RM=2 | OUTREG=XED_REG_YMM26
REXX=1 REXB=1 RM=3 | OUTREG=XED_REG_YMM27
REXX=1 REXB=1 RM=4 | OUTREG=XED_REG_YMM28
REXX=1 REXB=1 RM=5 | OUTREG=XED_REG_YMM29
REXX=1 REXB=1 RM=6 | OUTREG=XED_REG_YMM30
REXX=1 REXB=1 RM=7 | OUTREG=XED_REG_YMM31
xed_reg_enum_t ZMM_B3()::
mode16 | OUTREG=ZMM_B3_32()
mode32 | OUTREG=ZMM_B3_32()
mode64 | OUTREG=ZMM_B3_64()
xed_reg_enum_t ZMM_B3_32()::
RM=0 | OUTREG=XED_REG_ZMM0
RM=1 | OUTREG=XED_REG_ZMM1
RM=2 | OUTREG=XED_REG_ZMM2
RM=3 | OUTREG=XED_REG_ZMM3
RM=4 | OUTREG=XED_REG_ZMM4
RM=5 | OUTREG=XED_REG_ZMM5
RM=6 | OUTREG=XED_REG_ZMM6
RM=7 | OUTREG=XED_REG_ZMM7
xed_reg_enum_t ZMM_B3_64()::
REXX=0 REXB=0 RM=0 | OUTREG=XED_REG_ZMM0
REXX=0 REXB=0 RM=1 | OUTREG=XED_REG_ZMM1
REXX=0 REXB=0 RM=2 | OUTREG=XED_REG_ZMM2
REXX=0 REXB=0 RM=3 | OUTREG=XED_REG_ZMM3
REXX=0 REXB=0 RM=4 | OUTREG=XED_REG_ZMM4
REXX=0 REXB=0 RM=5 | OUTREG=XED_REG_ZMM5
REXX=0 REXB=0 RM=6 | OUTREG=XED_REG_ZMM6
REXX=0 REXB=0 RM=7 | OUTREG=XED_REG_ZMM7
REXX=0 REXB=1 RM=0 | OUTREG=XED_REG_ZMM8
REXX=0 REXB=1 RM=1 | OUTREG=XED_REG_ZMM9
REXX=0 REXB=1 RM=2 | OUTREG=XED_REG_ZMM10
REXX=0 REXB=1 RM=3 | OUTREG=XED_REG_ZMM11
REXX=0 REXB=1 RM=4 | OUTREG=XED_REG_ZMM12
REXX=0 REXB=1 RM=5 | OUTREG=XED_REG_ZMM13
REXX=0 REXB=1 RM=6 | OUTREG=XED_REG_ZMM14
REXX=0 REXB=1 RM=7 | OUTREG=XED_REG_ZMM15
REXX=1 REXB=0 RM=0 | OUTREG=XED_REG_ZMM16
REXX=1 REXB=0 RM=1 | OUTREG=XED_REG_ZMM17
REXX=1 REXB=0 RM=2 | OUTREG=XED_REG_ZMM18
REXX=1 REXB=0 RM=3 | OUTREG=XED_REG_ZMM19
REXX=1 REXB=0 RM=4 | OUTREG=XED_REG_ZMM20
REXX=1 REXB=0 RM=5 | OUTREG=XED_REG_ZMM21
REXX=1 REXB=0 RM=6 | OUTREG=XED_REG_ZMM22
REXX=1 REXB=0 RM=7 | OUTREG=XED_REG_ZMM23
REXX=1 REXB=1 RM=0 | OUTREG=XED_REG_ZMM24
REXX=1 REXB=1 RM=1 | OUTREG=XED_REG_ZMM25
REXX=1 REXB=1 RM=2 | OUTREG=XED_REG_ZMM26
REXX=1 REXB=1 RM=3 | OUTREG=XED_REG_ZMM27
REXX=1 REXB=1 RM=4 | OUTREG=XED_REG_ZMM28
REXX=1 REXB=1 RM=5 | OUTREG=XED_REG_ZMM29
REXX=1 REXB=1 RM=6 | OUTREG=XED_REG_ZMM30
REXX=1 REXB=1 RM=7 | OUTREG=XED_REG_ZMM31

View File

@@ -0,0 +1,174 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
xed_reg_enum_t XMM_N3()::
mode16 | OUTREG=XMM_N3_32()
mode32 | OUTREG=XMM_N3_32()
mode64 | OUTREG=XMM_N3_64()
xed_reg_enum_t XMM_N3_32()::
VEXDEST210=7 | OUTREG=XED_REG_XMM0
VEXDEST210=6 | OUTREG=XED_REG_XMM1
VEXDEST210=5 | OUTREG=XED_REG_XMM2
VEXDEST210=4 | OUTREG=XED_REG_XMM3
VEXDEST210=3 | OUTREG=XED_REG_XMM4
VEXDEST210=2 | OUTREG=XED_REG_XMM5
VEXDEST210=1 | OUTREG=XED_REG_XMM6
VEXDEST210=0 | OUTREG=XED_REG_XMM7
xed_reg_enum_t XMM_N3_64()::
VEXDEST4=0 VEXDEST3=1 VEXDEST210=7 | OUTREG=XED_REG_XMM0
VEXDEST4=0 VEXDEST3=1 VEXDEST210=6 | OUTREG=XED_REG_XMM1
VEXDEST4=0 VEXDEST3=1 VEXDEST210=5 | OUTREG=XED_REG_XMM2
VEXDEST4=0 VEXDEST3=1 VEXDEST210=4 | OUTREG=XED_REG_XMM3
VEXDEST4=0 VEXDEST3=1 VEXDEST210=3 | OUTREG=XED_REG_XMM4
VEXDEST4=0 VEXDEST3=1 VEXDEST210=2 | OUTREG=XED_REG_XMM5
VEXDEST4=0 VEXDEST3=1 VEXDEST210=1 | OUTREG=XED_REG_XMM6
VEXDEST4=0 VEXDEST3=1 VEXDEST210=0 | OUTREG=XED_REG_XMM7
VEXDEST4=0 VEXDEST3=0 VEXDEST210=7 | OUTREG=XED_REG_XMM8
VEXDEST4=0 VEXDEST3=0 VEXDEST210=6 | OUTREG=XED_REG_XMM9
VEXDEST4=0 VEXDEST3=0 VEXDEST210=5 | OUTREG=XED_REG_XMM10
VEXDEST4=0 VEXDEST3=0 VEXDEST210=4 | OUTREG=XED_REG_XMM11
VEXDEST4=0 VEXDEST3=0 VEXDEST210=3 | OUTREG=XED_REG_XMM12
VEXDEST4=0 VEXDEST3=0 VEXDEST210=2 | OUTREG=XED_REG_XMM13
VEXDEST4=0 VEXDEST3=0 VEXDEST210=1 | OUTREG=XED_REG_XMM14
VEXDEST4=0 VEXDEST3=0 VEXDEST210=0 | OUTREG=XED_REG_XMM15
VEXDEST4=1 VEXDEST3=1 VEXDEST210=7 | OUTREG=XED_REG_XMM16
VEXDEST4=1 VEXDEST3=1 VEXDEST210=6 | OUTREG=XED_REG_XMM17
VEXDEST4=1 VEXDEST3=1 VEXDEST210=5 | OUTREG=XED_REG_XMM18
VEXDEST4=1 VEXDEST3=1 VEXDEST210=4 | OUTREG=XED_REG_XMM19
VEXDEST4=1 VEXDEST3=1 VEXDEST210=3 | OUTREG=XED_REG_XMM20
VEXDEST4=1 VEXDEST3=1 VEXDEST210=2 | OUTREG=XED_REG_XMM21
VEXDEST4=1 VEXDEST3=1 VEXDEST210=1 | OUTREG=XED_REG_XMM22
VEXDEST4=1 VEXDEST3=1 VEXDEST210=0 | OUTREG=XED_REG_XMM23
VEXDEST4=1 VEXDEST3=0 VEXDEST210=7 | OUTREG=XED_REG_XMM24
VEXDEST4=1 VEXDEST3=0 VEXDEST210=6 | OUTREG=XED_REG_XMM25
VEXDEST4=1 VEXDEST3=0 VEXDEST210=5 | OUTREG=XED_REG_XMM26
VEXDEST4=1 VEXDEST3=0 VEXDEST210=4 | OUTREG=XED_REG_XMM27
VEXDEST4=1 VEXDEST3=0 VEXDEST210=3 | OUTREG=XED_REG_XMM28
VEXDEST4=1 VEXDEST3=0 VEXDEST210=2 | OUTREG=XED_REG_XMM29
VEXDEST4=1 VEXDEST3=0 VEXDEST210=1 | OUTREG=XED_REG_XMM30
VEXDEST4=1 VEXDEST3=0 VEXDEST210=0 | OUTREG=XED_REG_XMM31
xed_reg_enum_t YMM_N3()::
mode16 | OUTREG=YMM_N3_32()
mode32 | OUTREG=YMM_N3_32()
mode64 | OUTREG=YMM_N3_64()
xed_reg_enum_t YMM_N3_32()::
VEXDEST210=7 | OUTREG=XED_REG_YMM0
VEXDEST210=6 | OUTREG=XED_REG_YMM1
VEXDEST210=5 | OUTREG=XED_REG_YMM2
VEXDEST210=4 | OUTREG=XED_REG_YMM3
VEXDEST210=3 | OUTREG=XED_REG_YMM4
VEXDEST210=2 | OUTREG=XED_REG_YMM5
VEXDEST210=1 | OUTREG=XED_REG_YMM6
VEXDEST210=0 | OUTREG=XED_REG_YMM7
xed_reg_enum_t YMM_N3_64()::
VEXDEST4=0 VEXDEST3=1 VEXDEST210=7 | OUTREG=XED_REG_YMM0
VEXDEST4=0 VEXDEST3=1 VEXDEST210=6 | OUTREG=XED_REG_YMM1
VEXDEST4=0 VEXDEST3=1 VEXDEST210=5 | OUTREG=XED_REG_YMM2
VEXDEST4=0 VEXDEST3=1 VEXDEST210=4 | OUTREG=XED_REG_YMM3
VEXDEST4=0 VEXDEST3=1 VEXDEST210=3 | OUTREG=XED_REG_YMM4
VEXDEST4=0 VEXDEST3=1 VEXDEST210=2 | OUTREG=XED_REG_YMM5
VEXDEST4=0 VEXDEST3=1 VEXDEST210=1 | OUTREG=XED_REG_YMM6
VEXDEST4=0 VEXDEST3=1 VEXDEST210=0 | OUTREG=XED_REG_YMM7
VEXDEST4=0 VEXDEST3=0 VEXDEST210=7 | OUTREG=XED_REG_YMM8
VEXDEST4=0 VEXDEST3=0 VEXDEST210=6 | OUTREG=XED_REG_YMM9
VEXDEST4=0 VEXDEST3=0 VEXDEST210=5 | OUTREG=XED_REG_YMM10
VEXDEST4=0 VEXDEST3=0 VEXDEST210=4 | OUTREG=XED_REG_YMM11
VEXDEST4=0 VEXDEST3=0 VEXDEST210=3 | OUTREG=XED_REG_YMM12
VEXDEST4=0 VEXDEST3=0 VEXDEST210=2 | OUTREG=XED_REG_YMM13
VEXDEST4=0 VEXDEST3=0 VEXDEST210=1 | OUTREG=XED_REG_YMM14
VEXDEST4=0 VEXDEST3=0 VEXDEST210=0 | OUTREG=XED_REG_YMM15
VEXDEST4=1 VEXDEST3=1 VEXDEST210=7 | OUTREG=XED_REG_YMM16
VEXDEST4=1 VEXDEST3=1 VEXDEST210=6 | OUTREG=XED_REG_YMM17
VEXDEST4=1 VEXDEST3=1 VEXDEST210=5 | OUTREG=XED_REG_YMM18
VEXDEST4=1 VEXDEST3=1 VEXDEST210=4 | OUTREG=XED_REG_YMM19
VEXDEST4=1 VEXDEST3=1 VEXDEST210=3 | OUTREG=XED_REG_YMM20
VEXDEST4=1 VEXDEST3=1 VEXDEST210=2 | OUTREG=XED_REG_YMM21
VEXDEST4=1 VEXDEST3=1 VEXDEST210=1 | OUTREG=XED_REG_YMM22
VEXDEST4=1 VEXDEST3=1 VEXDEST210=0 | OUTREG=XED_REG_YMM23
VEXDEST4=1 VEXDEST3=0 VEXDEST210=7 | OUTREG=XED_REG_YMM24
VEXDEST4=1 VEXDEST3=0 VEXDEST210=6 | OUTREG=XED_REG_YMM25
VEXDEST4=1 VEXDEST3=0 VEXDEST210=5 | OUTREG=XED_REG_YMM26
VEXDEST4=1 VEXDEST3=0 VEXDEST210=4 | OUTREG=XED_REG_YMM27
VEXDEST4=1 VEXDEST3=0 VEXDEST210=3 | OUTREG=XED_REG_YMM28
VEXDEST4=1 VEXDEST3=0 VEXDEST210=2 | OUTREG=XED_REG_YMM29
VEXDEST4=1 VEXDEST3=0 VEXDEST210=1 | OUTREG=XED_REG_YMM30
VEXDEST4=1 VEXDEST3=0 VEXDEST210=0 | OUTREG=XED_REG_YMM31
xed_reg_enum_t ZMM_N3()::
mode16 | OUTREG=ZMM_N3_32()
mode32 | OUTREG=ZMM_N3_32()
mode64 | OUTREG=ZMM_N3_64()
xed_reg_enum_t ZMM_N3_32()::
VEXDEST210=7 | OUTREG=XED_REG_ZMM0
VEXDEST210=6 | OUTREG=XED_REG_ZMM1
VEXDEST210=5 | OUTREG=XED_REG_ZMM2
VEXDEST210=4 | OUTREG=XED_REG_ZMM3
VEXDEST210=3 | OUTREG=XED_REG_ZMM4
VEXDEST210=2 | OUTREG=XED_REG_ZMM5
VEXDEST210=1 | OUTREG=XED_REG_ZMM6
VEXDEST210=0 | OUTREG=XED_REG_ZMM7
xed_reg_enum_t ZMM_N3_64()::
VEXDEST4=0 VEXDEST3=1 VEXDEST210=7 | OUTREG=XED_REG_ZMM0
VEXDEST4=0 VEXDEST3=1 VEXDEST210=6 | OUTREG=XED_REG_ZMM1
VEXDEST4=0 VEXDEST3=1 VEXDEST210=5 | OUTREG=XED_REG_ZMM2
VEXDEST4=0 VEXDEST3=1 VEXDEST210=4 | OUTREG=XED_REG_ZMM3
VEXDEST4=0 VEXDEST3=1 VEXDEST210=3 | OUTREG=XED_REG_ZMM4
VEXDEST4=0 VEXDEST3=1 VEXDEST210=2 | OUTREG=XED_REG_ZMM5
VEXDEST4=0 VEXDEST3=1 VEXDEST210=1 | OUTREG=XED_REG_ZMM6
VEXDEST4=0 VEXDEST3=1 VEXDEST210=0 | OUTREG=XED_REG_ZMM7
VEXDEST4=0 VEXDEST3=0 VEXDEST210=7 | OUTREG=XED_REG_ZMM8
VEXDEST4=0 VEXDEST3=0 VEXDEST210=6 | OUTREG=XED_REG_ZMM9
VEXDEST4=0 VEXDEST3=0 VEXDEST210=5 | OUTREG=XED_REG_ZMM10
VEXDEST4=0 VEXDEST3=0 VEXDEST210=4 | OUTREG=XED_REG_ZMM11
VEXDEST4=0 VEXDEST3=0 VEXDEST210=3 | OUTREG=XED_REG_ZMM12
VEXDEST4=0 VEXDEST3=0 VEXDEST210=2 | OUTREG=XED_REG_ZMM13
VEXDEST4=0 VEXDEST3=0 VEXDEST210=1 | OUTREG=XED_REG_ZMM14
VEXDEST4=0 VEXDEST3=0 VEXDEST210=0 | OUTREG=XED_REG_ZMM15
VEXDEST4=1 VEXDEST3=1 VEXDEST210=7 | OUTREG=XED_REG_ZMM16
VEXDEST4=1 VEXDEST3=1 VEXDEST210=6 | OUTREG=XED_REG_ZMM17
VEXDEST4=1 VEXDEST3=1 VEXDEST210=5 | OUTREG=XED_REG_ZMM18
VEXDEST4=1 VEXDEST3=1 VEXDEST210=4 | OUTREG=XED_REG_ZMM19
VEXDEST4=1 VEXDEST3=1 VEXDEST210=3 | OUTREG=XED_REG_ZMM20
VEXDEST4=1 VEXDEST3=1 VEXDEST210=2 | OUTREG=XED_REG_ZMM21
VEXDEST4=1 VEXDEST3=1 VEXDEST210=1 | OUTREG=XED_REG_ZMM22
VEXDEST4=1 VEXDEST3=1 VEXDEST210=0 | OUTREG=XED_REG_ZMM23
VEXDEST4=1 VEXDEST3=0 VEXDEST210=7 | OUTREG=XED_REG_ZMM24
VEXDEST4=1 VEXDEST3=0 VEXDEST210=6 | OUTREG=XED_REG_ZMM25
VEXDEST4=1 VEXDEST3=0 VEXDEST210=5 | OUTREG=XED_REG_ZMM26
VEXDEST4=1 VEXDEST3=0 VEXDEST210=4 | OUTREG=XED_REG_ZMM27
VEXDEST4=1 VEXDEST3=0 VEXDEST210=3 | OUTREG=XED_REG_ZMM28
VEXDEST4=1 VEXDEST3=0 VEXDEST210=2 | OUTREG=XED_REG_ZMM29
VEXDEST4=1 VEXDEST3=0 VEXDEST210=1 | OUTREG=XED_REG_ZMM30
VEXDEST4=1 VEXDEST3=0 VEXDEST210=0 | OUTREG=XED_REG_ZMM31

View File

@@ -0,0 +1,172 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
xed_reg_enum_t XMM_R3()::
mode16 | OUTREG=XMM_R3_32()
mode32 | OUTREG=XMM_R3_32()
mode64 | OUTREG=XMM_R3_64()
xed_reg_enum_t XMM_R3_32()::
REG=0 | OUTREG=XED_REG_XMM0
REG=1 | OUTREG=XED_REG_XMM1
REG=2 | OUTREG=XED_REG_XMM2
REG=3 | OUTREG=XED_REG_XMM3
REG=4 | OUTREG=XED_REG_XMM4
REG=5 | OUTREG=XED_REG_XMM5
REG=6 | OUTREG=XED_REG_XMM6
REG=7 | OUTREG=XED_REG_XMM7
xed_reg_enum_t XMM_R3_64()::
REXRR=0 REXR=0 REG=0 | OUTREG=XED_REG_XMM0
REXRR=0 REXR=0 REG=1 | OUTREG=XED_REG_XMM1
REXRR=0 REXR=0 REG=2 | OUTREG=XED_REG_XMM2
REXRR=0 REXR=0 REG=3 | OUTREG=XED_REG_XMM3
REXRR=0 REXR=0 REG=4 | OUTREG=XED_REG_XMM4
REXRR=0 REXR=0 REG=5 | OUTREG=XED_REG_XMM5
REXRR=0 REXR=0 REG=6 | OUTREG=XED_REG_XMM6
REXRR=0 REXR=0 REG=7 | OUTREG=XED_REG_XMM7
REXRR=0 REXR=1 REG=0 | OUTREG=XED_REG_XMM8
REXRR=0 REXR=1 REG=1 | OUTREG=XED_REG_XMM9
REXRR=0 REXR=1 REG=2 | OUTREG=XED_REG_XMM10
REXRR=0 REXR=1 REG=3 | OUTREG=XED_REG_XMM11
REXRR=0 REXR=1 REG=4 | OUTREG=XED_REG_XMM12
REXRR=0 REXR=1 REG=5 | OUTREG=XED_REG_XMM13
REXRR=0 REXR=1 REG=6 | OUTREG=XED_REG_XMM14
REXRR=0 REXR=1 REG=7 | OUTREG=XED_REG_XMM15
REXRR=1 REXR=0 REG=0 | OUTREG=XED_REG_XMM16
REXRR=1 REXR=0 REG=1 | OUTREG=XED_REG_XMM17
REXRR=1 REXR=0 REG=2 | OUTREG=XED_REG_XMM18
REXRR=1 REXR=0 REG=3 | OUTREG=XED_REG_XMM19
REXRR=1 REXR=0 REG=4 | OUTREG=XED_REG_XMM20
REXRR=1 REXR=0 REG=5 | OUTREG=XED_REG_XMM21
REXRR=1 REXR=0 REG=6 | OUTREG=XED_REG_XMM22
REXRR=1 REXR=0 REG=7 | OUTREG=XED_REG_XMM23
REXRR=1 REXR=1 REG=0 | OUTREG=XED_REG_XMM24
REXRR=1 REXR=1 REG=1 | OUTREG=XED_REG_XMM25
REXRR=1 REXR=1 REG=2 | OUTREG=XED_REG_XMM26
REXRR=1 REXR=1 REG=3 | OUTREG=XED_REG_XMM27
REXRR=1 REXR=1 REG=4 | OUTREG=XED_REG_XMM28
REXRR=1 REXR=1 REG=5 | OUTREG=XED_REG_XMM29
REXRR=1 REXR=1 REG=6 | OUTREG=XED_REG_XMM30
REXRR=1 REXR=1 REG=7 | OUTREG=XED_REG_XMM31
xed_reg_enum_t YMM_R3()::
mode16 | OUTREG=YMM_R3_32()
mode32 | OUTREG=YMM_R3_32()
mode64 | OUTREG=YMM_R3_64()
xed_reg_enum_t YMM_R3_32()::
REG=0 | OUTREG=XED_REG_YMM0
REG=1 | OUTREG=XED_REG_YMM1
REG=2 | OUTREG=XED_REG_YMM2
REG=3 | OUTREG=XED_REG_YMM3
REG=4 | OUTREG=XED_REG_YMM4
REG=5 | OUTREG=XED_REG_YMM5
REG=6 | OUTREG=XED_REG_YMM6
REG=7 | OUTREG=XED_REG_YMM7
xed_reg_enum_t YMM_R3_64()::
REXRR=0 REXR=0 REG=0 | OUTREG=XED_REG_YMM0
REXRR=0 REXR=0 REG=1 | OUTREG=XED_REG_YMM1
REXRR=0 REXR=0 REG=2 | OUTREG=XED_REG_YMM2
REXRR=0 REXR=0 REG=3 | OUTREG=XED_REG_YMM3
REXRR=0 REXR=0 REG=4 | OUTREG=XED_REG_YMM4
REXRR=0 REXR=0 REG=5 | OUTREG=XED_REG_YMM5
REXRR=0 REXR=0 REG=6 | OUTREG=XED_REG_YMM6
REXRR=0 REXR=0 REG=7 | OUTREG=XED_REG_YMM7
REXRR=0 REXR=1 REG=0 | OUTREG=XED_REG_YMM8
REXRR=0 REXR=1 REG=1 | OUTREG=XED_REG_YMM9
REXRR=0 REXR=1 REG=2 | OUTREG=XED_REG_YMM10
REXRR=0 REXR=1 REG=3 | OUTREG=XED_REG_YMM11
REXRR=0 REXR=1 REG=4 | OUTREG=XED_REG_YMM12
REXRR=0 REXR=1 REG=5 | OUTREG=XED_REG_YMM13
REXRR=0 REXR=1 REG=6 | OUTREG=XED_REG_YMM14
REXRR=0 REXR=1 REG=7 | OUTREG=XED_REG_YMM15
REXRR=1 REXR=0 REG=0 | OUTREG=XED_REG_YMM16
REXRR=1 REXR=0 REG=1 | OUTREG=XED_REG_YMM17
REXRR=1 REXR=0 REG=2 | OUTREG=XED_REG_YMM18
REXRR=1 REXR=0 REG=3 | OUTREG=XED_REG_YMM19
REXRR=1 REXR=0 REG=4 | OUTREG=XED_REG_YMM20
REXRR=1 REXR=0 REG=5 | OUTREG=XED_REG_YMM21
REXRR=1 REXR=0 REG=6 | OUTREG=XED_REG_YMM22
REXRR=1 REXR=0 REG=7 | OUTREG=XED_REG_YMM23
REXRR=1 REXR=1 REG=0 | OUTREG=XED_REG_YMM24
REXRR=1 REXR=1 REG=1 | OUTREG=XED_REG_YMM25
REXRR=1 REXR=1 REG=2 | OUTREG=XED_REG_YMM26
REXRR=1 REXR=1 REG=3 | OUTREG=XED_REG_YMM27
REXRR=1 REXR=1 REG=4 | OUTREG=XED_REG_YMM28
REXRR=1 REXR=1 REG=5 | OUTREG=XED_REG_YMM29
REXRR=1 REXR=1 REG=6 | OUTREG=XED_REG_YMM30
REXRR=1 REXR=1 REG=7 | OUTREG=XED_REG_YMM31
xed_reg_enum_t ZMM_R3()::
mode16 | OUTREG=ZMM_R3_32()
mode32 | OUTREG=ZMM_R3_32()
mode64 | OUTREG=ZMM_R3_64()
xed_reg_enum_t ZMM_R3_32()::
REG=0 | OUTREG=XED_REG_ZMM0
REG=1 | OUTREG=XED_REG_ZMM1
REG=2 | OUTREG=XED_REG_ZMM2
REG=3 | OUTREG=XED_REG_ZMM3
REG=4 | OUTREG=XED_REG_ZMM4
REG=5 | OUTREG=XED_REG_ZMM5
REG=6 | OUTREG=XED_REG_ZMM6
REG=7 | OUTREG=XED_REG_ZMM7
xed_reg_enum_t ZMM_R3_64()::
REXRR=0 REXR=0 REG=0 | OUTREG=XED_REG_ZMM0
REXRR=0 REXR=0 REG=1 | OUTREG=XED_REG_ZMM1
REXRR=0 REXR=0 REG=2 | OUTREG=XED_REG_ZMM2
REXRR=0 REXR=0 REG=3 | OUTREG=XED_REG_ZMM3
REXRR=0 REXR=0 REG=4 | OUTREG=XED_REG_ZMM4
REXRR=0 REXR=0 REG=5 | OUTREG=XED_REG_ZMM5
REXRR=0 REXR=0 REG=6 | OUTREG=XED_REG_ZMM6
REXRR=0 REXR=0 REG=7 | OUTREG=XED_REG_ZMM7
REXRR=0 REXR=1 REG=0 | OUTREG=XED_REG_ZMM8
REXRR=0 REXR=1 REG=1 | OUTREG=XED_REG_ZMM9
REXRR=0 REXR=1 REG=2 | OUTREG=XED_REG_ZMM10
REXRR=0 REXR=1 REG=3 | OUTREG=XED_REG_ZMM11
REXRR=0 REXR=1 REG=4 | OUTREG=XED_REG_ZMM12
REXRR=0 REXR=1 REG=5 | OUTREG=XED_REG_ZMM13
REXRR=0 REXR=1 REG=6 | OUTREG=XED_REG_ZMM14
REXRR=0 REXR=1 REG=7 | OUTREG=XED_REG_ZMM15
REXRR=1 REXR=0 REG=0 | OUTREG=XED_REG_ZMM16
REXRR=1 REXR=0 REG=1 | OUTREG=XED_REG_ZMM17
REXRR=1 REXR=0 REG=2 | OUTREG=XED_REG_ZMM18
REXRR=1 REXR=0 REG=3 | OUTREG=XED_REG_ZMM19
REXRR=1 REXR=0 REG=4 | OUTREG=XED_REG_ZMM20
REXRR=1 REXR=0 REG=5 | OUTREG=XED_REG_ZMM21
REXRR=1 REXR=0 REG=6 | OUTREG=XED_REG_ZMM22
REXRR=1 REXR=0 REG=7 | OUTREG=XED_REG_ZMM23
REXRR=1 REXR=1 REG=0 | OUTREG=XED_REG_ZMM24
REXRR=1 REXR=1 REG=1 | OUTREG=XED_REG_ZMM25
REXRR=1 REXR=1 REG=2 | OUTREG=XED_REG_ZMM26
REXRR=1 REXR=1 REG=3 | OUTREG=XED_REG_ZMM27
REXRR=1 REXR=1 REG=4 | OUTREG=XED_REG_ZMM28
REXRR=1 REXR=1 REG=5 | OUTREG=XED_REG_ZMM29
REXRR=1 REXR=1 REG=6 | OUTREG=XED_REG_ZMM30
REXRR=1 REXR=1 REG=7 | OUTREG=XED_REG_ZMM31

View File

@@ -0,0 +1,102 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
XMM0 xmm 128 ZMM0 0
XMM1 xmm 128 ZMM1 1
XMM2 xmm 128 ZMM2 2
XMM3 xmm 128 ZMM3 3
XMM4 xmm 128 ZMM4 4
XMM5 xmm 128 ZMM5 5
XMM6 xmm 128 ZMM6 6
XMM7 xmm 128 ZMM7 7
XMM8 xmm 128 ZMM8 8
XMM9 xmm 128 ZMM9 9
XMM10 xmm 128 ZMM10 10
XMM11 xmm 128 ZMM11 11
XMM12 xmm 128 ZMM12 12
XMM13 xmm 128 ZMM13 13
XMM14 xmm 128 ZMM14 14
XMM15 xmm 128 ZMM15 15
XMM16 xmm 128 ZMM16 16
XMM17 xmm 128 ZMM17 17
XMM18 xmm 128 ZMM18 18
XMM19 xmm 128 ZMM19 19
XMM20 xmm 128 ZMM20 20
XMM21 xmm 128 ZMM21 21
XMM22 xmm 128 ZMM22 22
XMM23 xmm 128 ZMM23 23
XMM24 xmm 128 ZMM24 24
XMM25 xmm 128 ZMM25 25
XMM26 xmm 128 ZMM26 26
XMM27 xmm 128 ZMM27 27
XMM28 xmm 128 ZMM28 28
XMM29 xmm 128 ZMM29 29
XMM30 xmm 128 ZMM30 30
XMM31 xmm 128 ZMM31 31
YMM0 ymm 256 ZMM0 0
YMM1 ymm 256 ZMM1 1
YMM2 ymm 256 ZMM2 2
YMM3 ymm 256 ZMM3 3
YMM4 ymm 256 ZMM4 4
YMM5 ymm 256 ZMM5 5
YMM6 ymm 256 ZMM6 6
YMM7 ymm 256 ZMM7 7
YMM8 ymm 256 ZMM8 8
YMM9 ymm 256 ZMM9 9
YMM10 ymm 256 ZMM10 10
YMM11 ymm 256 ZMM11 11
YMM12 ymm 256 ZMM12 12
YMM13 ymm 256 ZMM13 13
YMM14 ymm 256 ZMM14 14
YMM15 ymm 256 ZMM15 15
YMM16 ymm 256 ZMM16 16
YMM17 ymm 256 ZMM17 17
YMM18 ymm 256 ZMM18 18
YMM19 ymm 256 ZMM19 19
YMM20 ymm 256 ZMM20 20
YMM21 ymm 256 ZMM21 21
YMM22 ymm 256 ZMM22 22
YMM23 ymm 256 ZMM23 23
YMM24 ymm 256 ZMM24 24
YMM25 ymm 256 ZMM25 25
YMM26 ymm 256 ZMM26 26
YMM27 ymm 256 ZMM27 27
YMM28 ymm 256 ZMM28 28
YMM29 ymm 256 ZMM29 29
YMM30 ymm 256 ZMM30 30
YMM31 ymm 256 ZMM31 31

View File

@@ -0,0 +1,33 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
EVV VEXVALID=2
EMX_BROADCAST_1TO16_32 BCAST=1 # 512
EMX_BROADCAST_4TO16_32 BCAST=2 # 512
EMX_BROADCAST_1TO8_64 BCAST=5 # 512
EMX_BROADCAST_4TO8_64 BCAST=6 # 512
EMX_BROADCAST_2TO16_32 BCAST=7 # 512
EMX_BROADCAST_2TO8_64 BCAST=8 # 512
EMX_BROADCAST_8TO16_32 BCAST=9 # 512
EMX_BROADCAST_1TO32_16 BCAST=16 # 512
EMX_BROADCAST_1TO64_8 BCAST=19 # 512
# these do not show up on earlier processors
EMX_BROADCAST_4TO8_32 BCAST=4 # 256
EMX_BROADCAST_2TO4_32 BCAST=12 # 128
EMX_BROADCAST_2TO8_32 BCAST=21 # 256
EMX_BROADCAST_1TO2_32 BCAST=22 # 128

View File

@@ -0,0 +1,63 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
ZEROSTR(XED_OPERAND_ZEROING)::
0 -> ''
1 -> '{z}'
SAESTR(XED_OPERAND_SAE)::
0 -> ''
1 -> '{sae}'
# AVX512 only has rounding with implied SAE
ROUNDC(XED_OPERAND_ROUNDC)::
0 -> ''
1 -> '{rne-sae}'
2 -> '{rd-sae}'
3 -> '{ru-sae}'
4 -> '{rz-sae}'
BCASTSTR(XED_OPERAND_BCAST)::
0 -> ''
1 -> '{1to16}'
2 -> '{4to16}'
3 -> '{1to8}'
4 -> '{4to8}'
5 -> '{1to8}'
6 -> '{4to8}'
7 -> '{2to16}'
8 -> '{2to8}'
9 -> '{8to16}'
10 -> '{1to4}'
11 -> '{1to2}'
12 -> '{2to4}'
13 -> '{1to4}'
14 -> '{1to8}'
15 -> '{1to16}'
16 -> '{1to32}'
17 -> '{1to16}'
18 -> '{1to32}'
19 -> '{1to64}'
20 -> '{2to4}'
21 -> '{2to8}'
22 -> '{1to2}'
23 -> '{1to2}'
24 -> '{1to4}'
25 -> '{1to8}'
26 -> '{1to2}'
27 -> '{1to4}'

View File

@@ -0,0 +1,23 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
XED_ISA_SET_AVX512F_128: avx512f.7.0.ebx.16 avx512vl.7.0.ebx.31
XED_ISA_SET_AVX512F_128N: avx512f.7.0.ebx.16
XED_ISA_SET_AVX512F_256: avx512f.7.0.ebx.16 avx512vl.7.0.ebx.31
XED_ISA_SET_AVX512F_512: avx512f.7.0.ebx.16
XED_ISA_SET_AVX512F_KOP: avx512f.7.0.ebx.16
XED_ISA_SET_AVX512F_SCALAR: avx512f.7.0.ebx.16

View File

@@ -0,0 +1,58 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
state:avx512-state-bits.txt
registers:avx512-regs.txt
registers:avx512-kregs.txt
dec-patterns:avx512-evex-dec.txt # vex and evex prefixes
enc-patterns:avx512-evex-enc.txt # vex and evex prefixes
dec-patterns:avx512-disp8.txt
enc-patterns:avx512-disp8-enc.txt
dec-patterns:avx512-addressing-dec.txt
enc-patterns:avx512-addressing-enc.txt
dec-patterns:avx512-reg-table-mask.txt
enc-dec-patterns:avx512-reg-table-mask.txt
dec-patterns:avx512-reg-table-gpr.txt
enc-dec-patterns:avx512-reg-table-gpr.txt
dec-patterns:avx512-reg-tables-r3.txt
enc-dec-patterns:avx512-reg-tables-r3.txt
dec-patterns:avx512-reg-tables-b3.txt
enc-dec-patterns:avx512-reg-tables-b3.txt
dec-patterns:avx512-reg-tables-n3.txt
enc-dec-patterns:avx512-reg-tables-n3.txt
widths:avx512-operand-widths.txt
pointer-names:avx512-pointer-width.txt
fields:avx512-fields.txt
dec-instructions: avx512-foundation-isa.xed.txt
enc-instructions: avx512-foundation-isa.xed.txt
conversion-table:avx512-strings.txt
ild-getters: avx512-ild-getters.txt
cpuid : cpuid.xed.txt

View File

@@ -0,0 +1,44 @@
/*BEGIN_LEGAL
Copyright (c) 2016 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
END_LEGAL */
/// AVX512 ILD getters
#if !defined(_XED_ILD_AVX512_GETTERS_H)
#define _XED_ILD_AVX512_GETTERS_H
#include "xed-common-hdrs.h"
#include "xed-common-defs.h"
#include "xed-portability.h"
#include "xed-types.h"
#include "xed-ild.h"
/* ild getters */
static XED_INLINE
xed_uint32_t xed3_operand_get_mask_not0(const xed_decoded_inst_t *d) {
/* aaa != 0 */
return xed3_operand_get_mask(d) != 0;
}
static XED_INLINE
xed_uint32_t xed3_operand_get_mask_zero(const xed_decoded_inst_t *d) {
/* aaa == 0 */
return xed3_operand_get_mask(d) == 0;
}
#endif

View File

@@ -0,0 +1,33 @@
#BEGIN_LEGAL
#
#Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#END_LEGAL
# common stuff for AVX512
add:dec-spine:%(xed_dir)s/datafiles/knc/uisa-spine.txt:4
state:%(xed_dir)s/datafiles/knc/uisa-state-bits.txt
dec-patterns:%(xed_dir)s/datafiles/knc/uisa-splitter.txt
# These do not have the XMM/YMM regs (and our kregs are wider)
registers:%(xed_dir)s/datafiles/knc/lrb2-regs.txt
# we change two functions for LRB/UISA for the N*disp8 scaling
remove-source:source:xed-operand-values-interface-repl.c
add-source:source:%(xed_dir)s/datafiles/knc/xed-operand-values-interface-uisa.c

View File

@@ -0,0 +1,10 @@
BUILDDIR/xed -64 -e vaddps zmm3 k1 zmm1 zmm2
BUILDDIR/xed -64 -d 62F1744958DA
BUILDDIR/xed -64 -e VGATHERDPD zmm0 k7 MEM64:rax,ymm1,1
BUILDDIR/xed -64 -d 62F2FD4F920408
BUILDDIR/xed -64 -e VGATHERDPD zmm0 k7 MEM64:rax,ymm1,1
BUILDDIR/xed -64 -e VGATHERDPD zmm0 k7 MEM64:rax,ymm1,1,11
BUILDDIR/xed -64 -e VGATHERDPD zmm0 k7 MEM64:rax,ymm1,1,11223344
BUILDDIR/xed -64 -d 62727D4F924CC500
BUILDDIR/xed -64 -e VGATHERDPS ZMM9 K7 MEM64:RBP,ZMM0,8,0

View File

@@ -0,0 +1 @@
../../../tests/run-cmd.py --bulk-make-tests bulk-tests.txt --build-dir ../../../obj

View File

@@ -0,0 +1 @@
../../../tests/run-cmd.py --build-dir ../../../obj

View File

@@ -0,0 +1 @@
BUILDDIR/xed -64 -e vaddps zmm3 k1 zmm1 zmm2

View File

@@ -0,0 +1 @@
0

View File

@@ -0,0 +1,4 @@
Request: VADDPS MODE:2, REG0:ZMM3, REG1:K1, REG2:ZMM1, REG3:ZMM2, SMODE:2
OPERAND ORDER: REG0 REG1 REG2 REG3
Encodable! 62F1744958DA
.byte 0x62,0xf1,0x74,0x49,0x58,0xda

View File

@@ -0,0 +1 @@
BUILDDIR/xed -64 -d 62F1744958DA

View File

@@ -0,0 +1 @@
0

View File

@@ -0,0 +1,3 @@
62F1744958DA
ICLASS: VADDPS CATEGORY: AVX512 EXTENSION: AVX512EVEX IFORM: VADDPS_ZMMf32_MASKmskw_ZMMf32_ZMMf32_AVX512 ISA_SET: AVX512F_512
SHORT: vaddps zmm3, k1, zmm1, zmm2

View File

@@ -0,0 +1 @@
BUILDDIR/xed -64 -e VGATHERDPD zmm0 k7 MEM64:rax,ymm1,1

View File

@@ -0,0 +1 @@
0

View File

@@ -0,0 +1,4 @@
Request: VGATHERDPD MEM_WIDTH:64, MEM0:zmmword ptr [RAX+YMM1*1], MODE:2, REG0:ZMM0, REG1:K7, SMODE:2
OPERAND ORDER: REG0 REG1 MEM0
Encodable! 62F2FD4F920408
.byte 0x62,0xf2,0xfd,0x4f,0x92,0x04,0x08

View File

@@ -0,0 +1 @@
BUILDDIR/xed -64 -d 62F2FD4F920408

View File

@@ -0,0 +1 @@
0

View File

@@ -0,0 +1,3 @@
62F2FD4F920408
ICLASS: VGATHERDPD CATEGORY: GATHER EXTENSION: AVX512EVEX IFORM: VGATHERDPD_ZMMf64_MASKmskw_MEMf64_AVX512 ISA_SET: AVX512F_512
SHORT: vgatherdpd zmm0, k7, zmmword ptr [rax+ymm1*1]

View File

@@ -0,0 +1 @@
BUILDDIR/xed -64 -e VGATHERDPD zmm0 k7 MEM64:rax,ymm1,1

View File

@@ -0,0 +1 @@
0

View File

@@ -0,0 +1,4 @@
Request: VGATHERDPD MEM_WIDTH:64, MEM0:zmmword ptr [RAX+YMM1*1], MODE:2, REG0:ZMM0, REG1:K7, SMODE:2
OPERAND ORDER: REG0 REG1 MEM0
Encodable! 62F2FD4F920408
.byte 0x62,0xf2,0xfd,0x4f,0x92,0x04,0x08

View File

@@ -0,0 +1 @@
BUILDDIR/xed -64 -e VGATHERDPD zmm0 k7 MEM64:rax,ymm1,1,11

View File

@@ -0,0 +1 @@
0

Some files were not shown because too many files have changed in this diff