BFMMLA

BFloat16 floating-point matrix multiply-accumulate into 2x2 matrix.

The numerical behavior of this instruction depends on whether FEAT_EBF16 is implemented and on the value of FPCR.EBF; some aspects of its behavior apply irrespective of FEAT_EBF16 and FPCR.EBF.

ID_AA64ISAR1_EL1.BF16 indicates whether this instruction is supported.

Vector
(FEAT_BF16)

 31  30  29  28-24  23-22  21  20-16  15  14-11  10  9-5  4-0
  0   1   1  01110    01    0    Rm    1   1101   1   Rn   Rd
      Q   U          size              opcode

BFMMLA <Vd>.4S, <Vn>.8H, <Vm>.8H

if !IsFeatureImplemented(FEAT_BF16) then UNDEFINED;
integer n = UInt(Rn);
integer m = UInt(Rm);
integer d = UInt(Rd);

Assembler Symbols

<Vd>

Is the name of the SIMD&FP third source and destination register, encoded in the "Rd" field.

<Vn>

Is the name of the first SIMD&FP source register, encoded in the "Rn" field.

<Vm>

Is the name of the second SIMD&FP source register, encoded in the "Rm" field.

Operation

CheckFPAdvSIMDEnabled64();
bits(128) op1 = V[n, 128];
bits(128) op2 = V[m, 128];
bits(128) acc = V[d, 128];
V[d, 128] = BFMatMulAdd(acc, op1, op2, FPCR);
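As a cross-check of the data layout, the BFMatMulAdd() step can be modeled in plain Python: <Vn>.8H and <Vm>.8H are each read as a row-major 2x4 BFloat16 matrix, and <Vd>.4S holds a row-major 2x2 single-precision accumulator that receives A multiplied by the transpose of B. The helper names below are illustrative, not architecture pseudocode, and the model uses ordinary Python floats rather than the architecture's flush-to-zero and rounding behavior.

```python
import struct

def bf16_to_f32(h: int) -> float:
    # A BFloat16 value is the top 16 bits of an IEEE single-precision value.
    return struct.unpack("<f", struct.pack("<I", (h & 0xFFFF) << 16))[0]

def bfmmla(acc, vn, vm):
    """Layout model of BFMMLA: acc (2x2 float lists) += A @ B.T, where
    vn and vm are lists of 8 BFloat16 lanes, each viewed as a
    row-major 2x4 matrix."""
    a = [[bf16_to_f32(vn[r * 4 + k]) for k in range(4)] for r in range(2)]
    b = [[bf16_to_f32(vm[r * 4 + k]) for k in range(4)] for r in range(2)]
    out = [row[:] for row in acc]
    for i in range(2):
        for j in range(2):
            out[i][j] += sum(a[i][k] * b[j][k] for k in range(4))
    return out

# 1.0 in BFloat16 is 0x3F80, 2.0 is 0x4000.
vn = [0x3F80] * 8            # A = all ones
vm = [0x4000] * 8            # B = all twos
acc = [[0.0, 0.0], [0.0, 0.0]]
print(bfmmla(acc, vn, vm))   # each element is a 4-term dot product: 4 * (1.0 * 2.0) = 8.0
```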

Operational information

Arm expects that the BFMMLA instruction will deliver peak BFloat16 multiply throughput at least as high as can be achieved using two BFDOT (vector) instructions, with the goal of significantly higher throughput.
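The throughput comparison follows from operation counts: a single BFMMLA performs 16 BFloat16 multiplies (a 2x2 output tile, each element a 4-element dot product), which is the work of two 8-multiply BFDOT (vector) instructions. A larger matrix multiply is typically tiled into these 2x4-by-4x2 blocks; the sketch below, in plain Python with illustrative names and ordinary floats rather than BFloat16 arithmetic, shows the loop structure a library kernel built on this primitive might use.

```python
def mmla_tile(acc, a_tile, b_tile):
    # One BFMMLA-shaped step: acc (2x2) += a_tile (2x4) @ b_tile.T,
    # where b_tile is also stored as 2x4 (i.e. pre-transposed rows of B).
    for i in range(2):
        for j in range(2):
            acc[i][j] += sum(a_tile[i][k] * b_tile[j][k] for k in range(4))

def matmul_tiled(a, b_t, rows, cols, depth):
    # a is rows x depth, b_t is cols x depth (B already transposed).
    # Dimensions are assumed to be multiples of the 2x2x4 tile for brevity.
    c = [[0.0] * cols for _ in range(rows)]
    for i in range(0, rows, 2):
        for j in range(0, cols, 2):
            acc = [[c[i][j], c[i][j + 1]], [c[i + 1][j], c[i + 1][j + 1]]]
            for k in range(0, depth, 4):
                a_tile = [a[i + r][k:k + 4] for r in range(2)]
                b_tile = [b_t[j + r][k:k + 4] for r in range(2)]
                mmla_tile(acc, a_tile, b_tile)
            c[i][j], c[i][j + 1] = acc[0]
            c[i + 1][j], c[i + 1][j + 1] = acc[1]
    return c

a = [[1.0] * 8 for _ in range(4)]
b_t = [[1.0] * 8 for _ in range(4)]
print(matmul_tiled(a, b_t, 4, 4, 8))  # every element is an 8-term dot product of ones: 8.0
```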



Copyright © 2010-2023 Arm Limited or its affiliates. All rights reserved. This document is Non-Confidential.