
[RISCV] Begin moving post-isel vector peepholes to a MF pass #70342


Merged 4 commits into llvm:main from fold-mask on Oct 30, 2023

Conversation

lukel97 (Contributor) commented Oct 26, 2023

We currently have three postprocess peephole optimisations for vector pseudos:

  1. Masked pseudo with all ones mask -> unmasked pseudo
  2. Merge vmerge pseudo into operand pseudo's mask
  3. vmerge pseudo with all ones mask -> vmv.v.v pseudo

This patch aims to move these peepholes out of SelectionDAG and into a separate RISCVFoldMasks MachineFunction pass.

There are a few motivations for doing this:

  • The current SelectionDAG implementation operates on MachineSDNodes, which are essentially MachineInstrs but require a bunch of logic to reason about chain and glue operands. The RISCVII::has*Op helper functions also don't exactly line up with the SDNode operands. Mutating these pseudos and their operands in place becomes a good bit easier at the MachineInstr level. For example, we would no longer need to check for cycles in the DAG during performCombineVMergeAndVOps.

  • Although it's further down the line, moving this code out of SelectionDAG allows it to be reused by GlobalISel later on.

  • In performCombineVMergeAndVOps, it may be possible to commute the operands to enable folding in more cases (see test/CodeGen/RISCV/rvv/vmadd-vp.ll). There is existing machinery to commute operands in TII::commuteInstruction, but it's implemented on MachineInstrs.
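
    As a rough illustration of the commuting point above (purely a hypothetical sketch, not part of this patch), a commute attempt at the MachineInstr level can lean on the generic TargetInstrInfo hook directly; the operand indices Idx1/Idx2 below are illustrative:

    #include "llvm/CodeGen/MachineInstr.h"
    #include "llvm/CodeGen/TargetInstrInfo.h"
    using namespace llvm;

    // Hypothetical helper: try to swap two operands of a pseudo (e.g. the
    // multiplicands of a multiply-add) so that a later vmerge fold can line up
    // with the instruction's passthru. commuteInstruction returns nullptr when
    // the swap isn't legal for this opcode.
    static bool tryCommuteForFold(const TargetInstrInfo *TII, MachineInstr &MI,
                                  unsigned Idx1, unsigned Idx2) {
      return TII->commuteInstruction(MI, /*NewMI=*/false, Idx1, Idx2) != nullptr;
    }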

The pass runs straight after ISel, before any of the other machine SSA optimization passes run. This is so that dead-mi-elimination can mop up any vmsets that are no longer used (but if preferred we could try and erase them from inside RISCVFoldMasks itself). This also means that these peepholes are no longer run at codegen -O0, so this patch isn't strictly NFC.

Only the performVMergeToVMv peephole is refactored in this patch; the remaining two will be moved over in later patches. As noted by @preames, it should be possible to move doPeepholeSExtW out of SelectionDAG as well.
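
For a sense of where this is headed, here is a very rough outline (again hypothetical, not in this patch) of what peephole 1 (masked pseudo with all-ones mask -> unmasked pseudo) might look like once ported into the new pass. It assumes the existing RISCVMaskedPseudo searchable table (RISCV::getMaskedPseudoInfo) keeps its current shape, and it deliberately elides the passthru and policy-operand fixups, so treat it as an outline rather than the eventual implementation:

// Outline only; convertToUnmasked is a hypothetical member of the new pass and
// is not part of this patch. Passthru/policy fixups are elided for brevity.
bool RISCVFoldMasks::convertToUnmasked(MachineInstr &MI,
                                       MachineInstr *MaskDef) {
  const RISCV::RISCVMaskedPseudoInfo *Info =
      RISCV::getMaskedPseudoInfo(MI.getOpcode());
  if (!Info || !isAllOnesMask(MaskDef))
    return false;

  MI.setDesc(TII->get(Info->UnmaskedPseudo));
  // The table stores the mask operand index in SDNode terms (no result def),
  // so shift by the number of explicit defs at the MachineInstr level.
  MI.removeOperand(Info->MaskOpIdx + MI.getNumExplicitDefs());
  return true;
}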

llvmbot (Member) commented Oct 26, 2023

@llvm/pr-subscribers-backend-risc-v

Author: Luke Lau (lukel97)

Changes


Full diff: https://github.com/llvm/llvm-project/pull/70342.diff

6 Files Affected:

  • (modified) llvm/lib/Target/RISCV/CMakeLists.txt (+1)
  • (modified) llvm/lib/Target/RISCV/RISCV.h (+3)
  • (added) llvm/lib/Target/RISCV/RISCVFoldMasks.cpp (+182)
  • (modified) llvm/lib/Target/RISCV/RISCVISelDAGToDAG.cpp (-36)
  • (modified) llvm/lib/Target/RISCV/RISCVTargetMachine.cpp (+4)
  • (modified) llvm/test/CodeGen/RISCV/O3-pipeline.ll (+1)
diff --git a/llvm/lib/Target/RISCV/CMakeLists.txt b/llvm/lib/Target/RISCV/CMakeLists.txt
index 4d5fa79389ea68b..b0282b72c6a8dba 100644
--- a/llvm/lib/Target/RISCV/CMakeLists.txt
+++ b/llvm/lib/Target/RISCV/CMakeLists.txt
@@ -33,6 +33,7 @@ add_llvm_target(RISCVCodeGen
   RISCVMakeCompressible.cpp
   RISCVExpandAtomicPseudoInsts.cpp
   RISCVExpandPseudoInsts.cpp
+  RISCVFoldMasks.cpp
   RISCVFrameLowering.cpp
   RISCVGatherScatterLowering.cpp
   RISCVInsertVSETVLI.cpp
diff --git a/llvm/lib/Target/RISCV/RISCV.h b/llvm/lib/Target/RISCV/RISCV.h
index 3d8e33dc716ea44..4e870d444120c21 100644
--- a/llvm/lib/Target/RISCV/RISCV.h
+++ b/llvm/lib/Target/RISCV/RISCV.h
@@ -45,6 +45,9 @@ void initializeRISCVMakeCompressibleOptPass(PassRegistry &);
 FunctionPass *createRISCVGatherScatterLoweringPass();
 void initializeRISCVGatherScatterLoweringPass(PassRegistry &);
 
+FunctionPass *createRISCVFoldMasksPass();
+void initializeRISCVFoldMasksPass(PassRegistry &);
+
 FunctionPass *createRISCVOptWInstrsPass();
 void initializeRISCVOptWInstrsPass(PassRegistry &);
 
diff --git a/llvm/lib/Target/RISCV/RISCVFoldMasks.cpp b/llvm/lib/Target/RISCV/RISCVFoldMasks.cpp
new file mode 100644
index 000000000000000..81fd39348e9f584
--- /dev/null
+++ b/llvm/lib/Target/RISCV/RISCVFoldMasks.cpp
@@ -0,0 +1,182 @@
+//===- RISCVFoldMasks.cpp - MI Vector Pseudo Mask Peepholes ---------------===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===---------------------------------------------------------------------===//
+//
+// This pass performs various peephole optimisations that fold masks into vector
+// pseudo instructions after instruction selection.
+//
+// Currently it converts
+// PseudoVMERGE_VVM %false, %false, %true, %allonesmask, %vl, %sew
+// ->
+// PseudoVMV_V_V %false, %true, %vl, %sew
+//
+//===---------------------------------------------------------------------===//
+
+#include "RISCV.h"
+#include "RISCVSubtarget.h"
+#include "llvm/CodeGen/MachineFunctionPass.h"
+#include "llvm/CodeGen/MachineRegisterInfo.h"
+#include "llvm/CodeGen/TargetInstrInfo.h"
+#include "llvm/CodeGen/TargetRegisterInfo.h"
+
+using namespace llvm;
+
+#define DEBUG_TYPE "riscv-fold-masks"
+
+namespace {
+
+class RISCVFoldMasks : public MachineFunctionPass {
+public:
+  static char ID;
+  const TargetInstrInfo *TII;
+  MachineRegisterInfo *MRI;
+  const TargetRegisterInfo *TRI;
+  RISCVFoldMasks() : MachineFunctionPass(ID) {
+    initializeRISCVFoldMasksPass(*PassRegistry::getPassRegistry());
+  }
+
+  bool runOnMachineFunction(MachineFunction &MF) override;
+  MachineFunctionProperties getRequiredProperties() const override {
+    return MachineFunctionProperties().set(
+        MachineFunctionProperties::Property::IsSSA);
+  }
+
+  StringRef getPassName() const override { return "RISC-V Fold Masks"; }
+
+private:
+  bool convertVMergeToVMv(MachineInstr &MI, MachineInstr *MaskDef);
+
+  bool isAllOnesMask(MachineInstr *MaskCopy);
+};
+
+} // namespace
+
+char RISCVFoldMasks::ID = 0;
+
+INITIALIZE_PASS(RISCVFoldMasks, DEBUG_TYPE, "RISC-V Fold Masks", false, false)
+
+bool RISCVFoldMasks::isAllOnesMask(MachineInstr *MaskCopy) {
+  if (!MaskCopy)
+    return false;
+  assert(MaskCopy->isCopy() && MaskCopy->getOperand(0).getReg() == RISCV::V0);
+  Register SrcReg =
+      TRI->lookThruCopyLike(MaskCopy->getOperand(1).getReg(), MRI);
+  if (!SrcReg.isVirtual())
+    return false;
+  MachineInstr *SrcDef = MRI->getVRegDef(SrcReg);
+  if (!SrcDef)
+    return false;
+
+  // TODO: Check that the VMSET is the expected bitwidth? The pseudo has
+  // undefined behaviour if it's the wrong bitwidth, so we could choose to
+  // assume that it's all-ones? Same applies to its VL.
+  switch (SrcDef->getOpcode()) {
+  case RISCV::PseudoVMSET_M_B1:
+  case RISCV::PseudoVMSET_M_B2:
+  case RISCV::PseudoVMSET_M_B4:
+  case RISCV::PseudoVMSET_M_B8:
+  case RISCV::PseudoVMSET_M_B16:
+  case RISCV::PseudoVMSET_M_B32:
+  case RISCV::PseudoVMSET_M_B64:
+    return true;
+  default:
+    return false;
+  }
+}
+
+static bool isVMerge(MachineInstr &MI) {
+  unsigned Opc = MI.getOpcode();
+  return Opc == RISCV::PseudoVMERGE_VVM_MF8 ||
+         Opc == RISCV::PseudoVMERGE_VVM_MF4 ||
+         Opc == RISCV::PseudoVMERGE_VVM_MF2 ||
+         Opc == RISCV::PseudoVMERGE_VVM_M1 ||
+         Opc == RISCV::PseudoVMERGE_VVM_M2 ||
+         Opc == RISCV::PseudoVMERGE_VVM_M4 || Opc == RISCV::PseudoVMERGE_VVM_M8;
+}
+
+// Transform (VMERGE_VVM_<LMUL> false, false, true, allones, vl, sew) to
+// (VMV_V_V_<LMUL> false, true, vl, sew). It may decrease uses of VMSET.
+bool RISCVFoldMasks::convertVMergeToVMv(MachineInstr &MI, MachineInstr *V0Def) {
+#define CASE_VMERGE_TO_VMV(lmul)                                               \
+  case RISCV::PseudoVMERGE_VVM_##lmul:                                         \
+    NewOpc = RISCV::PseudoVMV_V_V_##lmul;                                      \
+    break;
+  unsigned NewOpc;
+  switch (MI.getOpcode()) {
+  default:
+    llvm_unreachable("Expected VMERGE_VVM_<LMUL> instruction.");
+    CASE_VMERGE_TO_VMV(MF8)
+    CASE_VMERGE_TO_VMV(MF4)
+    CASE_VMERGE_TO_VMV(MF2)
+    CASE_VMERGE_TO_VMV(M1)
+    CASE_VMERGE_TO_VMV(M2)
+    CASE_VMERGE_TO_VMV(M4)
+    CASE_VMERGE_TO_VMV(M8)
+  }
+
+  Register MergeReg = MI.getOperand(1).getReg();
+  Register FalseReg = MI.getOperand(2).getReg();
+  // Check merge == false (or merge == undef)
+  if (MergeReg != RISCV::NoRegister && TRI->lookThruCopyLike(MergeReg, MRI) !=
+                                           TRI->lookThruCopyLike(FalseReg, MRI))
+    return false;
+
+  assert(MI.getOperand(4).isReg() && MI.getOperand(4).getReg() == RISCV::V0);
+  if (!isAllOnesMask(V0Def))
+    return false;
+
+  MI.setDesc(TII->get(NewOpc));
+  MI.removeOperand(2); // False operand
+  MI.removeOperand(3); // Mask operand
+  MI.addOperand(
+      MachineOperand::CreateImm(RISCVII::TAIL_UNDISTURBED_MASK_UNDISTURBED));
+
+  // vmv.v.v doesn't have a mask operand, so we may be able to inflate the
+  // register class for the destination and merge operands e.g. VRNoV0 -> VR
+  MRI->recomputeRegClass(MI.getOperand(0).getReg());
+  MRI->recomputeRegClass(MI.getOperand(1).getReg());
+  return true;
+}
+
+bool RISCVFoldMasks::runOnMachineFunction(MachineFunction &MF) {
+  if (skipFunction(MF.getFunction()))
+    return false;
+
+  // Skip if the vector extension is not enabled.
+  const RISCVSubtarget &ST = MF.getSubtarget<RISCVSubtarget>();
+  if (!ST.hasVInstructions())
+    return false;
+
+  TII = ST.getInstrInfo();
+  MRI = &MF.getRegInfo();
+  TRI = MRI->getTargetRegisterInfo();
+
+  bool Changed = false;
+
+  // Masked pseudos coming out of isel will have their mask operand in the form:
+  //
+  // $v0:vr = COPY %mask:vr
+  // %x:vr = Pseudo_MASK %a:vr, %b:vr, $v0:vr
+  //
+  // Because $v0 isn't in SSA, keep track of it so we can check the mask operand
+  // on each pseudo.
+  MachineInstr *CurrentV0Def;
+  for (MachineBasicBlock &MBB : MF) {
+    CurrentV0Def = nullptr;
+    for (MachineInstr &MI : MBB) {
+      if (isVMerge(MI))
+        Changed |= convertVMergeToVMv(MI, CurrentV0Def);
+
+      if (MI.definesRegister(RISCV::V0, TRI))
+        CurrentV0Def = &MI;
+    }
+  }
+
+  return Changed;
+}
+
+FunctionPass *llvm::createRISCVFoldMasksPass() { return new RISCVFoldMasks(); }
diff --git a/llvm/lib/Target/RISCV/RISCVISelDAGToDAG.cpp b/llvm/lib/Target/RISCV/RISCVISelDAGToDAG.cpp
index 6c156057ccd7d0e..79936e930ec9b76 100644
--- a/llvm/lib/Target/RISCV/RISCVISelDAGToDAG.cpp
+++ b/llvm/lib/Target/RISCV/RISCVISelDAGToDAG.cpp
@@ -3696,40 +3696,6 @@ bool RISCVDAGToDAGISel::performCombineVMergeAndVOps(SDNode *N) {
   return true;
 }
 
-// Transform (VMERGE_VVM_<LMUL> false, false, true, allones, vl, sew) to
-// (VMV_V_V_<LMUL> false, true, vl, sew). It may decrease uses of VMSET.
-bool RISCVDAGToDAGISel::performVMergeToVMv(SDNode *N) {
-#define CASE_VMERGE_TO_VMV(lmul)                                               \
-  case RISCV::PseudoVMERGE_VVM_##lmul:                                    \
-    NewOpc = RISCV::PseudoVMV_V_V_##lmul;                                 \
-    break;
-  unsigned NewOpc;
-  switch (N->getMachineOpcode()) {
-  default:
-    llvm_unreachable("Expected VMERGE_VVM_<LMUL> instruction.");
-  CASE_VMERGE_TO_VMV(MF8)
-  CASE_VMERGE_TO_VMV(MF4)
-  CASE_VMERGE_TO_VMV(MF2)
-  CASE_VMERGE_TO_VMV(M1)
-  CASE_VMERGE_TO_VMV(M2)
-  CASE_VMERGE_TO_VMV(M4)
-  CASE_VMERGE_TO_VMV(M8)
-  }
-
-  if (!usesAllOnesMask(N, /* MaskOpIdx */ 3))
-    return false;
-
-  SDLoc DL(N);
-  SDValue PolicyOp =
-    CurDAG->getTargetConstant(/*TUMU*/ 0, DL, Subtarget->getXLenVT());
-  SDNode *Result = CurDAG->getMachineNode(
-      NewOpc, DL, N->getValueType(0),
-      {N->getOperand(1), N->getOperand(2), N->getOperand(4), N->getOperand(5),
-       PolicyOp});
-  ReplaceUses(N, Result);
-  return true;
-}
-
 bool RISCVDAGToDAGISel::doPeepholeMergeVVMFold() {
   bool MadeChange = false;
   SelectionDAG::allnodes_iterator Position = CurDAG->allnodes_end();
@@ -3741,8 +3707,6 @@ bool RISCVDAGToDAGISel::doPeepholeMergeVVMFold() {
 
     if (IsVMerge(N) || IsVMv(N))
       MadeChange |= performCombineVMergeAndVOps(N);
-    if (IsVMerge(N) && N->getOperand(0) == N->getOperand(1))
-      MadeChange |= performVMergeToVMv(N);
   }
   return MadeChange;
 }
diff --git a/llvm/lib/Target/RISCV/RISCVTargetMachine.cpp b/llvm/lib/Target/RISCV/RISCVTargetMachine.cpp
index 953ac097b915044..85683a3adc968df 100644
--- a/llvm/lib/Target/RISCV/RISCVTargetMachine.cpp
+++ b/llvm/lib/Target/RISCV/RISCVTargetMachine.cpp
@@ -101,6 +101,7 @@ extern "C" LLVM_EXTERNAL_VISIBILITY void LLVMInitializeRISCVTarget() {
   initializeRISCVOptWInstrsPass(*PR);
   initializeRISCVPreRAExpandPseudoPass(*PR);
   initializeRISCVExpandPseudoPass(*PR);
+  initializeRISCVFoldMasksPass(*PR);
   initializeRISCVInsertVSETVLIPass(*PR);
   initializeRISCVInsertReadWriteCSRPass(*PR);
   initializeRISCVDAGToDAGISelPass(*PR);
@@ -414,7 +415,10 @@ void RISCVPassConfig::addPreEmitPass2() {
 }
 
 void RISCVPassConfig::addMachineSSAOptimization() {
+  addPass(createRISCVFoldMasksPass());
+
   TargetPassConfig::addMachineSSAOptimization();
+
   if (EnableMachineCombiner)
     addPass(&MachineCombinerID);
 
diff --git a/llvm/test/CodeGen/RISCV/O3-pipeline.ll b/llvm/test/CodeGen/RISCV/O3-pipeline.ll
index cf0826096bd41f8..414b721661021fd 100644
--- a/llvm/test/CodeGen/RISCV/O3-pipeline.ll
+++ b/llvm/test/CodeGen/RISCV/O3-pipeline.ll
@@ -82,6 +82,7 @@
 ; CHECK-NEXT:       Lazy Block Frequency Analysis
 ; CHECK-NEXT:       RISC-V DAG->DAG Pattern Instruction Selection
 ; CHECK-NEXT:       Finalize ISel and expand pseudo-instructions
+; CHECK-NEXT:       RISC-V Fold Masks
 ; CHECK-NEXT:       Lazy Machine Block Frequency Analysis
 ; CHECK-NEXT:       Early Tail Duplication
 ; CHECK-NEXT:       Optimize machine instruction PHIs

topperc (Collaborator) left a comment:

LGTM

preames (Collaborator) left a comment:
LGTM w/one fix.

return false;

MI.setDesc(TII->get(NewOpc));
MI.removeOperand(2); // False operand
A Collaborator commented:

I think you might have a bug here. I think you want to be dropping the merge operand, not the false operand. In the case where merge == false, it doesn't matter. But in the case where merge is undef, I think you need to keep the false operand, not the merge operand.

lukel97 (Contributor, author) replied:

Yup, thanks for catching this. Should be fixed now, and added a test case
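
For context, one way the corrected rewrite could look (a sketch only; the exact fix is in the updated commits, which are not reproduced here). With the MI operands laid out as (dst, merge, false, true, V0 mask, vl, sew), the merge operand is either identical to false or undef, so false is the value that has to survive as the vmv.v.v passthru; the tieOperands call reflects the assumption that the passthru is tied to the destination and may differ from the real fix in detail:

MI.setDesc(TII->get(NewOpc));
MI.removeOperand(1);  // Drop the merge operand; keep false as the passthru.
MI.tieOperands(0, 1); // Assume the surviving passthru is tied to the dest.
MI.removeOperand(3);  // Mask operand (its index shifted down by the removal).
MI.addOperand(
    MachineOperand::CreateImm(RISCVII::TAIL_UNDISTURBED_MASK_UNDISTURBED));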


static bool isVMerge(MachineInstr &MI) {
unsigned Opc = MI.getOpcode();
return Opc == RISCV::PseudoVMERGE_VVM_MF8 ||
A Contributor commented:

Rebase is needed after #70637

@lukel97 lukel97 merged commit 72e6c1c into llvm:main Oct 30, 2023
@lukel97 lukel97 deleted the fold-mask branch October 31, 2023 14:20