v0.7.2+Gaudi-1.21.0
vLLM with Intel® Gaudi® AI Accelerators
This README provides instructions on how to run vLLM with Intel Gaudi devices.
Requirements and Installation
To set up the execution environment, please follow the instructions in the Gaudi Installation Guide. To achieve the best performance on HPU, please follow the methods outlined in the Optimizing Training Platform Guide.
Requirements
- Python 3.10
- Intel Gaudi 2 and 3 AI accelerators
- Intel Gaudi software version 1.21.0 and above
Quick Start Using Dockerfile
Set up the container with the latest Intel Gaudi Software Suite release using the Dockerfile.
Ubuntu
$ docker build -f Dockerfile.hpu -t vllm-hpu-env .
$ docker run -it --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --rm vllm-hpu-env
Tip
If you are facing the following error: `docker: Error response from daemon: Unknown runtime specified habana.`, please refer to the "Install Optional Packages" section of Install Driver and Software and the "Configure Container Runtime" section of Docker Installation. Make sure you have the `habanalabs-container-runtime` package installed and that the `habana` container runtime is registered.
Red Hat Enterprise Linux for Use with Red Hat OpenShift AI
$ docker build -f Dockerfile.hpu.ubi -t vllm-hpu-env .
$ docker run -it --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --rm vllm-hpu-env
Build from Source
Environment Verification
To verify that the Intel Gaudi software was correctly installed, run the following:
$ hl-smi # verify that hl-smi is in your PATH and each Gaudi accelerator is visible
$ apt list --installed | grep habana # verify that habanalabs-firmware-tools, habanalabs-graph, habanalabs-rdma-core, habanalabs-thunk and habanalabs-container-runtime are installed
$ pip list | grep habana # verify that habana-torch-plugin, habana-torch-dataloader, habana-pyhlml and habana-media-loader are installed
$ pip list | grep neural # verify that neural-compressor is installed
Refer to System Verification and Final Tests for more details.
Run Docker Image
It is highly recommended to use the latest Docker image from the Intel Gaudi vault. Refer to the Intel Gaudi documentation for more details.
Use the following commands to run a Docker image. Make sure to update the versions below as listed in the Support Matrix:
$ docker pull vault.habana.ai/gaudi-docker/1.21.0/ubuntu22.04/habanalabs/pytorch-installer-2.6.0:latest
$ docker run -it --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --ipc=host vault.habana.ai/gaudi-docker/1.21.0/ubuntu22.04/habanalabs/pytorch-installer-2.6.0:latest
Build and Install vLLM
There are currently multiple ways to install vLLM with Intel® Gaudi®; pick one of the following options:
1. Build and Install the stable version
vLLM releases are performed periodically to align with Intel® Gaudi® software releases. The stable version is released with a tag and supports fully validated features and performance optimizations in Gaudi's vLLM-fork. To install the stable release from HabanaAI/vLLM-fork, run the following:
$ git clone https://github.com/HabanaAI/vllm-fork.git
$ cd vllm-fork
$ git checkout v0.7.2+Gaudi-1.21.0
$ pip install -r requirements-hpu.txt
$ python setup.py develop
2. Build and Install the latest from vLLM-fork
Currently, the latest features and performance optimizations are developed in Gaudi's vLLM-fork and periodically upstreamed to the main vLLM repository. To install the latest HabanaAI/vLLM-fork, run the following:
$ git clone https://github.com/HabanaAI/vllm-fork.git
$ cd vllm-fork
$ git checkout habana_main
$ pip install --upgrade pip
$ pip install -r requirements-hpu.txt
$ python setup.py develop
3. Build and Install from the vLLM main source
If you prefer to build and install directly from the main vLLM source, to which new features are periodically upstreamed, run the following:
$ git clone https://github.com/vllm-project/vllm.git
$ cd vllm
$ pip install -r requirements-hpu.txt
$ python setup.py develop
Supported Features
| Feature | Description | References |
|---|---|---|
| Offline batched inference | Offline inference using the `LLM` class from the vLLM Python API | Quickstart Example |
| Online inference via OpenAI-Compatible Server | Online inference using an HTTP server that implements the OpenAI Chat and Completions API | Documentation Example |
| HPU autodetection | HPU users do not need to specify the target platform; it is detected automatically upon vLLM startup | N/A |
| Paged KV cache with algorithms enabled for Intel Gaudi accelerators | The vLLM HPU backend contains custom Paged Attention and cache operator implementations optimized for Gaudi devices | N/A |
| Custom Intel Gaudi operator implementations | The vLLM HPU backend provides optimized implementations of operators such as prefill attention, Root Mean Square Layer Normalization, and Rotary Positional Encoding | N/A |
| Tensor parallel inference (single or multi-node multi-HPU) | The vLLM HPU backend supports multi-HPU inference across multiple nodes with tensor parallelism using multiprocessing or Ray and HCCL | Documentation Example HCCL reference |
| Pipeline parallel inference (single or multi-node multi-HPU) | The vLLM HPU backend supports multi-HPU inference across single or multiple nodes with pipeline parallelism | Documentation Running Pipeline Parallelism |
| Inference with HPU Graphs | The vLLM HPU backend uses HPU Graphs by default for optimal performance. When HPU Graphs are enabled, execution graphs are recorded ahead of time and replayed later during inference, significantly reducing host overheads | Documentation vLLM HPU backend execution modes Optimization guide |
| Inference with torch.compile | The vLLM HPU backend supports inference with `torch.compile` | vLLM HPU backend execution modes |
| INC quantization | The vLLM HPU backend supports FP8 model and KV cache quantization and calibration with Intel Neural Compressor (INC). (Not fully supported with the torch.compile execution mode) | Documentation |
| AutoAWQ quantization | The vLLM HPU backend supports inference with models quantized using the AutoAWQ library | Library |
| AutoGPTQ quantization | The vLLM HPU backend supports inference with models quantized using the AutoGPTQ library | Library |
| LoRA/MultiLoRA support | The vLLM HPU backend includes support for LoRA and MultiLoRA on supported models | Documentation Example vLLM supported models |
| Multi-step scheduling support | The vLLM HPU backend includes multi-step scheduling support for host overhead reduction, configurable via the standard `--num-scheduler-steps` parameter | Feature RFC |
| Automatic prefix caching | The vLLM HPU backend includes automatic prefix caching (APC) support for more efficient prefills, configurable via the standard `--enable-prefix-caching` parameter | Documentation Details |
| Speculative decoding (functional release) | The vLLM HPU backend includes experimental speculative decoding support for improving inter-token latency in some scenarios, configurable via the standard `--speculative_model` and `--num_speculative_tokens` parameters. (Not fully supported with the torch.compile execution mode) | Documentation Example |
| Multiprocessing backend | Multiprocessing is the default distributed runtime in vLLM. The vLLM HPU backend supports it alongside Ray | Documentation |
| Multimodal | The vLLM HPU backend supports inference for multi-modal models. (Not fully supported with the torch.compile execution mode) | Documentation |
| Multinode support | The vLLM HPU backend supports distributed, multi-node inference with Ray | |
| vLLM v1 architecture (early release) | The V1 architecture is now available for the HPU backend and will be gradually enabled for every use case we plan to support | Documentation |
| Guided decode | The vLLM HPU backend supports a guided decoding backend for generating structured outputs | Documentation |
| Delayed Sampling (experimental) | The vLLM HPU backend supports delayed sampling scheduling for asynchronous execution, enabled via the `VLLM_DELAYED_SAMPLING=true` environment variable | N/A |
| Exponential bucketing (experimental) | The vLLM HPU backend supports exponential bucket spacing instead of linear to automate configuration of the bucketing mechanism, enabled via the `VLLM_EXPONENTIAL_BUCKETING=true` environment variable | N/A |
Note

All specified features are also supported with the `--enforce-eager` flag.
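For reference, here is a minimal sketch of the offline batched inference path listed in the table above, using the `LLM` class from the vLLM Python API; the model name is only an example and can be replaced with any supported model:

```python
from vllm import LLM, SamplingParams

# HPU is autodetected, so no target platform needs to be specified.
prompts = ["Hello, my name is", "The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")  # example model, adjust as needed
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```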
Unsupported Features
- Beam search
- Prefill chunking (mixed-batch inferencing)
Validated Models and Configurations
The following configurations have been validated to function with Gaudi 2 or Gaudi 3 devices with random or greedy sampling. Configurations that are not listed may or may not work.
Performance Tuning
Execution Modes
Currently, vLLM for HPU supports four execution modes, determined by the selected HPU PyTorch Bridge backend (via the `PT_HPU_LAZY_MODE` environment variable) and the `--enforce-eager` flag.

| `PT_HPU_LAZY_MODE` | `enforce_eager` | Execution Mode |
|---|---|---|
| 0 | 0 | torch.compile |
| 0 | 1 | PyTorch eager mode |
| 1 | 0 | HPU Graphs |
| 1 | 1 | PyTorch lazy mode |
Note
Starting with the 1.21.0 Intel Gaudi software release, the `torch.compile` execution mode is the default for vLLM. HPU Graphs mode remains supported to ensure backward compatibility. Please verify the compatibility of the `torch.compile` mode with the information in the Supported Features table.
Tip
We recommend experimenting with the `PT_HPU_LAZY_MODE` environment variable to determine whether HPU Graphs or `torch.compile` mode performs better for your specific use case. While both modes generally deliver comparable performance, certain edge cases may favor one over the other.
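For offline experiments, the execution mode can be selected as in the sketch below. This is illustrative only: `PT_HPU_LAZY_MODE` typically needs to be set before vLLM and the HPU PyTorch bridge are imported, and the model name is a placeholder:

```python
import os

# The value pairs below correspond to the rows of the execution mode table above.
os.environ.setdefault("PT_HPU_LAZY_MODE", "1")  # 1 = lazy backend, 0 = eager / torch.compile backend

from vllm import LLM

# With PT_HPU_LAZY_MODE=1 and enforce_eager=False, vLLM runs in HPU Graphs mode;
# enforce_eager=True would select PyTorch lazy mode instead.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct", enforce_eager=False)
```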
Bucketing Mechanism
Intel Gaudi accelerators perform best when operating on models with fixed tensor shapes. Intel Gaudi Graph Compiler generates optimized binary code that implements the given model topology on Gaudi. In its default configuration, the produced binary code may be highly dependent on input and output tensor shapes, requiring graph recompilation when encountering tensors with different shapes within the same topology. While these binaries efficiently utilize Gaudi, the compilation process itself can introduce noticeable overhead in end-to-end execution. In dynamic inference serving scenarios, minimizing the number of graph compilations and reducing the risk of graph compilation occurring during server runtime is important. Currently, this is achieved by "bucketing" the model's forward pass across two dimensions: `batch_size` and `sequence_length`.
Note
Bucketing helps significantly reduce the number of required graphs, but does not handle graph compilation or device code generation. These tasks are performed during the warmup and HPUGraph capture phase.
Bucketing ranges are determined by three parameters - `min`, `step`, and `max`. They can be set separately for the prompt and decode phase, and for the batch size and sequence length dimensions. These parameters can be observed in logs during vLLM startup:
INFO 08-01 21:37:59 hpu_model_runner.py:493] Prompt bucket config (min, step, max_warmup) bs:[1, 32, 4], seq:[128, 128, 1024]
INFO 08-01 21:37:59 hpu_model_runner.py:499] Generated 24 prompt buckets: [(1, 128), (1, 256), (1, 384), (1, 512), (1, 640), (1, 768), (1, 896), (1, 1024), (2, 128), (2, 256), (2, 384), (2, 512), (2, 640), (2, 768), (2, 896), (2, 1024), (4, 128), (4, 256), (4, 384), (4, 512), (4, 640), (4, 768), (4, 896), (4, 1024)]
INFO 08-01 21:37:59 hpu_model_runner.py:504] Decode bucket config (min, step, max_warmup) bs:[1, 128, 4], seq:[128, 128, 2048]
INFO 08-01 21:37:59 hpu_model_runner.py:509] Generated 48 decode buckets: [(1, 128), (1, 256), (1, 384), (1, 512), (1, 640), (1, 768), (1, 896), (1, 1024), (1, 1152), (1, 1280), (1, 1408), (1, 1536), (1, 1664), (1, 1792), (1, 1920), (1, 2048), (2, 128), (2, 256), (2, 384), (2, 512), (2, 640), (2, 768), (2, 896), (2, 1024), (2, 1152), (2, 1280), (2, 1408), (2, 1536), (2, 1664), (2, 1792), (2, 1920), (2, 2048), (4, 128), (4, 256), (4, 384), (4, 512), (4, 640), (4, 768), (4, 896), (4, 1024), (4, 1152), (4, 1280), (4, 1408), (4, 1536), (4, 1664), (4, 1792), (4, 1920), (4, 2048)]
`min` determines the lowest value of the bucket. `step` determines the interval between buckets, and `max` determines the upper bound of the bucket. Furthermore, the interval between `min` and `step` has special handling - `min` gets multiplied by consecutive powers of two, until the multiplier is less than or equal to `step`. We call this the ramp-up phase, and it is used for handling lower batch sizes with minimum wastage, while allowing larger padding on larger batch sizes.
Example with ramp-up
min = 2, step = 32, max = 64
=> ramp_up = (2, 4, 8, 16)
=> stable = (32, 64)
=> buckets = ramp_up + stable => (2, 4, 8, 16, 32, 64)
Example without ramp-up
min = 128, step = 128, max = 512
=> ramp_up = ()
=> stable = (128, 256, 384, 512)
=> buckets = ramp_up + stable => (128, 256, 384, 512)
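The scheme above can be summarized in a short Python sketch. This is an illustration of the ramp-up and stable phases under the stated assumptions, not the exact vLLM implementation:

```python
from itertools import product

def bucket_range(min_v: int, step: int, max_v: int) -> list[int]:
    """Generate bucket boundaries for one dimension given (min, step, max)."""
    # Ramp-up phase: multiply `min` by consecutive powers of two while below `step`.
    ramp_up, value = [], min_v
    while value < step and value <= max_v:
        ramp_up.append(value)
        value *= 2
    # Stable phase: every multiple of `step` up to `max`.
    stable = range(step, max_v + 1, step)
    return sorted(set(ramp_up) | set(stable))

print(bucket_range(2, 32, 64))      # [2, 4, 8, 16, 32, 64]
print(bucket_range(128, 128, 512))  # [128, 256, 384, 512]

# The Cartesian product of both dimensions reproduces the 24 prompt buckets logged above.
prompt_buckets = list(product(bucket_range(1, 32, 4), bucket_range(128, 128, 1024)))
print(len(prompt_buckets))  # 24
```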
In the logged scenario, 24 buckets were generated for prompt (prefill) runs, and 48 buckets for decode runs. Each bucket corresponds to a separate optimized device binary for a given model with specified tensor shapes. Whenever a batch of requests is processed, it is padded across batch and sequence length dimension to the smallest possible bucket.
Warning
If a request exceeds the maximum bucket size in any dimension, it will be processed without padding, and its processing may require a graph compilation, potentially significantly increasing end-to-end latency. The boundaries of the buckets are user-configurable via environment variables, and upper bucket boundaries can be increased to avoid such scenarios.
For example, if a request with 3 sequences, each having a maximum sequence length of 412, is sent to an idle vLLM server, it will be padded and executed as a `(4, 512)` prefill bucket. This is because the `batch_size` (number of sequences) will be padded to 4 (the nearest batch size dimension higher than 3), and the maximum sequence length will be padded to 512 (the nearest sequence length dimension higher than 412). After the prefill stage, it will be executed as a `(4, 512)` decode bucket and will remain in this bucket until either the batch dimension changes (e.g., due to a request being completed), in which case it will become a `(2, 512)` bucket, or the context length increases beyond 512 tokens, in which case it will become a `(4, 640)` bucket.
Note
Bucketing is transparent to the user – padding in the sequence length dimension is never returned, and padding in the batch dimension does not create new requests.
Warmup
Warmup is an optional but highly recommended step that occurs before the vLLM server starts listening. It executes a forward pass for each bucket using dummy data. The goal is to pre-compile all graphs and avoid any graph compilation overhead within bucket boundaries during server runtime. Each warmup step is logged during vLLM startup.
This example uses the same buckets as those in the Bucketing Mechanism section. Each output line corresponds to the execution of a single bucket. When a bucket is executed for the first time, its graph is compiled and can be reused later, avoiding further graph compilations.
INFO 08-01 22:26:47 hpu_model_runner.py:1066] [Warmup][Prompt][1/24] batch_size:4 seq_len:1024 free_mem:79.16 GiB
INFO 08-01 22:26:47 hpu_model_runner.py:1066] [Warmup][Prompt][2/24] batch_size:4 seq_len:896 free_mem:55.43 GiB
INFO 08-01 22:26:48 hpu_model_runner.py:1066] [Warmup][Prompt][3/24] batch_size:4 seq_len:768 free_mem:55.43 GiB
...
INFO 08-01 22:26:59 hpu_model_runner.py:1066] [Warmup][Prompt][24/24] batch_size:1 seq_len:128 free_mem:55.43 GiB
INFO 08-01 22:27:00 hpu_model_runner.py:1066] [Warmup][Decode][1/48] batch_size:4 seq_len:2048 free_mem:55.43 GiB
INFO 08-01 22:27:00 hpu_model_runner.py:1066] [Warmup][Decode][2/48] batch_size:4 seq_len:1920 free_mem:55.43 GiB
INFO 08-01 22:27:01 hpu_model_runner.py:1066] [Warmup][Decode][3/48] batch_size:4 seq_len:1792 free_mem:55.43 GiB
...
INFO 08-01 22:27:16 hpu_model_runner.py:1066] [Warmup][Decode][47/48] batch_size:2 seq_len:128 free_mem:55.43 GiB
INFO 08-01 22:27:16 hpu_model_runner.py:1066] [Warmup][Decode][48/48] batch_size:1 seq_len:128 free_mem:55.43 GiB
Tip
Compiling all the buckets may take some time and can be disabled by setting the `VLLM_SKIP_WARMUP=true` environment variable. Remember that if you do this, you may encounter graph compilations when executing a given bucket for the first time.
Warning
Disabling warmup is fine for development, but it is highly recommended to enable it in deployment.
HPU Graph Capture
HPU Graphs are currently the most performant execution method of vLLM on Intel Gaudi. When HPU Graphs are enabled, execution graphs will be traced (recorded) ahead of time (after performing warmup), to be later replayed during inference, significantly reducing host overheads. Recording can take large amounts of memory, which needs to be taken into account when allocating KV cache. Enabling HPU Graphs will impact the number of available KV cache blocks, but vLLM provides user-configurable variables to control memory management.
When HPU Graphs are used, they share the common memory pool ("usable memory") with the KV cache, as determined by the `gpu_memory_utilization` flag (default value is `0.9`). Before the KV cache is allocated, the model weights are loaded onto the device, and a forward pass of the model is executed on dummy data to estimate memory usage. Only after that is the `gpu_memory_utilization` flag applied. At its default value, it marks 90% of the free device memory at that point as usable. Next, the KV cache is allocated, the model is warmed up, and HPU Graphs are captured. The `VLLM_GRAPH_RESERVED_MEM` environment variable defines the ratio of memory reserved for HPU Graph capture. With its default value (`VLLM_GRAPH_RESERVED_MEM=0.1`), 10% of the usable memory will be reserved for graph capture (referred to as "usable graph memory"), and the remaining 90% will be used for the KV cache. The environment variable `VLLM_GRAPH_PROMPT_RATIO` determines the ratio of usable graph memory reserved for prefill and decode graphs. By default (`VLLM_GRAPH_PROMPT_RATIO=0.3`), 30% of the usable graph memory is reserved for prefill graphs and 70% for decode graphs. A lower value corresponds to less usable graph memory reserved for the prefill stage. For example, setting `VLLM_GRAPH_PROMPT_RATIO=0.2` reserves 20% of usable graph memory for prefill graphs, while 80% is allocated for decode graphs.
Note
`gpu_memory_utilization` does not represent the absolute memory usage across the HPU. Instead, it specifies the memory margin after loading the model and running a profile. For example, if a device has 100 GiB of total memory and 50 GiB of free memory after loading the model weights and executing the profiling run, the default value of `gpu_memory_utilization` will mark 90% of the 50 GiB as usable, leaving 5 GiB as a margin, regardless of the total device memory.
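The memory split described above can be illustrated with a small, hypothetical calculation. The numbers reuse the 100 GiB / 50 GiB example from the note; real values depend on the model and device:

```python
# Hypothetical numbers: 50 GiB of device memory is free after loading the weights
# and running the profiling pass.
free_after_profiling_gib = 50.0
gpu_memory_utilization = 0.9   # default
graph_reserved_mem = 0.1       # default VLLM_GRAPH_RESERVED_MEM
graph_prompt_ratio = 0.3       # default VLLM_GRAPH_PROMPT_RATIO

usable = free_after_profiling_gib * gpu_memory_utilization  # 45.0 GiB usable memory
graph_mem = usable * graph_reserved_mem                      # 4.5 GiB for HPU Graph capture
kv_cache_mem = usable - graph_mem                            # 40.5 GiB for the KV cache
prompt_graph_mem = graph_mem * graph_prompt_ratio            # 1.35 GiB for prefill graphs
decode_graph_mem = graph_mem - prompt_graph_mem              # 3.15 GiB for decode graphs

print(usable, graph_mem, kv_cache_mem, prompt_graph_mem, decode_graph_mem)
```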
You can also configure the strategy for capturing HPU Graphs separately for the prompt and decode stages. The strategy affects the order in which graphs are captured. Two strategies are implemented:

- `max_bs` - The graph capture queue is sorted in descending order by batch size. Buckets with equal batch sizes are sorted by sequence length in ascending order (e.g., `(64, 128)`, `(64, 256)`, `(32, 128)`, `(32, 256)`, `(1, 128)`, `(1, 256)`). This is the default strategy for decode.
- `min_tokens` - The graph capture queue is sorted in ascending order by the number of tokens each graph processes (`batch_size*sequence_length`). This is the default strategy for prompt.
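The two capture orders can be illustrated with plain Python sorting; the bucket list below is the example from the `max_bs` description above:

```python
buckets = [(1, 128), (1, 256), (32, 128), (32, 256), (64, 128), (64, 256)]

# max_bs: descending batch size, then ascending sequence length (default for decode).
max_bs_order = sorted(buckets, key=lambda b: (-b[0], b[1]))
# -> [(64, 128), (64, 256), (32, 128), (32, 256), (1, 128), (1, 256)]

# min_tokens: ascending number of processed tokens, batch_size * sequence_length (default for prompt).
min_tokens_order = sorted(buckets, key=lambda b: b[0] * b[1])
# -> [(1, 128), (1, 256), (32, 128), (32, 256), (64, 128), (64, 256)]
```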
When many requests are pending, the vLLM scheduler attempts to fill the maximum batch size for decoding as quickly as possible. Once a request is finished, the decode batch size decreases. When this happens, vLLM attempts to schedule a prefill iteration for requests in the waiting queue to restore the decode batch size to its previous state. In a fully loaded scenario, the decode batch size is often at its maximum, making large-batch HPU Graphs critical to capture, as reflected by the `max_bs` strategy. Conversely, prefill iterations will typically be executed with very low batch sizes (1-4), as reflected in the `min_tokens` strategy.
Note
`VLLM_GRAPH_PROMPT_RATIO` does not set a hard limit on the memory allocated for graphs in each stage (prefill and decode). vLLM first attempts to use the entire usable prefill graph memory (usable graph memory * `VLLM_GRAPH_PROMPT_RATIO`) for capturing prefill HPU Graphs. It will then attempt to do the same for decode graphs and the usable decode graph memory pool. If one stage is fully captured and there is unused memory remaining in the usable graph memory pool, vLLM will attempt to capture more graphs for the other stage, until no more HPU Graphs can be captured without exceeding the reserved memory pool. The behavior of this mechanism is illustrated in the example below.
Each step outlined is logged by the vLLM server, with negative values indicating memory release:
INFO 08-02 17:37:44 hpu_model_runner.py:493] Prompt bucket config (min, step, max_warmup) bs:[1, 32, 4], seq:[128, 128, 1024]
INFO 08-02 17:37:44 hpu_model_runner.py:499] Generated 24 prompt buckets: [(1, 128), (1, 256), (1, 384), (1, 512), (1, 640), (1, 768), (1, 896), (1, 1024), (2, 128), (2, 256), (2, 384), (2, 512), (2, 640), (2, 768), (2, 896), (2, 1024), (4, 128), (4, 256), (4, 384), (4, 512), (4, 640), (4, 768), (4, 896), (4, 1024)]
INFO 08-02 17:37:44 hpu_model_runner.py:504] Decode bucket config (min, step, max_warmup) bs:[1, 128, 4], seq:[128, 128, 2048]
INFO 08-02 17:37:44 hpu_model_runner.py:509] Generated 48 decode buckets: [(1, 128), (1, 256), (1, 384), (1, 512), (1, 640), (1, 768), (1, 896), (1, 1024), (1, 1152), (1, 1280), (1, 1408), (1, 1536), (1, 1664), (1, 1792), (1, 1920), (1, 2048), (2, 128), (2, 256), (2, 384), (2, 512), (2, 640), (2, 768), (2, 896), (2, 1024), (2, 1152), (2, 1280), (2, 1408), (2, 1536), (2, 1664), (2, 1792), (2, 1920), (2, 2048), (4, 128), (4, 256), (4, 384), (4, 512), (4, 640), (4, 768), (4, 896), (4, 1024), (4, 1152), (4, 1280), (4, 1408), (4, 1536), (4, 1664), (4, 1792), (4, 1920), (4, 2048)]
INFO 08-02 17:37:52 hpu_model_runner.py:430] Pre-loading model weights on hpu:0 took 14.97 GiB of device memory (14.97 GiB/94.62 GiB used) and 2.95 GiB of host memory (475.2 GiB/1007 GiB used)
INFO 08-02 17:37:52 hpu_model_runner.py:438] Wrapping in HPU Graph took 0 B of device memory (14.97 GiB/94.62 GiB used) and -252 KiB of host memory (475.2 GiB/1007 GiB used)
INFO 08-02 17:37:52 hpu_model_runner.py:442] Loading model weights took in total 14.97 GiB of device memory (14.97 GiB/94.62 GiB used) and 2.95 GiB of host memory (475.2 GiB/1007 GiB used)
INFO 08-02 17:37:54 hpu_worker.py:134] Model profiling run took 504 MiB of device memory (15.46 GiB/94.62 GiB used) and 180.9 MiB of host memory (475.4 GiB/1007 GiB used)
INFO 08-02 17:37:54 hpu_worker.py:158] Free device memory: 79.16 GiB, 39.58 GiB usable (gpu_memory_utilization=0.5), 15.83 GiB reserved for HPUGraphs (VLLM_GRAPH_RESERVED_MEM=0.4), 23.75 GiB reserved for KV cache
INFO 08-02 17:37:54 hpu_executor.py:85] # HPU blocks: 1519, # CPU blocks: 0
INFO 08-02 17:37:54 hpu_worker.py:190] Initializing cache engine took 23.73 GiB of device memory (39.2 GiB/94.62 GiB used) and -1.238 MiB of host memory (475.4 GiB/1007 GiB used)
INFO 08-02 17:37:54 hpu_model_runner.py:1066] [Warmup][Prompt][1/24] batch_size:4 seq_len:1024 free_mem:55.43 GiB
...
INFO 08-02 17:38:22 hpu_model_runner.py:1066] [Warmup][Decode][48/48] batch_size:1 seq_len:128 free_mem:55.43 GiB
INFO 08-02 17:38:22 hpu_model_runner.py:1159] Using 15.85 GiB/55.43 GiB of free device memory for HPUGraphs, 4.755 GiB for prompt and 11.095 GiB for decode (VLLM_GRAPH_PROMPT_RATIO=0.3)
INFO 08-02 17:38:22 hpu_model_runner.py:1066] [Warmup][Graph/Prompt][1/24] batch_size:1 seq_len:128 free_mem:55.43 GiB
...
INFO 08-02 17:38:26 hpu_model_runner.py:1066] [Warmup][Graph/Prompt][11/24] batch_size:1 seq_len:896 free_mem:48.77 GiB
INFO 08-02 17:38:27 hpu_model_runner.py:1066] [Warmup][Graph/Decode][1/48] batch_size:4 seq_len:128 free_mem:47.51 GiB
...
INFO 08-02 17:38:41 hpu_model_runner.py:1066] [Warmup][Graph/Decode][48/48] batch_size:1 seq_len:2048 free_mem:47.35 GiB
INFO 08-02 17:38:41 hpu_model_runner.py:1066] [Warmup][Graph/Prompt][12/24] batch_size:4 seq_len:256 free_mem:47.35 GiB
INFO 08-02 17:38:42 hpu_model_runner.py:1066] [Warmup][Graph/Prompt][13/24] batch_size:2 seq_len:512 free_mem:45.91 GiB
INFO 08-02 17:38:42 hpu_model_runner.py:1066] [Warmup][Graph/Prompt][14/24] batch_size:1 seq_len:1024 free_mem:44.48 GiB
INFO 08-02 17:38:43 hpu_model_runner.py:1066] [Warmup][Graph/Prompt][15/24] batch_size:2 seq_len:640 free_mem:43.03 GiB
INFO 08-02 17:38:43 hpu_model_runner.py:1128] Graph/Prompt captured:15 (62.5%) used_mem:14.03 GiB buckets:[(1, 128), (1, 256), (1, 384), (1, 512), (1, 640), (1, 768), (1, 896), (1, 1024), (2, 128), (2, 256), (2, 384), (2, 512), (2, 640), (4, 128), (4, 256)]
INFO 08-02 17:38:43 hpu_model_runner.py:1128] Graph/Decode captured:48 (100.0%) used_mem:161.9 MiB buckets:[(1, 128), (1, 256), (1, 384), (1, 512), (1, 640), (1, 768), (1, 896), (1, 1024), (1, 1152), (1, 1280), (1, 1408), (1, 1536), (1, 1664), (1, 1792), (1, 1920), (1, 2048), (2, 128), (2, 256), (2, 384), (2, 512), (2, 640), (2, 768), (2, 896), (2, 1024), (2, 1152), (2, 1280), (2, 1408), (2, 1536), (2, 1664), (2, 1792), (2, 1920), (2, 2048), (4, 128), (4, 256), (4, 384), (4, 512), (4, 640), (4, 768), (4, 896), (4, 1024), (4, 1152), (4, 1280), (4, 1408), (4, 1536), (4, 1664), (4, 1792), (4, 1920), (4, 2048)]
INFO 08-02 17:38:43 hpu_model_runner.py:1206] Warmup finished in 49 secs, allocated 14.19 GiB of device memory
INFO 08-02 17:38:43 hpu_executor.py:91] init_cache_engine took 37.92 GiB of device memory (53.39 GiB/94.62 GiB used) and 57.86 MiB of host memory (475.4 GiB/1007 GiB used)
Recommended vLLM Parameters
- It is recommended to run inference on Gaudi 2 with a `block_size` of 128 for the BF16 data type. Using the default values (16, 32) may result in suboptimal performance due to underutilization of the Matrix Multiplication Engine (see Gaudi Architecture).
- To achieve maximum throughput on Llama 7B, it is recommended to use a batch size of 128 or 256 and a maximum context length of 2048 with HPU Graphs enabled. If you experience out-of-memory issues, please refer to the Troubleshooting section below. A minimal offline sketch with these settings is shown after this list.
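The following sketch applies the recommendations above through the Python API; the model name is a placeholder, and the same values can be passed as CLI flags when serving online:

```python
from vllm import LLM

# block_size=128 and a 2048-token context with HPU Graphs enabled (the default),
# as recommended above for BF16 on Gaudi 2.
llm = LLM(
    model="meta-llama/Llama-2-7b-hf",  # example model, adjust as needed
    dtype="bfloat16",
    block_size=128,
    max_model_len=2048,
    max_num_seqs=128,                  # batch size of 128 (or 256) for maximum throughput
)
```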
Environment Variables
Diagnostic and Profiling Knobs:
- `VLLM_PROFILER_ENABLED`: if `true` - enables the high-level profiler. Resulting JSON traces can be viewed at perfetto.habana.ai. Disabled by default.
- `VLLM_HPU_LOG_STEP_GRAPH_COMPILATION`: if `true` - logs graph compilations for each vLLM engine step, but only if any compilation occurs. It is highly recommended to use this in conjunction with `PT_HPU_METRICS_GC_DETAILS=1`. Disabled by default.
- `VLLM_HPU_LOG_STEP_GRAPH_COMPILATION_ALL`: if `true` - logs graph compilations for every vLLM engine step, even if no compilation occurs. Disabled by default.
- `VLLM_HPU_LOG_STEP_CPU_FALLBACKS`: if `true` - logs CPU fallbacks for each vLLM engine step, but only if any fallback occurs. Disabled by default.
- `VLLM_HPU_LOG_STEP_CPU_FALLBACKS_ALL`: if `true` - logs CPU fallbacks for each vLLM engine step, even if no fallback occurs. Disabled by default.
- `VLLM_T_COMPILE_FULLGRAPH`: if `true` - the PyTorch compile function raises an error if any graph break happens during compilation. This allows for easy detection of existing graph breaks, which usually reduce performance. Disabled by default.
Performance Tuning Knobs:
- `VLLM_SKIP_WARMUP`: if `true`, warmup is skipped. The default is `false`.
- `VLLM_GRAPH_RESERVED_MEM`: percentage of memory dedicated to HPUGraph capture. The default is `0.1`.
- `VLLM_GRAPH_PROMPT_RATIO`: percentage of reserved graph memory dedicated to prompt graphs. The default is `0.3`.
- `VLLM_GRAPH_PROMPT_STRATEGY`: strategy determining the order of prompt graph capture, `min_tokens` or `max_bs`. The default is `min_tokens`.
- `VLLM_GRAPH_DECODE_STRATEGY`: strategy determining the order of decode graph capture, `min_tokens` or `max_bs`. The default is `max_bs`.
- `VLLM_EXPONENTIAL_BUCKETING`: if `true`, enables exponential bucket spacing instead of linear (experimental).
- `VLLM_{phase}_{dim}_BUCKET_{param}`: collection of 12 environment variables configuring ranges of the bucketing mechanism (linear bucketing only).
  - `{phase}` is either `PROMPT` or `DECODE`
  - `{dim}` is either `BS`, `SEQ` or `BLOCK`
  - `{param}` is either `MIN`, `STEP` or `MAX`
  - Default values:
    - Prompt:
      - batch size min (`VLLM_PROMPT_BS_BUCKET_MIN`): `1`
      - batch size step (`VLLM_PROMPT_BS_BUCKET_STEP`): `min(max_num_seqs, 32)`
      - batch size max (`VLLM_PROMPT_BS_BUCKET_MAX`): `min(max_num_seqs, 64)`
      - sequence length min (`VLLM_PROMPT_SEQ_BUCKET_MIN`): `block_size`
      - sequence length step (`VLLM_PROMPT_SEQ_BUCKET_STEP`): `block_size`
      - sequence length max (`VLLM_PROMPT_SEQ_BUCKET_MAX`): `1024`
    - Decode:
      - batch size min (`VLLM_DECODE_BS_BUCKET_MIN`): `1`
      - batch size step (`VLLM_DECODE_BS_BUCKET_STEP`): `min(max_num_seqs, 32)`
      - batch size max (`VLLM_DECODE_BS_BUCKET_MAX`): `max_num_seqs`
      - block size min (`VLLM_DECODE_BLOCK_BUCKET_MIN`): `block_size`
      - block size step (`VLLM_DECODE_BLOCK_BUCKET_STEP`): `block_size`
      - block size max (`VLLM_DECODE_BLOCK_BUCKET_MAX`): `max(128, (max_num_seqs*2048)/block_size)`
  - Recommended values:
    - Prompt:
      - sequence length max (`VLLM_PROMPT_SEQ_BUCKET_MAX`): `max_model_len`
    - Decode:
      - block size max (`VLLM_DECODE_BLOCK_BUCKET_MAX`): `max(128, (max_num_seqs*max_model_len)/block_size)`

Note

If the model config reports a high `max_model_len`, set it to the maximum of `input_tokens+output_tokens`, rounded up to a multiple of `block_size`, as per actual requirements.
Tip
When a deployed workload does not utilize the full context that a model can handle, it is good practice to limit the maximum values upfront based on the input and output token lengths that will be generated after serving the vLLM server.
Example:
Let's assume that we want to deploy the text generation model Qwen2.5-1.5B, which has a defined `max_position_embeddings` of 131072 (our `max_model_len`). At the same time, we know that our workload pattern will not use the full context length because we expect a maximum input token size of 1K and predict generating a maximum of 2K tokens as output. In this case, starting the vLLM server to be ready for the full context length is unnecessary. Instead, we should limit it upfront to achieve faster service preparation and decrease warmup time. The recommended values in this example should be:

- `--max_model_len`: `3072` - the sum of input and output sequences (1+2)*1024.
- `VLLM_PROMPT_SEQ_BUCKET_MAX`: `1024` - the maximum input token size that we expect to handle.

A short calculation sketch illustrating these values is shown after this list.
- `VLLM_HANDLE_TOPK_DUPLICATES`: if `true` - handles duplicates outside top-k. The default is `false`.
- `VLLM_CONFIG_HIDDEN_LAYERS`: configures how many hidden layers to run in a HPUGraph for model splitting among hidden layers when TP is 1. It helps to improve throughput by reducing inter-token latency limitations in some models. The default is `1`.
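Putting the recommended values together, the bucket limits for the Qwen2.5-1.5B example above can be derived with simple arithmetic. This is a hedged sketch; `max_num_seqs` is a hypothetical choice, and the actual values depend on the workload:

```python
# Expected workload: at most 1K input tokens and 2K output tokens per request.
max_input_tokens = 1024
max_output_tokens = 2048
block_size = 128
max_num_seqs = 32  # hypothetical serving batch size

max_model_len = max_input_tokens + max_output_tokens  # 3072 -> --max_model_len
prompt_seq_bucket_max = max_input_tokens              # 1024 -> VLLM_PROMPT_SEQ_BUCKET_MAX
# Recommended decode block bucket upper bound, from the formula above.
decode_block_bucket_max = max(128, (max_num_seqs * max_model_len) // block_size)  # 768

print(max_model_len, prompt_seq_bucket_max, decode_block_bucket_max)
```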
Additionally, there are HPU PyTorch Bridge environment variables impacting vLLM execution:
- `PT_HPU_LAZY_MODE`: if `0`, the PyTorch Eager backend for Gaudi will be used; if `1`, the PyTorch Lazy backend for Gaudi will be used. The default is `0`.
- `PT_HPU_ENABLE_LAZY_COLLECTIVES`: must be set to `true` for tensor parallel inference with HPU Graphs. The default is `true`.
- `PT_HPUGRAPH_DISABLE_TENSOR_CACHE`: must be set to `false` for LLaVA, qwen, and RoBERTa models. The default is `false`.
- `VLLM_PROMPT_USE_FLEX_ATTENTION`: enabled only for the Llama model, allowing usage of `torch.nn.attention.flex_attention` instead of FusedSDPA. Requires `VLLM_PROMPT_USE_FUSEDSDPA=0`. The default is `false`.
Quantization, FP8 Inference and Model Calibration Process
Note
Measurement files are required to run quantized models with vLLM on Gaudi accelerators. The FP8 model calibration procedure is described in detail in the docs.habana.ai vLLM Inference Section. An end-to-end example tutorial for quantizing a BF16 Llama 3.1 model to FP8 and then running inference is provided in the Gaudi-tutorials repository.
Once you have completed the model calibration process and collected the measurements, you can run FP8 inference with vLLM using the following command:
export QUANT_CONFIG=/path/to/quant/config/inc/meta-llama-3.1-405b-instruct/maxabs_quant_g3.json
vllm serve meta-llama/Llama-3.1-405B-Instruct --dtype bfloat16 --max-model-len 2048 --block-size 128 --max-num-seqs 32 --quantization inc --kv-cache-dtype fp8_inc --weights-load-device cpu --tensor-parallel-size 8
`QUANT_CONFIG` is an environment variable that points to the measurement or quantization configuration file. The measurement configuration file is used during the calibration procedure to collect measurements for a given model. The quantization configuration is used during inference.
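For offline experiments, the same settings can also be passed through the Python API. A hedged sketch, assuming the calibration has already been completed; the config path and model are placeholders, and tensor parallelism is sized for a node with 8 Gaudi devices:

```python
import os

# Points to the INC quantization configuration produced during calibration (placeholder path).
os.environ["QUANT_CONFIG"] = "/path/to/quant/config/inc/maxabs_quant.json"

from vllm import LLM

llm = LLM(
    model="meta-llama/Llama-3.1-405B-Instruct",  # example model, adjust as needed
    dtype="bfloat16",
    quantization="inc",
    kv_cache_dtype="fp8_inc",
    max_model_len=2048,
    block_size=128,
    max_num_seqs=32,
    tensor_parallel_size=8,
)
```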
Tip
If you are prototyping or testing your model with FP8, you can use the `VLLM_SKIP_WARMUP=true` environment variable to disable the warmup stage, which is time-consuming. However, disabling this feature in production environments is not recommended, as it can lead to a significant performance decrease.
Tip
If you are benchmarking an FP8 model with `scale_format=const`, setting `VLLM_DISABLE_MARK_SCALES_AS_CONST=true` can help speed up the warmup stage.
Tip
When using FP8 models, you may experience timeouts caused by the long compilation time of FP8 operations. To mitigate this, set the following environment variables:
- `VLLM_ENGINE_ITERATION_TIMEOUT_S` - to adjust the vLLM server timeout. The value is in seconds, e.g., 600 equals 10 minutes.
- `VLLM_RPC_TIMEOUT` - to adjust the RPC protocol timeout used by the OpenAI-compatible API. The value is in milliseconds, e.g., 600000 equals 10 minutes.
Long Context Support
The long context feature enables support for a token context window exceeding 128K tokens. It is supported by the following models:
- meta-llama/Llama-2-7b
- meta-llama/Llama-2-70b
- meta-llama/Meta-Llama-3-8B-Instruct
- meta-llama/Meta-Llama-3.1-8B-Instruct
- meta-llama/Meta-Llama-3-70B-Instruct
- meta-llama/Meta-Llama-3.1-70B-Instruct
Environment Variables Settings
Set the following environment variables to avoid OOM/functional issues. Additional environment variable settings depend on context length:
VLLM_ENGINE_ITERATION_TIMEOUT_S=3600
VLLM_RPC_TIMEOUT=100000
VLLM_PROMPT_USE_FUSEDSDPA=1
PT_HPU_ENABLE_LAZY_COLLECTIVES=true
PT_HPUGRAPH_DISABLE_TENSOR_CACHE=1
VLLM_ALLOW_LONG_MAX_MODEL_LEN=1
32K context length flags examples:
- `VLLM_GRAPH_RESERVED_MEM`: the value depends on the model and context length settings. Use `VLLM_GRAPH_RESERVED_MEM=0.02` for Llama3.1-8B or `VLLM_GRAPH_RESERVED_MEM=0.1` for Llama3.1-70B.
- `VLLM_PROMPT_BS_BUCKET_MIN=1`: suggested value, depends on the model. You can increase it until you reach an OOM error or decrease it if OOM occurs.
- `VLLM_PROMPT_BS_BUCKET_STEP=16`: suggested value, depends on the model. Increasing the step value results in fewer buckets. If an OOM error occurs, the value should be increased.
- `VLLM_PROMPT_BS_BUCKET_MAX=16`: suggested value, depends on the model. You can increase it until you reach an OOM error or decrease it if OOM occurs.
- `VLLM_PROMPT_SEQ_BUCKET_MIN=24576`: suggested value, depends on warmup results.
- `VLLM_PROMPT_SEQ_BUCKET_STEP=2048`: suggested value, depends on warmup results. It is recommended to increase it to a higher value for faster warmup. `VLLM_PROMPT_SEQ_BUCKET_STEP=16384` is the suggested value for Intel Gaudi 3.
- `VLLM_PROMPT_SEQ_BUCKET_MAX=32768`: value for a context length of 32K. Use 16384 for 16K.
- `VLLM_DECODE_BLOCK_BUCKET_MIN=1024`: suggested value, depends on warmup results.
- `VLLM_DECODE_BLOCK_BUCKET_STEP=1024`: suggested value, depends on warmup results.
- `VLLM_DECODE_BLOCK_BUCKET_MAX=33792`: `max_num_seqs * max_decode_seq // self.block_size`, where `max_decode_seq` represents the sum of input and output sequences. For example:
  - `128 *((32 + 1)* 1024) / 128`
  - `32 *((32 + 1)* 1024) / 128`
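The 33792 value above follows directly from the given formula; a small check in Python with the example numbers from the list:

```python
block_size = 128
max_decode_seq = (32 + 1) * 1024  # sum of input and output sequences: 33792 tokens

# max_num_seqs * max_decode_seq // block_size
print(128 * max_decode_seq // block_size)  # 33792, for max_num_seqs = 128
print(32 * max_decode_seq // block_size)   # 8448,  for max_num_seqs = 32
```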
Batch Size Settings
The default `batch_size=256` is not optimal for long contexts (8K+). Recompilations may occur if there is not enough KV cache space for some sequence groups.

If recompilation or next recomputation warnings appear during inference, reduce `batch_size` to improve stability.
Recompilation message example:
Configuration: (prompt, 1, 36864) was not warmed-up!
Warning message example:
Sequence group cmpl-3cbf19b0c6d74b3f90b5d5db2ed2385e-0 is preempted by PreemptionMode.RECOMPUTE mode because there is not enough KV cache space. This can affect the end-to-end performance. Increase gpu_memory_utilization or tensor_parallel_size to provide more KV cache memory.
Multi-Step Scheduling Feature Usage
Enabling Multi-Step Scheduling is recommended for better decode performance. Refer to vllm-project#6854 for more details.
Pipeline Parallelism
Pipeline parallelism is a distributed model parallelization technique that splits the model vertically across its layers, distributing different parts of the model across multiple devices. With this feature, when running a model that does not fit on a single node with tensor parallelism and requires a multi-node solution, we can split the model vertically across its layers and distribute the slices across available nodes. For example, if we have two nodes, each with 8 HPUs, we no longer have to use `tensor_parallel_size=16` but can instead set `tensor_parallel_size=8` with `pipeline_parallel_size=2`. Because pipeline parallelism runs `pp_size` virtual engines on each device, we have to lower `max_num_seqs` accordingly, since it acts as a micro batch for each virtual engine.
Running Pipeline Parallelism
The following example shows how to use Pipeline parallelism with vLLM on HPU:
vllm serve <model_path> --device hpu --tensor-parallel-size 8 --pipeline_parallel_size 2 --distributed-executor-backend ray
Note
Currently, pipeline parallelism on Lazy mode requires the `PT_HPUGRAPH_DISABLE_TENSOR_CACHE=0` flag.
Multi-node Support
vLLM works with a multi-node environment setup via Ray. To run models on multiple nodes, follow the procedure below.
Prerequisites
Perform the following on all nodes:
-
Install the latest vllm-fork.
-
Check if all Gaudi NIC ports are up by running:
Note
The following commands should be run on the host and NOT inside the container.
cd /opt/habanalabs/qual/gaudi2/bin
./manage_network_ifs.sh --status
# All the ports should be in 'up' state. Try flipping the state
./manage_network_ifs.sh --down
./manage_network_ifs.sh --up
# Give it a minute for the NICs to flip and check the status again
- Set the following flags:
# Check the network interface for outbound/inbound comms. Command 'ip a' or 'ifconfig' should list all the interfaces
export GLOO_SOCKET_IFNAME=eth0
export HCCL_SOCKET_IFNAME=eth0
1. Start Ray on the head node:
ray start --head --port=6379
2. Add workers to the Ray cluster:
ray start --address='<ip-of-ray-head-node>:6379'
3. Start the vLLM server:
vllm serve meta-llama/Llama-3.1-405B-Instruct --dtype bfloat16 --max-model-len 2048 --block-size 128 --max-num-seqs 32 --tensor-parallel-size 16 --distributed-executor-backend ray
Note
Running FP8 models with a multi-node setup is described in the documentation of FP8 calibration procedure: README.
Other Online Serving Examples
Please refer to this collection of static-batched online serving example scripts designed to help the user reproduce performance numbers with vLLM on Gaudi for various types of models and varying context lengths. Below is a list of the models and example scripts provided for 2K and 4K context length scenarios:
- deepseek-r1-distill-llama-70b_gaudi3_1.20_contextlen-2k
- deepseek-r1-distill-llama-70b_gaudi3_1.20_contextlen-4k
- llama-3.1-70b-instruct_gaudi3_1.20_contextlen-2k
- llama-3.1-70b-instruct_gaudi3_1.20_contextlen-4k
- llama-3.1-8b-instruct_gaudi3_1.20_contextlen-2k
- llama-3.1-8b-instruct_gaudi3_1.20_contextlen-4k
- llama-3.3-70b-instruct_gaudi3_1.20_contextlen-2k
- llama-3.3-70b-instruct_gaudi3_1.20_contextlen-4k
Troubleshooting
The following steps address Out-of-Memory related errors:

- Increase `gpu_memory_utilization` - this addresses insufficient overall memory. vLLM pre-allocates the HPU cache using `gpu_memory_utilization`% of device memory. By default, `gpu_memory_utilization` is set to 0.9. Increasing this utilization provides more KV cache space.
- Decrease `max_num_seqs` or `max_num_batched_tokens` - this may reduce the number of concurrent requests in a batch, thereby requiring less KV cache space when overall usable memory is limited.
- Increase `tensor_parallel_size` - this approach shards model weights, so each device has more memory available for the KV cache.
- To maximize the memory available for the KV cache, you can disable HPU Graphs completely. With HPU Graphs disabled, you are trading latency and throughput at lower batches for potentially higher throughput on higher batches. You can do that by adding the `--enforce-eager` flag to the server (for online inference), or by passing the `enforce_eager=True` argument to the LLM constructor (for offline inference). A minimal offline sketch is shown after this list.
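As mentioned in the last bullet, for offline inference the same knobs can be set on the `LLM` constructor. A minimal sketch; the model name and the specific values are placeholders to adjust for your setup:

```python
from vllm import LLM

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # example model, adjust as needed
    gpu_memory_utilization=0.9,  # increase to provide more KV cache space
    max_num_seqs=64,             # decrease to reduce concurrent requests per batch
    tensor_parallel_size=2,      # increase to shard weights across more devices
    enforce_eager=True,          # disables HPU Graphs to free memory for the KV cache
)
```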
What's Changed
- Re-integrate HPU after upstream refactors by @kzawora-intel in #20
- Fix model_output_idx on HPU by @madamczyk-intel in #27
- Allow block_sizes: 64 and 128 by @madamczyk-intel in #28
- Rebase habana_main up to cc466a3 by @kzawora-intel in #26
- WA: Disable cumsum in HPU _prepare_prompt by @kzawora-intel in #30
- bs/seq bucketing for prompt and decode by @madamczyk-intel in #33
- Cleanup: Fix HPU auto-detection in setup.py by @kzawora-intel in #34
- Cleanup: Restore int64 sampling by @kzawora-intel in #35
- Cleanup: Llama whitespace fix by @kzawora-intel in #36
- Cleanup: Restore pyproject.toml by @kzawora-intel in #37
- Add vLLM high-level profiler by @DamianSzwichtenberg in #29
- Add release docs for Gaudi by @kzawora-intel in #32
- Minor: update release tag in README by @kzawora-intel in #39
- Fix error with high-level profiler in multi-card scenario by @DamianSzwichtenberg in #38
- Static fused moe op by @jkaniecki in #41
- WA: Remove pyproject.toml, bypass HPU autodetection by @kzawora-intel in #45
- Use setuptools older than 70.0.0 by @madamczyk-intel in #42
- Add VLLM_SKIP_WARMUP flag by @madamczyk-intel in #43
- Graphs v2 by @madamczyk-intel in #44
- Remove usage of wrap_in_hpu_graph in PT eager by @kzawora-intel in #47
- Add HPU support to benchmark_latency and benchmark_throughput by @kzawora-intel in #49
- Use int32 seeds for random sampler on HPU by @kzawora-intel in #50
- Add host memory profiling to HabanaMemoryProfiler by @kzawora-intel in #51
- Bump ray version to 2.23.0 by @kzawora-intel in #52
- Skip incompatible tests with HPU by @afierka-intel in #46
- Enable PA_SPLIT_VALUE by default by @kzawora-intel in #54
- Add syncs in mixtral weight loader by @jkaniecki in #55
- HPU: Change KV-cache layout by @madamczyk-intel in #56
- Add more detailed event names to profiler by @kzawora-intel in #57
- Disable value splitting by default on G3 by @madamczyk-intel in #58
- Fix for OOM in Llama 70b by @tzielinski-habana in #60
- Enable high-level profiler on multiple instances by @DamianSzwichtenberg in #61
- Add mark steps to prevent OOM in static moe op by @jkaniecki in #65
- Add Mistal&Mixtral supported configurations by @szutenberg in #64
- Normalize router weights in MoE OP by @jkaniecki in #72
- Revert "Disable value splitting by default on G3" by @tzielinski-habana in #74
- Add more metrics to high level profiler by @kzawora-intel in #63
- [Hardware][Gaudi]Add alibi support by @wenbinc-Bin in #69
- Remove allgather workaround in logits_processor by @kzawora-intel in #76
- habana_main rebase by @kzawora-intel in #81
- Conform to vLLM formatting rules by @kzawora-intel in #83
- SiLU memory leak in fwd by @michalkuligowski in #87
- habana_main rebase v4 by @kzawora-intel in #85
- Add workaround for RuntimeError: Invalid inputs for scatter_nd_onnx by @kzawora-intel in #107
- Refactor forward_hpu of RMSNorm by @kzawora-intel in #128
- Refactor & re-enable HPU RoPE for Gaudi1 by @kzawora-intel in #129
- formatting fixes by @kzawora-intel in #132
- Address upstream PR code review comments by @kzawora-intel in #133
- Whitespace fix by @kzawora-intel in #134
- Add torch.compile support by @kzawora-intel in #48
- habana_main rebase v5 by @kzawora-intel in #108
- Add constraints for HPU UnquantizedFusedMoEMethod by @kzawora-intel in #137
- Remove redundant torch.device call by @kzawora-intel in #139
- Add functools.wraps decorator to with_mark_steps by @kzawora-intel in #138
- Add HPU platform and HpuCommunicator for TP by @kzawora-intel in #136
- Re-enable FusedRoPE by @kzawora-intel in #145
- Overhaul HPU memory management in HPUGraph capture by @kzawora-intel in #147
- Allocate blocks from id=1 for HPU by @kdamaszk in #160
- Revert "Allocate blocks from id=1 for HPU" by @kzawora-intel in #163
- Reimplement silu_and_mul for mixtral by @jkaniecki in #167
- Enable GitHub Actions static checks for habana_main by @kzawora-intel in #177
- remove reminder_comment.yml by @kzawora-intel in #179
- Fix logger initialization in ops.py by @kzawora-intel in #178
- 1.17 documentation update by @kzawora-intel in #172
- Readme 1.17 update by @kzawora-intel in #186
- Support FP8 INC in vLLM by @nirda7 in #144
- [Doc][BugFix] Update setup instructions and reference links by @MohitIntel in #191
- split gptbigcode forward by @libinta in #194
- Enable FusedSDPA for prompt attention with VLLM_PROMPT_USE_FUSEDSDPA by @libinta in #168
- Enable LoRA support for HPU by @scsudhak-intel in #170
- Compile mode bug fix for LoRA by @scsudhak-intel in #196
- Ensure buckets do not exceed the batch token limit by @kzawora-intel in #206
- Make max_num_batched_tokens behavior more verbose, add legacy mode by @kzawora-intel in #208
- Update paddings computed to adjust selected_token_indices by @vivekgoe in #210
- Port not warmed-up configurations log warnings by @adobrzyn in #222
- Remove mark step from static MoE loop by @jkaniecki in #231
- Enable llama-405b - w/a for memory allocation error by @afierka-intel in #184
- [bugfix] handle large bucket minimums correctly by @kzawora-intel in #235
- Remove token budget from decode buckets by @kzawora-intel in #241
- [habana_main bugfix] Fix min bucket boundary calculation by @kzawora-intel in #239
- Mask based BGMV implementation by @hlahkar in #223
- Dispersed dummy slots by @madamczyk-intel in #243
- Use PT_COMPILE_ONLY_MODE during warmup by @mfylcek in #227
- Do not pass warmup_mode to execute_model_kwargs by @kzawora-intel in #229
- Add error handling for PT_COMPILE_ONLY_MODE by @kzawora-intel in #251
- Hardcode fastapi version due to pydantic error by @hlahkar in #255
- Mask based BGMV implementation for LoRA Embedding by @scsudhak-intel in #247
- Eliminate graph breaks for torch.compile mode by @yuwenzho in #202
- Port flat PA from habana_next to habana_main by @dolszewska in #169
- Add disable_tensor_cache=True to HPUGraph capture by @kzawora-intel in #252
- Fix dispersed slots by @madamczyk-intel in #261
- Skip compilation warnings during warmup phase by @jkaniecki in #262
- Port PT Profiler to habana_main by @adobrzyn in #256
- Fix LoRA test by handling mask creation inside the test by @scsudhak-intel in #270
- Attn MetaData dtype should be same as model dtype by @hlahkar in #271
- Support Mixtral quantization using INC by @dudilester in #267
- Fixed ALiBi by @itaraban in #254
- Update gaudi-installation.rst by @dolszewska in #279
- Remove hardcoded value from softmax in flat_pa by @madamczyk-intel in #280
- Increase garbage collector's threshold by @kwisniewski98 in #281
- [Bugfix][Habana_main] fix guided_decode HPU failing issue by @xuechendi in #236
- fix rotary embedding `rotary_dim` not equal `head_size` case by @jikunshang in #245
- [Bugfix][Habana_main] - dbrx model and arctic model codes fix to remove CUDA hardcode by @xuechendi in #217
- Add Dockerfile.hpu by @xuechendi in #200
- optimized topp/topk calculation by @ssarkar2 in #195
- Increase garbage collector's threshold 1.18 by @kwisniewski98 in #284
- [Bugfix][Habana_main] fix multi-modal model inference - tested with llava-1.5 by @xuechendi in #283
- Add fake HPU mode to Habana components with dummy habana_frameworks module. by @jmaksymczuk in #250
- Update documentation on support of fp8 by @michalkuligowski in #288
- Removed vllm.hpu directory and changed relevant imports by @tzielinski-habana in #291
- Reduce default value of VLLM_GRAPH_RESERVED_MEM to 0.1 by @kzawora-intel in #292
- Fix minor logging issue in habana_model_runner.py by @schoi-habana in #294
- Fix blocks number calculation for Flat PA by @iboiko-habana in #269
- Remove dummy seq group data creation from loop by @iboiko-habana in #301
- optimize qwen2 model on Gaudi by @czhu15 in #233
- fix bug: device_str in initialize_ray_cluster requires uppercase string by @hlin99 in #297
- Fix Lora Rebase by @hlahkar in #290
- Fix calculating slots for warmup by @madamczyk-intel in #310
- Removed padding block from a list of available blocks in allocators by @tzielinski-habana in #313
- Fix seq_len for padding sequences by @madamczyk-intel in #318
- Fix lora specific conditions in profile-run by @scsudhak-intel in #317
- Run with HPU graphs even when warmup was skipped by @madamczyk-intel in #320
- vLLM 0.6.1 rebase by @kzawora-intel in #311
- vLLM-fork 0.6.0 rebase by @kzawora-intel in #259
- Move profilers to vllm-hpu-extension by @kzawora-intel in #323
- Restore upstream requirements-build.txt by @kzawora-intel in #324
- Remove reminder_comment.yml workflow by @kzawora-intel in #325
- Don't throw "Failed to import from vllm._C" warning on HPU by @kzawora-intel in #326
- Fix doc build warnings by @kzawora-intel in #330
- fix qwen2 model issue by @jikunshang in #329
- Remove vllm.utils.is_hpu() by @kzawora-intel in #331
- Remove logger from layernorm by @kzawora-intel in #332
- Fix INC FP8 inference after rebase by @kzawora-intel in #333
- Make weights_load_device not change EngineArgs.create_load_config() by @kzawora-intel in #336
- Refine INC shutdown code by @kzawora-intel in #335
- Setting enough cache_size_limit for torch.compile warmup by @zehao-intel in #238
- Change default values for decode bucket flags by @iboiko-habana in #316
- Support loading checkpoints quantized using Autofp8 by @Yantom1 in #286
- Fix torch.compile issue of dispatch key set mismatch by @yuwenzho in #299
- Chunk prefill cache writes, remove div_i32 from insert_or_update_cache by @kzawora-intel in #289
- Fix runtime errors reported when using long input sequence lengths with LoRA by @vivekgoe in #339
- vLLM 0.6.2 rebase by @kzawora-intel in #340
- Enable Async output process for HPU by @zhouyu5 in #342
- Port last_bucket change from v1.18.0 by @iboiko-habana in #347
- Add setuptools_scm to requirements-hpu.txt by @kzawora-intel in #349
- Fixed lora manager tests by @rsshaik1 in #315
- Oct 01 rebase by @kzawora-intel in #353
- Lora Mask based on lora index by @hlahkar in #348
- Add rope_scaling support for LLama3.1 by @kdamaszk in #356
- [Core] Support Torch profiler in Habana Worker by @mswiniarsk in #357
- [Refactor] Rename components Habana -> HPU by @kzawora-intel in #359
- Oct 04 rebase by @kzawora-intel in #360
- Lazily import HPU-dependent components by @kzawora-intel in #363
- [Refactor] Rename HabanaAttention -> HPUAttention by @kzawora-intel in #362
- Use BF16 on HPU by default by @kzawora-intel in #361
- Set vllm-hpu-extension to 36c7f9c by @madamczyk-intel in #365
- Add AliBi to supported features in README_GAUDI.md by @kwisniewski98 in #287
- Fix hpu_set_env call in load_model in vllm by @Yantom1 in #364
- Timeout adjusted in MLLMEngine by @jczaja in #368
- Add Jenkins test definitions by @kzawora-intel in #369
- Make workaround for SW-204785 broader by @kzawora-intel in #374
- Fix LoRA tests by handling broken import by @scsudhak-intel in #376
- [CI] Report test name, add properties to JUnitXML by @kzawora-intel in #377
- Disable performance counters if profiler is not enabled by @kdamaszk in #383
- Remove constraints for bucket creation during warmup in LoRA by @scsudhak-intel in #382
- seed_everything function doesn't handle HPU by @scsudhak-intel in #384
- Fixed lora_manager tests with hpu_model_runner by @rsshaik1 in #386
- Reformat README_GAUDI.md by @kzawora-intel in #389
- [CI] Prepare separate Jenkins tests for torch compile mode by @anko-intel in #388
- Remove workaround added to resolve multi-card stall issue by @scsudhak-intel in #387
- Update SynapseAI version in README & Dockerfile by @kzawora-intel in #390
- Oct 7 rebase by @kzawora-intel in #367
- enable mixtral quantization using INC by @dudilester in #372
- [CI] Temporarily increase test tolerances by @kzawora-intel in #392
- Add quickstart section to READMEs by @kzawora-intel in #391
- Softmax: add weighted-sum normalization by @madamczyk-intel in #378
- Remove HPU changes from cache_engine.py by @kzawora-intel in #400
- [bucketing overhaul 1/n] Add padding-aware scheduling and option to limit prefill batch size by @kzawora-intel in #394
- Add WA for RuntimeError: "fill_cpu" not implemented for 'Float8_e4m3fn' by @kzawora-intel in #402
- Workaround for OOM during loading llama-405 by @afierka-intel in #396
- Add HPU specific arguments to benchmark_throughput by @kdamaszk in #406
- Add forward_hpu to RotaryEmbedding, remove custom module by @kzawora-intel in #404
- Remove if blocks smaller than bs in generate_decode_buckets by @kamil-kaczor in #412
- Remove CPU sync before Sampler by @kdamaszk in #414
- Remove redundant set_active_loras call during warmup by @scsudhak-intel in #413
- Change profile Run batch based on max_seq_len by @hlahkar in #415
- Add support for various softmax normalization options by @madamczyk-intel in #420
- Update README_GAUDI about fp8 calibration procedure by @afierka-intel in #423
- Set vllm-hpu-extension to 341a77f by @madamczyk-intel in #428
- Contiguous PA by @mfylcek in #424
- Revert "Contiguous PA" by @madamczyk-intel in #432
- Enable Dynamic MoE for Mixtral on 1.19.0 by @tpawlows in #425
- Support long contexts with LoRA by @scsudhak-intel in #418
- Add HPU specific changes to benchmark_latency.py by @kdamaszk in #436
- sarkar/Add htrandom generator for hpu by @ssarkar2 in #246
- Fix one_hot bug in torch compile mode by @yuwenzho in #427
- HPU: offload logits processing to CPU by @madamczyk-intel in #358
- Lora layers by @rsshaik1 in #435
- initial works on enabling automatic prefix caching by @huijjj in #162
- Multi step scheduling by @tzielinski-habana in #441
- Add fp8 test to jenkins CI by @afierka-intel in #429
- Enable FusedSDPA prefill by default by @kzawora-intel in #447
- Contiguous PA by @mfylcek in #433
- Fix default value for FSDPA by @madamczyk-intel in #448
- Fix performance of top_p and top_k calculations by @kdamaszk in #449
- Reduce block fragmentation by @yangw1234 in #426
- Create scorecard.yml by @rozhukov in #431
- [CI] Add Llama2 to torch compile tests by @anko-intel in #446
- Enable HPUGraphs for lora long-contexts tests by @scsudhak-intel in #454
- Fix SchedulerConfig params by @ldurejko in #459
- Tensor parallelism for multi-step scheduling by @tzielinski-habana in #457
- Set tokenizers version to <0.20.2 by @madamczyk-intel in #460
- [BugFix][Habana_main][Multistep]Fix multistep deepcopy overhead by @xuechendi in #452
- Set vllm-hpu-extension to 0063520 by @madamczyk-intel in #455
- Oct 28 rebase by @kzawora-intel in #439
- Revert "Oct 28 rebase" by @kzawora-intel in #466
- Oct 28 rebase - attempt 2 by @kzawora-intel in #467
- Nov 6 rebase (sans vllm-project#6143) by @kzawora-intel in #468
- Fix missed conflict by @kzawora-intel in #469
- Align fork with HPU upstream code by @kzawora-intel in #465
- The output tensor from sampling is the input_tokens to the by @tzielinski-habana in #471
- Add multi step scheduling scenario to jenkins CI by @afierka-intel in #445
- [New Feature][Habana-Main] speculative_decoding HPU support by @xuechendi in #375
- [Doc] Fix broken urls in gaudi-installation by @MohitIntel in #473
- [Installation] Avoid ModuleNotFoundError:setuptools-scm error by @MohitIntel in #475
- Add option to disable duplicates in topk by @kdamaszk in #464
- Handle offsets shape in long contexts by @scsudhak-intel in #477
- [New Feature][Habana main] spec decode PR2 - Medusa, MLP, Eagle by @xuechendi in #461
- Add FP8 TP=2 scenario to Jenkins CI by @afierka-intel in #478
- Commonalize code between contiguous and flat pa by @madamczyk-intel in #493
- Config hidden layer number to run in 1 lazy graph by @libinta in #451
- Fix number of blocks when profiling contiguous pa by @madamczyk-intel in #496
- Warmup for multi-step scheduling by @tzielinski-habana in #501
- Enable patching matmuls in block2batch and batch2block by @nirda7 in #500
- Add FP8 inference procedure by @afierka-intel in #504
- Terminate ray workers on ray_hpu_executor shutdown by @kzawora-intel in #505
- Add valid_seq_lengths to fusedsdpa - port from 1.18.0 by @iboiko-habana in #509
- Set vllm-hpu-extension to 2542c18 by @iboiko-habana in #517
- [BUGFIX] fix worker selector non-return issue by @xuechendi in #508
- Use contiguous pa by default by @madamczyk-intel in #519
- Set vllm-hpu-extension to 3a60b49 by @madamczyk-intel in #520
- Add async copying to input preparation by @jkaniecki in #497
- Nov 18 rebase by @kzawora-intel in #485
- [BUGFIX] fix FP8 failing issue on habana_main [PatchedVLLMKVCache fwd error] by @xuechendi in #502
- Set vllm-hpu-extension to a69bb99 by @madamczyk-intel in #521
- Update ray_hpu_executor.py by @michalkuligowski in #522
- Random sampler warmup by @mfylcek in #506
- Skip empty steps in multi step scheduling by @jkaniecki in #526
- [bucketing overhaul 2/n] Delegate bucket management to HPUBucketingContext by @kdamaszk in #530
- [SW-201504] Adding Test Trigger by @RonBenMosheHabana in #533
- [SW-201504] Add Jenkins Tests Trigger by @RonBenMosheHabana in #537
- [bucketing overhaul 3/n] Move HPUBucketingContext to vllm-hpu-extension by @kdamaszk in #534
- Limit decode block size by @mfylcek in #532
- fix marlin flag set on hpu by @nirda7 in #540
- Fix profile run for multi LoRA by @kdamaszk in #549
- Fix cutlass_fp8_supported flag set on HPU by @nirda7 in #550
- [HPU] Add mark_step configurable for the decoder layer. by @jiminha in #525
- Update cpu-test.yml by @michalkuligowski in #544
- Update *.sh by @michalkuligowski in #545
- Update run-lm-eval-gsm-vllm-baseline.sh by @michalkuligowski in #552
- Add HPU information to collect_env script by @michalkuligowski in #430
- Intern2 habana by @skirdey-inflection in #489
- Added "hpu" as configurable device argument in test_lora_manager_hpu by @rsshaik1 in #556
- [BUG FIX] [SPEC DECODE] 0.6.4 rebase cause incorrectness in spec decode, fix in this PR by @xuechendi in #523
- CI fix by @tzielinski-habana in #563
- Set vllm-hpu-extension to 50e10ea by @mswiniarsk in #565
- Refactor FP8 Inc config and flow by @nirda7 in #564
- Set vllm-hpu-extension to bc01901 by @iboiko-habana in #567
- to make repetition penalty faster by @ccrhx4 in #442
- Enable alibi fusedsdpa by @itaraban in #561
- Set vllm-hpu-extension to fb36408 by @mswiniarsk in #572
- Set vllm-hpu-extension to cd520df by @mswiniarsk in #574
- Revert "to make repetition penalty faster" by @michalkuligowski in #570
- Regional compilation support by @Kacper-Pietkun in #576
- Revert "Enable alibi fusedsdpa" by @madamczyk-intel in #585
- Prepare sin/cos buffers for rope outside model forward by @tzielinski-habana in #566
- Enable DeepseekV2 Lite/Chat models by @hlin99 in #516
- Set vllm-hpu-extension to 070591a by @mswiniarsk in #591
- [CI/BUILD] Spec decode ci by @xuechendi in #524
- Add host traces to high-level profilings by @szutenberg in #577
- Enable patching Fused SDPA by @nirda7 in #569
- revert INC fixed version installation in requirements-hpu.txt for 1.19, add tmp one for 1.18 by @xuechendi in #547
- Add multiprocessing HPU executor by @kzawora-intel in #559
- fix WorkerWrapperBase and spec_decode rebase by @xuechendi in #582
- Dec 06 rebase by @kzawora-intel in #571
- fix hpu destructors flow and remove finish_measurements by @nirda7 in #379
- Set vllm-hpu-extension to 4312768 by @scsudhak-intel in #604
- Support mllama (llama 3.2) model for HPU by @yisonzhu in #491
- Enable padding aware scheduling by default on HPU by @kzawora-intel in #606
- Update CODEOWNERS by @kzawora-intel in #608
- Fix TP>1 in encoder-decoder models by @jkaniecki in #607
- Align LoRA handling in HPU with PunicaWrapper class by @scsudhak-intel in #614
- Dec 10 rebase by @kzawora-intel in #605
- Revert "Dec 10 rebase" by @michalkuligowski in #618
- Revert "Revert "Dec 10 rebase"" by @kzawora-intel in #619
- Fix multiprocessing executor shutdown by @kzawora-intel in #621
- Update GitHub Actions targets by @kzawora-intel in #622
- Add padding to encoder_seq_lens by @kdamaszk in #610
- Remove workaround for one_hot in eager/compile by @anko-intel in #632
- Add shutdown_inc method to MultiprocessingHPUExecutor by @nirda7 in #634
- Fix recompilations due to different batch_sizes in MSS by @mfylcek in #637
- Fix CI reports by @afierka-intel in #636
- Unit scales in FP8 CI scenarios by @afierka-intel in #633
- TC llama recompile fix - no_grad to inference_mode by @RafLit in #640
- Generic call for prepare_cos_sin in rotary embedding by @tzielinski-habana in #638
- Update CODEOWNERS by @vivekgoe in #649
- Fix long contexts in LoRA by @scsudhak-intel in #624
- Lora manager tests fix by @rsshaik1 in #652
- Fix LoRA tests by @scsudhak-intel in #664
- [BUG fix] Rebase caused spec decode fix by @xuechendi in #613
- fix slow sampling when repetition_penalty is set. by @ccrhx4 in #584
- Optimize for topk=1 case if we do not handle duplicates by @ssarkar2 in #603
- [bugfix] fix RuntimeError on apc by @kkimmk in #648
- Add llava support to benchmark_throughput by @adobrzyn in #665
- Add mllama support to benchmark_throughput by @kdamaszk in #668
- Add mark_step for encoder layers by @yma11 in #669
- Use FusedSDPA for MllamaVisionSdpaAttention by @kdamaszk in #620
- Limit number of dummy cross attention blocks by @kdamaszk in #667
- [SW-197036] - use torch._scaled_mm with hpu by @nirda7 in #660
- Handle LoRA specific changes in MSS by @scsudhak-intel in #675
- [SW-201504] Trigger Internal Tests by @RonBenMosheHabana in #538
- Fix model OOM issue in llama-405 and mixtral - 2nd attempt by @afierka-intel in #644
- Add INC fp8 quantization documentation by @nirda7 in #635
- Adds LoRA tests to vLLM CI pipeline by @rsshaik1 in #680
- Update CODEOWNERS by @michalkuligowski in #683
- Set vllm-hpu-extension to 6ac93fb by @mfylcek in #684
- Set cache size for t.compile even if there is no warmup by @anko-intel in #689
- Jan 10 rebase by @kzawora-intel in #677
- Workaround to handle multi-card stall issue by @scsudhak-intel in #688
- Fix weights load device use by @nirda7 in #686
- Move scores to float32 in case of running xgrammar on cpu by @madamczyk-intel in #695
- Clean-up LoRA flow by @scsudhak-intel in #518
- Check if kv_cache is tuple before calling split_kv_cache by @kdamaszk in #697
- [CI] Cleanup run_tests.sh logs by @kzawora-intel in #700
- Multimodality fix for llava by @adobrzyn in #641
- Rebase 2025-01-17 by @kzawora-intel in #701
- Fix LoRA tests by @scsudhak-intel in #696
- Updating README_GAUDI in habana_main by @MohitIntel in #690
- Change vllm-hpu-extension revision to ae726d4 by @iboiko-habana in #707
- Capabilities overhaul by @madamczyk-intel in #692
- [SW-216156] Fix mixtral Fused MoE issues after rebase by @dudilester in #708
- Disable enforcing eager mode for mllama and deepseek_v3 on hpu by @jkaniecki in #713
- Fix for random sampler recompilations for incomplete batches by @mfylcek in #663
- [SW-216413] - Fix new executors shutdown and shutdown_inc flow by @nirda7 in #716
- Pin triton to v3.1.0 for HPU by @tannervoas742 in #728
- [SW-199650] Add HPU fp8 DynamicMOE Op by @dudilester in #721
- Make sure that all workers are notified about end of execution loop by @kdamaszk in #730
- Support for multi step scheduling in enc dec models by @jkaniecki in #715
- [SW-216666] - add fp8 to the hpu supported quantization list by @nirda7 in #739
- Hotfix recompilations caused by unified attention for hpu by @bkowalskiINTEL in #740
- Rebase 2025.01.21 by @kzawora-intel in #714
- Revert "Rebase 2025.01.21" by @kzawora-intel in #742
- Rebase 2025.01.21 - take 2 by @kzawora-intel in #743
- Fix Gaudi tutorial links in the main README by @dmsuehir in #746
- Rebase 2025.01.28 by @kzawora-intel in #744
- Revert "Rebase 2025.01.28" by @kzawora-intel in #749
- Rebase 2025.01.28 - attempt 2 by @kzawora-intel in #750
- Fix LoRA test by @scsudhak-intel in #711
- Copy changes from vllm PR 12536 by @anko-intel in #753
- Expand supported attention head sizes by @kzawora-intel in #752
- Rebase 2025.01.29 by @kzawora-intel in #751
- GPTQ Support [Cont.] by @maktukmak in #481
- AWQ Support by @maktukmak in #458
- Update requirements-hpu.txt by @michalkuligowski in #756
- Generator bypass for dummy init by @maktukmak in #747
- Update vllm-hpu-extension; fix for compile recompilations due to @CAC… by @afierka-intel in #767
- Add pip upgrade to installation steps by @michalkuligowski in #699
- Fix: selecting correct backend for MultiHeadAttention by @adobrzyn in #645
- Make padding-aware scheduling disableable by @kzawora-intel in #771
- README_GAUDI llava update by @adobrzyn in #779
- Update README_GAUDI.md by @PatrykWo in #783
- [HS-4827] README correction for QUANT_CONFIG by @schoi-habana in #748
- Docs: Getting started llava update + vllm execution modes table fix by @adobrzyn in #788
- Add basic CI checks for enc dec models by @jkaniecki in #741
- Refactor long-context + LoRA flow by @scsudhak-intel in #807
- Updated Troubleshooting section by @MohitIntel in #766
- Add real BS & seq_len to profiling by @kamil-kaczor in #601
- Fix guided decoding crashes by @kzawora-intel in #811
- [CI] Fix test_lazy_outlines.py performance, add more guided decode tests by @kzawora-intel in #825
- Rebase 2025-02-10 by @kzawora-intel in #810
- Initial enablement for text-embedding by @libinta in #758
- Add license to hpu_pooling_model_runner by @jkaniecki in #847
- Fix CI (transformer) by @adobrzyn in #852
- Align encoder decoder warmup to new input processors by @jkaniecki in #839
- Recalc scales from user by @linoybu in #774
- Improve RMSNorm to support 2D inputs by @YangQun1 in #784
- [SW-212036] Change gc thr multiplier to 16 by @kamil-kaczor in #832
- Fix sporadic issue in async_engine/test_api_server tests by @akarnows in #794
- Allow tests to run in t.compile by @Kacper-Pietkun in #724
- Fix fsdpa usage inside enc_dec attention forward by @jkaniecki in #874
- Revert "Improve RMSNorm to support 2D inputs" by @michalkuligowski in #872
- Readme gaudi update to align with 1.20 branch by @bartekkuncer in #875
- Update requirements-hpu.txt by @adobrzyn in #878
- Automatic Prefix Caching by @adobrzyn in #882
- Long Context usage README update by @iboiko-habana in #836
- Port delayed sampling to habana_main by @madamczyk-intel in #849
- Allow to use flex_attention instead of FSDPA in HPUAttentionImpl by @m-a-nowak in #876
- Fix LLaMa 3.2 11b acc drop issue by @libinta in #898
- Long Context usage README gaudi.inc.md update by @iboiko-habana in #896
- Automatic Prefix Caching - acc fix by @adobrzyn in #900
- Update compile CI tests by @afierka-intel in #899
- Resolved ALIBI bias regression due to porting flat PA by @tannervoas742 in #503
- Revert "Resolved ALIBI bias regression due to porting flat PA" by @kzawora-intel in #907
- Update Meta-Llama-3.1-8B-Instruct-fp8.yaml by @kzawora-intel in #911
- Disable lora long-context test by @afierka-intel in #913
- Update uniproc_executor.py by @michalkuligowski in #909
- Fix a typo in dtype conversion log by @mandy-li in #910
- [CI] Don't fail on high accuracy by @kzawora-intel in #918
- Fix "Configuration was not warmed up" logs for uns without APC by @adobrzyn in #920
- Update README_GAUDI.md by @pranjalst in #919
- Fix check config arguments by @adobrzyn in #928
- Automatic Prefix Caching - disable contiguous pa with APC by @adobrzyn in #927
- [Gaudi][Model] Qwen2.5-vl by @malkomes in #923
- Update requirements-hpu.txt by @afierka-intel in #931
- Move query reshape to flat_pa by @kdamaszk in #933
- [SW-222952] Remove Priority flag from tests by @RonBenMosheHabana in #938
- Add VLLM_T_COMPILE_FULLGRAPH flag by @anko-intel in #932
- vLLM v1 - HPU integration by @kzawora-intel in #831
- Temporary disable checking if whole graph is compiled in gsm8k_fp8 tests by @anko-intel in #950
- [SW-218309] Updated Tests Workflow To Support Forked PR's by @RonBenMosheHabana in #930
- [SW-218309] Fix For CI by @RonBenMosheHabana in #952
- [SW-218309][CI] Added pagination to fetch all jobs from Github by @RonBenMosheHabana in #953
- Enable split qkv for LLama and GPTBigCode by @kdamaszk in #914
- [SW-218309][CI] Added always conditions to summarize job by @RonBenMosheHabana in #956
- [SW-218309][CI] Changed How Synapse Version Is Calculated For Tests by @RonBenMosheHabana in #958
- [SW-219324] Add block-size 256 to args list by @vijethk-intel in #971
- Align cross-attn kv cache update to new hpu-extension by @jkaniecki in #948
- Revert "Temporary disable checking if whole graph is compiled in gsm… by @anko-intel in #973
- merged_prefill+ - initial cleanup by @madamczyk-intel in #942
- Moved set_indices_and_offsets into prepare_prompt and prepare_decode by @jczaja in #949
- [SW-224062] Create Separate Workflow For Cleaning Branches by @RonBenMosheHabana in #989
- Optimize rope if head_size == rotary_dim by @kdamaszk in #968
- Fix APC on v0 by @adobrzyn in #975
- fix server crash when the client uses random seed sampling by @yangulei in #924
- Revert "Moved set_indices_and_offsets into prepare_prompt and prepare_decode" by @afierka-intel in #990
- Fixing (1) tensor size mismatch and (2) missing prepare_cos_sin issues for Phi-3.5 by @mrezavand in #916
- Extend accuracy tests for models that we support by @AnetaKaczynska in #824
- Automatic Prefix Caching - fully cached prefill by @adobrzyn in #982
- enable LoRA for embedding models by @skaulintel in #821
- Move torch.compile to HPUModelRunnerBase by @anko-intel in #966
- Bump jinja2 from 3.1.4 to 3.1.6 by @dependabot in #891
- Enabled and optimized GLM-4v-9b on Gaudi by @gyou2021 in #691
- Update documentation to reflect current bucket defaults by @nngokhale in #817
- [SW-223065] Sync commits from habana_main to master_next in vllm-fork by @RonBenMosheHabana in #960
- Fix regional compilation by @anko-intel in #994
- [SW-222977] Fix for test_lora_manager_hpu.py by @rsshaik1 in #965
- Fix apc v1 by @iboiko-habana in #995
- introduce testowners config by @michalkuligowski in #988
- Adjust mllama to regional compilation by @jkaniecki in #999
- Add t.compile performance test to jenkins by @anko-intel in #998
- Experimental support for merged_prefill by @madamczyk-intel in #987
- Fix failure reporting issue in jenkins benchmark by @anko-intel in #1011
- [SW-222456] - Fix Multi Lora UT by @hlahkar in #1006
- [SW-224648] Redirect test logs to file by @bmyrcha in #1017
- [SW-224648] Fix test logs redirection by @bmyrcha in #1027
- [SW-225233] Adjust method of getting synapse_build by @bmyrcha in #1045
- Implement Pipeline Parallelism support for HPU. (#1000) by @jmaksymczuk in #1040
- [1.21 cherry-pick] Fix async callback ordering (#1023) by @madamczyk-intel in #1028
- [1.21 cherry-pick] Make lazy mode autodetection more robust by @madamczyk-intel in #1038
- APC - Remove prompt attn with context and use existing implementation by @adobrzyn in #1059
- Cherry pick exponential bucketing integration from #642 by @kzawora-intel in #1067
- [deepseek r1] HPU support for deepseek by @xuechendi in #1030
- Modify RobertaEmbedding forward as custom op method by @yeonsily in #1049
- Fix embedding model accuracy issue when merged prefill is enabled by @libinta in #1048
- [SW-226128] Disable mark scales as const & add dist.barrier only for multiple cards by @yiliu30 in #1100
- [1.21.0 cherry-pick] Synchronize vLLM flags to support cross-node inference by @afierka-intel in #1103
- [SW-225980] Allow to skip pytest for non-code related changes by @bmyrcha in #1093
- [1.21.0 cherry-pick] Set VLLM_T_COMPILE_FULLGRAPH=False in CI multi-modal tests (#1042) by @afierka-intel in #1104
- [1.21.0 cherry-pick] Enable APC pre-merge tests to compile test suite (#1076) by @afierka-intel in #1105
- [SW-224431] Fix fp8 measurement for mixtral by @kwisniewski98 in #1119
- [WIP] 1.21 docs update by @PatrykWo in #1080
- Update hpu_worker.py by @michalkuligowski in #943
- Update requirements-hpu.txt by @afierka-intel in #1123
- [1.21 cherry-pick] Restore fsdpa calibration by @madamczyk-intel in #1087
- Update CODEOWNERS by @michalkuligowski in #1139
- Michalkuligowski patch update workflows by @michalkuligowski in #1019
- Add in Dockerfile.hpu.ubi by @AnetaKaczynska in #1118
- Fix the llama3.2-11b/90b accuracy drop issue. by @libinta in #1175
- [SW-226779]Fix attribute not found issue by @xuechendi in #1160
- Update README_GAUDI.md 1.21.0 by @anastasiauvarovaintel in #1196
- Update links and tags for 1.21.0 release by @bartekkuncer in #1204
- Removed OS specification from requirements list by @PatrykWo in #1221
New Contributors
- @wenbinc-Bin made their first contribution in #69
- @yuwenzho made their first contribution in #202
- @dolszewska made their first contribution in #169
- @dudilester made their first contribution in #267
- @ssarkar2 made their first contribution in #195
- @schoi-habana made their first contribution in #294
- @zehao-intel made their first contribution in #238
- @Yantom1 made their first contribution in #286
- @tpawlows made their first contribution in #425
- @huijjj made their first contribution in #162
- @rozhukov made their first contribution in #431
- @ldurejko made their first contribution in #459
- @RonBenMosheHabana made their first contribution in #533
- @skirdey-inflection made their first contribution in #489
- @ccrhx4 made their first contribution in #442
- @yisonzhu made their first contribution in #491
- @RafLit made their first contribution in #640
- @kkimmk made their first contribution in #648
- @yma11 made their first contribution in #669
- @tannervoas742 made their first contribution in #728
- @linoybu made their first contribution in #774
- @YangQun1 made their first contribution in #784
- @m-a-nowak made their first contribution in #876
- @mandy-li made their first contribution in #910
- @pranjalst made their first contribution in #919
- @vijethk-intel made their first contribution in #971
- @mrezavand made their first contribution in #916
- @nngokhale made their first contribution in #817
- @anastasiauvarovaintel made their first contribution in #1196
Full Changelog: v0.6.2...v0.7.2+Gaudi-1.21.0