vllm.model_executor.models.interfaces ¶
MultiModalEmbeddings module-attribute ¶
The output embeddings must be one of the following formats:
- A list or tuple of 2D tensors, where each tensor corresponds to one input multimodal data item (e.g., an image).
- A single 3D tensor, with the batch dimension grouping the 2D tensors.
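For illustration, a minimal sketch of the two accepted formats (the embedding width of 4096 and the item counts are arbitrary placeholders):

```python
import torch

# Format 1: a list/tuple of 2D tensors, one per multimodal data item.
# Items may contribute different numbers of embeddings (rows).
embeds_per_item = [
    torch.randn(576, 4096),  # e.g. one image producing 576 patch embeddings
    torch.randn(144, 4096),  # e.g. a smaller image
]

# Format 2: a single 3D tensor whose batch dimension groups equally sized
# 2D tensors.
embeds_batched = torch.randn(2, 576, 4096)
```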
HasInnerState ¶
Bases: Protocol
The interface required for all models that have inner state.
Source code in vllm/model_executor/models/interfaces.py
HasNoOps ¶
IsAttentionFree ¶
Bases: Protocol
The interface required for all models like Mamba that lack attention but do have state whose size is constant with respect to the number of tokens.
Source code in vllm/model_executor/models/interfaces.py
IsHybrid ¶
Bases: Protocol
The interface required for all models like Jamba that have both attention and Mamba blocks. It also indicates that the model's hf_config has 'layers_block_type'.
Source code in vllm/model_executor/models/interfaces.py
is_hybrid class-attribute ¶
is_hybrid: Literal[True] = True
A flag that indicates this model has both Mamba and attention blocks, and that the model's hf_config has 'layers_block_type'.
get_mamba_state_copy_func classmethod ¶
get_mamba_state_copy_func() -> tuple[
MambaStateCopyFunc, ...
]
Calculate copy-function callables for each Mamba state.
Returns:
| Type | Description |
|---|---|
| tuple[MambaStateCopyFunc, ...] | A tuple of MambaStateCopyFunc callables that correspond, in order, to the Mamba states produced by the model. Each callable accepts (state, block_ids, cur_block_idx, num_accepted_tokens) and returns a MambaCopySpec describing the memory-copy parameters for prefix caching in align mode. |
Source code in vllm/model_executor/models/interfaces.py
get_mamba_state_shape_from_config classmethod ¶
get_mamba_state_shape_from_config(
vllm_config: VllmConfig,
) -> tuple[tuple[int, int], tuple[int, int, int]]
Calculate shapes for Mamba's convolutional and state caches.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
vllm_config | VllmConfig | vLLM config | required |
Returns:
| Type | Description |
|---|---|
| tuple[tuple[int, int], tuple[int, int, int]] | Tuple containing the shape of Mamba's convolutional state cache and the shape of its state cache, in that order. |
Source code in vllm/model_executor/models/interfaces.py
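A rough sketch of how a model might implement this method; the hf_config attribute names and the resulting cache layouts below are hypothetical placeholders, not the logic of any actual vLLM model:

```python
from vllm.config import VllmConfig


class MyMambaModel:
    @classmethod
    def get_mamba_state_shape_from_config(
        cls, vllm_config: VllmConfig
    ) -> tuple[tuple[int, int], tuple[int, int, int]]:
        hf_config = vllm_config.model_config.hf_config

        # Hypothetical config attributes; real models read their own keys.
        intermediate = hf_config.intermediate_size
        conv_kernel = hf_config.conv_kernel
        num_heads = hf_config.num_heads
        state_size = hf_config.state_size

        conv_state_shape = (intermediate, conv_kernel - 1)  # 2D conv cache
        temporal_state_shape = (num_heads, intermediate // num_heads, state_size)  # 3D SSM cache
        return conv_state_shape, temporal_state_shape
```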
MixtureOfExperts ¶
Bases: Protocol
The interface required for mixture of experts (MoE) models.
Source code in vllm/model_executor/models/interfaces.py
expert_weights instance-attribute ¶
expert_weights: MutableSequence[Iterable[Tensor]]
Expert weights saved in this rank.
The first dimension is the layer, and the second dimension is different parameters in the layer, e.g. up/down projection weights.
num_expert_groups instance-attribute ¶
num_expert_groups: int
Number of expert groups in this model.
num_local_physical_experts instance-attribute ¶
num_local_physical_experts: int
Number of local physical experts in this model.
num_logical_experts instance-attribute ¶
num_logical_experts: int
Number of logical experts in this model.
num_physical_experts instance-attribute ¶
num_physical_experts: int
Number of physical experts in this model.
num_redundant_experts instance-attribute ¶
num_redundant_experts: int
Number of redundant experts in this model.
num_routed_experts instance-attribute ¶
num_routed_experts: int
Number of routed experts in this model.
num_shared_experts instance-attribute ¶
num_shared_experts: int
Number of shared experts in this model.
set_eplb_state ¶
set_eplb_state(
expert_load_view: Tensor,
logical_to_physical_map: Tensor,
logical_replica_count: Tensor,
) -> None
Register the EPLB state in the MoE model.
Since these are views of the actual EPLB state, any changes made by the EPLB algorithm are automatically reflected in the model's behavior without requiring additional method calls to set new states.
You should also collect the model's expert_weights here instead of in the weight loader, since after initial weight loading, further processing such as quantization may be applied to the weights.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
expert_load_view | Tensor | A view of the expert load metrics tensor. | required |
logical_to_physical_map | Tensor | Mapping from logical to physical experts. | required |
logical_replica_count | Tensor | Count of replicas for each logical expert. | required |
Source code in vllm/model_executor/models/interfaces.py
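A hedged sketch of an implementation; self.moe_layers, the per-layer set_eplb_state hook, and get_expert_weights are illustrative names only, not actual vLLM model code:

```python
from torch import Tensor


def set_eplb_state(
    self,
    expert_load_view: Tensor,
    logical_to_physical_map: Tensor,
    logical_replica_count: Tensor,
) -> None:
    for layer_idx, layer in enumerate(self.moe_layers):  # hypothetical attribute
        # Pass views down so EPLB updates are reflected without further calls.
        layer.set_eplb_state(  # hypothetical per-layer hook
            moe_layer_idx=layer_idx,
            expert_load_view=expert_load_view,
            logical_to_physical_map=logical_to_physical_map,
            logical_replica_count=logical_replica_count,
        )
        # Collect expert weights here, after any post-load processing such as
        # quantization, rather than in the weight loader.
        self.expert_weights.append(layer.get_expert_weights())  # hypothetical helper
```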
SupportsCrossEncoding ¶
Bases: Protocol
The interface required for all models that support cross encoding.
Source code in vllm/model_executor/models/interfaces.py
SupportsEagle ¶
Bases: SupportsEagleBase, Protocol
The interface required for models that support EAGLE-1 and EAGLE-2 speculative decoding.
Source code in vllm/model_executor/models/interfaces.py
SupportsEagle3 ¶
Bases: SupportsEagleBase, Protocol
The interface required for models that support EAGLE-3 speculative decoding.
Source code in vllm/model_executor/models/interfaces.py
supports_eagle3 class-attribute ¶
supports_eagle3: Literal[True] = True
A flag that indicates this model supports EAGLE-3 speculative decoding.
Note
There is no need to redefine this flag if this class is in the MRO of your model class.
get_eagle3_aux_hidden_state_layers ¶
SupportsEagleBase ¶
Bases: Protocol
Base interface for models that support EAGLE-based speculative decoding.
Source code in vllm/model_executor/models/interfaces.py
SupportsLoRA ¶
Bases: Protocol
The interface required for all models that support LoRA.
Source code in vllm/model_executor/models/interfaces.py
packed_modules_mapping class-attribute instance-attribute ¶
SupportsMRoPE ¶
Bases: Protocol
The interface required for all models that support M-RoPE.
Source code in vllm/model_executor/models/interfaces.py
supports_mrope class-attribute ¶
supports_mrope: Literal[True] = True
A flag that indicates this model supports M-RoPE.
Note
There is no need to redefine this flag if this class is in the MRO of your model class.
get_mrope_input_positions ¶
get_mrope_input_positions(
input_tokens: list[int],
mm_features: list[MultiModalFeatureSpec],
) -> tuple[Tensor, int]
Get M-RoPE input positions and delta value for this specific model.
This method should be implemented by each model that supports M-RoPE to provide model-specific logic for computing input positions.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
input_tokens | list[int] | List of input token IDs | required |
mm_features | list[MultiModalFeatureSpec] | Information about each multi-modal data item | required |
Returns:
| Type | Description |
|---|---|
| tuple[Tensor, int] | Tuple of (mrope_input_positions, mrope_position_delta). |
Source code in vllm/model_executor/models/interfaces.py
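A deliberately simplified skeleton; it only handles the text-only case, whereas a real implementation derives per-item width/height/time positions from mm_features:

```python
import torch
from torch import Tensor


def get_mrope_input_positions(
    self,
    input_tokens: list[int],
    mm_features: list,  # list[MultiModalFeatureSpec]; import omitted in this sketch
) -> tuple[Tensor, int]:
    num_tokens = len(input_tokens)
    # Text-only fallback: all three M-RoPE axes follow the ordinary 1D position.
    positions = torch.arange(num_tokens).unsqueeze(0).expand(3, -1)
    mrope_position_delta = 0
    return positions, mrope_position_delta
```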
SupportsMambaPrefixCaching ¶
Bases: Protocol
The interface for models whose mamba layers support prefix caching.
This is currently experimental.
Source code in vllm/model_executor/models/interfaces.py
SupportsMultiModal ¶
Bases: Protocol
The interface required for all multi-modal models.
Source code in vllm/model_executor/models/interfaces.py
_language_model_names class-attribute instance-attribute ¶
Set internally by _mark_language_model.
_processor_factory class-attribute ¶
_processor_factory: _ProcessorFactories
Set internally by MultiModalRegistry.register_processor.
_tower_model_names class-attribute instance-attribute ¶
Set internally by _mark_tower_model.
requires_raw_input_tokens class-attribute ¶
requires_raw_input_tokens: bool = False
A flag that indicates this model processes input token IDs in their raw form rather than as input embeddings.
supports_encoder_tp_data class-attribute ¶
supports_encoder_tp_data: bool = False
A flag that indicates whether this model supports multimodal_config.mm_encoder_tp_mode="data".
supports_multimodal class-attribute ¶
supports_multimodal: Literal[True] = True
A flag that indicates this model supports multi-modal inputs.
Note
There is no need to redefine this flag if this class is in the MRO of your model class.
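For example, a model class that lists the interface in its bases inherits the flag automatically (a minimal sketch; the class itself is a placeholder):

```python
import torch.nn as nn

from vllm.model_executor.models.interfaces import SupportsMultiModal


class MyVLModel(nn.Module, SupportsMultiModal):
    # supports_multimodal = True is inherited via the MRO; no need to redefine it.
    ...
```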
supports_multimodal_raw_input_only class-attribute ¶
supports_multimodal_raw_input_only: bool = False
A flag that indicates this model supports multi-modal inputs and processes them in their raw form rather than as embeddings.
_embed_text_input_ids ¶
_embed_text_input_ids(
input_ids: Tensor,
embed_input_ids: Callable[[Tensor], Tensor],
*,
is_multimodal: Tensor | None,
handle_oov_mm_token: bool,
) -> Tensor
Source code in vllm/model_executor/models/interfaces.py
_mark_composite_model ¶
_mark_composite_model(
vllm_config: VllmConfig,
*,
language_targets: type[Module]
| tuple[type[Module], ...],
tower_targets: dict[
str, type[Module] | tuple[type[Module], ...]
],
)
Composite wrapper over _mark_language_model and _mark_tower_model by modality.
Source code in vllm/model_executor/models/interfaces.py
_mark_language_model ¶
_mark_language_model(
vllm_config: VllmConfig,
*,
targets: type[Module]
| tuple[type[Module], ...]
| None = None,
)
Mark each child module that was assigned to this model during this context as a language model component.
Language model components are automatically skipped in --mm-encoder-only mode.
If targets is set, instead include descendants that are an instance of targets, even if they aren't direct children.
Source code in vllm/model_executor/models/interfaces.py
_mark_tower_model ¶
_mark_tower_model(
vllm_config: VllmConfig,
modalities: set[str] | str,
*,
targets: type[Module]
| tuple[type[Module], ...]
| None = None,
)
Mark each child module that was assigned to this model during this context as a tower model component.
Tower model components are automatically skipped when --limit-mm-per-prompt is set to zero for all of their modalities.
If targets is set, instead include descendants that are an instance of targets, even if they aren't direct children.
Source code in vllm/model_executor/models/interfaces.py
embed_input_ids ¶
embed_input_ids(
input_ids: Tensor,
multimodal_embeddings: MultiModalEmbeddings,
*,
is_multimodal: Tensor,
handle_oov_mm_token: bool = False,
) -> Tensor
embed_input_ids(
input_ids: Tensor,
multimodal_embeddings: MultiModalEmbeddings
| None = None,
*,
is_multimodal: Tensor | None = None,
handle_oov_mm_token: bool = False,
) -> Tensor
Apply token embeddings to input_ids.
If multimodal_embeddings is passed, scatter them into input_ids according to the mask is_multimodal.
In case the multi-modal token IDs exceed the vocabulary size of the language model, you can set handle_oov_mm_token=True to avoid calling the language model's embed_input_ids method on those tokens. Note, however, that doing so increases memory usage, as an additional buffer is needed to hold the input embeddings.
Source code in vllm/model_executor/models/interfaces.py
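A hedged usage sketch: scatter image embeddings into the text embeddings at the positions marked by is_multimodal. The model instance, the image token ID 32000, and the embedding width 4096 are placeholders:

```python
import torch

input_ids = torch.tensor([1, 2, 32000, 32000, 3])  # 32000 = hypothetical image token
is_multimodal = torch.tensor([False, False, True, True, False])
image_embeds = (torch.randn(2, 4096),)  # one item contributing two embeddings

# `model` is assumed to implement SupportsMultiModal.
inputs_embeds = model.embed_input_ids(
    input_ids,
    multimodal_embeddings=image_embeds,
    is_multimodal=is_multimodal,
)
```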
embed_multimodal ¶
embed_multimodal(**kwargs: object) -> MultiModalEmbeddings
Returns multimodal embeddings generated from multimodal kwargs to be merged with text embeddings.
Note
The returned multimodal embeddings must be in the same order as the appearances of their corresponding multimodal data item in the input prompt.
Source code in vllm/model_executor/models/interfaces.py
get_language_model ¶
get_language_model() -> VllmModel
Returns the underlying language model used for text generation.
This is typically the torch.nn.Module instance responsible for processing the merged multimodal embeddings and producing hidden states.
Returns:
| Type | Description |
|---|---|
VllmModel | torch.nn.Module: The core language model component. |
Source code in vllm/model_executor/models/interfaces.py
get_num_mm_connector_tokens ¶
Implement this function to enable LoRA support for the connector module of the multi-modal model. Given the number of vision tokens, output the number of multi-modal connector tokens.
Source code in vllm/model_executor/models/interfaces.py
get_num_mm_encoder_tokens ¶
Implement this function to enable LoRA support for the tower module of the multi-modal model. Given the number of image tokens, output the number of multi-modal encoder tokens.
Source code in vllm/model_executor/models/interfaces.py
get_placeholder_str classmethod ¶
Get the placeholder text for the ith modality item in the prompt.
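Continuing the hypothetical MyVLModel sketch from earlier, a possible implementation (the placeholder strings are illustrative, not taken from a real model):

```python
class MyVLModel(nn.Module, SupportsMultiModal):  # as sketched earlier
    @classmethod
    def get_placeholder_str(cls, modality: str, i: int) -> str | None:
        # Hypothetical placeholder formats; each model defines its own.
        if modality == "image":
            return f"<|image_{i}|>"
        if modality == "video":
            return f"<|video_{i}|>"
        return None
```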
SupportsMultiModalPruning ¶
Bases: Protocol
The interface required for models that support returning both input embeddings and positions. The model may require custom positions for dynamic pruning of multimodal embeddings.
Source code in vllm/model_executor/models/interfaces.py
recompute_mrope_positions ¶
recompute_mrope_positions(
input_ids: list[int],
multimodal_embeddings: MultiModalEmbeddings,
mrope_positions: LongTensor,
num_computed_tokens: int,
) -> tuple[MultiModalEmbeddings, Tensor, int]
Update part of the input M-RoPE positions, starting at index num_computed_tokens. The original mrope_positions are computed for the unpruned sequence and become incorrect once pruning occurs, so after pruning media tokens we must update mrope_positions accordingly before feeding them to the LLM.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| input_ids | list[int] | (N,) All input tokens of the prompt, covering the entire sequence. | required |
| multimodal_embeddings | MultiModalEmbeddings | Tuple of multimodal embeddings that fit into the prefill chunk being processed. | required |
| mrope_positions | LongTensor | Existing M-RoPE positions (3, N) for the entire sequence. | required |
| num_computed_tokens | int | The number of tokens computed so far. | required |
Returns:
| Type | Description |
|---|---|
tuple[MultiModalEmbeddings, Tensor, int] | Tuple of (multimodal_embeddings, mrope_positions, mrope_position_delta). |
Source code in vllm/model_executor/models/interfaces.py
SupportsPP ¶
Bases: Protocol
The interface required for all models that support pipeline parallel.
Source code in vllm/model_executor/models/interfaces.py
supports_pp class-attribute ¶
supports_pp: Literal[True] = True
A flag that indicates this model supports pipeline parallel.
Note
There is no need to redefine this flag if this class is in the MRO of your model class.
forward ¶
forward(
*, intermediate_tensors: IntermediateTensors | None
) -> IntermediateTensors | None
Accept IntermediateTensors when PP rank > 0.
Return IntermediateTensors only for the last PP rank.
Source code in vllm/model_executor/models/interfaces.py
make_empty_intermediate_tensors ¶
make_empty_intermediate_tensors(
batch_size: int, dtype: dtype, device: device
) -> IntermediateTensors
Called when PP rank > 0 for profiling purposes.
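A hedged sketch of how a PP-capable model might implement these two methods; the layer loop, embed_tokens, hidden_size, and is_last_rank attributes are illustrative names, not actual vLLM code:

```python
import torch

from vllm.sequence import IntermediateTensors  # import path assumed


def make_empty_intermediate_tensors(
    self, batch_size: int, dtype: torch.dtype, device: torch.device
) -> IntermediateTensors:
    # Only used for profiling on PP ranks > 0.
    return IntermediateTensors({
        "hidden_states": torch.zeros(
            (batch_size, self.hidden_size), dtype=dtype, device=device
        ),
    })


def forward(self, input_ids, *, intermediate_tensors: IntermediateTensors | None):
    if intermediate_tensors is not None:
        # PP rank > 0: resume from the hidden states of the previous rank.
        hidden_states = intermediate_tensors["hidden_states"]
    else:
        # First PP rank: start from token embeddings.
        hidden_states = self.embed_tokens(input_ids)

    for layer in self.layers:  # this rank's slice of the layers
        hidden_states = layer(hidden_states)

    if not self.is_last_rank:
        # Non-last ranks hand intermediate tensors to the next rank.
        return IntermediateTensors({"hidden_states": hidden_states})
    return hidden_states
```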
SupportsQuant ¶
The interface required for all models that support quantization.
Source code in vllm/model_executor/models/interfaces.py
__new__ ¶
__new__(*args, **kwargs) -> Self
Source code in vllm/model_executor/models/interfaces.py
_find_quant_config staticmethod ¶
_find_quant_config(
*args, **kwargs
) -> QuantizationConfig | None
Find the quant config passed through the model constructor args.
Source code in vllm/model_executor/models/interfaces.py
SupportsScoreTemplate ¶
Bases: Protocol
The interface required for all models that support score template.
Source code in vllm/model_executor/models/interfaces.py
supports_score_template class-attribute ¶
supports_score_template: Literal[True] = True
A flag that indicates this model supports score template.
Note
There is no need to redefine this flag if this class is in the MRO of your model class.
get_score_template classmethod ¶
Generate a full prompt by populating the score template with query and document content.
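A hedged sketch of a possible implementation; both the (query, document) signature and the template text are assumptions for illustration:

```python
class MyScoringModel:
    @classmethod
    def get_score_template(cls, query: str, document: str) -> str:
        # Hypothetical scoring prompt; real models define their own template.
        return f"Query: {query}\nDocument: {document}\nRelevant:"
```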
post_process_tokens classmethod ¶
post_process_tokens(prompt: TokensPrompt) -> None
SupportsTranscription ¶
Bases: Protocol
The interface required for all models that support transcription.
Source code in vllm/model_executor/models/interfaces.py
supports_segment_timestamp class-attribute ¶
supports_segment_timestamp: bool = False
Enables the segment timestamp option for supported models by setting this to True.
supports_transcription_only class-attribute ¶
supports_transcription_only: bool = False
Transcription models can opt out of text generation by setting this to True.
__init_subclass__ ¶
Source code in vllm/model_executor/models/interfaces.py
get_generation_prompt classmethod ¶
get_generation_prompt(
audio: ndarray,
stt_config: SpeechToTextConfig,
model_config: ModelConfig,
language: str | None,
task_type: Literal["transcribe", "translate"],
request_prompt: str,
to_language: str | None,
) -> PromptType
Get the prompt for the ASR model. The model has control over the construction, as long as it returns a valid PromptType.
Source code in vllm/model_executor/models/interfaces.py
get_num_audio_tokens classmethod ¶
get_num_audio_tokens(
audio_duration_s: float,
stt_config: SpeechToTextConfig,
model_config: ModelConfig,
) -> int | None
Map from audio duration to number of audio tokens produced by the ASR model, without running a forward pass. This is used for estimating the amount of processing for this audio.
Source code in vllm/model_executor/models/interfaces.py
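A hedged sketch of a duration-to-token estimate; the fixed rate of 50 audio tokens per second is an arbitrary placeholder rather than the behavior of any specific ASR model:

```python
import math


class MyASRModel:
    @classmethod
    def get_num_audio_tokens(
        cls,
        audio_duration_s: float,
        stt_config,    # SpeechToTextConfig
        model_config,  # ModelConfig
    ) -> int | None:
        # Hypothetical fixed frame rate; real models derive this from their
        # feature extractor / encoder configuration.
        return math.ceil(audio_duration_s * 50)
```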
get_other_languages classmethod ¶
get_speech_to_text_config classmethod ¶
get_speech_to_text_config(
model_config: ModelConfig,
task_type: Literal["transcribe", "translate"],
) -> SpeechToTextConfig
Get the speech to text config for the ASR model.
validate_language classmethod ¶
Ensure the language specified in the transcription request is a valid ISO 639-1 language code. If the request language is valid, but not natively supported by the model, trigger a warning (but not an exception).
Source code in vllm/model_executor/models/interfaces.py
SupportsXDRoPE ¶
Bases: Protocol
The interface required for all models that support XD-RoPE.
Source code in vllm/model_executor/models/interfaces.py
supports_xdrope class-attribute ¶
supports_xdrope: Literal[True] = True
A flag that indicates this model supports XD-RoPE.
Note
There is no need to redefine this flag if this class is in the MRO of your model class.
get_xdrope_input_positions ¶
get_xdrope_input_positions(
input_tokens: list[int],
mm_features: list[MultiModalFeatureSpec],
) -> Tensor
Get XD-RoPE input positions for this specific model.
This method should be implemented by each model that supports XD-RoPE to provide model-specific logic for computing input positions.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
input_tokens | list[int] | List of input token IDs | required |
mm_features | list[MultiModalFeatureSpec] | Information about each multi-modal data item | required |
Returns:
| Name | Type | Description |
|---|---|---|
| llm_positions | Tensor | Tensor of 4D (P/W/H/T) or 3D (W/H/T) positions. |
Source code in vllm/model_executor/models/interfaces.py
_SupportsPPType ¶
Bases: Protocol
Source code in vllm/model_executor/models/interfaces.py
forward ¶
forward(
*, intermediate_tensors: IntermediateTensors | None
) -> Tensor | IntermediateTensors
make_empty_intermediate_tensors ¶
make_empty_intermediate_tensors(
batch_size: int, dtype: dtype, device: device
) -> IntermediateTensors
_require_is_multimodal ¶
A helper function to be used in the context of vllm.model_executor.models.interfaces.SupportsMultiModal.embed_input_ids to provide a better error message.
Source code in vllm/model_executor/models/interfaces.py
_supports_cross_encoding ¶
_supports_cross_encoding(
model: type[object] | object,
) -> (
TypeIs[type[SupportsCrossEncoding]]
| TypeIs[SupportsCrossEncoding]
)
_supports_lora ¶
_supports_pp_attributes ¶
_supports_pp_inspect ¶
has_inner_state ¶
has_inner_state(model: object) -> TypeIs[HasInnerState]
has_inner_state(
model: type[object],
) -> TypeIs[type[HasInnerState]]
has_inner_state(
model: type[object] | object,
) -> TypeIs[type[HasInnerState]] | TypeIs[HasInnerState]
has_noops ¶
is_attention_free ¶
is_attention_free(model: object) -> TypeIs[IsAttentionFree]
is_attention_free(
model: type[object],
) -> TypeIs[type[IsAttentionFree]]
is_attention_free(
model: type[object] | object,
) -> (
TypeIs[type[IsAttentionFree]] | TypeIs[IsAttentionFree]
)
is_hybrid ¶
is_mixture_of_experts ¶
is_mixture_of_experts(
model: object,
) -> TypeIs[MixtureOfExperts]
requires_raw_input_tokens ¶
supports_any_eagle ¶
supports_any_eagle(
model: type[object],
) -> TypeIs[type[SupportsEagleBase]]
supports_any_eagle(
model: object,
) -> TypeIs[SupportsEagleBase]
supports_any_eagle(
model: type[object] | object,
) -> (
TypeIs[type[SupportsEagleBase]]
| TypeIs[SupportsEagleBase]
)
Check if model supports any EAGLE variant (1, 2, or 3).
Source code in vllm/model_executor/models/interfaces.py
supports_cross_encoding ¶
supports_cross_encoding(
model: type[object],
) -> TypeIs[type[SupportsCrossEncoding]]
supports_cross_encoding(
model: object,
) -> TypeIs[SupportsCrossEncoding]
supports_cross_encoding(
model: type[object] | object,
) -> (
TypeIs[type[SupportsCrossEncoding]]
| TypeIs[SupportsCrossEncoding]
)
supports_eagle ¶
supports_eagle(
model: type[object],
) -> TypeIs[type[SupportsEagle]]
supports_eagle(model: object) -> TypeIs[SupportsEagle]
supports_eagle(
model: type[object] | object,
) -> TypeIs[type[SupportsEagle]] | TypeIs[SupportsEagle]
supports_eagle3 ¶
supports_eagle3(
model: type[object],
) -> TypeIs[type[SupportsEagle3]]
supports_eagle3(model: object) -> TypeIs[SupportsEagle3]
supports_eagle3(
model: type[object] | object,
) -> TypeIs[type[SupportsEagle3]] | TypeIs[SupportsEagle3]
supports_lora ¶
supports_lora(
model: type[object],
) -> TypeIs[type[SupportsLoRA]]
supports_lora(model: object) -> TypeIs[SupportsLoRA]
supports_lora(
model: type[object] | object,
) -> TypeIs[type[SupportsLoRA]] | TypeIs[SupportsLoRA]
Source code in vllm/model_executor/models/interfaces.py
supports_mamba_prefix_caching ¶
supports_mamba_prefix_caching(
model: object,
) -> TypeIs[SupportsMambaPrefixCaching]
supports_mamba_prefix_caching(
model: type[object],
) -> TypeIs[type[SupportsMambaPrefixCaching]]
supports_mamba_prefix_caching(
model: type[object] | object,
) -> (
TypeIs[type[SupportsMambaPrefixCaching]]
| TypeIs[SupportsMambaPrefixCaching]
)
supports_mrope ¶
supports_mrope(
model: type[object],
) -> TypeIs[type[SupportsMRoPE]]
supports_mrope(model: object) -> TypeIs[SupportsMRoPE]
supports_mrope(
model: type[object] | object,
) -> TypeIs[type[SupportsMRoPE]] | TypeIs[SupportsMRoPE]
supports_multimodal ¶
supports_multimodal(
model: type[object],
) -> TypeIs[type[SupportsMultiModal]]
supports_multimodal(
model: object,
) -> TypeIs[SupportsMultiModal]
supports_multimodal(
model: type[object] | object,
) -> (
TypeIs[type[SupportsMultiModal]]
| TypeIs[SupportsMultiModal]
)
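A usage sketch of these narrowing helpers (model here is a placeholder instance of some loaded vLLM model):

```python
import torch

from vllm.model_executor.models.interfaces import (
    supports_multimodal,
    supports_pp,
)

if supports_multimodal(model):
    # Type checkers now narrow `model` to SupportsMultiModal.
    placeholder = model.get_placeholder_str("image", 0)

if supports_pp(model):
    empty = model.make_empty_intermediate_tensors(
        batch_size=1, dtype=torch.float16, device=torch.device("cpu")
    )
```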
supports_multimodal_encoder_tp_data ¶
supports_multimodal_pruning ¶
supports_multimodal_pruning(
model: type[object],
) -> TypeIs[type[SupportsMultiModalPruning]]
supports_multimodal_pruning(
model: object,
) -> TypeIs[SupportsMultiModalPruning]
supports_multimodal_pruning(
model: type[object] | object,
) -> (
TypeIs[type[SupportsMultiModalPruning]]
| TypeIs[SupportsMultiModalPruning]
)
supports_multimodal_raw_input_only ¶
supports_pp ¶
supports_pp(
model: type[object],
) -> TypeIs[type[SupportsPP]]
supports_pp(model: object) -> TypeIs[SupportsPP]
supports_pp(
model: type[object] | object,
) -> bool | TypeIs[type[SupportsPP]] | TypeIs[SupportsPP]
Source code in vllm/model_executor/models/interfaces.py
supports_score_template ¶
supports_score_template(
model: type[object],
) -> TypeIs[type[SupportsScoreTemplate]]
supports_score_template(
model: object,
) -> TypeIs[SupportsScoreTemplate]
supports_score_template(
model: type[object] | object,
) -> (
TypeIs[type[SupportsScoreTemplate]]
| TypeIs[SupportsScoreTemplate]
)
supports_transcription ¶
supports_transcription(
model: type[object],
) -> TypeIs[type[SupportsTranscription]]
supports_transcription(
model: object,
) -> TypeIs[SupportsTranscription]
supports_transcription(
model: type[object] | object,
) -> (
TypeIs[type[SupportsTranscription]]
| TypeIs[SupportsTranscription]
)
supports_xdrope ¶
supports_xdrope(
model: type[object],
) -> TypeIs[type[SupportsXDRoPE]]
supports_xdrope(model: object) -> TypeIs[SupportsXDRoPE]
supports_xdrope(
model: type[object] | object,
) -> TypeIs[type[SupportsXDRoPE]] | TypeIs[SupportsXDRoPE]