Government of India

MAIRA-2 - Multimodal AI for Radiology Report Generation

A multimodal Transformer model designed for generating grounded and non-grounded radiology reports from chest X-rays, integrating vision and language understanding for medical AI research.

About Model

MAIRA-2 is a multimodal Transformer model developed by Microsoft Research Health Futures for automated radiology report generation. It processes chest X-ray images and generates structured reports, with or without grounding, i.e. with individual findings linked to specific image regions. The model combines the RAD-DINO-MAIRA-2 image encoder with the Vicuna-7B-v1.5 language model to translate image features into medical report text. MAIRA-2 is intended for research purposes only and is not suitable for clinical practice, owing to potential biases and limits on generalizability. Trained on datasets such as MIMIC-CXR, PadChest, and USMix, it is meant to facilitate comparative studies in AI-driven radiology analysis. The model supports findings generation, phrase grounding, and structured medical text generation, offering insight into automated medical imaging interpretation while promoting fairness and responsible AI use in healthcare.
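As a rough usage sketch (not taken from this page): assuming the weights are published on the Hugging Face Hub under the id `microsoft/maira-2` and that the model's custom processor exposes a `format_and_preprocess_reporting_input` helper as its model card describes, findings generation might look like the following. Argument and method names may differ between releases, and the ~7B weights require substantial GPU memory.

```python
def generate_findings(frontal_path: str, indication: str, grounded: bool = False) -> str:
    """Generate a findings section (optionally grounded) for one frontal chest X-ray.

    Heavy imports are kept inside the function so this sketch only needs
    `transformers` and `Pillow` installed when it is actually called.
    """
    from PIL import Image
    from transformers import AutoModelForCausalLM, AutoProcessor

    model_id = "microsoft/maira-2"  # assumed Hugging Face Hub id
    # trust_remote_code is required because MAIRA-2 ships custom model
    # and processor classes alongside its weights.
    model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
    processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
    model.eval()

    # Helper name taken from the model card; treat as an assumption.
    inputs = processor.format_and_preprocess_reporting_input(
        current_frontal=Image.open(frontal_path),
        current_lateral=None,   # optional lateral view
        prior_frontal=None,     # optional prior study
        indication=indication,
        technique="PA view of the chest.",
        comparison=None,
        prior_report=None,
        return_tensors="pt",
        get_grounding=grounded,  # True: link each finding to an image region
    )
    output = model.generate(**inputs, max_new_tokens=300)
    # Decode only the newly generated tokens, skipping the prompt.
    prompt_len = inputs["input_ids"].shape[-1]
    return processor.decode(output[0][prompt_len:], skip_special_tokens=True)
```

For example, `generate_findings("cxr.png", "Shortness of breath.", grounded=True)` would request a grounded report, in which each finding carries a reference to the image region that supports it.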


Metadata

  • msrla
  • Microsoft Health Futures
  • Text Generation
  • N.A.
  • Open
  • Healthcare, Wellness and Family Welfare
  • 11/04/25 06:30:08
  • 0

Activity Overview

  • Downloads 0
  • Redirects 6
  • File Size 0
  • Views 240

Tags

  • Transformers
  • safetensors
  • Text Generation
  • conversational
  • maira2
  • custom_code

License Control

msrla

More Models from Microsoft Corporation (India) Pvt. Ltd.

BiomedBERT - Domain-Specific Biomedical Language Model
A biomedical NLP model pre-trained from scratch on abstracts and full-text articles from PubMed and PubMed Central, achieving state-of-the-art performance on biomedical language understanding tasks.
inference endpoints
exbert
Bert
Fill-Mask
Transformers
PyTorch
JAX
English
  • Upvoters 1
  • Downloads 16
  • File Size 0
  • Views 195
Updated 1 year ago

MICROSOFT CORPORATION (INDIA) PVT. LTD.

Phi-1.5 - Lightweight Transformer Model for Text Generation and Coding
A 1.3B parameter Transformer model trained on structured QA, NLP tasks, and Python code, optimized for text generation, summarization, and creative writing, with no fine-tuning from human feedback.
Code Generation
Microsoft
NLP
Transformers
Reasoning
Instruction Following
  • Upvoters 0
  • Downloads 10
  • File Size 0
  • Views 939
Updated 1 year ago

Phi-1 - Lightweight Transformer Model for Python Code Generation
A 1.3B parameter Transformer model specialized for Python code generation, trained on a mix of StackOverflow, competition code, Python textbooks, and synthetic datasets, achieving over 50% accuracy on HumanEval.
Reasoning
NLP
Microsoft
Code Generation
Instruction Following
Transformers
  • Upvoters 0
  • Downloads 2
  • File Size 0
  • Views 117
Updated 1 year ago

Phi-3-Mini-4K-Instruct GGUF - Optimized AI Model for GGUF Inference
A GGUF-optimized version of Phi-3-Mini-4K-Instruct, designed for efficient low-memory AI inference, supporting quantized and full-precision formats for balanced performance and quality.
Transformers
Microsoft
Text Generation
Instruction Following
Reasoning
GGUF
Quantization
NLP
  • Upvoters 0
  • Downloads 3
  • File Size 0
  • Views 105
Updated 1 year ago

Phi-3-Vision-128K-Instruct ONNX - Optimized Multimodal AI for ONNX Runtime
An ONNX-optimized version of Phi-3-Vision-128K-Instruct, designed for efficient multimodal AI inference on CPUs and GPUs, supporting vision and text-based reasoning with INT4 quantization.
Reasoning
ONNX
Transformers
NLP
Microsoft
Multimodal
Text Generation
Visual Question Answering
Image-to-Text
Long Context
  • Upvoters 0
  • Downloads 5
  • File Size 0
  • Views 63
Updated 1 year ago

Phi-3-Medium-128K-Instruct ONNX-DirectML - Optimized AI Model for Windows GPU Inference
An ONNX-optimized version of Phi-3-Medium-128K-Instruct, designed for efficient AI inference on Windows machines using DirectML, supporting INT4 quantization for high-performance execution on AMD, Intel, and NVIDIA GPUs.
NLP
Text Generation
Instruction Following
Long Context
Reasoning
ONNX
DirectML
Transformers
Microsoft
  • Upvoters 0
  • Downloads 0
  • File Size 0
  • Views 44
Updated 1 year ago

Phi-3-Medium-128K-Instruct ONNX-CUDA - Optimized AI Model for NVIDIA GPU Inference
An ONNX-optimized version of Phi-3-Medium-128K-Instruct, designed for high-speed, efficient inference on NVIDIA GPUs, supporting FP16 and INT4 quantization for enhanced performance.
Microsoft
NLP
Transformers
Text Generation
Instruction Following
Long Context
Reasoning
ONNX
CUDA
  • Upvoters 0
  • Downloads 2
  • File Size 0
  • Views 60
Updated 1 year ago

Phi-3-Medium-128K-Instruct ONNX-CPU - Optimized AI Model for CPU Inference
An ONNX-optimized version of Phi-3-Medium-128K-Instruct, designed for efficient long-context inference on CPUs, supporting structured reasoning, text generation, and code processing.
Transformers
Text Generation
Instruction Following
Long Context
Reasoning
ONNX
CPU
NLP
Microsoft
  • Upvoters 0
  • Downloads 1
  • File Size 0
  • Views 45
Updated 1 year ago

Phi-3-Medium-4K-Instruct ONNX-DirectML - Optimized AI Model for Windows GPU Inference
An ONNX-optimized version of Phi-3-Medium-4K-Instruct, designed for efficient AI inference on Windows machines using DirectML, supporting INT4 quantization for high-performance execution on AMD, Intel, and NVIDIA GPUs.
NLP
Transformers
DirectML
ONNX
Reasoning
Instruction Following
Text Generation
Microsoft
  • Upvoters 0
  • Downloads 0
  • File Size 0
  • Views 73
Updated 1 year ago

Phi-3-Medium-4K-Instruct ONNX-CUDA - Optimized AI Model for NVIDIA GPU Inference
An ONNX-optimized version of Phi-3-Medium-4K-Instruct, designed for high-speed, efficient inference on NVIDIA GPUs, supporting FP16 and INT4 quantization for enhanced performance.
NLP
Microsoft
Text Generation
Instruction Following
Reasoning
ONNX
CUDA
Transformers
  • Upvoters 0
  • Downloads 0
  • File Size 0
  • Views 48
Updated 1 year ago
