
Human Evaluation Validates MindEye2's Superior Image Reconstruction Quality
17 Apr 2025
This section details two-alternative forced-choice experiments assessing human preference for MindEye2 reconstructions over random reconstructions.
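
As a concrete illustration, here is a minimal sketch (placeholder data and illustrative names, not the study's actual analysis code) of how 2AFC outcomes can be summarized and tested against the 50% chance level:

```python
# Hypothetical 2AFC analysis: each trial records whether the rater picked
# the MindEye2 reconstruction over the alternative.
import numpy as np
from scipy.stats import binomtest

# Placeholder trials: 1 = rater chose MindEye2, 0 = rater chose the alternative.
choices = np.random.default_rng(0).integers(0, 2, size=500)

n_trials = choices.size
n_prefer = int(choices.sum())
preference_rate = n_prefer / n_trials

# Two-sided binomial test against the 50% chance level.
result = binomtest(n_prefer, n_trials, p=0.5)
print(f"preference = {preference_rate:.1%}, p = {result.pvalue:.3g}")
```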

Visualizing Brain Function: MindEye2 Reconstructions from ROI-Specific fMRI
17 Apr 2025
Explore MindEye2's ROI analysis, which reveals the preferential stimuli associated with brain regions such as V1, Face-ROI, Word-ROI, Place-ROI, and Body-ROI.
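
As a hedged illustration of one way ROI-preferential stimuli can be probed, the sketch below builds a synthetic fMRI pattern that activates only the voxels of a single ROI. The names `roi_probe`, `v1_mask`, and `model` are hypothetical, and the masking scheme is an assumption rather than MindEye2's documented procedure:

```python
# Hypothetical ROI probe: zero out all voxels except those in one ROI and
# feed the masked pattern to a trained voxel-to-image model.
import numpy as np

def roi_probe(n_voxels: int, roi_mask: np.ndarray, amplitude: float = 1.0) -> np.ndarray:
    """Build a synthetic fMRI pattern that activates only the given ROI."""
    pattern = np.zeros(n_voxels, dtype=np.float32)
    pattern[roi_mask] = amplitude  # roi_mask: boolean array marking ROI voxels
    return pattern

# `v1_mask` is an assumed boolean mask for V1; `model` is an assumed trained
# reconstruction model exposing a reconstruct() method.
# image = model.reconstruct(roi_probe(n_voxels, v1_mask))
```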

Visualizing Embedding Alignment: UMAP Dimensionality Reduction in MindEye2
17 Apr 2025
Understand the "modality gap" in multimodal learning and how MindEye2 uses a diffusion prior to achieve effective embedding alignment.
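
A minimal sketch of this kind of visualization, assuming two embedding matrices of matching shape (the 1664-dimensional size is a placeholder) and the umap-learn package:

```python
# UMAP visualization of the modality gap between brain-predicted embeddings
# and true image CLIP embeddings (placeholder data; names are illustrative).
import numpy as np
import umap                      # pip install umap-learn
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
brain_emb = rng.normal(size=(200, 1664))   # placeholder predicted embeddings
image_emb = rng.normal(size=(200, 1664))   # placeholder true CLIP embeddings

# Project both sets jointly into 2D so their relative positions are comparable.
points = umap.UMAP(n_components=2, random_state=0).fit_transform(
    np.vstack([brain_emb, image_emb])
)
plt.scatter(points[:200, 0], points[:200, 1], s=5, label="brain-predicted")
plt.scatter(points[200:, 0], points[200:, 1], s=5, label="image CLIP")
plt.legend()
plt.show()
```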

Pretraining Efficiency: MindEye2's Performance with Fewer Subjects
16 Apr 2025
Learn how MindEye2 pretrains efficiently: evaluations show that reducing the number of pretraining subjects incurs no significant performance loss.

Two-Way Identification: Evaluating MindEye2 Reconstruction Accuracy
16 Apr 2025
Details the evaluation of MindEye2's image reconstructions using two-way comparisons with features extracted from AlexNet, InceptionV3, and CLIP models.
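
For reference, a generic implementation of the two-way identification metric, assuming precomputed feature matrices from any of the above networks (function and variable names are illustrative):

```python
# Two-way identification: a reconstruction "wins" a comparison when its
# feature correlation with its own ground-truth image exceeds its
# correlation with a different image.
import numpy as np

def two_way_identification(gt_feats: np.ndarray, recon_feats: np.ndarray) -> float:
    """gt_feats, recon_feats: (n_images, feat_dim); returns percent correct in [0, 1]."""
    n = len(gt_feats)
    # Center and normalize rows so the dot product equals Pearson correlation.
    gt = gt_feats - gt_feats.mean(1, keepdims=True)
    rc = recon_feats - recon_feats.mean(1, keepdims=True)
    gt /= np.linalg.norm(gt, axis=1, keepdims=True)
    rc /= np.linalg.norm(rc, axis=1, keepdims=True)
    corr = rc @ gt.T                      # corr[i, j] = r(recon_i, gt_j)
    correct, total = 0, 0
    for i in range(n):
        for j in range(n):
            if i != j:
                correct += corr[i, i] > corr[i, j]
                total += 1
    return correct / total
```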

COCO Image Retrieval with MindEye2: Challenges and Insights with OpenCLIP bigG Embeddings
16 Apr 2025
Explore MindEye2's image retrieval experiments on the MS-COCO dataset using OpenCLIP bigG embeddings.
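
A minimal sketch of the underlying retrieval computation, assuming paired embedding matrices where row i of the gallery is the correct match for row i of the query (names are illustrative):

```python
# Top-1 retrieval accuracy over cosine similarity between fMRI-predicted
# embeddings (query) and candidate image embeddings (gallery).
import torch

def top1_retrieval_accuracy(query: torch.Tensor, gallery: torch.Tensor) -> float:
    """query, gallery: (n, dim) tensors; row i of gallery matches row i of query."""
    query = torch.nn.functional.normalize(query, dim=-1)
    gallery = torch.nn.functional.normalize(gallery, dim=-1)
    sims = query @ gallery.T          # cosine similarity matrix
    preds = sims.argmax(dim=-1)       # index of nearest gallery embedding
    return (preds == torch.arange(len(query))).float().mean().item()
```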

OpenCLIP bigG to CLIP L Conversion: What You Need to Know
16 Apr 2025
To map from OpenCLIP ViT-bigG/14 image latents to CLIP ViT-L/14 image latents during MindEye2 inference, we independently trained a linear model.
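
A sketch of what such an independently trained linear model could look like in PyTorch; the input/output dimensions and training loop are assumptions, not the paper's exact configuration:

```python
# Linear map between embedding spaces: OpenCLIP ViT-bigG/14 latents -> CLIP
# ViT-L/14 latents, fit with MSE. Dimensions below are assumed per-token
# sizes; the actual setup may flatten token sequences differently.
import torch

d_in, d_out = 1664, 768
linear = torch.nn.Linear(d_in, d_out)
opt = torch.optim.Adam(linear.parameters(), lr=1e-4)

def train_step(bigg_latents: torch.Tensor, clip_l_latents: torch.Tensor) -> float:
    """One MSE step mapping bigG latents (n, d_in) to CLIP L latents (n, d_out)."""
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(linear(bigg_latents), clip_l_latents)
    loss.backward()
    opt.step()
    return loss.item()
```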

MindEye2 unCLIP vs. Versatile Diffusion: Evaluating Image Generation from CLIP Latents
16 Apr 2025
To compare the image generation capabilities of our unCLIP model with Versatile Diffusion, we computed the Fréchet inception distance (FID).
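
A minimal FID computation sketch using torchmetrics, which is an assumption; the authors may use a different FID implementation:

```python
# FID between a set of reference images and a set of generated images.
# Requires torchmetrics with image extras: pip install "torchmetrics[image]".
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=2048)

# Placeholder uint8 image batches of shape (N, 3, H, W).
real_images = torch.randint(0, 256, (64, 3, 299, 299), dtype=torch.uint8)
fake_images = torch.randint(0, 256, (64, 3, 299, 299), dtype=torch.uint8)

fid.update(real_images, real=True)    # accumulate reference statistics
fid.update(fake_images, real=False)   # accumulate generated statistics
print(f"FID: {fid.compute().item():.2f}")
```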

MindEye2 (Not Pretrained) vs. MindEye1
15 Apr 2025
This section shows that MindEye2 outperforms MindEye1 even without pretraining on data from other subjects.