Computational Antibody Papers

    • Novel paratope prediction model.
    • It predicts antibody paratopes from sequence alone by concatenating per-residue embeddings from six protein language models (AbLang2, AntiBERTy, ESM-2, IgT5, IgBert, and ProtTrans); a minimal sketch of this concatenation follows this list.
    • It requires neither antibody structural data nor antigen data.
    • Across three benchmark datasets (PECAN, Paragraph, MIPE), it outperforms all sequence-based and structure-modeling methods, achieving PR-AUC up to ~0.76 and ROC-AUC up to ~0.97.
    • The training set is similar in size to those of previous methods, so the improved performance cannot be attributed simply to growth in the number of structures in SAbDab.
    • It was benchmarked against a positional-likelihood baseline (predicting commonly binding positions) and surpassed it by a reasonable margin (PR-AUC ~0.73 vs. ~0.62).
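
A minimal sketch of the embedding-concatenation idea, using a stand-in `embed` function in place of the six language models and a logistic-regression head; the per-model widths and the classifier are illustrative assumptions, not the paper's setup:

```python
# Minimal sketch: per-residue paratope prediction from concatenated PLM
# embeddings. `embed` is a stand-in for one of the six language models;
# the per-model widths below are assumed for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

def embed(seq: str, dim: int, seed: int) -> np.ndarray:
    """Placeholder encoder: returns an (L, dim) per-residue embedding."""
    rng = np.random.default_rng(seed)
    return rng.normal(size=(len(seq), dim))

def concat_embeddings(seq: str) -> np.ndarray:
    # One call per language model; real code would run six PLM encoders.
    dims = [480, 512, 1280, 1024, 1024, 1024]  # assumed embedding widths
    return np.concatenate([embed(seq, d, i) for i, d in enumerate(dims)], axis=1)

# Toy training data: one sequence with per-residue 0/1 paratope labels.
seq = "EVQLVESGGGLVQPGG"
labels = np.array([0] * 10 + [1] * 6)

X = concat_embeddings(seq)
clf = LogisticRegression(max_iter=1000).fit(X, labels)
probs = clf.predict_proba(concat_embeddings(seq))[:, 1]
print(probs.round(2))  # per-residue paratope probabilities
```
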
    • Introduced a novel pairing predictor for VH-VL chains with a clever strategy for sampling negative pairs.
    • Defines three negative sampling strategies (sketched in code after this list):
    • Random pairing, where heavy and light chains are shuffled without constraints.
    • V-gene mismatching, where non-native pairs are generated by combining VH and VL sequences drawn from different V-gene families, but within biologically plausible V-gene segments. This captures realistic but unobserved combinations that could occur during recombination.
    • Full V(D)J mismatching, where heavy and light chains are paired using completely distinct germline origins across V, D, and J gene segments. This produces negative examples that are maximally diverse yet biologically meaningful, reflecting combinations never seen in natural repertoires.
    • Shows that the space of possible VH–VL germline combinations is far larger than what is observed in public datasets, revealing non-random biological constraints on pairing.
    • Demonstrates that models trained on V-gene and especially VDJ mismatched datasets achieve the highest and most generalizable performance, outperforming existing methods such as ImmunoMatch, p-IgGen, and Humatch — confirming that biologically grounded negative sampling is key to robust VH–VL pairing prediction.
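
A hedged sketch of the three negative-sampling strategies; the record fields (vh_v, vl_v, etc.) and the exact mismatch criteria are illustrative assumptions about the schema, not the paper's code:

```python
# Sketch of the three negative-sampling strategies for VH-VL pairing.
# Field names and germline annotations are illustrative assumptions.
import random

pairs = [
    {"vh": "EVQL...", "vl": "DIQM...", "vh_v": "IGHV1-69", "vh_d": "IGHD3-10",
     "vh_j": "IGHJ4", "vl_v": "IGKV1-39", "vl_j": "IGKJ1"},
    {"vh": "QVQL...", "vl": "EIVL...", "vh_v": "IGHV3-23", "vh_d": "IGHD2-15",
     "vh_j": "IGHJ6", "vl_v": "IGLV2-14", "vl_j": "IGLJ2"},
]

def random_negatives(pairs, rng):
    """Strategy 1: shuffle light chains without constraints."""
    vls = [p["vl"] for p in pairs]
    rng.shuffle(vls)
    return [(p["vh"], vl) for p, vl in zip(pairs, vls)]

def vgene_mismatch_negatives(pairs, rng):
    """Strategy 2: swapped partner must come from a different V-gene family."""
    out = []
    for p in pairs:
        cands = [q for q in pairs if q["vl_v"] != p["vl_v"]]
        if cands:
            out.append((p["vh"], rng.choice(cands)["vl"]))
    return out

def vdj_mismatch_negatives(pairs, rng):
    """Strategy 3: fully distinct germline origins across V, D, and J."""
    out = []
    for p in pairs:
        cands = [q for q in pairs
                 if q["vh_v"] != p["vh_v"] and q["vh_d"] != p["vh_d"]
                 and q["vh_j"] != p["vh_j"] and q["vl_v"] != p["vl_v"]
                 and q["vl_j"] != p["vl_j"]]
        if cands:
            out.append((p["vh"], rng.choice(cands)["vl"]))
    return out

rng = random.Random(0)
print(vdj_mismatch_negatives(pairs, rng))
```
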
    • Novel LLM suite for designing antibodies.
    • Peleke-1 models were fine-tuned on 9,500 antibody–antigen complexes from SAbDab, each annotated with interacting residues identified from crystal structures.
    • Structure was incorporated by annotating epitope residues explicitly in antigen sequences, allowing the LLMs to learn binding context without direct 3D input (illustrated below).
    • Generated antibodies were assessed for humanness, structural validity, stability (FoldX), and binding affinity (HADDOCK3) across seven benchmark antigens.
    • No wet-lab testing was performed.
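
To make the epitope-annotation idea concrete, here is a toy marker scheme; the bracket markup is an assumed format for illustration, not Peleke-1's actual tokenization:

```python
# Toy epitope annotation: interacting antigen residues (identified from
# crystal structures) are marked inline so a sequence-only LLM sees the
# binding context. The bracket markup is an assumed format.
def annotate_epitope(antigen_seq: str, epitope_positions: set) -> str:
    return "".join(
        f"[{aa}]" if i in epitope_positions else aa
        for i, aa in enumerate(antigen_seq)
    )

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
print(annotate_epitope(seq, {3, 4, 5, 12}))
# -> MKT[A][Y][I]AKQRQI[S]FVKSHFSRQLEERLGLIEVQ
```
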
  • 2025-10-28

    BoltzGen: Toward Universal Binder Design

    • nanobodies
    • protein design
    • Novel protein design framework based on a unified all-atom diffusion model that performs both structure prediction and binder generation.
    • It is fully open and free.
    • Training setup resembles recent diffusion architectures (e.g., AlphaFold3, Chai), but its distinguishing feature is broad wet-lab validation across diverse target types.
    • Experimental scale: generated tens of thousands of nanobody and protein designs for 9 novel targets (no homologous complexes in PDB).
    • Results: tested 15 designs per target, obtaining nanomolar binders for 6 of 9 targets (~67% success rate), a notably strong experimental outcome.
    • Novel protein structure predictor showing that a much simpler model architecture can get surprisingly far.
    • Architecture/training: SimpleFold swaps AF2/RF-style pair reps, triangle updates, MSAs, and equivariant blocks for plain Transformer layers trained with a flow-matching objective to generate full-atom structures; rotational symmetry is handled via SO(3) augmentation (a toy flow-matching step is sketched after this list).
    • Training data: unlike previous crystal-only predictors, the model mixes ~160k PDB experimental structures with large distilled sets from AFDB SwissProt (~270k) and AFESM (≈1.9M; 8.6M for the 3B model), then fine-tunes on PDB + SwissProt. Practically, this means it is not a head-to-head comparison with other methods, which started from the smaller crystal-structure dataset alone.
    • Performance: It’s competitive but generally below AlphaFold2/RoseTTAFold2/ESMFold on CAMEO22, while on CASP14 the 3B model beats ESMFold but does not surpass AlphaFold2; overall they claim ~95% of AF2/RF2 on most metrics, with especially strong results for ensemble generation.
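
A toy flow-matching training step in the spirit described above: a plain Transformer regresses the velocity that carries noised coordinates back to the target structure. Dimensions, the linear interpolation path, and the omission of sequence conditioning are simplifications for the sketch:

```python
# Toy flow-matching step: a plain Transformer (no pair representations,
# no triangle updates, no MSAs) regresses the velocity carrying noised
# coordinates back to the target structure. Sequence conditioning and
# SO(3) augmentation are omitted for brevity.
import torch
import torch.nn as nn

L, d = 64, 128                       # residues, model width (assumed)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d, nhead=8, batch_first=True),
    num_layers=2,
)
proj_in, proj_out = nn.Linear(3, d), nn.Linear(d, 3)

x1 = torch.randn(1, L, 3)            # target coordinates (stand-in)
x0 = torch.randn(1, L, 3)            # noise sample
t = torch.rand(1, 1, 1)              # random time in [0, 1]
xt = (1 - t) * x0 + t * x1           # linear interpolation path
v_target = x1 - x0                   # conditional velocity field

v_pred = proj_out(encoder(proj_in(xt)))
loss = ((v_pred - v_target) ** 2).mean()
loss.backward()
print(float(loss))
```
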
    • Benchmarking of computational models for predicting antibody aggregation propensity (developability) using size-exclusion chromatography (SEC) readouts.
    • Developed an experimental dataset of ~1,200 IgG1 antibodies, measured for monomer percentage and ΔRT (difference in retention time) relative to a reference.
    • Evaluated four main prediction pipelines:
    • Sequence + structure-based features (hand-crafted biophysical features from Schrödinger, using AlphaFold2 or ImmuneBuilder for structures).
    • PLM (protein language model) pipeline (e.g., ESM2-8M, fine-tuned or LoRA-adapted).
    • GNN (graph neural network) pipeline using residue graphs from predicted structures.
    • PLM + GNN hybrid pipeline combining sequence embeddings with structural graphs.
    • Two structure prediction tools were benchmarked: AlphaFold2 (high accuracy, slow) and ImmuneBuilder (faster, antibody-optimized, slightly less accurate).
    • The sequence + structure feature model achieved the highest accuracy overall, but low sensitivity (missed many problematic antibodies).
    • The PLM-only pipeline performed nearly as well and offered a much faster, high-throughput solution, making it attractive for early screening (a minimal version is sketched after this list).
    • The GNN and PLM + GNN approaches performed comparably, with GNN slightly better for ΔRT predictions but more variable.
    • Using ImmuneBuilder instead of AlphaFold2 reduced sensitivity slightly but greatly improved speed without major loss of accuracy.
    • Overall, all pipelines performed within a narrow range, but the faster, less resource-intensive approaches (PLM- and ImmuneBuilder-based pipelines) offer strong trade-offs for early-stage developability screening.
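
A minimal version of the PLM-only screening pipeline under stated assumptions: `embed_sequence` stands in for an ESM2-8M encoder with mean pooling, and the ridge regressor plus toy SEC labels are placeholders, not the paper's models or data:

```python
# Minimal PLM-only developability screen: mean-pooled sequence embeddings
# feed a small regressor predicting SEC monomer percentage.
import numpy as np
from sklearn.linear_model import Ridge

def embed_sequence(seq: str, dim: int = 320) -> np.ndarray:
    """Placeholder for an ESM2-8M encoder, mean-pooled over residues."""
    rng = np.random.default_rng(sum(map(ord, seq)))
    return rng.normal(size=(len(seq), dim)).mean(axis=0)

seqs = ["EVQLVESGGGLVQPGG", "QVQLQESGPGLVKPSE", "DIQMTQSPSSLSASVG"]
monomer_pct = np.array([98.5, 91.2, 95.7])  # toy SEC readouts

X = np.stack([embed_sequence(s) for s in seqs])
model = Ridge().fit(X, monomer_pct)
print(model.predict(X).round(1))
```
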
    • They introduce a template-free diffusion model for antibody humanization.
    • It takes CDR sequences as input and reconstructs the framework regions without needing humanized templates (interface sketched below).
    • Benchmarked against Sapiens, Humatch, Llamanade, and AbNatiV across multiple datasets (e.g., HuAb348, Humab25, Nano300), showing improved humanness, germline identity, and binding retention.
    • Demonstrates preserved or enhanced binding and stability in vitro, though no direct ADA correlation analysis was performed.
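
A sketch of the template-free interface under assumptions: CDR residues are held fixed while masked framework positions are filled in around them. `denoise_step` is a placeholder for the paper's diffusion denoiser, and the CDR strings and framework lengths are illustrative:

```python
# Sketch of the template-free interface: CDRs are fixed, masked framework
# positions get generated around them. `denoise_step` is a placeholder for
# the diffusion denoiser, not the paper's model.
import random

AA = "ACDEFGHIKLMNPQRSTVWY"

def denoise_step(tokens, rng):
    """Placeholder: fill one masked position with a random residue."""
    masked = [i for i, t in enumerate(tokens) if t == "?"]
    if masked:
        tokens[rng.choice(masked)] = rng.choice(AA)
    return tokens

cdrs = ["GFTFSSYA", "ISGSGGST", "AKDRLSITIRPRYYGLDV"]  # H1, H2, H3
fr_lens = [25, 17, 38, 11]                             # FR1-FR4 lengths

tokens = []
for fr_len, cdr in zip(fr_lens, cdrs + [""]):
    tokens += ["?"] * fr_len + list(cdr)               # mask FRs, keep CDRs

rng = random.Random(0)
while "?" in tokens:
    tokens = denoise_step(tokens, rng)
print("".join(tokens))
```
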
    • Investigation of how biases in the Observed Antibody Space (OAS) database, such as overrepresentation of a few donors and limited species or chain diversity, affect the performance and generalizability of antibody language models.
    • The authors developed OAS-explore, an open-source pipeline to analyze, filter, balance, and sample OAS data by donor, species, chain type, and publication, enabling systematic assessment of data biases (a toy donor-capping example follows this list).
    • By training 17 RoBERTa models on datasets with different compositions, they found that models struggle to generalize across chain types, species, individuals, and batches, and that even increased donor diversity alone does not guarantee better performance.
    • They recommend systematic preprocessing, inclusion of more diverse data, and open sharing of datasets and pipelines to mitigate biases and improve antibody LM robustness.
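
An illustrative example of the kind of donor balancing OAS-explore enables; the record layout and the per-donor cap are assumptions for the sketch:

```python
# Illustrative donor balancing: cap the number of sequences any single
# donor contributes so heavily sampled individuals cannot dominate training.
import random
from collections import defaultdict

records = [
    {"seq": "EVQ...", "donor": "d1", "species": "human"},
    {"seq": "QVQ...", "donor": "d1", "species": "human"},
    {"seq": "DIQ...", "donor": "d2", "species": "human"},
    {"seq": "EVK...", "donor": "d3", "species": "mouse"},
]

def cap_per_donor(records, cap, seed=0):
    by_donor = defaultdict(list)
    for r in records:
        by_donor[r["donor"]].append(r)
    rng = random.Random(seed)
    balanced = []
    for donor_records in by_donor.values():
        rng.shuffle(donor_records)
        balanced.extend(donor_records[:cap])
    return balanced

print(len(cap_per_donor(records, cap=1)))  # -> 3 (one sequence per donor)
```
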
    • Review of currently available large-scale software for antibody analysis.
    • Today’s biologics R&D is slowed by fragmented tools and manual data wrangling; the paper proposes a unified, open-architecture platform that spans registration, tracking, analysis, and decisions from discovery through developability.
    • Key components are end-to-end registration of molecules/materials/assays; a harmonized data schema with normalized outputs; automated analytics with consistent QC; complete metadata capture and “data integrity by design.”
    • The platform should natively interface with AI, enable multimodal foundation models and continuous “lab-in-the-loop” learning, and support federated approaches to counter data scarcity while preserving privacy.
    • Dotmatics, Genedata, and Schrödinger each cover pieces (e.g., LiveDesign lacks end-to-end registration), and the authors stress regulatory-ready features.
  • 2025-09-30

    A Generative Foundation Model for Antibody Design

    • generative methods
    • protein design
    • Novel de novo antibody design method.
    • Trained on SAbDab with a time-based split: 6,448 heavy+light complexes plus 1,907 single-chain (nanobody) structures, clustered at 95% sequence identity into 2,436 clusters; validation and test sets contain 101 and 60 complexes, plus 27 nanobodies.
    • A two-stage diffusion model (structure → sequence+structure) followed by consistency distillation, with epitope-aware conditioning, frozen ESM-PPI features, and mixed task sampling (CDR-H3 / heavy CDRs / all CDRs / no sequence; sketched after this list).
    • Inputs: antigen structure (can warm-start from AlphaFold3) + VH/VL framework sequences; you pick which CDRs (and lengths) to design; the model outputs CDR sequences and the full complex.
    • It runs without an epitope, but docking quality drops (DockQ ~0.246 → 0.069, success rate 0.433 → 0.050); AF3 initialization lifts the success rate to 0.627 (≈+0.19 vs. baseline).
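
A toy version of the mixed task sampling mentioned above; the CDR spans, uniform task weights, and a single concatenated index space for the chains are simplifications, not the paper's definitions:

```python
# Toy mixed task sampling: each training step draws a design task that
# decides which CDR positions are masked for sequence generation.
import random

CDRS = {"H1": (26, 33), "H2": (51, 58), "H3": (96, 111)}
TASKS = {
    "cdr_h3": ["H3"],
    "heavy_cdrs": ["H1", "H2", "H3"],
    "all_cdrs": ["H1", "H2", "H3"],   # would also include light-chain CDRs
    "no_seq": [],                     # structure-only step, no masking
}

def sample_design_mask(seq_len, rng):
    task = rng.choice(list(TASKS))
    mask = [False] * seq_len
    for name in TASKS[task]:
        start, end = CDRS[name]
        for i in range(start, min(end, seq_len)):
            mask[i] = True
    return task, mask

task, mask = sample_design_mask(120, random.Random(0))
print(task, sum(mask), "positions selected for redesign")
```
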