Introduces a novel diffusion-based inverse folding method (RL-DIF) that improves the foldable diversity of generated sequences—i.e., it can generate more diverse sequences that still fold into the desired structure.
The model uses categorical denoising diffusion for sequence generation, followed by reinforcement learning (DDPO) to improve structural consistency with the target fold.
During reinforcement learning, ESMFold is used to predict the 3D structure of generated sequences, which is then compared (via TM-score) to the structure predicted from the native sequence to ensure they fold similarly.
Compared to baselines like PiFold and ProteinMPNN, RL-DIF achieves similar sequence recovery and structural consistency but significantly better foldable diversity—a critical advantage in protein design.
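To make the RL reward concrete, here is a minimal sketch of the structural-consistency signal: a toy TM-style score on pre-aligned CA coordinates stands in for a real TM-score implementation, and the coordinates stand in for ESMFold predictions (the real TM-score also optimizes the alignment and sets its length scale from chain length).

```python
import numpy as np

def tm_score(ca_a, ca_b, d0=5.0):
    """Toy stand-in for TM-score on pre-aligned CA coordinates:
    the per-residue TM term 1 / (1 + (d/d0)^2), averaged over residues.
    A real implementation also optimizes the superposition and derives
    d0 from the chain length."""
    d = np.linalg.norm(ca_a - ca_b, axis=-1)
    return float(np.mean(1.0 / (1.0 + (d / d0) ** 2)))

def structural_consistency_reward(gen_ca, native_ca):
    # The DDPO reward idea: how closely the predicted structure of a
    # generated sequence matches the predicted structure of the native
    # sequence (both would come from ESMFold in the actual method).
    return tm_score(gen_ca, native_ca)

# identical predicted structures give the maximum reward of 1.0
coords = np.zeros((10, 3))
print(structural_consistency_reward(coords, coords))  # -> 1.0
```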
Novel protein language model with applications to epitope prediction and to ranking hits in affinity-maturation campaigns.
NextGenPLM introduces a modular, multimodal transformer that fuses frozen pretrained protein language models with structural information via spectral contact-map embeddings, enabling efficient modeling of multi-chain antibody–antigen complexes without requiring full 3D folding of antibodies.
The model was benchmarked on 112 diverse antibody–antigen complexes against state-of-the-art structure predictors (Chai-1 and Boltz-1x), matching their contact-map and epitope prediction accuracy while achieving ~100× higher throughput (4 complexes/sec vs. ~1 min/complex).
The model was experimentally validated through an internal affinity-maturation campaign. Using its predictions to rank antibody variants led to designs that achieved up to 17× binding affinity improvements over the wild-type, as confirmed by surface plasmon resonance (SPR) assays.
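As an illustration of what a spectral contact-map embedding could look like, the sketch below uses the low-frequency eigenvectors of the contact graph's Laplacian as per-residue structural features; this construction is an assumption, and the paper's exact recipe may differ.

```python
import numpy as np

def spectral_contact_embedding(contact, k=4):
    # contact: symmetric (L, L) 0/1 residue contact map.
    # Return the k lowest nontrivial Laplacian eigenvectors as a
    # per-residue structural embedding (illustrative construction).
    deg = np.diag(contact.sum(axis=1))
    lap = deg - contact
    _, vecs = np.linalg.eigh(lap)   # eigenvalues in ascending order
    return vecs[:, 1:k + 1]         # drop the constant eigenvector

# chain-like contact map for a 6-residue toy protein
L = 6
contact = np.zeros((L, L))
for i in range(L - 1):
    contact[i, i + 1] = contact[i + 1, i] = 1.0
emb = spectral_contact_embedding(contact, k=3)
print(emb.shape)  # -> (6, 3)
```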
AntiDIF, a diffusion-based inverse folding method specialized for antibodies, built on the RL-DIF framework.
It is trained using antibody-specific data (from SAbDab and OAS) to generate diverse and accurate antibody sequences for a given backbone structure.
Unlike prior methods like AntiFold, which trade off diversity for recovery, AntiDIF achieves a better trade-off: it produces substantially higher sequence diversity across CDRs while maintaining comparable or higher sequence recovery.
Forward folding (via ABodyBuilder2) confirms that AntiDIF's sequences fold into structures that match the native antibody backbones with low RMSD, demonstrating structural plausibility.
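The diversity/recovery trade-off discussed here can be made concrete with simple definitions (illustrative; the papers may compute these differently): recovery as identity to the native sequence, diversity as mean pairwise normalized Hamming distance among generated samples.

```python
def recovery(sample, native):
    # fraction of positions matching the native sequence
    return sum(a == b for a, b in zip(sample, native)) / len(native)

def diversity(samples):
    # mean pairwise normalized Hamming distance among generated samples
    n = len(samples)
    dists = [
        sum(a != b for a, b in zip(samples[i], samples[j])) / len(samples[i])
        for i in range(n) for j in range(i + 1, n)
    ]
    return sum(dists) / len(dists)

seqs = ["ARNDC", "ARNDG", "ARQDC"]
print(round(diversity(seqs), 3))   # -> 0.267
print(recovery("ARNDG", "ARNDC"))  # -> 0.8
```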
Mutational analysis of Trastuzumab framework (FW) regions to modulate antibody stability and function, moving beyond the traditional focus on CDRs.
Authors evaluated antibody-specific language models (AbLang2, AntiBERTy, etc.), a general protein language model (ESM-2), and a structure-based Rosetta approach. The language models showed limited utility in suggesting beneficial FW mutations, whereas Rosetta reliably identified stabilizing FW mutations, including ones not biased toward germline residues.
Authors experimentally characterized selected mutants in vitro, assessing thermostability, antigen (HER2) binding, and functional effects such as ADCC and tumor cell viability. Some mutations preserved function, while others decoupled binding from downstream activity.
Demonstrates generation of novel, natural-like antibody sequences biased toward favorable biophysical scores.
Developed a masked discrete diffusion–based generative model by retraining an ESM-2 (8M) architecture on paired heavy- and light-chain sequences from the Observed Antibody Space (OAS), using an order-agnostic diffusion objective to capture natural repertoire features.
Built ridge-regression predictors on ESM-2 embeddings using experimental developability measurements for 246 clinical-stage antibodies, focusing on hydrophobicity (HIC RT) and self-association (AC-SINS pH 7.4), and achieved cross-validated Spearman’s ρ of 0.42 for HIC and 0.49 for AC-SINS.
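A minimal sketch of the predictor setup, assuming plain closed-form ridge regression and K-fold cross-validated Spearman correlation; the embeddings and labels below are synthetic stand-ins, not the paper's measurements.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    # closed-form ridge regression: w = (X^T X + lam I)^-1 X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def spearman(a, b):
    # Spearman rho via Pearson correlation of ranks (no ties assumed)
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean(); rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

rng = np.random.default_rng(0)
X = rng.normal(size=(246, 32))            # stand-in for ESM-2 embeddings
y = X[:, 0] + 0.5 * rng.normal(size=246)  # stand-in developability label

# 5-fold cross-validated Spearman rho
rhos = []
for fold in np.array_split(np.arange(246), 5):
    train = np.setdiff1d(np.arange(246), fold)
    w = ridge_fit(X[train], y[train])
    rhos.append(spearman(X[fold] @ w, y[fold]))
print(round(float(np.mean(rhos)), 2))
```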
Showed that unconditionally generated sequences maintain high naturalness, scoring with AbLang-2 and p-IgGen language models to yield log-likelihood distributions comparable to natural and clinical repertoires.
Applied Soft Value-based Decoding in Diffusion (SVDD) guidance during sampling to bias generation toward sequences with low predicted hydrophobicity and self-association, enriching the fraction of candidates in the high-developability quadrant.
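The guidance step can be approximated as value-weighted resampling among candidate denoising transitions; this is an illustrative simplification, and SVDD's exact estimator differs in detail.

```python
import numpy as np

def svdd_select(candidates, value_fn, alpha=0.1, rng=None):
    """Pick one of M candidate denoising transitions with probability
    proportional to exp(value / alpha), approximating soft-value
    guidance. Small alpha -> nearly greedy on the predicted value."""
    rng = rng or np.random.default_rng(0)
    v = np.array([value_fn(c) for c in candidates])
    w = np.exp((v - v.max()) / alpha)   # subtract max for stability
    w /= w.sum()
    return candidates[int(rng.choice(len(candidates), p=w))]

# toy: value = negative "hydrophobicity", so the least hydrophobic wins
cands = [3.0, 1.0, 2.0]
best = svdd_select(cands, value_fn=lambda c: -c, alpha=1e-3)
print(best)  # near-greedy alpha selects the lowest candidate, 1.0
```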
Introduced a novel foundational protein folding model, IntFold, designed for both general and specialized biomolecular structure prediction.
Builds on architectural principles from AlphaFold 3 while adding key innovations such as modular adapters, a custom attention kernel, and training-free structure ranking.
While it does not outperform AlphaFold 3 itself, IntFold exceeds all other public reproductions of AF3 across multiple biomolecular tasks.
The fine-tuned IntFold+ model achieves 43.2% accuracy on antibody-antigen interfaces, approaching AlphaFold 3’s 47.9%.
Introduces IBEX, a pan‑immunoglobulin structure predictor for antibodies, nanobodies, and TCRs that explicitly models both bound (holo) and unbound (apo) conformations via a conformation token.
Training data comprise ~14,000 high-quality antibody (SAbDab) and TCR (STCRDab) structures (including 760 matched apo/holo pairs), augmented by distillation from ~60,000 immunoglobulin-like structures predicted from OAS sequences with ESMFold and Boltz-1 to improve generalization.
Architecture builds on AlphaFold2’s invariant‑point‑attention and the ABodyBuilder2 framework, adding a residual connection from the initial embedding into every structure module and feeding an apo/holo token at each block.
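A toy rendition of these two additions; the block update itself, shapes, and sizes are invented for illustration, with a tanh linear layer standing in for invariant point attention.

```python
import numpy as np

def structure_block(x, init_emb, conf_emb, W):
    # One toy structure-module block: a generic update plus
    #  (i) a residual from the *initial* embedding and
    # (ii) an apo/holo conformation embedding injected at this block.
    return np.tanh(x @ W) + init_emb + conf_emb

rng = np.random.default_rng(0)
L, d = 8, 16                        # residues, channels (toy sizes)
init_emb = rng.normal(size=(L, d))
conf_emb = {"apo": rng.normal(size=d), "holo": rng.normal(size=d)}
W = rng.normal(size=(d, d)) / np.sqrt(d)

x = init_emb
for _ in range(4):                  # stack of blocks
    x = structure_block(x, init_emb, conf_emb["holo"], W)
print(x.shape)  # -> (8, 16)
```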
Performance on a private benchmark of 286 novel antibodies shows IBEX achieves mean CDR‑H3 RMSD = 2.28 Å, outperforming Chai‑1 (2.55 Å), Boltz‑1 (2.30 Å), and Boltz‑2 (2.42 Å). Most of its advantage arises from greater robustness to sequences whose CDR‑H3 loops have larger edit distances to any structure in the training set.
Introduced GAMA, an attribution approach for autoregressive LSTM generative models that pinpoints which sequence positions drive binding in a one-antigen–many-antibody setting.
Benchmarked on 270 synthetic motif-implant datasets and simulated binder sequences from the Absolut! framework across multiple antigens, then applied to an experimental set of 8,955 Trastuzumab CDRH3 variants binding HER2.
On the Trastuzumab–HER2 dataset, GAMA flags CDRH3 positions 103, 104, 105, and 107 as most critical, overlapping three of the four crystallographically determined paratope residues.
Novel inverse folding algorithm based on a discrete diffusion framework.
Unlike earlier methods that focused on masked language modeling (MLM) (e.g., LM-Design) or autoregressive sequence generation (e.g., ProteinMPNN), this work introduces a discrete denoising diffusion model (MapDiff) to iteratively refine protein sequences toward the native sequence. The method incorporates an IPA-based refinement step that selectively re-predicts low-confidence residues.
Structural input is limited to the protein backbone, represented as residue-level graphs; all-atom information is not used for either masked or unmasked residues.
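The selective re-prediction step can be sketched as follows, with a placeholder function standing in for the IPA-based refinement network and an assumed confidence threshold.

```python
import numpy as np

def low_confidence_positions(probs, threshold=0.8):
    # probs: (L, K) per-residue amino-acid distributions from the
    # denoising network; flag positions whose top probability is low
    return np.where(probs.max(axis=1) < threshold)[0]

def refine(seq, probs, repredict_fn, threshold=0.8):
    """Keep confident residues fixed and re-predict only uncertain ones
    (repredict_fn stands in for the IPA-based refinement module; the
    threshold value here is an assumption)."""
    out = seq.copy()
    idx = low_confidence_positions(probs, threshold)
    if idx.size:
        out[idx] = repredict_fn(seq, idx)
    return out

# toy example: position 1 is uncertain and gets re-predicted
probs = np.array([[0.95, 0.05], [0.55, 0.45], [0.9, 0.1]])
seq = np.array([0, 0, 0])
print(refine(seq, probs, repredict_fn=lambda s, i: 1))  # -> [0 1 0]
```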
On the CATH 4.2 full test set, their method achieves the best sequence recovery rate of 61.03%, outperforming baselines such as ProteinMPNN (48.63%), PiFold (51.40%), LM-Design (53.19%), and GRADE-IF (52.63%).
MapDiff also achieves the lowest perplexity (3.46) across models.
A novel antibody-specific language model, trained on paired human antibody data, and explicitly designed for practical antibody engineering applications.
The model was trained on a carefully curated dataset of productive, paired sequences, prioritizing biological fidelity over sheer volume or data heterogeneity.
It uses a masked language modelling (MLM) objective. The initial version was based on RoBERTa, while later versions introduced custom architectural modifications tailored to antibody sequences.
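The MLM objective amounts to masking a fraction of residues and scoring the model only at those positions; below is a minimal sketch with BERT-style defaults (the masking rate and mechanics are assumptions, not this model's exact recipe).

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"
MASK = len(AA)  # extra token id for [MASK]

def mask_sequence(seq_ids, rate=0.15, rng=None):
    # Replace ~rate of positions with the mask token (at least one);
    # the loss is computed only at those positions.
    rng = rng or np.random.default_rng(0)
    mask = rng.random(len(seq_ids)) < rate
    if not mask.any():
        mask[rng.integers(len(seq_ids))] = True
    return np.where(mask, MASK, seq_ids), mask

def mlm_loss(logits, targets, mask):
    # mean cross-entropy over masked positions only
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-logp[mask, targets[mask]].mean())

seq = np.array([AA.index(a) for a in "EVQLVESGGGLVQPGG"])
corrupted, mask = mask_sequence(seq)
logits = np.zeros((len(seq), len(AA)))        # a uniform "model"
print(round(mlm_loss(logits, seq, mask), 3))  # -> log(20) = 2.996
```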
The model was benchmarked on recapitulating clinical humanization decisions and outperformed prior models such as Sapiens and AntiBERTa.
It was applied to redesign an existing therapeutic antibody, generating variants with retained or improved affinity, reduced predicted liabilities, and confirmed in vitro performance, including CHO expression and binding assays.