RFdiffusion2 improves upon the earlier RFdiffusion, enhancing stability and accuracy in designing enzyme active sites.
Catalytic sites can now be specified at the atomic level instead of the residue backbone level used previously. This eliminates the need to explicitly enumerate side-chain rotamers.
Training uses flow matching, a technique that simplifies and stabilizes the diffusion training process (see the sketch below).
Benchmarked on a set of 41 diverse enzyme active sites; RFdiffusion2 succeeded in all 41 cases, significantly outperforming the earlier RFdiffusion, which succeeded in only 16.
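Below is a minimal sketch of a linear-interpolant flow-matching training step in PyTorch. It is a generic illustration of the technique, not the RFdiffusion2 training code; `denoiser`, the coordinate tensor shapes, and the optimizer are hypothetical placeholders.

```python
# Generic flow-matching training step (linear interpolant), for illustration only.
import torch

def flow_matching_step(denoiser, coords_data, optimizer):
    """Regress the velocity field that transports Gaussian noise onto the
    data distribution along straight paths."""
    x1 = coords_data                       # clean structure coordinates, shape (B, N, 3)
    x0 = torch.randn_like(x1)              # noise sample
    t = torch.rand(x1.shape[0], 1, 1)      # per-example time in [0, 1]
    xt = (1.0 - t) * x0 + t * x1           # point on the straight-line path
    target_velocity = x1 - x0              # constant velocity along that path
    pred_velocity = denoiser(xt, t.squeeze(-1).squeeze(-1))
    loss = torch.nn.functional.mse_loss(pred_velocity, target_velocity)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```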
Boltz-1 is an open-source reproduction of AlphaFold3, which uses a diffusion module to co-fold molecular structures (proteins, ligands, etc.).
For design purposes, BoltzDesign1 sidesteps the full structure generation step and instead uses only the Pairformer (which outputs a distogram — a probabilistic representation of all pairwise residue distances). This allows broader exploration of sequence space, as it optimizes over the distribution of possible structures rather than committing to a single conformation.
Given a target (such as a small molecule or protein), the binder sequence is weakly initialized with random logits (a soft, near-uniform sequence representation). It is then iteratively refined by backpropagating a loss through the Pairformer (and optionally through the Confidence module) to increase the predicted quality of the binder–target interaction (see the sketch below).
A full 3D structure can be generated at the end using the Boltz-1 structure module, but this is not part of the optimization loop.
They benchmarked their method in silico on small-molecule targets and a set of protein–protein interactions from the BindCraft benchmark, comparing performance to RFdiffusion All-Atom.
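A hedged sketch of the hallucination loop: binder sequence logits are optimized by gradient descent through a frozen prediction trunk. `pairformer_distogram` and `interaction_loss` are hypothetical stand-ins for the Boltz-1 Pairformer and the BoltzDesign1 objective, not the actual API.

```python
# Hallucination-style binder design by optimizing sequence logits; illustrative only.
import torch

def design_binder(pairformer_distogram, interaction_loss, target_feats,
                  binder_len=80, n_steps=200, lr=0.1, n_aa=20):
    # weak initialization: small random logits, i.e. a near-uniform soft sequence
    logits = torch.nn.Parameter(0.01 * torch.randn(binder_len, n_aa))
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(n_steps):
        soft_seq = torch.softmax(logits, dim=-1)                  # soft one-hot binder sequence
        distogram = pairformer_distogram(soft_seq, target_feats)  # pairwise distance distributions
        loss = interaction_loss(distogram)                        # e.g. reward binder-target contacts
        opt.zero_grad()
        loss.backward()                                           # gradients flow back into the logits
        opt.step()
    return logits.argmax(dim=-1)                                  # discretize only at the end
```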
BindCraft is an easy-to-use pipeline for computational protein binder design.
It employs AlphaFold2-Multimer to hallucinate binders via backpropagation.
Given a target structure and binder parameters (e.g., sequence length), the binder sequence is initialized with random logits and iteratively optimized via gradient descent through the AF2-Multimer network.
After binder hallucination, the sequence and surface residues are further optimized using MPNNsol, and AF2-Monomer is used to repredict and filter high-confidence designs (see the pipeline sketch below).
Binder designs were validated experimentally through in vitro assays, X-ray crystallography, and cryo-EM.
Reported success rates ranged from 25% to 100%, with most binders in the nanomolar affinity range, a few in the micromolar range, and backbone RMSDs of ~1.7 Å to 3.1 Å between design models and solved structures.
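A hedged sketch of that hallucinate, redesign, repredict, and filter flow, assuming the stages are available as callables; the function names and confidence cutoffs are hypothetical placeholders, not BindCraft's actual code or its exact filter thresholds.

```python
# BindCraft-style design pipeline, stages passed in as callables; illustrative only.
def design_and_filter(hallucinate, redesign_with_mpnn, repredict_with_af2,
                      target_structure, binder_len, n_trajectories=100):
    accepted = []
    for _ in range(n_trajectories):
        # 1) hallucinate a binder by gradient descent through AF2-Multimer
        binder = hallucinate(target_structure, binder_len)
        # 2) optimize the sequence (interface kept, surface redesigned) with soluble ProteinMPNN
        for seq in redesign_with_mpnn(binder, target_structure):
            # 3) repredict the complex and keep only high-confidence designs
            metrics = repredict_with_af2(seq, target_structure)
            if metrics["plddt"] > 80 and metrics["i_pae"] < 10:   # illustrative cutoffs
                accepted.append((seq, metrics))
    return accepted
```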
The model supports autoregressive generation in both the N-to-C and C-to-N directions, as well as span infilling.
The architecture is a Transformer with a Sparse Mixture of Experts (MoE), activating about 27% of parameters per forward pass to improve computational efficiency.
They studied how sampling affects training by trying different family-level weighting schemes. Uniform sampling across families (where small and large families have equal chance) gave better diversity and generalization, while unmodified sampling (letting big families dominate) performed worst.
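A minimal sketch of the family-uniform weighting scheme: every family receives equal total sampling probability, so members of small families are upweighted relative to members of large families. The family identifiers below are illustrative.

```python
# Family-uniform sampling weights: each family gets equal total probability.
import random
from collections import Counter

def family_uniform_weights(family_ids):
    """Return per-sequence sampling weights so every family is equally likely."""
    counts = Counter(family_ids)
    n_families = len(counts)
    return [1.0 / (n_families * counts[f]) for f in family_ids]

families = ["PF00001"] * 1000 + ["PF99999"] * 3   # one huge family, one tiny family
weights = family_uniform_weights(families)
picked = random.choices(range(len(families)), weights=weights, k=10)
```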
They validated the models by showing that generated proteins express well in wet lab experiments (split-GFP assays, spanning both highly novel and moderately novel sequence spaces).
They used a large thermostability dataset to align model outputs with stability. Rather than standard fine-tuning, they applied preference optimization, teaching the model to prefer sequences predicted to be more stable. Upon experimental validation, the aligned models indeed produced proteins with higher expression and stability.
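The sketch below shows a generic DPO-style pairwise preference loss in which the "chosen" sequence of each pair is the one with higher predicted stability; it illustrates the preference-optimization idea and is not the paper's exact alignment objective.

```python
# Generic DPO-style preference loss over stability-ranked sequence pairs; illustrative only.
import torch
import torch.nn.functional as F

def stability_preference_loss(logp_chosen, logp_rejected,
                              ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """logp_* are summed sequence log-likelihoods under the policy model;
    ref_logp_* are the same under the frozen pre-alignment reference model."""
    chosen_ratio = logp_chosen - ref_logp_chosen
    rejected_ratio = logp_rejected - ref_logp_rejected
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()
```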
A CNN surrogate for the costly SCM calculation, which correlates with viscosity.
Developed a shallow CNN (tens of thousands of parameters) to approximate the SCM score, which is normally computed from MD simulations that are inherently slow.
The correlation between CNN predictions and the MD-derived SCM score is ca. 0.8.
When the CNN output is translated into a viscosity prediction (via its correlation with the SCM score), it agrees reasonably well with experimentally measured viscosities, with correlation coefficients in the range of 0.7 to 0.8.
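A hedged sketch of such a surrogate: a shallow 1D CNN regressing an SCM-like score from a one-hot encoded Fv sequence. The input encoding, layer sizes, and parameter count are illustrative assumptions, not the published architecture.

```python
# Shallow 1D CNN regressor for an SCM-like score; architecture is illustrative only.
import torch
import torch.nn as nn

class SCMSurrogate(nn.Module):
    def __init__(self, n_aa=21, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_aa, channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # pool over sequence length
            nn.Flatten(),
            nn.Linear(channels, 1),           # predicted SCM-like score
        )

    def forward(self, one_hot_seq):           # shape (B, n_aa, seq_len)
        return self.net(one_hot_seq).squeeze(-1)

model = SCMSurrogate()
print(sum(p.numel() for p in model.parameters()))  # small parameter count, in the "shallow" spirit
```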
Introduced the Spatial Charge Map (SCM) - a simple structural descriptor that correlates with viscosity measurements.
Prior studies have correlated pronounced negative surface patches on the antibody’s Fv domain with elevated solution viscosity.
The SCM score aggregates partial charges over the Fv into a single descriptor that correlates with viscosity readouts (see the sketch below).
Pfizer, MedImmune, and Novartis contributed antibodies to the benchmark.
A panel of IgG1 antibodies was selected and their viscosities were measured experimentally at high concentrations (around 150 mg/mL) under nearly identical formulation conditions (e.g., pH 5.8, temperature at 25°C).
In each case, the high-viscosity antibodies had the highest SCM scores, supporting SCM as a useful predictor of viscosity.
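A hedged sketch of an SCM-like descriptor, assuming the score sums the negative partial charges of solvent-exposed side-chain atoms and reports the magnitude; the exposure cutoff and exact aggregation rule are assumptions for illustration, not the published SCM definition.

```python
# SCM-like descriptor: aggregate exposed negative side-chain charge; illustrative only.
def scm_like_score(atoms, sasa_cutoff=10.0):
    """atoms: iterable of dicts with 'partial_charge', 'sasa' (A^2), 'is_sidechain'."""
    exposed_negative = [
        a["partial_charge"]
        for a in atoms
        if a["is_sidechain"] and a["sasa"] > sasa_cutoff and a["partial_charge"] < 0.0
    ]
    return abs(sum(exposed_negative))

# In practice this would be averaged over snapshots of an MD trajectory of the Fv.
```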
Benchmarking of a proprietary antibody design algorithm.
The method generates novel antibodies against a target under a specific epitope constraint, and it can also be used to redesign existing antibodies.
Altogether, they find good-affinity scFv binders for six targets for which they found a complex in the PDB, such as PD-1 and HER2.
The de novo antibody design methods were computationally benchmarked against a curated set of 32 experimentally resolved antibody–antigen complexes using metrics like the G-pass rate and orientation recovery (measured by Fw RMSD). This allowed the authors to compare their method (across different versions) against other approaches.
They compare against RFAntibody and dyMEAN, but only on the computational task of reproducing the orientation of an existing antibody.
Several rounds of biopanning are employed to enrich high-affinity, target-specific binders from the pre-designed library; no new mutations are introduced during panning.
They benchmark developability properties such as monomericity, yield, and polyreactivity, showing that their antibodies have favorable profiles.
They demonstrate that most of their designed binders have less than 50% H3 sequence identity to antibodies in the PDB.
AbMAP - Language model transfer learning framework with applications to antibody engineering.
The authors address a dichotomy in antibody language models: one either uses a general protein model such as ESM or an antibody-specific model such as AntiBERTy/IgLM. General protein models fail to capture the hypervariability of the CDRs, whereas antibody-specific models focus too heavily on the framework. As a solution, they focus solely on the CDRs plus their flanking regions.
They show applicability to three off-the-shelf models, with applications including structure template finding and low-N generative modeling.
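A hedged sketch of the CDR-plus-flanking idea using an off-the-shelf PLM (ESM-2 via the fair-esm package): embed the chain, then keep only the vectors at CDR positions and a small flanking window. The CDR index ranges and flank size would come from an antibody numbering tool in practice and are hypothetical here; this illustrates the idea rather than reproducing AbMAP's code.

```python
# Extract CDR + flanking-region embeddings from an off-the-shelf PLM; illustrative only.
import torch
import esm

def cdr_embeddings(sequence, cdr_ranges, flank=2, repr_layer=33):
    """Embed a chain with ESM-2 and keep only CDR + flanking-region vectors.
    cdr_ranges are 0-based [start, end) residue spans (e.g. from a numbering tool)."""
    model, alphabet = esm.pretrained.esm2_t33_650M_UR50D()
    model.eval()
    batch_converter = alphabet.get_batch_converter()
    _, _, tokens = batch_converter([("chain", sequence)])
    with torch.no_grad():
        reps = model(tokens, repr_layers=[repr_layer])["representations"][repr_layer][0]
    kept = []
    for start, end in cdr_ranges:
        s, e = max(0, start - flank), min(len(sequence), end + flank)
        kept.append(reps[1 + s : 1 + e])      # +1 skips the BOS token
    return torch.cat(kept, dim=0)             # (n_kept_residues, embed_dim)
```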