CNN surrogate for costly SCM calculations that correlate with viscosity.
Developed a shallow CNN (tens of thousands of parameters) to reproduce SCM values that are otherwise calculated from MD simulations, which are inherently slow.
The correlation between the CNN output and the MD-derived SCM score is ca. 0.8.
The CNN's output, translated into a viscosity prediction via a correlation with the SCM score, achieves a reasonably high correlation with experimentally measured viscosity values, again with correlation coefficients in the range of 0.7 to 0.8.
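The reported agreement boils down to a plain Pearson correlation coefficient. A minimal sketch with made-up toy values (not the paper's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical CNN-predicted vs MD-derived SCM scores for five antibodies
cnn_scm = [850, 920, 700, 1100, 780]
md_scm = [870, 900, 740, 1050, 800]
print(round(pearson_r(cnn_scm, md_scm), 3))
```

In practice one would use `scipy.stats.pearsonr`, which also returns a p-value; the hand-rolled version above just makes the computation explicit.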
Introduced the Spatial Charge Map (SCM) - a simple structural descriptor that correlates with viscosity measurements.
Prior studies have correlated pronounced negative surface patches on the antibody’s Fv domain with elevated solution viscosity.
The SCM score aggregates partial charges over the Fv surface and correlates them with viscosity readouts.
Pfizer, Medi, and Novartis contributed antibodies to the benchmark.
A panel of IgG1 antibodies was selected and their viscosities were measured experimentally at high concentrations (around 150 mg/mL) under nearly identical formulation conditions (e.g., pH 5.8, 25°C).
In each case, the high-viscosity antibodies had the highest SCM scores, indicating that SCM is a sound way to predict viscosity.
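The aggregation idea behind SCM can be sketched as follows: for each solvent-exposed atom, sum the partial charges of neighboring atoms within a distance cutoff and accumulate the magnitude of the negative sums. This is a hypothetical simplification for illustration, not the published descriptor; atom format, cutoff, and normalization are assumptions.

```python
def scm_like_score(atoms, cutoff=10.0):
    """Toy SCM-style score. `atoms` is a list of (x, y, z, partial_charge,
    exposed) tuples. For every solvent-exposed atom, sum neighboring partial
    charges within `cutoff` angstroms; only negative local sums (negative
    surface patches) contribute, via their absolute value."""
    total = 0.0
    for i, (xi, yi, zi, _, exposed) in enumerate(atoms):
        if not exposed:
            continue
        local = 0.0
        for j, (xj, yj, zj, qj, _) in enumerate(atoms):
            if i == j:
                continue
            d2 = (xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2
            if d2 <= cutoff ** 2:
                local += qj
        if local < 0:  # only pronounced negative patches contribute
            total += abs(local)
    return total

# Two exposed atoms sitting in a net-negative neighborhood
atoms = [
    (0.0, 0.0, 0.0, -0.5, True),
    (1.0, 0.0, 0.0, -0.3, True),
    (2.0, 0.0, 0.0, 0.1, False),
]
print(round(scm_like_score(atoms), 3))
```

A higher score here flags a larger accumulated negative surface patch, which is the property the notes above link to elevated viscosity.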
Benchmarking of a proprietary antibody design algorithm.
The method generates novel antibodies against a target under a specific epitope constraint, and can also be used to redesign existing antibodies.
Altogether, they find good-affinity scFv binders for six targets for which a complex exists in the PDB, such as PD-1 and HER2.
The de novo antibody design methods were computationally benchmarked against a curated set of 32 experimentally resolved antibody–antigen complexes using metrics like the G-pass rate and orientation recovery (measured by Fw RMSD). This allowed the authors to compare their method (across different versions) against other approaches.
They compare against RFAntibody and dyMEAN, but only on the computational tasks, i.e., reproducing the orientation of an existing antibody.
Several rounds of biopanning are employed to enrich for high-affinity, target-specific binders from a pre-designed library; no new mutations are introduced during panning.
They benchmark developability properties such as monomericity, yield, and polyreactivity, showing that their antibodies have favorable profiles.
They demonstrate that most of their designed binders have less than 50% H3 sequence identity to antibodies in the PDB.
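The novelty claim reduces to a CDR-H3 identity computation against known antibodies. A sketch assuming pre-aligned, equal-length H3 sequences (real comparisons would use a proper alignment; the sequences below are invented):

```python
def seq_identity(a, b):
    """Fraction of identical positions between two aligned sequences."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def max_identity_to_db(h3, db):
    """Highest identity of a designed H3 loop to any H3 in a reference set."""
    return max(seq_identity(h3, ref) for ref in db)

# Hypothetical designed H3 loop and a tiny stand-in for PDB-derived H3s
designed = "ARDYYGSSYFDY"
pdb_h3s = ["AKGGLLTPFDYW", "TRGGWELLPFDY", "ASPYYDILTGYY"]
print(round(max_identity_to_db(designed, pdb_h3s), 2))
```

A design passing the paper's criterion would, like this toy example, stay below 0.5 against every reference loop.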
AbMAP - Language model transfer learning framework with applications to antibody engineering.
Authors address the dichotomy of language models for antibodies: one either uses a bare-bones protein model like ESM or an antibody-only model like AntiBERTy/IgLM. General protein models fail to capture the hypervariability of the CDRs, whereas antibody models focus too much on the framework. As a solution, they focus solely on the CDRs plus their flanking regions.
They demonstrate applicability to three off-the-shelf models, for structure template finding as well as low-N generative modeling.
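The CDRs-plus-flanks restriction can be sketched as slicing per-residue embeddings down to hypervariable windows. Index ranges, flank width, and the list-of-vectors representation are hypothetical, not AbMAP's actual numbering scheme:

```python
def cdr_with_flanks(embeddings, cdr_ranges, flank=2):
    """Keep only per-residue embeddings inside the CDRs plus `flank` residues
    on each side. `embeddings` is a list of per-residue vectors; `cdr_ranges`
    is a list of (start, end) index pairs with `end` exclusive."""
    keep = set()
    for start, end in cdr_ranges:
        keep.update(range(max(0, start - flank),
                          min(len(embeddings), end + flank)))
    return [embeddings[i] for i in sorted(keep)]

# Toy chain of 20 residues with 1-D "embeddings" equal to the residue index,
# and a single CDR spanning positions 8..12
emb = [[float(i)] for i in range(20)]
sel = cdr_with_flanks(emb, [(8, 12)], flank=2)
print([v[0] for v in sel])
```

The framework positions are dropped entirely, which is the mechanism the notes describe for keeping the model's attention on the hypervariable loops.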
Benchmarking of the structure prediction/docking and co-folding methods for antibody design
Authors measure the impact of antibody-antigen model quality on the success rate of epitope prediction and antibody design.
As a proxy measure they use the DockQ score: for epitope prediction they call success when DockQ exceeds 0.23, while for antibody design they use a stricter threshold of 0.49.
Using these measures, AlphaFold3 comes out on top, succeeding roughly 47% of the time.
They introduce an approach in which ProPOSE and ZDOCK decoys are refined using AlphaFold. With this combined protocol they reach success rates of 35% for epitope mapping and 30% for antibody design.
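The two-threshold success criterion is just counting models above a cutoff. A sketch with made-up DockQ scores (not the paper's data):

```python
def success_rate(dockq_scores, threshold):
    """Fraction of models whose DockQ exceeds the given threshold."""
    return sum(s > threshold for s in dockq_scores) / len(dockq_scores)

# Hypothetical DockQ scores for ten antibody-antigen models
scores = [0.05, 0.12, 0.25, 0.30, 0.48, 0.52, 0.61, 0.20, 0.75, 0.10]
print(success_rate(scores, 0.23))  # epitope-prediction criterion
print(success_rate(scores, 0.49))  # stricter antibody-design criterion
```

The stricter 0.49 cutoff necessarily yields a success rate no higher than the 0.23 one, which is why the design numbers in these benchmarks trail the epitope-mapping numbers.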
Novel inverse folding algorithm - studying the effect of pretraining on the effectiveness of antibody design
Authors test multiple inverse folding regimens: pretraining on general proteins, PPI interfaces, and antibody-antigen interfaces, and likewise fine-tuning on each.
They use only the backbone atoms (N, C, Cα), with special provisions for Cβ.
They mask a portion of the sequence and have the model predict the masked amino acids.
Their 37% sequence recovery at 100% masking appears slightly lower than ProteinMPNN's result on the same task.
Pretraining on antibodies still carries signal for antibody-antigen complexes, demonstrating the value of such pretraining.
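The masked-recovery evaluation can be sketched as: hide a fraction of positions, let a model predict them from the visible context, and score exact matches at the hidden positions only. The "model" below is a trivial placeholder standing in for a real inverse folding network:

```python
import random

def recovery_at_masking(true_seq, predict_fn, mask_frac, seed=0):
    """Mask `mask_frac` of positions and measure how often predictions at
    the masked positions recover the true residue."""
    rng = random.Random(seed)
    n_mask = max(1, int(len(true_seq) * mask_frac))
    masked = rng.sample(range(len(true_seq)), n_mask)
    visible = {i: aa for i, aa in enumerate(true_seq) if i not in masked}
    pred = predict_fn(visible, len(true_seq))
    return sum(pred[i] == true_seq[i] for i in masked) / n_mask

# Placeholder "model": always guesses alanine for hidden positions
def dummy_model(visible, length):
    return [visible.get(i, "A") for i in range(length)]

# Toy heavy-chain fragment; at 100% masking no context is available at all
seq = "EVQLVESGGGLVQPGGSLRL"
print(recovery_at_masking(seq, dummy_model, mask_frac=1.0))
```

Substituting a trained inverse folding model for `dummy_model` and setting `mask_frac=1.0` reproduces the full-masking setting in which the 37% recovery figure above is reported.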