The ESCAPE Benchmark enforces a unified multilabel evaluation protocol on a shared label space covering antibacterial, antifungal, antiviral, and antiparasitic activities, with fixed public train, validation, and test splits. Each method is trained under the same preprocessing and split definitions, and all reported results correspond to the held-out test set.
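A multilabel protocol of this kind scores each activity independently and then aggregates across the label space. The sketch below illustrates one common aggregation, per-label and macro-averaged F1, on toy predictions; the label names come from the benchmark, but the matrices and the choice of metric are illustrative assumptions, not ESCAPE's exact reporting code.

```python
import numpy as np

# The four ESCAPE activity labels as a shared multilabel space.
# y_true / y_pred are binary matrices of shape (n_peptides, n_labels);
# the values here are toy data for illustration only.
LABELS = ["antibacterial", "antifungal", "antiviral", "antiparasitic"]

y_true = np.array([[1, 0, 1, 0],
                   [1, 1, 0, 0],
                   [0, 0, 1, 1]])
y_pred = np.array([[1, 0, 0, 0],
                   [1, 1, 0, 1],
                   [0, 0, 1, 1]])

def per_label_f1(y_true, y_pred):
    """F1 for each label column; defined as 0.0 when the denominator is 0."""
    tp = np.logical_and(y_true == 1, y_pred == 1).sum(axis=0)
    fp = np.logical_and(y_true == 0, y_pred == 1).sum(axis=0)
    fn = np.logical_and(y_true == 1, y_pred == 0).sum(axis=0)
    denom = 2 * tp + fp + fn
    return np.where(denom > 0, 2 * tp / np.maximum(denom, 1), 0.0)

f1 = per_label_f1(y_true, y_pred)
for name, score in zip(LABELS, f1):
    print(f"{name}: {score:.3f}")
print(f"macro-F1: {f1.mean():.3f}")
```

Because all methods share the same splits and preprocessing, a metric like this is computed once per method on the held-out test set, making scores directly comparable.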
AMP classification methods fall into two main categories. Sequence-based models learn directly from amino acid sequences; examples include AMPlify (a Bi-LSTM with attention), TransImbAMP, and AMP-BERT (a pretrained protein language model). Feature-augmented models incorporate computed descriptors such as physicochemical and structural features; examples include amPEPpy (CTD features with a Random Forest), AMPs-Net (a graph neural network), PEP-Net, and AVP-IFT (a contrastive Transformer).
To assess robustness, each method is trained with multiple random seeds differing in initialization and shuffling, and we report the mean ± standard deviation across seeds. The ESCAPE Baseline extends prior designs by jointly encoding peptide sequences and 3D distance maps through bidirectional cross-attention, unifying structural and sequential cues for state-of-the-art multilabel performance.
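The fusion step described above can be sketched as follows. This is a simplified NumPy illustration of bidirectional cross-attention, in which the sequence stream queries structure tokens and vice versa; the token counts, dimensions, random inputs, and mean-pooling head are assumptions for illustration, and a real implementation would add learned query/key/value projections, multiple heads, and normalization layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention(q, k, v):
    """Scaled dot-product attention with a numerically stable softmax."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Hypothetical shapes: L sequence tokens and R structure tokens
# (e.g., rows of an encoded 3D distance map), both projected to d dims.
L, R, d = 12, 12, 16
seq_tokens = rng.normal(size=(L, d))     # stand-in sequence encoder output
struct_tokens = rng.normal(size=(R, d))  # stand-in distance-map encoder output

# Bidirectional cross-attention: each stream attends over the other,
# with a residual connection back to its own representation.
seq_fused = seq_tokens + attention(seq_tokens, struct_tokens, struct_tokens)
struct_fused = struct_tokens + attention(struct_tokens, seq_tokens, seq_tokens)

# Pool both fused streams into one peptide embedding for a multilabel head.
peptide_embedding = np.concatenate([seq_fused.mean(axis=0),
                                    struct_fused.mean(axis=0)])
print(peptide_embedding.shape)  # → (32,)
```

The design intent is that structural cues (residue distances) can resolve activities that sequence alone leaves ambiguous, while the sequence stream grounds the structure tokens in residue identity.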