PyPI Stats

Search Packages

Find Python packages by name, description, or GitHub topic, or filter by metrics
  • makcedward/nlpaug: Data augmentation for NLP (716K downloads · 5K stars · 477 forks)
  • Trusted-AI/adversarial-robustness-toolbox: Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams (31K downloads · 6K stars · 1K forks)
  • QData/textattack: TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP (https://textattack.readthedocs.io/en/master/) (14K downloads · 3K stars · 445 forks)
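Frameworks like TextAttack build attacks out of simple text perturbations. As an illustration only (this is not TextAttack's API), here is a stdlib-only sketch of a character-swap perturbation of the kind such toolkits implement; the function name and parameters are hypothetical:

```python
import random

def char_swap_perturb(text, rate=0.1, seed=0):
    """Perturb words by swapping two adjacent interior characters,
    a simple character-level attack used in NLP attack toolkits.
    Words of 3 characters or fewer are left untouched."""
    rng = random.Random(seed)
    out = []
    for w in text.split():
        if len(w) > 3 and rng.random() < rate:
            i = rng.randrange(1, len(w) - 2)       # interior position
            w = w[:i] + w[i + 1] + w[i] + w[i + 2:]  # swap w[i], w[i+1]
        out.append(w)
    return " ".join(out)
```

The swap preserves each word's character multiset, so the perturbed text stays close to the original while still breaking exact-match features.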
  • HarryK24/torchattacks: PyTorch implementation of adversarial attacks [torchattacks] (12K downloads · 2K stars · 371 forks)
  • bethgelab/foolbox: A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX (10K downloads · 3K stars · 439 forks)
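The attacks these toolboxes implement build on gradient-based methods such as the Fast Gradient Sign Method (FGSM), which perturbs the input in the sign direction of the loss gradient. A minimal stdlib-only sketch on a toy logistic-regression model (not the foolbox or torchattacks API; the model and numbers are illustrative):

```python
import math

def fgsm_linear(x, y, w, b, eps):
    """FGSM on logistic regression: x_adv = x + eps * sign(grad_x loss).
    For p = sigmoid(w.x + b) with binary label y and cross-entropy loss,
    the input gradient is (p - y) * w."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = 1.0 / (1.0 + math.exp(-z))
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)   # -1, 0, or +1
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]
```

Each feature moves by exactly eps (or stays put if its gradient is zero), so the perturbation is bounded in the L-infinity norm, the threat model most of these toolboxes default to.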
  • fra31/pyautoattack: Code for the paper "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks" (3K downloads · 743 stars · 117 forks)
  • DSE-MSU/deeprobust: A PyTorch adversarial library for attack and defense methods on images and graphs (2K downloads · 1K stars · 192 forks)
  • gparrella12/ml-pentest: A framework for evaluating the robustness of malware detection methods against adversarial attacks (1K downloads · 1 star · 0 forks)
  • BorealisAI/advertorch: A Toolbox for Adversarial Robustness Research (1K downloads · 1K stars · 199 forks)
  • dynaroars/neuralsat: NeuralSAT, a DPLL(T) framework for verifying deep neural networks (934 downloads · 31 stars · 11 forks)
  • infinitode/deepdefend: An open-source Python library for adversarial attacks and defenses in deep learning models, enhancing the security and robustness of AI systems (742 downloads · 2 stars · 0 forks)
  • thunlp/openattack: An open-source package for textual adversarial attacks (731 downloads · 774 stars · 128 forks)
  • HarryK24/torchdefenses: Adversarial defenses for PyTorch (674 downloads · 2K stars · 371 forks)
  • spencerwooo/torchattack: 🛡 A curated list of adversarial attacks in PyTorch, with a focus on transferable black-box attacks (658 downloads · 71 stars · 6 forks)
  • ain-soph/trojanzoo: A universal PyTorch platform for conducting security research (especially backdoor attacks/defenses) on image classification in deep learning (618 downloads · 303 stars · 66 forks)
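Backdoor attacks of the kind TrojanZoo studies poison a fraction of the training data with a trigger pattern and relabel those examples to an attacker-chosen class (the BadNets recipe). A hedged stdlib-only sketch, with hypothetical helper names (this is not TrojanZoo's API):

```python
def stamp_trigger(image, value=1.0, size=2):
    """Stamp a small square trigger in the bottom-right corner.
    `image` is a 2-D list of floats (a grayscale pixel grid)."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]          # copy, don't mutate input
    for r in range(h - size, h):
        for c in range(w - size, w):
            out[r][c] = value
    return out

def poison_dataset(data, target_label, rate=0.1):
    """BadNets-style poisoning: trigger-stamp and relabel a fraction
    of (image, label) pairs to the attacker's target class."""
    n = max(1, int(len(data) * rate))
    poisoned = [(stamp_trigger(img), target_label) for img, _ in data[:n]]
    return poisoned + data[n:]
```

A model trained on the poisoned set behaves normally on clean inputs but predicts the target class whenever the trigger patch is present, which is what backdoor defenses try to detect.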
  • microsoft/promptbench: A unified evaluation framework for large language models (603 downloads · 3K stars · 220 forks)
  • AINTRUST-AI/aixploit: AI red-teaming Python library (491 downloads · 8 stars · 2 forks)
  • dlshriver/dnnf: Deep neural network falsification (459 downloads · 9 stars · 4 forks)
  • wuhanstudio/deepapi: Deep learning cloud service for black-box adversarial attacks (456 downloads · 5 stars · 0 forks)
  • SemanticBrainCorp/semanticshield: A security toolkit for managing generative AI (especially LLMs) and supervised learning processes (learning and inference) (449 downloads · 23 stars · 2 forks)
  • cassidylaidlaw/perceptual-advex: Code and data for the ICLR 2021 paper "Perceptual Adversarial Robustness: Defense Against Unseen Threat Models" (423 downloads · 56 stars · 9 forks)
  • jaschadub/harmonydagger: A tool for protecting audio against use in AI training (415 downloads · 53 stars · 9 forks)
  • neu-autonomy/nfl-veripy: Formal verification of neural feedback loops (NFLs) (392 downloads · 84 stars · 17 forks)
  • TortueSagace/versatile-evasion-attacks: Security protocols for estimating the adversarial robustness of machine learning models on both tabular and image datasets; implements a set of evasion attacks based on metaheuristic optimization algorithms and complex cost functions for reliable results on tabular problems (340 downloads · 3 stars · 0 forks)
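Several of the packages above (deepapi, versatile-evasion-attacks) target the black-box setting, where the attacker only has query access to the model's predictions. A hedged stdlib sketch of the simplest such approach, random-search evasion inside an L-infinity ball (illustrative only, not any package's API):

```python
import random

def random_search_evasion(predict, x, eps=0.5, iters=200, seed=0):
    """Query-only (black-box) evasion by random search: sample
    perturbations within an L-inf ball of radius eps and return the
    first candidate that changes the model's predicted label."""
    rng = random.Random(seed)
    base = predict(x)
    for _ in range(iters):
        cand = [xi + rng.uniform(-eps, eps) for xi in x]
        if predict(cand) != base:
            return cand          # adversarial example found
    return None                  # no evasion found within the budget
```

Metaheuristic attacks like those in versatile-evasion-attacks replace the blind sampling loop with guided search (e.g. evolutionary strategies), but the interface, a prediction oracle plus a perturbation budget, is the same.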
    • Data from PyPI, GitHub, ClickHouse, and BigQuery