Adversarial Evaluator Library

Grade B (80) · completed
Library
unknown / markdown · small
Files: 349
LOC: 34,631
Frameworks: 2
Languages: 6

Pipeline State

Status: completed
Run ID: #362576
Phase: done
Progress: 1%
Started:
Finished: 2026-04-13 01:31:02
LLM tokens: 0

Pipeline Metadata

Stage: Cataloged
Decision: proceed
Novelty: 71.80
Isolation: Framework unique
Last stage change: 2026-05-10 03:35:38
Deduplication group: #54894 (member of a group with 3 similar repos; canonical: #24308)
Top concepts (2): Project Description, Web Backend

AI Prompt

Create a Python library called `adversarial-evaluator-library` designed for stress-testing AI models against adversarial attacks on documents, code, and specifications. The tool should allow users to run evaluations using various pre-configured evaluators across multiple providers like Anthropic, OpenAI, Google, and Mistral. I need the ability to run commands like `adversarial evaluate your-document.md` from the command line, and the setup should guide users to configure API keys from a `.env` file. Please structure it to support different review categories and provide a quick start guide using `pip install -e .`.
python fastapi pytest ai-evaluation adversarial-attack library cli markdown api
Generated by gemma4:latest
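The prompt above asks for a CLI invocation of the form `adversarial evaluate your-document.md`, with API keys loaded from a `.env` file. A minimal sketch of such an entry point, using only the standard library; the command name, the `--provider` flag, and the report fields are assumptions for illustration, not the library's confirmed interface:

```python
import argparse
import os
from pathlib import Path


def load_dotenv(path: str = ".env") -> None:
    """Load KEY=VALUE pairs from a .env file into os.environ (no overwrite)."""
    env_file = Path(path)
    if not env_file.exists():
        return
    for line in env_file.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip())


def evaluate(document: str, provider: str) -> dict:
    """Placeholder for running adversarial evaluators against a document."""
    text = Path(document).read_text()
    # A real implementation would dispatch to the configured provider's
    # API here, using the key loaded from the environment.
    return {"document": document, "provider": provider, "chars": len(text)}


def main(argv=None):
    load_dotenv()
    parser = argparse.ArgumentParser(prog="adversarial")
    sub = parser.add_subparsers(dest="command", required=True)
    ev = sub.add_parser("evaluate", help="Run adversarial evaluators on a file")
    ev.add_argument("path")
    ev.add_argument("--provider", default="anthropic",
                    choices=["anthropic", "openai", "google", "mistral"])
    args = parser.parse_args(argv)
    if args.command == "evaluate":
        print(evaluate(args.path, args.provider))


if __name__ == "__main__":
    main()
```

Installed editably via `pip install -e .` with a console-script entry point named `adversarial`, this would support the `adversarial evaluate your-document.md` invocation the prompt describes.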

Catalog Information

This project provides a library for evaluating the robustness of artificial intelligence models against adversarial attacks.

Description

The adversarial-evaluator-library is designed to assess the vulnerability of AI models to malicious inputs, helping developers improve their model's resilience and trustworthiness. This library can be used in various applications where AI model security is a concern, such as image classification, natural language processing, or recommender systems.

Description (translated from Arabic)

This library is designed to evaluate the robustness of AI models against adversarial attacks, helping developers improve the stability and trustworthiness of their models. It can be used in a variety of applications where AI model security is a concern, such as image classification, natural language processing, or recommender systems.
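The description above centers on measuring how stable a model's output is under adversarial perturbation of its input. A small self-contained sketch of that idea; the class and function names are hypothetical illustrations, not this library's API, and the perturbation (adjacent-character swaps, i.e. typo-style noise) is one simple choice among many:

```python
import random


class AdversarialPerturber:
    """Applies character-level perturbations to probe model robustness.

    Hypothetical illustration: randomly swaps adjacent characters to
    simulate typo-style adversarial noise.
    """

    def __init__(self, rate: float = 0.1, seed: int = 0):
        self.rate = rate
        self.rng = random.Random(seed)

    def perturb(self, text: str) -> str:
        chars = list(text)
        i = 0
        while i < len(chars) - 1:
            if self.rng.random() < self.rate:
                chars[i], chars[i + 1] = chars[i + 1], chars[i]
                i += 2  # skip past the swapped pair
            else:
                i += 1
        return "".join(chars)


def robustness_score(model, text: str, perturber, trials: int = 20) -> float:
    """Fraction of perturbed inputs on which the model's output is unchanged."""
    baseline = model(text)
    stable = sum(model(perturber.perturb(text)) == baseline
                 for _ in range(trials))
    return stable / trials
```

Here `model` is any callable from text to an output; a score near 1.0 means the model is insensitive to this class of perturbation, and a low score flags inputs worth hardening against.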

Novelty

7/10

Tags

adversarial-attack ai-security model-evaluation robustness-testing machine-learning

Technologies

fastapi

Claude Models

claude-opus-4.5 claude-opus-4.6 claude-sonnet-4

Quality Score

Overall: B (80.3/100)
Structure: 83
Code Quality: 79
Documentation: 79
Testing: 85
Practices: 74
Security: 84
Dependencies: 60

Strengths

  • CI/CD pipeline configured (github_actions)
  • Good test coverage (75% test-to-source ratio)
  • Code linting configured (likely ruff)
  • Good security practices — no major issues detected

Weaknesses

  • No LICENSE file — legal ambiguity for contributors
  • 211 duplicate lines detected — consider DRY refactoring

Recommendations

  • Add a LICENSE file (MIT recommended for open source)
  • Address 25 TODO/FIXME items — consider tracking them as issues

Security & Health

Tech Debt: A (10.8h)
OWASP: A (100%)
Quality Gate: PASS
Risk: A (1)
License: MIT
Duplication: 14.1%

Languages

markdown: 80.1%
python: 9.6%
shell: 4.4%
yaml: 4.4%
json: 1.1%
toml: 0.2%

Frameworks

FastAPI pytest

Concepts (2)

Source-of-truth: Repobility · https://repobility.com
| Category | Name | Description | Confidence |
| --- | --- | --- | --- |
| auto_description | Project Description | ![CI](https://github.com/movito/adversarial-evaluator-library/actions/workflows/ci.yml) ![Version](https://github.com/movito/adversarial-evaluator-library/releases) ![License](LICENSE) | 80% |
| auto_category | Web Backend | web-backend | 70% |

Quality Timeline

1 quality score recorded.


Embed Badge

Add to your README:

![Quality](https://repos.aljefra.com/badge/86738.svg)