LLM Authenticity Detection

D · 60 · completed
Other · library / python · tiny
37 Files · 4,469 LOC · 0 Frameworks · 7 Languages

Pipeline State

State: completed
Run ID: #1534223
Phase: done
Progress: 0%
Started: 2026-04-16 14:55:28
Finished: 2026-04-16 14:55:28
LLM tokens: 0

Pipeline Metadata

Stage: Skipped
Decision: skip_scaffold_dup
Novelty: 25.63
Framework unique · Isolation
Last stage change: 2026-04-16 18:15:42
Deduplication group: #47371 — member of a group with 311 similar repo(s); canonical: #1523155
About: code-quality intelligence by Repobility · https://repobility.com

AI Prompt

Create a Python library for model fingerprinting to detect if an LLM API is being proxied or disguised. I need it to support testing against both official OpenAI APIs and custom third-party endpoints. The system should analyze four detection layers: API/protocol, cognitive/prompt, alignment/review, and logic/math. It should read configuration from `config.yaml` and generate a detailed report showing the detection status for each layer, including a final conclusion and a pass rate.
Generated by gemma4:latest
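The prompt above describes a report that aggregates four detection layers into a conclusion and a pass rate. A minimal sketch of that aggregation step is below; the `LayerResult` structure, the 75% pass threshold, and the layer names taken as pass/fail flags are all assumptions, not the library's actual API.

```python
# Hypothetical sketch: aggregate per-layer detection results into a report
# with a final conclusion and a pass rate. Structures are illustrative only.
from dataclasses import dataclass

# Layer names from the prompt: API/protocol, cognitive/prompt,
# alignment/review, and logic/math.
LAYERS = ["api/protocol", "cognitive/prompt", "alignment/review", "logic/math"]

@dataclass
class LayerResult:
    name: str
    passed: bool
    detail: str = ""

def build_report(results: list[LayerResult]) -> dict:
    """Summarize layer results into per-layer status, a pass rate, and a conclusion."""
    passed = sum(1 for r in results if r.passed)
    rate = passed / len(results) if results else 0.0
    return {
        "layers": {r.name: "pass" if r.passed else "fail" for r in results},
        "pass_rate": f"{rate:.0%}",
        # Threshold of 0.75 is an arbitrary illustration, not the tool's rule.
        "conclusion": "authentic" if rate >= 0.75 else "likely proxied/disguised",
    }

# Example run: three layers pass, logic/math fails.
results = [LayerResult(n, passed=(n != "logic/math")) for n in LAYERS]
print(build_report(results))
```

In a full implementation, each `LayerResult` would come from probing the endpoint configured in `config.yaml` rather than being constructed by hand.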

Tags

python library llm ai security fingerprinting api detection

Quality Score

D · 59.5/100

Structure: 55
Code Quality: 74
Documentation: 54
Testing: 20
Practices: 60
Security: 100
Dependencies: 90

Strengths

  • Consistent naming conventions (snake_case)
  • Good security practices — no major issues detected

Weaknesses

  • No LICENSE file — legal ambiguity for contributors
  • No CI/CD configuration — manual testing and deployment
  • 3 bare except/catch blocks swallowing errors
  • 225 duplicate lines detected — consider DRY refactoring

Recommendations

  • Add a test suite — start with critical path integration tests
  • Set up CI/CD (GitHub Actions recommended) to automate testing and deployment
  • Add a linter configuration to enforce code style consistency
  • Add a LICENSE file (MIT recommended for open source)
  • Replace bare except/catch blocks with specific exception types
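The last recommendation is worth illustrating: a bare `except:` also catches `KeyboardInterrupt` and `SystemExit` and hides real bugs. A small sketch of the fix, using `json` parsing as a stand-in for this repo's actual error-prone calls (which are not shown in the report):

```python
# Illustrative only: narrowing a bare except to a specific exception type.
import json

def parse_response(raw: str) -> dict:
    # Bad pattern flagged by the scan:
    #   try: ...
    #   except: pass   # swallows every error, including KeyboardInterrupt
    try:
        return json.loads(raw)
    except json.JSONDecodeError as exc:
        # Catch only the failure we expect, and preserve the original context.
        raise ValueError(f"malformed API response: {exc}") from exc
```

Catching the narrowest exception type keeps unexpected failures loud instead of silently swallowed.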

Languages

python: 56.2%
html: 30.9%
markdown: 6.6%
json: 4.5%
yaml: 0.9%
sql: 0.8%
text: 0.0%

Frameworks

None detected

Symbols

method: 73
variable: 53
constant: 35
class: 24
function: 5

Quality Timeline

1 quality score recorded.


Embed Badge

Add to your README:

![Quality](https://repos.aljefra.com/badge/1218933.svg)
Provenance: Repobility (https://repobility.com) — every score reproducible from /scan/

BinComp Dependency Hardening

3 of this repo's dependencies have been scanned for binary hardening. Grade reflects RELRO / stack canary / FORTIFY / PIE coverage.
N · asyncio 4.0.0 · 0 gadgets · risk 0.0
N · httpx 0.28.1 · 0 gadgets · risk 0.0
N · setuptools 82.0.1 · 0 gadgets · risk 0.0