Agentlint

Grade B · 85 · completed
CLI Tool
cli / python · small
131
Files
14,990
LOC
1
Frameworks
6
Languages

Pipeline State

completed
Run ID
#358865
Phase
done
Progress
100%
Started
Finished
2026-04-13 01:31:02
LLM tokens
0

Pipeline Metadata

Stage
Skipped
Decision
skip_scaffold_dup
Novelty
42.67
Framework unique
Isolation
Last stage change
2026-04-16 18:15:42
Deduplication group #47626
Member of a deduplication group with 2 similar repos (canonical #93576)
Top concepts (2)
Project Description, Testing
Repobility · severity-and-effort ranking · https://repobility.com

AI Prompt

Create a command-line tool, similar to agentlint, that provides real-time quality guardrails specifically for AI coding agents. I need it to check for common agent mistakes like accidentally committing API keys (`no-secrets`), leaving debug statements like `print()` or `console.log`, and skipping tests. The tool should support various checks, including warnings for things like force-pushing or leaving TODO comments. Since it's a CLI tool, please structure it using Python and ensure it's testable with pytest.
python cli ai-coding linting testing guardrails pytest security
Generated by gemma4:latest
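One of the checks the prompt above asks for (flagging leftover debug statements such as `print()` or `console.log`) could be sketched in Python roughly as follows. This is an illustrative sketch only; the function and pattern names are hypothetical and not agentlint's actual API:

```python
import re

# Illustrative patterns for debug statements an AI agent might leave behind.
DEBUG_PATTERNS = [
    re.compile(r"^\s*print\("),     # Python debug prints at the start of a line
    re.compile(r"console\.log\("),  # stray JavaScript logging anywhere in a line
]

def check_no_debug(source: str):
    """Return (line_number, line) pairs that look like leftover debug output."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(p.search(line) for p in DEBUG_PATTERNS):
            findings.append((lineno, line.strip()))
    return findings
```

A real implementation would also need per-language awareness (e.g. ignoring `print` inside string literals) and a way to suppress findings, but the per-line pattern scan above is the core shape of such a check.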

Catalog Information

agentlint is a tool designed to provide real-time quality guardrails for AI coding agents.

Description

Agentlint checks the quality of code generated by AI coding agents in real time. It acts as a safety net, catching errors and ensuring that the output meets defined standards. The tool is particularly useful for developers who rely on AI to generate code, as it helps maintain the integrity and reliability of their projects.

Description (translated from Arabic)

This tool reviews the quality of code generated by AI coding agents in real time. It acts as a quality guardrail, protecting against errors and ensuring that the output adheres to defined standards. It is especially useful for developers who rely on AI coding agents to produce code, as it helps keep their projects stable and reliable.

Novelty

7/10

Tags

code-quality ai-generated-code real-time-validation coding-agents software-integrity

Technologies

click

Claude Models

claude-opus-4.6

Quality Score

B
84.8/100
Structure
97
Code Quality
83
Documentation
78
Testing
85
Practices
72
Security
92
Dependencies
60

Strengths

  • CI/CD pipeline configured (github_actions)
  • Good test coverage (51% test-to-source ratio)
  • Code linting configured (ruff, tentatively detected)
  • Consistent naming conventions (snake_case)
  • Good security practices: no major issues detected
  • Properly licensed project

Weaknesses

  • 459 duplicate lines detected; consider DRY refactoring
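The duplicate-line figure above can be approximated with a simple exact-match count. The sketch below is a rough illustration of such a metric, not Repobility's actual algorithm (real duplication scanners typically compare normalized token sequences or multi-line blocks rather than single lines):

```python
from collections import Counter

def duplicate_line_stats(sources):
    """Count non-trivial lines that occur more than once across the given
    source texts; return (duplicate_line_count, duplication_percentage)."""
    counts = Counter()
    total = 0
    for text in sources:
        for line in text.splitlines():
            stripped = line.strip()
            if len(stripped) > 3:  # skip blank and trivial lines
                counts[stripped] += 1
                total += 1
    duplicates = sum(n for n in counts.values() if n > 1)
    pct = 100.0 * duplicates / total if total else 0.0
    return duplicates, round(pct, 1)
```

Applied to a repository's source files, this yields figures comparable in spirit to the "459 duplicate lines / 6.6% duplication" numbers reported above.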

Recommendations

  • Address 47 TODO/FIXME items; consider tracking them as issues

Security & Health

18.3h
Tech Debt (C)
A
OWASP (100%)
PASS
Quality Gate
A
Risk (3)
MIT
License
6.6%
Duplication

Languages

python
72.6%
markdown
25.8%
json
0.6%
yaml
0.5%
toml
0.3%
shell
0.1%

Frameworks

pytest

Concepts (2)

Category         | Name                | Description                                                                                                                                                          | Confidence
auto_description | Project Description | ![CI](https://github.com/mauhpr/agentlint/actions/workflows/ci.yml) ![codecov](https://codecov.io/gh/mauhpr/agentlint) ![PyPI](https://pypi.org/project/agentlint/) | 80%
auto_category    | Testing             | testing                                                                                                                                                              | 70%

Quality Timeline

1 quality score recorded.

Embed Badge

Add to your README:

![Quality](https://repos.aljefra.com/badge/83008.svg)