Aumai Proofserve

B+ (89) · completed
Library · cli / python · tiny
Files: 22
LOC: 1,567
Frameworks: 1
Languages: 4

Pipeline State

Status: completed
Run ID: #304037
Phase: done
Progress: 1%
Started:
Finished: 2026-04-13 01:31:02
LLM tokens: 0

Pipeline Metadata

Stage: Skipped
Decision: skip_scaffold_dup
Novelty: 36.22
Isolation: Framework unique
Last stage change: 2026-04-16 18:15:42
Deduplication group: #47941
Member of a group with 1 similar repo(s); canonical: #9446
Top concepts (4): Project Description · testing · Testing · Testing
Methodology: Repobility · https://repobility.com/research/state-of-ai-code-2026/

AI Prompt

Build me a command-line interface (CLI) tool in Python for verifiable computation of agent outputs, similar to AumAI Proofserve. I need the structure to include documentation sections for getting started and an API reference, and it should be set up with pytest for testing. Please ensure the project structure supports examples and contribution guidelines, and use a modern setup defined in pyproject.toml.
python cli pytest verifiable-computation agent-ai
Generated by gemma4:latest
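The prompt above asks for a pyproject.toml-based layout with pytest. A minimal sketch of such a configuration might look like the following; the package name, entry point, versions, and paths are illustrative assumptions, not the project's actual file:

```toml
[project]
name = "aumai-proofserve"                    # illustrative metadata
version = "0.1.0"
description = "Verifiable computation for agent outputs"
requires-python = ">=3.10"
dependencies = ["click", "pydantic"]         # detected technologies

[project.scripts]
proofserve = "aumai_proofserve.cli:main"     # hypothetical entry point

[build-system]
requires = ["setuptools>=68"]
build-backend = "setuptools.build_meta"

[tool.pytest.ini_options]
testpaths = ["tests"]                        # matches the requested pytest setup
```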

Catalog Information

This project provides verifiable computation for agent outputs.

Description

Aumai-proofserve is a system that enables the verification of computations performed by agents. It allows users to ensure the integrity and accuracy of the results produced by these agents, which can be particularly useful in applications where trustworthiness is crucial. The project utilizes Click for command-line interface management and Pydantic for data validation.
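To illustrate how Click and Pydantic might pair up in a verification CLI of this kind, here is a minimal sketch. The schema, the `verify` command, and the digest check are hypothetical examples, not the project's actual API:

```python
# Hypothetical sketch: a Click subcommand that validates an agent-result
# payload with Pydantic, then recomputes a SHA-256 digest to "verify" it.
# All names (AgentResult, verify, is_verified) are illustrative.
import hashlib
import json

import click
from pydantic import BaseModel, ValidationError


class AgentResult(BaseModel):
    """Schema an agent's output must satisfy before verification."""
    agent_id: str
    output: str
    digest: str  # hex SHA-256 of `output`, claimed by the agent


def is_verified(result: AgentResult) -> bool:
    """Recompute the digest and compare it to the agent's claim."""
    return hashlib.sha256(result.output.encode()).hexdigest() == result.digest


@click.command()
@click.argument("payload")
def verify(payload: str) -> None:
    """Verify a JSON payload describing one agent result."""
    try:
        result = AgentResult.model_validate(json.loads(payload))
    except (ValidationError, json.JSONDecodeError) as exc:
        raise click.ClickException(f"invalid payload: {exc}")
    click.echo("verified" if is_verified(result) else "MISMATCH")
```

Pydantic handles the structural check (missing or mistyped fields fail fast), while the digest comparison stands in for whatever integrity check the real system performs.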

Description (translated from Arabic)

This project provides verifiable computation of agent outputs. It allows users to ensure the integrity of the results produced by these agents, which can be particularly useful in applications where trust is essential. The project uses Click for command-line interface management and Pydantic for data validation.

Novelty

7/10

Tags

verifiable-computation agent-outputs trustworthiness integrity accuracy

Technologies

click pydantic

Claude Models

claude-opus-4.6

Quality Score

B+
89.3/100
Structure
93
Code Quality
100
Documentation
85
Testing
85
Practices
68
Security
100
Dependencies
90

Strengths

  • CI/CD pipeline configured (github_actions)
  • Good test coverage (60% test-to-source ratio)
  • Code linting configured (possibly ruff)
  • Consistent naming conventions (snake_case)
  • Low average code complexity: well-structured code
  • Good security practices: no major issues detected
  • Properly licensed project

Security & Health

Tech Debt: 4.1h (grade D)
DORA Rating: Medium
OWASP: A (100%)
Quality Gate: PASS
Risk: A (6)
License: Apache-2.0
Duplication: 0.0%
Full Security Report · AI Fix Prompts · SARIF · SBOM

Languages

python: 82.5%
markdown: 8.6%
yaml: 5.0%
toml: 3.9%

Frameworks

pytest
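pytest is the only detected framework. A minimal test module for a verification helper of the kind this project describes might look like the sketch below; the function under test is defined inline so the example is self-contained, and its name and behavior are assumptions, not the project's actual code:

```python
# Hypothetical pytest sketch for a digest-based verification helper.
import hashlib

import pytest


def verify_digest(output: str, digest: str) -> bool:
    """Return True when `digest` is the hex SHA-256 of `output`."""
    return hashlib.sha256(output.encode()).hexdigest() == digest


@pytest.mark.parametrize("output", ["", "42", "agent output"])
def test_roundtrip(output):
    # A digest computed from the output itself must always verify.
    good = hashlib.sha256(output.encode()).hexdigest()
    assert verify_digest(output, good)


def test_mismatch():
    # A digest that does not match the output must be rejected.
    assert not verify_digest("42", "0" * 64)
```

With `testpaths` pointing at a `tests/` directory, `pytest` discovers and runs these cases automatically.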

Symbols

variables: 14
functions: 9
methods: 9
classes: 5
constants: 1

Concepts (4)

Category           Name                 Description                                Confidence
auto_description   Project Description  Verifiable computation for agent outputs  80%
arch_layer         testing              Detected testing layer                    70%
auto_category      Testing              testing                                   70%
business_logic     Testing              Detected from 3 related files             50%

Quality Timeline

1 quality score recorded.


Embed Badge

Add to your README:

![Quality](https://repos.aljefra.com/badge/27864.svg)

BinComp Dependency Hardening

2 of this repo's dependencies have been scanned for binary hardening. Grade reflects RELRO / stack canary / FORTIFY / PIE coverage.
click 8.3.2: grade N · 0 gadgets · risk 0.0
pydantic 2.12.5: grade N · 0 gadgets · risk 0.0