Aumai Agentsim

Grade: B+ (89) · completed
Type: Library · cli / python · tiny
Files: 22
LOC: 1,285
Frameworks: 1
Languages: 4

Pipeline State

State: completed
Run ID: #303983
Phase: done
Progress: 1%
Started:
Finished: 2026-04-13 01:31:02
LLM tokens: 0

Pipeline Metadata

Stage: Skipped
Decision: skip_scaffold_dup
Novelty: 35.28
Isolation: framework unique
Last stage change: 2026-04-16 18:15:42
Deduplication group: #47941 (member of a group with 1 similar repo; canonical: #9446)
Top concepts (5): Project Description, testing, Testing, Factory, Testing

AI Prompt

Create a command-line tool in Python that simulates multi-agent interactions for testing purposes. I need the structure to support core functionality, and it should be set up with pytest for testing. Please ensure the project includes necessary documentation sections like 'Getting Started' and 'API Reference' within the docs folder, and provide examples in an examples directory. The project should be structured to be easily installable via pip.
python cli testing multi-agent simulation pytest
Generated by gemma4:latest
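
To make the prompt concrete: a minimal entry point for such a tool might look like the sketch below, wiring click for the command line and pydantic for option validation (the two technologies detected in this repo). All names here are illustrative, not the repository's actual code.

```python
# Hypothetical CLI entry point for a multi-agent simulator built with
# click + pydantic, as the prompt requests. Illustrative only.
import click
from pydantic import BaseModel, PositiveInt


class SimulationConfig(BaseModel):
    # Validated run parameters; pydantic rejects non-positive values.
    agents: PositiveInt = 2
    steps: PositiveInt = 10


@click.command()
@click.option("--agents", default=2, help="Number of simulated agents.")
@click.option("--steps", default=10, help="Number of interaction steps.")
def main(agents: int, steps: int) -> None:
    """Run a toy multi-agent simulation."""
    config = SimulationConfig(agents=agents, steps=steps)
    for step in range(config.steps):
        # Each step, every agent "acts"; real simulation logic would go here.
        click.echo(f"step {step}: {config.agents} agents interacting")


if __name__ == "__main__":
    main()
```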

Catalog Information

This project simulates multi-agent interactions to facilitate testing and development.

Description

Aumai-agentsim is a tool designed to simulate complex interactions between multiple agents. It allows developers to test and refine their systems in a controlled environment, making it easier to identify and address potential issues. The simulator can be used for a variety of applications, including, but not limited to, testing AI models, evaluating system performance, and optimizing resource allocation.

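As a toy illustration of the interaction model the description refers to, the sketch below runs a few rounds of message-passing between agents. The Agent class and its API are hypothetical stand-ins, not this package's actual interface.

```python
# Toy multi-agent interaction loop: each round, every agent pings one
# randomly chosen peer. The Agent API shown here is hypothetical.
import random


class Agent:
    def __init__(self, name: str) -> None:
        self.name = name
        self.inbox: list[str] = []

    def act(self, peers: list["Agent"]) -> None:
        # Send a message to one randomly chosen peer.
        target = random.choice(peers)
        target.inbox.append(f"ping from {self.name}")


agents = [Agent(f"agent-{i}") for i in range(3)]
for _ in range(5):  # five interaction rounds
    for agent in agents:
        peers = [a for a in agents if a is not agent]
        agent.act(peers)

for agent in agents:
    print(agent.name, "received", len(agent.inbox), "messages")
```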

Novelty

5/10

Tags

multi-agent-simulation testing-tool system-performance-evaluation ai-model-testing resource-allocation-optimization

Technologies

click pydantic

Claude Models

claude-opus-4.6

Quality Score

B+ (89.3/100)
Structure: 93
Code Quality: 100
Documentation: 85
Testing: 85
Practices: 68
Security: 100
Dependencies: 90

Strengths

  • CI/CD pipeline configured (github_actions)
  • Good test coverage (60% test-to-source ratio)
  • Code linting configured (possibly ruff)
  • Consistent naming conventions (snake_case)
  • Good security practices: no major issues detected
  • Properly licensed project

Security & Health

Tech Debt: 4.1h (D)
DORA Rating: Medium
OWASP: A (100%)
Quality Gate: PASS
Risk: A (8)
License: Apache-2.0
Duplication: 0.0%
Generated by Repobility's multi-pass static-analysis pipeline (https://repobility.com)

Languages

python: 77.9%
markdown: 10.8%
yaml: 6.3%
toml: 4.9%

Frameworks

pytest
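
With pytest as the detected framework, the project's tests presumably follow the usual pytest conventions. The self-contained sketch below illustrates the style; interact is a hypothetical stand-in for the package's simulation step, not its real API.

```python
# Minimal pytest-style tests, self-contained rather than importing the
# (unknown) package under test. Run with: pytest test_simulation.py
def interact(agents: list[str]) -> dict[str, int]:
    # Stand-in for a simulation step: every agent pings every other
    # agent exactly once, so each hears len(agents) - 1 messages.
    return {a: len(agents) - 1 for a in agents}


def test_every_agent_hears_all_peers() -> None:
    counts = interact(["a", "b", "c"])
    assert counts == {"a": 2, "b": 2, "c": 2}


def test_single_agent_hears_nothing() -> None:
    assert interact(["solo"]) == {"solo": 0}
```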

Symbols

variables: 26
methods: 14
classes: 8
functions: 5

Concepts (5)

Category | Name | Description | Confidence
auto_description | Project Description | Simulate multi-agent interactions for testing | 80%
arch_layer | testing | Detected testing layer | 70%
auto_category | Testing | testing | 70%
design_pattern | Factory | Found factory/create_ naming patterns | 60%
business_logic | Testing | Detected from 3 related files | 50%

Quality Timeline

1 quality score recorded.


Embed Badge

Add to your README:

![Quality](https://repos.aljefra.com/badge/27810.svg)

BinComp Dependency Hardening

2 of this repo's dependencies have been scanned for binary hardening. Grade reflects RELRO / stack canary / FORTIFY / PIE coverage.
click 8.3.2 · grade N · 0 gadgets · risk 0.0
pydantic 2.12.5 · grade N · 0 gadgets · risk 0.0
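
For reference, the four hardening properties named above can be approximated with binutils' readelf, checksec-style. The sketch below is a generic reimplementation under that assumption, not Repobility's actual scanner.

```python
# Checksec-style approximation of RELRO / stack canary / FORTIFY / PIE
# checks, built on binutils' readelf. Generic sketch, not the report's
# real pipeline.
import subprocess


def _readelf(path: str, *flags: str) -> str:
    # Return readelf output, or "" if the tool or file is unavailable.
    try:
        result = subprocess.run(
            ["readelf", *flags, path],
            capture_output=True, text=True, check=True,
        )
    except (OSError, subprocess.CalledProcessError):
        return ""
    return result.stdout


def hardening_summary(path: str) -> dict[str, bool]:
    header = _readelf(path, "-h")          # ELF header: file type
    segments = _readelf(path, "-l", "-W")  # program headers
    dynamic = _readelf(path, "-d")         # dynamic section
    symbols = _readelf(path, "-s", "-W")   # symbol tables
    return {
        # PIE executables have ELF type DYN rather than EXEC.
        "pie": "Type:" in header and "DYN" in header,
        # Partial RELRO adds a GNU_RELRO segment; full RELRO also sets BIND_NOW.
        "relro": "GNU_RELRO" in segments,
        "full_relro": "GNU_RELRO" in segments and "BIND_NOW" in dynamic,
        # -fstack-protector inserts references to __stack_chk_fail.
        "stack_canary": "__stack_chk_fail" in symbols,
        # _FORTIFY_SOURCE replaces libc calls with *_chk variants.
        "fortify": "__memcpy_chk" in symbols or "__printf_chk" in symbols,
    }


if __name__ == "__main__":
    print(hardening_summary("/bin/ls"))
```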