Infer

Grade: C+ (76) · Pipeline: completed
Category: AI/ML
unknown / python · small
Files: 106
LOC: 24,311
Frameworks: 2
Languages: 5

Pipeline State

State: completed
Run ID: #368671
Phase: done
Progress: 1%
Started:
Finished: 2026-04-13 01:31:02
LLM tokens: 0

Pipeline Metadata

Stage: Cataloged
Decision: proceed
Novelty: 60.40
Framework: unique
Isolation:
Last stage change: 2026-05-10 03:35:34
Deduplication group: #50151
Member of a group with 12 similar repo(s) — canonical: #31076
Top concepts (2): Project Description · Web Backend
Repobility · severity-and-effort ranking · https://repobility.com

AI Prompt

Create an educational Large Language Model (LLM) inference runtime using Python. I need the structure to allow users to test and learn from various LLMs. Please include setup instructions, ideally using a tool like `uv` for dependency management, and ensure there are clear sections for running tests using `pytest` and formatting code with `ruff`. The project should be well-documented, perhaps using Markdown files for explanations.
python llm inference fastapi pytest educational runtime ai
Generated by gemma4:latest
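The prompt asks for pytest-based tests. A hypothetical sketch of what such a test module could look like (the helper under test is invented here for illustration and is not taken from the project):

```python
# Hypothetical pytest-style tests for a prompt-preprocessing helper.
# normalize_prompt is defined inline for illustration; the real
# project's helpers and names may differ.
def normalize_prompt(text: str) -> str:
    """Collapse runs of whitespace and strip the ends."""
    return " ".join(text.split())

def test_collapses_internal_whitespace():
    assert normalize_prompt("hello   world") == "hello world"

def test_strips_and_handles_empty_input():
    assert normalize_prompt("   ") == ""
```

Saved as e.g. `test_normalize.py`, this runs under plain `pytest` with no extra dependencies.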

Catalog Information

This project provides an educational Large Language Model (LLM) inference runtime for users to test and learn from.

Description

The infer project is a Python-based runtime that enables users to easily deploy and use pre-trained LLMs for various tasks. Built using FastAPI, Hugging Face Transformers, PyTorch, and Uvicorn, this project provides a simple way to perform inference on educational LLMs. With its lightweight design and modular architecture, infer makes it easy to integrate LLMs into applications or workflows.
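At its core, any such runtime is a decoding loop: repeatedly ask the model for next-token scores and append the winning token. A dependency-free toy sketch of greedy decoding (the bigram table below stands in for a real model and is not from the project, which uses Hugging Face Transformers):

```python
# Toy greedy-decoding loop: the essence of LLM inference, with a
# hard-coded bigram table standing in for a real model's next-token
# distribution. Illustrative only.
BIGRAMS = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"model": 0.7, "runtime": 0.3},
    "model": {"runs": 0.9, "</s>": 0.1},
    "runs": {"</s>": 1.0},
}

def greedy_decode(start: str = "<s>", max_tokens: int = 10) -> list[str]:
    tokens = [start]
    for _ in range(max_tokens):
        scores = BIGRAMS.get(tokens[-1], {})
        if not scores:
            break
        nxt = max(scores, key=scores.get)  # greedy: pick the most likely token
        if nxt == "</s>":                  # stop at the end-of-sequence marker
            break
        tokens.append(nxt)
    return tokens[1:]  # drop the start symbol

print(greedy_decode())  # → ['the', 'model', 'runs']
```

A real runtime replaces the table lookup with a forward pass through the network and may sample instead of taking the argmax, but the loop structure is the same.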

Description (translated from Arabic)

This project provides an educational large language model (LLM) inference runtime for users to test and learn from. Built with Python using FastAPI, Hugging Face Transformers, PyTorch, and Uvicorn, it offers a simple way to run inference on educational LLMs. With its lightweight design and modular architecture, infer makes it easy to integrate LLMs into applications or workflows.

Novelty

5/10

Tags

educational large-language-models inference-runtime natural-language-processing machine-learning deep-learning

Technologies

fastapi huggingface pytorch uvicorn

Claude Models

claude-opus-4.6

Quality Score

Grade: C+ (75.8/100)
Structure: 81
Code Quality: 75
Documentation: 79
Testing: 70
Practices: 63
Security: 92
Dependencies: 60

Strengths

  • Good test coverage (95% test-to-source ratio)
  • Code linting configured (likely ruff)
  • Consistent naming conventions (snake_case)
  • Good security practices — no major issues detected
  • Properly licensed project

Weaknesses

  • No CI/CD configuration — manual testing and deployment
  • 517 duplicate lines detected — consider DRY refactoring
  • 3 'god files' with >500 LOC need decomposition

Recommendations

  • Set up CI/CD (GitHub Actions recommended) to automate testing and deployment

Security & Health

Tech Debt: A (4.6h)
OWASP: A (100%)
Quality Gate: PASS
Risk: A (0)
Source: Repobility analyzer · https://repobility.com
License: MIT
Duplication: 9.5%

Languages

python: 69.9%
markdown: 26.2%
json: 3.6%
toml: 0.3%
yaml: 0.1%

Frameworks

FastAPI pytest

Concepts (2)

Repobility analysis · methodology at https://repobility.com/research/
All rows scored by the Repobility analyzer (https://repobility.com)

Category: auto_description · Name: Project Description · Description: Educational LLM inference runtime. · Confidence: 80%
Category: auto_category · Name: Web Backend · Description: web-backend · Confidence: 70%

Quality Timeline

1 quality score recorded.


Embed Badge

Add to your README:

![Quality](https://repos.aljefra.com/badge/92872.svg)