Webllamamanager

Quality: F (44) · Pipeline: completed
Other
monorepo / javascript · tiny
Files: 36
LOC: 19,083
Frameworks: 2
Languages: 7

Pipeline State

Status: completed
Run ID: #395109
Phase: done
Progress: 1%
Started:
Finished: 2026-04-13 01:31:02
LLM tokens: 0

Pipeline Metadata

Stage: Cataloged
Decision: proceed
Novelty: 54.40
Framework unique:
Isolation:
Last stage change: 2026-05-10 03:35:31
Deduplication group: #53605
Member of a group with 10 similar repo(s) · canonical: #119583
Top concepts (2): Project Description, Web Backend
Repobility (the analyzer behind this table) · https://repobility.com

AI Prompt

Create a comprehensive LLM management and performance monitoring platform for llama.cpp. I need a modern web UI built with Express and Vite that displays real-time telemetry like GPU temperature, CPU load, and VRAM usage. Key features must include persistent historical analytics for resource usage, detailed request tracking with error breakdown charts, and token throughput analysis. Additionally, implement full conversation logging, an OpenAI-compatible API proxy, and a multi-model router that supports dynamic model loading and LRU eviction. The system should also offer a hands-free fullscreen dashboard mode.
javascript express vite llm monitoring web-ui telemetry llama.cpp api-proxy analytics
Generated by gemma4:latest
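The prompt's multi-model router with LRU eviction can be sketched roughly as follows; the `loadModel`/`unloadModel` helpers and the capacity of 2 are illustrative assumptions, not details taken from the repository:

```javascript
// Minimal LRU model-router sketch (illustrative; not the repo's actual code).
// A JavaScript Map iterates keys in insertion order, so the first key is
// always the least recently used once we re-insert keys on every hit.
class ModelRouter {
  constructor(maxLoaded = 2) {
    this.maxLoaded = maxLoaded;
    this.loaded = new Map(); // model name -> handle
  }

  // Hypothetical loader; a real implementation would spawn or attach
  // a llama.cpp instance here.
  async loadModel(name) {
    return { name, loadedAt: Date.now() };
  }

  // Hypothetical teardown of a llama.cpp instance.
  async unloadModel(handle) {}

  async get(name) {
    if (this.loaded.has(name)) {
      // Refresh recency: delete + re-insert moves the key to the
      // "most recently used" end of the Map.
      const handle = this.loaded.get(name);
      this.loaded.delete(name);
      this.loaded.set(name, handle);
      return handle;
    }
    if (this.loaded.size >= this.maxLoaded) {
      // Evict the least recently used model (first key in insertion order).
      const [lruName, lruHandle] = this.loaded.entries().next().value;
      this.loaded.delete(lruName);
      await this.unloadModel(lruHandle);
    }
    const handle = await this.loadModel(name);
    this.loaded.set(name, handle);
    return handle;
  }
}
```

Because re-insertion on each hit keeps the Map ordered by recency, eviction is a constant-time pop of the first entry.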

Catalog Information

A comprehensive LLM management, debugging, and performance monitoring platform for llama.cpp. Provides a modern web UI with real-time GPU/CPU/memory telemetry, persistent historical analytics, request tracking with error breakdown, token throughput analysis, full conversation logging, and a hands-free fullscreen dashboard mode.

Novelty

3/10

Tags

javascript express vite llm monitoring web-ui telemetry llama.cpp api-proxy analytics

Claude Models

claude-opus-4-6

Quality Score

F
44.5/100
Structure
41
Code Quality
58
Documentation
58
Testing
0
Practices
51
Security
54
Dependencies
60

Strengths

  • Properly licensed project

Weaknesses

  • No tests found; high risk of regressions
  • No CI/CD configuration; manual testing and deployment
  • Potential hardcoded secrets in 3 files
  • 2514 duplicate lines detected; consider DRY refactoring
  • 2 'god files' with >500 LOC need decomposition

Recommendations

  • Add a test suite; start with critical path integration tests
  • Set up CI/CD (GitHub Actions recommended) to automate testing and deployment
  • Add a linter configuration to enforce code style consistency
  • Move hardcoded secrets to environment variables or a secrets manager
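The secrets recommendation above can be applied with a small config module that fails fast when a required variable is missing; the variable names (`LLAMA_API_KEY`, `LLAMA_UPSTREAM_URL`) are illustrative assumptions, not the repository's actual settings:

```javascript
// Illustrative: read secrets from environment variables instead of
// hardcoding them in source. Variable names here are examples only.
function requireEnv(name) {
  const value = process.env[name];
  if (value === undefined || value === '') {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

function loadConfig() {
  return {
    apiKey: requireEnv('LLAMA_API_KEY'),                                  // required secret
    upstreamUrl: process.env.LLAMA_UPSTREAM_URL ?? 'http://127.0.0.1:8080', // optional, defaulted
  };
}
```

Calling `loadConfig()` once at startup surfaces missing configuration immediately rather than at first use.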

Security & Health

Tech Debt: A (4.8h)
OWASP: A (100%)
Quality Gate: FAIL
Risk: A (13)
License: MIT
Duplication: 7.7%

Languages

javascript: 44.9%
json: 27.4%
css: 20.7%
markdown: 5.0%
shell: 1.9%
html: 0.1%
xml: 0.0%

Frameworks

Express, Vite

Concepts (2)

Category: auto_description · Name: Project Description · Confidence: 80%
Description: A comprehensive LLM management, debugging, and performance monitoring platform for llama.cpp. Provides a modern web UI with real-time GPU/CPU/memory telemetry, persistent historical analytics, request tracking with error breakdown, token throughput analysis, full conversation logging, and a hands-free fullscreen dashboard mode.

Category: auto_category · Name: Web Backend · Confidence: 70%
Description: web-backend

Quality Timeline

1 quality score recorded.

Embed Badge

Add to your README:

![Quality](https://repos.aljefra.com/badge/119462.svg)