Hallunot

C+ (79) · completed
Other · web_app / json · small
Files: 112
LOC: 16,913
Frameworks: 3
Languages: 6

Pipeline State

State: completed
Run ID: #357716
Phase: done
Progress: 1%
Started:
Finished: 2026-04-13 01:31:02
LLM tokens: 0

Pipeline Metadata

Stage: Skipped
Decision: skip_scaffold_dup
Novelty: 55.67
Isolation: Framework unique
Last stage change: 2026-04-16 18:15:42
Deduplication group: #48254
Member of a group with 1 similar repo — canonical #98591
Top concepts (2): Project Description, Web Frontend
Repobility · code-quality intelligence platform · https://repobility.com

AI Prompt

Create a web application called Hallunot that helps developers assess the risk of using specific library and framework versions with different Large Language Models (LLMs). The tool should calculate a final heuristic score by combining a Library Confidence Score (LCS) and an LLM Generic Score (LGS). I need the UI to display this score color-coded as green, yellow, or red. The tech stack should use Next.js 16, TypeScript, and Tailwind CSS for styling. The application should integrate data from the Libraries.io API and models.dev API to calculate these scores, and I'd like to use Vitest for testing the core scoring logic.
typescript next.js tailwindcss web-app llm api scoring react vitest json
Generated by gemma4:latest
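
The prompt above asks for a final heuristic score that combines a Library Confidence Score (LCS) and an LLM Generic Score (LGS) and maps it to a green/yellow/red indicator. A minimal sketch of that shape in TypeScript — the 0.6 weighting and the 70/40 color thresholds are illustrative assumptions, not values from the project:

```typescript
type RiskColor = "green" | "yellow" | "red";

// Combine the two 0-100 sub-scores with a weighted average.
// lcsWeight = 0.6 is an assumed default, not the project's value.
function finalScore(lcs: number, lgs: number, lcsWeight = 0.6): number {
  return lcsWeight * lcs + (1 - lcsWeight) * lgs;
}

// Map a 0-100 score to the traffic-light color the UI displays.
// The 70/40 cutoffs are assumptions for illustration.
function riskColor(score: number): RiskColor {
  if (score >= 70) return "green";
  if (score >= 40) return "yellow";
  return "red";
}
```

A scoring function this small is easy to cover with Vitest, which the prompt designates for testing the core scoring logic.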

Catalog Information

Hallunot is a tool that helps developers choose library and framework versions that are more likely to be known well by Large Language Models (LLMs), reducing hallucinations when coding with AI.

Description

Hallunot combines library-level and model-level signals into a single heuristic score for every library + version + LLM combination. It evaluates factors such as stability, simplicity, popularity, language affinity, and recency risk to provide a color-coded risk assessment. The tool uses Clean Architecture and Domain-Driven Design principles to ensure maintainability and testability.
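
The five library-level factors named above could fold into a single Library Confidence Score roughly as follows — a sketch only, with assumed weights, 0-1 factor normalization, and a recency penalty that are not taken from the project:

```typescript
interface LibraryFactors {
  stability: number;        // 0-1, e.g. derived from release cadence
  simplicity: number;       // 0-1, e.g. inverse of API surface size
  popularity: number;       // 0-1, e.g. normalized download counts
  languageAffinity: number; // 0-1, how well LLMs know the language
  recencyRisk: number;      // 0-1, higher = released after likely training cutoff
}

// Weighted sum of positive signals, scaled down by recency risk:
// a very new version is less likely to appear in an LLM's training data.
// All weights here are illustrative assumptions.
function libraryConfidenceScore(f: LibraryFactors): number {
  const positive =
    0.3 * f.stability +
    0.15 * f.simplicity +
    0.3 * f.popularity +
    0.25 * f.languageAffinity;
  const penalized = 100 * positive * (1 - 0.5 * f.recencyRisk);
  return Math.max(0, Math.min(100, penalized)); // clamp to 0-100
}
```

Keeping the factor struct separate from the combiner keeps each signal independently testable, in line with the Clean Architecture goal stated above.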

Novelty

7/10

Tags

library-version-chooser llm-hallucination-reduction ai-assisted-coding clean-architecture domain-driven-design

Technologies

framer-motion nextjs radix-ui react tailwind vitest zod

Claude Models

claude-opus-4.5 claude-opus-4.6

Quality Score

C+ (78.8/100)
Structure: 76
Code Quality: 97
Documentation: 44
Testing: 65
Practices: 87
Security: 100
Dependencies: 60

Strengths

  • CI/CD pipeline configured (github_actions)
  • Code linting configured (eslint)
  • Low average code complexity — well-structured code
  • Good security practices — no major issues detected
  • Properly licensed project

Weaknesses

  • 2 files with critical complexity need refactoring
  • 350 duplicate lines detected — consider DRY refactoring

Security & Health

Tech Debt: B (10.8h)
OWASP: A (100%)
Quality Gate: PASS
Risk: A (2)
License: MIT
Duplication: 6.6%

Languages

json: 60.4%
typescript: 37.7%
markdown: 1.4%
css: 0.2%
yaml: 0.2%
javascript: 0.1%

Frameworks

React Next.js Vitest

Concepts (2)

Category: auto_description
Name: Project Description
Description: Hallunot is a tool that helps developers pick library and framework versions that a given LLM is more likely to know well — reducing hallucinations when coding with AI without extra context (no RAG, no web search, no MCP).
Confidence: 80%

Category: auto_category
Name: Web Frontend
Description: web-frontend
Confidence: 70%

Quality Timeline

1 quality score recorded.

View File Metrics

Embed Badge

Add to your README:

![Quality](https://repos.aljefra.com/badge/81850.svg)