Jury

C+ (70) · completed
AI/ML · web_app / typescript · small

Files: 300
LOC: 55,867
Frameworks: 2
Languages: 9

Pipeline State

Status: completed
Run ID: #372284
Phase: done
Progress: 1%
Started:
Finished: 2026-04-13 01:31:02
LLM tokens: 0

Pipeline Metadata

Stage: Cataloged
Decision: proceed
Novelty: 73.00
Framework: unique
Isolation:
Last stage change: 2026-05-10 03:35:31
Deduplication group #54055: member of a group with 1 similar repo(s); this repo is canonical
Top concepts (2): Project Description · Full Stack

AI Prompt

Create an open-source "Jury" application using Next.js and TypeScript. This tool should function as a Perplexity Model Council, allowing users to query multiple AI models in parallel. The core functionality must include peer-reviewing the responses, calculating a Borda count trust score, and synthesizing the results into a single, definitive verdict. I also need features for detecting blind spots and scoring the complexity of the initial question. Please structure the project to handle API keys for multiple models like OpenRouter.
typescript next.js web-app ai llm parallel-processing api open-source
Generated by gemma4:latest
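The prompt above asks for a Borda count trust score over peer-reviewed model responses. As a minimal sketch of how such a score could work (the function name and input shape are illustrative, not taken from the repo): each reviewer ranks the candidate answers, and a candidate at 0-based rank i among n candidates earns n - 1 - i points; totals across reviewers give the trust score.

```typescript
// Borda count sketch: `rankings` holds one ordered list of candidate
// answer IDs per reviewer, best first. Each candidate earns
// (n - 1 - rank) points per reviewer; higher totals mean more trust.
function bordaScores(rankings: string[][]): Map<string, number> {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    const n = ranking.length;
    ranking.forEach((candidate, rank) => {
      scores.set(candidate, (scores.get(candidate) ?? 0) + (n - 1 - rank));
    });
  }
  return scores;
}
```

With three reviewers ranking answers a, b, c, the candidate ranked first most often accumulates the highest total, which is what makes the aggregate robust to a single outlier review.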

Catalog Information

Jury is an open-source Perplexity Model Council that queries multiple AI models in parallel, has them peer-review each other, and synthesizes their responses into a single verdict.

Description

Jury is an open-source platform that leverages the collective intelligence of multiple AI models to provide accurate and trustworthy answers. It works by querying multiple models in parallel, having them peer-review each other's responses, and synthesizing the results into a single verdict. This approach helps to identify blind spots and provides a more comprehensive understanding of complex questions.
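The parallel fan-out described above can be sketched with `Promise.allSettled`, which collects every outcome without letting one failed model sink the whole query. This is a hypothetical sketch, not the repo's actual code; `ask` and the model names are placeholders.

```typescript
// Fan one question out to several models in parallel and keep only the
// answers that succeeded; a rejected call simply drops out of the verdict.
async function queryCouncil(
  question: string,
  ask: (model: string, q: string) => Promise<string>,
  models: string[],
): Promise<{ model: string; answer: string }[]> {
  const results = await Promise.allSettled(
    models.map(async (model) => ({ model, answer: await ask(model, question) })),
  );
  return results.flatMap((r) => (r.status === "fulfilled" ? [r.value] : []));
}
```

Using `allSettled` instead of `Promise.all` matters here: the council should still synthesize a verdict from the models that responded even when one provider is down.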

Description (Arabic)

Jury is an open-source platform that uses the collective intelligence of several AI models to deliver accurate, trustworthy answers. It queries several models at the same time, weighs their answers, and then merges the results into a single answer. This approach helps identify errors that all the models miss and provides a more thorough understanding of complex questions.

Novelty

9/10

Tags

question-answering collective-intelligence peer-review trust-scoring blind-spot-detection

Technologies

nextjs react supabase tailwind vitest

Claude Models

claude-opus-4.6

Quality Score

C+
70.5/100
Structure: 64
Code Quality: 86
Documentation: 65
Testing: 75
Practices: 57
Security: 65
Dependencies: 60

Strengths

  • CI/CD pipeline configured (github_actions)
  • Good test coverage (32% test-to-source ratio)
  • Code linting configured (eslint)

Weaknesses

  • No LICENSE file — legal ambiguity for contributors
  • 5 files with critical complexity need refactoring
  • Potential hardcoded secrets in 1 file
  • 1931 duplicate lines detected — consider DRY refactoring
  • 3 'god files' with >500 LOC need decomposition

Recommendations

  • Add a LICENSE file (MIT recommended for open source)
  • Move hardcoded secrets to environment variables or a secrets manager
  • Address 31 TODO/FIXME items — consider tracking them as issues

Security & Health

Tech Debt: 26.8h (grade B)
OWASP: A (100%)
Quality Gate: PASS
Risk: A (1)
License: Unknown
Duplication: 2.5%
All rows above produced by Repobility · https://repobility.com

Languages

typescript: 60.6%
json: 31.3%
markdown: 3.6%
text: 1.9%
python: 1.7%
javascript: 0.5%
css: 0.2%
shell: 0.1%
yaml: 0.1%

Frameworks

Next.js Vitest

Concepts (2)

Category · Name · Description · Confidence
auto_description · Project Description · The open-source Perplexity Model Council. · 80%
auto_category · Full Stack · full-stack · 70%

Quality Timeline

1 quality score recorded.


Embed Badge

Add to your README:

![Quality](https://repos.aljefra.com/badge/96501.svg)