AutoReview

B 82 completed
Other
cli / python · small
Files: 273
LOC: 46,944
Frameworks: 1
Languages: 6

Pipeline State

completed
Run ID
#359300
Phase
done
Progress
1%
Started
Finished
2026-04-13 01:31:02
LLM tokens
0

Pipeline Metadata

Stage
Skipped
Decision
skip_scaffold_dup
Novelty
38.27
Framework unique
Isolation
Last stage change
2026-04-16 18:15:42
Deduplication group #47545
Member of a group with 1 similar repo; canonical: #99059
Top concepts (2)
Project Description · Testing
Generated by Repobility's multi-pass static-analysis pipeline (https://repobility.com)

AI Prompt

Build me a command-line tool in Python that acts as a fully autonomous pipeline for generating scientific review papers. It needs to take a research topic as input and handle the entire process, from literature search to final formatted output. Key features to include are multi-source searching (like PubMed and Semantic Scholar), structured evidence extraction, thematic clustering, and contradiction detection. The system must support iterative self-critique at outline, section, and holistic levels, and it should save its state after every stage for crash recovery. The CLI should allow specifying the domain and the desired review depth.
python cli scientific-writing automation literature-review ai-pipeline research
Generated by gemma4:latest
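The prompt above asks for an argparse-style CLI that checkpoints its state after every pipeline stage for crash recovery. A minimal sketch of what such a skeleton could look like, assuming JSON checkpointing; all names (stage list, flags, file layout) are illustrative assumptions, not AutoReview's actual API:

```python
# Hypothetical skeleton of the CLI described in the prompt: an argparse
# front-end plus per-stage JSON checkpointing for crash recovery.
import argparse
import json
from pathlib import Path

# Assumed stage names; the real pipeline's stages may differ.
STAGES = ["search", "extract", "cluster", "outline", "draft", "critique"]

def run_stage(name: str, state: dict) -> dict:
    # Placeholder for real stage logic (PubMed/Semantic Scholar search,
    # evidence extraction, thematic clustering, self-critique, ...).
    state.setdefault("completed", []).append(name)
    return state

def main(argv=None):
    parser = argparse.ArgumentParser(prog="autoreview")
    parser.add_argument("topic", help="research topic to review")
    parser.add_argument("--domain", default="general", help="scientific domain")
    parser.add_argument("--depth", type=int, default=2, help="desired review depth")
    args = parser.parse_args(argv)

    checkpoint = Path("state.json")
    # Resume from the last checkpoint if a previous run crashed mid-pipeline.
    if checkpoint.exists():
        state = json.loads(checkpoint.read_text())
    else:
        state = {"topic": args.topic, "domain": args.domain,
                 "depth": args.depth, "completed": []}

    for stage in STAGES:
        if stage in state["completed"]:
            continue  # stage finished in an earlier run; skip on resume
        state = run_stage(stage, state)
        checkpoint.write_text(json.dumps(state))  # save after every stage
    return state

if __name__ == "__main__":
    main()
```

Saving after each stage keeps the recovery granularity coarse but cheap: on restart the loop skips completed stages and resumes at the first unfinished one.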

Catalog Information

Fully autonomous pipeline for generating publication-ready scientific review papers. Given a topic, AutoReview searches the literature, extracts structured evidence, synthesizes findings, writes a complete review, and self-critiques iteratively until quality thresholds are met, with no human intervention.


Novelty

3/10

Tags

python cli scientific-writing automation literature-review ai-pipeline research

Technologies

anthropic pydantic

Claude Models

claude-opus-4-6

Quality Score

B
82.1/100
Structure
83
Code Quality
74
Documentation
84
Testing
85
Practices
78
Security
100
Dependencies
60

Strengths

  • CI/CD pipeline configured (github_actions)
  • Good test coverage (89% test-to-source ratio)
  • Code linting configured (ruff, possibly)
  • Consistent naming conventions (snake_case)
  • Good security practices; no major issues detected
  • Properly licensed project

Weaknesses

  • 2191 duplicate lines detected; consider DRY refactoring
  • 5 'god files' with >500 LOC need decomposition

Security & Health

Tech Debt: 9.3h (A)
OWASP: A (100%)
Quality Gate: PASS
Risk: A (0)
License: MIT
Duplication: 3.8%

Languages

python
70.5%
markdown
28.1%
yaml
1.2%
toml
0.2%
json
0.0%
shell
0.0%

Frameworks

pytest

Concepts (2)

| Category | Name | Description | Confidence |
| --- | --- | --- | --- |
| auto_description | Project Description | Fully autonomous pipeline for generating publication-ready scientific review papers. Given a topic, AutoReview searches the literature, extracts structured evidence, synthesizes findings, writes a complete review, and self-critiques iteratively until quality thresholds are met, with no human intervention. | 80% |
| auto_category | Testing | testing | 70% |

Quality Timeline

1 quality score recorded.


Embed Badge

Add to your README:

![Quality](https://repos.aljefra.com/badge/83444.svg)