Rewrite on welcome to lf section #2872
annabellscha wants to merge 1 commit into langfuse-academy-research
Conversation
@claude review
> Across the loop, teams are usually balancing three things at once: output quality, latency, and cost. The point is not to optimize one number in isolation, but to make tradeoffs explicit and grounded in evidence from your own application.
>
> ## Where the docs fit
>
> That changes what teams need to understand and manage. It is no longer enough to ask whether the system ran. You also need a way to reason about output quality, iteration, and the tradeoffs that come with shipping AI products.
>
> ## Why LLM observability is different
>
> Langfuse Academy exists to help you build that mental model. It maps the AI engineering lifecycle step by step so you can understand how the pieces fit together and what it takes to move from prototype to production.
Review comment: Swap "mental model" with "understanding".
> Building with LLMs changes the job of engineering teams. Once outputs become probabilistic, a system can be technically healthy and still produce responses that are wrong, incomplete, off-brand, unsafe, or simply not useful.
>
> Rather than focusing on individual product features, Academy is meant to help you understand the bigger picture, and how teams can work with that change in a systematic way.
>
> That changes what teams need to understand and manage. It is no longer enough to ask whether the system ran. You also need a way to reason about output quality, iteration, and the tradeoffs that come with shipping AI products.
Review comment: Change this to "output quality, cost, latency, and the trade-offs that come with shipping AI products".
> The loop is a working model, not a strict waterfall. Teams move through it repeatedly, and different parts of the loop become more important as a product matures.
>
> ## The steps
Review comment: Generally, let's try to make all the steps actionable, so:
- Tracing
- Monitoring
- Building datasets
- Experimenting
- Evaluating

Something like that, so they are all the same kind of word.
Review comment: Make sure to propagate this change.
> ## Why we are publishing this
>
> But LLM applications introduce a different kind of challenge. Their behavior is probabilistic: the same input can produce different outputs, and a response can look plausible even when it is wrong, incomplete, off-brand, unsafe, or simply unhelpful. In other words, a request can succeed technically and still fail for the user.
>
> Langfuse is open source, and we want to open source the conceptual side of AI engineering too. The Academy is our attempt to make the mental models, vocabulary, and workflows behind LLM application development easier to access for everyone.
Review comment: Reword to "… is our way of making the mental models, vocabulary, and workflows behind LLM application development easier to access for everyone."
> - Software engineers moving into AI product development
> - Product managers who need to reason about quality, iteration, and tradeoffs
> - People learning the field and trying to understand the core concepts
> - Technical and business leaders who need a working model of how AI systems are built and improved
Review comment: Add an audience item for agents that help humans figure out LLM engineering.
> "title": "Academy",
> "pages": [
>   "index",
>   "ai-engineering-loop",
Review comment: Not sure if it's here, but it's weird that the AI engineering loop is nested inside the AI engineering loop. It should just be one tab that I click on to get the content, with nothing to unfold.
> ## The steps
>
> ### 1. Tracing
Review comment: While I like the content, it's visually difficult because there is so much text. Can we either bullet-point it or shorten it without losing a lot of content, because I think it gives the high-level overview so I
annabellscha left a comment:

Please implement all comments.
> Tracing captures the full path of a request so you can inspect prompts, retrieved context, tool calls, outputs, latency, and cost in one place. Read [Tracing](/academy/tracing) for a breakdown of what a useful trace looks like and why traces become the foundation for everything else.
>
> ### 2. Monitoring
>
> Monitoring turns raw traces into ongoing visibility by tracking trends and surfacing the cases that deserve attention. Read [Monitoring](/academy/monitoring) to understand how teams watch quality, cost, latency, and production failures over time.
>
> ### 3. Datasets
>
> Datasets turn real scenarios into repeatable test cases so you can check whether a change helps across more than a handful of examples. Read [Datasets](/academy/datasets) for how to structure dataset items and when it makes sense to split or grow a dataset.
>
> ### 4. Experiments
>
> Experiments let you change one variable at a time and compare outputs against a stable baseline instead of relying on intuition alone. Read [Experiments](/academy/experiments) to see how to isolate variables, compare variants, and learn what actually improved.
>
> ### 5. Evaluation
>
> Evaluation is how you decide whether results are good enough to ship, using manual review, code-based checks, or LLM judges depending on the task. Read [Evaluate](/academy/evaluate) for how teams score outputs and turn qualitative judgments into a repeatable process.
🔴 All primary CTA links in the new `ai-engineering-loop/overview.mdx` and the rewritten `index.mdx` will 404. Each academy subsection contains only `overview.mdx` (no `index.mdx`), so under fumadocs these resolve at `/academy/<section>/overview`, not `/academy/<section>`, and unlike the docs section, no equivalent redirects exist in `lib/redirects.js`. Fix by either appending `/overview` to each link, or adding `/academy/*` entries to `lib/redirects.js:60-71` analogous to the existing docs redirects.
Extended reasoning
What's broken
Every primary call-to-action link added by this PR points to a bare `/academy/<section>` URL, but those URLs do not resolve. The affected links are:

`content/academy/index.mdx`:
- Line 30: `[The AI Engineering Loop](/academy/ai-engineering-loop)`
- Lines 32–36: `/academy/tracing`, `/academy/monitoring`, `/academy/datasets`, `/academy/experiments`, `/academy/evaluate`
`content/academy/ai-engineering-loop/overview.mdx`:
- Line 18: `[Tracing](/academy/tracing)`
- Line 22: `[Monitoring](/academy/monitoring)`
- Line 26: `[Datasets](/academy/datasets)`
- Line 30: `[Experiments](/academy/experiments)`
- Line 34: `[Evaluate](/academy/evaluate)`
Why they 404
I verified directly that each academy subfolder (`ai-engineering-loop`, `tracing`, `monitoring`, `datasets`, `experiments`, `evaluate`) contains only `meta.json` + `overview.mdx`, with no `index.mdx`. Under fumadocs-core's default source loader, a file at `content/academy/tracing/overview.mdx` is exposed at the URL `/academy/tracing/overview`. There is no automatic "use overview as folder index" fallback; only an `index.mdx` in a folder takes the bare folder URL. So `academySource.getPage(["tracing"])` returns `undefined`, and `app/academy/[[...slug]]/page.tsx:14` falls through to `notFound()`, rendering `AcademyNotFound`.
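The lookup failure described above can be sketched with a tiny standalone simulation. This is not fumadocs code; `fileToSlug` and `getPage` are hypothetical stand-ins that model the mapping behavior the review describes (each content file maps to its path segments, and only `index.mdx` collapses to the bare folder URL):

```javascript
// Hypothetical model of the fumadocs default slug mapping described above.
const files = [
  "index.mdx",
  "tracing/overview.mdx",
  "monitoring/overview.mdx",
];

function fileToSlug(file) {
  const segments = file.replace(/\.mdx$/, "").split("/");
  // index.mdx takes the bare folder URL; overview.mdx keeps its own segment.
  return segments[segments.length - 1] === "index"
    ? segments.slice(0, -1)
    : segments;
}

const pages = new Map(files.map((f) => [fileToSlug(f).join("/"), f]));

function getPage(slug) {
  return pages.get(slug.join("/")); // undefined when no file matches
}

console.log(getPage(["tracing"])); // no match: the folder has no index.mdx
console.log(getPage(["tracing", "overview"])); // matches tracing/overview.mdx
```

Under this model, `getPage(["tracing"])` comes back `undefined`, which is exactly the condition that sends `page.tsx` into `notFound()`.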
The repo's own precedent confirms this
`lib/redirects.js:60-71` explicitly maps these docs paths to their `/overview` children:

```js
...[
  "/docs/integrations",
  "/docs/scores",
  "/docs/datasets",
  "/docs/security",
  "/docs/observability",
  "/docs/evaluation",
  "/docs/metrics",
  "/docs/api-and-data-platform",
  "/docs/prompt-management",
].map((path) => [path, path + "/overview"]),
```

These redirects exist precisely because those docs sections have the same structure (only `overview.mdx`, no `index.mdx`) and the bare folder URLs do not resolve on their own. Grepping `lib/redirects.js` for "academy" returns only an unrelated `/customers/khan-academy` entry; no `/academy/*` redirect was added by this PR.
Step-by-step proof
1. User lands on `/academy` (works: `content/academy/index.mdx` exists at the root, mapped to `/academy`).
2. User clicks the first CTA, `[Tracing](/academy/tracing)`, on line 32 of the new `index.mdx` or line 18 of `ai-engineering-loop/overview.mdx`.
3. Next.js routes the request to `app/academy/[[...slug]]/page.tsx` with `slug = ["tracing"]`.
4. `academySource.getPage(["tracing"])` looks for a page whose URL path segments are `["tracing"]`. The only file under `content/academy/tracing/` is `overview.mdx`, which fumadocs maps to slug `["tracing", "overview"]`: no match.
5. The result is `undefined`, so `if (!page) notFound();` triggers.
6. The user sees the `AcademyNotFound` 404 page.

The same chain applies to all six bare-folder URLs introduced by this PR. The PR landing page itself loads, but the moment a reader clicks any "next step" link, the journey breaks.
Impact
Every primary CTA in the two pages this PR adds or rewrites is broken. This is a release blocker for the Academy section: the rewritten welcome page exists specifically to send readers to the loop sub-sections, and the new `ai-engineering-loop/overview.mdx` is structured around five "Read [section]" links that all 404. Greptile flagged a similar broken link to `/academy/error-analysis` in the pre-existing monitoring page but missed this systemic issue across the new content.
Fix
Two equivalent options:
- Append `/overview` at each link site: change `/academy/tracing` to `/academy/tracing/overview`, etc., in both files (matches what `lib/nav-links.tsx` already does for the docs section).
- Add redirects to `lib/redirects.js` analogous to lines 60–71:
```js
...[
  "/academy/ai-engineering-loop",
  "/academy/tracing",
  "/academy/monitoring",
  "/academy/datasets",
  "/academy/experiments",
  "/academy/evaluate",
].map((path) => [path, path + "/overview"]),
```

Option 2 is more robust because it also catches any future external links that target the bare folder URL.
Disclaimer: Experimental PR review
Greptile Summary
This PR introduces a new Langfuse Academy section, a conceptual learning hub covering the AI engineering lifecycle (tracing, monitoring, datasets, experiments, evaluation), along with all the necessary Next.js routing, source loader, and nav registration to wire it into the site.
- Five content pages (`datasets`, `evaluate`, `tracing`, `monitoring`, `experiments`) ship with TODO-filled `<Callout>` blocks that will render as visible content for site visitors in production.
- `monitoring/overview.mdx` links to `/academy/error-analysis`, a page that does not exist and has no corresponding MDX file or `meta.json` entry in this PR, resulting in a 404.

Confidence Score: 3/5
Not safe to merge as-is: internal TODO notes and a broken link will reach production readers.
Two P1 findings: (1) internal author TODO callouts across five content pages will render as visible production content, and (2) a hard link to /academy/error-analysis will 404 since that page does not exist. The routing and infrastructure code is clean and consistent with the rest of the codebase.
content/academy/monitoring/overview.mdx, content/academy/datasets/overview.mdx, content/academy/evaluate/overview.mdx, content/academy/tracing/overview.mdx, content/academy/experiments/overview.mdx
Important Files Changed
- `notFound()` is correct since it returns `never`.
- `academySource` loader follows the same pattern as existing sources; no issues.
- `docs` layout and `hasOwnRoute: true`; consistent with other sections.
- `baseFrontmatterSchema`; placed correctly before marketing export.

Flowchart
```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
  A[Next.js /academy/...slug...] --> B[app/academy/layout.tsx]
  B --> C[SharedDocsLayout]
  C --> D[academySource.getPageTree]
  A --> E[app/academy/page.tsx]
  E --> F{academySource.getPage}
  F -- found --> G[DocsChromePage]
  F -- not found --> H[AcademyNotFound]
  subgraph Content Sources
    I[source.config.ts\nacademy = defineDocs] --> J[lib/source.ts\nacademySource loader]
    J --> K[lib/section-registry.ts\nacademy section]
  end
  subgraph Academy Pages
    L[index.mdx] --> M[ai-engineering-loop/overview.mdx]
    M --> N[tracing/overview.mdx]
    M --> O[monitoring/overview.mdx]
    M --> P[datasets/overview.mdx]
    M --> Q[experiments/overview.mdx]
    M --> R[evaluate/overview.mdx]
    O -. broken link .-> S[error-analysis missing]
  end
  D --> L
```

Comments Outside Diff (2)
`content/academy/monitoring/overview.mdx`, lines 598–602 (link)

This `<Callout>` block contains raw internal working notes that will render as visible content for site visitors. The `IMPORTANT:` annotation and the "Add more details…" instruction are author reminders, not reader-facing content. The same pattern appears across several other Academy pages (e.g. `datasets/overview.mdx` has three TODO callouts, `evaluate/overview.mdx` has three, `tracing/overview.mdx` has three, `experiments/overview.mdx` has two). All of these will be publicly visible as rendered callout components until they are resolved or removed.
`content/academy/monitoring/overview.mdx`, line 608 (link)

`/academy/error-analysis` is linked here (and referenced inline at line 576 as "covered in depth in error analysis"), but no `content/academy/error-analysis/` directory or MDX file exists in this PR and the topic does not appear in `content/academy/meta.json`. Clicking this link will produce a 404 via the `AcademyNotFound` component.
Reviews (1): Last reviewed commit: "rewrite intro"