
Add JS/TS cookbooks for Google Gemini tracing (basic + multimodal)#2878

Open
JK-Amanda-Goo wants to merge 1 commit into langfuse:main from JK-Amanda-Goo:add-js-google-gemini-cookbooks

Conversation


@JK-Amanda-Goo JK-Amanda-Goo commented Apr 27, 2026

Summary

Adds two JS/TS cookbooks for native Google Gemini tracing with Langfuse (closes #2822). Both examples come from a working Next.js application.

js_integration_google_gemini.ipynb — basic text generation tracing:

  • Wraps generateContent calls with langfuse.trace() + trace.generation()
  • Passes usageMetadata fields (promptTokenCount, candidatesTokenCount, totalTokenCount) for accurate cost tracking
  • Documents the correct model name string (e.g. "gemini-2.0-flash") needed to match Langfuse's model registry
  • Shows the streaming pattern (generateContentStream) with token capture after stream completion
  • Includes the flushAsync() pattern for serverless/Next.js with a complete Route Handler example
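
The usageMetadata pass-through described above can be sketched as a small pure helper. The name `toLangfuseUsage` and the exact `{ input, output, total }` usage shape are illustrative assumptions about the Langfuse JS SDK's generation usage fields, not code taken from the cookbook:

```typescript
// Shape of the usageMetadata object returned with Gemini responses
interface GeminiUsageMetadata {
  promptTokenCount: number;
  candidatesTokenCount: number;
  totalTokenCount: number;
}

// Hypothetical helper: map Gemini's token fields onto the
// { input, output, total } usage shape passed to generation.end()
function toLangfuseUsage(meta: GeminiUsageMetadata) {
  return {
    input: meta.promptTokenCount,
    output: meta.candidatesTokenCount,
    total: meta.totalTokenCount,
  };
}
```

A helper like this keeps the field renaming in one place, so the cost tracking stays correct if it is reused across the basic and multimodal notebooks.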

js_integration_google_gemini_multimodal.ipynb — image + text tracing:

  • Passes inline base64 images to Gemini using inlineData
  • Logs images in the OpenAI message format (image_url content block) so they render in the Langfuse UI
  • Explains that promptTokenCount covers combined image + text tokens
  • Includes a complete Next.js Route Handler example with flushAsync()
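
The inline base64 step above can be sketched as a chunk-safe conversion helper. `bufferToBase64` is an illustrative name, not from the cookbook; the loop avoids spreading the whole byte array onto the call stack, which overflows for larger images:

```typescript
// Hypothetical helper: encode raw image bytes as base64 for Gemini's
// inlineData field. Builds the binary string byte-by-byte instead of
// String.fromCharCode(...bytes), which throws RangeError on large images.
function bufferToBase64(buffer: ArrayBufferLike): string {
  const bytes = new Uint8Array(buffer);
  let binary = "";
  for (const b of bytes) {
    binary += String.fromCharCode(b);
  }
  return btoa(binary);
}
```

Usage would look roughly like `{ inlineData: { mimeType: "image/png", data: bufferToBase64(imageBuffer) } }`, with `imageBuffer` coming from e.g. `await file.arrayBuffer()`.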

Notes

There is currently no @arizeai/openinference-instrumentation-google-generative-ai package for JS (unlike Python), so these cookbooks use the Langfuse JS SDK for manual tracing. Happy to adjust the approach if there is a preferred pattern for manual instrumentation in the JS cookbook collection.

Test plan

  • Both notebooks run end-to-end with a valid GEMINI_API_KEY and Langfuse keys
  • Traces appear in Langfuse UI with correct token counts and model cost
  • Images render inline in the multimodal trace
  • flushAsync() examples confirmed working in a Next.js serverless environment

🤖 Generated with Claude Code

Disclaimer: Experimental PR review

Greptile Summary

This PR adds two new JS/TS cookbooks demonstrating manual Langfuse tracing for Google Gemini — one for basic text generation (including streaming) and one for multimodal image+text calls. The overall approach (manual trace/generation wrapping, usageMetadata token capture, flushAsync() for serverless) is sound and fills a genuine gap since no auto-instrumentation exists for the Gemini JS SDK.

  • The streaming cell in js_integration_google_gemini.ipynb uses process.stdout.write, which is a Node.js API unavailable in the Deno kernel — this throws ReferenceError at runtime.
  • The same notebook re-declares const genAI, const MODEL, and const prompt in the streaming cell after they are already declared in Step 4, causing SyntaxError: Identifier 'MODEL' has already been declared when cells are run sequentially in Deno's shared REPL scope.

Confidence Score: 3/5

Not safe to merge as-is — two P1 runtime errors in the streaming cell will cause the notebook to fail when executed in Deno.

Two P1 bugs in the basic cookbook (wrong stdout API for Deno, const re-declarations) mean the streaming example cannot run as written. The multimodal cookbook is cleaner but has a P2 reliability concern for large images. Both issues are straightforward to fix.

cookbook/js_integration_google_gemini.ipynb — streaming cell needs process.stdout.write replaced and duplicate const declarations removed.

Important Files Changed

| Filename | Overview |
| --- | --- |
| cookbook/js_integration_google_gemini.ipynb | New cookbook for basic Gemini JS/TS tracing; two runtime bugs: process.stdout.write (Node.js API in a Deno notebook) and const re-declarations between cells that will throw SyntaxErrors when run sequentially. |
| cookbook/js_integration_google_gemini_multimodal.ipynb | New cookbook for multimodal Gemini tracing; spread-based base64 conversion can stack-overflow for large images, but the overall tracing logic and image rendering approach are correct. |

Sequence Diagram

sequenceDiagram
    participant App as App Code (JS/TS)
    participant LF as Langfuse SDK
    participant Gemini as Google Gemini API
    participant LFBE as Langfuse Backend

    App->>LF: langfuse.trace({ name, input })
    App->>LF: trace.generation({ name, model, input })
    App->>Gemini: model.generateContent(prompt)
    Gemini-->>App: result (text + usageMetadata)
    App->>LF: generation.end({ output, usage })
    App->>LF: trace.update({ output })
    App->>LF: await langfuse.flushAsync()
    LF->>LFBE: Batch POST events (trace + generation)
    LFBE-->>LF: 200 OK
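The call order in the sequence diagram can be exercised against a minimal mock of the SDK surface. `MockLangfuse`, `MockTrace`, and `MockGeneration` are stand-ins for illustration only (the real Langfuse SDK and the Gemini request are stubbed out):

```typescript
// Records SDK calls so the trace -> generation -> end -> update -> flush
// sequence from the diagram is visible and checkable.
const calls: string[] = [];

class MockGeneration {
  end(_args: { output: string; usage: object }) { calls.push("generation.end"); }
}
class MockTrace {
  generation(_args: { name: string; model: string; input: string }) {
    calls.push("trace.generation");
    return new MockGeneration();
  }
  update(_args: { output: string }) { calls.push("trace.update"); }
}
class MockLangfuse {
  trace(_args: { name: string; input: string }) {
    calls.push("trace");
    return new MockTrace();
  }
  async flushAsync() { calls.push("flushAsync"); }
}

// The lifecycle from the diagram, with the Gemini call replaced by a stub
async function tracedGeneration(langfuse: MockLangfuse): Promise<string[]> {
  const prompt = "List three benefits of LLM tracing.";
  const trace = langfuse.trace({ name: "gemini-demo", input: prompt });
  const generation = trace.generation({ name: "generate", model: "gemini-2.0-flash", input: prompt });
  const output = "stubbed Gemini response"; // model.generateContent(prompt) would go here
  generation.end({ output, usage: { input: 3, output: 5, total: 8 } });
  trace.update({ output });
  await langfuse.flushAsync(); // required before a serverless function returns
  return calls;
}
```

The key property the diagram encodes is that `flushAsync()` runs last, after the generation and trace are finalized, so the batched POST to the Langfuse backend carries complete events.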
Prompt To Fix All With AI
This is a comment left during a code review.
Path: cookbook/js_integration_google_gemini.ipynb
Line: 192

Comment:
**`process` is not defined in Deno**

`process.stdout.write` is a Node.js API. The notebook uses the Deno kernel (all other cells use `Deno.env`), where `process` is not a global — this line will throw `ReferenceError: process is not defined` at runtime. Replace with `Deno.stdout.write(new TextEncoder().encode(part))`.

```suggestion
  Deno.stdout.write(new TextEncoder().encode(part));
```

How can I resolve this? If you propose a fix, please make it concise.

---

This is a comment left during a code review.
Path: cookbook/js_integration_google_gemini.ipynb
Line: 174-177

Comment:
**`const` re-declarations clash with Step 4 cell in Deno REPL**

`const genAI`, `const MODEL`, and `const prompt` are already declared in the Step 4 cell (lines 105–108). Deno's Jupyter kernel shares a single REPL scope across cells, so running this streaming cell after Step 4 will throw `SyntaxError: Identifier 'MODEL' has already been declared`. Consider renaming these to unique identifiers (e.g. `streamGenAI`, `streamModel`, `streamPrompt`) or removing the duplicate declarations.

How can I resolve this? If you propose a fix, please make it concise.

---

This is a comment left during a code review.
Path: cookbook/js_integration_google_gemini_multimodal.ipynb
Line: 115

Comment:
**Spread into `String.fromCharCode` fails for large images**

Spreading a `Uint8Array` as function arguments copies all bytes onto the call stack. For images larger than ~100–200 KB this will throw `RangeError: Maximum call stack size exceeded`. Users adapting this for real photos will hit this silently. Use `.map()` instead.

```suggestion
const imageBase64 = btoa([...new Uint8Array(imageBuffer)].map(b => String.fromCharCode(b)).join(""));
```

How can I resolve this? If you propose a fix, please make it concise.

Reviews (1): Last reviewed commit: "Add JS/TS cookbooks for Google Gemini tr..."

Greptile also left 2 inline comments on this PR.

Adds two new JS/TS cookbooks for native Google Gemini integration with Langfuse:

- js_integration_google_gemini.ipynb: basic text generation tracing using
  the Langfuse JS SDK, including the correct model name for cost tracking
  and the flushAsync() pattern required for serverless/Next.js environments.

- js_integration_google_gemini_multimodal.ipynb: multimodal (image + text)
  tracing showing how to capture image token counts from usageMetadata,
  render images inline in the Langfuse UI, and apply flushAsync() in a
  Next.js Route Handler.

Closes langfuse#2822

Co-Authored-By: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>

@claude claude Bot left a comment


Claude Code Review

This pull request is from a fork — automated review is disabled. A repository maintainer can comment @claude review to run a one-time review.


vercel Bot commented Apr 27, 2026

@JK-Amanda-Goo is attempting to deploy a commit to the langfuse Team on Vercel.

A member of the Team first needs to authorize it.

@review-notebook-app

Check out this pull request on ReviewNB

See visual diffs & provide feedback on Jupyter Notebooks.


Powered by ReviewNB

@dosubot dosubot Bot added the size:L This PR changes 100-499 lines, ignoring generated files. label Apr 27, 2026
@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
You have signed the CLA already but the status is still pending? Let us recheck it.

@dosubot dosubot Bot added the documentation Improvements or additions to documentation label Apr 27, 2026
```
let fullText = "";
for await (const chunk of streamResult.stream) {
  const part = chunk.text();
  process.stdout.write(part);
```
Contributor

P1 process is not defined in Deno

process.stdout.write is a Node.js API. The notebook uses the Deno kernel (all other cells use Deno.env), where process is not a global — this line will throw ReferenceError: process is not defined at runtime. Replace with Deno.stdout.write(new TextEncoder().encode(part)).

Suggested change
```
-  process.stdout.write(part);
+  Deno.stdout.write(new TextEncoder().encode(part));
```

Comment on lines +174 to +177
```
const genAI = new GoogleGenerativeAI(Deno.env.get("GEMINI_API_KEY") ?? "");

const MODEL = "gemini-2.0-flash";
const prompt = "List three benefits of LLM tracing.";
```
Contributor

P1 const re-declarations clash with Step 4 cell in Deno REPL

const genAI, const MODEL, and const prompt are already declared in the Step 4 cell (lines 105–108). Deno's Jupyter kernel shares a single REPL scope across cells, so running this streaming cell after Step 4 will throw SyntaxError: Identifier 'MODEL' has already been declared. Consider renaming these to unique identifiers (e.g. streamGenAI, streamModel, streamPrompt) or removing the duplicate declarations.



Labels

documentation Improvements or additions to documentation size:L This PR changes 100-499 lines, ignoring generated files.


Development

Successfully merging this pull request may close these issues.

Add JS/TypeScript cookbooks for Google Gemini tracing (basic + multimodal)

2 participants