fix(wren-ai-service): langfuse cost display issue #1699


Merged
merged 4 commits into main from fix/ai-service/langfuse-cost on Jun 2, 2025

Conversation

cyyeh
Member

@cyyeh cyyeh commented May 29, 2025

Summary by CodeRabbit

  • New Features
    • Added display and tracking of the model name used for AI-generated responses across various features, including chart generation, SQL generation, intent classification, and more.
    • Enhanced cost tracking for AI operations, providing improved transparency for users regarding resource usage.
  • Bug Fixes
    • Minor formatting improvements for better consistency in certain functions.

@cyyeh cyyeh added module/ai-service ai-service related ci/ai-service ai-service related labels May 29, 2025
Contributor

coderabbitai bot commented May 29, 2025

Walkthrough

A new async decorator, trace_cost, was introduced and applied to numerous generation and retrieval pipeline functions. These functions now accept a new generator_name parameter and return both the generation result and the generator's name. Pipeline classes were updated to store the generator name as a component.
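The decorator pattern described above can be sketched as a minimal, runnable approximation. Here `update_current_observation` stands in for `langfuse_context.update_current_observation`, and `generate_sql` is a hypothetical pipeline function, not the service's actual code:

```python
import asyncio
import functools

# Records what would be sent to Langfuse; stand-in for
# langfuse_context.update_current_observation in the real service.
observations = []

def update_current_observation(**kwargs):
    observations.append(kwargs)

def trace_cost(func):
    # The wrapped coroutine returns (result, generator_name); the wrapper
    # reports the model name and token usage, then unwraps the tuple so
    # callers still receive only the result.
    @functools.wraps(func)
    async def wrapper(*args, **kwargs):
        result, generator_name = await func(*args, **kwargs)
        if isinstance(result, dict):
            if meta := result.get("meta", []):
                update_current_observation(
                    model=generator_name,
                    usage_details=meta[0].get("usage", {}),
                )
        return result
    return wrapper

@trace_cost
async def generate_sql(prompt: str, generator_name: str):
    # Hypothetical generator call; a real pipeline would await an LLM here.
    result = {
        "replies": ["SELECT 1"],
        "meta": [{"usage": {"prompt_tokens": 10, "completion_tokens": 3}}],
    }
    return result, generator_name

result = asyncio.run(generate_sql("show one row", "gpt-4o-mini"))
print(result["replies"][0])      # callers see only the unwrapped result
print(observations[0]["model"])  # the model name was traced
```

Downstream callers are unaffected because the decorator strips the generator name before returning, which is why only the function signatures and constructors needed to change.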

Changes

  • .../chart_adjustment.py, .../chart_generation.py, .../intent_classification.py, .../question_recommendation.py, .../relationship_recommendation.py, .../semantics_description.py, .../sql_correction.py, .../sql_generation.py, .../sql_generation_reasoning.py, .../sql_question.py, .../sql_regeneration.py, .../sql_tables_extraction.py, .../user_guide_assistance.py: added the trace_cost decorator and a new generator_name parameter, updated return values to include the generator name, and updated class constructors to include "generator_name" in components.
  • .../data_assistance.py, .../misleading_assistance.py, .../followup_sql_generation.py, .../followup_sql_generation_reasoning.py, .../sql_answer.py: added the trace_cost decorator and a new generator_name parameter, and updated function signatures and class components.
  • .../retrieval/db_schema_retrieval.py: added the trace_cost decorator and a new generator_name parameter, updated the return value, and updated class components.
  • .../utils.py: added the new async decorator trace_cost and a minor regex formatting change.

Sequence Diagram(s)

sequenceDiagram
    participant Caller
    participant PipelineFunction
    participant Generator
    participant LangfuseContext

    Caller->>PipelineFunction: call with prompt, generator, generator_name
    PipelineFunction->>Generator: generate(prompt, ...)
    Generator-->>PipelineFunction: result
    PipelineFunction->>LangfuseContext: trace_cost(result, generator_name)
    PipelineFunction-->>Caller: (result, generator_name)

Suggested reviewers

  • paopa

Poem

In the warren where code does hop,
A decorator called trace_cost did drop.
Now every generator shares its name,
And usage is traced in the model game.
With tuples returned and context in tow,
The pipelines now sparkle, all aglow!
🐇✨

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (1)
wren-ai-service/src/utils.py (1)

164-186: Well-implemented cost tracing decorator with proper async handling.

The trace_cost decorator correctly handles the expected tuple return format and appropriately extracts usage metadata for Langfuse tracking. The implementation properly handles both dict and list result types.

However, consider adding error handling for potential edge cases:

def trace_cost(func):
    @functools.wraps(func)
    async def wrapper(*args, **kwargs):
        result, generator_name = await func(*args, **kwargs)
+
+        try:
            if isinstance(result, dict):
                if meta := result.get("meta", []):
                    langfuse_context.update_current_observation(
                        model=generator_name,
                        usage_details=meta[0].get("usage", {}),
                    )
            elif isinstance(result, list):
                for item in result:
                    if meta := item.get("meta", []):
                        langfuse_context.update_current_observation(
                            model=generator_name,
                            usage_details=meta[0].get("usage", {}),
                        )
+        except Exception as e:
+            # Log the error but don't break the main flow
+            logger.warning(f"Failed to update Langfuse observation: {e}")

        return result

    return wrapper
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 564ac69 and c3dcf92.

📒 Files selected for processing (20)
  • wren-ai-service/src/pipelines/generation/chart_adjustment.py (3 hunks)
  • wren-ai-service/src/pipelines/generation/chart_generation.py (3 hunks)
  • wren-ai-service/src/pipelines/generation/data_assistance.py (3 hunks)
  • wren-ai-service/src/pipelines/generation/followup_sql_generation.py (3 hunks)
  • wren-ai-service/src/pipelines/generation/followup_sql_generation_reasoning.py (3 hunks)
  • wren-ai-service/src/pipelines/generation/intent_classification.py (3 hunks)
  • wren-ai-service/src/pipelines/generation/misleading_assistance.py (3 hunks)
  • wren-ai-service/src/pipelines/generation/question_recommendation.py (3 hunks)
  • wren-ai-service/src/pipelines/generation/relationship_recommendation.py (3 hunks)
  • wren-ai-service/src/pipelines/generation/semantics_description.py (3 hunks)
  • wren-ai-service/src/pipelines/generation/sql_answer.py (3 hunks)
  • wren-ai-service/src/pipelines/generation/sql_correction.py (3 hunks)
  • wren-ai-service/src/pipelines/generation/sql_generation.py (3 hunks)
  • wren-ai-service/src/pipelines/generation/sql_generation_reasoning.py (3 hunks)
  • wren-ai-service/src/pipelines/generation/sql_question.py (3 hunks)
  • wren-ai-service/src/pipelines/generation/sql_regeneration.py (3 hunks)
  • wren-ai-service/src/pipelines/generation/sql_tables_extraction.py (3 hunks)
  • wren-ai-service/src/pipelines/generation/user_guide_assistance.py (3 hunks)
  • wren-ai-service/src/pipelines/retrieval/db_schema_retrieval.py (3 hunks)
  • wren-ai-service/src/utils.py (2 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (4)
wren-ai-service/src/pipelines/generation/data_assistance.py (2)
wren-ai-service/src/utils.py (1)
  • trace_cost (164-185)
wren-ai-service/src/core/provider.py (2)
  • get_model (11-12)
  • get_model (30-31)
wren-ai-service/src/pipelines/generation/sql_generation.py (2)
wren-ai-service/src/utils.py (1)
  • trace_cost (164-185)
wren-ai-service/src/core/provider.py (2)
  • get_model (11-12)
  • get_model (30-31)
wren-ai-service/src/pipelines/generation/sql_tables_extraction.py (1)
wren-ai-service/src/utils.py (1)
  • trace_cost (164-185)
wren-ai-service/src/pipelines/generation/sql_generation_reasoning.py (2)
wren-ai-service/src/utils.py (1)
  • trace_cost (164-185)
wren-ai-service/src/pipelines/generation/followup_sql_generation_reasoning.py (2)
  • generate_sql_reasoning (89-98)
  • prompt (65-84)
⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: pytest
  • GitHub Check: pytest
  • GitHub Check: Analyze (javascript-typescript)
  • GitHub Check: Analyze (go)
🔇 Additional comments (58)
wren-ai-service/src/utils.py (1)

222-223: Minor formatting improvement - LGTM!

The change from single quotes to double quotes in the regex pattern is a minor formatting improvement with no functional impact.

wren-ai-service/src/pipelines/retrieval/db_schema_retrieval.py (3)

18-18: Import statement correctly added.

The import of trace_cost from src.utils is properly placed with other utility imports.


319-328: Function correctly updated for cost tracing pattern.

The filter_columns_in_tables function has been properly updated to:

  • Accept the new generator_name parameter
  • Apply the @trace_cost decorator
  • Return tuples in both conditional branches

The implementation correctly maintains the original logic while adapting to the new return format.


459-459: Generator name component properly added.

The addition of "generator_name": llm_provider.get_model() to the components dictionary correctly provides the model name for the pipeline.
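The wiring these component comments describe can be sketched as follows. This is an illustrative approximation, not the service's actual classes: `FakeLLMProvider` stands in for the real LLM provider, and `filter_columns_in_tables` is reduced to a stub. The constructor resolves the model name once via `llm_provider.get_model()` and registers it as a plain string component, which the pipeline framework later injects into the decorated function as its `generator_name` argument:

```python
import asyncio

class FakeLLMProvider:
    # Illustrative stand-in for the real provider; only get_model() matters.
    def get_model(self) -> str:
        return "gpt-4o-mini"

async def filter_columns_in_tables(prompt: str, generator_name: str):
    # Stub of a decorated pipeline function: returns (result, generator_name)
    # so the trace_cost decorator can report usage against the right model.
    return {"replies": []}, generator_name

llm_provider = FakeLLMProvider()

# The constructor-side change under review: register the resolved model
# name in the components dictionary alongside the generator itself.
components = {"generator_name": llm_provider.get_model()}

# The framework injects components by keyword; simulated directly here.
_, traced_name = asyncio.run(
    filter_columns_in_tables("...", generator_name=components["generator_name"])
)
print(traced_name)
```

Resolving the name once at construction time keeps the per-call functions free of provider lookups.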

wren-ai-service/src/pipelines/generation/sql_answer.py (3)

13-13: Import statement correctly added.

The import of trace_cost from src.utils is properly placed with other utility imports.


67-73: Function correctly updated for cost tracing pattern.

The generate_answer function has been properly updated to:

  • Accept the new generator_name parameter
  • Apply the @trace_cost decorator
  • Return a tuple of the generator result and generator name

The implementation maintains the original async generator call while adapting to the new return format.


94-94: Generator name component properly added.

The addition of "generator_name": llm_provider.get_model() to the components dictionary correctly provides the model name for the pipeline.

wren-ai-service/src/pipelines/generation/chart_adjustment.py (3)

19-19: Import statement correctly added.

The import of trace_cost from src.utils is properly placed with other utility imports.


112-118: Function correctly updated for cost tracing pattern.

The generate_chart_adjustment function has been properly updated to:

  • Accept the new generator_name parameter
  • Apply the @trace_cost decorator
  • Return a tuple of the generator result and generator name

The implementation maintains the original async generator call while adapting to the new return format.


161-161: Generator name component properly added.

The addition of "generator_name": llm_provider.get_model() to the components dictionary correctly provides the model name for the pipeline.

wren-ai-service/src/pipelines/generation/relationship_recommendation.py (3)

16-16: LGTM! Proper import of tracing utility.

The import of trace_cost from src.utils is correctly added to support the cost tracing functionality.


58-60: LGTM! Consistent implementation of cost tracing pattern.

The changes correctly implement the cost tracing pattern:

  • @trace_cost decorator is properly applied
  • generator_name parameter is added to function signature
  • Return value follows the expected tuple format (result, generator_name)

The trace_cost decorator will unwrap the tuple and return just the result, maintaining compatibility with downstream functions like normalized that expect the generation result.


199-199: LGTM! Proper initialization of generator name component.

The addition of "generator_name": llm_provider.get_model() correctly provides the model name to the generation function, enabling proper cost tracking and model identification.

wren-ai-service/src/pipelines/generation/sql_generation_reasoning.py (3)

17-17: LGTM! Proper import of tracing utility.

The import of trace_cost from src.utils is correctly added to support the cost tracing functionality.


77-83: LGTM! Consistent implementation of cost tracing pattern.

The changes correctly implement the cost tracing pattern:

  • @trace_cost decorator is properly applied
  • generator_name parameter is added while preserving existing query_id parameter
  • Return value follows the expected tuple format for the decorator
  • The function maintains its original logic while adding the tracing capability

108-108: LGTM! Proper initialization of generator name component.

The addition of "generator_name": llm_provider.get_model() correctly provides the model name to the generation function for cost tracking purposes.

wren-ai-service/src/pipelines/generation/intent_classification.py (3)

18-18: LGTM! Proper import of tracing utility.

The import of trace_cost from src.utils is correctly added to support the cost tracing functionality.


284-286: LGTM! Consistent implementation of cost tracing pattern.

The changes correctly implement the cost tracing pattern for the intent classification function:

  • @trace_cost decorator is properly applied
  • generator_name parameter is added to the function signature
  • Return value follows the expected tuple format (result, generator_name)

The implementation is consistent with the pattern used across other generation functions.


353-353: LGTM! Proper initialization of generator name component.

The addition of "generator_name": llm_provider.get_model() correctly provides the model name to the classification function for cost tracking and model identification.

wren-ai-service/src/pipelines/generation/question_recommendation.py (3)

14-14: LGTM! Proper import of tracing utility.

The import of trace_cost from src.utils is correctly added to support the cost tracing functionality.


44-47: LGTM! Consistent implementation of cost tracing pattern.

The changes correctly implement the cost tracing pattern:

  • @observe decorator maintains proper ordering before @trace_cost
  • generator_name parameter is properly added to the function signature
  • Return value follows the expected tuple format for the decorator
  • The implementation is consistent with the pattern used across all other generation functions

240-240: LGTM! Proper initialization of generator name component.

The addition of "generator_name": llm_provider.get_model() correctly provides the model name to the generation function, completing the cost tracking implementation for this pipeline.

wren-ai-service/src/pipelines/generation/sql_correction.py (2)

20-20: LGTM: Consistent import addition

The import of trace_cost from src.utils aligns with the pattern applied across all generation pipeline modules.


116-116: LGTM: Proper component registration

The addition of generator_name component correctly stores the model name for cost tracking purposes.

wren-ai-service/src/pipelines/generation/chart_generation.py (2)

19-19: LGTM: Consistent import addition

The import follows the same pattern as other pipeline modules for enabling cost tracing.


130-130: LGTM: Consistent component registration

The generator_name component is properly added to track the model used for cost attribution.

wren-ai-service/src/pipelines/generation/sql_question.py (2)

15-15: LGTM: Consistent import addition

The trace_cost import maintains consistency across the generation pipeline modules.


114-114: LGTM: Proper component registration

The generator_name component addition is consistent with the cost tracking implementation.

wren-ai-service/src/pipelines/generation/sql_regeneration.py (2)

22-22: LGTM: Consistent import addition

The trace_cost import follows the established pattern for cost tracking integration.


162-162: LGTM: Consistent component registration

The generator_name component addition completes the pattern for cost tracking across generation pipelines.

wren-ai-service/src/pipelines/generation/semantics_description.py (3)

14-14: LGTM: Proper import of trace_cost decorator

The import statement correctly adds the trace_cost decorator needed for cost tracking functionality.


69-71: LGTM: Correct implementation of cost tracing

The implementation properly:

  • Applies the @trace_cost decorator
  • Adds generator_name parameter to the function signature
  • Returns a tuple of (result, generator_name) as expected by the decorator

This follows the established pattern for enabling cost tracking in generation functions.


219-219: LGTM: Proper generator name propagation

The addition of "generator_name": llm_provider.get_model() to the components dictionary correctly ensures the model name is available for cost tracking throughout the pipeline.

wren-ai-service/src/pipelines/generation/sql_tables_extraction.py (3)

14-14: LGTM: Proper import of trace_cost decorator

The import statement correctly adds the trace_cost decorator for cost tracking functionality.


67-69: LGTM: Correct cost tracing implementation

The implementation properly:

  • Applies the @trace_cost decorator
  • Adds generator_name parameter to function signature
  • Returns tuple format (result, generator_name) as expected by the decorator

This aligns with the established pattern for cost tracking in generation functions.


108-108: LGTM: Proper generator name setup

Adding "generator_name": llm_provider.get_model() to the components dictionary correctly propagates the model name for cost tracking.

wren-ai-service/src/pipelines/generation/user_guide_assistance.py (3)

13-13: LGTM: Proper import of trace_cost decorator

The import statement correctly adds the trace_cost decorator needed for cost tracking.


62-68: LGTM: Correct cost tracing implementation

The implementation properly:

  • Applies the @trace_cost decorator
  • Adds generator_name parameter to function signature
  • Returns tuple format (result, generator_name) as expected by the decorator

The multiline return statement maintains proper tuple structure for the decorator.


87-87: LGTM: Proper generator name setup

Adding "generator_name": llm_provider.get_model() to the components dictionary correctly enables model name tracking for cost tracing.

wren-ai-service/src/pipelines/generation/followup_sql_generation_reasoning.py (3)

17-17: LGTM: Proper import of trace_cost decorator

The import statement correctly adds the trace_cost decorator for cost tracking functionality.


88-98: LGTM: Correct cost tracing implementation

The implementation properly:

  • Applies the @trace_cost decorator
  • Adds generator_name parameter to function signature
  • Returns tuple format (result, generator_name) as expected by the decorator

This follows the established pattern for enabling cost tracking in generation functions.


123-123: LGTM: Proper generator name setup

Adding "generator_name": llm_provider.get_model() to the components dictionary correctly propagates the model name for cost tracking throughout the pipeline.

wren-ai-service/src/pipelines/generation/data_assistance.py (4)

13-13: LGTM: Import addition for cost tracing

The import of trace_cost from src.utils is correctly added to support the new cost tracking functionality.


72-75: LGTM: Function signature enhanced for cost tracing

The addition of the @trace_cost decorator and generator_name: str parameter correctly implements the cost tracking pattern. The decorator will handle extracting usage metadata and updating Langfuse observations.


79-79: LGTM: Return statement correctly modified for decorator compatibility

The return statement now returns a tuple (result, generator_name) as expected by the @trace_cost decorator. The decorator will unwrap this tuple and return only the result to maintain compatibility with existing callers.


97-97: LGTM: Generator name component properly added

The addition of "generator_name": llm_provider.get_model() to the components dictionary correctly provides the model identifier for cost tracking and tracing purposes.

wren-ai-service/src/pipelines/generation/misleading_assistance.py (4)

13-13: LGTM: Consistent import addition

The import of trace_cost is consistently added across all pipeline files for uniform cost tracking implementation.


72-75: LGTM: Consistent decorator and parameter addition

The implementation follows the same pattern as other pipeline files, ensuring consistent cost tracking behavior across the codebase.


79-79: LGTM: Consistent return tuple implementation

The return statement correctly implements the tuple format expected by the @trace_cost decorator, maintaining consistency with other pipeline modifications.


97-97: LGTM: Consistent component addition

The generator name component is added consistently with other pipeline classes, enabling uniform cost tracking across all generation pipelines.

wren-ai-service/src/pipelines/generation/sql_generation.py (4)

22-22: LGTM: Import addition for cost tracing

The trace_cost import is correctly added to enable cost tracking functionality in the SQL generation pipeline.


110-114: LGTM: Function enhanced with cost tracing

The @trace_cost decorator and generator_name parameter are correctly added to the generate_sql function, enabling cost tracking for SQL generation operations.


116-116: LGTM: Return statement properly modified

The return statement correctly returns the tuple (result, generator_name) format expected by the @trace_cost decorator while maintaining the original function's output semantics.


149-149: LGTM: Component properly added

The "generator_name" component is correctly initialized with llm_provider.get_model(), ensuring the model identifier is available for cost tracking in the SQL generation pipeline.

wren-ai-service/src/pipelines/generation/followup_sql_generation.py (4)

23-23: LGTM: Import addition for cost tracing

The trace_cost import is correctly added to support cost tracking in the follow-up SQL generation pipeline.


114-119: LGTM: Function signature correctly enhanced

The @trace_cost decorator and generator_name parameter are properly added to the generate_sql_in_followup function. The function correctly handles the additional histories parameter alongside the new cost tracking functionality.


124-124: LGTM: Return statement properly implemented

The return statement correctly implements the tuple format (result, generator_name) expected by the @trace_cost decorator, ensuring cost tracking data is properly captured.


157-157: LGTM: Component correctly added

The "generator_name" component is properly initialized in the follow-up SQL generation pipeline, maintaining consistency with other pipeline implementations for cost tracking.

Contributor

@imAsterSun imAsterSun left a comment


approved

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
wren-ai-service/src/utils.py (1)

169-169: Address the WIP comment for handling list of dicts.

The current implementation only handles cases where result is a dict. The WIP comment indicates this needs to support cases where result is a list of dicts.

Would you like me to help implement the handling for when result is a list of dicts? This would ensure comprehensive cost tracking across different result formats.
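One way the WIP case could be handled is sketched below as a standalone helper with hypothetical names; `update_observation` stands in for `langfuse_context.update_current_observation`. The list branch simply mirrors the existing dict branch per item:

```python
def report_usage(result, generator_name, update_observation):
    # Normalize to a list so dict and list-of-dicts results share one path.
    items = result if isinstance(result, list) else [result]
    for item in items:
        # Same check as the existing dict branch, applied per item.
        if isinstance(item, dict) and (meta := item.get("meta", [])):
            update_observation(
                model=generator_name,
                usage_details=meta[0].get("usage", {}),
            )

# Exercise the list-of-dicts case with a recording stub.
calls = []
report_usage(
    [
        {"meta": [{"usage": {"prompt_tokens": 5}}]},
        {"meta": [{"usage": {"prompt_tokens": 7}}]},
    ],
    "gpt-4o-mini",
    lambda **kw: calls.append(kw),
)
print(len(calls))  # one observation update per item carrying usage metadata
```

Whether Langfuse should receive one observation per item or a single aggregated usage total is a design choice the PR would still need to settle.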

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c3dcf92 and a16bdce.

📒 Files selected for processing (1)
  • wren-ai-service/src/utils.py (2 hunks)
🧰 Additional context used
🪛 Pylint (3.3.7)
wren-ai-service/src/utils.py

[convention] 164-164: Missing function or method docstring

(C0116)

🔇 Additional comments (1)
wren-ai-service/src/utils.py (1)

216-216: LGTM! Regex pattern quote change for consistency.

The change from single quotes to double quotes in the regex pattern maintains the same functionality while improving code consistency.

Comment on lines +164 to +179
def trace_cost(func):
    @functools.wraps(func)
    async def wrapper(*args, **kwargs):
        result, generator_name = await func(*args, **kwargs)

        # WIP: deal with the case where the result is a list of dicts
        if isinstance(result, dict):
            if meta := result.get("meta", []):
                langfuse_context.update_current_observation(
                    model=generator_name,
                    usage_details=meta[0].get("usage", {}),
                )

        return result

    return wrapper

🛠️ Refactor suggestion

Add docstring and improve error handling for the trace_cost decorator.

The trace_cost decorator implementation looks correct for its intended purpose, but needs documentation and better error handling.

Apply this diff to add proper documentation and error handling:

 def trace_cost(func):
+    """
+    Decorator to trace cost and usage details for functions that return (result, generator_name).
+    
+    This decorator expects the wrapped function to return a tuple of (result, generator_name).
+    It extracts usage metadata from the result and updates the current Langfuse observation
+    context with the model name and usage details for cost tracking.
+    
+    Args:
+        func (Callable): An async function that returns (result, generator_name)
+        
+    Returns:
+        Callable: The decorated function that returns only the result
+    """
     @functools.wraps(func)
     async def wrapper(*args, **kwargs):
-        result, generator_name = await func(*args, **kwargs)
+        func_result = await func(*args, **kwargs)
+        
+        if not isinstance(func_result, tuple) or len(func_result) != 2:
+            raise ValueError(f"Function {func.__name__} must return a tuple of (result, generator_name)")
+            
+        result, generator_name = func_result

         # WIP: deal with the case where the result is a list of dicts
         if isinstance(result, dict):
             if meta := result.get("meta", []):
+                if not isinstance(meta, list) or len(meta) == 0:
+                    return result
                 langfuse_context.update_current_observation(
                     model=generator_name,
                     usage_details=meta[0].get("usage", {}),
                 )

         return result

     return wrapper


@cyyeh cyyeh merged commit 383159a into main Jun 2, 2025
12 checks passed
@cyyeh cyyeh deleted the fix/ai-service/langfuse-cost branch June 2, 2025 02:24