User Feedback

Saving feedback to Trubrics

Upon creating a feedback component in Trubrics, a code snippet is generated for users to incorporate into their apps. There are several ways to save feedback:

1. With the Python SDK

Install Trubrics with:

pip install trubrics

Set Trubrics email and password as environment variables:

export TRUBRICS_EMAIL="trubrics_email"
export TRUBRICS_PASSWORD="trubrics_password"

and push some feedback to the default feedback component:

import os
from trubrics import Trubrics

trubrics = Trubrics(
    project="default",
    email=os.environ["TRUBRICS_EMAIL"],
    password=os.environ["TRUBRICS_PASSWORD"],
)

user_feedback = trubrics.log_feedback(
    component="default",
    model="gpt-3.5-turbo",
    prompt_id=None,  # see `Prompts` to store user prompts and model generations
    user_response={
        "type": "thumbs",
        "score": "👎",
        "text": "Not a very funny joke...",
    }
)

trubrics.log_feedback() arguments

Log user feedback to Trubrics.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| component | str | feedback component name created in Trubrics | required |
| model | str | model name | required |
| user_response | dict | a user response dict that must contain these fields: {"type": "", "score": "", "text": None} | required |
| prompt_id | Optional[str] | an optional prompt_id for tracing feedback on a specific prompt / model generation | None |
| user_id | Optional[str] | a user_id | None |
| tags | list | feedback tags | [] |
| metadata | dict | any feedback metadata | {} |
Source code in trubrics/platform/__init__.py
def log_feedback(
    self,
    component: str,
    model: str,
    user_response: dict,
    prompt_id: Optional[str] = None,
    user_id: Optional[str] = None,
    tags: list = [],
    metadata: dict = {},
) -> Optional[Feedback]:
    """
    Log user feedback to Trubrics.

    Parameters:
        component: feedback component name created in Trubrics
        model: model name
        user_response: a user response dict that must contain these fields {"type": "", "score": "", "text": None}
        prompt_id: an optional prompt_id for tracing feedback on a specific prompt / model generation
        user_id: a user_id
        tags: feedback tags
        metadata: any feedback metadata
    """
    user_response = Response(**user_response)
    feedback = Feedback(
        component=component,
        model=model,
        user_response=user_response,
        prompt_id=prompt_id,
        user_id=user_id,
        tags=tags,
        metadata=metadata,
    )
    auth = get_trubrics_auth_token(
        self.config.firebase_api_key,
        self.config.email,
        self.config.password.get_secret_value(),
        rerun=expire_after_n_seconds(),
    )
    components = list_components_in_organisation(
        firestore_api_url=self.config.firestore_api_url, auth=auth, project=self.config.project
    )
    if feedback.component not in components:
        raise ValueError(f"Component '{feedback.component}' not found. Please select one of: {components}.")
    res = save_document_to_collection(
        auth,
        firestore_api_url=self.config.firestore_api_url,
        project=self.config.project,
        collection=f"feedback/{feedback.component}/responses",
        document=feedback,
    )
    if "error" in res:
        logger.error(res["error"])
        return None
    else:
        logger.info("User feedback saved to Trubrics.")
        return feedback

2. With Streamlit

Trubrics has an out-of-the-box integration with Streamlit:

pip install "trubrics[streamlit]"

import streamlit as st
from trubrics.integrations.streamlit import FeedbackCollector

collector = FeedbackCollector(
    project="default",
    email=st.secrets.TRUBRICS_EMAIL,
    password=st.secrets.TRUBRICS_PASSWORD,
)

collector.st_feedback(
    component="default",
    feedback_type="thumbs",
    open_feedback_label="[Optional] Provide additional feedback",
    model="gpt-3.5-turbo",
    prompt_id=None,  # see `Prompts` to log prompts and model generations
)

Take a look at our demo LLM app for an example.

3. With Flask

Here is an example of how the Python SDK can be used with a Flask app.
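A minimal sketch of what such an integration could look like: a Flask route receives feedback from the frontend and forwards it to Trubrics. The `/feedback` endpoint name and request schema are illustrative assumptions, not part of the Trubrics API; the actual `log_feedback` call is shown in a comment since it requires live credentials.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/feedback", methods=["POST"])
def feedback():
    # Hypothetical request body: {"model": ..., "user_response": {...}}
    payload = request.get_json()

    # In a real app, forward the payload to Trubrics here, e.g.:
    # trubrics.log_feedback(
    #     component="default",
    #     model=payload["model"],
    #     user_response=payload["user_response"],
    # )
    return jsonify({"status": "received"}), 201
```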

4. With React

Here is an example showing how to collect feedback from a React app.

Types of feedback

Each feedback response in a component must be of a particular type, as seen in the user_response field of the Feedback data object.

Feedback object

There are three out-of-the-box types of feedback:

  • thumbs feedback (👍, 👎), with an optional open text box
  • faces feedback (😞, 🙁, 😐, 🙂, 😀), with an optional open text box
  • textbox feedback, an open text box for purely qualitative feedback
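As a sketch, the user_response dicts for these three types might look as follows. The exact fields for textbox feedback (in particular whether score is left as None) are assumptions here, not confirmed by the docs above.

```python
# Example user_response payloads for the three built-in feedback types.
thumbs_response = {"type": "thumbs", "score": "👍", "text": None}
faces_response = {"type": "faces", "score": "🙂", "text": "Mostly helpful."}
# For textbox feedback, the score field is assumed to be unused (None here).
textbox_response = {"type": "textbox", "score": None, "text": "The answer missed my question's context."}
```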

To save custom feedback with multiple fields, such as collecting survey responses, users can make use of the Feedback metadata field.
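For instance, survey-style answers could be attached through metadata alongside a standard response; the survey field names below are purely illustrative.

```python
# A standard thumbs response with illustrative survey fields in metadata.
survey_feedback = {
    "user_response": {"type": "thumbs", "score": "👍", "text": None},
    "metadata": {"ease_of_use": 4, "accuracy": 5, "would_recommend": True},
}
```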

Analyse quantitative user feedback in Trubrics

Various filters allow AI teams to:

  • Aggregate responses by a frequency (hourly, daily, weekly, monthly)
  • View all responses for a given score, model or user
  • Compare responses for all scores, models or users
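The same kind of aggregation can be reproduced locally on exported responses. A minimal sketch with the standard library, assuming each exported response carries a timestamp and a score (the field names are assumptions):

```python
from collections import Counter
from datetime import datetime

# Assumed shape of exported responses: one timestamp and one score per entry.
responses = [
    {"created_on": "2023-11-14T09:12:00", "score": "👍"},
    {"created_on": "2023-11-14T17:45:00", "score": "👎"},
    {"created_on": "2023-11-15T08:30:00", "score": "👍"},
]

# Aggregate by (day, score), mirroring a daily-frequency view.
daily = Counter(
    (datetime.fromisoformat(r["created_on"]).date().isoformat(), r["score"])
    for r in responses
)
```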

Tip

All quantitative analysis is viewed per feedback component. Each feedback component should have a unique set of scores (i.e. a unique type) for the analysis to be computed correctly.

Review user comments

User comments are collected in the text field of user_response. All comments are listed in the Comments tab, and may be grouped together to create an issue.

Export all raw data

Exporting a raw JSON file of all responses allows AI teams to conduct their own analysis. Use the `📥 Download all` button for a full export to JSON.
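One such analysis might simply count scores across the downloaded file; the export's exact schema is an assumption here, and only the score field is read.

```python
import json
from collections import Counter

# Stand-in for the contents of a downloaded export file.
raw = '[{"score": "👍"}, {"score": "👎"}, {"score": "👍"}]'

responses = json.loads(raw)
score_counts = Counter(r["score"] for r in responses)
```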


Last update: November 15, 2023
Created: November 15, 2023