FeedbackCollector Streamlit Integration
The FeedbackCollector takes user feedback from within an app and saves it to Trubrics.
Install
To get started with Streamlit, install the additional dependency:
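Assuming the standard extras syntax used by the trubrics-sdk, the install command is:

```shell
# install trubrics with the Streamlit extra
pip install "trubrics[streamlit]"
```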
Streamlit Example Apps
Once you have created an account with Trubrics, you can try our deployed example Streamlit apps that use the integration to save feedback:
- LLM chat - deployed app | code : A chatbot that queries OpenAI's API and allows users to leave feedback.
- LLM single answer - deployed app | code : An LLM app that queries OpenAI's API and allows users to leave feedback on single text generations.
The code for these apps can be viewed in the trubrics-sdk, and may be run by cloning the repo and running:
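A sketch of those steps; the exact path to the example app inside the repo is an assumption and may differ:

```shell
# clone the trubrics-sdk repository
git clone https://github.com/trubrics/trubrics-sdk.git
cd trubrics-sdk

# run one of the example Streamlit apps (path is illustrative)
streamlit run examples/feedback/streamlit/llm_app.py
```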
Tip
To run this app, you need your own OpenAI API key.
Install openai:
Then save your OpenAI API key as OPENAI_API_KEY='your_openai_key' in st.secrets, and run:
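A sketch of those steps; the secrets file location follows Streamlit's convention, and the app path is illustrative:

```shell
# install the OpenAI client
pip install openai

# save your key in Streamlit secrets (picked up via st.secrets)
echo "OPENAI_API_KEY='your_openai_key'" >> .streamlit/secrets.toml

# run the example app (path is illustrative)
streamlit run examples/feedback/streamlit/llm_app.py
```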
Add the FeedbackCollector to your App
Here is a complete example that logs user prompts and feedback from a simple Streamlit app:
```python
import streamlit as st

from trubrics.integrations.streamlit import FeedbackCollector

if "logged_prompt" not in st.session_state:
    st.session_state.logged_prompt = None
if "feedback_key" not in st.session_state:
    st.session_state.feedback_key = 0

# 1. authenticate with trubrics
collector = FeedbackCollector(
    email=st.secrets.TRUBRICS_EMAIL,
    password=st.secrets.TRUBRICS_PASSWORD,
    project="default",
)

if st.button("Refresh"):
    st.session_state.feedback_key += 1
    st.session_state.logged_prompt = None
    st.experimental_rerun()

prompt = "Tell me a joke"
generation = "Why did the chicken cross the road? To get to the other side."

st.write(f"#### :orange[Example user prompt: {prompt}]")

if st.button("Generate response"):
    # 2. log a user prompt & model response
    st.session_state.logged_prompt = collector.log_prompt(
        config_model={"model": "gpt-3.5-turbo"},
        prompt=prompt,
        generation=generation,
    )

if st.session_state.logged_prompt:
    st.write(f"#### :blue[Example model generation: {generation}]")

    # 3. log some user feedback
    user_feedback = collector.st_feedback(
        component="default",
        feedback_type="thumbs",
        open_feedback_label="[Optional] Provide additional feedback",
        model=st.session_state.logged_prompt.config_model.model,
        prompt_id=st.session_state.logged_prompt.id,
        key=st.session_state.feedback_key,
        align="flex-start",
    )
```
What's going on here? Let's break down this snippet:
1. FeedbackCollector()
Tip
The authentication token is cached already, but to optimise your app further, wrap the FeedbackCollector
in @st.cache_data.
FeedbackCollector object
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `project` | `Optional[str]` | a Trubrics project name | *required* |
| `email` | `Optional[str]` | a Trubrics account email | *required* |
| `password` | `Optional[str]` | a Trubrics account password | *required* |
Source code in trubrics/integrations/streamlit/collect.py
2. collector.log_prompt()
.log_prompt() parameters
Log user prompts to Trubrics.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `config_model` | `dict` | model configuration with fields "model", "prompt_template", "temperature" | *required* |
| `prompt` | `str` | user prompt to the model | *required* |
| `generation` | `str` | model generation | *required* |
| `user_id` | `Optional[str]` | user id | `None` |
| `session_id` | `Optional[str]` | session id, for example for a chatbot conversation | `None` |
| `tags` | `list` | feedback tags | `[]` |
| `metadata` | `dict` | any feedback metadata | `{}` |
Source code in trubrics/platform/__init__.py
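The main snippet only passes the required fields. The optional parameters from the table above can attach conversation context to a logged prompt; a hedged sketch, where the `user_id`, `session_id`, tag, and metadata values are purely illustrative:

```python
import streamlit as st

from trubrics.integrations.streamlit import FeedbackCollector

collector = FeedbackCollector(
    email=st.secrets.TRUBRICS_EMAIL,
    password=st.secrets.TRUBRICS_PASSWORD,
    project="default",
)

# log a prompt with the optional context fields from the table above
logged_prompt = collector.log_prompt(
    config_model={"model": "gpt-3.5-turbo", "temperature": 0.7},
    prompt="Tell me a joke",
    generation="Why did the chicken cross the road? To get to the other side.",
    user_id="user-123",            # illustrative value
    session_id="conversation-42",  # groups prompts from one chatbot conversation
    tags=["example"],
    metadata={"app_version": "0.1.0"},
)
```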
3. collector.st_feedback()
.st_feedback() parameters
Collect ML model user feedback with UI components from a Streamlit app.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `component` | `str` | component name. Create a new component directly in Trubrics. | *required* |
| `feedback_type` | `str` | type of feedback to be collected | *required* |
| `textbox_type` | `str` | if "textbox" is selected as `feedback_type`, the type of textbox to use ["text-input", "text-area"] | `'text-input'` |
| `model` | `str` | the model used - can be a model version, a link to the saved model artifact in cloud storage, etc. | *required* |
| `prompt_id` | `Optional[str]` | id of the prompt object | `None` |
| `tags` | `list` | a list of tags for the feedback | `[]` |
| `metadata` | `dict` | any data to save with the feedback | `{}` |
| `user_id` | `Optional[str]` | an optional reference to a user, for example a username if there is a login, a cookie ID, etc. | `None` |
| `key` | `Optional[str]` | a key for the Streamlit components (necessary if calling this method multiple times) | `None` |
| `open_feedback_label` | `Optional[str]` | label of the optional text input for "faces" or "thumbs" `feedback_type` | `None` |
| `save_to_trubrics` | `bool` | whether to save the feedback to Trubrics, or just to return the feedback object | `True` |
| `disable_with_score` | `Optional[str]` | an optional score to disable the component. Must be a "thumbs" emoji or a "faces" emoji. Can be used to pass state from one component to another. | `None` |
| `align` | `str` | where to align the feedback component ["flex-end", "center", "flex-start"] | `'flex-end'` |
| `success_fail_message` | `bool` | whether to display an st.toast message on feedback submission | `True` |
Source code in trubrics/integrations/streamlit/collect.py
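The main snippet uses a "thumbs" component; the table above also documents a textbox variant. A hedged sketch of collecting free-text feedback instead, with illustrative `user_id` and alignment values:

```python
import streamlit as st

from trubrics.integrations.streamlit import FeedbackCollector

collector = FeedbackCollector(
    email=st.secrets.TRUBRICS_EMAIL,
    password=st.secrets.TRUBRICS_PASSWORD,
    project="default",
)

# collect free-text feedback in a multi-line text area
user_feedback = collector.st_feedback(
    component="default",
    feedback_type="textbox",
    textbox_type="text-area",
    model="gpt-3.5-turbo",
    user_id="user-123",          # illustrative value
    align="center",
    success_fail_message=False,  # suppress the st.toast on submission
)
```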
Created: November 15, 2023