Choose the right response type to collect exactly the information you need from human reviewers. Each response type is optimized for different use cases and provides structured data that’s easy to process programmatically.
Response types determine how reviewers interact with your requests in the mobile app and what format the response data takes when returned to your application.
For single_select and multi_select response types, the options array accepts two formats: a simple array of strings, or an array of objects with explicit values and labels. Both formats work identically, so choose whichever you prefer; the examples below show both.
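For reference, a minimal sketch of the two formats side by side. The value/label key names follow the "Complex Format (Value/Label Objects)" tab titles below; with the simple format, response values are auto-generated from the labels (lowercased, spaces replaced with underscores):

// Simple format: values auto-generated from the labels
"options": ["Approve", "Reject", "Needs Review"]

// Complex format: explicit value/label pairs
"options": [
  { "value": "approve", "label": "Approve" },
  { "value": "reject", "label": "Reject" },
  { "value": "needs_review", "label": "Needs Review" }
]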
Response Type Overview
Text Response. Best for: open-ended feedback, explanations, detailed reviews. Returns: a string value with the reviewer's text input.
Single Select. Best for: yes/no decisions, choosing one option from a list. Returns: the string value of the selected option.
Multi Select. Best for: selecting multiple items, feature identification, tagging. Returns: an array of selected option strings.
Rating. Best for: quality assessment, scoring content, performance evaluation. Returns: a number value within the configured range.
Number Input. Best for: quantities, measurements, counting tasks. Returns: a number value with optional validation.
Text Response
Perfect for collecting detailed feedback, explanations, and open-ended responses from reviewers.
Configuration
Basic Text
Text with Guidelines
{
  "response_type": "text",
  "response_config": {
    "placeholder": "Enter your feedback here...",
    "max_length": 500,
    "required": true
  }
}
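The "Text with Guidelines" tab follows the same shape; a sketch, assuming the guidelines option documented below:

{
  "response_type": "text",
  "response_config": {
    "placeholder": "Enter your feedback here...",
    "max_length": 500,
    "guidelines": "Consider accuracy, clarity, and completeness in your review.",
    "required": true
  }
}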
Configuration Options
placeholder: Hint text shown in the input field
max_length: Maximum character limit (default: 1000, max: 5000)
min_length: Minimum character requirement (default: 0)
required: Whether a response is required (default: true)
guidelines: Additional instructions displayed to reviewers
Use Cases & Examples
Content Quality Review

# AI-generated content review
text_config = {
    "response_type": "text",
    "response_config": {
        "placeholder": "Explain what makes this content high or low quality...",
        "max_length": 800,
        "min_length": 100,
        "guidelines": "Consider accuracy, clarity, usefulness, and engagement factors."
    }
}
# Example response: "The content is well-structured and informative, but contains several factual errors about renewable energy statistics that need correction."
Customer Support Escalation
# Support ticket resolution
support_config = {
    "response_type": "text",
    "response_config": {
        "placeholder": "Describe how you would resolve this customer issue...",
        "max_length": 600,
        "guidelines": "Include specific steps and any additional resources needed."
    }
}
# Example response: "1. Issue refund within 24 hours 2. Send apology email with discount code 3. Follow up in 1 week to ensure satisfaction"
Code Review Feedback

# Software code review
code_review_config = {
    "response_type": "text",
    "response_config": {
        "placeholder": "Provide code review feedback...",
        "max_length": 1200,
        "min_length": 50,
        "guidelines": "Focus on correctness, performance, security, and maintainability."
    }
}
# Example response: "Good use of error handling. Consider extracting the validation logic into a separate function for reusability. Line 45 has a potential memory leak."
Single Select
Ideal for binary decisions or choosing one option from multiple choices.
Configuration
Simple Format (String Array)
Complex Format (Value/Label Objects)
{
  "response_type": "single_select",
  "response_config": {
    "options": ["Approve", "Reject", "Needs Review"],
    "required": true
  },
  "default_response": "reject"
}
// Response data will be: "approve", "reject", or "needs_review"
// (auto-generated: lowercase, spaces -> underscores)
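The "Complex Format" tab pairs each value with an explicit label. A sketch, assuming value/label keys as the tab title indicates:

{
  "response_type": "single_select",
  "response_config": {
    "options": [
      { "value": "approve", "label": "Approve" },
      { "value": "reject", "label": "Reject" },
      { "value": "needs_review", "label": "Needs Review" }
    ],
    "required": true
  },
  "default_response": "reject"
}
// Response data will be the value field: "approve", "reject", or "needs_review"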
Configuration Options
options: Array of option strings to choose from (2-10 options recommended)
required: Whether a selection is required (default: true)
allow_other: Allow reviewers to enter a custom text option (default: false)
randomize_order: Randomize option display order to reduce bias (default: false)
Use Cases & Examples
Content Moderation

# User-generated content approval
moderation_config = {
    "response_type": "single_select",
    "response_config": {
        "options": [
            "✅ Approve - Follows community guidelines",
            "⚠️ Approve with Warning - Minor guideline issues",
            "❌ Reject - Violates guidelines",
            "🚨 Reject and Flag - Serious violation"
        ],
        "randomize_order": False  # Keep logical order
    }
}
# Example response: "❌ Reject - Violates guidelines"
Document Classification

# Legal document categorization
classification_config = {
    "response_type": "single_select",
    "response_config": {
        "options": [
            "Contract",
            "Invoice",
            "Legal Notice",
            "Insurance Claim",
            "Other Business Document"
        ],
        "allow_other": True,  # Allow custom categories
        "randomize_order": True  # Reduce bias
    }
}
# Example response: "Contract"
Translation Quality Review

# Translation quality review
translation_config = {
    "response_type": "single_select",
    "response_config": {
        "options": [
            "Excellent - Perfect translation",
            "Good - Minor improvements needed",
            "Fair - Several issues to fix",
            "Poor - Needs complete rework"
        ]
    }
}
# Example response: "Good - Minor improvements needed"
Multi Select
Perfect when reviewers need to select multiple items or identify several features.
Configuration
Simple Format (String Array)
Complex Format (Value/Label Objects)
{
  "response_type": "multi_select",
  "response_config": {
    "options": ["Grammar Issues", "Factual Errors", "Tone Problems", "Formatting Issues"],
    "min_selections": 0,
    "max_selections": 4,
    "required": false
  },
  "default_response": []
}
// Response data will be an array like: ["grammar_issues", "tone_problems"]
// (auto-generated: lowercase, spaces -> underscores)
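And a sketch of the same configuration in the complex format, again assuming value/label keys:

{
  "response_type": "multi_select",
  "response_config": {
    "options": [
      { "value": "grammar_issues", "label": "Grammar Issues" },
      { "value": "factual_errors", "label": "Factual Errors" },
      { "value": "tone_problems", "label": "Tone Problems" },
      { "value": "formatting_issues", "label": "Formatting Issues" }
    ],
    "min_selections": 0,
    "max_selections": 4,
    "required": false
  },
  "default_response": []
}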
Configuration Options
options: Array of selectable options (3-15 options recommended)
min_selections: Minimum number of selections required (default: 0)
max_selections: Maximum selections allowed (default: unlimited)
required: Whether at least one selection is required (default: false)
allow_other: Allow custom text entries (default: false)
Use Cases & Examples
Content Issue Identification
# Identify multiple issues in content
issue_detection = {
    "response_type": "multi_select",
    "response_config": {
        "options": [
            "Spelling/Grammar Errors",
            "Factual Inaccuracies",
            "Inappropriate Tone",
            "Missing Information",
            "Poor Structure",
            "Copyright Issues"
        ],
        "min_selections": 0,  # Issues are optional
        "max_selections": 6,
        "allow_other": True
    }
}
# Example response: ["Spelling/Grammar Errors", "Missing Information"]
Product Feature Verification
# Verify product features mentioned
feature_check = {
    "response_type": "multi_select",
    "response_config": {
        "options": [
            "Free Shipping",
            "24/7 Support",
            "Money-back Guarantee",
            "Mobile App Available",
            "International Shipping",
            "Bulk Discounts"
        ],
        "min_selections": 1,
        "max_selections": 6
    }
}
# Example response: ["Free Shipping", "Money-back Guarantee", "Mobile App Available"]
Image Content Analysis

# Identify elements in an image
image_analysis = {
    "response_type": "multi_select",
    "response_config": {
        "options": [
            "People",
            "Text/Writing",
            "Logos/Branding",
            "Products",
            "Buildings/Architecture",
            "Nature/Landscape",
            "Vehicles"
        ],
        "min_selections": 1,
        "max_selections": 4
    }
}
# Example response: ["People", "Products", "Logos/Branding"]
Rating Response
Collect numerical ratings and scores from reviewers for quantitative assessment.
Configuration
5-Star Rating
10-Point Scale
{
  "response_type": "rating",
  "response_config": {
    "scale_max": 5
  }
}
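The "10-Point Scale" tab would look much the same; a sketch built from the scale options documented below:

{
  "response_type": "rating",
  "response_config": {
    "scale_min": 1,
    "scale_max": 10
  }
}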
Configuration Options
scale_max: Maximum rating value (typically 5 or 10)
scale_min: Minimum rating value (typically 0 or 1)
scale_step: Rating increment (default: 1, can use 0.5 for half-stars)
required: Whether a rating is required
Use Cases & Examples
Content Quality Rating

# Rate AI-generated article quality
quality_rating = {
    "response_type": "rating",
    "response_config": {
        "scale_max": 5
        # scale_min defaults to 1
        # scale_step defaults to 1
    }
}
# Example response: 4
Customer Service Evaluation
# Rate customer interaction quality
service_rating = {
    "response_type": "rating",
    "response_config": {
        "scale_max": 10,
        "scale_step": 1
        # scale_min defaults to 1
    }
}
# Example response: 8
Translation Accuracy Rating

# Rate translation accuracy with half-points
translation_rating = {
    "response_type": "rating",
    "response_config": {
        "scale_max": 5,
        "scale_step": 0.5  # Allow half-star ratings
        # scale_min defaults to 1
    }
}
# Example response: 3.5
Number Input
Collect specific numerical values like counts, measurements, or quantities.
Configuration
Basic Number Input
Decimal Numbers
{
  "response_type": "number",
  "response_config": {
    "max_value": 1000
  }
}
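The "Decimal Numbers" tab is a sketch assembled from the options shown in the pricing example below:

{
  "response_type": "number",
  "response_config": {
    "min_value": 0,
    "max_value": 10000,
    "decimal_places": 2
  }
}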
Configuration Options
min_value: Minimum allowed value
max_value: Maximum allowed value
decimal_places: Number of decimal places allowed (0-10)
allow_negative (assumed field name): Whether negative numbers are allowed
required: Whether input is required
Use Cases & Examples
Content Analysis Counting
# Count specific elements in content
counting_config = {
    "response_type": "number",
    "response_config": {
        "min_value": 0,
        "max_value": 50,
        "decimal_places": 0  # Whole numbers only
    }
}
# Example response: 7
Price Verification

# Verify pricing information
price_config = {
    "response_type": "number",
    "response_config": {
        "min_value": 0,
        "max_value": 10000
        # decimal_places defaults to 2 (perfect for currency)
    }
}
# Example response: 49.99
Advanced Response Configurations
Combining Multiple Response Types
For complex reviews, create multiple requests with different response types:
# Multi-stage content review
def comprehensive_content_review(content, loop_id):
    # Stage 1: Overall quality rating
    quality_request = create_request({
        "loop_id": loop_id,
        "request_text": f"Rate the overall quality of this content:\n\n{content}",
        "response_type": "rating",
        "response_config": {
            "scale_max": 5
            # scale_min defaults to 1
        },
        "metadata": {"stage": "quality_rating"}
    })

    # Stage 2: Issue identification
    issues_request = create_request({
        "loop_id": loop_id,
        "request_text": f"Identify any issues in this content:\n\n{content}",
        "response_type": "multi_select",
        "response_config": {
            "options": ["Grammar", "Factual Errors", "Tone", "Structure"],
            "min_selections": 0
        },
        "metadata": {"stage": "issue_detection"}
    })

    # Stage 3: Detailed feedback
    feedback_request = create_request({
        "loop_id": loop_id,
        "request_text": f"Provide detailed improvement suggestions:\n\n{content}",
        "response_type": "text",
        "response_config": {
            "min_length": 50,
            "max_length": 500
        },
        "metadata": {"stage": "detailed_feedback"}
    })

    return [quality_request, issues_request, feedback_request]
Dynamic Response Configuration
Adjust response types based on content characteristics:
def get_dynamic_response_config(content_type, content_length):
    """Return appropriate response config based on content"""
    if content_type == "image":
        return {
            "response_type": "multi_select",
            "response_config": {
                "options": ["People", "Text", "Logos", "Products", "Inappropriate Content"],
                "min_selections": 1
            }
        }
    elif content_type == "short_text" and content_length < 100:
        return {
            "response_type": "single_select",
            "response_config": {
                "options": ["Approve", "Reject", "Needs More Info"]
            }
        }
    else:  # Long-form content
        return {
            "response_type": "text",
            "response_config": {
                "min_length": 100,
                "max_length": 1000,
                "placeholder": "Provide detailed feedback..."
            }
        }
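A hypothetical usage sketch, reusing the create_request helper and loop_id from the multi-stage example above:

# Choose the response configuration at request time, then merge it
# into the request payload
config = get_dynamic_response_config("short_text", 42)
request = create_request({
    "loop_id": loop_id,  # loop_id as in the multi-stage example above
    "request_text": "Review this product description: ...",
    **config
})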
Response Validation
Add custom validation for response data:
def validate_response_data(response_type, response_data, config):
    """Validate response data meets requirements"""
    if response_type == "rating":
        min_val = config.get("scale_min", 1)
        max_val = config.get("scale_max", 5)
        if not (min_val <= response_data <= max_val):
            raise ValueError(f"Rating must be between {min_val} and {max_val}")
    elif response_type == "text":
        min_len = config.get("min_length", 0)
        max_len = config.get("max_length", 1000)
        if not (min_len <= len(response_data) <= max_len):
            raise ValueError(f"Text length must be {min_len}-{max_len} characters")
    elif response_type == "multi_select":
        min_sel = config.get("min_selections", 0)
        max_sel = config.get("max_selections", len(config["options"]))
        if not (min_sel <= len(response_data) <= max_sel):
            raise ValueError(f"Must select {min_sel}-{max_sel} options")
    return True
Best Practices
Response Type Selection
Text: Use for subjective feedback, explanations, or when you need qualitative insights
Single Select: Perfect for binary decisions or when one clear choice is needed
Multi Select: When multiple aspects need to be identified or tagged
Rating: For quantitative assessment or when you need to compare or rank items
Number: When you need specific measurements, counts, or calculations
Optimize for Mobile Experience
Keep option lists concise (max 8-10 options for single/multi select)
Use clear, descriptive labels that work on small screens
Provide appropriate placeholder text and guidelines
Test response times on mobile devices
Reduce Reviewer Cognitive Load
Use consistent response types within similar request categories
Provide clear instructions and examples
Use logical ordering for options (e.g., severity levels)
Consider randomizing options to reduce position bias
Set appropriate validation rules (min/max lengths, value ranges)
Use required fields judiciously - only for truly essential data
Provide “Other” or “Not Applicable” options when appropriate
Include quality checks in your webhook processing, as sketched below
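A minimal sketch of such a check, reusing validate_response_data from above and assuming a webhook payload that carries the original request alongside the response; flag_for_review and store_response are hypothetical helpers:

def handle_response_webhook(payload):
    # Assumed payload shape: {"request": {...}, "response": {"data": ...}}
    request_data = payload["request"]
    response_data = payload["response"]["data"]
    try:
        validate_response_data(
            request_data["response_type"],
            response_data,
            request_data["response_config"]
        )
    except ValueError as error:
        flag_for_review(payload, reason=str(error))  # hypothetical helper
        return
    store_response(payload)  # hypothetical helper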
Response Processing
import re

class ResponseProcessor:
    def process_response(self, request_data, response_data):
        """Process different response types appropriately"""
        response_type = request_data["response_type"]
        if response_type == "rating":
            return self.process_rating(response_data, request_data["response_config"])
        elif response_type == "text":
            return self.process_text(response_data, request_data["response_config"])
        elif response_type == "multi_select":
            return self.process_multi_select(response_data, request_data["response_config"])
        # ... handle other types

    def process_rating(self, rating, config):
        """Convert rating to actionable insights"""
        min_val = config.get("scale_min", 1)
        max_val = config.get("scale_max", 5)
        # Normalize to 0-1 scale
        normalized = (rating - min_val) / (max_val - min_val)
        # Categorize rating
        if normalized >= 0.8:
            category = "excellent"
        elif normalized >= 0.6:
            category = "good"
        elif normalized >= 0.4:
            category = "average"
        elif normalized >= 0.2:
            category = "poor"
        else:
            category = "unacceptable"
        return {
            "raw_rating": rating,
            "normalized_score": normalized,
            "category": category,
            "actionable": category in ["poor", "unacceptable"]
        }

    def process_text(self, text, config):
        """Extract insights from text responses"""
        # Basic sentiment analysis (you'd use a proper library)
        positive_words = ["good", "excellent", "great", "perfect", "approve"]
        negative_words = ["poor", "bad", "terrible", "reject", "inappropriate"]
        positive_count = sum(1 for word in positive_words if word in text.lower())
        negative_count = sum(1 for word in negative_words if word in text.lower())
        # Extract action items (sentences with "should", "need", "must")
        action_pattern = r"[^.!?]*(?:should|need|must)[^.!?]*[.!?]"
        action_items = re.findall(action_pattern, text, re.IGNORECASE)
        return {
            "text": text,
            "word_count": len(text.split()),
            "sentiment_score": positive_count - negative_count,
            "action_items": action_items,
            "has_specific_feedback": len(action_items) > 0
        }
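A hypothetical usage sketch, assuming request_data and response_data arrive from your webhook handler; queue_for_revision is a hypothetical helper:

processor = ResponseProcessor()
result = processor.process_response(request_data, response_data)
if result and result.get("actionable"):
    # e.g. route low-rated content back for revision
    queue_for_revision(request_data, result)  # hypothetical helper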
Next Steps