Rating Responses

Rating responses allow reviewers to provide numeric assessments on customizable scales, making them ideal for quality evaluations, performance reviews, and any scenario where you need quantifiable feedback that can be easily aggregated and analyzed.

When to Use Rating Responses

Rating responses are perfect for:

Quality Assessment

Evaluating content quality, product ratings, service assessments, or any subjective quality measurement

Performance Evaluation

Rating employee performance, AI model outputs, system effectiveness, or process efficiency

User Experience Rating

Collecting feedback on user experience, satisfaction levels, or preference measurements

Risk Assessment

Scoring risk levels, threat assessments, or priority ratings where numeric scales provide clarity

Configuration Options

Rating responses support flexible scale configuration with custom labels and increments:

Required Parameters

scale_min (number, required): Minimum value on the rating scale (can be negative)
scale_max (number, required): Maximum value on the rating scale (must be greater than scale_min)

Optional Parameters

scale_step (number, default: 1): Increment between valid values (e.g., 0.5 allows half-point ratings)
required (boolean, default: false): Whether a rating is mandatory for completion
labels (object, optional): Map from scale values (as strings) to descriptive anchor labels, as used in the use case examples below
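
Putting these together, an illustrative response_config (values are examples, not defaults):

# Illustrative configuration combining the parameters above
response_config = {
    "scale_min": 1,          # lowest selectable value
    "scale_max": 5,          # highest selectable value
    "scale_step": 0.5,       # allow half-point ratings
    "required": True,        # reviewer must rate before submitting
    "labels": {              # optional anchors at the endpoints and midpoint
        "1": "Very poor",
        "3": "Acceptable",
        "5": "Excellent"
    }
}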

Implementation Examples

Content Quality Rating

Five-star quality assessment with half-point precision:
request_data = {
    "processing_type": "deferred",
    "type": "markdown", 
    "priority": "medium",
    "request_text": "Please rate the overall quality of this blog article:\n\n# '10 Essential Tips for Remote Work Productivity'\n\nWorking from home has become the new normal for millions of professionals worldwide. Whether you're a seasoned remote worker or just starting your work-from-home journey, these proven strategies will help you maintain peak productivity while enjoying the flexibility of remote work.\n\n## 1. Create a Dedicated Workspace\n\nDesignate a specific area in your home exclusively for work. This physical separation helps create mental boundaries between work and personal life...\n\n[Article continues with detailed tips and examples]",
    "response_type": "rating",
    "response_config": {
        "scale_min": 1,
        "scale_max": 5,
        "scale_step": 0.5,
        "required": True
    },
    "default_response": 3,  # Average rating if timeout
    "timeout_seconds": 86400,  # 24 hours
    "platform": "api"
}
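
A sketch of submitting this request; the endpoint URL and auth header below are placeholders, not the documented API:

import requests

# Placeholder endpoint and token - substitute your actual API details
API_URL = "https://api.example.com/v1/requests"  # hypothetical URL
response = requests.post(
    API_URL,
    json=request_data,
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # hypothetical auth scheme
)
response.raise_for_status()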

Risk Assessment Scale

Ten-point risk assessment with the rating thresholds spelled out in the request text:
# Security threat risk assessment
request_data = {
    "processing_type": "time-sensitive",
    "type": "markdown",
    "priority": "high",
    "request_text": "Assess the risk level of this security alert:\n\n**Alert Type:** Suspicious Login Activity\n**Details:** Multiple failed login attempts from IP 192.168.1.100 (Russia) targeting admin accounts\n**Time:** 15 attempts in the last 5 minutes\n**User Accounts:** admin@company.com, root@company.com, security@company.com\n**Additional Context:** These IPs have been flagged in threat intelligence feeds\n\nPlease rate the risk level from 1 (minimal) to 10 (critical threat).",
    "response_type": "rating",
    "response_config": {
        "scale_min": 1,
        "scale_max": 10,
        "scale_step": 1,
        "required": True
    },
    "default_response": 8,  # Conservative high-risk default
    "timeout_seconds": 900,  # 15 minutes
    "platform": "api"
}

User Experience Satisfaction

Net Promoter Score (NPS) style rating:
# Customer satisfaction survey
request_data = {
    "processing_type": "deferred",
    "type": "markdown",
    "priority": "low",
    "request_text": "Based on this customer feedback, how likely would this customer be to recommend our service to others?\n\n**Customer Feedback:**\n'The onboarding process was smooth and the support team was incredibly helpful when I had questions. The product does exactly what I need it to do, and the pricing is fair. I've been using it for 6 months now and haven't had any major issues. The recent feature updates have made my workflow even more efficient. I'm quite satisfied overall.'\n\n**Usage Data:**\n- Customer for 6 months\n- Regular active user (4-5 times per week)\n- No support tickets for technical issues\n- Upgraded to premium plan after 3 months\n\nPlease rate on the NPS scale: 0-10 where 10 means extremely likely to recommend.",
    "response_type": "rating",
    "response_config": {
        "scale_min": 0,
        "scale_max": 10,
        "scale_step": 1,
        "required": True
    },
    "default_response": 5,  # Neutral default
    "timeout_seconds": 604800,  # 7 days
    "platform": "api"
}
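
Once collected, NPS-style ratings are conventionally bucketed into detractors (0-6), passives (7-8), and promoters (9-10). A small processing sketch:

def nps_bucket(rating):
    """Classify a 0-10 rating using the standard NPS buckets."""
    if rating >= 9:
        return "promoter"
    elif rating >= 7:
        return "passive"
    return "detractor"

def nps_score(ratings):
    """NPS = %promoters - %detractors, ranging from -100 to +100."""
    buckets = [nps_bucket(r) for r in ratings]
    promoters = buckets.count("promoter") / len(buckets)
    detractors = buckets.count("detractor") / len(buckets)
    return round((promoters - detractors) * 100)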

Response Format

When a reviewer provides a rating, you’ll receive the numeric value:
{
  "response_data": 4.5
}
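
A sketch of extracting the rating, assuming the response arrives as a JSON payload in the shape shown above:

import json

payload = json.loads('{"response_data": 4.5}')
rating = payload["response_data"]  # numeric value on the configured scale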

Use Case Examples

1. Content Quality Evaluation

quality_evaluation_config = {
    "response_type": "rating",
    "response_config": {
        "scale_min": 1,
        "scale_max": 10,
        "scale_step": 0.5,
        "required": True
    }
}
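
A possible processing step for this configuration; the thresholds and action names here are illustrative assumptions, not part of the API:

def process_quality_rating(rating):
    """Illustrative routing for a 1-10 half-point quality rating."""
    if rating >= 7.0:
        return "publish"
    elif rating >= 5.0:
        return "request_revisions"
    return "reject"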

2. AI Model Performance Rating

ai_performance_config = {
    "response_type": "rating", 
    "response_config": {
        "scale_min": 0,
        "scale_max": 100,
        "scale_step": 5,
        "labels": {
            "0": "Completely Incorrect",
            "25": "Poor Accuracy",
            "50": "Average Performance",
            "75": "Good Accuracy", 
            "90": "Excellent Performance",
            "100": "Perfect Accuracy"
        },
        "required": True
    }
}
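
Because labels are only defined at selected anchor values here, a processing sketch might map a rating to the nearest anchor at or below it; this lookup is illustrative, not documented API behavior:

def label_for_rating(rating, labels):
    """Return the label of the closest anchor at or below the rating."""
    anchors = [int(k) for k in labels]
    anchor = max((a for a in anchors if a <= rating), default=min(anchors))
    return labels[str(anchor)]

# e.g. label_for_rating(85, ai_performance_config["response_config"]["labels"])
# -> "Good Accuracy" (the 75 anchor)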

3. Customer Satisfaction Survey

satisfaction_survey_config = {
    "response_type": "rating",
    "response_config": {
        "scale_min": 1,
        "scale_max": 7,
        "scale_step": 1,
        "labels": {
            "1": "Extremely Dissatisfied",
            "2": "Dissatisfied", 
            "3": "Somewhat Dissatisfied",
            "4": "Neutral",
            "5": "Somewhat Satisfied",
            "6": "Satisfied",
            "7": "Extremely Satisfied"
        },
        "required": True
    }
}
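
On this 7-point scale the midpoint is 4, so a simple processing sketch (the bucket names are illustrative):

def satisfaction_bucket(rating):
    """Collapse a 1-7 satisfaction rating into three coarse buckets."""
    if rating >= 5:
        return "satisfied"
    elif rating == 4:
        return "neutral"
    return "dissatisfied"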

Validation and Error Handling

Automatic Validation

The mobile app automatically validates rating responses:
  • Range validation: Ensures rating falls within scale_min and scale_max bounds
  • Step validation: Verifies rating aligns with scale_step increments
  • Required validation: Prevents submission when required=true and no rating provided
  • Numeric validation: Ensures only valid numeric values are accepted

Server-Side Validation

Your application should validate the received rating value (the numeric response_data shown above) against its configuration:
def validate_rating_response(rating, response_config):
    """Validate a rating value against its configuration"""
    
    # Check required first; a missing rating arrives as None
    if rating is None:
        if response_config.get("required", False):
            return False, "Rating is required"
        return True, "Valid"
    
    # Validate numeric type (exclude bool, which is an int subclass)
    if isinstance(rating, bool) or not isinstance(rating, (int, float)):
        return False, "Rating must be a number"
    
    # Check bounds
    scale_min = response_config["scale_min"]
    scale_max = response_config["scale_max"]
    
    if rating < scale_min or rating > scale_max:
        return False, f"Rating must be between {scale_min} and {scale_max}"
    
    # Check step alignment, tolerating floating-point error
    scale_step = response_config.get("scale_step", 1)
    if scale_step > 0:
        steps_from_min = (rating - scale_min) / scale_step
        if abs(steps_from_min - round(steps_from_min)) > 1e-9:
            return False, f"Rating must align with step increment of {scale_step}"
    
    return True, "Valid"

# Usage example
is_valid, error_message = validate_rating_response(
    rating=4.5,
    response_config={
        "scale_min": 1,
        "scale_max": 5,
        "scale_step": 0.5,
        "required": True
    }
)

Best Practices

Scale Design

Scale ranges:
  • 1-5 scale: Best for simple quality assessments, easy to understand
  • 1-10 scale: Good for detailed evaluations, allows more granularity
  • 0-100 scale: Ideal for percentage-based ratings, performance metrics
  • Custom ranges: Use negative values for scales like -5 to +5 for sentiment (see the sketch after this list)

Step increments:
  • Whole numbers (1.0): Simplest option, good for most use cases
  • Half points (0.5): Adds precision without overwhelming complexity
  • Decimal precision: Use sparingly, mainly for calculated scores
  • Larger steps (5): Good for percentage-based scales (0, 5, 10, 15…)

Labeling:
  • Always label the endpoints (minimum and maximum values)
  • Include a middle anchor point for context
  • Add labels at natural breakpoints (quarters, thirds)
  • Use descriptive labels that explain the meaning, not just “poor/good”

Reviewer fit:
  • Match scale complexity to reviewer expertise
  • Use familiar scales when possible (5-star, 1-10, percentage)
  • Consider cultural differences in rating interpretation
  • Test scales with actual users to ensure clarity
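
For example, a sentiment scale using negative values, as suggested above (an illustrative configuration):

# Illustrative -5 to +5 sentiment scale
sentiment_config = {
    "response_type": "rating",
    "response_config": {
        "scale_min": -5,   # strongly negative sentiment
        "scale_max": 5,    # strongly positive sentiment
        "scale_step": 1,
        "required": False
    }
}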

Processing Best Practices

# Define clear action thresholds
thresholds = {
    "immediate_action": 9.0,    # Exceptional - promote immediately
    "approve": 7.0,             # Good - approve with minimal review
    "review_needed": 5.0,       # Average - needs additional review
    "major_revision": 3.0,      # Poor - significant work needed
    "reject": 1.0               # Unacceptable - reject
}

def determine_action(rating):
    # Check thresholds from highest to lowest so the first match wins
    for action, threshold in sorted(thresholds.items(), key=lambda kv: -kv[1]):
        if rating >= threshold:
            return action
    return "reject"  # Default for ratings below all thresholds
# Combine multiple ratings intelligently
def aggregate_ratings(ratings, method="weighted_average"):
    if method == "simple_average":
        return sum(ratings) / len(ratings)
    
    elif method == "weighted_average":
        # Assumes ratings are ordered oldest-to-newest; weights recent ones higher
        weights = [1.0 + (i * 0.1) for i in range(len(ratings))]
        weighted_sum = sum(r * w for r, w in zip(ratings, weights))
        return weighted_sum / sum(weights)
    
    elif method == "median":
        sorted_ratings = sorted(ratings)
        n = len(sorted_ratings)
        return sorted_ratings[n//2] if n % 2 else (sorted_ratings[n//2-1] + sorted_ratings[n//2]) / 2
    
    elif method == "consensus":
        # Trim the single highest and lowest rating, then average the rest
        if len(ratings) >= 5:
            trimmed = sorted(ratings)[1:-1]
            return sum(trimmed) / len(trimmed)
        else:
            return sum(ratings) / len(ratings)
    
    raise ValueError(f"Unknown aggregation method: {method}")
# Track rating trends over time
def analyze_rating_trends(entity_id, time_period_days=30):
    # get_ratings_for_period is your own data-access helper; it should
    # return ratings ordered oldest-to-newest
    ratings = get_ratings_for_period(entity_id, time_period_days)
    
    if len(ratings) < 3:
        return {"trend": "insufficient_data"}
    
    # Compare the average of the last three ratings with the earlier history
    recent_avg = sum(ratings[-3:]) / 3
    earlier_avg = sum(ratings[:-3]) / len(ratings[:-3]) if len(ratings) > 3 else recent_avg
    
    if recent_avg > earlier_avg + 0.3:
        trend_direction = "improving"
    elif recent_avg < earlier_avg - 0.3:
        trend_direction = "declining"
    else:
        trend_direction = "stable"
    
    return {
        "trend": trend_direction,
        "current_average": recent_avg,
        "overall_average": sum(ratings) / len(ratings),
        "rating_count": len(ratings),
        "volatility": calculate_rating_volatility(ratings)
    }

def calculate_rating_volatility(ratings):
    if len(ratings) < 2:
        return 0
    
    avg = sum(ratings) / len(ratings)
    variance = sum((r - avg) ** 2 for r in ratings) / len(ratings)
    return variance ** 0.5  # Standard deviation
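
Tying these helpers together (the values are illustrative):

# Route a fresh rating, then fold it into the recent history
action = determine_action(7.5)                                   # -> "approve"
overall = aggregate_ratings([6.0, 7.0, 7.5], method="median")    # -> 7.0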

Analytics and Reporting

Rating Distribution Analysis

def analyze_rating_distribution(ratings):
    """Analyze patterns in rating data"""
    from collections import Counter
    import statistics
    
    if not ratings:
        return {"error": "No ratings to analyze"}
    
    # Basic statistics
    stats = {
        "count": len(ratings),
        "mean": statistics.mean(ratings),
        "median": statistics.median(ratings),
        "mode": statistics.mode(ratings) if len(set(ratings)) < len(ratings) else None,
        "std_dev": statistics.stdev(ratings) if len(ratings) > 1 else 0,
        "min": min(ratings),
        "max": max(ratings)
    }
    
    # Distribution analysis
    rating_counts = Counter(ratings)
    total_ratings = len(ratings)
    
    distribution = {}
    for rating, count in rating_counts.items():
        percentage = (count / total_ratings) * 100
        distribution[str(rating)] = {
            "count": count,
            "percentage": round(percentage, 1)
        }
    
    # Identify patterns (the central-tendency cutoffs assume roughly a 1-10 scale)
    patterns = {
        "central_tendency": "low" if stats["mean"] < 3 else "high" if stats["mean"] > 7 else "middle",
        "variability": "low" if stats["std_dev"] < 1 else "high" if stats["std_dev"] > 2 else "moderate",
        "most_common_rating": max(rating_counts.items(), key=lambda x: x[1])[0]
    }
    
    return {
        "statistics": stats,
        "distribution": distribution,
        "patterns": patterns
    }
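
For example:

report = analyze_rating_distribution([4, 4.5, 5, 3.5, 4, 5, 4.5])
print(report["statistics"]["mean"])              # ~4.36
print(report["patterns"]["most_common_rating"])  # the modal rating value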

Performance Benchmarking

def benchmark_against_category(rating, category_id):
    """Compare individual rating against category benchmarks"""
    
    # Get category statistics (get_category_rating_stats is your own data-access helper)
    category_stats = get_category_rating_stats(category_id)
    
    if not category_stats:
        return {"error": "No benchmark data available"}
    
    # Calculate percentile
    percentile = calculate_percentile(rating, category_stats["all_ratings"])
    
    # Determine performance tier
    if percentile >= 90:
        performance_tier = "Top 10%"
    elif percentile >= 75:
        performance_tier = "Above Average" 
    elif percentile >= 25:
        performance_tier = "Average"
    else:
        performance_tier = "Below Average"
    
    return {
        "rating": rating,
        "category_average": category_stats["mean"],
        "percentile": percentile,
        "performance_tier": performance_tier,
        "above_average": rating > category_stats["mean"]
    }

def calculate_percentile(value, data_set):
    """Calculate what percentile a value represents in a dataset"""
    below_value = sum(1 for x in data_set if x < value)
    return (below_value / len(data_set)) * 100
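
For example:

# Three of five ratings fall below 8, so 8 sits at the 60th percentile
calculate_percentile(8, [5, 6, 7, 8, 9])   # -> 60.0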
