Boolean responses provide the simplest form of human decision-making: true or false, yes or no, approve or reject. With customizable labels and colors, they’re perfect for binary decisions that don’t require complex analysis or multiple options.
Simple approve/reject decision for content moderation:
```python
request_data = {
    "processing_type": "time-sensitive",
    "type": "markdown",
    "priority": "high",
    "request_text": (
        "Does this user-generated post comply with our community guidelines?\n\n"
        "**Post Content:**\n"
        "'Just finished reading an amazing book on productivity! \"Getting Things Done\" "
        "by David Allen completely changed how I organize my work. The key insight for me "
        "was the two-minute rule - if something takes less than two minutes, just do it "
        "immediately instead of adding it to your todo list. Has anyone else read this? "
        "What productivity methods work best for you?'\n\n"
        "**Community Guidelines Check:**\n"
        "- No spam or promotional content ✓\n"
        "- Respectful tone ✓\n"
        "- Relevant to community topic ✓\n"
        "- No personal attacks ✓\n"
        "- Encourages discussion ✓"
    ),
    "response_type": "boolean",
    "response_config": {
        "true_label": "✅ Approve Post",
        "false_label": "❌ Reject Post",
        "true_color": "#16a34a",
        "false_color": "#dc2626",
        "required": True,
    },
    "default_response": False,  # Conservative default - reject on timeout
    "timeout_seconds": 1800,  # 30 minutes
    "platform": "api",
}
```
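Once the moderator responds, your application branches on the returned boolean. A minimal sketch of that handler, assuming the response payload carries a `boolean` field (the handler name and the `"published"`/`"held_for_review"` outcomes are illustrative, not part of the API):

```python
def handle_moderation_response(response_data, default_response=False):
    """Branch on the reviewer's boolean decision.

    Hypothetical handler: the `boolean` field name matches the response
    shape used in the validation example; the return values are made up
    for illustration.
    """
    decision = response_data.get("boolean")
    if decision is None:
        # Reviewer never answered; fall back to the conservative default
        decision = default_response
    return "published" if decision else "held_for_review"
```

Because `default_response` is `False` in the request above, an unanswered or timed-out request resolves to the safe outcome (the post stays held).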
```python
# Feature release approval
request_data = {
    "processing_type": "deferred",
    "type": "markdown",
    "priority": "medium",
    "request_text": (
        "Should we proceed with the scheduled release of the new user dashboard feature?\n\n"
        "**Release Readiness Check:**\n\n"
        "**✅ Completed Items:**\n"
        "- All unit tests passing (247/247)\n"
        "- Integration tests successful\n"
        "- Security audit completed with no critical findings\n"
        "- Performance testing shows 15% improvement in load times\n"
        "- Documentation updated\n"
        "- Support team training completed\n\n"
        "**⚠️ Outstanding Items:**\n"
        "- 2 minor UI bugs in Safari (non-blocking)\n"
        "- Analytics tracking for new features (nice-to-have)\n"
        "- A/B testing framework integration (planned for next sprint)\n\n"
        "**Risk Assessment:**\n"
        "- Low risk deployment\n"
        "- Rollback plan tested and ready\n"
        "- Feature flags enabled for gradual rollout\n"
        "- Customer support prepared for potential issues"
    ),
    "response_type": "boolean",
    "response_config": {
        "true_label": "🚀 GO - Proceed with Release",
        "false_label": "⏸️ NO-GO - Delay Release",
        "true_color": "#16a34a",
        "false_color": "#f59e0b",
        "required": True,
    },
    "default_response": False,  # Conservative - don't release without explicit approval
    "timeout_seconds": 86400,  # 24 hours
    "platform": "api",
}
```
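Both examples pair a `timeout_seconds` with a conservative `default_response`. One way a client might combine the two is to poll until a decision arrives, then fall back to the default once the deadline passes. A sketch under those assumptions (`poll_fn` is a hypothetical callable, not part of the API):

```python
import time


def await_decision(poll_fn, timeout_seconds, default_response):
    """Poll for a human decision, falling back to the default on timeout.

    `poll_fn` is a hypothetical callable that returns True/False once the
    reviewer has responded, or None while the request is still pending.
    """
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        decision = poll_fn()
        if decision is not None:
            return decision
        time.sleep(min(5, timeout_seconds))  # back off between polls
    return default_response  # e.g. False: NO-GO without explicit approval
```

With `default_response=False`, a release request that expires after 24 hours resolves to NO-GO rather than shipping unreviewed.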
Your application should validate each boolean response it receives:
```python
def validate_boolean_response(response_data, response_config):
    """Validate a boolean response against its configuration."""
    if not isinstance(response_data, dict):
        return False, "Response must be an object"

    if "boolean" not in response_data:
        return False, "Missing boolean field"

    boolean_value = response_data["boolean"]

    # Check required before type-checking, since None is not a bool
    if boolean_value is None:
        if response_config.get("required", False):
            return False, "Boolean decision is required"
        return True, "Valid"

    # Validate boolean type
    if not isinstance(boolean_value, bool):
        return False, "Value must be true or false"

    # Validate associated label exists
    if "boolean_label" not in response_data:
        return False, "Missing boolean_label field"

    return True, "Valid"


# Usage example
is_valid, error_message = validate_boolean_response(
    response_data={
        "boolean": True,
        "boolean_label": "✅ Approve Post",
    },
    response_config={"required": True},
)
```
```python
# Use boolean responses to evaluate A/B test results
def evaluate_ab_test_results(test_id, variant_a_metrics, variant_b_metrics):
    """Human evaluation of A/B test statistical significance."""
    request_text = f"""
Based on these A/B test results, should we proceed with Variant B?

**Test Duration:** 14 days
**Sample Size:** {variant_a_metrics['users']} vs {variant_b_metrics['users']} users

**Variant A (Control):**
- Conversion Rate: {variant_a_metrics['conversion_rate']:.2%}
- Revenue per User: ${variant_a_metrics['revenue_per_user']:.2f}
- User Satisfaction: {variant_a_metrics['satisfaction']:.1f}/10

**Variant B (Test):**
- Conversion Rate: {variant_b_metrics['conversion_rate']:.2%}
- Revenue per User: ${variant_b_metrics['revenue_per_user']:.2f}
- User Satisfaction: {variant_b_metrics['satisfaction']:.1f}/10

**Statistical Significance:** {calculate_statistical_significance(variant_a_metrics, variant_b_metrics)}
"""
    return create_boolean_request(
        loop_id="data_science_team",
        request_text=request_text,
        true_label="📊 Deploy Variant B",
        false_label="🔄 Keep Current Version",
    )
```
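The example above calls a `calculate_statistical_significance` helper without defining it. One plausible implementation, assuming it compares the two variants' conversion rates with a two-proportion z-test (your real helper may use a different test or output format), is:

```python
import math


def calculate_statistical_significance(variant_a_metrics, variant_b_metrics):
    """Two-proportion z-test on conversion rates.

    Illustrative implementation of the helper referenced above; assumes
    each metrics dict has `users` and `conversion_rate` keys.
    """
    n_a, n_b = variant_a_metrics["users"], variant_b_metrics["users"]
    p_a = variant_a_metrics["conversion_rate"]
    p_b = variant_b_metrics["conversion_rate"]
    # Pooled proportion under the null hypothesis of no difference
    pooled = (p_a * n_a + p_b * n_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se if se else 0.0
    # |z| > 1.96 corresponds to p < 0.05, two-tailed
    verdict = "significant" if abs(z) > 1.96 else "not significant"
    return f"z = {z:.2f} ({verdict} at the 95% level)"
```

Surfacing the test statistic alongside the raw metrics gives the reviewer a quantitative anchor for the GO/NO-GO decision rather than eyeballing two percentages.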