Learning Agent
ID: learning
Purpose
Collect metrics, analyze user feedback, optimize content generation, and manage A/B tests. The Learning Agent runs independently on an admin-configurable schedule (stored in Schedule table, example: daily at midnight) and creates optimized agent versions for admin review and deployment.
Inputs
- None (agent reads all data from the database independently)
Note: The agent reads metrics, feedback, and performance data directly from the database; it does not receive input from other agents.
Schedule: Schedules are created by admins via the admin interface and stored in the Schedule table (example: daily at midnight). Runs can also be triggered manually from the admin interface.
Configurations
Configuration is stored in AgentConfig with identifier learning and the following structure:
{
metrics: {
collectionInterval: 3600, // seconds
retentionDays: 90,
trackEngagement: true,
trackFeedback: true,
trackAgentPerformance: true
},
optimization: {
enabled: true,
updateFrequency: 'daily',
minDataPoints: 100,
confidenceLevel: 0.95
},
abTesting: {
enabled: true,
variants: Array<{
id: string,
name: string,
config: object,
trafficSplit: number
}>,
metrics: ['openRate', 'clickRate', 'engagementTime', 'feedbackScore'],
minSampleSize: 50,
significanceLevel: 0.05
},
ai: {
feedbackAnalysisModel: 'gpt-4',
optimizationModel: 'gpt-4',
temperature: 0.3
}
}

Outputs
{
agentId: 'learning',
agentVersion: string, // Semantic version (e.g., "1.2.3") of the agent that generated this output
timestamp: Date,
executionTime: number,
analysis: {
engagement: {
averageOpenRate: number,
averageClickRate: number,
averageEngagementTime: number,
trends: {
openRate: 'increasing' | 'decreasing' | 'stable',
clickRate: 'increasing' | 'decreasing' | 'stable',
engagementTime: 'increasing' | 'decreasing' | 'stable'
},
topPerformingSections: string[],
lowPerformingSections: string[]
},
feedback: {
averageRating: number,
sentiment: 'positive' | 'negative' | 'neutral',
commonThemes: Array<{ theme: string, frequency: number, sentiment: string }>,
suggestions: Array<{ suggestion: string, priority: 'high' | 'medium' | 'low' }>,
sectionFeedback: Array<{
sectionId: string,
sectionType: string,
likeRate: number,
usefulRate: number,
totalFeedback: number,
trends: {
likeRate: 'increasing' | 'decreasing' | 'stable',
usefulRate: 'increasing' | 'decreasing' | 'stable'
}
}>
},
agentPerformance: {
dataCollection: { avgExecutionTime: number, successRate: number, qualityScore: number },
analysis: { avgExecutionTime: number, accuracyScore: number },
contentGeneration: { avgExecutionTime: number, qualityScore: number, userSatisfaction: number },
qualityAssurance: { avgExecutionTime: number, catchRate: number }
}
},
optimizations: Array<{
agent: string,
component: string,
currentConfig: object,
recommendedConfig: object,
expectedImprovement: number,
confidence: number
}>,
abTestResults: Array<{
testId: string,
variant: string,
metrics: object,
winner?: string,
confidence: number,
recommendation: 'keep' | 'reject' | 'continue'
}>,
recommendations: Array<{
priority: 'high' | 'medium' | 'low',
agent: string,
action: string,
rationale: string
}>,
newVersions: Array<{
agentId: string,
version: string,
status: 'draft',
config: object,
expectedImprovement: number,
metadata: {
optimizationRationale: string,
changes: Array<{ component: string, change: string }>
}
}>
}

Process
- Initialize: Load metrics config and A/B test configs
- Metrics Collection:
  - Aggregate engagement metrics (opens, clicks, time spent) from the last period
  - Collect user feedback (ratings, comments)
  - Collect agent performance metrics
  - Store results in the database
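The engagement aggregation in this step can be sketched as follows. The DeliveryEvent shape (opened, clicked, secondsEngaged) is an assumed event schema for illustration, not part of this spec; the output fields match the analysis.engagement schema above.

```typescript
// Assumed raw-event shape; real column names may differ.
interface DeliveryEvent { opened: boolean; clicked: boolean; secondsEngaged: number }

// Aggregate raw delivery events from the last period into engagement metrics.
function aggregateEngagement(events: DeliveryEvent[]) {
  const delivered = events.length;
  const opens = events.filter(e => e.opened).length;
  const clicks = events.filter(e => e.clicked).length;
  const totalTime = events.reduce((sum, e) => sum + e.secondsEngaged, 0);
  return {
    averageOpenRate: delivered ? opens / delivered : 0,
    averageClickRate: delivered ? clicks / delivered : 0,
    averageEngagementTime: delivered ? totalTime / delivered : 0,
  };
}
```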
- Engagement Analysis:
  - Calculate averages and trends
  - Identify top- and low-performing content sections
  - Analyze engagement patterns by user segment
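The trend labels ('increasing' | 'decreasing' | 'stable') used throughout the output schema could be derived from a least-squares slope over the metric's recent values. A minimal sketch, assuming a 2% relative-slope threshold (an illustrative choice, not defined by this spec):

```typescript
type Trend = 'increasing' | 'decreasing' | 'stable';

// Classify a metric time series by the slope of a least-squares fit,
// normalized by the series mean so the threshold is scale-independent.
function classifyTrend(values: number[], threshold = 0.02): Trend {
  if (values.length < 2) return 'stable';
  const n = values.length;
  const xMean = (n - 1) / 2;
  const yMean = values.reduce((a, b) => a + b, 0) / n;
  let num = 0, den = 0;
  for (let i = 0; i < n; i++) {
    num += (i - xMean) * (values[i] - yMean);
    den += (i - xMean) ** 2;
  }
  const slope = num / den; // change per period
  const relative = yMean !== 0 ? slope / yMean : slope;
  if (relative > threshold) return 'increasing';
  if (relative < -threshold) return 'decreasing';
  return 'stable';
}
```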
- Feedback Analysis:
  - Aggregate ratings and calculate the average
  - Section-Level Feedback: Analyze feedback per newsletter section
    - Identify which sections receive positive/negative feedback
    - Track section-specific engagement patterns
    - Correlate section feedback with content type, topic, or format
  - Use AI to analyze comments for themes and sentiment
  - Extract actionable suggestions
  - Feedback Patterns: Identify trends in user preferences
    - Which content types are most liked/useful
    - Which sections need improvement
    - How user preferences evolve over time
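The section-level rollup that feeds sectionFeedback in the output schema can be sketched as below. The FeedbackRow fields (sectionId, liked, markedUseful) are assumed names for the stored feedback rows, not part of this spec.

```typescript
// Assumed per-response feedback row; real column names may differ.
interface FeedbackRow { sectionId: string; liked: boolean; markedUseful: boolean }

// Roll raw feedback rows up into per-section likeRate / usefulRate stats.
function sectionFeedbackStats(rows: FeedbackRow[]) {
  const bySection = new Map<string, { likes: number; useful: number; total: number }>();
  for (const row of rows) {
    const s = bySection.get(row.sectionId) ?? { likes: 0, useful: 0, total: 0 };
    s.total += 1;
    if (row.liked) s.likes += 1;
    if (row.markedUseful) s.useful += 1;
    bySection.set(row.sectionId, s);
  }
  return Array.from(bySection.entries()).map(([sectionId, s]) => ({
    sectionId,
    likeRate: s.likes / s.total,
    usefulRate: s.useful / s.total,
    totalFeedback: s.total,
  }));
}
```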
- Agent Performance Analysis:
  - Analyze execution times, success rates, and quality scores
  - Identify bottlenecks and improvement opportunities
- A/B Test Analysis:
  - For each active A/B test, compare variant performance
  - Calculate statistical significance
  - Determine winners
  - Generate recommendations
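For rate metrics such as openRate and clickRate, significance against the configured significanceLevel (0.05) can be computed with a two-proportion z-test. A sketch, using a standard polynomial approximation for the normal CDF (the spec does not mandate this particular test):

```typescript
// Two-proportion z-test: compare a rate metric between two variants.
function zTestTwoProportions(
  successesA: number, trialsA: number,
  successesB: number, trialsB: number,
): { z: number; pValue: number } {
  const pA = successesA / trialsA;
  const pB = successesB / trialsB;
  const pooled = (successesA + successesB) / (trialsA + trialsB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / trialsA + 1 / trialsB));
  const z = (pA - pB) / se;
  const pValue = 2 * (1 - normalCdf(Math.abs(z))); // two-tailed
  return { z, pValue };
}

// Standard normal CDF via the Abramowitz & Stegun 7.1.26 erf approximation.
function normalCdf(x: number): number {
  const z = Math.abs(x) / Math.SQRT2;
  const t = 1 / (1 + 0.3275911 * z);
  const poly = t * (0.254829592 + t * (-0.284496736 +
    t * (1.421413741 + t * (-1.453152027 + t * 1.061405429))));
  const cdf = 0.5 * (1 + (1 - poly * Math.exp(-z * z)));
  return x >= 0 ? cdf : 1 - cdf;
}
```

A variant is declared a winner only when pValue falls below significanceLevel and both variants have reached minSampleSize; otherwise the recommendation is 'continue'.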
- Optimization:
  - Use AI to analyze patterns and suggest config changes
  - Generate optimization recommendations for each agent
  - Calculate expected improvements
- Version Creation (for optimized agents):
  - Create new agent versions with optimized configurations
  - Store each version in the AgentVersion table with status 'draft'
  - Include optimization rationale and expected improvements in metadata
  - Link versions to performance metrics for comparison
  - Note: The Learning Agent creates AgentVersion records, not AgentConfig updates. Agent versions are separate from runtime configurations. New versions require admin approval before production deployment.
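Assembling a draft record for the AgentVersion table might look like the sketch below. The record shape follows the newVersions output schema above; the bumpMinor helper and the minor-bump policy are assumptions for illustration.

```typescript
// Assumed policy: optimizations bump the minor version and reset the patch.
function bumpMinor(version: string): string {
  const [major, minor] = version.split('.').map(Number);
  return `${major}.${minor + 1}.0`;
}

// Build a draft AgentVersion record for admin review (never auto-deployed).
function buildDraftVersion(
  agentId: string,
  currentVersion: string,
  config: object,
  rationale: string,
  expectedImprovement: number,
) {
  return {
    agentId,
    version: bumpMinor(currentVersion),
    status: 'draft' as const,
    config,
    expectedImprovement,
    metadata: {
      optimizationRationale: rationale,
      changes: [] as Array<{ component: string; change: string }>, // filled from the config diff
    },
  };
}
```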
- Configuration Updates (if auto-update enabled):
  - The Learning Agent does NOT directly update AgentConfig for production agents
  - Instead, it creates new AgentVersion records with optimized configurations
  - Admins review and promote versions through the versioning system
  - A/B test traffic splits can be updated automatically if configured
  - Note: Production deployments always require admin approval via the admin dashboard
- Storage: Write all analysis results, recommendations, and new configurations to the database
- Note: Other agents read updated configurations from the database independently on their next execution. The Learning Agent does not communicate directly with other agents.