# AI Feature - GrantMaster

**Location:** `src/features/ai/`
**Status:** Stable (multi-modal AI platform with RAG, Gemini, and document processing)
**Last Updated:** 2026-03-17
## Overview

The AI feature is the intelligent backbone of GrantMaster, providing conversational assistance, document retrieval, financial analysis, and compliance intelligence through Google Gemini APIs and a Retrieval-Augmented Generation (RAG) pipeline. It powers:

- GrantMaster Chat Interface — Full-featured AI chat with project context awareness
- Grant Assistant Sidebar — Embedded chat widget for contextual queries across the app
- Document Intelligence — RAG-based document analysis and metadata extraction
- AI-Generated Insights — Budget forecasts, compliance checks, expense eligibility, report narratives, and journal submission analysis
- Multi-Modal Analysis — Receipt OCR, document classification, and vision-based content extraction
- Conversation Memory — localStorage-backed chat session persistence with citations
## Architecture

### Data Model

#### Core Collections & Types

| Collection | Entity Type | Purpose |
|---|---|---|
| `processed-documents` | `ProcessedDocument` | Stores uploaded project documents with chunking metadata and processing status |
| `organizations/{org}/projects/{project}/documents` | Implicit via RAG | Virtual collection tracking document chunks for a project |
### Key Types

#### Zod Schemas

- `journalSchema` — Journal entry validation
- `geminiSchemas.*` — Response validation for compliance checks, expense analysis, and forecasts (validation pattern sketched below)
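A minimal sketch of that validation pattern, assuming Zod is used as documented; the schema fields here are illustrative, not the real `geminiSchemas` definitions:

```ts
import { z } from "zod";

// Hypothetical response schema for an expense-eligibility check.
const expenseEligibilitySchema = z.object({
  eligible: z.boolean(),
  confidence: z.number().min(0).max(1),
  explanation: z.string(),
});

// Validate a raw Gemini JSON payload before it reaches the UI.
function parseEligibility(raw: unknown) {
  const result = expenseEligibilitySchema.safeParse(raw);
  if (!result.success) {
    throw new Error(`Gemini response failed validation: ${result.error.message}`);
  }
  return result.data;
}
```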
## Key Behaviors

### 1. Grant Assistant Chat Interface

**Where:** `AIAssistantPage.tsx` (the former `AIGrantMasterPage` was consolidated into this single page)
- Full-screen AI chat with category-based quick prompts (Budget, Reporting, Documents, etc.)
- Smart Context Selection: When the user is in a project, queries are routed to RAG for document-grounded responses (routing sketched below)
- Generic Fallback: Outside project context, uses the general grant assistant for broad queries
- Message Persistence: Sessions saved to localStorage, sorted by `updatedAt`
- Citation Display: Sources from retrieved documents shown inline via `CitationDisplay`
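A hedged sketch of that routing decision; the function names come from this document, but the import path and the general assistant's argument shape are assumptions:

```ts
// Project context → RAG; otherwise the general grant assistant.
import { queryRAGAuto, queryGrantAssistant } from "@/features/ai"; // assumed export path

async function askAssistant(queryText: string, userId: string, orgId: string, projectId?: string) {
  if (projectId) {
    // Document-grounded response with citations.
    return queryRAGAuto(projectId, orgId, userId, queryText);
  }
  // Generic fallback outside project context (argument shape assumed).
  return queryGrantAssistant(queryText);
}
```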
### 2. Embedded Grant Assistant Sidebar

**Where:** `GrantAssistant.tsx` (globally embedded)
- Collapsible sidebar widget integrated into pages
- Shares state with the full-page chat via `useGrantAssistantState`
- Same RAG/Assistant routing logic as the full interface
- Quick actions: Create new session, delete conversations, clear history
- Responsive: Collapses on mobile, shows hover tooltip
3. Document Upload & RAG Processing
Flow:- User uploads file →
uploadAndProcessDocument()createsProcessedDocumentrecord - File stored in Firebase Storage (
project-documents/{org}/{project}/{filename}) - Status tracked:
uploading→processing→completed/failed - Cloud Function chunks document, extracts metadata (deadlines, activity rules), generates embeddings
- Chunks stored in Firestore with vector embeddings for retrieval
- User can query document context via RAG
uploadAndProcessDocument(file, projectId, orgId, userId)— Initiates uploadgetDocumentStatus(documentId)— Check processing progresssubscribeToDocumentStatus(documentId, callback)— Real-time updateslistProjectDocuments(projectId, orgId)— List all documents in a projectretryDocumentProcessing(documentId)— Retry failed documentsdeleteDocument(documentId)— Remove document and chunksgetAllProcessedDocuments(organizationId)— Full org document list
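A minimal usage sketch of that flow, assuming `uploadAndProcessDocument` resolves to a document id and the status callback receives a status string; both return shapes are assumptions:

```ts
import { uploadAndProcessDocument, subscribeToDocumentStatus } from "@/features/ai"; // assumed export path

async function uploadGrantAgreement(file: File, projectId: string, orgId: string, userId: string) {
  // Kick off the upload and server-side processing (return value assumed).
  const documentId = await uploadAndProcessDocument(file, projectId, orgId, userId);

  // Watch the uploading → processing → completed/failed transitions.
  const unsubscribe = subscribeToDocumentStatus(documentId, (status: string) => {
    if (status === "completed" || status === "failed") {
      console.log(`Document ${documentId} finished: ${status}`);
      unsubscribe(); // assumed unsubscribe handle, mirroring Firestore's onSnapshot pattern
    }
  });
}
```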
4. RAG Query & Retrieval
Flow:- User query →
queryRAGAuto(projectId, organizationId, userId, queryText) - Embedding generated for user query
- Vector search retrieves top-K relevant chunks from document corpus
- Chunks + user query → Gemini context window
- Response includes citations (document source metadata)
queryRAG(query, documents)— Core RAG pipeline (client-side)queryRAGAuto(projectId, orgId, userId, queryText)— Intelligent routing (client + Cloud Function fallback)queryRAGCloudFunction(...)— Server-side RAG (Genkit)retrieveRelevantChunks(query, documents)— Vector retrievalstoreDocumentChunks(projectId, chunks)— Persist chunkschunkText(text)— Split documents into overlapping chunksgenerateEmbedding(text)— Get vector representationextractDeadlines(text)— Metadata extractionextractActivityRules(text)— Grant eligibility rules from documents
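An illustrative sketch of the retrieval step; the chunk shape and cosine-similarity scoring are assumptions about how `retrieveRelevantChunks` ranks results:

```ts
interface Chunk {
  text: string;
  embedding: number[];
}

// Cosine similarity between the query embedding and a chunk embedding.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank chunks and keep the top-K for the Gemini context window.
function topKChunks(queryEmbedding: number[], chunks: Chunk[], k = 5): Chunk[] {
  return [...chunks]
    .sort(
      (x, y) =>
        cosineSimilarity(queryEmbedding, y.embedding) -
        cosineSimilarity(queryEmbedding, x.embedding)
    )
    .slice(0, k);
}
```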
### 5. AI-Powered Insights

#### Journal Entry Generation

- Trigger: Journal submission flow (end-of-month reflection)
- Input: User notes, notable events, project assignments
- Output: Auto-generated `JournalEntry[]` with time allocations
- Implementation: Cloud Function via `generateJournalEntries(input, projects, user, org)` (see the sketch after this list)
- Fallback: Client-side Gemini if the Cloud Function is unavailable
- Tracking: Usage tracked via `trackAiGeneration()` for billing
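A hedged call-site sketch; the import paths are assumptions, and `trackAiGeneration()` is shown with no arguments because its signature is not documented here:

```ts
import { generateJournalEntries, trackAiGeneration } from "@/features/ai"; // assumed export path
import type { JournalEntry, Organization, Project, User } from "@/types";  // assumed export path

async function submitMonthlyJournal(
  input: string,
  projects: Project[],
  user: User,
  org: Organization
): Promise<JournalEntry[]> {
  // Primary path is the Genkit Cloud Function; the service itself falls back
  // to client-side Gemini when the function is unavailable.
  const entries = await generateJournalEntries(input, projects, user, org);

  // Usage is tracked for billing/credits.
  await trackAiGeneration();
  return entries;
}
```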
#### Compliance Analysis

- Service: `analyzeCompliance(organizationId, projects)`
- Checks: Policy violations, missing documentation, audit readiness
- Output: Structured compliance alerts with severities
#### Expense Eligibility

- Service: `checkExpenseEligibility(expense, grantTerms)` (see the sketch below)
- Logic: Validates the expense against grant agreement restrictions
- Output: Eligibility verdict + explanation
Budget Forecasts & Narratives
- Service:
generateProjectForecast(project, expenses)— Predict burn rate, runway - Service:
generateReportNarrative(project, data)— Narrative summary of progress - Output: Structured text for reports
Receipt & Document Analysis
- Service:
analyzeReceipt(file)— OCR + metadata extraction - Service:
analyzeDocumentContent(file)— Vision-based content extraction - Output: Extracted fields, amount, vendor, date
#### Proposal Impact Analysis

- Service: `generateProposalSection(proposal, section)` — Draft grant proposal sections
- Service: `suggestMEIndicators(project)` — Recommend impact metrics
- Service: `generateMENarrative(indicators, data)` — Impact narrative
- Service: `detectMEAnomalies(data)` — Anomaly detection in measurement data
### 6. Prompt Library

**Storage:** localStorage via `PromptLibraryService`
- System Prompts: Built-in templates across 5 categories (budget, compliance, reporting, general, custom)
- User Prompts: Custom prompts saved per organization/user
- Methods (usage sketched below):
  - `getAllPrompts(userId, orgId)` — System + user combined
  - `getPromptsByCategory(category, userId, orgId)` — Filter by category
  - `getUserPrompts(userId, orgId)` — Custom only
  - `savePrompt(title, desc, text, category, userId, orgId)` — Create custom
  - `deletePrompt(promptId)` — Remove custom prompt
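A minimal usage sketch, assuming `PromptLibraryService` is instantiated directly; the ids are placeholders:

```ts
import { PromptLibraryService } from "@/features/ai"; // assumed export path

const prompts = new PromptLibraryService(); // instantiation pattern assumed
const currentUserId = "user-123"; // placeholder ids for the sketch
const currentOrgId = "org-456";

// Save a custom prompt scoped to the current user and organization.
prompts.savePrompt(
  "Quarterly burn-rate check",
  "Asks the assistant to summarize spend vs. budget",
  "Summarize this quarter's spend against the approved budget.",
  "budget",
  currentUserId,
  currentOrgId
);

// Fetch system + user prompts for the Budget category tab.
const budgetPrompts = prompts.getPromptsByCategory("budget", currentUserId, currentOrgId);
```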
## Service Contract
| Service | Owns/Does | Key Public Methods |
|---|---|---|
| GeminiService | Facade for all Gemini-powered AI operations; BaseService wrapper for DI | generateJournalEntries, analyzeCompliance, generateReportNarrative, generateProjectForecast, analyzeReceipt, checkExpenseEligibility, queryGrantAssistant, generateProposalSection, generateFullProposal, suggestMEIndicators, generateMENarrative, detectMEAnomalies, analyzeDocumentContent |
| RAG Pipeline | Document chunking, embedding, retrieval, caching | queryRAG, queryRAGAuto, queryRAGCloudFunction, storeDocumentChunks, retrieveRelevantChunks, chunkText, generateEmbedding |
| RAG Document Management | Upload, processing, metadata persistence, version control | uploadAndProcessDocument, getDocumentStatus, subscribeToDocumentStatus, listProjectDocuments, retryDocumentProcessing, deleteDocument, getAllProcessedDocuments |
| Chat History Service | localStorage-backed session persistence, message history | getAllSessions, getActiveSessions, getArchivedSessions, getSession, addMessage, createSession, deleteSession, archiveSession |
| Prompt Library Service | System + custom prompt templates, localStorage-backed | getAllPrompts, getPromptsByCategory, getUserPrompts, savePrompt, deletePrompt, updatePrompt |
| Gemini Client | Gemini API initialization, retry logic, JSON parsing | getAIClient, parseGeminiJsonResponse, callWithRetry |
| Gemini Assistant | Grant-specific query handler (general assistant queries) | queryGrantAssistant |
| Gemini Compliance | Compliance analysis, expense eligibility checks | analyzeCompliance, checkExpenseEligibility |
| Gemini Documents | Receipt analysis, document content extraction (vision) | analyzeReceipt, analyzeDocumentContent |
| Gemini Forecast | Budget forecasting, report narratives | generateProjectForecast, generateReportNarrative |
| Gemini Proposal Impact | Proposal section generation, M&E indicators, anomaly detection | generateProposalSection, generateFullProposal, suggestMEIndicators, generateMENarrative, detectMEAnomalies |
| Response Safety | Input validation, output sanitization, injection prevention | (internal validators) |
| RAG Document Processing | Text extraction, chunking, embedding generation | chunkText, generateEmbedding, storeDocumentChunks |
| RAG Retrieval | Vector search, chunk ranking, metadata filtering | retrieveRelevantChunks |
| RAG Metadata | Deadline/activity rule extraction from documents | extractDeadlines, extractActivityRules |
| RAG Cache | Embedding cache, query result caching | (internal) |
| Report Generation | Report template application, PDF/Excel export | generateReport, exportToPDF, exportToExcel, getTemplateById |
## Events

### Emitted Events
| Event | When Emitted | Severity | Persisted |
|---|---|---|---|
| None currently | — | — | — |
### Consumed Events
| Event | What Happens |
|---|---|
| None currently | — |
## Dependencies

### Features That Depend on AI
- projects — Project context for RAG queries
- grants — Grant data for assistant queries, proposal generation
- users — User context for journal generation, chat sessions
- expenses — Expense eligibility checks
- documents — Document classification, upload handling (integrates with Document Brain)
- reports — Report generation, narrative composition
- billing — Tracks AI usage for credit/quota management
- compliance — Compliance analysis inputs/outputs
- impact — M&E indicator suggestions, anomaly detection
### AI Feature Dependencies
- core — BaseService, eventBus, Firebase (Firestore, Storage, Functions), logger
- types — DocumentCitation, JournalEntry, ProcessedDocument, Project, Organization, User
- Google Gemini API — LLM inference
- Firebase Cloud Functions — Genkit-powered journal generation, RAG (cloud variant)
- Firebase Storage — Document file persistence
- Firestore — Document metadata (`processed-documents` collection)
## File Structure

See `src/features/ai/` in the repository for the full layout.

## Integration Points

### With Other Features
- Projects: Provides project context for RAG; `AIAssistantPage` extracts the projectId from the route
- Grants: Grant data used in assistant queries and proposal generation
- Expenses: Feeds expense data to the eligibility checker and forecaster
- Documents: Uploads integrated with document storage; RAG documents share Firebase Storage
- Reports: AI generates report narratives, powered by the `reportGeneration` service
- Billing: AI usage tracked via `trackAiGeneration()` for credit/token counting
- Compliance: AI analyzes compliance status; outputs feed compliance alerts
### With External APIs

- Google Gemini (`gemini-1.5-pro`, `gemini-1.5-flash`) — Main LLM inference
- Gemini Vision — Document/receipt image analysis
- Firebase Cloud Functions (Genkit) — Server-side journal generation, RAG fallback
- Firebase Storage — Document file hosting
- Firestore — Document metadata, `processed-documents` collection
## Configuration

### Environment Variables

- `VITE_GEMINI_API_KEY` — Google Gemini API key (required)
### Feature Flags / Constants

- `USE_CLOUD_FUNCTIONS` (geminiClient.ts) — Route journal generation to the Cloud Function (Genkit)
- `MODEL_NAME` (geminiClient.ts) — Active Gemini model (defaults to `gemini-1.5-flash`; initialization sketched below)
- `MAX_SESSIONS` (chatHistoryService.ts) — Max chat sessions per user (default: 50)
- `STORAGE_KEY` (chatHistoryService.ts) — localStorage key for sessions
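A hedged sketch of what that client initialization could look like using the official `@google/generative-ai` SDK; the actual `geminiClient.ts` may differ:

```ts
import { GoogleGenerativeAI } from "@google/generative-ai";

const MODEL_NAME = "gemini-1.5-flash"; // documented default

// Vite exposes env vars prefixed with VITE_ on import.meta.env.
const apiKey = import.meta.env.VITE_GEMINI_API_KEY;
if (!apiKey) {
  throw new Error("VITE_GEMINI_API_KEY is not set");
}

const genAI = new GoogleGenerativeAI(apiKey);

// getAIClient equivalent: returns a model handle for inference calls.
export function getAIClient() {
  return genAI.getGenerativeModel({ model: MODEL_NAME });
}
```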
Models & Limits
- Primary Model:
gemini-1.5-flash(speed, cost-optimized) - Fallback:
gemini-1.5-pro(for complex reasoning if configured) - Vision Model:
gemini-1.5-pro-vision-128k(receipt/document analysis) - Token Limits: Context windows respect Gemini tier limits; large documents chunked to stay within limits
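An illustrative overlapping chunker in the spirit of `chunkText`; the ~1000-token default comes from the Performance section, while the overlap size and the 4-characters-per-token heuristic are assumptions:

```ts
const CHUNK_TOKENS = 1000;  // documented default chunk size
const OVERLAP_TOKENS = 100; // assumed overlap
const CHARS_PER_TOKEN = 4;  // crude token-length approximation

export function chunkText(text: string): string[] {
  const chunkChars = CHUNK_TOKENS * CHARS_PER_TOKEN;
  const stepChars = (CHUNK_TOKENS - OVERLAP_TOKENS) * CHARS_PER_TOKEN;
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += stepChars) {
    chunks.push(text.slice(start, start + chunkChars));
    if (start + chunkChars >= text.length) break; // last chunk reached the end
  }
  return chunks;
}
```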
## Testing

### Unit Tests (Vitest)

Located in `services/__tests__/`:

- `geminiRetry.test.ts` — Retry logic validation
- `geminiJsonParser.test.ts` — JSON extraction from LLM responses
- `ragPipeline.test.ts` — RAG orchestration flow
- `ranking.test.ts` — Chunk ranking/relevance scoring
- `contextAssembler.test.ts` — Context building for prompts
### E2E Tests (Playwright)
- AI chat flow (message send/receive, context switching)
- Document upload & processing
- RAG query with citations
- Session persistence across page reloads
## Common Development Tasks

### Adding a New AI-Powered Feature

1. Create a service function in `services/gemini*.ts` or `services/rag*.ts`
2. Add it to the GeminiService facade (`geminiServiceFacade.ts`) if using BaseService DI (see the sketch after this list)
3. Export it via the `index.ts` public API
4. Add tests to `services/__tests__/`
5. Integrate the UI in components that need the feature
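A hedged sketch of steps 1–2; the file and function are hypothetical, the facade's internal structure is assumed, and the `BaseService` import path is a guess:

```ts
import { BaseService } from "@/core"; // assumed export path
import { getAIClient } from "./geminiClient";

// Step 1: new service function (hypothetical file services/geminiSummary.ts).
export async function summarizeGrantTerms(terms: string): Promise<string> {
  const model = getAIClient();
  const result = await model.generateContent(`Summarize these grant terms:\n${terms}`);
  return result.response.text();
}

// Step 2: expose it on the facade so DI consumers can reach it.
export class GeminiService extends BaseService {
  summarizeGrantTerms(terms: string) {
    return summarizeGrantTerms(terms);
  }
}
```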
### Adding Document Types / RAG Metadata

1. Extend the extraction logic in `ragMetadata.ts` (deadlines, rules, etc.)
2. Update the Cloud Function (if using server-side processing)
3. Test extraction with sample documents
4. Document type mappings in `defaultPrompts.ts` or feature config
### Adding Prompt Templates

1. Update `defaultPrompts.ts` — add to the `defaultPrompts[]` array (entry shape sketched below)
2. Category: Choose from `'budget'`, `'compliance'`, `'reporting'`, `'general'`, `'custom'`
3. Icon: Optional emoji or Lucide icon name
4. Test: Via UI category navigation
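A hypothetical entry shape, inferred from the fields this section mentions (title, category, icon); the real template type may differ:

```ts
interface PromptTemplate {
  id: string;
  title: string;
  description: string;
  text: string;
  category: "budget" | "compliance" | "reporting" | "general" | "custom";
  icon?: string; // emoji or Lucide icon name
}

export const defaultPrompts: PromptTemplate[] = [
  {
    id: "budget-burn-rate",
    title: "Burn-rate summary",
    description: "Summarize spend against the approved budget",
    text: "Summarize this project's spend against its approved budget and flag overruns.",
    category: "budget",
    icon: "PiggyBank", // Lucide icon name
  },
];
```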
## Performance Considerations

- Chunking: Large documents are split with overlaps (default: ~1000 tokens) to stay within context windows
- Caching: RAG embeddings cached in-memory to avoid regeneration
- Lazy Loading: Components use `React.lazy()` for code splitting
- Streaming (future): Consider streaming Gemini responses for real-time UX
- Rate Limiting: Implement backoff for API rate limits (see `geminiRetry.ts` and the sketch after this list)
- Session Limit: Max 50 chat sessions per user (localStorage optimization)
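A minimal backoff sketch in the spirit of `callWithRetry`; the attempt count and delays are assumptions, not the values in `geminiRetry.ts`:

```ts
async function callWithRetry<T>(fn: () => Promise<T>, maxAttempts = 3): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === maxAttempts - 1) break; // out of retries
      // Exponential backoff with jitter: ~1s, ~2s, ~4s, ...
      const delayMs = 1000 * 2 ** attempt + Math.random() * 250;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}
```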
Security & Privacy
- No Sensitive Data in Context: Credentials, passwords, financial account numbers excluded from prompts
- Response Filtering:
responseSafety.tsvalidates outputs - Storage: Chat sessions stored in browser localStorage (client-side, not synced)
- PII Handling: User/project data anonymized in bulk AI operations if applicable
- API Keys: Gemini key stored in
.env.local, never exposed to client (via Cloud Function in production)
## Known Limitations
- Chat Sessions: Stored client-side (localStorage) — not synced across devices/browsers
- Document Size: Very large PDFs may require chunking tuning; OCR accuracy depends on image quality
- Context Window: Gemini context limits may truncate very long document sets in RAG
- Real-time Collaboration: Chat is single-user; no multi-user chat synchronization
- Prompt Customization: Limited to text; no template variables or conditional logic yet
## Related Documentation

- Intelligence Platform (`docs/product/features/intelligence-platform.md`) — Cross-org anonymized benchmarks, separate from the AI feature
- Document Brain (`docs/product/features/document-brain.md`) — Document classification, versioning, workflow; integrates with AI for analysis
- EventBus Architecture (`docs/engineering/architecture/base-service-and-eventbus.md`) — Service coordination (GeminiService/RAG extend BaseService)
- Firebase Integration (`docs/engineering/architecture/firebase.md`) — Storage, Firestore, Cloud Functions setup
- API Reference (`docs/engineering/api-reference/`) — Service method signatures
## Version History
| Version | Date | Changes |
|---|---|---|
| 1.0 | 2026-03-17 | Initial comprehensive README; documented all services, components, data model, and event catalog |