Neuronia: Building a Mini On-Device LLM for Brain Training
A Case Study by ViRan Digital Labs
Discover how a father-daughter vision became a comprehensive cognitive platform with 88,000+ adaptive questions running entirely offline.
The Origin Story
At eight years old, Vizhinee imagined a game. Not just any game—a game that would challenge your mind in ways you didn't expect. Her father, inspired by her vision, turned that imagination into reality. What started as a weekend project between a father and daughter evolved into something far more ambitious: Neuronia, a comprehensive cognitive platform with 12 interconnected games, 88,000+ carefully crafted questions, and an adaptive machine learning engine that runs entirely on your device.
This isn't a typical brain training app. It's a laboratory for cognitive enhancement—built with privacy at its core, intelligence at its heart, and a vision of making adaptive learning accessible to everyone, forever, without selling your data.
The Challenge
Building Neuronia meant solving six interconnected challenges that most apps avoid:
100% Offline Capability
No internet? No problem. Every game, every question, every adaptive decision had to work completely offline without any server dependency.
Adaptive Intelligence Engine
Creating an on-device ML engine that learns from user behavior, adjusts difficulty, prevents boredom, and personalizes the experience without cloud processing.
88,000+ Questions at Scale
Generating, validating, and organizing tens of thousands of unique questions across multiple difficulty levels and cognitive domains.
12 Cohesive Games
Designing 12 distinct game mechanics that share a common adaptive engine while remaining fun, engaging, and cognitively diverse.
App Store Compliance
Meeting iOS and Google Play approval standards, privacy regulations (GDPR, CCPA, PDPA, COPPA), and content rating requirements simultaneously.
Zero Data Collection
Building all analytics, personalization, and intelligence locally without collecting, storing, or transmitting any user data whatsoever.
How Concepts Evolved
The journey from a single game concept to a comprehensive cognitive platform:
HowSmartRU
The original game concept focused on rapid recall and general knowledge, powered by the first generation of the adaptive engine.
BrainRain
Introduced memory and pattern recognition as a separate cognitive domain, with its own specialized questions and difficulty curve.
4 Cognitive Zones
Expanded to cover Knowledge, Memory, Logic, and Perception—each zone with unique mechanics and question pools.
12 Games
Further specialization within each cognitive zone, creating 12 distinct game experiences that share the unified adaptive engine.
CognitivLab Ensemble
The capstone: a composite profile system that aggregates scores across all 12 games, creating a holistic view of cognitive strengths.
The Adaptive Intelligence Engine
At the heart of Neuronia lies an on-device machine learning engine—a mini LLM for cognitive assessment that learns, adapts, and personalizes without the cloud.
How It Works
Rolling Accuracy Window
Like LLM context windows that track recent tokens, Neuronia maintains a rolling window of your last 20-50 answers. This local context enables real-time difficulty adjustment without any centralized data storage.
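The mechanism can be sketched in a few lines. This is an illustrative model, not Neuronia's actual engine (which presumably ships as native mobile code): the window size, accuracy band, and difficulty step are assumed values chosen for the example.

```python
from collections import deque

class RollingAccuracyWindow:
    """Sketch: recent answers drive the current difficulty level."""

    def __init__(self, size=30, target_low=0.6, target_high=0.8):
        self.answers = deque(maxlen=size)   # True = correct, False = wrong
        self.target_low = target_low        # below this: questions too hard
        self.target_high = target_high      # above this: questions too easy
        self.difficulty = 3                 # 1 (easiest) .. 10 (hardest)

    def record(self, correct):
        """Record one answer and return the adjusted difficulty."""
        self.answers.append(correct)
        accuracy = sum(self.answers) / len(self.answers)
        if accuracy > self.target_high:     # cruising -> step up
            self.difficulty = min(10, self.difficulty + 1)
        elif accuracy < self.target_low:    # struggling -> step down
            self.difficulty = max(1, self.difficulty - 1)
        return self.difficulty
```

Because the deque is bounded, old answers age out automatically, so the engine reacts to recent form rather than lifetime history—and the whole state fits in a few dozen booleans on-device.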
Difficulty Distribution
Similar to temperature controls in LLMs, the engine manages difficulty variance. It keeps you in the "flow zone"—challenging enough to learn, easy enough to succeed—by dynamically distributing question difficulty based on your performance.
Anti-Stagnation & Anti-Repetition
Just as LLMs penalize repetitive tokens, our engine prevents question repetition and detects when you're in a cognitive plateau. It automatically refreshes the question pool and adjusts difficulty when engagement drops.
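Both guards are cheap to implement locally. In this sketch, a bounded "recently asked" set blocks repeats, and a plateau check fires when the rolling accuracy has barely moved; the cooldown length and plateau thresholds are assumed values, not Neuronia's actual tuning.

```python
from collections import deque

class AntiStagnation:
    """Sketch: repetition guard plus plateau detection."""

    def __init__(self, cooldown=50, plateau_window=20, plateau_band=0.05):
        self.recent_ids = deque(maxlen=cooldown)       # last N question IDs
        self.accuracies = deque(maxlen=plateau_window) # accuracy samples
        self.plateau_band = plateau_band

    def allow(self, question_id):
        """True if this question hasn't been asked within the cooldown."""
        if question_id in self.recent_ids:
            return False
        self.recent_ids.append(question_id)
        return True

    def plateaued(self, rolling_accuracy):
        """True when accuracy has barely moved over the whole window."""
        self.accuracies.append(rolling_accuracy)
        if len(self.accuracies) < self.accuracies.maxlen:
            return False
        return max(self.accuracies) - min(self.accuracies) < self.plateau_band
```

When `plateaued` returns true, the engine can refresh the question pool or nudge difficulty, matching the anti-stagnation behaviour described above.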
11 Specialized Scoring Engines
Like domain-specific models in multi-task learning, Neuronia runs 11 parallel scoring engines that feed a composite profile. Each engine specializes in evaluating performance within its own cognitive domain.
CognitivLab Ensemble Scoring
The pinnacle: an ensemble aggregator that combines scores from all 12 games into a unified "CognitivLab" composite profile. Like model ensemble averaging in machine learning, it creates a more robust and reliable assessment of overall cognitive ability.
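At its simplest, ensemble averaging of this kind is a weighted mean over per-game scores. The game names and equal default weights below are illustrative assumptions; the actual CognitivLab aggregation may weight domains differently.

```python
def cognitivlab_composite(scores, weights=None):
    """Sketch: combine per-game scores (0-100) into one composite.

    `scores` maps game name -> score; `weights` optionally maps game
    name -> relative weight (equal weights by default).
    """
    weights = weights or {game: 1.0 for game in scores}
    total = sum(weights[game] for game in scores)
    return sum(scores[game] * weights[game] for game in scores) / total
```

As with model ensembles, averaging across 12 independent assessments damps the noise of any single session, which is why the composite profile is more stable than any one game's score.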
Zero Cloud Dependency
All computation happens locally on your device. No training data is sent anywhere. No user behavior is logged on servers. The engine learns about you for you, not for advertisers.
Question Database Architecture
How we generated 88,000+ unique, adaptive questions from just 25 parameterized templates:
The Parameterization Strategy
Rather than manually creating 88,000 questions, we designed 25 parameterized question templates. Each template contains variables for difficulty modifiers, time constraints, numeric parameters, and contextual variations. By systematically varying these parameters across different difficulty levels, cognitive domains, and question types, we generated over 88,000 unique questions from a compact, maintainable template system.
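The combinatorics are easy to see in miniature. This sketch crosses one template's parameter values to produce concrete questions; the template text and parameter grids are invented for illustration, but the same multiplication is how 25 templates can yield tens of thousands of questions.

```python
from itertools import product

def expand(template, params):
    """Sketch: cross all parameter values through one question template."""
    keys = list(params)
    return [
        {"text": template.format(**dict(zip(keys, combo))),
         "params": dict(zip(keys, combo))}
        for combo in product(*(params[k] for k in keys))
    ]

# One hypothetical template with three parameter slots:
questions = expand(
    "What is {a} + {b}? (answer within {secs}s)",
    {"a": range(2, 12), "b": range(2, 12), "secs": [5, 10, 15]},
)
```

Here a single template with 10 × 10 × 3 parameter values yields 300 unique questions; scaled across 25 templates with richer grids, the counts reach the 88,000+ range while the authored surface stays small and maintainable.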
Quizzeria Pool
38,000+ questions covering general knowledge, trivia, and rapid-fire recall. Multiple difficulty tiers ensure beginners and experts alike stay engaged.
Knowledge Domain
IQ Pool
11,000+ questions focused on logic, pattern recognition, and abstract reasoning. Designed to challenge analytical thinking and spatial visualization.
Logic Domain
BrainRain Elements
19,000+ questions spanning memory games, sequence prediction, and cognitive endurance. These elements are mixed and matched across multiple games.
Memory Domain
The distributed architecture means that each game can draw from multiple question pools, ensuring variety while maintaining thematic coherence. This design allows us to reuse question elements across games while the adaptive engine ensures each user sees a truly personalized sequence.
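A sketch of this pool-sharing idea: each game declares its source pools, and a session samples across them without repeats. The pool names mirror the case study, but the game names, pool contents, and selection logic are illustrative assumptions.

```python
import random

# Toy stand-ins for the real question pools:
POOLS = {
    "quizzeria": [f"quiz-{i}" for i in range(100)],
    "iq":        [f"iq-{i}" for i in range(100)],
    "brainrain": [f"br-{i}" for i in range(100)],
}

# Hypothetical games, each drawing from more than one pool:
GAME_SOURCES = {
    "rapid_recall":  ["quizzeria", "brainrain"],
    "pattern_logic": ["iq", "brainrain"],
}

def build_session(game, n, rng=None):
    """Sketch: sample n distinct questions across the game's pools."""
    rng = rng or random.Random()
    candidates = [q for pool in GAME_SOURCES[game] for q in POOLS[pool]]
    return rng.sample(candidates, n)
```

Because pools are shared rather than duplicated per game, a question authored once can surface in several games, while the adaptive engine's anti-repetition guard keeps any one player's sequence fresh.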
Quality Assurance
Reliability and correctness are non-negotiable in cognitive assessment:
Test Suites
Comprehensive test coverage across all game mechanics, adaptive engine logic, and question database integrity.
Test Cases
Covering unit tests, integration tests, end-to-end scenarios, and edge case handling. Every critical path tested.
Critical Bugs Fixed
Identified and resolved during beta testing, preventing any production impact. Each fix hardened the platform.
Crashes Prevented
Memory leaks, race conditions, and edge cases handled through rigorous testing and defensive programming practices.
App Store Journey
Taking Neuronia from development to global distribution:
iOS Submission
Submitted to the Apple App Store with full compliance to Apple's Human Interface Guidelines, performance requirements, and privacy standards. Clean binary, optimized assets, and seamless onboarding.
In Review
Google Play Preparation
Prepared for Google Play Store with Android-specific optimizations, minimum SDK compliance, and material design patterns. Ready for launch pending final configuration.
Ready
Privacy Compliance
Full compliance with GDPR, CCPA, PDPA, and COPPA regulations. Privacy policy transparent and user-friendly. No third-party trackers. No data sharing.
Certified
Content Rating
Educational content designation across all app stores. No ads, no in-app purchases with dark patterns, no exploitative mechanics. Pure learning.
Approved
Lessons Learned
Key insights from building Neuronia:
- Privacy is a Feature: Making zero-data collection the default didn't compromise functionality—it forced better design. Every capability had to prove its worth without relying on behavioral data.
- On-Device ML is Practical: You don't need servers and cloud infrastructure for intelligent personalization. Local machine learning is faster, more private, and more reliable.
- Question Quality Scales Non-Linearly: Going from 10K to 88K questions taught us that question diversity matters more than pure volume. Well-designed parameterized templates beat manual authoring.
- Adaptive Engines Prevent Boredom: Static difficulty curves fail. Players disengage. The rolling accuracy window and anti-stagnation logic were game-changers for long-term engagement.
- Test Early, Test Often: Cognitive assessment is unforgiving. A subtle bug in scoring affects user trust. 361 test cases weren't overkill—they were essential.
- Composition Over Monoliths: Designing 11 specialized scoring engines that feed into an ensemble was more maintainable than one monolithic scoring system. Domain specialization scales.
- Offline-First Simplifies Everything: No network latency, no server downtime, no sync complexity. Offline-first design led to a more robust, faster, and more enjoyable user experience.