Technology

Technical Advantages

Rust-Based High-Performance Engine

Our text processing engine, built in Rust, delivers ultra-low latency and excellent battery efficiency through native execution.

AWS Managed Infrastructure

All server infrastructure is built on AWS managed services, providing flexibility, scalability, and high availability.

Security & Privacy

User data is protected through on-device FST processing and PII filtering.

Core Technology

Intelligent algorithms engineered to deliver the best typing experience

Touch Correction Model

Our touch correction engine combines Gaussian distance modeling with per-key hit probability to infer the intended key from imprecise taps. User-specific typing patterns are continuously refined through Bayesian calibration, dynamically adjusting key boundaries for each individual.

Gaussian Distance Model · Hit Probability Matrix · Bayesian Calibration · Dynamic Key Boundaries
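The correction model above can be sketched in a few lines. The key positions, noise parameter σ, and calibration weight below are illustrative assumptions, not the production values:

```python
import math

# Hypothetical key layout: key name -> (center_x, center_y); values are illustrative.
KEY_CENTERS = {"q": (0.05, 0.1), "w": (0.15, 0.1), "a": (0.1, 0.3), "s": (0.2, 0.3)}
SIGMA = 0.06  # assumed touch-noise standard deviation

def key_likelihoods(tap, priors=None):
    """Score each key for a tap point with an isotropic Gaussian distance model,
    weighted by per-key hit priors (e.g. from language-model frequency)."""
    priors = priors or {k: 1.0 for k in KEY_CENTERS}
    scores = {}
    for key, (cx, cy) in KEY_CENTERS.items():
        d2 = (tap[0] - cx) ** 2 + (tap[1] - cy) ** 2
        scores[key] = priors[key] * math.exp(-d2 / (2 * SIGMA ** 2))
    total = sum(scores.values())
    return {k: v / total for k, v in scores.items()}

def calibrate_center(center, taps, weight=0.1):
    """Nudge a key's learned center toward the user's observed taps --
    a simple stand-in for per-user Bayesian calibration of key boundaries."""
    cx, cy = center
    for tx, ty in taps:
        cx += weight * (tx - cx)
        cy += weight * (ty - cy)
    return (cx, cy)
```

A tap at (0.16, 0.11) lands closest to "w" and receives the highest probability; repeated off-center taps gradually shift that key's effective center toward the user's habit.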

High-Performance Word Suggestion via FST

We compile word dictionaries from publicly available Korean language corpora into Finite State Transducers (FST), enabling lookups in O(n) time in the length of the input. Combined with a Levenshtein automaton for fuzzy matching and weighted shortest-path search, Keyfred delivers accurate word suggestions in real time — all processed natively on-device.

Finite State Transducer · Levenshtein Automaton · Weighted Shortest Path · On-device Processing
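As a rough stand-in for this pipeline (a real FST intersects the Levenshtein automaton with the transducer instead of scanning a list), the matching and ranking behavior looks like:

```python
def levenshtein_within(a, b, max_dist):
    """Bounded edit distance: True if dist(a, b) <= max_dist.
    A Levenshtein automaton gives the same acceptance decision without
    re-scanning the whole dictionary; this DP is a simple stand-in."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        if min(cur) > max_dist:
            return False  # no suffix can bring the distance back under the bound
        prev = cur
    return prev[-1] <= max_dist

def suggest(query, weighted_dict, max_dist=1, k=3):
    """Rank fuzzy-matched candidates by weight -- the role the weighted
    shortest-path search plays over the FST."""
    hits = [(w, wt) for w, wt in weighted_dict.items()
            if levenshtein_within(query, w, max_dist)]
    return [w for w, _ in sorted(hits, key=lambda x: -x[1])[:k]]
```

For a mistyped "helo" against a toy weighted dictionary, both "hello" and "help" are within one edit, and the heavier-weighted candidate ranks first.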

Personalized Dictionary with Privacy

By learning from user input patterns, we build a personalized FST word dictionary on the server and deliver it to the device. All learning data passes through PII filtering and anonymization to ensure no personally identifiable information is stored or exposed.

Server-side FST Build · PII Filtering · Data Anonymization · Personalized Recommendations
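A minimal sketch of the PII-filtering step applied to learning data before it leaves the device; the two regex patterns and placeholder tokens are illustrative only, not the full production filter:

```python
import re

# Illustrative patterns; a production filter covers many more PII classes.
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\b\d{2,3}-\d{3,4}-\d{4}\b"), "<PHONE>"),
]

def scrub(text):
    """Replace PII spans with placeholder tokens so no personally
    identifiable information reaches the server-side dictionary build."""
    for pattern, token in PII_PATTERNS:
        text = pattern.sub(token, text)
    return text
```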

Context Engine Powered by Rust (BufferEngine)

Our Rust-built BufferEngine tracks and manages the user's input flow and context in real time. More than a simple text correction tool, it deeply understands the current context and reflects the surrounding flow to power a truly intelligent writing assistant — with native performance that minimizes context analysis overhead.

Rust BufferEngine · Real-time Context Tracking · Context-aware AI · Native Performance
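A toy sketch of the buffer's data flow (the engine itself is Rust; the class and method names here are hypothetical, chosen only to illustrate tracking input and exposing a context window):

```python
from collections import deque

class BufferEngine:
    """Tracks recent input events and exposes the surrounding text window
    that a correction or rewrite request would see."""

    def __init__(self, max_events=256):
        # Bounded buffer: old events fall off automatically.
        self.events = deque(maxlen=max_events)

    def push(self, char):
        self.events.append(char)

    def backspace(self):
        if self.events:
            self.events.pop()

    def context(self, window=32):
        """Return the most recent `window` characters of input."""
        return "".join(self.events)[-window:]
```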

Intelligent AI Service Routing

We analyze the characteristics and complexity of each task — correction, rewriting, translation — and dynamically route it to the optimal AI model. Our multi-model orchestration architecture maximizes both response quality and cost efficiency through per-task model selection.

Dynamic Model Routing · Task-based Orchestration · Multi-model Architecture · Cost-Quality Optimization
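The routing decision can be sketched as a lookup over task type and a complexity signal; the model names, tiers, and length-based threshold below are assumptions for illustration, not Keyfred's actual configuration:

```python
# Hypothetical routing table: cheap models for simple work, large models
# only where the quality gain justifies the cost.
ROUTES = {
    "correction":  {"simple": "small-fast-model", "complex": "mid-model"},
    "rewriting":   {"simple": "mid-model",        "complex": "large-model"},
    "translation": {"simple": "mid-model",        "complex": "large-model"},
}

def route(task, text, complexity_threshold=120):
    """Pick a model per task type; input length stands in for a real
    complexity estimator."""
    tier = "complex" if len(text) > complexity_threshold else "simple"
    return ROUTES[task][tier]
```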

Expert-Crafted Prompt Engineering

AI specialists have meticulously designed domain-specific prompts for each feature. By applying Few-shot Learning and Chain-of-Thought reasoning techniques, we achieve deep contextual understanding and generate results of significantly higher quality compared to naive AI calls.

Domain-specific Prompts · Few-shot Learning · Chain-of-Thought · Context-aware Reasoning
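A sketch of how such a prompt might be assembled — a domain instruction, few-shot example pairs, and an optional chain-of-thought directive; the wording and structure are placeholders, not the production prompts:

```python
def build_prompt(task_instruction, examples, user_input, chain_of_thought=True):
    """Assemble a few-shot prompt for one domain-specific feature."""
    parts = [task_instruction]
    # Few-shot examples anchor the expected input/output format.
    for src, dst in examples:
        parts.append(f"Input: {src}\nOutput: {dst}")
    # Chain-of-thought directive encourages stepwise reasoning.
    if chain_of_thought:
        parts.append("Reason step by step about tone and context before answering.")
    parts.append(f"Input: {user_input}\nOutput:")
    return "\n\n".join(parts)
```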

Caching Context for Maximum Performance

Leveraging Prompt Caching, we cache recurring system context and instructions, dramatically reducing token processing on consecutive requests. This minimizes Time to First Token (TTFT) and pushes AI response performance to its limits.

Prompt Caching · TTFT Optimization · Token Efficiency · Consecutive Request Acceleration
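The caching idea in bookkeeping form: identical system context is processed once and reused, so consecutive requests pay only for the new user text. Real provider-side prompt caching operates on token prefixes; this sketch models only the hit/miss accounting:

```python
import hashlib

class PromptCache:
    """Cache keyed by the system-context hash; the stored value stands in
    for the precomputed prefix state a real cache would reuse."""

    def __init__(self):
        self._cache = {}
        self.hits = 0
        self.misses = 0

    def prefix_state(self, system_prompt):
        key = hashlib.sha256(system_prompt.encode()).hexdigest()
        if key in self._cache:
            self.hits += 1   # repeat request: prefix tokens skipped
        else:
            self.misses += 1  # first sight: full prefix processed once
            self._cache[key] = f"processed:{key[:8]}"
        return self._cache[key]
```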

Competitive Comparison

| Category | Keyfred | Grammarly | Typewise |
| --- | --- | --- | --- |
| Core Tech | Rust Core + FFI (Native) | Java/Kotlin (Managed) | Honeycomb Algorithm |
| Response Speed | Ultra-low latency (Rust) | Slight delay on analysis | Fast (simple typo focus) |
| Battery Efficiency | High efficiency (Native) | High consumption | Good |
| Key Features | Correction + 15 Tones + Translation | Grammar + Tone Analysis | Typo correction only |
| Security | On-device FST + PII Filtering | Cloud-based | On-device |