Technology
Technical Advantages
Rust-Based High-Performance Engine
Our text-processing engine, built in Rust, delivers ultra-low-latency performance and excellent battery efficiency through native execution.
AWS Managed Infrastructure
All server infrastructure runs on AWS managed services, providing flexibility, scalability, and high availability.
Security & Privacy
User data is protected through on-device FST processing and PII filtering.
Core Technology
Intelligent algorithms engineered to deliver the best typing experience
Touch Correction Model
Our touch correction engine combines Gaussian distance modeling with per-key hit probability to infer the intended key from imprecise taps. User-specific typing patterns are continuously refined through Bayesian calibration, dynamically adjusting key boundaries for each individual.
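The core of this idea can be sketched in a few lines of Rust. The key layout, priors, and sigma value below are illustrative assumptions, not Keyfred's actual model: each tap is scored by a Gaussian likelihood around every key center, weighted by that key's hit probability, and the highest-posterior key wins.

```rust
// Hypothetical key layout; `prior` stands in for the per-key hit probability.
struct Key {
    label: char,
    cx: f64,    // key-center x
    cy: f64,    // key-center y
    prior: f64, // per-key hit probability (e.g. from a language model)
}

/// Gaussian likelihood of a tap at (tx, ty) given a key center.
fn likelihood(key: &Key, tx: f64, ty: f64, sigma: f64) -> f64 {
    let d2 = (tx - key.cx).powi(2) + (ty - key.cy).powi(2);
    (-d2 / (2.0 * sigma * sigma)).exp()
}

/// Posterior-maximizing key for a tap: prior * Gaussian likelihood.
fn decode_tap(keys: &[Key], tx: f64, ty: f64, sigma: f64) -> char {
    keys.iter()
        .max_by(|a, b| {
            let pa = a.prior * likelihood(a, tx, ty, sigma);
            let pb = b.prior * likelihood(b, tx, ty, sigma);
            pa.partial_cmp(&pb).unwrap()
        })
        .map(|k| k.label)
        .unwrap()
}

fn main() {
    let keys = vec![
        Key { label: 'q', cx: 0.0, cy: 0.0, prior: 0.02 },
        Key { label: 'w', cx: 1.0, cy: 0.0, prior: 0.10 },
        Key { label: 'e', cx: 2.0, cy: 0.0, prior: 0.30 },
    ];
    // A tap midway between 'w' and 'e': the higher-prior 'e' wins.
    println!("{}", decode_tap(&keys, 1.5, 0.1, 0.5));
}
```

Bayesian calibration amounts to updating each key's center and prior from observed taps over time, which shifts the effective key boundaries per user.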
High-Performance Word Suggestion via FST
We compile word dictionaries from publicly available Korean language corpora into Finite State Transducers (FSTs), enabling lookups in O(n) time in the length of the query. Combined with a Levenshtein automaton for fuzzy matching and weighted shortest-path search, Keyfred delivers accurate word suggestions in real time, all processed natively on-device.
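A minimal sketch of the lookup behavior: a bounded edit-distance scan over a weighted word list stands in for the compiled FST intersected with a Levenshtein automaton, and the tie-break on corpus weight mimics weighted shortest-path search. The word list and weights are illustrative, not Keyfred's data.

```rust
/// Classic dynamic-programming Levenshtein distance.
fn edit_distance(a: &str, b: &str) -> usize {
    let a: Vec<char> = a.chars().collect();
    let b: Vec<char> = b.chars().collect();
    let mut prev: Vec<usize> = (0..=b.len()).collect();
    for (i, &ca) in a.iter().enumerate() {
        let mut cur = vec![i + 1];
        for (j, &cb) in b.iter().enumerate() {
            let cost = if ca == cb { 0 } else { 1 };
            cur.push((prev[j] + cost).min(prev[j + 1] + 1).min(cur[j] + 1));
        }
        prev = cur;
    }
    *prev.last().unwrap()
}

/// Best suggestion within `max_dist` edits; ties are broken by corpus
/// weight (lower = more frequent), mimicking weighted shortest-path search.
fn suggest<'a>(dict: &'a [(&'a str, u32)], input: &str, max_dist: usize) -> Option<&'a str> {
    dict.iter()
        .filter_map(|&(w, weight)| {
            let d = edit_distance(input, w);
            (d <= max_dist).then(|| (d, weight, w))
        })
        .min_by_key(|&(d, weight, _)| (d, weight))
        .map(|(_, _, w)| w)
}

fn main() {
    let dict = [("hello", 1), ("help", 5), ("held", 40)];
    // "hello", "help", and "held" are all one edit away;
    // the lowest-weight (most frequent) word is suggested.
    println!("{:?}", suggest(&dict, "helo", 1));
}
```

A real FST avoids scanning the dictionary at all: the Levenshtein automaton and the transducer are intersected lazily, so only paths within the edit bound are ever explored.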
Personalized Dictionary with Privacy
By learning from user input patterns, we build a personalized FST word dictionary on the server and deliver it to the device. All learning data passes through PII filtering and anonymization to ensure no personally identifiable information is stored or exposed.
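The filtering step can be illustrated with a toy token filter. The heuristics below (an `@` for emails, long digit runs for phone numbers) are assumptions for the sketch; production anonymization is far broader.

```rust
// Illustrative PII heuristics: drop tokens that look like emails or
// phone numbers before they reach the learning pipeline.
fn looks_like_pii(token: &str) -> bool {
    let digits = token.chars().filter(|c| c.is_ascii_digit()).count();
    token.contains('@') || digits >= 7
}

/// Keep only tokens safe to feed into the personalized dictionary.
fn filter_tokens<'a>(tokens: &[&'a str]) -> Vec<&'a str> {
    tokens.iter().copied().filter(|t| !looks_like_pii(t)).collect()
}

fn main() {
    let input = ["meet", "me", "at", "alice@example.com", "or", "010-1234-5678"];
    // The email address and phone number are dropped before learning.
    println!("{:?}", filter_tokens(&input));
}
```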
Context Engine Powered by Rust (BufferEngine)
Our Rust-built BufferEngine tracks and manages the user's input flow and context in real time. More than a simple text-correction tool, it understands the current context and reflects the surrounding flow to power a truly intelligent writing assistant, with native performance that keeps context-analysis overhead minimal.
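In spirit, such an engine maintains a live buffer of keystrokes and exposes a trailing context window for downstream analysis. The struct and API below are assumptions for illustration, not Keyfred's actual code (it assumes `max_context` of at least 1).

```rust
// Minimal sketch of a context buffer in the spirit of BufferEngine.
struct BufferEngine {
    text: String,
    max_context: usize, // how many trailing chars of context to expose
}

impl BufferEngine {
    fn new(max_context: usize) -> Self {
        BufferEngine { text: String::new(), max_context }
    }

    /// Track each keystroke as it arrives.
    fn push(&mut self, ch: char) {
        self.text.push(ch);
    }

    /// Track deletions so the context stays accurate.
    fn backspace(&mut self) {
        self.text.pop();
    }

    /// The trailing window an assistant would condition on.
    fn context(&self) -> &str {
        let start = self
            .text
            .char_indices()
            .rev()
            .nth(self.max_context - 1)
            .map(|(i, _)| i)
            .unwrap_or(0);
        &self.text[start..]
    }
}

fn main() {
    let mut buf = BufferEngine::new(5);
    for ch in "hello world".chars() {
        buf.push(ch);
    }
    buf.backspace();
    println!("{}", buf.context());
}
```

Keeping this state in native Rust means context extraction is a cheap slice operation rather than a round-trip through a managed runtime.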
Intelligent AI Service Routing
We analyze the characteristics and complexity of each task (correction, rewriting, translation) and dynamically route it to the optimal AI model. Our multi-model orchestration architecture maximizes both response quality and cost efficiency through per-task model selection.
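The routing decision reduces to a dispatch table over task kind and a complexity signal. The model names and the complexity threshold below are placeholders, not Keyfred's actual configuration.

```rust
// Hypothetical task taxonomy mirroring the features described above.
#[derive(Debug)]
enum Task {
    Correction,
    Rewrite,
    Translation,
}

/// Pick a model by task kind and a rough complexity score
/// (e.g. input length in characters).
fn route(task: &Task, complexity: usize) -> &'static str {
    match (task, complexity) {
        // Short corrections go to a small, fast model.
        (Task::Correction, c) if c < 200 => "small-fast-model",
        // Heavier corrections and all rewrites go to a larger model.
        (Task::Correction, _) | (Task::Rewrite, _) => "large-quality-model",
        // Translation always uses a translation-tuned model.
        (Task::Translation, _) => "translation-tuned-model",
    }
}

fn main() {
    println!("{}", route(&Task::Correction, 50));
    println!("{}", route(&Task::Rewrite, 50));
}
```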
Expert-Crafted Prompt Engineering
AI specialists have meticulously designed domain-specific prompts for each feature. By applying few-shot learning and chain-of-thought reasoning, we achieve deep contextual understanding and generate significantly higher-quality results than naive AI calls.
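The few-shot pattern itself is simple to show: the prompt interleaves an instruction, worked input/output examples, and finally the user's text. The instruction and example below are illustrative, not Keyfred's production prompts.

```rust
// One worked example for few-shot prompting.
struct Shot<'a> {
    input: &'a str,
    output: &'a str,
}

/// Build a prompt from a task instruction, few-shot examples,
/// and the user's text, ending where the model should continue.
fn build_prompt(instruction: &str, shots: &[Shot], user_input: &str) -> String {
    let mut p = String::from(instruction);
    p.push_str("\n\n");
    for s in shots {
        p.push_str(&format!("Input: {}\nOutput: {}\n\n", s.input, s.output));
    }
    p.push_str(&format!("Input: {}\nOutput:", user_input));
    p
}

fn main() {
    let shots = [Shot { input: "i has a apple", output: "I have an apple." }];
    let prompt = build_prompt("Fix the grammar of the input.", &shots, "she go home");
    println!("{}", prompt);
}
```

Chain-of-thought prompting extends the same structure by making each example's `output` include intermediate reasoning steps before the final answer.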
Caching Context for Maximum Performance
Leveraging Prompt Caching, we cache recurring system context and instructions, dramatically reducing token processing on consecutive requests. This minimizes Time to First Token (TTFT) and pushes AI response performance to its limits.
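The economics of prompt caching can be sketched with a toy cache: the stable system context is processed once, and repeat requests skip that work entirely. The names and the word-count stand-in for tokenization are assumptions; a real API caches the model's processed prefix, not a token count.

```rust
use std::collections::HashMap;

// Toy prompt cache: system prompt -> precomputed "token" count.
struct PromptCache {
    cache: HashMap<String, usize>,
    misses: usize,
}

impl PromptCache {
    fn new() -> Self {
        PromptCache { cache: HashMap::new(), misses: 0 }
    }

    /// Return the cached token count for a system prompt, computing it
    /// only on the first request (the expensive step a real API amortizes).
    fn tokens(&mut self, system_prompt: &str) -> usize {
        if let Some(&n) = self.cache.get(system_prompt) {
            return n; // cache hit: no re-processing
        }
        self.misses += 1;
        let n = system_prompt.split_whitespace().count(); // stand-in tokenizer
        self.cache.insert(system_prompt.to_string(), n);
        n
    }
}

fn main() {
    let mut cache = PromptCache::new();
    let sys = "You are a concise writing assistant.";
    cache.tokens(sys);
    cache.tokens(sys); // second request hits the cache
    println!("misses = {}", cache.misses);
}
```

Because the cached prefix never has to be reprocessed, the time before the first output token (TTFT) on repeat requests is dominated only by the new user text.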
Competitive Comparison
| Category | Keyfred | Grammarly | Typewise |
|---|---|---|---|
| Core Tech | Rust Core + FFI (Native) | Java/Kotlin (Managed) | Honeycomb Algorithm |
| Response Speed | Ultra-low latency (Rust) | Slight delay on analysis | Fast (simple typo focus) |
| Battery Efficiency | High efficiency (Native) | High consumption | Good |
| Key Features | Correction + 15 Tones + Translation | Grammar + Tone Analysis | Typo correction only |
| Security | On-device FST + PII Filtering | Cloud-based | On-device |