What Changed
Our keyword scoring system estimates search volume and competition difficulty for every keyword in our database. Until now, these estimates relied on calibration data that was several months old. We've refreshed it with April 2026 data, covering real search patterns from the App Store and Google Play.
Three algorithms were updated:
- Volume estimator — maps keyword popularity signals to estimated monthly search ranges more precisely
- Difficulty scorer — factors in market-specific competition data instead of a global average
- Brand detector — improved filtering to prevent brand keywords from inflating or deflating generic keyword scores
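To make the difficulty change concrete, here is a minimal sketch of scoring against a per-market competition baseline instead of one global average. Every name and number below (`MARKET_COMPETITION`, `GLOBAL_AVG_COMPETITION`, the `difficulty` formula) is hypothetical, chosen only to illustrate the idea, not the production model.

```python
# Hypothetical illustration -- not the actual scoring model.

GLOBAL_AVG_COMPETITION = 0.45  # old approach: one baseline for every market

# New approach: a competition baseline per market (illustrative values).
MARKET_COMPETITION = {"US": 0.62, "TR": 0.38, "DE": 0.51, "IT": 0.44, "IN": 0.57}

def difficulty(keyword_competition: float, market: str) -> float:
    """Scale a keyword's raw competition signal against its market baseline,
    returning a 0-100 difficulty score."""
    baseline = MARKET_COMPETITION.get(market, GLOBAL_AVG_COMPETITION)
    score = 100 * keyword_competition / (keyword_competition + baseline)
    return round(min(max(score, 0.0), 100.0), 1)
```

With a scheme like this, the same raw competition signal yields a different score in each market: a keyword normalized against Turkey's (lower, illustrative) baseline scores as harder than the same signal normalized against the US baseline.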
Updated Markets
This update covers five markets. Each market received its own calibration data rather than estimates extrapolated from a single global model.
| Market | Code | What Changed |
|---|---|---|
| 🇺🇸 United States | US | Volume baseline recalibrated |
| 🇹🇷 Turkey | TR | New calibration data added |
| 🇩🇪 Germany | DE | Difficulty scores updated |
| 🇮🇹 Italy | IT | Volume estimates refined |
| 🇮🇳 India | IN | High-volume keywords re-scored |
Why Your Scores May Look Different
If you're checking your keyword list today, you may notice scores that are higher, lower, or in a different range than before. This is intentional — not a bug.
The previous model was conservative, tending to underestimate volume for mid-tier keywords in non-US markets. The new calibration corrects for this. Practically, this means:
- Some keywords that looked low-volume may now show higher estimates
- Difficulty scores for saturated niches in TR and IN markets may be higher
- Brand keyword noise has been reduced, so generic terms should rank more cleanly
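As a rough sketch of why mid-tier estimates can shift, imagine a popularity signal mapped to a monthly-search estimate through a curve, with a correction applied to mid-tier keywords in non-US markets. The function, the exponential mapping, and the 1.4x correction factor are all hypothetical, chosen only to show the shape of the change described above.

```python
# Hypothetical illustration -- not the actual estimator.

def estimate_volume(popularity: int, market: str, *, recalibrated: bool = True) -> int:
    """Map a 0-100 popularity signal to a rough monthly-search estimate."""
    base = int(10 ** (popularity / 25))  # simple illustrative exponential curve
    if recalibrated and market != "US" and 30 <= popularity <= 70:
        # Mid-tier keywords in non-US markets were previously underestimated,
        # so the refreshed calibration lifts them (factor is made up).
        base = int(base * 1.4)
    return base

old = estimate_volume(50, "TR", recalibrated=False)  # 100
new = estimate_volume(50, "TR")                      # 140
```

The point of the sketch: a keyword whose signal and market were unchanged can still show a higher estimate after recalibration, which is exactly the behavior to expect when reviewing your list today.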
What Stays the Same
The core keyword explorer interface, rank tracking, and competitor monitoring are unchanged. This update is entirely about the accuracy of the underlying scores — not the features built on top of them.
Historical rank data is also preserved. You'll see updated scores when you look at keywords now, but your tracked positions and historical charts are untouched.
Share Your Feedback
Calibration is an ongoing process. We refine the model as more real-world data becomes available, and user feedback is one of our most valuable inputs.
If you see scores that don't match your expectations — either surprisingly high, surprisingly low, or just inconsistent with what you know about a keyword — please tell us. Use the feedback button inside the app or reply to this post.
We're actively monitoring the data and will continue iterating. The goal is scores you can trust when making real decisions about your app's metadata.