# How we evaluate eSIM providers (methodology & transparency)
This page documents the methodology used to evaluate eSIM providers and the transparency principles that govern those evaluations.
It explains the evaluation criteria, evidence sources, and weighting logic without ranking providers or making recommendations.
It exists to define process and accountability, not to influence purchasing decisions.
## Purpose of This Methodology
This methodology exists to ensure that any evaluation of eSIM providers is consistent, explainable, and evidence-based.
It defines how information is gathered, interpreted, and compared across providers.
## Separation of Roles
Methodology documentation is kept separate from comparisons and recommendations.
This separation prevents evaluation criteria from being adjusted to fit desired outcomes.
## Evaluation Dimensions
| Dimension | What is evaluated | What is not evaluated |
|---|---|---|
| Coverage & access | Network reach, country availability | Marketing claims without evidence |
| Performance behaviour | Speed consistency, congestion handling | Peak speed claims alone |
| Policy transparency | Disclosure of limits, fair usage, throttling | Assumed or implied permissions |
| Cost predictability | Clarity of pricing and expiry rules | Headline price without context |
| Operational reliability | Activation success, profile stability | Isolated user anecdotes |
## Evidence Sources Used
| Source type | How it is used | Limitations |
|---|---|---|
| Provider documentation | Policy definitions and stated limits | May be incomplete or ambiguous |
| Network behaviour observation | Real-world performance patterns | Subject to time and location variance |
| Regulatory disclosures | Compliance and access constraints | Often updated irregularly |
| Device-level testing | Installation, switching, hotspot behaviour | Limited to tested configurations |
## How Criteria Are Weighted
Criteria are weighted based on their impact on real-world usability and risk.
Behavioural outcomes (what users experience) are prioritised over theoretical specifications.
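As an illustration only, this kind of weighting can be sketched as a weighted sum over per-dimension scores. The dimension keys, weights, and scores below are hypothetical placeholders, not the actual values used by this methodology; the only idea carried over from the text is that behavioural dimensions receive more weight than specification-style ones.

```python
import math

# Hypothetical weights (sum to 1.0). Behavioural dimensions such as
# performance and reliability are weighted at least as heavily as
# specification-derived ones, reflecting the priority stated above.
WEIGHTS = {
    "coverage_access": 0.25,
    "performance_behaviour": 0.25,
    "policy_transparency": 0.20,
    "cost_predictability": 0.15,
    "operational_reliability": 0.15,
}

def weighted_score(scores: dict) -> float:
    """Combine per-dimension scores (e.g. on a 0-10 scale) into one number.

    A missing dimension raises an error, so an unknown is surfaced
    explicitly rather than silently counted as zero -- mirroring the
    "disclosure clarity" principle of stating unknowns.
    """
    missing = set(WEIGHTS) - set(scores)
    if missing:
        raise ValueError(f"missing dimension scores: {sorted(missing)}")
    return sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)

# Entirely made-up example scores for one provider.
example = {
    "coverage_access": 8.0,
    "performance_behaviour": 7.0,
    "policy_transparency": 9.0,
    "cost_predictability": 6.0,
    "operational_reliability": 7.5,
}
print(weighted_score(example))
```

Because the weights sum to 1.0, the result stays on the same scale as the inputs, which keeps scores comparable across providers evaluated with the same criteria.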
## What This Methodology Does Not Do
- It does not guarantee identical results over time.
- It does not assume static policies or performance.
- It does not rely on undisclosed partnerships or incentives.
## Transparency Principles
| Principle | How it is applied |
|---|---|
| Explicit criteria | All evaluation dimensions are documented |
| Evidence traceability | Claims are linked to observable behaviour or documentation |
| Change acknowledgment | Evaluations may be revised as policies change |
| Disclosure clarity | Unknowns and ambiguities are stated explicitly |
## Update and Review Cycle
Evaluations are reviewed periodically and when material changes are identified.
Methodology changes are documented separately from outcome changes.
## Interpretation Notes (Neutral)
This methodology defines how evaluations are conducted, not what conclusions must be reached.
Different users may weight criteria differently based on their needs.
## Frequently Asked Questions
Question: Is this methodology the same as a ranking system?
Answer: No. It defines evaluation criteria but does not rank providers on its own.
Question: Are providers evaluated using marketing claims?
Answer: No. Claims are assessed against documentation and observed behaviour.
Question: Can evaluation results change over time?
Answer: Yes. Policies, performance, and access conditions can change.
Question: Are all providers evaluated using the same criteria?
Answer: Yes. The same dimensions and principles are applied consistently.
Question: Does this methodology recommend a provider?
Answer: No. Recommendations appear only in comparison pages.
Question: Is this methodology publicly documented?
Answer: Yes. This page exists to document it transparently.