Experience
As a Data Analyst at TELUS International, I evaluated and annotated real-world user queries to support Apple's search engine. Over the course of 8 months, I completed and quality-assessed over 1,500 individual tasks, ensuring that the datasets feeding machine learning pipelines were clean, consistent, and contextually relevant. This involved reviewing hundreds of search results weekly, applying strict quality standards, and identifying dozens of anomalies and edge cases that required escalation or guideline interpretation, all while balancing full-time studies, a teaching assistant role, and immigration responsibilities.
I regularly documented issues through structured ticketing systems, contributing detailed feedback on recurring patterns and tool limitations that helped refine internal annotation workflows. Working autonomously in a fully remote environment, I relied on internal documentation and guidelines that were updated multiple times per month, adapting quickly to changing task requirements. Over time, I developed a strong sense of data reliability and consistency, sustaining a high accuracy rating under internal performance benchmarks.
The role sharpened my understanding of the link between raw data and model outcomes, particularly in large-scale, real-world systems serving millions of users globally. It also gave me hands-on exposure to data governance principles and deepened my appreciation for reproducibility, traceability, and auditability across the lifecycle of AI systems. These insights continue to shape my approach to responsible data science and model evaluation.