The following article is drawn from our conversations with leading industry voices across Latin America, featured in our State of AI 2025 report. In this edition, we highlight the insights of Brenda Olivas, Senior Solutions Architect at NVIDIA.
"Training models with local data helps them better understand how people in the region actually speak and interact." - Brenda Olivas
Limited access to scalable compute and the need to upskill local teams remain major blockers in Latin America, even as governments in Chile and Mexico fund supercomputing “AI Factory” centers and sovereign AI laws unlock infrastructure funding. Petrobras, a strategic NVIDIA account, leverages these hubs for reservoir simulation, running hybrid cloud-plus-on-premises setups.
English-centric models often stumble in Spanish and Portuguese, missing local expressions and cultural context. Initiatives like LATAM-GPT, trained on government archives, public data, and other regional sources, aim to fix this by delivering responses that reflect how people in the region speak. NVIDIA supports these efforts with frameworks like NeMo, infrastructure optimized for large-scale AI workloads, and an ecosystem of partners that help accelerate development and deployment across the region. Fine-tuning for sectors like healthcare and customer support is growing, yet progress depends on gathering quality regional data, building evaluation suites that mirror local needs, and sustaining the compute required for continual improvement.
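To make the fine-tuning step concrete, below is a minimal sketch of parameter-efficient adaptation of a base model on regional Spanish and Portuguese text. It uses Hugging Face Transformers, Datasets, and PEFT as a stand-in rather than NVIDIA's NeMo tooling, and the base model name and `regional_corpus.jsonl` dataset are placeholders, not references to LATAM-GPT's actual data or pipeline.

```python
# Minimal sketch: LoRA fine-tuning of a causal LM on a regional text corpus.
# The base model name and "regional_corpus.jsonl" are hypothetical placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "your-org/multilingual-base-model"  # placeholder base checkpoint

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Attach small LoRA adapters so only a fraction of the weights are trained,
# which keeps compute requirements modest where GPU access is limited.
lora = LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"
)
model = get_peft_model(model, lora)

# Hypothetical regional corpus: one JSON object per line with a "text" field
# holding Spanish or Portuguese passages (public records, news, dialogue).
dataset = load_dataset("json", data_files="regional_corpus.jsonl", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="regional-lora",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=50,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("regional-lora")  # saves only the adapter weights
```

The same pattern extends to the sector-specific work the article mentions: swapping in a healthcare or customer-support corpus changes only the data file, while the evaluation suites that mirror local needs would sit downstream of this training loop.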