In this webinar, Senior Product Manager Shriram Sankaran demonstrated how to build intelligent, natural-language-driven map experiences inside Blazor applications without relying on geospatial APIs, external databases, or complex GIS knowledge. The session focused on combining Azure OpenAI’s generative capabilities with Syncfusion Blazor components to generate location data dynamically and render it on an interactive map.
If you missed the webinar or would like to watch it again, the recording is available on our YouTube page and is also embedded below.
To follow along, you’ll need:
The core of the demo involved creating a Blazor application that:
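The webinar's demo was built in Blazor with C#; as a rough, language-agnostic sketch of the core pipeline (prompt the model for structured JSON, parse and validate the reply, then bind the records to map markers), here is a minimal Python illustration. The function names and the prompt wording are illustrative assumptions, not code from the demo.

```python
import json

def build_location_prompt(query: str) -> str:
    # Ask the model to reply with strict JSON so the result can be
    # bound directly to a map marker data source.
    return (
        "Return a JSON array of objects with 'name', 'latitude' and "
        f"'longitude' fields for: {query}. Respond with JSON only."
    )

def parse_locations(model_reply: str) -> list[dict]:
    # The model's reply arrives as plain text; parse it and keep only
    # records that carry the fields the map layer needs.
    records = json.loads(model_reply)
    return [r for r in records
            if {"name", "latitude", "longitude"} <= r.keys()]

# A reply in the shape the prompt above requests:
sample = '[{"name": "Paris", "latitude": 48.8566, "longitude": 2.3522}]'
markers = parse_locations(sample)
```

In the Blazor version shown in the session, the parsed records play the role of the marker data source that the Syncfusion Maps component renders.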
Q1: Azure OpenAI only or OpenAI too?
A: This demonstration focuses on integrating specifically with Azure OpenAI, not the public OpenAI API. Azure OpenAI offers enterprise-grade security, legal compliance, and regional deployment options, which is why it’s used in this example. If you prefer to use OpenAI’s public API, you would need to modify the code to call OpenAI’s endpoints and manage authentication accordingly.
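At the HTTP level, the two services differ mainly in the endpoint shape and the authentication header: the public OpenAI API uses a fixed host with a `Bearer` token, while Azure OpenAI uses a per-resource host, the deployment name in the path, an `api-version` query parameter, and an `api-key` header. A minimal Python sketch of the two request shapes (resource, deployment, and API-version values are placeholders):

```python
import urllib.request

def public_openai_request(api_key: str) -> urllib.request.Request:
    # Public OpenAI: fixed host, Bearer-token authentication.
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions", method="POST")
    req.add_header("Authorization", f"Bearer {api_key}")
    return req

def azure_openai_request(resource: str, deployment: str, api_key: str,
                         api_version: str = "2024-02-01") -> urllib.request.Request:
    # Azure OpenAI: per-resource host, deployment name in the path,
    # api-version query string, and an api-key header instead of Bearer.
    url = (f"https://{resource}.openai.azure.com/openai/deployments/"
           f"{deployment}/chat/completions?api-version={api_version}")
    req = urllib.request.Request(url, method="POST")
    req.add_header("api-key", api_key)
    return req
```

Switching the demo from Azure OpenAI to the public API therefore comes down to changing the URL construction and the auth header, whether you issue the requests yourself or let an SDK do it for you.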
Q2: Hi, guys. What’s the major difference between this vs. Google Maps?
A: Syncfusion Blazor Maps is a UI component designed for data visualization in web and desktop applications. It is primarily used to render GeoJSON maps, integrate map providers, and display custom map layers for thematic or analytical purposes.
Google Maps, on the other hand, is a comprehensive mapping service offering global coverage, real-time navigation, traffic data, satellite imagery, and APIs for geolocation and routing. It serves as a map provider that can be displayed within Syncfusion Maps for additional customization and enhanced data visualization.
Q3: Are there any plans for an MCP server for Syncfusion docs? I was hoping for an MCP server to use with VS Code Copilot, similar to the MS Learn MCP server.
A: Yes, we already provide an MCP server that works with multiple AI coding assistants. Please refer to the link below for more information.