At Microsoft Build 2023, Azure Cosmos DB joins the AI toolchain


By Simon Bisson, Columnist, InfoWorld

Microsoft has introduced a number of new tools to make it easier to customize and focus the output of GPT-based AI models. Cosmos DB plays an important role.

Microsoft CEO Satya Nadella described the launch of large language AI models like GPT-4 as "a Mosaic moment," comparable to the launch of the first graphical web browser. Unlike that original Mosaic moment, when Microsoft was late to the browser war and forced to buy its first web development tools, the company has taken a leading position in AI, quickly adopting AI technologies across its enterprise and consumer products.

A key to understanding Microsoft is how it sees itself as a platform company. That internal culture drives it to deliver tools and technologies for developers, and foundations for developers to build on. For AI, it starts with the Azure OpenAI APIs and extends to tools such as Prompt Engine and Semantic Kernel, which simplify the development of custom experiences on top of OpenAI's Transformer-based neural networks.

As such, much of this year's Microsoft Build developer event focused on how you can use these tools to build your own AI-powered applications, following the "Copilot" model of assistive AI tools that Microsoft has rolled out across its Edge browser and Bing search engine, in GitHub and its developer tools, and for enterprises in Microsoft 365 and the Power Platform. We also learned where Microsoft plans to fill gaps in its platform and make its tools a one-stop shop for AI development.

LLMs are vector processing tools

At the heart of a large language model like OpenAI's GPT-4 is a giant neural network that operates on a vector representation of language, searching for vectors similar to those that describe its prompts and constructing and refining the optimal path through a multidimensional semantic space that yields an understandable output. It's similar to the approach search engines use, but where search is about finding vectors that match those that answer your queries, LLMs expand the initial set of semantic tokens that make up your initial prompt (and the prompt used to set the context of the LLM in use). That's one of the reasons why Microsoft's first LLM products, GitHub Copilot and Bing Copilot, were built on top of search-based services, as these already use vector databases and indexes, providing context that keeps the LLMs' answers on track.
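
To make that similarity test concrete, here is a minimal sketch, using hypothetical toy vectors: embeddings that point in nearly the same direction in semantic space are treated as related content, which is the comparison that both search engines and vector stores build on.

import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity: 1.0 means the vectors point the same way,
    # 0.0 means they are unrelated, -1.0 means they are opposed.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy three-dimensional embeddings; real embedding models produce
# vectors with hundreds or thousands of dimensions.
query_vec = np.array([0.12, 0.98, 0.05])
doc_vec = np.array([0.10, 0.95, 0.11])

print(cosine_similarity(query_vec, doc_vec))  # close to 1.0: semantically similar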

Unfortunately for the rest of us, vector databases are relatively rare, and they are built on very different principles than familiar SQL and NoSQL databases. They are perhaps best thought of as multidimensional extensions of graph databases, where data is transformed and embedded as vectors with direction and magnitude. Vectors make searching for similar data fast and accurate, but they require a completely different way of working than other forms of data.

If we want to build our own enterprise copilots, we need our own vector databases, as they allow us to extend and refine LLMs with our domain-specific data. That data could be a library of decades of standard contracts or product documentation, or even all of your customer support queries and responses. Stored in just the right way, that data could be used to build AI-powered interfaces to your business.

But do we have the time or resources to store that data in an unfamiliar format, on an untried product? What we need is a way to quickly bring this data to AI, building on tools we already use.

Vector search is coming to Cosmos DB

Microsoft announced a series of updates to its cloud-native document database, Cosmos DB, at Build 2023. While most of the updates focus on working with large amounts of data and managing queries, perhaps the most useful for developing AI applications is the addition of vector search capabilities. This extends to existing Cosmos DB instances, allowing customers to avoid moving data to a new vector database.

Cosmos DB's new vector search builds on the recently introduced Azure Cosmos DB for MongoDB vCore service, which allows you to pin instances to a specific virtual infrastructure, with high availability across availability zones, and uses a more predictable per-node pricing model, while still offering the familiar MongoDB APIs. Existing MongoDB databases can be migrated to Cosmos DB, so you can use MongoDB on premises to manage your data and use Cosmos DB on Azure to run your applications. Cosmos DB's new change feed tools should make it easier to replicate across regions, replicating changes from one database to other clusters.
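
Because the vCore service speaks the standard MongoDB wire protocol, connecting to it from code looks like connecting to any MongoDB cluster. A minimal sketch, assuming a placeholder connection string and hypothetical database and collection names:

from pymongo import MongoClient

# The connection string comes from the Azure portal; this one is a placeholder.
CONNECTION_STRING = "mongodb+srv://<user>:<password>@<cluster>.mongocluster.cosmos.azure.com/?tls=true"

client = MongoClient(CONNECTION_STRING)
db = client["contracts"]   # hypothetical database
bids = db["bids"]          # hypothetical collection

bids.insert_one({"title": "Sample bid", "body": "Full text of the bid goes here."})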

Vector search extends these tools, adding a new query mode to your databases that you can use to work with your AI applications. While vector search is not a true vector database, it offers many of the same features, including a way to store embeddings and use them as search keys for your data, applying the same similarity rules as more complex alternatives. The tools Microsoft is launching support basic vector indexing (using IVF Flat), three types of distance measures, and the ability to store and search on vectors up to 2,000 dimensions in size. Distance measures are a key feature of vector search, as they help define how similar vectors are.
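
As a sketch of what that looks like in practice, the preview exposes vector indexing through a cosmosSearch index type on the MongoDB API. The snippet below (collection and field names are assumptions carried over from the sketch above) builds an IVF Flat index; the similarity option selects one of the three distance measures: COS (cosine), L2 (Euclidean), or IP (inner product).

db.command({
    "createIndexes": "bids",                   # the collection to index
    "indexes": [{
        "name": "bids_vector_index",
        "key": {"embedding": "cosmosSearch"},  # the field that holds the vector
        "cosmosSearchOptions": {
            "kind": "vector-ivf",              # IVF Flat indexing
            "numLists": 1,                     # number of IVF clusters
            "similarity": "COS",               # or "L2" / "IP"
            "dimensions": 1536,                # must match your embedding model
        },
    }],
})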

Perhaps the most interesting aspect of Microsoft's initial approach is that it's an extension of a popular document database. Using a document database to create a semantic store for an LLM makes a lot of sense: it's a familiar tool that we already know how to deploy and use to manage content. Libraries already exist that allow us to capture and convert various document formats and encapsulate them in JSON, so we can move from existing storage tools to LLM-ready vector embeddings without changing workflows or developing skills with a whole new class of databases.

It's an approach designed to simplify the process of assembling the custom data sets needed to build your own semantic search. Azure OpenAI provides APIs to generate embeddings from your documents, which can then be stored in Cosmos DB alongside the source documents. Applications generate new embeddings from user input, which can be used with Cosmos DB's vector search to find similar documents.
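
A minimal sketch of that flow, using the 2023-era Azure OpenAI Python SDK and a text-embedding-ada-002 deployment (the resource, key, and deployment names are placeholders, and the bids collection is carried over from the earlier sketch):

import openai

openai.api_type = "azure"
openai.api_base = "https://<your-resource>.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = "<your-key>"

def embed(text):
    # text-embedding-ada-002 returns a 1,536-dimensional vector.
    response = openai.Embedding.create(engine="text-embedding-ada-002", input=text)
    return response["data"][0]["embedding"]

document = {"title": "Winning bid, 2019", "body": "Full text of the bid goes here."}
document["embedding"] = embed(document["body"])
bids.insert_one(document)  # the source document and its vector live side by side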

Those documents don't need to contain any of the keywords in the original query; they only have to be semantically similar. All you need to do is run documents through a GPT summarizer and then generate embeddings, adding a data preparation step to your application development process. Once you have a prepared data set, build a loading process that automates adding embeddings as new documents are saved to Cosmos DB.
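
Continuing the sketch, a query embeds the user's question and asks Cosmos DB for the nearest neighbors through the $search aggregation stage's cosmosSearch operator (again, field and collection names are assumptions from the earlier snippets):

query_vector = embed("Which past bids covered highway maintenance?")

results = bids.aggregate([{
    "$search": {
        "cosmosSearch": {
            "vector": query_vector,
            "path": "embedding",   # the field indexed earlier
            "k": 5,                # how many nearest neighbors to return
        },
        "returnStoredSource": True,
    },
}])

for doc in results:
    print(doc["title"])  # semantically similar, even with no shared keywords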

This approach should work well with the Azure AI Studio updates that deliver AI-ready private data to your Azure OpenAI-based applications. For your code, that means it's much easier to keep applications focused, reducing the risk of them going off prompt and producing hallucinated results. Instead, an application that generates bid responses for, say, government contracts can use document data from your company's history of successful bids to produce an outline that can be filled out and personalized.

Using vector search as a semantic memory

Alongside its cloud-based AI tools, Microsoft is bringing an interactive Semantic Kernel extension to Visual Studio Code, allowing developers to build and test AI skills and plugins around the Azure OpenAI and OpenAI APIs using C# or Python. Tools like Cosmos DB's vector search should simplify building semantic memory for Semantic Kernel, letting you build more complex applications around its API calls. An example of using embeddings is available as an extension to the Copilot Chat sample, which lets you swap in vector search in place of the out-of-the-box document analysis functionality.

Microsoft's AI platform is just that, a platform you can build on. Azure OpenAI forms the backbone, hosting the LLMs. Bringing vector search to data in Cosmos DB makes it easier to ground results in our own organization's knowledge and content. That should factor into other AI platform announcements too, such as tools like Azure Cognitive Search, which automates attaching any data source to Azure OpenAI models, providing a simple endpoint for your applications and tools to test the service without leaving Azure AI Studio.

What Microsoft provides here is a spectrum of AI developer tools, from Azure AI Studio and its low-code Copilot maker, down to custom Cognitive Search endpoints and your own vector searches of your documents. It should be enough to help you build the LLM-based application that suits your needs.


Simon Bisson, author of InfoWorld's Enterprise Microsoft blog, has worked in academic and telecommunications research, been the CTO of a startup, run the technical side of UK Online, and led consulting and technology strategy.


Copyright © 2023 IDG Communications, Inc.
