In this post about Azure Cognitive Services, I’ll focus on the Vision APIs and services within this stack. Azure Vision Cognitive Services uses image-processing algorithms to intelligently identify, caption and moderate your pictures. There are three APIs and three services available in the Vision stack.
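As a rough sketch of how those identify/caption/moderate capabilities are reached, the Computer Vision service exposes an `analyze` REST endpoint. The region, key placeholder, and v2.0 API version below are assumptions for illustration; substitute your own resource’s values before sending.

```python
# Hypothetical placeholders -- replace with your own Computer Vision resource values.
REGION = "westus"
SUBSCRIPTION_KEY = "<your-computer-vision-key>"

def build_analyze_request(image_url, features=("Description", "Adult")):
    """Assemble a Computer Vision 'analyze' call: Description captions the
    image, Adult supports moderation. Returns the pieces of an HTTP POST."""
    return {
        "url": f"https://{REGION}.api.cognitive.microsoft.com/vision/v2.0/analyze",
        "headers": {
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Content-Type": "application/json",
        },
        "params": {"visualFeatures": ",".join(features)},
        "json": {"url": image_url},
    }

req = build_analyze_request("https://example.com/photo.jpg")
# With a valid key you would send this, e.g.: requests.post(**req)
```

The builder only assembles the call, so you can inspect the URL, headers, and payload before wiring in a real subscription key.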
In my recent posts I’ve been focusing on Azure Cognitive Services and what these features can add to your applications and your organization. Today my focus is the Speech APIs, which you can use to convert spoken audio into text, use voice for verification, or add speaker recognition to your app.
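To make the speech-to-text path concrete, a minimal sketch of the short-audio recognition REST call is below. The region and key are placeholder assumptions; the WAV audio bytes would go in the request body when you actually send it.

```python
# Hypothetical placeholders -- replace with your own Speech resource values.
REGION = "westus"
SPEECH_KEY = "<your-speech-key>"

def build_speech_to_text_request(language="en-US"):
    """Assemble a short-audio speech-to-text call. The caller supplies the
    WAV bytes as the POST body; 'simple' format returns just the best text."""
    return {
        "url": (f"https://{REGION}.stt.speech.microsoft.com"
                "/speech/recognition/conversation/cognitiveservices/v1"),
        "params": {"language": language, "format": "simple"},
        "headers": {
            "Ocp-Apim-Subscription-Key": SPEECH_KEY,
            "Content-Type": "audio/wav; codecs=audio/pcm; samplerate=16000",
        },
    }

req = build_speech_to_text_request()
# With a valid key: requests.post(**req, data=open("clip.wav", "rb").read())
```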
In today’s post focusing on Azure Cognitive Services, I’ll look at the Language APIs that are available. These language APIs allow your apps to process natural language with pre-built scripts, evaluate sentiment and learn to recognize what users want. Often you’ll work with the Speech and Language APIs together, but I’ll cover Language today and Speech in my next post.
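Sentiment evaluation is a good example of how the Language APIs are called. A minimal sketch of a Text Analytics sentiment request follows; the region, key, and v2.1 API version are placeholder assumptions. The service scores each document between 0 and 1, with scores near 1 indicating positive sentiment.

```python
# Hypothetical placeholders -- replace with your own Text Analytics resource values.
REGION = "westus"
TEXT_KEY = "<your-text-analytics-key>"

def build_sentiment_request(texts, language="en"):
    """Assemble a Text Analytics sentiment call: each input string becomes
    a document with a string id, which the response echoes back."""
    return {
        "url": (f"https://{REGION}.api.cognitive.microsoft.com"
                "/text/analytics/v2.1/sentiment"),
        "headers": {
            "Ocp-Apim-Subscription-Key": TEXT_KEY,
            "Content-Type": "application/json",
        },
        "json": {"documents": [
            {"id": str(i), "language": language, "text": t}
            for i, t in enumerate(texts, start=1)
        ]},
    }

req = build_sentiment_request(["I love this product", "The support was slow"])
# With a valid key: requests.post(**req)
```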
This Azure Every Day mini-series is focused on the different APIs available within Azure Cognitive Services. Today I’ll focus on the Knowledge APIs, which map complex information and data to solve tasks such as intelligent recommendations and semantic search.
In this recent group of posts, my focus is on the different APIs available with Azure Cognitive Services. These services let you infuse your apps, websites and bots with intelligent algorithms that see, hear, speak, understand and interpret your users’ needs through natural methods of communication, so you can easily add the power of machine learning and bring advanced intelligence to your product.
With Artificial Intelligence and Machine Learning, the possibilities for your applications are endless. Would you like to infuse your apps, websites and bots with intelligent algorithms that see, hear, speak, understand and interpret your users’ needs through natural methods of communication, all without any data science expertise?