Microsoft Introduces Personalizer, Handwriting Recognition, and Other APIs in Cognitive Services

Microsoft has recently announced a set of prebuilt machine-learning models for its Cognitive Services platform.

The release includes an API for building personalization features, a recognizer that automatically extracts data that has already been entered into forms, an API for recognizing handwriting, and an improved speech recognition service for transcribing conversations.

Personalizer is the most prominent of these services; many apps and websites do not offer personalization to their users because building such models is difficult, requiring data to be pulled from different silos.

With Personalizer, Microsoft is using reinforcement learning, a machine-learning technique that does not require the labeled training data typical of other approaches. Instead, a reinforcement-learning agent learns by analyzing users' activity.
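As a rough sketch of the rank-and-reward loop this implies: the app asks the service to rank candidate actions for a context, shows the chosen one, then reports a reward instead of supplying labels up front. The endpoint, key, and feature names below are placeholders, not details from the article.

```python
import uuid

# Placeholder resource values -- substitute your own.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-key>"

def build_rank_request(context_features, actions, event_id=None):
    """Build the JSON body for a Personalizer Rank call.

    The service picks one action to show; the reinforcement-learning
    loop later receives a reward for this eventId rather than needing
    labeled training data in advance.
    """
    return {
        "eventId": event_id or str(uuid.uuid4()),
        "contextFeatures": context_features,
        "actions": actions,
    }

rank_body = build_rank_request(
    context_features=[{"timeOfDay": "evening", "device": "xbox"}],
    actions=[
        {"id": "action-game-trailer", "features": [{"genre": "action"}]},
        {"id": "indie-spotlight", "features": [{"genre": "indie"}]},
    ],
)

# After the user reacts, a reward (e.g. 1.0 for a click, 0.0 otherwise)
# is posted back for the same eventId to close the learning loop:
reward_url = f"{ENDPOINT}/personalizer/v1.0/events/{rank_body['eventId']}/reward"
reward_body = {"value": 1.0}
```

The key design point is that the reward arrives after the fact, tied to the event ID, which is what lets the agent learn from live user behavior.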

Microsoft, possibly the first company to provide such a service, has already tried it on its Xbox, where it saw a 40% increase in content engagement.

Ink Recognizer is an API that automatically recognizes handwriting, simple shapes, and the layout of inked documents. The company has been working on this technology since launching the inking capabilities in Windows 10, and has now introduced it as a Cognitive Service as well.

Though Microsoft Office 365 and Windows already use this service, developers will now be able to build it into their own applications.
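In practice, an app would capture pen strokes and send them to the service for recognition. The sketch below packages strokes into a request body; the endpoint path and stroke schema are assumptions based on the preview REST API, not details from the article.

```python
# Placeholder resource value -- substitute your own.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
RECOGNIZE_URL = f"{ENDPOINT}/inkrecognizer/v1.0-preview/recognize"

def build_ink_request(strokes, language="en-US"):
    """Package captured pen strokes (each a list of (x, y) points)
    into a recognition request body."""
    return {
        "language": language,
        "strokes": [
            {
                "id": stroke_id,
                "points": [{"x": x, "y": y} for x, y in points],
            }
            for stroke_id, points in enumerate(strokes)
        ],
    }

# Two strokes, e.g. the two lines of a handwritten "T".
request_body = build_ink_request([
    [(0.0, 0.0), (10.0, 0.0)],   # horizontal bar
    [(5.0, 0.0), (5.0, 12.0)],   # vertical stem
])
```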

Conversation Transcription is part of Microsoft's speech-to-text features in Cognitive Services. It labels the individual speakers, transcribes in real time, and can handle crosstalk. Microsoft Teams and some other meeting software already use this service.
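A consumer of such a service typically receives speaker-labeled, timestamped segments. The sketch below shows one way to render them as a transcript and flag crosstalk; the segment field names are illustrative assumptions, not the service's actual schema.

```python
# Hypothetical, simplified shape for diarized transcription output.
segments = [
    {"speaker": "Guest-1", "start": 0.0, "end": 2.1, "text": "Shall we start?"},
    {"speaker": "Guest-2", "start": 1.8, "end": 4.0, "text": "Yes, go ahead."},
]

def format_transcript(segments):
    """Render speaker-labeled lines in time order, flagging crosstalk
    (segments that begin before the previous turn has ended)."""
    lines = []
    prev_end = 0.0
    for seg in sorted(segments, key=lambda s: s["start"]):
        overlap = " (crosstalk)" if seg["start"] < prev_end else ""
        lines.append(f"{seg['speaker']}: {seg['text']}{overlap}")
        prev_end = max(prev_end, seg["end"])
    return lines

transcript = format_transcript(segments)
```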


Photo: Ted S. Warren/Associated Press

Form Recognizer is an API that extracts text and data from documents and business forms. The service needs only five sample documents to train itself, without much manual labeling, to solve this simple but tedious problem.
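Training on a handful of samples might look like the sketch below, which validates the five-document minimum the article mentions and builds a training request. The endpoint path, request shape, and SAS URL are assumptions based on the preview REST API, not details from the article.

```python
# Placeholder resource value -- substitute your own.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
TRAIN_URL = f"{ENDPOINT}/formrecognizer/v1.0-preview/custom/train"

MIN_TRAINING_DOCS = 5  # the minimum stated in the announcement

def build_train_request(blob_sas_url, doc_count):
    """Check the sample count and build the custom-training request body,
    pointing the service at a blob container of sample forms."""
    if doc_count < MIN_TRAINING_DOCS:
        raise ValueError(f"Need at least {MIN_TRAINING_DOCS} sample documents")
    return {"source": blob_sas_url}

train_body = build_train_request(
    "https://<storage>.blob.core.windows.net/forms?<sas>", doc_count=5
)
```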

Developers will now also be able to run Form Recognizer, speech-to-text, and text-to-speech models on edge devices, in addition to Azure.

The company has also made the Neural Text-to-Speech, Text Analytics Named Entity Recognition, and Computer Vision Read APIs generally available.

The already available services are getting feature updates as well. The Neural Text-to-Speech service will support five voices. The Computer Vision API, which could previously recognize only 200,000 celebrities, will now be able to recognize 1 million celebrities and understand 10,000 scenes, objects, and concepts.


Source: https://www.digitalinformationworld.com
