By Daniel Hocutt, Nupoor Ranade, and Gustav Verhulsdonck

ABSTRACT
Purpose: This study demonstrates that microcontent, a snippet of personalized content that responds to users' needs, is a form of localization reliant on a content ecology. Method: We use an exploratory case study of an AI-driven chatbot to demonstrate the assemblage of user, content, metrics, and AI. In contributing to users' localized experiences, technical communicators should recognize their work as part of an assemblage in which users, content, and metrics augment each other to produce personalized content that can be consumed by and delivered through artificial intelligence (AI)-assisted technology. By understanding the assemblage roles and functions of the different units used to build AI systems, technical and professional communicators can contribute to microcontent development.

Now in its third year, the Alexa Prize is a challenge for teams of student developers to create AI that can hold a conversation for up to 20 minutes. Last year's finalists got up to about 10 minutes, and you can speak with them by simply saying "Alexa, let's chat." The latest round of finalists will be announced in May.

Amazon is already beginning to grow its multiturn dialogue offerings. Conversations is a feature that packages voice app recommendations in conversational multiturn dialogue; at the time of its launch last summer, Amazon VP of devices David Limp called it "the holy grail of voice science."

AI assistants that can maintain a conversation may be able to secure closer bonds with humans and do things like provide emotional support or cure the loneliness epidemic, as former Alexa Prize head and current Google Research director Ashwin Ram put it in 2017. Microsoft acquired the company Semantic Machines in 2018 and last year began to showcase more multiturn dialogue for users of the Microsoft Bot Framework.
Meena is trained on 40 billion words and utilizes a seq2seq model and a variation of the popular Transformer architecture. Google first released Transformer in 2017, and it has since grown to rank among the highest-performing language model architectures around. The work is detailed in "Towards a Human-like Open Domain Chatbot," a paper published Monday on the preprint repository arXiv.

Google today also released Sensibleness and Specificity Average (SSA), a metric created by Google researchers to measure the ability of a conversational agent to maintain responses in conversation that make sense and are specific. Humans rank around 86% in SSA, and in initial tests Meena scores a high of 79%. Mitsuku, an AI agent created by Pandorabots that has won the Loebner Prize for the past four years, got 56%, while Microsoft's XiaoIce, which speaks Mandarin Chinese, got a score of 31%.

SSA evaluates dialogue based on static performance with a fixed set of prompts or interactive performance, which allows for free-flowing conversation. Each evaluated conversation was required to last at least 14 turns and no more than 28 turns. Results are then computed from the percentage of turns judged specific or sensible, which penalizes generic responses. The SSA standard Google proposes is different from the metrics other AI assistant makers have used to assess a truly conversational AI.

Google may make Meena available to researchers in the coming months but decided against making a demo available immediately, the company said in a blog post.
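To make the arithmetic behind a metric like SSA concrete, here is a minimal sketch of how per-turn human judgments could be aggregated into a score. The function and label names are my own illustrative assumptions, not Google's code; the exact rating protocol is in the paper. It enforces the 14–28 turn window described above and averages the per-turn sensibleness and specificity rates.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class TurnJudgment:
    """Hypothetical human labels for one chatbot turn."""
    sensible: bool  # does the response make sense in context?
    specific: bool  # is it specific to the context rather than generic?


def ssa_score(turns: List[TurnJudgment]) -> float:
    """Average of the per-turn sensibleness and specificity rates.

    Follows the description in the article: conversations run 14-28 turns,
    and the score is based on the percentage of turns judged sensible or
    specific, so generic responses drag the score down.
    """
    if not 14 <= len(turns) <= 28:
        raise ValueError("evaluated conversations must last 14-28 turns")
    sensible_rate = sum(t.sensible for t in turns) / len(turns)
    specific_rate = sum(t.specific for t in turns) / len(turns)
    return (sensible_rate + specific_rate) / 2
```

For example, a 20-turn conversation in which every response is sensible but only half are specific would score (1.0 + 0.5) / 2 = 0.75, i.e. 75%, between Meena's 79% and Mitsuku's 56%.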