Virtual agents that anticipate customers’ expectations

Openstream, an American developer of multimodal neuro-symbolic solutions for conversational artificial intelligence, has received a US patent for its new development.

Digital consultants, avatars, and voice agents built on Openstream’s Eva (Enterprise Virtual Assistant) platform can understand user preferences using multimodal input and context, dynamically generating real-time, human-like dialogue across any communication channel without scripting and without hallucination.

“When people talk to a brand agent, they expect it to be a responsive and knowledgeable agent that understands them regardless of the mode of communication, channel, or language,” said David Stark, chief marketing officer at Openstream, in a statement. “AI agents need to generate and understand context-sensitive dialog using all of the end user’s inputs in real-time to seamlessly engage them in unscripted conversations that feel natural and help them achieve their goals. Without this, brand perception and long-term relationships are at risk.”

AI agents built with Eva can engage in complex, rich, and varied dialogues, continuously receiving contextual information across any channel or language in real time, 24/7, any day of the year.