Under the hood, the AI models powering the assistant currently come from Microsoft’s Azure OpenAI service, which provides enterprise access to OpenAI’s language models like GPT-3. However, Vanwinkle said Adobe plans to take an “LLM-agnostic” approach that would let enterprise customers plug in other large language models based on their specific needs. Any partner models would need to meet Adobe’s standards for ethics, security, and privacy, she noted.
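In practice, an LLM-agnostic design usually means a thin abstraction layer between the product and whatever model backs it, so backends can be swapped without touching product code. Adobe has not published its implementation; the following is a minimal Python sketch of the general pattern, with all class and method names hypothetical:

```python
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Common interface any backing model must satisfy."""

    @abstractmethod
    def complete(self, prompt: str, context: str) -> str:
        """Return a completion grounded in the supplied document context."""


class AzureOpenAIProvider(LLMProvider):
    """Wraps Azure OpenAI, the backend currently in use."""

    def __init__(self, endpoint: str, api_key: str):
        self.endpoint = endpoint
        self.api_key = api_key

    def complete(self, prompt: str, context: str) -> str:
        # The actual call to the Azure OpenAI API would go here (omitted).
        raise NotImplementedError


class Assistant:
    """Product code depends only on the LLMProvider interface, so an
    enterprise could plug in a different vetted model without changing
    any of the code that uses it."""

    def __init__(self, provider: LLMProvider):
        self.provider = provider

    def summarize(self, document_text: str) -> str:
        return self.provider.complete("Summarize this document.", document_text)
```

Because the assistant holds only a reference to the interface, swapping models reduces to constructing `Assistant` with a different `LLMProvider` that has passed the vetting process.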
Letting an external LLM process potentially confidential enterprise documents could be seen as a security risk, but Adobe said the AI assistant is governed by its data security protocols.
To help enterprises successfully deploy the AI assistant, Adobe is providing best-practice guides and customer success managers to advise companies on implementation, integration, and organizational change. The company is also helping customers set up “communities of practice” that bring together AI champions from different functions to share knowledge and identify high-value use cases.
Shared responsibility
Even with the most advanced AI, Vanwinkle emphasized, human judgment remains essential: the assistant is meant to augment human workers, not replace them. Users still need to carefully review and validate the AI’s outputs, especially for externally facing content.
“We want to make sure that there’s always a human in the loop,” she said. “Understanding really strong prompting, understanding the documents, doing that verification process all help us with the hallucination issue,” the tendency of AI systems to generate inaccurate or nonsensical information.
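In engineering terms, “human in the loop” often amounts to a review gate between a model’s output and anything that ships. As a simplified illustration only, with hypothetical names rather than anything Adobe has described, such a gate might look like this in Python:

```python
from dataclasses import dataclass


@dataclass
class Draft:
    """An AI-generated draft awaiting human verification."""
    prompt: str
    ai_output: str
    approved: bool = False
    reviewer: str | None = None


def review(draft: Draft, reviewer: str, verified_against_source: bool) -> Draft:
    """Approve a draft only after a named human has checked it against
    the source document; unverified drafts remain unpublishable."""
    if verified_against_source:
        draft.approved = True
        draft.reviewer = reviewer
    return draft


def publish(draft: Draft) -> str:
    """Refuse to release any AI output that lacks human sign-off."""
    if not draft.approved:
        raise PermissionError("AI-generated content requires human sign-off.")
    return draft.ai_output
```

The design choice is simply that publication is impossible without an explicit, attributable approval step, which mirrors the verification process Vanwinkle describes.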
Adobe also says that by using Acrobat AI Assistant, enterprises agree to use its features responsibly.