As a global benefits company based in the UK, Thanks Ben is committed to leveraging AI responsibly to enhance user experience and operational efficiency. Below are some frequently asked questions (FAQs) regarding our AI use.
1. What are Thanks Ben's AI use cases?
To remain at the forefront of the benefits space, Thanks Ben leverages AI to improve operational processes and to provide a more tailored user experience, for example by helping employees get accurate answers to questions about their benefit policies.
2. How does Thanks Ben’s platform interact with AI models, and what data flows between systems?
All AI-powered interactions on the Thanks Ben platform remain within our AWS infrastructure. We use AWS Bedrock for model hosting and AI tooling, ensuring that AI services operate under the same stringent security controls as the rest of our AWS estate.
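For readers who want a more concrete picture, the snippet below is a minimal, purely illustrative sketch of how an application can call a model hosted in Bedrock from inside an AWS account using the AWS SDK for Python (boto3). The region, model ID, and prompt are hypothetical placeholders, not a description of our production configuration.

    import json
    import boto3

    # Bedrock runtime client; the region shown here is a placeholder.
    bedrock = boto3.client("bedrock-runtime", region_name="eu-west-2")

    # Invoke a Claude model hosted in Bedrock. The request is served by the
    # Bedrock endpoint inside the same cloud estate.
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID only
        contentType="application/json",
        accept="application/json",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 300,
            "messages": [
                {"role": "user", "content": "What does my dental benefit cover?"}
            ],
        }),
    )

    answer = json.loads(response["body"].read())["content"][0]["text"]
    print(answer)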
3. Does any customer data get processed in Large Language Models (LLMs), and if so, what data?
Our AI features primarily process benefit policy documents to provide users with accurate information. While our core AI currently focuses on policy documents, in some instances our system may reference identifiable personal information about an employee to deliver personalised responses to specific queries. This is done securely within our protected environment and only when necessary to answer the user's specific benefits question.
4. Does any customer data get stored in Large Language Models?
All data processing occurs within our AWS estate. We ensure that your queries are enhanced with relevant information from your employer's policy documents without storing your data in the AI models themselves.
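To illustrate the general pattern (often called retrieval-augmented generation), the sketch below shows one way a query could be combined with retrieved policy excerpts for a single model request. The function and variable names are hypothetical, and the retrieval step itself is omitted; the point is that the documents are supplied alongside the question rather than being stored in the model.

    def build_prompt(user_query: str, policy_snippets: list[str]) -> str:
        """Combine a user's question with excerpts retrieved from their
        employer's policy documents for a single model request.

        The excerpts are included only in this one request; nothing is
        written into, or retained by, the model itself.
        """
        context = "\n\n".join(policy_snippets)
        return (
            "Answer the employee's question using only the policy excerpts below.\n\n"
            f"Policy excerpts:\n{context}\n\n"
            f"Question: {user_query}"
        )

    # Example usage with placeholder data.
    prompt = build_prompt(
        "How many dental check-ups are covered each year?",
        ["Dental plan: two routine check-ups per policy year are covered in full."],
    )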
5. Which LLMs does Thanks Ben use?
Thanks Ben utilises AWS Bedrock, which provides access to various well-known AI models within our AWS estate. Currently, we use:
Anthropic Claude for text processing.
Amazon Titan for embeddings generation.
The specific models in use may change as we continue to review response quality and employee experience. Our FAQs will be updated any time significant changes occur.
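For illustration only, the snippet below sketches how an embedding might be generated with an Amazon Titan embedding model through Bedrock. The model ID, region, and input text are placeholder examples rather than a description of our production pipeline.

    import json
    import boto3

    bedrock = boto3.client("bedrock-runtime", region_name="eu-west-2")  # placeholder region

    # Generate an embedding vector for a policy passage with a Titan embedding model.
    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",  # example model ID only
        contentType="application/json",
        accept="application/json",
        body=json.dumps({"inputText": "Dental plan: two routine check-ups per year."}),
    )

    embedding = json.loads(response["body"].read())["embedding"]
    print(len(embedding))  # dimensionality of the returned vector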