What organizational security practices are in place at ASSIST AI?
ASSIST AI uses enterprise-grade security controls to protect customer data. The platform provides single sign-on (SSO), role-based access control, and fine-grained permissions. Data is encrypted in transit and at rest, and tenants are isolated from one another in every deployment. ASSIST also supports single-tenant cloud deployments, uses scoped API keys, respects source-system permissions, and conducts regular penetration testing backed by defined incident response procedures.
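To illustrate what "scoped API keys" means in practice, here is a minimal sketch of the pattern: a key is issued with an explicit list of scopes, and every request is checked against that list. The scope names and key structure below are hypothetical, not ASSIST AI's actual API.

```python
# Illustrative sketch of scoped API key checks. The scope names
# ("documents:read", "documents:write") and key fields are hypothetical.
from dataclasses import dataclass


@dataclass(frozen=True)
class ApiKey:
    key_id: str
    scopes: frozenset  # the exact set of actions this key may perform


def is_allowed(key: ApiKey, required_scope: str) -> bool:
    """A request succeeds only if the key was issued with the scope it needs."""
    return required_scope in key.scopes


# A key scoped to reading documents can search but cannot upload.
read_only = ApiKey(key_id="key_123", scopes=frozenset({"documents:read"}))

print(is_allowed(read_only, "documents:read"))   # permitted
print(is_allowed(read_only, "documents:write"))  # rejected
```

The benefit of this design is blast-radius control: a leaked read-only key cannot be used to modify or delete data.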
How is my data legally protected?
You retain full ownership of all data you upload to our platform. Assist AI only accesses your data when necessary to provide the services you've requested, and we never use it for any other purpose. We don't mix your data with other customers' data, share it with third parties, or access it beyond what's needed to deliver the platform functionality. If you provide feedback about our services, we may use those suggestions to improve our product, but your underlying data remains yours. Any sharing of your data with third parties would only occur if you explicitly instruct us to do so.
Where is my data stored geographically?
Your data is stored in the region where your ASSIST AI environment is deployed. All data is encrypted in transit and at rest, and storage location depends on your selected deployment configuration.
How do I restrict access to my data on ASSIST AI?
Assist AI provides multiple layers of access control to ensure your data remains secure. We use Role-Based Access Control (RBAC) to define what different users and teams can see and do within the platform. You can integrate your existing Single Sign-On (SSO) provider to manage authentication centrally. Additionally, our permission-aware connectors automatically respect and mirror the access permissions from your source systems, so users only see the data they're already authorized to access in those original systems. These combined measures keep data access tightly controlled and aligned with your organization's existing security policies.
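The two layers described above can be sketched as follows. This is a simplified illustration only; the role names, actions, and document fields are hypothetical, not ASSIST AI's actual schema.

```python
# Layer 1: RBAC inside the platform - a role maps to a set of allowed actions.
ROLE_PERMISSIONS = {
    "viewer": {"search"},
    "editor": {"search", "upload"},
    "admin":  {"search", "upload", "manage_users"},
}


def rbac_allows(role: str, action: str) -> bool:
    """Check whether a role grants a given platform action."""
    return action in ROLE_PERMISSIONS.get(role, set())


# Layer 2: a permission-aware connector mirrors source-system ACLs, so a
# user only sees documents they could already open in the original system.
def visible_documents(user_email: str, documents: list) -> list:
    return [d for d in documents if user_email in d["source_acl"]]


docs = [
    {"name": "roadmap.pdf",  "source_acl": {"alice@example.com"}},
    {"name": "handbook.pdf", "source_acl": {"alice@example.com", "bob@example.com"}},
]

print(rbac_allows("viewer", "upload"))  # False: viewers cannot upload
print([d["name"] for d in visible_documents("bob@example.com", docs)])  # ['handbook.pdf']
```

Because both checks are applied, a user needs the right platform role *and* source-system access before any document is visible.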
How do I report a security vulnerability?
If you identify a potential security issue with Assist AI, please contact our security team directly at security@tryassist.in with details of your findings. We're committed to addressing security concerns quickly and will work with you to understand and resolve the issue. We value the security research community and appreciate your help in keeping our platform safe.
How does ASSIST AI use Google Workspace data?
When you use the Assist AI Google Drive integration, only the files you explicitly choose are uploaded to our platform for processing. We do not use any Google Workspace data to train, develop, or improve AI or machine learning models. We also never use Google Workspace APIs for building generalized or non-personalized AI/ML models. If you give explicit permission, file contents may be sent to third-party AI model providers strictly to complete the specific tasks you've requested, such as image analysis or text recognition. Processing by third-party providers is governed by their specific data processing agreements.
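The consent gate described above can be sketched in pseudocode-like Python: file contents reach a third-party model provider only when the user has explicitly opted in. The function, task names, and flag are hypothetical, shown only to illustrate the control flow.

```python
# Sketch of a consent gate for third-party AI processing.
# Task names and the consent flag are illustrative, not ASSIST AI's API.
def process_file(content: str, task: str, third_party_consent: bool) -> str:
    tasks_needing_provider = {"image_analysis", "text_recognition"}
    if task in tasks_needing_provider:
        if not third_party_consent:
            # Without explicit permission, content never leaves the platform.
            raise PermissionError("task requires explicit third-party consent")
        return f"sent to provider for {task}"  # placeholder for the external call
    return "processed locally"


print(process_file("...", "summarize", third_party_consent=False))  # processed locally
```

The key property is that the consent check happens before any content is transmitted, so declining consent fails the request rather than silently sharing data.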
