Security

Assist AI is dedicated to maintaining the confidentiality,
integrity, and availability of your data. Here’s how we
safeguard your information while adhering to leading
industry standards and compliance requirements.

Customers

Enterprise-level security. Keep your data private.

Our promise

Enterprise-Grade Security

ASSIST AI uses enterprise-grade security controls and practices: SSO support, fine-grained role-based access controls, and permission-aware search. Data is encrypted in transit and at rest, tenant data is logically isolated, and the platform undergoes regular penetration testing and security scanning.

Transparent AI Models

Work with trusted large language models like GPT, Claude, and Gemini, while maintaining control through region-specific data processing and transparent model access.

Data Ownership & Sovereignty

Retain full control and ownership of your data. Our platform aligns with local data protection laws and internal governance policies to ensure complete regulatory compliance.

Advanced Access Management

Enable secure collaboration across teams with detailed role-based permissions, ensuring every project operates with precision, privacy, and accountability.

In numbers

Trusted security.
Tangible results.


Critical Security Issues this year: 0
Infosec Training Coverage: 0%
Platform Uptime: 0.0%

Last updated

July 2, 2025

Overview

At ASSIST AI, trust underpins every layer of the platform. From core infrastructure to AI model governance, we adhere to strict data protection, privacy, and compliance standards. Security is foundational to how we operate.

Our systems undergo regular audits, our data is encrypted in transit and at rest, and our policies are built to protect your most critical assets.

As data-driven systems evolve, we ensure learning is governed by security, transparency, and ethical standards. Trust is essential to empowering automation.


Policies

Master Service Agreement (MSA)

Data Processing Addendum (DPA)

Acceptable Use Policy (AUP)

Service Level Agreement (SLA)

Cookie Policy

Web Data Privacy Statement

Ethics and Supplier Code of Conduct

Vulnerability Disclosure Policy

Data Processing & Security FAQ


What organizational security practices are in place at ASSIST AI?

ASSIST AI uses enterprise-grade security controls to protect customer data. The platform provides SSO, role-based access control, and fine-grained permissions. Data is encrypted in transit and at rest, with tenant isolation in deployments. ASSIST supports single-tenant cloud deployments, uses scoped API keys, respects source-system permissions, and conducts regular penetration testing with defined incident response procedures.
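
To make the "scoped API keys plus role-based access control" idea concrete, here is a minimal illustrative sketch; it is not ASSIST AI's actual implementation, and all role, scope, and function names are hypothetical. The point it demonstrates is that a request must pass both checks: the user's role must grant the permission, and the API key presented must be scoped to include it.

```python
# Illustrative sketch only: combining role-based access control (RBAC)
# with scoped API keys. A request is allowed only if BOTH the user's
# role and the key's scopes grant the requested permission.
# Role names, permission strings, and the function are hypothetical.

ROLE_PERMISSIONS = {
    "viewer": {"documents:read"},
    "editor": {"documents:read", "documents:write"},
    "admin": {"documents:read", "documents:write", "users:manage"},
}

def is_authorized(role: str, key_scopes: set[str], permission: str) -> bool:
    """Allow an action only if the role grants the permission AND the
    API key was issued with a scope that includes it."""
    role_ok = permission in ROLE_PERMISSIONS.get(role, set())
    key_ok = permission in key_scopes
    return role_ok and key_ok

# A key scoped to read-only cannot write, even for an admin:
assert not is_authorized("admin", {"documents:read"}, "documents:write")
# A viewer cannot write, even with a broadly scoped key:
assert not is_authorized("viewer", {"documents:read", "documents:write"}, "documents:write")
# An editor with a write-scoped key can write:
assert is_authorized("editor", {"documents:write"}, "documents:write")
```

The design choice this sketches is defense in depth: a leaked narrowly scoped key limits blast radius regardless of the role of the user it belongs to.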

How is my data legally protected?

You retain full ownership of all data you upload to our platform. Assist AI only accesses your data when necessary to provide the services you've requested, and we never use it for any other purpose. We don't mix your data with other customers' data, share it with third parties, or access it beyond what's needed to deliver the platform functionality. If you provide feedback about our services, we may use those suggestions to improve our product, but your underlying data remains yours. Any sharing of your data with third parties would only occur if you explicitly instruct us to do so.

Where is my data stored geographically?

Your data is stored in the region where your ASSIST AI environment is deployed. All data is encrypted in transit and at rest, and storage location depends on your selected deployment configuration.

How do I restrict access to my data on ASSIST AI?

Assist AI provides multiple layers of access control to ensure your data remains secure. We use Role-Based Access Control (RBAC) to define what different users and teams can see and do within the platform. You can integrate your existing Single Sign-On (SSO) to manage authentication centrally. Additionally, our permission-aware connectors automatically respect and mirror the access permissions from your source systems, so users only see the data they're already authorized to access in those original systems. These combined security measures ensure that data access is tightly controlled and aligned with your organization's existing security policies.
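
As a rough illustration of what "permission-aware" means in practice, the sketch below post-filters search results against an access-control list (ACL) mirrored from a source system, so a user only sees documents they could already open there. The document fields, group names, and function are hypothetical, not ASSIST AI's real schema or API.

```python
# Hypothetical sketch of a permission-aware search filter: each indexed
# document carries an ACL mirrored from its source system, and results
# are filtered against the requesting user's group memberships.

documents = [
    {"id": "doc-1", "title": "Public handbook", "allowed_groups": {"everyone"}},
    {"id": "doc-2", "title": "Finance forecast", "allowed_groups": {"finance"}},
    {"id": "doc-3", "title": "HR policies", "allowed_groups": {"hr", "finance"}},
]

def permission_aware_search(results, user_groups):
    """Drop any result whose mirrored ACL does not intersect the
    requesting user's group memberships."""
    groups = set(user_groups)
    return [d for d in results if d["allowed_groups"] & groups]

# A finance user sees finance and shared documents, but not HR-only ones:
visible = permission_aware_search(documents, ["everyone", "finance"])
assert [d["id"] for d in visible] == ["doc-1", "doc-2", "doc-3"]

# A user with no special groups sees only broadly shared documents:
visible = permission_aware_search(documents, ["everyone"])
assert [d["id"] for d in visible] == ["doc-1"]
```

Filtering on mirrored ACLs means access decisions stay aligned with the source system of record rather than being re-invented per tool.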

How do I report a security vulnerability?

If you identify a potential security issue with Assist AI, please contact our security team directly at security@tryassist.in with details of your findings. We're committed to addressing security concerns quickly and will work with you to understand and resolve the issue. We value the security research community and appreciate your help in keeping our platform safe.

How does ASSIST AI use Google Workspace data?

When you use the Assist AI Google Drive integration, only the files you explicitly choose are uploaded to our platform for processing. We do not use any Google Workspace data to train, develop, or improve AI or machine learning models. We also never use Google Workspace APIs for building generalized or non-personalized AI/ML models. If you give explicit permission, file contents may be sent to third-party AI model providers strictly to complete the specific tasks you've requested, such as image analysis or text recognition. Processing by third-party providers is governed by their specific data processing agreements.
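
The "explicit permission" gate described above can be sketched as follows; this is an illustrative pattern only, with hypothetical task names and functions, not ASSIST AI's real code. Contents are forwarded to an external model provider only for tasks the user specifically approved.

```python
# Illustrative consent gate: refuse to forward file contents to a
# third-party model provider unless the user explicitly opted in to
# that specific task. All names here are hypothetical.

class ConsentRequired(Exception):
    """Raised when a task was requested without explicit user consent."""

def process_with_provider(file_contents: str, task: str, consents: set[str]) -> str:
    """Forward contents to a third-party model only for tasks the user
    explicitly approved; otherwise refuse before anything leaves."""
    if task not in consents:
        raise ConsentRequired(f"no explicit consent for task: {task}")
    # ... here the contents would be sent to the selected provider ...
    return f"{task} completed"

# Approved task proceeds:
assert process_with_provider("scan.png", "image_analysis", {"image_analysis"}) == "image_analysis completed"

# Unapproved task is blocked before any data leaves:
try:
    process_with_provider("scan.png", "text_recognition", {"image_analysis"})
    blocked = False
except ConsentRequired:
    blocked = True
assert blocked
```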






Subprocessors

| Name | Purpose | Data storage |
| --- | --- | --- |
| Anthropic | Users may select Anthropic models to process their data | US |
| Auth0 (Okta) | User authentication and authorisation | EU |
| AWS | Main application hosting platform. Customer PII stored in RDS databases and S3 buckets. | EU, US |
| Google Analytics | Used to monitor page views | EU, US |
| Google Cloud | Data may be processed using Google Cloud services | EU |
| Google Gemini Suite | Users may select Google models to process their data | EU |
| Intercom | Used to provide in-app chat support and email support | US |
| LaunchDarkly | Software delivery platform | US |
| Microsoft Azure | Data may be processed using Microsoft Azure services | EU |
| Nango | Integrations platform | EU |
| OpenAI | Users may select OpenAI models to process their data | US |
| Pipedream | Integration and automation platform | US |
| PostHog | Analytics tool used for monitoring user behavior within the application | EU |
| SendGrid (Twilio) | Mailer used to send one-to-one transactional emails triggered by user actions, such as user invitation emails and password reset requests | US |
| Sentry.io | Aggregates error messages from the application, which might contain customer details | US |
| Slack | Customer support can be provided by Slack if requested by customer | US |
| Vitally | Customer relationship management, intelligence, and actions | US |
