
Gen AI DLP

Artificial intelligence has been making waves in employee productivity by reducing the time the average worker spends on mundane tasks, from summarising meeting notes to generating the code required for software development. At the same time, this has led to critical and confidential company data being leaked to these publicly hosted AI models. To make matters worse, thousands of private projects have been built on top of the AI models of these large providers, which further exposes such data to third parties.

SquareX provides fine-grained control over the data exchanged between employees and GenAI platforms, going beyond the capabilities of Network DLP and Endpoint DLP solutions. With SquareX, enterprises can prevent data leakage in numerous, granular ways: a) by creating a whitelist of authorised GenAI sites; b) by controlling clipboard copy and paste based on the copy source, content, type and more; c) by controlling the text users are permitted to input; d) by controlling permitted file uploads based on file type, file contents, file source and more. The possibilities are endless.
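To make the idea concrete, here is a minimal sketch of how an allowlist of authorised GenAI sites could be represented and evaluated. It is an illustration only: the policy fields, host names and helper function are assumptions, not SquareX's actual policy schema or API.

    # Hypothetical illustration only -- not SquareX's actual policy schema or API.
    # A simple allowlist of authorised GenAI sites, checked for each visit to a
    # site classified as a GenAI application.
    from urllib.parse import urlparse

    GENAI_ALLOWLIST_POLICY = {
        "name": "Allow only authorised GenAI sites",
        "allowed_hosts": {"chat.openai.com", "chatgpt.com", "gemini.google.com"},
        "default_action": "block",
    }

    def evaluate_genai_navigation(url: str, policy: dict) -> str:
        """Return 'allow' or 'block' for a visit to a site classified as GenAI."""
        host = urlparse(url).hostname or ""
        if host in policy["allowed_hosts"]:
            return "allow"
        return policy["default_action"]

    print(evaluate_genai_navigation("https://chatgpt.com/", GENAI_ALLOWLIST_POLICY))          # allow
    print(evaluate_genai_navigation("https://unvetted-ai.example/", GENAI_ALLOWLIST_POLICY))  # block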

Block clipboard copy into ChatGPT

AI applications such as ChatGPT can be of great help at work; however, employees have to be careful when using content from AI applications, as it may contain inaccuracies or be incomplete. It is crucial to verify the information and ensure that it aligns with the organisation's policies and standards, as unverified content can pose security risks and lead to breaches of confidentiality. Employees should also be aware of licensing and intellectual property issues, as AI-generated content may carry specific usage restrictions; ensuring proper attribution and compliance with licensing terms is essential to avoid legal complications. Instead of blocking AI applications completely, enterprises can apply granular policies. Using the policy-generating copilot, admins can prompt ‘Block clipboard copy into ChatGPT’ to generate the appropriate policy. The expected outcome is a policy that blocks clipboard copy into ChatGPT.
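As a purely illustrative sketch of such a rule, assuming a simple rule format (the event name, field names and hosts below are assumptions, not SquareX's actual schema), a clipboard-copy rule targeting ChatGPT might be modelled as:

    # Hypothetical illustration only -- not SquareX's actual policy schema.
    # A rule blocking clipboard copy events whose destination is a ChatGPT page.
    CHATGPT_HOSTS = {"chat.openai.com", "chatgpt.com"}

    BLOCK_COPY_INTO_CHATGPT = {
        "name": "Block clipboard copy into ChatGPT",
        "event": "clipboard_copy",
        "destination_hosts": CHATGPT_HOSTS,
        "action": "block",
    }

    def allow_clipboard_event(event_type: str, destination_host: str, rule: dict) -> bool:
        """Return False when the rule blocks this clipboard event, True otherwise."""
        if event_type == rule["event"] and destination_host in rule["destination_hosts"]:
            return rule["action"] != "block"
        return True

    print(allow_clipboard_event("clipboard_copy", "chatgpt.com", BLOCK_COPY_INTO_CHATGPT))           # False (blocked)
    print(allow_clipboard_event("clipboard_copy", "docs.internal.example", BLOCK_COPY_INTO_CHATGPT))  # True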

Block clipboard paste from ChatGPT

While using AI applications such as ChatGPT, employees must exercise caution when sharing information, as it could be company-confidential and may be used by the AI for its own training purposes. To mitigate this risk, enterprises can apply policies to block the paste operation on AI applications. Using the policy-generating copilot, admins can prompt ‘Block Clipboard Paste from ChatGPT’ to generate the appropriate policy. The expected outcome is a policy that blocks clipboard paste from ChatGPT.
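One possible reading of this rule, in line with the ability to key policies on the clipboard copy source, is to block paste events whose content was copied from ChatGPT. The sketch below models that reading with assumed field names; it is not SquareX's actual policy schema.

    # Hypothetical illustration only -- not SquareX's actual policy schema.
    # A rule blocking paste events whose clipboard content was copied from ChatGPT.
    CHATGPT_HOSTS = {"chat.openai.com", "chatgpt.com"}

    BLOCK_PASTE_FROM_CHATGPT = {
        "name": "Block Clipboard Paste from ChatGPT",
        "event": "clipboard_paste",
        "clipboard_source_hosts": CHATGPT_HOSTS,
        "action": "block",
    }

    def allow_paste(clipboard_source_host: str, rule: dict) -> bool:
        """Return False when pasting content copied from a listed source host is blocked."""
        if clipboard_source_host in rule["clipboard_source_hosts"] and rule["action"] == "block":
            return False
        return True

    print(allow_paste("chat.openai.com", BLOCK_PASTE_FROM_CHATGPT))       # False (paste blocked)
    print(allow_paste("wiki.internal.example", BLOCK_PASTE_FROM_CHATGPT)) # True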

Block file uploads to ChatGPT

Employees might find it convenient to upload meeting notes and other documents to ChatGPT and ask it to perform a myriad of actions, such as summarising the content or even changing the format of the file. However, this opens the enterprise up to risk, as sensitive data could be leaked to these AI models. Using the policy-generating copilot, admins can prompt ‘Block file uploads to ChatGPT’ to generate the appropriate policy. The expected outcome is a policy that blocks file uploads to ChatGPT.
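For illustration, a file-upload rule of this kind might be sketched as follows. The rule structure, field names and hosts are assumptions rather than SquareX's actual policy format; the optional extension filter stands in for the file-type and file-content conditions mentioned above.

    # Hypothetical illustration only -- not SquareX's actual policy schema.
    # A rule blocking file uploads to ChatGPT, optionally narrowed by file extension.
    CHATGPT_HOSTS = {"chat.openai.com", "chatgpt.com"}

    BLOCK_UPLOADS_TO_CHATGPT = {
        "name": "Block file uploads to ChatGPT",
        "event": "file_upload",
        "destination_hosts": CHATGPT_HOSTS,
        "blocked_extensions": None,  # None blocks every file type; a tuple like (".docx", ".pdf") narrows the rule
        "action": "block",
    }

    def allow_upload(destination_host: str, filename: str, rule: dict) -> bool:
        """Return False when uploading this file to the destination host is blocked."""
        if destination_host not in rule["destination_hosts"]:
            return True
        blocked = rule["blocked_extensions"]
        if blocked is None:
            return rule["action"] != "block"
        return not filename.lower().endswith(tuple(blocked))

    print(allow_upload("chatgpt.com", "meeting-notes.docx", BLOCK_UPLOADS_TO_CHATGPT))            # False (blocked)
    print(allow_upload("drive.internal.example", "meeting-notes.docx", BLOCK_UPLOADS_TO_CHATGPT)) # True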