When generative AI tools became widely available at the end of 2022, it was not just technologists who paid attention. Employees across all industries recognized the potential of generative AI to boost productivity, streamline communication, and speed up work. Like so many waves of IT innovation before it, from file sharing to cloud storage to collaboration platforms, AI entered enterprises not through official channels but through the hands of eager employees.
Facing the risk of sensitive data being fed into public AI interfaces, many organizations responded with urgency and force: they blocked access. While that makes sense as an initial defensive measure, blocking public AI apps is not a long-term strategy; it is a stopgap. And in most cases, it is not even effective.
Shadow AI: Unseen Risk
The Zscaler ThreatLabz team tracks AI and machine learning (ML) traffic across enterprises, and the numbers tell a compelling story. In 2024 alone, ThreatLabz analyzed 36 times more AI and ML traffic than in the previous year, identifying more than 800 different AI applications in use.
Blocking has not stopped employees from using AI. They email files to personal accounts, use their phones or home devices, and take screenshots to paste into AI systems. These workarounds push sensitive interactions into the shadows, outside enterprise monitoring and protection. The result? A growing blind spot known as shadow AI.
Blocking unsanctioned AI apps may drive reported usage down to zero on your dashboards, but in reality your organization is not protected; it is simply blind to what is actually happening.
Lessons from SaaS adoption
We have been here before. When early software-as-a-service (SaaS) tools emerged, IT teams scrambled to control unsanctioned use of cloud-based file storage applications. The answer, however, was not to ban file sharing; it was to offer a secure, seamless, single-sign-on alternative that matched employee expectations for convenience and speed.
This time, however, the stakes are even higher. With SaaS, data leakage often meant a misplaced file. With AI, it can mean unknowingly training a public model on your intellectual property, with no way to remove or recover that data. There is no “undo” button on the memory of a large language model.
Visibility first, then policy
Before an organization can intelligently govern AI use, it needs to understand what is actually happening. Blocking traffic without visibility is like building a fence without knowing where the property lines are.
We have solved problems like this before. Zscaler's position in the traffic flow gives us a unique vantage point: we can see which apps are being accessed, by whom, and how often. This real-time visibility is essential for assessing risk, shaping policy, and enabling smart, safe AI adoption.
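As a minimal illustration of what that visibility layer produces, the sketch below aggregates hypothetical proxy log records into a per-app and per-user usage summary. The log format and field names are assumptions for illustration, not Zscaler's actual log schema.

```python
from collections import Counter

# Hypothetical proxy log records; the fields shown here are
# illustrative, not Zscaler's actual log schema.
logs = [
    {"user": "alice", "app": "chatgpt.com", "bytes_out": 2048},
    {"user": "bob", "app": "claude.ai", "bytes_out": 512},
    {"user": "alice", "app": "chatgpt.com", "bytes_out": 4096},
]

# Count transactions per AI app and per (app, user) pair.
app_usage = Counter(rec["app"] for rec in logs)
user_usage = Counter((rec["app"], rec["user"]) for rec in logs)

print("Transactions per AI app:")
for app, count in app_usage.most_common():
    print(f"  {app}: {count}")

print("Transactions per app and user:")
for (app, user), count in user_usage.most_common():
    print(f"  {app} / {user}: {count}")
```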
Next, we have evolved how we approach policy. Most providers offer only the black-and-white options of “allow” or “block”. The better approach is context-aware, policy-driven governance that aligns with zero-trust principles: assume no implicit trust and demand continuous, contextual evaluation. Not every use of AI presents the same level of risk, and policies should reflect that.
For example, we can grant access to an AI application with a caution prompt for the user, or allow transactions only in browser isolation mode, meaning the user is unable to paste potentially sensitive data into the app. Another approach that works well is redirecting users to a corporate-approved alternative app that is managed on-premises. This lets employees capture the productivity benefits without risking data exposure. If your users have a secure, fast, and sanctioned way to use AI, they will not need to go around you.
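To make the pattern concrete, here is a minimal sketch of context-aware policy evaluation along the lines described above. The rule structure, attribute names, and actions are hypothetical illustrations, not Zscaler's actual policy engine.

```python
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    ALLOW = "allow"        # permit the transaction
    CAUTION = "caution"    # warn the user, then permit
    ISOLATE = "isolate"    # render in browser isolation (pasting disabled)
    REDIRECT = "redirect"  # steer the user to a sanctioned alternative
    BLOCK = "block"        # deny the transaction outright


@dataclass
class Request:
    user_group: str      # e.g. "engineering", "finance"; illustrative only
    app_risk: str        # assumed risk tier: "low", "medium", "high"
    app_sanctioned: bool


def evaluate(req: Request) -> Action:
    """Context-aware decision: risk depends on who is doing what, where."""
    if req.app_sanctioned:
        return Action.ALLOW
    if req.app_risk == "high":
        # Unsanctioned high-risk app: redirect to the approved alternative.
        return Action.REDIRECT
    if req.user_group == "finance":
        # Groups handling sensitive data get isolation, which blocks pasting.
        return Action.ISOLATE
    # Everyone else may proceed after an explicit warning.
    return Action.CAUTION


print(evaluate(Request("finance", "medium", app_sanctioned=False)))  # Action.ISOLATE
```

The point of the sketch is the shape of the decision, not the specific rules: every transaction is evaluated against its context, and the answer is richer than a binary allow-or-block.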
Finally, Zscaler's data protection tools mean we can allow employees to use certain public AI apps while preventing them from inadvertently sending out sensitive information. Our research shows more than 4 million data loss prevention (DLP) violations in the Zscaler cloud, instances where sensitive enterprise data, such as financial data, personally identifiable information, source code, and medical data, was about to be sent to an AI application and the transaction was blocked by Zscaler policy. Without that DLP enforcement, these incidents would have resulted in real data loss.
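Conceptually, inline DLP inspects outbound content against sensitive-data patterns before it reaches the AI app. Below is a heavily simplified sketch of that idea; the patterns and decision logic are illustrative assumptions, and production DLP engines rely on far richer detection techniques than a few regexes.

```python
import re

# Illustrative patterns only; real DLP engines combine many detection
# techniques (exact data matching, fingerprinting, dictionaries) beyond regexes.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_secret_hint": re.compile(r"(?i)aws_secret_access_key"),
}


def inspect_prompt(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in an outbound prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]


prompt = "Summarize this: customer SSN 123-45-6789, card 4111 1111 1111 1111"
violations = inspect_prompt(prompt)
if violations:
    # In an inline deployment, the transaction would be blocked and logged here.
    print(f"Blocked: matched {violations}")
else:
    print("Allowed")
```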
Balancing productivity with protection
This is not about preventing AI adoption; it is about shaping it responsibly. Security and productivity do not have to be at odds. With the right tools and mindset, organizations can achieve both: empowering users and protecting data.
Learn more at zscaler.com/security