David Kelman is the Field CTO at Cymulate and a senior technical customer-facing professional in the field of information and cyber security. David leads customers to success and high protection standards.
Cymulate is a cyber security company that provides continuous security validation through automated attack simulation. Its platform enables organizations to continuously test, assess and adapt their security posture by emulating real-world cyber threats, including ransomware, phishing and lateral movement attacks. By offering Breach and Attack Simulation (BAS), exposure management and security posture management, Cymulate helps businesses identify weaknesses and improve their defenses in real time.
What do you see as the primary driver behind the rise of AI-related cyber security threats in 2025?
AI-related cyber security threats are rising because access to AI has expanded. Threat actors now have AI tools that can help them iterate on malware, craft more convincing phishing emails, and scale their attacks to broaden their reach. These tactics are not "new", but the speed and precision with which they are being deployed have greatly increased the already long backlog of cyber threats that security teams need to address. Organizations are rushing to implement AI technology without fully understanding the security controls that need to be put around it to ensure it is not easily exploited by threat actors.
Are there specific industries or sectors that are more vulnerable to these AI-related threats, and why?
Industries that are constantly sharing data across channels among employees, partners, or customers are susceptible to AI-related threats, because AI is making it easier for threat actors to scale social engineering schemes. Phishing scams are effectively a numbers game, and if attackers can now send more authentic-looking emails, their success rate will increase. Organizations that expose their AI-powered services to the public also invite attackers to try to take advantage of them. While that is an inherent risk of making services public, it is important to account for it.
What are the major vulnerabilities organizations face when using public LLMs for business functions?
Data leakage is probably the number one concern. When using a public large language model (LLM), it is difficult to know where your data will go, and the last thing you want is to accidentally upload sensitive information to a publicly accessible AI tool. If you need to analyze confidential data, keep it in-house. Don't hand it to a public LLM that could turn around and leak that data onto the broader internet.
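A minimal sketch of that principle, assuming a Python pre-processing step with purely hypothetical regex patterns (not any particular DLP product): sensitive tokens are stripped locally before a prompt is ever allowed to leave the environment.

```python
import re

# Hypothetical patterns for illustration; a real deployment would use a vetted
# data-classification or DLP library rather than ad-hoc regexes.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub(text: str) -> str:
    """Replace sensitive tokens with placeholders before any external submission."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

if __name__ == "__main__":
    prompt = "Summarize: jane.doe@example.com, card 4111 1111 1111 1111, SSN 123-45-6789"
    print(scrub(prompt))  # nothing sensitive survives to the outbound request
```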
How can enterprises effectively secure sensitive data while testing or deploying AI systems in production?
When testing AI systems in production, organizations should adopt an offensive mindset (as opposed to a defensive one). By this I mean security teams should continuously test and validate the security of their AI systems, rather than reacting to threats as they arrive. Continuous monitoring for attacks and validation of security controls can help ensure that sensitive data is protected and that security solutions are working as intended.
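One way to picture that continuous-validation loop, as a hedged sketch with hypothetical check functions (the names and the hourly cadence are assumptions, not any vendor's API): each control is exercised on a schedule and any failure is surfaced immediately rather than waiting for a periodic review.

```python
import time
from datetime import datetime, timezone

# Hypothetical control checks; in practice these would invoke your BAS or
# validation tooling and return whether the simulated attack was stopped.
def egress_filter_blocks_exfil() -> bool:
    return True  # placeholder result of a simulated data-exfiltration attempt

def llm_gateway_redacts_pii() -> bool:
    return True  # placeholder result of a prompt-injection / leakage probe

CHECKS = {
    "egress filter blocks exfiltration": egress_filter_blocks_exfil,
    "LLM gateway redacts PII": llm_gateway_redacts_pii,
}

def run_validation_cycle() -> None:
    """Run every control check and flag any failure for investigation."""
    timestamp = datetime.now(timezone.utc).isoformat()
    for name, check in CHECKS.items():
        status = "PASS" if check() else "FAIL - investigate"
        print(f"{timestamp} {name}: {status}")

if __name__ == "__main__":
    while True:           # continuous, not quarterly
        run_validation_cycle()
        time.sleep(3600)  # hypothetical hourly cadence
```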
How can organizations defend against continuous AI-driven attacks?
While threat actors are using AI to evolve their attacks, security teams can use AI to keep their breach and attack simulation (BAS) tools up to date and ensure they are protected against emerging threats. Tools such as Cymulate's daily threat feed load the latest emerging threats into the breach and attack simulation software every day, so security teams can validate their organization's cyber security posture against the most recent threats. AI can help automate this process, allowing organizations to stay agile and prepared to face the latest threats.
What role do automated security validation platforms, such as Cymulate, play in reducing the risks posed by AI-driven cyber threats?
Automated security validation platforms can help organizations stay on top of AI-powered cyber threats by identifying, validating and prioritizing exposures. With AI serving as a force multiplier for attackers, it is important not only to find potential weaknesses in your network and systems, but also to validate which ones pose a real danger to the organization. Only then can exposures be effectively prioritized, allowing organizations to remediate the most dangerous threats before moving on to lower-pressure items. Attackers are using AI to scan digital environments for potential weaknesses before launching highly targeted attacks, which means the ability to address dangerous weaknesses in an automated and efficient manner has never been more important.
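As an illustration of that prioritization idea (this scoring scheme is an assumption for the example, not Cymulate's actual algorithm), a validated exploit against a critical asset should rank above a higher-severity finding that no simulation could actually reproduce.

```python
from dataclasses import dataclass

@dataclass
class Exposure:
    name: str
    severity: float           # 0-10, e.g. a CVSS-style base score
    exploit_validated: bool   # did a simulated attack actually succeed?
    asset_criticality: float  # 0-1, business weight of the affected asset

def risk_score(e: Exposure) -> float:
    """Weight raw severity by validation outcome and asset importance."""
    validation_factor = 1.0 if e.exploit_validated else 0.3
    return e.severity * validation_factor * e.asset_criticality

# Hypothetical findings for illustration only.
exposures = [
    Exposure("Unpatched VPN appliance", 9.8, True, 1.0),
    Exposure("Verbose error pages on test site", 5.3, True, 0.2),
    Exposure("Theoretical protocol downgrade", 7.5, False, 0.6),
]

for e in sorted(exposures, key=risk_score, reverse=True):
    print(f"{risk_score(e):5.2f}  {e.name}")
```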
How can enterprises incorporate breach and attack simulation tools to prepare for AI-driven attacks?
BAS software is a key element of exposure management, allowing organizations to recreate real-world attack scenarios they can use to validate security controls against today's most pressing threats. The latest threat intelligence from the Cymulate threat research group, combined with emerging threats and new simulations from primary research, is applied to the BAS tool daily, showing security leaders whether a new threat would be blocked or detected by their existing security controls. With BAS, organizations can also tailor AI-driven simulations to their unique environments and security policies, using an open framework to create and automate custom campaigns and advanced attack scenarios.
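To make the idea of a custom campaign concrete, here is a hedged sketch of what such a scenario definition might look like; the structure, field names and summarize helper are hypothetical and do not represent Cymulate's actual template or API format (the MITRE ATT&CK technique IDs are real).

```python
# Hypothetical campaign definition: a phishing-to-lateral-movement kill chain
# an organization might simulate against its own controls.
custom_campaign = {
    "name": "AI-assisted phishing to lateral movement",
    "schedule": "daily",
    "stages": [
        {"technique": "T1566.002", "description": "Spearphishing link with LLM-generated lure"},
        {"technique": "T1078",     "description": "Use of harvested credentials"},
        {"technique": "T1021.002", "description": "Lateral movement over SMB/admin shares"},
    ],
    "success_criteria": "every stage blocked or detected by existing controls",
}

def summarize(campaign: dict) -> None:
    """Print the simulated kill chain so analysts can review it before automating it."""
    print(f"Campaign: {campaign['name']} (runs {campaign['schedule']})")
    for i, stage in enumerate(campaign["stages"], start=1):
        print(f"  {i}. {stage['technique']}: {stage['description']}")
    print(f"Pass condition: {campaign['success_criteria']}")

summarize(custom_campaign)
```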
What top three recommendations would you give to security teams to stay ahead of these emerging threats?
Every day, threats are becoming more sophisticated. Organizations that lack an effective exposure management program risk falling dangerously behind, so my first recommendation is to implement a solution that allows the organization to prioritize its exposures effectively. Next, make sure the exposure management solution includes BAS capabilities that allow the security team to simulate emerging threats (AI-driven and otherwise) to see how the organization's security controls perform. Finally, I would recommend leveraging automation to ensure that validation and testing happen on a continuous basis, not only during periodic reviews. It is important to have up-to-date knowledge, with the threat landscape changing on a minute-to-minute basis. Threat data from the previous quarter is already disappointingly obsolete.
What developments in AI technology do you foresee in the next five years that could either increase or reduce cyber security risks?
A lot will depend on how accessible AI becomes. Today, low-level attackers can use AI capabilities to scale and accelerate their attacks, but they are not inventing new, unprecedented tactics; they are just making existing tactics more effective. Right now, we can (mostly) compensate for this. But if AI becomes more advanced and remains highly accessible, that could change. Regulations will play a role here: the European Union (and, to some extent, the United States) has taken steps to govern how AI is developed and used, so it will be interesting to see what effect that has on AI development.
Do you anticipate changes in how organizations prioritize AI-related cyber security threats compared to traditional cyber security challenges?
We are already seeing organizations recognize the value of solutions such as BAS and exposure management. AI is allowing threat actors to launch sophisticated, targeted campaigns quickly, and security teams need every advantage to help them stay ahead. Organizations using validation tools will be better positioned to prioritize the most pressing and dangerous threats and to keep their heads above water. Remember, most attackers are looking for an easy score. You may not be able to stop every attack, but you can avoid making yourself an easy target.
Thanks for the great interview. Readers who want to learn more should visit Cymulate.