
Emerging Cyber Threats in the AI Landscape


Recent advances in Artificial Intelligence (AI), especially generative models and large language models (LLMs), have transformed the cybersecurity landscape. While AI enables powerful new defenses, it also expands attackers' capabilities by automating reconnaissance, accelerating exploitation, and enabling unprecedented forms of deception and social manipulation. This workshop brings together researchers from security, machine learning, human-centered computing, and network systems to explore the rapidly evolving threat landscape and to develop robust, trustworthy, and secure AI-driven systems. We are particularly interested in work that examines how emerging AI technologies, such as LLMs, reshape both offensive and defensive operations in modern networked environments, as well as studies that investigate the societal risks emerging from the large-scale deployment of generative technologies. We invite high-quality, previously unpublished research papers, position papers, and system demos addressing security challenges and opportunities at the intersection of AI, machine learning, and networked systems. Contributions that combine empirical analysis, theoretical insight, and practical system design are especially encouraged.

Workshop Paper Submission Guidelines

  • Workshop Paper Submission Deadline: Feb. 20, 2026

  • Notification: March 20, 2026

  • Page Limit: 5 pages (IEEE conference format)

  • Presentation Mode: Hybrid (workshop paper authors may choose their presentation mode)

Topics of Interest
Topics include, but are not limited to:

Adverse Applications of Generative AI and Machine Learning (ML)

  • Generative AI-driven adversarial attacks

  • Generative AI-driven scams and phishing attacks

  • AI-driven online deception, deepfakes, misinformation/disinformation, and other social manipulations

  • Jailbreaks, prompt/data injection, scalable content automation (e.g., scam automation), and other attacks using LLMs

  • Fraud, impersonation, and social engineering attacks powered by Generative AI (e.g., voice cloning models)

  • Multi-agent attack approaches

AI-driven Defenses

  • Malware analysis and detection

  • Phishing detection

  • Intrusion detection

  • AI model security and testing

  • Privacy protections for AI-enabled systems (e.g., healthcare, finance, education)

  • Robustness of AI-enabled systems

  • Defense against multi-agent attacks

Trust and Safety in Human-Centered Generative AI Systems

  • Explainable and trustworthy AI for user-centric applications

  • Trust and reputation systems in multi-agent and peer-to-peer networks

  • Robotics, security, and safety

  • GenAI alignment

  • Hallucination mitigation

  • RAG reliability

  • Post-deployment monitoring, auditing, and compliance

  • Incident reporting and human oversight

  • Generative AI interpretability

  • Processes for high-risk GenAI

AI and Network Security

  • Autonomous penetration-testing agents for network exploitation

  • Generative AI for crafting malicious network payloads

  • Attacks and defenses on AI models deployed on edge and network infrastructure

  • Poisoning edge-based federated learning via network-layer attacks

  • AI-based zero-trust network access management

  • AI-driven attacks and defenses on network slicing and virtualization

  • AI-enabled attacks in vehicular ad-hoc networks (VANETs)

Generative AI-driven Social Cyber Threats

  • Generative AI biases against marginalized and/or specific groups (e.g., ethnicity and political affiliation)

  • Generative AI errors that exploit marginalized groups (e.g., reliance on hallucinated output)

  • Use of Generative AI to exacerbate polarization (e.g., synthetic media)

  • Illegal content generation (e.g., CSAM and NCII)

  • Privacy breaches, data leakage, and model inversion attacks

  • Bias, discrimination, and representational harms in ML

  • Accessibility and language equity

  • Harms in high-risk domains

Workshop Chairs

Hilal Pataci
The University of Texas at San Antonio
hilal.pataci@utsa.edu

Xianping Wang
Fayetteville State University
xwang3@uncfsu.edu

Steven Ullman
The University of Texas at San Antonio
steven.ullman@utsa.edu

PC Members

Karim Elish

Denis Ulybyshev

Wesam Al Amiri

Bio-sketches

Dr. Nishant Vishwamitra

The University of Texas at San Antonio
nishant.vishwamitra@utsa.edu

Dr. Nishant Vishwamitra is currently an Assistant Professor of Information Systems and Cybersecurity at the University of Texas at San Antonio (UTSA). Dr. Vishwamitra received his Ph.D. in Computer Science and Engineering from the University at Buffalo (UB), USA, in 2022, and joined UTSA as an assistant professor after graduating. In 2023, he received an NSF CRII grant to support his research on defense against online abuse and harassment. His research interests are in the areas of design science research and predictive analytics, focusing on Artificial Intelligence (AI), defense against online abuse and harassment, and cybersecurity. He can be contacted at nishant.vishwamitra@utsa.edu.

Dr. Arijet Sarker

Dr. Arijet Sarker is currently an Assistant Professor in the Department of Computer Science at Florida Polytechnic University. Prior to joining Florida Poly, he received his Ph.D. in Engineering (Security) from the Computer Science Department at the University of Colorado Colorado Springs (UCCS) in 2024. His Ph.D. research was supported by NSF and ETRI. His research focuses on addressing and resolving security and privacy issues in distributed networking systems. His research areas include, but are not limited to, cellular networks, distributed networks, vehicular networks, blockchain, electronic voting, software assurance, and supply chain security. Contact information: asarker@floridapoly.edu.

CONTACT US

event.manager AT svcsi.org



© SVCSI. All rights reserved.

SVCSI is a 501(c)(3) non-profit organization (Public Charity).
