International Conference on Speech, Multimodal and Advanced Communication Systems ICSMACS'26

June 29–30, 2026, Algiers, Algeria

Welcome to the First International Conference on Speech, Multimodal, and Advanced Communication Systems (ICSMACS'26), Algiers, Algeria, June 29–30, 2026.

We are thrilled to welcome you in person to Algiers, Algeria.

The Laboratory of Speech Communication and Signal Processing (LCPTS) at the Faculty of Electrical Engineering of USTHB is organizing the first International Conference on Speech, Multimodal, and Advanced Communication Systems (ICSMACS'26) on June 29–30, 2026.

The first International Conference on Speech, Multimodal, and Advanced Communication Systems (ICSMACS'26) is a prominent event that brings together researchers, academics, engineers, and industry professionals to discuss advances in speech processing, natural language processing, multimodal human-computer interaction, and advanced communication systems. The conference seeks to promote information sharing and multidisciplinary cooperation among specialists in these domains.

ICSMACS'26 is announced on the IEEE website:

https://conferences.ieee.org/conferences_events/conferences/conferencedetails/68968

Motivation

The rapid progress in speech processing, the increasing sophistication of multimodal interaction, and the continuous evolution of advanced communication systems drive this conference. ICSMACS’26 aims to explore the synergy between these critical fields to foster the development of next-generation communication technologies. By addressing key challenges and opportunities, the conference seeks to enable more efficient, natural, and context-aware interactions between humans and technology.

Objectives

          – Research Advancement: Offer a forum for presenting and debating the most recent advances in communication systems, multimodal interaction, and speech processing.

          – Networking Opportunities: Facilitate networking among academics, industry experts, and students to encourage collaborations and partnerships.

          – Educational Outreach: Host workshops, tutorials, and special sessions to spread knowledge and inform participants about new trends and technologies.

          – Application Showcase: Highlight case studies and real-world applications that illustrate the practical value of speech and communication technology research.


Guest Speakers

IMPORTANT DATES

The Organizing Committee of the ICSMACS conference is pleased to announce an extension of the paper submission deadline to January 5, 2026, in order to allow a greater number of researchers and practitioners to contribute to the event.

Submission of Full Papers: January 5, 2026 (extended from December 20, 2025)
Notification of Acceptance: March 5, 2026 (previously February 20, 2026)
Final Camera-Ready Papers: April 5, 2026 (previously March 20, 2026)
Early Registration: April 15, 2026
Late Registration: May 15, 2026
Conference Dates: June 29–30, 2026

Track 1: Speech and Signal Processing

  • Speech recognition, synthesis, enhancement, and separation
  • Speech coding
  • Spoken language identification
  • Speaker identification, verification, and diarization
  • Emotion detection, paralinguistics, and affective speech analysis
  • End-to-end and self-supervised models for speech/audio representation
  • Robust, multilingual, and low-resource speech systems
  • Speech command recognition and spoken language understanding
  • Audio event detection and sound scene analysis
  • Deepfake speech detection and spoofing countermeasures
  • Benchmarking resources, speech datasets, and evaluation methodologies
  • Biomedical signal processing for health tech
  • Image and video processing
  • Pattern recognition

Track 2: Multimodal Human-Computer Interaction and Intelligent Interfaces

  • Multimodal interaction: speech, gesture, gaze, haptics, vision
  • Audio-visual signal processing for human-computer interaction
  • Large Multimodal Models (LMMs) and generative AI
  • Embodied conversational agents, avatars, and virtual assistants
  • Context-aware, adaptive, and explainable multimodal systems
  • Fusion and alignment of multimodal signals (vision, audio, text, bio-signals)
  • Multimodal behavior analysis: emotion, intent, engagement
  • Computer vision for perception, tracking, and activity recognition
  • Immersive, accessible, and inclusive multimodal applications
  • Human-robot interaction and AI-driven assistive systems
  • Interaction in VR, AR, and mixed reality environments

Track 3: Natural Language Processing & Generative AI

  • Large Language Models (LLMs) for generation, understanding, and dialogue
  • Prompt engineering, fine-tuning, and domain adaptation
  • Multilingual, low-resource, and cross-lingual NLP
  • Dialogue systems and conversational AI
  • Information extraction, summarization, and knowledge representation
  • Sentiment, emotion, and opinion analysis in text
  • Fact-checking, misinformation detection, and ethical AI for NLP applications
  • Text command and intent recognition, and spoken language understanding
  • Text categorization, classification, and topic modeling
  • Natural Language Understanding (NLU) and semantic parsing
  • NLP for multimodal, social-impact, and assistive applications

Track 4: Advanced Communication Systems and Intelligent Networking

  • Next-generation networks (5G/6G and beyond)
  • IoT, vehicular networks (VANETs), and specialized communications
  • AI and ML for network optimization, spectrum management, and orchestration
  • Edge AI, federated learning, and distributed intelligence
  • Multimedia signal processing and smart city applications
  • Quantum, optical, and cognitive communication systems
  • Software-defined networks and radio systems
  • Sustainable and energy-efficient network infrastructures
  • Security, privacy, cybersecurity, and digital forensics
  • Compressed sensing, time-series analysis, and anomaly detection
  • Advanced source and speech coding for modern communication networks
  • Channel coding and error correction

Track 5: Hardware and System Implementation for Intelligent Communication

  • Edge AI and on-device processing for speech and multimodal applications
  • Neuromorphic computing for speech, audio, and multimodal signal processing
  • FPGA and ASIC design for AI acceleration
  • Low-power and energy-efficient hardware architectures
  • Real-time and embedded multimodal systems
  • Hardware-aware AI algorithm design and optimization
  • System-on-Chip (SoC) design for intelligent communication systems
  • Prototyping, testing, and development platforms
  • Benchmarking and performance evaluation of hardware platforms
  • Hardware security and trusted execution for embedded AI
  • AI-powered assistive and rehabilitation technologies for speech and hearing disorders

Latest Events

Our previous Doctoral Days event was a great success, bringing together researchers, students, and experts from across the country. It featured insightful presentations, engaging poster sessions, and valuable discussions on emerging technologies in speech processing, AI, and human-machine interaction. The strong participation and positive feedback highlighted the growing interest in our research themes and set a high standard for this year’s edition.

SPONSORS
