Overview
This is the first time we are organizing a special session at the CLAWAR conference; it also provides a platform to present our proposal for an international grant collaboration between China and Kazakhstan.

Recent advancements in AI, particularly in Vision–Language–Action (VLA) models, have significantly enhanced robotic perception and decision-making. However, true dexterity requires more than just visual and cognitive reasoning—it demands an ability to physically interact with the environment. Tactile sensing and haptic stimulation bridge this gap by enabling robots to perceive contact forces, adapt to object properties, and refine manipulation strategies. As AI-driven models advance, the integration of haptics and tactile sensing is becoming essential for robotic grasping, manipulation, and simulation.

This special session will explore the latest developments in haptics, tactile sensing, and perception while addressing challenges in assistive devices and robotic applications. Discussions will also highlight the role of haptic stimulation in enhancing sensory feedback, improving robotic control, and enabling more intuitive human-robot interactions.
Touch 2025: Tactile Sensing and Haptic Technologies in Touch-Driven Robotics
  • Tactile Sensors and Materials
    Novel sensing materials and structures for accurate tactile perception.
  • Haptic Feedback Systems
    Innovations in wearable haptic devices, force feedback, and multimodal interaction.
  • Artificial Tactile Perception
    Algorithms for interpreting tactile data in robotics.
  • Tactile Data Processing and Machine Learning
    AI-driven approaches for tactile signal processing and response generation.
  • Tactile-Based Robotic Manipulation
    Enhancing robotic grasping and dexterous manipulation with tactile sensing.
  • Experimental Validation and Human Studies
    Assessing the effectiveness of tactile interfaces through human-subject experiments.
  • Bio-Inspired Tactile Sensors
    Mimicking biological touch mechanisms for enhanced robotic perception.
  • Multi-Modal Sensor Fusion
    Combining tactile, vision, and force sensing into more robust and intelligent perception systems.
  • VLA for Robot Manipulation
    Vision–Language–Action models that let users instruct robotic manipulation directly through natural language.
Conference Program
The program will be announced soon.
Organizers
Zhanat Kappassov (zhkappassov@nu.edu.kz),

Togzhan Syrymova (togzhan.syrymova@nu.edu.kz),

Jabrail Chumakov, Nurlan Kabdyshev, Temirlan Galimzhanov, Amir Yelenov (PhD students)
Submission Template: How It Works
1. Download the template (Link to template LaTeX).
2. Open the downloaded folder.
3. Open the template in your LaTeX editor.
Join Us at the Conference
Connect with us at the upcoming conference, where we'll showcase our latest research and innovations. Learn about our breakthroughs in robotics tactile technology and explore future collaborations.