My Short CV | University of Lincoln | School of Computer Science | Contact: syue@lincoln.ac.uk or Skype yue.shigang or Phone +44 1522 837397
Prof. Shigang Yue

Research | Publications | Teaching | Talks | Home

Research Interests

I am interested in understanding how visual brains work, how to model visual neurons and systems, and how to apply these models and computational intelligence to solve real-world problems in robotics, intelligent vehicles, human security and healthcare. In this sense, I prefer multidisciplinary approaches. To illustrate my specific research interests and experience, I have listed the relevant keywords below.

  • Insect visual neuron modelling, biological visual systems modelling and realization
  • Locust's visual system, LGMD/DCMD, LGMD1, LGMD2, collision detection, EMD, attention models
  • Motion sensitive neurons, directional selective neurons, rotational perception neurons
  • Robotics, neuromorphic vision chip, FPGA, VLSI, collision pattern recognition
  • Non-contact intent detection, physiological data monitoring, stimuli, body language
  • Artificial intelligence, neural networks, sensory information processing, spiking neural networks
  • Image processing, computer vision, thermal image processing, fMRI/DTI image analysis, endoscopic image analysis
  • Mobile robot navigation, swarm robots, mini/micro robots, Colias, Colias Phi
  • Flexible robotic arms, force/torque sensors, manipulation, deformable objects, flexible body dynamics, finite element methods
  • Artificial life, co-evolution, multiple neural sub-systems, coordination, redundant vision sub-system

Projects

My research has been supported by the EU FP6/FP7/H2020 programmes, the Alexander von Humboldt Foundation (AvH), the Home Office, NTT, UoL and other public funding bodies. Their generous support is very much appreciated. Some of the projects are listed below.

Colias: a micro robot developed in CIL for swarm intelligence research (Farshad Arvin's paper, open access); it is also an ideal platform for educational use thanks to its cost-effective performance. A specially designed visual module (Cheng Hu's paper in PDF, and YouTube video) with collision detection capability can be integrated with Colias easily for complex research experiments. The 4th generation (Colias4) has become a convenient platform for various experiments; its design details are open access.
COS(phi): Colias in the artificial pheromone system is described in the IROS2015 paper, and demonstrated in this YouTube clip. This is the first attempt in the world to use controlled light to simulate pheromones in physical swarm experiments. As the light can be varied (e.g. in its RGB components) and sensed by the robots' sensors, this method opens the door to simulating several types of pheromone at the same time in a physical swarm experiment.
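To make the idea concrete, here is a minimal sketch of how the RGB channels of a projected light field could encode several pheromone types at once, with evaporation over time. All names and parameters are illustrative assumptions, not the COS(phi) implementation.

```python
import numpy as np

# Sketch only: each RGB channel of the projected light field stands in for
# one pheromone type; channels decay over time (evaporation), and robots
# deposit into / read from the field at their own positions.
H, W = 480, 640                       # projected arena resolution (pixels)
field = np.zeros((H, W, 3))           # 3 pheromone types -> R, G, B

EVAPORATION = 0.99                    # per-step multiplicative decay (assumed)

def deposit(field, x, y, channel, amount=1.0):
    """A robot lays pheromone of one type at its current pixel position."""
    field[y, x, channel] = min(field[y, x, channel] + amount, 1.0)

def sense(field, x, y, radius=5):
    """A robot reads the mean local intensity of each pheromone channel."""
    patch = field[max(0, y - radius):y + radius + 1,
                  max(0, x - radius):x + radius + 1]
    return patch.mean(axis=(0, 1))    # one reading per pheromone type

def evaporate(field):
    """Decay all channels; the projector then displays the updated field."""
    field *= EVAPORATION
```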
HAZCEPT: Towards zero road accidents - nature inspired hazard perception. The number of road traffic accident fatalities worldwide has recently reached 1.3 million each year, with between 20 and 50 million injuries being caused by road accidents. In theory, all accidents can be avoided. Studies have shown that more than 90% of road accidents are caused by or related to human error. Developing an efficient system that can detect hazardous situations robustly is the key to reducing road accidents. The HAZCEPT consortium focuses on automatic hazard scene recognition for safe driving, addressing hazard recognition from three aspects - the lower visual level, the cognitive level, and drivers' factors in the safe driving loop. Click hazcept for more details. (2013-2017, coordinated by UoL)
LIVCODE: Life-like information processing for robust collision detection (EU FP7, coordinator). Animals are especially good at collision avoidance, even in a dense swarm. In the future, every kind of man-made moving machine, such as ground vehicles, robots, UAVs, aeroplanes, boats, even moving toys, should have the same ability to avoid collisions with other things, if a robust collision detection sensor is available. The six partners of this EU FP7 project from the UK, Germany, Japan and China will look further into insect visual pathways and take inspiration from animal vision systems to explore robust embedded solutions for vision-based collision detection for future intelligent machines. Click livcode for more details. (2012-2016, coordinated by UoL)
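As a flavour of the approach, the sketch below implements a much simplified LGMD-style collision detector in the spirit of published LGMD models: excitation from luminance change, one-frame-delayed lateral inhibition, and global summation against a threshold. The parameters are illustrative assumptions, not the LIVCODE code.

```python
import numpy as np
from scipy.ndimage import uniform_filter

W_INHIB = 0.6        # strength of lateral inhibition (assumed)
THRESHOLD = 0.12     # mean membrane potential that triggers an alarm (assumed)

def lgmd_step(prev_frame, frame, prev_excitation):
    """One time step: returns (collision_alarm, excitation for next step)."""
    # Photoreceptor layer: luminance change between consecutive frames.
    excitation = np.abs(frame.astype(float) - prev_frame.astype(float)) / 255.0
    # Inhibition layer: spatially spread, one-frame-delayed excitation.
    inhibition = uniform_filter(prev_excitation, size=5)
    # Summation layer: excitation minus lateral inhibition, rectified.
    s = np.maximum(excitation - W_INHIB * inhibition, 0.0)
    # LGMD cell: global sum compared with a threshold -> collision alarm.
    membrane = s.mean()
    return membrane > THRESHOLD, excitation
```

A looming object produces expanding edges whose excitation outruns the delayed, spread-out inhibition, so the membrane potential rises sharply just before collision.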
EYE2E: Building visual brains for fast human machine interaction (EU FP7, coordinator). In the real world, many animals possess almost perfect sensory systems for fast and efficient interactions within dynamic environments. Vision, as an evolved organ, plays a significant role in the survival of many animal species, and the mechanisms in biological visual pathways provide good models for developing artificial vision systems. The four partners of this consortium will work together to explore biological visual systems at both lower and higher levels through modelling, simulation, integration and realization in chips, to investigate fast image processing methodologies for human machine interaction through VLSI chip design and robotic experiments. Click eye2e for more details. (2011-2015, coordinated by UoL)
Mini UAVs: UAVs the size of a hand, specially designed for studying swarm intelligence. They may also be used in other application areas, for example as platforms for flying robot coordination research, collision avoidance research, surveillance, human robot interaction, and even rescue. Further details will be available soon.
DiPP: detecting hostile intent by measuring psychological and physiological reactions to stimuli (UK GOV, PI). This is a fascinating project tackling a huge challenge with innovative ideas. The feasibility study has proved the concept and methodologies, and a patent has been filed. We are taking steps to push this idea further forward. Potential investors and/or collaborators are welcome to contact me about involvement in further development towards industrial applications.
Pedestrian: pedestrian collision detection with bio-inspired neural networks (Funded Research Studentship). In this project, only the pedestrians that are on, or walking into, a collision course with a moving vehicle are monitored and tracked with hierarchical neural networks. Several types of bio-inspired neural networks are combined to achieve better performance for pedestrian collision detection.
Evolving bio-plausible neural systems to control aerial agents (UoL funded PhD Studentship). This project is looking for ways to evolve vision-based neural controllers for autonomous agents. The agents are initially evolved in 3D virtual environments, and the best-performing agents will be tested with physical counterparts flying or running in the real world. The platform developed and used in this study can be accessed via Mark Smith's page altURI if you have further interest.
Vehicle or robot collision detection inspired by the locust visual pathway (LOCUST, EU). Life-like image processing inspired by the locust visual pathway is used for vehicle collision detection.
Bio-plausible Navigation: with a pair of LGMDs, a robot can navigate easily in a dynamic environment; click [video] on YouTube to see how it changes its course to avoid a rolling can. The short movie presented at AISB'07 is on YouTube [movie] (90s); click to see how a robotic 'locust' with panoramic vision responds appropriately to approaching objects.
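A minimal sketch of the steering rule such a pair of LGMDs suggests (an illustrative assumption, not the code used on the robot): each LGMD watches one visual hemifield, and the robot turns away from the side reporting the stronger collision excitation.

```python
def steer(left_excitation: float, right_excitation: float,
          gain: float = 1.0) -> float:
    """Turn command from a pair of LGMD outputs.

    Positive = turn left, negative = turn right: the robot turns away
    from the hemifield with the stronger collision excitation.
    """
    return gain * (right_excitation - left_excitation)

# Example: an object looming in the right hemifield -> turn left.
assert steer(left_excitation=0.1, right_excitation=0.8) > 0
```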
Robotic Manipulation Skills (AvH Research Fellow): intelligent robotic systems can be constructed hierarchically, starting from primary skills, each of which can deal with similar tasks in similar situations. Taking the manipulation of soft/flexible objects as an example - to insert an elastic object into a hole efficiently, the robotic arm can either damp the vibration very quickly, as shown in the video clips [damp1] with a PID controller and [damp2] with a fuzzy controller, or carry out the insertion directly if the status of the deformable object is known with the help of different sensors and models, as shown in the two video clips [FastInsertion1] and [FastInsertion2].
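For the damping case, a minimal PID sketch gives the idea (illustrative gains and sensor values, not the controllers shown in the clips): the controller reads the tip deflection of the elastic object and drives it towards zero.

```python
class PID:
    """Textbook PID controller; gains and time step are assumed values."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        """Return the corrective command for the current deflection error."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Usage: at each control step, read the deflection (e.g. from a force/torque
# sensor) and command the arm to counteract it; the setpoint is zero deflection.
pid = PID(kp=2.0, ki=0.1, kd=0.4, dt=0.01)
deflection = 0.02                       # metres, a hypothetical sensor reading
command = pid.update(0.0 - deflection)
```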


Back to CIL homepage

Contact via post: Prof. S. Yue, School of Computer Science, University of Lincoln, MHAT Building, Brayford Pool, Lincoln LN6 7TS, United Kingdom