A Combined Vision-Based Multiple Object Tracking and Visual Odometry System

Over the past year, we have built on our visual odometry algorithm (which I shared in an earlier post, along with the public source code) to develop a multi-object tracking algorithm capable of tracking objects in the 3D world from a moving camera. One example application is a self-driving vehicle detecting moving objects (other cars) that are on a collision course.

We have written this work up in a paper, which was accepted this week by the IEEE Sensors Journal. It will take a few weeks for it to appear on IEEE Xplore.

M. Aladem, S. A. Rawashdeh, “A Combined Vision-Based Multiple Object Tracking and Visual Odometry System”, IEEE Sensors Journal. Accepted 8/2019.

Below is a video demo. Credit to Mohamed Aladem, my Ph.D. student who is on track to graduate in mid-2020. 

Lightweight Visual Tracking (LVT): source code and paper published

We have recently published the source code of our visual odometry algorithm, which supports stereo and RGB-D camera systems. 

Source code is available at: (github link).  It is compatible with ROS (Robot Operating System). 
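For readers who want to try it, below is a minimal sketch of how the odometry output of a ROS-compatible visual odometry node could be consumed from Python. The topic name /lvt/odometry is a placeholder assumption for illustration; please check the repository’s documentation for the actual topic names and message types.

    # Minimal ROS (rospy) sketch: listen to a visual odometry topic and print the pose.
    # NOTE: the topic name "/lvt/odometry" is an assumed placeholder, not necessarily
    # the topic published by the released package.
    import rospy
    from nav_msgs.msg import Odometry

    def on_odometry(msg):
        # Estimated position in the odometry frame.
        p = msg.pose.pose.position
        rospy.loginfo("pose: x=%.2f y=%.2f z=%.2f", p.x, p.y, p.z)

    if __name__ == "__main__":
        rospy.init_node("vo_odometry_listener")
        rospy.Subscriber("/lvt/odometry", Odometry, on_odometry)
        rospy.spin()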

Our journal publication was recently accepted and is available at Sensors. This is the paper’s abstract:

Lightweight Visual Odometry for Autonomous Mobile Robots

Vision-based motion estimation is an effective means for mobile robot localization and is often used in conjunction with other sensors for navigation and path planning. This paper presents a low-overhead real-time ego-motion estimation (visual odometry) system based on either a stereo or RGB-D sensor. The algorithm’s accuracy outperforms typical frame-to-frame approaches by maintaining a limited local map, while requiring significantly less memory and computational power in contrast to using global maps common in full visual SLAM methods. The algorithm is evaluated on common publicly available datasets that span different use-cases and performance is compared to other comparable open-source systems in terms of accuracy, frame rate and memory requirements. This paper accompanies the release of the source code as a modular software package for the robotics community compatible with the Robot Operating System (ROS).
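To make the local-map idea in the abstract concrete, here is a rough sketch of a frame-to-local-map pose update. This illustrates the general technique, not the published LVT implementation; the feature choice (ORB), the matching strategy, and all parameter values are assumptions for the sketch.

    # Simplified sketch of estimating the camera pose against a small local map
    # of 3D landmarks (illustrative only; see the repository for the actual code).
    import numpy as np
    import cv2

    def estimate_pose_against_local_map(frame_gray, map_points_3d, map_descriptors, K):
        # map_points_3d: (N, 3) landmark positions; map_descriptors: their ORB descriptors;
        # K: 3x3 camera intrinsic matrix.
        orb = cv2.ORB_create(nfeatures=1000)
        keypoints, descriptors = orb.detectAndCompute(frame_gray, None)

        # Match current-frame features against the local map (brute-force Hamming).
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(map_descriptors, descriptors)

        object_pts = np.float32([map_points_3d[m.queryIdx] for m in matches])
        image_pts = np.float32([keypoints[m.trainIdx].pt for m in matches])

        # Robust PnP recovers the camera pose from the 2D-3D correspondences.
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            object_pts, image_pts, K, None, reprojectionError=3.0)
        if not ok:
            return None
        R, _ = cv2.Rodrigues(rvec)
        return R, tvec

A full system would also triangulate new landmarks from the stereo (or RGB-D) data, add them to the map, and prune old ones so the map stays small; that bookkeeping is what keeps memory bounded compared to the global maps used in full SLAM.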

This video highlights the algorithm in action:

 

Demonstration of a Stereo Visual Odometry Algorithm

I’m pleased to share another demonstration video of our stereo visual odometry algorithm, primarily developed by my student Mohamed Aladem, who is wrapping up his master’s at the University of Michigan – Dearborn. Near-term goals for our lab using this framework are: navigating mobile robots (namely an autonomous snowplow for the ION Autonomous Snowplow Competition – see the previous post), navigating a multi-copter, and exploring solutions for automotive driver assistance systems and future autonomous vehicles.

Publications:

  • M. Aladem, S. A. Rawashdeh, N. Rawashdeh, “Evaluation of a Stereo Visual Odometry Algorithm for Road Vehicle Navigation”, SAE World Congress, April 2017, Detroit, MI
  • S. A. Rawashdeh, M. Aladem, “Toward Autonomous Stereo-Vision Control of Micro Aerial Vehicles”, Proceedings of the IEEE National Aerospace and Electronics Conference, July 2016, Dayton, OH
  • Journal article pending.

 

Stereo Visual Odometry

We have some exciting results. Below is a brief demonstration of our recent success with visual odometry. Using stereo cameras, we track the camera’s motion and pose over time, along with depth sensing (a stereo disparity map) and point cloud generation.
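For readers curious about the depth-sensing step, the following sketch shows the general recipe for producing a disparity map and point cloud from a rectified stereo pair with OpenCV. It illustrates the technique in general rather than our specific implementation; the matcher parameters and the identity Q matrix are placeholder assumptions.

    # Sketch: disparity map and point cloud from a rectified stereo pair (illustrative).
    import numpy as np
    import cv2

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Semi-global block matching; parameter values are illustrative.
    stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=9)
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

    # Q is the 4x4 disparity-to-depth reprojection matrix from stereo rectification
    # (e.g., from cv2.stereoRectify); an identity placeholder is used here.
    Q = np.eye(4, dtype=np.float32)
    points_3d = cv2.reprojectImageTo3D(disparity, Q)  # H x W x 3 point cloud
    cloud = points_3d[disparity > 0]                  # keep pixels with valid disparity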

Publications discussing our approach are in preparation and expected this summer.

3rd Place at IGVC 2016 (Intelligent Ground Vehicle Competition)

I am happy to share that we had a good run at the Intelligent Ground Vehicle Competition this weekend, which took place on the Oakland University campus.

The vehicle uses a LIDAR system for obstacle avoidance, GPS for navigation, and real-time image processing for lane detection.

Of the more than 30 teams participating, the Dearborn team came in third overall, with the 3rd fastest speed on the basic navigation course and a 2nd place finish on the advanced navigation course (tying on performance but with a longer time). Below are some photos and a short video of what the advanced course is like. Prizes totaled $3k and a trophy.

The team of primarily undergraduate students from the Intelligent Systems Club did very well and deserves thanks and congratulations. The team included Michael Bowyer, Erik Aitken, Saad Pandit, Cristian Adam, Matthew Abraham, Siddharth Mahimkar, Emmanuel Obi, Brendan Ferracciolo, Angelo Bertani, and others from the club.

 

Presenting at IEEE Southeast Michigan 2015 Fall Conference

We are excited to be presenting our research on obstacle avoidance for drones at the IEEE SEM 2015 Fall Conference, 5-6 PM, Nov. 17. The talk is titled

“Obstacle Detect, Sense, and Avoid for Unmanned Aerial Systems”

Abstract:

Drones, or Unmanned Aerial Systems (UAS), are expected to be adopted for a wide range of commercial applications and become an aspect of everyday life. The Federal Aviation Administration (FAA) regulates airspace access of unmanned systems and has put forward a road map for UAS adoption for commercial use. It is expected that vehicles flying outside line-of-sight be capable of sensing and avoiding other aircraft and obstacles. Whether the UAS is autonomous or remotely piloted, it is expected that drones become capable of safe flight without depending on communication links, which are susceptible to interruption. Therefore, sensor technologies and real-time processing and control approaches are required on board unmanned aircraft to provide situational awareness without depending on remote operation or inter-aircraft communication. This talk overviews some research activities at the University of Michigan – Dearborn to address these challenges. We are developing a stereo-vision system for obstacle detection on aerial vehicles. Using stereo video (3D video), a depth map can be generated and used to detect approaching objects that need to be avoided. We are also developing a visual navigation approach to enable drones to navigate in GPS-denied environments, such as between buildings or indoors. In addition, a virtual “bumper” system is being developed to override commands given by an inexperienced pilot in the case of an impending crash. Such a system could help prevent incidents such as the video drone crash at the last US Open Tennis Championships.
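As a toy illustration of the “sense” half of sense-and-avoid, the sketch below flags an obstacle when enough depth-map pixels in the central field of view fall inside a safety distance. The thresholds and region size are assumptions for illustration; a real system would also estimate how quickly those points are approaching (time to collision) before commanding an avoidance maneuver.

    # Toy obstacle check on a metric depth map (e.g., derived from stereo disparity).
    import numpy as np

    def obstacle_ahead(depth_m, safety_distance_m=5.0, min_pixels=200):
        # depth_m: H x W array of depth in meters; invalid pixels are 0 or NaN.
        h, w = depth_m.shape
        roi = depth_m[h // 3 : 2 * h // 3, w // 3 : 2 * w // 3]  # central window
        close = np.isfinite(roi) & (roi > 0) & (roi < safety_distance_m)
        return np.count_nonzero(close) >= min_pixels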

IEEE SEM Fall Conf 2015 – Slide Screenshot

Conference Time and Venue:

Tuesday Evening, November 17, 2015, From 4:00 PM to 9:00 PM

University of Michigan – Dearborn
Fairlane Center – North Building
19000 Hubbard Drive, Dearborn,
Michigan 48126

More information on the conference agenda can be found on the main page and in this flyer.

 

CubeSat Star Imaging

We have received an award from the NASA Michigan Space Grant Consortium to develop a star imaging approach for CubeSats using an array of miniature cameras.

Attitude determination for small spacecraft in the 1-5 kilogram range is one of the major technological challenges limiting their utility for a variety of missions. In prior work, we have developed a visual approach for attitude propagation. By tracking the motion of stars in a camera’s field of view, the rotation of the spacecraft can be found in three degrees of freedom. We refer to the approach as a stellar gyroscope. The proposed work builds on the prior success and findings to pursue a promising new topology. Essentially, we will miniaturize the sensor nodes and lenses to design a camera in the size range of modern smartphone cameras capable of star imaging while utilizing the stellar gyroscope algorithm’s noise tolerance in post-processing. This will allow small spacecraft to incorporate up to one camera on each side if needed, with one centralized image processing subsystem.
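The geometric core of the stellar gyroscope can be illustrated with a short sketch: given unit direction vectors to the same stars observed in two frames, the rotation between the frames can be recovered by solving Wahba’s problem, here via SVD. This is a simplified illustration of the principle, not our flight algorithm, and the function name is invented for the example.

    # Estimate the inter-frame rotation from matched star direction vectors (Wahba's problem).
    import numpy as np

    def rotation_from_star_vectors(dirs_prev, dirs_curr):
        # dirs_prev, dirs_curr: (N, 3) matched unit vectors to stars in the two frames.
        # Returns R such that dirs_curr[i] ~= R @ dirs_prev[i].
        B = dirs_curr.T @ dirs_prev                 # 3x3 attitude profile matrix
        U, _, Vt = np.linalg.svd(B)
        d = np.sign(np.linalg.det(U @ Vt))          # guard against a reflection
        R = U @ np.diag([1.0, 1.0, d]) @ Vt
        return R

Accumulating these frame-to-frame rotations propagates the spacecraft attitude over time, which is why we refer to the approach as a stellar gyroscope.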

MSGC Star Imaging

 

Obstacle "Sense and Avoid" using Stereo Vision for Unmanned Aerial Systems

We have received a small award to pursue Stereo Vision on board small unmanned aircraft (aka drones, quad-copters, multi-copters). Obstacle sense and avoid (SAA) on board aerial vehicles is a key technology that needs to be addressed in order to meet the FAA safety requirements for future integration of drones into the civil airspace. Visual approaches, such as stereo vision, can play an important role in meeting these requirements.

We will be working with the 3D Robotics X8 copter and a set of embedded cameras. Students interested in being involved, please contact me.

Illustration of depth perception using stereo cameras
3D Robotics X8 copter