20 years of innovation in research

figure research-images/actionSpotting.png
Action spotting in videos by machine learning (successfully applied to SoccerNet and to ActivityNet). Source code available at https://github.com/cioppaanthony/context-aware-loss.
Innovations:
* generic loss function tailored to action spotting
* allows a context-aware definition of actions in the temporal domain
* large performance gain for action spotting on SoccerNet
* automatic highlight generation for soccer
Publication: [2]
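The core idea of a context-aware loss can be sketched as follows: each frame is weighted according to its temporal distance to the nearest annotated action, so that frames just before or after an action are penalized less than frames far from any action. This is a minimal illustration of the principle, not the exact loss of [2]; the window size and weight values below are arbitrary.

```python
import numpy as np

def context_weights(times, action_times, context=10.0):
    """Per-frame loss weights for action spotting.

    Frames whose temporal distance to the nearest annotated action is below
    `context` get a reduced weight, so the network is not heavily penalized
    for firing slightly before or after the action.  The window size and the
    weight value are illustrative choices.
    """
    times = np.asarray(times, dtype=float)
    dist = np.min(np.abs(times[:, None] - np.asarray(action_times)[None, :]), axis=1)
    weights = np.ones_like(times)
    weights[dist <= context] = 0.2
    return weights

def weighted_bce(p, y, w, eps=1e-7):
    """Frame-wise binary cross-entropy modulated by the context weights."""
    p = np.clip(p, eps, 1.0 - eps)
    return float(np.mean(-w * (y * np.log(p) + (1.0 - y) * np.log(1.0 - p))))
```

With such weights, a detection that fires one frame too early costs much less than a spurious detection far from any action.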
figure research-images/TimeSquare.png
ARTHuS: a technique to build adaptive, real-time, match-specific networks for human segmentation, without requiring any manual annotation (best paper award at a CVPR workshop)
Innovations:
* real-time segmentation of humans in videos
* highly effective real-time human segmentation network that evolves over time
Publication: [1]
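The principle behind such online distillation can be sketched with a toy student: a slow but accurate teacher produces pseudo-labels on the live stream, and a fast student model is continuously updated on them. In this sketch (our own simplification, not the networks of [1]), the teacher is a fixed function and the student is a logistic-regression model trained by SGD on a stream of feature vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher(x):
    """Slow but accurate model: produces pseudo-labels for the stream."""
    return (x @ np.array([1.5, -2.0]) > 0.0).astype(float)

def student(x, w):
    """Fast model: logistic regression, updated online."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

w = np.zeros(2)
lr = 0.5
agreement = []
for step in range(200):                   # stream of incoming "frames"
    x = rng.normal(size=(16, 2))
    y = teacher(x)                        # pseudo-labels from the teacher
    p = student(x, w)
    w += lr * x.T @ (y - p) / len(x)      # one SGD step of online distillation
    agreement.append(np.mean((p > 0.5) == (y > 0.5)))
```

Over time, the student tracks the teacher while remaining cheap to evaluate, which is the essence of an online-distilled real-time network.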
figure research-images/midair.jpg
Mid-Air: a multi-purpose synthetic dataset for low-altitude drone flights. It provides a large amount of synchronized data corresponding to flight records for multi-modal vision sensors and navigation sensors mounted on board a flying quadcopter.
Innovations:
* large training set (420k images)
* multi-modal sensors (3 cameras, IMU, GPS)
* 7 weather conditions, 3 maps
Publication: [16]
figure research-images/driver-monitoring.jpg
Drowsiness monitoring of a driver by video analysis: development of a real-time drowsiness monitoring system based on the measurement of eyelid distance
Innovations:
* deep learning-based driver monitoring system comprising 3 modules and 2 proxies (eyelid distance, reaction time)
* multi-scale, responsive, and real-time system for drowsiness monitoring
Publication: [33]
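A common proxy in this family of systems (illustrative, not necessarily the exact measure of [33]) is a PERCLOS-like statistic: the fraction of time, over a sliding window, during which the measured eyelid distance falls below a fraction of its baseline.

```python
import numpy as np

def perclos_like(eyelid_dist, fps, window_s=60.0, closure_ratio=0.2):
    """Fraction of frames with mostly closed eyes, per sliding window.

    `eyelid_dist` is the measured eyelid distance per frame; the eye is
    counted as closed when the distance drops below `closure_ratio` times
    the subject's baseline (here estimated as the 95th percentile).
    """
    d = np.asarray(eyelid_dist, dtype=float)
    baseline = np.percentile(d, 95)
    closed = (d < closure_ratio * baseline).astype(float)
    n = max(1, int(window_s * fps))
    kernel = np.ones(n) / n
    return np.convolve(closed, kernel, mode="same")
```

High values of the resulting signal indicate sustained eye closure, a classical symptom of drowsiness.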
figure research-images/AdriNet-HoM-final.png
HitNet: an innovative deep neural network design built around several key concepts.
Innovations (covered by the patent):
* new output layer based on capsules, named the Hit-or-Miss layer, and on a centripetal loss function (speeds up convergence)
* first data augmentation technique mixing the data space and the feature space
* introduction of the notion of ghost capsules to allow alternatives and cope with wrong annotations
Publications and patent: [4, 3, 17]
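One plausible reading of the Hit-or-Miss layer with its centripetal loss (a sketch under our own assumptions, not the patented formulation of [4, 17]): each class is represented by a capsule vector, a "hit" means the capsule of the true class lands close to a fixed target centre, and the loss pulls the true capsule toward that centre while pushing the other capsules outside a margin. The margin value and the choice of the origin as centre are illustrative.

```python
import numpy as np

def centripetal_loss(capsules, target, margin=0.4):
    """Toy centripetal loss for a Hit-or-Miss output layer.

    `capsules` has shape (n_classes, dim); every class capsule is compared
    with a fixed centre (the origin here).  The true-class capsule is pulled
    toward the centre (a "hit"); the others are pushed beyond `margin`
    (a "miss").
    """
    d = np.linalg.norm(capsules, axis=1)      # distance of each capsule to the centre
    hit = d[target] ** 2                      # pull the true class in
    mask = np.ones(len(d), dtype=bool)
    mask[target] = False
    miss = np.sum(np.maximum(0.0, margin - d[mask]) ** 2)  # push the others out
    return float(hit + miss)
```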
figure research-images/image_sbg.png
Stationary Background Generation (LaBGen): algorithms for the generation of a unique representative image of the background of a video in the presence of constant occlusions. C++ source code is available for generating a reference background image from a series of images or from a video file.
Innovations:
* award winning technology
* two working modes: offline and online
* real time processing
Publications: [8, 9, 10]
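The simplest member of this family [8] can be sketched as a motion-guided temporal median: a background subtraction algorithm flags moving pixels, and the background image is estimated, per pixel, as the median of the frames in which that pixel was quiet (falling back to a plain temporal median when no quiet frame exists). This illustrates the principle only, not the exact LaBGen pipeline.

```python
import numpy as np

def generate_background(frames, motion_masks):
    """Per-pixel median over the frames where the pixel was static.

    `frames`: (T, H, W) grayscale video; `motion_masks`: (T, H, W) booleans,
    True where a background subtraction algorithm detected motion.
    """
    frames = np.asarray(frames, dtype=float)
    masked = np.where(motion_masks, np.nan, frames)   # ignore moving pixels
    bg = np.nanmedian(masked, axis=0)                 # median of the quiet samples
    # fallback: pixels that were always moving get the plain temporal median
    always_moving = np.isnan(bg)
    bg[always_moving] = np.median(frames, axis=0)[always_moving]
    return bg
```

Even a constantly present occluder is removed as soon as each pixel is unoccluded in at least half of its quiet frames.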
figure research-images/vortex.png
Exoplanet detection: development of imaging techniques based on machine learning for the detection of exoplanets by direct imaging. Source code available on GitHub.
Innovations:
* first deep learning-based system for exoplanet detection (SODINN)
* model for the generation of synthetic data
* introduction of ROC-like curves for performance evaluation
Publications: [31, 12, 13, 11]
figure research-images/video-semanticBGS.jpg
Semantic background subtraction: development of techniques to improve motion detection with background subtraction by incorporating semantic information, making it possible to cope with all the background subtraction challenges simultaneously
Innovation (covered by patents):
* simple technique to combine information from background subtraction and semantic segmentation
* real-time execution of background subtraction even when the stream of semantic results is asynchronous and slower than real time
Publications and patents: [15, 17, 19, 21, 20]
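Stripped to its essentials, the combination rule of [15] can be sketched with two thresholds on the semantic foreground probability: pixels the semantics deem almost surely background are forced to background, pixels whose semantic probability rises well above a memorized per-pixel baseline are forced to foreground, and all other pixels keep the decision of the underlying background subtraction algorithm. This is a simplified sketch; the threshold values are illustrative.

```python
import numpy as np

def semantic_bgs(bgs_mask, p_sem, p_sem_background, tau_bg=0.1, tau_fg=0.3):
    """Combine a BGS mask with semantic segmentation (simplified rule).

    bgs_mask:          boolean foreground mask from any BGS algorithm
    p_sem:             semantic foreground probability of the current frame
    p_sem_background:  memorized semantic probability of the background
    """
    out = bgs_mask.copy()
    out[p_sem <= tau_bg] = False                      # semantics: surely background
    out[(p_sem - p_sem_background) >= tau_fg] = True  # semantics: surely foreground
    return out
```

Because the semantic stream only overrides confident cases, it can lag behind the real-time BGS stream without blocking it.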
figure research-images/sceneSpecific.png
Scene-specific background subtraction: use of deep learning technologies for background subtraction
Innovations:
* the first method using deep learning for background subtraction
* a design that takes the specificities of a scene into account
* a design that mimics any unsupervised background subtraction algorithm, operates in constant time, and runs in real time
Publication: [14]
figure research-images/depth_processing.png
Processing of range images. The use of range sensors, such as Kinect cameras and time-of-flight devices, is quickly spreading across a large number of computer vision problems. Yet the basic image processing toolbox for range images (edge detection, noise filtering, interest point detection) still consists of algorithms designed for intensity images. We develop new tools aimed directly at range images.
Innovations:
* first probabilistic model for range images
* filter and edge detectors dedicated to range images
Publications: [5, 6, 7]
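As an example of a range-specific tool, a jump (depth-discontinuity) edge detector should adapt its threshold to the measured distance, since range noise grows with depth. A minimal 1-D sketch of this idea (our own simplification, not the detector of [7]):

```python
import numpy as np

def jump_edges(depth, rel_thresh=0.05):
    """Flag jump edges in a 1-D range profile.

    A discontinuity is declared when the depth difference between two
    neighbouring samples exceeds a fraction of the local depth, which makes
    the threshold adaptive to the measurement distance.
    """
    depth = np.asarray(depth, dtype=float)
    diff = np.abs(np.diff(depth))
    local = np.minimum(depth[:-1], depth[1:])
    return diff > rel_thresh * local
```

A fixed absolute threshold would either miss edges close to the sensor or fire on noise far from it; the relative threshold avoids both.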
figure research-images/synthetic_view_of_gaims-quarterSize.png
GAIMS ("Gait Measuring System"): we measure the trajectories of the lower limb extremities with range laser scanners and derive various gait characteristics from these trajectories. These characteristics can then be further processed to determine information about the observed person, for example to help diagnose various diseases or for the longitudinal follow-up of patients with walking impairments.
Innovations:
* tool for the automatic determination of Expanded Disability Status Scale (EDSS) scores
* complete machine learning-based system for gait analysis
* general tool for comparing gait characteristics (based on our large custom dataset)
Publications: [35, 36, 37, 38]
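To illustrate the kind of gait characteristics that can be derived from the measured trajectories, here is a minimal sketch (illustrative only, not the GAIMS pipeline): the cadence is estimated from the oscillation of the distance between the two feet along the walking axis.

```python
import numpy as np

def cadence(left_x, right_x, fps):
    """Steps per second, from the feet separation signal.

    Each local maximum of |left - right| corresponds to one step (the feet
    are maximally apart once per step); cadence = number of steps / duration.
    """
    sep = np.abs(np.asarray(left_x) - np.asarray(right_x))
    peaks = (sep[1:-1] > sep[:-2]) & (sep[1:-1] > sep[2:])   # local maxima
    n_steps = int(np.sum(peaks))
    return n_steps * fps / len(sep)
```

Step length, asymmetry, or fatigue indices can be computed from the same trajectories in a similar fashion.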
figure research-images/vibe.jpg
ViBe: fast and innovative technique for background subtraction.
Innovations:
* fastest algorithm for background subtraction based on samples
* operations limited to subtractions, comparisons and memory manipulation
* patented technology including the following novelties: a technique for model initialization, a random time-sampling strategy, a spatial propagation strategy, and the backwards analysis of images in a video stream
* ViBe forms the basis of many of the current unsupervised state-of-the-art algorithms for background subtraction
Publications and patents: [23, 32, 28, 25, 26, 27, 24]
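The core of ViBe fits in a few lines: each pixel keeps a set of past samples; a new value is classified as background when it lies close to at least #min of them, and background pixels update their own model at random times. A minimal single-pixel, single-channel sketch with the published default parameters (N = 20, R = 20, #min = 2, subsampling factor 16); the spatial propagation step is omitted here:

```python
import numpy as np

N, R, N_MIN, PHI = 20, 20, 2, 16      # published default parameters

rng = np.random.default_rng(1)

def classify(samples, value):
    """Background iff `value` lies within radius R of at least N_MIN samples."""
    return int(np.sum(np.abs(samples - value) < R)) >= N_MIN

def update(samples, value):
    """Conservative, random-in-time update: with probability 1/PHI, a randomly
    chosen sample is replaced by the current value."""
    if rng.integers(PHI) == 0:
        samples[rng.integers(N)] = value

# Single-frame initialisation: the model is filled with the first observation.
samples = np.full(N, 100.0)
decisions = []
for value in [101.0, 99.0, 102.0, 98.0, 100.0] * 40:   # stationary background
    decisions.append(classify(samples, value))
    if decisions[-1]:
        update(samples, value)
```

Note that only subtractions, comparisons, and memory writes are used, which is what makes the full algorithm so fast.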
figure research-images/tablelands-locally.jpg
Real-time selective encryption of JPEG images and MJPEG video streams
Innovation:
* real-time, format-compliant encryption of JPEG images and MJPEG streams
* encryption can be limited to some areas
Publications: [29, 30]
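The principle can be illustrated with a toy example (our own sketch, not the published scheme): only part of the compressed data is encrypted, here the AC coefficients of a quantized 8×8 DCT block, while the DC coefficient and the surrounding syntax are left untouched, so the result still decodes as a (scrambled-looking) image.

```python
import numpy as np

def encrypt_block(block, key=0xA5):
    """XOR the AC coefficients of a quantized 8x8 DCT block with a keystream,
    leaving the DC coefficient (top-left) intact.

    A fixed-seed PRNG stands in for a proper stream cipher in this toy sketch.
    """
    rng = np.random.default_rng(key)
    keystream = rng.integers(0, 256, size=64, dtype=np.int64)
    flat = block.astype(np.int64).ravel().copy()
    flat[1:] ^= keystream[1:]          # encrypt the AC coefficients only
    return flat.reshape(8, 8)

decrypt_block = encrypt_block          # XOR with the same keystream is its own inverse
```

Restricting the encryption to selected coefficients, or to selected image areas, keeps the cost compatible with real-time constraints.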
figure research-images/triangulation.png
Robot triangulation: C source code, programs, and documentation for 18 triangulation algorithms for mobile robot positioning or for the Resection Problem.
Innovations:
* fastest algorithm for triangulation
* the algorithm provides, at no additional cost, a quality index for the result
Publications: [39, 40]
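The geometry behind three-object triangulation can be sketched directly (a generic circle-intersection formulation, not the optimized algorithm of [39]): each pair of beacons, together with the oriented angle under which the robot sees them, defines a circle through both beacons; the robot is the second intersection of the two circles that share the middle beacon. Only angle differences are used, so the unknown robot orientation cancels out. Degenerate configurations (robot collinear with a beacon pair, or on the circumcircle of the three beacons) are not handled in this sketch.

```python
import numpy as np

def rot90(v):
    return np.array([-v[1], v[0]])

def locus_center(b_i, b_j, alpha_ij):
    """Centre of the circle of points seeing segment (b_i, b_j) under the
    oriented angle alpha_ij = bearing(b_j) - bearing(b_i)."""
    return (b_i + b_j) / 2 + 0.5 / np.tan(alpha_ij) * rot90(b_j - b_i)

def triangulate(b1, b2, b3, a1, a2, a3):
    """Position of the robot from the bearings a1, a2, a3 of three beacons.

    The two locus circles share beacon b2; the robot is the reflection of b2
    across the line joining the two circle centres.
    """
    c12 = locus_center(b1, b2, a2 - a1)
    c23 = locus_center(b2, b3, a3 - a2)
    d = (c23 - c12) / np.linalg.norm(c23 - c12)
    v = b2 - c12
    return c12 + 2 * (v @ d) * d - v   # reflect b2 across the line (c12, c23)
```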
figure research-images/libmorpho.jpg
libmorpho is an open-source software library written in ANSI C that implements several basic operations of mathematical morphology: erosions, dilations, openings, and closings by lines, rectangles, or arbitrarily shaped structuring elements, or by structuring functions. The software is released under the GNU General Public License.
Innovations:
* fastest implementation of morphological operations with rectangular structuring elements on CPUs (even today!)
* allows the computation of morphological operations with any arbitrarily shaped structuring element
Publications: [18, 22]
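For reference, the operation that the library accelerates: a grayscale erosion by an arbitrary structuring element is the minimum of the image over the translated element at each pixel. The naive version below (our own baseline, not the fast anchor-based algorithm of [22]) makes the definition explicit:

```python
import numpy as np

def erode(image, se):
    """Naive grayscale erosion by an arbitrary structuring element.

    `se` is a boolean array whose origin is its centre; the output at each
    pixel is the minimum of the image over the True offsets of `se`.  The
    image is padded with +inf so that the border behaves neutrally.
    """
    h, w = se.shape
    oy, ox = h // 2, w // 2
    padded = np.pad(image.astype(float), ((oy, h - 1 - oy), (ox, w - 1 - ox)),
                    constant_values=np.inf)
    out = np.full(image.shape, np.inf)
    for dy, dx in zip(*np.nonzero(se)):
        out = np.minimum(out, padded[dy:dy + image.shape[0], dx:dx + image.shape[1]])
    return out
```

This runs in time proportional to the number of pixels in the structuring element; the anchor-based algorithms achieve a complexity essentially independent of the element size.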
figure research-images/cinema.jpg
CINEMA and AURALIAS: gesture recognition giving the user real-time control of auralization and audio spatialization processes.
Innovations:
* gesture-based control of a sound environment
* user-specific auralization (including user localization and audio generation)
Publication: [34]

References

[1] A. Cioppa, A. Deliège, M. Istasse, C. De Vleeschouwer, M. Van Droogenbroeck. ARTHuS: Adaptive Real-Time Human Segmentation in Sports through Online Distillation. IEEE International Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), CVsports, 2019.

[2] A. Cioppa, A. Deliège, S. Giancola, B. Ghanem, M. Van Droogenbroeck, R. Gade, T. Moeslund. A Context-Aware Loss Function for Action Spotting in Soccer Videos. IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2020. URL http://hdl.handle.net/2268/241893.

[3] A. Deliège, A. Cioppa, M. Van Droogenbroeck. An Effective Hit-or-Miss Layer Favoring Feature Interpretation as Learned Prototypes Deformations. AAAI Conference on Artificial Intelligence, Workshop on Network Interpretability for Deep Learning:1-8, 2019.

[4] A. Deliège, A. Cioppa, M. Van Droogenbroeck. HitNet: a neural network with capsules embedded in a Hit-or-Miss layer, extended with hybrid data augmentation and ghost capsules. CoRR, abs/1806.06519, 2018. URL https://arxiv.org/abs/1806.06519.

[5] A. Lejeune, D. Grogna, M. Van Droogenbroeck, J. Verly. Evaluation of pairwise calibration techniques for range cameras and their ability to detect a misalignment. International Conference on 3D Imaging (IC3D):1-6, 2014. URL http://doi.org/10.1109/IC3D.2014.7032596.

[6] A. Lejeune, M. Van Droogenbroeck, J. Verly. Adaptive bilateral filtering for range images. URSI Benelux Forum, 2012.

[7] A. Lejeune, S. Piérard, M. Van Droogenbroeck, J. Verly. A new jump edge detection method for 3D cameras. IEEE International Conference on 3D Imaging (IC3D):1-7, 2011. URL http://doi.org/10.1109/IC3D.2011.6584393.

[8] B. Laugraud, S. Piérard, M. Braham, M. Van Droogenbroeck. Simple median-based method for stationary background generation using background subtraction algorithms. International Conference on Image Analysis and Processing (ICIAP), Workshop on Scene Background Modeling and Initialization (SBMI), 9281:477-484, 2015. URL http://doi.org/10.1007/978-3-319-23222-5_58.

[9] B. Laugraud, S. Piérard, M. Van Droogenbroeck. LaBGen-P-Semantic: A First Step for Leveraging Semantic Segmentation in Background Generation. Journal of Imaging, 4(7):86, 2018. URL http://doi.org/10.3390/jimaging4070086.

[10] B. Laugraud, S. Piérard, M. Van Droogenbroeck. LaBGen-P: A Pixel-Level Stationary Background Generation Method Based on LaBGen. IEEE International Conference on Pattern Recognition (ICPR), IEEE Scene Background Modeling Contest (SBMC):107-113, 2016. URL http://orbi.ulg.ac.be/handle/2268/201146.

[11] C. Gomez Gonzalez, O. Absil, M. Van Droogenbroeck. Supervised detection of exoplanets in high-contrast imaging sequences. Astronomy & Astrophysics, 613:1-13, 2018. URL http://hdl.handle.net/2268/217926.

[12] C. Gomez Gonzalez, O. Absil, P.-A. Absil, M. Van Droogenbroeck, D. Mawet, J. Surdej. Low-rank plus sparse decomposition for exoplanet detection in direct imaging ADI sequences: The LLSG algorithm. Astronomy & Astrophysics, 589(A54):1-9, 2016. URL http://hdl.handle.net/2268/196090.

[13] C. Gomez Gonzalez, O. Wertz, O. Absil, V. Christiaens, D. Defrere, D. Mawet, J. Milli, P.-A. Absil, M. Van Droogenbroeck, F. Cantalloube, P. Hinz, A. Skemer, M. Karlsson, J. Surdej. VIP: Vortex Image Processing package for high-contrast direct imaging. The Astronomical Journal, 154(1):7:1-7:12, 2017. URL http://doi.org/10.3847/1538-3881/aa73d7.

[14] M. Braham, M. Van Droogenbroeck. Deep Background Subtraction with Scene-Specific Convolutional Neural Networks. IEEE International Conference on Systems, Signals and Image Processing (IWSSIP):1-4, 2016. URL http://doi.org/10.1109/IWSSIP.2016.7502717.

[15] M. Braham, S. Piérard, M. Van Droogenbroeck. Semantic Background Subtraction. IEEE International Conference on Image Processing (ICIP):4552-4556, 2017. URL http://doi.org/10.1109/ICIP.2017.8297144.

[16] M. Fonder, M. Van Droogenbroeck. Mid-Air: A multi-modal dataset for extremely low altitude drone flights. IEEE International Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), UAVision:553-562, 2019. URL https://ieeexplore.ieee.org/document/9025697.

[17] M. Van Droogenbroeck, A. Deliège, A. Cioppa. Image classification using neural networks. Patent, 2018. URL https://orbi.uliege.be/handle/2268/226179.

[18] M. Van Droogenbroeck, H. Talbot. Fast computation of morphological operations with arbitrary structuring elements. Pattern Recognition Letters, 17(14):1451-1460, 1996. URL http://doi.org/10.1016/S0167-8655(96)00113-4.

[19] M. Van Droogenbroeck, M. Braham, S. Piérard. Foreground and background detection method. Patent, 2017.

[20] M. Van Droogenbroeck, M. Braham, S. Piérard. Foreground and background detection method. Patent, 2019.

[21] M. Van Droogenbroeck, M. Braham, S. Piérard. Foreground and background detection method. Patent, 2019.

[22] M. Van Droogenbroeck, M. Buckley. Morphological erosions and openings: fast algorithms based on anchors. Journal of Mathematical Imaging and Vision, Special Issue on Mathematical Morphology after 40 Years, 22(2-3):121-142, 2005. URL http://doi.org/10.1007/s10851-005-4886-2.

[23] M. Van Droogenbroeck, O. Barnich. ViBe: A Disruptive Method for Background Subtraction. In Background Modeling and Foreground Detection for Video Surveillance . Chapman and Hall/CRC, Jul 2014. URL http://doi.org/10.1201/b17223-10.

[24] M. Van Droogenbroeck, O. Barnich. Visual Background Extractor. Patent, 2011.

[25] M. Van Droogenbroeck, O. Barnich. Visual background extractor. Patent, 2009. URL https://patentscope.wipo.int/search/en/detail.jsf?docId=WO2009007198.

[26] M. Van Droogenbroeck, O. Barnich. Visual background extractor. Patent, 2011.

[27] M. Van Droogenbroeck, O. Barnich. Visual background extractor. Patent, 2010.

[28] M. Van Droogenbroeck, O. Paquot. Background Subtraction: Experiments and Improvements for ViBe. Change Detection Workshop (CDW), in conjunction with CVPR:32-37, 2012. URL http://doi.org/10.1109/CVPRW.2012.6238924.

[29] M. Van Droogenbroeck, R. Benedet. Techniques for a selective encryption of uncompressed and compressed images. Advanced Concepts for Intelligent Vision Systems (ACIVS):90-97, 2002. URL http://www.ulg.ac.be/telecom/publi/publications/mvd/acivs2002mvd/index.html.

[30] M. Van Droogenbroeck. Partial encryption of images for real-time applications. IEEE Signal Processing Symposium:11-15, 2004. URL http://www.ulg.ac.be/telecom/publi/publications/mvd/sps-2004/index.html. Invited presentation.

[31] O. Absil, D. Mawet, C. Delacroix, P. Forsberg, M. Karlsson, S. Habraken, J. Surdej, P.-A. Absil, B. Carlomagno, V. Christiaens, D. Defrère, C. Gomez-Gonzalez, E. Huby, A. Jolivet, J. Milli, P. Piron, E. Vargas-Catalan, M. Van Droogenbroeck. The VORTEX project: first results and perspectives. Proc. SPIE, Adaptive Optics Systems IV, 9148, 2014. URL http://doi.org/10.1117/12.2055702.

[32] O. Barnich, M. Van Droogenbroeck. ViBe: A universal background subtraction algorithm for video sequences. IEEE Transactions on Image Processing, 20(6):1709-1724, 2011. URL http://doi.org/10.1109/TIP.2010.2101613.

[33] Q. Massoz, J. Verly, M. Van Droogenbroeck. Multi-Timescale Drowsiness Characterization Based on a Video of a Driver's Face. Sensors, 18(9):1-17, 2018. URL http://doi.org/10.3390/s18092801.

[34] R. Dardenne, J.-J. Embrechts, M. Van Droogenbroeck, N. Werner. A video-based human-computer interaction system for audio-visual immersion. Proceedings of SPS-DARTS 2006:23-26, 2006.

[35] S. Azrour, S. Piérard, M. Van Droogenbroeck. Defining a score based on gait analysis for the longitudinal follow-up of MS patients. Multiple Sclerosis Journal, 23(S11):408-409, 2015. URL http://orbi.ulg.ac.be//handle/2268/184249. Proceedings of ECTRIMS 2015 (Barcelona, Spain), P817.

[36] S. Piérard, R. Phan-Ba, M. Van Droogenbroeck. Understanding how people with MS get tired while walking. Multiple Sclerosis Journal, 23(S11):406, 2015. URL http://orbi.ulg.ac.be/handle/2268/184207. Proceedings of ECTRIMS 2015 (Barcelona, Spain).

[37] S. Piérard, S. Azrour, M. Van Droogenbroeck. Design of a reliable processing pipeline for the non-intrusive measurement of feet trajectories with lasers. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP):4399-4403, 2014. URL http://doi.org/10.1109/ICASSP.2014.6854433.

[38] S. Piérard, S. Azrour, R. Phan-Ba, M. Van Droogenbroeck. GAIMS: A Reliable Non-Intrusive Gait Measuring System. ERCIM News, 95:26-27, 2013. URL http://hdl.handle.net/2268/157553.

[39] V. Pierlot, M. Van Droogenbroeck. A New Three Object Triangulation Algorithm for Mobile Robot Positioning. IEEE Transactions on Robotics, 30(3):566-577, 2014. URL http://doi.org/10.1109/TRO.2013.2294061.

[40] V. Pierlot, M. Van Droogenbroeck. BeAMS: a Beacon based Angle Measurement Sensor for mobile robot positioning. IEEE Transactions on Robotics, 30(3):533-549, 2014. URL http://doi.org/10.1109/TRO.2013.2293834.

ULg      Institut Montefiore