
Query:

Scholar Name: Xue Jianru (薛建儒)

Total pages: 23
DADA: Driver Attention Prediction in Driving Accident Scenarios EI SCIE Scopus
Journal Article | 2022, 23(6), 4959-4971 | IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS
WoS CC Cited Count: 17 SCOPUS Cited Count: 15
Abstract:

Driver attention prediction is becoming an essential research problem in human-like driving systems. This work attempts to predict driver attention in driving accident scenarios (DADA). The task is challenging because of the dynamic traffic scenes and the intricate, imbalanced accident categories. In this work, we design a semantic context induced attentive fusion network (SCAFNet). We first segment the RGB video frames into images with different semantic regions (i.e., semantic images), where each region denotes one semantic category of the scene (e.g., road, trees, etc.), and learn the spatio-temporal features of RGB frames and semantic images in two parallel paths simultaneously. Then, the learned features are fused by an attentive fusion network to find the semantic-induced scene variation in driver attention prediction. The contributions are threefold. 1) With the semantic images, we introduce their semantic context features, modeled by a graph convolution network (GCN) on the semantic images, and verify their clear benefit for driver attention prediction; 2) We fuse the semantic context features of semantic images and the features of RGB frames with an attentive strategy, and the fused details are transferred over frames by a convolutional LSTM module to obtain the attention map of each video frame while accounting for historical scene variation in driving situations; 3) The superiority of the proposed method over state-of-the-art methods is evaluated on our previously collected dataset (named DADA-2000) and two other challenging datasets.
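The attentive fusion of the RGB path and the semantic path can be illustrated at the level of a single feature vector: softmax-normalized attention scores weight the two paths before summation. This is a minimal sketch; the feature values and attention scores below are invented for illustration, and the paper operates on learned spatio-temporal feature maps rather than flat vectors.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attentive_fuse(rgb_feat, sem_feat, score_rgb, score_sem):
    """Weight two equal-length feature vectors by softmax attention
    scores and sum them (vector-level stand-in for attentive fusion)."""
    w_rgb, w_sem = softmax([score_rgb, score_sem])
    return [w_rgb * r + w_sem * s for r, s in zip(rgb_feat, sem_feat)]

# Illustrative one-hot features; a higher RGB score shifts weight there:
fused = attentive_fuse([1.0, 0.0], [0.0, 1.0], score_rgb=2.0, score_sem=0.0)
```

Because the weights sum to one, the fusion is a convex combination of the two paths, so neither path's magnitude can dominate arbitrarily.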

Keywords:

Accidents; Convolution; convolutional LSTM; Driver attention prediction; driving accident scenarios; graph convolution network; Predictive models; Roads; Semantics; Vehicles; Visualization

Cite:


GB/T 7714: Fang, Jianwu, Yan, Dingxin, Qiao, Jiahuan, et al. DADA: Driver Attention Prediction in Driving Accident Scenarios [J]. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23(6): 4959-4971.
MLA: Fang, Jianwu, et al. "DADA: Driver Attention Prediction in Driving Accident Scenarios." IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS 23.6 (2022): 4959-4971.
APA: Fang, Jianwu, Yan, Dingxin, Qiao, Jiahuan, Xue, Jianru, Yu, Hongkai. DADA: Driver Attention Prediction in Driving Accident Scenarios. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23(6), 4959-4971.
Sparse Semantic Map-Based Monocular Localization in Traffic Scenes Using Learned 2D-3D Point-Line Correspondences EI SCIE Scopus
Journal Article | 2022, 7(4), 11894-11901 | IEEE ROBOTICS AND AUTOMATION LETTERS
SCOPUS Cited Count: 1
Abstract:

Vision-based localization in a prior map is of crucial importance for autonomous vehicles. Given a query image, the goal is to estimate the camera pose with respect to the prior map, and the key is the registration of camera images within the map. Because autonomous vehicles drive under occlusion (e.g., by cars, buses, trucks) and changing environment appearance (e.g., illumination changes, seasonal variation), existing approaches that rely heavily on dense point descriptors at the feature level to solve the registration problem entangle features with appearance and occlusion, and as a result often fail to estimate the correct poses. To address these issues, we propose a sparse semantic map-based monocular localization method, which solves 2D-3D registration via a well-designed deep neural network. Given a sparse semantic map consisting of simplified elements (e.g., pole lines, traffic sign midpoints) with multiple semantic labels, the camera pose is estimated by learning the correspondences between the 2D semantic elements from the image and the 3D elements from the sparse semantic map. The proposed sparse semantic map-based localization approach is robust against occlusion and long-term appearance changes in the environment. Extensive experimental results show that the proposed method outperforms state-of-the-art approaches.
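The 2D-3D correspondence step can be approximated, purely for illustration, by nearest-neighbour matching gated by semantic label: a 2D detection may only match a projected map element that carries the same label. The function `match_2d_3d` and its greedy strategy are assumptions of this sketch, not the paper's learned network.

```python
def match_2d_3d(detections, map_elements):
    """Greedily match 2D image detections to projected 3D map elements.

    detections:   list of (semantic_label, (u, v)) observations in the image
    map_elements: list of (semantic_label, (u, v)) map elements projected
                  into the image plane
    Returns (detection_index, map_index) pairs; matching is gated by
    semantic label and scored by squared pixel distance."""
    matches = []
    used = set()
    for i, (label, (u, v)) in enumerate(detections):
        best_j, best_d = None, float("inf")
        for j, (m_label, (mu, mv)) in enumerate(map_elements):
            if m_label != label or j in used:
                continue
            d = (u - mu) ** 2 + (v - mv) ** 2
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            matches.append((i, best_j))
            used.add(best_j)
    return matches
```

The label gate is what makes the map "semantic": a pole detection can never be mistaken for a nearby traffic sign, no matter how close they project.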

Keywords:

Localization; visual learning

Cite:


GB/T 7714: Chen, Xingyu, Xue, Jianru, Pang, Shanmin. Sparse Semantic Map-Based Monocular Localization in Traffic Scenes Using Learned 2D-3D Point-Line Correspondences [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7(4): 11894-11901.
MLA: Chen, Xingyu, et al. "Sparse Semantic Map-Based Monocular Localization in Traffic Scenes Using Learned 2D-3D Point-Line Correspondences." IEEE ROBOTICS AND AUTOMATION LETTERS 7.4 (2022): 11894-11901.
APA: Chen, Xingyu, Xue, Jianru, Pang, Shanmin. Sparse Semantic Map-Based Monocular Localization in Traffic Scenes Using Learned 2D-3D Point-Line Correspondences. IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7(4), 11894-11901.
Traffic Accident Detection via Self-Supervised Consistency Learning in Driving Scenarios EI SCIE Scopus
Journal Article | 2022, 23(7), 9601-9614 | IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS
SCOPUS Cited Count: 18
Abstract:

With the rapid progress of autonomous driving and advanced driver assistance systems, there are growing efforts to promote their safety in natural driving scenarios, especially through the detection of traffic accidents. However, because of dynamic camera motion and complex scenes in driving situations, traffic accident detection remains challenging. In this work, we aim to give driving systems the ability to detect traffic accidents by proposing a Self-Supervised Consistency learning framework, termed SSC-TAD, that involves appearance, motion, and context consistency learning. The key formulation is to find the temporal inconsistency of video frames, object locations, and the spatial relation structure of the scene between different frames captured by dashcam videos. Unlike previous works, which concentrate on predicting future object locations or frames, we further focus on predicting the visual scene context in driving scenarios and detecting traffic accidents by considering the temporal frame consistency, temporal object location consistency, and spatial-temporal relation consistency of road participants. This formulation is fulfilled by a collaborative multi-task consistency learning network, and the visual scene context feature is represented by a graph convolution network. Superiority over the state of the art is verified by exhaustive evaluations on two large-scale datasets, the AnAn Accident Detection (A3D) dataset and the recently collected DADA-2000 dataset.
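In its simplest form, the consistency idea reduces to scoring each frame by how badly the model's prediction matches the observation, then thresholding against statistics gathered on normal driving. The MSE score and the mean + k·std calibration rule below are illustrative stand-ins, not the paper's learned consistency losses.

```python
def inconsistency_score(predicted, observed):
    """Mean squared error between predicted and observed frame features."""
    return sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(predicted)

def detect_accidents(frame_scores, normal_scores, k=3.0):
    """Flag frames whose inconsistency exceeds mean + k*std of scores
    collected on accident-free clips (an illustrative calibration rule)."""
    n = len(normal_scores)
    mean = sum(normal_scores) / n
    std = (sum((s - mean) ** 2 for s in normal_scores) / n) ** 0.5
    threshold = mean + k * std
    return [s > threshold for s in frame_scores]
```

Calibrating the threshold only on normal driving is what makes the scheme self-supervised: no accident labels are needed at training time.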

Keywords:

adversarial learning; frame and location prediction; scene context; Traffic accident detection

Cite:


GB/T 7714: Fang, Jianwu, Qiao, Jiahuan, Bai, Jie, et al. Traffic Accident Detection via Self-Supervised Consistency Learning in Driving Scenarios [J]. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23(7): 9601-9614.
MLA: Fang, Jianwu, et al. "Traffic Accident Detection via Self-Supervised Consistency Learning in Driving Scenarios." IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS 23.7 (2022): 9601-9614.
APA: Fang, Jianwu, Qiao, Jiahuan, Bai, Jie, Yu, Hongkai, Xue, Jianru. Traffic Accident Detection via Self-Supervised Consistency Learning in Driving Scenarios. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23(7), 9601-9614.
An Adaptive Invariant EKF for Map-Aided Localization Using 3D Point Cloud EI SCIE Scopus
Journal Article | 2022, 23(12), 24057-24070 | IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS
SCOPUS Cited Count: 1
Abstract:

In map-aided localization using 3D point clouds, the poses estimated by 3D Registration Algorithms (3DRAs) are typically fused with other sensor data via the Extended Kalman Filter (EKF) to obtain reliable and smooth results. However, this combined method faces three challenges: 1) the linearization process of the EKF may cause errors and singularities because the state is defined by a 6D pose; 2) using the results of a 3DRA as the measurements of the EKF may cause errors in residual calculation; and 3) the approach relies heavily on the 3DRA to overcome the effects of dynamic scenes. This paper proposes an adaptive localization framework based on the Invariant Extended Kalman Filter (Invariant EKF), in which the Lie group is introduced to define the state. In this framework, the points of the raw point cloud are the measurements of the filter, and the 3DRA is only employed for data association between the raw 3D point cloud and the 3D point cloud map. A Concentric Ring Model (CRM) is then proposed to reduce the influence of dynamic objects; it adaptively estimates the covariance of each observed point via Gaussian Process Regression (GPR) and additionally accounts for Sensor Measurement Noise (SMN) and Sensor Vibration Noise (SVN). The performance of the proposed framework is evaluated on the KITTI dataset and our own dataset. The experimental results show that the proposed method is superior to other state-of-the-art methods, and that the CRM achieves more accurate measurements, especially in highly dynamic scenes.
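The Concentric Ring Model can be sketched as bucketing points by range and estimating a per-ring variance to use as an adaptive measurement covariance. This is only a toy approximation: the actual CRM estimates per-point covariances with Gaussian Process Regression and also models SMN and SVN, none of which appears here.

```python
def ring_covariances(points, ring_width=10.0):
    """Group 2D points into concentric rings by range and estimate a
    per-ring range variance (a crude adaptive measurement covariance)."""
    rings = {}
    for x, y in points:
        r = (x * x + y * y) ** 0.5
        rings.setdefault(int(r // ring_width), []).append(r)
    covariances = {}
    for idx, ranges in rings.items():
        mean = sum(ranges) / len(ranges)
        covariances[idx] = sum((r - mean) ** 2 for r in ranges) / len(ranges)
    return covariances
```

Rings with high variance (often those swept by moving vehicles) would then contribute less to the filter update, which is the intuition behind down-weighting dynamic regions.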

Keyword :

3D point cloud Adaptation models Customer relationship management Gaussian process regression Intelligent vehicle invariant EKF localization Location awareness Noise measurement Point cloud compression registration Three-dimensional displays Vehicle dynamics

Cite:


GB/T 7714: Tao, Zhongxing, Xue, Jianru, Wang, Di, et al. An Adaptive Invariant EKF for Map-Aided Localization Using 3D Point Cloud [J]. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23(12): 24057-24070.
MLA: Tao, Zhongxing, et al. "An Adaptive Invariant EKF for Map-Aided Localization Using 3D Point Cloud." IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS 23.12 (2022): 24057-24070.
APA: Tao, Zhongxing, Xue, Jianru, Wang, Di, Li, Gengxin, Fang, Jianwu. An Adaptive Invariant EKF for Map-Aided Localization Using 3D Point Cloud. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23(12), 24057-24070.
Improve Regression Network on Depth Hand Pose Estimation with Auxiliary Variable EI SCIE
Journal Article | 2021, 31(3), 890-904 | IEEE Transactions on Circuits and Systems for Video Technology
WoS CC Cited Count: 2
Abstract:

Regression-based deep neural networks have achieved state-of-the-art performance on the depth-based 3D hand pose estimation task. This paper focuses on improving the regression mapping between features and pose joints. Inspired by the distribution-modeling ability of Variational Autoencoders, we introduce an auxiliary variable into the regression network. During training, the auxiliary variable is modeled by an inference distribution that learns the underlying structural kinematics of the human hand. Unlike other regression methods for hand pose, our network estimates the pose joints from both the input depth features and the learned auxiliary variable. We show that, by introducing the auxiliary variable, the regression benefits from 1) regularization modeled by the inference distribution; and 2) prior information carried by the auxiliary model. The effectiveness of the proposed regression method is evaluated with extensive self-comparative experiments and in comparison with other regression methods on hand pose datasets. The proposed network is easy to train end-to-end and works with various feature extraction methods. Applying the proposed regression method to an existing hand pose estimation system improves estimation accuracy by 18.35% and 16.65% on public hand pose datasets.
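The auxiliary-variable regression can be sketched as a read-out from the depth features plus a contribution from the auxiliary variable z, where z defaults to the prior mean (zeros) at test time. The linear maps `W_f` and `W_z` are placeholders invented for this sketch; the paper uses a learned network and a VAE-style inference distribution for z.

```python
def matvec(W, x):
    """Multiply matrix W (a list of rows) by vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def predict_joints(features, W_f, W_z, z=None):
    """Regress pose joints from depth features plus an auxiliary variable z.

    During training z would be sampled from the learned inference
    distribution; at test time it defaults to the prior mean (zeros)."""
    if z is None:
        z = [0.0] * len(W_z[0])
    return [a + b for a, b in zip(matvec(W_f, features), matvec(W_z, z))]

# Placeholder read-out weights (illustrative, not learned):
W_f = [[1.0, 0.0], [0.0, 1.0]]
W_z = [[1.0], [1.0]]
joints = predict_joints([2.0, 3.0], W_f, W_z)  # z defaults to the prior mean
```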

Keywords:

Deep neural networks; Regression analysis

Cite:


GB/T 7714: Xu, Lu, Hu, Chen, Tao, Ji'An, et al. Improve Regression Network on Depth Hand Pose Estimation with Auxiliary Variable [J]. IEEE Transactions on Circuits and Systems for Video Technology, 2021, 31(3): 890-904.
MLA: Xu, Lu, et al. "Improve Regression Network on Depth Hand Pose Estimation with Auxiliary Variable." IEEE Transactions on Circuits and Systems for Video Technology 31.3 (2021): 890-904.
APA: Xu, Lu, Hu, Chen, Tao, Ji'An, Xue, Jianru, Mei, Kuizhi. Improve Regression Network on Depth Hand Pose Estimation with Auxiliary Variable. IEEE Transactions on Circuits and Systems for Video Technology, 2021, 31(3), 890-904.
Video Frame Prediction by Deep Multi-Branch Mask Network EI SCIE
Journal Article | 2021, 31(4), 1283-1295 | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
WoS CC Cited Count: 9
Abstract:

Future frame prediction in video is one of the most important problems in computer vision, and is useful for a range of practical applications such as intention prediction and video anomaly detection. However, this task is challenging because of the complex and dynamic evolution of the scene. The difficulty of video frame prediction lies in modeling the inherent spatio-temporal correlation between frames and in building an adaptive, flexible framework for large motion changes and appearance variations. In this paper, we construct a deep multi-branch mask network (DMMNet) which adaptively fuses the advantages of optical-flow warping and RGB pixel synthesis, the two common kinds of approaches to this task. In DMMNet, we add a mask layer in each branch to adaptively adjust the magnitude range of the estimated optical flow and the weights of the frames predicted by optical-flow warping and RGB pixel synthesis, respectively. In other words, we provide a more flexible masking network for motion and appearance fusion in video frame prediction. Exhaustive experiments on the Caltech pedestrian and UCF101 datasets show that the proposed model obtains favorable video frame prediction performance compared with state-of-the-art methods. In addition, we apply our model to the video anomaly detection problem, and its superiority is verified by experiments on the UCSD dataset.
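Per pixel, the mask-based fusion of the two branches reduces to a convex combination of the warped and synthesized predictions, weighted by the learned mask. A minimal sketch over flat pixel lists (in the real network the mask and both candidate frames are all predicted by the branches):

```python
def fuse_predictions(warped, synthesized, mask):
    """Per-pixel convex combination of a flow-warped frame and an
    RGB-synthesized frame, weighted by a learned mask in [0, 1]."""
    return [m * w + (1.0 - m) * s
            for w, s, m in zip(warped, synthesized, mask)]

# Pixels where the mask is 1 come entirely from warping, 0 from synthesis:
frame = fuse_predictions([1.0, 1.0], [0.0, 0.0], [1.0, 0.25])
```

Letting the mask vary per pixel is what allows motion-dominated regions to favor warping while newly revealed regions favor synthesis.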

Keyword :

Adaptive optics deep learning multi-branch mask network multi-frame prediction Optical computing Optical distortion Optical imaging Optical network units Predictive models Synthesizers video anomaly detection Video frame prediction

Cite:


GB/T 7714: Li, Sen, Fang, Jianwu, Xu, Hongke, et al. Video Frame Prediction by Deep Multi-Branch Mask Network [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2021, 31(4): 1283-1295.
MLA: Li, Sen, et al. "Video Frame Prediction by Deep Multi-Branch Mask Network." IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 31.4 (2021): 1283-1295.
APA: Li, Sen, Fang, Jianwu, Xu, Hongke, Xue, Jianru. Video Frame Prediction by Deep Multi-Branch Mask Network. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2021, 31(4), 1283-1295.
Perceptual hash-based coarse-to-fine grained image tampering forensics method EI SCIE
Journal Article | 2021, 78 | Journal of Visual Communication and Image Representation
WoS CC Cited Count: 1
Abstract:

As an active forensic technology, perceptual image hashing has important applications in image content authenticity detection and integrity authentication. In this paper, we propose a hybrid-feature-based perceptual image hash method that can be used for image tampering detection and tampering localization. In the proposed method, we use the color features of the image as global features, use point-based and block-based features as local features, and combine them with structural features to generate an intermediate hash code, which is then encrypted and randomized to generate the final hash code. Using this hash code, we present a coarse-to-fine-grained forensics method for image tampering detection that can realize object-level tampering localization. Extensive experimental results show that the proposed method is sensitive to content changes caused by malicious attacks, achieves pixel-level tampering localization precision, and is robust to a wide range of geometric distortions and content-preserving manipulations. Compared with state-of-the-art schemes, the proposed scheme yields superior performance.
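The coarse-grained stage of hash-based tampering detection can be sketched as thresholding the Hamming distance between a reference hash and a test hash: small distances are attributed to content-preserving manipulations, large ones to tampering. The bit-list hashes and the threshold value here are illustrative assumptions; the paper's hash is built from hybrid features and encrypted.

```python
def hamming(h1, h2):
    """Number of differing bits between two equal-length hash codes."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

def is_tampered(hash_ref, hash_test, threshold=8):
    """Coarse-grained check: declare tampering when the perceptual-hash
    distance exceeds a robustness threshold (threshold is illustrative)."""
    return hamming(hash_ref, hash_test) > threshold
```

The threshold trades off robustness against sensitivity: raising it tolerates stronger benign processing but lets smaller tampered regions pass.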

Keyword :

Authentication Digital forensics Hash functions

Cite:


GB/T 7714: Wang, Xiaofeng, Zhang, Qian, Jiang, Chuntao, et al. Perceptual hash-based coarse-to-fine grained image tampering forensics method [J]. Journal of Visual Communication and Image Representation, 2021, 78.
MLA: Wang, Xiaofeng, et al. "Perceptual hash-based coarse-to-fine grained image tampering forensics method." Journal of Visual Communication and Image Representation 78 (2021).
APA: Wang, Xiaofeng, Zhang, Qian, Jiang, Chuntao, Xue, Jianru. Perceptual hash-based coarse-to-fine grained image tampering forensics method. Journal of Visual Communication and Image Representation, 2021, 78.
Coarse-to-fine-grained method for image splicing region detection EI SCIE Scopus
Journal Article | 2021, 122 | Pattern Recognition
WoS CC Cited Count: 4 SCOPUS Cited Count: 15
Abstract:

In this study, we aim to improve the accuracy of image splicing detection. We propose a progressive image splicing detection method that can detect the position and shape of the spliced region. Because image splicing is likely to destroy or change the consistent correlation pattern introduced by the color filter array (CFA) interpolation process, we first use a covariance matrix to reconstruct the R, G and B channels of the image and exploit the inconsistencies of the CFA interpolation pattern to extract forensic features. These features are used to perform coarse-grained detection, and texture strength features are then used to perform fine-grained detection. Finally, an edge smoothing method is applied to achieve precise localization. Compared with state-of-the-art CFA-based image splicing detection methods, the proposed method has high detection accuracy and strong robustness against content-preserving manipulations and JPEG compression.
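The CFA-consistency cue can be illustrated with a simple interpolation residual: in an untouched channel, interior pixels are well predicted by their neighbours, while spliced regions tend to break that pattern. The 4-neighbour average below is a stand-in invented for this sketch, not the covariance-matrix-based reconstruction used in the paper.

```python
def interpolation_residual(channel):
    """Residual between each interior pixel and the average of its four
    neighbours; spliced regions tend to break the CFA interpolation
    pattern and show larger residuals."""
    h, w = len(channel), len(channel[0])
    res = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            pred = (channel[i - 1][j] + channel[i + 1][j] +
                    channel[i][j - 1] + channel[i][j + 1]) / 4.0
            res[i][j] = abs(channel[i][j] - pred)
    return res
```

Thresholding such a residual map is the kind of signal a coarse-grained detector can aggregate per block before finer texture analysis.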

Keyword :

Covariance matrix Digital forensics Edge detection Feature extraction Image compression Image enhancement Image texture Interpolation Textures

Cite:


GB/T 7714: Wang, Xiaofeng, Wang, Yan, Lei, Jinjin, et al. Coarse-to-fine-grained method for image splicing region detection [J]. Pattern Recognition, 2021, 122.
MLA: Wang, Xiaofeng, et al. "Coarse-to-fine-grained method for image splicing region detection." Pattern Recognition 122 (2021).
APA: Wang, Xiaofeng, Wang, Yan, Lei, Jinjin, Li, Bin, Wang, Qin, Xue, Jianru. Coarse-to-fine-grained method for image splicing region detection. Pattern Recognition, 2021, 122.
Improving 3D Object Detection via Joint Attribute-oriented 3D Loss EI Scopus
Conference Paper | 2020, 951-956 | 31st IEEE Intelligent Vehicles Symposium, IV 2020
Abstract:

3D object detection has become a hot topic in intelligent vehicle applications in recent years. Deep learning has generally been the primary framework for 3D object detection, with regression of the object location and classification of objectness as its two indispensable components. During training, the ℓ_n (n = 1, 2) losses and the focal loss are the usual choices for minimizing the regression and classification losses, respectively. However, two problems remain in existing methods. For the regression component, there is a gap between evaluation metrics such as 3D Intersection over Union (IoU) and the traditional regression loss. For the classification component, the confidence score is ambiguous due to the binary label assignment of the target. To solve these problems, we propose a loss that joins 3D IoU with other geometric attributes (named the joint attribute-oriented 3D loss), which can be used directly to optimize the regression component. In addition, the joint attribute-oriented 3D loss can assign a soft label to supervise the training of the classification. By incorporating the proposed loss function into several state-of-the-art 3D object detection methods, significant performance improvements are achieved on the KITTI benchmark.
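The IoU part of such a loss can be sketched for axis-aligned boxes as 1 − IoU of the two volumes. This is a deliberate simplification: the paper's joint loss also folds in orientation and other geometric attributes, which are omitted here, and real 3D detectors handle rotated boxes.

```python
def aabb_iou_loss(box_a, box_b):
    """1 - IoU regression loss for axis-aligned 3D boxes given as
    (x0, x1, y0, y1, z0, z1) with x0 < x1, etc."""
    ix = max(0.0, min(box_a[1], box_b[1]) - max(box_a[0], box_b[0]))
    iy = max(0.0, min(box_a[3], box_b[3]) - max(box_a[2], box_b[2]))
    iz = max(0.0, min(box_a[5], box_b[5]) - max(box_a[4], box_b[4]))
    inter = ix * iy * iz
    def vol(b):
        return (b[1] - b[0]) * (b[3] - b[2]) * (b[5] - b[4])
    union = vol(box_a) + vol(box_b) - inter
    return 1.0 - (inter / union if union > 0.0 else 0.0)
```

Because the loss is computed directly on the overlap volume, minimizing it aligns training with the 3D IoU evaluation metric, closing the gap the abstract describes; the IoU value itself (1 − loss) can also serve as a soft classification target.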

Keywords:

Benchmarking; Deep learning; Intelligent vehicle highway systems; Object detection; Object recognition

Cite:


GB/T 7714: Ye, Zhen, Xue, Jianru, Dou, Jian, et al. Improving 3D Object Detection via Joint Attribute-oriented 3D Loss [C]. 2020: 951-956.
MLA: Ye, Zhen, et al. "Improving 3D Object Detection via Joint Attribute-oriented 3D Loss." (2020): 951-956.
APA: Ye, Zhen, Xue, Jianru, Dou, Jian, Pan, Yuxin, Fang, Jianwu, Wang, Di, et al. Improving 3D Object Detection via Joint Attribute-oriented 3D Loss. (2020): 951-956.
An Efficient Sampling-Based Hybrid A* Algorithm for Intelligent Vehicles EI Scopus
Conference Paper | 2020, 2104-2109 | 31st IEEE Intelligent Vehicles Symposium, IV 2020
Abstract:

In this paper, we propose an improved sampling-based hybrid A* (SBA*) algorithm for path planning of intelligent vehicles, which works efficiently in complex urban environments. Two main modifications are introduced into the traditional hybrid A* algorithm to improve its adaptivity in both structured and unstructured traffic scenes. First, a hybrid potential field (HPF) model considering both traffic regulations and obstacle configuration is proposed to represent the vehicle's workspace, and is used as a heuristic function. Second, a set of directional motion primitives is generated by taking the prior topological structure of the workspace into account. The path planner using SBA* not only obeys traffic regulations in structured scenes but is also capable of exploring complex unstructured scenes rapidly. Finally, a post-optimization step is adopted to increase the feasibility of the path. The efficacy of the proposed algorithm is extensively validated and tested with an autonomous vehicle in real traffic scenes. The experimental results show that SBA* works well in complex urban environments.
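The interplay of a potential field with A*-style search can be sketched on a plain grid, where each cell carries a potential cost that is added to the unit travel cost, steering the search away from penalized regions. This is a drastic simplification of SBA*, which searches continuous vehicle poses with directional motion primitives; the grid, costs, and Manhattan heuristic are assumptions of this sketch.

```python
import heapq

def plan(grid_cost, start, goal):
    """A* on a grid where each cell adds a potential-field cost to the
    unit travel cost; returns the lowest-cost path as a list of cells."""
    def h(p):  # admissible Manhattan-distance heuristic (costs are >= 0)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    rows, cols = len(grid_cost), len(grid_cost[0])
    frontier = [(h(start), 0.0, start, [start])]  # (f, g, cell, path)
    visited = set()
    while frontier:
        _, g, cur, path = heapq.heappop(frontier)
        if cur == goal:
            return path
        if cur in visited:
            continue
        visited.add(cur)
        x, y = cur
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < rows and 0 <= ny < cols and (nx, ny) not in visited:
                ng = g + 1.0 + grid_cost[nx][ny]
                heapq.heappush(frontier,
                               (ng + h((nx, ny)), ng, (nx, ny), path + [(nx, ny)]))
    return None
```

Raising a cell's potential (e.g., near obstacles or lane violations) makes paths through it more expensive, so the planner routes around it without any hard constraint.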

Keywords:

Heuristic algorithms; Intelligent vehicle highway systems; Urban planning; Vehicles

Cite:


GB/T 7714: Li, Gengxin, Xue, Jianru, Zhang, Lin, et al. An Efficient Sampling-Based Hybrid A* Algorithm for Intelligent Vehicles [C]. 2020: 2104-2109.
MLA: Li, Gengxin, et al. "An Efficient Sampling-Based Hybrid A* Algorithm for Intelligent Vehicles." (2020): 2104-2109.
APA: Li, Gengxin, Xue, Jianru, Zhang, Lin, Wang, Di, Li, Yongqiang, Tao, Zhongxing, et al. An Efficient Sampling-Based Hybrid A* Algorithm for Intelligent Vehicles. (2020): 2104-2109.
Address: XI'AN JIAOTONG UNIVERSITY LIBRARY (No. 28, Xianning West Road, Xi'an, Shaanxi, Post Code: 710049). Contact: 029-82667865.
Copyright: XI'AN JIAOTONG UNIVERSITY LIBRARY. Technical Support: Beijing Aegean Software Co., Ltd.