Title: Research on Vision-Based Pedestrian Detection and Counting (基于视觉的行人检测与计数研究)
Author: 刘晶晶 (Liu Jingjing)
Degree: Master of Engineering
Defense date: 2011-05-21
Degree-granting institution: Graduate University of Chinese Academy of Sciences (中国科学院研究生院)
Place of conferral: Institute of Automation, Chinese Academy of Sciences (中国科学院自动化研究所)
Advisor: 卢汉清 (Lu Hanqing)
Keywords: pedestrian detection; pedestrian counting; motion foreground segmentation; Markov Chain Monte Carlo algorithm
Alternative title: Pedestrian Detection and Counting Research Based on Computer Vision
Degree discipline: Computer Application Technology
Abstract (Chinese): Vision-based pedestrian detection and counting refers to recognizing, locating and counting pedestrians by analyzing static images or image sequences captured by fixed cameras, with little or no human intervention. Using cameras and computers to detect and count pedestrians frees people from monotonous, laborious manual monitoring and has wide applications in driving assistance, human-computer interaction, public security and other areas, so research on this problem has very real practical significance.

This thesis centers on pedestrian detection and counting. It analyzes the main challenges in the field and summarizes related work of recent years together with its shortcomings. Advances in motion detection have made it possible to obtain fairly good foreground images, and methods based on motion foreground segmentation have shown good performance on image sequences from real-world scenes. Building on such methods, this thesis proposes improved algorithms for phenomena that frequently occur in real scenes, such as inter-person occlusion. The main work and contributions are:

1. A pedestrian detection and counting method based on an adaptive human model. Complete pedestrians and body parts are modeled with contour information, grid templates are used to judge the visibility of the torso, and a branch-structured pedestrian classifier is built. A maximum a posteriori (MAP) estimation problem is formulated on the motion foreground segmentation to verify and refine the pre-detection results provided by the adaptive model. Owing to the part detectors and the adaptiveness of the model, the method handles occlusion well; moreover, good pre-detection results speed up the convergence of the Markov Chain Monte Carlo algorithm used in the optimization, so the method meets real-time detection and counting requirements.

2. A pedestrian counting method based on group context. Foreground images are extracted with background subtraction, and correspondence matrices between groups in adjacent frames are built to detect and track groups and to identify the groups related to a given group. The foreground images of a group and its related groups form the group context, which integrates spatial and temporal information as a counting reference. Group-context masks are assembled to construct a joint maximum a posteriori estimation problem, extending single-frame pedestrian counting to multiple frames. Experiments show that, by introducing historical information and richer spatial relations, the method copes well with inter-person occlusion, image depth effects and changes in pedestrian pose, and the counting results are consistent over time.

3. A system for monitoring pedestrians with a vertical-view (overhead) camera is studied and implemented. Based on orientation-slot matching, a circular template is slid over the edge image to detect the tops of pedestrians' heads, and a clustering algorithm refines the detected positions. Detections in consecutive frames are associated to track moving pedestrians and draw their trajectories, and the number of people in the scene is counted from the trajectories in the current frame. Experiments show that, since pedestrians hardly occlude one another under a vertical view, monitoring pedestrians by matching head-top contours is quite effective.

In summary, this thesis makes useful explorations in vision-based pedestrian detection and counting and improves and extends detection and counting methods based on motion foreground segmentation.
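The MAP-plus-MCMC formulation summarized in contributions 1 and 2 can be pictured with a small sketch. The Python code below is a minimal, hypothetical illustration, not the thesis's actual model: it scores a set of rectangular pedestrian hypotheses against a binary foreground mask and optimizes the configuration with birth/death/shift moves and Metropolis acceptance. The names (`render`, `log_posterior`, `mcmc_map`), the assumed pedestrian size, the scoring weights and the proposal scheme are all choices made for this example; the data-driven birth proposals only loosely mirror how good pre-detections speed up convergence in the thesis.

```python
# Hedged sketch: MAP estimation of a pedestrian configuration from a binary
# foreground mask, optimized with a simple Markov Chain Monte Carlo sampler.
# All sizes, weights and priors below are illustrative assumptions.
import numpy as np

RNG = np.random.default_rng(0)
PERSON_W, PERSON_H = 12, 30            # assumed pedestrian extent in pixels

def render(config, shape):
    """Rasterize a set of (x, y) pedestrian hypotheses into a binary image."""
    img = np.zeros(shape, dtype=bool)
    for x, y in config:
        img[y:y + PERSON_H, x:x + PERSON_W] = True
    return img

def log_posterior(config, fg):
    """Reward foreground explained by the hypotheses, penalize hallucinated
    area and an overly large pedestrian count (a crude prior)."""
    syn = render(config, fg.shape)
    explained = np.logical_and(syn, fg).sum()
    spurious = np.logical_and(syn, ~fg).sum()
    return 1.0 * explained - 2.0 * spurious - 50.0 * len(config)

def mcmc_map(fg, iters=3000):
    """Optimize the configuration with birth/death/shift moves."""
    h, w = fg.shape
    fg_ys, fg_xs = np.nonzero(fg)      # birth proposals sampled from the
                                       # foreground, so moves are not wasted
    config, best = [], []
    best_score = score = log_posterior(config, fg)
    for _ in range(iters):
        proposal = list(config)
        move = RNG.choice(["birth", "death", "shift"])
        if move == "birth" and len(fg_ys):
            k = RNG.integers(len(fg_ys))
            x = int(np.clip(fg_xs[k] - PERSON_W // 2, 0, w - PERSON_W))
            y = int(np.clip(fg_ys[k] - PERSON_H // 2, 0, h - PERSON_H))
            proposal.append((x, y))
        elif move == "death" and proposal:
            proposal.pop(RNG.integers(len(proposal)))
        elif move == "shift" and proposal:
            i = RNG.integers(len(proposal))
            x, y = proposal[i]
            proposal[i] = (int(np.clip(x + RNG.integers(-3, 4), 0, w - PERSON_W)),
                           int(np.clip(y + RNG.integers(-3, 4), 0, h - PERSON_H)))
        new_score = log_posterior(proposal, fg)
        # Metropolis acceptance on the unnormalized posterior.
        if np.log(RNG.random()) < new_score - score:
            config, score = proposal, new_score
            if score > best_score:
                best, best_score = list(config), score
    return best

if __name__ == "__main__":
    # Synthetic foreground mask: two partially overlapping pedestrians.
    fg = np.zeros((60, 80), dtype=bool)
    fg[10:40, 20:32] = True
    fg[15:45, 28:40] = True
    people = mcmc_map(fg)
    print("estimated count:", len(people), "positions:", people)
```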
Abstract (English): Pedestrian detection and counting aims to recognize, locate and count pedestrians in static images or image sequences taken by cameras with fixed viewpoints, with little or no human participation or interference. Using digital cameras and computers to detect and count pedestrians frees people from boring and onerous visual surveillance tasks and has various applications, such as driving assistance, human-computer interaction and public security. Research on vision-based pedestrian detection and counting is therefore of real practical significance.

This thesis concentrates on pedestrian detection and counting. We analyze the primary challenges in this domain and summarize related work of recent years as well as its shortcomings. With the rapid development of motion detection, fairly good foreground images have become available, and approaches to pedestrian detection and counting based on foreground image segmentation improve the state of the art on image sequences from realistic scenes. Along this line, we propose several novel methods to deal with phenomena that frequently occur in realistic scenes, for instance inter-person occlusion. The main contents and contributions of this thesis include:

1. An adaptive human model is proposed for pedestrian detection and counting. We model the whole pedestrian and its body parts with contour information, use two grid masks to infer the visibility of the torso sides, and construct a branch-structured pedestrian classifier. Using the foreground image segmentation to formulate a maximum a posteriori estimation problem, we verify and optimize the pre-detection results provided by the adaptive human model. Because of the part detectors and the adaptiveness of the pedestrian model, our approach is capable of tackling inter-person occlusion. Moreover, good pre-detection results accelerate the convergence of the Markov Chain Monte Carlo algorithm during the optimization, so the method satisfies real-time requirements.

2. We propose a group-context-based pedestrian counting method. From the foreground images we construct correspondence matrices between consecutive frames in order to detect and track groups as well as their related groups. Group context is modeled by the foreground masks of a given group and its relatives, integrating spatial and temporal information. Further, we assemble a series of context masks and formulate a joint maximum a posteriori ...
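Contribution 3 in the abstract describes detecting head tops in an overhead view by sliding a circular template over an edge image and clustering the responses. The sketch below is a minimal, hypothetical illustration of that idea using OpenCV and NumPy: the ring radius, Canny thresholds, score threshold, and the greedy merging rule are assumptions made for the example, and it does not implement the thesis's orientation-slot matching or its actual parameters.

```python
# Hedged sketch: overhead head-top detection by correlating an edge map with
# a normalized ring template, then merging nearby responses into detections.
import cv2
import numpy as np

def ring_template(radius=12, thickness=2):
    size = 2 * radius + 3
    tpl = np.zeros((size, size), dtype=np.float32)
    cv2.circle(tpl, (size // 2, size // 2), radius, 1.0, thickness)
    return tpl / tpl.sum()             # normalize so the score is edge density

def detect_heads(gray, radius=12, score_thresh=0.25, merge_dist=15):
    edges = cv2.Canny(gray, 80, 160).astype(np.float32) / 255.0
    tpl = ring_template(radius)
    # High response where edge pixels lie on a circle of roughly head size.
    score = cv2.filter2D(edges, -1, tpl)
    ys, xs = np.where(score > score_thresh)
    # Greedy clustering: keep the strongest response, suppress its neighbours.
    order = np.argsort(-score[ys, xs])
    centres = []
    for i in order:
        p = np.array([xs[i], ys[i]], dtype=float)
        if all(np.linalg.norm(p - c) > merge_dist for c in centres):
            centres.append(p)
    return centres

if __name__ == "__main__":
    # Synthetic overhead frame: two bright "heads" on a dark floor.
    frame = np.zeros((120, 160), dtype=np.uint8)
    cv2.circle(frame, (50, 60), 12, 200, -1)
    cv2.circle(frame, (110, 70), 12, 220, -1)
    heads = detect_heads(frame)
    print("people in view:", len(heads), "centres:", [c.tolist() for c in heads])
```

Counting from such detections would, as the abstract describes, associate them across consecutive frames into trajectories rather than counting single-frame responses directly.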
Language: Chinese
Other identifier: 200828014629076
Content type: Degree thesis
Source URL: http://ir.ia.ac.cn/handle/173211/7589
Collection: Graduates / Master's Theses
Recommended citation (GB/T 7714):
刘晶晶. 基于视觉的行人检测与计数研究[D]. 中国科学院自动化研究所. 中国科学院研究生院. 2011.