Papers

This page introduces the major journal paper achievements of the Center of Human-centered Interaction for Coexistence (실감교류인체감응솔루션연구단).

Adaptive FOA Region Extraction for Saliency-Based Visual Attention

Journal: International Journal of Information Processing and Management (ISSN 2093-4009)
SCI status: Domestic (non-SCI)
Published: July 2012, Vol. 3, No. 3
Standardized rank / impact index: -    IF: -    Citations: -

Authors: HyungJik Lee, ChangSeok Bae, JangHan Lee, SungWon Sohn
Abstract
This paper describes an adaptive extraction of the focus of attention (FOA) region for saliency-based visual attention. The saliency map model generates the most salient and significant location in the visual scene. In the human brain, there is an inhibition-of-return property by which the currently attended point is prevented from being attended again. Therefore, the focus of attention and the inhibition-of-return function need to be handled by employing an appropriate mask for the salient region, and a shape-based mask may be more suitable than other masks. In contrast to the existing fixed-size FOA, we propose an adaptive, shape-based FOA region derived from the most salient region of the saliency map. We determine the most salient point by checking every value in the saliency map, expand the neighborhood of that point until the average value of the neighborhood falls below 75% of the value of the most salient point, and then find the contour of the neighborhood. As a result, our adaptive FOA is close to the shape of the attended object, which makes it efficient for object recognition and other computer vision fields.
Keywords: Focus of Attention, Visual Attention, Saliency, Exogenous
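
The abstract describes the extraction procedure only in prose. The sketch below is a minimal illustration of one plausible reading of it, assuming a 2-D NumPy saliency map; the function name adaptive_foa_region, the square window growth, the thresholded shape mask, and the 4-neighbour contour test are illustrative assumptions, not the authors' published implementation (the abstract specifies only the 75% mean criterion and the contour step).

```python
import numpy as np


def adaptive_foa_region(saliency_map, ratio=0.75):
    """Sketch: extract an adaptive, shape-based FOA region from a 2-D saliency map.

    Returns a boolean mask of the region and the (row, col) coordinates
    of its contour pixels.
    """
    h, w = saliency_map.shape

    # 1. Most salient point: scan every value in the saliency map.
    peak_y, peak_x = np.unravel_index(np.argmax(saliency_map), (h, w))
    peak_value = float(saliency_map[peak_y, peak_x])

    # 2. Grow a window around the peak until its mean saliency drops
    #    below ratio * peak_value (75% in the paper).
    radius = 1
    while True:
        y0, y1 = max(0, peak_y - radius), min(h, peak_y + radius + 1)
        x0, x1 = max(0, peak_x - radius), min(w, peak_x + radius + 1)
        window = saliency_map[y0:y1, x0:x1]
        if window.mean() < ratio * peak_value or radius >= max(h, w):
            break
        radius += 1

    # 3. Keep only pixels inside the grown window that are themselves above
    #    the threshold, so the mask follows the attended object's shape.
    mask = np.zeros((h, w), dtype=bool)
    mask[y0:y1, x0:x1] = window >= ratio * peak_value

    # 4. Contour: region pixels with at least one 4-neighbour outside the region.
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    contour = np.argwhere(mask & ~interior)
    return mask, contour


# Example: a random array stands in for a real saliency map.
saliency = np.random.rand(48, 64).astype(np.float32)
foa_mask, foa_contour = adaptive_foa_region(saliency)
```

Growing the region from the peak and thresholding it keeps the mask close to the attended object's shape, which is the property the abstract contrasts with a fixed-size FOA.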

Project: New-Concept UI-Based Interaction Technology for Human Rapport (인간 교감 신개념 UI 기반 인터랙션 기술)
Research institution: Electronics and Telecommunications Research Institute (ETRI)    Principal investigator: SungWon Sohn