
SICOS: a tool for Social Image Cluster-based Organization and Search


Issa Ayoub   Karl Codoumi   Joe Tekli
E.C.E. Department
Lebanese American University
36 Byblos, Lebanon
issa.ayoub@lau.edu   karl.codoumi@lau.edu   joe.tekli@lau.edu.lb

I. Introduction

Over the past few decades, the amount of images published on the Web, especially on social sites like Facebook and Flickr, has been increasing exponentially. This growth has been further fueled by the widespread availability of photo-taking devices such as smartphones and tablets, as well as increased connectivity to the Web through wireless networks and mobile Internet. Yet, with the increased availability of social Web images comes the challenge of managing these images in a personalized manner, so that a user can efficiently organize and search for images based on her needs (e.g., grouping together and/or searching for similar images taken at a certain place and/or time, tagged with a certain friend, etc.).

To address this problem, we have designed and implemented a solution called SICOS, for Social Image Cluster-based Organization and Search, which groups together images sharing similar semantic and visual features in order to simplify their organization and querying. This requires both low-level and high-level image feature extraction and processing: low-level features represent color, texture, and shape image descriptors, whereas high-level features consist of textual descriptors extracted from image annotations and surrounding text.
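To illustrate the two feature levels, the following is a minimal sketch (not the actual SICOS implementation): a coarse quantized color histogram stands in for a low-level visual descriptor, and Jaccard overlap between tag sets stands in for a high-level textual measure. The function names and bin count are illustrative assumptions.

```python
from collections import Counter

def color_histogram(pixels, bins=8):
    """A coarse joint RGB histogram, a common low-level color descriptor.
    `pixels` is a list of (r, g, b) tuples with channel values in 0..255."""
    step = 256 // bins
    hist = Counter((r // step, g // step, b // step) for (r, g, b) in pixels)
    total = sum(hist.values())
    return {bin_key: count / total for bin_key, count in hist.items()}  # normalized

def tag_similarity(tags_a, tags_b):
    """Jaccard overlap between two images' tag sets, a simple
    high-level textual similarity measure."""
    a, b = set(tags_a), set(tags_b)
    return len(a & b) / len(a | b) if a | b else 0.0
```

Real descriptors (e.g., texture and shape features, or weighted terms from captions and comments) are richer, but they follow the same pattern: each image is mapped to a compact feature vector that can be compared efficiently.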

II. System Architecture

The overall architecture of SICOS is shown in Fig. 1. It accepts as input: social Web images (downloaded from a social site like Facebook), user image organization parameters (highlighting the kinds of image features the user is interested in, as well as image organization preferences), and user search parameters (text-based and/or content-based user queries). The system then performs image storage and organization following the user's organization preferences, and returns search results to answer user queries. SICOS differs from existing image search and result organization solutions, which are: i) generic, addressing Web-based image processing rather than being specifically geared toward social image processing, e.g., [1, 2]; ii) computationally expensive, performing automatic face or object recognition, e.g., [3, 4], and/or event detection and identification, e.g., [4, 5], when indexing and searching social images; and/or iii) dependent on specific conditions or contextual data to work properly (cf. Section II). Instead, we provide a computationally efficient solution integrating legacy techniques from Web-based and social image processing, requiring minimal contextual/input data, to deliver the following functionality: i) efficient indexing and storage of images with the corresponding feature information; ii) comparing images based on low-level visual features, including color, texture, and shape descriptors; iii) comparing images based on high-level textual features, including tags, captions, comments, and geographic location; iv) clustering images based on low- and high-level feature similarities; v) simple access to images through cluster representatives; vi) different user-friendly cluster visualizations; vii) searching images based on low-level visual features; and viii) searching based on high-level textual features.
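One way to combine the two feature levels for comparison and clustering is a weighted blend of a visual similarity and a textual similarity. The sketch below assumes normalized histograms as visual features and tag sets as textual features; the blending weight `alpha` is a hypothetical user parameter standing in for the user's feature-preference settings, not a SICOS parameter name.

```python
def jaccard(tags_a, tags_b):
    """Jaccard overlap between two tag sets (high-level textual similarity)."""
    a, b = set(tags_a), set(tags_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def visual_similarity(h1, h2):
    """Similarity of two normalized histograms (dicts mapping bin -> weight):
    1 minus half the L1 distance, which lies in [0, 1]."""
    keys = set(h1) | set(h2)
    return 1.0 - 0.5 * sum(abs(h1.get(k, 0.0) - h2.get(k, 0.0)) for k in keys)

def combined_similarity(img_a, img_b, alpha=0.5):
    """Blend low-level (visual) and high-level (textual) similarity.
    Each image is a dict with a "hist" histogram and a "tags" list."""
    low = visual_similarity(img_a["hist"], img_b["hist"])
    high = jaccard(img_a["tags"], img_b["tags"])
    return alpha * low + (1 - alpha) * high
```

Setting `alpha` close to 1 makes grouping mostly visual; close to 0, mostly textual, mirroring the user organization parameters described above.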

Fig. 1. Overall architecture of SICOS.

To test the performance of our solution, we evaluated execution time for each of its constituent components while varying user parameters, including: i) image feature extraction, ii) image similarity computation, iii) max-min clustering, iv) incremental clustering, v) low- and high-level based image search, and vi) image result visualization (considering our different display techniques). Performance experiments (available below) highlight the efficiency of our approach in handling large image repositories, where execution time depends mainly (linearly) on the number of clusters/cluster representatives rather than on the actual size of the repository.
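The max-min idea can be sketched as farthest-point-first selection of cluster representatives: each new representative is the item farthest from all representatives chosen so far, and every image is then assigned to its nearest representative. This is an illustrative sketch under that standard formulation, not the SICOS implementation; a query compared against representatives first explains why search time scales with the number of clusters rather than the repository size.

```python
def max_min_representatives(items, distance, k):
    """Farthest-point-first (max-min) selection of k representatives:
    greedily pick the item farthest from all representatives chosen so far."""
    reps = [items[0]]
    while len(reps) < k:
        # An item's score is its distance to the closest chosen representative.
        farthest = max(items, key=lambda x: min(distance(x, r) for r in reps))
        reps.append(farthest)
    return reps

def assign_clusters(items, reps, distance):
    """Assign each item to its nearest representative's cluster."""
    clusters = {r: [] for r in reps}
    for x in items:
        nearest = min(reps, key=lambda r: distance(x, r))
        clusters[nearest].append(x)
    return clusters
```

At query time, comparing a query image against the k representatives (rather than all stored images) before descending into the best-matching cluster keeps per-query cost linear in k.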

The prototype system and experimental results can be downloaded from the following links:

* This study is partly funded by the CNRS Lebanon, project: NCSR_00695_01/09/15, and by LAU research fund: SOERC1516R003.

References

  1. Chen Y. et al., Content-based Image Retrieval by Clustering. Proc. of the ACM Inter. Conf. on Multimedia Information Retrieval (MIR'03), 2003, pp. 193-200.
  2. van Leuken R.H. et al., Visual Diversification of Image Search Results. Proc. of the Inter. World Wide Web Conference, 2009, pp. 341-350.
  3. Suh B. and Bederson B., Semi-Automatic Photo Annotation Strategies using Event-based Clustering and Clothing-based Person Recognition. Interacting with Computers, 2007, 19(4): 524-544.
  4. Phillips P. et al., Preliminary Face Recognition Grand Challenge Results. Inter. Conf. on Automatic Face and Gesture Recognition, 2006, pp. 15-24.
  5. Kang H. et al., Capture, Annotate, Browse, Find, Share: Novel Interfaces for Personal Photo Management. International Journal of Human-Computer Interaction, 2007, 23(3): 315-337.
