
Accommodating Hybrid Retrieval in a Comprehensive Video Data(6)

4. An Experimental Prototype

As part of our research, we have been building an experimental prototype system on the PC platform. In this section, we briefly describe our prototype work in terms of the implementation environment, the actual user interface and language facilities, and sample results.

4.1 Basic Implementation Environment & Functionalities

Starting from the first prototype of VideoMAP, our experimental prototyping has been conducted based on Microsoft Visual C++ (as the host programming language) and the NeoAccess OODB toolkit (as the underlying database engine). Here we briefly introduce the main user facilities supported by VideoMAP+, the kernel system upon which our subsequent (web-based) prototype [LCWZ01] is being developed (using IONA's developer-focused CORBA product, ORBacus, with necessary extensions). VideoMAP+ currently runs on Windows and offers a user-friendly graphical user interface supporting two main kinds of activities: video editing and video retrieval.

4.1.1 Video Editing

When a user invokes the video editing function, a screen comes up for uploading a new video into the system and annotating its semantics (cf. Figure 4.1). For example, s/he can name the video object and assign some basic descriptions to get started. A Video Segmentation sub-module is devised to help decompose the whole video stream into segments and to identify keyframes. Further, the Feature Extraction module calculates the visual features of the media object. By reviewing the video abstraction structure composed of the segments and keyframes, the user can annotate the semantics according to his/her understanding and preference (see Figure 4.1). As a result, a video object is created and linked with a number of video segment objects, each of which contains keyframe objects and the low-level image features.
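The object structure that video editing produces can be sketched as follows. This is an illustrative model only: the class and field names are assumptions, and the actual VideoMAP+ classes are persistent NeoAccess OODB objects rather than plain structs.

```cpp
#include <string>
#include <vector>

// Low-level visual features computed by the Feature Extraction module
// (the concrete feature set is an assumption here).
struct ImageFeatures {
    std::vector<double> colorHistogram;  // e.g. quantized color bins
};

// A keyframe identified by the Video Segmentation sub-module.
struct Keyframe {
    int frameNumber = 0;
    ImageFeatures features;
};

// One segment of the decomposed video stream, carrying the
// user-supplied semantic annotation.
struct VideoSegment {
    int startFrame = 0;
    int endFrame = 0;
    std::vector<Keyframe> keyframes;
    std::string annotation;
};

// The top-level video object, linked with its segment objects.
struct Video {
    std::string name;
    std::string description;
    std::vector<VideoSegment> segments;
};
```

The key point mirrored from the text is the containment chain: one video object links to many segment objects, and each segment object contains its keyframes together with their low-level features.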

After creating and importing several video objects into the database, the user can create scene objects in a tree-like structure, and semantic feature objects in another tree. The user can also create attribute and method objects associated with the scenes and semantic features. A scene object (e.g. White_House_News) may contain a ToC object that dynamically links up video segment objects defined in several video objects. Semantic feature objects (e.g. Clinton, Lewinsky) can act as indexes which are dynamically attached to the video segment objects in the ToC of a scene object. (For instance, Clinton may be attached to several different video segment objects in the ToCs of different scene objects, whereas Lewinsky may be attached to others.) Users can specify a query to retrieve all scene objects of different video sources, by imposing the condition that the scene objects should contain both semantic feature objects.
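The scene/ToC/semantic-feature relationship and the "contains both features" query described above can be sketched as below. Names and the query function are hypothetical illustrations, not the VideoMAP+ API.

```cpp
#include <algorithm>
#include <set>
#include <string>
#include <vector>

using SegmentId = int;  // stands in for a reference to a video segment object

// A semantic feature object (e.g. "Clinton") acts as an index over the
// segment objects it is attached to, possibly across several scenes' ToCs.
struct SemanticFeature {
    std::string name;
    std::set<SegmentId> attachedSegments;
};

// A scene object whose ToC dynamically links up segment objects
// defined in several video objects.
struct Scene {
    std::string name;
    std::vector<SegmentId> toc;
};

// Retrieve all scenes whose ToC contains at least one segment indexed by
// EACH of the given features (e.g. both Clinton and Lewinsky).
std::vector<std::string> scenesWithAllFeatures(
        const std::vector<Scene>& scenes,
        const std::vector<const SemanticFeature*>& features) {
    std::vector<std::string> result;
    for (const Scene& s : scenes) {
        bool all = true;
        for (const SemanticFeature* f : features) {
            bool hit = std::any_of(s.toc.begin(), s.toc.end(),
                [&](SegmentId id) { return f->attachedSegments.count(id) > 0; });
            if (!hit) { all = false; break; }
        }
        if (all) result.push_back(s.name);
    }
    return result;
}
```

Because the features index segments rather than scenes directly, a scene qualifies as soon as any segment in its ToC carries each feature, which matches the example of Clinton and Lewinsky being attached to different segments of the same scene.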

An activity hierarchy is created in the Specification Language component (see Figure 4.2) in order to facilitate the creation of visual objects and query retrieval. The Specification Language model can be used to attach spatio-temporal information to the semantic feature objects (cf. Figure 4.3), with a visual object being created and attached to the activity hierarchy at the same time.
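One way to picture the spatio-temporal annotation of a visual object is the sketch below. The field layout (bounding box plus frame interval) and the names are assumptions for illustration; the actual Specification Language model is shown in Figures 4.2 and 4.3.

```cpp
#include <string>
#include <vector>

// Where a visual object appears within a frame (assumed bounding-box form).
struct Region {
    int x = 0, y = 0, width = 0, height = 0;
};

// When and where a visual object appears in the video stream.
struct SpatioTemporalExtent {
    int startFrame = 0;
    int endFrame = 0;
    Region region;
};

// A visual object created for a semantic feature (e.g. "Clinton") and
// attached to a node of the activity hierarchy.
struct VisualObject {
    std::string semanticFeature;  // the annotated semantic feature object
    std::string activityNode;     // node in the activity hierarchy
    std::vector<SpatioTemporalExtent> extents;
};
```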

Figure 4.1 Annotating the segments after video segmentation

Figure 4.2 Creating an Activity Hierarchy in the Specification Language dialog


Figure 4.3 Annotating spatio-temporal information and creating a visual object by using the Activity Hierarchy

4.1.2 Video Retrieval

VideoMAP+ also provides an interface for the user to issue queries in its query language (i.e. CAROL/ST with CBR). All kinds of video objects, such as "Scene", "Segment", and "Keyframe", can be retrieved by specifying their semantics or visual information. Figure 4.4 shows a sample query issued by the user, which is validated by a compiler sub-module before execution. The retrieved video objects are then returned to the user in the form of a tree (cf. Figure 4.4), whose nodes can not only be played out, but can also be used subsequently for formulating new queries in an iterative manner.
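The validate-then-iterate retrieval loop can be sketched as follows. This is a stand-in only: the trivial validation below is not the CAROL/ST compiler, and the query string syntax is invented for illustration, not actual CAROL/ST.

```cpp
#include <string>
#include <vector>

// A node of the result tree returned to the user; every node is a
// retrievable video object ("Scene", "Segment", or "Keyframe").
struct ResultNode {
    std::string objectType;
    std::string objectName;
    std::vector<ResultNode> children;
};

// Placeholder for the compiler sub-module that validates a query before
// execution; the real compiler checks the full CAROL/ST grammar.
bool validate(const std::string& query) {
    return !query.empty();
}

// Formulate a follow-up query scoped to a node picked from the result
// tree, mimicking the iterative refinement described above
// (the emitted syntax is a made-up placeholder, not CAROL/ST).
std::string refineQuery(const ResultNode& node, const std::string& condition) {
    return "SELECT Segment FROM " + node.objectName + " WHERE " + condition;
}
```

The design point carried over from the text is that result-tree nodes are first-class objects, so a node selected from one result can seed the next query without retyping its context.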

Figure 4.4 Query facilities
