Out-of-Core Tensor Approximation

of Multi-Dimensional Matrices of Visual Data

    Hongcheng Wang, Qing Wu, Lin Shi, Yizhou Yu and Narendra Ahuja




With the advent of data-driven models such as light field models, BTFs and BRDFs, graphics researchers have been striving for powerful representation and compression methods to cope with the enormous amount of data involved. The goal of a data-driven model is to produce a compact representation from a huge amount of redundant data so that efficient rendering can be performed from the representation. The incremental method discussed previously, which is mainly intended for feature extraction, therefore does not serve this purpose well. An out-of-core processing capability becomes a necessity for multidimensional datasets such as 4D volume simulation sequences, 6D BTFs, and 7D DBTFs (we were the first to present a dynamic appearance model with respect to time, lighting and view, which we call Dynamic BTFs, or DBTFs). The out-of-core approach opens the door to compactly modeling and efficiently rendering large-scale data on a personal computer.
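The core out-of-core idea, partitioning a tensor into blocks and performing tensor operations blockwise, can be sketched in NumPy. This is a minimal illustration under our own naming, not the paper's implementation: the Gram matrix of a mode unfolding (from which a mode's basis vectors can later be extracted) is accumulated one block at a time, so only a single block is ever resident in memory.

```python
import numpy as np

def mode0_gram_blockwise(blocks, n0):
    """Accumulate G = X_(0) @ X_(0).T one block at a time.

    Each block is a sub-tensor whose mode-0 size equals n0; the Gram
    matrix is a sum of outer products over columns of the unfolding,
    so column ordering across blocks does not matter.
    """
    G = np.zeros((n0, n0))
    for block in blocks:                  # only one block in memory at a time
        M = block.reshape(n0, -1)         # mode-0 unfolding of this block
        G += M @ M.T
    return G

rng = np.random.default_rng(1)
full = rng.standard_normal((6, 10, 10))
blocks = [full[:, :5, :], full[:, 5:, :]]  # partition along mode 1
G = mode0_gram_blockwise(blocks, 6)

# The blockwise Gram equals the in-core Gram of the full unfolding.
M = full.reshape(6, -1)
assert np.allclose(G, M @ M.T)
```

In a real out-of-core setting each block would be streamed from disk rather than sliced from an in-memory array; the accumulation pattern is the same.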




Figure: visual comparison of results from PCA, Modified TensorTexture, TensorTexture, and our method.



1. Hongcheng Wang, Qing Wu, Lin Shi, Yizhou Yu and Narendra Ahuja, Out-of-Core Tensor Approximation of Multi-Dimensional Matrices of Visual Data, SIGGRAPH 2005, Los Angeles, August 2005 (ACM Transactions on Graphics, Vol. 24, No. 3, 2005).

Abstract: Tensor approximation is necessary to obtain compact multilinear models for multi-dimensional visual datasets. Traditionally, each multi-dimensional data item is represented as a vector. Such a scheme flattens the data and partially destroys the internal structures established throughout the multiple dimensions. In this paper, we retain the original dimensionality of the data items to more effectively exploit existing spatial redundancy and allow more efficient computation. Since the size of visual datasets can easily exceed the memory capacity of a single machine, we also present an out-of-core algorithm for higher-order tensor approximation. The basic idea is to partition a tensor into smaller blocks and perform tensor-related operations blockwise. We have successfully applied our techniques to three graphics-related data-driven models, including 6D bidirectional texture functions, 7D dynamic BTFs and 4D volume simulation sequences. Experimental results indicate that our techniques can not only process out-of-core data, but also achieve higher compression ratios and quality than previous methods.
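The rank-reduced multilinear model described in the abstract can be illustrated with a small in-core Tucker-style decomposition (a truncated higher-order SVD) in NumPy. This is a minimal sketch of the general technique, not the paper's out-of-core algorithm; all function names and the chosen ranks are ours.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, A, mode):
    """Mode-n product: contract matrix A against dimension `mode` of T."""
    return np.moveaxis(np.tensordot(A, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T, ranks):
    """Truncated higher-order SVD: one factor matrix per mode, plus a core."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]   # leading r left singular vectors
    core = T
    for mode, U in enumerate(factors):
        core = mode_multiply(core, U.T, mode)  # project onto each mode's basis
    return core, factors

def reconstruct(core, factors):
    T = core
    for mode, U in enumerate(factors):
        T = mode_multiply(T, U, mode)
    return T

rng = np.random.default_rng(0)
T = rng.standard_normal((8, 8, 8))
core, factors = hosvd(T, ranks=(4, 4, 4))
approx = reconstruct(core, factors)
```

The compact model stores only the reduced core (4x4x4) and three 8x4 factor matrices in place of the full 8x8x8 tensor; reducing the ranks trades reconstruction quality for compression, which is the trade-off the paper's experiments measure.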

Full Text:   PDF (~7MB)


@article{Wang:2005:OTA,
 author = {Hongcheng Wang and Qing Wu and Lin Shi and Yizhou Yu and Narendra Ahuja},
 title = {Out-of-core tensor approximation of multi-dimensional matrices of visual data},
 journal = {ACM Trans. Graph.},
 volume = {24},
 number = {3},
 year = {2005},
 issn = {0730-0301},
 pages = {527--535},
 doi = {},
 publisher = {ACM Press},
 address = {New York, NY, USA},
}



  1. Bidirectional Texture Function (BTF) Compression and Modeling: The BTF captures the appearance of extended textured surfaces. Since a BTF is originally represented as a large collection of images, we compute a higher-order tensor with reduced ranks as a compact generative model, which captures the essential characteristics while removing redundancies. The figure at right shows a virtual scene with a cube mapped with a SPONGE BTF and a vase mapped with a LICHEN BTF; a point light source is placed near the "sponge".

  1. Dynamic Bidirectional Texture Functions (Dynamic BTFs): A dynamic BTF is a seven-dimensional function describing the dynamic appearance of a surface with changing geometric or photometric properties. Our dynamic BTFs are true time-varying BTFs of the underlying dynamic surface. Although the original data is up to 30GB, it takes only two seconds to generate an image of the pool for an arbitrary time and an arbitrary pair of view and illumination directions. The figure at right shows a water pool mapped with a dynamic BTF in tensor representation; the image shows the appearance of the pool during sunset.
  1. Physically Simulated Volume Sequences: Many computer animation and simulation techniques generate data on a three-dimensional volume grid. The generated data is actually four-dimensional, with the volume grid accounting for three dimensions and time being the fourth. Ours is the first technique for compressing 4D dynamic volume data as a whole. The figure at right is rendered at a 95.3% compression ratio.


Updated: Jan. 1, 2006