Accepted Special Sessions

I. VR Video Coding and Transmission

    Yago Sanchez, Fraunhofer HHI, Berlin, Germany
    Sejin Oh, LG Electronics, Seoul, South Korea
    Mathias Wien, RWTH Aachen University, Aachen, Germany
    Emmanuel Thomas, TNO, The Hague, Netherlands
    Jill Boyce, Intel Corporation, Portland, Oregon, USA

Virtual Reality (VR) video (a.k.a. 360° video) streaming to Head-Mounted Displays (HMDs) has gained considerable attention over the last couple of years. The recent market availability of numerous consumer HMDs, such as the Oculus Rift, Samsung Gear VR, and Google Cardboard, has created demand for a sustainable business model based on premium services offering a high-quality 360° video streaming experience. In addition, the number of omnidirectional cameras for the professional and consumer markets has grown rapidly, leading to an explosion of 360° video content produced by users and studios alike. At the same time, standardization bodies such as ITU and MPEG have started work in the field to provide an efficient solution for VR video with interoperable ecosystems. All of this indicates that the entertainment industry sees the potential for 360° video to become ubiquitous in the near future. However, VR applications place stricter requirements on the established end-to-end service chain than traditional streaming services. 360° videos are expected to have several times UHD resolution as a whole, resulting in extremely high bitrate and throughput requirements for existing coding and transmission procedures. Moreover, as an interactive service, the system latency must satisfy stringent constraints. Significant challenges therefore stand in the way of feasible VR multimedia applications.
      As a result, it is crucial that new, efficient video coding and transmission concepts be designed to match the requirements of VR and enable services with a desirable quality of experience, especially for premium content.
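
As a rough illustration of the throughput problem, the back-of-the-envelope Python sketch below compares the bitrate of streaming the full 360° sphere against streaming only the user's current viewport. All figures (an 8K equirectangular frame, 60 fps, ~0.1 compressed bit per pixel, a 90° viewport) are illustrative assumptions, not measurements or recommendations.

    # Back-of-the-envelope throughput estimate for 360° video streaming.
    # All numbers below are illustrative assumptions, not measurements.

    def bits_per_second(width, height, fps, bits_per_pixel):
        # Rough bitrate from resolution, frame rate, and an assumed
        # compressed bits-per-pixel figure.
        return width * height * fps * bits_per_pixel

    # Assumption: an 8K equirectangular frame (7680x3840) at 60 fps,
    # compressed at ~0.1 bit/pixel (a rough ballpark for modern codecs).
    full_sphere = bits_per_second(7680, 3840, 60, 0.1)

    # Assumption: a ~90° viewport covers about 1/8 of the sphere
    # (90/360 horizontally times 90/180 vertically).
    viewport_only = full_sphere / 8

    print(f"full sphere: {full_sphere / 1e6:.0f} Mbit/s")    # ~177 Mbit/s
    print(f"viewport   : {viewport_only / 1e6:.0f} Mbit/s")  # ~22 Mbit/s

Viewport-dependent delivery along these lines is one of the directions this session targets, since it trades transmission rate against the latency constraints noted above.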



II. Immersive Video Coding

    Ronggang Wang, Peking University Shenzhen Graduate School, China
    Li Song, Shanghai Jiaotong University, China
    Li Zhang, Qualcomm, USA

With the growing popularity of Head-Mounted Displays (HMDs), more and more immersive video content is emerging. Since the Field of View (FOV) of immersive video is much larger than that of traditional video, many more pixels are required for immersive video to provide quality similar to traditional video at a particular viewpoint. The high data volume poses new technical challenges for transmitting immersive video over networks. Recently, the ISO/IEC MPEG and IEEE 1857 working groups have been working to define new, efficient tools to compress immersive video.
     This special session will present new technical developments in immersive video coding.
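
To make the pixel-count argument concrete, the following Python sketch estimates, under illustrative assumptions only (a 90° HMD viewport that should look as sharp as HD video), how large an equirectangular 360°x180° frame must be to match that angular pixel density over the whole sphere.

    # Why immersive video needs many more pixels: to look as sharp as
    # conventional video inside the viewport, the whole sphere must be
    # stored at the viewport's angular pixel density.
    viewport_fov_deg = 90       # assumed horizontal FOV of the HMD
    viewport_width_px = 1920    # assumed HD-like sharpness in the viewport

    pixels_per_degree = viewport_width_px / viewport_fov_deg  # ~21 px/degree

    # Equirectangular frame covering 360° x 180° at the same density:
    sphere_w = round(360 * pixels_per_degree)  # 7680
    sphere_h = round(180 * pixels_per_degree)  # 3840

    hd_pixels = 1920 * 1080
    print(f"{sphere_w} x {sphere_h} pixels, i.e. "
          f"{sphere_w * sphere_h / hd_pixels:.0f}x the pixels of an HD frame")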



III. Deep Learning for Image and Video Compression

    Dong Liu, University of Science and Technology of China, China
    Siwei Ma, Peking University, China
    Jizheng Xu, Microsoft Research Asia, China

Image and video compression is one of the tasks that neural networks were expected to be good at as early as the 1980s. Recently, deep learning has made remarkable progress and has shown great potential in many areas of computer vision and image processing. However, deep learning based image and video compression remains largely unexplored. While convolutional neural networks (CNNs) have indeed been adopted for post-processing (artifact removal) and for accelerating encoding (mode decision), the potential benefits that deep learning can bring to image and video compression may be far greater, and need to be demonstrated to the community in a timely manner. Some researchers started investigating deep learning based image and video compression several years ago, but only in the last year has such work received serious attention from both academia and industry. To the best of our knowledge, this is the first special session at a related top-tier conference to concentrate on deep learning based image and video compression.
     The purpose of this special session is to consolidate high-quality and potentially high-impact work, as well as to provide a unique stage for the community to exchange and discuss their perspectives.
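
As one concrete example of the post-processing (artifact removal) direction mentioned above, here is a minimal PyTorch sketch of a residual CNN that refines a decoded frame. The three-layer architecture, layer sizes, and training setup are illustrative assumptions, not any specific published model.

    # Minimal sketch: CNN post-processing of a decoded (artifact-laden)
    # frame. Architecture and sizes are illustrative assumptions.
    import torch
    import torch.nn as nn

    class ArtifactRemovalCNN(nn.Module):
        def __init__(self, channels=1, features=64):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(channels, features, kernel_size=9, padding=4),
                nn.ReLU(inplace=True),
                nn.Conv2d(features, 32, kernel_size=5, padding=2),
                nn.ReLU(inplace=True),
                nn.Conv2d(32, channels, kernel_size=5, padding=2),
            )

        def forward(self, decoded):
            # Predict a residual correction and add it back to the
            # decoded frame.
            return decoded + self.body(decoded)

    # Training would minimize, e.g., the MSE between the restored frame
    # and the original uncompressed frame:
    model = ArtifactRemovalCNN()
    decoded = torch.rand(1, 1, 64, 64)   # stand-in decoded luma patch
    restored = model(decoded)
    loss = nn.functional.mse_loss(restored, torch.rand(1, 1, 64, 64))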



IV. Astronomical Big Data Processing

    Long Xu, Chinese Academy of Sciences, Beijing, China
    Xiangyang Ji, Tsinghua University, Beijing, China
    Weisi Lin, Nanyang Technological University, Singapore

With the development of high-resolution and high-precision astronomical observation instruments, modern astronomy has entered a “big data” era. Beyond the traditional focus on physical theory, computer science, electronics, and information science have been brought into astronomical research, advancing it into a modern era of high-performance computing, automatic control, self-monitoring, remote supervision, and more.
    This session calls for theoretical studies, algorithms, and system designs for astronomical big data processing, especially applications of artificial intelligence (AI), including the archiving, classification, and compression of observation data. Such studies would advance and ease traditional astronomical research. Automatic data archiving and classification are particularly desirable for astronomical big data; deep learning, which has been widely used in general image processing, can be expected to make a big difference here.



V. Point Cloud Compression

    Zhibo Chen, University of Science and Technology of China, China
    Rufael Mekuria, Unified Streaming, Netherlands
    Euee S. Jang, Hanyang University, Korea

Advanced 3D representations of the world are enabling more immersive forms of interaction and communication, and also allow machines to understand, interpret, and navigate our world. 3D point clouds have emerged as an enabling representation of such information. A point cloud is a set of points in 3D space, each with associated attributes, e.g. color, material properties, etc. Point clouds can be used to reconstruct an object or a scene as a composition of such points. They can be captured using multiple cameras and depth sensors in various setups and may be made up of thousands to billions of points in order to realistically represent reconstructed scenes.
     Compression technologies are needed to reduce the amount of data required to represent a point cloud. In particular, technologies are needed for lossy compression of point clouds for use in real-time communications, and technology is sought for lossless point cloud compression in the context of GIS, CAD, and cultural heritage applications. Recently, the investigation of new coding tools for static and dynamic 3D point clouds has shown evidence that improved coding efficiency with respect to existing solutions is possible.
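
For concreteness, the following Python/NumPy sketch shows the representation described above (points with xyz coordinates plus color attributes) together with a toy lossy step, voxel-grid quantization, that keeps one point per occupied voxel. The point count and voxel size are illustrative assumptions; real point cloud codecs are far more sophisticated.

    # A point cloud as N points with positions and per-point colors,
    # plus a crude stand-in for lossy geometry coding.
    import numpy as np

    rng = np.random.default_rng(0)
    xyz = rng.uniform(0.0, 1.0, size=(100_000, 3))          # positions
    rgb = rng.integers(0, 256, size=(100_000, 3), dtype=np.uint8)  # colors

    def voxel_downsample(xyz, rgb, voxel=0.01):
        # Quantize points to a voxel grid and keep one point per
        # occupied voxel (the voxel size controls the loss).
        keys = np.floor(xyz / voxel).astype(np.int64)
        _, idx = np.unique(keys, axis=0, return_index=True)
        return xyz[idx], rgb[idx]

    sparse_xyz, sparse_rgb = voxel_downsample(xyz, rgb, voxel=0.01)
    print(f"{len(xyz)} points -> {len(sparse_xyz)} points")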



VI. Regularization Techniques for High-Dimensional Visual Data Processing and Analysis

    Zhangyang (Atlas) Wang, Texas A&M University, USA
    Xi Peng, Institute for Infocomm Research, Agency for Science, Technology and Research (A*STAR), Singapore
    Sheng Li, Northeastern University, USA

The explosive growth of high-dimensional visual data in computer vision requires effective techniques to reveal the underlying low-dimensional structure and discover latent knowledge. Over the past decades, a variety of representative methods have been proposed for visual data modelling and analysis, including manifold learning, matrix factorization, subspace learning, sparse coding, and deep learning. However, these methods often suffer from unsatisfactory robustness and generalization ability, as well as poor theoretical interpretability. To address this, many regularization techniques have been developed and shown to be effective. Despite the promising progress, many problems remain unsolved, and both theoretical and technical developments are needed to provide new insights and tools for modelling the complexity of real-world visual data.
   This special session aims to provide a forum for researchers from all over the world to discuss their work and recent advances in algorithms and applications of advanced regularization techniques for high-dimensional visual data analysis. Papers addressing interesting real-world visual computing applications are especially encouraged.
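
As one concrete instance of the regularization techniques named above, the following NumPy sketch solves the sparse coding problem, least squares with an l1 penalty, using a few ISTA (iterative shrinkage-thresholding) steps. The dictionary, the data, and the penalty and step-size values are illustrative assumptions.

    # Sparse coding: min_z 0.5 * ||D z - x||^2 + lam * ||z||_1, via ISTA.
    import numpy as np

    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 256))       # overcomplete dictionary
    D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
    x = rng.standard_normal(64)              # one data sample

    lam = 0.1                                # l1 weight (assumed)
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # safe gradient step size
    z = np.zeros(256)                        # sparse code

    for _ in range(200):
        grad = D.T @ (D @ z - x)             # gradient of the data term
        z = z - step * grad
        # Soft-thresholding enforces the l1 regularizer, driving most
        # coefficients exactly to zero.
        z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

    print(f"nonzeros: {np.count_nonzero(z)} / {z.size}")

The l1 term is what gives the model its robustness and interpretability: without it, the overcomplete system is ill-posed and the code z is dense and unstable.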



VII. Transparent Media Computing

    Zhan Ma, Nanjing University, China
    Ju Ren, Central South University, China

Media pervades every aspect of our daily life: live media streaming, video conferencing, and so on. All of the computing processes involved, such as media transcoding, analysis, compression, and communication, must be well designed to enable high-fidelity services that are transparent to users. In this special session, we seek high-quality, innovative, previously unpublished ideas on transparent media computing. Topics of interest include, but are not limited to:
  • Innovative media systems (processing, transport, etc.) to support transparent services, such as adaptive content caching (a minimal caching sketch follows this list).
  • Innovative algorithms for enabling transparent media services, such as network-aware media distribution.
  • Transparent media applications at the edge, such as context-aware media coding/transcoding and enhancement.
  • Standardization activities enabling ubiquitous media processing, such as network-distributed video coding.
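
As referenced in the first topic above, here is a minimal Python sketch of adaptive content caching for media segments: a simple LRU cache at a hypothetical edge node. The SegmentCache class and its parameters are illustrative assumptions; a real system would add popularity prediction, bitrate awareness, and cooperative caching.

    # A toy LRU cache for media segments at an edge node.
    from collections import OrderedDict

    class SegmentCache:
        def __init__(self, capacity=4):
            self.capacity = capacity
            self.store = OrderedDict()   # segment_id -> bytes

        def get(self, segment_id, fetch_from_origin):
            if segment_id in self.store:
                self.store.move_to_end(segment_id)   # refresh recency
                return self.store[segment_id]        # cache hit
            data = fetch_from_origin(segment_id)     # cache miss
            self.store[segment_id] = data
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)       # evict least recent
            return data

    cache = SegmentCache(capacity=2)
    origin = lambda sid: f"<segment {sid}>".encode()
    for sid in ["s1", "s2", "s1", "s3"]:   # s1 is re-served from cache
        cache.get(sid, origin)

The point of such caching in a transparent media service is that the user never observes where a segment came from; the system adapts placement and eviction behind a uniform interface.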