The need for video compression in the modern age of visual communication cannot be over-emphasized. This monograph will provide useful information to postgraduate students and researchers who wish to work in the domain of VLSI design for video processing applications. In this book, one can find an in-depth discussion of several motion estimation algorithms and their VLSI implementation as conceived and developed by the authors. It records an account of research involving the fast three-step search, successive elimination, the one-bit transformation and its effective combination with diamond search, and dynamic pixel truncation techniques. Two appendices provide a number of instances of proof of concept through Matlab and Verilog program segments. In this respect, the book can be considered the first of its kind. The architectures have been developed with an eye to their applicability in everyday low-power handheld devices, including video camcorders and smartphones.
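As a rough illustration of the one-bit transform idea mentioned above (not the authors' exact formulation; the 9x9 averaging window, the 16x16 block size, and the function names are assumptions made here), the following Python sketch binarizes a frame against its local mean and scores block similarity by counting mismatching bits:

```python
import numpy as np

def one_bit_transform(frame, k=9):
    """Binarize a frame: 1 where a pixel exceeds its local k x k mean, else 0."""
    pad = k // 2
    padded = np.pad(frame.astype(np.float64), pad, mode="edge")
    # Sliding local mean via a summed-area table (integral image).
    integral = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    integral = np.pad(integral, ((1, 0), (1, 0)))
    h, w = frame.shape
    local_sum = (integral[k:k + h, k:k + w] - integral[:h, k:k + w]
                 - integral[k:k + h, :w] + integral[:h, :w])
    local_mean = local_sum / (k * k)
    return (frame > local_mean).astype(np.uint8)

def binary_block_cost(cur_bits, ref_bits, y, x, dy, dx, bs=16):
    """Number of mismatching bits (XOR count) between a current block and a
    reference block displaced by (dy, dx): the matching criterion of 1-bit ME."""
    cur = cur_bits[y:y + bs, x:x + bs]
    ref = ref_bits[y + dy:y + dy + bs, x + dx:x + dx + bs]
    return int(np.count_nonzero(cur ^ ref))
```

Because the matching cost reduces to an XOR followed by a bit count, this style of matching needs only single-bit logic per pixel, which is what makes it attractive for low-power VLSI implementations.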
The book deals with the development of a methodology to estimate the motion field between two frames for video coding applications. It presents an exhaustive study of the motion estimation process within the framework of a general video coder. The concepts are explained in simple language and illustrated with suitable figures. The book will serve as a guide for new researchers working in the field of motion estimation techniques.
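For readers new to the area, the sketch below shows the baseline approach to estimating such a motion field: full-search block matching, which assigns each block the displacement that minimizes the sum of absolute differences (SAD) within a search window. The 16x16 block size and ±8 search range are illustrative assumptions, not values taken from the book.

```python
import numpy as np

def full_search_motion_field(cur, ref, block=16, search=8):
    """Estimate a per-block motion field between two grayscale frames by
    exhaustively minimising SAD over a +/- `search` pixel window."""
    h, w = cur.shape
    mvs = np.zeros((h // block, w // block, 2), dtype=np.int32)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = cur[by:by + block, bx:bx + block].astype(np.int32)
            best_sad, best_mv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue      # candidate block falls outside the frame
                    cand = ref[y:y + block, x:x + block].astype(np.int32)
                    cost = int(np.abs(target - cand).sum())
                    if best_sad is None or cost < best_sad:
                        best_sad, best_mv = cost, (dy, dx)
            mvs[by // block, bx // block] = best_mv
    return mvs
```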
Video technology promises to be the key to the transmission of motion video. A number of video compression techniques and standards have been introduced in the past few years, particularly MPEG-1 and MPEG-2 for interactive multimedia and for digital NTSC and HDTV applications, and H.261/H.263 for video telecommunications. These techniques use motion estimation to reduce the amount of data that is stored and transmitted for each frame. This book is about these motion estimation algorithms: their complexity, implementations, advantages, and drawbacks. First, we present an overview of video compression techniques with an emphasis on techniques that use motion estimation, such as MPEG and H.261/H.263. Then, we give a survey of current motion estimation search algorithms, including the exhaustive search and a number of fast search algorithms. An evaluation of current search algorithms, based on a number of experiments on several test video sequences, is presented as well. The theoretical framework for a new fast search algorithm, Densely-Centered Uniform-P Search (DCUPS), is developed and presented in the book. The complexity of the DCUPS algorithm is comparable to other popular motion estimation techniques; however, the algorithm shows superior results in terms of compression ratios and video quality. We should stress that these new results, presented in Chapters 4 and 5, have been developed by Joshua Greenberg as part of his M.Sc. thesis entitled "Densely-Centered Uniform P-Search: A Fast Motion Estimation Algorithm" (FAU, 1996).
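DCUPS itself is not reproduced here, but a simplified Python sketch of one classic fast search from this family, the three-step search, conveys how such algorithms cut down the number of candidate positions compared with the exhaustive search (the block size and initial step are illustrative assumptions):

```python
import numpy as np

def sad(cur, ref, by, bx, dy, dx, bs=16):
    """Sum of absolute differences between the current block at (by, bx)
    and the reference block displaced by (dy, dx)."""
    a = cur[by:by + bs, bx:bx + bs].astype(np.int32)
    b = ref[by + dy:by + dy + bs, bx + dx:bx + dx + bs].astype(np.int32)
    return int(np.abs(a - b).sum())

def three_step_search(cur, ref, by, bx, bs=16, search=8):
    """Classic three-step search: evaluate 9 points on a coarse grid around the
    current centre, recentre on the winner, halve the step, and repeat."""
    h, w = cur.shape
    step = max(search // 2, 1)
    cy = cx = 0                      # best displacement found so far
    best = sad(cur, ref, by, bx, 0, 0, bs)
    while step >= 1:
        centre = (cy, cx)
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                ty, tx = centre[0] + dy, centre[1] + dx
                if (by + ty < 0 or bx + tx < 0 or
                        by + ty + bs > h or bx + tx + bs > w):
                    continue         # candidate falls outside the frame
                cost = sad(cur, ref, by, bx, ty, tx, bs)
                if cost < best:
                    best, cy, cx = cost, ty, tx
        step //= 2
    return (cy, cx), best
```

For a ±8 window this tests roughly 25-30 positions per block instead of the 289 tested by the exhaustive search, which is the essential trade-off all fast search algorithms make.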
Multimedia hardware still cannot accommodate the demand for large amounts of visual data. Without the generation of high-quality video bitstreams, limited hardware capabilities will continue to stifle the advancement of multimedia technologies. A thorough grounding in coding is needed so that applications such as MPEG-4 and JPEG 2000 may come to fruition. Image and Video Compression for Multimedia Engineering provides a solid, comprehensive understanding of the fundamentals and algorithms that lead to the creation of new methods for generating high-quality video bitstreams. The authors present a number of relevant advances along with international standards.
New to the Second Edition:
· A chapter describing the recently developed video coding standard, MPEG-4 Part 10 Advanced Video Coding, also known as H.264
· Fundamental concepts and algorithms of JPEG2000
· Color systems of digital video
· Up-to-date video coding standards and profiles
Visual data, image, and video coding will continue to enable the creation of advanced hardware, suitable to the demands of new applications. Covering both image and video compression, this book yields a unique, self-contained reference for practitioners to build a basis for future study, research, and development.
This book constitutes the refereed proceedings of the Second Pacific Rim Symposium on Image and Video Technology, PSIVT 2007, held in Santiago, Chile, in December 2007. The 75 revised full papers presented together with four keynote lectures were carefully reviewed and selected from 155 submissions. The symposium features ongoing research covering all aspects of video and multimedia, from both technical and artistic perspectives and addressing both theoretical and practical issues.
A discussion of a compressed-domain approach to designing and implementing digital video coding systems, which is drastically different from the traditional hybrid approach. It demonstrates how the combination of discrete cosine transform (DCT) coders and motion compensation (MC) units reduces power consumption and hardware complexity.
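As background for the DCT building block referred to here (this is generic textbook material, not the compressed-domain architecture discussed in the book), a short Python sketch of the orthonormal 2-D DCT used in block-based coders:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix: row k holds the k-th cosine basis vector."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos((2 * i + 1) * k * np.pi / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)       # DC row gets the 1/sqrt(n) scale
    return c

def dct2(block):
    """2-D DCT of a square block: C @ block @ C^T."""
    c = dct_matrix(block.shape[0])
    return c @ block.astype(np.float64) @ c.T

def idct2(coeffs):
    """Inverse 2-D DCT: C^T @ coeffs @ C."""
    c = dct_matrix(coeffs.shape[0])
    return c.T @ coeffs @ c
```

A quick sanity check is that idct2(dct2(block)) reproduces the block up to floating-point error, since the basis matrix is orthonormal.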
Even though video compression has become a mature field, a lot of research is still ongoing. Indeed, as the quality of compressed video for a given size or bit rate increases, so does users’ level of expectation and their intolerance to artefacts. The development of compression technology has enabled a number of applications, with key applications in the television broadcast field. Compression technology is the basis for digital television. The “Video Compression” book was written for scientists and development engineers. The aim of the book is to showcase the state of the art in the wider field of compression, beyond the encoder-centric approach, and to convey the need for video quality assurance. It covers compressive video coding, distributed video coding, motion estimation, and video quality.
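Since video quality assurance is one of the book's themes, a minimal Python sketch of PSNR, one common objective quality measure (the book also treats more elaborate assessment methods), may help set the context:

```python
import numpy as np

def psnr(reference, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference frame and its
    compressed/distorted version; higher values indicate less distortion."""
    ref = reference.astype(np.float64)
    dist = distorted.astype(np.float64)
    mse = np.mean((ref - dist) ** 2)
    if mse == 0:
        return float("inf")          # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)
```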
High definition video requires substantial compression in order to be transmitted or stored economically. Advances in video coding standards from MPEG-1, MPEG-2, and MPEG-4 to H.264/AVC have provided ever increasing coding efficiency, at the expense of greatly increased computational complexity, which can only be handled through massively parallel processing. This book presents VLSI architectural design and chip implementation for high definition H.264/AVC video encoding, using a state-of-the-art video application with a complete VLSI prototype implemented via FPGA/ASIC. It will serve as an invaluable reference for anyone interested in VLSI design and high-level (EDA) synthesis for video.
This book discusses the computational complexity of High Efficiency Video Coding (HEVC) encoders, with coverage extending from the analysis of HEVC compression efficiency and computational complexity to the reduction and scaling of its encoding complexity. After an introduction to the topic and a review of the state-of-the-art research in the field, the authors provide a detailed analysis of the compression efficiency and computational complexity of the HEVC encoding tools. Readers will benefit from a set of algorithms for scaling the computational complexity of HEVC encoders, all of which take advantage of the flexibility of the frame partitioning structures allowed by the standard. The authors also provide a set of early termination methods based on data mining and machine learning techniques, which are able to reduce the computational complexity required to find the best frame partitioning structures. The applicability of the proposed methods is finally exemplified with an encoding time control system that employs the best complexity reduction and scaling methods presented throughout the book. The methods presented in this book are especially useful in power-constrained, portable multimedia devices to reduce energy consumption and to extend battery life. They can also be applied to portable and non-portable multimedia devices operating in real time with limited computational resources.
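The data-mining and machine-learning based early termination methods themselves are beyond the scope of a short excerpt, but the toy Python sketch below conveys the underlying idea of early termination in quadtree partitioning: stop splitting a block as soon as a cheap feature (here, plain pixel variance, standing in for the learned classifiers described in the book) suggests that further splitting will not pay off. All names and thresholds are illustrative assumptions.

```python
import numpy as np

def partition(block, y=0, x=0, min_size=8, var_threshold=100.0):
    """Toy quadtree partitioning with early termination: stop splitting a block
    as soon as its variance falls below a threshold, rather than exhaustively
    evaluating every partitioning depth. Assumes a square, power-of-two block."""
    size = block.shape[0]
    if size <= min_size or np.var(block) < var_threshold:
        return [(y, x, size)]        # leaf: keep this block undivided
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            sub = block[dy:dy + half, dx:dx + half]
            leaves.extend(partition(sub, y + dy, x + dx, min_size, var_threshold))
    return leaves
```

Skipping the recursion whenever the test fires is what saves encoding time; the quality of the decision rule determines how much compression efficiency is given up in exchange.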