A Cloud-Based Multi-threaded Implementation of View Synthesis System


In multiview video applications, view synthesis is a computationally intensive task that must be performed accurately and efficiently to deliver a seamless user experience. To provide fast and efficient view synthesis, in this paper we present a cloud-based implementation that is especially beneficial to mobile users whose devices may not be powerful enough for high-quality view synthesis. Our proposed implementation balances the view synthesis algorithm's components across multiple threads and exploits the computational capacity of modern CPUs for faster, higher-quality view synthesis. For arbitrary view generation, we use the depth maps of the scene from the cameras' viewpoints to estimate the depth information as perceived from the virtual camera. The estimated depth is then used in a backward direction to warp the cameras' images onto the virtual view. Finally, we apply a depth-aided inpainting strategy in the rendering step to reduce the effect of disocclusion regions (holes) and to fill in the missing pixels. For our cloud implementation, we employ an automatic scaling feature that provides elasticity, adapting the service capacity to fluctuating user demand. Our performance results, using 4 multiview videos over 2 different scenarios, show that our proposed system achieves an average speedup of around 1.7x, with average efficiency and resource utilization of 50% and 52%, respectively.
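The backward-warping step described above can be illustrated with a simplified sketch. The version below assumes a rectified, horizontally aligned camera pair (a special case; the paper's general formulation may use full homographies), where the depth estimated at the virtual viewpoint converts to a horizontal disparity d = focal x baseline / depth, and each virtual pixel fetches its color from the source image. Pixels whose source location falls outside the image are marked as holes, to be filled later by depth-aided inpainting. All function and parameter names here are illustrative, not taken from the paper.

```python
import numpy as np

def backward_warp(src_img, virt_depth, focal, baseline):
    """Backward-warp a source row-image onto a virtual view (rectified setup).

    src_img:    (H, W) grayscale source image
    virt_depth: (H, W) depth map estimated at the virtual viewpoint
    focal, baseline: camera focal length and inter-camera distance

    Returns an (H, W) warped image where -1 marks disocclusion holes.
    """
    h, w = virt_depth.shape
    out = np.full((h, w), -1.0)                 # -1 marks holes for inpainting
    disparity = focal * baseline / virt_depth   # per-pixel horizontal shift
    xs = np.arange(w)
    for y in range(h):
        src_x = np.round(xs + disparity[y]).astype(int)  # backward lookup
        valid = (src_x >= 0) & (src_x < w)               # inside source image?
        out[y, valid] = src_img[y, src_x[valid]]
    return out
```

Because the lookup runs from the virtual view back into the source image, every virtual pixel with a valid source location gets exactly one sample, avoiding the cracks that forward warping produces; only true disocclusions remain as holes.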


Computer Science

Document Type

Conference Proceeding




cloud computing, elasticity, homography, multi-threading, view synthesis, warping

Publication Date


Journal Title

Proceedings - 2017 IEEE International Symposium on Multimedia, ISM 2017