CS 491 Virtual and Augmented Reality

I think that this is a very cool project. The original project required a lot of equipment, including 8 cameras to capture the subject from different angles and reconstruct it on the HoloLens. This actually gives the user a better experience since it is real 3D: the user can walk around the object and look at it from different angles.
The new mobile project still requires a lot of equipment, although less than the original one. Here they need only two cameras: one captures the image for the left eye and the other captures the image for the right eye. The difference between the two images creates the depth which we perceive as 3D. With that being said, the image constructed is not real 3D but a stereoscopic image. It is like the 3D movies you see in a movie theater: the user cannot walk around the reconstructed object to look at it from different angles. We can tell that the amount of data required to reconstruct images for this project is a lot smaller than for the original project (the original project requires 8 images for each frame while this project requires only 2). It is one of the main reasons why the second project can be mobile (set up in a van). As they mention in the project, they were able to reduce the bandwidth requirement while still maintaining the quality. I think that is the trade-off we have to accept.
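To make the depth claim concrete, here is a minimal sketch (my own illustration, not code from the project) of the classic pinhole stereo relation Z = f * B / d that turns the left/right image difference into perceived depth. The focal length, baseline, and disparity numbers are assumed placeholders.

    # Minimal sketch, assuming an idealized pinhole stereo rig.
    def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
        """Classic stereo relation: Z = f * B / d."""
        return focal_px * baseline_m / disparity_px

    # A point that shifts 40 px between the left and right images of a rig
    # with a 6.5 cm baseline (about human eye spacing) and a 1000 px focal length:
    print(depth_from_disparity(1000.0, 0.065, 40.0))  # ~1.6 m away

The larger the disparity between the two images, the closer the point appears, which is exactly the cue the mobile system relies on instead of a full 3D reconstruction.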
Currently the project allows collaboration between two sub-systems. You can imagine two sub-systems connected to each other: each sub-system captures and tracks its objects separately and sends the data to the other sub-system to reconstruct on the HoloLens. Each system works independently without interfering with the other. The question is whether we can expand it to more than two sub-systems. It is a possibility. For the first system, we need to handle the setup more carefully: because three-dimensional images are projected into real space, we need to make sure that no objects from different sub-systems overlap each other. For example, the person in sub-system one cannot be in the same position as the person in sub-system two; otherwise, the person in sub-system three will see a very weird image of two people overlapping each other. This is easily achieved with two sub-systems. For the second system (mobile holoportation), it is easier since they just display stereoscopic images in front of you. Users should be able to change the layout or choose how they prefer the images to be displayed (vertically, horizontally, etc.).
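A simple way to picture the overlap constraint is a pairwise bounding-box check over everything the sub-systems project into the shared space. This is a minimal sketch under my own assumptions (axis-aligned boxes in one shared coordinate frame); the names are hypothetical, not part of Holoportation.

    from itertools import combinations

    # (min_xyz, max_xyz) corners of an axis-aligned bounding box, in meters.
    Box = tuple[tuple[float, float, float], tuple[float, float, float]]

    def boxes_overlap(a: Box, b: Box) -> bool:
        """Two axis-aligned boxes overlap iff their extents overlap on every axis."""
        (amin, amax), (bmin, bmax) = a, b
        return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

    def find_conflicts(boxes: dict[str, Box]) -> list[tuple[str, str]]:
        """Return every pair of sub-systems whose projected people would collide."""
        return [(p, q) for p, q in combinations(boxes, 2) if boxes_overlap(boxes[p], boxes[q])]

    # Three sub-systems; sub1 and sub2 placed their person in the same spot.
    rooms = {
        "sub1": ((0.0, 0.0, 0.0), (1.0, 2.0, 1.0)),
        "sub2": ((0.5, 0.0, 0.5), (1.5, 2.0, 1.5)),
        "sub3": ((3.0, 0.0, 3.0), (4.0, 2.0, 4.0)),
    }
    print(find_conflicts(rooms))  # [('sub1', 'sub2')]

With only two sub-systems the check is a single pair, which is why the constraint is easy to satisfy today; with n sub-systems it grows to n*(n-1)/2 pairs.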
When allowing more than two collaborators, one of the issues with the original holoportation system is finding an effective algorithm to combine images from the different sub-systems. With only two sub-systems connected to each other, the amount of data received by one system is already relatively large. If we add one or more sub-systems, the amount of data doubles, triples, or grows even further, and processing it becomes a big problem since we still have to process everything at the same time. It is not acceptable to see one person talking while another just freezes because our system could not keep up with the amount of data it receives. Besides, do more collaborators require more bandwidth and hardware power? Is an effective algorithm enough? Those are the questions that need to be addressed to further develop this cool technology.
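Here is a back-of-the-envelope sketch of that growth, assuming every sub-system streams to every other (a full mesh) and an assumed placeholder of 50 Mbit/s per stream rather than any figure from the project:

    STREAM_MBITS = 50.0  # assumed per-stream bandwidth, purely illustrative

    def inbound_per_system(n_systems: int) -> float:
        """Each system receives one stream from each of the other n - 1 peers."""
        return (n_systems - 1) * STREAM_MBITS

    def total_traffic(n_systems: int) -> float:
        """Total traffic on the network grows quadratically: n * (n - 1) streams."""
        return n_systems * (n_systems - 1) * STREAM_MBITS

    for n in (2, 3, 4, 8):
        print(f"{n} systems: {inbound_per_system(n):.0f} Mbit/s in, "
              f"{total_traffic(n):.0f} Mbit/s total")

So even if per-stream compression stays the same, the load on each system grows linearly and the total traffic quadratically, which is why an effective combining algorithm alone may not be enough.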