Photorealism refers to rendering an image so realistically that it is hard to distinguish from a photograph. Achieving such photorealism in mixed reality, however, requires a deep understanding of how light, geometry, and materials interact in a scene and of how humans perceive the result. For instance, a virtual object can be inserted into an existing scene, but doing so convincingly requires modeling the interactions between the object and its surroundings: the inserted object must cast and receive proper shadows, pick up inter-reflections from nearby surfaces, and be shaded consistently by the scene's light sources. The framework must also handle long-range interactions between distant parts of the scene when lighting or materials change in sophisticated indoor settings. Capturing all of this can be data-hungry, time-consuming, and expensive.
Now a research team has released OpenRooms, a novel open-source dataset and accompanying tools that enable users to control lighting, objects, materials, and other properties within indoor 3D settings to advance augmented reality and robotics. The development has the potential to boost the augmented reality market, as the tools enable the creation of richer scenes, with impact on fields such as graphics, machine learning, robotics, and computer vision.
The tool proposed by the team lets users realistically adjust scenes according to their needs. For example, a family envisaging a kitchen remodel can easily explore different countertop materials, lighting, or other aspects of the room. Through OpenRooms, detailed ground truth about materials, lighting, and 3D shape can be computed for the scene on a per-pixel basis. People can take a picture of a room and quickly insert and manipulate virtual objects; they could even take a particular leather chair, change its material to fabric, and compare the alternatives to decide which would look best in the room.
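As a rough illustration of what per-pixel ground truth looks like in practice, the sketch below loads image, albedo, normal, and depth maps for one view. The file names and formats here are hypothetical stand-ins, not the dataset's documented layout:

```python
import numpy as np
import imageio.v3 as iio

# Hypothetical file names and formats; the actual OpenRooms layout may differ.
rgb = iio.imread("scene0001/im_1.png").astype(np.float32) / 255.0        # rendered image, HxWx3
albedo = iio.imread("scene0001/albedo_1.png").astype(np.float32) / 255.0  # per-pixel base color
normal = iio.imread("scene0001/normal_1.png").astype(np.float32) / 127.5 - 1.0  # normals mapped to [-1, 1]
depth = np.load("scene0001/depth_1.npy")                                  # per-pixel depth in meters

# Re-normalize the normals, since 8-bit quantization breaks unit length.
normal /= np.linalg.norm(normal, axis=-1, keepdims=True) + 1e-8

print(rgb.shape, albedo.shape, normal.shape, depth.shape)
```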
The tool employs synthetic data to render images, which offers an inexpensive yet accurate way to give the user a realistic impression of materials, geometry, and lighting. The software provides automated tools that convert real captured images into synthetic, photorealistic counterparts. The team has built a framework in which users, equipped with 3D scanners or even cell phones, can create the datasets that power their own mixed reality applications.
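To make the idea concrete, the sketch below shades per-pixel albedo and normal maps under a single directional light. This is deliberately simplistic, a toy Lambertian shader rather than the physically based renderer such a pipeline would actually use, but it shows how per-pixel geometry and material maps suffice to synthesize a new image:

```python
import numpy as np

def lambertian_render(albedo, normal, light_dir, light_color=(1.0, 1.0, 1.0)):
    """Toy re-shader: shade per-pixel albedo with one directional light.

    A real pipeline would use a physically based renderer with area lights,
    windows, and global illumination; this only illustrates that per-pixel
    geometry plus materials are enough to synthesize a new image.
    """
    l = np.asarray(light_dir, dtype=np.float32)
    l /= np.linalg.norm(l)
    # Cosine term, clamped so surfaces facing away receive no light.
    ndotl = np.clip(normal @ l, 0.0, None)[..., None]
    return np.clip(albedo * np.asarray(light_color) * ndotl, 0.0, 1.0)

# Example: relight the maps loaded earlier with light from the upper left.
# relit = lambertian_render(albedo, normal, light_dir=(-0.5, 1.0, 0.5))
```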
The resulting data is exceptionally rich and can be used to train robust deep neural networks to estimate these properties in real images, which in turn enables photorealistic material editing and object insertion.
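As a rough sketch of what such training involves (a toy encoder-decoder, not the architecture the authors describe), a network can regress per-pixel albedo from rendered images, using the dataset's ground-truth maps as supervision:

```python
import torch
import torch.nn as nn

# Minimal network that regresses per-pixel albedo from an RGB image.
# A toy stand-in for illustration, not the OpenRooms authors' model.
class AlbedoNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # albedo in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

model = AlbedoNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on a dummy batch; in practice each batch would pair
# rendered images with their ground-truth per-pixel albedo maps.
images = torch.rand(4, 3, 240, 320)   # synthetic renderings
targets = torch.rand(4, 3, 240, 320)  # ground-truth albedo

pred = model(images)
loss = nn.functional.l1_loss(pred, targets)
opt.zero_grad()
loss.backward()
opt.step()
print(f"L1 loss: {loss.item():.4f}")
```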