Creating real-time 3D applications, the old way
Creating real-time 3D applications in the most common workflows today involves a substantial amount of complexity. This task is generally tackled in one of two ways:
Developing from Scratch: Building the application entirely in-house offers a high degree of flexibility and customization, but at the cost of extreme complexity. This approach requires in-depth knowledge of 3D programming and graphics, and can be time-consuming and resource-intensive.
Using Game Engines: The alternative is to use ready-made software like game engines, which are engineered to manage the complexities of real-time 3D environments. This option is generally preferred by developers who would rather focus on solving their actual business problems than get bogged down in the underlying technology.
The current market offers tools that, while effective, were developed in an era before the widespread adoption of cloud computing—and in some cases, even before the internet was commonplace.
As we’ll explore, utilizing these tools can present several challenges.
High resource consumption
We are referring to traditional software that requires downloading and installation on a local computer. Considering the intense demands of real-time 3D, the computer must possess considerable processing power to handle such a load effectively. This requires an upfront investment in hardware to begin development.
A critical component for 3D graphics is the GPU (Graphics Processing Unit), which can be exceptionally costly. The complexity and computational intensity of the desired application directly influence the power—and consequently, the price—of the required GPU. The more demanding the application, the more advanced and expensive the GPU must be to execute tasks efficiently.
High friction in collaboration
Once your hardware is ready and your team of experts is assembled, you might encounter the next obstacle: high friction in collaboration. Everyone on your team starts working on the virtual experience, but soon enough you realize that collaborating in these tools is not that easy: every member of your team is working on a local version of the experience, making changes completely oblivious to one another. People start applying conflicting changes to the same part of the experience, and it becomes harder to get a true picture of the current state of what everyone is building.
This is where source control comes into play.
Source control is a way to enable multiple people to work on the same project without interfering too much with one another. It creates a single source of truth for your data, usually stored on a remote server. Every developer working on the project can refer to that source of truth to get the latest changes. In turn, every developer—given the proper access—can alter that source of truth, provided the alteration doesn't conflict with the current state of the data.
This is a step forward for collaboration, but it is still a lockstep process: everyone works independently on their own version of the truth, then stops everything, reconciles that version with the main one, and repeats.
This creates high friction and diminishes the efficiency of everyone involved, as it remains hard to get a true picture of the current state of the project.
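The lockstep cycle described above can be sketched with git standing in for the source-control system, and a local bare repository playing the role of the remote single source of truth. (The repository layout, names, and file contents here are hypothetical, purely for illustration.)

```shell
set -e
tmp=$(mktemp -d)

# The remote "single source of truth".
git init --bare -q "$tmp/truth.git"

# Two team members each get their own local copy.
git clone -q "$tmp/truth.git" "$tmp/alice" 2>/dev/null
git clone -q "$tmp/truth.git" "$tmp/bob" 2>/dev/null

# Alice and Bob work independently, oblivious to one another...
cd "$tmp/alice"
echo "door color: red" > scene.txt
git add scene.txt
git -c user.name=Alice -c user.email=alice@example.com \
    commit -qm "make the door red"
git push -q origin HEAD        # Alice publishes her version of the truth

cd "$tmp/bob"
echo "door color: blue" > scene.txt
git add scene.txt
git -c user.name=Bob -c user.email=bob@example.com \
    commit -qm "make the door blue"

# ...until Bob tries to publish a conflicting change. His push is
# rejected: he must stop, reconcile with Alice's version, and only
# then push -- the lockstep.
git push -q origin HEAD 2>/dev/null \
  || echo "push rejected: reconcile with the source of truth first"
```

Bob's second push fails because his history diverged from the source of truth; until he stops his own work to pull and merge Alice's change, the server refuses his version. Multiply this by every artist and developer on the team, and the friction becomes clear.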
Lack of accessibility
This friction appears at multiple points in the pipeline. Often you have multiple stakeholders working on the project; these people have different roles and different qualifications, but they still need to access the project somehow. However, in this conventional paradigm, a divide exists between the development build and the version ultimately deployed to end users. Compounding this issue, the development build is typically accessible only through the specific tools used to create it—a restriction that automatically excludes any team member who doesn't have access to those tools.
Another issue that often arises is that the development team needs to be cognizant of the devices they're targeting, as those devices may not be powerful enough to support the full range of available features. This creates even more complications and leads to the dreaded min-spec requirements that end up excluding a whole range of potential users who are simply not adequately equipped.
Even for users with devices that can run the application, there's no guarantee they will experience it with the quality and performance intended by the developers. Variations in device capabilities mean that the user experience can significantly differ from what was envisioned, potentially impacting the application's reception and success.
A not-so-new path
Some of the more seasoned readers might recognize that this pattern of evolution in digital collaboration is not unique to the realm of 3D applications. There was a time when the Microsoft Office suite was prevalent and people worked on the same document or spreadsheet each on their own, sharing working versions by email or on USB sticks. Those files would end up with names like report_final_v2_FINAL(3).docx.
Then came Google Docs, and with it the headache of keeping track of which file was the latest version of a document was gone; Google Docs finally provided a single source of truth that everybody could access and modify collaboratively in real time. A major advantage was that all documents were accessible from any device, anywhere, provided there was a browser and an internet connection. No more downloads, no more software updates.
This ushered in an avalanche of web-based collaborative tools—Dropbox, AirTable, Figma, and Canva, to name a few—empowered by cloud technology. This shift enabled even the smallest entities to access significant storage and computing resources, negating the need for substantial initial hardware investments. Thus, the era of Software as a Service (SaaS) was born.
The new era of real-time 3D
So where does real-time 3D stand in this profound transformation? We are still witnessing the early stages of this transition. The inherent complexity of 3D technology poses a formidable barrier to change for established players and presents a steep challenge for newcomers.
A pivotal moment came with the increased accessibility of GPUs in the cloud around 2017. Our team at 3dverse was not discouraged by the remaining obstacles: in 2018, we ambitiously, maybe even recklessly, decided to confront these challenges head-on, transforming the 3D content creation and delivery process with our cloud-native 3D Development Platform… but that's another blog.