Blockchain enables creators to package and monetize their digital content in new ways. However, it’s not the only technology doing so. Artificial intelligence (AI) is also redefining creativity in the digital space, and here’s how.
Generative AI Models
DALL-E, Stable Diffusion, and Midjourney are all generative AI models. They use AI algorithms to generate digital content from a simple text prompt, producing in moments work that would otherwise take a human a long time to complete. This means Web3 creators and digital artists can produce enormous collections of unique pieces themselves in a short time, dramatically reducing the production cost of digital content while augmenting their own creativity with the “creativity” of a machine mind.
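To give a sense of how little code this kind of generation can require, here is a minimal sketch that runs the open-source Stable Diffusion model through Hugging Face’s diffusers library. The checkpoint ID, prompts, and output filenames are assumptions made for illustration, not anything prescribed by the models mentioned above.

```python
# Minimal sketch: generating a small "collection" of images from text prompts
# with Stable Diffusion via the Hugging Face diffusers library.
# The checkpoint ID, prompts, and output filenames are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed public checkpoint
    torch_dtype=torch.float16,
).to("cuda")

prompts = [
    "a neon-lit city skyline, digital painting",
    "a mystical underwater kingdom, concept art",
    "an abstract portrait of a robot, oil on canvas",
]

for i, prompt in enumerate(prompts):
    image = pipe(prompt).images[0]      # one generated image per prompt
    image.save(f"piece_{i:03d}.png")
```

Scaling the prompt list from three entries to a few hundred is what lets a single creator turn out a whole collection in an afternoon, which is exactly the cost and speed shift these models introduce.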
Earlier in 2022, NVIDIA announced GET3D, an AI model that uses only 2D images to generate 3D shapes with high-fidelity textures and complex geometric details. The generated objects use the same file formats as standard 3D software, making it easy for users to pull content out of the model and into other programs.
Traditionally, 3D workflows require a specialized skill set and many hours of labor, driving up production costs and timelines for any project that needs 3D assets, such as games, animated movies, or advertisements. NVIDIA’s new tool makes 3D content creation a breeze.
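To make the interoperability point concrete, here is a small sketch of pulling a generated mesh into a standard 3D pipeline using the open-source trimesh library. The file names are hypothetical; the only assumption is that the generative model exports a common mesh format such as OBJ.

```python
# Minimal sketch: loading a generated mesh and re-exporting it for another tool.
# "generated_chair.obj" is a hypothetical output file from a generative 3D model.
import trimesh

# force="mesh" collapses the file into a single mesh object
mesh = trimesh.load("generated_chair.obj", force="mesh")
print(f"vertices: {len(mesh.vertices)}, faces: {len(mesh.faces)}")

# Re-export to glTF, a format most game engines and DCC tools import directly.
mesh.export("generated_chair.glb")
```

Because the output is an ordinary mesh file, it drops into a game engine, renderer, or AR/VR scene like any hand-modeled asset.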
As the world transitions to immersive interfaces like augmented reality (AR) and virtual reality (VR), we need tools like GET3D to allow everyone, not just 3D specialists with a lot of time on their hands, to contribute to 3D worlds. In this way, generative models like GET3D might disrupt 3D content creation in the same way that Squarespace/Wix did for website development. In fact, as mentioned in an article from Analytics India, it might not be long until we can enter a text prompt into an AI model and generate an entire 3D world from it. Want to create a mystical underwater world for your new project on ocean conservation? Just feed in a prompt and immerse yourself in a fully custom environment using VR or AR. If you don’t like certain features, use the same model to restructure the environment.
A Question of Copyright
One recurring issue with generative models is copyright. Although Web3 largely solves the question of digital ownership — Whose JPEG is this? Who owned it in the past? — it doesn’t help us answer those questions when artificial intelligence creates something.
Interestingly, visual media conglomerate Getty Images announced in late 2022 its decision to ban content created by AI models like Midjourney, DALL-E, and Stable Diffusion. According to a statement from the organization, “There are real concerns with respect to the copyright of outputs from these models and unaddressed rights issues with respect to the imagery, the image metadata, and those individuals contained within the imagery. We are being proactive to the benefit of our customers.” This is as much an ethical question as it is a legal one. At what point do we consider AI models to be creators?
AI Nouns Town: AI-NFTs and Virtual Characters
Finally, AI can be used to create virtual characters. To outline what this looks like, I’ll dive into a project called The Simulation, run by the game studio Fable.
In The Simulation, you can create and nurture AI humans and creatures that have minds of their own. In the first sub-world of The Simulation, called AI Nouns Town, NFT owners can import characters they already own into the game and bring them to life using procedural animation, natural language processing, computer vision, synthetic speech, and reinforcement learning.
While the project is still in development, the idea is to give life to NFT characters such as Bored Apes, Doodles, or Deadfellaz, which are otherwise sitting around in owners’ wallets doing nothing. Initially, characters from each NFT series are trained and exist in isolation, playing by rules defined by the values and culture of their own collection. In describing the project, Fable’s CEO Edward Saatchi envisions a conversation between a Doodle and a Bored Ape, each character equipped with unique personality traits, emotions, and even memories that will define their interaction.
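As a rough sketch of what “personality traits, emotions, and memories” could look like in code, here is a toy character structure. The field names and the stubbed language-model call are assumptions for illustration, not Fable’s actual implementation.

```python
# Toy sketch of an AI-driven NFT character with a persona and a running memory.
# The structure and the stubbed language-model call are illustrative assumptions,
# not The Simulation's actual architecture.
from dataclasses import dataclass, field


def language_model_reply(context: str, message: str) -> str:
    # Placeholder for a real LLM call (e.g., a chat-completion endpoint).
    return f"(reply shaped by persona and memories, responding to: {message!r})"


@dataclass
class Character:
    name: str
    persona: str                                  # personality traits, values, backstory
    memories: list[str] = field(default_factory=list)

    def reply(self, heard: str) -> str:
        # A real system would condition a language model on the persona
        # and recent memories; here that call is stubbed out.
        context = f"{self.persona}\nRecent memories: {self.memories[-5:]}"
        response = language_model_reply(context, heard)
        self.memories.append(f"Heard: {heard} / Said: {response}")
        return response


doodle = Character("Doodle #123", "Playful, curious, loves bright colors.")
ape = Character("Bored Ape #456", "Laid-back, sarcastic, collects vintage synths.")

line = doodle.reply("Hey, welcome to AI Nouns Town!")
print(ape.reply(line))
```

The point of the memory list is that each exchange changes the character, so two characters who have met before would greet each other differently the next time.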
The grand vision ties into artificial general intelligence, or AGI, where NFT characters have the same intelligence as humans. The original vision for The Simulation came from films like Westworld, The Matrix, and Free Guy, in which virtual AI beings play off each other and intermingle seamlessly with real humans in a virtual environment like a Metaverse.
Final Thoughts
As AI and immersive technologies like VR/AR improve, we’ll continue to see their convergence. As we move into increasingly virtual environments and keep adding a virtual layer atop the real world using augmented reality, we will continue to see more AI-generated content, whether that’s artwork, 3D models, complete environments, or even artificial beings with their own intelligence and consciousness.
If that sounds like science fiction, that’s because it was science fiction only a few years ago. But now it’s becoming reality. Welcome to the Acceleration Economy.