
Text-to-Animation

Spatial Scene & Animation (Text-to-Animation Systems): We focus on systems that generate animations from natural-language input, specifically those that perform genuine natural-language understanding rather than relying on purely data-driven approaches.

We provide a text-to-animation solution that lets users create and edit 3D animation simply by writing and editing text. Our design objectives are threefold: first, we aim to generate virtual scenes close to daily life, in which objects can be interacted with; second, we expect the character to be aware of the scene context and to locomote adaptively within it; third, diverse character motions should be supported, such as angrily waving an arm or playing the piano. To meet these objectives, we build our system on Language-Driven Synthesis [MPF∗18] and NSM-based character locomotion [SZKS19], jointly optimizing the scene layout and character motion, so that vivid 3D animation can be generated and revised by editing text.
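To make that flow concrete, here is a minimal, hypothetical sketch of how such a pipeline could be structured; it is not Metakraft's actual API. Every name in it (`SceneObject`, `MotionClip`, `TextToAnimationPipeline`) is an assumption made for illustration: keyword matching stands in for natural-language understanding, and the joint scene-and-motion optimization is reduced to placing the interaction target within the character's reach before queuing a walk-to clip and the stylized action.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

# All names below are hypothetical illustrations of the three design
# objectives described above, not Metakraft's actual API.

@dataclass
class SceneObject:
    name: str
    position: Tuple[float, float, float]  # (x, y, z) in scene coordinates

@dataclass
class MotionClip:
    action: str                   # e.g. "wave_arm", "play_piano"
    style: str = "neutral"        # e.g. "angry"
    target: Optional[str] = None  # scene object the character interacts with

@dataclass
class AnimationPlan:
    layout: List[SceneObject]
    motions: List[MotionClip]

class TextToAnimationPipeline:
    """Toy pipeline: parse text -> lay out the scene -> plan scene-aware motion."""

    ACTIONS = {"wave": "wave_arm", "play": "play_piano", "sit": "sit_down"}
    OBJECTS = {"piano", "chair", "table"}

    def parse(self, text: str) -> MotionClip:
        # Keyword matching stands in for real natural-language understanding.
        words = [w.strip(".,").rstrip("s") for w in text.lower().split()]
        action = next((self.ACTIONS[w] for w in words if w in self.ACTIONS), "idle")
        style = "angry" if "angrily" in words else "neutral"
        target = next((w for w in words if w in self.OBJECTS), None)
        return MotionClip(action=action, style=style, target=target)

    def layout_scene(self, clip: MotionClip) -> List[SceneObject]:
        # Joint layout/motion optimization is reduced here to placing the
        # interaction target within reach of the character.
        objects = [SceneObject("character", (0.0, 0.0, 0.0))]
        if clip.target:
            objects.append(SceneObject(clip.target, (1.0, 0.0, 0.0)))
        return objects

    def generate(self, text: str) -> AnimationPlan:
        clip = self.parse(text)
        layout = self.layout_scene(clip)
        motions: List[MotionClip] = []
        if clip.target:
            # A scene-aware character first locomotes to the target,
            # then performs the stylized action.
            motions.append(MotionClip("walk_to", target=clip.target))
        motions.append(clip)
        return AnimationPlan(layout=layout, motions=motions)

print(TextToAnimationPipeline().generate("The character angrily plays the piano."))
```

In the real system, the locomotion stage would be driven by the NSM-based controller and the layout stage by the language-driven synthesis model cited above, with both optimized jointly rather than in sequence.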
