Digital Human Project
DigiMate’s Digital Human Project allows you to integrate lifelike 3D avatars into your applications, enhancing user engagement with realistic and interactive experiences. Here’s everything you need to know to get started:
DigiMate offers a diverse library of pre-made metahuman avatars created using Unreal Engine's advanced MetaHuman technology. You can choose from a variety of avatars to find the one that best suits your project’s needs.
As of July 2024, we offer a selection of seven metahuman avatars you can use for your project:
Freya
Darell
Jessica
Cooper
Matilda
Mikasa
Charlotte
Also, if you require a unique avatar, you can contact us to order a custom metahuman avatar tailored specifically to your requirements. These avatars can replicate a real person's appearance or be created from any reference our customers provide.
To ensure your avatar fits the specific context of your business, you can customize its appearance by selecting from a range of outfits available in our library. We offer three distinct styles for each avatar:
Street Style: Perfect for casual, everyday scenarios.
Smart Casual: Ideal for semi-formal settings.
Business Suits: Suitable for professional and corporate environments.
This flexibility allows you to match the avatar’s appearance to your brand and the specific use case.
Enhance the visual appeal of your digital human project by selecting from our collection of virtual backgrounds. These backgrounds are designed to complement your avatar and create an immersive environment.
If you prefer, you can also upload your own custom backgrounds to provide a unique setting that aligns with your project’s theme and objectives. Just click the "Add Background" button, select an image from your computer, and upload it (please note that there is a file size limit of 5 MB per file).
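If you are uploading backgrounds programmatically rather than through the button, it is worth checking the size limit client-side before sending the file. A minimal sketch in TypeScript (the endpoint and form field name are hypothetical, not DigiMate's actual API; the in-app "Add Background" button handles this validation for you):

```ts
// Client-side check against the 5 MB background limit before uploading.
const MAX_BACKGROUND_BYTES = 5 * 1024 * 1024; // 5 MB per file

async function uploadBackground(file: File): Promise<void> {
  if (file.size > MAX_BACKGROUND_BYTES) {
    throw new Error(`"${file.name}" exceeds the 5 MB background limit.`);
  }
  const body = new FormData();
  body.append("background", file); // field name is an assumption
  await fetch("/api/backgrounds", { method: "POST", body }); // hypothetical endpoint
}
```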
Our metahuman avatars come equipped with a range of sophisticated animations to enhance their lifelike presence. When no interactions are happening between the user and the application, the avatars exhibit idle animations, which include natural movements such as slight shifts in posture, blinking, and subtle facial expressions to maintain a sense of realism.
Additionally, our avatars are capable of displaying contextual emotions. Based on the AI's context recognition, the metahuman can run various facial animations to express emotions such as happiness, anger, and more.
These dynamic expressions ensure that the avatar responds appropriately to the context of the conversation, providing a more engaging and authentic user experience.
Our metahuman avatars come with advanced lip-sync capabilities, ensuring that their mouth movements align perfectly with spoken words. This feature enhances the realism of interactions and makes conversations with your AI assistant more engaging and believable. The avatars’ voices are generated using Text-to-Speech (TTS) technology, providing natural-sounding speech that can convey information clearly and effectively.
Our lip-sync technology currently supports the following languages:
"English (United States, UK, and other dialects)", "Russian (Russia)", "Ukrainian (Ukraine)", "Chinese (Wu, Simplified)", "Spanish (Spain)", "German (Germany)", "French (France)", "Italian (Italy)", "Portuguese (Portugal)", "Polish (Poland)"
Experience seamless and intuitive interactions with our hands-free conversation feature. This functionality allows users to enable their microphone and engage in real-time conversations with the AI avatar without needing to type a message.
By leveraging advanced Speech-to-Text (STT) technology, the AI can accurately transcribe spoken words into text, enabling smooth and natural dialogue. Whether you're providing commands, asking questions, or having a casual chat, hands-free conversation makes interacting with the AI avatar effortless and more akin to speaking with a human, enhancing user convenience and accessibility.
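To illustrate the STT step conceptually, here is a minimal browser-side sketch using the standard Web Speech API. DigiMate's production pipeline may use a different STT service, and sendToAvatar() is a hypothetical hook for forwarding the transcript:

```ts
// Capture speech in the browser and forward the finalized transcript
// to the avatar, exactly as if the user had typed it.
declare function sendToAvatar(text: string): void; // hypothetical hook

const SpeechRecognitionImpl =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

const recognition = new SpeechRecognitionImpl();
recognition.lang = "en-US";         // should match the avatar's voice locale
recognition.interimResults = false; // deliver only finalized transcripts

recognition.onresult = (event: any) => {
  // Take the final transcript of the latest utterance.
  const result = event.results[event.results.length - 1];
  sendToAvatar(result[0].transcript);
};

recognition.start(); // begins listening once the user grants microphone access
```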
To enhance the hands-free conversation experience, we’ve included a 3D speech indicator designed to visually communicate the status of the interaction. This indicator is a semi-transparent liquid sphere that uses distortion animations to show when the microphone is active and the user is speaking.
Indicator Colors and States:
Purple: Indicates that the system is actively listening to the user's speech.
Yellow: Indicates that the system is waiting for a response and the microphone is not active.
Green: Indicates that the metahuman is responding to the user.
This visual feedback ensures users are always aware of the current state of the interaction, making the hands-free conversation more intuitive and engaging.
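For reference, the state-to-color behavior described above can be summarized in code. This mapping simply mirrors the documented states and is not DigiMate's internal implementation:

```ts
// Illustrative mapping of conversation states to indicator colors.
type ConversationState = "listening" | "waiting" | "responding";

const INDICATOR_COLOR: Record<ConversationState, string> = {
  listening: "purple",  // actively listening to the user's speech
  waiting: "yellow",    // awaiting a response; microphone inactive
  responding: "green",  // the metahuman is speaking its reply
};

function indicatorColorFor(state: ConversationState): string {
  return INDICATOR_COLOR[state];
}
```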
DigiMate’s Digital Human project includes a versatile camera behavior feature that provides users with comprehensive control over how the camera interacts with the metahuman avatar. This feature enhances the overall user experience by offering various customization options for camera positioning and movement.
Adjustable Camera Distance
Users can set the distance between the camera and the avatar, allowing for different perspectives. Whether you want a close-up view of the avatar's face for detailed interactions or a full-body view for a more comprehensive presentation, you can easily adjust the camera to suit your needs.
Dynamic Camera Movements
The camera can dynamically change its position to create engaging and cinematic experiences. For example, the camera can perform smooth transitions, move to the side when UI elements appear on the screen, or adjust its angle to maintain an optimal view of the avatar.
Cinematic Transitions: Enhance storytelling and user engagement with professional-looking camera movements.
Adaptive Positioning: Ensure the avatar remains in focus even when additional UI elements are displayed, providing a seamless interaction experience.
Enhanced Interactivity: Keep users engaged with dynamic camera movements that respond to the context of the interaction.
By leveraging the camera behavior feature, you can create a more immersive and interactive experience, ensuring that the metahuman avatar always looks its best and remains effectively engaged with users. This level of customization allows you to tailor the visual presentation of your digital human project to meet specific requirements and enhance overall user satisfaction.
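As a rough sketch of what driving camera behavior from the client might look like: sendToUnreal() below is a hypothetical stand-in for whatever channel carries JSON descriptors to the Unreal application (with Pixel Streaming, typically emitUIInteraction; see the next section), and all command names and fields are assumptions that would need matching handlers on the Unreal side:

```ts
// Hypothetical camera-control descriptors sent to the Unreal application.
declare function sendToUnreal(descriptor: object): void; // stand-in

// Pull back for a full-body presentation view:
sendToUnreal({ command: "SetCameraDistance", meters: 3.0 });

// Slide the camera aside while a UI panel occupies part of the screen:
sendToUnreal({ command: "OffsetCameraForUI", panelWidthPx: 360 });
```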
Pixel streaming is a cutting-edge technology that allows high-quality, graphically intensive applications to be run on powerful remote servers and streamed directly to end-users over the internet. With DigiMate’s Digital Human Project, pixel streaming ensures that our highly detailed 3D avatars, created using Unreal Engine 5.4, are delivered to your device in real-time without compromising on performance or visual fidelity.
Here’s how it works:
Server-Side Rendering: The avatar and its environment are rendered on a powerful remote server equipped with high-end GPUs. This offloads the heavy computational work from the user’s device.
Real-Time Streaming: The rendered images are then compressed and streamed to the user’s device via the internet. This process is similar to how video streaming works but with much lower latency to support interactive applications.
User Interaction: Users interact with the AI avatar on their device, sending input commands such as voice or text. These inputs are sent back to the server, where the application processes them and updates the avatar’s behavior accordingly.
Low Latency: The entire process happens in milliseconds, ensuring smooth and responsive interactions between the user and the avatar.
By utilizing pixel streaming, DigiMate’s Digital Human Project can deliver a high-quality, immersive experience across a wide range of devices, including those with limited graphical capabilities. This technology ensures that you can interact with our realistic 3D avatars seamlessly, without the need for expensive hardware or complicated setups.
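For the curious, this is roughly what the client side of that loop looks like when built on Epic's open-source Pixel Streaming frontend library for Unreal Engine 5.4. It is a minimal sketch under that assumption, not DigiMate's embed code; the signalling URL and element id are placeholders:

```ts
import { Config, PixelStreaming } from "@epicgames-ps/lib-pixelstreamingfrontend-ue5.4";

// Point the player at the signalling server that brokers the WebRTC
// session with the GPU render server (the URL is a placeholder; DigiMate
// hosts this infrastructure for you).
const config = new Config({
  initialSettings: { ss: "wss://signalling.example.com", AutoConnect: true },
});

const stream = new PixelStreaming(config, {
  videoElementParent: document.getElementById("avatar-container")!,
});

// User input travels back over the same connection, e.g. as a JSON
// descriptor handled by the Unreal application:
stream.emitUIInteraction({ type: "chat", text: "Hello!" });
```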
When you select the Digital Human project, DigiMate offers flexible embedding options to suit your specific needs and preferences. These options ensure that your metahuman avatar integrates seamlessly into your application, whether you want a full immersive experience or a more subtle interaction point.
The full-screen embedding option allows your metahuman avatar to occupy the entire screen, creating an immersive and interactive experience. This is ideal for applications where the avatar plays a central role, such as virtual presentations, training simulations, or detailed customer interactions.
Key Benefits:
Immersive Experience: Engage users with a lifelike avatar that occupies the full screen.
Enhanced Interaction: Provide a comprehensive, focused interaction with the AI avatar.
Ideal for High-Engagement Scenarios: Perfect for scenarios requiring detailed explanations, demonstrations, or immersive storytelling.
The Intercom Widget is a more compact embedding option that integrates the avatar into a smaller container, accessible via a widget button. This setup is typical for customer support chatbots or any application where the avatar serves as a supplementary tool rather than the primary focus.
Key Benefits:
Non-Intrusive: The avatar remains accessible without dominating the screen space.
Convenient Access: Users can easily open and close the widget as needed, ensuring the avatar is available on demand.
Versatile Applications: Ideal for customer support, FAQs, and brief user interactions where full-screen engagement is not required.
By offering these embedding options, DigiMate ensures that your digital human project can be tailored to various use cases and user preferences, enhancing the overall user experience and engagement.
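As a rough illustration of how the two modes differ on the page, here is a hypothetical full-screen embed sketched in TypeScript. The URL, query parameters, and attributes are placeholders, not DigiMate's published embed format; the actual snippet comes from DigiMate when you set up your project:

```ts
// Hypothetical full-screen embed of the Digital Human player.
const frame = document.createElement("iframe");
frame.src = "https://app.digimate.example/embed?avatar=freya&mode=fullscreen"; // placeholder URL
frame.style.cssText = "position:fixed;inset:0;width:100vw;height:100vh;border:0;";
frame.allow = "microphone; autoplay"; // hands-free conversation needs mic access
document.body.appendChild(frame);

// An Intercom-style widget would instead size the same frame to a small
// corner container and toggle its visibility from a launcher button.
```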