Master Audio Integration with SALSA LipSync: SalsaMasterAudio

(UPDATE) 2019-06-22 version 2.0.0: initial release for SALSA LipSync v2.

We've created a set of helper-scripts to link up Dark Tonic's Master Audio Playlist functionality with our SALSA LipSync asset. This workflow allows you to drive lip-sync from a playlist created and managed in Master Audio.

The asset package download contains two C# helper-scripts, one for integrating with Playlists, the other for integrating with Master Audio Groups.

PLEASE NOTE: Master Audio is an awesome, top-selling, top-rated Unity asset created by Dark Tonic and is not affiliated with or supported by Crazy Minnow Studio. Please contact Dark Tonic Games for information about Master Audio.

Brief instructions are available below, but you can watch the short video on our YouTube channel and follow along if you prefer.

SalsaMasterAudio add-on is now available for SALSA LipSync v2.

NOTE: While every attempt has been made to ensure the safe content and operation of these files, they are provided as-is, without warranty or guarantee of any kind. By downloading and using these files you are accepting any and all risks associated and release Crazy Minnow Studio, LLC of any and all liability.

Download the zip package

PLAYLIST INTEGRATION

The SALSA LipSync Unity asset listens for Master Audio's PlaylistController.SongChanged event and updates its AudioSource with the currently playing clip.
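
For illustration, here is a minimal, Unity-only sketch of the clip-mirroring idea described above. It assumes you manually link the playlist's AudioSource and SALSA's AudioSource in the inspector; the actual helper script wires this up by subscribing to PlaylistController.SongChanged, so treat the component name, field names, and the per-frame polling here as illustrative assumptions rather than the shipped implementation.

using UnityEngine;

// Hypothetical sketch only: mirrors whatever clip the (optionally muted) Master Audio
// playlist AudioSource is playing onto the AudioSource SALSA analyzes. The real add-on
// reacts to Master Audio's SongChanged event instead of polling every frame.
public class PlaylistClipMirror : MonoBehaviour
{
    public AudioSource playlistSource; // AudioSource the Master Audio playlist plays through (assumed manually linked)
    public AudioSource salsaSource;    // AudioSource referenced by the SALSA component
    public bool mutePlaylist = true;   // avoid hearing the same clip twice (see Release Notes item #1)
    public bool syncTime = true;       // roughly mimics the "sync mode" options (see Release Notes item #2)

    private void Awake()
    {
        if (mutePlaylist && playlistSource != null)
            playlistSource.mute = true;
    }

    private void Update()
    {
        if (playlistSource == null || salsaSource == null) return;

        // When the playlist moves to a new song, hand the same clip to SALSA's AudioSource.
        if (playlistSource.clip != salsaSource.clip)
        {
            salsaSource.clip = playlistSource.clip;
            salsaSource.Play();
        }

        // Optionally keep SALSA's playback position aligned with the playlist's position.
        if (syncTime && salsaSource.clip != null && playlistSource.isPlaying)
            salsaSource.time = Mathf.Min(playlistSource.time, salsaSource.clip.length - 0.01f);
    }
}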

Brief Instructions:

  1. Download and import the helper-script unitypackage into your project.
  2. Add a Master Audio prefab to your scene via the Master Audio Manager window.
  3. Set up a new playlist and add clips. NOTE: you may want/need to 'Override' the playlist's 'Crossfade Mode' and set the time to 0.
  4. Add a Playlist Controller prefab to your scene via the Master Audio Manager window.
  5. Configure the Playlist Controller GameObject to use the new playlist created in step #3.
  6. Add SALSA LipSync to your GameObject.
  7. Add SalsaMasterAudioPlaylist helper-script to the GameObject using SALSA (as shown in the screenshot).
  8. Choose whether to mute the Master Audio playlist (recommended).
  9. Choose whether to autodetect (recommended), force, or ignore Playlist synchronization (see the Release Notes section below).
  10. Link the Playlist Controller (from step #4) to the helper-script's 'Playlist' slot.
  11. Play your scene for some playlist-driven, lip-syncing goodness.

GROUPS INTEGRATION

The helper script uses Master Audio's PlaySoundResult: when group playback is triggered programmatically, Master Audio plays a clip from the group and SALSA is linked to the AudioSource Master Audio plays it through.

Brief Instructions:
NOTE: this requires some sort of programmatic access to the public SalsaMasterAudioGroup.PlayDialog() method. Implementing this in your project is beyond the scope of support for this add-on and SALSA LipSync in general. In the included example scene, a UGUI button is linked to call the method; a minimal sketch of such a call follows the steps below.

  1. Download and import Master Audio, SALSA LipSync Suite, AND this add-on (SalsaMasterAudio) into your project.
  2. Create and name your Master Audio Group(s) according to the Master Audio documentation.
  3. Configure a character model with SALSA and confirm it is working.
  4. Add SalsaMasterAudioGroup helper-script to the GameObject using SALSA (as shown in the screenshot).
  5. Configure a way to call the group playback (e.g., a GUI button, as in the demo scene).
  6. Play the scene and invoke the PlayDialog() method (using step #5).
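
For reference, here is a hedged sketch of what a PlayDialog()-style call can look like. MasterAudio.PlaySound() and its PlaySoundResult return value are described in the Master Audio documentation, but the ActingVariation/VarAudio member names used below are assumptions about the Master Audio API for your installed version, and the group name is hypothetical -- this is not the add-on's actual code.

using UnityEngine;
using DarkTonic.MasterAudio; // assumed namespace for Master Audio

// Hypothetical sketch of the group-playback idea; verify member names against the
// Master Audio documentation for your version.
public class GroupDialogSketch : MonoBehaviour
{
    public string soundGroupName = "Dialogue"; // hypothetical Master Audio Sound Group name
    public AudioSource salsaSource;            // AudioSource referenced by SALSA

    // Call this from a UGUI button, as in the included example scene.
    public void PlayDialog()
    {
        PlaySoundResult result = MasterAudio.PlaySound(soundGroupName);

        if (result != null && result.ActingVariation != null)
        {
            // Mirror the clip Master Audio is playing onto the AudioSource SALSA analyzes.
            AudioSource groupSource = result.ActingVariation.VarAudio;
            salsaSource.clip = groupSource.clip;
            salsaSource.Play();
        }
    }
}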
Release Notes:
  1. Using Master Audio to drive/feed SALSA's lip-sync technology with audio clips has the side effect of both Master Audio and SALSA playing the same clip simultaneously. While this may not be perceptible on some systems, odds are it is not desirable whether it is audible or not. A boolean flag is now available that will mute the Master Audio Playlist. This is applied in Awake(), so changes during runtime will not affect script logic. However, this can be overridden at runtime, externally in other scripts or actions if desired.
  2. Master Audio's playlist synchronization feature may produce timing issues with SALSA's lip-sync audio clip timing when using earlier versions of this script (prior to v1.0.2). This effect will not be noticeable when using the "mute" feature in Release Notes item #1 and lip-synchronization itself is not affected. Master Audio Playlist Synchronization functionality is now supported in this helper script (v1.0.2+). There are two options available: "Autodetect Sync Mode" and "Force Sync Mode". Autodetect will check the current clip's SongFadeInPosition mode and if set to "Synchronize Clips", SALSA's AudioSource.time will be synchronized with the Master Audio Playlist's current AudioSource.time. If, for some reason, autodetect mode is not working correctly, use "Force Sync Mode" to indiscriminately force SALSA's AudioSource to synchronize with the Master Audio Playlist's AudioSource. NOTE: "Force Sync Mode" overrides the "autodetect" mode.
Check out the short video tutorial/demonstration:
Lip-sync Using SALSA and Master Audio Playlists - YouTube

Simple Automated Lip Sync Approximation
~ We look forward to seeing what you create! ~

Support Images:


SALSA LipSync & RT-Voice :: runtime text-to-speech!

Our RT-Voice addon has been updated for SALSA LipSync v2!

RT-Voice (runtime text-to-speech)
https://www.assetstore.unity3d.com/en/#!/content/41068

SALSA with RandomEyes (realtime lipsync)
https://www.assetstore.unity3d.com/en/#!/content/16944

(UPDATE) 2019-06-21 -- version 2.0.0: initial release for SALSA LipSync v2.0

PLEASE NOTE: These instructions require you to download and install the appropriate add-on scripts in your Unity project. If you skip this step, you will not find the applicable option in the menu.

Installation Instructions
  1. Install SALSA LipSync into your project.
  2. Install RT-Voice into your project.
  3. Import the SALSA LipSync v2 RT-Voice support package (SalsaRtVoice).
Usage Instructions
  1. Setup a SALSA-enabled character.
  2. Set up the RT-Voice components using one of the methods below (a minimal Speaker.Speak sketch follows these steps):
    • Using Speaker.Speak
      • Add the RT-Voice [Speaker] component.
      • Add the SalsaRtVoice component:
        • [Component] -> [Crazy Minnow Studio] -> [SALSA] -> [Addons] -> [RT-Voice] -> [Salsa_RTVoice]
    • Using Speaker.SpeakNative with the OnSpeakNativeCurrentViseme event.
      • Add the RT-Voice [Speaker] component.
      • Add the RT-Voice [Live Speaker] component.
      • Add the Salsa_RTVoice_Native component:
        • [Component] -> [Crazy Minnow Studio] -> [SALSA] -> [Addons] -> [RT-Voice] -> [Salsa_RTVoice_Native]
    • Using Speaker.SpeakNative with the OnSpeakCurrentWord event for use on iOS.
      • Add the RT-Voice [Speaker] component.
      • Add the RT-Voice [Live Speaker] component.
      • Add the Salsa_RTVoice_Native_iOS component:
        • [Component] -> [Crazy Minnow Studio] -> [SALSA] -> [Addons] -> [RT-Voice] -> [Salsa_RTVoice_Native_iOS]
      • Add the TextSync component: (TextSync text-to-lipsync @ https://crazyminnowstudio.com/unity-3d/lip-sync-salsa/downloads/)
        • [Component] -> [Crazy Minnow Studio] -> [SALSA] -> [Addons] -> [TextSync] -> [CM_TextSync]
      • Set the TextSync [Words Per Minute] to 300.
  3. Play the scene and press the [Speak] check box on either the [Salsa_RTVoice] or [Salsa_RTVoice_Native] component.
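
For reference, here is a minimal, hedged sketch of the Speaker.Speak path. The static Speaker.Speak(text, audioSource) call shown below is an assumption based on RT-Voice's documented API at the time of this add-on (2019); newer RT-Voice versions may require Speaker.Instance.Speak instead, so verify against your installed version. SALSA only needs the synthesized speech to arrive on the AudioSource it references.

using UnityEngine;
using Crosstales.RTVoice; // RT-Voice namespace

// Hypothetical sketch: sends text to RT-Voice and plays the result through the
// AudioSource SALSA references. The Speaker.Speak signature is an assumption; check
// the RT-Voice documentation for your version.
public class RtVoiceSalsaTester : MonoBehaviour
{
    public AudioSource salsaSource; // the AudioSource linked to SALSA
    [TextArea] public string text = "Hello from RT-Voice and SALSA.";

    public void Speak()
    {
        // RT-Voice synthesizes the text and plays it through the supplied AudioSource,
        // which SALSA analyzes for lip-sync.
        Speaker.Speak(text, salsaSource);
    }
}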

NOTE: While every attempt has been made to ensure the safe content and operation of these files, they are provided as-is, without warranty or guarantee of any kind. By downloading and using these files you are accepting any and all risks associated and release Crazy Minnow Studio, LLC of any and all liability.

Download Files

Simple Automated Lip Sync Approximation
~ We look forward to seeing what you create! ~

Buy SALSA on the Asset Store

New workflow brings improved 1-click SALSA setup to MCS (Morph3D) characters, introduces new default shape groups that produce great looking results, and provides simple access to MCS's huge array of complex facial expressions. We leveraged SALSA features to create our new CM_MCSSync script, along with a custom inspector, providing a unique shape group capability that creates a great default setup while also allowing full shape group customization.

PLEASE NOTE: These instructions require you to download and install the appropriate 1-Click asset scripts in your Unity project. If you skip this step, you will not find the option for 1-Click in the menu.

2019-03-01 - Due to lack of support for the MCS system from the vendor, Crazy Minnow Studio no longer provides support for these models. There will be no further support or modifications for this add-on. You are free to continue using it as long as it works for your needs.

2018-09-08 - MCS v1.6.4+ (2.2.1) Excluded blink shape sync from SyncShapes, removed default SMRs from the BlendShape removal list.

2018-04-13 - MCS v1.6.4+ (2.1.0) Added a new function that automatically remaps BlendShape indexes to address the MCS moving-target indexes (like self healing). Added [Remove BlendShapes] section to the inspector and prepopulated it with a couple hair options to avoid rogue hair-movement blendshapes linked to blinking. Added a jaw bone link and sync, and a new default BlendShape map for more dynamic lip-sync.

Installation Instructions

The zip file below contains a Unity package that can be imported into your SALSA with RandomEyes 1.4.0 project. Download and unzip the file.

  1. Install your MCS character(s) into your project.
  2. Install SALSA with RandomEyes into your project.
    • Select [Window] -> [Asset Store]
    • Once the Asset Store window opens, select the download icon, and download and import [SALSA with RandomEyes].
  3. Import the SALSA with RandomEyes MCS Character support package.
    • Select [Assets] -> [Import Package] -> [Custom Package...]
    • Browse to the [SALSA_3rdPartySupport_MCS.unitypackage] file and [Open].

Usage Instructions

  1. NOTE: Ensure you have downloaded and imported the 1-Click files into your Unity project.
  2. Add an MCS character to your scene.
  3. Select the character root in the hierarchy, then select 1-Click Setup from the following menu:
    • [Component] -> [Crazy Minnow Studio] -> [MCS] -> [SALSA 1-Click MCS Setup]
  4. Add an AudioClip to the SALSA [Audio Clip] field, and play the scene.

SALSA Lipsync & MCS by Morph 3D - YouTube

NOTE: While every attempt has been made to ensure the safe content and operation of these files, they are provided as-is, without warranty or guarantee of any kind. By downloading and using these files you are accepting any and all risks associated and release Crazy Minnow Studio, LLC of any and all liability.

Download Files

Simple Automated Lip Sync Approximation
~ We look forward to seeing what you create! ~

Buy SALSA on the Asset Store

ETA: Submission to Unity by end of January 2019
STATUS: development finalization, documentation

Addon/System Integration Status:

  • Amplitude: Pending
  • DissonanceLink: Functional
  • MicInput: Functional
  • MasterAudio Groups: Functional
  • MasterAudio Playlists: Functional
  • RT-Voice: Pending
  • SalsaSync: Deprecated
  • TextSync: Functional

Codeless Systems:
  • Adventure Creator: Pending
  • Behavior Designer: Pending
  • Cinema Director: Pending
  • Cinematic Sequencer Slate: Pending
  • NodeCanvas: Pending
  • Playmaker: Pending
  • Unity Timeline Core: Functional
  • Unity Timeline TextSync: Functional

One-Click Systems:
  • Autodesk CharGen: Pending
  • Boxhead v2: Functional
  • DAZ: Pending
  • Fuse: Functional
  • iClone: Pending
  • MCS: Pending
  • UMA DCS: Pending

UPDATE (2019-01-17)

Not entirely SALSAv2-related, but we wrapped up migrating our server hosting last night. We've been somewhat disappointed with the performance of our previous shared hosting and renewal was coming up so we decided to make the jump. We're now on a VPS server and the performance is quite good. Let us know if you have any issues.

The Eyes module continues the integration process and is nearly done. We are still hopeful for submission by the end of the month, but it may slip a week or two depending on whether we hit any hurdles or not.

UPDATE (2019-01-03)

We have ended the closed-beta testing and received some good feedback. We are now integrating the eyes module in with the main package and expect to have that completed soon. Our current plan is to be submission-ready by the end of this month. In the meantime, we are updating existing add-ons to make them compatible with the new system and getting documentation and tutorials ready for release. Please understand, this is not a promise that we will release the product at the end of January. This is our estimated plan and intent. We are a small team and this is not our only job, so we work with what we have. :)

As we mentioned near the beginning of this development blog/log, we will be increasing the price of SALSA Lip-Sync v2 upon release. The expected release price will be $39. Shortly after release, the price will bump up. Also note, this will be a paid upgrade. As a thank you to those that have supported version 1, we will be offering a price-difference upgrade at launch, so you can continue to purchase version 1 now and upgrade upon release and not pay any extra over waiting for the new version. While the new-purchase price is $39, the upgrade price will be $4. Once the full price changes from $39, the upgrade price will increase and will no longer adhere to the price-difference model. So, get your upgrade quickly!

UPDATE (2018-12-26):

Here's a quick update to show some progress on eye control on a 2D character in SALSA 2.0. The video demonstrates three available methods:

  • Eye transform animation.
  • Nine sprites (forward, up, upper right, right, lower right, down, lower left, left, and upper left).
  • Five sprites (forward, up, right, down, and left).
SALSA 2.0 (2D eye control progress) - YouTube

UPDATE (2018-11-09):

SALSA and EmoteR are pretty much finalized. Documentation is nearly complete. RandomEyes development continues. We are about 3 weeks into our closed-beta test of SALSA and EmoteR and have had some really good, positive feedback thus far. Very few bugs have turned up and our testers are enjoying the new version and its capabilities.

We have added a few new features and modifications from tester requests. One new key feature is blending back to Animator control in the EmoteR module using Bone components. Previously, if a bone was used in an emote, EmoteR would take over animation and then release the animation at a set position, usually resulting in a jarring snap as Unity's Animator takes control back. Now, EmoteR will track the Animator control of the bone and smoothly deactivate its emote towards the updating Animator pos/rot/scale. The result is a seamless transition back to Animator control. There is also an offset mode available which EmoteR uses to apply relative animations to an Animator's animations. It's all good stuff and opens up a lot of great possibilities.

Here is a view of an expanded SALSA inspector (non-configured) and an expanded, configured-for-boxHead implementation.

 

UPDATE (2018-10-05): Inspector Design

The advanced elements of the SALSA inspector are more or less complete. We've implemented shape, bone/transform, and sprite switcher interface functionality. Additionally, we've been squashing some bugs and optimizing some core code, along with copious amounts of housekeeping and general cleanup. Document rough-drafts are nearly complete for SALSA. Emoter will be next in line for finalized touches, followed by Eyes. Why the huge delay? There are a few reasons, the biggest being trying to implement the inspector interface for the advanced flexibility the suite uses. We've tried to make SALSA 2.0 compatible with as many Unity versions as possible while keeping the inspector up-to-date and fresh enough. That said, our current version cutoff for Unity is planned at v5.6+. Stay tuned for more info, I'll try to post up a new view of the inspector soon!

UPDATE (2018-07-25): Inspector Design, Optimizing, & Unity Timeline

Since the feature set is pretty much complete for SALSA, we have been working on a fresh inspector implementation. The goal is to make the inspector at-a-glance informative, without being cumbersome -- providing feedback for proper implementation and key settings while the sections are completely collapsed. The animation demonstrates some of the interface ideas we have implemented.

We have also been trying out some additional ideas and working through some scenarios to try and ensure the product is as forward looking as possible. This includes optimization and eliminating unnecessary code and libraries. There is still work to be done here, but currently, the profiler is showing zero GC impact and the queue item registration processes in our sample scene are taking just a little over 1ms to process 36 simultaneous SALSA objects! Additionally, the per frame queue process (animation interpolations) for these 36 registered test items are under 1ms!

Finally, we have been ensuring our current code base is compatible with our Timeline add-ons and in fact, we have some added capabilities in Emoter for custom expression firing that are not available in SALSA 1.0, such as shape jitter variation over time when using one-way shape handling.

UPDATE (2018-06-03): technology preview video 2D (progress-to-date)

Here is another technology preview video. This time we used our new combined 2D/3D workflow, our unlimited sprite/texture/material/trigger capability (sprites for this test), and multi-image sprite sequences for each of the eight visemes used to achieve smoother 2D traditional animation. This workflow could also be used to add full lip-sync and emotion expression to low poly 3D characters using texture or material swapping. We think the results look great.

SALSA LipSync 2.0 Pre-Release Technology Preview (2D) - YouTube
UPDATE (2018-05-23): Refactoring and Fixes

The past few days have been spent refactoring and tweaking, while testing the feature sets currently in-place. Mike has been working with an advanced 2D model from Digital Puppets and it has brought quite a few things to light. SALSA and EmoteR have been more than up for the task and it has simply been a matter of adding to the feature set rather than needing broad sweeping changes. The whole system is very modular in design so it has been quite pleasing to work with. The new 2D array controllers have really opened up the possibilities of what the SALSA suite can do for 2D. We will be putting out some 2D tech video demonstrations soon.

UPDATE (2018-05-18): EmoteR updated

The EmoteR (emote randomizer working name) received some lovin' today. Implemented manual emote triggering as well as random triggering (in addition to the already-implemented emphasizer triggering). There are now three pools of emotes: emphasizer, random, manual. An emote can belong to any/all of these groups (all emotes belong to the manual group). The randomizer functionality can be strictly a random emote, triggered by a percentage chance at random (min/max) intervals. Or it can take on some dynamics of its own with random fractional extents and/or randomized hold (min/max) durations. So, it is pretty flexible, but easy to implement, providing quite a bit of variation.

We foresee some cool value add for developer/designers who wish to create demeanors for their characters. By adding multiple EmoteR components to a character, each could be configured with emotes associated with a demeanor (i.e. "happy" or "angry"). Then, programmatically or via a mechanism like Timeline, the demeanor could be switched so that SALSA triggers "happy" emphasis emotes and EmoteR triggers "happy" random emotes. Or, likewise the demeanor could be switched to "angry" emphasis and random emotes. The possibilities are endless!

UPDATE (2018-05-17): technology preview video (progress-to-date)
SALSA LipSync 2 0 Pre-Release Technology Preview - YouTube
UPDATE (2018-05-07):

SALSA 2.0 is shaping up to be an amazing upgrade to the product line. With the new design elements, we have discovered additional possibilities that make the system even more flexible and allow us to give you, the designer, a solution which breathes even more life into your characters without sacrificing SALSA's original goals: to be simple and automagic! Of course, keep in mind, any sweet, new features we discuss here are not considered set-in-stone until the product ships. In other words, something might come up that prevents us from including a feature in the final product.

Our new sound processing algorithm has allowed us to experiment with some forward looking processing, allowing us to adjust the processing bias forward/backward. The only really useful direction seems to go forward and the benefit is eliminating the apparent processing lag visible in the current version of SALSA. This has a dramatic effect on the perceived lipsync approximation, making it feel even more accurate without having to bake in phoneme maps. We believe this will potentially work for micInput as well, further reducing the visual separation between recording speech and performing lipsync.

In SALSA 1.x, we have a custom shapes interface that allows the designer to include and randomly play emotes on the character in an attempt to give them a little more life and not be so static. Unfortunately, this is somewhat of a double-edged sword and can easily cause the character to look twitchy. SALSA 2.0 goes a few steps further. You will still be able to define shapes and have them randomly play if that is your thing. However, we just introduced a new capability that links SALSA's audio processing algorithm with our new Emoter system, allowing audio timing cues to influence a set of emphasis emotes while the character talks. The result is quite awesome - timing is everything!

Previously, we spoke about the potential of 2D animation arrays (vice single-image switching) and at that point, had not been able to test it. We can now report, the results look pretty amazing. While the system has no limit on the number of mouth shapes, we tested a simple set of 3, 10-frame animations to mimic SALSA 1.x's limit (for small, medium, large mouth shapes - 30 frames total). We will also be testing an advanced 2D character with a larger set of visemes to see how that looks -- SALSA 2.0 imposes no limits on the number of 2D/3D shapes or triggers. We do not have any best-practice recommendations at this point on how many frames are effective for an animation, but the inclusion of this capability should make some 2D enthusiasts super happy! Of course, it all depends on the 'look' you are trying to achieve and 2D is much more artistically motivated (for the most part) than 3D. No matter how you look at it, options are awesome!

The RandomEyes overhaul is also getting lots of love. Movement algorithms are applying some scientific research to the model and, I have to say, eye-movement is looking amazing! In addition to the eyes, head-movement is now a thing. Tying head and eyes together creates an even more compelling character. And similar to SALSA's emphasis emote timing, head movement timing will coordinate with other elements as well, like blinking. Super cool!!!

We realize talk is cheap and to that point, we will be releasing a new teaser shortly, demonstrating just how much life the SALSA package will infuse into your characters with so little effort.

UPDATE (2018-04-19):

We have made some massive progress on SALSA 2.0 over the past couple of weeks. A vast majority of the underlying logic and structure has been completed. We are now confident to say the whole package, in general, is moving to a time-based animation queueing system. As mentioned previously, this queue provides for flexibility and efficiency and is a huge contributor to the new core features of SALSA. In SALSA 1.x, animations conflicted with each other and required limited or zero reuse of shapes (reusing shapes from other triggers and emotes would have caused jittery, push-pull moments and simply wasted processor cycles). Now, the magic of the queue gracefully eliminates this problem by seamlessly tracking and taking over animations using intelligent overrides. Additionally, the queue now operates on time versus speed. Other than allowing for different easing types, this will give cinematic creators a better workflow for timing emotes, being able to separately (and accurately) control animation on, hold, and off times.

We have removed the separate 2D/3D lip-sync elements. Now it is simply SALSA lip synchronization for your 2D or 3D needs. It all uses the same queue and the 2D options have expanded. SALSA 1.x was limited internally to sprite swapping. That process was fine for many customers, but others had different needs, which we were able to accommodate with some bolt-on processes. SALSA 2.0 abstracts the animation control process to a point where we should be able to accommodate any sort of animation or switching needs our customers may have. If it does not come in the box, extending with new functionality is much easier. At present, we have 2D switching processes for sprite, material, and texture. We are even experimenting with an animated array that could offer artists a pathway to smoother 2D animation progression (multiple animation frames per emote or lip-sync trigger). This is still experimental, but the initial results are pretty cool!

One issue some of our customers had with SALSA 1.x was with 3D sound. Basically, it broke the lip-sync when the character moved away from the listener. Of course, in true Minnow fashion, we were able to come up with some usable workarounds, but they were not the preferred implementation. Last night, significant progress was made on removing 3D sound shortcomings and implementing a spatially agnostic processing algorithm. Wooohooo!

RandomEyes is getting lots of love as well. We will update on its progress shortly!  Stay tuned!!! ;)

UPDATE (2018-01-09):

SALSA v2.0 development continues. The ability to group shapes/expressions within SALSA (from multiple shapes) brings much more flexibility, but certainly can increase complexity within the inspector. The new inspector is shaping up nicely and providing for a much more streamlined user experience. The queue system is also being refined, unifying the ability for SALSA to use blendshapes and bones to drive lipsync or expressions. This added functionality will enhance SALSA's ability to work with even more model systems. Additionally, it may be possible to implement this same functionality for the 2D system or even implement 2D into the same system. Time will tell if that is going to be possible.

Lately, we have received quite a few inquiries on the release date of SALSA v2.0. It is still too early to establish a release date, but there have been a lot of advances in the development.

In addition to SALSA, two more assets should be available on the AssetStore soon that will enhance the functionality of SALSA (versions 1+). Amplitude for WebGL will provide the ability to link up audio analysis in WebGL projects (not specific to SALSA -- but yes, SALSA will work in WebGL with the product)! The second new asset, MorphMixer, will allow developers/designers to create unique blendshape instances where multiple shapes were previously required. The biggest benefit here is the creation of better defined shapes and the elimination of shape conflicts. This will benefit SALSA v1+.

Neither of these two products requires SALSA to operate. They were, however, inspired by SALSA in their creation. Amplitude will provide audio amplitude analysis functionality to any project/need. And MorphMixer could be used in any project to blend existing blendshapes together to form new shapes - inside the Unity Editor. Both were submitted over 30 days ago, so we are hoping Unity approves them very soon. SALSA owners will be able to purchase both products at a special price for a time.

UPDATE (2017-09-17):

Most of the 3D features are finalized and in place. Now we're focusing on 2D elements. Still no release date -- sorry.

SALSA v2.0 is coming!

Likely you have already seen some of the teaser videos we have published, showing off the quality gains over SALSA 1.x. If not, head over to our YouTube channel to take a peek. The response and excitement for SALSA 2.0 is growing and with that, there are a lot of questions. This post will discuss the most common questions we get and (for now) will serve as a devBlog for the goings on.

When is SALSA v2.0 going to be released?

This is our most frequently asked question. We currently do not have a date we're comfortable sharing. This version is, for the most part, a complete re-write of the SALSA core. Development is advancing quickly and nearly all of the advanced feature-set is already prototyped and maturing. Stumbling blocks have pretty much been kicked to the side of the road -- full steam ahead. In other words, we don't know, but check back here for updates.

Is SALSA v2.0 going to be a free upgrade to existing SALSA 1.x owners?

Ahh, the age-old question -- is it free? Nope, it's not. And there are a few reasons behind this decision. First and foremost, SALSA is being re-written. As mentioned earlier, this is a big code change and is not going to be a drop-in replacement for v1.x. Due to Unity's distribution platform, we felt it necessary to ensure projects were not accidentally broken due to an inadvertent upgrade. However, if you already own SALSA, it will be nearly free to upgrade it -- at least initially. After nearly 3 years of support and updates, we've decided the old SALSA needs a face-lift and face-lifts cost money. With the new release will come a price increase and the upgrade cost will be the difference of the increase and the current price of SALSA. SALSA v2.0 is expected to release at $39, meaning the upgrade will cost you $4 ($39 - $35). This upgrade window will likely be pretty narrow and the price will increase again (along with the upgrade cost) shortly after release. We're still working out the details, so some of this may change. Check back here for updates.

What sorts of new features will my hard-earned $4 get me with v2.0?

There are a couple of big changes and you've already seen the biggest one -- better real-time, lip-synciness in the form of vastly superior quality. The Minnow doesn't lie! It looks really good. And the quality doesn't sacrifice the core SALSA values and goals. It's still real-time and the pipeline is still quick and easy. In fact, you can still achieve compelling lip-sync approximation just like the SALSA 1.x with the same 3 shapes you used before. You can even use a single shape if you want -- although you probably wouldn't want to 'cause jaw flapping is just plain ol' ugly. With SALSA 2.0, you can use as many shapes as you want for lip-synchronization (yup! that's new!).

Behind the scenes, there's also a new trigger processing algorithm. This new tech brings a boatload of efficiency and flexibility. With the existing version of SALSA, re-using shapes in different mouth-shapes or even in the custom-shapes can cause unexpected, unattractive results like stuttering of blendshapes. It also meant animations would be pulling overtime on processor cycles, fighting each other. With the new queue-based system, duplicate shapes can be used and much more gracefully transition control and processing. Fewer shapes animating means more processor cycles in your back pocket. Priority-based transitions mean your blendshapes will spend less time duking it out and more time looking deliciously cool.

What about my 1-clicks?

SALSA 2.0 will still have them. In fact, with the new core, 1-clicks will be even more streamlined and efficient. The core engine will handle multiple skinned-mesh-renderers and won't need an additional intermediary to translate and sync things up.

Will upgrading to SALSA 2.0 be difficult?

That's a hard question to answer. It really depends on your existing integration of SALSA. For example, if you're using one of the character systems we support with a respective 1-click setup, you would likely only need to install SALSA 2.0 along with the new 1-click setup scripts. If you implement several group expressions in 1.x, these may need to be setup again in SALSA 2.0. The new 1-click..

Unity Timeline for SALSA AudioClips, Emotes, and TextSync

Tested with Unity versions 2017.1 - 2018.1 (release)

Timeline functionality for SALSA Lip-Sync is now called "Unity Timeline SALSA Core" and contains functionality for sequencing AudioClips on a SALSA-enabled character as well as a new option to sequence character emotes, leveraging RandomEyes3D configuration of custom-shape groups. In addition to the SALSA Core functionality, we also provide a separate Timeline implementation for our free SALSA TextSync add-on. For implementation instructions, check out the videos below. And download the files in the SALSA Downloads area.

3-part Playlist of Available Videos

SALSA Core:

UPDATE: v0.4.0 (2018-11-29): SalsaAudio tracks/clips now respond to Timeline pause/resume events.
UPDATE: v0.3.4 (2018-07-10): fixes issue where multiple instances of the same clip on a track does not animate properly.
UPDATE: v0.3.3 (2018-07-08): adds functionality for character emote sequencing.
UPDATE: v0.3.2 (2018-05-04): fix for intermittent duplicate play of AudioClip.
UPDATE: v0.3.1 (2018-05-02): disabled clip playing during design time.

The EmoteClip action: when used on an Emote Track, it allows configuration and timing of RandomEyes custom-shape groups. Simply add an Emote Track to the Timeline and add Emote Clips to the track. Select the emote clip in the Timeline and configure the name of the RandomEyes3D custom-shape group to drive (case-sensitive).

Ensure your RandomEyes3D component is linked to the track binding and your custom-shapes and groups are properly configured. NOTE: the same RandomEyes3D component instance can be linked to multiple Emote Track bindings to create NLE-styled overlaps/blends of emotes.

Ensure the RandomEyes3D custom shapes and groups are configured and the group names match the names configured on the Timeline Emote Clips.

The AudioClip action: when used on a Salsa Audio Track, it allows configuration and timing of AudioClips to drive SALSA's lipsync approximation algorithms. The Salsa Audio Track is configured to point to the AudioSource used by SALSA. Unity maintains control focus for AudioClips dragged onto a Timeline track, so it is necessary to configure the specific AudioClip in the Inspector. Once an AudioClip is configured for the track clip, the clip container in the track will dynamically adjust its length to the duration of the AudioClip. Additionally, the clip container's name will be dynamically changed to that of the AudioClip name.
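
To illustrate how a Timeline clip container can size itself to its AudioClip, here is a minimal, hypothetical PlayableAsset sketch (not the shipped SalsaAudio clip asset): overriding PlayableAsset.duration lets Timeline draw the clip at the AudioClip's length once the clip is assigned in the Inspector.

using UnityEngine;
using UnityEngine.Playables;

// Hypothetical sketch, not the actual SalsaAudio clip asset: shows how a Timeline clip
// can report its duration as the length of the AudioClip assigned in the inspector.
public class AudioClipLengthAsset : PlayableAsset
{
    public AudioClip clip; // assign in the Inspector, since Timeline keeps focus on drag-and-drop

    // Timeline uses this value to size the clip container on the track.
    public override double duration
    {
        get { return clip != null ? clip.length : base.duration; }
    }

    public override Playable CreatePlayable(PlayableGraph graph, GameObject owner)
    {
        // A real implementation would create a ScriptPlayable that feeds the clip to
        // SALSA's AudioSource; an empty Playable keeps this sketch self-contained.
        return Playable.Create(graph);
    }
}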

TextSync Track:

The TextSync action: when used on a Salsa Text Sync Track, it allows configuration and timing of TextSync text on a Timeline track to drive SALSA's implementation of text-to-lipsync via TextSync. It works similarly to the Audio Track and dynamically adjusts its length to be representative of the TextSync processing of text input. And, it dynamically changes the clip container's title to be that of the configured text string.

Using Unity's Timeline to Control SALSA Lip-Sync Audio Dialogue (pt 2) - YouTube
Using Unity's Timeline to Control SALSA TextSync Dialogue (pt 1) - YouTube

We're very excited to announce the release of our new SALSA add-on for UMA DCS. This support package is for the UMA 2.5+ Dynamic Character System (DCS) released May 21, 2017. 

*NOTE: RandomEyes 3D normally supports unlimited custom BlendShapes used for facial expression; however, since UMA DCS characters have no BlendShapes, this feature is unavailable on UMA DCS characters. Instead, we provide custom facial expression functions built into the SalsaUmaSync script that drive the UMAExpressionPlayer.

08/16/2018 - v1.7.0 - Added "Lock Shapes Override" to allow animation when SALSA is not driven by the associated AudioSource.

10/22/2017 - v1.6.0 - For use with UMA DCS, updated to deal with recent UMA API changes.

Installation Instructions
  1. Install SALSA with RandomEyes into your project.
    1. Select [Window] -> [Asset Store]
    2. Once the Asset Store window opens, select the download icon, and download and import [SALSA with RandomEyes].
  2. Install UMA 2 into your project.
    1. Select [Window] -> [Asset Store]
    2. Once the Asset Store window opens, select the download icon, and download and import [UMA 2 - Unity Multipurpose Avatar].
  3. Download and Import the SALSA with RandomEyes UMA Character support package.
    1. Select [Assets] -> [Import Package] -> [Custom Package...]
    2. Browse to the [SALSA_3rdPartySupport_UMA_DCS_{version}.unitypackage] file and [Open].
Quick Start Instructions 
  1. To set up a UMA DCS character:
    1. To set up a new UMA DCS character and add all applicable SALSA components:
      1. [GameObject] -> [Crazy Minnow Studio] -> [UMA DCS] -> [SalsaUmaSync 1-click setup (new DynamicCharacterAvatar)]
    2. To add all applicable SALSA components to an existing UMA DCS character:
      1. [Component] -> [Crazy Minnow Studio] -> [UMA DCS] -> [SalsaUmaSync 1-click setup (existing DynamicCharacterAvatar)]
  2. Add an AudioClip to the Salsa3D [Audio Clip] field.
  3. Link a RuntimeAnimatorController to the SalsaUmaSync [RuntimeAnimatorController] field.
  4. Optionally use the SalsaUmaSync custom expression functions to create facial expressions (a usage sketch follows these steps).
    1. public void SetExpression(Expression expression, float blendSpeed, float rangeOfMotion, float duration)
    2. public void SetExpression(Expression expression, float blendSpeed, float percentage, bool active)
  5. If using UMA in a scenario where SALSA is driven by an external mechanism (such as Dissonance or SalsaTextSync), enable the 'Lock Shapes Override' option, otherwise leave it disabled.
  6. Play the scene.
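
Here is a minimal usage sketch for the SetExpression overloads listed in step 4. The namespace, the SalsaUmaSync reference, and the Expression value used here are assumptions for illustration only; use the expression names and parameter meanings documented for your installed version of the add-on.

using UnityEngine;
using CrazyMinnow.SALSA; // assumed namespace for the SalsaUmaSync add-on

// Hypothetical sketch: drives a UMA facial expression through SalsaUmaSync.
// Expression.Happy and the parameter interpretations are illustrative assumptions.
public class UmaExpressionTester : MonoBehaviour
{
    public SalsaUmaSync salsaUmaSync; // link the component added by the 1-click setup

    void Start()
    {
        // Overload 1: blendSpeed, rangeOfMotion (0-1), and duration in seconds (assumed interpretation).
        salsaUmaSync.SetExpression(Expression.Happy, 0.25f, 0.8f, 2f);
    }
}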

Creating a Prefab for use with Run-Time Systems (such as Dissonance)
  1. Create a working SALSA-UMA model, using the Quick Start steps above.
  2. Click to select the root SALSA_UMA2_DCS game object (this ensures the RandomEyes gizmo is created). (click the image below for a demonstration)
  3. Next, drag the SALSA_UMA2_DCS game object from the scene to a folder in your Project.
  4. The base prefab is now created.
  5. NOTE: The UMA_DCS game object contains the UMA libraries and must remain in the scene for UMA to function properly.

NOTE: While every attempt has been made to ensure the safe content and operation of these files, they are provided as-is, without warranty or guarantee of any kind. By downloading and using these files you are accepting any and all risks associated and release Crazy Minnow Studio, LLC of any and all liability.

Download Files

Simple Automated Lip Sync Approximation
~ We look forward to seeing what you create! ~

Buy SALSA on the Asset Store
micInput-lite is a real-time Unity microphone input asset (currently free for use with SALSA lip-sync).
micInput-lite v1.6.0 is the current stable release version.
micInput-lite v1.7.0-beta is now available and may reduce microphone input lag by up to 10x.

Using a microphone in Unity as a real-time input source for SALSA lip synchronization is a popular feature. However, micInput is continuing to grow into our vision of a stand-alone Unity microphone asset that has much more flexibility than simply being an add-on for SALSA. While the same uber cool capability and simplicity exists for creating lip-synced avatars with real-time microphone input, micInput-lite has reached the next step in becoming self-sufficient as a microphone input asset for any usage scenario. This asset is currently still free for SALSA customers and is available via the download links below.

Buy SALSA on the Asset Store

[UPDATE 2018-04-14] v1.7.0-beta: Experimental change, may significantly reduce microphone input lag. If this is stable and works well for you, please send us an email and let us know!

Implementation Details:

micInput-lite takes a very basic approach to adding real-time Unity microphone input to your SALSA lip-sync project and while still intended as an example code-set for real-time input, it does include some robust capabilities for both design-time and run-time usage. As of version 1.5.0+, micInput functions differently from previous versions. Currently, micInput-lite only requires a Unity AudioSource for microphone input operation. Previously, a SALSA component was also required. However, as of SALSA 1.3.3+, SALSA's lip-sync functionality can operate passively from an AudioSource, rather than aggressively attaching to it. This greatly increases SALSA's flexibility and allows us to reduce this asset to a single API script vice having separate implementations for 2D and 3D SALSA components. Additionally, micInput-lite can be selectively connected to any Unity AudioSource; therefore, micInput-lite no longer automatically adds an AudioSource for Unity microphone use.

The simplicity of micInput-lite remains a priority and the same automated feature set exists as in previous versions, plus some new capabilities. At run-time, micInput-lite will attempt to gain reference to a local AudioSource component if one has not been specified (linked in the custom inspector). The process of dynamically getting the AudioSource component will operate in a non-blocking coroutine and will continue to look for an AudioSource on the gameObject until one is present. NOTE: Without an AudioSource, this asset cannot function and will simply continue to wait for an AudioSource. The benefit of this method is a flexible start-up solution, allowing programmatic implementations of an AudioSource (such as UMA2 character startup procedures). Once the AudioSource is connected, the asset will automatically complete the wire-up of the Unity microphone (using the default microphone if one has not been selected in the Custom Inspector). It will then start microphone recording (by default - selectable by the developer) and terminate the coroutine.

In addition to all of the automated processes, micInput-lite now has an option to disable the auto-start functionality. During startup, automatic wire-up of the Unity microphone will occur, but recording will not start automatically. This feature is very useful for implementations where microphone input is desired in an on-demand scenario. As expected, micInput can be started and stopped on demand via the API.
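
As a point of reference, the core Unity Microphone wiring an asset like micInput-lite performs looks roughly like the sketch below. This is a simplified assumption of the workflow, not micInput-lite's actual code: start the default microphone into a looping AudioClip, wait for data to arrive, then play it through the AudioSource SALSA analyzes.

using System.Collections;
using UnityEngine;

// Simplified sketch of the microphone wire-up described above; not micInput-lite's code.
[RequireComponent(typeof(AudioSource))]
public class SimpleMicFeed : MonoBehaviour
{
    public int sampleRate = 22050; // micInput-lite's default sample rate (v1.6.0+)

    private IEnumerator Start()
    {
        AudioSource source = GetComponent<AudioSource>();

        // Wait until at least one microphone is available.
        while (Microphone.devices.Length == 0)
            yield return null;

        string device = Microphone.devices[0]; // 'Default' device selection simplified to the first entry

        // Record into a one-second looping clip and hand it to the AudioSource.
        source.clip = Microphone.Start(device, true, 1, sampleRate);
        source.loop = true;

        // Give the Microphone class time to start delivering samples before playing.
        while (Microphone.GetPosition(device) <= 0)
            yield return null;

        // SALSA analyzes this AudioSource; route it to an attenuated AudioMixerGroup
        // (see step 7 of the instructions below) so you do not hear your own voice echoed back.
        source.Play();
    }
}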

Run-time lip syncing with a 2D or 3D avatar with SALSA and a microphone is easy and fun! SALSA lip-sync is a Unity game engine asset, available now on the Unity Asset Store. micInput-lite is currently only available as an addon for SALSA.

Instructions (using micInput-lite with SALSA):
  1. NOTE: a video is available demonstrating how to use micInput-lite v1.5.0+ (this video is also applicable to v1.6.0).
  2. The directory structure for micInput-lite has changed. It is highly recommended that micInput versions prior to v1.5.0 be deleted from your project prior to installing the latest version, to ensure the latest version is the only instance of micInput in your project.
  3. Ensure you have at least one microphone attached, enabled, and working. It is also a good practice to make sure it is not being used by another application while trying to use it as an input source for lip synchronization. NOTE: [on PC] the 'Default' microphone selection will attach to the operating system default microphone. [on Mac] the 'Default' microphone selection will attempt to attach to the input device currently selected in the operating system's 'Sound' utility applet.
  4. Import SALSA with RandomEyes into your Unity lip-sync project.
  5. Download and import the micInput-lite unitypackage into your Unity lip-sync project. Scripts are imported into "Crazy Minnow Studio/SALSA with RandomEyes/Addons/micInput/".
  6. Add micInput to a GameObject in your scene. If you are not adding it to the same GameObject that contains the AudioSource linked to SALSA lip-sync, it will be necessary to link micInput to the AudioSource driving SALSA. You can also add the micInput component to your GameObject from the Component menu: Crazy Minnow Studio > Addons.
  7. For Unity 5.2+, follow the steps in this blog entry to configure and link an attenuated AudioMixerGroup to the AudioSource linked to micInput. Ensure Mute Microphone is *not* enabled.
  8. Ensure you have added a Salsa2D or Salsa3D component to the appropriate character model in your scene. SALSA needs to connect to the same AudioSource component micInput will use.
  9. Configure the SALSA component per the documentation. At a minimum, link target mouth shapes to your model's blendshapes (3D) or sprites (2D) in SALSA [SaySmall/SayMedium/SayLarge indexes].
  10. Ensure SALSA is linked to the appropriate AudioSource if it is not attached to the same GameObject as the AudioSource micInput is using.
  11. You might need to adjust your Trigger levels in SALSA's "Speech Properties". This depends on the sensitivity of your mic and how dynamic you want the lip sync movement to be.
  12. Hit <Play> and enjoy lip-syncing your model with your own voice!
Unity Microphone Input using SALSA Lip Sync - YouTube
File Contents:
  • This add-on is provided in .unitypackage format  [micInput-lite_version.unitypackage]
    • CM_MicInput.cs is the core helper class and it also includes a Custom Inspector (editor script) for your convenience.

NOTE: While every attempt has been made to ensure the safe content and operation of these files, they are provided as-is, without warranty or guarantee of any kind. By downloading and using these files you are accepting any and all risks associated and release Crazy Minnow Studio, LLC of any and all liability.

Download Files

Troubleshooting and Operational Notes:
  • micInput-lite is no longer bound to a SALSA component; its only dependency is an AudioSource component that SALSA then uses for lip-sync. This AudioSource does not necessarily need to exist on the same GameObject, nor does it need to exist on micInput startup.
  • IMPORTANT! all-versions/platforms: (Unity v5.2+) for proper lip-sync functionality, the microphone/AudioSource cannot be muted. Please see the following blog post for information on setting up real-time microphone lip-sync with SALSA.
  • IMPORTANT! (OS X): Selecting multiple microphones for input may not be supported on your platform. Save your project before testing this feature. In our testing, Unity didn't quite know what to do with itself and required a force-quit.
  • (OS X): Default microphone selection may not work reliably. Select the microphone explicitly if 'Default' does not work as expected.
  • If microphone statuses change, use Refresh Mic List to update the Available Microphones list for selection.
  • If there are no microphones available, the Available Microphones list will display 'ERROR - no microphones available'.
  • Lip synchronization with a microphone also requires an AudioSource component to be present on your object or, if using an AudioSource on a different object, it must be linked to micInput.
  • On mobile devices (Android), lip sync works well, but we have seen some anomalous issues when the application is task-switched to another application and then back. If the microphone seems to misbehave after doing this, try switching to the home screen and then back into the application.
  • Problems with onFocus/Pause changes on desktop clients have been repaired with the latest script version (v1.0.2+).
  • As would be expected, microphone input will only work with one application asset at a time. Windows operating systems may allow more than one microphone input to be active at once on the same computer - this is not guaranteed behavior. If it is desirable to use microphone input for multiple assets/models from the same computer, it would be necessary to programmatically disconnect one asset and then connect another.
Release Notes:

v1.7.0-beta - (2018-04-14)[experimental]:

  • + added 1/10 sec timing delay to startup -- gives Unity Microphone and AudioClip classes time to spin up and respond to data requests. In testing, this is imperceptible in startup time and has shown a 10x improvement in microphone input lag/delay.
  • ~ moved micInput component menu location to Crazy Minnow Studio > SALSA > Addons > micInput

v1.6.0 - (2015-12-22) [recommended]:

  • + new isRecording read-only property to see if mic is started and recording.
  • + new isAutoStart mode.
  • + added additional checks and logging for missing AudioSource or mic references. Breaking actions now throw LogWarning() in isDebug.
  • + added logging during coroutines if isDebug.
  • + added sampleRate setting prior to starting the microphone. Ensures valid freq with Editor script processing during runtime.
  • ~ StartMicrophone(): removed the blocking while loop.
  • ~ cleaned up and optimized logic and logging.
  • ~ default sampleRate changed to 22050.
  • ~ CheckFreqCapability() now processes all sampleRate-related functionality.

v1.5.0 BETA - (2015-12-07):

  • - Removed dependency on Salsa2D or Salsa3D type (requires SALSA 1.3.3+). SALSA components are no longer auto-added to the gameObject when micInput is attached.
  • - Removed dependency on local attached AudioSource: now public and if not found, will look for local AudioSource component. An AudioSource component is no longer automatically added to the gameObject when micInput is attached.
  • ~ micInput now looks for an AudioSource in a coroutine to ensure runtime-created AudioSources have sufficient time to spin up. This is beneficial for workflows such as: SALSA with UMA2.
  • ~ It is no longer necessary to attach the micInput addon to the same GameObject SALSA is attached to.

v1.0.8 BETA - (2015-11-13):

  • + Available Microphones selection option in a new Custom Inspector.
  • + Refresh Mic List button added to update the list if available microphones changes prior to running the app/game.
  • ~ bool isMuted (Mute Microphone) now defaults to false.

v1.0.7 - (2015-06-10) [legacy]:

  • ~ Additional error trapping and restructuring to prevent errors if no microphone is attached/found.

v1.0.6 - (2015-05-20):

  • ~ StartMicrophone() and StopMicrophone() are now public functions.
  • + Demo scene added.

v1.0.5 (2015-02-20):

  • ~ Unity 5 ready!

SALSA: Simple Automated Lip Sync Approximation - available on the Unity Asset Store

Watch our lip-sync tutorial video for Real-Time Microphone Input:
Unity Microphone Input using SALSA Lip Sync - YouTube

version 1.5.0

Real-Time Microphone Input for SALSA Lip Sync - YouTube

version 1.0 legacy

Buy SALSA on the Asset Store

Simple Automated Lip Sync Approximation
~ We look forward to seeing what you create! ~

Updated: 2018-04-14 | Originally posted: 12 December 2014, 11:49 am

UPDATE: 2018-08-06: v0.6.0 updated to fix obsolete Unity API calls.

Amplitude for WebGL provides easy access to audio analysis functions for amplitude and frequency data. Being a web-based toolset, it makes sense that our customers would want to access web-based audio files to feed to Amplitude. In fact, we have already had a couple of questions in our Unity forum thread for Amplitude asking how to make this work, and the assumption was that it was a bug or implementation problem with Amplitude itself. The process of fetching and using async web-based audio can appear to be a bit daunting. The difficulty is not an Amplitude or WebGL issue; it is an issue of timing and dealing with the asynchronous nature of the web.

During our research, we created a small API in an attempt to make the process much easier. You will most likely need to understand Unity's implementation of web-based resources, especially if you wish to modify or troubleshoot issues you may be having. WebAudioUrls is available as a free package for Amplitude for WebGL customers -- as a starting point for their async audio projects. Please feel free to make modifications to the API files if necessary. Since Unity's implementation of WebGL is so basic and constantly changing, we do not provide support for web-audio, Unity's implementation of web-request resources, etc. Consider this package a starting point for asynchronous operations in your project. You can get the files in our download section.

We have also published an Examples package, which includes single-action and batch-mode implementation examples, as well as a demo scene. Once you have grabbed the files, head over to the documentation for a detailed look at how to use the API and check out the release notes. A video tutorial is available here.

PLEASE NOTE: most browsers do not like cross-site access. If your project attempts to access audio from another site, you may experience a CORS issue. If you are only testing, you can probably bypass the issue with a browser plug-in. Google is your friend -- the Mozilla site has some good information about CORS. If the examples are not working in a browser, use your browser's development mode (usually F12) to see if you are receiving errors.

Read the WebAudioUrls add-on Documentation

Requirements: Accessing web-audio means dealing with the asynchronous tendencies of the Internet (first, accessing the URL and then streaming in the data to create an AudioClip). The API is not limited to WebGL usage and certainly not limited to Amplitude implementations. In fact, the API itself has no requirements for any asset package. The example files, on the other hand, do have dependencies in the demo scene: Amplitude, SALSA with RandomEyes, and AmplitudeSALSA. ALSO NOTE: depending on your implementation, you may run across a CORS issue. See the preceding paragraph.
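
For orientation, here is a minimal coroutine sketch of the kind of async fetch the WebAudioUrls API wraps, using Unity's own UnityWebRequestMultimedia. The URL, AudioType, and callback wiring are illustrative assumptions; this is not the WebAudioUrls API itself.

using System;
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Minimal sketch of fetching a web-hosted audio file asynchronously and handing the
// resulting AudioClip to a callback (e.g., to feed Amplitude or a SALSA AudioSource).
public class WebAudioFetcher : MonoBehaviour
{
    public IEnumerator FetchClip(string url, Action<AudioClip> onLoaded)
    {
        // The AudioType must match the file format; OGGVORBIS is an example assumption.
        using (UnityWebRequest request = UnityWebRequestMultimedia.GetAudioClip(url, AudioType.OGGVORBIS))
        {
            yield return request.SendWebRequest();

            if (request.isNetworkError || request.isHttpError)
            {
                // CORS failures typically surface here when running in a browser.
                Debug.LogWarning("Audio fetch failed: " + request.error);
                yield break;
            }

            onLoaded(DownloadHandlerAudioClip.GetContent(request));
        }
    }
}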

Amplitude for WebGL Leveraging Web-based Audio Resources - YouTube

NOTE: While every attempt has been made to ensure the safe content and operation of these files, they are provided as-is, without warranty or guarantee of any kind. By downloading and using these files you are accepting any and all risks associated and release Crazy Minnow Studio, LLC of any and all liability.

Download Files

TextSync is a free add-on for SALSA lipsync that adds text-based simulated lipsync to your SALSA enabled characters. Simply add TextSync to your SALSA enabled character, set the desired words-per-minute rate, and pass text to the TextSync.Say method. SALSA will simulate lipsync to the text for the duration calculated from the words-per-minute setting.

UPDATES:
2018-08-01: v1.4.0 beta - Removed a missing test script from the example scene. Fixed a bug with audio clip checking.

2018-07-31: v1.3.0 beta - Added a "No Spaces" property that splits the text by an average word length that can be specified in the inspector.

2018-03-08: Use Unity Timeline to control text changes for TextSync. Get the addon in the SALSA Downloads section. Watch the video.
2017-03-27: v1.1 beta.

Free add-on to add text-to-lipsync capability to SALSA enabled characters

Be sure to install the SALSA with RandomEyes Asset before adding this add-on.

  • Set up your SALSA enabled character as you normally would.
  • Add the TextSync component to your SALSA enabled character.
  • Set the SALSA type (2D/3D).
  • Set the desired words-per-minute rate.
  • Pass dialogue text to the TextSync.Say method.

Here is a simple example of how to call the TextSync.Say method.


using UnityEngine;
using CrazyMinnow.SALSA.TextSync;

public class CM_TextSyncTester : MonoBehaviour 
{
	public CM_TextSync textSync; // link the CM_TextSync component on your SALSA enabled character
	public string dialogue = "Here is some text to demonstrate text-to-lipsync.";
	public bool fire = false; // tick this checkbox in the inspector to fire the dialogue

	void Update ()
	{
		if (fire)
		{
			fire = false; // reset the inspector toggle
			textSync.Say(dialogue); // SALSA simulates lip-sync for the duration derived from words-per-minute
		}
	}
}

NOTE: While every attempt has been made to ensure the safe content and operation of these files, they are provided as-is, without warranty or guarantee of any kind. By downloading and using these files you are accepting any and all risks associated and release Crazy Minnow Studio, LLC of any and all liability.

Download Files

Simple Automated Lip Sync Approximation
~ We look forward to seeing what you create! ~

Buy SALSA on the Asset Store

After the launch of the IBM Watson Unity SDK, we were curious about how the text-to-speech service might work with SALSA to deliver real-time text-to-speech-based lip-sync. This can already be achieved with the wonderful RT-Voice asset from Crosstales, but we take pride in SALSA working with as many solutions as possible to make our customers' options as broad as possible. After signing up for a free account and having a look at the documentation, we found the IBM Watson text-to-speech service simple to use and a great pairing with SALSA lipsync.

NOTE: This is a simple demonstration script. Implementation of Watson or the Unity Watson SDK is beyond the support scope of Crazy Minnow Studio and/or SALSA Lip-Sync. It is a proof-of-concept and not a full implementation of the Watson API. Implementation support questions should be directed to IBM's support tiers. Crazy Minnow Studio does not offer support for Watson implementation of any kind. Please also note, the Watson API support team does not support Watson on a Web-Player or WebGL platform.

SALSA requires no special handling to work with any service that produces output via a standard Unity AudioSource. Likewise, if your implementation of Watson text-to-speech utilizes a standard Unity AudioSource, SALSA should work without issue. Simply output the Watson-generated AudioClip to the same AudioSource component that SALSA references and you're good to go. I've included a very basic script below that demonstrates the process.

Code Example

NOTE: While every attempt has been made to ensure the safe content and operation of these files, they are provided as-is, without warranty or guarantee of any kind. By downloading and using these files you are accepting any and all risks associated and release Crazy Minnow Studio, LLC of any and all liability.


using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using IBM.Watson.DeveloperCloud.Services.TextToSpeech.v1;
using IBM.Watson.DeveloperCloud.Utilities;
using IBM.Watson.DeveloperCloud.Connection;
using IBM.Watson.DeveloperCloud.Logging;

public class WatsonTTS : MonoBehaviour
{
    public string url; // Your IBM Watson URL
    public string user; // Your IBM Watson username
    public string pass; // Your IBM Watson password
    public string text = "Hello SALSA, I'm Watson, your IBM services representative, how can I help you?";
    public bool play;

    private Credentials credentials;
    private TextToSpeech textToSpeech;
    private AudioSource audioSrc;

    void Start ()
    {
        credentials = new Credentials(user, pass, url);
        textToSpeech = new TextToSpeech(credentials);
        audioSrc = GetComponent<AudioSource>(); // Get the SALSA AudioSource from this GameObject
    }

    private void Update()
    {
        if (play)
        {
            play = false;
            GetTTS();
        }
    }

    private void GetTTS()
    {
        textToSpeech.Voice = VoiceType.en_US_Michael;
        textToSpeech.ToSpeech(OnSuccess, OnFail, text, false);
    }

    private void OnSuccess(AudioClip clip, Dictionary<string, object> customData)
    {
        if (Application.isPlaying && clip != null && audioSrc != null)
        {
            audioSrc.spatialBlend = 0.0f;
            audioSrc.clip = clip;
            audioSrc.Play();
        }
    }

    private void OnFail(RESTConnector.Error error, Dictionary<string, object> customData)
    {
        Log.Error("ExampleTextToSpeech.OnFail()", "Error received: {0}", error.ToString());
    }
}

