I’m trying to figure out how to connect SoundManager2 with Web Audio API for creating audio visualizations. I understand that Web Audio API can use HTML <audio> elements as input sources to analyze audio data and create frequency visualizations. The problem is that SoundManager2 doesn’t appear to generate standard <audio> elements that I can reference.
I want to keep using SoundManager2 because it has great features like tracking download progress, timing callbacks, and other functionality that’s either missing or hard to implement in Web Audio API alone. But I also want to use Web Audio API for the visualization part to avoid needing Flash plugins.
Is there a way to bridge these two technologies? How can I capture the audio output from SoundManager2 and feed it into Web Audio API for frequency analysis and visualization purposes?
Direct bridging won’t work: Web Audio API needs access to the actual audio stream, but SoundManager2 wraps everything in Flash. I ran into this exact problem on a music visualization project last year. Instead of fighting SoundManager2’s output, use createMediaElementSource() with hidden audio elements. Here’s what worked for me: keep SoundManager2 for your UI feedback and progress tracking, but load the same audio files through regular HTML5 audio elements that connect to Web Audio API for visualization. Syncing the two systems is the tricky part, but you can handle it by letting SoundManager2 drive the HTML5 element’s currentTime property. You’ll get both systems working together without the headache of trying to tap into Flash’s audio output.
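A rough sketch of that setup, assuming a hypothetical file "track.mp3" and a drift tolerance of 200 ms (both are illustrative, not from either library's docs). SoundManager2 stays the audible player and the timing authority; the hidden element feeds the analyser only, so it stays silent:

```javascript
// SoundManager2 reports position in ms; <audio>.currentTime is in seconds.
// Pure helper so the drift check is testable on its own.
function needsResync(smPositionMs, elementTimeSec, toleranceMs = 200) {
  return Math.abs(smPositionMs - elementTimeSec * 1000) > toleranceMs;
}

// Browser-only wiring, guarded so the helper above can run anywhere.
if (typeof window !== "undefined" && window.AudioContext) {
  const audioEl = new Audio("track.mp3"); // same file SM2 is playing (assumed name)
  audioEl.crossOrigin = "anonymous";

  const ctx = new AudioContext();
  const source = ctx.createMediaElementSource(audioEl);
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 256;
  // Connect only to the analyser, NOT to ctx.destination: once a
  // MediaElementSource exists, the element's audio routes through the
  // graph, so this copy is silent while SM2 handles audible playback.
  source.connect(analyser);

  soundManager.createSound({
    id: "uiTrack",
    url: "track.mp3",
    whileplaying() {
      // SM2 is the timing authority; nudge the hidden element when it drifts.
      if (needsResync(this.position, audioEl.currentTime)) {
        audioEl.currentTime = this.position / 1000;
      }
    },
  });
}
```

The whileplaying callback fires frequently enough that a 100–200 ms tolerance keeps the analyser visually in step without constant seeking.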
Drop SoundManager2 - it’s 2024 and Flash is dead. Use the Web Audio API instead. XMLHttpRequest gives you progress tracking, and AudioContext.decodeAudioData() handles loading. Much cleaner than hacking two incompatible systems together.
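For reference, the all-Web-Audio approach looks roughly like this (the URL "track.mp3" and the console logging are placeholders; this is a sketch, not a drop-in replacement for everything SoundManager2 does):

```javascript
// Pure helper so the progress math is testable outside a browser.
function progressPercent(loaded, total) {
  return total > 0 ? Math.round((loaded / total) * 100) : 0;
}

// Browser-only: fetch bytes with XHR (for progress events), then decode.
if (typeof window !== "undefined" && window.AudioContext) {
  const ctx = new AudioContext();
  const xhr = new XMLHttpRequest();
  xhr.open("GET", "track.mp3"); // assumed URL
  xhr.responseType = "arraybuffer";
  xhr.onprogress = (e) => {
    if (e.lengthComputable) {
      console.log("loaded: " + progressPercent(e.loaded, e.total) + "%");
    }
  };
  xhr.onload = () => {
    ctx.decodeAudioData(xhr.response, (buffer) => {
      const src = ctx.createBufferSource();
      src.buffer = buffer;
      src.connect(ctx.destination);
      src.start();
    });
  };
  xhr.send();
}
```

Note that decodeAudioData loads the whole file into memory before playback starts, so for long tracks the media-element approach in the other answer may still be the better fit.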
I ran into this same problem updating an audio app recently. SoundManager2’s Flash dependency creates a mess: it keeps the audio stream completely out of reach of the Web Audio API. Here’s what worked for me: I built a hybrid setup using SoundManager2 for user controls and track monitoring, then ran a separate Web Audio API instance for actual playback and visuals. You’ve got to sync both streams using SoundManager2’s callbacks or it’ll sound terrible. For timing issues, I added a small buffer delay (100-200ms) and used requestAnimationFrame for the visual updates. Honestly though, you might want to ditch SoundManager2 entirely - between them, the Web Audio API and the Fetch API can handle pretty much everything it does now.
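The requestAnimationFrame draw loop mentioned above can be sketched like this; the canvas id "viz" and the 256-point FFT are assumptions, and the analyser would be fed by whichever source node your setup uses:

```javascript
// Pure helper: map 0..255 frequency bytes to pixel bar heights (testable).
function barHeights(freqData, canvasHeight) {
  return Array.from(freqData, (v) => Math.round((v / 255) * canvasHeight));
}

// Browser-only rendering loop, guarded for non-browser environments.
if (typeof window !== "undefined" && window.AudioContext) {
  const ctx = new AudioContext();
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 256; // 128 frequency bins
  const data = new Uint8Array(analyser.frequencyBinCount);

  const canvas = document.getElementById("viz"); // assumed element id
  const g = canvas.getContext("2d");

  function draw() {
    requestAnimationFrame(draw);
    analyser.getByteFrequencyData(data);
    g.clearRect(0, 0, canvas.width, canvas.height);
    const heights = barHeights(data, canvas.height);
    const barWidth = canvas.width / heights.length;
    heights.forEach((h, i) => {
      g.fillRect(i * barWidth, canvas.height - h, barWidth - 1, h);
    });
  }
  draw();
}
```

Driving the visuals from requestAnimationFrame rather than SoundManager2's callbacks keeps the drawing locked to the display's refresh rate, which is why the two systems can stay loosely coupled.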