### Is there an existing issue for this?

- [X] I have searched the existing issues.

### Which plugins are affected?

Other

### Which platforms are affected?

No response

### Description
The docs don't contain a working example: most of the method names don't exist, and there is no example of passing an audio stream into the model. Here is the sample from the docs, annotated with the problems:
```dart
late LiveModelSession _session; // should be `LiveSession`
final _audioRecorder = YourAudioRecorder();

// Initialize the Vertex AI Gemini API backend service
// Create a `LiveModel` instance with the model that supports the Live API
final model = FirebaseAI.vertexAI().liveModel( // should be `liveGenerationModel`
  model: 'gemini-2.0-flash-live-preview-04-09',
  // Configure the model to respond with audio
  config: LiveGenerationConfig(responseModalities: [ResponseModality.audio]), // should be `ResponseModalities`
);

_session = await model.connect();

final audioRecordStream = _audioRecorder.startRecordingStream();
// Map the Uint8List stream to an InlineDataPart stream
final mediaChunkStream = audioRecordStream.map((data) {
  return InlineDataPart('audio/pcm', data);
});
await _session.startMediaStream(mediaChunkStream); // no method called `startMediaStream` exists

// In a separate thread, receive the audio response from the model
await for (final message in _session.receive()) {
  // Process the received message
  // How? The docs never explain how to process the message.
}
```
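For context: independent of the Firebase types, the receive loop the docs stop at is just ordinary Dart stream consumption. Below is a self-contained sketch of that pattern using plain Dart only; `fakeReceive` is a stand-in defined in the snippet, not a Firebase API, since the docs never say what `_session.receive()` actually emits or how to handle it.

```dart
import 'dart:async';
import 'dart:typed_data';

/// Stand-in for the SDK's session stream: emits raw audio chunks.
/// (Hypothetical - the real message type is exactly what the docs omit.)
Stream<Uint8List> fakeReceive() async* {
  yield Uint8List.fromList([1, 2, 3]);
  yield Uint8List.fromList([4, 5, 6]);
}

Future<void> main() async {
  final audioBytes = <int>[];
  // The part the docs gesture at but never show: drain the stream and
  // accumulate (or hand off to a player) each audio chunk as it arrives.
  await for (final chunk in fakeReceive()) {
    audioBytes.addAll(chunk);
  }
  print('received ${audioBytes.length} bytes');
}
```

A working docs example would need the equivalent of this loop written against the real message type, including how to extract the audio payload from each message.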
### Reproducing the issue
Docs link: https://firebase.google.com/docs/ai-logic/live-api#audio-in-audio-out
### Firebase Core version
3.13.0
### Flutter Version
3.32
### Relevant Log Output
No response

### Flutter dependencies

```shell
Replace this line with the contents of your `flutter pub deps -- --style=compact`.
```
### Additional context and comments
No response