Releases: muxinc/mux-python
3.11.3
3.11.2
3.11.1
3.11.0
General:
- Assorted documentation improvements
- Added MIT license to setup.py
Mux Video:
- Added Multi-track Audio
- Added 4K support and the max_resolution_tier parameter
- Added Encoding tiers
- Added Generated captions (beta)
- Deprecated max_stored_resolution, replaced with resolution_tier
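A minimal sketch of how the new resolution tier options might be used when creating an asset with the SDK; the input URL is a placeholder, and '2160p' is assumed to be the max_resolution_tier value that opts an asset into 4K:

```python
import os
import mux_python

# Authenticate with a Mux access token (assumed to be in environment variables).
configuration = mux_python.Configuration()
configuration.username = os.environ['MUX_TOKEN_ID']
configuration.password = os.environ['MUX_TOKEN_SECRET']

assets_api = mux_python.AssetsApi(mux_python.ApiClient(configuration))

# Opt the asset into 4K; '2160p' is assumed to be the max_resolution_tier value
# for 4K, and the input URL is a placeholder.
create_asset_request = mux_python.CreateAssetRequest(
    input=[mux_python.InputSettings(url='https://example.com/video.mp4')],
    playback_policy=[mux_python.PlaybackPolicy.PUBLIC],
    max_resolution_tier='2160p',
)
asset = assets_api.create_asset(create_asset_request)

# resolution_tier on the asset replaces the deprecated max_stored_resolution.
print(asset.data.resolution_tier)
```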
Mux Data:
- Added Video startup failure
- Added Monitoring breakdown timeseries
3.10.0
This release adds parameters to the responses of some API calls (to better match our implementation) and improves the unit test that exercises the signing key routes. It also includes a few documentation updates.
3.9.0
This release adds the ability to report delivered seconds by resolution, in support of the new resolution-based pricing. It also adds the crop layout to Spaces broadcasts.
3.8.0
This release updates the Mux Data portion of the SDK to use the new Monitoring API.
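A rough sketch of calling the Monitoring endpoints through the regenerated client; the MonitoringApi class and list_monitoring_dimensions method names are assumed from the generated operation names:

```python
import os
import mux_python

configuration = mux_python.Configuration()
configuration.username = os.environ['MUX_TOKEN_ID']
configuration.password = os.environ['MUX_TOKEN_SECRET']

# MonitoringApi is assumed to be the generated class backing the new
# Monitoring endpoints.
monitoring_api = mux_python.MonitoringApi(mux_python.ApiClient(configuration))

# List the dimensions that monitoring data can be broken down by.
dimensions = monitoring_api.list_monitoring_dimensions()
print(dimensions.data)
```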
3.7.1
This is a minor release to update the PyPI metadata and to remove some trailing spaces in the documentation.
What's Changed
- fix(setup.py): pypi metadata by @sbdchd in #50
- Fix trailing whitespace in the model doc generation by @jaredsmith in #51
3.7.0
Reconnect Window with slates is now available in public beta. Check out the blog post for more information.
Changed Operations
- adds reconnect_slate_url and use_slate_for_standard_latency parameters for creating and updating Live Streams
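A sketch of setting the new slate parameters when creating a live stream; the slate URL is a placeholder and the keyword arguments are assumed to match the parameter names above:

```python
import os
import mux_python

configuration = mux_python.Configuration()
configuration.username = os.environ['MUX_TOKEN_ID']
configuration.password = os.environ['MUX_TOKEN_SECRET']

live_api = mux_python.LiveStreamsApi(mux_python.ApiClient(configuration))

# Show a custom slate while a disconnected encoder reconnects, and opt a
# standard-latency stream into slates; both kwargs are assumed to be accepted
# by CreateLiveStreamRequest as of this release.
create_live_stream_request = mux_python.CreateLiveStreamRequest(
    playback_policy=[mux_python.PlaybackPolicy.PUBLIC],
    new_asset_settings=mux_python.CreateAssetRequest(
        playback_policy=[mux_python.PlaybackPolicy.PUBLIC]
    ),
    reconnect_slate_url='https://example.com/slate.png',
    use_slate_for_standard_latency=True,
)
live_stream = live_api.create_live_stream(create_live_stream_request)
print(live_stream.data.id)
```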
3.6.0: Transcription Vocabularies
Transcription vocabularies are live! Check out the docs for an in-depth guide.
Changed Operations
- adds text_source enum field on Tracks, indicating where the text in the track comes from:
  - uploaded tracks were created via the create-asset-track operation
  - embedded tracks were created from embedded CEA-608 closed captions in a live stream
  - generated_live tracks were generated using speech recognition during a live stream
  - generated_live_final tracks were generated using speech recognition after the end of a live stream and have higher quality text, timing, and formatting
- adds generated_subtitles to CreateLiveStreamRequest and LiveStream schemas, for configuring speech-recognition-based generation of text tracks for live streams
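Building on the live stream creation pattern above, a sketch of what a generated_subtitles configuration might look like; the LiveStreamGeneratedSubtitleSettings model name and its fields are assumptions based on the schema names above, and the vocabulary ID is a placeholder:

```python
import mux_python

# Model and field names are assumed; the vocabulary ID is a placeholder.
subtitle_settings = mux_python.LiveStreamGeneratedSubtitleSettings(
    name='English (auto-generated)',
    language_code='en',
    transcription_vocabulary_ids=['<TRANSCRIPTION_VOCABULARY_ID>'],
)

create_live_stream_request = mux_python.CreateLiveStreamRequest(
    playback_policy=[mux_python.PlaybackPolicy.PUBLIC],
    generated_subtitles=[subtitle_settings],
)
```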
New Operations
- create-transcription-vocabulary
- list-transcription-vocabularies
- get-transcription-vocabulary
- update-transcription-vocabulary
- delete-transcription-vocabulary
- update-live-stream-generated-subtitles
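A sketch of the Transcription Vocabulary operations via the generated client; the TranscriptionVocabulariesApi class and CreateTranscriptionVocabularyRequest model are assumed from the operation names above:

```python
import os
import mux_python

configuration = mux_python.Configuration()
configuration.username = os.environ['MUX_TOKEN_ID']
configuration.password = os.environ['MUX_TOKEN_SECRET']

vocab_api = mux_python.TranscriptionVocabulariesApi(mux_python.ApiClient(configuration))

# create-transcription-vocabulary: register phrases to bias speech recognition.
vocabulary = vocab_api.create_transcription_vocabulary(
    mux_python.CreateTranscriptionVocabularyRequest(
        name='Product terms',
        phrases=['Mux', 'mux-python', 'Transcription Vocabulary'],
    )
)
print(vocabulary.data.id)

# list-transcription-vocabularies: page through existing vocabularies.
for item in vocab_api.list_transcription_vocabularies().data:
    print(item.id, item.name)
```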