
Tradeoffs - Streaming Media Lens

Tradeoffs

SM_PERF5: What tradeoffs have you made in media processing to improve client experience and lower bandwidth costs?  
SM_PBP11 – Optimize the number of adaptive bitrate renditions for your workload
SM_PBP12 – Select appropriate encoding settings for your content type and quality targets
SM_PBP13 – Trade higher content processing cost for lower delivery costs for popular content

For media delivery applications, protocol selection and configuration have a dramatic impact on client performance. Progressive file downloads or RTMP streaming can be slow to start playback, costly to scale, or inflexible. Instead, use HTTP-based adaptive bitrate (ABR) protocols, such as Apple HLS, combined with web caching mechanisms to improve distribution efficiency. These protocols also enable clients to select the optimal rendition for playback based on network connection, display resolution, and other client-side characteristics, greatly improving viewer experience.
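To illustrate the client-side selection these protocols enable, the following is a minimal sketch of rendition selection driven by measured throughput and display resolution; the rendition list, safety margin, and thresholds are hypothetical illustration values, not drawn from any specific player implementation.

```python
# Minimal sketch of client-side ABR rendition selection.
# The rendition list, safety margin, and thresholds are hypothetical
# illustration values, not taken from any specific player.

RENDITIONS = [
    # (bitrate in bits per second, vertical resolution), highest first
    (6_000_000, 1080),
    (3_000_000, 720),
    (1_500_000, 540),
    (750_000, 360),
]

def select_rendition(measured_bps: float, display_height: int,
                     safety_margin: float = 0.8) -> tuple[int, int]:
    """Pick the highest rendition that fits within the measured bandwidth
    (with headroom) and does not exceed the display resolution."""
    usable_bps = measured_bps * safety_margin
    for bitrate, height in RENDITIONS:
        if bitrate <= usable_bps and height <= display_height:
            return (bitrate, height)
    return RENDITIONS[-1]  # fall back to the lowest rendition

print(select_rendition(measured_bps=4_000_000, display_height=1080))
# (3000000, 720): the 6 Mbps rendition does not fit in 4 Mbps * 0.8 headroom
```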

The ABR ladder, or number of logical renditions available to an individual client, should be tuned to meet the needs of the specific application, taking into account:

  • Playback quality

  • Client and display ecosystem

  • User geography and network connectivity

Providing too many renditions can increase encoding costs and cause clients to fluctuate frequently between renditions, reducing perceived quality. Not providing enough renditions can leave usable bandwidth underutilized, which also reduces quality. Our general recommendation is to determine the maximum bitrate first, then divide by a factor of 1.5–2 for each step down the ladder. Steps of this size let clients make meaningful jumps in quality without switching so often that end users perceive the changes. If your application is delivering to Apple devices, refer to Apple TN2224 for additional guidance on creating adaptive bitrate content. If you are using AWS Elemental MediaConvert, the Auto ABR capability can automatically determine an optimal ABR ladder for you based on the specific characteristics of the content.
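As a concrete illustration of the step-down guidance, the sketch below derives candidate ladder bitrates by starting from a maximum bitrate and dividing by a fixed factor between 1.5 and 2 at each step; the starting bitrate, step factor, and floor are assumptions to adapt to your content and client ecosystem, not recommended values.

```python
# Sketch: derive candidate ABR ladder bitrates from a chosen maximum,
# stepping down by a constant factor between 1.5 and 2.
# The maximum bitrate, step factor, and floor below are illustrative only.

def build_ladder(max_kbps: int, factor: float = 1.7,
                 floor_kbps: int = 400) -> list[int]:
    """Return bitrates (kbps) from max_kbps down to floor_kbps,
    dividing by `factor` at each step."""
    ladder = []
    kbps = float(max_kbps)
    while kbps >= floor_kbps:
        ladder.append(round(kbps))
        kbps /= factor
    return ladder

print(build_ladder(8000))
# roughly [8000, 4706, 2768, 1628, 958, 563] kbps, each step ~1.7x apart
```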

SM_PERF6: What tradeoffs have you made to lower live glass-to-glass latency?
SM_PBP14 – Optimize processing, origination, delivery, and client for low latency
SM_PBP15 – Remove unnecessary processing stages

Latency is inherent in any broadcasting or streaming platform. Over-the-air live broadcast glass-to-glass latency ranges between 3 and 12 seconds, with an average of about 6 seconds often seen in practice. This means that, on average, roughly 6 seconds can pass before an event captured by the camera is displayed on the playback device. For streaming platforms, this latency can vary anywhere from 3 to 90 seconds, depending on various design choices. Typically, achieving low latency in a streaming platform is a tradeoff against other critical aspects of the streaming experience, such as video quality, re-buffering rate, error rates, and other quality-of-service indicators.

With HTTP-based streaming, latency mainly depends on the media segment length. For instance, the Apple HLS specification recommends buffering at least three segments for best performance, which directly influences latency. Other factors in the media delivery pipeline that influence latency include the video encoding operations, the duration of ingest and packaging operations, network propagation delays, and the CDN. In most cases, the player buffer carries the largest share of the overall latency.
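To make the relationship between segment length and player buffer concrete, the sketch below adds up an illustrative latency budget; every component value is an assumption chosen for the example rather than a measurement of any particular encoder, packager, or CDN.

```python
# Sketch: rough glass-to-glass latency budget for HTTP-based streaming.
# Every component value here is an illustrative assumption, not a
# measurement of any particular service.

def glass_to_glass_latency(segment_seconds: float,
                           player_buffer_segments: int = 3,
                           encode_seconds: float = 2.0,
                           ingest_package_seconds: float = 1.0,
                           cdn_propagation_seconds: float = 0.5) -> float:
    """Estimate end-to-end latency; the player buffer
    (segments held before playback starts) usually dominates."""
    player_buffer = segment_seconds * player_buffer_segments
    return (encode_seconds + ingest_package_seconds +
            cdn_propagation_seconds + player_buffer)

# 6-second segments vs 2-second segments, other stages unchanged:
print(glass_to_glass_latency(6.0))  # 2 + 1 + 0.5 + 18 = 21.5 seconds
print(glass_to_glass_latency(2.0))  # 2 + 1 + 0.5 + 6  = 9.5 seconds
```

With everything else equal, moving from 6-second to 2-second segments cuts the player-buffer contribution in this example from 18 seconds to 6 seconds.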

There are several tradeoffs to consider in low-latency media streaming design. Shorter media segment lengths result in more requests to the caching servers and, in turn, to the origin. This is fairly manageable by a CDN, especially if it supports HTTP/2 at the edge and HTTP/1.1 to the origin. As previously mentioned, encoding parameters have an impact on latency, and optimizations for latency typically impact video quality. For example, setting the encoder lookahead to a low value improves latency but reduces output quality for demanding scene changes. If your content does not have dramatic scene changes, keeping this value low will not have a noticeable impact on video quality.
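As one way to experiment with the lookahead tradeoff outside of a managed service, the sketch below lowers the x264 rate-control lookahead when encoding with ffmpeg and libx264; this is a generic open-source example rather than AWS Elemental configuration, and the file names, bitrate, GOP length, and lookahead value are illustrative assumptions.

```python
# Sketch: lower the encoder lookahead to trade quality headroom for latency.
# Uses ffmpeg with libx264 as a generic, non-AWS example; the file names,
# bitrate, GOP length, and lookahead value are illustrative assumptions.
import subprocess

cmd = [
    "ffmpeg", "-i", "input.mp4",
    "-c:v", "libx264",
    "-b:v", "3000k",
    "-g", "48",                         # 2-second GOP at 24 fps
    "-x264-params", "rc-lookahead=10",  # smaller lookahead lowers latency, but
                                        # may hurt quality on hard scene changes
    "-c:a", "aac",
    "output.mp4",
]
subprocess.run(cmd, check=True)
```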
