Tradeoffs - Streaming Media Lens

Tradeoffs

SM_PERF5: What tradeoffs have you made in media processing to improve client experience and lower bandwidth costs?  
SM_PBP11 – Optimize the number of adaptive bitrate renditions for your workload
SM_PBP12 – Select appropriate encoding settings for your content type and quality targets
SM_PBP13 – Trade higher content processing cost for lower delivery costs for popular content

For media delivery applications, protocol selection and configuration have a dramatic impact on client performance. Progressive file downloads or RTMP streaming can be slow to download, costly to scale, or inflexible. Instead, use HTTP-based adaptive bitrate (ABR) protocols, like Apple HLS, combined with web caching mechanisms to improve distribution efficiency. These protocols also enable clients to select the optimal rendition for playback based on network connection, display resolution, and other client-side characteristics, greatly improving viewer experience.
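
To make the mechanism concrete, the following sketch prints a minimal HLS multivariant (master) playlist of the kind a client downloads first and then uses to select a rendition. The bitrates, resolutions, and child playlist paths are illustrative assumptions, not recommendations.

    # Minimal sketch: emit an HLS multivariant playlist listing several
    # renditions so the client can pick one that fits its bandwidth and
    # display. All bitrates, resolutions, and paths are illustrative.
    renditions = [
        {"bandwidth": 6_000_000, "resolution": "1920x1080", "uri": "1080p/index.m3u8"},
        {"bandwidth": 3_400_000, "resolution": "1280x720",  "uri": "720p/index.m3u8"},
        {"bandwidth": 1_900_000, "resolution": "960x540",   "uri": "540p/index.m3u8"},
        {"bandwidth": 1_100_000, "resolution": "640x360",   "uri": "360p/index.m3u8"},
    ]

    lines = ["#EXTM3U", "#EXT-X-VERSION:6"]
    for r in renditions:
        lines.append(f'#EXT-X-STREAM-INF:BANDWIDTH={r["bandwidth"]},RESOLUTION={r["resolution"]}')
        lines.append(r["uri"])

    print("\n".join(lines))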

The ABR ladder, or number of logical renditions available to an individual client, should be tuned to meet the needs of the specific application, taking into account:

  • Playback quality

  • Client and display ecosystem

  • User geography and network connectivity

Providing too many renditions can increase encoding costs and cause clients to fluctuate frequently between renditions, reducing perceived quality. Not providing enough renditions can leave usable bandwidth untapped, which also reduces quality. Our general recommendation is to determine the maximum bitrate first, then divide by a factor of 1.5–2 for each step down the ladder. Spacing the rungs this way lets clients make significant jumps in quality without switching so often that end users perceive the changes. If your application is delivering to Apple devices, refer to Apple TN2224 for additional guidance on creating adaptive bitrate content. If you are using AWS Elemental MediaConvert, the Auto ABR capability can automatically determine an optimal ABR ladder for you based on the specific characteristics of the content.
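
As a worked illustration of this rule of thumb (not a prescription), the sketch below starts from an assumed 6,000 Kbps top rendition and divides by a step factor of 1.75 until it reaches an assumed 400 Kbps floor. All numbers are placeholders to be tuned for your content, devices, and audience networks.

    # Minimal sketch of the ladder rule of thumb: divide the top bitrate
    # by a factor between 1.5 and 2 for each lower rendition, stopping at
    # an assumed floor. All numeric values are illustrative.
    def build_ladder(top_kbps=6000, step=1.75, floor_kbps=400):
        ladder = []
        kbps = top_kbps
        while kbps >= floor_kbps:
            ladder.append(round(kbps))
            kbps /= step
        return ladder

    print(build_ladder())   # [6000, 3429, 1959, 1120, 640]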

SM_PERF6: What tradeoffs have you made to lower live glass-to-glass latency?
SM_PBP14 – Optimize processing, origination, delivery, and client for low latency
SM_PBP15 – Remove unnecessary processing stages

Latency is inherent in any broadcasting or streaming platform. Over-the-air live broadcast glass-to-glass latency ranges between 3 and 12 seconds, with an average of 6 seconds often seen in practice. This means that, on average, roughly 6 seconds can elapse between the moment an event is captured by the camera and the moment it is displayed on the playback device. For streaming platforms, this latency can vary anywhere from 3 to 90 seconds, depending on various design choices. Typically, achieving low latency in a streaming platform involves trading off against other critical aspects of the streaming experience, such as video quality, re-buffering rate, error rates, and other quality-of-service indicators.

With HTTP-based streaming, latency mainly depends on the media segment length. For instance, the Apple HLS specification recommends at least three segments of buffer for best performance. This directly influences the latency. Other factors in the media delivery pipeline that influence latency include the video encoding operations, the duration of ingest and packaging operations, network propagation delays, and the CDN. In most cases, the player buffer carries the largest share of the overall latency. 
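
As a rough back-of-the-envelope aid, the sketch below estimates glass-to-glass latency as a three-segment player buffer plus fixed allowances for encoding, ingest and packaging, and network propagation through the CDN. Only the segment-length-times-buffer relationship comes from the HLS guidance above; the fixed allowances are placeholder assumptions.

    # Rough latency budget for HTTP-based streaming, assuming the player
    # buffers three full segments before playback (per the HLS guidance
    # above). The non-buffer figures are assumed placeholders only.
    def glass_to_glass_estimate(segment_seconds, buffered_segments=3,
                                encode_s=1.5, ingest_package_s=1.0,
                                propagation_cdn_s=0.5):
        player_buffer_s = segment_seconds * buffered_segments
        return encode_s + ingest_package_s + propagation_cdn_s + player_buffer_s

    print(glass_to_glass_estimate(6))   # 6-second segments -> ~21 s
    print(glass_to_glass_estimate(2))   # 2-second segments -> ~9 s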

There are several tradeoffs to consider in low-latency media streaming design. Shorter media segment lengths result in increased request traffic on the caching servers and then to the origin. This is fairly manageable by a CDN, especially if it supports HTTP 2.0 at the edge and HTTP 1.1 origins. As previously mentioned, encoding parameters have an impact on latency, and optimizations for latency typically affect video quality. For example, setting the encoder lookahead size to a low value improves latency but reduces output quality during demanding scene changes. If your content does not have dramatic scene changes, keeping this value low will not have a noticeable impact on video quality.
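
The discussion above treats encoder lookahead generically. As one concrete, non-AWS illustration of the tradeoff, the sketch below invokes ffmpeg with a small x264 rc-lookahead and short, segment-aligned GOPs. The input and output paths, the 30 fps assumption behind keyint=60, and all numeric values are illustrative assumptions only.

    # Hedged sketch: trade some quality headroom for latency by using a
    # small x264 rc-lookahead and 2-second GOPs aligned to 2-second HLS
    # segments. Paths and numbers are illustrative, not a recommendation.
    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "input.mp4",
        "-c:v", "libx264", "-preset", "veryfast",
        # keyint=60 at an assumed 30 fps gives a keyframe every 2 s,
        # matching hls_time below; scenecut=0 keeps GOPs regular
        "-x264-params", "rc-lookahead=10:keyint=60:min-keyint=60:scenecut=0",
        "-c:a", "aac",
        "-f", "hls", "-hls_time", "2", "-hls_list_size", "6",
        "stream.m3u8",
    ], check=True)

For reference, x264 also offers a zerolatency tune that disables lookahead and B-frames entirely, pushing the same tradeoff further at a larger quality cost.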
