Questions - Healthcare Industry Lens

Questions

HCL_OE4: How do you track model revisions and ensure traceability of your ML artifacts?

Employ version control for source code, data, and ML artifacts to ensure the traceability and reliability of production ML deployments. Version control and traceability may be required by applicable regulatory frameworks, for example when models are deployed in support of medical devices.
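
As one hedged illustration of this practice, the sketch below registers a trained model as a new version in the SageMaker Model Registry, tying each deployment to a specific artifact, container image, and approval status. The group name, image URI, S3 path, and commit reference are hypothetical placeholders.

```python
# Hypothetical sketch: record a trained model as a versioned package in the
# SageMaker Model Registry so deployments are traceable to a specific
# artifact, container image, and data/code snapshot. Names and URIs are
# placeholders.
import boto3

sm = boto3.client("sagemaker")

# One-time setup: a package group holds every version of one model lineage.
sm.create_model_package_group(
    ModelPackageGroupName="readmission-risk-model",
    ModelPackageGroupDescription="Readmission risk model lineage",
)

response = sm.create_model_package(
    ModelPackageGroupName="readmission-risk-model",
    # Record the data snapshot and source commit used for this training run.
    ModelPackageDescription="Trained on dataset snapshot 2024-06-01, git commit abc1234",
    InferenceSpecification={
        "Containers": [{
            "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/readmission-risk:1.4.0",
            "ModelDataUrl": "s3://example-bucket/models/readmission-risk/1.4.0/model.tar.gz",
        }],
        "SupportedContentTypes": ["text/csv"],
        "SupportedResponseMIMETypes": ["text/csv"],
    },
    ModelApprovalStatus="PendingManualApproval",  # gate promotion behind human review
)

# The returned ARN identifies this exact model version for audit purposes.
print(response["ModelPackageArn"])
```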

HCL_SEC16: How does your organization review, accept, and manage the licenses of open-source software dependencies?

Data science in healthcare often depends on open-source libraries for data processing, model development, training, and hosting. Establish a process to review the privacy and license agreements for all software and ML libraries needed throughout the ML lifecycle. Verify that these agreements comply with your organization’s legal, privacy, and security requirements.
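
One lightweight starting point, sketched below under the assumption of a Python environment, is to inventory the declared licenses of installed packages and flag anything outside an organizational allow-list; the allow-list shown is illustrative, not legal guidance.

```python
# Minimal sketch: list the declared licenses of installed Python packages
# and flag any outside an illustrative allow-list. Declared metadata can be
# missing or imprecise, so this supplements (not replaces) a formal review.
from importlib.metadata import distributions

ALLOWED = {"MIT", "BSD", "BSD-3-Clause", "Apache 2.0", "Apache-2.0"}  # example policy

for dist in distributions():
    name = dist.metadata["Name"]
    license_field = dist.metadata.get("License") or "UNKNOWN"
    flag = "" if license_field in ALLOWED else "  <-- needs review"
    print(f"{name}: {license_field}{flag}")
```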

HCL_SEC17: Does your organization deidentify health data used for machine learning, or otherwise limit access to sensitive, identifiable health data?

Many ML workflows do not require identified health data. Applying ML to deidentified data is one way to develop AI-powered applications without compromising privacy or data security. Cloud services such as the Amazon Comprehend Medical DetectPHI API can streamline the generation of deidentified datasets.
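
As a minimal sketch of that pattern, the example below calls the DetectPHI API through boto3 and masks each detected span; the confidence threshold and mask format are illustrative choices, and the input text is fabricated.

```python
# Sketch: locate protected health information (PHI) with the Amazon
# Comprehend Medical DetectPHI API, then mask each detected span.
import boto3

cm = boto3.client("comprehendmedical")

text = "Patient John Smith, DOB 03/12/1962, was seen at Springfield Clinic."
entities = cm.detect_phi(Text=text)["Entities"]

# Replace spans from the end of the string so earlier offsets stay valid.
redacted = text
for e in sorted(entities, key=lambda e: e["BeginOffset"], reverse=True):
    if e["Score"] >= 0.5:  # tune the confidence threshold to your risk tolerance
        redacted = (
            redacted[: e["BeginOffset"]] + f"[{e['Type']}]" + redacted[e["EndOffset"]:]
        )

print(redacted)  # e.g. "Patient [NAME], DOB [DATE], was seen at [ADDRESS]."
```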

HCL_REL4: How does your organization identify and limit biases in training data and statistical models?

Statistical models trained on real-world health data are susceptible to biases. Health data may inadvertently be collected from populations of individuals with similar characteristics, such as median household income, social determinants of health, and access to care. Care setting and health insurance coverage may also impart biases. For example, treatment cohorts may exhibit higher household income because such individuals may have greater access to advanced care.

Trained models may be misleading if biases are not quantified and mitigated. Models may also be inaccurate when trained on biased data and then applied to settings with different distributions. Examine data distributions and perform analyses to quantify and mitigate biases before training models.
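
As one simple, hedged illustration of such an analysis, the sketch below compares outcome prevalence and model accuracy across income cohorts; the column names and toy data are hypothetical, and a real bias audit should cover more metrics, such as per-group sensitivity and specificity.

```python
# Illustrative sketch: quantify a basic bias signal by comparing outcome
# prevalence and per-cohort model accuracy before deployment. Columns and
# values are hypothetical toy data.
import pandas as pd

df = pd.DataFrame({
    "income_bracket": ["low", "low", "low", "high", "high", "high"],
    "outcome":        [1, 0, 0, 1, 1, 0],
    "prediction":     [0, 0, 1, 1, 1, 0],
})

# Outcome prevalence per cohort: large gaps can indicate sampling or
# access-to-care bias in how the data were collected.
print(df.groupby("income_bracket")["outcome"].mean())

# Per-cohort accuracy: disparities here suggest the model performs
# unequally across groups and warrants mitigation before deployment.
correct = df["outcome"] == df["prediction"]
print(correct.groupby(df["income_bracket"]).mean())
```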

HCL_PERF10: What processes do you use to monitor model performance after deployment and protect against drift?

Health data is often complex and subject to temporal variation in quality and concept expression. Model performance may degrade over time due to data quality issues, model quality issues, and concept drift. Create a baseline for data quality, and automate performance monitoring in production. Automate alerts for changes in data quality or distributions, such as age deciles and the prevalence of relevant chronic diseases. SageMaker AI Model Monitor provides an end-to-end framework for model monitoring and lifecycle management.
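
To make the drift-alert idea concrete, here is a minimal sketch that compares a production feature sample against its training baseline with a two-sample Kolmogorov-Smirnov test; the feature, data, and alert threshold are hypothetical, and managed tooling such as SageMaker AI Model Monitor automates this pattern at scale.

```python
# Minimal drift-detection sketch: flag when a production feature's
# distribution diverges from the training baseline using a two-sample
# Kolmogorov-Smirnov test. Data and threshold are illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
baseline_age = rng.normal(loc=55, scale=12, size=5000)    # training-time ages
production_age = rng.normal(loc=62, scale=12, size=1000)  # older live population

stat, p_value = ks_2samp(baseline_age, production_age)
if p_value < 0.01:  # tune to control the false-alarm rate
    print(f"ALERT: age distribution drift (KS={stat:.3f}, p={p_value:.2e})")
```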
