References
Breiman, L. 2001. "Random Forests." Machine Learning. https://doi.org/10.1023/A:1010933404324
Estlund, D. M. 1994. "Opinion Leaders, Independence, and Condorcet’s Jury Theorem." Theory and Decision. https://doi.org/10.1007/BF01079210
Fort, S., H. Hu, and B. Lakshminarayanan. 2019. "Deep Ensembles: A Loss Landscape Perspective." https://arxiv.org/abs/1912.02757
Freund, Y., and R.E. Schapire. 1996. "Experiments with a New Boosting Algorithm." Proceedings of the 13th International Conference on Machine Learning. https://dl.acm.org/doi/10.5555/3091696.3091715
Gal, Y. 2016. "Uncertainty in Deep Learning." PhD thesis, Department of Engineering, University of Cambridge.
Gal, Y., and Z. Ghahramani. 2016. "Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning." 33rd International Conference on Machine Learning (ICML 2016). https://arxiv.org/abs/1506.02142
Guo, C., G. Pleiss, Y. Sun, and K.Q. Weinberger. 2017. "On Calibration of Modern Neural Networks." 34th International Conference on Machine Learning (ICML 2017). https://arxiv.org/abs/1706.04599
Hein, M., M. Andriushchenko, and J. Bitterwolf. 2019. "Why ReLU Networks Yield High-Confidence Predictions Far Away From the Training Data and How to Mitigate the Problem." Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (June 2019): 41–50. https://doi.org/10.1109/CVPR.2019.00013
Kendall, A., and Y. Gal. 2017. "What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?" Advances in Neural Information Processing Systems. https://papers.nips.cc/paper/7141-what-uncertainties-do-we-need-in-bayesian-deep-learning-for-computer-vision
Lakshminarayanan, B., A. Pritzel, and C. Blundell. 2017. "Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles." Advances in Neural Information Processing Systems. https://arxiv.org/abs/1612.01474
Liu, Y., M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov. 2019. "RoBERTa: A Robustly Optimized BERT Pretraining Approach." https://arxiv.org/abs/1907.11692
Nado, Z., S. Padhy, D. Sculley, A. D’Amour, B. Lakshminarayanan, and J. Snoek. 2020. "Evaluating Prediction-Time Batch Normalization for Robustness under Covariate Shift." https://arxiv.org/abs/2006.10963
Nalisnick, E., A. Matsukawa, Y.W. Teh, D. Gorur, and B. Lakshminarayanan. 2019. "Do Deep Generative Models Know What They Don’t Know?" 7th International Conference on Learning Representations (ICLR 2019). https://arxiv.org/abs/1810.09136
Ovadia, Y., E. Fertig, J. Ren, Z. Nado, D. Sculley, S. Nowozin, J.V. Dillon, B. Lakshminarayanan, and J. Snoek. 2019. "Can You Trust Your Model’s Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift." 33rd Conference on Neural Information Processing Systems (NeurIPS 2019). https://arxiv.org/abs/1906.02530
Platt, J., et al. 1999. "Probabilistic Outputs for Support Vector Machines and Comparisons to Regularized Likelihood Methods." Advances in Large Margin Classifiers. http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.1639
Srivastava, N., G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. 2014. "Dropout: A Simple Way to Prevent Neural Networks from Overfitting." Journal of Machine Learning Research. https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf
van Amersfoort, J., L. Smith, Y.W. Teh, and Y. Gal. 2020. "Uncertainty Estimation Using a Single Deep Deterministic Neural Network." International Conference on Machine Learning. https://arxiv.org/abs/2003.02037
Warstadt, A., A. Singh, and S.R. Bowman. 2019. "Neural Network Acceptability Judgments." Transactions of the Association for Computational Linguistics. https://doi.org/10.1162/tacl_a_00290
Wilson, A. G., and P. Izmailov. 2020. "Bayesian Deep Learning and a Probabilistic Perspective of Generalization." https://arxiv.org/abs/2002.08791