Benjamin Shickel, Tyler J Loftus, Yuanfang Ren, Parisa Rashidi, Azra Bihorac, Tezcan Ozrazgat-Baslanti
Loftus TJ, Shickel B, Ozrazgat-Baslanti T, et al. Artificial intelligence-enabled decision support in nephrology. Nat Rev Nephrol. 2022;18(7):452–465. doi: 10.1038/s41581-022-00562-3
Tomašev N, Glorot X, Rae JW, et al. A clinically applicable approach to continuous prediction of future acute kidney injury. Nature. 2019;572(7767):116–119. doi: 10.1038/s41586-019-1390-1
Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need. Adv Neural Inf Process Syst. 2017;30:5998–6008.
Devlin J, Chang M, Lee K, Toutanova K. BERT: pre-training of deep bidirectional transformers for language understanding. arXiv. 2018. doi: 10.48550/arXiv.1810.04805
Brown T, Mann B, Ryder N, et al. Language models are few-shot learners. Adv Neural Inf Process Syst. 2020;33:1877–1901.
Li Y, Rao S, Solares JRA, et al. BEHRT: transformer for electronic health records. Sci Rep. 2020;10(1):7155. doi: 10.1038/s41598-020-62922-y
Rieke N, Hancox J, Li W, et al. The future of digital health with federated learning. NPJ Digit Med. 2020;3(1):119. doi: 10.1038/s41746-020-00323-1
Bommasani R, Hudson DA, Adeli E, et al. On the opportunities and risks of foundation models. arXiv. 2021. doi: 10.48550/arXiv.2108.07258