Nvidia's new TensorRT 8 deep learning SDK delivers more than double the precision and inference speed of the previous generation, clocking BERT-Large inference at 1.2 ms.
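For context, reduced-precision inference in TensorRT is enabled when building an engine. The sketch below, using TensorRT 8's Python API, shows one way to build an FP16 engine from an ONNX export of a model; the file names and the FP16 flag are illustrative assumptions, not the configuration Nvidia used for its BERT-Large benchmark.

```python
# Minimal sketch: build a reduced-precision TensorRT 8 engine from an ONNX model.
# "bert_large.onnx" and "bert_large.plan" are placeholder file names.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

# Parse the ONNX graph into a TensorRT network definition.
with open("bert_large.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # allow FP16 kernels for lower latency

# TensorRT 8 can return a serialized engine directly from the builder.
serialized_engine = builder.build_serialized_network(network, config)
with open("bert_large.plan", "wb") as f:
    f.write(serialized_engine)
```

Nvidia's headline numbers additionally rely on SDK-level optimizations such as INT8 quantization and sparsity support, which are not shown in this sketch.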