A. Change the endpoint instance to an ml.t3 burstable instance with the same number of vCPUs as the ml.m5.xlarge instance.
B. Attach an Amazon Elastic Inference ml.eia2.medium accelerator to the endpoint instance.
C. Enable Amazon SageMaker Autopilot to automatically tune the performance of the model.
D. Change the endpoint instance to use a memory-optimized ML instance.

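For reference, option B can be sketched with boto3. This is a minimal illustration, not a prescribed solution: the endpoint, endpoint config, and model names are hypothetical, and it assumes a SageMaker model has already been created. It shows how an Elastic Inference ml.eia2.medium accelerator is attached to an ml.m5.xlarge hosting instance through the endpoint configuration.

```python
# Minimal sketch (assumes an existing SageMaker model; all resource names
# below are hypothetical). Pairs an ml.m5.xlarge hosting instance with an
# Elastic Inference ml.eia2.medium accelerator, as described in option B.
import boto3

sm = boto3.client("sagemaker")

# Endpoint config keeping the ml.m5.xlarge instance but adding the accelerator.
sm.create_endpoint_config(
    EndpointConfigName="my-endpoint-config-eia",   # hypothetical name
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "ModelName": "my-model",                # hypothetical model name
            "InstanceType": "ml.m5.xlarge",
            "InitialInstanceCount": 1,
            "AcceleratorType": "ml.eia2.medium",    # Elastic Inference accelerator
        }
    ],
)

# Create the endpoint from the new configuration.
sm.create_endpoint(
    EndpointName="my-endpoint",                     # hypothetical name
    EndpointConfigName="my-endpoint-config-eia",
)
```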