I've been using Amazon SageMaker notebooks to build a PyTorch model for an NLP task. I know you can use SageMaker for training, deployment, hyperparameter tuning, and model monitoring.
However, it looks like you have to create an inference endpoint in order to monitor the model's inference performance.
I already have an EC2 instance set up to perform inference with our model (it's currently on a development box), and I'd rather not use an endpoint to make predictions.
Is it possible to use SageMaker for training, hyperparameter tuning, and model evaluation without creating an endpoint?
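For reference, here is roughly the workflow I'd like to keep: launch training and tuning jobs from the notebook, then pull the model artifact from S3 onto my EC2 box, with no `deploy()` call. This is just a sketch of my intent; the script name, instance type, S3 paths, hyperparameter ranges, and metric regex are placeholders for my actual setup.

```python
import sagemaker
from sagemaker.pytorch import PyTorch
from sagemaker.tuner import ContinuousParameter, HyperparameterTuner

role = sagemaker.get_execution_role()  # running inside a SageMaker notebook

# Training job only -- .fit() launches a training job, no endpoint is created
estimator = PyTorch(
    entry_point="train.py",            # placeholder for my training script
    role=role,
    instance_count=1,
    instance_type="ml.p3.2xlarge",     # placeholder instance type
    framework_version="1.13",
    py_version="py39",
    hyperparameters={"epochs": 3, "lr": 3e-5},
)
estimator.fit({"training": "s3://my-bucket/train"})  # placeholder S3 path

# Hyperparameter tuning -- also just launches training jobs, still no endpoint
tuner = HyperparameterTuner(
    estimator,
    objective_metric_name="validation:loss",
    objective_type="Minimize",
    hyperparameter_ranges={"lr": ContinuousParameter(1e-6, 1e-3)},
    metric_definitions=[{"Name": "validation:loss",
                         "Regex": "val_loss=([0-9\\.]+)"}],  # placeholder regex
    max_jobs=10,
    max_parallel_jobs=2,
)
tuner.fit({"training": "s3://my-bucket/train"})

# The trained model artifact lands in S3; I'd copy it to my EC2 instance from there
print(estimator.model_data)  # e.g. s3://.../output/model.tar.gz
```

What I'm unsure about is whether evaluation/monitoring features have an equivalent path, or whether they strictly require a hosted endpoint.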