
I'll be licensing my deep learning model, with pre-trained weights, to a customer for deployment; the model will run on the customer's servers. For security reasons, I'd like to know what I can do to obfuscate my Python code and the model itself so that they are harder to reverse-engineer.

So far, I have used Cython to compile my Python script (which handles the data and calls the model) into C, and I followed Making an executable in Cython to build an executable. Is there anything more I could do to obfuscate the code better or more efficiently?
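For reference, my compile step is essentially the standard Cython setup.py pattern, shown below as a minimal sketch (the script name inference.py is just a placeholder for my actual entry script):

    # setup.py -- compiles the Python source into a C extension
    from setuptools import setup
    from Cython.Build import cythonize

    setup(
        name="inference",
        ext_modules=cythonize(
            "inference.py",  # placeholder name for the data-handling/inference script
            compiler_directives={"language_level": "3"},
        ),
    )

Running python setup.py build_ext --inplace produces a compiled extension (.so/.pyd) in place of the plain-text .py; the executable itself comes from following the linked question.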

In addition, I'm more concerned about obfuscating my pre-trained model's weights. I have the model in both ONNX and TensorFlow Lite formats. What are the best ways to protect the weights in each of these formats?

Note: these models are meant to be deployed in a real-time setting, so repeatedly reloading the model for every request isn't an option.
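For what it's worth, the closest thing I've sketched so far for the ONNX side is to keep the weights encrypted at rest and decrypt them only in memory, loading the inference session once at startup so nothing is reloaded on the hot path. A minimal sketch is below (the file name, the key handling, and the input name "input" are placeholders, and I realize this only raises the bar rather than making extraction impossible):

    from cryptography.fernet import Fernet
    import onnxruntime as ort

    def load_session(encrypted_path, key):
        # Read the encrypted weights and decrypt them in memory only, so the
        # plaintext .onnx file is never written to the customer's disk.
        with open(encrypted_path, "rb") as f:
            model_bytes = Fernet(key).decrypt(f.read())
        return ort.InferenceSession(model_bytes)

    # Load once at process start, then reuse the session for every request.
    # session = load_session("model.onnx.enc", key)
    # outputs = session.run(None, {"input": input_array})

Is something along these lines reasonable, or is there a better-established approach (especially for TensorFlow Lite)?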

  • Hello Shawn, 4 months later, have you found a way to obfuscate your ONNX model weights? Commented Feb 10, 2021 at 15:52
  • Do you have an answer to your own question so far? Commented Jan 17, 2024 at 17:54

1 Answer


Check out https://pypi.org/project/pyarmor/

As far as I know, there isn't a way to decrypt the obfuscated output.
