TensorFlow Lite, now called LiteRT by Google, is among the most widely used runtimes for embedded and mobile edge AI. Its lightweight Python runtime (distributed today as the ai-edge-litert package, the successor to tflite-runtime) runs .tflite models on devices such as the Raspberry Pi, Coral Edge TPU, and mobile platforms. A model can be verified with the secure loader before the interpreter is created:
# ai-edge-litert is the current package name for the LiteRT runtime;
# fall back to the older tflite-runtime package if it is not installed.
try:
    from ai_edge_litert.interpreter import Interpreter
except ImportError:
    from tflite_runtime.interpreter import Interpreter

from thistle_secure_loader import secure_load

def tflite_loader(path: str):
    # Called by secure_load only after the file's signature has been verified.
    return Interpreter(model_path=path)

MODEL_PATH = "model.tflite"

interpreter = secure_load(MODEL_PATH, tflite_loader)
interpreter.allocate_tensors()

print("TensorFlow Lite model verified and loaded securely.")
The secure_load call verifies the .tflite file's signature with tuc before handing the path to the interpreter constructor. If the signature check fails, secure_load raises a ModelVerificationError and the model is never loaded.
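In production code you typically want to catch that failure explicitly rather than let it propagate. The sketch below shows one way to do so; it assumes ModelVerificationError is importable from thistle_secure_loader (the docs above name the exception but not its module path, so check the package's API for the actual export location).

```python
# Hypothetical failure-handling sketch; the ModelVerificationError import
# location is an assumption, not confirmed by the docs above.
import sys

import tflite_runtime.interpreter as tflite
from thistle_secure_loader import secure_load, ModelVerificationError

def tflite_loader(path: str):
    return tflite.Interpreter(model_path=path)

try:
    interpreter = secure_load("model.tflite", tflite_loader)
except ModelVerificationError as exc:
    # The file failed signature verification; do not fall back to an
    # unverified load -- treat this as a hard error.
    print(f"Refusing to load unverified model: {exc}", file=sys.stderr)
    sys.exit(1)

interpreter.allocate_tensors()
```

The important property is that the fallback path never constructs an interpreter from the unverified file: the process exits (or raises) instead of degrading to an insecure load.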

Requirements

ai-edge-litert>=2.1.0