pub struct EimModel {
path: PathBuf,
socket_path: PathBuf,
socket: UnixStream,
debug: bool,
debug_callback: Option<Box<dyn Fn(&str) + Send + Sync>>,
_process: Child,
model_info: Option<ModelInfo>,
message_id: AtomicU32,
child: Option<Child>,
continuous_state: Option<ContinuousState>,
model_parameters: ModelParameters,
}
Edge Impulse Model Runner for Rust
This module provides functionality for running Edge Impulse machine learning models on Linux systems. It handles model lifecycle management, communication, and inference operations.
§Key Components
- EimModel: Main struct for managing Edge Impulse models
- SensorType: Enum representing supported sensor input types
- ContinuousState: Internal state management for continuous inference mode
- MovingAverageFilter: Smoothing filter for continuous inference results
§Features
- Model process management and Unix socket communication
- Support for both single-shot and continuous inference modes
- Debug logging and callback system
- Moving average filtering for continuous mode results
- Automatic retry mechanisms for socket connections
- Visual anomaly detection (FOMO AD) support with normalized scores
§Example Usage
use edge_impulse_runner::{EimModel, InferenceResult};
// Create a new model instance
let mut model = EimModel::new("path/to/model.eim").unwrap();
// Run inference with some features
let features = vec![0.1, 0.2, 0.3];
let result = model.infer(features, None).unwrap();
// For visual anomaly detection models, normalize the results
if let InferenceResult::VisualAnomaly { anomaly, visual_anomaly_max, visual_anomaly_mean, visual_anomaly_grid } = result.result {
    let (normalized_anomaly, normalized_max, normalized_mean, normalized_regions) =
        model.normalize_visual_anomaly(
            anomaly,
            visual_anomaly_max,
            visual_anomaly_mean,
            &visual_anomaly_grid.iter()
                .map(|bbox| (bbox.value, bbox.x as u32, bbox.y as u32, bbox.width as u32, bbox.height as u32))
                .collect::<Vec<_>>()
        );
    println!("Anomaly score: {:.2}%", normalized_anomaly * 100.0);
}
§Communication Protocol
The model communicates with the Edge Impulse process using JSON messages over Unix sockets:
- Hello message for initialization
- Model info response
- Classification requests
- Inference responses
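As a rough illustration, the request messages can be thought of as small JSON objects tagged with a message ID (the field names below are assumptions for illustration, not the exact wire schema):

use serde_json::json;

// Illustrative handshake sent when the connection is opened
let hello = json!({ "hello": 1, "id": 1 });

// Illustrative classification request carrying the feature vector
let classify = json!({ "classify": [0.1, 0.2, 0.3], "id": 2 });

// Each request carries a unique id so that responses can be matched to requests.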
§Error Handling
The module uses a custom EimError type for error handling, covering:
- Invalid file paths
- Socket communication errors
- Model execution errors
- JSON serialization/deserialization errors
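Rather than unwrapping, callers can match on the returned Result; a minimal sketch (it assumes EimError implements Display, as is typical for error types):

use edge_impulse_runner::EimModel;

match EimModel::new("path/to/model.eim") {
    Ok(_model) => { /* run inference here */ }
    // Covers invalid paths, spawn failures, socket errors, etc.
    Err(e) => eprintln!("Failed to load model: {e}"),
}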
§Visual Anomaly Detection
For visual anomaly detection models (FOMO AD):
- Scores are normalized relative to the model’s minimum anomaly threshold
- Results include overall anomaly score, maximum score, mean score, and anomalous regions
- Region coordinates are provided in the original image dimensions
- All scores are clamped to [0,1] range and displayed as percentages
- Debug mode provides detailed information about thresholds and regions
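One plausible reading of this normalization is a simple ratio against the threshold, clamped to [0, 1] (a sketch only; the crate's actual normalize_anomaly_score formula may differ):

// Assumed ratio-based normalization relative to min_anomaly_score
fn normalize_sketch(score: f32, min_anomaly_score: f32) -> f32 {
    (score / min_anomaly_score).clamp(0.0, 1.0)
}

// A raw score at or above the threshold would then display as 100%.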
§Threshold Configuration
Models can be configured with different thresholds:
- Anomaly detection: min_anomaly_score threshold for visual anomaly detection
- Object detection: min_score threshold for object confidence
- Object tracking: keep_grace, max_observations, and threshold parameters

Thresholds can be updated at runtime using set_learn_block_threshold.
Fields§
path: PathBuf
Path to the Edge Impulse model file (.eim)
socket_path: PathBuf
Path to the Unix socket used for IPC
socket: UnixStream
Active Unix socket connection to the model process
debug: bool
Enable debug logging of socket communications
debug_callback: Option<Box<dyn Fn(&str) + Send + Sync>>
Optional debug callback for receiving debug messages
_process: Child
Handle to the model process (kept alive while model exists)
model_info: Option<ModelInfo>
Cached model information received during initialization
message_id: AtomicU32
Atomic counter for generating unique message IDs
child: Option<Child>
Optional child process handle for restart functionality
continuous_state: Option<ContinuousState>
Internal state for continuous inference mode (present only while continuous mode is active)
model_parameters: ModelParameters
Cached parameters describing the model's inputs, sensor type, and thresholds
Implementations§
impl EimModel
pub fn new<P: AsRef<Path>>(path: P) -> Result<Self, EimError>
Creates a new EimModel instance from a path to the .eim file.
This is the standard way to create a new model instance. The function will:
- Validate the file extension
- Spawn the model process
- Establish socket communication
- Initialize the model
§Arguments
- path - Path to the .eim file. Must be a valid Edge Impulse model file.
§Returns
Returns Result<EimModel, EimError> where:
- Ok(EimModel) - Successfully created and initialized model
- Err(EimError) - Failed to create model (invalid path, process spawn failure, etc.)
§Examples
use edge_impulse_runner::EimModel;
let model = EimModel::new("path/to/model.eim").unwrap();
pub fn new_with_socket<P: AsRef<Path>, S: AsRef<Path>>(path: P, socket_path: S) -> Result<Self, EimError>
Creates a new EimModel instance with a specific Unix socket path.
Similar to new(), but allows specifying the socket path for communication. This is useful when you need control over the socket location or when running multiple models simultaneously.
§Arguments
- path - Path to the .eim file
- socket_path - Custom path where the Unix socket should be created
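For example (the socket path shown is arbitrary):

use edge_impulse_runner::EimModel;

// Pin the IPC socket to a known location, e.g. when running several models at once
let model = EimModel::new_with_socket("path/to/model.eim", "/tmp/model_a.sock").unwrap();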
pub fn new_with_debug<P: AsRef<Path>>(path: P, debug: bool) -> Result<Self, EimError>
Create a new EimModel instance with debug output enabled
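A minimal usage sketch:

use edge_impulse_runner::EimModel;

// Enable debug logging of the socket communication
let model = EimModel::new_with_debug("path/to/model.eim", true).unwrap();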
fn ensure_executable<P: AsRef<Path>>(path: P) -> Result<(), EimError>
Ensure the model file has execution permissions for the current user
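This is an internal helper; the usual way to add the execute bit with std looks roughly like this (an illustration with a simplified io::Result signature, not the crate's exact code):

use std::fs;
use std::os::unix::fs::PermissionsExt;
use std::path::Path;

fn make_executable(path: &Path) -> std::io::Result<()> {
    let mut perms = fs::metadata(path)?.permissions();
    // Add the owner-execute bit if it is missing
    let mode = perms.mode();
    perms.set_mode(mode | 0o100);
    fs::set_permissions(path, perms)
}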
pub fn new_with_socket_and_debug<P: AsRef<Path>, S: AsRef<Path>>(path: P, socket_path: S, debug: bool) -> Result<Self, EimError>
Create a new EimModel instance with debug output enabled and a specific socket path
fn connect_with_retry(socket_path: &Path, timeout: Duration) -> Result<UnixStream, EimError>
Attempts to connect to the Unix socket with a retry mechanism
This function will repeatedly try to connect to the socket until either:
- A successful connection is established
- An unexpected error occurs
- The timeout duration is exceeded
§Arguments
- socket_path - Path to the Unix socket
- timeout - Maximum time to wait for a connection
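The retry pattern is roughly the following (a simplified sketch over std types; the crate's actual backoff interval and error mapping may differ):

use std::io;
use std::os::unix::net::UnixStream;
use std::path::Path;
use std::thread;
use std::time::{Duration, Instant};

fn connect_with_retry_sketch(socket_path: &Path, timeout: Duration) -> io::Result<UnixStream> {
    let start = Instant::now();
    loop {
        match UnixStream::connect(socket_path) {
            Ok(stream) => return Ok(stream),
            // The socket may not exist yet while the model process starts up, so keep retrying
            Err(_) if start.elapsed() < timeout => thread::sleep(Duration::from_millis(50)),
            Err(e) => return Err(e),
        }
    }
}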
fn next_message_id(&self) -> u32
Get the next message ID
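Presumably backed by the message_id: AtomicU32 field; a typical implementation is a single atomic increment (a sketch):

use std::sync::atomic::{AtomicU32, Ordering};

// Each call hands out a fresh, monotonically increasing ID
fn next_id(counter: &AtomicU32) -> u32 {
    counter.fetch_add(1, Ordering::SeqCst)
}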
pub fn set_debug_callback<F>(&mut self, callback: F)
Set a debug callback function to receive debug messages
When debug mode is enabled, this callback will be invoked with debug messages from the model runner. This is useful for logging or displaying debug information in your application.
§Arguments
- callback - Function that takes a string slice and handles the debug message
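For example:

use edge_impulse_runner::EimModel;

let mut model = EimModel::new_with_debug("path/to/model.eim", true).unwrap();
// Forward runner debug messages to the application's own logging
model.set_debug_callback(|msg| eprintln!("[eim debug] {msg}"));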
fn debug_message(&self, message: &str)
Send debug messages when debug mode is enabled
fn send_hello(&mut self) -> Result<(), EimError>
Send the initial hello message used to initialize communication with the model process
pub fn socket_path(&self) -> &Path
Get the socket path used for communication
pub fn sensor_type(&self) -> Result<SensorType, EimError>
Get the sensor type for this model
pub fn parameters(&self) -> Result<&ModelParameters, EimError>
Get the model parameters
pub fn infer(&mut self, features: Vec<f32>, debug: Option<bool>) -> Result<InferenceResponse, EimError>
Run inference on the input features
This method automatically handles both continuous and non-continuous modes:
§Non-Continuous Mode
- Each call is independent
- All features must be provided in a single call
- Results are returned immediately
§Continuous Mode (automatically enabled for supported models)
- Features are accumulated across calls
- Internal buffer maintains sliding window of features
- Moving average filter smooths results
- Initial calls may return empty results while buffer fills
§Arguments
- features - Vector of input features
- debug - Optional debug flag to enable detailed output for this inference
§Returns
Returns Result<InferenceResponse, EimError> containing the inference results.
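A minimal non-continuous call looks like this (the feature vector length must match input_size()):

use edge_impulse_runner::EimModel;

let mut model = EimModel::new("path/to/model.eim").unwrap();
let features: Vec<f32> = vec![0.0; model.input_size().unwrap()];
// None means "use the model's default debug setting" for this call
let response = model.infer(features, None).unwrap();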
fn infer_continuous_internal( &mut self, features: Vec<f32>, debug: Option<bool>, ) -> Result<InferenceResponse, EimError>
fn infer_single( &mut self, features: Vec<f32>, debug: Option<bool>, ) -> Result<InferenceResponse, EimError>
fn requires_continuous_mode(&self) -> bool
Check if model requires continuous mode
pub fn input_size(&self) -> Result<usize, EimError>
Get the required number of input features for this model
Returns the number of features expected by the model for each classification. This is useful for:
- Validating input size before classification
- Preparing the correct amount of data
- Padding or truncating inputs to match model requirements
§Returns
The number of input features required by the model
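For example, padding or truncating a raw sample to the expected length (read_sample here is a hypothetical helper that produces raw features):

use edge_impulse_runner::EimModel;

let mut model = EimModel::new("path/to/model.eim").unwrap();
let expected = model.input_size().unwrap();

let mut features: Vec<f32> = read_sample(); // hypothetical helper
// Pad with zeros or truncate so the length matches the model's requirement
features.resize(expected, 0.0);
let response = model.infer(features, None).unwrap();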
pub async fn set_learn_block_threshold(&mut self, threshold: ThresholdConfig) -> Result<(), EimError>
Update a learn block threshold at runtime (e.g. anomaly, object detection, or object tracking thresholds)
fn get_min_anomaly_score(&self) -> f32
Get the minimum anomaly score threshold from model parameters
fn normalize_anomaly_score(&self, score: f32) -> f32
Normalize an anomaly score relative to the model’s minimum threshold