Plainsight Filters
Overview
Filters are single-function processing units for computer vision: lightweight, composable applications packaged as Docker containers. They allow developers to build powerful vision pipelines by chaining together small, reusable components. Think of them as Lego bricks for computer vision, enabling flexible and scalable solutions for real-world applications.
Whether you need to preprocess images, detect objects, track movement, or apply transformations, Filters provide a modular way to assemble vision workloads without writing monolithic applications.
Why Use Filters?
1. Modular & Reusable
Each Filter performs a specific function, such as object detection, image enhancement, or feature extraction. You can mix, match, and reuse them across different projects, reducing duplication and speeding up development.
2. Composable Pipelines
Filters can be chained together into a directed graph, allowing you to create sophisticated vision pipelines that adapt to your needs. Want to apply face blurring after detecting people? Simply connect the relevant Filters.
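To make the chaining concrete, here is a minimal Python sketch of that detect-then-blur composition. The `Filter` base class, `process` method, and pipeline runner are illustrative stand-ins, not the actual Plainsight API.

```python
# Illustrative only: this Filter interface is an assumption made
# for the sketch, not the real Plainsight Filter API.

class Filter:
    """Stand-in for a single-function processing unit."""
    def process(self, frame):
        raise NotImplementedError


class PersonDetector(Filter):
    def process(self, frame):
        # Pretend detection: attach bounding boxes to the frame.
        frame["people"] = [(10, 10, 50, 120)]  # (x, y, w, h)
        return frame


class FaceBlur(Filter):
    def process(self, frame):
        # Consume the upstream Filter's output and record the
        # regions a real implementation would blur.
        frame["blurred"] = list(frame.get("people", []))
        return frame


def run_pipeline(filters, frame):
    """Chain Filters by feeding each one's output into the next."""
    for f in filters:
        frame = f.process(frame)
    return frame


print(run_pipeline([PersonDetector(), FaceBlur()], {"pixels": None}))
```

The key property is that each stage only agrees on the shape of the data passed between stages, so either Filter can be swapped out without touching the other.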
3. Scalable & Deployable Anywhere
Filters run in Docker containers, making them highly portable. Whether you're deploying to edge devices, cloud environments, or hybrid infrastructures, your pipelines remain consistent and scalable.
4. Extensible & Customizable
If a Filter doesn’t exist for your use case, you can create your own. Filters are designed to be developer-friendly, with a standard API and predictable runtime behavior, making extension straightforward.
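As a rough sketch of what authoring your own Filter might look like, the example below defines a grayscale Filter with a setup/process/shutdown lifecycle. The lifecycle method names and config handling here are assumptions for illustration; consult the Filter API reference for the real contract.

```python
# Hypothetical authoring sketch; the setup()/process()/shutdown()
# lifecycle and config dict shown here are assumptions, not the
# documented Plainsight Filter contract.

class GrayscaleFilter:
    """A custom single-function Filter: convert frames to grayscale."""

    def setup(self, config):
        # One-time initialization: parse options, load models, etc.
        self.enabled = config.get("enabled", True)

    def process(self, frame):
        if not self.enabled:
            return frame
        # Per-frame work: average the RGB channels of each pixel.
        frame["pixels"] = [
            [sum(px) // len(px) for px in row] for row in frame["pixels"]
        ]
        return frame

    def shutdown(self):
        # Release any resources acquired in setup().
        pass


f = GrayscaleFilter()
f.setup({"enabled": True})
print(f.process({"pixels": [[(255, 0, 0), (30, 60, 90)]]})["pixels"])
# -> [[85, 60]]
```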
5. Efficient Workload Distribution
Filters let you break complex vision tasks into manageable units. Distributing those units across your available compute makes it easier to optimize resource usage and reduce latency in real-time applications, as sketched below.
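Here is a small sketch of that division-of-labor idea using Python's standard multiprocessing to fan per-frame work out to CPU workers. A real deployment would scale Filter containers instead, but the principle is the same.

```python
# A sketch of distributing per-frame work across CPU workers with
# Python's standard multiprocessing; a real deployment would scale
# Filter containers, but the division of labor is analogous.

from multiprocessing import Pool

def enhance(frame_id):
    # Stand-in for one Filter's work on one frame.
    return frame_id, f"enhanced-{frame_id}"

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        # Frames fan out to workers; results stream back in order.
        for frame_id, result in pool.imap(enhance, range(8)):
            print(frame_id, result)
```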
6. Supports GPU & CPU Workloads
Filters can run AI-based computer vision functions using GPUs for accelerated inference, but they can also execute video processing and vision tasks on CPUs when GPU acceleration is not needed. This flexibility allows you to optimize cost and performance based on your deployment environment.
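As a sketch of that flexibility, the snippet below picks a GPU when one is visible to the container and falls back to the CPU otherwise. PyTorch is used purely as an example framework here, and the toy model stands in for whatever vision model a real Filter would load.

```python
# A minimal sketch of GPU-with-CPU-fallback inference. PyTorch is an
# example framework, not a requirement of the Filters runtime.

import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Any torch.nn.Module works here; a real Filter would load its own
# detection or classification model instead of this toy layer.
model = torch.nn.Conv2d(3, 8, kernel_size=3).to(device).eval()

with torch.no_grad():
    # One dummy 3-channel frame, batched as NCHW.
    frame = torch.rand(1, 3, 224, 224, device=device)
    features = model(frame)

print(f"ran inference on {device}, output shape {tuple(features.shape)}")
```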
Common Use Cases
Filters can be used in a variety of vision processing scenarios, including:
- Real-time Inference - Processing live video feeds for object detection, facial recognition, and anomaly detection.
- Batch/Async Inference - Running AI workloads on stored images or videos for large-scale data processing.
- Preprocessing - Enhancing, normalizing, or transforming images before feeding them into AI models.
- Edge Inference - Running vision models on embedded or on-premise hardware for low-latency decision-making.
- Cloud Inference - Leveraging cloud GPUs to run large-scale AI models with high availability and scalability.
How to Think About Filters
A Filters-based system is built around three key concepts:
- Sources - These are your data inputs, such as camera streams, image files, or video feeds.
- Filters - The processing units that transform data, whether that means detecting objects, enhancing images, or applying transformations.
- Connectors - These define how data flows between Filters and how output is stored or relayed to other systems.
By structuring your vision pipelines this way, you gain flexibility to swap, scale, and optimize each component independently.
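The sketch below shows how the three concepts fit together, with hypothetical stand-ins for each role: a generator as the Source, a plain function as the Filter, and stdout as the Connector. None of these names come from the Filters API.

```python
# Sources -> Filters -> Connectors, with hypothetical stand-ins for
# each role; these names are illustrative, not the Plainsight API.

def camera_source(n_frames=3):
    """Source: yields frames (fake dicts standing in for images)."""
    for i in range(n_frames):
        yield {"frame_id": i, "pixels": None}

def detect_objects(frame):
    """Filter: annotate the frame with (fake) detections."""
    frame["objects"] = ["person"] if frame["frame_id"] % 2 == 0 else []
    return frame

def stdout_connector(frame):
    """Connector: relay results to another system (here, stdout)."""
    print(frame["frame_id"], frame["objects"])

for frame in camera_source():          # Source
    frame = detect_objects(frame)      # Filter
    stdout_connector(frame)            # Connector
```

Because each role is independent, you could replace the Source with a live camera stream or the Connector with a message queue without changing the Filter in the middle.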
What's Next?
Ready to build your first vision pipeline with Filters? Head over to our Getting Started Guide to learn how to set up your development environment, run your first Filter, and start composing your own vision workflows.