
Simplifying the Machine-Vision Pipeline for Industrial AI
Overview
Robopipe is an industrial machine-vision platform that enables companies to build and deploy AI models for visual inspection on production lines. The system combines a rugged IP65-rated AI camera and Linux-based controller with an integrated software platform that supports the entire workflow—from dataset creation to real-time deployment.
The platform targets use cases such as defect detection, object counting, OCR, and robot guidance across industries including manufacturing, pharmaceuticals, agriculture, and logistics.
Unlike traditional machine-vision setups that require multiple tools and specialized expertise, Robopipe bundles the entire workflow into a single ecosystem. Engineers can capture images directly from the camera, label datasets, train machine-learning models, and run them on the device—without relying on external infrastructure or cloud connectivity.
The goal was to simplify the machine-vision pipeline—from image capture to deployment—while keeping the system powerful enough for engineers working in real production environments.
Client
Koala42
Timeline
2025
Tools used
Figma
ChatGPT
Miro
Webflow
Jitter
Illustrator
Problem
Industrial machine-vision systems are traditionally difficult to implement and operate. Many existing tools are designed primarily for machine-learning specialists rather than engineers responsible for production lines.
This creates several challenges for manufacturing teams.
First, machine-learning workflows are fragmented across multiple tools. Capturing images, labeling datasets, training models, and deploying them often require different software environments, making the process slow and difficult to manage.
Second, data labeling interfaces are often overly technical. Engineers responsible for quality control typically lack deep ML expertise, yet many tools assume knowledge of machine-learning pipelines and dataset management.
Finally, monitoring deployed models in production is difficult. Once models are running on a production line, teams need visibility into performance, defect rates, and system health, but many systems provide little operational insight.
These issues create friction in adopting machine-vision solutions and slow down experimentation with AI in manufacturing environments.
My role
I worked closely with developers and the product manager to design the platform from the ground up.
My responsibility was to define the UX architecture and core product workflow, translating complex machine-learning processes into an interface that engineers could operate without deep AI expertise.
I designed the end-to-end user experience, from project creation and dataset preparation to model training and production monitoring. This included creating a design system, defining the visual identity, and producing marketing visuals for the product launch.
Because the system integrates both hardware and software, many design decisions required collaboration with engineers to understand technical constraints and ensure the interface aligned with the capabilities of the camera hardware.
Actions
The interface was designed around a simplified version of the machine-learning pipeline, turning complex workflows into a clear sequence of steps.
1. Simplified ML Pipeline
The product workflow follows a structured process:
Create project → Capture images → Label dataset → Train model → Run model
By structuring the platform around this pipeline, engineers can move through the entire machine-vision workflow without switching tools or environments.
This approach replaces fragmented workflows with a single integrated platform.
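The five pipeline stages can be sketched as a simple sequence of operations. The code below is purely illustrative; the function and field names are hypothetical and do not reflect the actual Robopipe API.

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative model of the five pipeline stages. All names here are
# hypothetical; they are not part of the real Robopipe API.

@dataclass
class Project:
    name: str
    images: list = field(default_factory=list)
    labels: dict = field(default_factory=dict)
    model: Optional[str] = None

def create_project(name: str) -> Project:
    return Project(name=name)

def capture_images(project: Project, frames: list) -> None:
    # In the real system, frames would come from the industrial camera.
    project.images.extend(frames)

def label_dataset(project: Project, annotations: dict) -> None:
    project.labels.update(annotations)

def train_model(project: Project) -> None:
    # Placeholder: on-device training would happen here.
    project.model = f"{project.name}-v1"

def run_model(project: Project) -> str:
    assert project.model, "train a model before running it"
    return f"running {project.model}"

# Walk through the full pipeline on dummy data.
p = create_project("bottle-inspection")
capture_images(p, ["frame-001.png", "frame-002.png"])
label_dataset(p, {"frame-001.png": "defect"})
train_model(p)
print(run_model(p))  # running bottle-inspection-v1
```

The point of the sketch is the linearity: each stage consumes the output of the previous one inside a single project object, which is what lets the interface present the workflow as one continuous sequence rather than separate tools.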
2. Integrated Camera Capture
A key capability of Robopipe is the ability to capture training data directly from the industrial camera.
Users can collect images from the production environment and immediately store them as datasets within the platform. This removes the need for manual data transfer and simplifies dataset preparation.
The capture interface also supports real-time preview and dataset organization, enabling engineers to quickly build training datasets from real production conditions.
3. Visual Data Labeling
To prepare datasets for training, I designed a visual labeling interface optimized for industrial inspection scenarios.
While the underlying logic builds on concepts from open-source tools such as Label Studio, the interface was simplified to focus on common inspection tasks like object detection and defect annotation.
This allowed engineers to label datasets quickly without needing deep familiarity with machine-learning tools.
4. Dataset Versioning
Training reliable models requires experimentation with different datasets.
To support this, the platform includes dataset versioning, allowing users to select image subsets and create training versions. Each version records the dataset configuration and preprocessing steps used during training.
This makes experimentation more transparent and easier to manage over time.
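A minimal sketch of what a dataset version might record, assuming an immutable snapshot of the image subset plus the ordered preprocessing steps. The data model shown here is an assumption for illustration, not Robopipe's actual schema.

```python
from dataclasses import dataclass
import hashlib

# Hypothetical sketch of dataset versioning: each version records the
# image subset and preprocessing steps used for a training run.
# Field names are illustrative, not Robopipe's actual data model.

@dataclass(frozen=True)
class DatasetVersion:
    images: tuple          # image identifiers in this subset
    preprocessing: tuple   # ordered preprocessing step names

    @property
    def fingerprint(self) -> str:
        # Stable hash so identical configurations are recognizable.
        payload = "|".join(self.images) + "::" + "|".join(self.preprocessing)
        return hashlib.sha256(payload.encode()).hexdigest()[:12]

v1 = DatasetVersion(
    images=("img-001.png", "img-002.png", "img-003.png"),
    preprocessing=("resize-640", "grayscale"),
)
v2 = DatasetVersion(images=v1.images, preprocessing=v1.preprocessing)

# Identical configuration -> identical fingerprint.
print(v1.fingerprint == v2.fingerprint)  # True
```

Making versions immutable and content-addressable is one way to get the transparency described above: two training runs can be compared by checking whether they used the same dataset configuration.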
5. Model Training and Deployment
Robopipe allows models to be trained and deployed directly within the platform ecosystem.
Training is executed on the device hardware, and the resulting models can run on the AI camera or within the system controller. This enables fully offline machine-vision deployments, which is critical for many industrial environments.
The interface provides visibility into training progress, performance metrics, and model outputs.
6. Real-Time Production Monitoring
Once deployed, models begin analyzing production data in real time.
To support operational visibility, I designed an analytics dashboard that presents key production metrics collected from the running models.
The dashboard shows metrics such as detected defects, object counts, and inspection results, giving engineers a quick overview of how the system performs in real production scenarios.
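The kind of aggregation behind such a dashboard can be sketched in a few lines. The per-frame record format below is an assumption for illustration, not Robopipe's actual event schema.

```python
# Illustrative aggregation of per-frame inspection results into the
# summary metrics a dashboard might display. The record format is
# an assumption, not Robopipe's actual event schema.

results = [
    {"frame": 1, "objects": 4, "defects": 0},
    {"frame": 2, "objects": 5, "defects": 1},
    {"frame": 3, "objects": 4, "defects": 2},
]

object_count = sum(r["objects"] for r in results)
defect_count = sum(r["defects"] for r in results)
defect_rate = defect_count / object_count

print(object_count, defect_count, round(defect_rate, 3))  # 13 3 0.231
```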
Results
The resulting interface turned a complex machine-learning workflow into a structured and approachable product experience.
The platform provides a unified environment for capturing data, preparing datasets, training models, and deploying them in production.
By consolidating these steps into a single interface, Robopipe simplifies the adoption of machine-vision systems in manufacturing environments and reduces the operational friction typically associated with AI deployments.
The integrated workflow also enables faster dataset preparation and experimentation, allowing engineers to iterate on models more efficiently.
Learnings
Designing Robopipe required translating machine-learning concepts into workflows that manufacturing engineers could operate confidently.
One of the key lessons was the importance of reducing conceptual complexity without removing technical power. Engineers needed a system that felt approachable while still supporting advanced experimentation with datasets and models.
The project also highlighted the unique challenges of designing software that interacts directly with industrial hardware and production environments, where reliability and clarity are critical.
Bridging the gap between AI technology and practical engineering workflows was central to making the platform usable in real-world manufacturing settings.