Agenda subject to change. More sessions coming soon.

Select Your Track:

Filter the conference agenda by selecting the track(s) you are interested in to see only sessions in that track. You can also filter by day.

MONDAY, October 10, 2022


8:00 am - 11:30 am
Room 101
CVP-Advanced
Advanced Optics for Vision

Stu Singer, Schneider Optics

Designed for the engineering professional, this course concentrates on real-world techniques for putting together optical systems that work. You’ll learn how to select proper lens components, plan the optomechanical layout (including system bends), and choose mounting techniques. Prior attendance at a Basic Optics course is encouraged, but not required.
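
For a taste of the first-order selection math a course like this covers, the sketch below estimates the focal length implied by a desired field of view and working distance (thin-lens approximation; all values are illustrative assumptions, not numbers from the session):

    # First-order lens selection: estimate focal length from working
    # distance, field of view, and sensor size (thin-lens approximation).
    sensor_width_mm = 8.8        # assumed 2/3" sensor, horizontal size
    fov_width_mm = 200.0         # assumed required horizontal field of view
    working_distance_mm = 500.0  # assumed lens-to-object distance

    m = sensor_width_mm / fov_width_mm            # optical magnification
    focal_mm = working_distance_mm * m / (1 + m)  # from 1/f = 1/do + 1/di

    print(f"magnification ~{m:.3f}; nearest stock lens to {focal_mm:.1f} mm")

In practice one would round to the nearest catalog focal length and re-check the resulting field of view.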

11:30 am - 12:30 pm
Break
12:30 pm - 2:30 pm
Room 101
CVP-Advanced
Advanced Vision Lighting

Steve King, OMRON Microscan Systems, Inc.

The advanced lighting session will dive deeper into the main machine vision lighting principles of illumination, reflection, emission, absorption and transmission and how these can be exploited to create high contrast images for inspection and code reading. The course will go through the fundamental concepts in greater detail, and then through the more advanced concepts of color, multi-light, photometric stereo and multispectral imaging, detailing all from both the theoretical and practical viewpoints.

3:00 pm - 5:00 pm
Room 101
CVP-Advanced
Advanced Camera & Image Sensor Technology

Steve Kinney, Smart Vision Lights

Explore the different levels of image quality at the sensor level. Details relating to quantum efficiency, dark noise, and signal-to-noise ratio will be discussed in depth. In addition to topics related to area scan cameras, the proper usage of line scan and TDI cameras will be reviewed. Sensor size classification and its relationship to the camera’s lens mount will be covered.
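
For readers who want the flavor of the math, here is a minimal shot-noise SNR model in the spirit of the topics listed (EMVA 1288-style noise budget); every number is an illustrative assumption:

    import math

    # Simplified camera SNR: signal electrons vs. the main noise sources
    # discussed above (shot noise, dark noise, read noise).
    photons = 10_000.0     # assumed photons hitting one pixel per exposure
    qe = 0.6               # assumed quantum efficiency
    dark_e = 25.0          # assumed dark-current electrons accumulated
    read_noise_e = 3.0     # assumed read noise (electrons RMS)

    signal_e = qe * photons
    noise_e = math.sqrt(signal_e + dark_e + read_noise_e**2)  # quadrature sum
    snr_db = 20 * math.log10(signal_e / noise_e)

    print(f"signal {signal_e:.0f} e-, noise {noise_e:.1f} e-, SNR {snr_db:.1f} dB")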

TUESDAY, October 11, 2022


8:00 am - 8:45 am
Room 104
Vision Integration
Crash Course in Machine Vision: Lights, Cameras & Connections!

Frantisek Jakubec, Balluff

Never implemented a vision application before? New grad, new job, new expectations? No technical experience? No problem! In this crash course for beginners or for those who are rusty, we will cover the basics of machine vision applications, common terms, must-have parts of a machine vision application and the possible traps to watch out for. Using vendor neutral stories, we will discuss the four major application areas of machine vision: robot guidance, inspection, gauging, and identification. We will also dive into the key components of a vision system and why they matter from the camera to the lights to mounting to the connectors and to standard communication protocols. Finally, we will discuss tips on how to select a system with real world application stories used to explain why you would pick one technology over the other. Presented in a casual, light, and easy style for any technical skill level, attendees should leave this course comfortable in their ability to understand & discuss machine vision applications with integrators & suppliers.

8:00 am - 8:45 am
Room 103
Applications & Technologies
Advances in 3D Laser Line Profilers: MultiPart and MultiPeak at WARP speed

Athinodoros Klipfel, AT - Automation Technology

Learn about the latest developments in 3D laser line triangulation sensors, including the speed-boosting Widely Advanced Rapid Profiling (WARP) technology and the new GenICam interface with the MultiPart and MultiPeak features. Their advantages are demonstrated on selected examples of 3D inspection applications.

8:00 am - 8:45 am
Room 105
Applications & Technologies
Seeing Distance and Chemical Composition with LiDAR and Hyperspectral Imaging

Slawomir Piatek, Hamamatsu

This session gives an overview of remote distance sensing and hyperspectral imaging, including LiDAR, and examines different approaches to hyperspectral imaging. It then describes a functioning device that combines LiDAR and hyperspectral imaging, and ends with examples of applications utilizing such a device.

8:00 am - 10:00 am
Room 101
CVP-Advanced
3D Vision System Development

Jim Anderson, SICK

Learn how advancements in 3-D camera technology are enabling new solutions for more applications than ever before. Review the many vision-based 3-D measurement techniques and which achieve the best results for different application scenarios. This session will provide real application techniques you can use in electronics, pharmaceutical, food & beverage, aerospace, automotive and many other industries.

8:00 am - 12:00 pm
Room 102
CVP-Basic
The Fundamentals of Machine Vision

David Dechow, Landing AI

You’ll learn all the basics, including how images are captured and transferred to the computer, the principles of lighting, and the common processing algorithms used by machine vision systems. Discover how to successfully implement machine vision and how to avoid common pitfalls during the implementation, launch and production phases. This is an ideal training course for people new to machine vision as well as a great refresher course for anyone with machine vision responsibilities.

9:00 am - 9:45 am
Keynote
Mobility, Perception, & Manipulation: The Building Blocks of Mobile Robots

Kevin Blankespoor, Boston Dynamics

Details to come.

10:00 am - 10:45 am
Room 104
Vision Integration
An Engineering Perspective for Opportunities in Machine Vision in Consumer Packaged Goods

Paul Thomas, Procter & Gamble

This talk will focus on opportunities and use cases for Machine Vision in Consumer Packaged Goods. Attend to hear how a Fortune 50 company leverages Machine Vision across multiple billion-dollar brands. We will start with the current state of Machine Vision applications, explaining some of the complexities and challenges of this technology. I will talk about the impact of planned and unplanned changes currently affecting our technology. I will share my perspective on emerging technologies such as Cloud and Edge Compute, as well as the successful use of Deep Learning in Manufacturing.

10:00 am - 10:45 am
Room 103
Vision & Robotics
How to Build Compact, Lightweight and Affordable Depth Perception and Object Detection for Robotic Guidance and Situational Awareness

Stephen Se, Teledyne FLIR

Today there are a number of off-the-shelf stereo depth perception systems on the market. However, depending on factors such as accuracy, baseline, size, weight, or cost, there are times system engineers need to build a custom system. In this talk, we will discuss key design drivers for using stereo vision and provide step-by-step instructions on how to build your own compact, lightweight, and affordable embedded stereo vision system. We will use off-the-shelf components and open-source software, and provide you with code snippets. The resulting system can perform depth perception as well as deep learning inference to detect objects in the scene for enhanced situational awareness, which is important in various robotics applications.
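
The talk promises its own code snippets; as a stand-in, here is a minimal OpenCV sketch of the core technique (disparity from a rectified stereo pair, then depth), where the file names, focal length, and baseline are all illustrative assumptions:

    import cv2
    import numpy as np

    # Depth from a rectified stereo pair. left.png / right.png are
    # hypothetical file names; fx and baseline come from calibration.
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                    blockSize=5)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM is fixed-point x16

    fx_px = 700.0      # assumed focal length in pixels
    baseline_m = 0.12  # assumed stereo baseline in meters

    with np.errstate(divide="ignore"):
        depth_m = fx_px * baseline_m / disparity  # Z = f * B / d
    depth_m[disparity <= 0] = 0.0                 # mask invalid matches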

10:00 am - 10:45 am
Room 105
Artificial Intelligence & IIoT
Easy Vision & AI Programming

Philip Freidin, IDS Imaging Development

Finding, counting, inspecting, sorting, or privacy masking is possible even without knowing the syntax of a specific programming language. With this new degree of ease, AI vision becomes accessible to everyone.

10:30 am - 12:30 pm
Room 101
CVP-Advanced
Introduction to Machine Learning

Andy Long, Cyth Systems, Inc.

Details to come.

11:00 am - 11:45 am
Room 104
Applications & Technologies
What is Hyperspectral Machine Vision and What Can It Do for Me?

Alexandre Lussier, Resonon

Hyperspectral imaging combines standard imaging with spectroscopy to provide enhanced vision capabilities that no other imaging technique can provide. It can be summarized as chemical analysis with pixel resolution. In this presentation, we will cover imaging, analysis, and machine vision applications with the goal of attendees walking away with a sense of what hyperspectral machine vision is, and what it can do to overcome their industrial or R&D challenges.

First, this talk will go over the basics of hyperspectral data acquisition, which involves recording a complete light spectrum for every pixel in an image. Second, the data must be properly analyzed to fully leverage the enormous wealth of information that comes from a hyperspectral scan. A bit of scientific background will be provided, in layman’s terms, to demonstrate how this data is utilized. Finally, we will demonstrate, with examples, how powerful learning algorithms with adequate computing power can classify materials pixel-by-pixel in real time on assembly lines or sorting conveyors.
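
To make "chemical analysis with pixel resolution" concrete, here is a minimal per-pixel classification sketch of the kind of pipeline described above; the file names, array shapes, and classifier choice are all illustrative assumptions:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    cube = np.load("cube.npy")      # hypothetical (H, W, B) reflectance cube
    labels = np.load("labels.npy")  # hypothetical (H, W) classes, 0 = unlabeled

    H, W, B = cube.shape
    spectra = cube.reshape(-1, B)   # one full spectrum per pixel
    y = labels.reshape(-1)

    train = y > 0                   # fit only on labeled pixels
    clf = RandomForestClassifier(n_estimators=100)
    clf.fit(spectra[train], y[train])

    class_map = clf.predict(spectra).reshape(H, W)  # material map, pixel-by-pixel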

11:00 am - 11:45 am
Room 105
Vision Integration
Industrial Machine Vision Applications – An End User Perspective

Daniel Loucks, Corning

Corning Incorporated is a US-based multinational technology company that manufactures specialty glass, ceramics, and related materials and technologies, primarily for industrial and scientific applications. The company was founded in 1851 and operates in the five major business sectors of display technologies, environmental technologies, life sciences, optical communications, and specialty materials. Gorilla® Glass is one of its more recognizable products, used by many smartphone makers, as is its novel glass packaging used for pharmaceuticals such as COVID-19 vaccines. Corning has won the National Medal of Technology and Innovation four times for its product and process innovations.

About 20 years ago, as digital cameras became practically available, Corning started developing and deploying machine vision systems on manufacturing lines. System complexity has ranged from simple smart cameras to involved integrated systems. The primary goal of these vision systems has been to improve the sensitivity and consistency of measurements, and the forces driving the evolution of our machine vision designs include improved performance and lower costs. There are currently about 300 engineers active at Corning in machine vision technology.

Some applications to be discussed in more detail include:

  • Ceramic filter substrate end-face inspection with an 86MP camera and a 700 mm telecentric lens (Corning Environmental Technologies)
  • LCD glass sheet inspection with high-speed line-scan cameras followed by micro-revisit: 10’ x 10’ glass inspected in 20 seconds, finding defects of 30 µm anywhere on the sample (Corning Display Technologies)
  • Measure of photo-elastic stress in glass sheet (Corning Display Technologies and others)
  • Optical fiber end-face inspection (Corning Optical Communications)
  • Optimized image quality and introduced DL into defect classification
  • Dimensional measurements of pharmaceutical glass containers (Corning Life Sciences)
  • In-line monitoring of coating spray patterns. (Corning Life Sciences)
  • Vision Guided CNC micro-machining (Corning Environmental Technologies)
  • Light field technology applications (Corning Display Technologies)
Other machine vision topics of discussion will include key takeaways from Corning’s 20 years of machine vision development, as well as some disconnects we see within the vision industry.


11:00 am - 11:45 am
Room 103
Artificial Intelligence & IIoT
Protect Your Brand with AI Decision-Support for Visual Inspection

Ed Goffin, Pleora Technologies

Details to come.

Noon - 1:30 pm
Break
1:30 pm - 2:15 pm
Room 104
Applications & Technologies
UV, VIS, IR: Improve Machine Vision Imaging Results with Optical Filters

Georgy Das, Midwest Optical Systems

Filters are an extremely simple and cost-effective optical component capable of producing exceptional imaging results in a wide range of vision applications, from microscopy and inspection to NDVI, near-IR, and SWIR imaging. Simply put, filters serve as a vital method for obtaining high-quality imaging results with greater consistency and repeatability from your vision system, regardless of what wavelength range the application operates in.

This presentation will provide a brief overview of the different machine vision filter categories commonly used in new, emerging vision applications. This includes dual- and triple-bandpass filters for use in day/night surveillance imaging and NDVI, protective window filters used to protect your lens in a wide range of environmental conditions, polarizers as an affordable method to combat a multitude of lighting challenges, and SWIR-range filters and their capabilities to enhance InGaAs camera technology in infrared imaging.

1:30 pm - 2:15 pm
Room 103
Artificial Intelligence & IIoT
The Future is Self-Aware Multi-Function Cables for Machine Vision

Jerome Taylor, Nortech Systems

A paradigm shift is occurring across the electronics industry: vision and sensor systems are rapidly advancing in intelligence, processing capability, and independence, leading to the need to communicate more data reliably. Added to this are increasing deployments of factory automation and monitoring systems. Future vision and sensor applications will leverage converging connector standards and consolidations in wiring that feature a combination of optical fiber and copper wires in multi-function cables. The net effect is a simplified architecture with improved performance, resulting in a more efficient supply chain and economies of scale. Today’s cable ecosystems for vision applications are not only more diverse but are demanding more data, longer distances, and greater reliability in a simplified interconnect architecture (smaller, lighter, faster).

At the center of it all are connected and aggregated data systems from assets and machines. That Big Data is analyzed to drive optimization and increased value. A new advantage will be the ability to monitor the health of our digital enterprise and predictively model asset maintenance. Cables will no longer be passive and unintelligent but will evolve to integrate intelligence and sensing within the cable to generate actionable diagnostics. Predictive maintenance becomes possible when diagnostic data can be statistically modeled to provide trends in performance degradation. This new advantage will allow us to proactively schedule system maintenance, avoid line-down interruptions, and benefit from more efficient supply chain and operational results.

In addition to the introduction of digital diagnostics embedded within the interconnect, the cable industry is following the trend to miniaturize electronics. The days of assembling multiple cables into a larger wiring harness to provide multiple functions from point to point are coming to an end. Rather than combine separate copper cables for RF data communication, sensing, control, power, etc., the cable industry is shrinking its products and increasing their value by transitioning to hybrid construction, replacing large portions of copper content with fiber optics.

1:30 pm - 2:15 pm
Room 105
Artificial Intelligence & IIoT
Supercharging Machine Vision - 3 Ways To Help Customers Realize the True Power of Image Data

Scott Everett, Eigen Innovations

Machine vision is a powerful solution for managing defects. Is it the key to preventing them? Eigen’s Scott Everett discusses three ways manufacturers can leverage image data to get a handle on product and process variability.

1:30 pm - 3:00 pm
Room 101
CVP-Advanced
Designing Linescan Vision Systems

Dale Deering, Teledyne DALSA

In this course you’ll learn about line-scan imaging and how a scanning technique can be beneficial for efficient image capture of moving objects. Topics cover components for line-scan image acquisition, when to use line-scan, how to achieve optimum results, and trends in the industry. When you complete this course, you will be able to recognize candidate applications for line-scan imaging and understand how to develop and implement line-scan solutions.
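
A back-of-the-envelope calculation of the kind such a course formalizes: matching line rate to web speed so pixels stay square. All values are illustrative assumptions:

    # Line-scan timing: how fast must the camera trigger so that each
    # scanned line covers exactly one pixel's worth of moving material?
    sensor_pixels = 4096         # assumed line-scan sensor width
    fov_mm = 400.0               # assumed field of view across the web
    web_speed_mm_s = 2000.0      # assumed conveyor/web speed

    pixel_size_on_object_mm = fov_mm / sensor_pixels          # ~0.098 mm/px
    line_rate_hz = web_speed_mm_s / pixel_size_on_object_mm   # ~20.5 kHz

    print(f"cross-web resolution {pixel_size_on_object_mm*1000:.0f} um/px, "
          f"required line rate {line_rate_hz/1000:.1f} kHz")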

1:30 pm - 5:30 pm
Room 102
CVP-Basic
Beginning Optics for Machine Vision

Gregory Hollows, Edmund Optics

This course teaches the fundamentals of optics for machine vision and robotics. Students will learn the fundamental parameters of an imaging system and why they are important, as well as how to choose a lens using first-order parameters. The course then teaches the concept and real-world applicability of the modulation transfer function (MTF) and how to manipulate an MTF with different variables to change things such as the depth of field. Lastly, the course will introduce telecentric lenses and how they differ from more traditional imaging optics.
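
One of the first-order trade-offs the course mentions, depth of field versus f-number, can be sketched numerically. The formula is the standard thin-lens approximation and all values are assumptions:

    # Approximate total depth of field for a machine vision lens:
    # DOF ~ 2 * N * c * (m + 1) / m^2, valid at moderate magnifications.
    f_number = 4.0        # assumed working f-number (N)
    coc_mm = 0.010        # assumed circle of confusion, ~2 pixel pitches
    magnification = 0.1   # assumed optical magnification (m)

    dof_mm = 2 * f_number * coc_mm * (magnification + 1) / magnification**2
    print(f"approximate depth of field: {dof_mm:.1f} mm")  # ~8.8 mm

Doubling the f-number roughly doubles the depth of field, at the cost of light and diffraction-limited sharpness, which is exactly the MTF trade-off the course explores.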

2:30 pm - 3:15 pm
Room 104
Applications & Technologies
Investigation of Component Technologies for SWIR Camera

Ian Blasch, Jabil

Non-automotive autonomous platforms designed to operate in outdoor environments, such as lawn mowers, last mile delivery robots, agricultural robots, forklifts, etc., require sensors, typically 3D cameras, for obstacle detection and collision avoidance. However, the majority of leading 3D cameras currently use active illumination sources operating at 850nm, 905nm, or 940nm, which are prone to failure in high ambient light conditions. At these wavelengths, the Sun's contribution of background noise leads to higher levels of depth error and, in some cases, outright failure.

Fortunately, the Earth's atmosphere absorbs photons at certain wavelengths leading to gaps in the Sun's spectral curve. By targeting these gaps, in the general regions of 1130nm and 1380nm, new 3D cameras can be designed with significantly improved performance in outdoor environments.

There is also an added benefit at shortwave infrared wavelengths: significantly increased laser eye safety. Laser eye safety thresholds in the SWIR region can be upwards of 100x greater than those at 940nm. The implication is that significantly more active illumination can be used to reduce depth error while creating a safer solution for users.

Jabil, collaborating with several leading component suppliers, designed multiple proof-of-concept iToF cameras to test the hypothesis of improved outdoor performance highlighted earlier. In this session, Jabil will present results from testing both an array of leading 3D cameras at 940nm and an array of proof-of-concepts using state-of-the-art component technologies designed for operation at 1130nm and 1380nm.

Jabil will follow the testing procedure proposed by ASTM Standards Committee E57 on 3D Imaging Systems, specifically WK72962, Standard Test Method for Evaluating Depth Image Quality.

2:30 pm - 3:15 pm
Room 103
Artificial Intelligence & IIoT
A New Paradigm: How Increasing Perception Decreases Costs

Garrett Place, ifm efector

Mobile robotics has proven to be a great answer to many current business problems. Unfortunately, only the largest manufacturers are implementing full solutions to date. The reason: entry price. Today’s mobile robot fleets can come with large price tags (both CapEx and maintenance), creating a challenging ROI calculation that only the largest manufacturers can solve. Robot developers are attempting to change this equation through creative business models (e.g., RaaS), and this can help, but is it sufficient to help mobile robots “cross the chasm”? Probably not, and it also shifts the risk primarily onto the robot manufacturer.

We look at the problem slightly differently. The autonomous car industry utilizes an amazing array of perception devices designed to better perceive the environment. This “perception” is required to achieve Level 5 autonomy. In other words, the environmental understanding provided by the additional perception allows for greater performance efficiency. They “know more” so they can “do more”. Can’t we use the same approach for industrial robotics? We believe the answer is yes. Adding better environmental awareness through perception will lead to more efficient robots. A more efficient robot can perform missions at a faster rate, leading to smaller fleets required to accomplish the desired business goals. Smaller fleets equal less CapEx (fewer vehicles required) and less overall maintenance over time, making the solution approachable for SMEs.

3:30 pm - 4:15 pm
Room 104
Vision & Robotics
3D-Perception-Based Applied AI Solutions Effectively Address Three Major Trends in Robotics Today

Michael Suppa, Roboception

This talk discusses how 3D-perception-based Applied AI – the smart coupling of advanced machine learning algorithms and classical image processing methodologies – is key to an efficient response to all three major trends in robotics today.

In the logistics domain, an area that is predestined for a higher degree of automation, manual work is still predominant due to the high cycle time and the infinite variation of unknown objects. In lab automation, another area that would hugely benefit from increased process automation, the challenge lies in the nature of the objects to be handled, as they are often fragile and/or transparent.

Even in the field of industrial production, where automation is very well advanced when it comes to standardized, repetitive tasks, accurate placement remains a challenge, and any changes w.r.t. the items to be handled, the processes or the production environment require complex adjustments of the automation solutions.

As manual labor becomes more expensive and skilled workers are increasingly scarce, factories seek to automate as many production processes as possible. Shop floor space must be used efficiently, processes should ideally run 24/7 and the often expansive classical feeder systems should be replaced by pick-from-bin processes.

In short: today’s robots are not smart enough for the next level of Industry 4.0. In order to support flexible automation, robots must be able to reliably detect and locate objects and human collaborators under varying illumination, workpiece types, and locations. The engineering of individual solutions is often costly and typically does not scale.

3D robot vision is key to achieving this increased flexibility that users across the various domains are looking for.

3D vision provides metric information in unstructured environments. It allows for flexible positioning of the cameras and is robust to illumination changes. Enriched with Applied AI methods, more complex automation problems are solved with minimal shop floor usage and increased flexibility. The proposed 3D vision system combines machine learning and classical methods in order to provide maximum reliability, robustness, and flexibility for optimizing production processes and minimizing downtimes. Multiple processing pipelines, easily calibrated on the robot system, can be configured, and data- and/or model-driven machine learning approaches significantly reduce parameterization effort, helping to fulfill the requirements of classical industrial automation as well as lab automation, agile production, and logistics.

En route to a more flexible automation, three major trends can be observed, and all of them can be addressed by 3D-perception-based Applied AI solutions.

  1. GOOD DATA, NOT BIG DATA: Andrew Ng states that “80% of the AI developer’s time is spent on data preparation”, and calls for good data, i.e. “Data that is defined consistently, covers the important cases, has timely feedback from production data, and is sized appropriately.”

    Simulations based on model data and enriched by real data create realistic ground-truth training data sets for machine learning so that robotic depalletizing, singulation, and bin picking can be implemented with a minimized on-site training time.
  2. PLUG-AND-PRODUCE: The perception component of an automation solution must be easily adjustable to changing requirements. Coupling advanced vision hardware with scalable software platforms enables plug-and-produce scenarios that can at all times be flexibly enhanced and adjusted by adding task-specific software packages and/or an individual user’s own software components. Resources are shared through the deployment concept, and the smart sensors allow computing resources to be distributed.
  3. EASE-OF-USE: With robot vision expertise being a scarce resource, usability for robot programmers with little to no vision knowledge is a true game-changer. A quick and easy set-up and adaptation thanks to intuitive user interfaces, management of basic software and add-on modules via the same interface, and simple ‘try out’ functionality for a quick assessment of selected settings all support this ease-of-use. The machine learning component reduces the parameter space for the user, further supporting ease-of-use.

At the same time, users remain skeptical. For them, the key to success is a reliable system. As one Danish integrator recently described, the “experience with camera systems in the past was not particularly good. We looked into various systems over the past decade, but did not find a single 3D vision solution that met our needs.” Their conclusion: “What works at a trade fair or in a manufacturer’s demonstration room is often not viable under real-world conditions.” The presentation will include a number of real-world use cases from North America and Europe (including one that was successfully implemented by the aforementioned integrator). Potentially, a user would be able to join the presentation in order to share their experience and insights live.

3:30 pm - 4:15 pm
Room 103
Artificial Intelligence & IIoT
Auto Deep Learning Vision Software: Leap to the Next Level of Deep Learning Vision Inspection

Hongsuk Lee, Neurocle

With an increasing demand for fast and cost-effective product inspection, the development of new machine vision systems is at its peak. This session covers the challenges of the current vision system industry and provides a novel solution: the Auto Deep Learning Algorithm.

Human visual inspection and rule-based visual inspection are still commonly used in the industry, yet there are clear limitations to these methods. The main drawback of human inspection is human error. As inspection standards depend on individual inspectors, accuracy and consistency are compromised. Moreover, machine speed will always exceed human speed. While rule-based inspection improves on these issues, it still falls short of an ideal inspection system; there are defects that lie outside of the rules and decisions that need human intervention.

Fortunately, more companies are adopting deep learning-based visual inspection, which enables consistent, reliable, and rapid inspection. As many companies are still in the early stages of implementing deep learning in their product inspection, they face common challenges such as the time and resources consumed in expert recruitment, processing time issues, and maintenance. One of the main difficulties in a DL inspection project comes from having to hire or outsource experts, which increases both the time and cost of the project. From data management to model maintenance, each step of the process is overly complicated and expensive. In particular, the iterative process of testing and retraining the model takes an unnecessary amount of time that can be reduced using the Auto Deep Learning Algorithm.

3:30 pm - 5:00 pm
Room 101
CVP-Advanced
High-Speed, Real-Time Machine Vision

Perry West, Automated Vision Systems, Inc.

This course gives you the insights to achieve the speed and performance you need in your vision systems including system architecture, programming tips, and common challenges. You will understand the ways high-speed is determined and the different real-time performance requirements. The course follows two vision system designs to see how high-speed and real-time techniques are put into practice.

WEDNESDAY, October 12, 2022


8:00 am - 8:45 am
Room 104
Applications & Technologies
The Benefits of Active Optical Cables for High Performance Video Applications

Mark Jakusovszky, Silicon Line GmbH

Higher-resolution imagers and higher frame rates have put a tremendous strain on traditional copper video cables, resulting in shorter, thicker cables and elaborate, power-hungry active cabling solutions. Meanwhile, fiber-optic Active Optical Cables (AOCs) have improved tremendously in the past 5 years into a reasonably priced, nearly transparent replacement for copper cables. AOCs have begun to replace copper cables in HDMI, USB, DisplayPort, and MIPI systems for AR/VR, gaming, computing, medical imaging, machine vision, and automotive applications due to their advantages in length, weight, size, EMI, signal integrity, and reliability. New fiber technology survives millions of bends at <5mm and elevated temperatures for extended periods, and has been shown to be more reliable and more suitable for emerging vision systems. Learn about the benefits of AOCs and the latest advances in AOC technology that you can apply to your next high-performance vision system design.

8:00 am - 8:45 am
Room 103
Artificial Intelligence & IIoT
Under-Served Problems in Machine Vision For Warehouse Pick-and-Place Logistics

Dan Grollman, Plus One Robotics

When developing industrial machine-learning based vision solutions, collected datasets are often kept private in the belief that they represent competitive advantage, some ‘secret sauce’ that, if released, would allow others to leapfrog ahead using off-the-shelf learning models. This, in turn, means that external researchers, including most academics, have to rely on assumptions about the data that are often overly simplified and false, such as that the input distribution is fixed, or that there is access to arbitrary amounts of computational power and data curation effort. Systems built on these assumptions do not transfer well into actual production environments, ultimately causing additional work that may have been avoided if the data had been public originally. In this session, Dr. Dan Grollman will describe the “under-served problems” that currently result from applying machine-learning based vision to warehouse pick-and-place logistics and the importance of providing academic researchers with the industry-standard data and evaluative processes they need to address them.

8:00 am - 8:45 am
Room 105
Vision & Robotics
Robotic Integration for Distribution & Fulfilment Workflows

Bruce Baring, Tompkins Robotics

Explore how different robotic technologies serving different operational functions can be integrated to provide enhanced performance and faster ROI.

8:00 am - 10:30 am
Room 102
CVP-Basic
Beginning Lighting for Machine Vision

Daryl Martin, Advanced Illumination

This course focuses on providing the attendee with a background and a basic set of tools to apply a more rigorous analytical approach to solving lighting applications. Topics covered include an overview of light, lighting geometry and structure, color tools, and filters, illustrated by examples and graphics. We also briefly address LED technology, safety, radiant power measurements, and illuminator strobing, and preview advanced lighting topics such as non-visible and geometry techniques.

8:00 am - 10:00 am
Room 101
CVP-Advanced
Advanced Vision Guided Robotics

David Bruce, FANUC America Corporation

This course covers 2D & 3D machine vision camera calibration for machine guidance, including for industrial robots, together with basic information on the types of industrial robots in use today. It also covers methods for representing 3D positional data for both machine vision and industrial robotics, and how to ensure a machine vision system provides useful positional data to an industrial robot in a Vision Guided Robot (VGR) application. The course also presents how to implement fixed-mounted and robot-mounted 2D/3D VGR applications, with examples of each.

9:00 am - 9:45 am
Keynote
From the Pitching Mound to the Traffic Light: Machine Vision and AI Beyond the Factory

Jumbi Edulbehram, NVIDIA

The universe of applications for machine vision and intelligent video analytics is growing rapidly. These new use cases open up massive opportunities for application providers and system integrators. This presentation will explore some key use cases in intelligent transportation and sports. Cities around the world are deploying AI-enabled computer vision to tackle their most pressing challenges in traffic management, city planning, and mobility; from intelligent intersections and monetized, managed curbside real estate to smarter tollways, cities increasingly rely on AI. AI is also making sports and athletics more entertaining, accessible, and competitive, transforming the sporting experience for athletes and fans alike, including how stadiums operate, how sports broadcasting is improved, and how coaches measure player performance. We will share how a large ecosystem of application providers is developing and deploying these applications.

10:00 am - 10:45 am
Room 104
Applications & Technologies
There is a World Between Frames: Event Cameras Come of Age for Machine Vision in Industrial Automation

Jacob Vingless, Prophesee

Understand how Prophesee Event-Based Metavision can be applied to achieve new performance and efficiency levels in inspection, counting, predictive maintenance and process monitoring.

10:00 am - 10:45 am
Room 103
Artificial Intelligence & IIoT
Manufacturing Through a Very Different Lens

Prasad Akella, Drishti

Drishti is a provider of AI-powered video analytics technology that gives visibility and insights for manual assembly line improvement.

10:00 am - 10:45 am
Room 105
Vision Integration
How LED Multispectral Imaging Can Replace More Complex Hyperspectral Imaging for Applications Characterized by Slowly Varying Spectra

Thomas Brukilacchio, Innovations In Optics

Recent advances in high optical power LED light sources, in combination with newly available InGaAs image sensors, have enabled multispectral imaging to replace more complex and expensive hyperspectral imaging systems in many applications for objects characterized by slowly varying spectral reflectance.

10:30 am - 12:30 pm
Room 101
CVP-Advanced
Non-Visible Imaging: Infrared Technology and Applications

Martin Ettenberg, Princeton Infrared Technologies, Inc.

Non-visible imaging methods offer unique benefits for a variety of vision tasks. In this session, you’ll learn more about infrared and thermal techniques and better understand if non-visible imaging solutions are right for your specific needs.

11:00 am - 11:45 am
Room 104
Vision Integration
Hyperspectral Imaging Success Stories

Mathieu Marmion, SPECIM

Vision systems play a crucial role in providing industrial systems with relevant data in order to optimize processes and ensure high production quality. A large panel of techniques and methods is available today. Recent developments in photonics enable integrators and OEMs to sort accurately and to develop reliable inspection systems. Hyperspectral imaging, a non-destructive method, has been well known for decades, but faster and cheaper sensors now make the method attractive for industry. In fact, hyperspectral imaging combines imaging and spectroscopy, offering new insights over traditional systems mostly based on X-rays, RGB, or multispectral sensors. The aim of this presentation is to show various examples where such technology has been successfully integrated. The application fields covered are pharmaceutics, phenotyping, the food industry, and recycling. Qualitative and quantitative applications will be highlighted.

The presentation will also focus on a new processing platform that makes it easy to integrate hyperspectral cameras into various industrial environments. This platform has been designed to fulfill industrial requirements in terms of speed and robustness, and enables faster integration of the technology. No extensive knowledge of data analysis or chemometrics is required. The user-friendly interface allows a quick adoption and deployment of the technology.

This presentation will not be commercially oriented, but will depict the advantages of hyperspectral technology through success stories, also giving an integration path to potential users.

11:00 am - 11:45 am
Room 103
Artificial Intelligence & IIoT
Using A Data-Driven Approach to Visual Inspections to Beat Legacy Machine Vision Systems

Krishna Gopalakrishnan, Elementary Robotics

This talk will discuss machine vision systems powered by machine learning. It will review applications that new cloud-based ML vision systems are able to solve, allowing customers to get full traceability and close the loop on their inspection processes. The speaker will also discuss how this compares to legacy machine vision and why a data-driven approach is more scalable, lower cost, and higher performing than legacy systems.

11:00 am - 11:45 am
Room 105
Vision & Robotics
Photo-Realistic Simulation for the Design and Development of Robotics Systems

Erin Rapacki, NVIDIA

Details to come.

Noon - 12:45 pm
Keynote
Optimizing Cloud and Edge - Vision AI/ML Solutions at Scale

Mark Hanson, Sony Electronics – Semiconductor Solutions of America
Rajat Gupta, Microsoft

Most Visual AI and Machine Learning industry effort today is focused on the development (R&D) of the solution set – proving the capability and accuracy of AI/ML. There hasn’t been a lot of effort on how to effectively and efficiently deploy solutions for customers in the field at scale – addressing the practical needs of businesses. Businesses don’t have unlimited resources and can’t justify unlimited cameras, installation costs, networking bandwidth, power consumption, or cloud. This keynote will explore how the industry can scale solutions – a hybrid mode of edge processing plus cloud – to address both the capability and the practical needs of businesses.

Noon - 1:30 pm
Break
1:30 pm - 2:15 pm
Room 104
Applications & Technologies
How Infrared Imaging with IOT is Revolutionizing Early Fire Warning Systems

David C. Bursell, MoviTherm

This presentation explains how industrial facilities are finding ways to mitigate and prevent fire damage by implementing infrared (IR) camera technologies and the Internet of Things (IoT) for early fire detection. Fire safety is an area that realizes the benefits of IoT when combined with IR camera systems. By warning earlier on the pathway to ignition, industrial facility managers avert costly and potentially life-threatening fires before they start and spread. IR cameras are the first to alert before a fire develops. They “see” a warming-up of material early in the fire development process, before smoke particles or flames form. These warming materials appear as hot spots in a thermal image and are quantified with regions of interest that report temperature values. By connecting IR cameras and other detection sensors that alert at different stages of fire development, potential fires can more readily be detected and prevented.
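
A minimal sketch of the hot-spot logic described above, assuming a radiometric thermal frame already converted to degrees Celsius; the threshold and file name are illustrative assumptions:

    import cv2
    import numpy as np

    temps_c = np.load("thermal_frame.npy")  # hypothetical (H, W) float array, deg C
    ALERT_C = 80.0                          # assumed early-warning threshold

    mask = (temps_c > ALERT_C).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)

    for i in range(1, n):                   # label 0 is background
        region = temps_c[labels == i]
        print(f"hot spot {i}: {stats[i, cv2.CC_STAT_AREA]} px, "
              f"max {region.max():.1f} C")  # region-of-interest report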

1:30 pm - 2:15 pm
Room 103
Artificial Intelligence & IIoT
A Natural Combination? Barcode Reading and Deep Learning

Jim Witherspoon, Zebra Technologies

At first glance, barcode reading and deep learning fall on opposite ends of the innovation spectrum. Barcode reading is foundational technology that has been around for decades and is often overlooked as a sub-class of vision applications. Today however, thanks to drastic market shifts and the surge in deep learning, barcode reading is regaining some of its previous luster. Come learn how barcode reading and deep learning applications coexist and add mutual value. Hear about the latest trends in barcode reading and take home with you some small steps you can take to maximize the value of your processes and technology as the information age continues to evolve.

1:30 pm - 2:15 pm
Room 105
Applications & Technologies
Color in Motion - The Next Big Thing in 3D Machine Vision

Kurt Häusler, Photoneo

Photoneo presents the next silver bullet in 3D machine vision. Get to know the most efficient technology for real-time color 3D point cloud creation of moving scenes in high quality.

2:00 pm - 5:00 pm
Room 102
CVP-Basic
The Fundamentals of Camera & Image Sensor Technology

Kevin McCabe, IDS Imaging Development Systems, Inc.

Gain an understanding of digital camera principles. Find out about different camera types and their capabilities. Learn about what digital interfaces these cameras use, from Gigabit Ethernet to Camera Link HS. Other topics include how image sensors capture light, basic understanding of image quality terms, digital camera parameterization, and the capabilities of monochrome versus color sensors.

2:00 pm - 5:00 pm
Room 101
CVP-Advanced
Advanced Color Machine Vision & Applications

Romik Chatterjee, Graftek Imaging, Inc.

Explore the different levels of image quality at the sensor level. Details relating to quantum efficiency, dark noise, and signal-to-noise ratio will be discussed in depth. In addition to topics related to area scan cameras, the proper usage of line scan and TDI cameras will be reviewed. Sensor size classification and its relationship to the camera’s lens mount will be covered.

2:30 pm - 3:15 pm
Room 104
Vision & Robotics
The Sweet Spot: Improving Safety and Productivity Together

Patrick Sobalvarro, Veo Robotics

Productivity and safety are typically contrasting concepts in the manufacturing world as it is traditionally practiced. For decades, organizations have had to sacrifice one for the other, with traditional warehouse safeguarding methods contributing to slower processes and a hit to output.

Ironically, this reality has worsened since manufacturers have increased their use of robotics and automation in warehouses. Although these technologies are designed to boost productivity and scale workplace efficiencies, the more robots you have working alongside humans in a workplace, the more stringent safety measures must be. And unfortunately, the safeguarding methods most manufacturers deploy today – like keeping robots caged – are highly restrictive and inefficient.

In this presentation, Patrick Sobalvarro, President, CEO, and Co-founder of industrial automation company Veo Robotics, will outline the safety barriers reversing the intended impact of automation within warehouses today and contributing to an inflexible and costly manufacturing model.

He will explore how as organizations increase their investments in robots and automation to keep up with high market demand, they are hindering productivity and ROI by deploying these technologies alongside outdated safeguarding methods. For example, a recent study by Patrick’s company Veo Robotics found that fully-fenced environments (41%) are still the most used method of safeguarding in manufacturing facilities today. Fenced environments do not provide the speed, efficiency, and flexibility that current modern manufacturing environments demand.

Patrick will educate his audience on the dangers of relying on these methods and share his tips for safe and productive human-and-robot collaboration. These include:

  • How to strike the right balance between safety and productivity in order to break down old safeguarding barriers and create something new. Current macroeconomic conditions require manufacturers to increase output while keeping costs in check, meaning they must find ways for humans and machines to work together safely.
  • When to utilize new 3D safeguarding methods like Speed and Separation Monitoring (SSM) to break robots free from their cages and enable warehouse workers to keep safe in collaborative robot applications. SSM, which follows standards set by the International Organization for Standardization, such as ISO 10218-1 and ISO/TS 15066, endows any robot with spatial awareness to avoid people and obstacles around it. So instead of preventing warehouse managers from automating even the most basic tasks and thus wasting their investment in automation, SSM overcomes these limitations while opening up tremendous new opportunities for human-robot collaboration and cage-free environments. A simplified sketch of the SSM separation-distance calculation follows this list.
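
For the curious, the core SSM computation has the general shape sketched below. This is a simplified reading of the ISO/TS 15066 protective separation distance, and every number is an illustrative assumption, not safety guidance:

    # Simplified protective separation distance in the spirit of
    # ISO/TS 15066 speed-and-separation monitoring: human travel during
    # reaction + stopping time, plus robot travel during reaction time,
    # plus robot stopping distance and measurement uncertainties.
    v_human_m_s = 1.6      # assumed human approach speed
    v_robot_m_s = 1.0      # assumed robot speed toward the human
    t_react_s = 0.1        # assumed sensing + controller reaction time
    t_stop_s = 0.3         # assumed robot stopping time
    stop_dist_m = 0.15     # assumed robot stopping distance
    uncertainty_m = 0.10   # assumed position-measurement uncertainty
    intrusion_m = 0.05     # assumed intrusion distance constant

    s_min = (v_human_m_s * (t_react_s + t_stop_s)
             + v_robot_m_s * t_react_s
             + stop_dist_m + uncertainty_m + intrusion_m)
    print(f"minimum protective separation ~ {s_min:.2f} m")  # ~1.04 m

A real deployment would take these terms from the robot's safety-rated data and the sensor's validated latency, not from constants like these.
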
2:30 pm - 3:15 pm
Room 103
Applications & Technologies
3D Machine Vision Standards for Manufacturing Applications and Beyond

Kamel Saidi, National Institute of Standards and Technology

The session will introduce the work that the National Institute of Standards and Technology (NIST) and ASTM International Committee E57 on 3D Imaging Systems have been doing to develop voluntary industry standards for the use of 3D imaging systems in manufacturing applications. NIST’s Sensing and Perception Systems Group is focused on the application of systems such as depth cameras, time-of-flight sensors, and structured-light systems to guidance of robotic operations such as bin picking and assembly. Together with participants from across industry, NIST has developed a standards roadmap that prioritizes the needs of the industry, and ASTM E57 has established four working groups, led by industry, that are working on various standards to address these needs. The four ASTM E57 efforts are currently focused on developing metrics and tests for understanding the basic performance of 3D machine vision systems, their performance as parts of robotic bin picking systems, and the best practices for selecting appropriate 3D machine vision systems for specific robotic applications. This session will also showcase the most recent results of these efforts and will present the latest thinking about how 3D machine vision systems are tested and selected.

3:30 pm - 4:15 pm
Room 104
Vision Integration
Using Open Source Technology to Solve Tough Inspection Problems

Chris Aden, LMI Technologies

The number of innovations in AI technology has been growing at an amazing clip since the mid-2010s. Unlike the early pioneers of computer vision technology, who invested in and often patented rules-based computer vision algorithms featured in expensive do-it-yourself kits, the innovators of modern AI technology have chosen to deliver not just ideas, but the software itself to the community at large. In this session we'll review the trends in open source AI technology, examine some state-of-the-art deep learning models, and show examples of how these models are being used to solve tough factory applications. We'll also introduce you to the LMI Technologies open platform that you can use to deploy your own optimized models, or models that our team has developed to solve your inspection problems.

3:30 pm - 4:15 pm
Room 103
Artificial Intelligence & IIoT
Fake it Until You Make it: Synthetic Image Generation for Training Deep Learning Models for Consumer Packaged Goods

Laura Pahren, Procter & Gamble

Constraints on image collection in manufacturing have become a driver for synthetic images, whether the constraint is disruption to manufacturing itself or the introduction of new product types. For deep learning applications, the intelligence is artificial, so why not artificial images too? Synthetic images can offer additional model robustness in terms of image capture, defect variability, and even background independence. We will explore the wonderful world of synthetic data and considerations as to which strategy can best benefit your future use cases. This talk won’t be bogus, but we might create some images that may be.
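
As a flavor of the simplest compositing-style strategy (one of several approaches a talk like this might cover; the file names, blend weights, and transforms here are all illustrative assumptions), a defect patch can be pasted onto clean product images with randomized placement:

    import cv2
    import numpy as np

    rng = np.random.default_rng(0)
    product = cv2.imread("clean_product.png")  # hypothetical clean sample
    defect = cv2.imread("defect_patch.png")    # hypothetical defect crop (smaller than product)

    # Randomize the defect's scale, rotation, and position, then blend it
    # onto the clean image to synthesize a labeled "defective" sample.
    scale = rng.uniform(0.5, 1.5)
    angle = rng.uniform(0, 360)
    patch = cv2.resize(defect, None, fx=scale, fy=scale)
    center = (patch.shape[1] // 2, patch.shape[0] // 2)
    rot = cv2.getRotationMatrix2D(center, angle, 1.0)
    patch = cv2.warpAffine(patch, rot, (patch.shape[1], patch.shape[0]))

    h, w = patch.shape[:2]
    y = rng.integers(0, product.shape[0] - h)
    x = rng.integers(0, product.shape[1] - w)
    roi = product[y:y + h, x:x + w]
    product[y:y + h, x:x + w] = cv2.addWeighted(roi, 0.3, patch, 0.7, 0)

    cv2.imwrite("synthetic_defect.png", product)  # plus a matching label/mask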

THURSDAY, October 13, 2022


8:00 am - 8:45 am
Room 103
Vision Integration
Seeing the Invisible Helps with Quality Control

Anupriya Ashok Balikai, Spookfish Innovations

Visualising a prism is sure to take us back to high school physics, where we learnt how white light contains the entire spectrum of visible light. In this session we will see what lies beyond both ends of the visible spectrum and how this can be applied to machine vision and the manufacturing industry.

Long wave infrared has long been used in applications related to real-time monitoring and stockpile monitoring, looking for potential fire eruptions in scenarios such as buildings, ships, etc.

Recent developments in long and short wave infrared technology have opened up many more opportunities in the machine vision space. For instance, the basic version of Spookfish’s seal inspection machine, named Snipe, works with long wave infrared cameras to ensure seal integrity in packaging. At an industrial scale, we have made use of MWIR and SWIR cameras on packaging lines as well, to monitor complex packaging and seals. The higher accuracy provided by MWIR cameras makes it possible to completely eliminate other leak-testing mechanisms, which depend on random sampling and are destructive and time-consuming. We will discuss two cases here - the first being aluminium bags in the food industry, and the second induction-sealed bottles in the pharma industry.

The application of SWIR and NIR cameras opens up a very exciting opportunity in the agricultural, food, and pharmaceutical industries. SWIR cameras are capable of highlighting differences and non-uniformity in moisture content. NIR cameras, on the other hand, are capable of identifying the homogeneity of any item by sampling pixels on its surface, even if it is a compound containing several mixtures - for example, a pharmaceutical pill. There are two ways to do this - keeping the camera at a distance to observe the spectral characteristics of the surface, or, with NIR, using a probe inserted into a mixture. The probe relays information about the homogeneity of the mixture and helps with online checks in processes like blending.

To summarise, this session will look, through examples, at the applicability of various bands of the invisible light spectrum across several sectors including pharma, food, and healthcare, and at the nuances of putting together both the optics and the cameras required to achieve this.

Furthermore, this session will help manufacturers gain perspective on how machine vision can help them be more sustainable. 100% quality control is now achievable and needs to be applied across several processes using the new technologies available.

8:00 am - 8:45 am
Room 104
Artificial Intelligence & IIoT
A Low-Code Approach to the Digital Economy

José Pedro Ferreira, Neadvance

Throughout history, humankind has always been dependent on technology. People used the technology they had available to help make their lives easier.

Today we are living the exciting journey of the fourth industrial revolution, where data, connectivity, and cyber-physical systems are taking a growing place in our lives.

To hide complexity, increase quality, reduce the need for human effort, and reduce response times, we need to bring reusability and low-code principles from traditional software development to support us in dealing with the current revolution.

When we apply low-code techniques, it is possible to reduce time-to-line while increasing quality. Merging the power of computer vision, robotics, and machine learning in a simple, easy-to-use interface democratizes a powerful set of tools for non-experts while maintaining the quality and the control required for their applications.

Low code-based approaches should make systems development so easy and fast that even a teenager can do it!

8:00 am - 10:00 am
Room 102
CVP-Basic
Image Processing Fundamentals

Romik Chatterjee, Graftek Imaging, Inc.

In this class, you’ll gain an understanding of how machine vision and imaging algorithms work. These fundamentals will be used to show a variety of ways to solve real-world industry application challenges. Attendees will be exposed to the strength and capabilities that software can provide while gaining an understanding of complete imaging system requirements needed to optimize their application needs.

9:00 am - 9:45 am
Room 104
Vision & Robotics
Optimizing Your Robot Deployments to Help Solve the Labor Shortage

Kevin Carlin, Realtime Robotics

The key success factor in the manufacturing and logistics industries is rate – how do you do things faster, better, and more efficiently, without paying for more employees? Over the next decade, 4.6 million manufacturing jobs will likely be needed, and at least 2.4 million of them are expected to go unfilled. That is why robots are so important. But industrial optimization is hard. Optimizing your operations for effective robot deployment isn’t as easy as purchasing a robot and hitting the “go” button. Several actions should be taken to streamline and optimize your deployments, including: better simulation tools (enabling better facility mapping and planning so you use more of the available space); collision avoidance technology (awareness of surroundings); monitoring and alerting systems (flagging issues before they happen); and multi-task programming within your robots (speeding redeployment and limiting cycle times).

9:00 am - 9:45 am
Room 103
Applications & Technologies
How Embedded Vision is Revolutionizing Industrial Automation

Maharajan Veerabahu, e-con Systems

As machines become more powerful, they inherit the capacity to process more information from their surroundings via sensors. This ability is leveraged by modern factories and industries to bring more computing from the cloud to the edge. In this session, you will learn how traditional machine vision system architectures have always played a key role in keeping factory automation running smoothly and how a transition in this architecture will define how modern industries operate. This session will help you understand the changes happening in the industry and prepare you for the transition.

9:00 am - 11:00 am
Room 101
CVP-Advanced
Metrology & 2D Calibration Techniques

David Michael, ShipIn Systems

Participants will gain an understanding of techniques for creating systems that yield reliable and repeatable measurement results. Practices for proper calibration of imaging systems, ranging from appropriate usage of targets to accurate algorithm deployment, will be discussed. How to manage images correctly to create repeatable results will be reviewed. Anyone who is developing metrology systems or has a need for accurate measurements will benefit from this curriculum.

10:00 am - 10:45 am
Room 104
Applications & Technologies
Calibrated and Characterized Lenses for Improved Image Quality

Mark Peterson, Theia Technologies

Many applications have benefited from improved performance and speed by using higher-resolution image sensors, and GPUs and ASICs that allow higher data processing speed and image throughput. However, more data from higher-resolution sensors is not automatically useful; its value depends greatly on the quality of the data. Just as image sensors and image processing pipelines have been, lenses can be characterized to understand their limitations and optimized so the system integrator can design the sharpest optical system.

Theia’s calibrated lenses are individually characterized and calibrated to provide a rich array of data sets, depending on the lens features, including measured MTF resolution performance, focal length, F/#, geometric distortion, relative illumination, and lateral color, as well as custom calibrations. The calibrated data is provided so the user can optimize image quality in real time, possibly without the need for difficult or costly field calibration fixtures. Calibrated data is a combination of data unique to the individual lens and design data common across all lenses of the same model. This data is available for download from a cloud database, along with application notes describing options for using the data. The system integrator can access the data in a non-proprietary format from the integrator’s embedded computer, which is responsible for image processing and interpretation, and can optimize the image with individual lens data in their own applications.

There are many points in an optical imaging system that can limit performance. Applying the exact image characterization parameters from each individual lens can improve image quality. The improvement from each characterization parameter may be small, but many incremental changes across multiple imaging parameters can lead to a noticeable overall improvement. This paper will examine several examples of improvement.
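
A sketch of how downloaded per-lens data might be applied, assuming a hypothetical JSON layout with OpenCV-style intrinsics and distortion coefficients (Theia's actual data format is not reproduced here):

    import json
    import cv2
    import numpy as np

    # Hypothetical per-lens calibration file with OpenCV-style parameters.
    with open("lens_serial_12345.json") as f:
        cal = json.load(f)

    K = np.array(cal["camera_matrix"])         # 3x3 intrinsics
    dist = np.array(cal["distortion_coeffs"])  # k1, k2, p1, p2, k3

    img = cv2.imread("raw_frame.png")
    h, w = img.shape[:2]
    newK, _ = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), alpha=0)
    corrected = cv2.undistort(img, K, dist, None, newK)
    cv2.imwrite("corrected_frame.png", corrected)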

10:00 am - 10:45 am
Room 103
Artificial Intelligence & IIoT
Optimize Overall Equipment Effectiveness of any Vision System

Neil Farrow, DataSensing - A division of Datalogic

Neil Farrow

DataSensing - A division of Datalogic

Overall Equipment Effectiveness (OEE) is a proven way to measure and analyze a manufacturing process. Tracking OEE metrics lets production stakeholders find and improve areas of their process for maximum return on investment (ROI). The session will review a Pareto chart of smart-camera downtime reasons from actual manufacturing environments, followed by proven real-world methods to attack these root causes of downtime. It will cover methods that can be added to almost any smart camera or vision system, highlight features to look for in new smart cameras that help reduce camera configuration time, and explain how to select a camera with these features to reduce configuration, startup, and testing work-hours.
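
For reference, OEE is conventionally computed as Availability × Performance × Quality. Below is a minimal worked sketch; all shift numbers are made up for illustration.

```python
# Minimal worked sketch of the standard OEE calculation; the shift numbers
# below are made up for illustration.
planned_minutes = 480      # one 8-hour shift
downtime_minutes = 45      # e.g., smart-camera faults and reconfiguration
ideal_rate_ppm = 60        # ideal throughput, parts per minute
total_parts = 24_000
good_parts = 23_400

run_minutes = planned_minutes - downtime_minutes
availability = run_minutes / planned_minutes                 # 0.906
performance = total_parts / (run_minutes * ideal_rate_ppm)   # 0.920
quality = good_parts / total_parts                           # 0.975

oee = availability * performance * quality
print(f"OEE = {oee:.1%}")   # ~81.3% in this example
```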

10:30 am - 1:00 pm
Room 102
CVP-Basic
Vision System Design
  • David Dechow, Landing AI

  • Perry West, Automated Vision Systems, Inc.

David Dechow

Landing AI

Perry West

Automated Vision Systems, Inc.

Ultimately the value of any machine vision technology lies in the successful implementation of a systems solution for a task in an automated process. The knowledge gained in cameras, lighting, optics, and image processing is the foundation required to move on to the successful design of a working machine vision system. In this course, you will learn the role of machine vision systems design in the broader task of systems integration and the general steps and strategies involved in the design of a vision system, including selection of components in typical use cases, and specification of the implementation of those components. The information provided will enable you to participate in and support a team delivering practical machine vision to plant floor automation.

11:00 am - 11:45 am
Room 104
Vision Integration
Color Correction in Multi- and Hyperspectral Imaging Lenses

Andreas Platz, Sill Optics

Andreas Platz

Sill Optics

Color information is becoming more and more important in many machine vision applications, such as in the biomedical, food, and drug industries, but also for precise measurement in the semiconductor industry, where the wavelength provides additional information beyond the dimensions alone.

The session gives an overview of the different opportunities and challenges in color correction of imaging lenses in light of your project’s needs. It covers different sensor techniques, e.g. Bayer pattern, multi-sensor cameras and multi-line cameras, SWIR designs, and special hyperspectral imaging setups.

In many applications, defining the lens specifications close to physical limitations, together with production feasibility and lens customization, leads to a significant improvement for the color imaging application.
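
As a small, hedged illustration of one sensor technique mentioned above, the sketch below demosaics a raw Bayer-pattern frame with OpenCV; the assumed RGGB layout and file names are placeholders, since the correct conversion code depends on the actual sensor.

```python
# Illustrative only: the Bayer pattern code and file names are assumptions;
# the correct cv2.COLOR_Bayer* constant depends on the sensor's layout.
import cv2

raw = cv2.imread("bayer_frame.png", cv2.IMREAD_GRAYSCALE)  # single-channel raw
bgr = cv2.cvtColor(raw, cv2.COLOR_BayerRG2BGR)             # demosaic to color
cv2.imwrite("demosaiced.png", bgr)
```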

11:00 am - 11:45 am
Room 103
Artificial Intelligence & IIoT
Quality Inspection Using Machine Vision with Artificial Intelligence

Tad A.G. Newman, Omron

Tad A.G. Newman

Omron

The breadth and scope of machine vision has expanded significantly over the years along with advances in technology, making it a ubiquitous and critical application for today’s world-class manufacturers. The economic justification for implementing unit-level quality inspection in manufacturing operations is substantial, and with the addition of artificial intelligence to vision inspection, it has become easier to solve the more challenging applications without the use of specialized engineering staff. In this presentation, we will briefly discuss the internal justification for and benefits of machine vision quality inspection across multiple industries. We will also explore the evolution of artificial intelligence, machine learning, and deep learning within vision systems and smart cameras, and explain why the latest generation of AI implementations not only speeds up setup time but can process images better and faster than the human eye. Regarding specific technologies used in vision inspection applications, we will briefly review the applicability and relative advantages of vision systems, smart cameras, and industrial cameras. Finally, we will highlight relevant new capabilities and solutions in this space, including a new generation of quality inspection systems for the electronics and food and beverage industries.
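
As a hedged illustration of the inference side of AI-based inspection (not any vendor's actual tooling), the sketch below runs a generic PyTorch image classifier to label a part as good or defective; the model file, input size, normalization, and class order are all assumptions.

```python
# Hedged sketch: a generic TorchScript classifier for pass/fail inspection.
# The model file, input size, normalization, and class order are assumptions.
import torch
import torchvision.transforms as T
from PIL import Image

model = torch.jit.load("defect_classifier.pt")  # hypothetical trained model
model.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],     # common ImageNet statistics
                std=[0.229, 0.224, 0.225]),
])

img = Image.open("part_under_test.png").convert("RGB")
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))    # batch of one image
    probs = torch.softmax(logits, dim=1)[0]

CLASSES = ["good", "defect"]                        # assumed label order
verdict = CLASSES[int(probs.argmax())]
print(f"{verdict} (confidence {float(probs.max()):.2f})")
```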

12:00 pm - 1:30 pm
Break
2:00 pm - 4:30 pm
Room 101
CVP-Advanced
Advanced Image Processing

David Zerkler, Matrox

3:30 pm - 4:30 pm
Room 102
CVP-Basic
CVP-Basic Exam (Optional)

Details to come.

FRIDAY, October 14, 2022

 

8:00 am - 10:00 am
Room 101
CVP-Advanced
Advanced Vision System Integration
  • David Dechow, Landing AI

  • Robert Tait, Optical Metrology Solutions LLC

David Dechow

Landing AI

Details to come.

Robert Tait

Optical Metrology Solutions LLC

Producing a reliable vision system is no accident. It begins with creating a strong specification that carries through from component selection to system development and, finally, on-line deployment. Successful and efficient vision system integration in an automation environment can be achieved by following a general, well-accepted workflow that guides the execution of each phase of the process. It’s also important to be able to identify certain classic integration challenges that may happen along the way. This course will take you through the steps needed to achieve vision system integration success and will detail practical examples of typical use cases and the annoying but sometimes-amusing pitfalls that can (and will) occur.

1:00 pm - 3:00 pm
Room 101
CVP-Advanced
CVP-Advanced Exam

Details to come.