Q&A With Natasha Porter, Chief Customer Officer at Benchmark Gensuite
Artificial intelligence (AI) is transforming the world of workplace safety and health. Natasha Porter, Chief Customer Officer at Benchmark Gensuite, discusses how AI tools can help safety professionals reduce risks, improve processes and prevent incidents in their workplaces.
PSJ: What are ways that AI can help safety professionals reduce risks and mitigate hazards in the workplace?
Natasha: One of the key things AI can do is analyze vast amounts of safety data in real time and identify patterns that point to potential hazards before an incident actually occurs. AI can really enhance risk assessment and automate inspections, providing predictive insights to prevent incidents. This can be done in a number of different ways. As an EHS professional many years ago, I worked with more than 80 facilities worldwide, all with different operating profiles and different local regulatory requirements to meet. There were tons of data streams, and information was coming in from both leading and lagging indicators that we were managing as an overall business. I wish I had a time machine so I could go back and apply AI technology then, because manually crunching and processing that data and generating insights took days and weeks. I think that is really where AI can provide significant value for health and safety leaders: it enables folks to get out on the shop floor and put insights into action, instead of spending their time trying to figure out what the insights and trends in their datasets are.
Improving Safety With AI
- Use AI for proactive risk detection. Feed existing leading and lagging safety indicators into AI tools to spot patterns and emerging hazards before incidents occur.
- Automate inspections and risk assessments. Deploy AI to streamline routine audits and generate predictive insights so safety leaders can spend more time in the field acting on findings.
- Apply computer vision to identify real-time hazards. Use video or still-image scanning to flag unsafe conditions and behaviors (e.g., forklift speeding, improper lifting operations) and track risk profiles over time.
- Scale ergonomic assessments with portable tech. Capture job tasks on a smartphone and use AI to perform detailed ergonomic scoring.
- Use AI to detect PSIF precursors and generate actions. Run AI across injury, concern and event records to identify PSIF precursors, then use generative AI summaries to prioritize causal factors and mitigation steps.
- Pilot responsibly and use a “trust but verify” approach. Ensure tools are trained on high-quality relevant data, address privacy and ethics transparently, involve employees in trials and treat outputs as decision support—not a replacement—for human judgment.
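To make the first tip concrete, here is a minimal sketch of proactive risk detection from leading-indicator data. The function name, the z-score approach and the sample counts are illustrative assumptions, not any vendor's actual method; a production tool would use far richer models and data.

```python
from statistics import mean, stdev

def flag_emerging_hazards(monthly_counts, threshold=2.0):
    """Flag months whose near-miss count deviates sharply from the site's
    baseline. monthly_counts is a list of counts, one per month
    (hypothetical data). Returns indices of months whose z-score
    exceeds the threshold."""
    mu = mean(monthly_counts)
    sigma = stdev(monthly_counts)
    if sigma == 0:
        return []  # no variation, nothing stands out
    return [i for i, c in enumerate(monthly_counts)
            if (c - mu) / sigma > threshold]

# Example: the spike at month index 5 stands out against the baseline.
counts = [4, 5, 3, 4, 6, 18, 5, 4]
print(flag_emerging_hazards(counts))  # → [5]
```

The point is not the statistics but the workflow: the tool surfaces the anomaly, and the safety professional investigates why that month spiked.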
PSJ: What kind of information can AI tools provide to improve decision-making and help safety professionals understand the performance of their safety management system?
Natasha: AI tools offer real-time analytics of different aspects of work, such as workplace conditions, overall compliance, equipment performance or the behaviors workers exhibit when performing certain operations and job tasks. There are examples related to computer vision, where a user can video record and scan a scene to identify specific risks. These risks could include forklift speeding or a lift being performed with equipment in an improper way where a hazard is present. The goal is the ability to determine which risk profiles are currently present in a company's operations, how they shift over time, and how to take action to reduce or eliminate those risks. That is what every safety professional is trying to do most efficiently and effectively across different sites and operations, and I think AI can be a great tool to support that.
PSJ: Please talk about computer vision and other examples that can be used on jobsites.
Natasha: A couple of examples that we have practical experience with are in the ergonomics space. There are companies doing fantastic work leveraging AI to capture video of workers performing job tasks, then running detailed ergonomic assessments and scoring them. I personally took ergonomics training, and it is a lot to learn and understand. Proficiency at ergonomic assessments comes through practice, and AI technology in this space lets safety professionals essentially skip a step and allows the technology to do the hard work of crunching the assessments. This democratizes ergonomic assessments, extending them to more people. A smartphone or other portable device can capture the job being done in real time, and AI can do the rest of the work. We have also spent time with our subscriber community creating an AI solution around potentially serious injuries and fatalities (PSIFs) and identifying their precursors. This leverages all the data and information coming in from various sources and explores whether AI can look at these different data records and determine whether there is enough in the context of a description for a PSIF or a precursor risk to be present. This is done in real time. The AI processes the data and tells a business leader which significant risks are currently trending in their business, and then generative AI can generate a summary of the causal factors and recommendations for mitigation. This takes thousands of data records, consolidates them, identifies precursor risks, and then creates a report of the causes and the actions to take to mitigate or eliminate those risks.
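The PSIF-precursor screening described above can be pictured with a toy sketch. The keyword lists, category names and sample records below are all hypothetical; the real solution would use trained language models over injury, concern and event records, not a hand-built keyword table.

```python
# Hypothetical precursor categories and keywords for illustration only.
PSIF_PRECURSORS = {
    "fall from height": ["ladder", "scaffold", "roof", "harness"],
    "mobile equipment": ["forklift", "crane", "truck", "pedestrian"],
    "energy isolation": ["lockout", "tagout", "energized", "breaker"],
}

def screen_record(description):
    """Return precursor categories whose keywords appear in an
    injury/concern/event description."""
    text = description.lower()
    return [cat for cat, words in PSIF_PRECURSORS.items()
            if any(w in text for w in words)]

def summarize(records):
    """Tally precursor hits across records so leaders can see what is
    trending before drafting mitigation actions."""
    tally = {}
    for r in records:
        for cat in screen_record(r):
            tally[cat] = tally.get(cat, 0) + 1
    return dict(sorted(tally.items(), key=lambda kv: -kv[1]))

records = [
    "Worker reported forklift speeding near pedestrian walkway",
    "Ladder found without inspection tag near roof access",
    "Breaker re-energized while technician was still working",
]
print(summarize(records))
```

In practice the tally would feed a generative AI summary of causal factors and recommended mitigations, as described in the interview.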
PSJ: How can these tools help safety professionals improve training?
Natasha: Continuing with the example of ergonomic AI technology, we partnered directly with an AI company that would go out and conduct the assessments. Employees would see the assessment and ask to be walked through the process, and the AI does this frame by frame as the worker performs their job. It shows where the risks get higher and lower, so in the moment the assessment is being done, the employee can receive real-time feedback. For example, if the amount of reaching in a particular task can be reduced, the risk to the worker's shoulders, upper back and arms can be reduced. On the computer vision side, being able to see real-time snapshots, either still-frame pictures or the video itself, of where the risks are present is great for training. The PSIF and precursor technology is also being used to explain why a particular concern report or injury case is a PSIF. The AI taps into 100,000 publicly available data sets provided through OSHA and makes a connection between the employee concern or injury that came in and a PSIF or precursor.
“Worker privacy, transparency and ethical AI use need to be considered and addressed in any kind of AI-based pilots being explored.”
PSJ: What should safety professionals know before they use this technology to ensure that it is used effectively?
Natasha: I typically share three key tips. First and foremost, AI really should complement, not replace, human judgment. All these different AI technologies augment and enable safety professionals to have a much bigger impact because they are either getting to information more quickly or processing data faster. The idea is to help the leader get to their end objective more effectively. It is not replacing human judgment. The tool gets to an answer more quickly, and then the user needs to think about whether that answer makes sense based on their judgment. The second thing safety professionals need to think about is ensuring that any models they are using are trained on high-quality, relevant data. Worker privacy, transparency and ethical AI use need to be considered and addressed in any kind of AI-based pilots being explored. Involving employees in the process of selecting or running trials of AI technology is also important. It provides transparency and gets folks excited about it. In almost all the cases I have heard about, when doing an ergonomic assessment, employees get really excited to see what the assessment is looking for, what feedback it is providing and what can be done to improve the workplace and operations to prevent ergonomic injury. That is really powerful. The last tip I always share is to remember that AI is not perfect. That may change in the coming years, but right now, AI is a powerful tool that users need to trust but verify. It is no different from having a senior, Ph.D.-level expert in EHS: they provide an answer or solution set to which human judgment must be applied before proceeding. The same approach is needed with AI.
PSJ: What information should safety professionals provide AI to get the best results?
Natasha: AI thrives on good quality data. In the EHS space, I have heard for years that this is challenging because good quality data is not coming in. It really depends on what good quality means to each individual. How one person defines a really good quality injury description might be very different from how another person does.
PSJ: How can safety professionals take the first step toward implementing AI in their work?
Natasha: AI is and will continue to be an embedded part of everything that we do. For example, open Google and do a search. Google has AI overview integrated directly into search pages. The power of that is really consolidating and compressing all the information from the individual links that Google would originally provide. It gives an executive summary and the resource links to back it up. That gives a sense of how it works and the value and time savings it provides. I always encourage folks who are considering applying it specifically in EHS to think about the biggest pain point or need in their organization and focus there first. If ergonomics is an issue for you, explore solutions like 3motionAI or Ergo Evaluator. If consistent identification of SIFs and precursors is critical in your business, look at a PSIF or PSI AI advisor. There are many different options, but the choice should add immediate value and should be connected to something that is a real challenge in the user's organization. Once they narrow in on an AI technology to support that space, I encourage safety professionals to do a proof of concept and get employees engaged in that process to understand more about the AI technology, get comfortable with it, use it and get value. They can then be the champions and spokespeople for that solution as it is rolled out more broadly across the enterprise.
“AI tools offer real-time analytics of different aspects of work such as workplace conditions, tracking overall compliance, equipment performance or behaviors that workers have when they are performing certain operations and job tasks.”
PSJ: How can safety professionals determine how much data they need to obtain useful results from AI?
Natasha: AI can only do something if it is given something. If ergonomics is the pain point and a company is using an AI that can help film and analyze job tasks and identify the ergonomic risks, the more assessments they do using that technology, the broader perspective they will have on ergonomic risks at a site or at multiple sites across the business. In that case, it is all about conducting the assessments using AI. For the most part, people are not starting from a video library; they have probably done previous assessments using pen and paper or a similar form. Computer vision is the same thing. To get a good understanding of forklift risks using computer vision, the user is going to tap into whatever video surveillance technology exists in a facility to get at that information. On the other side, something like PSIFs and precursors is going to depend on the available injury, concern and event data. We have worked with a business that has on average about 3,000 to 4,000 total concerns, injuries and events per year, and we have worked with a business that has 120,000 data records in those three areas per year. Both of those businesses can utilize AI technology to obtain value and insights. A person can only process so many records to determine trends. With AI, you can multiply that significantly and still be able to get those insights. I suggest just starting with the data that already exists. It might turn out that what safety professionals see in the field represents certain risks while the data represents something else. So then the question is: why is there a mismatch? Is it a lack of data and information, or could the AI be raising other things that need to be considered for risk management? Or could it be a combination of the two? It is not necessary to have all of the data to get started.
If all that is available is the TRIR data, for example, start there and incorporate the other data sets coming from computer vision or other sources. Based on the biggest challenge and pain point and the goal, start with the data that is available and determine which datasets need to be collected and how to combine them to get the end result.
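For readers starting from TRIR data, the underlying calculation is straightforward. TRIR is the OSHA total recordable incident rate, normalized to 200,000 hours (100 full-time workers at 40 hours per week for 50 weeks); the figures below are made up for illustration.

```python
def trir(recordable_incidents, hours_worked):
    """OSHA Total Recordable Incident Rate: recordable incidents per
    100 full-time workers per year (200,000-hour basis)."""
    return recordable_incidents * 200_000 / hours_worked

# Example (hypothetical): 6 recordables over 400,000 hours worked.
print(trir(6, 400_000))  # → 3.0
```

A rate like this is a lagging indicator on its own; the interview's point is to combine it over time with leading indicators and other data sources such as computer vision.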
PSJ: Anything else you would like to add about how safety professionals can use AI in the workplace?
Natasha: If you have not started, then please start. This is not going away. The rate at which this technology is morphing around us, even in the day-to-day things we do, is exponential. I encourage everybody who has not done anything with AI to go into Google, do a search and look at the AI overview results. That is step one; it cannot get any easier than that. The other thing I want to mention is agentic AI. Generative AI is all about taking lots of data and providing concise summarization of information and insights. Agentic AI takes that to the next level. With processes that have 10 different steps, an AI agent can actually do all 10 of those steps and create the end result that a human then reviews, applies judgment to and utilizes. But to get to that step, it is necessary to start using some small AI components; the agentic or agent aspect is usually a combination of small AI engines connected together. I am really excited to see where this goes.
Natasha Porter, M.S.E., is chief customer officer at Benchmark Gensuite. She holds a Master of Science in Engineering in environmental management and economics and a bachelor's degree in civil engineering from Johns Hopkins University.
Source Publication: ASSP. (2026, March). AI for risk reduction: Q&A with Natasha Porter, chief customer officer at Benchmark Gensuite. Professional Safety, 71(3), 32-34.


